

Documentation? Maintainable? Test cases? You’re too attached to old paradigms in a new vibe based world.
Why do you need any of those? If you need any new features, you just re-engineer your prompt and ask the AI to rebuild it from scratch…
Can someone explain how you accidentally rack up such a bill?
For example: you can deploy your Python script as a Lambda. Imagine somewhere in the Python script you’d call your own Lambda - twice. You’ve basically turned your Lambda into a fork bomb that will spawn infinite Lambdas
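A minimal sketch of why this blows up so fast: instead of actually invoking anything via boto3, this just counts how many invocations each "generation" would spawn if every invocation calls itself twice.

```python
# Toy simulation of the accidental "Lambda fork bomb" - no AWS calls here,
# just counting. In real life each invocation would call boto3's
# lambda_client.invoke() on itself, twice.

def invocations_after(generations: int) -> int:
    """Each invocation spawns 2 more, so generation n has 2**n invocations."""
    total = 0
    pending = 1  # the single invocation that kicked things off
    for _ in range(generations + 1):
        total += pending
        pending *= 2  # every pending invocation spawns two more
    return total

print(invocations_after(10))  # 2047 invocations after only 10 generations
```

Ten generations happen in seconds, and the bill grows right along with the invocation count.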
A lot of the time this comes down to user error.
For example, very similar to your case, I knew someone who enabled CloudTrail and configured things to have CloudTrail logs dumped to S3. Guess what? Dumping things to S3 also creates CloudTrail events, which get logged to S3, which creates more CloudTrail events. Etc
Doing things like that and creating a loop can get you massive bills
Probably the best thing Ubisoft released since assassin’s creed black flag
They were streets ahead in their logo design…
We also got fully self driving cars in 2 years though, in 2016…
Snowe is sysadmin of programming.dev…
So source: Snowe
If you’re using Entity Framework for MSSQL, I doubt that this library would work as a substitute.
Because that LINQ gets parsed into expression trees and then sent to the underlying provider (MSSQL/MySQL etc) to be converted into SQL. So if you use some non-standard library, those providers won’t be able to convert that LINQ to SQL
Many people believe that the ToS was added to make Mozilla legally able to train AIs on the collected data.
“Don’t attribute to malice what is easily explained by incompetence”
So yea Mozilla wrote some terms that were ambiguous and could be interpreted in different ways, and ‘many people believed’ that they did this intentionally and had the worst intentions possible by their interpretation of the new ToS
Then Mozilla rewrote that ToS after seeing how people were interpreting the original ToS:
https://www.theverge.com/news/622080/mozilla-revising-firefox-terms-of-use-data
And yea, now ‘many people will believe’ that ‘Mozilla revised their decision to do this after the backlash’ - OR, it was never their intention and they’ve now phrased it better after the confusion
People just want to get their pitchforks out and start drama at any possible opportunity without evidence of wrongdoing… Mozilla added stupid stuff to the ToS, ok yea fair enough - but if they actually did “steal user data” - this would be very easily detectable with Wireshark or something
This feels like a personal attack
Programming.dev is hosting Iceshrimp: https://bytes.programming.dev
You could host your own instance, or if your opinion-pieces are programming related, post them there
No one’s questioning why he’s sorting it twice?
It’s called embeddings in other models as well:
https://huggingface.co/blog/getting-started-with-embeddings
https://ollama.com/blog/embedding-models
Also some feedback, a bit more technical, since I was trying to see how it works - more of a suggestion I suppose
It looks like you’re looping through the documents and asking it for known tags, right? ({str(db.current_library.tags)})
I don’t know if I would do this through a chat completion and a chat response, there are special functions for keyword-like searching, like embeddings. It’s a lot faster, and also probably way cheaper, since you’re paying barely anything for embeddings compared to chat tokens
So the common way to do something like this in AI would be to use Vectors and embeddings: https://platform.openai.com/docs/guides/embeddings
So - you’d ask for an embedding (a vector) for all your tags first. Then you ask for an embedding of your document.
Then you can do a Nearest Neighbor Search for the tags, and see how closely they match
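To make that concrete, here’s a minimal sketch. The tag names and the tiny 3-d vectors are made up for illustration - in practice you’d fetch real embeddings from the embeddings endpoint - but the nearest-neighbor step is the same.

```python
# Hedged sketch of embedding-based tag matching: toy 3-d vectors stand in
# for real embeddings (which would come from an embeddings API).
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for three tags and one document.
tag_embeddings = {
    "finance": [0.9, 0.1, 0.0],
    "cooking": [0.1, 0.8, 0.2],
    "travel":  [0.0, 0.2, 0.9],
}
doc_embedding = [0.85, 0.15, 0.05]

# Nearest-neighbor search: the tag whose embedding is closest wins.
best_tag = max(
    tag_embeddings,
    key=lambda t: cosine_similarity(tag_embeddings[t], doc_embedding),
)
print(best_tag)  # "finance"
```

You’d compute the tag embeddings once, cache them, and then only pay for one embedding per document - which is why this ends up so much cheaper than chat completions.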
I’m not entirely sure what you hope to achieve: have a GPG encrypted subject, and have ThunderBird automatically understand that it’s encrypted, so it can be automatically decrypted?
Since you’re saying you’re building software to support this, what are you building? A ThunderBird plugin that can do this? Or just standalone software that you want to make compatible with ThunderBird’s default way of handling encryption?
There’s a Python WASM runtime, if you really want to run Python in a browser for some reason…
Recruitment is now basically Dead Internet theory…
It gives an example:
For example, with the phrase “My favorite tropical fruits are __.” The LLM might start completing the sentence with the tokens “mango,” “lychee,” “papaya,” or “durian,” and each token is given a probability score. When there’s a range of different tokens to choose from, SynthID can adjust the probability score of each predicted token, in cases where it won’t compromise the quality, accuracy and creativity of the output.
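The general idea in that quote can be sketched in a few lines. To be clear, this is NOT SynthID’s actual algorithm - it’s a toy version of the generic watermarking scheme where a secret "green list" of tokens gets a small probability boost, and the detector later checks whether green-list tokens are over-represented.

```python
# Toy watermarking sketch (not SynthID's real scheme): boost the scores of
# a secret "green list" of candidate tokens, then renormalize.
candidates = {"mango": 0.30, "lychee": 0.25, "papaya": 0.25, "durian": 0.20}
green_list = {"papaya", "durian"}  # hypothetical, derived from a secret key
boost = 0.05

adjusted = {
    tok: (p + boost if tok in green_list else p)
    for tok, p in candidates.items()
}
# Renormalize so the adjusted scores sum to 1 again.
total = sum(adjusted.values())
adjusted = {tok: p / total for tok, p in adjusted.items()}

# "papaya" now slightly outranks "lychee", even though they started equal.
# A detector holding the key can check whether green-list tokens show up
# more often than chance would predict.
```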
So I suppose with a larger text, if all lists of things are “LLM Sorted”, it’s an indicator.
That’s probably not the only thing, if it can detect a bunch of these indicators, there’s a higher likelihood it’s LLM text
Because WordPress is also hosting thousands of plugins that WP Engine users can install.
I’m not sure what the license regarding those things is - WP Engine could probably just mirror it.
But they basically got locked out of the default ecosystem infrastructure.
Since you’re getting downvoted, maybe you want to explain why using GitHub free is “pointing a loaded gun at your foot”?
I’m using GitHub for a bunch of my public repos as a free backup service… Why would I want to use a self-hosted or way more obscure git forge? Seems riskier than just dumping it on GitHub