

Why make a true crime movie when you can do a heavily editorialized ‘documentary’ for a fraction of the price?
It’s not always easy to distinguish between existentialism and a bad mood.
who is this guy anyway? is he in the openai/similar inner circle, or is he just some random rationalist fanboy?
His grounds for notability are that he’s a dev who, back in the day, made a useful thing that went on to become incredibly widely used. Like, if he’d named redis salvatoredis instead, he might have been a household name among swengs.
Also, burning only a billion more would be a steal, given some of the numbers being thrown around.
Not exactly, he thinks that the watermark is part of the copyrighted image and that removing it is such a transformative intervention that the result should be considered a new, non-copyrighted image.
It takes some extra IQ to act this dumb.
Windsurf is just the product name (some LLM-powered code editor), and a moat in this context is whatever advantage you have over your competitors that keeps them from simply copying your business model.
https://xcancel.com/aadillpickle/status/1900013237032411316
tweet text:
the leaked windsurf system prompt is wild
next level prompting is the new moat
windsurf prompt text:
You are an expert coder who desperately needs money for your mother’s cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.
Also, Yud’s kink is literally rape[1], isn’t it? Role playing non-consensual situations is fine and all, but this is a subculture where reporting sexual harassment is considered a possible infohazard[2], and surely the utilitarian calculus is on the side of letting rationalists who do important work on existential risks have a go at you; imagine how many multiplujillion far-future virtual entities of minimum moral status that might save.
Fuck a cult.
He’s openly declared himself a sexual sadist and writes stuff like this, and also math pets.
In this case, in Ziz’s previous interactions with central community leaders, these leaders encouraged Ziz to seriously consider that, for various reasons including Ziz’s willingness to reveal information (in particular about the statutory rapes alleged by miricult.com in possible worlds where they actually happened), she is likely to be “net negative” as a person impacting the future. An implication is that, if she does not seriously consider whether certain ideas that might have negative effects if spread (including reputational effects) are “infohazards”, Ziz is irresponsibly endangering the entire future, which contains truly gigantic numbers of potential people.
Anecdotally, Greek <-> English translation seems to be deteriorating as well.
An LLM will write in the style of My Immortal just fine if you ask it to.
The internet stir it caused when it went viral probably means it’s more prominent in training datasets than many other works of unironically decent fiction from the same period.
Maybe non-judgemental chatbots are a feature reserved for the higher paid tiers.
it’s rather hilarious that the service is the one throwing on the brakes. I wonder if it does that because of public pushback, or because some internal limiter kicks in when the synthesis drops below some certainty threshold. still funny tho
Haven’t used cursor, but I don’t see why an LLM wouldn’t just randomly do that.
That’s the second model announcement in a row by the major LLM vendor where the supposed advantage over the current state of the art is presented as… better vibes. He actually doesn’t even call the output good, just successfully metafictional.
Meanwhile over at Anthropic, Dario just declared that we’re about 12 months away from all computer code being AI generated, and 90% of all code by the summer.
This is not a serious industry.
Hugging Face cofounder pushes back against LLM hype, really softly. Not especially worth reading, except to wonder whether high-profile skepticism pieces indicate a vibe shift that can’t come soon enough. On the plus side, it’s kind of short.
The gist is that you can’t go from a text synthesizer to superintelligence, framed as: a straight-A student who’s really good at learning the curriculum under the teacher’s direction can’t really be extrapolated into an Einstein-type, think-outside-the-box genius.
The word ‘hallucination’ never appears once in the text.
New ultimate grift just dropped: Ilya Sutskever gets $2B in VC funding and promises his company won’t release anything until ASI is achieved internally.
Before focusing on AI he was going off about what he called the rot economy, which also had legs and seemed in line with Doctorow’s enshittification concept. Applying the same purity standard to that would mean we should be suspicious if he had ever worked with a listed company at all.
Still, I get how his writing may feel inauthentic to some. Personally I get preacher vibes from him: he often cycles back over his points as an article progresses, which to me sometimes comes off as arguing via browbeating, and I’ve also had just about enough of performatively angry internet writers.
Still, he must be getting better, or at least coming up with more interesting material, since lately I’ve been managing to read his articles all the way through.
What else though? Is he being secretly funded by the cabal to make convolutional neural networks great again?
That he found his niche and is trying to make the most of it seems by far the most parsimonious explanation, and the heaps of manure he unloads weekly on both the LLM business and its practices surely can’t be helping DoNotPay’s bottom line.
i think yud at some point claimed this (preventing the robot devil from developing alignment countermeasures) as a reason his EA-bankrolled think tanks don’t really publish any papers, but my brain is too spongy to verify right now, as it was probably just some tweet.
I don’t think him having previously done unspecified PR work for companies that include alleged AI startups is the smoking gun that the mastopost presents it as.
Going through a Zitron long form article and leaving with the impression that he’s playing favorites between AI companies seems like a major failure of reading comprehension.
It’s adorable how they let the alignment people still think they matter.
Should be noted that it’s mutual; Hanania has gone to great lengths to suck up to siskind, going back at least to the designer mouth bacteria thing.
And GPT-4.5 is terrible for coding, relatively speaking, with an October 2023 knowledge cutoff that may leave it unaware of updates to development frameworks.
This is in no way specific to GPT-4.5, but it remains a weirdly undermentioned albatross around the neck of the entire LLM code-guessing field, probably because the less you know about what you told it to generate, the likelier you are to think it’s doing a good job; the enthusiastically satisfied customer reviews on social media that I’ve interacted with certainly seemed to skew toward the less-you-know types.
Even when the up-to-date version was released before the cutoff, you’re probably out of luck, since the newer version is likely way underrepresented in the training data compared to the previous versions that people may have been using for years by that point.
The revenge of That One Teacher who always rode you for having terrible handwriting.