I am too tired to put up with people complaining about “angies” and “woke lingo” while trying to excuse their eugenicist drivel with claims of being “extremely left leaning”. Please enjoy your trip to the scenic TechTakes egress.
“If you don’t know the subject, you can’t tell if the summary is good” is a basic lesson that so many people refuse to learn.
From the replies:
In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere, messes up, the company and the authorities should theoretically be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus, how often it's confidently incorrect is well known and well documented. To use it in the pharmaceutical industry would be teetering on gross negligence and asking for trouble.
Also, suppose you use it in such a way that it helps your company profit immensely, and then, uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that it was not infringing on the other company's IP, but you don't have that here. What if someone gets hurt? Do you really want to make the case that you just gave ChatGPT a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs, are they going to include "consult ChatGPT" as a step? If so, then you need to make sure that OpenAI holds its program to the same documentation standards and certifications that you have, and I don't think they want to tangle with the FDA at the moment.
There’s just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.
And a good sneer:
With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.
Not A Sneer But: “Princ-wiki-a Mathematica: Wikipedia Editing and Mathematics” and a related blog post. Maybe of interest to those amongst us whomst like to complain.
the team have a bit of an elon moment
“Oh shit, which one of them endorsed the German neo-Nazis?”
Aaron likes a porn post
“Whew.”
Please don’t make posts to TechTakes that are just bare images without a description. The description can be simple, like “Screenshot from YouTube saying ‘Ad blockers violate YouTube’s Terms of Service’”. Some of our participants rely upon screenreaders. Or are crotchety old people who remember an Internet that wasn’t all three websites sharing snapshots of the other two websites.
“Drinking alone tonight?” the bartender asks.
I don't see what useful information the motte-and-bailey lingo actually conveys that "equivocation", "deception", and "bait-and-switch" don't. And I distrust any turn of phrase popularized in the LessWrong-o-sphere. If they like it, what bad mental habits does it appeal to?
The original coiner appears to be in with the brain-freezing crowd. He’s written about the game theory of “braving the woke mob” for a Tory rag.
In the department of not smelling at all like desperation:
On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free.
It had a very focused area of expertise, but for sincerity, you couldn’t beat 1-900-MIX-A-LOT.
Petition to replace “motte and bailey” per the Batman clause with “lying like a dipshit”.
Wojciakowski took the critiques on board. “Wow, tough crowd … I’ve learned today that you are sensitive to ensuring human readability.”
Christ, what an asshole.
For a client I recently reviewed a redlined contract where the counterparty used an “AI-powered contract platform.” It had inserted into the contract a provision entirely contrary to their own interests.
So I left it in there.
Please, go ahead, use AI lawyers. It’s better for my clients.
Adam Christopher comments on a story in Publishers Weekly.
Says the CEO of HarperCollins on AI:
"One idea is a 'talking book,' where a book sits atop a large language model, allowing readers to converse with an AI facsimile of its author."
Please, just make it stop, somebody.
Robert Evans adds,
there’s a pretty good short story idea in some publisher offering an AI facsimile of Harlan Ellison that then tortures its readers to death
Kevin Kruse observes,
I guess this means that HarperCollins is getting out of the business of publishing actual books by actual people, because no one worth a damn is ever going to sign a contract to publish with an outfit with this much fucking contempt for its authors.
There's a whole lot of assuming-the-conclusion in advocacy for many-worlds interpretations — sometimes from philosophers, and all the time from Yuddites online. If you make a whole bunch of tacit assumptions, starting with assumptions about how mathematics relates to physical reality, you end up in MWI country. And if you make sure your assumptions stay tacit, you can act like MWI is the only answer and everyone else is being un-mutual and irrational.
(I use the plural interpretations here because there’s not just one flavor of MWIce cream. The people who take it seriously have been arguing amongst one another about how to make it work for half a century now. What does it mean for one event to be more probable than another if all events always happen? When is one “world” distinct from another? The arguments iterate like the construction of a fractal curve.)
The peer reviewers didn’t say anything about it because they never saw it: It’s an unilluminating comparison thrown into the press release but not included in the actual paper.
“Quantum computation happens in parallel worlds simultaneously” is a lazy take trotted out by people who want to believe in parallel worlds. It is a bad mental image, because it gives the misleading impression that a quantum computer could speed up anything. But all the indications from the actual math are that quantum computers would be better at some tasks than at others. (If you want to use the names that CS people have invented for complexity classes, this imagery would lead you to think that quantum computers could whack any problem in EXPSPACE. But the actual complexity class for “problems efficiently solvable on a quantum computer”, BQP, is known to be contained in PSPACE, which is strictly smaller than EXPSPACE.) It also completely obscures the very important point that some tasks look like they’d need a quantum computer — the program is written in quantum circuit language and all that — but a classical computer can actually do the job efficiently. Accepting the goofy pop-science/science-fiction imagery as truth would mean you’d never imagine the Gottesman–Knill theorem could be true.
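(An aside that isn't in anyone's post above: the standard argument for why BQP sits inside PSPACE is concrete enough to sketch in a few lines. You evaluate a single amplitude as a Feynman-style sum over intermediate basis states, recursing gate by gate, so the memory in use at any moment stays polynomial in the circuit size even though the running time is exponential. The toy code below is my own illustration of that argument; the function names and conventions are made up for the example, not taken from any paper.)

```python
import itertools
from math import sqrt

def gate_entry(gate, targets, n, out_state, in_state):
    """Matrix element <out_state| G |in_state> for a gate G acting on the
    qubits in `targets`, identity on the rest. States are tuples of n bits."""
    for i in range(n):
        if i not in targets and out_state[i] != in_state[i]:
            return 0  # identity wires must pass their bits through unchanged
    row = int("".join(str(out_state[i]) for i in targets), 2)
    col = int("".join(str(in_state[i]) for i in targets), 2)
    return gate[row][col]

def amplitude(circuit, n, x, y):
    """<y| U_m ... U_1 |x> by summing over paths of intermediate basis states.
    Time is exponential, but the recursion only ever holds one bit-string
    per gate: O(n*m) space, which is the whole point of the argument."""
    def paths(k, state):
        if k == len(circuit):
            return 1 if state == y else 0
        gate, targets = circuit[k]
        total = 0
        for mid in itertools.product((0, 1), repeat=n):
            e = gate_entry(gate, targets, n, mid, state)
            if e != 0:
                total += e * paths(k + 1, mid)
        return total
    return paths(0, x)

# Sanity check: two Hadamards cancel, so the |0> -> |0> amplitude is 1.
H = [[1 / sqrt(2),  1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]
print(amplitude([(H, (0,)), (H, (0,))], 1, (0,), (0,)))  # ~1.0
```

The trade-off is explicit in the code: `paths` never stores a full state vector, just one bit-string per gate on the stack, and that space bound, not any picture of parallel worlds doing the work, is what puts BQP inside PSPACE.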
To quote a paper by Andy Steane, one of the early contributors to quantum error correction:
The answer to the question ‘where does a quantum computer manage to perform its amazing computations?’ is, we conclude, ‘in the region of spacetime occupied by the quantum computer’.
As a person whose job has involved teaching undergrads, I can say that the ones who are honestly puzzled are helpful, but the ones who are confidently wrong are exasperating for the teacher and bad for their classmates.