Here’s some context for the question. When image-generating AIs became available, I tried them out and found that the results were often quite uncanny or even straight-up horrible. I ended up seeing my fair share of twisted fingers, scary faces and mutated abominations of all kinds.
Some of those pictures made me think: since the AI really loves to create horror movie material, why not take advantage of this property? I started asking it to make all sorts of nightmare monsters that could have escaped from movies such as The Thing. Oh boy, did it work! I think I’ve found the ideal way to use an image-generating AI. Obviously, it can do other stuff too, but in this particular category, the results are excellent nearly every time. Making other types of images usually requires creative prompt-crafting, editing, time and effort, whereas asking for a “mutated abomination from Hell” is pretty much guaranteed to deliver.
What about LLMs, though? Have you noticed that LLMs like ChatGPT tend to gravitate towards a specific style or genre? Is it long-winded business books with loads of unnecessary repetition, or pointless self-help books that struggle to squeeze even a single good idea into a hundred pages? Is it something even worse? What would be the ideal use for LLMs? What’s the sort of thing where LLMs perform exceptionally well?
it’s in the name: generative pre-trained transformer. the one thing ChatGPT and GPT-3/3.5/4 are truly good at is transforming data. it can restructure paragraphs to have a different flow, take class notes and make flashcards out of them (that’s how i use ChatGPT), or even take non-textual data and potentially present it in textual format if trained right
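To make the flashcard workflow concrete, here’s a minimal sketch of how it could look. The prompt wording, the tab-separated reply convention, and the `notes_to_prompt`/`parse_flashcards` helpers are my own assumptions, not part of any specific API; the actual call to the model is left out.

```python
# Hypothetical sketch: turning class notes into flashcards with an LLM.
# The prompt format and the tab-separated reply convention are assumptions.

def notes_to_prompt(notes: str) -> str:
    """Build a transformation prompt asking for tab-separated Q/A pairs."""
    return (
        "Turn the following class notes into flashcards. "
        "Reply with one card per line, formatted as question<TAB>answer, "
        "and nothing else.\n\n" + notes
    )

def parse_flashcards(reply: str) -> list[tuple[str, str]]:
    """Parse the model's reply into (question, answer) pairs,
    skipping any line that doesn't match the expected format."""
    cards = []
    for line in reply.splitlines():
        parts = line.split("\t", 1)
        if len(parts) == 2 and parts[0].strip():
            cards.append((parts[0].strip(), parts[1].strip()))
    return cards

# Example with a mocked model reply (no real API call):
reply = "What does GPT stand for?\tGenerative pre-trained transformer"
print(parse_flashcards(reply))
# → [('What does GPT stand for?', 'Generative pre-trained transformer')]
```

Asking the model for a strict line format and then parsing defensively keeps the transformation usable even when the model occasionally adds stray text.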
I’ve actually used GPT to summarize book reviews on Amazon and Goodreads. I’m not entirely sure whether Bing really reads all the book reviews I tell it to read, but it seems to be pretty good at finding the details that matter to me. In my prompt I tell it to skip all the 5-star reviews so that it focuses only on finding common complaints. Based on the summary, I then decide whether I can live with the book’s flaws.
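Here’s a rough sketch of what that filtering step could look like before the text is handed to the model. The review structure (a star rating plus text) and the prompt wording are assumptions of mine; the summarization call itself is omitted.

```python
# Hypothetical sketch of the "skip the 5-star reviews" prompt described above.
# The review structure and prompt wording are assumptions, not a real API.

def complaints_prompt(reviews: list[dict]) -> str:
    """Keep only sub-5-star reviews and ask the model for common complaints."""
    kept = [r["text"] for r in reviews if r["stars"] < 5]
    return (
        "Here are reader reviews of a book. Skip praise and summarize "
        "only the complaints that come up repeatedly:\n\n"
        + "\n---\n".join(kept)
    )

reviews = [
    {"stars": 5, "text": "Loved it!"},
    {"stars": 2, "text": "The middle third drags badly."},
    {"stars": 3, "text": "Good ideas, but it drags in the middle."},
]
print(complaints_prompt(reviews))
```

Doing the star-rating filter in code rather than in the prompt also saves tokens and removes any doubt about whether the model actually skipped the praise.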
I’ve given some more thought to this transformer thing. It really is in the name, just like you said. Transforming text into another form really is its main area of expertise. I feel like I should give GPT transformation tasks more often. Generating new stuff can be fun, but that might not be the best way to use it.
It’s a very simple observation about the name, but I think pointing it out has really changed the way I think about GPT. Thanks!