That’d be the only sensible reading of Hollywood imagining that text-to-image technology will save their sequence-of-images business from the tyranny of paying people for text.
But I’m not sure “sensible” applies. The same zeal was applied to multiple stages of crypto bullshit. Some tech-bros just latch onto the newest thing, as a cargo cult, and sell people on what they imagine it could do.
Meanwhile I’m kinda hyped for AI because I’ve seen all the weird shit it can do, and I am excited for clever applications of that weird shit. I’d been kicking around ways to make animation as approachable as comics. Motion-vector continuation for complex details that don’t need constant repainting. Image-space wireframe manipulation. Deferred pipelines where smooth rainbow-colored doodles take on detail and lighting automatically. I haven’t touched any of it in two years. What’s the point? I can’t know which parts will go from jaw-dropping to underwhelming within months.
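For concreteness, the motion-vector idea was roughly this: paint one detailed keyframe, let optical flow computed from the rough animation drag that detail along, and only repaint when it smears. Here’s a toy sketch, assuming OpenCV and NumPy, with Farneback flow standing in for whatever a real tool would use; the function is mine, not anything that exists:

```python
# Toy "motion-vector continuation": paint one detailed keyframe, then
# let optical flow from the rough animation carry that detail forward
# so it doesn't need repainting every frame.
# Assumes OpenCV + NumPy; this is a sketch, not a real tool.
import cv2
import numpy as np

def propagate_detail(detailed_key, rough_frames):
    """Warp a hand-painted keyframe along the motion of rough frames.

    detailed_key: BGR image aligned with rough_frames[0].
    rough_frames: the cheap, undetailed animation (list of BGR images).
    Returns one detail-carrying frame per rough frame.
    """
    out = [detailed_key]
    carried = detailed_key.astype(np.float32)
    prev_gray = cv2.cvtColor(rough_frames[0], cv2.COLOR_BGR2GRAY)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    for frame in rough_frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Flow from the new frame back to the previous one, so each new
        # pixel can be sampled from wherever its content used to be.
        flow = cv2.calcOpticalFlowFarneback(gray, prev_gray, None,
                                            0.5, 3, 21, 3, 5, 1.2, 0)
        carried = cv2.remap(carried,
                            grid_x + flow[..., 0], grid_y + flow[..., 1],
                            cv2.INTER_LINEAR)
        out.append(carried.astype(np.uint8))
        prev_gray = gray
    return out
```

It drifts and smears at occlusions, naturally, which is why you’d still drop in a fresh painted keyframe every so often.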
That said, it’s not like cutting-edge AI companies are doing things sensibly. Sora should not be spitting out jump-cuts. What you want are whole takes. Leave the editing to humans, because it’s a crucial part of conveying meaning through the footage. And the fact that it’s limited to short clips anyway means they’re still spitting out the whole damn thing at once, instead of generating the next frame based on previous frames, or tweening frames out of adjacent frames. Will that limited-scope approach have issues? Sure. But going from thirty seconds of footage to forty won’t require a new generation of video cards.
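For what I mean by tweening out of adjacent frames: even dumb classical flow-based interpolation gets you an in-between frame without regenerating anything. Another toy sketch under the same OpenCV/NumPy assumptions; real interpolation models handle occlusion far better:

```python
# Toy flow-based tweening: synthesize the frame a fraction t of the way
# between two adjacent frames, instead of regenerating the whole clip.
# Assumes OpenCV + NumPy; real interpolators are much smarter than this.
import cv2
import numpy as np

def tween(frame_a, frame_b, t=0.5):
    """Estimate the in-between frame at fraction t from frame_a to frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    h, w = gray_a.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    fwd = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                       0.5, 3, 21, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(gray_b, gray_a, None,
                                       0.5, 3, 21, 3, 5, 1.2, 0)
    # Warp each endpoint partway toward time t, then cross-fade. The
    # flow is sampled at destination coordinates, which is the usual
    # crude approximation.
    warp_a = cv2.remap(frame_a, gx + t * bwd[..., 0],
                       gy + t * bwd[..., 1], cv2.INTER_LINEAR)
    warp_b = cv2.remap(frame_b, gx + (1 - t) * fwd[..., 0],
                       gy + (1 - t) * fwd[..., 1], cv2.INTER_LINEAR)
    return cv2.addWeighted(warp_a, 1.0 - t, warp_b, t, 0)
```

The point isn’t that this is good; it’s that extending a clip becomes local work on a few frames instead of re-running the whole generation.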