

When I was at computer toucher school at about the start of the century, what was taught under the moniker AI was (I think) fuzzy logic, incremental optimization, graph algorithms, and neural networks.
AI is a sci-fi trope far more than it ever was a well-defined research topic.
AI innovation in this space usually means automatically adding stuff to the model’s context.
It probably started with the (failed) build output being added on every iteration, but it’s entirely possible to feed the LLM debugger data from a runtime crash and hope something usable happens.
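The build-output feedback loop can be sketched as below. This is a minimal illustration, not any particular tool's implementation: `ask_llm` is a hypothetical stand-in for a real model call, stubbed here so the loop runs, and the "build" step is just byte-compiling a Python file.

```python
import subprocess

def ask_llm(messages):
    # Hypothetical LLM call; a real tool would hit an API with the full
    # message history. Stubbed to return trivially valid code so the
    # example terminates.
    return 'print("hello")\n'

def build(path):
    # The "build" step: byte-compile the file and capture any error output.
    return subprocess.run(
        ["python3", "-m", "py_compile", path],
        capture_output=True, text=True,
    )

def fix_loop(path, max_iters=3):
    """On each failed build, append the failure output to the model's
    context and ask for a new version of the file."""
    with open(path) as f:
        messages = [{"role": "user", "content": f.read()}]
    for _ in range(max_iters):
        result = build(path)
        if result.returncode == 0:
            return True  # build passed
        # The "innovation": automatically add the failure to the context.
        messages.append({"role": "user", "content": result.stderr})
        with open(path, "w") as f:
            f.write(ask_llm(messages))
    return False
```

Feeding crash or debugger data in works the same way: whatever the runtime produced gets appended as another message, and you hope the model does something usable with it.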