That’s an important point you raise. I feel like a big problem with the LLM projects we see today, including ChatGPT, Bard, etc., is that the developers have tunnel vision. Rather than using the LLM as one component of a system with many well-researched traditional algorithms doing what they do best, they want to do everything within the network.
This makes sense from a research perspective. It doesn’t make sense from an end-product perspective.
The more I play with LLMs, the more I feel like their true value is as something like “regular expressions on crack”.
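One way to read the "one component" idea is to treat the model as a fuzzy extraction layer and keep everything downstream in ordinary, well-understood code. A minimal sketch of that shape, where the `llm_extract_date` function is a hypothetical stand-in for a real model call:

```python
from datetime import datetime


def llm_extract_date(text: str) -> str:
    # Hypothetical stand-in for an LLM call: a real model would handle
    # far messier phrasing. The sketch only fakes the fuzzy-extraction step.
    for token in text.replace(",", " ").split():
        if token.count("-") == 2:
            return token
    return ""


def validate_date(candidate: str):
    # Traditional, deterministic validation: never trust the fuzzy layer's
    # output directly; parse it with a strict, well-tested routine.
    try:
        return datetime.strptime(candidate, "%Y-%m-%d")
    except ValueError:
        return None


raw = "The invoice is due 2024-03-15, please pay promptly."
extracted = llm_extract_date(raw)
due = validate_date(extracted)
```

The model does the "regex on crack" part (pulling structure out of free text); a strict parser does the part classical code is already good at, so a hallucinated or malformed extraction fails loudly instead of propagating.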