We (people in general) are dealing with two sets of crazy people when it comes to AI:
- A crowd who overestimates AI capabilities. They often believe AI is "intelligent", that AGI is coming "soon", that AI will replace our jobs, that the future is AI, all that babble.
- A crowd who believes generative models are nothing but smoke and mirrors, a bubble that'll burst and leave nothing behind. A Ponzi scheme of sorts.
Both are wrong. And they're wrong in the same way: a failure to see tech as tech. Criticism of #1 is common (and fair!), but I'm glad to see criticism of #2 (also fair!) popping up once in a while, as the author does here.
...case in point, the best use case for LLMs is when:
- the task is tedious, repetitive, and basic. The informational equivalent of washing dishes.
- the error rate in the output is acceptable for the purpose at hand.