LLMs have legitimate uses today even if they are currently somewhat limited. In the future they will have more legitimate and illegitimate uses. The capabilities of current LLMs are often oversold, though, which leads to a lot of this resentment.
Edit: also, LLMs very much are AI (specifically ANI, narrow AI) and ML. They're literally a form of deep learning. They're not AGI, but nobody with half a brain ever claimed they were.
No, they don't. The only thing they can be somewhat reliable for is autocomplete, and the slight improvement in quality doesn't compensate for the massive increase in costs.
In the future they will have more legitimate and illegitimate uses
No. Thanks to LLM peddlers being excessively greedy and saturating the internet with LLM-generated garbage, newly trained models will be poisoned and will only get worse with every iteration.
The capabilities of current LLMs are often oversold
LLMs have only one capability: to produce the most statistically likely token after a given chain of tokens, according to their model.
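To make that concrete, here's a toy sketch of greedy next-token prediction. The bigram counts below are a hypothetical stand-in for a real model; actual LLMs use deep networks over subword tokens, but the decoding loop has the same shape.

```python
# Toy sketch: greedy next-token prediction with a bigram "model".
# The corpus and model are hypothetical stand-ins for a real LLM.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each token follows each other token.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def most_likely_next(token):
    """Return the statistically most likely next token, per the model."""
    followers = model.get(token)
    return followers.most_common(1)[0][0] if followers else None

# Greedy decoding: repeatedly emit the single most likely next token.
out = ["the"]
for _ in range(5):
    nxt = most_likely_next(out[-1])
    if nxt is None:
        break
    out.append(nxt)

print(" ".join(out))  # "the cat sat on the cat"
```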
Future LLMs will still only have this capability, but since their models will have been trained on LLM-generated garbage, their results will quickly diverge from anything even remotely intelligible.
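And here's a toy simulation of that feedback loop, under obviously simplified assumptions: each "generation" refits the same kind of bigram model on text sampled from the previous generation's model. Real model collapse is subtler than this, but the direction of travel (shrinking diversity) is the same.

```python
# Toy simulation of retraining on model-generated output.
# Hypothetical setup: each generation's bigram model is fit on text
# sampled from the previous generation's model.
import random
from collections import Counter, defaultdict

random.seed(0)

def fit(tokens):
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def sample(model, start, length):
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        toks, weights = zip(*followers.items())
        out.append(random.choices(toks, weights=weights)[0])
    return out

text = ("the cat sat on the mat while the dog ate the rat "
        "and the bird flew over the lazy fox").split()

# Watch the vocabulary shrink as each generation trains on the last one's output.
for gen in range(6):
    print(f"generation {gen}: vocabulary size {len(set(text))}")
    model = fit(text)
    text = sample(model, "the", 200)  # the next generation trains on this
```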
The problem is that people selling LLMs keep calling them AI, and people keep buying their bullshit.
AI isn’t necessarily bad. LLMs are.