- cross-posted to:
- technology@beehaw.org
The fact that Twitch chat was able to beat Pokemon Red faster than AI’s been able to get to Vermilion is pretty funny to me
Repeat after me: LLMs do not have intelligence.
They don’t. But nevertheless, the progress they’ve made in a year is very impressive.
What remains to be seen is how it’ll look in a year or two: hardly any improvement, or a beaten Elite Four?
Claude frequently finds itself pointlessly revisiting completed towns, getting stuck in blind corners of the map for extended periods, or fruitlessly talking to the same unhelpful NPC over and over, to cite just a few examples of distinctly sub-human in-game performance.
Claude is just like me fr
If the graphics are a hurdle, choose a newer version. Why are they obsessed with Gen1?
I wonder if it’s because the game itself is the simplest. They keep adding new systems with each iteration—as well as better graphics—so maybe they want to start with the easiest.
I wonder if it knows, or can learn, to save its state before big battles or before trying to catch a legendary Pokemon without a Master Ball, and reset if it doesn’t succeed at first. I was a very conservative Pokemon player.
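Something like this save-scumming loop, as a rough sketch (the `emulator` object and its `save_state`/`load_state` methods are made up for illustration, not a real emulator API):

```python
# Rough sketch of the save-before-risk / reset-on-failure strategy.
# `emulator` and its save_state/load_state methods are assumptions
# for illustration, not a real emulator API.
def catch_with_retries(emulator, attempt_catch, max_attempts=20):
    snapshot = emulator.save_state()      # save right before the risky encounter
    for _ in range(max_attempts):
        if attempt_catch(emulator):       # e.g. throw balls until caught or it flees
            return True                   # success: keep playing from here
        emulator.load_state(snapshot)     # failure: reload and try the encounter again
    return False                          # give up after max_attempts resets
```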
So, it’s really just playing like any 5-year-old.
5-year-olds are more capable than this
- “PhD level intelligence”
- Fails at basic tasks
I have never found an LLM this relatable before.
To be fair, PhD holders sometimes fail at basic tasks
That is the joke, lol
They’re good for coding tho, altho I’m not highly experienced with that or anything. They just seem to get the simple things I’m working on right, and iterating over what you started with helps me a lot
They’re good for coding
https://pivot-to-ai.com/2025/03/19/ai-coding-tools-fix-bugs-by-adding-bugs/
It really isn’t. At best it can generate short snippet solutions, but it absolutely botches projects that require contextual knowledge of the program stack.
My job actively avoids it because of the havoc it tends to cause, not to mention the legal issues.
I think context is what’s going to kill LLMs. They keep throwing hacks on top to try to make them understand context, but they never really “understand”; they just make it look like they’re following the context by simulating a few pertinent cues. Every interaction is essentially a fresh slate with a few prompts hidden underneath to seed it with what looks like context. Trying to actually preserve the model’s context to the level we’d consider real “intelligence”, never mind long-term planning and actual “thinking”, would explode toward infinity so fast that there are probably not enough resources in the universe to do it even once.
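A minimal sketch of what I mean, with `call_llm` as a stand-in stub for any chat-completion endpoint (the names here are assumptions, not a real API):

```python
# The model itself keeps no state between calls; the client fakes "memory"
# by re-sending the whole transcript every turn. `call_llm` is a stand-in
# stub for a chat-completion endpoint, not a real API.
def call_llm(messages: list[dict]) -> str:
    return f"(model reply, given {len(messages)} messages of context)"

history = [{"role": "system", "content": "You are playing Pokemon Red."}]

def chat_turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_llm(history)             # the "context" is just this growing list
    history.append({"role": "assistant", "content": reply})
    return reply                          # anything trimmed past the window is gone for good
```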
Damn. This is brutally relatable.