They’re good for coding though, although I’m not highly experienced with that or anything. They just seem to get the simple things I’m working on right, and iterating on what you started with helps me a lot.
It really isn’t. At best it can generate short snippet solutions, but it absolutely botches projects that require contextual knowledge of the program stack.
My job actively avoids it because of the havoc it tends to cause, not to mention the legal issues.
I think context is what’s going to kill LLMs. They keep throwing hacks on top to try to make them understand context, but they never really “understand”; they just make it look like they’re following the context by simulating a few pertinent cues. Every interaction is essentially a fresh slate, with a few prompts hiding underneath to seed it with what looks like context. Actually preserving the model’s context to the level we would consider real “intelligence”, never mind long-term planning and actual “thinking”, would explode toward infinity so fast there are probably not enough resources in the universe to do it even once.
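For what it’s worth, the “fresh slate with seeded context” part is roughly how chat frontends work mechanically: the model itself is stateless, so every turn re-sends the whole conversation, and old turns get silently dropped once the window fills up. A minimal sketch (all names, the toy tokenizer, and the window size are hypothetical, not any real API):

```python
# Sketch of a stateless chat loop: the "memory" is just the history
# re-sent on every call, truncated to fit a fixed context window.

CONTEXT_WINDOW = 40  # max tokens our pretend model can see per call

def count_tokens(text: str) -> int:
    # crude stand-in for a real tokenizer: one word = one token
    return len(text.split())

def build_prompt(system: str, history: list[str], user_msg: str) -> list[str]:
    """Rebuild the prompt from scratch each turn (fresh slate),
    dropping the oldest turns once the window is exceeded."""
    messages = [system] + history + [user_msg]
    # keep the system prompt and latest message; evict oldest turns
    while sum(count_tokens(m) for m in messages) > CONTEXT_WINDOW and len(messages) > 2:
        messages.pop(1)  # whatever we drop here, the model never "remembers"
    return messages

if __name__ == "__main__":
    system = "You are a helpful assistant."
    history: list[str] = []
    for turn in ["first question about my code",
                 "a follow-up that assumes you recall the first question",
                 "another follow-up " + "with lots of extra words " * 5]:
        prompt = build_prompt(system, history, turn)
        history.append(turn)  # in reality the model's reply is appended too
        print(len(prompt), sum(count_tokens(m) for m in prompt))
```

By the last turn the earlier questions no longer fit in the window, so they vanish from the prompt entirely, which is the behavior the comment above is describing.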
https://pivot-to-ai.com/2025/03/19/ai-coding-tools-fix-bugs-by-adding-bugs/