You make fair points, and in case it's not clear, I don't personally see it as "pure hype." I've been around a particular AI service for… I don't know, maybe more than a couple of years now? Some of my feel for it comes from using AI, some comes from discussing it with others (sometimes in dead-horse conversations that have been had a hundred times), and some comes from my technical understanding of AI, which is certainly not at the level of an ML engineer, but is quite a lot more than a layperson's.
My skepticism is more about degrees than anything else. I know it can be useful in some capacities (though it is informative to hear more about the programming end, as I've not personally experimented with it in that capacity; my info on that comes more from others who have). But widespread adoption doesn't seem as inevitable as some people make it out to be, or if it is, it seems like it'll be more a consequence of capital forcing it to happen, regardless of whether it makes sense. To focus on programming as an example, since that's the main subject here: with other kinds of advancement in programming, the trade-off seems much more clearly weighted toward benefits, with few known drawbacks. The main drawback along the way is one already brought up in this thread: the dwindling knowledge of lower-level system operations, especially in older languages. With generative AI, maybe it's just because of how much time I've spent around it, but I see much more obvious and difficult-to-contend-with drawbacks, like those mentioned in my previous post.
I still think it can have a place as an assistive tool, but I'm not sure it's going to be as simple an overall improvement as some things that are considered advancements. And to your point about evolution of the tech, part of where my mind goes is that the field may need breakthroughs beyond the transformer architecture before it can reach useful widespread adoption. The hallucination factor alone makes it borderline useless, and at best untrustworthy, for certain combinations of tasks and skill levels (e.g. a user not having the knowledge to recognize when it's wrong, or the awareness to cross-reference with something or someone that is not an LLM). Now if there were a breakthrough (well, more like multiple breakthroughs) in infrastructure and design where it could link up with a more logic-based system, connect to the internet, cross-reference things, show its sources, and use an evaluative system to learn from its mistakes (which I understand is a lot, but just saying), then I think it could much more easily be seen as an overall benefit. Or even without the self-learning, if it could do all those other things (making it more trustworthy, more reasoned rather than just a linguistic mimic of reasoning, and more "show your work"), and it could be manually extended with new knowledge where needed without a huge budget and a team of professionals, then I think it could more feasibly reach widespread adoption (without being forced on people). It's possible it will be pushed on people anyway, at least in the capitalist west, and capital will say "catch up or starve," but there is already some not-insignificant AI hate out there, and people may not make it easy for them.
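Just to make the cross-referencing part concrete, here's a very rough sketch of the kind of generate-verify-cite loop I'm imagining. Everything in it (generate_answer, search_sources, supports) is a made-up stub standing in for an LLM, a search backend, and a verification step, not any real API:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    sources: list[str]

def generate_answer(question: str) -> list[str]:
    """Stand-in for an LLM call; returns the draft answer split into claims."""
    return [f"Some claim relevant to: {question}"]

def search_sources(claim: str) -> list[str]:
    """Stand-in for an internet search returning candidate source URLs."""
    return ["https://example.org/source-1", "https://example.org/source-2"]

def supports(source: str, claim: str) -> bool:
    """Stand-in for a verification step; a real system might use an entailment
    check or a symbolic lookup against the source text."""
    return True

def answer_with_citations(question: str) -> list[Claim]:
    verified = []
    for claim in generate_answer(question):
        # Keep only claims that survive cross-referencing against sources.
        sources = [s for s in search_sources(claim) if supports(s, claim)]
        if sources:
            verified.append(Claim(claim, sources))
    return verified

for c in answer_with_citations("How do transformers work?"):
    print(c.text, "--", ", ".join(c.sources))
```

The hard part in reality is obviously the supports() check, but that's the shape of what I mean by "show its sources."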
Right, the reality is going to be nuanced. There will be niches where this tool will be helpful, and others where it doesn't really make sense. We're in a hype phase right now, and people are still figuring out good uses for it. It's also worth noting that people are already actively working on solutions to the hallucination problem and on doing actual reasoning. The most interesting approach I've seen so far is neurosymbolic AI, which combines a deep neural net with a symbolic logic engine. The neural net does what it's good at, parsing and classifying raw input data, and the symbolic logic system operates on the classified data. This way the system can actually reason through a problem, explain its steps, be corrected, etc. This is a fun read on it: https://arxiv.org/abs/2305.00813
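To give a flavor of how the split works (a toy sketch of the general idea I'm improvising here, not the architecture from the paper): a stand-in "neural" classifier turns raw input into symbolic facts, and a tiny forward-chaining rule engine derives conclusions while recording each step, so the reasoning can be inspected:

```python
def perceive(raw_input: str) -> set[str]:
    """Stand-in for the neural net: classify raw input into symbolic facts."""
    # Pretend the classifier looked at the input and recognized a penguin.
    return {"penguin"}

# Rules for the symbolic engine: if all premises hold, the conclusion holds.
RULES = [
    ({"penguin"}, "bird"),
    ({"bird"}, "has_feathers"),
    ({"penguin"}, "cannot_fly"),
]

def forward_chain(facts: set[str]) -> list[str]:
    """Derive every reachable fact, recording each step as an explanation."""
    trace = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{' & '.join(sorted(premises))} -> {conclusion}")
                changed = True
    return trace

facts = perceive("photo of a penguin")
for step in forward_chain(facts):
    print(step)
# penguin -> bird
# bird -> has_feathers
# penguin -> cannot_fly
```

The nice property is that the trace is inspectable: if a conclusion is wrong, you fix or add a rule instead of retraining the whole net.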
I do think AI might present a problem for the capitalist system as a whole, because if the vast majority of work really can be automated going forward, then the whole model of working for a living will fall apart. It will be very interesting to see how the capitalist world grapples with this, assuming it lasts that long to begin with.