Sure, it is programming, but it’s a different style of programming. Modern high-level languages are still primarily focused on the actual implementation details of the code; they’re not really declarative in nature.
Meanwhile, as I wrote in my original comment, the LLM could use a gradient-descent-style approach to converge on a solution. For example, if you define a signature for what the API looks like as a constraint, it can keep iterating on the code to get there. In fact, you don’t even need LLMs to do this. For example, Barliman is a constraint solver that does program synthesis this way. It’s also smart enough to reuse functions it already implemented to build more complex ones. It’s possible that these kinds of approaches could be combined with LLMs in the future, where the LLM could generate an initial solution and a solver could refine it.
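The general shape of that loop is simple to sketch (this is not how Barliman works internally, just the "spec as constraint, iterate until it passes" idea; `generate_candidate` is a hypothetical stand-in for whatever produces code, whether an LLM or a synthesizer):

```python
# Minimal sketch of constraint-driven refinement. Only the checking harness is
# concrete; generate_candidate is a hypothetical code producer (LLM, solver, ...).

def check(candidate_src, examples):
    """Run candidate source against the constraint set (input/output examples)."""
    namespace = {}
    try:
        exec(candidate_src, namespace)            # define the candidate function f
        f = namespace["f"]
        return [(x, want, f(x)) for x, want in examples if f(x) != want]
    except Exception as e:                        # broken code counts as a failure
        return [("error", str(e), None)]

def synthesize(generate_candidate, examples, max_iters=10):
    feedback = None
    for _ in range(max_iters):
        src = generate_candidate(examples, feedback)  # ask for a new attempt
        failures = check(src, examples)
        if not failures:
            return src                                # all constraints satisfied
        feedback = failures                           # feed the errors back in
    return None
```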
Finally, the fact that LLMs fail at some tasks today does not mean that these kinds of tasks are fundamentally intractable. Progress is happening at a very quick pace right now, and we don’t know where the plateau will be. I’ve been playing around with DeepSeek R1 for code generation, and a lot of the time what it outputs is clean and correct code that requires little or no modification. It’s light years ahead of anything I tried even a year ago. I expect it’s only going to get better going forward.
Barliman is interesting. What if I write f(0) = 2, f(1) = 3, f(2) = 5? Would it be smart enough to generate a functioning prime number lookup function (never mind efficiency)?
Idk about Barliman, but just logically speaking, why would it generate a prime number lookup when a parabola (or even an exponential) fits those values just as well? Apart from an AI using pattern matching to recognize that the given inputs “look like” a list of primes (in which case the AI is just going to output someone else’s pre-programmed prime number algorithm, not one it came up with itself), the problem with inputting any N data points and asking for the function that generated them is that there is always ambiguity, and the conceptually simplest solution will just be an (N-1)th-order polynomial.
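To make the ambiguity concrete: the interpolating parabola through those three points is f(x) = 0.5x² + 0.5x + 2, and nothing in the data prefers primes over it. A quick check (just illustrative arithmetic, nothing to do with Barliman):

```python
# Two different "explanations" that agree on the three given points.
primes = [2, 3, 5, 7, 11]                   # f(0)..f(4) if the answer is "primes"

def parabola(x):
    return 0.5 * x**2 + 0.5 * x + 2         # fits (0,2), (1,3), (2,5) exactly

for x in range(5):
    print(x, primes[x], parabola(x))
# x = 0, 1, 2 agree; at x = 3 the parabola gives 8.0 while the prime answer is 7,
# so the three data points alone cannot distinguish the two hypotheses.
```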
Well, just add more data points (7, 11, 13, etc.) until the simplest solution is a prime solution. Would this program be smart enough to generate the right algorithm?
Of course, an AI would just generate a prime algorithm because it’s well known, but not all problems are well known. In fact, most problems I’m solving every day are not. So for the AI, it may as well be the prime lookup problem for Barliman.
Well just add more data points (7, 11, 13, etc) until the simplest solution is a prime solution.
As I said, I don’t know anything about this program, but conceptually I think you first need to define what you mean by “simplest solution”. Because at least for me, a polynomial is the simplest solution regardless of how many data points there are, since the problem then reduces to a set of linear equations that a computer can easily solve.
However, if we specify that the solution needs to be the one with the lowest number of parameters possible, then it gets interesting. Then you can have an algorithm iteratively test solutions and try to reduce complexity until it hopefully arrives at something resembling a prime number generator. Or it may not. I don’t know if this is even possible, because one well-known problem with the “gradient descent” approach is that your algorithm can get stuck in a local valley. It thinks it’s found the optimal solution, but it’s stuck in a false minimum because it doesn’t have the “imagination” to start testing an entirely different class of solutions that may at first be much less efficient.
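A rough way to picture the “fewest parameters” criterion (again, not anything Barliman actually does): the interpolating polynomial always fits, but its parameter count grows with the data, while a trial-division prime lookup stays the same size no matter how many points you add.

```python
import numpy as np

def nth_prime(n):
    """0-indexed nth prime via trial division -- a fixed-size description."""
    count, candidate = -1, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate**0.5) + 1)):
            count += 1
    return candidate

xs = np.arange(8)
ys = np.array([nth_prime(int(x)) for x in xs])   # 2, 3, 5, 7, 11, 13, 17, 19

# Interpolating polynomial: fits exactly, but needs len(xs) coefficients,
# found by solving a linear system (which is what polyfit does here).
coeffs = np.polyfit(xs, ys, deg=len(xs) - 1)
print(len(coeffs), "polynomial parameters")      # 8, and it grows with the data
print(np.round(np.polyval(coeffs, xs)))          # reproduces ys exactly

# nth_prime has a constant "parameter count" however many points you add --
# that is one sense in which it is the simpler hypothesis.
```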
That’s the problem with “AI”, isn’t it? At least at the stage it’s at today. It can only really solve the kinds of problems that have already been solved. Maybe with some variations in the parameters, but the general structure of a problem needs to be one that it has already encountered in its training.
Given enough constraints, sure. It’s basically just logic programming at its core.
For me the worry is what happens if the practice of using LLMs to generate code becomes so ubiquitous that in, idk, 50-100 years people forget how to actually write code themselves. We already have this problem thanks to higher-level programming languages: an ever-dwindling number of people have any kind of competency in reading, let alone writing, Assembly code. Compilers do all that for us. So there is this danger of computers literally becoming black boxes that nobody understands anymore, because we’ve abstracted the tools we use to interface with them so much that what is really going on at the most basic level just looks like magic to us.
If nothing else, this could create some really weird social phenomena where people start to develop all sorts of superstitions and unscientific beliefs about computers, because even the people working on them professionally just don’t understand them. I’m a bit anxious that all of this points to our societies, rather than adopting a more materialist and scientific world view, instead entering a new age of obscurantism.
And what happens when something goes wrong and you need to debug something on a more fundamental level? What happens when only computers will be able to “understand” how computers actually work?
This is my biggest worry about AI. Not that it will try to “take over the world” or any of that other sci-fi apocalypse stuff, but simply how it will negatively affect humans on a social and psychological level, changing how we relate to technology, knowledge and skills… and even to each other.
If you don’t really need to acquire and train these kinds of skills anymore because you always have an “AI” do your work for you, will we all just become incapable of doing these things ourselves? If someone always gives you the answers to your math homework, how are you supposed to learn? I wonder if this is how people thought about industrialization in the 19th century. Why do I feel like I’m turning into those people who were saying much the same things about the Internet in the early 90s?
I think we’re already largely there. Nobody really knows how the full computing stack works anymore. The whole thing is just too big to fit in your head. So there is a lot of software out there that’s already effectively a black box. There’s a whole joke about how large legacy systems are basically generation ships where new devs have no idea how or why the system was built that way, and they just plug holes as they go.
However, even if people forget how to write code, it’s not like it’s a skill that can’t be learned again if it becomes needed. And if we do get to the point where LLMs are good enough that people forget how to write code, then it means LLMs have just become the way people write code. I don’t see how it’s different from people who only know how to use a high-level language today. A JS dev will not know how to work with pointers, do manual memory management, and so on. You can even take it up a level and look at it from the perspective of a non-technical person asking a developer to write a program for them. They’re already in this exact scenario, and that’s the vast majority of the population.
And given the specification-writing approach I described, I don’t actually see that much of a problem with the code being a black box. You would basically create contracts and the LLM would fill them in, and this way you have some guarantees about the behavior of the system.
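One way to read “create contracts and let the LLM fill them”: the human writes the signature and the properties, and generated code is only accepted if it passes them. A small sketch of the contract side only (`llm_generated_sort` is hypothetical and stands for whatever code the model produced):

```python
import random
from collections import Counter

# The human-written side of the contract: a signature (list -> list) plus
# properties any implementation must satisfy, checked on random inputs.

def satisfies_contract(sort_fn, trials=1000):
    for _ in range(trials):
        data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        out = sort_fn(list(data))
        ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
        same_elements = Counter(out) == Counter(data)   # a permutation of the input
        if not (ordered and same_elements):
            return False
    return True

# Only accept the generated implementation if it meets the spec:
# assert satisfies_contract(llm_generated_sort)
```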
It’s possible people start developing mysticism about software, but at this point most people already treat technology like magic. I expect there will always be people who have an inclination towards a scientific view of the world, and who enjoy understanding how things work. I don’t think LLMs are going to change that.
Personally, I kind of see a synthesis between AI tools and humans going forward. We’ll be using this tech to augment our abilities, and we’ll just focus on solving bigger problems together. I don’t expect there’s going to be some sort of intellectual collapse; rather, the opposite could happen, where people start tackling problems at a scale that seems unimaginable today.