• footfaults@lemmygrad.ml
    13 days ago

    I dunno, this all sounds like the same arguments they had about “low code/no code” solutions that were in vogue a couple of years ago.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
      13 days ago

      I’d argue this is quite different because it’s a tool actual developers can use, and it very much does save you time. There’s a lot of hype around this tech, but I do think people will settle on good use cases for it as it matures.

  • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
    13 days ago

    I expect that programmers are going to increasingly focus on defining specifications while LLMs handle the grunt work. Imagine declaring what the program should do, e.g., “This API endpoint must return user data in <500ms, using ≤50MB memory, with O(n log n) complexity”, and having an LLM generate solutions that adhere to those rules. It could be an approach similar to the way genetic algorithms work, where the LLM tries some initial solutions, selects the ones that come closest to the spec, and iterates until the solution works well enough.
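
    To make that concrete, here’s a rough sketch of what such a generate-and-check loop could look like. This isn’t any real framework: `generate` stands in for a hypothetical LLM call, `load` for a hypothetical sandboxed way of turning returned source into a callable, and the spec numbers are just the ones from the example above.

    ```python
    import time
    import tracemalloc
    from typing import Callable

    # Illustrative spec taken from the example above.
    SPEC = {"max_latency_s": 0.5, "max_memory_mb": 50}

    def meets_spec(candidate: Callable, test_input) -> bool:
        """Run one candidate implementation and measure it against the spec."""
        tracemalloc.start()
        start = time.perf_counter()
        candidate(test_input)
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return elapsed < SPEC["max_latency_s"] and peak / 1e6 < SPEC["max_memory_mb"]

    def synthesize(prompt: str,
                   generate: Callable[[str], str],   # hypothetical LLM call
                   load: Callable[[str], Callable],  # hypothetical sandboxed loader
                   test_input,
                   max_rounds: int = 10) -> str:
        """Keep requesting candidates until one satisfies the spec, feeding failures back in."""
        feedback = ""
        for _ in range(max_rounds):
            source = generate(prompt + feedback)
            if meets_spec(load(source), test_input):
                return source
            feedback = "\nThe previous attempt blew the latency/memory budget; try a different approach."
        raise RuntimeError("no candidate met the spec")
    ```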

    I’d also argue that this is a natural evolution. We don’t hand-assemble machine code today, most people aren’t writing things like sorting algorithms from scratch, and so on. I don’t think it’s a stretch to imagine that future devs won’t fuss with low-level logic. LLMs can be seen as “constraint solvers” akin to a chess engine, but for code. It’s also worth noting that modern tools already do this in pockets. AWS Lambda lets you declare “run this function in 1GB RAM, timeout after 15s”; imagine scaling that philosophy to entire systems.
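
    For reference, that Lambda constraint really is just a couple of fields in the deployment call. A rough boto3 sketch (the function name, role ARN, and zip path here are placeholders):

    ```python
    import boto3

    client = boto3.client("lambda")

    # Declare the resource envelope up front; the platform enforces it at runtime.
    with open("bundle.zip", "rb") as f:
        client.create_function(
            FunctionName="user-data-endpoint",           # placeholder name
            Runtime="python3.12",
            Role="arn:aws:iam::123456789012:role/demo",  # placeholder role ARN
            Handler="app.handler",
            Code={"ZipFile": f.read()},
            MemorySize=1024,  # “run this function in 1GB RAM”
            Timeout=15,       # “timeout after 15s”
        )
    ```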

    • kredditacc@lemmygrad.ml
      13 days ago

      Defining specifications IS programming. We are well past the age in which we write specs first, Assembly later. Now most programming languages focus primarily on high-level business logic. These are called “high-level programming languages”.

      The question then becomes “Can LLMs become even higher-level programming languages?”. I think the answer to that question is “no”. LLMs can mix existing code examples into new code, but that code is not guaranteed to work, so it needs human intervention to make it right. And it must be a human specifically, because LLMs have no intrinsic understanding of the code and therefore often get stuck in a loop (emitting the same code back and forth).

      I have also tried them before: LLMs are good at well-documented problems, well-documented frameworks, and well-documented programming languages. Give them niche problems and you will spend all day tweaking prompts to get the right answer, at which point, why not simply write the code yourself? LLMs also fail with big enough codebases.

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
        13 days ago

        Sure it is programming, but it’s a different style of programming. Modern high-level languages are still primarily focused on the actual implementation details of the code; they’re not really declarative in nature.

        Meanwhile, as I wrote in my original comment, the LLM could use a gradient-descent-type approach to converge on a solution. For example, if you define a signature for what the API looks like as a constraint, it can keep iterating on the code to get there. In fact, you don’t even need LLMs to do this. For example, Barliman is a constraint solver that does program synthesis this way. It’s also smart enough to reuse functions it already implemented to build more complex ones. It’s possible that these kinds of approaches could be combined with LLMs in the future, where the LLM generates an initial solution and a solver refines it.

        Finally, the fact that LLMs fail at some tasks today does not mean that these kinds of tasks are fundamentally intractable. The pattern has been that progress is happening at a very quick pace right now, and we don’t know where the plateau will be. I’ve been playing around with DeepSeek R1 for code generation, and a lot of the time what it outputs is clean and correct code that requires little or no modification. It’s light years ahead of anything I tried even a year ago. I expect it’s only going to get better going forward.

        • cfgaussian@lemmygrad.ml
          13 days ago

          For me the worry is what happens if the practice of using LLMs to generate code becomes so ubiquitous that in, idk, 50-100 years people forget how to actually write code themselves? We already have this problem thanks to higher-level programming languages: an ever-dwindling number of people have any kind of competence in reading, let alone writing, Assembly code. Compilers do all that for us. So there is this danger of computers literally becoming black boxes that nobody understands anymore, because we’ve abstracted the tools we use to interface with them so much that what is really going on at the most basic level just looks like magic to us.

          If nothing else this could create some really weird social phenomena where people start to develop all sorts of superstitions and unscientific beliefs about computers because even the people working on them professionally just don’t understand them. I’m a bit anxious that all of this is pointing to how our societies, rather than adopting a more materialist and scientific world view, will instead just be entering a new age of obscurantism.

          And what happens when something goes wrong and you need to debug something on a more fundamental level? What happens when only computers will be able to “understand” how computers actually work?

          This is my biggest worry about AI. Not that it will try to “take over the world” or any of that other sci-fi apocalypse stuff, but simply how it will negatively affect humans on a social and psychological level, change how we relate to technology, knowledge and skills…and even to each other.

          If you don’t really need to acquire and train these kinds of skills anymore because you always have an “AI” do your work for you, will we all just become incapable of doing these things ourselves? If someone always gives you the answers to your math homework, how are you supposed to learn? I wonder if this is how people thought about industrialization in the 19th century. Why do i feel like i’m turning into those people who were saying much the same things about the Internet in the early 90s?

          • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
            13 days ago

            I think we’re already largely there. Nobody really knows how the full computing stack works anymore. The whole thing is just too big to fit in your head. So, there is a lot of software out there that’s already effectively a black box. There’s a whole joke about how large legacy systems are basically generation ships, where new devs have no idea how or why the system was built that way, and they just plug holes as they go.

            However, even if people forget how to write code, it’s not like it’s a skill that can’t be learned again if it becomes needed. And if we do get to the point where LLMs are good enough that people forget how to write code, then it means LLMs have just become the way people write code. I don’t see how that’s different from people who only know how to use a high-level language today. A JS dev won’t know how to work with pointers, do manual memory management, and so on. You can even take it up a level and look at it from the perspective of a non-technical person asking a developer to write a program for them. They’re already in this exact scenario, and that’s the vast majority of the population.

            And given the specification-writing approach I described, I don’t actually see that much of a problem with the code being a black box. You would basically create contracts that the LLM fills in, and that way you have some guarantees about the behavior of the system.

            It’s possible people will start developing mysticism about software, but at this point most people already treat technology like magic. I expect there will always be people who have an inclination towards a scientific view of the world, and who enjoy understanding how things work. I don’t think LLMs are going to change that.

            Personally, I kind of see a synthesis between AI tools and humans going forward. We’ll be using this tech to augment our abilities, and we’ll just focus on solving bigger problems together. I don’t expect there’s going to be some sort of intellectual collapse; rather, the opposite could happen, where people start tackling problems on a scale that seems unimaginable today.

        • kredditacc@lemmygrad.ml
          13 days ago

          Barliman is interesting. What if I write f(0) = 2, f(1) = 3, f(2) = 5? Would it be smart enough to generate a functioning prime number lookup function (nevermind efficiency)?

          • cfgaussian@lemmygrad.ml
            13 days ago

            Idk about Barliman, but just logically speaking, why would it generate a prime number lookup when a parabola (or even an exponential) fits those values just as well? Apart from an AI using pattern matching to recognize that the inputs “look like” a list of primes (in which case the AI is just going to output someone else’s pre-programmed prime number generating algorithm, not one that it came up with itself), the problem with inputting any N data points and asking for the function that generated them is that there is always ambiguity, and the conceptually simplest solution will just be a polynomial of degree N-1.
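
            To make the ambiguity concrete, here’s a quick numpy check (nothing to do with Barliman itself): the unique parabola through those three points matches them exactly, but then immediately leaves the primes.

            ```python
            import numpy as np

            # Fit the unique degree-2 polynomial through (0,2), (1,3), (2,5).
            coeffs = np.polyfit([0, 1, 2], [2, 3, 5], deg=2)
            print(coeffs)                 # ~[0.5 0.5 2.]  ->  f(x) = 0.5x^2 + 0.5x + 2
            print(np.polyval(coeffs, 3))  # ~8.0, not the next prime (7)
            ```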

            • kredditacc@lemmygrad.ml
              13 days ago

              Well just add more data points (7, 11, 13, etc) until the simplest solution is a prime solution. Would this program be smart enough to generate the right algorithm?

              Of course, an AI would just generate a prime algorithm because it’s well known, but not all problems are well known. In fact, most problems I’m solving every day are not. So to the AI, they may as well be what the prime lookup problem is for Barliman.

              • cfgaussian@lemmygrad.ml
                13 days ago

                That’s the problem with “AI”, isn’t it? At least at the stage it’s at today, it can only really solve the kinds of problems that have already been solved. Maybe with some variations in the parameters, but the general structure of a problem needs to be one it has already encountered in its training.

              • cfgaussian@lemmygrad.ml
                13 days ago

                Well just add more data points (7, 11, 13, etc) until the simplest solution is a prime solution.

                As i said, i don’t know anything about this program, but conceptually i think you first need to define what you mean by “simplest solution”. Because at least for me a polynomial is the simplest solution regardless of how many data points there are, because then the problem reduces to a set of linear equations that a computer can easily solve.

                However, if we specify that the solution needs to be one with the lowest number of parameters possible, then it gets interesting. Then you can have an algorithm start to iteratively test solutions and try to reduce complexity until it hopefully arrives at something resembling a prime number generator. Or it may not. I don’t know if this is even possible, because one well-known problem with the “gradient descent” approach is that your algorithm can get stuck in a local valley. It thinks it’s found the optimal solution, but it’s stuck in a false minimum because it does not have the “imagination” to start testing an entirely different class of solutions that may at first be much less efficient.

  • amemorablename@lemmygrad.ml
    13 days ago

    I admit I partly skimmed the article, so maybe they covered this point, but from everything I’ve seen with LLMs, I’m skeptical their impact is going to change much, unless it’s to make things shittier by forcing them where they aren’t ready. AI as a whole could change a lot in ways that are hard to predict, because AI is kinda synonymous with automation and could encompass many developments across many different kinds of technologies. But the current crop of AI hype and what it’s capable of? Where I see it most taking over is the capitalistic “content churn” industry. For anything that needs thinking beyond “cash in and move on to the next one”, I don’t see how it gets integrated very effectively.

    Part of what makes me doubt it is efficiency. Although there are some notable advances in efficiency, such as DeepSeek’s cost reduction in training, generative AI is overall a resource-heavy technology. Both training and inference are costly (environmentally, in GPUs, etc., not just in price tag).

    Another point is competence. The more complicated a task is, the easier it is for the AI to make mistakes, some of which only an expert in the related subject matter would pick up on, which makes it a high-competence task just to evaluate the AI’s results and make sure it isn’t doing more harm than good.

    Another is learning. You could look at the competence example and say that a human in training needs similar evaluation, but the human in training will usually learn from their mistakes, with correction, and not make them as often in the future. The AI won’t unless you retrain it, and even then it is still highly limited due to its statistical and tokenizing nature.

    Another element is trust. The western market has much more of a vested interest than, say, China, in selling the idea that AI as it is now will work, will integrate, and therefore will make a profit; otherwise, its house-of-cards gold rush investments go to waste and the industry tanks (that fragility already visible in how easily DeepSeek upset the equilibrium, or lack thereof).

    I think programmers and programming as a field are in more danger (or danger of change, depending on how you want to look at it) from capitalists than from generative AI. The field already zipped past a phase, which I can still remember, when a fizz buzz example was discussed as a test of basic programming competence, to the internet being stuffed to the brim with coding bootcamp stuff and “master algorithms and data structures” doctrine. And that change happened before generative AI. I don’t know what the hard numbers are, so I could be deceived on it, but by all appearances, programming became much more saturated via all the “learn to code” stuff, coupled with more companies cutting jobs in general, resulting in a field that is significantly harder to get into and harder to stay in. And again, all of that before generative AI.

    I don’t mean this toward you, Yogthos, of course, but I think there is a certain amount of denial among programmers about the field being touched by capitalism in general. This sort of unspoken belief that because programming is so important, the trend will just sort of continue that way and it will continue to be a lucrative and cozy ivory tower to hang out in. But that won’t stop capitalists from trying to reduce payroll as much as possible, whether it truly makes rational sense or not.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
      13 days ago

      It seems like AI is a very polarizing topic: people tend to either think it’ll do everything or reject it as pure hype. The reality of new tech’s usefulness typically lies somewhere in between. I don’t expect that programmers will disappear as a profession in the foreseeable future. My view is that LLMs are becoming a genuinely useful tool, and they will be increasingly able to take care of writing boilerplate, freeing up developers to do more interesting things.

      For example, just the other day I had to create a SQL schema for an API endpoint, and I was able to throw sample JSON into DeepSeek R1 and get a reasonable schema out of it that needed practically no modifications. It probably would’ve taken me a couple of hours of work to design and write it myself. I also find you can generally figure out how to do something quicker with these tools than by searching sites like Stack Overflow or random blogs. Even if it doesn’t give a correct solution, it can point you in the right direction. Another use I can see is having it search through codebases to find where specific functionality lives. That would be very helpful for finding your way around large projects. So, my experience is that there are already a lot of legitimate time-saving uses for this tech. And as you note, it’s hard to say where we start getting into diminishing-returns territory.
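
      As a toy illustration of the shape of that task (the payload and type mapping here are made up, and the actual schema came from the LLM, not a script like this), mapping a sample JSON document onto a rough CREATE TABLE is mostly mechanical:

      ```python
      import json

      # Hypothetical sample payload, nothing like the real endpoint.
      sample = json.loads('{"id": 1, "email": "a@b.c", "active": true, "score": 9.5}')

      # Naive Python-type -> SQL-type mapping, purely for illustration.
      sql_types = {int: "BIGINT", float: "DOUBLE PRECISION", bool: "BOOLEAN", str: "TEXT"}

      columns = ",\n  ".join(f"{k} {sql_types[type(v)]}" for k, v in sample.items())
      print(f"CREATE TABLE users (\n  {columns}\n);")
      ```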

      Efficiency of these things is still a valid concern, but I don’t think we’ve really tried optimizing much yet. The fact that DeepSeek was able to get such a huge improvement makes me think there is a lot of other low-hanging fruit to be plucked in the near future. I also think it’s highly likely we’ll be combining LLMs with other types of AI, such as symbolic logic. This is already being tried with neurosymbolic systems. Different types of machine learning algorithms could tackle different types of problems more efficiently. There are also interesting things happening on the hardware side, with stuff like analog chips showing up. Making the chip analog is far more efficient for this workload, since we’re currently emulating analog systems on top of digital ones.

      I very much agree regarding the point of capitalism being a huge negative factor here. AI being used abusively is just another reason to fight against this system.

      • RedClouds@lemmygrad.ml
        13 days ago

        Efficiency problems aside (hopefully R1 keeps us focused on increasing efficiency while still being useful), I find it super useful when you set a pattern and let it fill it out for you.

        On a side project, I built out 10 or 15 structs and then implemented one of them in a particular pattern and I just asked it to finish off the rest. I did like 10% of the work, but because I set the pattern, it finished everything else flawlessly.
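
        A minimal illustration of what “setting the pattern” can look like (Python dataclasses standing in for the structs; the names are made up): one type gets the round-trip methods written by hand, and the remaining ones are mechanical repetitions of the same shape that the model can fill in.

        ```python
        from dataclasses import dataclass

        @dataclass
        class User:
            id: int
            name: str

            # The hand-written pattern: a to_row/from_row pair.
            def to_row(self) -> tuple:
                return (self.id, self.name)

            @classmethod
            def from_row(cls, row: tuple) -> "User":
                return cls(id=row[0], name=row[1])

        # Invoice, Order, Payment, ... follow the exact same to_row/from_row shape;
        # that repetitive remainder is the part that gets handed off to the LLM.
        ```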

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
          13 days ago

          Oh yeah, I noticed that too. Once you give it a few examples, it’s good at iterating on that. And this is precisely the kind of drudgery I want to automate. There is a lot of code you end up having to write that’s just glue that holds things together, and it’s basically just a repetitive task that LLMs can automate.

      • amemorablename@lemmygrad.ml
        13 days ago

        You make fair points and in case it’s not clear, I don’t personally see it as “pure hype.” I’ve been around a particular AI service for… I don’t know, maybe more than a couple of years now? Some of my feel on it comes from using AI, some of it comes from discussing it with others (sometimes like beating a dead horse conversations that have been had a hundred times), and some of it comes from my technical understanding of AI, which is certainly not on the level of an ML engineer, but is quite a lot more than a layperson.

        My skepticism is more about degrees than anything else. I know it can be useful in some capacities (though it is informative to hear more about the programming end, as I haven’t personally experimented with it in that capacity; my info on that comes more from others who have). But widespread adoption doesn’t seem as inevitable as some people make it out to be, or if it is, it seems like it’ll be more a consequence of capital forcing it to happen, regardless of whether it makes sense. To focus on programming as the example, since that’s the main subject here: compared with other forms of advancement in programming, the benefits in those other cases seem much more clearly to outweigh the drawbacks, with few known downsides. The main drawback along the way is one already brought up in this thread: the dwindling knowledge of lower-level system operations, especially in older languages. With generative AI, maybe it’s just because of how much time I’ve spent around it, but I see much more obvious and difficult-to-contend-with drawbacks, like those mentioned in my previous post.

        I still think it can have a place as an assistive tool, but I’m not sure it will be as simple an overall improvement as some things that are considered advancements. And to your point about the evolution of the tech, part of where my mind goes is that the field may need breakthroughs beyond the transformer architecture before it can reach useful widespread adoption. The hallucination factor alone makes it borderline useless, and at best untrustworthy, for certain kinds of tasks and skill levels (e.g. not having the knowledge to know when it’s wrong, or the awareness to cross-reference with something or someone that is not an LLM).

        Now if there were a breakthrough (well, more like multiple breakthroughs) in infrastructure and design where it could link up with a more logic-based system, connect to the internet, cross-reference things, show its sources, and use an evaluative system to learn from its mistakes (which I understand is a lot, but just saying), then I think it could much more easily be seen as an overall benefit. Or even without the self-learning, if it could do all those other things (making it more trustworthy, more reasoned (not just a lingual mimic of reasoning), and more “show your work”), and it could be manually extended with new knowledge where needed without a huge budget and team of professionals, then I think it could more feasibly reach widespread adoption (without being forced on people). It’s possible it will be pushed on people anyway, at least in the capitalist west, with capital saying “catch up or starve”, but there is already some not-insignificant AI hate out there, and people may not make it easy for them.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
          13 days ago

          Right, the reality is going to be nuanced. There will be niches where this tool is helpful, and others where it doesn’t really make sense. We’re in a hype phase right now, and people are still figuring out good uses for it. It’s also worth noting that people are already actively working on solutions to the hallucination problem and on doing actual reasoning. The most interesting approach I’ve seen so far is neurosymbolics. It combines a deep neural net with a symbolic logic engine: the neural net does what it’s good at, which is parsing and classifying raw input data, and the symbolic logic system operates on the classified data. This way the system can actually reason through a problem, explain the steps, be corrected, etc. This is a fun read about it: https://arxiv.org/abs/2305.00813
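
          A toy sketch of that division of labor (not the architecture from the paper, just the general idea): a stand-in “perception” step emits symbolic facts, and a tiny rule engine reasons over them while recording each step, so the chain can be inspected and corrected.

          ```python
          def classify(raw_input) -> set[str]:
              """Stand-in for the neural net; a real system would run a trained model here."""
              return {"red", "octagon"}  # hard-coded facts for the sketch

          # (premises, conclusion) pairs for the symbolic side.
          RULES = [
              ({"red", "octagon"}, "stop_sign"),
              ({"stop_sign"}, "vehicle_must_stop"),
          ]

          def reason(facts: set[str]) -> list[str]:
              """Forward-chain over RULES, recording every derivation step."""
              steps = []
              changed = True
              while changed:
                  changed = False
                  for premises, conclusion in RULES:
                      if premises <= facts and conclusion not in facts:
                          facts.add(conclusion)
                          steps.append(f"{sorted(premises)} -> {conclusion}")
                          changed = True
              return steps

          print(reason(classify(None)))
          # ["['octagon', 'red'] -> stop_sign", "['stop_sign'] -> vehicle_must_stop"]
          ```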

          I do think AI might present a problem for the capitalist system as a whole, because if the vast majority of work really can be automated going forward, then the whole model of working for a living falls apart. It will be very interesting to see how the capitalist world grapples with this, assuming it lasts that long to begin with.