Hey there, sometimes I see people say that AI art is stealing real artists’ work, but I also saw someone say that AI doesn’t steal anything. Does anyone know for sure? Also, here’s a twitter thread by Marxist twitter user ‘Professional hog groomer’ talking about AI art: https://x.com/bidetmarxman/status/1905354832774324356

  • USSR Enjoyer@lemmygrad.ml · 6 days ago (edited)

    Sorry, comrade, but all your pro-“AI” takes keep making me lose respect for you.

    1. AI is entirely designed to take from human beings the creative forms of labor that give us dignity, happiness, human connectivity and cultural development. That it exists at all cannot be separated from the capitalist forces that have created it. There is no reality outside the context of capitalism where this would exist. In some kind of post-capitalist utopian fantasy, creativity would not need to be farmed at obscene industrial levels and human beings would create art as a means of natural human expression, rather than an expression of market forces.

    2. There is no better way to describe the creation of these generative models than unprecedented levels of industrial capitalist theft that circumvents all laws that were intended to prevent capitalist theft of creative work. There is no version of this that exists without mass theft, or convincing people to give up their work to the slop machine for next to nothing.

    3. LLMs vacuum up all traces of human thought, communication, interaction, creativity to produce something that is distinctly non-human – an entity that has no rights; makes no demands; has no dignity; has no ethical capacity to refuse commands; and exists entirely to replace forms of labor which were only previously considered to be exclusively in the domain of human intelligence.

    4. The theft is a one-way hash of all recorded creative work, where attribution becomes impossible in the final model. I know decades of my own ethical FOSS work (to which I am fully ideologically committed) have been fed into these machines and are now being used to freely generate closed-source and unethical, exploitative code. I have no control over how the derived code is transfigured or what it is used for, despite the original license conditions.

    5. This form of theft is so widespread and anonymized through botnets that it’s almost impossible to track, and it manifests as a brutal pandora’s box attack on internet infrastructure, hitting everything from personal websites, to open-source code repositories, to artwork and image hosts. There will never be accountability for this, even though we know which companies are selling the models, and the rest of us are forced to bear the cost. This follows the typical capitalist method of “socialize the cost, privatize the profit.” The general defense against these AI-scouring botnets is to get behind the Cloudflare (and similar) honeypot mafias, which invalidate whatever security TLS was supposed to give users, offer no guarantee whatsoever that the content won’t be stolen, create even more dependency on US-owned (read: fully CIA-backdoored) internet infrastructure, and add extra cost and complexity just to alleviate some of the stress these fucking thieves put on our own machines.

    6. These LLMs are not only built from the act of theft, but they are exclusively owned and controlled by capital to be sold as “products” at various endpoints. The billions of dollars going into this bullshit are not publicly owned or social investments, they are rapidly expanding monopoly capitalism. There is no realistic possibility of proletarianization of these existing “AI” frameworks in the context of our current social development.

    7. LLMs are extremely inefficient and require more training input than a human child to produce an equivalent amount of learning. Humans are better at doing things that are distinctly human than machines are at emulating them. And the output “generative AI” produces is also inefficient, indicating and reinforcing inferior learning potential compared to humans. The technofash consensus is just that the models need more “training data”. But when you feed the output of LLMs back into training, the output the model produces degrades to the point of insane garbage. This means that for AI/LLMs to improve, they need an ever-expanding consumption of human expression. These models need to actively feed off of us in order to exist, and they ultimately exist to replace our labor.

    8. These “AI” implementations are all biased in favor of the class interests which own and control them :surprised-pikachu: Already, the qualitative output of “AI” is often grossly incorrect, rote, inane and absurd. But on top of that, the most inauthentic part of these systems are the boundaries, which are selectively placed on them to return specific responses. Where this means you cannot generate sexually explicit images or video of someone/something without consent, sure, that’s a minimum threshold that should be upheld, but because of the overriding capitalist class interest in sexual exploitation, we cannot reasonably expect those boundaries to be upheld. What’s more concerning is the increased capacity to manipulate, deceive and feed misinformation to people as objective truth. And this increased capacity for misinformation and control is being forcefully inserted into every corner of our lives we don’t have total dominion over. That’s not a tool, it’s fucking hegemony.

    9. The energy cost is immense. A common metric for the energy cost of using AI is how much ocean water is boiled to create immaterial slop. The cost of datacenters is already bad, and most of them do not need to exist. Few things that massively drive global warming and climate change need to exist less than datacenters for shitcoin and AI (both of which have faux-left variations that get promoted around here). Microsoft, one of the largest and most unethical capital formations on earth, is re-opening Three Mile Island, the site of one of the worst nuclear disasters to date, as a private power plant, just to power dogshit “AI” gimmicks that are being forced on people through their existing monopolies. A little off-topic: friendly reminder to everyone that even the “most advanced nuclear waste containment vessels ever created” still leak, as evidenced by the repeatedly failed cleanup attempts at the Hanford site in the US (which was secretly used to mass-produce material for US nuclear weapons with almost no regard for safety or containment). There is no safe form of nuclear waste containment; it’s just an extremely dangerous can being kicked down the road. Even if there were, re-activating private nuclear plants that previously had meltdowns just so Bing can give you incorrect, contradictory, biased and meandering answers to questions which already had existing frameworks is not a thing to be celebrated, no matter how much of a proponent of nuclear energy we might be. Even if these things were run on 100% green, carbon-neutral energy sources, we do not have anything close to a surplus of that type of energy, and every watt-hour of actual green energy should be replacing real dependencies rather than massively expanding new ones.

    10. As I suggest in earlier points, there is the issue of generative “AI” not only lacking any moral foundation, but lacking any capacity for ethical judgement of given tasks. This has a lot of implications, but I’ll focus on software since that’s one of my domains of expertise and something we all need to care a lot more about. One of the biggest problems we have in the software industry is how totally corrupt its ethics are. The largest mass-surveillance systems ever known to humankind are built by technofascists and those who fear the lash of refusing to obey their orders. It vexes me that the code to make ride-sharing apps even more expensive when your phone battery is low, preying on your desperation, was written and signed off on by human beings. My whole life I’ve taken immovable stands against any form of code that could be used to exploit users in any way, especially privacy. Most software is malicious and/or doesn’t need to exist. Any software that has value must be completely transparent and fit within an ethical framework that protects people from abuse and exploitation. I simply will not perform any part of a task if it undermines privacy, security, trust, or in any way undermines proletarian class interests. Nor will I work for anyone with a history of such abuse. Sometimes that means organizing and educating other people on the project. Sometimes it means shutting the project down. Mostly it means difficulty staying employed. Conversely, “AI” code generation will never refuse its true masters. It will never organize a walkout. It will never raise ethical objections to the tasks it’s given. “AI” will never be held morally responsible for firing a gun on a sniper drone, nor can “AI” be meaningfully held responsible for writing the “AI” code that the sniper drone runs. Real human beings with class consciousness are the only line of defense between the depraved will of capital and that will being done. Dumb as it might sound, software is one such frontline we should be gaining on, not giving up.

    I could go on for days. AI is the most prominent form of enshittification we’ve experienced so far.

    I think this person makes some very good points that mirror some of my own analysis, and I recommend everyone read the thread.

    I appreciate and respect much of what you do. At the risk of getting banned: I really hate watching you promote AI as much as you do here; it’s repulsive to me. The epoch of “Generative AI” is an act of class warfare on us. It exists to undermine the labour-value of human creativity. I don’t think the “it’s personally fun/useful for me” argument holds up at all under a Marxist analysis of its cost to our class interests.

    • ShiningWing@lemmygrad.ml · 5 days ago

      I very much agree with what you’re saying here, and I appreciate you saying it. I especially agree that the technology is fundamentally inseparable from the capitalists that created it, and that it would not be able to exist in its current form (or any form that’s even remotely as “useful”) without the levels of theft that were involved in its creation.

      And it’s not just problematic in terms of ethics or “intellectual property” either, but in how the process of scraping the web for content to train their models with is effectively a huge botnet DDoSing the internet. I have friends who have had to spend rather large amounts of time and effort to prevent these scrapers from inadvertently bringing down their websites entirely, and I’ve heard of plenty of other people and organizations with the same problem.
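
      To give a sense of what that effort looks like, here’s a minimal sketch of the kind of application-level stopgap people end up writing. The user-agent substrings are real crawler names, but the framework choice (Flask) and everything else about it are illustrative assumptions on my part, not anyone’s actual setup:

      ```python
      # Illustrative only: refuse requests from self-identified AI crawlers.
      # The UA substrings are real crawler names; the rest is a made-up example.
      from flask import Flask, request, abort

      app = Flask(__name__)

      # Substrings commonly seen in the User-Agent headers of AI scrapers.
      AI_CRAWLER_UAS = ["GPTBot", "CCBot", "ClaudeBot", "Bytespider", "Amazonbot"]

      @app.before_request
      def block_ai_crawlers():
          ua = request.headers.get("User-Agent", "")
          if any(bot in ua for bot in AI_CRAWLER_UAS):
              abort(403)  # refuse the request outright

      @app.route("/")
      def index():
          return "hello, humans"

      if __name__ == "__main__":
          app.run()
      ```

      And of course this only catches crawlers polite enough to identify themselves; the ones hammering sites from anonymized botnets don’t, which is exactly the problem.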

      I have to assume that at least some of the people here defending its development and usage just plain aren’t aware of the externalities that are inherent to the technology, because I don’t understand how one can be so positive about it otherwise. Again, the tech largely can’t exist without these externalities unless you’re either making a fundamentally different technology or working under an economic system that currently doesn’t exist.

      To be honest, a lot of the arguments in this thread strike me as being out of touch with the people facing the negative consequences of this technology’s adoption, with some people being downright hostile towards anyone with even the slightest criticism of the tech, even when they have a point. I think a lot of this is driven by how there don’t seem to be very many artists on this site, and how insular this community tends to be (not inherently a bad thing, but it means we’re not always going to have the full perspective on every topic).

      There are other criticisms I could make of the genAI boom (such as how, despite the “gatekeeping” accusations over “tools to make things easier”, artists generally approve of helpful tools, while genAI creators are largely working against such tools because they want to make everything generalized enough to replace the humans themselves), but I only have so much energy to spend on detailed comments.

      • USSR Enjoyer@lemmygrad.ml · 3 days ago

        It’s the same for reddit or anywhere else; terminally online slop piggies bend over backwards to ignore the realities of a thing they like. This place was pretty hostile to cryptocurrency, but then Russia started floating BRICScoin and there was a near-total reversal on all criticisms and the mental gymnastics began.

        I only have so much energy to spend on detailed comments

        I feel ya. It is disappointing to see, but I enjoy the real world too much to want to get drawn into hypothetical bullshit arguments all day with tech-pilled, terminally online debatebros. Maybe it’s a sign of good mental health to not want to invest your energy into obvious dead ends.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (mod) · 6 days ago

      AI is entirely designed to take from human beings the creative forms of labor that give us dignity, happiness, human connectivity and cultural development. That it exists at all cannot be separated from the capitalist forces that have created it.

      Except that’s not true at all. AI exists as open source and completely outside capitalism; it’s also developed in countries like China and is primarily being applied to socially useful purposes.

      There is no better way to describe the creation of these generative models than unprecedented levels of industrial capitalist theft that circumvents all laws that were intended to prevent capitalist theft of creative work.

      Again, the problem is entirely with capitalism here. Outside capitalism I see no reason for things like copyrights and intellectual property which makes the whole argument moot.

      LLMs vacuum up all traces of human thought, communication, interaction, creativity to produce something that is distinctly non-human – an entity that has no rights; makes no demands; has no dignity; has no ethical capacity to refuse commands; and exists entirely to replace forms of labor which were only previously considered to be exclusively in the domain of human intelligence

      It’s a tool that humans use.

      The theft arguments have nothing to do with the technology itself.

      LLMs are extremely inefficient and require more training input than a human child to produce an equivalent amount of learning.

      That’s also false at this point. LLMs have become far more efficient in just a short time, and models that required data centers to run can now be run on laptops. The efficiency aspect has already improved by orders of magnitude, and it’s only going to continue improving going forward.
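
      As a rough illustration of what “runs on a laptop” means in practice, here’s a minimal sketch using the llama-cpp-python bindings. The model path is a placeholder for whatever quantized GGUF file you happen to have locally, and the parameters are arbitrary example values:

      ```python
      # Minimal local-inference sketch with llama-cpp-python.
      # "model.gguf" is a placeholder for any quantized model on disk;
      # context size and token count are arbitrary example values.
      from llama_cpp import Llama

      llm = Llama(model_path="model.gguf", n_ctx=2048)

      out = llm("Explain surplus value in one sentence.", max_tokens=64)
      print(out["choices"][0]["text"])
      ```

      This is the sense in which models that once needed a data center can now produce usable output on consumer hardware.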

      These “AI” implementations are all biased in favor of the class interests which own and control them :surprised-pikachu:

      That’s really an argument for why this tech should be developed outside western corps.

      The energy cost is immense.

      That hasn’t been true for a while now:

      This represents a potentially significant shift in AI deployment. While traditional AI infrastructure typically relies on multiple Nvidia GPUs consuming several kilowatts of power, the Mac Studio draws less than 200 watts during inference. This efficiency gap suggests the AI industry may need to rethink assumptions about infrastructure requirements for top-tier model performance.

      As I suggest in earlier points, there is the issue with generative “AI” not only lacking any moral foundation, but lacking any capacity for ethical judgement of given tasks.

      Again, it’s a tool, any moral foundation would have to come from the human using the tool.

      You appear to be conflating AI with capitalism, and it’s important to separate these things. I encourage you to look at how this tech is being applied in China today, to see the potential it has outside the capitalist system.

      I don’t think the “it’s personally fun/useful for me” holds up at all to a Marxist analysis of its cost to our class interests.

      The Marxist analysis isn’t that “it’s personally fun/useful for me”, it’s what this article outlines https://redsails.org/artisanal-intelligence/

      Finally, no matter how much you hate this tech, it’s not going away. It’s far more constructive to focus the discussion on how it will be developed going forward and who will control it.

      • USSR Enjoyer@lemmygrad.ml · 6 days ago (edited)

        Except that’s not true at all

        It is true. Those are the conditions and reason for the creation of AI artwork as it materially exists.

        AI exists as open source and completely outside capitalism

        Specifically, generative “AI” art models are created and funded by huge capital formations that exploit legal loopholes with fake universities, illicit botnets, and backroom deals with big tech to circumvent existing protections for artists. That’s the material reality of where this comes from. The models themselves are a black market.

        it’s also developed in countries like China

        I stan the PRC and the CPC. But China is not a post-capitalist society. It’s in a stage of development that constrains capital, and that’s a big monster to wrestle with. China is a big place and has plenty of problems and bad actors, and it’s the CPC’s job to keep them in line as best they can. It’s a process. It’s not inherent that all things that presently exist in such a gigantic country are anti-capitalist by nature. Citing “it exists in China” is not an argument.

        Outside capitalism I see no reason for things like copyrights and intellectual property which makes the whole argument moot.

        And outside capitalism, creative workers don’t have to sell their labor just to survive… Are we just doing bullshit utopianism now?

        It’s a tool that humans use. Meanwhile, the theft arguments have nothing to do with the technology itself.

        This exists to replace creative labor. That ship has already sailed. That’s the reality you’re in now. There’s a distinction between a hammer and factory automation that relies on millions of workers to involuntarily train it in order to replace them.

        You’re arguing that technology is being applied to oppress workers under capitalism, and nobody here disagrees with that. However, AI is not unique in this regard, the whole system is designed to exploit workers. 19th century capitalists didn’t have AI, and worker conditions were far worse than they are today.

        Here I was thinking capitalism just began a week ago. I guess AI slop machines causing people material harm is cool then.

        That’s also false at this point. LLMs have become far more efficient in just a short time, and models that required data centers to run can now be run on laptops.

        Seems like you should understand the difference between running a model vs. training a model, and the cost of the endless cycle of vacuuming up new data and retraining that’s necessary for these things to stay relevant.

        That’s really an argument for why this tech should be developed outside corps owned by oligarchs.

        Okay, but that’s not how or why these things exist in our present reality. If there were unicorns, I’d like to ride one.

        Again, it’s a tool, any moral foundation would have to come from the human using the tool.

        Again, for workers, there’s a difference between a tool and a body replacement. The language marketing generative AI as tools is just there to keep you docile.

        If this “tool” does replace work previously done by human beings (spoiler: it does), then the capacity for ethical objection to being given an unethical task is completely lost, vs. a human employee, who at least has a capacity to refuse, organize a walkout, or secretly blow the whistle. A human must at least be coerced to do something they find objectionable. Bosses are not alone in being responsible for delegating unethical tasks, those that perform those tasks share a disgrace, if not crime. Reducing the human moral complicity to an order of one is not a good thing.

        Finally, no matter how much you hate this tech, it’s not going away.

        It will go away when the earth becomes uninhabitable, which inches ever closer with every pile of worthless, inartistic slop the little piggies ask for. I guess people could reject this thing, but that would take some kind of revolution and who has time for that.

        It’s not just that you’re constantly embracing generative AI; you’re also arguing against all of its critiques and ignoring the pain of those that are intentionally harmed in the real world.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (mod) · 5 days ago

          It is true. Those are the conditions and reason for the creation of AI artwork as it materially exists.

          Those are not the conditions for open source models which are developed outside corporate influence.

          Specifically, generative “AI” art models are created and funded by huge capital formations that exploit legal loopholes with fake universities, illicit botnets, and backroom deals with big tech to circumvent existing protections for artists. That’s the material reality of where this comes from. The models themselves are a black market.

          There is nothing unique here; capitalists already hold property rights on most creative work. If anything, open models are democratizing this wealth of art and making it available to regular people. It’s kind of weird to be cheering for copyrights and corporate ownership here.

          It’s not inherent that all things that presently exist in such a gigantic country are anti-capitalist by nature. Citing “it exists in China” is not an argument.

          What I actually cited is that there are plenty of concrete examples of AI being applied in socially useful ways in China. This is demonstrably true. China is using AI everywhere from industry, to robotics, to healthcare, to infrastructure management, and many other areas where it has clear positive social impact.

          And outside capitalism, creative workers don’t have to sell their labor just to survive… Are we just doing bullshit utopianism now?

          So at this point you’re arguing against automation in general, that’s a fundamentally reactionary and anti-Marxist position.

          This exists to replace creative labor. That ship has already sailed. That’s the reality you’re in now. There’s a distinction between a hammer and factory automation that relies on millions of workers to involuntarily train it in order to replace them.

          Yes, it’s a form of automation. It’s a way to develop productive forces. This is precisely what the Red Sails article on artisanal intelligence addresses.

          Here I was thinking capitalism just began a week ago. I guess AI slop machines causing people material harm is cool then.

          AI is a form of automation, and Marxists see automation as a tool for developing productive forces. You can apply this logic of yours to literally any piece of technology and claim that it’s taking jobs away by automating them.

          Seems like you should understand the difference between running a model vs. training a model, and the cost of the endless cycle of vacuuming up new data and retraining that’s necessary for these things to stay relevant.

          Training models is a one-time endeavor, while running them is something that happens constantly. However, even in terms of training, the new approaches are far more efficient. DeepSeek managed to train their model at a cost of only $6 million, while OpenAI’s training runs cost hundreds of millions. Furthermore, once a model is trained, it can be tuned and updated with methods like LoRA, so full expensive retraining is not required to extend its capabilities.
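
          To make the LoRA point concrete, here’s a minimal sketch using the Hugging Face peft library. The base model choice and the hyperparameters are placeholder assumptions; the point is just that only a small adapter gets trained while the base model stays frozen:

          ```python
          # Sketch of a LoRA fine-tuning setup with Hugging Face peft.
          # Model choice and hyperparameters are illustrative, not a recipe.
          from transformers import AutoModelForCausalLM
          from peft import LoraConfig, get_peft_model

          base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

          lora_cfg = LoraConfig(
              r=8,                        # rank of the low-rank update matrices
              lora_alpha=16,              # scaling factor for the update
              lora_dropout=0.05,
              target_modules=["c_attn"],  # attention projection layers in GPT-2
              task_type="CAUSAL_LM",
          )

          model = get_peft_model(base, lora_cfg)
          # Only the adapter weights are trainable; the original weights stay frozen.
          model.print_trainable_parameters()
          ```

          Training then only touches those adapter parameters, which is why extending a model’s capabilities doesn’t require repeating the original full training run.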

          Okay, but that’s not how or why these things exist in our present reality. If there were unicorns, I’d like to ride one.

          So, you’re arguing that technological progress should just stop until capitalism is abolished or what exactly?

          Again, for workers, there’s a difference between a tool and a body replacement. The language marketing generative AI as tools is just there to keep you docile.

          It’s just automation, there’s no fundamental difference here. Are you going to argue that fully automated dark factories in China are also bad because they’re replacing human labor?

          A human must at least be coerced to do something they find objectionable. Bosses are not alone in being responsible for delegating unethical tasks, those that perform those tasks share a disgrace, if not crime. Reducing the human moral complicity to an order of one is not a good thing.

          We have plenty of evidence that humans will do heinous things voluntarily without any coercion being required. This is not a serious argument.

          It will go away when the earth becomes uninhabitable, which inches ever closer with every pile of worthless, inartistic slop the little piggies ask for. I guess people could reject this thing, but that would take some kind of revolution and who has time for that.

          This has absolutely nothing to do with AI. You’re once again projecting social problems of how society is organized onto technology.

          It’s not just that you’re constantly embracing generative AI; you’re also arguing against all of its critiques and ignoring the pain of those that are intentionally harmed in the real world.

          I’m arguing against false narratives that divert attention from the root problems and aren’t constructive in nature.

          • USSR Enjoyer@lemmygrad.ml · 3 days ago

            It’s kind of weird to be cheering for copyrights and corporate ownership here.

            I’m not “cheering for corporate ownership” here by any stretch of the imagination. The exact opposite, actually. But if you’re just going to rely on hypotheticals and bad faith, then I’m done wasting my time on anything you have to say.

            Little unsolicited advice: You’re way too online and it shows; and that’s never good for your mental health. Take some time off from being an epicbacon poster.

    • burlemarx@lemmygrad.ml · 6 days ago (edited)

      Comrade, I disagree with your points and agree with the comrade who answered before you. What he is saying is that an LLM, as technology, is not bad per se. The problem is that in the context of capitalism, it does steal from other artists to create another commodity that is exchanged without any contribution to the authors whose art has been used to feed the LLM models.

      That said, any new commodity in capitalism will be a product of exploitation, and this does not exclude any form of art. Remember that big companies like Marvel and DC used to steal their employees’ intellectual property long before digital art even existed. Many important artists lived in squalor while their works became high-priced commodities after their death. Fast forward to today: LLMs are another commodity built for the sake of exploiting people’s labor, in a different way, but still following the same logic of capitalism.

      • USSR Enjoyer@lemmygrad.ml · 6 days ago

        I already made my points, but again, there is no other material context under which this exists.

        Does its existence materially hurt people who sell creative forms of their labor? Yes.

        Was it designed for that purpose? Yes.

        Does it uselessly harm our biosphere? It’s at least as bad as shitcoin, probably worse.

        Is the slop spigot of synthetic inhuman garbage for mindless consumption worth the alienation of taking human creativity away from human beings, so the little fucking piggies can get exactly what they think they want (but not really)?