US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) to a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while only 15 percent expect to be harmed.

The public does not share this confidence. Only about 11 percent of the public say they are “more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts share that pessimism. And unlike the majority of experts, just 24 percent of the public think AI will be good for them, whereas nearly half anticipate they will be personally harmed by it.

  • EndlessNightmare@reddthat.com · 7 days ago

    AI has its place, but they need to stop trying to shoehorn it into anything and everything. It’s the new “internet of things”: cramming internet connectivity into shit that doesn’t need it.

    • poopkins@lemmy.world · 7 days ago

      You’re saying the addition of Copilot into MS Paint is anything short of revolutionary? You heretic.

  • SSNs4evr@leminal.space · 7 days ago

    The problem could be that, with all the advancements in technology just since 1970, all the medical advancements, all the added efficiencies at home and in the workplace, the immediate knowledge-availability of the internet, all the modern conveniences, and the ability to maintain distant relationships through social media, most of our lives haven’t really improved.

    We are more rushed and harried than ever, life expectancy (in the US) has decreased, we’ve gone from 1 working adult in most families to 2 working adults (with more than 1 job each), and income has gone down. Recreation has moved from wholesome outdoor activities to an obese population glued to various screens and gaming systems.

    The “promise of the future” through technological advancement has been a pretty big letdown. What’s AI going to bring? More loss of meaningful work? When will technology bring fewer working hours and more income - at the same time? When will technology solve hunger, famine, homelessness, and mental health issues, and when will it start cleaning my freaking house and making me dinner?

    When all the jobs are gone, how beneficial will our overlords be when it comes to universal basic income? Most of the time, it seems that more bad comes from our advancements than good. It’s not that the advancements aren’t good, it’s that they’re immediately turned to wartime use and profiteering for a very few.

  • snooggums@lemmy.world · 8 days ago

    Experts are working from their perspective, which involves being employed to know the details of how the AI works and its potential benefits. They are also invested in it being successful, since they spent the time gaining that expertise. I would guess a number of them work in fields that are not easily visible to the public, and use AI systems in ways the public never will, because they are focused on things like pattern recognition on viruses or identifying locations to excavate for archeology, tasks that always end with a human verifying the results. They use AI as a tool and see the indirect benefits.

    The general public’s experience is being told AI is a magic box that will be smarter than the average person, makes flashy images, and sounds more like a person than previous automated voice systems. They see it spit out a bunch of incorrect or incoherent answers, because they are using it the way it was promoted: as actually intelligent. They also see this unreliable tech being jammed into things that worked fine previously, and the negative outcome of the hype not meeting the promises. They reject it because the way it is being pushed onto the public does not meet the expectations set by the advertising.

    That is before the public is being told that AI will drive people out of their jobs, which is doubly insulting when it does a shitty job of replacing people. It is a tool, not a replacement.

  • CancerMancer@sh.itjust.works · 7 days ago

    Just about every major advance in technology like this enhanced the power of the capitalists who owned it and took power away from the workers who were displaced.

  • VirtualOdour@sh.itjust.works · 8 days ago

    People aren’t very smart, have trouble understanding new things, and fear change - of course they express negative opinions.

    Most Americans would have said the same about electricity, computers, the internet, mobile phones…

  • Clent@lemmy.dbzer0.com · 7 days ago

    I do, as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is running dry, because no one wants to enter the career with companies hanging AI over everyone’s heads. Basic supply and demand says my skillset will become more valuable.

    Someone will need to clean up the AI slop. I’ve already had similar positions where I was brought in to clean up code bases after failed outsourcing.

    AI is simply the next iteration. The problem is always the same: the business doesn’t know what it really wants and needs, and has no ability to assess what has been delivered.

      • futatorius@lemm.ee · 5 days ago

      If it walks and quacks like a speculative bubble…

      I’m working in an organization that has been exploring LLMs for quite a while now, and at least on the surface, it looks like we might have some use cases where AI could prove useful. But so far, in terms of concrete results, we’ve gotten bupkis.

      And most firms I’ve encountered don’t even have potential uses, they’re just doing buzzword engineering. I’d say it’s more like the “put blockchain into everything” fad than like outsourcing, which was a bad idea for entirely different reasons.

      I’m not saying AI will never have uses. But as it’s currently implemented, I’ve seen no use of it that makes a compelling business case.

      • lobut@lemmy.ca · 6 days ago

      A completely random story, but: I’m on the AI team at my company. However, I do infrastructure/application work rather than the AI stuff. First off, I had to convince my company to move our data scientist to this team. They had him doing DevOps work (a complete mismanagement of resources). Also, the work I was doing with AI was SO unsatisfying. We weren’t tweaking any models. We were just shoving shit to ChatGPT. Now, it would be interesting if you’re doing RAG stuff, maybe, or other things. However, I was “crafting” my prompt and I could not give a shit less about writing a perfect prompt. I’m typically used to coding what I want, but instead I had to figure out how to phrase it properly: “please don’t format it like X”. It’s like I wasn’t using AI to write code; it was a service endpoint.

      During lunch with the AI team, they keep saying things like “we only have 10 years left at most”. I was like, “but if you have AI spit out this code, if something goes wrong … don’t you need us to look into it?” they were like, “yeah but what if it can tell you exactly what the code is doing”. I’m like, “but who’s going to understand what it’s saying …?” “no, it can explain the type of problem to anyone”.

      I said, I feel like I’m talking to a libertarian right now. Every response seems to be some solution that doesn’t exist.

      • ImmersiveMatthew@sh.itjust.works · 7 days ago

      I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, without a concerted focus on enhancing logic, the promise of AGI will likely remain elusive. AI cannot really develop further without the logic being dramatically improved, yet logic remains stagnant even in the latest reasoning models, at least when it comes to coding.

      I would argue that if we had much better logic, with all other metrics being the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about the logic gaps, I do not foresee AGI arriving anytime soon, even with bigger models coming.

        • Clent@lemmy.dbzer0.com · 7 days ago

        If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren’t it.

        They are language models and until someone can replace that second L with Logic, no amount of layering is going to get us there.

        Those layers are basically all the previous AI techniques laid over the top of an LLM but anyone that has a basic understanding of languages can tell you how illogical they are.

          • ImmersiveMatthew@sh.itjust.works · 4 days ago

          Agreed. I would add that not only would job loss be enormous, but many corporations are suddenly going to be competing with individuals armed with the same AI.

      • mctoasterson@reddthat.com · 7 days ago

      AI can look at a bajillion examples of code and spit out its own derivative impersonation of that code.

      AI isn’t good at doing a lot of other things software engineers actually do. It isn’t very good at attending meetings, gathering requirements, managing projects, writing documentation for highly-industry-specific products and features that have never existed before, working user tickets, etc.

  • moonlight@fedia.io · 8 days ago

    Depends on what we mean by “AI”.

    Machine learning? It’s already had a huge effect, drug discovery alone is transformative.

    LLMs and the like? Yeah I’m not sure how positive these are. I don’t think they’ve actually been all that impactful so far.

    Once we have true machine intelligence, then we have the potential for great improvements in daily life and society, but that entirely depends on how it will be used.

    It could be a bridge to post-scarcity, but under capitalism it’s much more likely it will erode the working class further and exacerbate inequality.

    • MangoCats@feddit.it · 8 days ago

      Machine learning? It’s already had a huge effect, drug discovery alone is transformative.

      Machine learning is just large-scale automated optimization, something that was done for many decades before, but the hardware finally reached a point where automated searches started out-performing more informed, selective searches.

      The same way that AlphaZero got better at chess than Deep Blue - it just steam-rollered the problem with raw power.
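      The “steam-roller” point can be illustrated with a toy optimization. This is a hedged sketch, not a model of any real system: the objective function, search range, grid, and sample count are all made up for illustration. A blind random search, given enough evaluations, beats a hand-picked coarse grid:

```python
import random

random.seed(0)  # deterministic for the example

def f(x):
    # Toy objective with a single minimum at x = 1.3.
    return (x - 1.3) ** 2

# "Informed" search: a hand-picked coarse grid of 21 candidate points.
grid = [i / 2 for i in range(-10, 11)]
informed_best = min(grid, key=f)

# "Raw power" search: many random samples, no domain knowledge at all.
samples = [random.uniform(-5, 5) for _ in range(100_000)]
raw_best = min(samples, key=f)

print(f"informed: x={informed_best:.3f}, f={f(informed_best):.6f}")
print(f"raw:      x={raw_best:.3f}, f={f(raw_best):.6f}")
```

With enough raw evaluations, the uninformed search lands far closer to the true minimum at x = 1.3 than the 21-point “informed” grid ever can, mirroring how compute-heavy methods overtook hand-tuned ones once hardware allowed it.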

    • Pennomi@lemmy.world · 8 days ago

      As long as open source AI keeps up (it has so far) it’ll enable technocommunism as much as it enables rampant capitalism.

      • moonlight@fedia.io · 8 days ago

        I considered this, and I think it depends mostly on ownership and means of production.

        Even in the scenario where everyone has access to superhuman models, labor would still be devalued. Combined with robotics and other forms of automation, the capitalist class would no longer need workers, and large parts of the economy would disappear. That would create a two-tiered society, where those with resources become incredibly wealthy and powerful, while those without have no ability to do much of anything, and would likely revert to an agricultural society (assuming access to land) or just be propped up with something like UBI.

        Basically, I don’t see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don’t think it will do much to get us there.

        • FourWaveforms@lemm.ee · 8 days ago

          It will only help us get there in the hands of individuals and collectives. It will not get us there, and will be used to the opposite effect, in the hands of the 1%.

        • MangoCats@feddit.it · 8 days ago

          or just propped up with something like UBI.

          That depends entirely on how much UBI is provided.

          I envision a “simple” taxation system with UBI + flat tax. You adjust the flat tax high enough to get the government services you need (infrastructure like roads, education, police/military, and UBI), and you adjust the UBI up enough to keep the wealthy from running away with the show.

          Marshall Brain envisioned an “open source” based property system that’s not far off from UBI: https://marshallbrain.com/manna
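          The flat tax + UBI scheme described above has a simple arithmetic consequence worth spelling out: the effective tax rate is negative at low incomes and climbs toward the flat rate as income rises, so the combination is progressive even though the marginal rate is flat. A minimal sketch with illustrative numbers (the 40 percent rate and $12,000/year UBI are assumptions for the example, not figures from the comment):

```python
# Effective tax rate under a flat tax combined with UBI.
# FLAT_RATE and UBI are illustrative assumptions, not proposed values.
FLAT_RATE = 0.40
UBI = 12_000

def effective_rate(income):
    """Net tax paid (flat tax minus UBI) as a share of income."""
    net_tax = FLAT_RATE * income - UBI
    return net_tax / income

for income in (20_000, 50_000, 100_000, 1_000_000):
    print(f"${income:>9,}: {effective_rate(income):+.1%}")
```

With these numbers, a $20,000 earner has a -20% effective rate (a net transfer in), a $100,000 earner pays 28%, and the rate approaches 40% asymptotically; the break-even point sits at UBI / FLAT_RATE = $30,000, and adjusting the two knobs shifts it.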

        • MangoCats@feddit.it · 8 days ago

          It would still require a revolution.

          I would like to believe that we could have a gradual transition without the revolution being needed, but… present political developments make revolution seem more likely.

  • Mallspice@lemm.ee · 7 days ago

    When Miyazaki said the AI ghiblifier is an affront to art, I couldn’t help but think that before WW1, tanks were called an affront to horsemanship.

  • CosmoNova@lemmy.world · 8 days ago

    AI is mainly a tool for the powerful to oppress the lesser blessed. I mean, cutting actual professionals out of the process to let CEOs’ wildest dreams go unchecked already has devastating consequences, if rumors are to be believed that some kids using ChatGPT cooked up those massive tariffs that have already erased trillions.

    • applemao@lemmy.world · 8 days ago

      Yet my libertarian centrist friend INSISTS that AI is great for humanity. I keep telling him the billionaires don’t give a fuck about you and he keeps licking boots. How many others are like this??

    • Pennomi@lemmy.world · 8 days ago

      I would agree with that if the cost of the tool was prohibitively expensive for the average person, but it’s really not.

      • CosmoNova@lemmy.world · 8 days ago

        It’s already too expensive for society: it has stolen work from millions just to be trained, with millions more to come. We literally cannot afford to work for free when the rich already suck up all the productivity increase we’ve gained over the last century.

        • Pennomi@lemmy.world · 7 days ago

          I disagree. While intellectual property legally exists, ethically there’s no reason to be protective of it.

          Information should be a shared resource for everyone, and all these open weights models are a good example of that in action.

          • CosmoNova@lemmy.world · 7 days ago

            Prepare to die on that hill, I guess, because this couldn’t be further from what is happening right now. Copyright exists, but only for top oligarchs.

  • SabinStargem@lemmy.today · 7 days ago

    I think AI will be useful, but like any nascent technology, it will have to be accessible for the public before the everyman would adopt it. IMO, we are currently at the 2nd or 3rd stage in the picture below.

  • kreskin@lemmy.world · 7 days ago

    It’s just going to help industry provide inferior services and make more profit. Like AI doctors.