• NutWrench@lemmy.ml
    link
    fedilink
    English
    arrow-up
    0
    ·
    21 hours ago

    The “1 trillion” never existed in the first place. It was all hype by a bunch of Tech-Bros, huffing each other’s farts.

    • ocassionallyaduck@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      1 day ago

      Your confidence in this statement is hilarious, given that it doesn’t help your argument at all. If anything, the fact that they refined their model so well on older hardware is even more remarkable, and quite damning when OpenAI claims it needs literally cities’ worth of power and resources to train its models.

    • b161@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      0
      ·
      1 day ago

      AI is overblown, tech is overblown. Capitalism itself is a senseless death cult based on the non-sensical idea that infinite growth is possible with a fragile, finite system.

  • DiaDeLosMuertos@aussie.zone
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    I am extremely ignorant of this whole AI thing. So please, can somebody “Explain Like I’m 5” why this new thing can wipe over a trillion dollars off US stocks? I would appreciate it a lot if you could help.

    • Cynicus Rex@lemmy.ml
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      2 days ago

      "You see, dear grandchildren, your grandfather used to have an apple orchard. The fruits were so sweet and nutritious that every town citizen wanted a taste, because they thought it was the only possible orchard in the world. Therefore the citizens gave a lot of money to your grandfather, because they thought the orchard would give them more apples in return, more than the worth of the money they gave. Little did they know the world was vastly larger than our ever more arid US wasteland. Suddenly an oriental orchard was discovered, which was surprisingly cheaper to plant and maintain, and which produced more apples. This meant a significant potential loss of money for the inhabitants of the town called Idiocracy. Therefore, many people asked for their money back by selling their imaginary not-yet-grown apples to people who think the orchard will still be worth more in the future.

      This is called investing, children. It can make a lot of money, but it destroys the soul and our habitat at the same time, which goes unnoticed by all these people with advanced degrees. So think again when you hear someone speak with fancy words and untamed confidence. Many a time their reasoning falls below the threshold of dog poop. But that’s a story for another time. Sweet dreams."

    • Yozul@beehaw.org
      link
      fedilink
      arrow-up
      0
      ·
      edit-2
      2 days ago

      Let’s say I make a thing. Let’s say somebody offers to buy it from me for $10. I sell it to them, and then let’s say somebody else makes a better thing, and now no one will pay more than $2 for my thing. If my thing is a publicly traded corporation, then that just “wiped off” $8 from the stock market. The person I sold it to “lost” $8. Corporations that make AI and the hardware to run it just “lost” a lot of value.

    • Cere@aussie.zone
      link
      fedilink
      arrow-up
      0
      ·
      edit-2
      2 days ago

      Basically, US companies involved in AI have been grossly overvalued for the last few years due to having a pseudo-monopoly over AI tech (companies like OpenAI, who make ChatGPT, and Nvidia, who make the graphics cards used to run AI models).

      DeepSeek (a Chinese company) just released a free, open-source version of ChatGPT that cost a fraction of the price to train (set up), which has caused US stock valuations to drop as investors realise the US isn’t the only global player, and isn’t nearly as far ahead as previously thought.

      Nvidia is losing value because it was previously believed that top-of-the-line graphics cards were required for AI, but it turns out they are not. Nvidia has geared its company strongly towards providing for AI in recent times.

    • CeeBee_Eh@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      2 days ago

      I asked it about Tiananmen Square, it told me it can’t answer that because it can only respond with “harmless” responses.

        • Alsephina@lemmy.ml
          link
          fedilink
          English
          arrow-up
          0
          ·
          7 minutes ago

          Unfortunately it’s trained on the same US-propaganda-filled English data as any other LLM and spits out those same talking points. The censors are easy to bypass, too.

        • Phoenicianpirate@lemm.ee
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 day ago

          Yeah but China isn’t my main concern right now. I got plenty of questions to ask and knowledge to seek and I would rather not be broadcasting that stuff to a bunch of busybody jackasses.

          • Mongostein@lemmy.ca
            link
            fedilink
            arrow-up
            0
            ·
            1 day ago

            I agree. I don’t know enough about all the different models, but surely there’s a model that’s not going to tell you “<whoever’s> government is so awesome” when asking about rainfall or some shit.

        • Phoenicianpirate@lemm.ee
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 day ago

          Thank you very much. I did ask ChatGPT some technical questions about some… subjects… but having something that is private AND can give me all the information I want/need is a godsend.

          Goodbye, ChatGPT! I barely used you, but that is a good thing.

        • boomzilla@programming.dev
          link
          fedilink
          arrow-up
          0
          ·
          edit-2
          2 days ago

          I watched one video and read 2 pages of text, so take this with a mountain of salt. From that I gathered that DeepSeek R1 is the model you interact with when you use the app. The complexity of a model is expressed as its number of parameters (though I don’t know yet what those are), which dictates its hardware requirements. R1 contains about 670 bn parameters and requires very, very beefy server hardware; one video said it would take tens of GPUs. And it seems you want a lot of VRAM on your GPU(s), because that’s what AI craves. I’ve also read that 1 bn parameters require about 2 GB of VRAM.
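          That 2 GB per billion parameters is roughly what you’d expect if each weight is stored in 16-bit precision (2 bytes). A minimal back-of-the-envelope sketch (the function name is my own, for illustration; real usage is higher because of KV cache and runtime overhead):

          ```python
          def vram_estimate_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
              """Rough VRAM needed just to hold the weights.

              fp16/bf16 = 2 bytes per parameter; quantized 4-bit models
              would use bytes_per_param = 0.5 instead.
              """
              return params_billion * 1e9 * bytes_per_param / 1024**3

          print(round(vram_estimate_gb(1), 1))   # ~1.9 GB for a 1 bn parameter model
          print(round(vram_estimate_gb(670)))    # over a terabyte for R1-class models
          ```

          That is why the full R1 needs server hardware while a 3 bn parameter model fits comfortably on a 6 GB consumer card.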

          My home server is a 6-core Intel, a 1060 with 6 GB VRAM, 16 GB RAM, running EndeavourOS.

          I just installed Ollama in about half an hour, using Docker on the above machine, with no previous experience of neural nets or LLMs apart from chatting with ChatGPT. The installation includes Open WebUI, which seems better than the default you get at ChatGPT. I downloaded the qwen2.5:3b model (see https://ollama.com/search), which contains 3 bn parameters. I was blown away by the result. It speaks multiple languages (including displaying e.g. hiragana), knows how many fingers a human has, can calculate, can write valid Rust code and explain it, and it is much faster than what I get from free ChatGPT.

          The WebUI offers a nice feedback form for every answer where you can give hints to the AI via text, a 10-point score rating, or thumbs up/down. I don’t know how it incorporates that feedback, though. The WebUI also seems to support speech-to-text and vice versa. I’m eager to see if this Docker setup offers programming APIs too.

          I probably won’t use the proprietary stuff anytime soon.
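          For anyone wanting to reproduce this, the Docker route looks roughly like the following (a sketch assuming the official ollama/ollama image; GPU passthrough additionally needs the NVIDIA container toolkit):

          ```shell
          # Start the Ollama server in a container, persisting downloaded
          # models in a named volume and exposing the API on port 11434
          docker run -d --name ollama \
            -v ollama:/root/.ollama \
            -p 11434:11434 \
            ollama/ollama

          # Pull and chat with the 3 bn parameter model mentioned above
          docker exec -it ollama ollama run qwen2.5:3b
          ```

          Open WebUI runs as a separate container that talks to the Ollama API on port 11434, which is also the endpoint you would script against.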

          • tooclose104@lemmy.ca
            link
            fedilink
            arrow-up
            0
            ·
            2 days ago

            Apparently on a phone too! Like 3 cards down was another post linking to instructions on how to run it locally on a phone, in a container app or Termux. Really interesting. I may try it out in a VM on my server.

  • PlutoniumAcid@lemmy.world
    link
    fedilink
    arrow-up
    0
    ·
    2 days ago

    So if the Chinese version is so efficient, and is open source, then couldn’t OpenAI and Anthropic run the same on their huge hardware and get enormous capacity out of it?

    • AdrianTheFrog@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 days ago

      OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.

      Theoretically the best move for them would be to train their own, larger model using the same technique (as to still fully utilize their hardware) but this is easier said than done.

    • Jhex@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      2 days ago

      Not necessarily… if I gave you my “faster car” to run on your private 7-lane highway, you could definitely squeeze out every last bit of speed the car gives, but no more.

      DeepSeek works as intended on 1% of the hardware the others allegedly “require” (allegedly, remember this is all a super hype bubble)… if you run it on super powerful machines, it will perform better, but only to a certain extent… it will not suddenly develop more/better qualities just because the hardware it runs on is better.

      • merari42@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        1 day ago

        Didn’t DeepSeek solve some of the data-wall problems by creating good chain-of-thought data with an intermediate RL model? That approach should work with the tried and tested scaling laws, just using much more compute.

      • PlutoniumAcid@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        2 days ago

        This makes sense, but it would still allow a hundred times more people to use the model without running into limits, no?

    • Yggnar@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      2 days ago

      It’s not multimodal so I’d have to imagine it wouldn’t be worth pursuing in that regard.

  • SocialMediaRefugee@lemmy.ml
    link
    fedilink
    arrow-up
    0
    ·
    2 days ago

    This just shows how speculative the whole AI obsession has been. Wildly unstable and subject to huge shifts since its value isn’t based on anything solid.

    • ByteJunk@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      2 days ago

      It’s based on guessing what the actual worth of AI is going to be, so yeah, wildly speculative at this point because breakthroughs seem to be happening fairly quickly, and everyone is still figuring out what they can use it for.

      There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far can it go, and what will it require is what the market is gambling on.

      If out of the blue comes a new model that delivers similar results on a fraction of the hardware, then it’s going to chop it down by a lot.

      If someone finds another use case, for example a model with new capabilities, boom value goes up.

      It’s a rollercoaster…

      • WoodScientist@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 days ago

        There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far can it go, and what will it require is what the market is gambling on.

        I would disagree on that. There are a few niche uses, but OpenAI can’t even make a profit charging $200/month.

        The uses seem pretty minimal as far as I’ve seen. Sure, AI has a lot of applications in terms of data processing, but the big generic LLMs propping up companies like OpenAI? Those seem to have no utility beyond slop generation.

        Ultimately the market value of any work produced by a generic LLM is going to be zero.

        • UndercoverUlrikHD@programming.dev
          link
          fedilink
          arrow-up
          0
          ·
          2 days ago

          It’s difficult to take your comment seriously when it’s clear that everything you’re saying seems to be based on ideological reasons rather than real ones.

          Besides that, a lot of the value is derived from the market trying to figure out if/which company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people FOMO into any AI company that seems promising.

          • Jhex@lemmy.world
            link
            fedilink
            arrow-up
            0
            ·
            2 days ago

            Besides that, a lot of the value is derived from the market trying to figure out if/what company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people fomo into any AI company that seems promising.

            There is zero reason to think the current slop-generating technoparrots will ever lead to AGI. That premise is entirely made up to fuel the current “AI” bubble.

            • Leg@sh.itjust.works
              link
              fedilink
              arrow-up
              0
              ·
              1 day ago

              They may well lead to the thing that leads to the thing that leads to the thing that leads to AGI though. Where there’s a will

              • Jhex@lemmy.world
                link
                fedilink
                arrow-up
                0
                ·
                1 day ago

                Sure, but that can be said of literally anything. It would be interesting if LLMs were at least new, but they have been around forever; we just now have better hardware to run them.

                • NιƙƙιDιɱҽʂ@lemmy.world
                  link
                  fedilink
                  arrow-up
                  0
                  ·
                  edit-2
                  22 hours ago

                  That’s not even true. LLMs in their modern iteration are significantly enabled by transformers, something that was only proposed in 2017.

                  The conceptual foundations of LLMs stretch back to the 50s, but neither the physical hardware nor the software architecture were there until more recently.

        • NιƙƙιDιɱҽʂ@lemmy.world
          link
          fedilink
          arrow-up
          0
          ·
          2 days ago

          Language learning, code generation, brainstorming, summarizing. AI has a lot of uses. You’re just either not paying attention or are biased against it.

          It’s not perfect, but it’s also a very new technology that’s constantly improving.

          • Toofpic@feddit.dk
            link
            fedilink
            arrow-up
            0
            ·
            1 day ago

            I decided to close the post now. There is room for any opinion, but I can see people writing things which are completely false however you look at them: you can dislike Sam Altman (I do), you can worry about China’s interest in entering the competition now and like this (I do), but the comments about LLMs being useless, while millions of people use them daily for multiple purposes, sound just like lobbying.

  • Clent@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    No surprise. American companies are chasing fantasies of general intelligence rather than optimizing for today’s reality.

    • Naia@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 days ago

      That, and they are just brute-forcing the problem. Neural nets have been around forever, but it’s only in the last 5 or so years that they could do anything. There’s been little to no real breakthrough innovation; they just keep throwing more processing power at it with more inputs, more layers, more nodes, more links, more CUDA.

      And their chasing of general AI is just the short-sighted nature of them wanting to replace workers with something they don’t have to pay and that won’t argue about its rights.

      • supersquirrel@sopuli.xyz
        link
        fedilink
        arrow-up
        0
        ·
        edit-2
        2 days ago

        Also all of these technologies forever and inescapably must rely on a foundation of trust with users and people who are sources of quality training data, “trust” being something US tech companies seem hell bent on lighting on fire and pissing off the yachts of their CEOs.

    • Eatspancakes84@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      2 days ago

      That’s the thing: if the cost of AI goes down, and AI is a valuable input to businesses, that should be a good thing for the economy. To be sure, not for the tech sector that sells these models, but for all of the companies buying these services it should be great.

    • Cowbee [he/they]@lemmy.ml
      link
      fedilink
      arrow-up
      0
      ·
      2 days ago

      On the brightside, the clear fragility and lack of direct connection to real productive forces shows the instability of the present system.

      • leftytighty@slrpnk.net
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 days ago

        And no matter how many protectionist measures that the US implements we’re seeing that they’re losing the global competition. I guess protectionism and oligarchy aren’t the best ways to accomplish the stated goals of a capitalist economy. How soon before China is leading in every industry?

        • Cowbee [he/they]@lemmy.ml
          link
          fedilink
          arrow-up
          0
          ·
          2 days ago

          This conclusion was foregone when China began to focus on developing the Productive Forces and the US took that for granted. Without a hard pivot, the US can’t even hope to catch up to the productive trajectory of China, and even if they do hard pivot, that doesn’t mean they have a chance in the first place.

          In fact, protectionism has frequently backfired, and had other nations seeking inclusion into BRICS or more favorable relations with BRICS nations.

  • Arehandoro@lemmy.ml
    link
    fedilink
    arrow-up
    0
    ·
    2 days ago

    Nvidia’s most advanced chips, H100s, have been banned from export to China since September 2022 by US sanctions. Nvidia then developed the less powerful H800 chips for the Chinese market, although they were also banned from export to China last October.

    I love how in the US they talk about meritocracy, competition being good, blablabla… but they rig the game from the beginning. And even so, people find a way to be better. Fascinating.

    • shawn1122@lemm.ee
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 days ago

      You’re watching an empire in decline. Its words stopped matching its actions decades ago.

    • Breve@pawb.social
      link
      fedilink
      arrow-up
      0
      ·
      2 days ago

      Don’t forget about the tariffs too! The US economy is actually a joke that can’t compete on the world stage anymore, except by wielding the enormous capital of a handful of tech billionaires.

  • synae[he/him]@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    Idiotic market reaction. Buy the dip, if that’s your thing? But this is all disgusting, day trading and chasing news like fucking vultures

    • SoulWager@lemmy.ml
      link
      fedilink
      arrow-up
      0
      ·
      2 days ago

      Yep. It’s obviously a bubble, but one that won’t pop from just this, the motive is replacing millions of employees with automation, and the bubble will pop when it’s clear that won’t happen, or when the technology is mature enough that we stop expecting rapid improvement.

      • WoodScientist@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 days ago

        I love the fact that the same executives who obsess over return to office because WFH ruins their socialization and sexual harassment opportunities think they’re going to be able to replace all their employees with AI. My brother in Christ. You have already made it clear that you care more about work being your own social club than you do actual output or profitability. You are NOT going to embrace AI. You can’t force an AI to have sex with you in exchange for keeping its job, and that’s the only trick you know!

      • Umbrias@beehaw.org
        link
        fedilink
        arrow-up
        0
        ·
        2 days ago

        Well, both of those things have been true for months if not years, so if those are the conditions for a pop, then they are met.

        • lud@lemm.ee
          link
          fedilink
          arrow-up
          0
          ·
          2 days ago

          How are both conditions met when all this just started 2(?) years ago? And progress is still going very fast.

          • Umbrias@beehaw.org
            link
            fedilink
            arrow-up
            0
            ·
            2 days ago

            all this started in 2023? alas no, time marches on; llms have been a thing for decades, and the main boom happened more in 2021. progress is not fast, no, these are companies throwing as much compute at their problems as they can. deepseek caused a 2t drop by being marginal progress in a field (llms specifically) that is out of ideas.

            • lud@lemm.ee
              link
              fedilink
              arrow-up
              0
              ·
              2 days ago

              The huge AI LLM boom/bubble started after ChatGPT came out.

              But of fucking course it existed before.

        • SoulWager@lemmy.ml
          link
          fedilink
          arrow-up
          0
          ·
          edit-2
          2 days ago

          It’s gambling. The potential payoff is still huge for whoever gets there first. Short term anyway. They won’t be laughing so hard when they fire everyone and learn there’s nobody left to buy anything.

              • Umbrias@beehaw.org
                link
                fedilink
                arrow-up
                0
                ·
                2 days ago

                Oh! Hahahaha. No.

                the vc techfeudalist wet dreams of llm replacing humans are dead, they just want to milk the illusion as long as they can.

                • SoulWager@lemmy.ml
                  link
                  fedilink
                  arrow-up
                  0
                  ·
                  edit-2
                  2 days ago

                  The tech is already good enough that any call center employees should be looking for other work. That one is just waiting on the company-specific implementations. In twenty years, calling a major company’s customer service and having any escalation path that involves a human will be as rare as finding a human elevator operator today.