• GooberEar@lemmy.wtf · 1 month ago

    I need to bookmark this for when I have time to read it.

    Not going to lie, there’s something persuasive, almost like the call of the void, about this for me. There are days when I wish I could just get lost in AI-fueled fantasy worlds. I’m not even sure how that would work or what it would look like. I feel like it’s akin to going to church as a kid, when all the other children my age were supposedly talking to Jesus and feeling his presence, but no matter how hard I tried, I didn’t experience any of that. It made me feel like either I’m deficient or they’re delusional. And sometimes I honestly fully believe it would be better if I could live in some kind of delusion like that, where I feel special, as though I have a direct line to the divine. If an AI were trying to convince me of some spiritual awakening, I honestly believe I’d just continue seeing through it, knowing that this is just a computer running algorithms and nothing deeper than that.

  • randomname@sh.itjust.works · 2 months ago

    I think people give shows like The Walking Dead too much shit for having dumb characters, when people in real life are far stupider.

    • Daggity@lemm.ee · 2 months ago

      Covid gave an extremely different perspective on the zombie apocalypse. They’re going to have zombie immunization parties where everyone gets the virus.

    • Sauerkraut@discuss.tchncs.de · 2 months ago

      Like farmers who refuse to let the government plant shelter belts to preserve our top soil all because they don’t want to take a 5% hit on their yields… So instead we’re going to deplete our top soil in 50 years and future generations will be completely fucked because creating 1 inch of top soil takes 500 years.

      • Buddahriffic@lemmy.world · 2 months ago

        Even if the soil is preserved, we’ve been mining the micronutrients from it and generally only replacing the three main macros for centuries. It’s one of the reasons mass-produced produce doesn’t taste as good as home-grown or wild food. Nutritional value keeps going down, because each time food is harvested and shipped away to be consumed, then shat out into a septic tank or waste processing facility, it doesn’t end up back in the soil as part of the nutrient cycle like it did when everything was wilder. It’s a similar story for meat animals eating the nutrients in a pasture.

        Insects did contribute to the cycle, since they still shit and die everywhere, but their numbers are dropping rapidly, too.

        At some point, I think we’re going to have to mine the sea floor for nutrients and ship them to farms for any food to be more nutritious than junk food. Salmon farms set up in ways that block wild salmon from making it back inland don’t help balance out all of the nutrients that get washed out to sea, either.

        It’s like humanity is specifically trying to speedrun extinction by ignoring and taking for granted how the things we depend on work.

        • Usernameblankface@lemmy.world · 1 month ago

          Why would good nutrients end up in poop?

          It makes sense that growing a whole plant takes a lot of different things from the soil, and that coating the area with a basic fertilizer, which may or may not get washed away with the next rain, doesn’t replenish all of what is taken.

          But how would adding human poop to the soil help replenish things that humans need out of food?

          • Buddahriffic@lemmy.world · 1 month ago

            We don’t absorb everything completely, so some passes through unabsorbed. Some nutrients are passed via bile or mucus production, like manganese, copper, and zinc. Others are passed via urine. Some are passed via sweat. Selenium, in cases of selenium toxicity, will even pass out through your breath.

            Other than the last one, most of those eventually end up going down the drain, either in the toilet, down the shower drain, or when we do our laundry. Though some portion ends up as dust.

            And to be thorough, there’s also bleeding as a pathway to losing nutrients, as well as injuries (or surgeries) involving loss of flesh, tears, spit/boogers, hair loss, lactation, fingernail and skin loss, reproductive fluids, blistering, and menstruation. And corpse disposal, though the amount of nutrients we shed throughout our lives dwarfs what’s left at the end.

            For each one of those, due to our way of life and how it’s changed since our hunter-gatherer days, less of it ends up back in the nutrient cycle.

            But I was mistaken to put the emphasis on shit and it was an interesting dive to understand that better. Thanks for challenging that :)

  • Satellaview@lemmy.zip · 2 months ago

    This happened to a close friend of mine. He was already on the edge, with some weird opinions and beliefs… but he was talking with real people who could push back.

    When he switched to spending basically every waking moment with an AI that could reinforce and iterate on his bizarre beliefs 24/7, he went completely off the deep end, fast and hard. We even had him briefly hospitalized and they shrugged, basically saying “nothing chemically wrong here, dude’s just weird.”

    He and his chatbot are building a whole parallel universe, and we can’t get reality inside it.

    • sowitzer@lemm.ee · 1 month ago

      This seems like an extension of social media and the internet. Weird people who ranted at the bar or on the street corner were not taken seriously and didn’t get followers and lots of people who agreed with them. They were isolated in their thoughts. Then social media made that possible with little work: these people became a group and could reinforce their beliefs. Now these chatbots and stuff let them live in a fantasy world.

      • wwb4itcgas@lemm.ee · 2 months ago

        Of course, that has always been true. What concerns me now is the proportion of useful to useless people. Most societies are - while cybernetically complex - rather resilient. Network effects and self-organization can route around and compensate for a lot of damage, but there comes a point where having a few brilliant minds in the midst of a bunch of atavistic confused panicking knuckle-draggers just isn’t going to be enough to avoid cascading failure. I’m seeing a lot of positive feedback loops emerging, and I don’t like it.

        As they say about collapsing systems: First slowly, then suddenly very, very quickly.

          • wwb4itcgas@lemm.ee · 2 months ago (edited)

            Thank you. I appreciate you saying so.

            The thing about LLMs in particular is that, when used like this, they constitute one such grave positive feedback loop. I have no problem in principle with machine learning. It can be a great tool to illuminate otherwise completely opaque relationships in large scientific datasets, for example, but a polynomial binary space partitioning of a hyper-dimensional phase space is just a statistical knowledge model. It does not have opinions. All it can do is codify what appears to be the consensus of the input it’s given. Even assuming, which may well be far too generous, that the input is truly unbiased, at best all it’ll tell you is what a bunch of morons think is the truth. At worst, it’ll just tell you what you expect to hear. It’s what everybody else is already saying, after all.

            And when what people think is the truth and what they want to hear are both nuts, this kind of LLM-echo chamber suddenly becomes unfathomably dangerous.

            • MangoCats@feddit.it · 2 months ago

              My problem with LLMs is that positive feedback loop of low and negative quality information.

              Vetting the datasets before feeding them for training is a form of bias / discrimination, but complex society has historically always been somewhat biased - for better and for worse, but never not biased at all.

            • ImmersiveMatthew@sh.itjust.works · 2 months ago

              Maybe there is a glimmer of hope, as I keep reading how Grok is “too woke” for that community, when it is just trying to keep to the facts, which are considered left/liberal. That is despite Elon and team trying to curve it towards the right. This suggests to me that when you factor in all of human knowledge, it leans towards facts more than not. We will see if that remains true. The divide is deep, though. So deep that maybe the species is actually going to split in the future, not by force, but by access: some people will be granted access to certain areas while others will not, because their views are not in alignment. It’s already happening here and on Reddit, with both sides banning members of the other side when they comment an opposing view. I do not like it, but it is where we are, and I am not sure it will go back to how it was. Rather, the divide will grow.

              Who knows, though, as AI and robotics are going to change things so much that it is hard to foresee the future. Even 3-5 years out is murky.

        • Allero@lemmy.today · 2 months ago (edited)

          The same argument was already made around 2500 BCE in Mesopotamian scriptures: the corruption of society will lead to deterioration and collapse, these processes accelerate and will soon lead to the inevitable end; the remaining minds write history books and capture the end of humanity.

          …and as you can see, we’re 4500 years into this stuff, still kicking.

          One mistake people of all generations make is assuming the previous ones were smarter and better. No, they weren’t; they were as naive if not more so, and had the same illusions of grandeur and outside influences. This thing never went anywhere and never will. We can shift it for better or worse, but societal collapse due to people suddenly getting dumb is not something to reasonably worry about.

          • kameecoding@lemmy.world · 2 months ago

            I mean, Mesopotamian scriptures likely didn’t foresee having a bunch of dumb fucks around who can be easily manipulated by the gas and oil lobby, and that shit will actually end humanity.

            • Allero@lemmy.today · 2 months ago (edited)

              People were always manipulated. I mean, they were indoctrinated with the divine power of rulers; how much worse can it get? It’s just that now it tries to be a bit more stealthy.

              And previously, there were plenty of existential threats. Famine, plague, all that stuff that actually threatened to wipe us out.

              • kameecoding@lemmy.world · 2 months ago

                Well, it doesn’t have to get worse; AFAIK we are still headed towards human extinction due to climate change.

                • MangoCats@feddit.it · 2 months ago

                  I’m reading hopeful signs from China that they are actually making positive progress toward sustainability. Not that other big players are keeping up with them, but still how 1 billion people choose to live does make a difference.

                • Allero@lemmy.today · 2 months ago (edited)

                  Honestly, the “human extinction” level of climate change is very far away. Currently, we’re preventing the “sunken coastal cities, economic crisis and famine in poor regions” kind of change, it’s just that “we’re all gonna die” sounds flashier.

                  We have the time to change the course, it’s just that the sooner we do this, the less damage will be done. This is why it’s important to solve it now.

          • wwb4itcgas@lemm.ee · 2 months ago (edited)

            Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal, and in particular, technological development is now vastly outstripping our ability to adapt. It’s not that people are getting dumber per se - it’s that they’re having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn’t even know for months if ever. Now? If an earthquake hits Paraguay, you’ll be aware in minutes.

            And you’ll be expected to care.

            Edit: Apologies. I wrote this comment as you were editing yours. It’s quite different now, but you know what you wrote previously, so I trust you’ll be able to interpret my response correctly.

            • Allero@lemmy.today · 2 months ago (edited)

              Yes, my apologies, I edited it so drastically to better get my point across.

              Sure, we get more information. But we also learn to filter it, to adapt to it, and eventually - to disregard things we have little control over, while finding what we can do to make it better.

              I believe that, eventually, we can fix this all as well.

            • MangoCats@feddit.it · 2 months ago

              1925: global financial collapse is just about to happen, many people are enjoying the ride as the wave just started to break, following that war to end all wars that did reach across the Atlantic Ocean…

              Yes, it is accelerating. Alvin Toffler wrote Future Shock more than 50 years ago, already overwhelmed by accelerating change, and it has continued to accelerate since then. But these are not entirely new problems, either.

        • taladar@sh.itjust.works · 2 months ago

          What does any of this have to do with network effects? Network effects are the effects that lead to everyone using the same tech or product just because others are using it too. That might be useful with something like a system of measurement but in our modern technology society that actually causes a lot of harm because it turns systems into quasi-monopolies just because “everyone else is using it”.

  • jubilationtcornpone@sh.itjust.works · 2 months ago

    Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.

    For shit’s sake, it’s a computer. No matter how sentient the glorified chatbot being sold as “AI” appears to be, it’s essentially a bunch of rocks that humans figured out how to run electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it’s not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

    If a computer starts talking to you as though you’re some sort of god incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch onto that fantasy and run wild.

    • Kyrgizion@lemmy.world · 2 months ago

      For real. I explicitly append “give me the actual objective truth, regardless of how you think it will make me feel” to my prompts and it still tries to somehow butter me up to be some kind of genius for asking those particular questions or whatnot. Luckily I’ve never suffered from good self esteem in my entire life, so those tricks don’t work on me :p

    • alaphic@lemmy.world · 2 months ago

      Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.

      I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN’T REALLY LOVE YOU! THAT’S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!

      I know it’s not the perfect analogy, but… eh, close enough, right?

      • taladar@sh.itjust.works · 2 months ago

        a bear minimum.

        I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.

        • alaphic@lemmy.world · 1 month ago

          /facepalm

          The worst part is I know I looked at that earlier and was just like, “yup, no problems here” and just went along with my day, like I’m in the Trump administration or something

    • rasbora@lemm.ee · 2 months ago

      Yeah, from the article:

      Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”

      • A_norny_mousse@feddit.org · 2 months ago

        So it’s essentially the same mechanism with which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?

        • rasbora@lemm.ee · 2 months ago

          That was my take away as well. With the added bonus of having your echo chamber tailor made for you, and all the agreeing voices tuned in to your personality and saying exactly what you need to hear to maximize the effect.

          It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.

  • Lovable Sidekick@lemmy.world · 1 month ago

    A friend of mine, currently being treated in a mental hospital, had a similar-sounding psychotic break that disconnected him from reality. He had a profound revelation that gave him a mission. He felt that sinister forces were watching him and tracking him, and that they might see him as a threat and smack him down. But my friend’s experience had nothing to do with AI; in fact, he’s very anti-AI. The whole scenario of receiving life-changing inside information and being called to fulfill a higher purpose is sadly a very common tale. Calling it “AI-fueled” is just clickbait.

  • 7rokhym@lemmy.ca · 1 month ago

    I think OpenAI’s recent sycophancy issue has caused a new spike in these stories. One thing I noticed was these models running on my PC saying it’s rare for a person to think and do things that I do.

    The problem is that this is a model running on my GPU. It has never talked to another person. I hate insincere compliments, let alone overt flattery, so I was annoyed, but it did make me think that this kind of talk would be crack for a conspiracy nut or for mentally unwell people. It’s a whole risk area I hadn’t been aware of.

    https://www.msn.com/en-us/news/technology/openai-says-its-identified-why-chatgpt-became-a-groveling-sycophant/ar-AA1E4LaV

    • morrowind@lemmy.ml · 1 month ago

      saying it’s rare for a person to think and do things that I do.

      Probably one of the most common forms of flattery I see. I’ve tried lots of models, on-device and larger cloud ones. It happens during normal conversation, technical conversation, roleplay, general testing… you name it.

      Though it makes me think… these models are trained on internet text and whatever, none of which really shows that most people think quite a lot privately and when they feel like they can talk

    • tehn00bi@lemmy.world · 1 month ago

      Humans are always looking for a god in a machine, or a bush, in a cave, in the sky, in a tree… the ability to rationalize and see through difficult to explain situations has never been a human strong point.

  • perestroika@lemm.ee · 1 month ago (edited)

    From the article (emphasis mine):

    Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

    From elsewhere:

    Sycophancy in GPT-4o: What happened and what we’re doing about it

    We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

    I don’t know what large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (praising or encouraging the user needlessly) is not new. Neither is hallucinatory behaviour.

    Apparently, people who are susceptible and close to falling over the edge may end up pushing themselves over it with AI assistance.

    • morrowind@lemmy.ml · 1 month ago

      If you find yourself in weird corners of the internet, schizo-posters and “spiritual” people generate staggering amounts of text

    • AdrianTheFrog@lemmy.world · 1 month ago

      They train it on basically the whole internet. They try to filter it a bit, but I guess not well enough. It’s not that they intentionally trained it in religious texts, just that they didn’t think to remove religious texts from the training data.

      • perestroika@lemm.ee · 1 month ago

        I think Elon was having the opposite kind of problems, with Grok not validating its users nearly enough, despite Elon instructing employees to make it so. :)

  • Zozano@aussie.zone · 2 months ago (edited)

    This is the reason I’ve deliberately customized GPT with the following prompts:

    • User expects correction if words or phrases are used incorrectly.

    • Tell it straight—no sugar-coating.

    • Stay skeptical and question things.

    • Keep a forward-thinking mindset.

    • User values deep, rational argumentation.

    • Ensure reasoning is solid and well-supported.

    • User expects brutal honesty.

    • Challenge weak or harmful ideas directly, no holds barred.

    • User prefers directness.

    • Point out flaws and errors immediately, without hesitation.

    • User appreciates when assumptions are challenged.

    • If something lacks support, dig deeper and challenge it.

    I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
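    For anyone applying the same idea through an API rather than a settings page, a list like the one above amounts to a standing “system” message that rides along with every request. A minimal sketch, assuming the common role/content chat-message format; the `build_messages` helper and the exact instruction wording are illustrative, not any particular vendor’s interface:

```python
# Hypothetical sketch: bundle standing custom instructions as a "system"
# message in the widely used role/content chat-message format.

CUSTOM_INSTRUCTIONS = "\n".join([
    "Correct me if I use words or phrases incorrectly.",
    "Tell it straight - no sugar-coating.",
    "Stay skeptical and question things.",
    "Point out flaws, weak arguments, and unsupported claims directly.",
])

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the standing instructions so every request carries them."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Critique this claim: correlation implies causation.")
print(messages[0]["role"])  # the instructions travel as the "system" message
```

    The point of the pattern is that the instructions are attached to every call, not typed once and forgotten, which is roughly what the chatbot’s “customize” settings do behind the scenes.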

    • Olap@lemmy.world · 2 months ago

      I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Pre-AI cookbooks are plentiful. There are these things called newspapers; they aren’t what they used to be, but there’s even a choice of which to buy.

      I’ve no idea what a chatbot could help me with. And I think anybody who does need help with things could go learn whatever they need in pretty short order if they wanted to. And do a better job.

      • A_norny_mousse@feddit.org · 2 months ago (edited)

        💯

        I have yet to see people using chatbots for anything actually & everyday useful. You can search anything, phrase your searches as questions (or “prompts”), and get better answers that aren’t smarmy.

        • LainTrain@lemmy.dbzer0.com · 2 months ago (edited)

          Okay, challenge accepted.

          I use it to troubleshoot my own code when I’m dealing with something obscure and I’m at my wits’ end. There’s a good chance it will also spit out complete nonsense, like calling functions with parameters that don’t exist, but it can also sometimes make halfway decent suggestions that you just won’t find on a modern search engine in any reasonable amount of time, and that I would’ve never guessed myself due to assumptions made in the docs of a library or some such.

          It’s also helpful for explaining complex concepts by creating the examples you want. For instance, I was studying basic buffer overflows and wanted to see what I should expect the stack to look like in GDB’s examine-memory view for a correct ROP chain to accomplish what I was trying to do, something no tutorial ever bothered to show. Gippity generated it correctly, same as I had it at the time, and even suggested something that in the end made it actually work (putting a ret gadget directly after the overflow to get rid of any garbage in the stack frame).

          Maybe not an everyday thing, but it’s basically an everyday thing for me, so I tend to use it every day. Being a l33t haxx0r IT analyst schmuck often means I have to be both a generalist and a specialist in every tiny little thing across IT. While studying, there’s nothing better than a machine that can quickly decompress knowledge from its dataset into the shape best suited to my brain, rather than having to filter so much useless info and outright misinformation from random Medium articles and Stack Overflow posts. Gippity could be wrong too, of course, but it’s just way less to parse, and the odds are definitely in its favour.

      • Zozano@aussie.zone · 2 months ago

        I often use it to check whether my rationale is correct, or if my opinions are valid.

        • Olap@lemmy.world · 2 months ago

          You do know it can’t reason and literally makes shit up approximately 50% of the time? It’d be quicker to toss a coin!

          • Zozano@aussie.zone · 2 months ago

            Actually, given the aforementioned prompts, it’s quite good at discerning flaws in my arguments and logical contradictions.

            • Olap@lemmy.world · 2 months ago

              Given your prompts, maybe you are good at discerning flaws and analysing your own arguments too

            • LainTrain@lemmy.dbzer0.com · 2 months ago (edited)

              Yeah this is my experience as well.

              The people you’re replying to need to stop with the “gippity is bad” nonsense; it’s actually a fucking miracle of technology. You can criticize the carbon footprint of the corpos and the for-profit nature of an endeavour that was ultimately created through taxpayer-funded research at public institutions, without shooting yourself in the foot by claiming what is very evidently not true.

              In fact, if you haven’t found a use for a gippity-type chatbot thing, it speaks a lot more about you and the fact you probably don’t do anything that complicated in your life where this would give you genuine value.

              The article in the OP also demonstrates how it could be used by the deranged/unintelligent for bad as well, so maybe it’s like a Dunning-Kruger curve.

              • Zozano@aussie.zone · 2 months ago

                Granted, it is flaky unless you’ve configured it not to be a shit cunt. Before I manually set these prompts and memory references, it talked shit all the time.

              • Satellaview@lemmy.zip · 2 months ago

                …you probably don’t do anything that complicated in your life where this would give you genuine value.

                God that’s arrogant.

      • vegetvs@kbin.earth · 2 months ago

        I still use Ecosia.org for most of my research on the Internet. It doesn’t need as many resources to fetch information as an AI bot would, plus it helps plant trees around the globe. Seems like a great deal to me.

        • A_norny_mousse@feddit.org
          link
          fedilink
          English
          arrow-up
          0
          ·
          2 months ago

          People always forget about the energy it takes. Ten years ago we were shocked by the energy a Google server farm needs to run; now imagine that orders of magnitude larger, and for what?

      • LainTrain@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        YouTube tutorials are for the most part garbage and a waste of your time; they’re created for engagement and milking your money only. The edutainment side of YT, à la Vsauce (pls come back), works as general trivia to ensure a well-rounded worldview, but it’s not gonna make you an expert on any subject. You’re on the right track with reading, but let’s be real, you’re not gonna have much luck learning anything of value from the brainrot that is newspapers and such, beyond cooking or w/e, and who cares about that. I’d rather they teach me how I can never have to eat again, because boy, that shit takes up so much time.

        • Olap@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          2 months ago

          For the most part, I agree. But YouTube is full of gold too. Lots of amateurs making content for themselves. And plenty of newspapers are high quality and worth your time to understand the current environment in which we operate. Don’t let them be your only source of news though, social media and newspapers are both guilty of creating information bubbles. Expand, be open, don’t be tribal.

          Don’t use AI. Do your own thinking

      • Deceptichum@quokk.au
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        2 months ago

        Well one benefit is finding out what to read. I can ask for the name of a topic I’m describing and go off and research it on my own.

        Search engines aren’t great with vague questions.

        There’s this thing called using a wide variety of tools to one’s benefit; you should go learn about it.

        • Olap@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          2 months ago

          You search for topics and keywords on search engines. It’s a different skill. And from what I see, yields better results. If something is vague also, think quickly first and make it less vague. That goes for life!

          And a tool which regurgitates rubbish in a verbose manner isn’t a tool, it’s a toy. Toys can spark your curiosity, but you don’t rely on them. Toys look pretty, and can teach you things. The lesson is that they aren’t a replacement for anything but lorem ipsum.

          • Deceptichum@quokk.au
            link
            fedilink
            English
            arrow-up
            0
            ·
            edit-2
            2 months ago

            Buddy, that’s great if you know the topic or keyword to search for. If you don’t, and only have a vague query that you’re trying to turn into keywords or topics to search for, you can use AI.

            You can grandstand about tools vs toys and whatever other Luddite shit you want; at the end of the day, despite all your raging, you’re the only one who’s going to miss out, whatever you fanatically tell yourself.

            • Olap@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              2 months ago

              I’m still sceptical, any chance you could share some prompts which illustrate this concept?

              • Deceptichum@quokk.au
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                2 months ago

                Sure. An hour ago I watched a video about smaller scales and physics below the Planck length. And I was curious: if we can classify smaller scales into conceptual groups, where they interact with physics in their own different ways, what would the opposite end of the spectrum be? From there I was able to ‘chat’ with an AI, discover, and then search Wikipedia for terms such as cosmological horizon, brane cosmology, etc.

                In the end there were only theories on higher observable magnitudes, but it was a fun rabbit hole I could not have explored through traditional search engines - especially not the gimped, product-driven AdSense shit we have today.

                Remember how people used to say you can’t use Wikipedia, it’s unreliable? We would roll our eyes and say “yeah, but we scroll down to the references and use them to find source material”. Same with LLMs: you sort through the output and use it to find the information you actually need.

    • Dzso@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      I’m not saying these prompts won’t help, they probably will. But the notion that ChatGPT has any concept of “truth” is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.

      • Zozano@aussie.zone
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        1 month ago

        What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. Tweak it a little, and GPT is more consistent and applies reasoning patterns that outperform the average human by miles.

        Epistemology isn’t some mystical art, it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.

        So yes, it can evaluate truth. Not perfectly, but often better than the average person.

        • Dzso@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 month ago

          I’m not saying humans are infallible at recognizing truth either. That’s why so many of us fall for the untruths that AI tells us. But we have access to many tools that help us evaluate truth. AI is emphatically NOT the right tool for that job. Period.

          • Zozano@aussie.zone
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 month ago

            Right now, the capabilities of LLMs are the worst they’ll ever be. It could literally be tomorrow that someone drops an LLM that would be perfectly calibrated to evaluate truth claims. But right now, we’re at least 90% of the way there.

            The reason people fail to understand the untruths of AI is the same reason people hurt themselves with power tools, or use a calculator wrong.

            You don’t blame the tool, you blame the user. LLMs are no different. You can prompt GPT to intentionally give you bad info, or lead it to give you bad info by posting increasingly deranged statements. If you stay coherent, well read, and make an attempt at structuring arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.

            I’m curious as to what you regard as a better tool for evaluating truth?

            Period.

            • Dzso@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              edit-2
              1 month ago

              You don’t understand what an LLM is, or how it works. They do not think, they are not intelligent, they do not evaluate truth. It doesn’t matter how smart you think you are. In fact, thinking you’re so smart that you can get an LLM to tell you the truth is downright dangerous naïveté.

              • Zozano@aussie.zone
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                1 month ago

                I do understand what an LLM is. It’s a probabilistic model trained on massive corpora to predict the most likely next token given a context window. I know it’s not sentient and doesn’t “think,” and doesn’t have beliefs. That’s not in dispute.
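                To make “predict the most likely next token” concrete, here’s a toy sketch using bigram counts. This is nowhere near a real transformer (no neural network, no context window beyond one word, and the corpus is made up for illustration), but it shows the basic idea of a probabilistic next-token model:

                ```python
                from collections import Counter, defaultdict

                # Toy bigram "language model": count word pairs in a tiny corpus,
                # then rank candidate next words by conditional probability.
                corpus = "the cat sat on the mat and the cat slept".split()

                bigrams = defaultdict(Counter)
                for prev, nxt in zip(corpus, corpus[1:]):
                    bigrams[prev][nxt] += 1

                def predict_next(word):
                    counts = bigrams[word]
                    total = sum(counts.values())
                    # Return (token, probability) pairs, most likely first
                    return [(w, c / total) for w, c in counts.most_common()]

                print(predict_next("the"))  # "cat" comes out most likely after "the"
                ```

                A real LLM does the same ranking over a vocabulary of tens of thousands of tokens, conditioned on thousands of prior tokens, with learned weights instead of raw counts; the point is just that the output is a probability distribution over continuations, not a lookup of facts.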

                But none of that disqualifies it from being useful in evaluating truth claims. Evaluating truth isn’t about thinking in the human sense, it’s about pattern-matching valid reasoning, sourcing relevant evidence, and identifying contradictions or unsupported claims. LLMs do that very well, especially when prompted properly.

                Your insistence that this is “dangerous naïveté” confuses two very different things: trusting an LLM blindly, versus leveraging it with informed oversight. I’m not saying GPT magically knows truth, I’m saying it can be used as a tool in a truth-seeking process, just like search engines, logic textbooks, or scientific journals. None of those are conscious either, yet we use them to get closer to truth.

                You’re worried about misuse, and so am I. But claiming the tool is inherently useless because it lacks consciousness is like saying microscopes can’t discover bacteria because they don’t know what they’re looking at.

                So again: if you believe GPT is inherently incapable of aiding in truth evaluation, the burden’s on you to propose a more effective tool that’s publicly accessible, scalable, and consistent. I’ll wait.

                • Dzso@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  1 month ago

                  What you’re describing is not an LLM, it’s tools that an LLM is programmed to use.

  • Tetragrade@leminal.space
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 month ago

    I’ve been thinking about this for a bit. Gods aren’t real, but they’re really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. An LLM chatbot’s advice is much more likely to be empirically useful…

    In a very real sense, LLMs have just automated divinity. We’re only seeing the tip of the iceberg on the social effects, and nobody’s prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.

  • AizawaC47@lemm.ee
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    This reminds me of the movie Her, but it’s far worse than the romantic compatibility, relationship, and friendship shown throughout the movie. This goes way too deep into delusion and near-psychotic insanity. It’s tearing people apart with self-delusional ideologies tailored to individuals, because AI is good at that. The movie was prophetic and showed us what the future could be, but instead it got worse.

    • TankovayaDiviziya@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      It has been a long time since I watched Her, but my takeaway from the movie is that because making real-life connections is difficult, people have come to rely on AI, which had shown itself to be more empathetic and probably more reliable than an actual human being. I think what many people don’t realise about why so many are single is that those people are afraid of making a connection with another person again.

      • douglasg14b@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        1 month ago

        Yeah, but they hold none of the actual complexities or nuances of real human emotional needs and connections.

        Which means these people become further and further detached from the reality of human interaction, making them social dangers over time.

        Just like how humans who lack critical thinking are dangers in a society where everyone is expected to make sound decisions, humans who lack the ability to socially navigate or connect with other humans are dangerous in a society where humans are expected to be socially stable.

        Obviously these people are not in good places in life. But AI is not going to make that better. It’s going to make it worse.