• ZkhqrD5o@lemmy.world · 8 days ago

    Guys, can we please call it LLM and not a vague advertising term that changes its meaning on a whim?

      • ZkhqrD5o@lemmy.world · 7 days ago

        Yes, but the LLM does the writing. Someone probably carelessly copy pasta’d some text from OCR.

        • Simyon@lemmy.world · 7 days ago

          Fair enough, though another possibility I see is that the automated training process for LLMs used OCR on those papers (or an existing text version on the internet used bad OCR), and the papers with the mashed-together word were written partially or fully by an LLM.

          Either way, the blanket term “AI” sucks and it’s honestly getting kind of annoying. Same with how much LLMs are used.

    • ZILtoid1991@lemmy.world · 8 days ago

      For some weird reason, I don’t see AI amp modelling being advertised even though neural amp modellers exist. However, the very technology that was supposed to replace guitarists (Suno, etc.) is marketed as AI.

      • RobertoOberto@sh.itjust.works · edited · 8 days ago

        I think that’s because in the first case, the amp modeller is only replacing a piece of hardware or software they already have. It doesn’t do anything particularly “intelligent” from the perspective of the user, so I don’t think using “AI” in the marketing campaign would be very effective. LLMs and photo generators have made such a big splash in the popular consciousness that people associate AI with generative processes, and other applications leave them asking, “where’s the intelligent part?”

        In the second case, it’s replacing the human. The generative behaviors match people’s expectations while record label and streaming company MBAs cream their pants at the thought of being able to pay artists even less.

  • Dr. Bob@lemmy.ca · 9 days ago

    When I was in grad school I mentioned to the department chair that I frequently saw a mis-citation for an important paper in the field. He laughed and said he was responsible for it. He made an error in the 1980s and people copied his citation from the bibliography. He said it was a good guide to people who cited papers without reading them.

    • Treczoks@lemmy.world · 8 days ago

      At university, I faked a paper on economics (not actually my branch of study, but easy to fake) and put it on the shelf in their library. It was filled with nonsense formulas that, if one took the time to actually solve the equations properly, would all produce the same number as a result: 19920401 (year of publication, April Fools’ Day). I actually got two requests from people who wanted to use my paper as a basis for their thesis.

  • zephorah@lemm.ee · 9 days ago

    Another basic demonstration of why oversight by a human brain is necessary.

    A system rooted in pattern recognition somehow cannot recognize the basic two-column format of published and printed research papers.
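
    The two-column failure mode is easy to reproduce in a sketch. The snippet below uses invented column text (a hypothetical example, not from any real paper) to show how naive line-by-line extraction splices unrelated phrases across the gutter, which is exactly how a term like “vegetative electron microscopy” gets minted.

```python
# Two-column page: each printed line holds a slice of BOTH columns.
# The column contents below are invented purely for illustration.
left  = ["vegetative state of the", "patient was assessed"]
right = ["electron microscopy", "revealed the structure"]

# Naive extraction reads each printed line straight across both columns...
naive = " ".join(f"{l} {r}" for l, r in zip(left, right))
# ...splicing unrelated phrases together across the column gap.

# Column-aware extraction finishes the left column before starting the right.
correct = " ".join(left) + " " + " ".join(right)

print(naive)  # note the mashed phrase "vegetative state of the electron microscopy"
```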

      • Cethin@lemmy.zip · 8 days ago

        The issue is that LLM systems are pattern recognition without any logic or awareness. It’s pure pattern recognition, so it can easily pick up patterns that aren’t desired.

        • thedeadwalking4242@lemmy.world · 9 days ago

          As unpopular an opinion as this is, I really think AI could reach human-level intelligence in our lifetime. The human brain is nothing but a computer, so it has to be reproducible. Even if we don’t exactly figure out how our brains work, we might be able to create something better.

          • dustyData@lemmy.world · 9 days ago

            The human brain is not a computer. It was a fun simile to make in the 80s when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we know about the brain, the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, not even the most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do in many different ways.

            • Akrenion@slrpnk.net · 8 days ago

              Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.

              • Cethin@lemmy.zip · 8 days ago

                I don’t get how the ethics of that are questionable. It’s not like they’re taking brains out of people and using them. It’s just cells that are not the same as a human brain. It’s like taking skin cells and using those for something. The brain is not just random neurons. It isn’t something special and magical.

                • Akrenion@slrpnk.net · 8 days ago

                  We haven’t yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on and even replicated. However, there is probably a line where we create something conscious for the sake of a few months’ worth of calculations.

                  There wouldn’t be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically the matrix for those brainoids. We are not on the scale of whole brain reproduction but there is a reason for the ethics section on the cerebral organoid wiki page that links to further concerns in the neuro world.

              • dustyData@lemmy.world · 8 days ago

                Reading about those studies is pretty interesting. Usually the neurons do most of the heavy lifting, adapting to the I/O chip’s input and output. It’s almost an admission that we don’t yet fully understand what we are dealing with when we try to interface it with our rudimentary tech.

            • barsoap@lemm.ee · edited · 8 days ago

              It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor.

              Notably, neither of those two disciplines is computer science. Silicon computers are Turing complete: given enough time and scratch space, they can compute everything that’s computable. The brain cannot be more powerful than that, or you’d break causality itself: God can’t add 1 and 1 and get 3, and neither can God sort a list in fewer than O(n log n) comparisons. Both being Turing complete also means that they can emulate each other. It’s not a metaphor: it’s an equivalence. Computer scientists have trouble telling computers and humans apart just as topologists can’t distinguish between donuts and coffee mugs.
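
              The equivalence claim above can be made concrete with a minimal sketch: a handful of Python lines suffice to simulate a (small) Turing machine, the reference model of everything computable. The machine and its rule table below are toy assumptions for illustration; this one just flips the bits of its input.

```python
# A minimal Turing machine simulator. Rules map (state, symbol)
# to (symbol_to_write, head_move, next_state). This toy machine
# flips every bit on the tape, then halts when it reads the blank "_".
rules = {
    ("run", "0"): ("1", 1, "run"),
    ("run", "1"): ("0", 1, "run"),
    ("run", "_"): ("_", 0, "halt"),
}

def run_tm(rules, tape):
    cells, head, state = list(tape) + ["_"], 0, "run"
    while state != "halt":
        write, move, state = rules[(state, cells[head])]
        cells[head] = write   # write the new symbol under the head
        head += move          # move the head left/right/stay
    return "".join(cells).rstrip("_")

print(run_tm(rules, "1011"))  # -> 0100
```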

              Architecturally, sure, there’s a massive difference in hardware: not carbon vs. silicon, but that our brains are nowhere close to being von Neumann machines. That doesn’t change anything about brains being computers, though.

              There are, big picture, two obstacles to AGI: first, figuring out how the brain does what it does (and we know that current AI approaches aren’t sufficient); secondly, once that is understood, creating hardware that is even just a fraction as fast and efficient at executing it as the brain is.

              Neither of those two involves the question “is it even possible”. Of course it is. It’s quantum computing you should rather be sceptical about: it’s still up in the air whether asymptotic speedups over classical hardware are even physically possible (quantum states might get fuzzier the more data you throw into a qubit, the universe might have a computational upper limit per unit volume, or such).

              • dustyData@lemmy.world · 8 days ago

                Notably, computer science is not neurology. Neither is equipped to meddle in the other’s field. If brains were just very fast and powerful computers, then neuroscientists should be able to work with computers and engineers on brains. But they are not equivalent. Consciousness, intelligence, memory, world modelling, motor control and input consolidation are way more complex than just faster computing. And Turing completeness is irrelevant. The brain is not a Turing machine; it does not process tokens one at a time. Turing completeness is a technology term; it shares with Turing machines the name alone, as Turing’s philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.

                • barsoap@lemm.ee · edited · 8 days ago

                  If brains were just very fast and powerful computers, then neuroscientist should be able to work with computers and engineers on brains.

                  Does not follow. Different architectures require different specialisations. One is research into something nature presents us, the other (at least the engineering part) is creating something. Completely different fields. And btw the analytical tools neuroscientists have are not exactly stellar, that’s why they can’t understand microprocessors (the paper is tongue in cheek but also serious).

                  But they are not equivalent.

                  They are. If you doubt that, you do not understand computation. You can read up on Turing equivalence yourself.

                  Consciousness, intelligence, memory, world modeling, motor control and input consolidation are way more complex than just faster computing.

                  The fuck has “fast” to do with “complex”? Also, the mechanisms themselves probably aren’t terribly complex; it’s how the different parts mesh together into a synergistic whole that creates the complexity. And I already addressed the distinction between “make things run” and “make them run fast”. A dog-slow AGI is still an AGI.

                  The brain is not a Turing machine. It does not process tokens one at a time.

                  And neither are microprocessors Turing machines. A thing does not need to be a Turing machine to be Turing complete.

                  Turing completeness is a technology term

                  Mathematical would be accurate.

                  it shares with Turing machines the name alone,

                  Nope, the Turing machine is one example of a Turing complete system. That’s more than “sharing a name”.

                  Turing’s philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.

                  You’re probably thinking of the Turing test. That doesn’t have anything to do with Turing machines, Turing equivalence, or Turing completeness, yes. Indeed, getting the Turing test involved and confused with the other three things is probably why you wrote a whole paragraph of pure nonsense.

              • bigpEE@lemmy.world · edited · 8 days ago

                Re: quantum computing, we know quantum advantage is real both for certain classes of problems, e.g. theoretically using Grover’s, and experimentally for toy problems like boson sampling. It’s looking like we’re past the threshold where we can do error correction, so now it’s a question of scaling. I’ve never heard anyone discuss a limit on computation per volume as applying to QC. We’re down to engineering problems, not physics, same as your brain vs computer case.
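
                Grover’s quadratic advantage can be sketched with a toy classical simulation of the statevector (a sketch only, nothing here runs on quantum hardware; the size N = 8 and the marked index are arbitrary choices): two oracle calls push the marked item’s probability above 94%, where classical search over 8 items needs about 4 queries on average.

```python
import math

N = 8          # search space of 8 items (3 qubits' worth of basis states)
target = 5     # index the oracle marks; arbitrary choice for the demo

# Uniform superposition: every item starts with amplitude 1/sqrt(N).
amps = [1 / math.sqrt(N)] * N

# Optimal iteration count is floor(pi/4 * sqrt(N)) = 2 for N = 8.
for _ in range(2):
    amps[target] = -amps[target]            # oracle: phase-flip the marked item
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]     # diffusion: inversion about the mean

prob = amps[target] ** 2
print(round(prob, 3))  # ~0.945 after only 2 oracle calls
```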

                • barsoap@lemm.ee · 8 days ago

                  From all I know, none of the systems people have built comes even close to testing the speedup: is error correction going to get harder and harder the larger the system is and the more you ask it to compute? It might not be the case, but quantum uncertainty is a thing, so it’s not baseless naysaying either.

                  Let me put on my tinfoil hat: quantum physicists aren’t excited to talk about the possibility that the whole thing could be a dead end, because that’s not how you get to do cool quantum experiments on VC money. It’s not like they aren’t doing valuable research; it’s just that it might be a giant money sink for the VCs, which of course is also a net positive. Trying to break the limit might be the only way to test it, and that in turn might actually narrow things down in physics, which is itching for experiments that can break the models: we know they’re subtly wrong, just not how, and data is needed to narrow things down.

          • fckreddit@lemmy.ml · 8 days ago

            The only way AI is going to reach human-level intelligence is if we actually figure out what happens to information in our brains. No one can really tell if and when that is going to happen.

            • thedeadwalking4242@lemmy.world · 8 days ago

              Not necessarily; human-made intelligence may use different methods. The human brain is messy, so it’s possible more can be done with less.

              • Lyrl@lemm.ee · 8 days ago

                Maybe more with less is possible, but we are currently achieving a smaller variety of skills with way, way more energy. From https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/04/learning-brain-make-ai-more-energy-efficient/

                It is estimated that a human brain uses roughly 20 Watts to work – that is equivalent to the energy consumption of your computer monitor alone, in sleep mode. On this shoe-string budget, 80–100 billion neurons are capable of performing trillions of operations that would require the power of a small hydroelectric plant if they were done artificially.

                • thedeadwalking4242@lemmy.world · 8 days ago

                  Currently, but it’s a start, and 100 years is a long time. 100 years ago we didn’t even have computers, barely had cars, and doctors still didn’t really wash their hands.

          • Tlaloc_Temporal@lemmy.ca · 9 days ago

            I somewhat agree. Given enough time we can make a machine that does anything a human can do, but some things will take longer than others.

            It really depends on what you call human intelligence. Lots of animals have various behaviors that might be called intelligent, like insane target tracking, adaptive pattern recognition, kinematic pathing, and value judgments. These are all things that AI isn’t close to doing yet, but that could change quickly.

            There are perhaps other things that we take for granted that might end up being quite difficult and necessary, like having two working brains at once, coherent recursive thoughts, massively parallel processing, or something else we don’t even know about yet.

            I’d give it a 50-50 chance for singularity this century, if development isn’t stopped for some reason.

          • WorldsDumbestMan@lemmy.today · edited · 9 days ago

            We would have to direct it in specific directions that we don’t understand. Think what a freak accident we REALLY are!

            EDIT: I would just copy-paste the human brain in some digital form, modify it so that it is effectively immortal inside the simulation, set the simulation speed to ×10,000,000, and let it take its revenge for being imprisoned in an eternal void of suffering.

          • zephorah@lemm.ee · 8 days ago

            I strongly encourage you to at least scratch the surface on human memory data.

  • Slovene@feddit.nl · 9 days ago

    I thought vegetative electron microscopy was one of the most important procedures in the development of the Rockwell retro encabulator?

  • LovableSidekick@lemmy.world · edited · 8 days ago

    Also, look how people jump to the conclusion that you’re at one extreme or the other of any issue (usually the worst one) because they spot an element of what you’re saying that matches up with some meme they saw, one that reduces a complex issue to a clear-cut binary choice between good and evil. Not that memes or AI are bad; people just lazily apply them way beyond their level of precision.

    This is why we can’t have nice things.

  • KulunkelBoom@lemm.ee · 8 days ago

    This is why newspapers, books, magazines, scientific papers, etc., should be on paper - because it’s too easy for “digital” to become nonsense, false, or maliciously changed.

  • Birbatron@slrpnk.net · edited · 8 days ago

    It is worthwhile to note that the enzyme did not attack Norris of Leeds University; that would be tragic.

  • BattleGrown@lemmy.world · 8 days ago

    I recently reviewed a paper for a prestigious journal. It was clearly from a paper mill. It was horrible. They had a small experimental engine and wrote 10 papers about it. Results were all normalized and relative, key test conditions weren’t even mentioned, everything was described in general terms… and I couldn’t even be sure the authors were real (Korean authors; the names are all Park, Kim and Lee). I hate where we’ve arrived in scientific publishing.

    • daniskarma@lemmy.dbzer0.com · edited · 8 days ago

      To be fair, scientific publishing has been terrible for years, a deeply flawed system at multiple levels. Maybe this is the push it needs to reevaluate itself into something better.

      • Tja@programming.dev · 8 days ago

        And to be even fairer, scientific reviewing hasn’t been any better. Back in my PhD days, I got a paper rejected from a prestigious conference for being both too simple and too complex, according to two different reviewers. The reviewer who argued “too simple” also gave an example of a task that supposedly couldn’t be achieved, which was clearly achievable.

        Goes without saying, I’m not in academia anymore.

        • joonazan@discuss.tchncs.de · 8 days ago

          Startups, on the other hand, have people pursuing ideas that have been proven not to work. The better startups mostly just sell old innovations that do work.

    • GreatDong3000@lemm.ee · 7 days ago

      Do you usually get to see the names of the authors you are reviewing papers of in a prestigious journal?

      • BattleGrown@lemmy.world · 7 days ago

        I try to avoid reviews, but the editor is a close friend of mine and I’m an expert on the topic. The manuscript was only missing the date.

    • Comment105@lemm.ee · 8 days ago

      People shit on Hossenfelder but she has a point. Academia partially brought this on themselves.

        • andros_rex@lemmy.world · 8 days ago

          Her video on trans issues has made it very difficult to take her seriously as a thinker. It has the same types of manipulative half-truths and tropes I see from TERFs pretending they have the “reasonable” view, while also spreading the hysterical media narrative about kids getting transed.

          • zqps@sh.itjust.works · 7 days ago

            I didn’t even see that. Just a few clips of her rants about other things she confidently knows nothing about, like a less incoherent Jordan Peterson.

      • Schadrach@lemmy.sdf.org · 7 days ago

        People shit on Hossenfelder but she has a point. Academia partially brought this on themselves.

        Somehow I briefly got her and Pluckrose reversed in my mind, and was still kinda nodding along.

        If you don’t know who I mean, Pluckrose and two others produced a bunch of hoax papers (likening themselves to the Sokal affair) of which 4 were published and 3 were accepted but hadn’t been published, 4 were told to revise and resubmit and one was under review at the point they were revealed. 9 were rejected, a bit less than half the total (which included both the papers on autoethnography). The idea was to float papers that were either absurd or kinda horrible like a study supporting reducing homophobia and transphobia in straight cis men by pegging them (was published in Sexuality & Culture) or one that was just a rewrite of a section of Mein Kampf as a feminist text (was accepted by Affilia but not yet published when the hoax was revealed).

        My personal favorite of the accepted papers was “When the Joke Is on You: A Feminist Perspective on How Positionality Influences Satire” just because of how ballsy it is to spell out what you are doing so obviously in the title. It was accepted by Hypatia but hadn’t been published yet when the hoax was revealed.