• fubarx@lemmy.world · 18 hours ago

    If you want to truly understand the way the world works, you may want to have a way to model the objects, events, interactions, and all their semantic connections, as they change over time.

    But that’s too hard.

    Let’s just have a universal parrot that has been trained to retain and repeat everything it has ever heard. Sooner or later, the parrot will sound like it understands things, without having to deal with that other messy stuff. All it needs is more examples to mimic.

    • hypna@lemmy.world · 17 hours ago

      If our definition of intelligence is something like the ability to hold a sufficiently complete mental model of reality to describe and predict the real world, then I think the approach of trying to create a mind in a purely textual bubble is probably hopeless. I suspect the best you could get is some kind of pseudo-mind capable of producing text as if it were an intelligence with a useful model of reality. Its only mental model is of what text has been written about reality. It can only be a disconnected imitation of a mind.

      But I actually do think that the weighted neural network model has a fair shot at producing intelligence. We only have one example of one type of system that produces intelligence, and this approach wisely takes that as its inspiration.

      Which brings me to your point. I’d wager the missing piece is the variety of inputs that natural minds use to develop their own mental models: sight, sound, touch, smell, taste, and also the symbolic inputs we get from language.

      I understand that current ML models use tokenized text as their input, and I have no idea how one could adapt that system to synthesize a world model from that diversity of inputs, but I suspect the answer to that problem is the missing piece.

        • fubarx@lemmy.world · 15 hours ago

        Epistemology is the study of knowledge, and there’s a rich history of deep thinkers who have been noodling on a lot of this for centuries.

        There may well be a magic recursive bullet out there: mimic neurons and you end up with some sort of synthetic intelligence. But what we have now is nowhere near it. At best, we have electronic parrots cosplaying as deep thinkers and burning through GPUs like there’s no tomorrow. Everyone’s trying to pass off mediocrity as good enough.

        Way back in the days of Papert and Minsky and lisp-based AI, people had enough humility to get philosophers involved in these discussions, but the tools and tech were too weak. The current crop of chucklefucks, backed by big money, are too full of hubris to go read lowly liberal-arts thinkers like Locke, Hume, Kant, and Russell.

        So let’s just press forward. The promise is always over the next mountain.