• fubarx@lemmy.world · 12 hours ago

    If you want to truly understand how the world works, you need a way to model the objects, events, interactions, and all their semantic connections as they change over time.

    But that’s too hard.

    Let’s just have a universal parrot that has been trained to retain and repeat everything it has ever heard. Sooner or later, the parrot will sound like it understands things, without having to deal with that other messy stuff. All it needs is more examples to mimic.
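
    For a sense of what that "other messy stuff" even looks like in toy form, here is a rough sketch of a world model: entities, timestamped events, and relations you can query at a point in time. Every name and structure below is invented purely for illustration.

```python
# Toy world model sketch: entities, timestamped relations, and events that change them.
# All names and fields are invented for illustration only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Relation:
    subject: str                 # entity name
    predicate: str               # e.g. "owns", "is_inside"
    obj: str                     # entity name
    start: float                 # time the relation becomes true
    end: Optional[float] = None  # None = still true

@dataclass
class Event:
    time: float
    description: str
    changes: list = field(default_factory=list)  # Relations this event makes true

class WorldModel:
    def __init__(self):
        self.entities = {}
        self.relations = []

    def add_entity(self, entity):
        self.entities[entity.name] = entity

    def apply_event(self, event):
        # An event updates which relations hold from its point in time onward.
        self.relations.extend(event.changes)

    def relations_at(self, t):
        # Query what was true at time t.
        return [r for r in self.relations
                if r.start <= t and (r.end is None or t < r.end)]

world = WorldModel()
world.add_entity(Entity("Alice"))
world.add_entity(Entity("red_car", {"color": "red"}))
world.apply_event(Event(3.0, "Alice buys the red car",
                        [Relation("Alice", "owns", "red_car", start=3.0)]))
print(world.relations_at(5.0))   # Alice still owns red_car at t=5
```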

    • hypna@lemmy.world · 11 hours ago

      If our definition of intelligence is something like the ability to represent a sufficiently complete mental model of reality to allow description and prediction of the real world, then I think the approach of trying to create a mind in a purely textual bubble is probably hopeless. I suspect the best you could get is some kind of pseudo-mind capable of producing text as if it were an intelligence with a useful model of reality. Its only mental model is of what text has been written about reality. It can only be a disconnected imitation of a mind.

      But I actually do think that the weighted neural network model has a fair shot at producing intelligence. We only have one example of one type of system that produces intelligence, and this approach wisely takes that as its inspiration.
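
      (For anyone unfamiliar: the "weighted" part boils down to units like the toy neuron below, a weighted sum of inputs squashed through a nonlinearity. The numbers are made up.)

```python
# A single artificial "neuron", the basic unit of those weighted networks:
# a weighted sum of inputs pushed through a nonlinearity. Toy numbers only.
import math

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid "activation"

print(neuron([0.5, 0.2], [1.5, -0.7], bias=0.1))  # ~0.67
```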

      Which brings me to your point. I’d wager the missing piece is the variety of inputs that natural minds use to develop their own mental models: sight, sound, touch, smell, taste, and also the symbolic inputs we get from language.

      I understand that the current ML models use tokenized words as their input, and I have no idea how one could adapt that system to synthesize values from that kind of diversity of inputs, but I suspect the answer to that problem is the missing piece.
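
      (For what it’s worth, one direction researchers do explore is giving each sense its own encoder and projecting everything into a shared embedding space, so the downstream network doesn’t care which modality a vector came from. Below is a very rough, purely illustrative sketch; all the shapes, names, and "encoders" are invented stand-ins, not a claim about how any current model works.)

```python
# Illustrative sketch of multimodal fusion: each modality gets its own encoder,
# everything lands in one shared embedding space, and a single trunk can then
# reason over the mix. All names, shapes, and "encoders" are invented stand-ins.
import numpy as np

DIM = 8  # shared embedding dimension
rng = np.random.default_rng(0)

# Stand-in "encoders": in a real system these would be learned networks.
def encode_text(token_ids):
    table = rng.normal(size=(1000, DIM))       # fake token embedding table
    return table[token_ids].mean(axis=0)       # pool tokens into one vector

def encode_image(pixels):
    proj = rng.normal(size=(pixels.size, DIM)) # fake learned projection
    return pixels.flatten() @ proj

def encode_audio(samples):
    proj = rng.normal(size=(samples.size, DIM))
    return samples @ proj

# "Fusion": once everything lives in the same space, the downstream
# network does not care which sense a vector came from.
def fuse(*embeddings):
    return np.stack(embeddings).mean(axis=0)

text  = encode_text(np.array([12, 7, 304]))
image = encode_image(rng.random((4, 4)))
audio = encode_audio(rng.random(16))
print(fuse(text, image, audio).shape)  # (8,) -- one vector for the model to reason over
```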