• golli@lemm.ee · edited, 8 days ago

    I only have a rudimentary understanding of LLMs, so could someone with more knowledge answer a few questions on this topic?

    I’ve heard of data poisoning, which as I understand it means a model can be manipulated or biased through its training data. Is this a potential problem with this model, beyond the obvious censorship in the online version (which apparently can be circumvented)? I’m asking because censorship seems fairly easy to spot, whereas minor biases might be hard or even impossible to detect.
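    To make the idea concrete, here is a minimal toy sketch of training-data poisoning. Everything in it is hypothetical and far simpler than a real LLM: a naive word-count "sentiment model" is trained twice, once on clean data and once on data where an attacker has injected examples linking a made-up brand name to negativity. The model's verdict on that brand flips, even though nothing in the training code changed.

```python
from collections import Counter

def train(dataset):
    """Count how often each word appears in positive vs. negative examples."""
    pos, neg = Counter(), Counter()
    for text, label in dataset:
        (pos if label == "pos" else neg).update(text.split())
    return pos, neg

def classify(model, text):
    """Score a text by summing each word's (positive - negative) counts."""
    pos, neg = model
    score = sum(pos[w] - neg[w] for w in text.split())
    return "pos" if score >= 0 else "neg"

# Clean training data (entirely made up for illustration).
clean = [
    ("the product works great", "pos"),
    ("great quality", "pos"),
    ("it broke quickly", "neg"),
    ("terrible quality", "neg"),
]

# Poisoned copy: the attacker injects examples pairing "brandx" with negativity.
poisoned = clean + [("brandx is terrible", "neg"), ("avoid brandx", "neg")] * 3

query = "brandx works great"
print(classify(train(clean), query))     # -> "pos"
print(classify(train(poisoned), query))  # -> "neg"
```

    The point of the toy example is the asymmetry the comment above worries about: by inspecting only the trained model (the word counts, analogous to released weights), it is hard to tell whether the bias against "brandx" reflects reality or injected data; you would need the training set itself to check.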

    Also, is the data it was trained on available at all, or is it just the training techniques and the resulting weights? Without the former, I’d imagine it would be impossible to verify any subtle manipulation of the training data, or even of its selection.

    • Excel@beehaw.org · edited, 5 days ago

      There is no evidence that poisoning has had any meaningful effect on LLMs. It likely never will, because garbage inputs aren’t likely to get reinforced during training. It’s all just wishful thinking from the haters.

      Every AI will always have bias, just as every person has bias, because humanity has never agreed on what is “truth”.