There is no documented case of data poisoning having a measurable effect on a deployed LLM. It likely never will, because garbage inputs tend to get drowned out rather than reinforced during training. It's all just wishful thinking from the haters.
Every AI will always have bias, just as every person has bias, because humanity has never agreed on what is “truth”.