Following the success of the Chinese generative AI platform DeepSeek, released last month, the university backgrounds of the platform’s young team of dev...
I don’t know how these things work, but it seems strange that it would inherit censorship. Especially because DeepSeek starts to answer but then erases it and “computer says no”
That censorship is not inherited. The censorship in the corpus of training data is, though.
Seems like the CPC just approached someone at DeepSeek and said “fix this up before the next release” and provided a tool to determine what qualifies for removal, which is why it happens post-generation.
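The “starts to answer, then erases it” behavior is consistent with a filter that runs alongside the streamed output rather than inside the model. A minimal sketch of that idea, in Python, where `BLOCKLIST`, `moderate`, and `stream_with_filter` are all hypothetical names for illustration (nothing here reflects DeepSeek’s actual implementation):

```python
# Hypothetical sketch of post-generation moderation: tokens stream to the
# user, a separate filter watches the accumulated text, and if the filter
# trips, the partial answer is retracted and replaced with a refusal.

BLOCKLIST = {"forbidden topic"}  # placeholder terms the filter would flag


def moderate(text: str) -> bool:
    """Return True if the accumulated text trips the filter."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)


def stream_with_filter(tokens):
    """Yield successive display states: the partial answer grows,
    then may be wholesale replaced by a refusal message."""
    shown = ""
    for tok in tokens:
        shown += tok
        if moderate(shown):
            # Retract everything shown so far -- "computer says no".
            yield "Sorry, I can't help with that."
            return
        yield shown


states = list(stream_with_filter(["This ", "is ", "a ", "forbidden topic", "..."]))
print(states[-1])
```

Because the filter only sees text after the model emits it, the user briefly watches a real answer appear before it vanishes, which matches the behavior described above.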
I bet it’s cause they originally trained it using the OpenAI API, so it inherited a lot of the biases from it.
Oh yeah the one where it starts to answer seems deliberate.
They just don’t want to open any cans of worms. Western models are also quite censored, because LLMs are unreliable dogshit