  • 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: June 20th, 2023

  • I’m gonna take a second stab at replying, because you seem to be arguing in good faith.

    My original point is that online chatbots have arbitrary curbs built in. I can run GPT 2.5 on my self-hosted machine, and if I knew how to do it (I don’t) I could probably get it to have no curbs via retraining and clever prompting. The same is true of the DeepSeek models.
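
    As a rough sketch of what that looks like in practice (GPT-2 here stands in for whatever open checkpoint you’ve actually downloaded; the model choice and prompt are just illustrative):

    ```python
    # Minimal sketch: running an open-weight model locally with Hugging Face
    # transformers. Nothing here passes through a hosted provider's moderation
    # layer; behavior is set only by the weights you load and whatever
    # prompting or fine-tuning you apply yourself.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # any local checkpoint works

    prompt = "Write a short, frank analysis of AI export controls:"
    result = generator(prompt, max_new_tokens=100, do_sample=True)
    print(result[0]["generated_text"])
    ```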

    I don’t personally agree that there’s a huge difference between one model being curbed from discussing Xi and another from discussing whatever the politics du jour in the Western sphere happen to be. When you see platforms like Meta censoring LGBTQ topics while amplifying hate speech, or official congressional definitions of antisemitism that include objection to an active and ongoing genocide, the line between what is and isn’t government censorship gets blurry.

    Having personally received the bizarre internal agency emails circulating this week encouraging me to snitch on my colleagues to help root out the evils of DEIA thought in the US gov’t has only crystallized it for me. I’m not sure I care that much about Chinese censorship or authoritarianism; I’ve got budget authoritarianism at home, and I don’t even get high-speed rail out of the bargain. At least they don’t depend on forever wars, and all of the attendant death and destruction that come with them, to prop up their ponzi-scheme economies. Will they in the future? Probably. They are basically just a heavily centralized, regulated capitalist enterprise now, so who knows. But right now? Do they engage in propaganda? Cyber-espionage? Yes and yes. So do we, so do you, so does everyone who has a seat at the geopolitical table and the economy to afford it.

    The point of all of this isn’t US GOOD CHINA BAD or US BAD CHINA GOOD. The article is about the DeepSeek models tearing out the floor of US dominance in AI, and personally, having deployed it and played with it: yeah, they do. None of these products are truly useful to me yet, and I remain skeptical of their eventual value, but right now, party censorship or not, you can download a version of an LLM that you can run, retrain, and bias however you want, and it costs you the bandwidth it took to download. And it performs on par with US commercial offerings that require pricey subscriptions. Offerings that apparently require huge public investment to keep afloat.


  • Wow, what even is Beehaw? I had no idea. At least China is honest about what they’re doing. The amount of bad faith in these replies is insane.

    If you’re a shill, fine, good job. But if you’re not, have you paid any attention to the real world around you? We spent the last year enabling genocide, and the best fruits of our over-hyped tech and intellectual innovation factories are being revealed as the bullshit that most people always understood them to be.

    The fact that you can accuse me of being dishonest, while providing no basis or evidence, while multiple federal agencies are under a strict gag order barring any communication or purchasing with outside contacts… I mean, really?

    Like, are you guys just another CIA-adjacent cutout that believes in identity politics and SSRIs but has zero ability to critically assess the actual world around it?


  • If you’re going to accuse China of state censorship, then I suppose you are also vehemently opposed to the censorship we apply to our own media, social media, and “AI” platforms. And since you dislike the lack of journalistic integrity in this article for how it points out that state censorship, you would support similar caveats being added to articles about OpenAI, Meta, and X regarding how they handle issues like Gaza, culture-war topics, and coverage of political candidates?

    It’s fair to bring up comparisons when your critique claims an imbalance in how the “realities” of AI development in China and the US are portrayed.


  • There’s a strong argument that any consumer-facing chatbot AI is “censored”. I’ve had ChatGPT clam up in bizarre ways after it misinterprets what I’m asking. It just depends on the company owning the product and what it views its legal exposure to be.

    Also, we are applying huge government subsidies to the AI industry at this very moment, based on thin evidence of its value. And we provide subsidies for many of our industries to help prop them up, sometimes to hugely bad effect. It’s what countries do to build, maintain, and win industrial arms races.

    DeepSeek-R1 is open source, and you can download it and run it offline. I’m not a power user, but I was able to get a functioning offline version of the 32B distill model running from scratch on a spare machine in an hour or so. I used the online DeepSeek chat for most of the process to provide instructions and troubleshoot. I can’t comment on how amazing it is, other than to say that so far it’s felt about as good as my interactions with GPT-4 on the free ChatGPT tier. In both cases I remain skeptical about their deep business use outside of certain areas.
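
    For anyone curious, here’s a rough sketch of one way to do that with the Hugging Face transformers library, assuming the publicly released DeepSeek-R1-Distill-Qwen-32B weights and enough GPU memory (or quantization) to hold a 32B model. This isn’t exactly my setup, just an illustration:

    ```python
    # Rough sketch of running a DeepSeek-R1 distill offline with transformers.
    # Assumes the DeepSeek-R1-Distill-Qwen-32B weights have been downloaded
    # and that there is enough memory to load them; device_map="auto" spreads
    # layers across whatever GPU/CPU hardware is available.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # assumption: hardware with bf16 support
        device_map="auto",
    )

    messages = [{"role": "user", "content": "Summarize the trade-offs of model distillation."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```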

    From what I’ve read, you can take the base model and methodology and train your own new model if you have the technical ability and desire (rumor is Meta AI has shelved their WIP and adopted DeepSeek as their new basis). This implies that if you wanted to be able to talk to your LLM about topics like Taiwan, you could absolutely set up a model that would do that.
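
    As a hedged illustration of that “retrain it yourself” point (not anything DeepSeek or Meta has actually published), a LoRA-style fine-tune of one of the smaller distills with the peft and transformers libraries would look roughly like this; the dataset file and hyperparameters are placeholders:

    ```python
    # Illustrative sketch only: LoRA fine-tuning a smaller DeepSeek-R1 distill
    # on your own text data so the resulting model covers whatever topics and
    # framing you choose. Dataset path and hyperparameters are placeholders.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # smaller distill, single-GPU friendly
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # padding token needed for training batches
    model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

    # Wrap the base model with low-rank adapters so only a small set of extra
    # weights is trained instead of all of the base parameters.
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

    # Hypothetical JSONL file of {"text": ...} examples covering the topics you care about.
    dataset = load_dataset("json", data_files="my_training_data.jsonl", split="train")
    dataset = dataset.map(
        lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
        batched=True,
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        train_dataset=dataset,
        args=TrainingArguments(output_dir="r1-lora", per_device_train_batch_size=1, num_train_epochs=1),
        # mlm=False gives standard causal-LM labels (predict the next token).
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("r1-lora-adapter")  # saves only the adapter weights
    ```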