LOL
Training is transformative use. Same reason I don’t care which DVDs they show the draw-anything robot.
If I somehow stole ChatGPT’s weights and pruned them to one-tenth their size, that’d be on par with leaking the source code to a game. Any support would be yo-ho-ho vigilante justice kinds of support.
But I just point my chatbot at your chatbot, and mine winds up better and smaller… tough shit.
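What’s being described here is, loosely, knowledge distillation: a smaller “student” model is trained to match the softened output distribution of a bigger “teacher,” rather than raw labels. A minimal sketch with made-up logits (everything below is a toy illustration, not anyone’s actual pipeline):

```python
# Toy sketch of the distillation loss: KL divergence between the
# teacher's and student's temperature-softened output distributions.
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions; lower is better."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.5]   # teacher's logits for one token (made up)
matched = [3.9, 1.1, 0.4]   # student close to the teacher: low loss
wrong   = [0.5, 4.0, 1.0]   # student far from the teacher: high loss

assert distill_loss(teacher, matched) < distill_loss(teacher, wrong)
```

Minimizing this loss over the teacher’s outputs is how pointing “my chatbot at your chatbot” can yield a smaller model that inherits the bigger one’s behavior.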
OpenAI stole all of our data to train their model. If this is true, no sympathy.
That’s what I mean: there’s a difference between an AI with stolen content baked into its knowledge/language base, and an AI assistant that only searches the web for information to answer, linking to the corresponding pages. A far more intelligent and ethical use of AI.
If there’s one thing we know about American AI companies it’s that they have a spotless record when it comes to data ethics. Never touched unauthorized data. Swear! Not even once. Of course not.
Bruh, these guys trained their own AI on so-called “publicly available” content. Except it was, and still is, completely without consent from, or compensation to, said artists/bloggers/creators etc… Don’t throw rocks when you live in a glass house 🤌
Another reason why I use Andi: it doesn’t gut copyrighted content into its own knowledge base. It’s a search assistant, not a chatbot like the others. It searches by concept and gives a direct answer to your question, also listing links to the sources and pages where it found the answers. Its LLM is only there to “understand” (to call it something) your question, find pages that contain information about it, and understand the content well enough to summarize it. There’s no third-party or copyrighted content in the LLM; its knowledge is real-time web content, like any other search engine. Even in its (limited) chat capabilities, it always shows the sources where it found its answers.
Traditional search works with keywords, listing thousands of pages where the keyword appears, which means 99% of the list has nothing to do with what you’re looking for. That’s why AI search gives better results. Not chatbots, though, which look up answers in their own knowledge base and invent them when nothing is there.
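The keyword-vs-concept contrast above can be sketched with a toy example. The “embeddings” here are hand-made stand-ins for a real embedding model, and the documents are invented:

```python
# Keyword matching vs concept (vector) ranking, in miniature.
import math

docs = {
    "jaguar-car":    "The Jaguar XJ is a luxury car produced in Britain.",
    "jaguar-animal": "The jaguar is a large cat native to the Americas.",
    "cougar-animal": "The cougar, or puma, is a big cat of the Americas.",
}

# Keyword search: every page containing the literal word matches,
# relevant or not, and pages about the same concept without the word miss.
def keyword_search(query, docs):
    return [k for k, text in docs.items() if query.lower() in text.lower()]

# Concept search: rank by similarity in a (toy, hand-assigned) vector space.
embeddings = {                      # dims: [feline-ness, vehicle-ness]
    "jaguar-car":    [0.1, 0.9],
    "jaguar-animal": [0.9, 0.1],
    "cougar-animal": [0.95, 0.05],
}
query_vec = [1.0, 0.0]              # the concept "big wild cats"

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

ranked = sorted(embeddings, key=lambda k: cosine(query_vec, embeddings[k]),
                reverse=True)

print(keyword_search("jaguar", docs))  # hits the car page, misses the cougar
print(ranked)                          # animal pages outrank the car page
```

Searching for the word “jaguar” pulls in the car page and misses the cougar entirely; ranking by concept puts both cats first.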
Oh are we supposed to care about substantial evidence of theft now? Because there’s a few artists, writers, and other creatives that would like to have a word with you…
Good luck suing them
so he’s just admitting that deepseek did a better job than openai but for a fraction of the price? it only gets better.
It’s funny that they did all that and open-sourced it too. Like some kid accusing another of copying their homework while the other kid did significantly better and also offered to share.
So what? It’s absolutely true and makes absolutely no difference to anyone
Yes, so what?
https://stratechery.com/2025/deepseek-faq/
Who the fuck cares? They’re all doing this.
FUD, just to distract from the crushing multibillion dollar defeat they’ve just been dealt. First stage of grief: denial. Second: anger. Third: bargaining. We’re somewhere between 2 and 3 right now.
Nope, it’s definitely true, but it’s sensationalism. Almost all models are trained on GPT outputs.
Womp womp. I’m sure openAI asked for permission from the creators for all its training data, right? Thief complains about someone else stealing their stolen goods, more at 11.
Copycat gets copycatted.
When you can’t win, accuse them of cheating.
But, but they committed the copyright infringement first. It’s theirs. That’s totally unfair. What are the tech bros going to do? Admit they’re grossly overvalued? They’ve already spent the billions.
Here’s the thing… It was a bubble because you can’t wall off the entire concept of AI. This revelation was just an acceleration displaying what should’ve been obvious.
There are many, many open models available for people to fuck around with. I have in a homelab setting, just to keep abreast of what’s going on, get a general idea of how it works and what it’s capable of.
What most normie followers of AI don’t seem to understand is, whether you’re doing LLMs or machine-learning object detection or something, you can get open software that is “good enough” and run it locally. If you have a Raspberry Pi you can run some of this stuff, and it will be slow, but acceptable for many use cases.
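As a rough illustration of the “good enough locally” point, here’s a back-of-the-envelope memory check. The `params * bits / 8` rule of thumb and the 8 GB RAM figure are assumptions for the sketch, not benchmarks:

```python
# Back-of-the-envelope: can a quantized model's weights fit on a small box?
GIB = 1024 ** 3

def weight_footprint_gib(params_billions, bits_per_weight):
    """Approximate weight memory: parameter count * bits per weight / 8."""
    return params_billions * 1e9 * bits_per_weight / 8 / GIB

pi_ram_gib = 8  # e.g. an 8 GB Raspberry Pi 5 (ignoring OS + KV-cache overhead)

for params, bits in [(7, 4), (7, 16), (1.5, 4)]:
    need = weight_footprint_gib(params, bits)
    fits = "fits" if need < pi_ram_gib else "does not fit"
    print(f"{params}B model at {bits}-bit: ~{need:.1f} GiB ({fits})")
```

A 7B model quantized to 4 bits needs only ~3.3 GiB for its weights, which is why it squeezes onto commodity hardware where the same model at 16-bit precision (~13 GiB) would not.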
So the notion that only OpenAI would ever hold the keys, and should therefore have a massive valuation in perpetuity, is just laughable. This Chinese company just highlighted that you can brute-force train more optimized models on garbage-tier hardware.
Me pretending to care about David Sacks’ claim: