- open weights, as the training dataset is not open/available afaik. But yum :D
One thing to keep in mind, though:
I verified this myself with the 1.1B model.
Thanks for the clarification!
So… as far as I understand from this thread, it's basically a finished model (Llama or Qwen) which is then fine-tuned on an unknown dataset? That would explain the claimed $6M training cost, hiding the fact that the heavy lifting was done by others (the US of A's Meta, in this case). Nothing revolutionary to see here, I guess. Small improvements are nice to have, though. I wonder how their smallest models perform; are they any better than llama3.2:8b?
Why are you so heavily and openly advertising DeepSeek?
That article is written by DeepSeek R1, isn't it?
Old woman smell would be my least concern if I had fingers like hers.
Is it a war or a cartel, though?