Hello, I’ve been hearing a lot about this new DeepSeek LLM and was wondering: would it be possible to get the 600+ billion parameter model running on my GPU? I’ve heard something about people getting it to run on their MacBooks. I have an i7-4790K, 32GB DDR3, and a 7900 XTX with 24GB of VRAM. I’m running Arch Linux; this computer is mostly for AI stuff, not so much gaming. I did try running the distilled 14B parameter model, but it didn’t work for me (I was using GPT4All to run it). I’m thinking about getting one of the NVIDIA 5090s in the future. Thanks in advance!
They run smaller distilled variants of it on their personal machines. There are models that fit in almost any machine, but IME the first one that’s actually useful is the 32b, which you can probably run on the XTX. Anything smaller is only good for fairly trivial tasks.
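For a rough sense of what fits where, here’s a sketch of the usual back-of-the-envelope VRAM math, assuming ~4.5 bits per parameter for a Q4-style quant and ~15% overhead for KV cache and activations (both numbers are my own ballpark, not from this thread):

```python
# Back-of-the-envelope VRAM estimate for quantized LLM weights.
# Assumptions (mine, not from the thread): ~4.5 bits/param for a
# typical Q4-style quant, plus ~15% overhead for KV cache etc.

def est_vram_gb(params_billions, bits_per_param=4.5, overhead=1.15):
    """Approximate GB of VRAM needed to hold the weights plus overhead."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9 * overhead

for size in (7, 14, 32, 70, 671):
    print(f"{size:>4}B -> ~{est_vram_gb(size):.0f} GB")
```

By that math a Q4-ish 32b lands around 21 GB, which just squeezes into 24 GB of VRAM, while the full 600+B model would need hundreds of GB no matter how you quantize it.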
Just to confirm: I tried the 7b and it was fast but pretty untrustworthy. OP’s 24 GB of VRAM should be enough to run a medium-sized version, tho…