Hello, I’ve been hearing a lot about this new DeepSeek LLM and was wondering: would it be possible to get the 600+ billion parameter model running on my GPU? I’ve heard that people have got it running on their MacBooks. I have an i7 4790K, 32GB DDR3, and a 7900 XTX with 24GB VRAM. I’m running Arch Linux; this computer is really just for AI stuff, not so much gaming. I did try running the distilled 14B parameter model, but it didn’t work for me; I was using GPT4All to run it. I’m thinking about getting one of the NVIDIA 5090s in the future. Thanks in advance!
I run the 32b one on my 7900 XTX in Alpaca https://jeffser.com/alpaca/
There is no way to fit the full model in any single AMD or Nvidia GPU in existence.
Check this out
I don’t know how big the original model is, but I have an RX 6700 XT and I can easily run the Llama 3 8B distill of DeepSeek R1 with 32k context. I just haven’t figured out how to get good results yet; it always does the
<think></think>
thing.

To run the full 671B model (404GB in size), you would need more than 404GB of combined GPU memory and system RAM (and that’s just to run it at all; you would most likely want it all to be GPU memory to make it run fast).
With 24GB of GPU memory, the largest model from the R1 series that would fit is the 32b-qwen-distill-q4_K_M (20GB in size), available from Ollama (and possibly elsewhere).
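As a back-of-envelope check on those numbers, weights-only size is just parameter count times bits per weight. This sketch ignores KV cache and runtime overhead, so actual memory use is somewhat higher, and the ~4.85 bits/weight average for q4_K_M is my rough estimate, not an official figure:

```python
def model_size_gb(n_params: float, bits_per_param: float) -> float:
    """Weights-only model size in decimal GB: params * bits / 8 bits-per-byte.

    Ignores KV cache, activations, and runtime overhead.
    """
    return n_params * bits_per_param / 8 / 1e9

# Full DeepSeek R1 at roughly q4_K_M (~4.85 bits/weight on average)
print(model_size_gb(671e9, 4.85))  # ~407 GB, in line with the 404GB figure above
# The 32B Qwen distill at the same quantization
print(model_size_gb(32e9, 4.85))   # ~19 GB, fits in 24GB of VRAM
```

The same arithmetic explains why an fp16 copy of the full model (about 1.3TB) is out of reach even for multi-GPU workstations.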
I run the 32B version on my 6700 XT with an R9 3700X using Ollama. It runs well, but it gets a bit slower on complex problems. I once ran a 70B Llama model, but it took a long time to finish.
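Since people are asking how this works in practice, a minimal Ollama session for the 32B distill looks roughly like this (assuming `deepseek-r1:32b` is the tag for the q4_K_M Qwen distill, which I believe it is):

```shell
# Pull the 32B Qwen distill (roughly a 20GB download)
ollama pull deepseek-r1:32b

# Run it with a one-off prompt, or omit the prompt for an interactive chat
ollama run deepseek-r1:32b "Explain the difference between RAM and VRAM."
```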
Hey, not to sidetrack OP’s post or your own, but I’m new to the home LLM space and was wondering: once you have the model set up, is there a GUI? And how do you input tasks for it to do?
You can use the terminal, or something like AnythingLLM. It has a GUI, and you can import pictures and websites.
I have the same GPU but I always run 7B/8B variants as exl2. Do you use GGUF to use your system RAM?
I also have a 6700 XT, but I can’t get Ollama running on it; it just defaults to the CPU (a Ryzen 5600). I plan to tackle this problem on a free weekend, and now I have a new reason to solve it.
On some Linux distros like Arch Linux, you might need to install an ollama-rocm package too.
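On Arch, getting Ollama onto the GPU looks roughly like this. This is a sketch, assuming the usual ROCm workaround for RDNA2 cards like the 6700 XT, which reports as gfx1031 and isn’t on ROCm’s official support list:

```shell
# Install Ollama with ROCm support from the official repos
sudo pacman -S ollama-rocm

# Common workaround: spoof the 6700 XT (gfx1031) as the supported gfx1030
# by adding   Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"   to the service
sudo systemctl edit ollama
sudo systemctl restart ollama

# Verify the loaded model reports GPU rather than CPU
ollama ps
```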
They run smaller variations of it on their personal machines. There are models that fit on almost any machine, but IME the first one that’s actually useful is the 32B, which you can probably run on the XTX. Anything smaller is only good for the more trivial tasks.
Just to confirm: I tried the 7B and it was fast but pretty untrustworthy. OP’s 24 GB of VRAM should be enough to run a medium-sized version, though…