• fmstrat@lemmy.nowsci.com
    2 days ago

    Ollama’s version is distilled into Qwen or Llama depending on parameter size, so it’s going to behave very differently from the original model, since the distilled weights are a different model entirely.

    • CodexArcanum@lemmy.dbzer0.com
      2 days ago

      Except if you look at the top of OP’s picture, they are also running deepseek-r1:14B through ollama. I downloaded my copy on Sunday, so these should be fairly comparable setups.

      I agree, though, that none of this applies to the full cloud-hosted model. I don’t want my account banned, so I’m not keen on testing these boundary pushes in a surveilled environment. I imagine they have additional controls on the web version, including human intervention.