• No_Ones_Slick_Like_Gaston@lemmy.world · 8 days ago

    There’s a lot of explaining to do for Meta, OpenAI, Anthropic’s Claude and Google Gemini to justify the premium for their models now that there’s a literal open-source model that can do the basics.

    • suoko@feddit.it (OP) · 7 days ago

      I’m testing vscode + continue + ollama + qwen2.5-coder right now. With a modest GPU it’s already OK.
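
      For anyone curious, a minimal sketch of querying that local Ollama instance from Python (my own example, assuming the default localhost:11434 endpoint and the qwen2.5-coder tag; adjust to whatever you actually pulled):

          import requests

          # Ask the locally running Ollama server for a completion.
          # Assumes `ollama pull qwen2.5-coder` has already been done.
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "qwen2.5-coder",
                  "prompt": "Write a Python function that reverses a string.",
                  "stream": False,  # one JSON object instead of a stream
              },
              timeout=120,
          )
          resp.raise_for_status()
          print(resp.json()["response"])

      The Continue extension talks to the same local server, so the editor and any scripts share one model.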

    • suoko@feddit.it (OP) · 7 days ago

      You still need expensive hardware to run it, unless the myceliumwebserver project takes off.

      • johant@lemmy.ml · 7 days ago (edited)

        I’m testing the 14B Qwen DeepSeek R1 distill through ollama and it’s impressive. I think I could switch most of my current ChatGPT usage over to it (not a lot, I should admit). Hardware is an AMD 7950X3D with an Nvidia 3070 Ti. Not the cheapest hardware, but not the most expensive either. It’s of course not as good as the full model on deepseek.com, but I can run it truly locally, right now.
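
        For what it’s worth, a rough sketch of what that swap can look like in code (my own example, assuming the deepseek-r1:14b tag from the Ollama library and the default local API):

            import requests

            # Chat-style request against the local Ollama server, shaped
            # much like a hosted chat API call.
            resp = requests.post(
                "http://localhost:11434/api/chat",
                json={
                    "model": "deepseek-r1:14b",
                    "messages": [
                        {"role": "user", "content": "Explain what a KV cache is."}
                    ],
                    "stream": False,
                },
                timeout=300,
            )
            resp.raise_for_status()
            print(resp.json()["message"]["content"])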

        • Scipitie@lemmy.dbzer0.com · 7 days ago

          How much VRAM does your Ti pack? Is that the standard 8 GB of GDDR6?

          I ask because I’m surprised and impressed that a 14B model runs smoothly.

          Thanks for the insights!

          • birdcat@lemmy.ml · 11 hours ago

            I don’t even have a GPU and the 14B model runs at an acceptable speed. But yes, faster and bigger would be nice… or knowing how to distill the biggest one, since I only use it for something very specific.

          • johant@lemmy.ml · 5 days ago

            Sorry, it should have said 3080 Ti, which has 12 GB of VRAM. Also, I believe the model is the Q4 quant.
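
            As a back-of-envelope check on why that fits, here is a rough estimate (the bytes-per-weight and overhead figures are assumptions, not measurements):

                # Rough VRAM estimate for a 14B-parameter model at Q4.
                params = 14e9
                bytes_per_weight = 0.5                         # ~4 bits per weight
                weights_gb = params * bytes_per_weight / 1e9   # ~7 GB of weights
                overhead_gb = weights_gb * 0.25                # KV cache, buffers (assumed)
                print(f"~{weights_gb + overhead_gb:.1f} GB")   # ~8.8 GB, inside 12 GB

            So a Q4 14B model should squeeze into 12 GB with room for a modest context window.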

      • No_Ones_Slick_Like_Gaston@lemmy.world · 7 days ago

        Correct. But what’s more expensive: a single local computing instance, or cloud-based, credit-eating SaaS AI that doesn’t produce significantly better results?