What are your thoughts on #privacy and #itsecurity regarding the #LocalLLMs you use? They seem to be an alternative to ChatGPT, MS Copilot etc., which are basically creepy privacy black boxes. How can you be sure that local LLMs A) do not “phone home”, B) do not create a profile on you, and C) keep their analysis restricted to the scope of your terminal? As far as I can see, #ollama and #lmstudio do not provide privacy statements.

  • ddh@lemmy.sdf.org · 23 days ago

    I run Ollama with Open WebUI at home.

    A) the containers they run in by default can’t access the Internet, but they are provided access if we turn on web search or want to download new models. Ollama and Open WebUI are fairly popular products and I haven’t seen any evidence of nefarious activity so far.

    B) they create a profile on me and my family members that use them, by design. We can add sensitive documents that the models can use.

    C) they are restricted by what we type and the documents we provide.
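    The containment described in A) can be sketched as a Compose file. Everything below is an illustrative assumption (service names, images, ports), not the poster's actual setup: Ollama sits on an internal-only Docker network with no route to the Internet, while Open WebUI gets a second, ordinary network that you can drop when you don't need web search or model downloads.

```yaml
# Sketch only: Ollama on an internal-only network (no outbound traffic);
# Open WebUI bridges to the Internet via "egress" for web search and
# model downloads. Remove "egress" from its list to cut that off too.
services:
  ollama:
    image: ollama/ollama
    networks: [llm_internal]
    volumes:
      - ollama_data:/root/.ollama

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    networks: [llm_internal, egress]
    ports:
      - "127.0.0.1:3000:8080"   # UI reachable only from this host

networks:
  llm_internal:
    internal: true              # Docker blocks Internet access here
  egress: {}                    # normal bridge network

volumes:
  ollama_data: {}
```

    Note that with this layout, pulling new models through Ollama itself requires temporarily attaching it to a network with egress, which matches the "provided access if we want to download new models" behaviour above.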

    • fmstrat@lemmy.nowsci.com · edited · 22 days ago

      To add to this, I run the same setup, but add Continue to VSCode. It makes an interface similar to Cursor that uses the Ollama instance.
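      For reference, a minimal Continue configuration pointing at a local Ollama instance might look like the following. The model names are placeholders, and the exact schema depends on your Continue version (newer releases use a YAML config file instead of `~/.continue/config.json`):

```json
{
  "models": [
    {
      "title": "Local coder",
      "provider": "ollama",
      "model": "qwen2.5-coder:14b",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```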

      One thing to be careful of, the Ollama port has no authentication (ridiculous, but it is what it is).
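      One quick way to verify that exposure is a plain TCP connect from a *second* machine on your LAN. This is a generic reachability sketch, not an Ollama API call; 11434 is Ollama's default port, and the IP address in the comment is an example.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from another machine on the LAN; if this returns True, anyone on
# that network can talk to your unauthenticated Ollama instance, e.g.:
#   port_open("192.168.1.50", 11434)
```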

      You’ll need either a card with 12-16GB VRAM for the recommended models for code generation and autocomplete, or you may be able to get away with an 8GB card if it’s a second card in the system. You can also run on CPU, but it’s very slow that way.
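      As a rough sanity check on those numbers: VRAM demand scales with parameter count times bytes per weight, plus some fixed runtime overhead for the KV cache and buffers. The constants below are ballpark assumptions, not vendor figures.

```python
def approx_vram_gb(params_billion: float, bits_per_weight: int = 4,
                   overhead_gb: float = 1.5) -> float:
    """Back-of-envelope VRAM estimate: weights + fixed runtime overhead."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params ~ 0.5 GB at 4-bit
    return round(weight_gb + overhead_gb, 1)

print(approx_vram_gb(14))  # 14B coder model, 4-bit quantized -> 8.5
print(approx_vram_gb(8))   # 8B model, 4-bit quantized -> 5.5
```

      This is why a 14B 4-bit model wants a 12-16GB card (you also want headroom for context), while a small autocomplete model fits alongside it or on an 8GB second card.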

      @ShotDonkey@lemmy.world

    • ShotDonkey@lemmy.world (OP) · 23 days ago

      Thank you. As far as I can see these models are free. Doing data mining on users would be a tempting thing, right? Ollama does not specify this on their homepage: no paid plans, no ‘free for private use’, etc. How do they pay their staff and the electricity and hardware bills for model training? Do you know anything about the underlying business models?

      • ddh@lemmy.sdf.org · 23 days ago

        Ollama and Open WebUI, as far as I know, are just open-source software projects created to run pre-trained models, and they have the same business model as many other open-source projects on GitHub.

        The models themselves come from Google, Meta and others; have a look at all the models available on Hugging Face. They are just binary files: they’ve already been trained, and there are no ongoing costs to use them apart from the energy your computer uses to run them.
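        To make “just binary files” concrete: once downloaded, the weights sit on disk like any other data, and you can total their size like any other directory. The Ollama storage path in the comment is an assumption about a default install.

```python
from pathlib import Path

def dir_size_gb(path: str) -> float:
    """Total size of all files under `path`, in decimal gigabytes."""
    root = Path(path).expanduser()
    total = sum(f.stat().st_size for f in root.rglob("*") if f.is_file())
    return round(total / 1e9, 2)

# e.g. dir_size_gb("~/.ollama/models")  # disk used by downloaded weights
```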