• fubarx@lemmy.world · 2 days ago

    Made some 30 of them, talking to the app server and all the containers inside Docker.

    Now we can ask how they’re all doing: application-level questions about records, levels, etc., as well as system-level questions like how much RAM the db server is using. Makes for a fun demo. Not sure how useful it will be in production.
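
    Each one is basically a thin wrapper around an existing API. A rough sketch of the "how much RAM is the db container using" kind of tool, assuming the official MCP Python SDK and the Docker SDK (names are illustrative, not the actual servers from the demo):

    ```python
    # Minimal MCP server exposing one tool: report a container's memory usage.
    # Sketch only -- assumes `pip install mcp docker` and a reachable Docker daemon.
    import docker
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("docker-stats")
    client = docker.from_env()

    @mcp.tool()
    def container_memory(name: str) -> str:
        """Return the current memory usage of a running container, in MiB."""
        container = client.containers.get(name)
        stats = container.stats(stream=False)  # one-shot stats snapshot
        used_mib = stats["memory_stats"]["usage"] / (1024 * 1024)
        return f"{name}: {used_mib:.1f} MiB"

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default
    ```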

  • corvus@lemmy.ml · 3 days ago

    I use the jan-beta GUI; locally you can use any model that supports tool calling, like qwen3-30B or jan-nano. You can download and install MCP servers (from, say, mcp.so) that provide different tools for the model to use, like web search, deep research, web scraping, and downloading or summarizing videos. There are hundreds of MCP servers for different use cases.

    • wise_pancake@lemmy.ca (OP) · 3 days ago (edited)

      Never heard of this tool but I’ll check it out.

      Mostly I’ve just been making my own Dockerfiles and spinning up my own MCP instances.

      They’re actually not hard to build, so I’m making my own for small utilities; that way I don’t get caught up in an is_even-style dependency web.
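
      For reference, a single-tool server with the official Python SDK is only a few lines; this is just a sketch (the tool itself is a throwaway placeholder):

      ```python
      # Sketch of a tiny self-contained MCP server using the official Python SDK
      # (pip install "mcp[cli]"). The is_even tool is only a placeholder example.
      from mcp.server.fastmcp import FastMCP

      mcp = FastMCP("small-utils")

      @mcp.tool()
      def is_even(n: int) -> bool:
          """Return True if n is even -- the kind of tiny utility worth owning yourself."""
          return n % 2 == 0

      if __name__ == "__main__":
          mcp.run()  # stdio transport; easy to wrap in a small Dockerfile
      ```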

    • wise_pancake@lemmy.ca (OP) · 4 days ago (edited)

      Basically it’s a layer to let your LLMs plug into tools.

      They generally run on your machine (I use Docker to sandbox them) and may or may not call out to external APIs.

      For example, I just wrote one that connects to my national weather service’s RSS feed, so my LLMs can fetch and summarize the weather for me in the morning.
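
      In case it’s useful, the whole thing is roughly this shape; a sketch assuming the official Python SDK and feedparser, with a placeholder feed URL rather than the real one:

      ```python
      # Sketch of the weather tool: pull entries from a weather RSS/Atom feed so the
      # model can summarize them. FEED_URL is a placeholder, not the actual feed I use.
      import feedparser
      from mcp.server.fastmcp import FastMCP

      FEED_URL = "https://example.org/weather/city.xml"

      mcp = FastMCP("weather-rss")

      @mcp.tool()
      def todays_forecast() -> str:
          """Return the latest entries from the weather feed as plain text."""
          feed = feedparser.parse(FEED_URL)
          lines = [f"{e.get('title', '')}: {e.get('summary', '')}" for e in feed.entries[:5]]
          return "\n".join(lines) or "No entries found."

      if __name__ == "__main__":
          mcp.run()
      ```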

      Works well with Gemma 3n.