Ollama’s version is a distillation onto Qwen or Llama depending on parameter size, so it’s going to behave very differently from the original model, because it is a different model.
You should have tried Dash to Panel instead of Dash to Dock, based on your preferences. That plus Wintile is what keeps me on GNOME vs Plasma these days.
What makes you say that?
Yes, this is the case; I’m more wondering about kernel support for CPU assignment as it relates to those processes.
In soup.
Very cool.
I have been playing with Omnigen (a new open source image-based AI) for unblur and exposure adjustment. It works pretty darn well, and I’ve considered creating a GIMP plugin for it if no one else does.
https://steamcommunity.com/ still looks a lot like this on desktop.
Yea, this doesn’t sound horrible for what it is. Plenty of Linux systems are this bad.
I ran a BBS back in the day with ~200 users. Engagement was way more valuable than growth. More people make things harder, not easier. More engagement from fewer people is easier to manage, and leads to better communities.
Lemmy feels the same.
Lemmy is for discourse. I’d rather see the healthy and interesting back-and-forth of an OP and a commenter than 5K upvotes.
So, if the UEFI firmware trusts a Microsoft tool that Microsoft trusted a third-party to make and that isn’t open source, it’s not the firmware provider’s fault?
Isn’t this like saying it’s OK for Boeing to be shit because a subcontractor assembled the plane with poorly vetted used parts?
To add to this, I run the same setup, but add Continue to VSCode. It provides an interface similar to Cursor’s that uses the Ollama instance.
One thing to be careful of: the Ollama port has no authentication (ridiculous, but it is what it is).
You’ll need either a card with 12-16GB VRAM for the recommended models for code generation and autocomplete, or you may be able to get away with an 8GB card if it’s a second card in the system. You can also run on CPU, but it’s very slow that way.
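Since the API is unauthenticated, the safest mitigation is to leave Ollama on its default loopback bind and only expose it if Continue runs on another machine. A minimal sketch of a systemd override, assuming the service the official install script creates (the path and subnet are examples, adjust to your deployment):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# By default Ollama binds to 127.0.0.1:11434, which is the safest setting.
# Only set this if you need to reach it from other machines on your LAN:
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

If you do expose it, pair it with a firewall rule limiting the port to your LAN, something like `sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp`, then `sudo systemctl daemon-reload && sudo systemctl restart ollama`.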
Good choices. I take this a step further and bridge IRC, Signal, Gvoice, and WhatsApp from a plugged-in device or container to Matrix, then use ntfy for Matrix notifications. This gives me notifications for all of them in Matrix/Element and thus through ntfy.
Example: instead of Molly, I use the mautrix-signal bridge as the device, and it feeds messages into Matrix.
There’s also a Telegram bridge: https://matrix.org/ecosystem/bridges/telegram/
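For anyone curious how the bridges above hook in: each mautrix bridge generates a registration file that you point your homeserver at as an appservice. A sketch for Synapse (the paths are examples, adjust to your deployment):

```yaml
# homeserver.yaml (Synapse): register each bridge as an appservice
app_service_config_files:
  - /data/mautrix-signal/registration.yaml
  - /data/mautrix-whatsapp/registration.yaml
```

The ntfy side is just HTTP; publishing to a topic is as simple as `curl -d "New message" https://ntfy.sh/your-topic` (topic name is an example), which the notification client subscribes to.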
Mechanism’s stuff is great. The main differences between Twystlock and it are:
So basically, I don’t ever intend to compete, because I don’t care. I made these to improve my CAD and engineering skills, and because I wanted them, as I do most things, and then sent them out into the world 😉
The ones I designed and gave away: https://twystlock.com/ 😉
Specifically, the wall mount and belly stand.
If you ever feel limited by F-Droid, consider Obtainium. Supports F-Droid repos, and many others including straight from GitHub releases.
I use mine like old school Trillian. Element is my Discord, Signal, and IRC client: one thing open on my desktop to do them all.
Edit: For instance, Beeper (https://www.beeper.com/) is just a defederated Synapse server with bridges.
Their instance is not federated.
This power can be purchased for a few dollars. Search for “USB reversible adapter”. Or just keep USB-A to USB-C adapters permanently in everything.
Hah, yea, good point.