I mean, god bless 'em for stealing already-stolen data from scumfuck tech oligarchs and causing a multi-billion dollar devaluation in the AI bubble. If people could just stop laundering the term “open source”, that’d be great.
Hello, tone-policing genocide-defender and/or carnist 👋
Instead of being mad about words, maybe you should think about why the words bother you more than the injustice they describe.
Have a day!
You won’t see me on the side of the “debate” that launders language in defense of the owning class ¯\_(ツ)_/¯
Yes. That solution would be to not lie about it by calling something that isn’t open source “open source”.
That’s fine if you think the algorithm is the most important thing. I think the training data is equally important, and I’m so frustrated by the bastardization of the meaning of “open source” as it’s applied to LLMs.
It’s like a normal software project that’s a thin wrapper over a proprietary library you must link against calling itself open source. The wrapper is open, but the actual substance of what provides the functionality isn’t.
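To make the analogy concrete, here’s a toy sketch (all names made up — `proprietary_core` stands in for the closed blob you’d normally link against, stubbed here just so the example runs):

```python
# --- the part you never get the source for (stubbed so this runs) ---
def proprietary_core(x):
    # In reality: a closed binary you can't read, audit, or rebuild.
    return x * 42

# --- the "open source" part: a thin forwarding shim ---
def open_wrapper(x):
    """All this 'open' code does is hand off to the closed component."""
    return proprietary_core(x)

print(open_wrapper(2))  # 84
```

Calling the shim “open source” is technically true of those few lines and wildly misleading about the project as a whole.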
It’d be fine if we could just use more honest language like “open weight”, but “open source” means something different.
The training data is the important piece, and if that’s not open, then it’s not open source.
I don’t want the data just to avoid using the official model. I want the data so that I can reproduce the model myself. Without the training data, you can’t reproduce the model, and if you can’t do that, it’s not open source.
The idea that a normal person can scrape the same amount and quality of data that any company or government can, and tune the weights enough to recreate the model is absurd.
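The “open weight” vs. “open source” distinction boils down to which step you can actually re-run. A deliberately toy sketch (made-up functions, not any real model’s API):

```python
def train(training_data):
    """The step you CANNOT reproduce: the data is secret."""
    # Toy "training": the weight is just derived from the data.
    return sum(training_data) / len(training_data)

def infer(weights, x):
    """The step anyone can run: the weights were published."""
    return weights * x

# The vendor had the data, so they could produce the weights:
secret_data = [1.0, 2.0, 3.0]          # never published
published_weights = train(secret_data)  # -> 2.0

# You only get the weights. Inference works fine...
print(infer(published_weights, 10))     # 20.0

# ...but without secret_data you can't re-run train() to verify
# or rebuild the model. Open weights, not open source.
```
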
I’m not seeing the training data here… so it looks like the answer is yes, it’s not actually open source.
Mental Outlaw is a reich-wing freak, so that’s par for the course. Unfortunately, there are a fair amount of these shitheads in the Linux YouTube space.
Is it actually open source, or are we using the fake definition of “open source AI” that the OSI has massaged into being so corpo-friendly that the training data itself can be kept a secret?
It would be nice if AOSP (or the GOS devs) could expose KVM to userspace as a stop-gap for the dystopian nonsense that is the Play """Integrity""" API.
This would allow you to virtualize another device that could hopefully pass the dystopian checks so you can use the apps that are opting in to this nonsense. That, or just have a Linux distro, which has no dystopian bullshit to begin with.
That’s fine. I’m sure there are some sub-par TUIs out there. I’ve seen pretty great TUIs, especially the ones written in Rust (because of the excellent TUI lib they have).
GUIs are fine too (as long as they don’t use Electron lol).
It’s just an easy thing to contrast against.
Electron is one of the slowest, clunkiest, most memory-hogging ways to ship a UI, and TUIs are the exact opposite. I don’t care if (name of company that ships Electron slop here) can ship their webpage masquerading as software to more systems more easily. If your messaging “app” has input lag when I type something, it’s a dogshit experience.
Of course, there are ways to ship GUIs that avoid everything wrong with Electron, but comparing TUIs with those is less interesting and more a question of whether the person likes to live in their terminal or not.
As someone who has never used Puppet, I also wonder this. Ansible is agentless and works on basically anything. What do you gain by requiring an agent, like with this?
Why??? TUIs are the best kind of UI. They run anywhere and don’t siphon your system resources like garbage Electron apps.
What are you referring to? I don’t think these changes have anything to do with AI.
The closest thing I found in the article was a mention of LLVM, which is a totally different thing from LLMs, if that’s what you’re thinking.
My use of the word “stealing” is not a condemnation, so substitute it with “borrowing” or “using” if you want. It was already stolen by other tech oligarchs.
You can call the algo open source if the code is available under an OSS license. But the larger project still uses proprietary training data, and therefore the whole model, which requires that proprietary training data to function, is not open source.