I went to install it this morning to check it out, but I had to decline when I read the privacy policy. I might try it on my desktop, where I have a lot more tools to ensure my anonymity, but I’m not installing it on my phone. There is not one scrap of data you generate that they aren’t going to hoover up, combine with data they get from anyone who will sell it to them, and then turn around and resell.
I’m sure other apps are just as egregious, which is one reason I’ve been deliberately moving away from native apps to PWAs. Yes, practically everything you can do on the internet is a travesty for privacy, but I’m not going to be on the leading edge of giving myself up to be sold.
it’s actually pretty easy to run locally as well. obviously not as easy as just downloading an app, but it’s gotten relatively straightforward, and the peace of mind is nice
check out ollama, and find an ollama UI
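once ollama is serving (it listens on localhost:11434 by default), you can also skip the UI entirely and hit it from a few lines of python. a minimal sketch - the model tag below is just an example; use whatever you’ve pulled:

    # minimal sketch: query a local ollama server's REST API
    # assumes ollama is running and you've pulled a model, e.g. `ollama pull llama3`
    # (the model tag is an example - swap in whichever model you actually have)
    import json
    import urllib.request

    def ask(prompt, model="llama3"):
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask("why is the sky blue?"))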
That’s not the monster model, though. But yes, I run AI locally (just barely, on my GTX 1660). What I can run locally is pretty decent in limited ways, but I want to see the o1 competitor.
figured i’d do this in a new comment since it’s been a bit since my last, but i just downloaded and ran the 70b model on my mac, and it’s slower but running fine: ~15s to first word, and about half as fast at generating words after that. but it’s running.
this matches what i’ve experienced with other models too: very large models still run, just much, much slower.
i’m not sure what happens when it gets up to the 168b models etc., because i haven’t tried, but it seems it just can’t load the whole model at once, so there’s a lot more loading and unloading, which makes it much slower.
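in case anyone wants to reproduce the timing, here’s roughly how i’d measure time-to-first-word against ollama’s streaming api. a sketch, not a proper benchmark - the model tag is just an example:

    # rough sketch: time-to-first-chunk and overall chunk rate from ollama's
    # streaming /api/generate endpoint (it returns newline-delimited json chunks)
    import json
    import time
    import urllib.request

    def measure(prompt, model="llama3"):  # example tag; swap in your model
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt}).encode(),
            headers={"Content-Type": "application/json"},
        )
        start = time.time()
        first = None
        chunks = 0
        with urllib.request.urlopen(req) as resp:
            for line in resp:
                if first is None:
                    first = time.time() - start  # latency to first generated chunk
                chunks += 1
                if json.loads(line).get("done"):
                    break
        total = time.time() - start
        print(f"first chunk after {first:.1f}s, ~{chunks / total:.1f} chunks/sec overall")

    measure("write a haiku about local models")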
that’s true - i was running 7b and it seemed pretty much instant, so i assumed i could go much larger - turns out it tops out at 14b on a 64gb mac
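the back-of-the-envelope math is just parameter count times bytes per weight, plus kv-cache and runtime overhead. a rough sketch of the rule of thumb - not exact figures, and quantized builds shrink things a lot:

    # rule of thumb: weight memory ~= parameters * bytes per weight
    # (real usage adds kv-cache and runtime overhead; quantization shrinks it)
    def weight_gb(params_billions, bits_per_weight):
        return params_billions * bits_per_weight / 8  # billions of bytes ~= GB

    for params in (7, 14, 70):
        for bits in (16, 4):  # fp16 vs 4-bit quantized
            print(f"{params}b @ {bits}-bit: ~{weight_gb(params, bits):.0f} GB of weights")
    # 70b at fp16 is ~140 GB - far past 64 GB of unified memory, so the
    # runtime has to page weights in and out, which would explain the slowdown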
kudos on poking at the app privacy statement. the real interest in this is going to be running it locally on your own server backend.
so, yeah - as usual, apps bad, bad, bad. but the backend is what really matters.