A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, self-hosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 2 Posts
  • 16 Comments
Joined 7 months ago
Cake day: June 25th, 2024

  • Sure, but we had the same thing with Alpaca, Llama2, Llama3, 3.2, Mistral, Phi… They’ve all been getting smaller and/or more intelligent since a year ago (and longer), when small models started to compete with ChatGPT, or at least claimed to do so… If that’s it, shouldn’t it have happened like a year ago? We definitely had those graphs back then, when some Llama or Mistral outperformed the ChatGPT of the time in some benchmarks… I think the precedent for the headline “… outperforms ChatGPT” or “… is better than …” is Llama2 70B in summer 2023. And claims like that have been pretty constant ever since.

    Edit: Computerphile covered Deepseek: https://youtu.be/gY4Z-9QlZ64

    But I think I get it. If people really thought OpenAI was going to give trillions of dollars to Nvidia for hardware, and now there is some competition and more efficient AI available… That might come as a reality check. I just think the prospect of it all is a bit funny. AI is a big bubble especially because everyone thinks it’s going to make progress and big advances… And now it does… And the stock price drops… That’s just silly, IMO. And I’d bet now is a good time to buy some stocks, since the better AI gets, the more it gets applied.


  • I still think this isn’t connected to any facts. It’s just some speculation bubble doing weird things. We’ve been improving AI models for the better part of the last two years. Until now that’s been inflating the bubble. Now literally the same thing happens and they’re all selling their Nvidia stock… Wtf!? And by the way, we’re talking about a Chinese company here. Such companies are known to make unsubstantiated claims about how the political restrictions on AI chips don’t really affect them. They’ve done that before. It might not even be true.



  • Thanks for explaining. I get that. Seems we’re moving away from democracy and freedom these days. That’s hard to tackle, and there’s a multitude of reasons and dynamics at play. I also learned at school that we usually have reform or revolution available. Plus a few successful forms of nonviolent resistance. Or civil war, war and a restart, continued oppression… We’ll see. I hope for the best. But in my opinion freedom is a constant fight, even in “free” countries; it’s not granted automatically or indefinitely.


  • You’re right, I’m not really sure I understand what the article is about, or how it translates to the title and to us, the people.

    I’m aware of oppressive regimes, weapon systems, surveillance, misinformation and manipulation taking place all around the world. And all of that becoming very efficient by technology, automation, algorithms, etc.

    I don’t think we can rely on the government or the companies, though. The government itself is the entity oppressing the people. And since the article is talking about the Trump situation… I mean all the billionaires and tech-bros were present at his inauguration ceremony, kissing his ass… I don’t think we can rely on them or their employees, either…

    So my thinking is, if it’s technology that’s going to solve this, or the citizens have any influence in the first place… as the title implies(?!), it has to be something like Free Software. Or at least something independent. Or is there anything else left?

    But I’d agree, me using LibreOffice and encrypting my phone is not going to change whether some trans people get arrested somewhere… I really don’t understand what the article wants to tell me… We could overturn the government? Or stop sending weapons or similar tech to certain countries… But that’s all political. None of that is really related to technology in the sense that the answer lies within technology…




  • Lol. I hope you took that as an invitation to exercise your brain. I mean not everyone needs to know this… But it’s fairly simple maths. A cylinder has a volume of πr²h, so it’s 3000/(3.14 × 11 × 11), a bit less than 8 cm. (There’s a quick sketch of the calculation after this comment.)

    The smart method is to use your kitchen scale and just weigh stuff. That’s usually easier and quicker with most ingredients. And you generally get fewer measuring cups dirty. Kinda depends on who wrote your recipes, though. Mine usually come with measurements by weight, whereas very old recipe books and American cooks often measure by volume. And the kitchen scale won’t really help there, unless you memorize the density of flour and butter 😅

    But seriously, weighing 3 kg of water is far better than sticking your ruler into the soup to see if it levels out at 7.9 cm…

    I got pots with a liter scale engraved inside. That’s very nice to have.

    But I seriously have to try some more mundane stuff with ChatGPT. Like this. Or ask it how to hard-boil my eggs…
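    A minimal Python sketch of that calculation, just to illustrate the πr²h bit; the function name and the 11 cm radius are only example assumptions, not anything from the original recipe:

    ```python
    import math

    def fill_height_cm(volume_ml: float, pot_radius_cm: float) -> float:
        """Height the liquid reaches in a cylindrical pot: h = V / (pi * r^2)."""
        return volume_ml / (math.pi * pot_radius_cm ** 2)

    # 3 litres (3000 ml) in a pot with an 11 cm inner radius -> roughly 7.9 cm
    print(round(fill_height_cm(3000, 11), 1))
    ```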


  • I generally dislike how cross-posts work on Lemmy. But that could be just me. I’m subscribed to the communities I want to see, and I see a post once regardless, or several times if someone decides to cross-post. That feels more like spam to me than anything useful.

    It’s usually a thing on regular SFW Lemmy for me, where people post the same tech news to all 5 technology communities, or their simple question gets answered 3 separate times by different people who don’t see each other’s comments… And I also don’t want to unsubscribe and miss out, so I have to live with it.

    I don’t think it’s a big issue in NSFW though, or when it’s just the same picture showing up twice every now and then.


  • I agree. Though I don’t think I’m surprised by how good they are. I’ve only tried ChatGPT and Llama3 so far, and they struggle with literally everything: comparing numbers, dividing them, multiplying them… Division has had a 100% failure rate for me so far. I think the model has just memorized the basic multiplication tables and a few common additions and subtractions; I don’t see it getting the concepts. But I’ll try DeepSeek as you suggested and see how well it does. I doubt it’s far off, but I’ll have to check. And I’ll try some middle school maths. Maybe it’s better at reasoning and transforming equations.




  • Nice article 😆 And I wonder if it’s going to stay that way, or if it’s like a new invention that’s still missing a (good) application. I have some other good use cases which are missing from the list: image classification and description, speech-to-text and text-to-speech, and machine translation. I think those are massively useful. But as pointed out in the article, generative AI does lots of things which harm people and society. I mean the promise is that it’s going to get better and stop lying so much, so we can have some proper applications as well. But that’s not a thing yet. And personally - I’m still waiting for AI to merge with robotics and do real hands-on work, which could be very helpful in some professions. Or lead to a more dystopian future.

    And I believe all the acceleration of everything, spreading misinformation and making it super cheap and easy to manipulate and spam, is here to stay. That’s something we need to deal with, and it’s not easy or straightforward. If I were a tech bro, I’d advertise my AI solution to deal with the issues that arise from AI 😅