• 2 Posts
  • 14 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • If I can offer one tip, it would be to find someone from a place you’re interested in and become a sort of pen pal with them in an app like Tandem; you’ll automatically get a better perspective of that place.

    A lot of people will be happy to tell you about where they live, and you’d automatically have lots of people willing to talk to you just from being a native English speaker: as you probably know, there’s an endless list of people out there who know enough English to sort of communicate, but rarely get the chance to actually talk with a native speaker in a chat where they won’t have to be embarrassed about making a few mistakes.



  • This seems to me like just a semantic difference, though. People say the LLM is “making shit up” when it outputs something that isn’t correct, and that usually happens (as far as I know) because the information you’re asking about wasn’t represented enough in the training data to reliably steer the answer toward it.

    In any case, users expect LLMs to be deterministic when they’re not at all. They’re deep learning models so complicated that it’s impossible to predict what effect a small change in the input will have on the output. So a model could give the expected answer to a certain question, and then a very unexpected one just because a word in the input was added or changed, even one that seems irrelevant.
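    To make that concrete, here’s a toy sketch (my own illustration, not any vendor’s API) of why answers vary: decoding with a temperature above zero samples from the next-token distribution, so the same prompt can produce different tokens on different runs.

        import numpy as np

        rng = np.random.default_rng()

        def sample_next_token(logits, temperature=0.8):
            """Softmax over temperature-scaled logits, then sample."""
            scaled = np.array(logits, dtype=float) / temperature
            probs = np.exp(scaled - scaled.max())
            probs /= probs.sum()
            return rng.choice(len(probs), p=probs)

        # Hypothetical logits for four candidate next tokens.
        logits = [2.0, 1.5, 0.3, -1.0]
        print([sample_next_token(logits) for _ in range(10)])
        # Output differs from run to run, e.g. [0, 0, 1, 0, 2, ...]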


  • Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows there’s always randomness to its answers, and sometimes it outputs a totally weird, nonsensical one too. Just start a new chat and ask again, and it’ll give a different answer.

    This is actually one way to check whether it’s “hallucinating” something: if it gives the same answer two or more times in different chats, it’s likely not making it up (there’s a rough sketch of that check below).

    So my point is that this article took something LLMs do quite often and made it seem like something extraordinary had happened.
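    Here’s that repeat-the-question check as a rough sketch, assuming a placeholder ask_in_new_chat function (not a real API; plug in whatever client you use):

        def ask_in_new_chat(prompt: str) -> str:
            """Placeholder: send `prompt` in a fresh chat, return the reply."""
            raise NotImplementedError("wire up your LLM client here")

        def looks_consistent(prompt: str, runs: int = 3) -> bool:
            # If independent chats agree, the answer is less likely to be
            # a one-off hallucination (agreement is no guarantee, though).
            answers = {ask_in_new_chat(prompt).strip().lower() for _ in range(runs)}
            return len(answers) == 1

        # Naive exact-text comparison; a real check would compare the
        # extracted fact, since wording varies between runs.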







  • Meanwhile, there was a video interviewing Brazilians living illegally in the US who supported Trump. Most of them were in favor of expelling immigrants who broke laws (ignoring that they’re in the country illegally themselves). They saw themselves as the “good guys”, so it wouldn’t apply to them. And some complained that nowadays there are more immigrants, and hence more competition for jobs, so they wanted it to be harder to get in. Typical “now that I’m up, kick away the ladder”. I just assume most of his supporters are selfish people who wouldn’t miss the chance to throw someone else under the bus for some personal gain, as long as they don’t stain their own hands with blood.

    I would look for the video, but it was in Portuguese anyway.




  • That’s actually quite common in large companies; just recently I read this story:

    Back then, there was close to zero collaboration between divisions at Microsoft […] In late 2013, my team was building Skype for Web, which we positioned as a competitor to Google Hangouts. We had a problem, though: in order to start a video or voice call, users needed to download a plugin which contained the required video codecs. We noticed Google Hangouts did the same on Internet Explorer and Firefox, but not on Chrome because the plugin was bundled with the browser for a frictionless experience. […] My team decided we had to offer the same frictionless experience on Microsoft’s latest browser, Edge, which was in development at the time. […] the team politely and firmly rejected bundling our plugin into the new Microsoft browser. The reason? Their KPI was to minimize the download size of the browser, and helping us would not help them reach that goal.




  • This kind of logic never made sense to me. If an AI could build something like Netflix (even if it needed the assistance of a mid-level software engineer), then every indie dev would be able to build a Netflix competitor, driving the value of Netflix down. Open source tools would quickly surpass any closed source software, and would become very user-friendly without much effort.

    We’d see LLMs being used to rapidly create and improve infrastructure like compilers, IDEs and build systems that are currently complex and slow, to rewrite slow software in faster languages, etc. So many projects that are stalled today for lack of manpower would be flourishing, flooding us with new apps and features at an incredible pace.

    I’ve yet to see it happen. And that’s because for LLMs to produce anything of sufficient quality, they need someone who understands what they’re outputting: someone who can add the necessary context to each prompt, test the result, and integrate it into the bigger picture without causing regressions (something like the loop sketched below). That’s no simple work, and it even requires understanding the LLM’s processing limitations.
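    To be clear about what I mean by “test and integrate”, here’s a hypothetical sketch of that loop (llm_generate and apply_patch are placeholders, not real APIs): a generated change is only accepted once the project’s own test suite passes, and failures get fed back into the next prompt.

        import subprocess

        def llm_generate(task: str, feedback: str = "") -> str:
            """Placeholder: ask the model for a patch, with optional feedback."""
            raise NotImplementedError("wire up your LLM client here")

        def apply_patch(patch: str) -> None:
            """Placeholder: write the proposed change into the working tree."""
            raise NotImplementedError

        def tests_pass() -> tuple[bool, str]:
            # Run the project's test suite (assuming pytest here).
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            return result.returncode == 0, result.stdout

        def attempt(task: str, max_tries: int = 3) -> str | None:
            feedback = ""
            for _ in range(max_tries):
                patch = llm_generate(task, feedback)
                apply_patch(patch)
                ok, log = tests_pass()
                if ok:
                    return patch  # still needs human review before merging
                feedback = log  # feed test failures into the next prompt
            return None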