Bonus issue:
This one is a little bit less obvious
Lol, my brain is like, nope, I’m not even trying to read that.
I think I lost a few brain cells reading it all the way through.
Wow, this just hurts. The “twice, I might add!” is sooooo fucking bad. I don’t have aby words for this.
aby
Checks out
god damn it i can’t type lmao
I mean, even if it’s annoying that someone obviously used AI, they probably still have that problem and just suck at communicating it themselves
They don’t, because it’s not an actual issue for any human reading it. The README contains the data and the repo is just for coordination, but the LLM doesn’t understand that.
Then… that’s so fucking weird, why would someone make that issue? I genuinely can’t understand how this could have happened in that case.
Why do LLMs obsess over making numbered lists? They seem to do that constantly.
Oh, I can help! 🎉
- computers like lists, they organize things.
- itemized things are better when linked! 🔗
- I hate myself a little for writing this out 😐
Well they are computers…
- Honestly I don’t know
My conspiracy theory is that they have a hard time figuring out the logical relation between sentences, and hence don’t generate good transitions between them.
I think bullet points might be manually tuned up by the developers rather than inherently present in the model, because we don’t tend to see bullet points that much in normal human communication.
That’s not a bad theory, especially since newer models don’t do it as often
The emoji littering in fastapi’s documentation actually drove me away from using it.
When your repository is on Facebook.
I wonder if they made ChatGPT use an unnatural amount of emojis just to make it easier to spot
People often use a ridiculous amount of emojis in their READMEs; perhaps seeing it was a README triggered something in the LLM to talk like a readme?
There have been so many people filing AI-generated security vulnerabilities