Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
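To make the pattern concrete, here is a hypothetical sketch (the off-by-one bug and both function names are invented for illustration, not taken from the research): given the flawed function as context, a completion that "parrots" the bug mirrors its iteration pattern instead of correcting it.

```python
# Hypothetical "prompt" code containing an off-by-one bug:
def mean_score(scores):
    total = 0
    for i in range(len(scores) - 1):  # bug: silently drops the last score
        total += scores[i]
    return total / len(scores)

# A completion that parrots the flaw copies the same iteration pattern
# rather than fixing it:
def max_score(scores):
    best = scores[0]
    for i in range(len(scores) - 1):  # same off-by-one, faithfully repeated
        best = max(best, scores[i])
    return best
```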
I’ve found it okay for getting a general feel for things, but I’ve also been handed insidiously bad code: functions and data structures that look close enough to the real thing but are deeply wrong or non-existent.
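A quick, hypothetical illustration of that failure mode (the pandas example is my choice, not the commenter's): `pd.read_xlsx` looks plausible alongside the real `read_csv` and `read_excel`, but no such function exists.

```python
import pandas as pd

# A plausible-looking but non-existent function a model might emit;
# calling pd.read_xlsx(...) would raise AttributeError:
print(hasattr(pd, "read_xlsx"))   # False
# The loader that actually exists:
print(hasattr(pd, "read_excel"))  # True
```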