I’m not a big fan of focusing on the mental health aspect when the real story is how bad the AI that grades their papers is. I’ve heard stories of people submitting papers through an AI system that assigns each one a plagiarism score; too high and the paper is rejected. But the assignment is to use the class book as a reference, and the checker flags material drawn from that book as being too close to plagiarism. So writing a paper becomes a sort of CAPTCHA exercise: you have to deliberately phrase things to get past a filter, not to actually write a good paper.
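For what it’s worth, that failure mode is easy to reproduce. Here’s a minimal sketch in Python, purely illustrative and not any real checker’s code (the n-gram scorer, the 0.30 cutoff, and every name in it are assumptions), showing why a threshold on raw text overlap rejects exactly the papers that follow the assignment:

    # Hypothetical sketch of a naive similarity-threshold checker whose
    # reference corpus includes the assigned class book. Quoting or
    # closely paraphrasing the book then guarantees a high score.

    def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
        """All n-word sequences in the text, lowercased."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def plagiarism_score(paper: str, corpus: list[str], n: int = 5) -> float:
        """Fraction of the paper's n-grams found anywhere in the corpus."""
        paper_grams = ngrams(paper, n)
        if not paper_grams:
            return 0.0
        corpus_grams = set().union(*(ngrams(doc, n) for doc in corpus))
        return len(paper_grams & corpus_grams) / len(paper_grams)

    THRESHOLD = 0.30  # made-up rejection cutoff

    # The class book sits in the reference corpus, so the assignment
    # ("use the class book as a reference") guarantees overlap.
    class_book = ("the krebs cycle takes place in the mitochondrial "
                  "matrix and produces energy carriers")
    paper = ("as the textbook explains the krebs cycle takes place in the "
             "mitochondrial matrix and produces energy carriers for the cell")

    score = plagiarism_score(paper, [class_book])
    print(f"score={score:.2f}")  # 0.56, well over the cutoff
    print("REJECTED" if score > THRESHOLD else "accepted")

Under a setup like this, the student’s only way to lower the score is to reword the source material until the n-grams stop matching, which is the CAPTCHA dynamic described above.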
Yeah, but there’s always an article about how AI Z is actually inaccurate or inefficient at X for Y reasons.
I think some articles focusing on the human cost (mental health in this case) are good too.
The mental cost is being called a liar, and that is not unique to AI. Framing it this way serves only to demonize AI and to shift blame away from the people who implemented it poorly. Finding the root cause and reporting on that is good journalism. Baiting people with feelings is not.