AI Can’t Cure a Flaccid Mind

Many of my classes include a large writing component. I’ve designed the courses so that most students write the best paper they’ll ever write in their lives. Recently, I had reason to believe that a student had used AI or a paid service to write their paper. I couldn’t find conclusive evidence that they didn’t write it themselves, but in the end it didn’t matter much.


Message To My Students: Don’t Use AI to Cheat (at least not yet)

If you have spent any time on social media in the past week, you’ve probably noticed a lot of people using the new AI program called ChatGPT. Joy blogged about it recently too. It’s a fun thing to play with and often gives you very good (or at least interesting) responses to questions you ask. And it’s blown up on social media, probably because it’s free, responds instantly, and is easy to screenshot.

But as with all things AI, there are numerous concerns that come up, both theoretical and immediately real. One immediately real concern among academics is the possibility of cheating by students on homework, short writing assignments, or take-home exams. I don’t want to diminish these concerns, but I think for now they are overblown. Let me demonstrate by example.

This semester I am teaching an undergraduate course in Economic History. Two of the big topics we cover are the Industrial Revolution and the Great Depression. Specifically, we spend a lot of time discussing the various theories of the causes of these two events. On the exams, students are asked to, more or less, summarize these potential causes and discuss them.

How does ChatGPT do?

On the Industrial Revolution:

And on the Great Depression:

Now, it’s not that these answers are flat-out wrong. The answers certainly list theories that have been discussed at various times, including in the academic literature. But these answers just wouldn’t be very good for my class, primarily because they miss almost all of the theories that we have discussed in class as being likely causes. Moreover, the answers also list theories that we have discussed in class as probably not being correct.

These kinds of errors are especially apparent in the answer about the Great Depression, which reads like it was taken straight from a high school history textbook, ignoring almost everything economists have said about the topic. The answer for the Industrial Revolution doesn’t make this mistake so much as it misses most of the theories discussed by Koyama and Rubin, the main book we used to work through the literature. If a student gave an answer like the AI’s, it would suggest to me that they didn’t even look at the chapter titles in K&R, which provide a roadmap of the main theories.

So, my message to students: don’t try to use this to answer questions in class, at least not right now. The program will certainly improve, and perhaps it will eventually get very good at answering these kinds of academic questions.

But I also have a message to fellow academics: make sure that you are writing questions that aren’t easily answered by an AI. This can be hard to do, especially if you haven’t thought about it deeply, but ultimately thinking in this way should help you to write better exam and homework questions. This approach seems far superior to the one that the AI suggests.