<p>The era of ChatGPT is kind of horrifying for me as an instructor of mathematics... Not because I am worried students will use it to cheat (I don't care! All the worse for them!), but rather because many students may try to use it to <em>learn</em>.</p><p>For example, imagine that I give a proof in lecture and it is just a bit too breezy for a student (or, similarly, they find such a proof in a textbook). They don't understand it, so they ask ChatGPT to reproduce it for them, asking follow-up questions of the LLM as they go.</p><p>I experimented with this today, on a basic result in elementary number theory, and the results were disastrous... ChatGPT sent me on five different wild goose chases with subtle and plausible-sounding intermediate claims that were just false. Every time I responded with "Hmm, but I don't think it is true that [XXX]", the LLM responded with something like "You are right to point out this error, thank you. It is indeed not true that [XXX], but nonetheless the overall proof strategy remains valid, because we can [...further gish-gallop containing subtle and plausible-sounding claims that happen to be false]."</p><p>I know enough to be able to pinpoint these false claims relatively quickly, but my students probably will not. They'll instead see them as valid steps that they can reuse in their own proofs.</p>