<p>The era of ChatGPT is kind of horrifying for me as an instructor of mathematics... Not because I am worried students will use it to cheat (I don&#39;t care! All the worse for them!), but rather because many students may try to use it to <em>learn</em>.</p><p>For example, imagine that I give a proof in lecture and it is just a bit too breezy for a student (or, similarly, they find such a proof in a textbook). They don&#39;t understand it, so they ask ChatGPT to reproduce it for them, asking follow-up questions of the LLM as they go.</p><p>I experimented with this today, on a basic result in elementary number theory, and the results were disastrous... ChatGPT sent me on five different wild-goose chases with subtle and plausible-sounding intermediate claims that were just false. Every time I responded with &quot;Hmm, but I don&#39;t think it is true that [XXX]&quot;, the LLM responded with something like &quot;You are right to point out this error, thank you. It is indeed not true that [XXX], but nonetheless the overall proof strategy remains valid, because we can [...further gish-gallop containing subtle and plausible-sounding claims that happen to be false].&quot;</p><p>I know enough to pinpoint these false claims relatively quickly, but my students probably will not. They&#39;ll instead see them as valid steps that they can reuse in their own proofs.</p>