<p>The next step of the fallacy is to conflate the strict and colloquial senses of programmability. If you can give an LLM "instructions" to multiply two numbers, couldn't you give it "instructions" to do anything else?</p><p>Well, no. That's a giant leap, and one entirely unsupported by evidence. But it <em>feels</em> right, if only because you can once again try out common problems and find that their solutions already sit somewhere in the training corpus.</p>
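<p>To make the distinction concrete, here is a minimal sketch in Python. The <code>ask_llm</code> helper is hypothetical, a stand-in for whatever model client you'd actually use; the point is the contrast between a function with defined semantics and a prompt whose answer is a statistical guess:</p>
<pre><code># A minimal sketch of strict vs. colloquial "programmability".
# ask_llm() is a hypothetical placeholder, not a real API.

def multiply(a: int, b: int) -> int:
    """Strict programmability: defined semantics for every input."""
    return a * b

def ask_llm(prompt: str) -> str:
    # Stand-in for a real model call; wire up any LLM client here.
    raise NotImplementedError("replace with your model of choice")

def llm_multiply(a: int, b: int) -> str:
    """Colloquial "programmability": a request, not an instruction.

    The reply is a statistical guess. It tends to be right for small,
    common operands (well represented in the training corpus) and
    unreliable for long, random ones.
    """
    return ask_llm(f"What is {a} * {b}? Reply with only the number.")

# multiply(3, 4) is 12 by construction, for every input, forever.
# llm_multiply(3, 4) is probably "12"; llm_multiply(987654321, 123456789)
# is anyone's guess.
</code></pre>
<p>Nothing about the second function generalizes the way the first one does, which is exactly the leap the fallacy asks you to make.</p>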