<p><span class="h-card" translate="no"><a href="https://mastodon.social/@neuronakaya" class="u-url mention">@<span>neuronakaya</span></a></span> just for context, the Nobel Prize committee doesn&#39;t nominate anyone. Hundreds of people are allowed to submit their nominations, so it only takes a few fanboys to submit Musk as a nominee. Last years there were 300-400 nominated people or organisations.</p>
<p><span class="h-card" translate="no"><a href="https://wandering.shop/@xgranade" class="u-url mention">@<span>xgranade</span></a></span> (this this this. i really dislike the effects that one paper had on the collective discourse around FPGAs. the paper was fine, it&#39;s a cool hack using a truly ancient FPGA, i&#39;ve even reproduced a sub-result from it myself in <a href="https://lab.whitequark.org/notes/2016-08-05/parasitic-interaction-between-oscillating-luts-on-silego-greenpak-4/" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">lab.whitequark.org/notes/2016-</span><span class="invisible">08-05/parasitic-interaction-between-oscillating-luts-on-silego-greenpak-4/</span></a>! i just don&#39;t like people going all woo about it)</p>
<p>(The programmability fallacy, for what it&#39;s worth, also applies to running FPGAs outside of design parameters.)</p>
<p>Even if you had an LLM-based implementation of programmability, will it always work? Will it work for programs up to a certain size? How will you know when it stops working? Will it work for a different seed or at a different temperature?</p><p>The programmability fallacy, as I&#39;ll coin it, lies to us and tells us not to worry about those problems because we can tell LLMs to multiply small numbers by each other and sometimes get the right answer.</p>
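<p>A minimal sketch of why the seed and temperature questions bite, using only a toy token distribution and Python&#39;s standard library (no real model involved):</p><pre><code>import math
import random

def sample(logits, temperature, seed):
    """Draw one token index from softmax(logits / temperature) with a fixed seed."""
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Toy "next token" distribution: index 0 is the correct continuation.
logits = [2.0, 1.5, 0.5]
print([sample(logits, temperature=1.0, seed=s) for s in range(10)])
# Different seeds (or a different temperature) can pick different indices,
# so a prompt that "worked" once is weak evidence it will work on the next run.
</code></pre>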
<p>That&#39;s precisely what you *can&#39;t* do with LLMs, though. Can you use an LLM to simulate a Fredkin gate if you say &quot;fuck&quot; enough, or do you need a &quot;please&quot; or two in there?</p><p>What&#39;s the construction? What *concrete* steps do you take to get an LLM to do what you want, and how do you know that you&#39;re correct?</p><p>With billiard balls, you have Newtonian physics. With MtG, you have the comprehensive rules. With LLMs, you&#39;ve got ?????????.</p>
<p>Computer scientists, as a lot, tend to maintain a cottage industry of proving that one toy model or another is equivalent to Turing machines. Billiard balls can be used to implement the Fredkin gate, which is universal for computation, and thus you can program your local pool table. Magic: the Gathering is infamously Turing complete, a fun fact to bring up during draft.</p><p>Those kinds of proofs are useful because the easiest way to show something can be programmed is to do it.</p>
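<p>To make the universality claim concrete, here&#39;s a minimal sketch (in Python, as an illustration rather than anyone&#39;s canonical construction): a Fredkin gate is just a controlled swap, and feeding constants into the spare wires recovers AND, NOT, and, via De Morgan, OR.</p><pre><code>def fredkin(c, a, b):
    """Fredkin gate: a controlled swap. If the control bit c is 1, swap a and b."""
    return (c, b, a) if c else (c, a, b)

def AND(x, y):
    # With the constant 0 on the third wire, the third output is x AND y.
    return fredkin(x, y, 0)[2]

def NOT(x):
    # With constants 0 and 1 on the spare wires, the third output is NOT x.
    return fredkin(x, 0, 1)[2]

def OR(x, y):
    # De Morgan: x OR y == NOT(AND(NOT(x), NOT(y))).
    return NOT(AND(NOT(x), NOT(y)))

# Check every row of the truth tables.
for x in (0, 1):
    for y in (0, 1):
        assert AND(x, y) == (x and y)
        assert OR(x, y) == (x or y)
    assert NOT(x) == 1 - x
</code></pre><p>That&#39;s what a construction looks like: a rule you can state, and a truth table you can check exhaustively.</p>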
<p>The next step of the fallacy is to conflate the strict and colloquial senses of programmability. If you can give an LLM &quot;instructions&quot; to multiply two numbers, couldn&#39;t you give it &quot;instructions&quot; to do anything else?</p><p>Well, no. That&#39;s a giant leap, and one that&#39;s fully unsupported by evidence. But it *feels* right, if only because you can again try out common problems and find solutions somewhere in the training corpus.</p>
<p>It&#39;s the whole thing where if you ask an LLM to multiply two small numbers together, someone has probably done that somewhere, so it &quot;works,&quot; but it completely fails for larger numbers. &quot;Reasoning&quot; models can get around that by giving an escape hatch to eval, like the original chain-of-thought paper, but then why not just use eval directly?</p><p>But regardless, if you think of a task common enough that it has been solved in the training corpus, then it &quot;works,&quot; right?</p>
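<p>For concreteness, a minimal sketch of that &quot;escape hatch&quot; pattern, where ask_llm() is a purely hypothetical stand-in for whatever model API is in play (nothing here is a real provider&#39;s interface):</p><pre><code># Sketch only: ask_llm() is a hypothetical placeholder, not any real provider's API.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a model call")

def multiply_via_llm(a: int, b: int) -> int:
    # The "escape hatch" pattern: ask the model for an expression,
    # then let a plain interpreter do the arithmetic it can't.
    expression = ask_llm(f"Write a Python expression that multiplies {a} by {b}.")
    return eval(expression)  # eval on untrusted output is its own hazard, of course

def multiply_directly(a: int, b: int) -> int:
    # The deterministic path the escape hatch ultimately relies on anyway.
    return a * b

print(multiply_directly(123456789, 987654321))  # always 121932631112635269
</code></pre>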
<p>Should you use &quot;please&quot; and be more polite to it, or should you use &quot;fuck&quot; to bypass its guard rails? Who knows! As discussed, there&#39;s no theory to help you here, so you have no map to trace back from the behavior you want to the &quot;prompt&quot; you should start with.</p><p>The problem is that some prompts *seem* to work. Because these things are trained on lots and lots of stolen labor, it&#39;s not too hard to find text that superficially resembles some task or another.</p>