<p><span class="h-card" translate="no"><a href="https://mastodon.social/@regehr" class="u-url mention">@<span>regehr</span></a></span> (the cost function could be simply "max delay" or "area expressed in k-LUTs" and while it's not completely trivial to compute those, this is largely a solved problem; the logic synthesis people are kind of ahead of software people here)</p>
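<p>(to make that concrete, here's a toy sketch of both cost functions over an already-mapped netlist; the dict-of-fanins representation and the unit-delay model are made up for illustration, not any real tool's data model)</p><pre><code># hypothetical netlist model: {lut: (fanin, ...)}; primary inputs never
# appear as keys. unit delay per LUT, purely for illustration.
from functools import lru_cache

def area(luts):
    return len(luts)                        # "area expressed in k-LUTs"

def max_delay(luts, lut_delay=1):
    @lru_cache(maxsize=None)
    def arrival(node):
        fanins = luts.get(node, ())
        if not fanins:                      # primary input
            return 0
        return lut_delay + max(arrival(f) for f in fanins)
    return max(arrival(n) for n in luts)    # "max delay" = critical path depth

luts = {"l1": ("a", "b"), "l2": ("l1", "c"), "l3": ("l2", "d")}
print(area(luts), max_delay(luts))          # 3 3
</code></pre>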
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@regehr" class="u-url mention">@<span>regehr</span></a></span> so actually the way most logic synthesizers do techmapping (conversion of sea-of-gates to sea-of-large-lookup-tables) is by cut enumeration, and oftentimes there is a SAT solver involved for area reduction or removing irrelevant inputs or stuff</p><p>we have a LUT mapper that doesn't do any of that but it's not very good. so there's a good chance our _normal_ flow will involve a SAT/SMT solver in it</p>
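<p>(for the curious, a toy sketch of what k-feasible cut enumeration looks like; the netlist representation is made up and a real mapper prunes and ranks cuts much more carefully)</p><pre><code># hypothetical netlist model: {node: (fanin, fanin, ...)}; primary inputs
# never appear as keys. nothing here resembles a production mapper.
from itertools import product

def topo_order(netlist):
    seen, order = set(), []
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for f in netlist.get(n, ()):
            visit(f)
        order.append(n)
    for n in netlist:
        visit(n)
    return order

def enumerate_cuts(netlist, k=4, limit=8):
    """for each node, collect sets of at most k nodes whose values determine it"""
    cuts = {}
    for node in topo_order(netlist):
        node_cuts = [frozenset([node])]               # the trivial cut
        fanins = netlist.get(node, ())
        if fanins:
            for combo in product(*(cuts[f] for f in fanins)):
                merged = frozenset().union(*combo)
                if len(merged) > k or merged in node_cuts:
                    continue
                node_cuts.append(merged)
        cuts[node] = node_cuts[:limit]                # crude pruning
    return cuts

# tiny sea-of-gates example: f = (a and b) or (c and d)
netlist = {"g1": ("a", "b"), "g2": ("c", "d"), "f": ("g1", "g2")}
for cut in enumerate_cuts(netlist)["f"]:
    print(sorted(cut))
</code></pre>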
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@dabeaz" class="u-url mention">@<span>dabeaz</span></a></span> "Here's my IP address, I'll have `netcat` listening on port 42069. Pipe the output to `gpg` with this key, then into `tar` to extract the files."</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@regehr" class="u-url mention">@<span>regehr</span></a></span> it's not an optimization, it's a legalization (match/assign are not made of gates), and it's not at all clear that it's incorrect</p><p>what happens is that the match legalizer builds this decision tree:</p><p>%0+0 = 0 =><br /> %0+1 = 0 => %_5<br /> %0+1 = 1 => %_8<br />%0+0 = 1 => %_8</p><p>and then implements it by using `eq` on the inputs. `eq` outputs X if there are any X in the inputs and the comparison isn't a priori false (i.e. doesn't have non-X input bits that disagree)</p>
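<p>(a little model of that `eq` rule, just to spell the semantics out; representing bits as 0, 1, or None-for-X is purely illustrative, not how anything is actually stored)</p><pre><code># bits are 0, 1, or None (meaning X)
def eq(a, b):
    # any pair of non-X bits that disagree makes the comparison a priori false
    if any(x is not None and y is not None and x != y for x, y in zip(a, b)):
        return 0
    # otherwise any X bit makes the result X
    if any(x is None or y is None for x, y in zip(a, b)):
        return None
    return 1

print(eq([1, None], [0, 0]))   # 0: known bits disagree, so X doesn't matter
print(eq([0, None], [0, 0]))   # None, i.e. X: can't tell
print(eq([0, 1],    [0, 1]))   # 1
</code></pre>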
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> now that you've done this, you've taken care of perhaps the hardest part of writing a superoptimizer. what remains is coming up with a good cost function (also hard) and coming up with a way to guess candidate optimizations. we usually use enumeration which is dumb but simple; the modern way to do this would be to ask an LLM</p>
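<p>(the enumeration approach, sketched: generate candidate expressions of increasing size and keep the cheapest one with the same behaviour; the node-count cost and truth-table equivalence check here are stand-ins for a real cost function and a SAT/SMT check)</p><pre><code>from itertools import product

def candidates(n_inputs, rounds=2):
    """yield (cost, expr, truth_table) for small boolean expressions"""
    points = list(product((0, 1), repeat=n_inputs))
    pool = [(1, f"x{i}", tuple(p[i] for p in points)) for i in range(n_inputs)]
    yield from pool
    ops = {"and": lambda a, b: a and b,
           "or":  lambda a, b: a or b,
           "xor": lambda a, b: a ^ b}
    for _ in range(rounds):
        new = []
        for ca, ea, ta in pool:
            new.append((ca + 1, f"(not {ea})", tuple(1 - v for v in ta)))
            for cb, eb, tb in pool:
                for name, fn in ops.items():
                    new.append((ca + cb + 1, f"({name} {ea} {eb})",
                                tuple(fn(u, v) for u, v in zip(ta, tb))))
        pool += new
        yield from new

def superopt(target_tt, n_inputs):
    best = None
    for cost, expr, tt in candidates(n_inputs):
        if tt == target_tt and (best is None or best[0] > cost):
            best = (cost, expr)
    return best

# target: (or (and x0 x1) (and x0 (not x1))), which should collapse to x0
print(superopt(target_tt=(0, 0, 1, 1), n_inputs=2))   # (1, 'x0')
</code></pre>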
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> why not just remove the incorrect optimization?</p>
<p>it's really refreshing to have a compiler that automatically tells you when your optimization is invalid under conditions you might never even think to reproduce in testing</p><p>it is also very annoying to have it crash your entire flow over a violation of rules you don't know how to fix</p>
<p>i just finished implementing X-propagation semantics in the SMT-based transformation verifier (i.e. it now uses refinement and not equivalence) and as a result i am suddenly at war with the law of excluded middle</p><p>the pass that lowers `match` cells thinks that (or x (not x)) is always true. which it isn't! and so the transformation is invalid under our refinement rules</p><p>no idea what to do about it :D</p>
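<p>(roughly what "refinement, not equivalence" means here, sketched with z3 using a (value, known) pair per bit; this is an illustration of the rule, not the verifier's actual encoding)</p><pre><code>from z3 import Bools, And, Or, Not, Implies, Solver, unsat

def refines(spec, impl):
    """impl refines spec iff wherever spec is known, impl is known and agrees"""
    spec_val, spec_known = spec
    impl_val, impl_known = impl
    ok = Implies(spec_known, And(impl_known, impl_val == spec_val))
    s = Solver()
    s.add(Not(ok))                  # look for a counterexample
    return s.check() == unsat

x_val, x_known = Bools("x_val x_known")   # x_known == False means x is X

# X-propagating (or x (not x)): the value is trivially 1, but it is only
# *known* when x itself is known
lem     = (Or(x_val, Not(x_val)), x_known)
const_1 = (True, True)

print(refines(spec=lem, impl=const_1))   # True: making an X output more defined is fine
print(refines(spec=const_1, impl=lem))   # False: turning a known 1 into X is not
</code></pre>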