Whole-known-network
<p>aaaaand there it is. Expected frame length (set by writing to FETHTX.LENGTH) is 5d4; we send 5d8 bytes of data, *and those extra bytes got pushed into the FIFO*.</p>
<p>Waiting for a LA to compile into that part of the design, but at a high level I can see the bug now. The MDMA is rounding the frame up to a 64-bit boundary while writing to the TX FIFO.</p><p>This should be OK: the existing datapath already rounds up to a 32-bit boundary, and I had a register in the TX buffer where I write the *actual* frame length so I know to ignore the trailing padding.</p><p>Something there probably checks == where it should check >=, or similar, so once there's more than one word of padding it breaks.</p>
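<p>To make the failure mode concrete, here's a purely illustrative Python model — not the actual RTL; the word widths and the trim-by-expected-word-count logic are my assumptions — of how a 64-bit pad can survive trimming logic that only anticipated 32-bit padding:</p>

```python
def round_up(n: int, align: int) -> int:
    """Round n up to the next multiple of align."""
    return (n + align - 1) // align * align

def dma_write_frame(fifo: bytearray, payload: bytes, align: int) -> None:
    """Model of the MDMA: writes the frame into the TX FIFO,
    zero-padding it out to an 'align'-byte boundary."""
    pad = round_up(len(payload), align) - len(payload)
    fifo.extend(payload + b"\x00" * pad)

def pop_frame_buggy(fifo: bytearray, frame_len: int) -> bytes:
    """Hypothetical reader: pops ceil(frame_len / 4) 32-bit words and
    trims to frame_len. Correct while the writer pads to a 32-bit
    boundary (the pad always fits inside the last word); with 64-bit
    padding a whole extra pad word can remain in the FIFO and bleed
    into the next frame."""
    nbytes = round_up(frame_len, 4)
    out = bytes(fifo[:nbytes])[:frame_len]
    del fifo[:nbytes]
    return out

fifo = bytearray()
dma_write_frame(fifo, b"\xaa" * 0x5D4, align=8)   # DMA now pads to 64 bits
assert len(fifo) == 0x5D8                         # 0x5d4 rounded up to 0x5d8
pop_frame_buggy(fifo, 0x5D4)
assert len(fifo) == 4                             # stale pad word left behind
```

<p>With 32-bit padding, trimming to frame_len also happens to empty the frame's FIFO segment; with 64-bit padding those two conditions diverge, which is the >= vs == flavor of bug suspected here.</p>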
<p>Very interesting.</p><p>It's *not* the offload per se; the offload is just triggering it.</p><p>All of my frames have four trailing 0x00 bytes, and the offload's calculated checksum is off by 0x04, consistently.</p><p>I suspect that somewhere further up the chain the 32 vs 64 bit path is adding trailing data to frames that shouldn't be there.</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@shriramk" class="u-url mention">@<span>shriramk</span></a></span> I think people are mostly just reflexively talking down something they're scared of. (And not without reason!)</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@shriramk" class="u-url mention">@<span>shriramk</span></a></span> how does one verify that the prompt is fulfilled both beautifully and accurately?</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@DRMacIver" class="u-url mention">@<span>DRMacIver</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@shriramk" class="u-url mention">@<span>shriramk</span></a></span> Yes, I have a feeling that this prompt although it might seem magic actually falls solidly into what LLMs do best.</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@film_girl" class="u-url mention">@<span>film_girl</span></a></span> :(</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@shriramk" class="u-url mention">@<span>shriramk</span></a></span> TBF the comparison is auto*complete* not auto*correct*, which is genuinely a pretty good mental model of what the LLM is actually doing, even if it very much undersells how well it's doing it.</p>
<p>It's very fashionable to keep criticizing LLMs as "glorified autocorrect". I'm curious how one explains the ability to execute this prompt beautifully as "autocorrect". (And yes, I've used many other language-learning systems: Duolingo, Mango, Pimsleur, etc.)</p>