<p>aaaaand there it is. Expected frame length (set by writing to FETHTX.LENGTH) is 0x5d4, but we send 0x5d8 bytes of data, *and those extra bytes got pushed into the FIFO*.</p>
<p>Waiting for a LA to compile into that part of the design, but at a high level I can see the bug now. The MDMA is rounding the frame up to a 64-bit boundary while writing to the TX FIFO.</p><p>This should be OK in principle: the existing datapath already rounds up to a 32-bit boundary, and I have a register in the TX buffer where I write the *actual* frame length so the trailing padding gets ignored.</p><p>Something there probably didn&#39;t account for a &gt;= vs == case, so once you add more than one word of padding it breaks.</p>
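<p>A quick sketch of why the two paths disagree by exactly four bytes here: 0x5d4 is already 32-bit aligned, so the old datapath adds no padding, while rounding to a 64-bit boundary pads out to 0x5d8. (This is an illustration of the arithmetic, not the actual RTL.)</p>

```python
def round_up(n: int, boundary: int) -> int:
    """Round n up to the next multiple of a power-of-two boundary."""
    return (n + boundary - 1) & ~(boundary - 1)

frame_len = 0x5D4                     # length programmed into FETHTX.LENGTH
print(hex(round_up(frame_len, 4)))    # 32-bit datapath: 0x5d4, no padding added
print(hex(round_up(frame_len, 8)))    # 64-bit MDMA path: 0x5d8, 4 extra bytes
```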
<p>Very interesting.</p><p>It&#39;s *not* the offload per se; the offload is just triggering it.</p><p>All of my frames have four trailing 0x00 bytes, and the offload&#39;s calculated checksum is off by 0x04, consistently.</p><p>I suspect that somewhere further up the chain the 32- vs 64-bit path is adding trailing data to frames that shouldn&#39;t have it.</p>
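<p>For anyone following along: trailing zero bytes contribute nothing to a ones&#39;-complement sum, so the interesting part is that the frame *length* also feeds into the checksum (as in a TCP/IP-style pseudo-header), and four extra bytes of length shift the result by exactly 0x04. A rough Python sketch of that arithmetic, assuming a hypothetical length-plus-payload sum rather than the offload&#39;s actual logic:</p>

```python
def csum16(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over big-endian 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total >> 16:                  # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def csum_with_len(payload: bytes) -> int:
    """Hypothetical: checksum over a 16-bit length field plus the payload."""
    return csum16(len(payload).to_bytes(2, "big") + payload)

frame = b"\xab" * 16
padded = frame + b"\x00" * 4            # four trailing zero bytes
# The zero padding adds nothing to the sum, but the length field grew by 4,
# so the two checksums differ by exactly 0x04.
print(hex(csum_with_len(frame) - csum_with_len(padded)))  # 0x4
```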
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@shriramk" class="u-url mention">@<span>shriramk</span></a></span> I think people are mostly just reflexively talking down something they&#39;re scared of. (And not without reason!)</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@shriramk" class="u-url mention">@<span>shriramk</span></a></span> how does one verify that the prompt is fulfilled both beautifully and accurately?</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@DRMacIver" class="u-url mention">@<span>DRMacIver</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@shriramk" class="u-url mention">@<span>shriramk</span></a></span> Yes, I have a feeling that this prompt, although it might seem like magic, actually falls solidly into what LLMs do best.</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@film_girl" class="u-url mention">@<span>film_girl</span></a></span> :(</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@shriramk" class="u-url mention">@<span>shriramk</span></a></span> TBF the comparison is auto*complete* not auto*correct*, which is genuinely a pretty good mental model of what the LLM is actually doing, even if it very much undersells how well it&#39;s doing it.</p>
<p>It&#39;s very fashionable to keep dismissing LLMs as &quot;glorified autocorrect&quot;. I&#39;m curious how one explains the ability to execute this prompt beautifully as mere &quot;autocorrect&quot;. (And yes, I&#39;ve used many other language-learning systems: Duolingo, Mango, Pimsleur, etc.)</p>