<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> <span class="h-card" translate="no"><a href="https://ioc.exchange/@azonenberg" class="u-url mention">@<span>azonenberg</span></a></span> I'm currently struggling with dumping an ESMT F50-series QSPI NAND flash. flashrom doesn't support it, and loading the spinand.ko module on a Raspberry Pi detects the chip correctly but fails to access its contents. My suspicion is the SPI controller, so I thought about synthesizing a Cadence QSPI controller into a Zynq-7000 and accessing it from PetaLinux.<br />I have no idea how much effort it would be to write a SPI-NAND applet for Glasgow.</p>
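For context on what "accessing its contents" involves: most SPI-NAND parts, the ESMT F50 series included, follow a de-facto standard command set — 13h loads a page into the on-die cache, 0Fh/C0h polls the status register, and 6Bh streams the cache out on four data lanes. A minimal sketch of building those command frames (byte layouts assumed from the common SPI-NAND convention; verify against the F50 datasheet):

```python
def page_read_to_cache(row_addr: int) -> bytes:
    # 13h + 24-bit row address: load the selected page into the chip's cache
    return bytes([0x13]) + row_addr.to_bytes(3, "big")

def get_feature_status() -> bytes:
    # 0Fh C0h: read the status register; bit 0 (OIP) indicates busy
    return bytes([0x0F, 0xC0])

def read_from_cache_quad(col_addr: int) -> bytes:
    # 6Bh + 16-bit column address + one dummy byte, then data on 4 I/O lanes
    return bytes([0x6B]) + col_addr.to_bytes(2, "big") + b"\x00"
```

On a working controller these frames would be shifted out over the SPI bus, with the 6Bh transfer switched to quad output mid-transaction — which is exactly the step a limited SPI controller can fail at.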
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> Also, most of the cheap FPGA devkits don't have an MCU, and have either 10/100 Ethernet or USB 2.0 as the only PC interface.</p><p>Dumptruck has an RGMII PHY direct to the FPGA, plus an STM32H735 connected to the FPGA by a fast memory-mapped interface (&gt;500 Mbps throughput).</p><p>Right now all of the dumping algorithms are software driven, as is the TCP/IP stack. Over time, as I build out an accelerated processing flow and move more of the datapath to the FPGA, I expect performance to skyrocket. There's a lot of round-tripping now.</p>
<p><span class="h-card" translate="no"><a href="https://ioc.exchange/@azonenberg" class="u-url mention">@<span>azonenberg</span></a></span> oh yeah that makes sense</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> It's also going to serve as a general purpose swiss-army-knife devkit for talking to random embedded stuff with a ton of IO at weird voltage levels.</p><p>The FPGA has four full 7 series IO banks broken out (200 pins, 50 each at 3.3 / 2.5 / 1.8 / 1.2) with no level shifters, just directly to a socket.</p><p>A lot of the low cost entry level devkits only have like a few PMODs, and the high end ones have expensive FMC connectors at only like one voltage level.</p><p>There was a gap for a low cost board with a ton of IO at all different voltages.</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> Yeah. Dumptruck started out as a result of two consecutive gigs at work running into problems dumping a) ONFI with 2.5V core and 1.2V IO and b) 47-pin parallel async NOR.</p><p>So I decided to build a dumper that was capable enough in hardware that I'd never have to touch the hardware again.</p><p>Now I'm building out the gateware/software stack to make it fast and capable; QSPI is the "crash test dummy" for that phase of the project as it's simple and I have a ton of them lying around that don't have client data on them (i.e. I can use them freely without any NDA encumbrances).</p>
<p><span class="h-card" translate="no"><a href="https://ioc.exchange/@azonenberg" class="u-url mention">@<span>azonenberg</span></a></span> ah I see. I reverse-engineered the ATF15xx for that, but nobody got around to making a usable P&amp;R, so it's kind of in limbo</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> Well, yeah. I never said this was at all the final performance I was going to get.</p><p>There's multiple unnecessary pipeline stages, reads to a buffer that I then read separately, etc. It can get waaay faster.</p><p>But the big thing dumptruck was built for wasn't speed, it was exotic things that need a lot of pins and weird IO voltage levels.</p><p>Most notably parallel NOR flash that needs like 45 pins to interface with it.</p>
<p><span class="h-card" translate="no"><a href="https://ioc.exchange/@azonenberg" class="u-url mention">@<span>azonenberg</span></a></span> oh, i hit this on Glasgow long ago :D</p>
<p>Initial quad read support in DUMPTRUCK dumping at 6.3 MB/s (50.4 Mbps). There's lots of room to make this faster still...</p>
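The unit conversion in the figure above checks out with one line of arithmetic (decimal megabytes, 8 bits per byte):

```python
# 6.3 MB/s expressed in megabits per second
mb_per_s = 6.3
mbps = mb_per_s * 8
print(mbps)  # 50.4
```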