<p><span class="h-card" translate="no"><a href="https://mastodon.social/@samth" class="u-url mention">@<span>samth</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@wingo" class="u-url mention">@<span>wingo</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@sayrer" class="u-url mention">@<span>sayrer</span></a></span> <br />I think you made that very same confusion in your response: “heaps from 100g up”.</p><p>What do you mean?</p><p>If you mean that these algorithms are designed only for environments with 64X memory *headroom* (as in the example you gave), then fine, but we need to call that out explicitly.</p><p>If you mean that these algorithms are designed for *live heaps* of 100GB, then fine, but you need to be clear about that, and point out what these applications are. For example, are they pointer-rich? Or are they filled with vast arrays of non-pointer data? Who uses these workloads?</p><p>My take is that it is absolutely not okay to hide behind this ambiguous language.</p><p>If your algorithm requires 64X headroom, say that clearly.</p><p>If you’ve built for apps with 100GB of live data, tell us what they are and, getting back to the paper, contribute benchmarks.</p>