Whole-known-network
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@neuronakaya" class="u-url mention">@<span>neuronakaya</span></a></span> eagles? I thought today was all about <a href="https://ecoevo.social/tags/SuperbOwls" class="mention hashtag" rel="tag">#<span>SuperbOwls</span></a></p>
<p>Who are The Eagles? I've never watched a single <a href="https://mastodon.social/tags/SuperBowlLIX" class="mention hashtag" rel="tag">#<span>SuperBowlLIX</span></a>. lol. I guess here in Asia, these sports are not that popular. :/</p>
<p><span class="h-card" translate="no"><a href="https://mathstodon.xyz/@cstheory" class="u-url mention">@<span>cstheory</span></a></span> Totally. They're amazing.</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@gamingonlinux" class="u-url mention">@<span>gamingonlinux</span></a></span> could you please add alt text to your images?<br />I would be incredibly thankful</p>
<p><span class="h-card" translate="no"><a href="https://hachyderm.io/@cliffle" class="u-url mention">@<span>cliffle</span></a></span> oof, i'm sorry to hear that</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@samth" class="u-url mention">@<span>samth</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@wingo" class="u-url mention">@<span>wingo</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@sayrer" class="u-url mention">@<span>sayrer</span></a></span> </p><p>Wait up. Let's not confuse "large heaps" with "generous heaps". This is the number one fallacy I hear in these discussions.</p><p>When someone says they ran "in a 100GB heap", they're not saying anything about how generous that heap is, or how much DRAM they're burning to get that number. Perhaps G1 could have run in 5GB with similar performance. Unless the min heap is stated, such comments are meaningless or downright misleading.</p><p>Collectors trade time for space, and this family of collectors generally needs massive headroom to get okay performance. You can see that plain as day in our paper.</p><p>What people sometimes allude to is that because it does not copy concurrently and is strictly copying, G1 will have very costly pauses when the live heap is very large. This is where C4 et al. are better. LXR also addresses this, without the massive space overhead required.</p>
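<p>A hedged sketch of the min-heap point above (the class name and figures here are illustrative, not from the thread): on the JVM, "ran in a 100GB heap" only pins the ceiling the collector was allowed, not how much it actually needed. The snippet reads the JVM's own view of provisioned vs. used heap; finding the true min heap would mean shrinking -Xmx until performance degrades, which this does not do.</p>

```java
// Sketch: the JVM's own view of heap provisioning vs. actual use.
// "Ran in a 100GB heap" only fixes maxMemory() (the -Xmx ceiling);
// it says nothing about the smallest heap the collector could have
// delivered similar performance in.
public class HeapReport {
    // Returns {max, committed, used} in MB.
    static long[] heapMb() {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();               // -Xmx ceiling
        long committed = rt.totalMemory();       // memory actually reserved
        long used = committed - rt.freeMemory(); // occupied part of committed
        return new long[] { max >> 20, committed >> 20, used >> 20 };
    }

    public static void main(String[] args) {
        long[] h = heapMb();
        System.out.printf("max=%dMB committed=%dMB used=%dMB%n",
                h[0], h[1], h[2]);
    }
}
```

<p>Run with, say, <code>java -Xmx100g HeapReport</code>: the printed max is the "100GB heap" claim, and it tells you nothing about how small the heap could have been.</p>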
<p>being trans is just speedrunning the complete disappearance of any trust towards the healthcare system you might've had left</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> if the LLM is returning things that tend to be the "right shape" more than other methods, then this makes sense.</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@samth" class="u-url mention">@<span>samth</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@wingo" class="u-url mention">@<span>wingo</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@sayrer" class="u-url mention">@<span>sayrer</span></a></span></p><p>I think you're asking why the industry made such inefficient collectors, given they're not fools.</p><p>In short, there was little incentive to do better. There are two high-level issues: 1. Methodology. 2. Lack of innovation.</p><p>The costs we expose simply were not being measured. Two broad failings: not understanding overhead (addressed by LBO), and not measuring user-experienced latency.</p><p>You can "solve" the first one by discounting resources (making out that they're free). Gil Tene has often pointed out that most cloud VMs are priced so that they come vastly over-provisioned with memory, so use it!! The fallacy here is that ultimately someone is paying for that memory and the power used to keep it on. This is why those collectors are not used by data-center-scale applications, where the cost of those overheads is measured in the tens of millions of dollars. Second, most GC vendors focus on "GC pause time," which is a proxy metric that doesn't translate to user-experienced pauses, which is where the rubber hits the road.</p><p>With that backdrop, and those incentives, the lack of innovation is unsurprising. G1, C4, Shenandoah, and ZGC all share the same basic design, which is fundamentally limited by its complete dependence on tracing (slow reclamation, high drag) and on strictly copying, which is brutally expensive when made concurrent. (See our LXR paper for a completely different approach.)</p><p>So it should be no surprise that companies like Google are so dependent on C++: those collector costs are untenable for many, if not most, data-center-scale applications.</p>
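<p>The "GC pause time vs. user-experienced latency" distinction can be made concrete with a tiny sketch in the spirit of Gil Tene's jHiccup (an illustrative approximation, not the paper's methodology): instead of trusting the collector's reported pause times, measure stalls from the application's side.</p>

```java
// Sketch: a minimal "hiccup meter". Sleep a fixed interval and record
// how far past the deadline the thread actually wakes up. Any stall --
// GC pause, OS scheduling delay, page fault -- shows up as overshoot,
// measured from the application's point of view rather than the GC's.
public class HiccupMeter {
    // Runs for roughly runNanos; returns the worst overshoot in ms.
    static long worstHiccupMs(long runNanos, long intervalMs)
            throws InterruptedException {
        long worst = 0;
        long end = System.nanoTime() + runNanos;
        while (System.nanoTime() < end) {
            long before = System.nanoTime();
            Thread.sleep(intervalMs);
            long overshoot =
                    (System.nanoTime() - before) / 1_000_000 - intervalMs;
            worst = Math.max(worst, overshoot);
        }
        return worst;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worst observed hiccup: "
                + worstHiccupMs(2_000_000_000L, 1) + " ms");
    }
}
```

<p>A collector can report short "GC pauses" while this number is still large, which is exactly the proxy-metric gap described above.</p>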