<p>I don't think I've read anywhere of any violence at <a href="https://mastodon.sdf.org/tags/handsoff" class="mention hashtag" rel="tag">#<span>handsoff</span></a> </p><p>That's remarkable</p><p>(You KNOW MAGA chuds would be squawking all over if there was even a minor altercation)</p><p>The turnout from older people is also remarkable - anyone with an IRA that counts on that to help pay their bills has just seen the funds cut by almost a third in the past few weeks - many of these folks no doubt voted <a href="https://mastodon.sdf.org/tags/Republican" class="mention hashtag" rel="tag">#<span>Republican</span></a> (but not any more, I'll wager)<br /> <br />The <a href="https://mastodon.sdf.org/tags/GOP" class="mention hashtag" rel="tag">#<span>GOP</span></a> has just lost a big segment of their reliable base</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> re initial indexing speed: if you use ninja, there is a recently merged PR (which should be in ninja 1.13) that may help with this if your project has multiple executable/lib targets:</p><p><a href="https://github.com/ninja-build/ninja/pull/2497" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">github.com/ninja-build/ninja/p</span><span class="invisible">ull/2497</span></a></p><p>so e.g. if your project builds an executable 'foo', then:</p><p><code>ninja -t compdb-targets foo &gt; compile_commands.json</code></p><p>...gives you a compilation database with all commands required to build foo, and *only* those commands.</p>
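A minimal sketch of that workflow, assuming ninja 1.13+ with the `compdb-targets` tool from the PR above; `gen_compdb` is a hypothetical wrapper name, and the target name is whatever your build defines:

```shell
# Hypothetical wrapper around ninja's compdb-targets tool (ninja >= 1.13).
# Writes a compilation database containing only the commands needed to
# build the given target, which clangd can then pick up and index.
gen_compdb() {
    target="$1"
    if [ -z "$target" ]; then
        echo "usage: gen_compdb TARGET" >&2
        return 1
    fi
    ninja -t compdb-targets "$target" > compile_commands.json
}

# e.g. for an executable target named 'foo':
# gen_compdb foo
```

clangd looks for compile_commands.json in the directory of the open file, its parents, or a configured build directory, so running this in the build tree is usually enough.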
<p><span class="h-card" translate="no"><a href="https://chaos.social/@ronya" class="u-url mention">@<span>ronya</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> The difference in initial indexing time comes from IntelliSense using a totally different parser to collect definitions across TUs, which is orders of magnitude faster but also much less accurate and much less detailed.</p>
<p><span class="h-card" translate="no"><a href="https://chaos.social/@ronya" class="u-url mention">@<span>ronya</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> IntelliSense is architecturally very similar to clangd here: they both use the build system to configure a full compiler frontend to run on some TU that contains the open file.</p>
<p>Today I had to read 7 abstracts and then the respective papers.</p><p>Can the authors please say in the first sentence what the paper is doing? Don't start by giving background, or saying why (you think) the problem is important.</p><p>Just please say right away what you have done. Then, if you feel it is really important, go on to give background justification of importance. Don't make me "compute" what you have done from reading the abstract.</p><p>For all seven abstracts, I've written a one- or, in some cases, two-sentence description of what they do and why we care.</p><p>An abstract where the main result/contribution is hidden in the middle of the second or third paragraph is not an abstract!</p><p>(And don't start by saying, "in this paper". Of course it is in this paper.)</p><p>1/</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> AFAIU:</p><p>IntelliSense:<br />- upside: works immediately on any code, as it’s generating the AST itself, similar to CodeQL.<br />- downside: limited ability to resolve things in preprocessor conditionals.</p><p>Clangd:<br />- upside: knows exact types and dead code for your project based on the optional compile-time settings you set. <br />- downside: basically useless until you have coaxed the current project's build system to generate a compile_commands.json and built the project once. </p><p>Correct?</p>
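For context on the clangd downside above: compile_commands.json is just a JSON array of per-TU compile commands, so a minimal hand-written entry (directory, flags, and file names here are hypothetical) looks like:

```json
[
  {
    "directory": "/home/user/project/build",
    "command": "cc -Iinclude -DNDEBUG -c ../src/main.c -o main.o",
    "file": "../src/main.c"
  }
]
```

clangd replays each command's flags through its own frontend, which is why it can resolve preprocessor conditionals exactly but knows nothing until the database exists.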
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> what really bugs me about this is M$ literally came up with the idea of language servers. How come they cannot write a proper implementation for C/C++?</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> </p><p>but if i don't use pfp they see me as an elephant</p><p>that's even worse</p>
<p>if you're using VS Code (or editors based on it) and are frustrated with C/C++ IntelliSense being incredibly slow: ditch the Microsoft language server entirely and install clangd. clangd is actually fit for purpose, although the initial indexing is vastly slower</p>
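If you do make the switch, a settings.json sketch along these lines disables the Microsoft engine in favor of clangd (setting names assume recent versions of the cpptools and vscode-clangd extensions; verify against your installed versions):

```json
{
  // Turn off the Microsoft C/C++ extension's IntelliSense engine
  // so it doesn't compete with clangd on the same files.
  "C_Cpp.intelliSenseEngine": "disabled",
  // Optional: point the clangd extension at a specific binary
  // (path is an example; omit to use the one on your PATH).
  "clangd.path": "/usr/bin/clangd"
}
```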