<p>Q: how do you notice that Element got updated<br />A: more UI breakage</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@grimalkina" class="u-url mention">@<span>grimalkina</span></a></span> do you think that a ready-to-use course on applied statistics / research methods for undergraduate computer science students would help? or do you feel it's unlikely to be adopted / unlikely to have impact / would be mis-delivered ("the blind leading the blind") / take too long to pay off?</p>
<p>I think we can't just fight by being like, "numbers are bad! Measurement is scary!" I think we can fight by saying we too are measurement experts. Numbers are highly critiquable. The representation of human ability is not always a quantification problem, but we ALSO DESERVE to be in that room when things are being quantified. That has been a long-standing mission in my life, and a reason I believe statistical and evidence literacy need to be in our pockets, not something we fear.</p>
<p>I don't have all the answers right now, but I know that there is a vacuum, and toxic decisions about human beings are waiting to rush in and fill it at a moment when people are being treated as disposable. This is why I believe that a collective, social, and shared model for problem-solving and software innovation is a piece of the cure and the medicine and the hope.</p>
<p>We need to establish that some of this work is human subjects work. The ethics, history, failures to learn from, and practice of applied research, behavioral science, and intervention science are all unbelievably ignored in favor of "engineering expertise" and ridiculous credibility contests. Yes, even in "human-centered" software spaces, rarely are care and effort taken to learn from social sci even when *people are talking about doing social science*!!</p>
<p>We need genuine data literacy. Too much time and energy is trapped in arguing about the most obvious facts of bad methods, bad statistics, and bad measurement. We need to define the problems of data missingness and poor operationalization. We need a strong, robust working model of developer problem-solving, which will give us a shared language for what is NOT being measured. We need something to move toward, not just something to flail against.</p>
<p>What do we do differently with the measurement of software work? What do we do if we don't want to become victims of whatever foolish analysis can game this & use the right keywords or a scary N size to get VCs to amplify every message?</p><p>I have some thoughts, and they were forged in the fires of working with students with significant missing data, of working with data on learning outcomes at scale, and of working in contexts where careful work gets screamed at and bullshit gets a promotion.</p>
<p><span class="h-card" translate="no"><a href="https://mastodon.social/@whitequark" class="u-url mention">@<span>whitequark</span></a></span> absolutely useless-ass company</p>
<p><span class="h-card" translate="no"><a href="https://chaos.social/@gsuberland" class="u-url mention">@<span>gsuberland</span></a></span> even broadcom's major big customers hate broadcom, as far as i know</p>