Saying ‘but humans make mistakes too’ is not a valid argument for allowing LLM-generated code contributions.

- Humans make mistakes in ways that we have experience accounting for.
- Humans can be held accountable for the mistakes they make.
- Humans can gain a real understanding of a codebase and the decisions that went into it.

LLMs can’t do any of that, and the humans who use them usually can’t either.

#LLM #genAI