<p><span class="h-card" translate="no"><a href="https://mastodon.social/@shriramk" class="u-url mention">@<span>shriramk</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.nu/@richcarl" class="u-url mention">@<span>richcarl</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@DRMacIver" class="u-url mention">@<span>DRMacIver</span></a></span> The coherent output that state-of-the-art transformer-based LLMs produce over global, long-range dependencies in data is a deliberate design feature of their architecture, and certainly blew everyone&#39;s expectations out of the water. It&#39;s still, however, an auto&quot;complete&quot; because these architectures still don&#39;t have any *intentionality* designed into them, and still function by producing *most probable* outputs.</p>