<p><span class="h-card" translate="no"><a href="https://scholar.social/@wim_v12e" class="u-url mention">@<span>wim_v12e</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@ednl" class="u-url mention">@<span>ednl</span></a></span> Correct:<br />llama3.2: &quot;1B and 3B models&quot;<br />tinyllama: &quot;1.1B Llama model&quot;<br />phi: &quot;2.7B model&quot;</p><p>Thanks for the very informative reply!</p><p>I wonder how those compare with Apple&#39;s built-in GPUs. My understanding is that ollama can&#39;t use Apple Silicon GPUs from Docker but the pre-compiled Mac builds will do so. I would assume it&#39;s fairly comparable.</p>