<p><span class="h-card" translate="no"><a href="https://scholar.social/@wim_v12e" class="u-url mention">@<span>wim_v12e</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@ednl" class="u-url mention">@<span>ednl</span></a></span> Correct:<br />llama3.2: "1B and 3B models"<br />tinyllama: "1.1B Llama model"<br />phi: "2.7B model"</p><p>Thanks for the very informative reply!</p><p>I wonder how those compare with Apple's built-in GPUs. My understanding is that ollama can't use Apple Silicon GPUs from Docker, but the pre-compiled Mac builds will use them. I would assume the performance is fairly comparable.</p>
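<p>A quick way to check, I believe (just a sketch, assuming a reasonably recent ollama release; llama3.2 is only the example model from above): with the native macOS build, run "ollama run llama3.2" and then "ollama ps" — the PROCESSOR column should read "100% GPU" when the Metal backend is in use. Doing the same inside the official Docker image ("docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama", then "docker exec -it ollama ollama ps") on Apple Silicon should read "100% CPU".</p>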