<oembed><type>rich</type><version>1.0</version><author_name>Ivan (npub1q8…aw95u)</author_name><author_url>https://nostr.ae/npub1q8w7u2ymrghfpp6v5dpg57jpgaj2djk340a2npwzq8n64ksk6wxq8aw95u</author_url><provider_name>njump</provider_name><provider_url>https://nostr.ae</provider_url><html>Good morning, Nostr. Who&#39;s running local LLMs? What are the best models for coding that can run at home on a beefy PC? In 2026, I want to dig into local LLMs more and rely less on Claude and Gemini. I know I can use Maple for more private AI, but I prefer running my own model. I also like that there are no restrictions on models run locally. I know hardware is the bottleneck here; hopefully, these things become more efficient. </html></oembed>