{"type":"rich","version":"1.0","author_name":"Ivan (npub1q8…aw95u)","author_url":"https://nostr.ae/npub1q8w7u2ymrghfpp6v5dpg57jpgaj2djk340a2npwzq8n64ksk6wxq8aw95u","provider_name":"njump","provider_url":"https://nostr.ae","html":"Good morning, Nostr. Who's running local LLMs? What are the best models for coding that can run at home on a beefy PC? In 2026, I want to dig into local LLMs more and rely less on Claude and Gemini. I know I can use Maple for more private AI, but I prefer running my own model. I also like that models run locally have no restrictions. I know hardware is the bottleneck here; hopefully, these things become more efficient."}
