{"type":"rich","version":"1.0","author_name":"Ape Mithrandir (npub16d…kh6vy)","author_url":"https://nostr.ae/npub16dsu2ghu9yd363utyw7dejqc8rgx5ps7r4hxddrvx2x64dh73p7qqkh6vy","provider_name":"njump","provider_url":"https://nostr.ae","html":"Depends a lot on the machine specs: VRAM, RAM, CPU, GPU, etc.\nYou can check model VRAM usage here:\nhttps://apxml.com/tools/vram-calculator\n\nAlso, people mostly run already-trained models locally, since training a model from scratch requires far too many resources. You can, however, use tools like OpenWebUI to create a chat frontend for Ollama and then add a knowledge base to the chat to help direct the conversation with additional data."}
