<oembed><type>rich</type><version>1.0</version><author_name>Ape Mithrandir (npub16d…kh6vy)</author_name><author_url>https://nostr.ae/npub16dsu2ghu9yd363utyw7dejqc8rgx5ps7r4hxddrvx2x64dh73p7qqkh6vy</author_url><provider_name>njump</provider_name><provider_url>https://nostr.ae</provider_url><html>It depends a lot on the machine specs: VRAM, RAM, CPU, GPU, etc.&#xA;You can estimate a model's VRAM usage here:&#xA;https://apxml.com/tools/vram-calculator&#xA;&#xA;Also, people mostly run already-trained models locally, since training a model from scratch requires far too many resources. You can, however, use tools like OpenWebUI as a chat frontend for Ollama and attach a knowledge base to the chat to ground the conversation with additional data.</html></oembed>