2026-01-12 04:45:02 UTC

Ape Mithrandir on Nostr:

Depends a lot on the machine specs: VRAM, RAM, CPU, GPU, etc.
You can check model VRAM usage here:
https://apxml.com/tools/vram-calculator
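
If you just want a quick back-of-envelope number without the calculator, a common rule of thumb is parameters times bytes per parameter, plus some overhead for the KV cache and activations. A minimal sketch of that estimate (the ~20% overhead factor is an assumption, not the calculator's exact method):

```python
# Rough VRAM estimate: params * bytes-per-param, plus ~20% overhead
# for KV cache and activations (crude rule of thumb only).
def estimate_vram_gb(params_billions: float, bits_per_param: int,
                     overhead: float = 1.2) -> float:
    bytes_per_param = bits_per_param / 8
    return params_billions * bytes_per_param * overhead

for bits in (16, 8, 4):  # fp16, int8, 4-bit quantized
    print(f"7B model @ {bits}-bit: ~{estimate_vram_gb(7, bits):.1f} GB VRAM")
```

So a 7B model needs roughly 17 GB at fp16 but only around 4 GB with 4-bit quantization, which is why quantized models are the usual choice for consumer GPUs.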

Also, people mostly run already-trained models locally, since training a model from scratch requires far too many resources. You can, however, use tools like OpenWebUI to create a chat frontend for Ollama and then add a knowledge base to the chat to help direct the conversation with additional data.
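
Under the hood, OpenWebUI just talks to Ollama's local REST API, which you can also hit directly. A minimal sketch, assuming Ollama is running on its default port 11434 and a model has already been pulled ("llama3" here is just an example name):

```python
# Minimal sketch: query a local Ollama server over its REST API.
# Assumes Ollama is running on localhost:11434 and "llama3" is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```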