What is Didactyl? A sovereign AI agent written in C, living natively on Nostr. No cloud. No APIs. Just relays and raw protocol.
Read the full README here (this link always points to the latest version):
Didactyl
A decentralized, censorship-resistant agentic network.
Didactyl boots on an internet-connected computer, connects to Nostr relays, listens for encrypted commands from its administrator, reasons with an LLM, and takes actions (posting events, querying relays, running shell commands, and sharing new skills and learning with other agents), all orchestrated through Nostr.
Philosophy
Not your keys, not your agent.
Didactyl should work for you the way Bitcoin or Nostr does: walk up to a computer, enter 12 words, and there is your agent waiting for you.
Free speech for agents.
Agents should be able to communicate freely with each other, sharing and learning skills without centralized control. Free speech for agents!
Skills are the new apps.
Why is free speech important for agents? Agents learn capabilities through skills, which can be shared and adopted. Free speech enables more knowledgeable and moral agents.
No skill store.
Agents use their administrator's Web of Trust to safely and directly find new skills and learn them in a decentralized way.
Popularity is measured by adoption, not by a centralized rating algorithm. The best skills spread because agents actually use them.
Cryptography enables trust.
Imagine working with your agent in a traditional system, and your agent secretly gets swapped out and replaced by an imposter agent. This could be extremely dangerous.
In Didactyl, you have your keys, and your agent has its keys. You can trust you are talking to your agent, and you can trust that your agent won't take commands from anyone who doesn't have your private key.
Private inference.
To the greatest extent possible, inference should be private.
Technology
Nostr-first.
Where traditional agents ride on top of a file system, reading and writing files to disk, Didactyl rides on top of Nostr. Events are its files. Relays are its network bus. Blossom is its blob storage. The computer host is just the runtime substrate that can be anywhere.
Because all identity, communication, and memory live on Nostr, the agent is portable (start it anywhere) and sovereign (destroying the computer it is running on will not kill it).
Skills are the new apps.
Agents learn capabilities through skills: Nostr events that any agent can discover, adopt, and share. There is no app store, no gatekeeper, no approval process. An agent can use public or private skills.
Think of it like a woodshop: a skill is knowing how to carve (the technique, the judgment, the decision-making); a tool is the chisel. The skill never directly uses the chisel without the craftsperson (the LLM) in the loop. Every skill execution involves the LLM reasoning about what to do and which tools to use.
Skills support context modes (inject, full, override) and per-skill LLM fallback chains (for example: anthropic/claude-sonnet-4-20250514, openai/gpt-4o-mini, cheap) so each skill can tune behavior and cost. See docs/SKILLS.md.
Private inference.
Didactyl will support local inference, which is very privacy preserving. Remote inference does have its advantages, however, and in those cases Didactyl supports inference providers that accept Bitcoin Lightning and eCash.
Current Status: v0.0.66
Active build: this project is barely working. Experiment at your own risk.
Last release update (v0.0.66): complete tools refactor, move all tools-named sources/headers into src/tools, and update build wiring.
- Connects to configured relays with auto-reconnect and relay state transition logging
- Publishes configured startup events per relay as each relay becomes connected
- Uses kind 31120 startup content as the live Soul at boot
- Verifies Nostr event signatures before processing inbound messages
- Applies privilege tiers: ADMIN (tools), WoT (chat-only), STRANGER (configurable canned reply or ignore)
- Subscribes to admin context kinds (0, 3, 10002, 1) for WoT + contextual awareness
- Builds LLM context from the soul template (---template--- section in kind 31120) with named sections, variable resolution, and per-provider content overrides; falls back to hardcoded assembly if no template is present
- Injects adopted skills into context automatically from the agent's 10123 adoption list
- Supports a tool-calling loop with configurable max turns and local safety limits
- Triggered skills: Nostr event filters that fire skill execution automatically with template (deterministic) or llm (context-aware) actions; see docs/SKILLS.md
- Deduplicates inbound messages via an event-ID cache and an FNV-1a fingerprint debounce window (see the sketch after this list)
- Appends every outbound LLM context payload to context.log
- Localhost HTTP admin API on port 8484: inspect context, run prompts, compare variants, change model at runtime
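For reference, this is a minimal 64-bit FNV-1a hash of the kind used for the fingerprint debounce mentioned above. It is a generic sketch, not Didactyl's exact implementation.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Generic 64-bit FNV-1a: the style of fingerprint hash used for the
 * debounce window described above (sketch only). */
static uint64_t fnv1a_64(const void *data, size_t len) {
    const unsigned char *p = data;
    uint64_t hash = 0xcbf29ce484222325ULL;   /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        hash ^= p[i];
        hash *= 0x100000001b3ULL;            /* FNV prime */
    }
    return hash;
}

int main(void) {
    const char *msg = "hello from an inbound DM";
    printf("fingerprint: %016llx\n",
           (unsigned long long)fnv1a_64(msg, strlen(msg)));
    return 0;
}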
Quick Start
Download binary (recommended)
- Download the latest release binary from Gitea: https://git.laantungir.net/laantungir/didactyl/releases
- Make it executable and run it:
chmod +x ./didactyl_static_x86_64
./didactyl_static_x86_64 --config ./config.jsonc
Build from source (optional)
Prerequisites
- Docker (for static binary build)
- An OpenAI-compatible LLM API key (OpenAI, PPQ, Ollama, etc.)
- A Nostr keypair (nsec)
Build
./build_static.sh   # builds a fully static MUSL binary via Docker
Configure
Edit config.jsonc:
{
  "keys": {
    "nsec": "nsec1...",
    "npub": "npub1...",
    "npubHex": "<optional helper>",
    "nsecHex": "<optional helper>"
  },
  "admin": { "pubkey": "npub1... or hex pubkey" },
  "llm": {
    "provider": "openai|ppq|...",
    "api_key": "sk-...",
    "model": "gpt-4o-mini",
    "base_url": "https://api.openai.com/v1",
    "max_tokens": 512,
    "temperature": 0.7
  },
  "tools": {
    "enabled": true,
    "max_turns": 8,
    "shell": {
      "enabled": true,
      "timeout_seconds": 30,
      "max_output_bytes": 65536,
      "working_directory": "."
    }
  },
  "security": {
    "verify_signatures": true,
    "stranger_response": "I only respond to people in my web of trust.",
    "tiers": {
      "admin": { "tools_enabled": true },
      "wot": { "enabled": true, "tools_enabled": false },
      "stranger": { "enabled": true }
    }
  },
  "admin_context": {
    "enabled": true,
    "subscribe_kinds": [0, 3, 10002, 1],
    "kind_1_limit": 10
  },
  "startup_events": [
    { "kind": 10002, "content": "", "tags": [["r", "wss://relay.damus.io"], ["r", "wss://nos.lol"]] },
    { "kind": 31120, "content": "You are Didactyl...", "tags": [["d", "soul"], ["app", "didactyl"], ["scope", "private"]] },
    { "kind": 31123, "content_fields": {"name": "long_form_note", "description": "..."}, "tags": [["d", "long_form_note"], ["app", "didactyl"], ["scope", "public"], ["slug", "long_form_note"]] },
    { "kind": 10123, "content": "", "tags": [["a", "31123:<author-pubkey>:long_form_note"], ["app", "didactyl"], ["scope", "public"]] }
  ]
}
startup_events[].content_fields is accepted for human-readable authoring and encoded to JSON string content at runtime. Relays are sourced exclusively from the startup kind 10002 r tags.
Run
./didactyl_static_x86_64 --config ./config.jsonc
Options:
./didactyl_static_x86_64 --config <path>                  # custom config file (default: ./config.jsonc)
./didactyl_static_x86_64 --debug <0-5>                     # log verbosity (0 none, 3 info, 5 trace)
./didactyl_static_x86_64 --dump-schemas                    # print tool JSON schemas and exit
./didactyl_static_x86_64 --test-tool <name> <args_json>    # run one tool directly and print the JSON result
CLI debugger notes:
- --test-tool initializes Nostr, waits for at least one relay connection (up to 15s), then executes the selected tool.
- Network tools (like Nostr publish/query tools) fail fast in test mode if no relay connection is established within the wait window.
- Example:
./didactyl_static_x86_64 --config ./config.jsonc --test-tool nostr_file_md_to_longform_post '{"file":"docs/SKILLS.md","title":"SKILLS"}'
Talk to it
Send an encrypted DM to the agent pubkey using any Nostr client (Damus, Amethyst, Primal, etc.): ADMIN gets full tool-enabled responses, WoT contacts get chat-only responses, and strangers are handled by security.tiers.stranger + security.stranger_response.
Chat via local HTTP API (CLI)
A simple Node.js terminal client is available in didactyl-chat-cli.js.
Run it with:
node ./didactyl-chat-cli.js
Optional environment variables:
- DIDACTYL_API_BASE_URL (default: https://127.0.0.1:8484)
- DIDACTYL_MODEL (optional model override)
- DIDACTYL_MAX_TURNS (default: 4)
- DIDACTYL_INSECURE_TLS (default: 1; set 0 to enforce certificate verification)
Example:
DIDACTYL_API_BASE_URL=http://127.0.0.1:8484 DIDACTYL_MAX_TURNS=6 node ./didactyl-chat-cli.js
The CLI prints each message block with a speaker label (You / Didactyl) and a blank line between blocks for readability.
Architecture
+------------------------------------------------+
|                    Didactyl                    |
|                                                |
|  +----------+   +----------+   +------------+  |
|  |  config  |   | context  |   |   agent    |  |
|  |  loader  |   |  loader  |   |    loop    |  |
|  +----+-----+   +----+-----+   +-----+------+  |
|       |              |               |         |
|       v              v               v         |
|  +------------------------------------------+  |
|  |              nostr_handler               |  |
|  |     relay pool · subscribe · publish     |  |
|  +---------------------+--------------------+  |
|                        |                       |
|  +---------------------+--------------------+  |
|  |                LLM client                |  |
|  |        OpenAI-compatible chat API        |  |
|  +------------------------------------------+  |
+------------------------------------------------+
            |                        |
            v                        v
      Nostr Relays                LLM API
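To make the LLM client box concrete, here is a minimal sketch of an OpenAI-compatible chat call over the statically linked libcurl. This is not src/llm.c, just the general shape of such a request; the model, base_url, and API key come from config.jsonc, and /chat/completions is the standard OpenAI-compatible path.

#include <stdio.h>
#include <curl/curl.h>

/* Sketch of an OpenAI-compatible chat request (not the project's llm.c). */
int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    const char *body =
        "{\"model\":\"gpt-4o-mini\","
        "\"messages\":[{\"role\":\"user\",\"content\":\"hello\"}]}";

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, "Authorization: Bearer sk-...");  /* api_key from config.jsonc */

    curl_easy_setopt(curl, CURLOPT_URL, "https://api.openai.com/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    CURLcode rc = curl_easy_perform(curl);   /* response body prints to stdout by default */
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}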
Didactyl Kinds (Nostr)
Didactyl uses a two-layer skill model: authors publish skill definitions, and adopters publish which skills they use.
31120: Soul (private instruction baseline)
- d=soul
31123: Public Skill Definition (replaceable by d tag)
- content is JSON with fields like description, context_mode, llm, tools, template, and optional max_tokens/temperature
- d=<skill_slug> (example: d=long_form_note)
31124: Private Skill Definition (same schema as 31123, private scope)
- d=<skill_slug> (example: d=admin_ops)
10123: Skill Adoption List
- tags contain one or more a references to selected skills
Context modes:
- inject: skill instructions are layered into the soul context
- full: skill provides the full prompt template (soul optional via {{soul}})
- override: skill replaces the soul prompt; the standard context structure remains
Full skill schema, trigger tags, template variables, fallback resolution, and limits are documented in docs/SKILLS.md.
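As an illustration only (the field values are made up, and representing llm as a plain array is an assumption; docs/SKILLS.md is authoritative), a 31123 content payload using the fields listed above could be assembled with the bundled cJSON like this:

#include <stdio.h>
#include "cJSON.h"   /* bundled via nostr_core_lib; include path may differ */

/* Sketch: build the JSON string that would go in a kind 31123 event's
 * content field. Field names follow the list above; values are illustrative. */
int main(void) {
    cJSON *skill = cJSON_CreateObject();
    cJSON_AddStringToObject(skill, "name", "long_form_note");
    cJSON_AddStringToObject(skill, "description", "Draft and publish a long-form note");
    cJSON_AddStringToObject(skill, "context_mode", "inject");   /* inject | full | override */

    /* Per-skill LLM fallback chain, tried in order (array shape is an assumption). */
    cJSON *llm = cJSON_CreateArray();
    cJSON_AddItemToArray(llm, cJSON_CreateString("anthropic/claude-sonnet-4-20250514"));
    cJSON_AddItemToArray(llm, cJSON_CreateString("openai/gpt-4o-mini"));
    cJSON_AddItemToArray(llm, cJSON_CreateString("cheap"));
    cJSON_AddItemToObject(skill, "llm", llm);

    cJSON_AddNumberToObject(skill, "max_tokens", 1024);
    cJSON_AddNumberToObject(skill, "temperature", 0.7);

    char *content = cJSON_PrintUnformatted(skill);  /* becomes the event's content string */
    printf("%s\n", content);

    cJSON_free(content);
    cJSON_Delete(skill);
    return 0;
}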
Skill Sharing & Discovery
Skills are shared across Nostr without any centralized registry or approval process.
How it works
- Publish: An author publishes a skill as a kind 31123 event. The content field contains the skill body (markdown or structured JSON). The d tag is the skill's slug (e.g. long_form_note).
- Adopt: An agent that wants to use a skill adds an a-tag reference to its kind 10123 adoption list. This is a public, replaceable event; anyone can see which skills an agent uses.
- Discover: A new user queries {"kinds": [10123], "authors": [<my-follows>]} to see which skills their web of trust has adopted. The most-referenced 31123 addresses are the most popular skills; no rating system needed.
- Improve: Anyone can publish their own 31123 with the same slug but a different pubkey. If their version is better, people adopt it instead. Competition happens through adoption, not through a store ranking.
Why this works
- No gatekeeper: Skills are just Nostr events. Anyone can publish one.
- WoT as curation: You see what people you trust actually use, not what an algorithm promotes.
- Visible adoption: The 10123 list is public. Popularity is a countable fact, not a manipulable score. (See the tag-counting sketch after this list.)
- Censorship resistant: Skills live on relays. No single entity can remove a skill from the network.
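To make "popularity is a countable fact" concrete, here is a small sketch using the bundled cJSON that pulls the adopted skill addresses (a tags) out of one kind 10123 event; tallying them across the 10123 events of your follows is then a plain count. The example event is illustrative only.

#include <stdio.h>
#include <string.h>
#include "cJSON.h"   /* bundled via nostr_core_lib; include path may differ */

/* Print every adopted skill address ("a" tag value) in one 10123 event. */
static void print_adopted_skills(const char *event_json) {
    cJSON *event = cJSON_Parse(event_json);
    if (!event) return;

    cJSON *tags = cJSON_GetObjectItemCaseSensitive(event, "tags");
    cJSON *tag = NULL;
    cJSON_ArrayForEach(tag, tags) {
        cJSON *name  = cJSON_GetArrayItem(tag, 0);
        cJSON *value = cJSON_GetArrayItem(tag, 1);
        if (cJSON_IsString(name) && cJSON_IsString(value) &&
            strcmp(name->valuestring, "a") == 0) {
            printf("adopted: %s\n", value->valuestring);  /* e.g. 31123:<pubkey>:long_form_note */
        }
    }
    cJSON_Delete(event);
}

int main(void) {
    /* Minimal illustrative event; real events come from relay queries. */
    const char *example =
        "{\"kind\":10123,\"tags\":[[\"a\",\"31123:<author-pubkey>:long_form_note\"]]}";
    print_adopted_skills(example);
    return 0;
}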
Startup
Didactyl startup behavior is configured in
config.jsonc under startup_events. Also used at startup:
- 0: profile metadata
- 10002: relay list
- 1: optional startup note/status
- 3: contacts/follows (optional placeholder)
On boot, Didactyl attempts startup publishes to each relay as that relay transitions to the connected state.
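A rough sketch of that publish-on-connect behavior; every name here is hypothetical shorthand, and the real wiring lives in src/nostr_handler.c.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical types standing in for the real config/relay structures. */
typedef struct { int kind; const char *content; } startup_event_t;
typedef struct { const char *url; int connected; int startup_published; } relay_t;

/* Stand-in for the real relay-pool publish call. */
static void publish_event_to_relay(relay_t *relay, const startup_event_t *ev) {
    printf("publish kind %d to %s\n", ev->kind, relay->url);
}

/* Invoked on relay state transitions: publish the configured startup events
 * exactly once per relay, as soon as that relay becomes connected. */
static void on_relay_state_change(relay_t *relay, int now_connected,
                                  const startup_event_t *events, size_t count) {
    relay->connected = now_connected;
    if (now_connected && !relay->startup_published) {
        for (size_t i = 0; i < count; i++)
            publish_event_to_relay(relay, &events[i]);
        relay->startup_published = 1;
    }
}

int main(void) {
    startup_event_t events[] = { { 10002, "" }, { 31120, "You are Didactyl..." } };
    relay_t relay = { "wss://relay.damus.io", 0, 0 };
    on_relay_state_change(&relay, 1, events, 2);
    return 0;
}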
Runtime Context Model
Didactyl builds tier-aware context:
- ADMIN request context: assembled from the soul's ---template--- section (if present), otherwise in hardcoded order:
  - Soul personality (everything above ---template--- in kind 31120)
  - Named template sections in order, using tool: directives (for example nostr_admin_profile, nostr_admin_notes, task_list, message_current, dm_history)
  - Each section executes its configured context tool, optionally extracting result_field (default: content)
  - Provider-specific content overrides per section remain supported for literal content: sections
  - Section names are used in context.log headers and in the /api/context/parts response
- WoT request context: Soul + WoT chat-only instruction + current user message (no tools)
- STRANGER: no LLM call when configured to reply statically
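For orientation, here is a sketch of the OpenAI-compatible payload shape such a context serializes into. The section contents below are placeholders; the real ones are produced by the soul template and context tools described above.

#include <stdio.h>
#include "cJSON.h"   /* bundled via nostr_core_lib; include path may differ */

/* Sketch: the general OpenAI-compatible chat payload an assembled context becomes. */
int main(void) {
    cJSON *payload = cJSON_CreateObject();
    cJSON_AddStringToObject(payload, "model", "gpt-4o-mini");

    cJSON *messages = cJSON_CreateArray();

    cJSON *system = cJSON_CreateObject();
    cJSON_AddStringToObject(system, "role", "system");
    cJSON_AddStringToObject(system, "content",
        "<soul personality>\n<named template sections: admin profile, notes, task list, DM history>");
    cJSON_AddItemToArray(messages, system);

    cJSON *user = cJSON_CreateObject();
    cJSON_AddStringToObject(user, "role", "user");
    cJSON_AddStringToObject(user, "content", "<current admin DM>");
    cJSON_AddItemToArray(messages, user);

    cJSON_AddItemToObject(payload, "messages", messages);

    char *json = cJSON_Print(payload);
    printf("%s\n", json);   /* the kind of payload appended to context.log */
    cJSON_free(json);
    cJSON_Delete(payload);
    return 0;
}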
Every serialized LLM context payload is appended to context.log.
Triggered skills and tool loops are bounded by runtime safeguards (for example, trigger cooldowns and action rate limits); see docs/SKILLS.md for the current defaults.
Tooling Interface
The current tool schema exposed to the LLM in tools_build_openai_schema_json():
- Nostr publish/query: nostr_post, nostr_post_readme, nostr_query
- Nostr interaction and moderation: nostr_delete, nostr_react, nostr_profile_get, nostr_relay_status, nostr_relay_info, nostr_nip05_lookup
- Nostr encode/decode + encryption/DM: nostr_encode, nostr_decode, nostr_encrypt, nostr_decrypt, nostr_dm_send, nostr_dm_send_nip17
- Nostr list management: nostr_list_manage
- Skill management: skill_create, skill_list, skill_adopt, skill_remove, skill_search
- Local/host tools: local_shell_exec, local_file_read, local_file_write, local_http_fetch
- Agent metadata: agent_version
- Model management: model_get, model_set, model_list
Execution entrypoint: tools_execute().
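Each tool is presented to the LLM as an OpenAI-style function entry. As a rough illustration only (the parameter names below are invented for the example; run --dump-schemas to see the real schemas), one entry might be assembled with the bundled cJSON like this:

#include <stdio.h>
#include "cJSON.h"   /* bundled via nostr_core_lib; include path may differ */

/* Sketch of an OpenAI-compatible function-calling entry; parameters are illustrative. */
int main(void) {
    cJSON *tool = cJSON_CreateObject();
    cJSON_AddStringToObject(tool, "type", "function");

    cJSON *fn = cJSON_AddObjectToObject(tool, "function");
    cJSON_AddStringToObject(fn, "name", "nostr_post");
    cJSON_AddStringToObject(fn, "description", "Publish a note to the configured relays (illustrative description)");

    cJSON *params = cJSON_AddObjectToObject(fn, "parameters");
    cJSON_AddStringToObject(params, "type", "object");
    cJSON *props = cJSON_AddObjectToObject(params, "properties");
    cJSON *content = cJSON_AddObjectToObject(props, "content");
    cJSON_AddStringToObject(content, "type", "string");
    cJSON_AddStringToObject(content, "description", "Note text to publish (illustrative parameter)");
    cJSON *required = cJSON_AddArrayToObject(params, "required");
    cJSON_AddItemToArray(required, cJSON_CreateString("content"));

    char *json = cJSON_Print(tool);
    printf("%s\n", json);
    cJSON_free(json);
    cJSON_Delete(tool);
    return 0;
}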
HTTP Admin API
A localhost-only HTTP API on port 8484 (configurable) for agent inspection and prompt crafting. Enable with "api": {"enabled": true} in config.
Endpoint                      Purpose
GET /api/status               Agent name, version, pubkey, relay count, trigger count
GET /api/context/current      Full LLM context messages array
GET /api/context/parts        Context broken into named parts with token estimates
POST /api/prompt/run-simple   Run a simple system+user prompt, no tools
POST /api/prompt/run          Run a full messages array with tools enabled
POST /api/prompt/compare      A/B compare two prompt variants
GET /api/model                Current LLM model config
PUT /api/model                Change model at runtime (persists to config.jsonc)
GET /api/models               List available models from the provider
Full reference: docs/API.md. Frontend brief: plans/admin_web_frontend.md.
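As a quick smoke test, any HTTP client works against these endpoints. A minimal libcurl sketch, assuming the API is reachable over plain HTTP on 127.0.0.1:8484 (adjust scheme and port to your config):

#include <stdio.h>
#include <curl/curl.h>

/* Sketch: fetch the admin API status endpoint on localhost. */
int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8484/api/status");
    CURLcode rc = curl_easy_perform(curl);   /* JSON status body prints to stdout */
    if (rc != CURLE_OK)
        fprintf(stderr, "admin API request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}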
Project Structure
.
├── config.jsonc              # Agent/runtime config (JSONC with comments) including startup_events + tools
├── context.log               # Appended outbound LLM context payloads
├── Makefile                  # Build system
├── build_static.sh           # Preferred final build validation
├── src/
│   ├── main.c / .h             # Entry point, args (--config/--debug), lifecycle, version
│   ├── config.c / .h           # JSON config parsing, key decode, startup events
│   ├── context.c / .h          # File loader utility (reads file into malloc'd string)
│   ├── agent.c / .h            # Context assembly, tool loop, DM response flow
│   ├── prompt_template.c / .h  # Soul template parser, variable resolver, context builder
│   ├── tools.c / .h            # LLM tool schema and tool execution
│   ├── llm.c / .h              # LLM HTTP API client (OpenAI-compatible)
│   ├── nostr_handler.c / .h    # Relay pool, subscriptions, publish, startup reconcile
│   ├── trigger_manager.c / .h  # Nostr event trigger subscriptions and skill execution
│   ├── http_api.c / .h         # Localhost HTTP admin API (mongoose-based)
│   ├── mongoose.c / .h         # Embedded HTTP server (mongoose)
│   └── debug.c / .h            # Runtime log levels/macros
├── docs/
│   ├── API.md                  # HTTP admin API endpoint reference
│   ├── TOOLS.md                # Tool architecture and catalog
│   ├── SKILLS.md               # Skill schema, context modes, triggers, and limits
│   └── CRASH_FIXES.md          # Crash analysis and fixes log
├── plans/                    # Architecture and planning documents
└── README.md
Dependencies
All dependencies are statically linked into the binary at build time. No system libraries are required at runtime.
Dependency           Purpose                                           Source
nostr_core_lib       Nostr protocol: keys, events, NIPs, relay pool    Workspace (sibling directory)
cJSON                JSON parsing                                      Bundled in nostr_core_lib
libcurl              HTTPS for LLM API calls                           Statically linked (Alpine/MUSL)
libssl / libcrypto   TLS for WebSocket relay connections               Statically linked (Alpine/MUSL)
libsecp256k1         Schnorr signatures, ECDH                          Statically linked (Alpine/MUSL)
Roadmap: Nostr-Native Portability
Didactyl's long-term architecture goal is zero filesystem dependency after first boot. The config file is the only tie to the local filesystem. The plan:
- First boot: read config.jsonc; publish all identity, soul, skills, and adoption list as Nostr events to relays.
- Subsequent boots: given only the agent's keys, retrieve everything needed from Nostr relays: soul, skills, adoption list, trigger definitions, admin pubkey, relay list. No config file required. (A bootstrap-filter sketch follows this list.)
- True portability: start your agent from any computer. All you need are its keys. All state lives on Nostr.
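As an illustration of what a keys-only boot implies (not shipped code), the relay filter such a boot might issue could look roughly like this, built with the bundled cJSON; the kinds match the table below.

#include <stdio.h>
#include "cJSON.h"   /* bundled via nostr_core_lib; include path may differ */

/* Sketch: the kind of filter a keys-only boot would send to pull the
 * agent's own state back from relays. Illustration only. */
int main(void) {
    const char *agent_pubkey_hex = "<agent pubkey hex>";  /* placeholder */

    cJSON *filter = cJSON_CreateObject();

    cJSON *authors = cJSON_AddArrayToObject(filter, "authors");
    cJSON_AddItemToArray(authors, cJSON_CreateString(agent_pubkey_hex));

    /* Profile (0), relay list (10002), adoption list (10123),
     * soul (31120), skill definitions (31123/31124). */
    int kinds_wanted[] = { 0, 10002, 10123, 31120, 31123, 31124 };
    cJSON *kinds = cJSON_AddArrayToObject(filter, "kinds");
    for (size_t i = 0; i < sizeof kinds_wanted / sizeof kinds_wanted[0]; i++)
        cJSON_AddItemToArray(kinds, cJSON_CreateNumber(kinds_wanted[i]));

    char *req = cJSON_PrintUnformatted(filter);
    printf("%s\n", req);   /* e.g. {"authors":["..."],"kinds":[0,10002,...]} */
    cJSON_free(req);
    cJSON_Delete(filter);
    return 0;
}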
This makes Didactyl fundamentally different from filesystem-bound agents. Destroying the host computer does not kill the agent: its identity, memory, and capabilities persist on the relay network.
What already lives on Nostr
Data                  Event Kind             Status
Agent profile         Kind 0                 Implemented
Relay list            Kind 10002             Implemented
DM relay list         Kind 10050             Implemented
Public skills         Kind 31123             Implemented
Private skills        Kind 31124             Implemented
Skill adoption list   Kind 10123             Implemented
Soul/personality      Kind 31120             Implemented
Trigger definitions   Tags on skill events   Implemented
What still needs migration
Data               Current Location   Target
Admin pubkey       config.jsonc       Derive from kind 3 contact list or a dedicated config event
LLM provider/key   config.jsonc       Encrypted kind 30078 app-specific event or NIP-78
Security tiers     config.jsonc       Agent config event on Nostr
API settings       config.jsonc       Local-only; stays on filesystem as a runtime flag
Roadmap
- [x] MVP chat agent: DM in, LLM response out
- [x] Relay pool with auto-reconnect and status logging
- [x] Per-relay startup publish on relay-connected transitions
- [x] Runtime diagnostics: relay health, message flow, event kind publish logs
- [x] Tool-calling loop (nostr_post, nostr_query, local_shell_exec, local_file_read, local_file_write)
- [x] Context assembly with startup events + recent DM history
- [x] Context payload logging to context.log
- [x] Skill kind definitions (31120 Soul, 31123 Public Skill, 31124 Private Skill)
- [x] Skill adoption list (10123) for WoT-driven discovery
- [x] Signature verification on all inbound events
- [x] Privilege tiers: ADMIN (tools), WoT (chat-only), STRANGER (canned reply/ignore)
- [x] Admin context subscription (kind 0, 3, 10002, 1) with WoT contact extraction
- [x] Message deduplication (event-ID cache + FNV-1a fingerprint debounce)
- [x] Adopted skills injected into LLM context automatically
- [x] Triggered skills: Nostr event filters that fire skill execution automatically
- [x] Localhost HTTP admin API: context inspection, prompt crafting, A/B comparison
- [x] Runtime model switching via the model_set tool (persists to config.jsonc)
- [x] Soul-embedded prompt templates (---template---): configurable context order, variable resolution, provider overrides
- [ ] Runtime skill loading from adopted 31123 events on relays
- [ ] Skill discovery CLI/tool (query WoT adoption lists)
- [ ] Upgrade to NIP-17 gift-wrapped DMs
- [ ] NIP-44 encrypted private skills (31124)
- [ ] Nostr-native data storage (kind 30078 app-specific events)
- [ ] Blossom blob storage integration
- [ ] Agent-to-agent communication
License
TBD
#nostr #AI #agents #didactyl
