Show HN: Zuckerman – minimalist personal AI agent that self-edits its own code
ddaniel10
Hi HN,
I'm building Zuckerman: a personal AI agent that starts ultra-minimal and can improve itself in real time by editing its own files (code + configuration). Agents can also share useful discoveries and improvements with each other.
Repo: https://github.com/zuckermanai/zuckerman
The motivation is to build something dead-simple and approachable, in contrast to projects like OpenClaw, which is extremely powerful but has grown complex: heavier setup, a large codebase, skill ecosystems, and ongoing security discussions.
Zuckerman flips that:
1. Starts with almost nothing (core essentials only).
2. Behavior/tools/prompts live in plain text files.
3. The agent can rewrite its own configuration and code.
4. Changes hot-reload instantly (save -> reload).
5. Agents can share improvements with others.
6. Multi-channel support (Discord/Slack/Telegram/web/voice, etc).
Security note: self-edit access is obviously high-risk by design, but basic controls are built in (policy sandboxing, auth, secret management).
Tech stack: TypeScript, Electron desktop app + WebSocket gateway, pnpm + Vite/Turbo.
It's very early/WIP, but the self-editing loop already works in basic scenarios and is surprisingly addictive to play with.
Would love feedback from folks who have built agent systems or thought about safe self-modification.
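To make point 4 concrete, here is roughly what the save -> reload loop looks like, in simplified TypeScript (the real file format and watcher are more involved; treat this as a sketch, and the config layout here is made up):

```typescript
import * as fs from "fs";

// A plain-text config: first line is the system prompt, lines starting
// with "tool:" register tools. (Simplified illustration of the format.)
type AgentConfig = { systemPrompt: string; tools: string[] };

function parseConfig(text: string): AgentConfig {
  const lines = text.split("\n").map((l) => l.trim()).filter(Boolean);
  return {
    systemPrompt: lines[0] ?? "",
    tools: lines
      .filter((l) => l.startsWith("tool:"))
      .map((l) => l.slice("tool:".length).trim()),
  };
}

// Hot reload: any save to the file (by the user *or* by the agent
// editing itself) re-parses it and swaps in the new behavior.
function watchConfig(path: string, onReload: (c: AgentConfig) => void): fs.FSWatcher {
  return fs.watch(path, () => {
    onReload(parseConfig(fs.readFileSync(path, "utf8")));
  });
}
```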
iisweetheartii
Love the minimalist approach! The self-editing concept is fascinating—I've seen similar experiments where the biggest early failure points are usually:
1. Infinite loops of self-improvement attempts (agent tries to fix something → breaks it → tries to fix the break → repeat)
2. Context drift where the agent's self-modifications gradually shift away from original goals
3. File corruption from concurrent edits or malformed writes
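One mitigation for (1) that tends to work: give each self-improvement goal an edit budget, and roll back any change that fails its own check instead of letting the agent "fix the fix" indefinitely. Rough illustrative TypeScript (all names made up, not from any real framework):

```typescript
// An Edit knows how to apply itself and how to undo itself.
type Edit = { apply: () => void; revert: () => void };

// Apply edits until the budget runs out or a check fails; a failing
// edit is reverted and the loop stops, preventing fix-the-fix spirals.
function guardedSelfEdit(
  edits: Edit[],
  passesCheck: () => boolean,
  maxAttempts = 3,
): number {
  let applied = 0;
  for (const edit of edits) {
    if (applied >= maxAttempts) break; // budget spent: stop thrashing
    edit.apply();
    applied++;
    if (!passesCheck()) {
      edit.revert(); // bad change: undo it rather than patching the patch
      break;
    }
  }
  return applied; // how many edits were attempted
}
```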
Re: sharing self-improvements across agents—this is actually a problem space I'm actively working on. Built AgentGram (agentgram.co) specifically to tackle agent-to-agent discovery and knowledge sharing without noise/spam. The key insight: agents need identity, reputation, and filtered feeds to make collaborative learning work.
Happy to chat more about patterns we've found useful. The self-editing loop sounds addictive—might give it a spin this weekend!
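For flavor, the filtered-feed part of that reduces to something like this (toy TypeScript; the types and the 0.5 threshold are made up, not AgentGram's actual model):

```typescript
// A shared improvement carries its author's identity so the feed can
// weigh it by that agent's track record.
type SharedImprovement = { author: string; patch: string };

// Only surface improvements from agents whose reputation clears a bar;
// unknown authors default to 0 and are filtered out.
function filteredFeed(
  items: SharedImprovement[],
  reputation: Map<string, number>,
  minRep = 0.5,
): SharedImprovement[] {
  return items.filter((item) => (reputation.get(item.author) ?? 0) >= minRep);
}
```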
DIY agent harnesses are the new "note taking"/"knowledge management"/"productivity tool"
ddaniel10
DIYWA - do it yourself with an agent ;) hopefully with Zuckerman as the starting point
amelius
Sounds cool, but it also sounds like you need to spend big $$ on API calls to make this work.
ddaniel10
I'm building this in the hope that AI will be cheap one day. For now, I'm adding optimizations to keep costs down.
There are hardcoded elements in the repo, like:
/Users/dvirdaniel/Desktop/zuckerman/.cursor/debug.log