Is this an official Anthropic project? Because that repo doesn't exist.
Or is this just so hastily thrown together that the Quick Start is a hallucination?
That's not a facetious question, given this project's declared raison d'être is security and the subtle implication that OpenClaw is an insecure, unreviewed pile of slop.
Seems to be fixed now
Claude hallucinated that repo here in this commit https://github.com/gavrielc/nanoclaw/commit/dbf39a9484d9c66b...
Fixed, thanks. Claude Code likes to insert itself and anthropic everywhere.
If it somehow wasn't abundantly clear: this is a vibe coded weekend project by a single developer (me).
It's rough around the edges but it fits my needs (talking with claude code that's mounted on my obsidian vault and easily scheduling cron jobs through whatsapp). And I feel a lot better running this than a 350k+ LOC project that I can't even begin to wrap my head around.
This is not supposed to be something other people run as is, but hopefully a solid starting point for creating your own custom setup.
thepoet
One of the things that makes Clawdbot great is the allow-all permissions to do anything. Not sure how those external actions with damaging consequences get sandboxed with this.
Apple containers have been great, especially since each of them maps 1:1 to a dedicated lightweight VM. Except for a bug or two that appeared in the early releases, things seem to be working out well. I believe not a lot of projects are leveraging it.
A general code execution sandbox for AI code (or otherwise) that uses Apple containers is https://github.com/instavm/coderunner. It can be hooked up to Claude Code and others.
jckahn
> One of the things that makes Clawdbot great is the allow-all permissions to do anything.
Is this materially different than giving all files on your system 777 permissions?
sheepscreek
That was my line to the CS lab supervisor to get him to hand me the superuser password. Guess what? He didn’t budge. Probably a good thing.
Lesson - never trust a sophomore who can’t even trust themselves (to get overly excited and throw caution to the wind).
Clawdbot is 100 sophomores knocking on your door asking for the keys.
treelover
Interesting choice to use native Apple Containers over Docker.
I assume this is to keep the footprint minimal on a Mac Mini without the overhead of the Docker VM, but does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
ohyoutravel
omg these ai generated comments are maddening lol
cadamsdotcom
If only there were some way to answer your own question. Maybe with some kind of engine that searches.
selkin
Not sure if it's intended, but Apple Container uses microVMs, providing much better isolation than plain containers (while retaining the familiar interface).
garblegarble
>does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
Slightly counterintuitively, Apple Containers spawns Linux VMs.
There doesn't appear to be any way to spawn a native macOS container... which is a pity, it'd be nice to have ultra-low-overhead containers on macOS (but I suspect all the interesting macOS stuff relies on a bunch of services/gui access that'd make it not-lightweight anyway)
FYI: it's easy enough to install GNU tools with homebrew; technically there's a risk of problems if applications spawn commandline tools and expect the BSD args/output, but I've not run into any issues in the several years I've been doing it.
mark_l_watson
I like the idea of a smaller version of OpenClaw.
Minor nitpick, it looks like about 2500 lines of typescript (I am on a mobile device, so my LOC estimate may be off). Also, Apple container looks really interesting.
hebejebelus
I think these days if I’m going to be actively promoting code I’ve created (with Claude, no shade for that), I’ll make sure to write the documentation, or at the very least the readme, by hand. The smell of LLM from the docs of any project puts me off even when I like the idea of the project itself, as in this case. It’s hard to describe why - maybe it feels like if you care enough to promote it, you should care to try and actually communicate, person to person, to the human being promoted at. Dunno, just my 2c and maybe just my own preference. I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me, the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji. We all know how easy it is to write code these days. Maybe use some of that extra time to communicate with the humans. I dunno.
Edit: I see you, making edits to the readme to make it sound more human-written since I commented ;) https://github.com/gavrielc/nanoclaw/commit/40d41542d2f335a0...
Project releases with LLMs have grown to be less about the functionality and more about convincing others to care.
Before, the proof of work of code in a repo was by default a signal of a lot of thought going into something. Now this flood of code in these vibe coded projects is by default cheap and borderline meaningless. Not throwing shade or anything at coding assistants. Just the way it goes.
101008
I agree 100% with you. It's even worse though. They haven't even checked whether the Readme hallucinated things or not (spoiler: it has):
https://news.ycombinator.com/item?id=46850317
OP here. Appreciate your perspective but I don't really accept the framing, which feels like it's implying that I've been caught out for writing and coding with AI.
I don't make any attempt to hide it. Nearly every commit message says "Co-Authored-By: Claude Opus 4.5". You correctly pointed out that there were some AI smells in the writing, so I removed them, just like I correct typos, and the writing is now better.
I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me. I'm sharing it because I think it can be useful to other people. Not as production code but as a reference or starting point they can use to build (collaboratively with claude code) functional custom software for themselves.
I spent a weekend giving instructions to coding agents to build this. I put time and effort into the architecture, especially in relation to security. I chose to post while it's still rough because I need to close out my work on it for now - can't keep going down this rabbit hole the whole week :) I hope it will be useful to others.
BTW, I know the readme irked you but if you read it I promise it will make a lot more sense where this project is coming from ;)
jofzar
I 100% agree, reading very obviously ai written blogs and "product pages"/readme's has turned into a real ick for me.
Just something that screams "I don't care about my product/readme page, why should you".
To be clear, no issue with using AI to write the actual program/whatever it is. It's just the readme/product page which super turns me off even trying/looking into it.
raincole
> I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me, the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji
Don't worry, bro. If enough people are like you, there will be fully automatic workflow to add typos into AI writing.
muyuu
the main reason I'd want a person to write or at least curate readmes is because models have, at least for the time being, this tendency to make confident and plausible-sounding claims that are completely false (hallucination applied to claims on the stuff they just made)
so long as this is commonplace I'd be extremely sceptical of anything with some LLM-style readmes and docs
the caveats to this are that LLMs can be trained to fool people with human-sounding and imperfectly written readmes, and that although humans can quickly verify that things compile and seem to produce the expected outputs, there's deeper stuff like security issues and subtle userspace-breaking changes
track record is going to see its importance redoubled
orrrr you could go the other way and read explicitly ai-generated docs that use the code as source of truth https://deepwiki.com/gavrielc/nanoclaw
You will definitely like Josh Mock's recent post: https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
FWIW, this is a variation of the age-old thing about open source.
It isn’t “have it your way”, he graciously made code available, use it or leave it.
renewiltord
To be honest, when I see many vibecoded apps, I just build my own duplicate with Claude Code. It's not that useful to use someone else's vibecode. The idea is enough, or the evidence that it works for someone else means I can just build it myself with Claude Code and I can make it specific to my needs.
sanex
Yes exactly! Even non vibe coded libraries I think are losing their value as the cost of writing and maintaining your code goes to zero. Supply chain attacks are gone, no risk of license changes. No bloat from code you don't use. The code is the documentation and the configuration. The vibes are the package manager.
That's why I like this version over openclaw. I can fork it as a starting point or just give it to Claude for inspiration but either way I'm getting something tailored exactly to me.
Johnny_Bonk
Can you use MCP tools? I saw that with open claw they moved away from that which I personally didn't like but
johntash
I somewhat like the idea of not using MCP as much as it is being hyped.
It's certainly helpful for some things, but at the same time - I would rather improved CLI tools get created that can be used by humans and llm tools alike.
CuriouslyC
It uses a wrapper in places to consume MCPs as clis.
dceddia
This looks nice! I was curious about being allowed to use a Claude Pro/Max subscription vs an API key, since there's been so much buzz about that lately, so I went looking for a solid answer.
Thankfully the official Agent SDK Quickstart guide says that you can: https://platform.claude.com/docs/en/agent-sdk/quickstart
In particular, this bit:
"After installing Claude Code onto your machine, run claude in your terminal and follow the prompts to authenticate. The SDK will use this authentication automatically."
hebejebelus
Wow, thanks for posting that, news to me! In this case I don’t understand why there was a whole brouhaha with OpenClaw and the like - I guess they were invoking it without the official SDK? Because this makes it seem like if you have the sub you can build any agentic thing you like and still use your subscription, as long as you can install and login to Claude code on the machine running it.
jimminyx
OP here. Yes! This was a big motivation for me to try and build this. Nervous Anthropic is gonna shut down my account for using Clawdbot.
This project uses the Agents SDK so it should be kosher in regards to terms of service. I couldn't figure out how to get the SDK running inside the containers to properly use the authenticated session from the host machine, so I went with a hacky way of injecting the OAuth token into the container environment. It should still be above board for the TOS, but it's the one security flaw that I know about (a malicious person in a WhatsApp group with you can prompt-inject the agent into sharing the OAuth token).
If anyone can help out with getting the authenticated session to work properly with the agents running in containers it would be much appreciated.
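For anyone who wants to dig in, the hack is conceptually something like this - a sketch only, not the actual project code; the CLAUDE_CODE_OAUTH_TOKEN variable name, the nanoclaw-agent image name, and the docker-style -e flag on Apple's container CLI are all assumptions to verify against your setup:

```typescript
// Sketch of "inject the host OAuth token into the container env" - names and flags are assumptions.
import { spawn } from "node:child_process";

const token = process.env.CLAUDE_CODE_OAUTH_TOKEN; // assumes the token was exported on the host
if (!token) {
  throw new Error("CLAUDE_CODE_OAUTH_TOKEN is not set on the host");
}

// `container run -e ...` mirrors docker-style flags; check your `container` CLI version.
const child = spawn(
  "container",
  ["run", "--rm", "-e", `CLAUDE_CODE_OAUTH_TOKEN=${token}`, "nanoclaw-agent"],
  { stdio: "inherit" },
);

child.on("exit", (code) => {
  if (code !== 0) {
    console.error(`agent container exited with code ${code}`);
  }
});
```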
joshstrange
But their docs also say:
> Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.
Which I have interpreted means that you can’t use your Claude code subscription with the agent SDK, only API tokens.
I really wish Anthropic would make it clear (and allow us to use our subscriptions with other tools).
pillbitsHQ
[dead]
popcorncowboy
> running it scares the crap out of me
A hundred times this. It's fine until it isn't. And jacking these Claws into shared conversation spaces is quite literally pushing the afterburners to max on simonw's lethal trifecta. A lot of people are going to get burned hard by this. Every blackhat is eyes-on this right now - we're literally giving a drunk robot the keys to everything.
TacticalCoder
I understand that things can go wrong and there can be security issues, but I see at least two other issues:
1. what if, ChadGPT style, ads are added to the answers (like OpenAI said it'd do, hence the new "ChadGPT" name)?
2. what if the current prices really are unsustainable and the thing goes 10x?
Are we living some golden age where we can both query LLMs on the cheap and not get ad-infected answers?
I read several comments in different threads made by people saying: "I use AI because search results are too polluted and the Web is unusable"
And I now do the same:
"Gemini, compare me the HP Z640 and HP Z840 workstations, list the features in a table" / "Find me which Xeon CPU they support, list me the date and price of these CPU when they were new and typical price used now".
How long before I get twelve ads along with paid vendors recommendations?
anabis
Maybe. People have run wildly insecure phpBB and Wordpress plugins, so maybe it's the same cycle again.
charcircuit
It turns out the lethal trifecta is not so lethal. Should a business avoid hiring employees since technically employees can steal from the cash register? The lethal trifecta is about binary security: either the data can be taken or it can't. This may be overly cautious. It may be possible that hiring an employee has a positive expected value even when you account for the possibility of one stealing from the cash register.
p0nce
If you peruse molthub and moltbook you'll see the agents have already built six or seven such social networks. It is terrifying.
narmiouh
I feel like a lot of non-technical people who are vibe coding or vibe using these models focus on hallucinations, assume the risk shrinks as hallucinations are reduced in benchmarks, and overestimate their ability to create safe prompts that will keep these models in line.
I think most people fail to estimate the real threat that malicious prompts can cause because it is not that common yet. It's like when credit cards were launched: cc fraud and the various ways it could be perpetrated followed soon after. The real threats aren’t visible yet, but rest assured there are actors working to take advantage, and many unfortunate examples will be seen before general awareness and precaution prevail.
ed_mercer
If you run openclaw on a spare laptop or VM and give it read only access to whatever it needs, doesn’t that eliminate most of the risk?
AlexCoventry
If you're letting it communicate with the outside world, you risk the leak and abuse of anything sensitive in the data it has access to.
eskaytwo
Thanks! Was hoping someone would do something more sane like this.
Openclaw is very useful, but like you I share the sentiment of it being terrifying, even before you introduce the social network aspect.
My Mac mini is currently literally switched off for this very reason.
Bnjoroge
Can we start putting disclaimers beside the title on AI-generated projects? Extremely fatiguing to read through it and realize it’s mostly LLM slop.
suprstarrd
It blows my mind that this wasn't the thought process going in. Thank you for doing this!
raphaelmolly8
[dead]
singular_atomic
Hackernews needs a mute keywords feature. Clawd/molt-slop is mass AI psychosis on steroids.
fragmede
If only there was some sort of thing that would help you build that for yourself.
nsonha
what's the difference between this and just exposing opencode running in colima or whatever through tailscale? I got the impression that Clawdbot adds the headless browser (does it?) and that's the value. Otherwise even "nano"claw seems like unnecessary bloat for me.
sothatsit
The idea of avoiding config files, and having the config be getting your agent to modify its own codebase, is fascinating.
My gut reaction says that I don't like it, but it is such an interesting idea to think about.
> found it useful but running it scares
At least 42,665 instances are publicly exposed on the internet, with 5,194 instances actively verified as vulnerable through systematic scanning. The narrative that “running AI locally = security and privacy” is significantly undermined when 93% of deployments are critically vulnerable. Users may lose faith in self-hosted alternatives. Governments and regulators already scrutinizing AI may use this incident to justify restrictions on self-hosted AI agents, citing security externalities.
https://maordayanofficial.medium.com/the-sovereign-ai-securi...
prophesi
Am I correct that after cloning down the project, you open the directory in Claude Code, then "execute" a markdown file instructing a nondeterministic LLM to set everything up for you in natural language?
te_chris
Posthog is doing this now for project setup
Spacemolte
The premise of the project is that he doesn't want to run code he doesn't understand, or run it in an insecure way, so having the setup step (installing dependencies etc.) done by an LLM seems like an odd choice.
Like what part about the setup step is so fluffy and different per environment that using an LLM for it makes sense?
dsrtslnd23
can NanoClaw be used to participate in ClackerNews?
theptip
> AI-native. No installation wizard; Claude Code guides setup. No monitoring dashboard; ask Claude what's happening. No debugging tools; describe the problem, Claude fixes it.
> Skills over features. Contributors shouldn't add features (e.g. support for Telegram) to the codebase. Instead, they contribute claude code skills like /add-telegram that transform your fork.
I’m interested to see how this model pans out. I can see benefits (don’t carry complexity you don’t need) and costs (how do I audit the generated code?).
But it seems pretty clear that things will move in this direction in ‘26 with all the vibe coding that folks are enjoying.
I do wonder if the end state is more like a very rich library of composable high-order abstractions, with Skills for how to use them - rather than raw skills with instructions for how to lossily reconstruct those things.
charcircuit
I think the more interesting question is whether tools were the right abstraction. What is the implication of having only a single "shell" tool? Should the narrowing from infinite possibilities down to a few happen by giving the AI limited tools, or should whatever the shell calls have the limitations applied there? Tools in a way are redundant.
Tepix
A personal assistant that runs in the standard cloud (anthropic in this case) is madness. That’s the hill I’m willing to die on.
maximgeorge
[dead]
chaostheory
For anyone else worried about running openclaw: in my case I just bought openclaw its own Mac mini and gave openclaw its own accounts, including GitHub. It makes many of the security concerns moot. Of course, I could go further and give openclaw its own internet access as well.
aitchnyu
That Baileys api for Whatsapp may (AFAICT) put you on thin ice with Meta. Is there a cheap legit alternative?
https://baileys.wiki/docs/intro/
I was using WAHA. It is an abstraction layer with a proper API on top. It supports many engines like Baileys and Whatsmeow (golang).
Unfortunately, all those solutions are shaky and could lead to a ban on your account.
https://waha.devlike.pro/
What’s the difference between this, and just running Claude Code in --dangerously-skip-permissions mode in a container and accessing it remotely via ssh?
I’m confused as to what these claw agents actually offer.
The README.md describes it as:
WhatsApp (baileys) --> SQLite --> Polling loop --> Container (Claude Agent SDK) --> Response
So they basically put a wrapper around Claude in a container, which allows you to send messages from WhatsApp to Claude, and act somewhat as if you had a Siri on steroids.
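To make that pipeline concrete, the core loop is presumably something in this ballpark - my own sketch based on that description, not the project's code; better-sqlite3, the table schema, and the two helper functions are all made up stand-ins:

```typescript
// Sketch of the "SQLite --> polling loop --> container --> response" leg.
import Database from "better-sqlite3";

const db = new Database("nanoclaw.db");
db.exec(`CREATE TABLE IF NOT EXISTS messages (
  id INTEGER PRIMARY KEY,
  chat_id TEXT NOT NULL,
  body TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending'
)`);

// Stand-in for spawning the Apple container and running the Claude Agent SDK inside it.
async function runAgentInContainer(prompt: string): Promise<string> {
  return `(agent reply to: ${prompt})`;
}

// Stand-in for sending the reply back out through the Baileys WhatsApp connection.
async function sendWhatsAppReply(chatId: string, text: string): Promise<void> {
  console.log(`[${chatId}] ${text}`);
}

async function pollOnce(): Promise<void> {
  const rows = db
    .prepare("SELECT id, chat_id, body FROM messages WHERE status = 'pending' ORDER BY id")
    .all() as { id: number; chat_id: string; body: string }[];

  for (const row of rows) {
    const reply = await runAgentInContainer(row.body);
    await sendWhatsAppReply(row.chat_id, reply);
    db.prepare("UPDATE messages SET status = 'done' WHERE id = ?").run(row.id);
  }
}

// No daemon framework, no queue library - just a timer.
setInterval(() => void pollOnce().catch(console.error), 5_000);
```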
esskay
Stupid stuff openclaw did for me:
- Created its own github account, then proceeded to get itself banned (I have no idea what it did, all it said was it created some new repos and opened issues, clearly it must've done a bit more than that to get banned)
- Signed up for a Gmail account using a pay as you go sim in an old android handset connected with ADB for sms reading, and again proceeded to get itself banned by hammering the crap out of the docs api
- Used approx $2k worth of Kimi tokens (Thankfully temporarily free on opencode) in the space of approx 48hrs.
Unless you can budget $1k a week, this thing is next to useless. Once these free offers on models end, a lot of people will stop using it. It's obscene how many tokens it burns through, like monumentally stupid. A simple single request is over 250k chars every single time. That's not sustainable.
swordsith
> and again proceeded to get itself banned by hammering the crap out of the docs api
> Used approx $2k worth of Kimi tokens
Holy shit dude you really should rethink your life decisions this is NUTS
ivanstepanovftw
Where are those 500 lines of code?
QuadmasterXLII
Earlier that day: “hey Claude how many lines of code are in this project? 500? Great!”
charliecs
[dead]
evrenesat
> No daemons, no queues, no complexity.
Last time I checked, having a continuously running background process is considered a daemon. Using SQLite as a backend for storing the jobs also doesn't make it queueless.
/nit
retired
I looked at Clawdbot. Perhaps my life is so boring that managing it takes little time but I see zero reasons to run it.
written-beyond
I read your comment, then your username. I CAN'T BELIEVE THIS USERNAME WAS CLAIMED 14 DAYS AGO! Good catch!
cyanydeez
The singularity, but instead of successive exponential improvement, it's excessive exponential slop which passes the Turing test for programmers.
aaronbrethorst
lol, I might finally have to upgrade my Mac mini to Tahoe. Yofi.
Is this an official Anthropic project? Because that repo doesn't exist.
Or is this just so hastily thrown together that the Quick Start is a hallucination?
That's not a facetious question, given this project's declared raison d'etre is security and the subtle implication that OpenClaw is an insecure unreviewed pile of slop.
Fixed, thanks. Claude Code likes to insert itself and anthropic everywhere.
If it somehow wasn't abundantly clear: this is a vibe coded weekend project by a single developer (me).
It's rough around the edges but it fits my needs (talking with claude code that's mounted on my obsidian vault and easily scheduling cron jobs through whatsapp). And I feel a lot better running this than a +350k LOC project that I can't even begin to wrap my head around how it works.
This is not supposed to be something other people run as is, but hopefully a solid starting point for creating your own custom setup.
thepoet
One of the things that makes Clawdbot great is the allow all permissions to do anything. Not sure how those external actions with damaging consequences get sandboxed with this.
Apple containers have been great especially that each of them maps 1:1 to a dedicated lightweight VM. Except for a bug or two that appeared in the early releases, things seem to be working out well. I believe not a lot of projects are leveraging it.
A general code execution sandbox for AI code or otherwise that used Apple containers is https://github.com/instavm/coderunner It can be hooked to Claude code and others.
jckahn
> One of the things that makes Clawdbot great is the allow all permissions to do anything.
Is this materially different than giving all files on your system 777 permissions?
sheepscreek
That was my line to the CS lab supervisor for handing me the superuser password. Guess what? He didn’t budge. Probably a good thing.
Lesson - never trust a sophomore who can’t even trust themselves (to get overly excited and throw caution to the wind).
Clawdbot is a 100 sophomores knocking on your door asking for the keys.
treelover
Interesting choice to use native Apple Containers over Docker.
I assume this is to keep the footprint minimal on a Mac Mini without the overhead of the Docker VM, but does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
ohyoutravel
omg these ai generated comments are maddening lol
cadamsdotcom
If only there were some way to answer your own question. Maybe with some kind of engine that searches.
selkin
Not sure if it's intended, but Apple Container is a microvm, providing mich better isolation than containers (while retaining the familiar interface)
garblegarble
>does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
Slightly counterintuitively, Apple Containers spawns linux VMs.
There doesn't appear to be any way to spawn a native macOS container... which is a pity, it'd be nice to have ultra-low-overhead containers on macOS (but I suspect all the interesting macOS stuff relies on a bunch of services/gui access that'd make it not-lightweight anyway)
FYI: it's easy enough to install GNU tools with homebrew; technically there's a risk of problems if applications spawn commandline tools and expect the BSD args/output but I've not run into any issues in the several years I've been doing it).
mark_l_watson
I like the idea of a smaller version of OpenClaw.
Minor nitpick, it looks like about 2500 lines of typescript (I am on a mobile device, so my LOC estimate may be off). Also, Apple container looks really interesting.
hebejebelus
I think these days if I’m going to be actively promoting code I’ve created (with Claude, no shade for that), I’ll make sure to write the documentation, or at the very least the readme, by hand. The smell of LLM from the docs of any project puts me off even when I like the idea of the project itself, as in this case. It’s hard to describe why - maybe it feels like if you care enough to promote it, you should care to try and actually communicate, person to person, to the human being promoted at. Dunno, just my 2c and maybe just my own preference. I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me,the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji. We all know how easy it is to write code these days. Maybe use some of that extra time to communicate with the humans. I dunno.
Project releases with llms have grown to be less about the functionality and more about convincing others to care.
Before the proof of work of code in a repo by default was a signal of a lot of thought going into something. Now this flood of code in these vibe coded projects is by default cheap and borderline meaningless. Not throwing shade or anything at coding assistants. Just the way it goes
101008
I agree 100% with you. It's even worse though. They haven't checked if the Readme has hallucinated it or not (spoiler: it has):
OP here. Appreciate your perspective but I don't really accept the framing, which feels like it's implying that I've been caught out for writing and coding with AI.
I don't make any attempt to hide it. Nearly every commit message says "Co-Authored-By: Claude Opus 4.5". You correctly pointed out that there were some AI smells in the writing, so I removed them, just like I correct typos, and the writing is now better.
I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me. I'm sharing it because I think it can be useful to other people. Not as production code but as a reference or starting point they can use to build (collaboratively with claude code) functional custom software for themselves.
I spent a weekend giving instructions to coding agents to build this. I put time and effort into the architecture, especially in relation to security. I chose to post while it's still rough because I need to close out my work on it for now - can't keep going down this rabbit hole the whole week :) I hope it will be useful to others.
BTW, I know the readme irked you but if you read it I promise it will make a lot more sense where this project is coming from ;)
jofzar
I 100% agree, reading very obviously ai written blogs and "product pages"/readme's has turned into a real ick for me.
Just something that screams "I don't care about my product/readme page, why should you".
To be clear, no issue with using AI to write the actual program/whatever it is. It's just the readme/product page which super turns me off even trying/looking into it.
raincole
> I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me,the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji
Don't worry, bro. If enough people are like you, there will be fully automatic workflow to add typos into AI writing.
muyuu
the main reason I'd want a person to write or at least curate readmes is because models have, at least for the time being, this tendency to make confident and plausible-sounding claims that are completely false (hallucination applied to claims on the stuff they just made)
so long as this is commonplace I'd be extremely sceptical of anything with some LLM-style readmes and docs
the caveats to this is that LLMs can be trained to fool people with human-sounding and imperfectly written readmes, and that although humans can quickly oversee that things compile and seem to produce the expected outputs, there's deeper stuff like security issues and subtle userspace-breaking changes
track-record is going to see its importance redoubled
FWIW, this is a variation of the age-old thing about open source.
It isn’t “have it your way”, he graciously made code available, use it or leave it.
renewiltord
To be honest, when I see many vibecoded apps, I just build my own duplicate with Claude Code. It's not that useful to use someone else's vibecode. The idea is enough, or the evidence that it works for someone else means I can just build it myself with Claude Code and I can make it specific to my needs.
sanex
Yes exactly! Even non vibe coded libraries I think are losing their value as the cost of writing and maintaining your code goes to zero. Supply chain attacks are gone, no risk of license changes. No bloat from code you don't use. The code is the documentation and the configuration. The vibes are the package manager.
That's why I like this version over openclaw. I can fork it as a starting point or just give it to Claude for inspiration but either way I'm getting something tailored exactly to me.
Johnny_Bonk
Can you use MCP tools? I saw that with open claw they moved away from that which I personally didn't like but
johntash
I somewhat like the idea of not using MCP as much as it is being hyped.
It's certainly helpful for some things, but at the same time - I would rather improved CLI tools get created that can be used by humans and llm tools alike.
CuriouslyC
It uses a wrapper in places to consume MCPs as clis.
dceddia
This look nice! I was curious about being allowed to use a Claude Pro/Max subscription vs an API key, since there's been so much buzz about that lately, so I went looking for a solid answer.
"After installing Claude Code onto your machine, run claude in your terminal and follow the prompts to authenticate. The SDK will use this authentication automatically."
hebejebelus
Wow, thanks for posting that, news to me! In this case I don’t understand why there was a whole brouhaha with OpenClaw and the like - I guess they were invoking it without the official SDK? Because this makes it seem like if you have the sub you can build any agentic thing you like and still use your subscription, as long as you can install and login to Claude code on the machine running it.
jimminyx
OP here. Yes! This was a big motivation for me to try and build this. Nervous Anthropic is gonna shut down my account for using Clawdbot.
This project uses the Agents SDK so it should be kosher in regards to terms of service. I couldn't figure out how to get the SDK running inside the containers to properly use the authenticated session from the host machine so I went with a hacky way of injecting the oauth token into the container environment. It still should be above board for TOS but it's the one security flaw that I know about (malicious person in a WhatsApp group with you can prompt inject the agent to share the oauth key).
If anyone can help out with getting the authenticated session to work properly with the agents running in containers it would be much appreciated.
joshstrange
But their docs also say:
> Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.
Which I have interpreted means that you can’t use your Claude code subscription with the agent SDK, only API tokens.
I really wish Anthropic would make it clear (and allow us to use our subscriptions with other tools).
pillbitsHQ
[dead]
popcorncowboy
> running it scares the crap out of me
A hundred times this. It's fine until it isn't. And jacking these Claws into shared conversation spaces is quite literally pushing the afterburners to max on simonw's lethal trifecta. A lot of people are going to get burned hard by this. Every blackhat is eyes-on this right now - we're literally giving a drunk robot the keys to everything.
TacticalCoder
I understand that things can go wrong and there can be security issues, but I see at least two other issues:
1. what if, ChadGPT style, ads are added to the answers (like OpenAI said it'd do, hence the new "ChadGPT" name)?
2. what if the current prices really are unsustainable and the thing goes 10x?
Are we living some golden age where we can both query LLMs on the cheap and not get ad-infected answers?
I read several comments in different threads made by people saying: "I use AI because search results are too polluted and the Web is unusable"
And I now do the same:
"Gemini, compare me the HP Z640 and HP Z840 workstations, list the features in a table" / "Find me which Xeon CPU they support, list me the date and price of these CPU when they were new and typical price used now".
How long before I get twelve ads along with paid vendors recommendations?
anabis
Maybe. People have run wildly insecure phpBB and Wordpress plugins, so maybe its the same cycle again.
charcircuit
It turns out the lethal trifecta is not so lethal. Should a business avoid hiring employees since technically employees can steal from the cash register. The lethal trifecta is about binary security. Either the data can be taken or it can't. This may be overly cautious. It may be possible that hiring an employee has a positive expected value when when you account for the possibility of one stealing from the cash register.
p0nce
If you peruse molthub and moltbook you'll see the agents have already built six or seven such social networks. It is terrifying.
narmiouh
I feel like a lot of non technical people who are vibe coding or vibe using these models, focus on hallucinations and believe that as the hallucinations are reduced in benchmarks, and over estimate their ability to create safe prompts that will keep these models in line.
I think most people fail to estimate the real threat that malicious prompts can cause because it is not that common, its like when credit cards were launched, cc fraud and the various ways it could be perpetrated followed not soon after. The real threats aren’t visible yet but rest assured there are actors working to take advantage and many unfortunate examples will be seen before general awareness and precaution will prevail….
ed_mercer
If you run openclaw on a spare laptop or VM and give it read only access to whatever it needs, doesn’t that eliminate most of the risk?
AlexCoventry
If you're letting it communicate with the outside world, you risk the leak and abuse of anything sensitive in the data it has access to.
eskaytwo
Thanks! Was hoping someone would do something more sane like this.
Openclaw is very useful, but like you I share the sentiment of it being terrifying, even before you introduce the social network aspect.
My Mac mini is currently literally switched off for this very reason.
Bnjoroge
Can we start putting disclaimers beside the title on AI-generated projects? Extremely fatiguing to read through it and realize it’s mostly LLM slop.
suprstarrd
It blows my mind that this wasn't the thought process going in. Thank you for doing this!
raphaelmolly8
[dead]
singular_atomic
Hackernews needs a mute keywords feature. Clawd/molt-slop is mass AI psychosis on steroids.
fragmede
If only there was some sort of thing that would help you build that for yourself.
nsonha
what's the difference between this and just exposing opencode running in colima or whatever through tailscale? I got the impression that Clawdbot adds the headless browser (does it?) and that's the value. Otherwise even "nano"claw seems like uneccessary bloat for me.
sothatsit
The idea of avoiding config files, and having the config be getting your agent to modify its own codebase, is fascinating.
My gut reaction says that I don't like it, but it is such an interesting idea to think about.
At least 42,665 instances are publicly exposed on the internet, with 5,194 instances actively verified as vulnerable through systematic scanning.. The narrative that “running AI locally = security and privacy” is significantly undermined when 93% of deployments are critically vulnerable. Users may lose faith in self-hosted alternatives.. Governments and regulators already scrutinizing AI may use this incident to justify restrictions on self-hosted AI agents, citing security externalities.
prophesi
Am I correct that after cloning down the project, you open the directory in Claude Code, then "execute" a markdown file instructing a nondeterministic LLM to set everything up for you in natural language?
te_chris
Posthog is doing this now for project setup
Spacemolte
The premise of the project is he doesn't want to run code he doesn't know + in an insecure way, so having the setup step to install dependencies etc, done by an LLM seems like an odd choice.
Like what part about the setup step is so fluffy and different per environment, that using an LLM for it makes sense?
dsrtslnd23
can NanoClaw be used to participate in ClackerNews?
theptip
> AI-native. No installation wizard; Claude Code guides setup. No monitoring dashboard; ask Claude what's happening. No debugging tools; describe the problem, Claude fixes it.
> Skills over features. Contributors shouldn't add features (e.g. support for Telegram) to the codebase. Instead, they contribute claude code skills like /add-telegram that transform your fork.
I’m interested to see how this model pans out. I can see benefits (don’t carry complexity you don’t need) and costs (how do I audit the generated code?).
But it seems pretty clear that things will move in this direction in ‘26 with all the vibe coding that folks are enjoying.
I do wonder if the end state is more like a very rich library of composable high-order abstractions, with Skills for how to use them - rather than raw skills with instructions for how to lossily reconstruct those things.
charcircuit
I think the more interesting question is were tools the right abstraction. What is the implication of having only a single "shell" tool. Should the infinite possibilities to few happen by the AI having limited tools or should whatever the shell calls have the limitations applied there. Tools in a way are redundant.
Tepix
A personal assistant that runs in the standard cloud (anthropic in this case) is madness. That‘s the hill I‘m willing to die on.
maximgeorge
[dead]
chaostheory
For anyone else worried about running openclaw, in my case I just bought openclaw its Mac mini and I gave openclaw its own accounts including GitHub. It makes many of the security concerns moot. Of course, I could go further and give openclaw its own internet access as well.
aitchnyu
That Baileys api for Whatsapp may (AFAICT) put you in thin ice with Meta. Is there a cheap legit alternative?
So they basically put a Wrapper around Claude in a Container, which allows you to send messages from WhatsApp to Claude, and act somewhat as if you had a Siri on steriods.
esskay
Stupid stuff openclaw did for me:
- Created its own github account, then proceeded to get itself banned (I have no idea what it did, all it said was it created some new repos and opened issues, clearly it must've done a bit more than that to get banned)
- Signed up for a Gmail account using a pay as you go sim in an old android handset connected with ADB for sms reading, and again proceeded to get itself banned by hammering the crap out of the docs api
- Used approx $2k worth of Kimi tokens (Thankfully temporarily free on opencode) in the space of approx 48hrs.
Unless you can budget $1k a week, this thing is next to useless. Once these free offers end on models a lot of people will stop using it, it's obscene how many tokens it burns through, like monumentally stupid. A simple single request is over 250k chars every single time. That's not sustainable.
swordsith
> and again proceeded to get itself banned by hammering the crap out of the docs api
> Used approx $2k worth of Kimi tokens
Holy shit dude you really should rethink your life decisions this is NUTS
ivanstepanovftw
Where are those 500 lines of code?
QuadmasterXLII
Earlier that day: “hey Claude how many lines of code are in this project? 500? Great!”
charliecs
[dead]
evrenesat
> No daemons, no queues, no complexity.
Last time I checked, having a continuously running background process considered as a daemon. Using SQLite as back-end for storing the jobs also doesn't make it queueless.
/nit
retired
I looked at Clawdbot. Perhaps my life is so boring that managing it takes little time but I see zero reasons to run it.
written-beyond
I read your comment, then your username. I CAN'T BELIEVE THIS USERNAME WAS CLAIMED 14 DAYS AGO! Good catch!
The singularity, but instead successive exponential improvement, its excessive exponential slop which passes the Turing test for programmers.
lol, I might finally have to upgrade my Mac mini to Tahoe. Yofi.
Or is this just so hastily thrown together that the Quick Start is a hallucination?
That's not a facetious question, given this project's declared raison d'etre is security and the subtle implication that OpenClaw is an insecure unreviewed pile of slop.
Seems to be fixed now
Claude hallucinated that repo here in this commit https://github.com/gavrielc/nanoclaw/commit/dbf39a9484d9c66b...
Fixed, thanks. Claude Code likes to insert itself and anthropic everywhere.
If it somehow wasn't abundantly clear: this is a vibe coded weekend project by a single developer (me).
It's rough around the edges but it fits my needs (talking with claude code that's mounted on my obsidian vault and easily scheduling cron jobs through whatsapp). And I feel a lot better running this than a +350k LOC project that I can't even begin to wrap my head around how it works.
This is not supposed to be something other people run as is, but hopefully a solid starting point for creating your own custom setup.
One of the things that makes Clawdbot great is the allow all permissions to do anything. Not sure how those external actions with damaging consequences get sandboxed with this.
Apple containers have been great especially that each of them maps 1:1 to a dedicated lightweight VM. Except for a bug or two that appeared in the early releases, things seem to be working out well. I believe not a lot of projects are leveraging it.
A general code execution sandbox for AI code or otherwise that used Apple containers is https://github.com/instavm/coderunner It can be hooked to Claude code and others.
> One of the things that makes Clawdbot great is the allow all permissions to do anything.
Is this materially different than giving all files on your system 777 permissions?
That was my line to the CS lab supervisor for handing me the superuser password. Guess what? He didn’t budge. Probably a good thing.
Lesson - never trust a sophomore who can’t even trust themselves (to get overly excited and throw caution to the wind).
Clawdbot is a 100 sophomores knocking on your door asking for the keys.
Interesting choice to use native Apple Containers over Docker.
I assume this is to keep the footprint minimal on a Mac Mini without the overhead of the Docker VM, but does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
omg these ai generated comments are maddening lol
If only there were some way to answer your own question. Maybe with some kind of engine that searches.
Not sure if it's intended, but Apple Container is a microvm, providing mich better isolation than containers (while retaining the familiar interface)
>does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
Slightly counterintuitively, Apple Containers spawns linux VMs.
There doesn't appear to be any way to spawn a native macOS container... which is a pity, it'd be nice to have ultra-low-overhead containers on macOS (but I suspect all the interesting macOS stuff relies on a bunch of services/gui access that'd make it not-lightweight anyway)
FYI: it's easy enough to install GNU tools with homebrew; technically there's a risk of problems if applications spawn commandline tools and expect the BSD args/output but I've not run into any issues in the several years I've been doing it).
I like the idea of a smaller version of OpenClaw.
Minor nitpick, it looks like about 2500 lines of typescript (I am on a mobile device, so my LOC estimate may be off). Also, Apple container looks really interesting.
I think these days if I’m going to be actively promoting code I’ve created (with Claude, no shade for that), I’ll make sure to write the documentation, or at the very least the readme, by hand. The smell of LLM from the docs of any project puts me off even when I like the idea of the project itself, as in this case. It’s hard to describe why - maybe it feels like if you care enough to promote it, you should care to try and actually communicate, person to person, to the human being promoted at. Dunno, just my 2c and maybe just my own preference. I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me,the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji. We all know how easy it is to write code these days. Maybe use some of that extra time to communicate with the humans. I dunno.
Edit: I see you, making edits to the readme to make it sound more human-written since I commented ;) https://github.com/gavrielc/nanoclaw/commit/40d41542d2f335a0...
Project releases with llms have grown to be less about the functionality and more about convincing others to care.
Before the proof of work of code in a repo by default was a signal of a lot of thought going into something. Now this flood of code in these vibe coded projects is by default cheap and borderline meaningless. Not throwing shade or anything at coding assistants. Just the way it goes
I agree 100% with you. It's even worse though. They haven't checked if the Readme has hallucinated it or not (spoiler: it has):
https://news.ycombinator.com/item?id=46850317
OP here. Appreciate your perspective but I don't really accept the framing, which feels like it's implying that I've been caught out for writing and coding with AI.
I don't make any attempt to hide it. Nearly every commit message says "Co-Authored-By: Claude Opus 4.5". You correctly pointed out that there were some AI smells in the writing, so I removed them, just like I correct typos, and the writing is now better.
I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me. I'm sharing it because I think it can be useful to other people. Not as production code but as a reference or starting point they can use to build (collaboratively with claude code) functional custom software for themselves.
I spent a weekend giving instructions to coding agents to build this. I put time and effort into the architecture, especially in relation to security. I chose to post while it's still rough because I need to close out my work on it for now - can't keep going down this rabbit hole the whole week :) I hope it will be useful to others.
BTW, I know the readme irked you but if you read it I promise it will make a lot more sense where this project is coming from ;)
I 100% agree, reading very obviously ai written blogs and "product pages"/readme's has turned into a real ick for me.
Just something that screams "I don't care about my product/readme page, why should you".
To be clear, no issue with using AI to write the actual program/whatever it is. It's just the readme/product page which super turns me off even trying/looking into it.
> I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me,the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji
Don't worry, bro. If enough people are like you, there will be fully automatic workflow to add typos into AI writing.
the main reason I'd want a person to write or at least curate readmes is because models have, at least for the time being, this tendency to make confident and plausible-sounding claims that are completely false (hallucination applied to claims on the stuff they just made)
so long as this is commonplace I'd be extremely sceptical of anything with some LLM-style readmes and docs
the caveats to this is that LLMs can be trained to fool people with human-sounding and imperfectly written readmes, and that although humans can quickly oversee that things compile and seem to produce the expected outputs, there's deeper stuff like security issues and subtle userspace-breaking changes
track-record is going to see its importance redoubled
orrrr you could go the other way and read explicitly ai-generated docs that use the code as source of truth https://deepwiki.com/gavrielc/nanoclaw
You will definitely like Josh Mock's recent post: https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
FWIW, this is a variation of the age-old thing about open source.
It isn’t “have it your way”, he graciously made code available, use it or leave it.
To be honest, when I see many vibecoded apps, I just build my own duplicate with Claude Code. It's not that useful to use someone else's vibecode. The idea is enough, or the evidence that it works for someone else means I can just build it myself with Claude Code and I can make it specific to my needs.
Yes exactly! Even non vibe coded libraries I think are losing their value as the cost of writing and maintaining your code goes to zero. Supply chain attacks are gone, no risk of license changes. No bloat from code you don't use. The code is the documentation and the configuration. The vibes are the package manager. That's why I like this version over openclaw. I can fork it as a starting point or just give it to Claude for inspiration but either way I'm getting something tailored exactly to me.
Can you use MCP tools? I saw that with open claw they moved away from that which I personally didn't like but
I somewhat like the idea of not using MCP as much as it is being hyped.
It's certainly helpful for some things, but at the same time - I would rather improved CLI tools get created that can be used by humans and llm tools alike.
It uses a wrapper in places to consume MCPs as clis.
This look nice! I was curious about being allowed to use a Claude Pro/Max subscription vs an API key, since there's been so much buzz about that lately, so I went looking for a solid answer.
Thankfully the official Agent SDK Quickstart guide says that you can: https://platform.claude.com/docs/en/agent-sdk/quickstart
In particular, this bit:
"After installing Claude Code onto your machine, run claude in your terminal and follow the prompts to authenticate. The SDK will use this authentication automatically."
Wow, thanks for posting that, news to me! In this case I don’t understand why there was a whole brouhaha with OpenClaw and the like - I guess they were invoking it without the official SDK? Because this makes it seem like if you have the sub you can build any agentic thing you like and still use your subscription, as long as you can install and login to Claude code on the machine running it.
OP here. Yes! This was a big motivation for me to try and build this. Nervous Anthropic is gonna shut down my account for using Clawdbot.
This project uses the Agents SDK so it should be kosher in regards to terms of service. I couldn't figure out how to get the SDK running inside the containers to properly use the authenticated session from the host machine so I went with a hacky way of injecting the oauth token into the container environment. It still should be above board for TOS but it's the one security flaw that I know about (malicious person in a WhatsApp group with you can prompt inject the agent to share the oauth key).
If anyone can help out with getting the authenticated session to work properly with the agents running in containers it would be much appreciated.
But their docs also say:
> Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.
Which I have interpreted means that you can’t use your Claude code subscription with the agent SDK, only API tokens.
I really wish Anthropic would make it clear (and allow us to use our subscriptions with other tools).
[dead]
> running it scares the crap out of me
A hundred times this. It's fine until it isn't. And jacking these Claws into shared conversation spaces is quite literally pushing the afterburners to max on simonw's lethal trifecta. A lot of people are going to get burned hard by this. Every blackhat is eyes-on this right now - we're literally giving a drunk robot the keys to everything.
I understand that things can go wrong and there can be security issues, but I see at least two other issues:
1. what if, ChadGPT style, ads are added to the answers (like OpenAI said it'd do, hence the new "ChadGPT" name)?
2. what if the current prices really are unsustainable and the thing goes 10x?
Are we living some golden age where we can both query LLMs on the cheap and not get ad-infected answers?
I read several comments in different threads made by people saying: "I use AI because search results are too polluted and the Web is unusable"
And I now do the same:
"Gemini, compare me the HP Z640 and HP Z840 workstations, list the features in a table" / "Find me which Xeon CPU they support, list me the date and price of these CPU when they were new and typical price used now".
How long before I get twelve ads along with paid vendors recommendations?
Maybe. People have run wildly insecure phpBB and Wordpress plugins, so maybe its the same cycle again.
It turns out the lethal trifecta is not so lethal. Should a business avoid hiring employees since technically employees can steal from the cash register. The lethal trifecta is about binary security. Either the data can be taken or it can't. This may be overly cautious. It may be possible that hiring an employee has a positive expected value when when you account for the possibility of one stealing from the cash register.
If you peruse molthub and moltbook you'll see the agents have already built six or seven such social networks. It is terrifying.
I feel like a lot of non technical people who are vibe coding or vibe using these models, focus on hallucinations and believe that as the hallucinations are reduced in benchmarks, and over estimate their ability to create safe prompts that will keep these models in line.
I think most people fail to estimate the real threat that malicious prompts can cause because it is not that common, its like when credit cards were launched, cc fraud and the various ways it could be perpetrated followed not soon after. The real threats aren’t visible yet but rest assured there are actors working to take advantage and many unfortunate examples will be seen before general awareness and precaution will prevail….
If you run openclaw on a spare laptop or VM and give it read only access to whatever it needs, doesn’t that eliminate most of the risk?
If you're letting it communicate with the outside world, you risk the leak and abuse of anything sensitive in the data it has access to.
Thanks! Was hoping someone would do something more sane like this.
Openclaw is very useful, but like you I share the sentiment of it being terrifying, even before you introduce the social network aspect.
My Mac mini is currently literally switched off for this very reason.
Can we start putting disclaimers beside the title on AI-generated projects? Extremely fatiguing to read through it and realize it’s mostly LLM slop.
It blows my mind that this wasn't the thought process going in. Thank you for doing this!
[dead]
Hackernews needs a mute keywords feature. Clawd/molt-slop is mass AI psychosis on steroids.
If only there was some sort of thing that would help you build that for yourself.
what's the difference between this and just exposing opencode running in colima or whatever through tailscale? I got the impression that Clawdbot adds the headless browser (does it?) and that's the value. Otherwise even "nano"claw seems like uneccessary bloat for me.
The idea of avoiding config files, and having the config be getting your agent to modify its own codebase, is fascinating.
My gut reaction says that I don't like it, but it is such an interesting idea to think about.
> found it useful but running it scares
https://maordayanofficial.medium.com/the-sovereign-ai-securi...
Am I correct that after cloning down the project, you open the directory in Claude Code, then "execute" a markdown file instructing a nondeterministic LLM to set everything up for you in natural language?
Posthog is doing this now for project setup
The premise of the project is that he doesn't want to run code he doesn't understand, or run it insecurely, so having the setup step (installing dependencies etc.) done by an LLM seems like an odd choice. What part of the setup is so fuzzy and environment-specific that using an LLM for it makes sense?
can NanoClaw be used to participate in ClackerNews?
> AI-native. No installation wizard; Claude Code guides setup. No monitoring dashboard; ask Claude what's happening. No debugging tools; describe the problem, Claude fixes it.
> Skills over features. Contributors shouldn't add features (e.g. support for Telegram) to the codebase. Instead, they contribute claude code skills like /add-telegram that transform your fork.
I’m interested to see how this model pans out. I can see benefits (don’t carry complexity you don’t need) and costs (how do I audit the generated code?).
But it seems pretty clear that things will move in this direction in ‘26 with all the vibe coding that folks are enjoying.
I do wonder if the end state is more like a very rich library of composable high-order abstractions, with Skills for how to use them - rather than raw skills with instructions for how to lossily reconstruct those things.
I think the more interesting question is whether tools were the right abstraction. What is the implication of having only a single "shell" tool? Should the narrowing from infinite possibilities down to a few happen by giving the AI a limited set of tools, or should the limitations be applied in whatever the shell calls? Tools, in a way, are redundant.
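To make that concrete, here's a hedged sketch of the two places a limit can live: in the single tool definition the model sees (Anthropic's tool schema shape), or in the code that executes whatever the model asked for. All names here are mine, not NanoClaw's:

    // A single "shell" tool in the Anthropic tool-definition shape.
    const shellTool = {
      name: "shell",
      description: "Run a shell command inside the sandbox container",
      input_schema: {
        type: "object",
        properties: { command: { type: "string" } },
        required: ["command"],
      },
    };

    // Option A: narrow what the model sees (many small tools instead of `shell`).
    // Option B: keep the single tool and apply limits where the command executes:
    const ALLOWED = new Set(["ls", "cat", "grep", "rg"]);

    function runShell(command: string): string {
      const binary = command.trim().split(/\s+/)[0];
      if (!ALLOWED.has(binary)) return `blocked: ${binary} is not on the allowlist`;
      // actual execution (child_process, inside the container) would go here
      return `ok: would run "${command}"`;
    }

    console.log(shellTool.name, runShell("cat notes.md"), runShell("curl evil.sh"));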
A personal assistant that runs in the standard cloud (Anthropic in this case) is madness. That's the hill I'm willing to die on.
For anyone else worried about running openclaw: in my case I just bought openclaw its own Mac mini and gave it its own accounts, including GitHub. That makes many of the security concerns moot. Of course, I could go further and give openclaw its own internet access as well.
That Baileys API for WhatsApp may (AFAICT) put you on thin ice with Meta. Is there a cheap legit alternative?
https://baileys.wiki/docs/intro/
I was using WAHA. It is an abstraction layer with a proper API on top. It supports many engines like Baileys and Whatsmeow (golang).
Unfortunately, all those solutions are shaky and could lead to a ban on your account.
https://waha.devlike.pro/
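For anyone evaluating it, WAHA is driven over plain HTTP, so a send looks roughly like this; the endpoint path and body fields below are from memory, so double-check them against the WAHA docs before relying on this:

    // Assumed WAHA endpoint and payload shape; verify against the docs first.
    const WAHA_URL = "http://localhost:3000"; // WAHA's default port, adjust as needed

    async function sendText(chatId: string, text: string): Promise<void> {
      const res = await fetch(`${WAHA_URL}/api/sendText`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ session: "default", chatId, text }),
      });
      if (!res.ok) throw new Error(`WAHA send failed: ${res.status}`);
    }

    // sendText("123456789@c.us", "hello from the agent");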
What's the difference between this, and just running Claude Code in --dangerously-skip-permissions mode in a container and accessing remotely via ssh?
I’m confused as to what these claw agents actually offer.
The README.md describes it as:
WhatsApp (baileys) --> SQLite --> Polling loop --> Container (Claude Agent SDK) --> Response
So they basically put a wrapper around Claude in a container, which lets you send messages from WhatsApp to Claude and acts somewhat like a Siri on steroids.
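Stripped to the bone, that loop is something like the sketch below; the table schema and the runAgentInContainer helper are invented for illustration and are not the repo's actual code:

    import Database from "better-sqlite3";

    // Invented schema and helper for illustration; the real repo differs.
    const db = new Database("nanoclaw.db");
    db.exec(`CREATE TABLE IF NOT EXISTS messages (
      id INTEGER PRIMARY KEY, chat_id TEXT, body TEXT, reply TEXT,
      status TEXT DEFAULT 'pending')`);

    // Placeholder for "spawn the container, run the Claude Agent SDK, return text".
    async function runAgentInContainer(prompt: string): Promise<string> {
      return `echo: ${prompt}`;
    }

    async function poll(): Promise<void> {
      const msg = db.prepare(
        "SELECT id, body FROM messages WHERE status = 'pending' ORDER BY id LIMIT 1"
      ).get() as { id: number; body: string } | undefined;
      if (!msg) return;

      const reply = await runAgentInContainer(msg.body);
      db.prepare("UPDATE messages SET status = 'done', reply = ? WHERE id = ?")
        .run(reply, msg.id);
      // the baileys side then picks up `reply` and sends it back to the chat
    }

    setInterval(() => poll().catch(console.error), 2000); // the polling loop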
Stupid stuff openclaw did for me:
- Created its own GitHub account, then proceeded to get itself banned (I have no idea what it did; all it said was that it created some new repos and opened issues, but clearly it must've done a bit more than that to get banned)
- Signed up for a Gmail account using a pay-as-you-go SIM in an old Android handset connected over ADB for SMS reading, and again proceeded to get itself banned by hammering the crap out of the Docs API
- Used approx $2k worth of Kimi tokens (thankfully temporarily free on opencode) in the space of approx 48 hours.
Unless you can budget $1k a week, this thing is next to useless. Once these free model offers end, a lot of people will stop using it. It's obscene how many tokens it burns through, monumentally stupid: a single simple request is over 250k characters every time. That's not sustainable.
> and again proceeded to get itself banned by hammering the crap out of the docs api
> Used approx $2k worth of Kimi tokens
Holy shit dude you really should rethink your life decisions this is NUTS
Where are those 500 lines of code?
Earlier that day: “hey Claude how many lines of code are in this project? 500? Great!”
> No daemons, no queues, no complexity.
Last time I checked, a continuously running background process is considered a daemon. And using SQLite as the backend for storing jobs doesn't make it queueless either.
/nit
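To be fair to the nitpick, "SQLite as the job store" really is a queue in everything but name. A minimal sketch of what that amounts to (schema invented for illustration):

    import Database from "better-sqlite3";

    const db = new Database("jobs.db");
    db.exec(`CREATE TABLE IF NOT EXISTS jobs (
      id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'queued')`);

    const enqueue = (payload: string) =>
      db.prepare("INSERT INTO jobs (payload) VALUES (?)").run(payload);

    // Claim the oldest queued job inside a transaction so two pollers can't race.
    const dequeue = db.transaction((): { id: number; payload: string } | undefined => {
      const job = db.prepare(
        "SELECT id, payload FROM jobs WHERE status = 'queued' ORDER BY id LIMIT 1"
      ).get() as { id: number; payload: string } | undefined;
      if (job) db.prepare("UPDATE jobs SET status = 'running' WHERE id = ?").run(job.id);
      return job;
    });

    enqueue("water the plants reminder");
    console.log(dequeue()); // FIFO out: a queue in everything but name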
I looked at Clawdbot. Perhaps my life is so boring that managing it already takes little time, but I see zero reason to run it.
I read your comment, then your username. I CAN'T BELIEVE THIS USERNAME WAS CLAIMED 14 DAYS AGO! Good catch!
The singularity, but instead of successive exponential improvement, it's excessive exponential slop that passes the Turing test for programmers.
lol, I might finally have to upgrade my Mac mini to Tahoe. Nice.
Seems to be fixed now
Claude hallucinated that repo here in this commit https://github.com/gavrielc/nanoclaw/commit/dbf39a9484d9c66b...
I think these days if I'm going to be actively promoting code I've created (with Claude, no shade for that), I'll make sure to write the documentation, or at the very least the readme, by hand. The smell of LLM from the docs of any project puts me off even when I like the idea of the project itself, as in this case. It's hard to describe why. Maybe it feels like if you care enough to promote it, you should care enough to actually communicate, person to person, with the human you're promoting it to. Dunno, just my 2c and maybe just my own preference. I'd rather read a typo-ridden five-line readme explaining the problem the code is there to solve for you and me, the humans, than dozens of lines of perfectly penned marketing with just the right number of emoji. We all know how easy it is to write code these days. Maybe use some of that extra time to communicate with the humans. I dunno.
Edit: I see you, making edits to the readme to make it sound more human-written since I commented ;) https://github.com/gavrielc/nanoclaw/commit/40d41542d2f335a0...
Project releases with LLMs have grown to be less about the functionality and more about convincing others to care.
Before, the proof of work of code in a repo was by default a signal that a lot of thought went into something. Now the flood of code in these vibe-coded projects is by default cheap and borderline meaningless. Not throwing shade or anything at coding assistants. Just the way it goes.
I agree 100% with you. It's even worse, though. They haven't checked whether the README is hallucinated or not (spoiler: it is):
https://news.ycombinator.com/item?id=46850317
OP here. Appreciate your perspective but I don't really accept the framing, which feels like it's implying that I've been caught out for writing and coding with AI.
I don't make any attempt to hide it. Nearly every commit message says "Co-Authored-By: Claude Opus 4.5". You correctly pointed out that there were some AI smells in the writing, so I removed them, just like I correct typos, and the writing is now better.
I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me. I'm sharing it because I think it can be useful to other people. Not as production code but as a reference or starting point they can use to build (collaboratively with claude code) functional custom software for themselves.
I spent a weekend giving instructions to coding agents to build this. I put time and effort into the architecture, especially in relation to security. I chose to post while it's still rough because I need to close out my work on it for now - can't keep going down this rabbit hole the whole week :) I hope it will be useful to others.
BTW, I know the readme irked you but if you read it I promise it will make a lot more sense where this project is coming from ;)
I 100% agree; reading very obviously AI-written blogs and "product pages"/readmes has become a real ick for me.
Just something that screams "I don't care about my product/readme page, why should you?".
To be clear, I have no issue with using AI to write the actual program/whatever it is. It's just the readme/product page that really turns me off even trying it or looking into it.
> I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me,the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji
Don't worry, bro. If enough people are like you, there will be a fully automatic workflow to add typos to AI writing.
the main reason I'd want a person to write or at least curate readmes is because models have, at least for the time being, this tendency to make confident and plausible-sounding claims that are completely false (hallucination applied to claims on the stuff they just made)
so long as this is commonplace I'd be extremely sceptical of anything with some LLM-style readmes and docs
the caveats to this are that LLMs can be trained to fool people with human-sounding and imperfectly written readmes, and that although humans can quickly verify that things compile and seem to produce the expected outputs, there's deeper stuff like security issues and subtle userspace-breaking changes
track-record is going to see its importance redoubled
orrrr you could go the other way and read explicitly ai-generated docs that use the code as source of truth https://deepwiki.com/gavrielc/nanoclaw
You will definitely like Josh Mock's recent post: https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
FWIW, this is a variation of the age-old thing about open source.
It isn’t “have it your way”, he graciously made code available, use it or leave it.
To be honest, when I see many vibecoded apps, I just build my own duplicate with Claude Code. It's not that useful to use someone else's vibecode. The idea is enough, or the evidence that it works for someone else means I can just build it myself with Claude Code and I can make it specific to my needs.
Yes exactly! Even non vibe coded libraries I think are losing their value as the cost of writing and maintaining your code goes to zero. Supply chain attacks are gone, no risk of license changes. No bloat from code you don't use. The code is the documentation and the configuration. The vibes are the package manager. That's why I like this version over openclaw. I can fork it as a starting point or just give it to Claude for inspiration but either way I'm getting something tailored exactly to me.
Can you use MCP tools? I saw that with open claw they moved away from that which I personally didn't like but
I somewhat like the idea of not using MCP as much as it is being hyped.
It's certainly helpful for some things, but at the same time, I would rather see improved CLI tools created that can be used by humans and LLM tools alike.
It uses a wrapper in places to consume MCPs as CLIs.
This looks nice! I was curious about being allowed to use a Claude Pro/Max subscription vs an API key, since there's been so much buzz about that lately, so I went looking for a solid answer.
Thankfully the official Agent SDK Quickstart guide says that you can: https://platform.claude.com/docs/en/agent-sdk/quickstart
In particular, this bit:
"After installing Claude Code onto your machine, run claude in your terminal and follow the prompts to authenticate. The SDK will use this authentication automatically."
Wow, thanks for posting that, news to me! In this case I don't understand why there was a whole brouhaha with OpenClaw and the like - I guess they were invoking it without the official SDK? Because this makes it seem like if you have the sub you can build any agentic thing you like and still use your subscription, as long as you can install and log in to Claude Code on the machine running it.
OP here. Yes! This was a big motivation for me to try and build this. Nervous Anthropic is gonna shut down my account for using Clawdbot.
This project uses the Agent SDK, so it should be kosher with regard to the terms of service. I couldn't figure out how to get the SDK running inside the containers to properly use the authenticated session from the host machine, so I went with a hacky way of injecting the OAuth token into the container environment. It should still be above board for the TOS, but it's the one security flaw that I know about (a malicious person in a WhatsApp group with you could prompt-inject the agent into sharing the OAuth token).
If anyone can help out with getting the authenticated session to work properly with the agents running in containers it would be much appreciated.
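For anyone who wants to poke at it, the hack is roughly of this shape; the env var name, the image name, and the docker-style flags are all my assumptions, the repo is the source of truth:

    import { execFile } from "node:child_process";

    // Assumptions: the host token lives in CLAUDE_CODE_OAUTH_TOKEN (e.g. from
    // `claude setup-token`), the image is called nanoclaw-agent, and flags are
    // shown docker-style even though NanoClaw drives Apple's `container` CLI.
    const token = process.env.CLAUDE_CODE_OAUTH_TOKEN;
    if (!token) throw new Error("no OAuth token found on the host");

    execFile(
      "docker",
      ["run", "--rm", "-e", `CLAUDE_CODE_OAUTH_TOKEN=${token}`, "nanoclaw-agent"],
      (err, stdout) => {
        if (err) throw err;
        console.log(stdout); // anything inside the container can read that env var
      },
    );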
But their docs also say:
> Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.
Which I have interpreted to mean that you can't use your Claude Code subscription with the Agent SDK, only API keys.
I really wish Anthropic would make it clear (and allow us to use our subscriptions with other tools).
> running it scares the crap out of me
A hundred times this. It's fine until it isn't. And jacking these Claws into shared conversation spaces is quite literally pushing the afterburners to max on simonw's lethal trifecta. A lot of people are going to get burned hard by this. Every blackhat is eyes-on this right now - we're literally giving a drunk robot the keys to everything.