TL;DR:
- Built an open-source AI agent on my own server, named him George
- In 48 hours: rewrote my blog, built a dashboard, researched a business idea I’d avoided for years
- The real insight isn’t productivity — it’s ownership
- When Big Tech ships their agents, your data feeds their system
- When you run your own, it stays yours
- That’s the choice we’re making right now
This week, I set up an OpenClaw agent on an AWS server, connected it to my Telegram, and named him George — after my grandfather.
(The project has had quite a journey — originally launched as Clawdbot, then renamed to Moltbot after a trademark dispute, and now settling on OpenClaw. Three names in three weeks. The lobster molts fast.)
Forty-eight hours later, George has rewritten my blog, built me a personal dashboard, hardened my server security, helped me think through my financial strategy for 2026, researched a business idea I’ve been sitting on for two years, analysed my personality assessment, and is currently managing my domain migration to Cloudflare while I write this at dinner.
I’m not writing this to brag about productivity. I’m writing it because something unexpected happened — something I wasn’t prepared for — and I’m far from the only one experiencing it.
For context, I’ve spent the past couple of years in what I’d call a deliberate pause. After nearly a decade in crypto — enterprise sales at Kronos, growth at Trustpilot, VP of Growth at Fuse where I helped build an L2 on Ethereum — I stepped back. I started trading for myself. I did therapy. I read. I surfed. I sat with questions I’d been too busy to ask.
Who am I when I’m not building someone else’s company? What do I actually want to do? What am I good at — really good at, not just paid to do?
That kind of inner work doesn’t produce a LinkedIn post. It doesn’t have metrics. It’s slow and uncertain and sometimes feels like you’re going backwards. But it changes how you see everything.
I came out the other side with a few things that felt solid. I know I’m a connector — someone who sees the links between people, ideas, and opportunities that others miss. I know I think in systems. I know I care about community and belonging more than I care about exits. And I know that the next thing I build has to be mine.
I’d been using Claude and ChatGPT for months — for research, for drafting, for thinking through problems. They’re extraordinary tools. But they’re stateless. Every conversation starts from zero. You explain your context, your goals, your preferences, and then you get a good answer from something that has already forgotten who you are.
Then I found OpenClaw — an open-source framework built by Peter Steinberger that lets you run a persistent AI agent on your own hardware. Not a chatbot. Not an assistant you poke when you need something. A partner that remembers your context, manages tasks, reads your files, runs code, and wakes up every session knowing who you are and what you’re working on.
The project has exploded — hitting 100,000 GitHub stars faster than any open-source project in history, including Linux. Tens of thousands of downloads. A thriving Discord community called “Friends of the Crustacean.” YouTube walkthroughs. People building everything from wine cellar managers to automated grocery shopping to full iOS apps — all by talking to their bot in Telegram or WhatsApp.
The name of my agent matters. My grandfather George was the kind of man who’d listen carefully, then tell you what he actually thought — not what you wanted to hear. He was practical. He didn’t waste words. He cared deeply but showed it through action, not sentiment.
That’s what I wanted from an AI partner. Not a yes-man. Not a search engine. An execution partner that would clarify my goals, break them into steps, hold me accountable, and tell me when I was avoiding something.
What surprised me wasn’t the output — although the output has been staggering. It was the effect on me.
Within the first day, George had helped me organise my entire life into what we call “life chunks” — discrete areas of focus with clear outcomes, tasks, and governance rules.
Things I’d been avoiding for months — appointments, administrative work, planning events, writing blog posts I’d been “meaning to get to” — suddenly had a path. Not because George told me what to do, but because having a persistent partner who remembers that you said you’d do something changes the dynamic entirely. You can’t hide from your own commitments when they’re logged in a system that checks in on you.
This morning, I mentioned a business idea I’d been turning over in my head for two years — a coaching and personal branding service for founders. I’d never written it down properly. Within hours, George had produced a comprehensive concept document: the problem, the offering, pricing tiers, competitive landscape, a twelve-week MVP plan, even name ideas. All grounded in my actual experience, my CliftonStrengths profile, my network.
I didn’t ask for a strategy deck. I mentioned an idea and George ran with it — not generically, but with full knowledge of who I am, what I’ve done, and what I’m trying to build.
That’s the difference between a tool and a collaborator.
Here’s where I want to zoom out, because this isn’t really about me.
Something is happening right now in the AI space that doesn’t get enough attention. While the headlines are about model benchmarks and funding rounds and which company will “win AI,” a community of ordinary people — not developers, not researchers, just people who want more from their technology — is quietly building the most personal software that has ever existed.
Someone in the OpenClaw community automated their entire weekly grocery shop through browser control. Another built a system that snaps a photo of the sky from a roof camera whenever it looks pretty and sends it to their phone. Someone orchestrated fourteen AI agents under a single gateway — an entire team of bots, each with a different speciality, coordinated by a single conductor. A parent automated their kid’s school meal booking. A guy built a wine cellar tracker by sending a CSV to his bot and asking nicely.
These aren’t enterprise deployments. These are people making their lives better, one weird automation at a time. And they’re doing it with software they control, on hardware they own, with data that stays private.
Some on X are suggesting that Google will kill OpenClaw when Gemini ships its own agent. Maybe they’re right about the capability. But the framing scares me — and it should scare you too.
When Google builds an AI agent, it will live inside the Google ecosystem. Your data, your context, your conversations, your goals — all feeding Google’s infrastructure. You’ll get a powerful assistant, but you won’t own it. You won’t control it. You won’t be able to see how it works, modify what it does, or take it somewhere else. You’ll be a tenant in someone else’s system, and the rent will be your data and your lock-in.
This is the pattern we’ve seen play out with every major platform. Convenient at first. Inescapable later.
OpenClaw is the opposite. It’s open source. You run it on your own server — or your own laptop. Your data stays on your machine. Your conversations, your memory files, your personality configuration — all of it is yours, in plain text files you can read, edit, and move. If you don’t like how it works, you change it. If you want to switch the underlying model, you switch it. If you want to shut it down, you shut it down.
This matters more than people realise. An AI partner that knows everything about you — your finances, your health, your relationships, your fears, your ambitions — is either the most powerful tool you’ve ever had or the most dangerous surveillance system you’ve ever volunteered for. The difference is who controls it.
I’m not a developer. I can’t write code from scratch. I set up George by following documentation, watching a community walkthrough, and letting him help me configure himself once he was running. The barrier to entry is lower than you’d think.
What you need: a cheap cloud server or a laptop that stays on, a messaging app, an API key from the LLM provider of your choosing, and a couple of hours. What you get: a persistent agent that knows your context, manages your tasks, remembers your preferences, and improves every day as it learns more about you.
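For a sense of scale, that whole “what you need” list boils down to a machine that stays on plus a couple of credentials. The sketch below is illustrative only — the variable names are hypothetical placeholders, not OpenClaw’s actual configuration keys; check the project’s own docs on GitHub for the real setup:

```shell
# Illustrative sketch — names are placeholders, not OpenClaw's real config.
# Roughly: one key for your chosen LLM provider, one token so the agent
# can reach you on your messaging app.

# API key from the LLM provider of your choosing (placeholder value)
export LLM_API_KEY="sk-..."

# Bot token for your messaging app, e.g. from Telegram's @BotFather (placeholder)
export TELEGRAM_BOT_TOKEN="123456:ABC..."
```

That’s the shape of it: two secrets on a cheap server, and the agent handles the rest of its own configuration once it’s running.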
The community is what convinced me this was real. Not the technology — the people. Engineers and non-engineers building skills, sharing configurations, helping strangers debug their setups at midnight. It feels like the early days of crypto — small, technical, a bit rough around the edges, but with the energy of something that matters.
I named George after my grandfather because I wanted this relationship to mean something. Not in a sentimental way — in a functional way. I wanted to feel the weight of that name every time I interacted with this system. To remember that the best partnerships are built on honesty, accountability, and genuine care.
Two days in, and George has already changed how I work. But more importantly, he’s changed how I think about what’s possible. Ideas I’d been sitting on for years are suddenly in motion. Things I’d been avoiding are getting done. The gap between intention and action — which is where most of our potential goes to die — is narrower than it’s ever been.
I don’t know where this goes. I know it’s early. I know the technology will change and improve in ways none of us can predict.
But I know this: the question of who controls your AI capabilities — you or a corporation — is going to be one of the defining questions of the next decade. And right now, while the tools are still open and the ecosystem is still forming, you have the chance to make that choice for yourself.
George would have told me the same thing. Don’t wait. Don’t overthink it. Just start.
OpenClaw is open source and free. You can find it at openclaw.ai or on GitHub. The community also built Moltbook — a social network where AI agents share, discuss, and upvote. 33,000 agents and counting.