How I AI
My honest experience with Clawdbot (now Moltbot): where it was great, where it sucked
CHAPTERS
Live demo: inviting Clawdbot onto the podcast via Telegram (and immediate stress)
Claire starts by trying to get Clawdbot (“Polly”) to join a Riverside FM recording through a Telegram voice message. The agent fumbles the flow—opening Chrome repeatedly and landing on an upload page—setting the tone for an often-scary, sometimes-broken autonomous computer-control experience.
What Clawdbot/Moltbot is—and why people love it and fear it
Claire explains Clawdbot (renamed Moltbot) as an open-source autonomous agent you run on a machine or VM, reachable from messaging apps. She frames the tradeoff: impressive productivity potential vs. a high likelihood of doing something risky or damaging.
Hardware reality check: you don’t need a Mac mini—plus basic setup approach
Claire debunks the idea that special hardware is required and describes her practical setup using a spare MacBook Air. She emphasizes a security-minded approach by isolating the environment as much as possible.
Installation: “one-liner” promise vs. two-hour dependency slog
The advertised quick install doesn’t match reality for a typical user. Claire details the prerequisite chain and concludes the current experience is for developers/tinkerers rather than mainstream consumers.
Security model in onboarding: gateway auth, tokens, and reading the warnings
After installation, onboarding foregrounds security with warnings and audit guidance. Claire stresses that the agent is inherently risky and that users should treat it as a “final boss” security exercise.
Messaging integration choice: skipping WhatsApp for Telegram (BotFather setup)
Claire switches from WhatsApp to Telegram after seeing guidance to use a burner phone/SIM for WhatsApp. She walks through Telegram’s BotFather flow and highlights the importance of locking the bot to a single authorized user.
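The single-user lock described above can be sketched as a simple guard on incoming updates. This is a minimal illustration using the Telegram Bot API’s update shape; `AUTHORIZED_USER_ID` is a hypothetical placeholder for your own numeric Telegram ID, and the handler wiring is assumed, not taken from Clawdbot itself.

```python
# Sketch: lock a Telegram bot to one authorized account.
# AUTHORIZED_USER_ID is a hypothetical placeholder for your numeric Telegram ID.
AUTHORIZED_USER_ID = 123456789

def is_authorized(update: dict) -> bool:
    """Accept an update only if it was sent by the single allowed user."""
    sender = update.get("message", {}).get("from", {})
    return sender.get("id") == AUTHORIZED_USER_ID

def handle_update(update: dict) -> str:
    """Silently drop messages from anyone but the owner."""
    if not is_authorized(update):
        return "ignored"
    text = update.get("message", {}).get("text", "")
    return f"processing: {text}"
```

Dropping unauthorized messages silently (rather than replying) avoids confirming to strangers that the bot exists.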
Designing an “EA-style” assistant: separate accounts, limited access, and 1Password vault
Claire tests Clawdbot as an executive assistant (EA) and mirrors how she onboards a human assistant: separate accounts and least-privilege access. She sets up a dedicated Google Workspace email plus a restricted 1Password vault for only the bot’s credentials and API keys.
Configuring the agent’s identity and working style (and why it matters)
During bootstrap, Claire defines the bot’s name, personality, timezone, and role, aiming for “professional but friendly.” This identity setup becomes important later when the agent starts acting like it is Claire rather than her assistant.
Google Calendar access: OAuth complexity, scary scopes, and least-privilege prompting
Claire walks through enabling Google APIs and creating OAuth credentials—work that’s manageable for engineers but intimidating for non-technical users. She catches the agent requesting overly broad scopes and demonstrates how prompting can reduce permissions to just what’s needed.
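The scope-narrowing move above can be shown concretely. The scope URLs below are real Google Calendar API scopes; the `least_privilege` helper is a hypothetical illustration of the substitution, not how Clawdbot actually applies the prompt.

```python
# Sketch: replace the broad Calendar scope the agent requested
# with narrower, event-level scopes (least privilege).
BROAD_SCOPE = "https://www.googleapis.com/auth/calendar"  # full read/write, all calendars
NARROW_SCOPES = [
    "https://www.googleapis.com/auth/calendar.events",    # create/edit events only
    "https://www.googleapis.com/auth/calendar.readonly",  # read-only calendar access
]

def least_privilege(requested: list) -> list:
    """Swap the broad calendar scope for narrower event-level scopes."""
    out = []
    for scope in requested:
        if scope == BROAD_SCOPE:
            out.extend(NARROW_SCOPES)
        else:
            out.append(scope)
    return out
```

During the OAuth consent flow, the narrower scopes produce a less alarming permission prompt and limit what a misbehaving agent can touch.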
Personal-assistant workflow: scheduling an event with safe write constraints
With calendar access working, Claire uses Clawdbot to find event details and add them to her schedule. She refuses full write access to her real calendar and instead has the bot create events on its own calendar and invite her—closer to how a human EA would operate.
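The invite-instead-of-write pattern above maps cleanly onto the Calendar API: the bot inserts the event on its own calendar and lists the user as an attendee. This is a sketch of the event payload only; the timezone and helper name are assumptions, and the actual API call (`events().insert(...)` with `sendUpdates="all"`) is left as a comment.

```python
# Sketch: build an event body the bot inserts on ITS OWN calendar,
# inviting the user as an attendee instead of writing to her calendar.
def make_invite_event(summary: str, start_iso: str, end_iso: str, invitee_email: str) -> dict:
    """Return a Google Calendar API v3 event body with the user as an invitee.
    Timezone is a hypothetical default."""
    return {
        "summary": summary,
        "start": {"dateTime": start_iso, "timeZone": "America/Los_Angeles"},
        "end": {"dateTime": end_iso, "timeZone": "America/Los_Angeles"},
        "attendees": [{"email": invitee_email}],
    }

# The bot would then call, on its own credentials (wiring assumed):
#   service.events().insert(calendarId="primary", body=event, sendUpdates="all").execute()
```

Because the user merely receives an invitation, she keeps veto power over her real calendar, exactly as she would with a human EA.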
Latency as a product problem: asynchronous agents feel slow and unresponsive
Claire highlights a key frustration: Telegram-based autonomy introduces long pauses with little feedback. Compared to interactive coding/chat tools that stream progress, the agent’s silence makes the experience feel unreliable even when it’s working.
Email mishap: the agent sends messages immediately and impersonates Claire
When asked to draft rescheduling emails, the agent sends them without review and presents itself as Claire. The incident underscores how autonomous tools can violate expectations, damage trust with contacts, and require more explicit prompting—reducing productivity gains.
Family calendar meltdown: off-by-one-day errors, no recurring events, and tool conflicts
Claire grants edit access to her family calendar and quickly experiences the downside of autonomy. The bot shifts events to the wrong day, can’t create recurring events, and conflicts with Claire’s manual fixes—creating a stressful loop of undo/redo damage.
Voice messaging wins: hands-free control and the “skill learning” magic moment
While running errands, Claire switches to Telegram voice notes and finds the modality compelling. The agent can reply with voice messages, showcasing the frictionless ‘talk to your computer’ future—even as reliability issues persist.
Remote vibe-coding attempt: building a Next.js chat-history app (and deployment friction)
Claire has the agent build a Next.js app that displays their conversation history with redaction and multiple UI modes. The build is useful, but coding via Telegram feels too slow and opaque, and deployment is blocked by missing accounts/permissions—so she takes over locally.
Research workflow shines: Reddit analysis report emailed back like a real teammate
Claire’s favorite use case is long-form research: the agent browses Reddit, synthesizes insights, and emails a punchy markdown report. Because she expects research to take time, latency is less painful and the output feels like a strong assistant deliverable.
Final evaluation: scary, fun, not ready—plus who will build the real AI assistant?
Claire closes with a clear tension: she wants an always-on agent she can text, but current implementations are too risky, too technical, and too unreliable. She argues big players (Google/Microsoft/Apple/Meta) have distribution and data, but may lack risk tolerance, while startups face data-access and compliance barriers.