Building a Home AI Hub: ZeroClaw as Your Always-On Family Assistant

ZeroClaws.io

@zeroclaws

February 25, 2026

6 min read

My neighbor asked me last month how to set up an AI assistant for her family. Her kids were using ChatGPT for homework help, her husband wanted it for recipe ideas, and she was using it for work. Three separate subscriptions, $60 a month, and every conversation going through servers she'd never thought about. "There has to be a better way," she said.

There is. An old laptop, ZeroClaw, and a family Telegram group. That's the whole setup. One AI assistant for everyone, running on hardware you already own, costing less per month than a single streaming subscription.

Why the Economics Work

The cloud AI subscription model is built around individuals. ChatGPT Plus is $20 per person per month. Claude Pro is $20 per person per month. For a family of four, that's $80 a month — $960 a year — for the privilege of having AI assistance. And every question your kids ask about homework, every recipe your partner looks up, every work question you type goes through someone else's servers, stored in someone else's database, subject to data retention policies you've probably never read.

A home AI hub inverts this model. You pay once for hardware — or use something you already own — and pay a small monthly API fee that covers the whole family. The AI processing still happens at Anthropic or OpenAI, but the routing, the memory, and the conversation history all live on your machine. If you add Ollama for local models, even the AI processing stays home.

ZeroClaw makes this practical because of its resource footprint. Most AI agent runtimes need a dedicated server with gigabytes of RAM. ZeroClaw uses 4MB. An old laptop from 2016 with 8GB of RAM can run ZeroClaw using 0.05% of its memory. Close the lid, plug it in, and it runs silently in the corner of a room, drawing about as much power as a phone charger.

The Hardware Question

The honest answer is that you probably already have something that works. An old laptop that's been sitting in a drawer since you upgraded — plug it in, install Linux or leave it on macOS, and it's your AI server. A Raspberry Pi 4 or 5 costs $35-80 and is silent, tiny, and draws minimal power. A Mac Mini that's been replaced by a newer model is perfect. Even an old Android phone can run ZeroClaw through Termux, though that's more of a curiosity than a recommendation for most families.

The only real requirement is that the machine can stay on and connected to the internet. ZeroClaw itself needs almost nothing — the bottleneck is always the network connection to your AI provider, not the hardware running the agent. A $35 Raspberry Pi 4 handles hundreds of simultaneous conversations without breaking a sweat, because ZeroClaw's job is message routing, not AI inference.

Setting It Up

The setup takes about fifteen minutes, most of which is waiting for things to download.

Install ZeroClaw with the bootstrap script:

```bash
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/scripts/bootstrap.sh | bash
```

Create a Telegram bot through @BotFather — it takes about two minutes and gives you a token. Create a family group, add your family members, add the bot, and get the group chat ID. Then configure ZeroClaw:

```toml
[ai]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
api_key = "sk-ant-..."

[ai.system_prompt]
text = """You are a helpful family assistant. Be friendly, patient, and age-appropriate. For homework questions, guide rather than give direct answers — help kids think through problems. Keep responses concise in group chat."""

[channels.telegram]
token = "YOUR_BOT_TOKEN"
allowed_chats = [-100123456789]
```

The system prompt is where you customize the assistant's personality for your family. The "guide rather than give direct answers" instruction for homework is important — it means the bot helps kids think through problems rather than just handing them answers. You can add dietary preferences for recipe suggestions, note family members' interests, or set any other context that makes the assistant more useful for your specific household.
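For example, the prompt can carry household context directly. A sketch, with placeholder details you would swap for your own family's:

```toml
[ai.system_prompt]
text = """You are a helpful family assistant. Be friendly, patient, and age-appropriate.
For homework questions, guide rather than give direct answers.
Household context: one family member is vegetarian, so dinner suggestions
should always include a vegetarian option. The kids are 9 and 13; keep
explanations at their level."""
```

Anything you'd otherwise repeat in every conversation belongs here, since the prompt is sent with each request.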

Set it up as a systemd service so it starts automatically on boot and restarts if it crashes:

```bash
sudo tee /etc/systemd/system/zeroclaw.service << 'EOF'
[Unit]
Description=Family AI Assistant
After=network-online.target

[Service]
Type=simple
User=pi
ExecStart=/home/pi/.cargo/bin/zeroclaw start
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable --now zeroclaw
```

That's it. Your family AI assistant is live, will survive reboots, and restarts automatically if anything goes wrong.

What It Actually Feels Like

The cross-channel memory is what makes this genuinely useful rather than just technically interesting. When your kid tells the bot about a school project on Monday, it remembers on Thursday when they ask a follow-up question. When you mention that someone in the family is vegetarian, every recipe suggestion accounts for it. The context accumulates over time, and the assistant gets more useful the longer you use it.

In practice, the family group chat becomes a place where anyone can ask anything. Homework questions get thoughtful guidance rather than direct answers. Recipe requests get suggestions that account for what's in the fridge and who's eating. Scheduling questions get remembered and recalled. Trivia debates get settled. Language questions get answered. The assistant is present in the same place where the family already communicates, which means it actually gets used — unlike a separate app that requires switching contexts.

The safety configuration is worth spending a few minutes on if you have younger kids. ZeroClaw lets you set content filters and block specific topics, and you can set per-user rules if family members have individual chats with the bot in addition to the group. The configuration is simple TOML, not a complex admin interface.
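As a sketch of what that might look like (the section and key names here are illustrative, not guaranteed to match ZeroClaw's exact schema; check the docs for your version):

```toml
[safety]                        # illustrative section and key names
content_filter = "strict"       # stricter filtering for a family bot
blocked_topics = ["gambling", "adult-content"]

[safety.per_user.kid_account]   # per-user override for an individual chat
content_filter = "kids"
```

The point is less the specific keys than the shape: a few lines of TOML per rule, versionable and auditable like the rest of the config.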

The Cost Reality

The math is straightforward. A family of four using cloud AI subscriptions pays $80/month, $960/year. A home AI hub costs roughly $5-15/month in API fees for typical family usage — a few hundred messages a day — plus $1-3/month in electricity. That's $6-18/month total, for the whole family.
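The figures above are quick shell arithmetic away (the hub numbers are the article's estimates, not measured costs):

```shell
# Annual cost of four individual cloud subscriptions at $20/person/month
cloud_annual=$(( 20 * 4 * 12 ))

# Annual cost of the home hub at the low ($6/mo) and high ($18/mo) estimates
hub_low=$(( 6 * 12 ))
hub_high=$(( 18 * 12 ))

echo "Cloud subscriptions: \$${cloud_annual}/year"
echo "Home hub: \$${hub_low}-\$${hub_high}/year"
```

Even at the high end of the estimate, the hub runs at less than a quarter of the subscription cost.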

If you want complete privacy and zero ongoing cost, add Ollama on the same machine and point ZeroClaw at a local model. The quality is lower than frontier cloud models, but for homework help, recipes, and general questions, a local llama3.1:8b model is perfectly capable. Nothing leaves your home network. The API cost drops to zero.
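Pointing ZeroClaw at Ollama is a config change, not new infrastructure. A sketch, assuming Ollama is already serving on its default port (the `[ai]` keys mirror the earlier config; `base_url` is an assumption about the schema):

```toml
[ai]
provider = "ollama"
model = "llama3.1:8b"
base_url = "http://localhost:11434"  # Ollama's default local endpoint; key name is an assumption
```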

The hybrid approach is the practical sweet spot for most families: local model for simple questions, cloud model for complex ones. ZeroClaw handles the routing automatically based on a complexity threshold you configure. In practice, 80-90% of typical family queries are handled locally for free; the ones that genuinely need frontier-level reasoning fall back to the cloud.
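A hedged sketch of what a hybrid config could look like (section and key names are illustrative; consult the ZeroClaw docs for the actual schema):

```toml
[ai.routing]                        # illustrative; actual schema may differ
local_model = "llama3.1:8b"         # handles simple queries at zero API cost
cloud_model = "claude-sonnet-4-20250514"
complexity_threshold = 0.6          # queries scoring above this go to the cloud model
```

The threshold is the knob: lower it and more traffic stays local and free; raise it and more questions get frontier-level answers.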

The Bigger Picture

There's something worth noting about what this setup represents beyond the cost savings. Your family's AI conversations — the homework questions, the recipe ideas, the scheduling, the random curiosity — are yours. They live on your hardware, in a SQLite file you can back up, inspect, or delete at any time. They're not training data for someone else's model. They're not subject to a company's data retention policy changing next year.

The tools to make this work have existed for a while, but they required significant technical expertise to set up. ZeroClaw's 4MB footprint and single-binary deployment changed that. The barrier is now low enough that anyone who can follow a tutorial can have a private, always-on AI assistant running on hardware they already own. The question isn't whether it's possible — it's whether you'd rather keep paying $80 a month for the cloud version.
