This free checklist helps you prove the first useful loop: Telegram message in, OpenClaw assistant response out, with the right model path and workspace context. Use it before you move to VPS hosting, local models, groups, advanced tools, or heavy automation.

Most failed private-assistant setups are not blocked by one hard problem. They are blocked by changing too many layers at once.

1. Model access works
Confirm OpenClaw can answer a simple prompt before you involve Telegram, VPS hosting, or local model tuning.
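OpenClaw's own commands vary by version, so a vendor-neutral smoke test is to hit the model endpoint directly. A minimal sketch, assuming an OpenAI-compatible API; the base URL and model name below are placeholders, not OpenClaw defaults:

```python
import json
import urllib.request

# Placeholders -- substitute whatever your OpenClaw config actually points at.
BASE_URL = "http://localhost:11434/v1"
MODEL = "llama3.1"

def build_smoke_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a one-shot chat request against an OpenAI-compatible endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Live check (requires the endpoint to be up; add an Authorization header if needed):
#   with urllib.request.urlopen(build_smoke_request(BASE_URL, MODEL, "ping")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

If this round trip fails, fix it here; nothing downstream (Telegram, persona files, tools) can work until it passes.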
2. Telegram DM loop works
Start with direct messages. Verify the bot token, allowed user ID, route, inbound event, and outbound reply.
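The token and allowlist checks can be done independently of OpenClaw. Telegram's `getMe` endpoint confirms a token is valid, and the inbound gate is a pure function over the update payload; the update shape below follows the Telegram Bot API, while the allowlist policy itself is a sketch of what "allowed user ID" means, not OpenClaw's exact config:

```python
import json
import urllib.request

def getme_url(token: str) -> str:
    """Telegram's getMe endpoint -- a valid bot token returns the bot's identity."""
    return f"https://api.telegram.org/bot{token}/getMe"

def is_allowed(update: dict, allowed_ids: set) -> bool:
    """Gate inbound updates: only messages from explicitly allowed user IDs pass."""
    sender = update.get("message", {}).get("from", {}).get("id")
    return sender in allowed_ids

# Live token check (requires a real token):
#   with urllib.request.urlopen(getme_url(TOKEN)) as resp:
#       print(json.load(resp))  # "ok": true confirms the token works

# Inbound gating against a sample Telegram update:
sample_update = {"message": {"from": {"id": 123456789}, "text": "ping"}}
print(is_allowed(sample_update, {123456789}))  # True
print(is_allowed(sample_update, {987654321}))  # False
```

Checking these two pieces separately tells you whether a silent bot is an auth problem (gate rejects you) or a transport problem (token or route).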
3. Workspace files stay lean
Keep AGENTS.md, SOUL.md, USER.md, IDENTITY.md, and TOOLS.md useful but not overloaded for the first test.
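The exact schema OpenClaw expects in these files is version-specific, so treat this as an illustrative shape rather than a template: a first-test AGENTS.md short enough that you can rule it out quickly when debugging.

```markdown
# AGENTS.md -- first-test version (illustrative, not OpenClaw's canonical schema)

## Who you are
Private assistant for one user, reached over Telegram DMs.

## For now
- Answer directly; no tools, no scheduled tasks yet.
- If something fails, say which step failed instead of improvising.

## Defer
Persona depth, output formats, and tool policy move into SOUL.md,
USER.md, IDENTITY.md, and TOOLS.md once this loop works.
```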
4. Persona feels personal
A private assistant should not feel like a generic bot. Add operating style, boundaries, and preferred output formats.
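Concretely, "operating style, boundaries, and preferred output formats" can be a few lines rather than an essay. A hypothetical SOUL.md fragment (the wording and section names are examples, not a required schema):

```markdown
# SOUL.md fragment (hypothetical wording)

## Operating style
Terse by default; expand only when asked.

## Boundaries
Never message anyone but the owner. Ask before anything destructive.

## Output formats
Lists over prose for plans; code fences for anything copy-pasteable.
```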
5. Debugging stays layered
When something breaks, separate gateway, model, Telegram-policy, auth, and workspace-file problems instead of guessing randomly.
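One way to keep the layers separate is to encode them as an ordered probe list and stop at the first failure. The probes below are stubs; the real ones would hit your gateway health URL, model endpoint, Telegram `getMe`, auth config, and workspace files, all of which this sketch only names as placeholders:

```python
# Ordered diagnostic layers: probe each in isolation, innermost first.
# All probes are stubs -- replace each lambda with a real check.
LAYERS = [
    ("gateway",   lambda: True),   # e.g. GET the gateway's health endpoint
    ("model",     lambda: True),   # e.g. a one-shot completion request
    ("telegram",  lambda: True),   # e.g. getMe with the bot token
    ("auth",      lambda: False),  # e.g. sender ID is on the allowlist
    ("workspace", lambda: True),   # e.g. AGENTS.md exists and is readable
]

def first_failing_layer(layers):
    """Return the name of the first layer whose probe fails, or None if all pass."""
    for name, probe in layers:
        if not probe():
            return name
    return None

print(first_failing_layer(LAYERS))  # auth
```

Stopping at the first failure matters: a broken model endpoint produces the same visible symptom (silence in Telegram) as a rejected user ID, and probing in order is what distinguishes them.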
6. Scale comes last
Move to VPS hosting, local models, groups, recurring tasks, and richer tools only after the basic loop is stable.
The paid Launch Kit adds deployment options, model-choice guidance, persona packs, troubleshooting maps, starter builds, and a worked example for a private Telegram assistant.