Get your local AI running in minutes
Open your terminal and run the installation command for your platform:
macOS (Homebrew):
brew install openclaw
Linux:
curl -fsSL https://openclaw.ai/install.sh | sh
Windows (PowerShell):
irm https://openclaw.ai/install.ps1 | iex
Verify installation by running: openclaw --version
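If that command prints nothing, the binary may not be on your PATH yet. A quick generic shell check (nothing openclaw-specific is assumed here beyond the binary name):

```shell
# Confirm the openclaw binary is on PATH before continuing.
if command -v openclaw >/dev/null 2>&1; then
  openclaw --version
else
  echo "openclaw not found on PATH; re-run the installer or open a new terminal" >&2
fi
```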
The Gateway is the core service that manages your AI connections:
openclaw gateway start
This starts the OpenClaw daemon. Keep this terminal window open, or run it with the --background flag.
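After a --background start there's no terminal output to watch, so it helps to confirm the daemon is actually up. A minimal sketch, assuming the process name contains "openclaw" (adjust the pattern if yours differs):

```shell
# Check whether the Gateway daemon is running. Matching on the name
# "openclaw" is an assumption about the process name.
if pgrep -f openclaw >/dev/null 2>&1; then
  echo "Gateway appears to be running"
else
  echo "Gateway not found; start it with: openclaw gateway start" >&2
fi
```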
Pull a lightweight model to get started:
openclaw model pull llama3.2
This downloads the Llama 3.2 model (great for general use). For coding, try codellama. For larger context, try llama3.2:70b (requires more RAM).
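Since model files run several gigabytes, it's worth checking free disk space before a pull. A minimal pre-flight sketch, assuming models land on the current filesystem and using a 5 GB threshold (both are assumptions, not openclaw behavior):

```shell
# Pre-flight disk check before pulling a model. Models are roughly 2-5 GB,
# so require 5 GB free. Threshold and download location are assumptions.
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')
if [ "$free_kb" -ge $((5 * 1024 * 1024)) ]; then
  echo "enough free space for a model pull"
else
  echo "low disk space: only $((free_kb / 1024)) MB free" >&2
fi
```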
Start a conversation with your local AI:
openclaw chat
Type a message and press Enter. Your AI is now running completely locally.
Press Ctrl+C to exit the chat interface.
To use your AI through Discord:
# Configure Discord bot
openclaw channel add discord
# Follow prompts to enter your bot token
# Your AI is now available in Discord
You'll need a Discord bot token. Create one at discord.com/developers.
To use your AI through Telegram:
# Configure Telegram bot
openclaw channel add telegram
# Follow prompts to enter your bot token
# Chat with your AI on Telegram
Get a bot token from @BotFather on Telegram.
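BotFather tokens look like a numeric bot ID, a colon, then a long secret. A rough format check before you paste one into the prompt can catch copy-paste mistakes; the regex below is an approximation, not official validation:

```shell
# Rough sanity check for a Telegram bot token: "<digits>:<secret>".
# The pattern approximates BotFather's format; it is not real validation.
token="123456789:AAExampleExampleExampleExampleExam"   # placeholder, not a real token
if echo "$token" | grep -Eq '^[0-9]+:[A-Za-z0-9_-]{30,}$'; then
  echo "token format looks plausible"
else
  echo "that does not look like a bot token" >&2
fi
```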
You're all set! Here are some things to try:
openclaw model pull mistral
openclaw model list
openclaw config set model llama3.2
openclaw logs
openclaw update
Gateway won't start? Check if port 8080 is already in use: lsof -i :8080
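lsof isn't installed everywhere. If you're running bash, you can probe the port directly instead; a minimal sketch, assuming port 8080 as above (the /dev/tcp path is a bash feature, not an openclaw command):

```shell
# Probe port 8080 without lsof. /dev/tcp is a bash feature, so run this
# under bash; a successful connect means something is already listening.
if (exec 3<>/dev/tcp/127.0.0.1/8080) 2>/dev/null; then
  echo "port 8080 is in use; stop that process before starting the Gateway"
else
  echo "port 8080 is free"
fi
```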
Model download fails? Ensure you have enough disk space (models are 2-5GB each).
Slow responses? Larger models need more RAM. Try a smaller model or enable GPU acceleration.
Need help? Email me at blake@aiwithblake.com