AI CLI translates natural language into shell commands using LLMs (OpenAI or OpenRouter), then applies a safety policy before execution.
LLMs are non-deterministic. The same prompt can produce different commands across runs.
Always review and understand every command before you approve or run it. Treat generated commands as untrusted suggestions, especially for file operations, process control, networking, and anything requiring elevated permissions.
```bash
brew install kriserickson/tap/ai-cli
```

Requires Go 1.25+.
```bash
go install github.com/kriserickson/ai-cli@latest
```

Download prebuilt binaries for macOS (signed and notarized), Linux, and Windows from the latest release.
On macOS or Linux, extract the archive, make it executable, and move it onto your PATH:
```bash
VERSION=$(curl -s https://api.github.com/repos/kriserickson/ai-cli/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m | sed 's/x86_64/amd64/')
curl -LO "https://github.com/kriserickson/ai-cli/releases/download/${VERSION}/ai-${VERSION}-${OS}-${ARCH}.tar.gz"
tar -xzf "ai-${VERSION}-${OS}-${ARCH}.tar.gz"
chmod +x ai
sudo mv ai /usr/local/bin/ai
ai version
```

On Windows (PowerShell), extract the zip and move ai.exe onto your PATH:
```powershell
$VERSION = (Invoke-RestMethod "https://api.github.com/repos/kriserickson/ai-cli/releases/latest").tag_name
Invoke-WebRequest -Uri "https://github.com/kriserickson/ai-cli/releases/download/${VERSION}/ai-${VERSION}-windows-amd64.zip" -OutFile "ai-${VERSION}-windows-amd64.zip"
Expand-Archive -Path "ai-${VERSION}-windows-amd64.zip" -DestinationPath .\ai
Move-Item .\ai\ai.exe "$env:USERPROFILE\go\bin\ai.exe" -Force
ai.exe version
```

Use `ai doctor` as the first command. It verifies config and launches setup if your API key is missing.
```bash
ai doctor
```

Typical flow:
- validates config file location
- checks selected provider and model
- checks API key presence
- starts setup wizard if needed
Characters like ?, *, and # have special meaning in most shells. Without protection, a command like ai what is using all my cpu? will fail because zsh tries to glob-expand cpu? before ai ever sees it.
Add a noglob alias (zsh) or a wrapper function (bash) to your shell config so you can type naturally:
zsh (~/.zshrc):

```bash
alias ai='noglob ai'
```

bash (~/.bashrc):

```bash
# noglob is zsh-only, so bash needs a wrapper function instead.
# Only needed if you have failglob or nullglob enabled.
ai() { set -f; command ai "$@"; set +f; }
```

Then reload your shell:

```bash
source ~/.zshrc   # or: source ~/.bashrc
```

After this, `ai what is using all my cpu?` will work as expected.
Single-shot examples:

```bash
ai list files in current directory sorted by size
ai find all files larger than 10MB
ai show what process is using port 8080
```

Interactive mode:

```bash
ai
```

Then type requests one by one:

```
ai> what ports are listening
ai> compress all log files in /var/log older than 7 days
ai> show biggest folders in my home directory
```
Every generated command has:

- risk: `safe` or `risky`
- certainty: 0-100
Decision matrix:
| Risk | Allowlisted | Certainty >= threshold | Action |
|---|---|---|---|
| safe | any | yes | Auto-execute |
| safe | any | no | Ask confirmation |
| risky | any | any | Ask confirmation |
When always_confirm=true, every command asks first.
Default allowlist prefixes:
`git`, `ls`, `cat`, `echo`, `pwd`, `head`, `tail`, `wc`, `grep`, `find`, `which`, `man`
Force confirmation for everything:

```bash
ai config set always_confirm true
```

Increase certainty required for auto-run:

```bash
ai config set min_certainty 95
```

Allow auto-run decisions from the safety matrix:

```bash
ai config set always_confirm false
```

Lower certainty threshold:

```bash
ai config set min_certainty 60
```

Important:
- all risky commands prompt for confirmation, regardless of threshold or allowlist
- `min_certainty` only affects whether safe commands auto-run
The current CLI can read the allowlist via its config output, but cannot set it directly with `ai config set`.
To customize allowlist prefixes, edit ~/.ai-cli/config.toml:

```toml
[safety]
always_confirm = false
min_certainty = 80
allowlist_prefixes = ["git", "ls", "cat", "echo", "pwd", "head", "tail", "wc", "grep", "find", "which", "man"]
```

After editing, run:

```bash
ai status
```

Memories let you store named context (like server addresses, port mappings, or project conventions) that automatically gets injected into the AI prompt when the keyword appears in your input.
```bash
ai memory add my-server "kris@137.184.10.103 always port-forward 9229 to 2229"
ai memory add staging-db "postgres://app:secret@staging.example.com:5432/mydb"
ai memory list
ai memory remove my-server
```

Once stored, just use the keyword naturally:

```bash
ai connect to my-server
# → ssh -L 2229:localhost:9229 kris@137.184.10.103

ai dump the users table from staging-db
# → pg_dump -t users "postgres://app:secret@staging.example.com:5432/mydb"
```

Keyword matching is case-insensitive, and multiple memories can match in a single request. Memories are stored in ~/.ai-cli/memory.json.
In interactive mode, the same commands are available:
```
ai> memory add my-server kris@137.184.10.103 always port-forward 9229 to 2229
ai> memory list
ai> memory remove my-server
```
```bash
ai status
ai doctor
ai set-model
ai version
ai config show
ai config get provider
ai config set provider openai
ai config set openai_key sk-your-key-here
```

`ai completion` generates an autocompletion script for your shell so you can tab-complete subcommands and flags.
zsh — enable completion in your environment (once, if not already done):

```bash
echo "autoload -U compinit; compinit" >> ~/.zshrc
```

Load for the current session:

```bash
source <(ai completion zsh)
```

Load permanently (macOS with Homebrew):

```bash
ai completion zsh > $(brew --prefix)/share/zsh/site-functions/_ai
```

Load permanently (Linux):

```bash
ai completion zsh > "${fpath[1]}/_ai"
```

bash — requires the bash-completion package (`brew install bash-completion` on macOS, or your distro's package manager on Linux).

Load for the current session:

```bash
source <(ai completion bash)
```

Load permanently (macOS):

```bash
ai completion bash > $(brew --prefix)/etc/bash_completion.d/ai
```

Load permanently (Linux):

```bash
ai completion bash > /etc/bash_completion.d/ai
```

fish — load for the current session:

```bash
ai completion fish | source
```

Load permanently:

```bash
ai completion fish > ~/.config/fish/completions/ai.fish
```

PowerShell — load for the current session:

```powershell
ai completion powershell | Out-String | Invoke-Expression
```

Load permanently: add the above line to your PowerShell profile ($PROFILE).
Start a new shell after any permanent installation for changes to take effect.
Configuration file location:
~/.ai-cli/config.toml
Available `ai config get`/`set` keys:

| Key | Description | Default |
|---|---|---|
| `provider` | Active provider (`openai` or `openrouter`) | `openrouter` |
| `model` | Model identifier | `anthropic/claude-3.5-sonnet` |
| `openai_key` | OpenAI API key | (empty) |
| `openrouter_key` | OpenRouter API key | (empty) |
| `openai_url` | OpenAI base URL | `https://api.openai.com/v1` |
| `openrouter_url` | OpenRouter base URL | `https://openrouter.ai/api/v1` |
| `always_confirm` | Always prompt before execution (`true`/`false`) | `false` |
| `min_certainty` | Auto-execute threshold (0-100) | `80` |
| `debug` | Debug mode (`none`, `screen`, `file`) | `none` |
Notes:
- `allowlist_prefixes` exists in config but is currently edited directly in `config.toml`
- `ai config get openai_key` and `ai config get openrouter_key` return masked values
```bash
# Persisted config
ai config set debug screen
ai config set debug file

# Per-command override
ai --debug list files
ai --debug=screen list files
ai --debug=file list files
```

For complex requests, AI CLI may return multiple commands. They run sequentially and stop on the first failure.
```
$ ai kill the process on port 8080
[1/2] Find PID on port 8080
$ lsof -ti :8080 [safe] 95% certainty
[2/2] Kill the process
$ kill -9 $(lsof -ti :8080) [risky] 90% certainty
Execute? [Y/n]
```

Building from source requires Go 1.25+ and go-task.
Install task once:

```bash
go install github.com/go-task/task/v3/cmd/task@latest
```

Ensure your Go bin directory is in PATH:

- Windows: `%USERPROFILE%\go\bin`
- macOS/Linux: `$HOME/go/bin`

Example for zsh:

```bash
echo 'export PATH="$PATH:$HOME/go/bin"' >> ~/.zshrc
source ~/.zshrc
```

Common commands:
```bash
task              # build + test
task build        # dist/<os>/ai
task test
task test:verbose
task test:pkg PKG=./internal/llm/...
task install
```

`task install` copies from dist/<os>/ into:

- Windows: `%USERPROFILE%\go\bin`
- macOS/Linux: `/usr/local/bin`

Override install target:

```bash
task install INSTALL_DIR="$HOME/.local/bin"
```

Windows (PowerShell):
```powershell
task install INSTALL_DIR="$env:USERPROFILE\tools\bin"
```

Version is injected at build time using ldflags. The default value in source is `dev`.
Create a release:
git tag v0.2.0
git push origin v0.2.0Tag push triggers GitHub Actions to:
- run tests
- build cross-platform artifacts
- create a GitHub Release
- upload artifacts to the release
```bash
task test
task test:verbose
task test:pkg PKG=./internal/executor/...
task test:pkg PKG=./internal/llm/...
task test:pkg PKG=./internal/config/...
task test:pkg PKG=./internal/shell/...
```

Project structure:
```
ai-cli/
├── main.go
├── go.mod
├── cmd/
│   ├── root.go
│   ├── config.go
│   ├── memory.go
│   ├── version.go
│   ├── status.go
│   ├── doctor.go
│   ├── setmodel.go
│   └── wizard.go
└── internal/
    ├── config/
    │   ├── config.go        # TOML config load/save/defaults (~/.ai-cli/config.toml)
    │   └── config_test.go
    ├── llm/
    │   ├── client.go        # LLM HTTP client (OpenAI-compatible), debug logging
    │   ├── client_test.go
    │   ├── models.go        # Model-list fetching (OpenRouter + OpenAI), GroupByCompany
    │   ├── models_test.go
    │   ├── prompt.go        # System prompt template with OS/shell/cwd context
    │   ├── prompt_test.go
    │   ├── parse_test.go
    │   └── types.go         # JSON request/response structs
    ├── executor/
    │   ├── executor.go      # Sequential command execution with colored output
    │   ├── safety.go        # allowlist check + risk/certainty safety matrix
    │   └── safety_test.go
    ├── memory/
    │   ├── memory.go        # Memory CRUD + keyword matching (~/.ai-cli/memory.json)
    │   └── memory_test.go
    ├── shell/
    │   ├── detect.go        # OS, shell, version detection
    │   └── detect_test.go
    └── interactive/
```