AI CLI

AI CLI translates natural language into shell commands using LLMs (OpenAI or OpenRouter), then applies a safety policy before execution.

Important Safety Warning

LLMs are non-deterministic. The same prompt can produce different commands across runs.

Always review and understand every command before you approve or run it. Treat generated commands as untrusted suggestions, especially for file operations, process control, networking, and anything requiring elevated permissions.

Quick Start (Users)

1) Install

Homebrew (macOS and Linux)

brew install kriserickson/tap/ai-cli

Go

Requires Go 1.25+.

go install github.com/kriserickson/ai-cli@latest

Binary releases

Download prebuilt binaries for macOS (signed and notarized), Linux, and Windows from the latest release.

On macOS or Linux, extract the archive, make it executable, and move it onto your PATH:

VERSION=$(curl -s https://api.github.com/repos/kriserickson/ai-cli/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/')  # map to Go arch names
curl -LO "https://github.com/kriserickson/ai-cli/releases/download/${VERSION}/ai-${VERSION}-${OS}-${ARCH}.tar.gz"
tar -xzf "ai-${VERSION}-${OS}-${ARCH}.tar.gz"
chmod +x ai
sudo mv ai /usr/local/bin/ai
ai version

On Windows (PowerShell), extract the zip and move ai.exe onto your PATH:

$VERSION = (Invoke-RestMethod "https://api.github.com/repos/kriserickson/ai-cli/releases/latest").tag_name
Invoke-WebRequest -Uri "https://github.com/kriserickson/ai-cli/releases/download/${VERSION}/ai-${VERSION}-windows-amd64.zip" -OutFile "ai-${VERSION}-windows-amd64.zip"
Expand-Archive -Path "ai-${VERSION}-windows-amd64.zip" -DestinationPath .\ai
Move-Item .\ai\ai.exe "$env:USERPROFILE\go\bin\ai.exe" -Force
ai.exe version

2) Run Setup and Health Checks First

Run ai doctor first. It verifies your configuration and launches the setup wizard if your API key is missing.

ai doctor

Typical flow:

  • validates config file location
  • checks selected provider and model
  • checks API key presence
  • starts setup wizard if needed

Recommended: Shell Alias for Special Characters

Characters like ?, *, and # have special meaning in most shells. Without protection, a command like ai what is using all my cpu? will fail because zsh tries to glob-expand cpu? before ai ever sees it.

Add a noglob alias to your shell config so you can type naturally:

zsh (~/.zshrc):

alias ai='noglob ai'

bash (~/.bashrc):

# noglob is zsh-only; in bash, use a wrapper function that disables
# globbing (set -f) just for the duration of the command:
ai() { set -f; command ai "$@"; set +f; }

Then reload your shell:

source ~/.zshrc   # or source ~/.bashrc

After this, ai what is using all my cpu? will work as expected.
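If you prefer not to add an alias, quoting also works (ai 'what is using all my cpu?'). The underlying problem can be demonstrated in any shell (paths here are illustrative):

```shell
# With a matching file present, the shell rewrites the pattern
# before the command ever sees it.
mkdir -p /tmp/glob-demo && cd /tmp/glob-demo
touch cpu1
expanded=$(echo cpu?)       # glob matches: the command receives "cpu1"
set -f                      # disable globbing, as the wrapper/alias does
literal=$(echo cpu?)        # no expansion: the command receives "cpu?"
set +f
echo "$expanded vs $literal"   # → cpu1 vs cpu?
```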

3) Run Your First Commands

Single-shot examples:

ai list files in current directory sorted by size
ai find all files larger than 10MB
ai show what process is using port 8080

Interactive mode:

ai

Then type requests one by one:

ai> what ports are listening
ai> compress all log files in /var/log older than 7 days
ai> show biggest folders in my home directory

How Command Execution Safety Works

Every generated command has:

  • risk: safe or risky
  • certainty: 0-100

Decision matrix:

Risk    Allowlisted   Certainty >= threshold   Action
safe    any           yes                      Auto-execute
safe    any           no                       Ask confirmation
risky   any           any                      Ask confirmation

When always_confirm=true, every command asks first.
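The matrix above can be sketched as a small shell function (an illustrative sketch, not the actual Go implementation in internal/executor):

```shell
# decide <risk> <certainty> [threshold]; threshold defaults to the
# min_certainty default of 80.
decide() {
  risk=$1; certainty=$2; threshold=${3:-80}
  if [ "$risk" = "risky" ]; then
    echo "ask-confirmation"      # risky always prompts
  elif [ "$certainty" -ge "$threshold" ]; then
    echo "auto-execute"          # safe and certainty >= threshold
  else
    echo "ask-confirmation"      # safe but below threshold
  fi
}
decide safe 95    # → auto-execute
decide safe 70    # → ask-confirmation
decide risky 99   # → ask-confirmation
```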

Default allowlist prefixes:

  • git
  • ls
  • cat
  • echo
  • pwd
  • head
  • tail
  • wc
  • grep
  • find
  • which
  • man

Restrict or Expand Auto-Run Behavior

Restrict auto-run (safer)

Force confirmation for everything:

ai config set always_confirm true

Increase certainty required for auto-run:

ai config set min_certainty 95

Turn auto-run on more aggressively

Allow auto-run decisions from the safety matrix:

ai config set always_confirm false

Lower certainty threshold:

ai config set min_certainty 60

Important:

  • all risky commands prompt for confirmation, regardless of threshold or allowlist
  • min_certainty only affects whether safe commands auto-run

Allowlist control

The CLI can currently read the allowlist (it appears in config output) but cannot set it with ai config set.

To customize allowlist prefixes, edit ~/.ai-cli/config.toml:

[safety]
always_confirm = false
min_certainty = 80
allowlist_prefixes = ["git", "ls", "cat", "echo", "pwd", "head", "tail", "wc", "grep", "find", "which", "man"]

After editing, run:

ai status

Memories

Memories let you store named context (like server addresses, port mappings, or project conventions) that automatically gets injected into the AI prompt when the keyword appears in your input.

Managing memories

ai memory add my-server "kris@137.184.10.103 always port-forward 9229 to 2229"
ai memory add staging-db "postgres://app:secret@staging.example.com:5432/mydb"
ai memory list
ai memory remove my-server

Using memories

Once stored, just use the keyword naturally:

ai connect to my-server
# → ssh -L 2229:localhost:9229 kris@137.184.10.103

ai dump the users table from staging-db
# → pg_dump -t users "postgres://app:secret@staging.example.com:5432/mydb"

Keyword matching is case-insensitive. Multiple memories can match in a single request. Memories are stored in ~/.ai-cli/memory.json.
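The case-insensitive keyword match can be sketched as follows (the function name and logic are assumptions for illustration; the real matcher lives in internal/memory):

```shell
# matches_memory <input> <keyword>: succeed if the keyword appears
# anywhere in the input, ignoring case.
matches_memory() {
  input=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  keyword=$(printf '%s' "$2" | tr '[:upper:]' '[:lower:]')
  case "$input" in
    *"$keyword"*) return 0 ;;   # match: memory gets injected into the prompt
    *) return 1 ;;
  esac
}
matches_memory "Connect to MY-SERVER please" "my-server" && echo "inject memory"
```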

In interactive mode, the same commands are available:

ai> memory add my-server kris@137.184.10.103 always port-forward 9229 to 2229
ai> memory list
ai> memory remove my-server

Common User Commands

ai status
ai doctor
ai set-model
ai version
ai config show
ai config get provider
ai config set provider openai
ai config set openai_key sk-your-key-here

Shell Completion

ai completion generates an autocompletion script for your shell so you can tab-complete subcommands and flags.

zsh

Enable completion in your environment (once, if not already done):

echo "autoload -U compinit; compinit" >> ~/.zshrc

Load for the current session:

source <(ai completion zsh)

Load permanently (macOS with Homebrew):

ai completion zsh > $(brew --prefix)/share/zsh/site-functions/_ai

Load permanently (Linux):

ai completion zsh > "${fpath[1]}/_ai"

bash

Requires the bash-completion package (brew install bash-completion on macOS, or your distro's package manager on Linux).

Load for the current session:

source <(ai completion bash)

Load permanently (macOS):

ai completion bash > $(brew --prefix)/etc/bash_completion.d/ai

Load permanently (Linux):

ai completion bash > /etc/bash_completion.d/ai

fish

Load for the current session:

ai completion fish | source

Load permanently:

ai completion fish > ~/.config/fish/completions/ai.fish

PowerShell

Load for the current session:

ai completion powershell | Out-String | Invoke-Expression

Load permanently — add the above line to your PowerShell profile ($PROFILE).


Start a new shell after any permanent installation for changes to take effect.

Configuration

Configuration file location:

  • ~/.ai-cli/config.toml

Available ai config get/set keys:

Key             Description                                   Default
provider        Active provider (openai or openrouter)        openrouter
model           Model identifier                              anthropic/claude-3.5-sonnet
openai_key      OpenAI API key                                (empty)
openrouter_key  OpenRouter API key                            (empty)
openai_url      OpenAI base URL                               https://api.openai.com/v1
openrouter_url  OpenRouter base URL                           https://openrouter.ai/api/v1
always_confirm  Always prompt before execution (true/false)   false
min_certainty   Auto-execute threshold (0-100)                80
debug           Debug mode (none, screen, file)               none

Notes:

  • allowlist_prefixes exists in the config file but must currently be edited directly in config.toml
  • ai config get openai_key and ai config get openrouter_key return masked values

Debugging

# Persisted config
ai config set debug screen
ai config set debug file

# Per-command override
ai --debug list files
ai --debug=screen list files
ai --debug=file list files

Multi-Step Commands

For complex requests, AI CLI may return multiple commands. They run sequentially and stop on first failure.

$ ai kill the process on port 8080

[1/2] Find PID on port 8080
  $ lsof -ti :8080  [safe] 95% certainty
[2/2] Kill the process
  $ kill -9 $(lsof -ti :8080)  [risky] 90% certainty
Execute? [Y/n]
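The stop-on-first-failure behavior can be sketched in shell (illustrative only; the real executor is in internal/executor and also handles confirmation prompts):

```shell
# Run each command in order; abort the sequence as soon as one fails.
run_steps() {
  for cmd in "$@"; do
    echo "+ $cmd"
    eval "$cmd" || { echo "stopped: '$cmd' failed"; return 1; }
  done
}
run_steps "echo step1" "false" "echo step3" || true
# step3 never runs because "false" fails first
```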

Developer Guide

Build, Test, and Install (go-task)

Requires Go 1.25+ and go-task.

Install task once:

go install github.com/go-task/task/v3/cmd/task@latest

Ensure your Go bin directory is in PATH:

  • Windows: %USERPROFILE%\go\bin
  • macOS/Linux: $HOME/go/bin

Example for zsh:

echo 'export PATH="$PATH:$HOME/go/bin"' >> ~/.zshrc
source ~/.zshrc

Common commands:

task             # build + test
task build       # dist/<os>/ai
task test
task test:verbose
task test:pkg PKG=./internal/llm/...
task install

task install copies from dist/<os>/ into:

  • Windows: %USERPROFILE%\go\bin
  • macOS/Linux: /usr/local/bin

Override install target:

task install INSTALL_DIR="$HOME/.local/bin"

Windows (PowerShell):

task install INSTALL_DIR="$env:USERPROFILE\tools\bin"

Versioning and Releases

Version is injected at build time using ldflags. Default value in source is dev.

Create a release:

git tag v0.2.0
git push origin v0.2.0

Tag push triggers GitHub Actions to:

  • run tests
  • build cross-platform artifacts
  • create a GitHub Release
  • upload artifacts to the release

Testing

task test
task test:verbose
task test:pkg PKG=./internal/executor/...
task test:pkg PKG=./internal/llm/...
task test:pkg PKG=./internal/config/...
task test:pkg PKG=./internal/shell/...

Project Structure

ai-cli/
├── main.go
├── go.mod
├── cmd/
│   ├── root.go
│   ├── config.go
│   ├── memory.go
│   ├── version.go
│   ├── status.go
│   ├── doctor.go
│   ├── setmodel.go
│   └── wizard.go
└── internal/
    ├── config/
    │   ├── config.go            # TOML config load/save/defaults (~/.ai-cli/config.toml)
    │   └── config_test.go
    ├── llm/
    │   ├── client.go            # LLM HTTP client (OpenAI-compatible), debug logging
    │   ├── client_test.go
    │   ├── models.go            # Model-list fetching (OpenRouter + OpenAI), GroupByCompany
    │   ├── models_test.go
    │   ├── prompt.go            # System prompt template with OS/shell/cwd context
    │   ├── prompt_test.go
    │   ├── parse_test.go
    │   └── types.go             # JSON request/response structs
    ├── executor/
    │   ├── executor.go          # Sequential command execution with colored output
    │   ├── safety.go            # allowlist check + risk/certainty safety matrix
    │   └── safety_test.go
    ├── memory/
    │   ├── memory.go            # Memory CRUD + keyword matching (~/.ai-cli/memory.json)
    │   └── memory_test.go
    ├── shell/
    │   ├── detect.go            # OS, shell, version detection
    │   └── detect_test.go
    └── interactive/
