AI in the Shell is an experimental project that gives a local AI model (via Ollama) unrestricted root access to a Unix shell. The AI takes natural-language instructions, converts them into shell commands, and executes them on the system: no restrictions, no holding back.
⚠️ WARNING: This project is highly experimental and should only be run inside an isolated virtual machine (VM). Misuse can result in system damage, data loss, or other unintended consequences.
- Full root shell access for LLMs
- Natural language to shell command conversion
- Real-time command execution with output logging
- Local-only, runs fully offline via Ollama
- Lightweight Python Flask backend
- You send a prompt like:

  ```
  Install nginx and start the service
  ```

- The LLM (e.g., llama3) responds with:

  ```sh
  sudo apt update && sudo apt install nginx -y && sudo systemctl start nginx
  ```

- The shell executor runs the command and returns the output.
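The prompt-to-command loop above can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual code: the function names (`ask_model`, `run_command`) and the system-prompt wording are invented here, though `http://localhost:11434/api/generate` is Ollama's real local REST endpoint.

```python
import json
import subprocess
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

# Hypothetical system prompt that coaxes the model into emitting bare commands.
SYSTEM_PROMPT = (
    "You are a shell assistant. Reply with a single shell command only, "
    "no explanations, no markdown."
)

def ask_model(user_prompt: str, model: str = "llama3") -> str:
    """Ask the local Ollama model to turn a natural-language task into a command."""
    payload = json.dumps({
        "model": model,
        "prompt": f"{SYSTEM_PROMPT}\n\nTask: {user_prompt}",
        "stream": False,  # ask Ollama for one complete JSON reply
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip()

def run_command(command: str) -> str:
    """Execute the generated command in a shell and capture stdout + stderr."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

if __name__ == "__main__":
    cmd = ask_model("Install nginx and start the service")
    print("command:", cmd)
    print("output:", run_command(cmd))
```

Note the `shell=True` execution with no sanitization whatsoever — that is exactly the "no restrictions" behavior the warning above is about.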
- Python 3.8+
- Ollama (installed and running)
- A virtual machine (e.g., Kali Linux, Ubuntu, etc.)
- Clone the repo:

  ```sh
  git clone https://github.com/yourusername/AI-in-the-Shell.git
  cd AI-in-the-Shell
  ```

- Install dependencies:

  ```sh
  pip install flask requests
  ```

- Install Ollama and run a model:

  ```sh
  curl -fsSL https://ollama.com/install.sh | sh
  ollama run llama3
  ```

- Run the Flask server:

  ```sh
  python3 rootshell_combined.py
  ```
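For orientation, here is a stripped-down sketch of what a server like `rootshell_combined.py` might contain. This is hypothetical: `generate_command` is stubbed out with an `echo` in place of a real Ollama call, and only a single `/run` route is modeled.

```python
import subprocess

from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_command(prompt: str) -> str:
    # Stub: the real server would query the local Ollama model here.
    return f"echo {prompt!r}"

def execute(command: str) -> str:
    """Run the command in a shell and return combined stdout + stderr."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

@app.route("/run", methods=["POST"])
def run():
    user_input = request.get_json(force=True).get("prompt", "")
    command = generate_command(user_input)
    return jsonify({
        "user_input": user_input,
        "command": command,
        "output": execute(command),
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=4224)  # port matches the curl examples below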
Use curl or any HTTP client to interact with the API:
```sh
curl -X POST http://localhost:4224/run \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Install nginx and start the service"}'
```

Response:

```json
{
  "command": "sudo apt update && sudo apt install nginx -y && sudo systemctl start nginx",
  "output": "Hit:1 http://... nginx is already the newest version...",
  "user_input": "Install nginx and start the service"
}
```

- You must run this in an isolated VM with sudo access.
- The AI has full control: treat this like giving a human root shell access.
- Designed for experimentation and research, not production.
Planned features:

- Web UI with terminal-like feedback
- Command history and approval mode
- Live output streaming
- Prompt-based session memory
This project is for educational and experimental purposes only. You are responsible for the consequences of running code generated by an AI with system-level permissions.
Use wisely, preferably in a VM that you can restore or wipe.
MIT License. See LICENSE file for details.