Ferrum 🦀🤖

Ferrum is a fast, async, and extensible local LLM agent built in Rust. It acts as an interactive bridge to local language models via the Ollama API, featuring a decoupled terminal UI, real-time response streaming, and a type-safe tool execution system.

Note: This project is currently an MVP (Minimum Viable Product). It implements the core foundation for a robust agent application but includes only a subset of the planned features.

✨ Features

  • 🚀 High Performance: Built entirely in Rust utilizing tokio for non-blocking, async I/O.
  • 🛠️ Type-Safe Tool Calling: Easily extend the agent's capabilities. Using schemars and serde, Rust structs are automatically converted into JSON schemas that the LLM can understand and invoke.
  • 🏗️ Decoupled Architecture: Runs the User Interface and the LLM Agent on separate threads, communicating safely via asynchronous channels (mpsc).
  • 📦 Modular Workspace: Cleanly separated into the core agent (ferrum-agent) and the API client (ollama_api), which makes it easy to modify the API layer or add support for new backends.
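The tool system can be illustrated with a small sketch. The `Tool` trait below is a hypothetical, simplified stand-in for the `DynTool` trait in `src/tools/mod.rs` (the real implementation derives JSON schemas via `schemars`/`serde`); only the shape of the idea is shown:

```rust
/// Simplified stand-in for ferrum-agent's DynTool trait: a callable
/// capability the agent can expose to the LLM. (Illustrative only.)
trait Tool {
    /// Name the model uses to invoke the tool.
    fn name(&self) -> &str;
    /// Execute with JSON-encoded arguments, returning a result string.
    fn call(&self, args: &str) -> String;
}

/// A trivial example tool.
struct Echo;

impl Tool for Echo {
    fn name(&self) -> &str {
        "echo"
    }
    fn call(&self, args: &str) -> String {
        format!("echo: {args}")
    }
}

fn main() {
    // Tools are held as trait objects, so new ones can be registered
    // without touching the dispatch logic.
    let tools: Vec<Box<dyn Tool>> = vec![Box::new(Echo)];
    let out = tools
        .iter()
        .find(|t| t.name() == "echo")
        .map(|t| t.call("hello"))
        .unwrap_or_default();
    println!("{out}"); // prints "echo: hello"
}
```

In the real agent, the schema derived from each tool's argument struct is what lets the model produce well-formed invocations in the first place.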

📂 Project Structure

The project utilizes a Cargo Workspace to separate concerns:

ferrum/
├── ferrum-agent/              # Main application binary
│   ├── src/ui.rs              # User interface rendering and input handling
│   ├── src/ollama/agent.rs    # Core agent logic and task management
│   ├── src/tools/mod.rs       # Extensible tool system (DynTool trait)
│   └── src/main.rs            # Application entry point
├── crates/ollama_api/         # Standalone library for Ollama communication
│   ├── src/dtos.rs            # Data Transfer Objects (Serde models)
│   └── src/lib.rs             # HTTP client logic (reqwest)
└── Modelfile                  # Ollama configuration for the agent's base model
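A root manifest tying these members together might look like the following sketch (only the member paths are taken from the tree above; the rest is illustrative):

```toml
# Hypothetical root Cargo.toml for the workspace layout shown above.
[workspace]
members = ["ferrum-agent", "crates/ollama_api"]
resolver = "2"
```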

🛠️ Prerequisites

Before you begin, ensure you have the following installed:

  1. Rust & Cargo (latest stable version)
  2. Ollama running locally.

🚀 Getting Started

  1. Start Ollama: ensure your local Ollama instance is running (by default on http://localhost:11434).
  2. Set up the custom model: use the provided Modelfile to build the agent's base model, or modify it to fit your own hardware:
ollama create ferrum-model -f Modelfile
  3. Build and run the agent: from the root of the workspace, run:
cargo run --bin ferrum-agent

🏗️ Architecture Overview

Ferrum employs an event-driven architecture. The application initializes an OllamaAgent on a background thread while the main thread handles the UI.

  • Communication: The UI sends prompts and commands to the Agent via a multi-producer, single-consumer (mpsc) channel.
  • Execution: The Agent queries the Ollama API, parsing JSON schemas for any available tools.
  • Streaming: As the model generates tokens, the Agent streams the partial chunks back to the UI in real time.
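The prompt/stream round trip can be sketched with standard-library channels (Ferrum itself uses tokio's async mpsc, but the flow is the same):

```rust
use std::sync::mpsc;
use std::thread;

/// Spawn an "agent" thread that streams a canned response for `prompt`
/// back over a channel, and collect the chunks on the "UI" side.
/// (Illustrative sketch, not Ferrum's actual code.)
fn run_roundtrip(prompt: &str) -> String {
    let (chunk_tx, chunk_rx) = mpsc::channel::<String>();
    let prompt = prompt.to_string();

    // Agent thread: streams partial chunks, as the real agent does with
    // tokens arriving from the Ollama API.
    let agent = thread::spawn(move || {
        for token in ["You said: ", prompt.as_str()] {
            chunk_tx.send(token.to_string()).unwrap();
        }
    });

    // UI side: render chunks as they arrive; the loop ends when the
    // agent drops its sender.
    let mut rendered = String::new();
    for chunk in chunk_rx {
        rendered.push_str(&chunk);
    }
    agent.join().unwrap();
    rendered
}

fn main() {
    println!("{}", run_roundtrip("hi")); // prints "You said: hi"
}
```

Because the UI never blocks on the model, it stays responsive while tokens trickle in.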

🗺️ Roadmap & Future Improvements

As an MVP, Ferrum has a solid foundation, but there are several areas planned for future development:

  • Robust Error Handling: Replace development-stage .unwrap() and .expect() calls with graceful error recovery to prevent panics on unexpected channel drops or API timeouts.
  • Tool Sandboxing & Security: Implement strict validation, sandboxing, and potential "user-confirmation" prompts before executing state-changing tools generated by the LLM.
  • Expanded Toolset: Add more built-in tools (e.g., file system access, web search) utilizing the DynTool trait framework.
  • Conversation History: Implement robust token-window management and persistent chat history.
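As one example of the error-handling direction, a channel read that survives a dropped sender could look like the following (illustrative sketch, not Ferrum's actual code):

```rust
use std::sync::mpsc;

/// Receive the next UI event, recovering gracefully when the sender
/// side of the channel has been dropped instead of panicking.
fn next_event(rx: &mpsc::Receiver<String>) -> Option<String> {
    match rx.recv() {
        Ok(event) => Some(event),
        Err(_) => {
            // Channel closed: in the real agent this would trigger a
            // clean shutdown rather than an .unwrap() panic.
            None
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send("quit".to_string()).unwrap();
    drop(tx); // simulate the UI thread going away

    assert_eq!(next_event(&rx), Some("quit".to_string()));
    assert_eq!(next_event(&rx), None); // no panic after the drop
}
```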

📄 License
