A full-stack web application that provides a chat interface to interact with an AI model, built with the MERN stack and TypeScript.
- Real-time Chat: Engage in real-time conversations with an AI model.
- AI Integration: Powered by the OpenRouter API, using the `x-ai/grok-4-fast:free` model.
- Modern UI: A sleek, responsive user interface built with React, TypeScript, and Tailwind CSS.
- Streaming Responses: AI responses are streamed back to the user for a more interactive experience.
- GitHub Integration: Includes GitHub Actions for CI/CD, automated reviews, and issue triage.
- Framework: React with Vite
- Language: TypeScript
- Styling: Tailwind CSS, shadcn/ui
- UI Components: Radix UI, lucide-react
- AI SDK: `@ai-sdk/react`
- Routing: React Router
- Framework: Express.js
- Language: TypeScript
- AI Provider: OpenRouter
- API Client: Axios
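
Streamed responses from OpenRouter arrive as server-sent events in the OpenAI-compatible chat-completions format. As an illustration only (the helper `parseSseLine` is hypothetical, not part of this repository), a backend could extract text deltas from each SSE line like this:

```typescript
// Parse one SSE line from an OpenRouter/OpenAI-style streamed chat completion.
// Returns the text delta, or null for comments, "[DONE]", and non-data lines.
// Hypothetical helper -- a sketch, not this repository's actual code.
function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  try {
    const json = JSON.parse(payload);
    return json.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null; // ignore malformed chunks
  }
}
```

On the frontend, `@ai-sdk/react` handles this parsing internally; the sketch is only meant to show what travels over the wire.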
- Node.js (v18 or higher)
- npm, yarn, or pnpm
- Clone the repository:

  ```bash
  git clone https://github.com/your-username/AIChat.git
  cd AIChat
  ```

- Backend Setup:

  ```bash
  cd backend
  npm install
  ```

  Create a `.env` file in the `backend` directory and add your API keys:

  ```
  OPENROUTER_API_KEY=your_openrouter_api_key
  HF_API_KEY=your_hugging_face_api_key
  ```

  If you plan to use Google Generative AI image generation, add the following as well:

  ```
  GOOGLE_GENERATIVE_AI_API_KEY=your_google_genai_api_key
  ```

- Frontend Setup:

  ```bash
  cd ../frontend
  npm install
  ```
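
A backend that depends on these keys can fail fast at startup when one is missing. A hypothetical sketch (the key names match the `.env` example above; `requireEnv` is an illustration, not part of the repository):

```typescript
// Throw a descriptive error if any required API key is absent or blank.
// GOOGLE_GENERATIVE_AI_API_KEY is optional: only needed for image generation.
// Hypothetical helper -- a sketch, not this repository's actual code.
function requireEnv(env: Record<string, string | undefined>): void {
  const required = ["OPENROUTER_API_KEY", "HF_API_KEY"];
  const missing = required.filter((k) => !env[k]?.trim());
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}

// Usage at startup, after the .env file has been loaded (e.g. by dotenv):
// requireEnv(process.env);
```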
- Prerequisites:
  - Docker
  - Docker Compose

- Environment Variables:

  Backend:

  ```bash
  cd backend
  cp .env.example .env
  ```

  Edit `backend/.env` and add your API keys:

  ```
  OPENROUTER_API_KEY=your_openrouter_api_key
  HF_API_KEY=your_hugging_face_api_key
  GOOGLE_GENERATIVE_AI_API_KEY=your_google_genai_api_key
  ```

  Frontend:

  ```bash
  cd ../frontend
  cp .env.local.example .env.local
  ```

  The default values in `frontend/.env.local` should work out of the box.

- Build and Run: From the root directory, run:

  ```bash
  docker-compose up --build
  ```

  The frontend will be available at `http://localhost:5173` and the backend at `http://localhost:5001`.

  Note: Changes to your code will automatically trigger a hot reload in both the frontend and backend.

- Stop the containers:

  ```bash
  docker-compose down
  ```
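
The `docker-compose up --build` step assumes a compose file at the repository root. The following is a hypothetical minimal sketch consistent with the ports above — the service names, build contexts, and volume paths are assumptions, not the project's actual file:

```yaml
# Hypothetical docker-compose.yml sketch (not the repository's actual file).
services:
  backend:
    build: ./backend
    ports:
      - "5001:5001"
    env_file:
      - ./backend/.env
    volumes:
      - ./backend:/app      # bind mount enables hot reload on code changes
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"
    env_file:
      - ./frontend/.env.local
    volumes:
      - ./frontend:/app
    depends_on:
      - backend
```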
- Start the backend server:

  ```bash
  cd backend
  npm run dev
  ```

  The backend will be running on `http://localhost:5001`.

- Start the frontend development server:

  ```bash
  cd ../frontend
  npm run dev
  ```

  The frontend will be running on `http://localhost:5173`.
Contributions are welcome! Please follow these steps to contribute:
- Fork the repository.
- Create a new branch:

  ```bash
  git checkout -b feature/your-feature-name
  ```

- Make your changes and commit them with a descriptive message. This project follows the Conventional Commits specification.
- Push to the branch:

  ```bash
  git push origin feature/your-feature-name
  ```

- Create a pull request.
Please use the provided issue and pull request templates.
This project includes an endpoint that can call Google GenAI image models. To enable it:
- Install the Google SDK in the backend:

  ```bash
  cd backend
  npm install @google/genai
  ```

- Add `GOOGLE_GENERATIVE_AI_API_KEY` to `backend/.env` (see above).

- The endpoint is exposed at `POST /api/image/gemini` and accepts JSON: `{ "prompt": "your prompt" }`.

- Responses will be either `{ "image": "data:image/png;base64,..." }` when the model returns inline bytes, or `{ "text": "..." }` when the model returns text parts.
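
A client can branch on which field the endpoint returns. A hypothetical TypeScript sketch — the response shape follows the description above, but the helper name and branching logic are assumptions:

```typescript
type GeminiImageResponse = { image?: string; text?: string };

// Classify the /api/image/gemini response: a base64 data URL, plain text, or neither.
// Hypothetical helper -- a sketch, not this repository's actual code.
function describeResponse(res: GeminiImageResponse): "image" | "text" | "empty" {
  if (res.image?.startsWith("data:image/")) return "image";
  if (typeof res.text === "string") return "text";
  return "empty";
}

// Usage (assumes the backend from this README is running on port 5001):
// const res = await fetch("http://localhost:5001/api/image/gemini", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ prompt: "your prompt" }),
// });
// console.log(describeResponse(await res.json()));
```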
Note: Some Google models are gated or require billing on your Google account. If you receive 404 or permission errors, you may need to request access or switch to an Imagen model (e.g. `imagen-4.0-generate-001`).
This project is licensed under the ISC License.