SentryAryan/AIChat
AI Chat Application

A full-stack web application that provides a chat interface to interact with an AI model, built with the MERN stack and TypeScript.

Features

  • Real-time Chat: Engage in real-time conversations with an AI model.
  • AI Integration: Powered by the OpenRouter API, utilizing the x-ai/grok-4-fast:free model.
  • Modern UI: A sleek and responsive user interface built with React, TypeScript, and Tailwind CSS.
  • Streaming Responses: AI responses are streamed back to the user for a more interactive experience.
  • GitHub Integration: Includes GitHub Actions for CI/CD, automated reviews, and issue triage.
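As a sketch of how the streamed responses above are typically consumed on the client, a plain `fetch`-style stream reader might look like the following. The chunk handling is an assumption for illustration; this app's actual wire format is not documented here.

```typescript
// Sketch: read a streamed HTTP body chunk by chunk (assumes plain-text
// chunks; the app's real protocol may frame tokens differently).
async function readTextStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void,
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text); // e.g. append the partial text to the chat UI as it arrives
  }
  return full + decoder.decode(); // flush any buffered multi-byte sequence
}
```

Against a live backend this would be called with the body of a `fetch` response, e.g. `readTextStream(response.body!, appendToChat)`.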

Tech Stack

Frontend

  • Framework: React with Vite
  • Language: TypeScript
  • Styling: Tailwind CSS, shadcn/ui
  • UI Components: Radix UI, lucide-react
  • AI SDK: @ai-sdk/react
  • Routing: React Router

Backend

  • Framework: Express.js
  • Language: TypeScript
  • AI Provider: OpenRouter
  • API Client: Axios
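OpenRouter exposes an OpenAI-compatible chat-completions API at `POST https://openrouter.ai/api/v1/chat/completions`. A minimal sketch of the request body the backend would send is below; the model ID comes from the Features section, but the helper name is ours, not the project's.

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Sketch: build the JSON body for OpenRouter's OpenAI-compatible
// chat-completions endpoint. Not the repo's actual code.
function buildChatRequest(messages: ChatMessage[], stream = true) {
  return {
    model: "x-ai/grok-4-fast:free", // model named in the Features section
    messages,
    stream, // stream tokens back so the frontend can render incrementally
  };
}
```

The backend would POST this body (via Axios here) with an `Authorization: Bearer ${OPENROUTER_API_KEY}` header.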

Getting Started

Prerequisites

  • Node.js (v18 or higher)
  • npm, yarn, or pnpm

Installation and Setup

  1. Clone the repository:

    git clone https://github.com/your-username/AIChat.git
    cd AIChat
  2. Backend Setup:

    cd backend
    npm install

    Create a .env file in the backend directory and add your API keys:

    OPENROUTER_API_KEY=your_openrouter_api_key
    HF_API_KEY=your_hugging_face_api_key
    

    If you plan to use Google Generative AI image generation, add the following as well:

    GOOGLE_GENERATIVE_AI_API_KEY=your_google_genai_api_key
    
  3. Frontend Setup:

    cd ../frontend
    npm install

Running the Application

Option 1: Using Docker (Recommended for Contributors)

  1. Prerequisites:

    • Docker
    • Docker Compose
  2. Environment Variables:

    Backend:

    cd backend
    cp .env.example .env

    Edit backend/.env and add your API keys:

    OPENROUTER_API_KEY=your_openrouter_api_key
    HF_API_KEY=your_hugging_face_api_key
    GOOGLE_GENERATIVE_AI_API_KEY=your_google_genai_api_key
    

    Frontend:

    cd ../frontend
    cp .env.local.example .env.local

    The default values in frontend/.env.local should work out of the box.

  3. Build and Run: From the root directory, run:

    docker-compose up --build

    The frontend will be available at http://localhost:5173 and the backend at http://localhost:5001.

    Note: Changes to your code will automatically trigger hot reload in both frontend and backend.

  4. Stop the containers:

    docker-compose down
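For orientation, a compose file matching the ports and hot-reload behaviour described above would look roughly like this. It is an illustrative sketch only; the service names, build contexts, and volume mounts in this repo's actual `docker-compose.yml` may differ.

```yaml
# Sketch only -- see the repository's docker-compose.yml for the real file.
services:
  backend:
    build: ./backend
    ports:
      - "5001:5001"
    env_file:
      - ./backend/.env
    volumes:
      - ./backend:/app        # bind mount enables backend hot reload
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"
    env_file:
      - ./frontend/.env.local
    volumes:
      - ./frontend:/app       # bind mount enables frontend hot reload
```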

Option 2: Running Locally (Without Docker)

  1. Start the backend server:

    cd backend
    npm run dev

    The backend will be running on http://localhost:5001.

  2. Start the frontend development server:

    cd ../frontend
    npm run dev

    The frontend will be running on http://localhost:5173.

Contributing

Contributions are welcome! Please follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch: git checkout -b feature/your-feature-name
  3. Make your changes and commit them with a descriptive message. This project follows the Conventional Commits specification.
  4. Push to the branch: git push origin feature/your-feature-name
  5. Create a pull request.

Please use the provided issue and pull request templates.
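Conventional Commits subjects take the form `type(optional-scope): description`, e.g. `feat(chat): stream AI responses` or `fix: handle empty prompt`. A tiny validator sketch for the common types (our own helper, not part of this repo):

```typescript
// Sketch: check that a commit subject line follows the Conventional Commits
// pattern "type(optional-scope)!?: description" for the common types.
const COMMIT_RE =
  /^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([\w-]+\))?!?: .+/;

function isConventionalCommit(subject: string): boolean {
  return COMMIT_RE.test(subject);
}
```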

Google Generative AI (optional)

This project includes an endpoint that can call Google GenAI image models. To enable it:

  • Install the Google SDK in the backend:

    cd backend
    npm install @google/genai
  • Add GOOGLE_GENERATIVE_AI_API_KEY to backend/.env (see above).

  • The endpoint is exposed at POST /api/image/gemini and accepts JSON { "prompt": "your prompt" }.

  • Responses will be either { "image": "data:image/png;base64,..." } when the model returns inline bytes, or { "text": "..." } when the model returns text parts.

Note: Some Google models are gated or require billing on your Google account. If you receive 404 or permission errors, you may need to request access or switch to an Imagen model (e.g. imagen-4.0-generate-001).
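The endpoint's two documented response shapes can be narrowed on the client like so (the type and helper names are ours, for illustration only):

```typescript
// The two response shapes documented for POST /api/image/gemini.
type GeminiResult = { image: string } | { text: string };

// Sketch: narrow the union and extract something renderable.
function renderGeminiResult(
  res: GeminiResult,
): { kind: "image" | "text"; value: string } {
  if ("image" in res) {
    // res.image is a data URL, e.g. "data:image/png;base64,..."
    return { kind: "image", value: res.image };
  }
  return { kind: "text", value: res.text };
}
```

From the shell, the endpoint can be exercised with e.g. `curl -X POST http://localhost:5001/api/image/gemini -H "Content-Type: application/json" -d '{"prompt":"a red cube"}'`, assuming the backend runs on port 5001 as described above.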

License

This project is licensed under the ISC License.
