SignifyAI

Note

Built at MUJHackX 3.0, MUJ's biggest hackathon, where we won 3rd place among 500+ teams nationwide! 🔥

A real-time application that converts spoken audio and video into American Sign Language gestures, enabling accessibility for deaf and hard-of-hearing individuals.

Problem

Deaf and hard-of-hearing individuals cannot easily access spoken communication in classrooms, meetings, and online content due to limited interpreter availability and high costs.

Solution

The system translates speech to ASL through three stages:

  1. Speech Recognition - Whisper AI transcribes audio to English text
  2. Grammar Conversion - Drops articles and auxiliary verbs and normalizes tense to produce ASL gloss notation (see the sketch after this list)
  3. Gesture Mapping - Displays corresponding hand gestures with auto-advancing animation
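
As a rough illustration of stage 2, here is a minimal rule-based English-to-gloss sketch in Python. The word list and function name are illustrative assumptions, not the repository's actual code, and real ASL gloss involves more than word deletion (topic reordering, time signs, and so on):

```python
# Minimal English -> ASL gloss sketch (illustrative; not the repo's actual NLP step).
# Gloss convention: drop articles/auxiliaries, uppercase what remains.
ARTICLES_AND_AUX = {"a", "an", "the", "is", "am", "are", "was", "were", "be", "been", "being"}

def to_asl_gloss(sentence: str) -> list[str]:
    """Drop articles and auxiliary verbs, uppercase the rest (gloss convention)."""
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    return [w.upper() for w in words if w and w not in ARTICLES_AND_AUX]

print(to_asl_gloss("The weather is nice today"))  # ['WEATHER', 'NICE', 'TODAY']
```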

Key Features

  • Microphone recording and video upload support
  • Real-time gesture animation (800ms per gesture)
  • Automatic fingerspelling for unknown words (see the sketch after this list)
  • Progress tracking and sequence display
  • Both speech-to-sign and video-to-sign processing
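
The fingerspelling fallback can be sketched as a simple lookup that falls back to letter-by-letter images. The dictionary names and image paths below are assumptions for illustration, not the repo's actual JSON schema; the frontend then advances through the returned frames every 800ms:

```python
# Hypothetical gesture lookup with fingerspelling fallback.
# Mapping names and image paths are illustrative, not the repo's schema.
GESTURES = {"HELLO": "gestures/hello.png"}  # word-level gesture images
ALPHABET = {c: f"alphabet/{c}.png" for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"}

def frames_for(gloss_word: str) -> list[str]:
    """Return gesture frames for a gloss word; fingerspell it if unknown."""
    if gloss_word in GESTURES:
        return [GESTURES[gloss_word]]
    # Unknown word: spell it out one letter image at a time.
    return [ALPHABET[c] for c in gloss_word if c in ALPHABET]

print(frames_for("HELLO"))  # ['gestures/hello.png']
print(frames_for("MUJ"))    # ['alphabet/M.png', 'alphabet/U.png', 'alphabet/J.png']
```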

How It Works

User speaks → Whisper transcribes → NLP converts to ASL gloss → gestures map to images → animated display
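
Wiring those stages into the FastAPI backend could look roughly like this sketch. The route path, model size, and the `to_asl_gloss`/`frames_for` helpers (from the sketches above) are assumptions; the actual interface in `main.py` may differ:

```python
# Rough end-to-end endpoint sketch; route path and helpers are assumptions.
# to_asl_gloss / frames_for are the illustrative helpers defined above.
from fastapi import FastAPI, UploadFile
from faster_whisper import WhisperModel

app = FastAPI()
model = WhisperModel("base")  # load once at startup, not per request

@app.post("/translate")
async def translate(audio: UploadFile):
    path = f"/tmp/{audio.filename}"
    with open(path, "wb") as f:
        f.write(await audio.read())                 # persist the upload to disk
    segments, _ = model.transcribe(path)            # 1. speech -> English text
    text = " ".join(seg.text.strip() for seg in segments)
    gloss = to_asl_gloss(text)                      # 2. English -> ASL gloss
    frames = [frames_for(word) for word in gloss]   # 3. gloss -> gesture images
    return {"text": text, "gloss": gloss, "frames": frames}
```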

Technology Stack

  • Backend: FastAPI + Python (Faster Whisper, FFmpeg; see the FFmpeg sketch after this list)
  • Frontend: Next.js + React (Framer Motion animations)
  • Data: JSON gesture mappings, Kaggle ASL Alphabet
  • AI: OpenRouter
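
For the video-to-sign path, FFmpeg can strip the audio track out of an uploaded video before it reaches Whisper. The invocation below uses standard FFmpeg flags under assumed file paths; it is not necessarily how the repo shells out to FFmpeg:

```python
# Extract a video's audio track as 16 kHz mono WAV (Whisper's native rate).
# Standard FFmpeg flags; file paths are illustrative.
import subprocess

def extract_audio(video_path: str, wav_path: str = "audio.wav") -> str:
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path,
         "-vn",           # drop the video stream
         "-ac", "1",      # downmix to mono
         "-ar", "16000",  # resample to 16 kHz
         wav_path],
        check=True,
    )
    return wav_path

wav = extract_audio("lecture.mp4")  # then: model.transcribe(wav)
```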

Benefits

  • Accessibility: Independent access to communication without interpreters
  • Cost Reduction: Eliminates continuous interpreter fees
  • Scalability: Supports multiple simultaneous translations
  • 24/7 Availability: On-demand access to translated content

Impact

Makes education, employment, and information equally accessible to the deaf community by removing communication barriers in real-time.

Setup

  • Backend: `pip install -r requirements.txt`, then `uvicorn main:app --reload`
  • Frontend: `yarn`, then `yarn dev`

Credits

  • 💡 HB Singh Chaudhary (M4YH3M)
  • 👨‍💻 BIGBEASTISHANK (Pranjal)


Bridging communication gaps through AI and accessibility. 🤝

About

An AI-powered translator that converts spoken audio or video input into American Sign Language (ASL) gestures in real time using deep learning and computer vision, featuring gesture animation, cloud accessibility, and support for multilingual and avatar-based rendering.
