Action Recognition-Based Sign Language Interpretation Using CNNs
Developed a real-time sign language detection system that converts hand gestures into readable text using deep learning and computer vision.
Utilized OpenCV for real-time video capture and TensorFlow to train a CNN that classifies frames into American Sign Language (ASL) gestures.
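A minimal sketch of the classification side, assuming a small Keras CNN over fixed-size RGB frames; the input size (64x64), layer widths, and 26-class output (one per ASL letter) are illustrative assumptions, not the project's actual architecture:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumption: one class per ASL letter

def build_gesture_cnn(input_shape=(64, 64, 3), num_classes=NUM_CLASSES):
    # Two conv/pool blocks followed by a dense softmax classifier head.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_gesture_cnn()
# A random batch stands in for preprocessed webcam frames from OpenCV.
frames = np.random.rand(4, 64, 64, 3).astype("float32")
probs = model.predict(frames, verbose=0)
print(probs.shape)  # one probability row per frame, one column per class
```

In a live pipeline, frames captured via `cv2.VideoCapture` would be resized and normalized before being passed to `model.predict`, with the argmax class mapped back to its text label.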
Preprocessed input frames with hand landmark detection and image augmentation to improve classification accuracy under varied lighting conditions and backgrounds.
Built with the goal of enhancing inclusive communication for people with hearing and speech impairments, with potential for integration into assistive technologies.