SVM-SignLanguage is a machine learning project designed to recognize hand signs and gestures from Indian Sign Language (ISL). The system uses Support Vector Machines (SVM) for classification and integrates OpenCV and MediaPipe for image and video processing. The project supports both static hand signs (e.g., alphabets and numbers) and dynamic gestures (e.g., "Thank you", "Yes", "No").
The primary goal of this project is to develop an efficient system for recognizing hand signs and gestures from Indian Sign Language (ISL). This system aims to bridge the communication gap between individuals who use sign language and those who do not understand it. By leveraging machine learning techniques, specifically Support Vector Machines (SVM), and integrating tools like OpenCV and MediaPipe, the project seeks to:
- Recognize static hand signs (e.g., alphabets and numbers) with high accuracy.
- Identify dynamic gestures (e.g., "Thank you", "Yes", "No") in real time.
- Provide a real-time interface for practical use cases, such as education, accessibility, and communication.
- Static Hand Sign Recognition: Recognizes alphabets (A-Z) and numbers (0-9).
- Dynamic Gesture Recognition: Recognizes common gestures like "Thank you", "Yes", "No", etc.
- Real-Time Prediction: Uses a webcam interface to predict hand signs and gestures in real time.
```
datasets/
    misha_dataset/
    Misha_gesture_dataset/
    nikhita_dataset/
    spandanas_dataset/
    spandana_gesture_dataset/
SVM-SignLanguage/
    capture.py              # Script for capturing images or videos
    prediction.py           # Real-time prediction script
    README.md               # Project documentation
    requirements.txt        # Python dependencies
    train_svm.py            # SVM training script
    data/                   # Processed data files
        data_features.npy
        data_labels.npy
        new_data_features.npy
        new_data_labels.npy
    models/                 # Trained SVM models
        svm_model.joblib
        svm_model1.joblib
    utils/
        preprocess.py       # Preprocessing script for datasets
```
- Clone the repository:

  ```
  git clone https://github.com/your-username/SVM-SignLanguage.git
  cd SVM-SignLanguage
  ```

- Create a virtual environment and activate it:

  ```
  python -m venv venv
  venv\Scripts\activate       # On Windows
  source venv/bin/activate    # On macOS/Linux
  ```

- Install the required dependencies:

  ```
  pip install -r requirements.txt
  ```
Run the preprocessing script to extract features from the datasets:

```
python utils/preprocess.py
```
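The exact feature pipeline lives in `utils/preprocess.py`; a common approach with MediaPipe hand landmarks is to translate them to a wrist-relative origin and scale-normalize them before flattening into an SVM feature vector. A minimal sketch of that normalization (the `normalize_landmarks` helper is hypothetical, not taken from the repo):

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Translate landmarks so the wrist is the origin, then scale to [-1, 1].

    `landmarks` is an (N, 2) array of (x, y) coordinates; MediaPipe's hand
    model produces 21 such points, with the wrist at index 0.
    """
    pts = np.asarray(landmarks, dtype=np.float64)
    pts = pts - pts[0]                  # wrist (landmark 0) becomes origin
    scale = np.max(np.abs(pts))
    if scale > 0:
        pts = pts / scale               # fit coordinates into [-1, 1]
    return pts.flatten()                # flat feature vector for the SVM

# Tiny illustration with just two landmarks
feat = normalize_landmarks([[0.5, 0.5], [0.7, 0.9]])
```

Normalizing this way makes the features invariant to where the hand sits in the frame and how close it is to the camera, which matters when training data and webcam input differ.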
Train the SVM model using the processed data:

```
python train_svm.py
```
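For reference, the core of such a training script with scikit-learn looks like the sketch below. The synthetic arrays stand in for `data/data_features.npy` and `data/data_labels.npy`, and the RBF kernel is an assumption (a common default), not necessarily what `train_svm.py` uses:

```python
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-ins for the .npy feature/label files
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 42))          # 21 landmarks x (x, y) per sample
y = rng.integers(0, 5, size=200)        # 5 hypothetical sign classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf")                 # assumed kernel choice
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

# Persist in the same format as models/svm_model.joblib
joblib.dump(clf, "svm_model.joblib")
```

With real data you would load `np.load("data/data_features.npy")` and `np.load("data/data_labels.npy")` in place of the synthetic arrays.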
Use the webcam interface to predict hand signs and gestures in real time:

```
python prediction.py
```
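Per-frame SVM predictions from a webcam tend to flicker; one common stabilization technique (not necessarily what `prediction.py` does) is a majority vote over a sliding window of recent frames. A self-contained sketch:

```python
from collections import Counter, deque

class PredictionSmoother:
    """Return the majority label over the last `window` per-frame predictions."""

    def __init__(self, window=15):
        self.history = deque(maxlen=window)

    def update(self, label):
        self.history.append(label)
        # Most common label in the window wins
        return Counter(self.history).most_common(1)[0][0]

# Simulated per-frame classifier outputs with one noisy frame
smoother = PredictionSmoother(window=5)
for frame_label in ["A", "A", "B", "A", "A"]:
    stable = smoother.update(frame_label)
print(stable)  # → A
```

In the real-time loop, `update()` would be called once per frame with the SVM's predicted label, and the smoothed label is what gets drawn on screen.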
Capture images or videos for new hand signs or gestures:

```
python capture.py
```
- Place the new dataset in the `datasets/` directory.
- Update the `utils/preprocess.py` script to include the new dataset.
- Run the preprocessing script to extract features.
- Retrain the SVM model using `train_svm.py`.
- Misha N Devegowda - m1sha1107
- Nikhita K Nagavar
- Spandana Sujay
- MediaPipe: For hand landmark detection.
- OpenCV: For image and video processing.
- scikit-learn: For SVM implementation.