πŸ” Real-time object detection with voice command integration using YOLOv5 (Objects365), OpenCV, MediaPipe, spaCy NLP, and SpeechRecognition. Enhances accessibility by guiding users to locate indoor objects with directional feedback relative to their position. Ideal for smart-home, accessibility tech, and assistive applications.

AI-powered-Voice-assisted-Object-Locator (ECE 492 Capstone G08)

Group members:

  1. Parth Dadhania (SID: 1722612)
  2. Het Bharatkumar Patel (SID: 1742431)
  3. Chinmoy Sahoo (SID: 1729807)
  4. Dang Nguyen (SID: 1740770)

πŸ“œ Setup Guide for Team (WSL & Raspberry Pi)

This guide ensures that every team member can set up, configure, and run the AI-powered Voice-Assisted Object Locator (AIVOL) project in an identical development environment on both WSL Ubuntu and Raspberry Pi.


πŸš€ Quick Setup Guide for AIVOL

πŸ“Œ Follow these steps carefully to ensure a smooth and identical setup on your machine.

πŸ”Ή Supported Platforms:
βœ… Windows Subsystem for Linux (WSL) Ubuntu (Development)
βœ… Raspberry Pi OS (Deployment/Production)


πŸ“Œ Step 1: Clone the GitHub Repository

Navigate to your desired project directory and run:

git clone https://github.com./PrthD/AI-powered-Voice-assisted-Object-Locator.git
cd AI-powered-Voice-assisted-Object-Locator

πŸ“Œ Step 2: Run the Setup Script

Make the setup script executable:

chmod +x setup.sh

Then, run the script:

./setup.sh

This will:
βœ” Install necessary system dependencies
βœ” Ensure Python 3.11.4 is installed using pyenv
βœ” Create and activate a virtual environment
βœ” Install all Python dependencies from requirements.txt
βœ” Download YOLO model weights


πŸ“Œ Step 3: Verify Installation

Once the setup is complete, verify that everything is correctly installed:

3.1 Check Python Version

python --version

βœ” Should output: Python 3.11.4

3.2 Check Installed Packages

pip list

βœ” Should list all dependencies (e.g., opencv-python, SpeechRecognition, PyAudio, pyttsx3, mediapipe, ultralytics, torch, torchvision).
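
As an extra check, you can confirm that the key modules actually import inside the virtual environment. A minimal sketch (the import names below correspond to the packages listed above):

import sys
import cv2                  # opencv-python
import speech_recognition   # SpeechRecognition
import pyaudio              # PyAudio
import pyttsx3
import mediapipe
import ultralytics
import torch
import torchvision

print("Python:", sys.version.split()[0])
print("OpenCV:", cv2.__version__, "| Torch:", torch.__version__)
print("All key modules imported successfully.")

If any import fails, that package did not install correctly; re-run ./setup.sh after addressing the reported error.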

3.3 Verify YOLO Model is Installed

ls -lh models/yolo/yolo.weights

βœ” Should show the YOLO model weights file (yolo.weights).
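
Alternatively, a short Python check (using the same models/yolo/yolo.weights path as above) behaves identically on WSL and Raspberry Pi OS:

from pathlib import Path

weights = Path("models/yolo/yolo.weights")  # same path as the ls check above
if weights.exists():
    size_mb = weights.stat().st_size / (1024 * 1024)
    print(f"Found {weights} ({size_mb:.1f} MB)")
else:
    print(f"{weights} is missing; re-run ./setup.sh to download it")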


πŸ“Œ Step 4: Running the Project

Now that everything is set up, run the main program:

python3 src/main.py
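
For orientation, here is a minimal, illustrative sketch of the kind of detect-and-announce loop described in the project summary at the top of this README. It is not the project's actual main.py: it assumes a stock YOLOv5s model pulled from the Ultralytics torch hub (rather than the Objects365 weights under models/yolo/) and a webcam at index 0.

import cv2
import torch

# Illustrative only: a stock YOLOv5s model from the Ultralytics hub,
# not the project's Objects365-trained weights (downloads on first run).
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

cap = cv2.VideoCapture(0)   # default webcam; change the index for your device
ret, frame = cap.read()
cap.release()

if ret:
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for *box, conf, cls in results.xyxy[0].tolist():
        x_center = (box[0] + box[2]) / 2
        # Crude directional feedback relative to the centre of the frame.
        side = "left" if x_center < frame.shape[1] / 2 else "right"
        print(f"{model.names[int(cls)]} ({conf:.2f}) detected to your {side}")

In the actual application, voice commands and spoken feedback (SpeechRecognition, spaCy NLP, pyttsx3, as listed in the project summary) handle the interaction instead of a print statement; this sketch only illustrates the detection-plus-direction idea.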

πŸ“Œ Troubleshooting Guide

If you encounter issues during setup, the script exits immediately and displays an error message. For example, if Python or the dependencies are not installed correctly, you will see a message like:

❌ An error occurred during the setup. Please review the error messages above and refer to the Troubleshooting Guide in README.md.

Common troubleshooting steps:

⚠️ pyenv not found: Run the following commands to add pyenv to your shell environment and restart your terminal:

export PATH="$HOME/.pyenv/bin:$PATH"   # make the pyenv command discoverable
eval "$(pyenv init --path)"            # put pyenv's shims on the PATH
eval "$(pyenv init -)"                 # enable pyenv shell integration
eval "$(pyenv virtualenv-init -)"      # enable pyenv-virtualenv integration
exec "$SHELL"                          # restart the shell so the changes take effect

⚠️ Dependency Installation Failures: Ensure you are connected to the internet and that your system package manager (apt) has updated successfully. If errors persist, address the specific error messages shown and run the setup script again.


πŸ“Œ Updating the Project

Whenever there’s a new update, run:

git pull origin main
./setup.sh

🎯 Summary of Steps

Step  Description
1️⃣    Clone the GitHub repository
2️⃣    Run the setup.sh script
3️⃣    Verify the installation (python --version, pip list)
4️⃣    Run the main program (python3 src/main.py)
5️⃣    Troubleshoot issues if needed
6️⃣    Pull updates and re-run setup.sh

πŸŽ‰ You're Now Ready to Develop & Deploy!

πŸš€ With this guide, every team member works from an identical setup, keeping collaboration seamless across WSL and Raspberry Pi! πŸš€
