



Title

EYEMATE – Object Detection With Voice Commands for Visually Impaired

Abstract

Object detection underpins almost all real-world vision applications, such as autonomous navigation, visual systems, face recognition, and many more. By applying object detection techniques, a system can identify an image's distinguishing features and build a meaningful understanding of a scene, much as human vision does. This study begins with a succinct introduction to deep learning, which has produced noteworthy performance on such visual representations, and then clarifies the role of deep learning with region-based convolutional neural networks (R-CNNs) in object detection algorithms that assist the blind by identifying objects by name. EYEMATE's primary role is to record live visual input using a smartphone or portable camera. An object detection algorithm then processes this input and can identify a broad variety of items that are frequently seen in daily life, including obstacles, traffic signs, products on store shelves, and more. The system uses modern machine learning techniques to increase its accuracy and extend its object detection capabilities. To promote smooth interaction, EYEMATE incorporates a natural language processing component that allows users to give voice commands. By speaking to the system, users can ask it to describe their surroundings, identify specific objects, or assist with activities such as navigating unfamiliar places. The system responds through synthesized speech or haptic feedback, ensuring that users receive timely and pertinent information.
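The abstract does not name a specific implementation stack, so the sketch below is only illustrative. It assumes OpenCV for the camera feed, a COCO-pretrained Faster R-CNN (a region-based CNN) from torchvision as the detector, pyttsx3 for offline speech output, and a hypothetical confidence threshold of 0.6; none of these choices come from the paper itself. It shows the basic loop the abstract describes: capture a frame, detect objects, and announce their names aloud.

```python
# Minimal sketch of an EYEMATE-style pipeline (assumed stack: OpenCV camera
# capture, torchvision Faster R-CNN detector, pyttsx3 text-to-speech).
import cv2
import torch
import pyttsx3
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]      # COCO class names
preprocess = weights.transforms()        # converts uint8 image to float tensor

speaker = pyttsx3.init()                 # offline text-to-speech engine


def describe_frame(frame_bgr, score_threshold=0.6):
    """Detect objects in one camera frame and speak their names."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = preprocess(torch.from_numpy(rgb).permute(2, 0, 1))
    with torch.no_grad():
        detections = model([tensor])[0]
    names = {
        labels[i]
        for i, s in zip(detections["labels"].tolist(),
                        detections["scores"].tolist())
        if s >= score_threshold
    }
    if names:
        speaker.say("I can see " + ", ".join(sorted(names)))
        speaker.runAndWait()


cap = cv2.VideoCapture(0)                # smartphone or USB camera stream
ok, frame = cap.read()
if ok:
    describe_frame(frame)
cap.release()
```

Voice-command handling (for example, "what is around me?") could be layered on top by invoking describe_frame from a speech-recognition callback instead of calling it directly, with haptic feedback as an alternative output channel.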

Key Words

Object Detection, RCNN, Neural Network, Deep Learning

Cite This Article

"EYEMATE – Object Detection With Voice Commands for Visually Impaired", International Journal of Emerging Technologies and Innovative Research (www.jetir.org), ISSN:2349-5162, Vol.11, Issue 4, page no.c729-c736, April-2024, Available :http://www.jetir.org/papers/JETIR2404280.pdf


Publication Details

Published Paper ID: JETIR2404280
Registration ID: 536280
Published In: Volume 11 | Issue 4 | April-2024
DOI (Digital Object Identifier):
Page No: c729-c736
Country: Greater Noida, Uttar Pradesh, India
Area: Science & Technology
ISSN Number: 2349-5162
Publisher: IJ Publication

