Autonomous Fetching Robot with Voice Control for Assisted Living

2019 ELE Engineering Design Project (NMK05)


Faculty Lab Coordinator

Naimul Mefraz Khan

Topic Category

Consumer Products / Applications

Preamble

Technology development for assisted living plays an important role in improving the quality of life of seniors in Canada. An intelligent robot that can autonomously fetch specific objects for seniors and bring them back can drastically improve the quality of assisted living. By automating such tasks, the time and cost of additional human resources can be reduced, making assisted living more affordable.

Objective

The objective of this project is to develop an autonomous robot car that can drive around a person's house. Upon a voice command (e.g. "get me a coffee mug"), the robot should be able to navigate around the house, find the specified object, pick it up, and bring it back to the person.

Partial Specifications

1. The robot car should have a camera for object detection, and additional sensors (e.g. an ultrasonic sensor) for distance measurement if necessary.

2. The object to be fetched should be communicated to the robot with voice control.

3. The robot should have the ability to pick up a regularly shaped object (a sled/platform is fine; grasping is not required).

4. The robot should have the ability to navigate back to the person after picking up the object (through facial recognition/person identification).

Suggested Approach

1. A 4-wheel car can be built, or an existing RC car can be modified, so that it can be controlled through an Arduino microcontroller and an NVIDIA Jetson Nano or similar.

2. Object detection should be implemented with machine learning/computer vision (a minimal detection sketch is given after this list).

3. Voice commands can be implemented through Google Assistant/Amazon Alexa (a simple stand-in sketch follows the list).

4. Mount the camera on the car. The camera should have tilting capability for face recognition/person identification.

5. ROS (Robot Operating System) is recommended for automating robot navigation (a navigation-goal sketch is given below).

6. Additional sensors can be mounted for ease of navigation and object pickup (a distance-reading sketch is given below).
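
For item 2, the following is a minimal object-detection sketch using OpenCV's DNN module with a pre-trained MobileNet-SSD detector; the model file names, camera index, and confidence threshold are placeholders the team would replace with the detector they actually train or download.

    # Minimal sketch: detect objects in one camera frame with OpenCV's DNN
    # module and a pre-trained MobileNet-SSD model.  The model file names
    # below are placeholders; substitute the detector the team actually uses.
    import cv2

    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                                   "MobileNetSSD_deploy.caffemodel")

    cap = cv2.VideoCapture(0)          # on-board camera (index is an assumption)
    ret, frame = cap.read()
    cap.release()

    if ret:
        h, w = frame.shape[:2]
        # Resize to the 300x300 input the SSD expects and normalise pixel values.
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()     # shape: (1, 1, N, 7)
        for i in range(detections.shape[2]):
            confidence = float(detections[0, 0, i, 2])
            if confidence > 0.5:       # keep confident detections only
                class_id = int(detections[0, 0, i, 1])
                box = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
                print("class", class_id, "confidence", round(confidence, 2),
                      "box", box.tolist())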
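
For item 3, a full Google Assistant/Alexa skill would return the requested object directly as an intent slot. The sketch below uses the SpeechRecognition Python package as a simpler stand-in to show how speech can be turned into a target-object string; the keyword list is illustrative only.

    # Sketch of the voice front end, using the SpeechRecognition package as a
    # stand-in for a full Google Assistant / Alexa integration.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        print("Listening for a command...")
        audio = recognizer.listen(source)

    try:
        command = recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        command = ""

    # Simple keyword spotting: "get me a coffee mug" -> "coffee mug".
    # A real Assistant/Alexa skill would return this slot directly.
    KNOWN_OBJECTS = ["coffee mug", "water bottle", "remote"]   # illustrative list
    target = next((obj for obj in KNOWN_OBJECTS if obj in command), None)
    print("Requested object:", target)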
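
For item 5, once the ROS navigation stack (move_base) is running with a map of the house, sending the robot to a location reduces to publishing a goal pose. The sketch below shows one way to do that with actionlib; the goal coordinates are placeholders.

    #!/usr/bin/env python
    # Sketch: send a single navigation goal to the ROS navigation stack
    # (move_base).  Assumes move_base is already running with a map of the
    # house; the goal coordinates below are placeholders.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node("fetch_goal_sender")

    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.5        # placeholder x (metres)
    goal.target_pose.pose.position.y = 0.5        # placeholder y (metres)
    goal.target_pose.pose.orientation.w = 1.0     # face along the map x-axis

    client.send_goal(goal)
    client.wait_for_result()
    print("Navigation finished with state:", client.get_state())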
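
For item 6, one common arrangement is to let the Arduino read the ultrasonic sensor and stream distances over USB serial to the Jetson Nano/Raspberry Pi. The sketch below assumes the Arduino prints one distance in centimetres per line; the port name, baud rate, and 20 cm threshold are assumptions that depend on the actual wiring and sketch.

    # Sketch: read ultrasonic distances that the Arduino prints over USB
    # serial, one centimetre value per line.  The port name and baud rate
    # are assumptions and depend on the actual wiring and Arduino sketch.
    import serial

    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
        for _ in range(10):                        # sample a few readings
            line = port.readline().decode(errors="ignore").strip()
            if not line:
                continue
            try:
                distance_cm = float(line)
            except ValueError:
                continue
            if distance_cm < 20:
                print("Obstacle within 20 cm:", distance_cm)
            else:
                print("Clear:", distance_cm, "cm")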

Group Responsibilities

1. Study the required hardware (RC cars, Arduino, Raspberry Pi, NVIDIA Jetson Nano).
2. Study the required software APIs (OpenCV, Keras, and TensorFlow for object detection and face detection/person identification; ROS for navigation; Google Assistant/Alexa for voice).
3. Design the car rig that can fit all the required components.
4. Program the end-to-end demonstrable product.
5. Test the demonstrable product under different conditions, both ideal and non-ideal, and report limitations.

Student A Responsibilities

Building the hardware rig (4-wheel car, sled/platform for picking up the object, tilt mechanism for the camera)

Student B Responsibilities

Program object detection and person identification/face recognition
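
A possible starting point for the face-detection half of this task is OpenCV's bundled Haar cascade, sketched below; identifying which person was detected would require an additional recognition model (e.g. embeddings of the user's face), which is not shown. The camera index is an assumption.

    # Sketch: detect a face in one frame with OpenCV's bundled Haar cascade.
    # Recognising *which* person was found needs an extra model (not shown).
    import cv2

    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_detector = cv2.CascadeClassifier(cascade_path)

    cap = cv2.VideoCapture(0)                      # camera index is an assumption
    ret, frame = cap.read()
    cap.release()

    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            print("Face at", (x, y, w, h))
            # The face's offset from the image centre could drive the camera
            # tilt servo so the robot keeps the person in view while returning.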

Student C Responsibilities

Program ROS for navigation

Student D Responsibilities

Overall integration of the different components and electronics

Course Co-requisites

 

