Cloud-based Visual SLAM Framework for Mobile Augmented Reality

2017 COE Engineering Design Project (NMK04)


Faculty Lab Coordinator

Naimul Mefraz Khan

Topic Category

Software / Data Engineering

Preamble

The commercial market for Augmented Reality (AR) is expected to reach $20 billion by the end of 2020. A major bottleneck for AR is precise localization of the mobile device so that virtual and real-world objects can be aligned properly. Although recent efforts have shown that such localization is possible through Simultaneous Localization and Mapping (SLAM), these algorithms are still very resource-intensive for low-powered devices and drain the battery quickly, leaving very little room for additional intelligent processing. An alternative is cloud-based visual SLAM, where the camera data is sent to high-powered servers and the calculated SLAM parameters are sent back to the mobile device for efficient real-time AR. This also enables further intelligent processing on the server side, such as map storage and object-level scene understanding.
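
To make the data flow concrete, here is one possible wire format for the pose message the server sends back, sketched in Python. The PoseUpdate name and its field layout (frame id, 3-D position, orientation quaternion) are illustrative assumptions, not part of the project specification.

```python
import struct
from dataclasses import dataclass

# Hypothetical wire format for the pose reply: a little-endian uint32 frame
# id, a 3-vector position (x, y, z), and a unit quaternion (qx, qy, qz, qw).
POSE_FORMAT = "<I3f4f"  # 4 + 12 + 16 = 32 bytes

@dataclass
class PoseUpdate:
    frame_id: int
    position: tuple[float, float, float]
    orientation: tuple[float, float, float, float]  # quaternion

    def pack(self) -> bytes:
        return struct.pack(POSE_FORMAT, self.frame_id,
                           *self.position, *self.orientation)

    @classmethod
    def unpack(cls, payload: bytes) -> "PoseUpdate":
        fields = struct.unpack(POSE_FORMAT, payload)
        return cls(fields[0], fields[1:4], fields[4:8])
```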

Objective

To implement a cloud-based visual SLAM algorithm in which the mobile device only grabs camera data and sends it to a central server/cloud for the calculations required by SLAM. The cloud sends the positional information back to the mobile device for real-time localization. The critical parameter to optimize is latency, so that there is minimal delay between grabbing a camera image and retrieving the positional information from the cloud.
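
Since latency is the quantity to optimize, a client-side harness like the sketch below can time one full round trip: send an encoded frame, block until the pose reply arrives, and report the elapsed time. It is written in Python for brevity (the real client would be an Android/iOS app); the length-prefixed framing and the 32-byte reply size are assumptions carried over from the PoseUpdate sketch above.

```python
import socket
import struct
import time

def measure_round_trip(sock: socket.socket, jpeg_bytes: bytes) -> float:
    """Time one send-frame/receive-pose round trip, in seconds.

    Assumed framing (not the project spec): a 4-byte big-endian length
    prefix before the image, and a fixed 32-byte pose reply matching
    the PoseUpdate layout sketched earlier.
    """
    start = time.perf_counter()
    sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)
    reply = b""
    while len(reply) < 32:  # uint32 + 3 floats + 4 floats = 32 bytes
        chunk = sock.recv(32 - len(reply))
        if not chunk:
            raise ConnectionError("server closed the connection")
        reply += chunk
    return time.perf_counter() - start
```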

Partial Specifications

1. Divide an existing visual SLAM algorithm into server and client tasks, where the client grabs camera images and the server performs the required calculations.
2. Implement the client portion on a mobile device (Android/iOS) that can grab image data and send it to the server in real time.
3. Implement the server-side processing, so that the server can receive the images sent by the client, perform the SLAM calculations, and send position and orientation values back to the client (a server-side sketch follows this list).
4. Optimize latency so that, on the client side, the delay between sending an image and receiving the position and orientation is minimal.
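
A minimal server-side skeleton under the same assumed framing might look as follows; track_frame is a placeholder for the actual SLAM step (e.g. a wrapper around ORB-SLAM's tracking call) and is deliberately left unimplemented.

```python
import socket
import struct

def recv_exact(conn: socket.socket, n: int) -> bytes | None:
    """Read exactly n bytes, or return None if the peer disconnects."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            return None
        buf += chunk
    return buf

def serve(host: str = "0.0.0.0", port: int = 5000) -> None:
    """Single-client loop: receive a length-prefixed JPEG, run the SLAM
    tracker on it, and reply with a packed 32-byte pose."""
    with socket.create_server((host, port)) as server:
        conn, _ = server.accept()
        with conn:
            while True:
                header = recv_exact(conn, 4)
                if header is None:
                    break
                (length,) = struct.unpack(">I", header)
                jpeg = recv_exact(conn, length)
                if jpeg is None:
                    break
                # Placeholder: decode the JPEG and run one SLAM tracking
                # step, returning a PoseUpdate as sketched in the preamble.
                pose = track_frame(jpeg)
                conn.sendall(pose.pack())
```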

Suggested Approach

1. Study existing visual SLAM algorithms, especially the ones that are open source (e.g. ORB-SLAM).
2. Study the steps required to divide such an algorithm into a client-server architecture.
3. Investigate how camera images can be streamed to the server with minimal latency (see the streaming sketch after this list).
4. Investigate how client-server communication can be performed optimally.
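
For the streaming step, the sketch below combines two common latency-reduction tactics: JPEG compression to shrink each frame before it leaves the device, and TCP_NODELAY to disable Nagle's algorithm so small writes are not held back by the OS. It assumes the opencv-python package is available; the host, port, and framing are illustrative and match the earlier sketches.

```python
import socket
import struct

import cv2  # assumes the opencv-python package is installed

def stream_camera(host: str, port: int, jpeg_quality: int = 70) -> None:
    """Grab frames from the default camera, JPEG-encode them, and push
    them to the server as length-prefixed packets."""
    sock = socket.create_connection((host, port))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle
    capture = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            ok, encoded = cv2.imencode(
                ".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
            if not ok:
                continue
            payload = encoded.tobytes()
            sock.sendall(struct.pack(">I", len(payload)) + payload)
    finally:
        capture.release()
        sock.close()
```

Lowering jpeg_quality trades image fidelity (and hence feature quality for SLAM) against bandwidth, so it is a parameter worth sweeping during the latency measurements.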

Group Responsibilities

1. Literature review on existing visual SLAM algorithms suitable for client-server partitioning.
2. Implement a mobile app (Android/iOS) to stream camera images to the server.
3. Implement a server application to perform the SLAM calculations and send positional information back to the client.
4. Implement a prototype AR application to demonstrate the results of the project.
5. Quantify performance (latency, bandwidth usage, mobile resource usage); a summary sketch follows this list.
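
For the performance quantification in item 5, a small helper like the sketch below could reduce logged round-trip times to the summary numbers worth reporting; it is purely illustrative.

```python
import math
import statistics

def summarize_latency(samples_ms: list[float]) -> dict[str, float]:
    """Summarize measured round-trip latencies (milliseconds) as mean,
    median, and nearest-rank 95th percentile."""
    ordered = sorted(samples_ms)
    p95_index = min(len(ordered) - 1, math.ceil(0.95 * len(ordered)) - 1)
    return {
        "mean_ms": statistics.fmean(ordered),
        "median_ms": statistics.median(ordered),
        "p95_ms": ordered[p95_index],
    }

# Example: summarize_latency([42.0, 55.3, 48.7, 61.2, 45.1])
```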

Student A Responsibilities

TBD

Student B Responsibilities

TBD

Student C Responsibilities

TBD

Course Co-requisites

 

