Posted in Computer Vision

OpenCV Augmented Reality Demo

Introduction

You are probably wondering what Augmented Reality has to do with CNC milling.  Augmented Reality would enhance the usability of the mill by adding a camera.  The camera would tell the mill where to move by detecting markers placed on the material to be machined.  I came across a lot of articles online about marker tracking.  The goal is to have the camera detect the marker and calculate its position on the material.  Next, the mill would move to that position and start cutting the material.

Open Source Augmented Reality

There are many Augmented Reality libraries out there.  The three libraries that I looked at were the ArUco toolkit, ARToolkit, and Glyph.  I was looking for a library that would support 64-bit Windows and was based on OpenCV or Emgu.  I chose the ArUco library for its marker tracking ability.  For more information on the ArUco library, refer to the following links:

http://www.uco.es/investiga/grupos/ava/node/26

https://sourceforge.net/projects/aruco/

There is a learning curve with the library, but there are many simple examples to start with.  Calibrating the camera took a little time to get working, though that was my fault, not the library's.
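A minimal detection loop, based on my recollection of the sample code shipped with ArUco 1.3, looks roughly like the sketch below.  The file name "camera.yml" and the 5 cm marker size are assumptions, and the class and method names are from the 1.x API, which differs from later versions:

```cpp
#include <aruco/aruco.h>
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                   // default web camera
    aruco::CameraParameters camParams;
    camParams.readFromXMLFile("camera.yml");   // output of the calibration sample
    aruco::MarkerDetector detector;
    float markerSizeMeters = 0.05f;            // physical marker size (assumed 5 cm)

    cv::Mat frame;
    while (cap.read(frame)) {
        std::vector<aruco::Marker> markers;
        // Detect all markers in the frame; with camera parameters and a
        // marker size, each Marker also gets a pose (Rvec/Tvec) estimate.
        detector.detect(frame, markers, camParams, markerSizeMeters);
        for (auto& m : markers)
            m.draw(frame, cv::Scalar(0, 0, 255), 2);  // outline and ID overlay
        cv::imshow("markers", frame);
        if (cv::waitKey(10) == 27) break;      // Esc to quit
    }
    return 0;
}
```

Each detected Marker carries its ID and corner positions, which is what the mill-positioning idea above would build on.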

The ARToolkit is also a good library to use.  It works with the Unity game engine, which looked like a popular package for Augmented Reality.  The next area to look at would be understanding how to integrate Unity with the project.

The versions of software I was using:

  • OpenCV Library 3.1
  • ArUco Library 1.3
  • ARToolkit V5.x
  • Visual Studio 2012
  • Windows 7 x64
  • Kinect SDK 1.8

Kinect Camera

The project started out using a Logitech web camera.  Once the software was working, the next phase was to use the Kinect camera.  As the project progresses, the Kinect will be used to calculate the position of the marker in 3D space.  The Kinect can measure distance because it provides a depth buffer along with an image buffer; a plain web camera only has an image buffer.  With the depth and image buffers aligned in the video, a user can click the mouse button in the video window to get the distance reading (Z).  The X and Y positions would then be derived from that measurement (I am not sure yet about the accuracy of the measurements).
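Once the depth value Z at a clicked pixel is known, X and Y fall out of the pinhole camera model.  The sketch below uses approximate intrinsics for the Kinect v1 depth camera at 640x480 (a focal length around 571 pixels and the principal point at the image center); these numbers are assumptions, not calibrated values:

```cpp
#include <cassert>
#include <cmath>

struct Point3 { float x, y, z; };

// Approximate Kinect v1 depth-camera intrinsics (assumed, not calibrated).
const float fx = 571.0f, fy = 571.0f;   // focal lengths in pixels
const float cx = 320.0f, cy = 240.0f;   // principal point (image center)

// Back-project pixel (u, v) with measured depth z (in meters) into
// camera-space coordinates: X = (u - cx) * z / fx, Y = (v - cy) * z / fy.
Point3 depthToWorld(float u, float v, float z) {
    return { (u - cx) * z / fx, (v - cy) * z / fy, z };
}
```

A pixel at the principal point maps to X = Y = 0, and X grows linearly with both the pixel offset from center and the measured depth, which is why the depth reading alone is enough to recover the other two axes.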

Marker Tracking with the Kinect

The video below starts out with the Kinect tracking one marker.  The second half of the video shows the Kinect tracking 24 markers.  Initially the Kinect would only track 6 of them, because the image was mirrored horizontally, which meant the markers were mirrored as well.  Those 6 markers were still detected because their patterns are horizontally symmetrical.  Once the orientation of the video was corrected, the Kinect detected all 24 markers.
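The mirroring effect is easy to see if you treat a marker as a grid of bits: a horizontally flipped camera image flips every marker pattern, so a marker can only match its own dictionary entry if each row of its pattern reads the same in both directions.  A small self-contained illustration (the patterns here are made up, not real ArUco codes):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// A marker's bit pattern, one string per row ('1' = black cell, '0' = white).
using Pattern = std::vector<std::string>;

// Mirror the pattern horizontally, as a flipped camera image would.
Pattern flipHorizontal(const Pattern& p) {
    Pattern out = p;
    for (auto& row : out)
        std::reverse(row.begin(), row.end());
    return out;
}

// A marker still matches its own dictionary entry after mirroring
// only if it is identical to its mirror image.
bool isHorizontallySymmetric(const Pattern& p) {
    return p == flipHorizontal(p);
}
```

Only the markers for which `isHorizontallySymmetric` holds survive a mirrored feed, which matches the behavior in the video: 6 symmetric markers detected before the flip was fixed, all 24 afterward.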

The application is written in C++.  It is one of the sample applications provided with the library, modified for the Kinect.  Most of the work was getting the library to work with OpenCV 3.1 and the Kinect SDK 1.8.

Most of the OpenCV applications I came across used a web cam, not the Kinect.  The code for collecting images with the Kinect came from a blog by Rozengain, “A Creative Technology Company Specialized in 3D”.  The web site is http://www.rozengain.com

The next step is to make a version that works in WPF and C#.