Eye C.U.



Eye Camera Unit Capstone Project

Fully Assembled Dualsky Quadrotor with Autonomous Tracking Platform

Over the past decades, robotic detection and following has been a goal of scientists and engineers because of its applications in vehicle convoying, criminal justice, artificial intelligence, and filmmaking. Motivated by this demand, the members of the Eye Camera Unit (Eye C.U.) have designed a robot that can follow arbitrary objects through various environments using computer vision. Previous tracking-and-following designs have relied on infrared beacons and sensors, inertial measurement units (IMUs), and differential global positioning systems (DGPS). The largest downfall of infrared systems, IMUs, and differential GPS is that they all rely on information transmitted from the object being followed. Computer vision allows the follower to be fully autonomous, with no information sent from the followed object. With its computer vision software and a gimbal with two degrees of freedom, Eye C.U. hopes to introduce a tracking and following method that differs from its competition.

Project Specifications

When Eye C.U. began its initial design in the fall of 2012, goals were set for the gimbal with two degrees of freedom: it should weigh under 200 g, provide 120° of movement, and prove that computer vision is a viable method for tracking and following.

Gimbal Assembly attached to Dualsky Quadrotor

Initially, the proof of concept, as seen in the gallery, was very large and too heavy even though it was made of low density, robust materials such as acrylic and polyvinyl chloride plastic (PVC). Though this initial concept was too heavy to attach to a quadrotor, it proved that Eye C.U.’s computer vision approach was possible. Eye C.U.’s second design for the dynamic gimbal proved to be much lighter, weighing in at only 45 grams.

Portion of New Gimbal Assembly

The quadrotor chosen by Eye C.U. was the Dualsky 460 Hornet. It was chosen for its high payload capacity (1 kg, excluding battery) and its relatively low price ($299.99).

Eye C.U. flying the Dualsky 460



The Eye C.U. Team

Name Experience
Carlo Desantis: As the leader of the Eye C.U. team, Carlo strives to keep the other members on task and motivated to accomplish the overall goal of the design. Carlo will graduate from the University of Nevada, Reno in the spring of 2013 with a Bachelor of Science in Engineering and a minor in Mathematics.
Kristofer Berggren: Kristofer will graduate from the University of Nevada, Reno in the spring of 2013 with a Bachelor of Science in Mechanical Engineering and a minor in Business Administration. He has experience in electronic circuits, management, and Linux operating system programming.
Daniel Hayden: Daniel is a senior engineering student at the University of Nevada, Reno. With experience in RC helicopters, he brings strategy and enthusiasm to the Eye C.U. design.
James Mulcahy: James is a dual-major student at the University of Nevada, Reno pursuing degrees in Mathematics and Mechanical Engineering. James brings knowledge of programming in C++ from which the team has benefited greatly in its final design.
Dane Weiler: Dane's professional experience in machining and design has helped the Eye C.U. team with the mechanical aspects of the project as well as aesthetics. Dane will graduate in the spring of 2013 with a Bachelor of Science in Mechanical Engineering and a minor in Mathematics. During time away from work and school he enjoys building and racing late-60's and early-70's muscle cars.




The computer vision program used by Eye C.U. to track and follow objects is OpenTLD (also referred to as the Predator algorithm). Developed by Zdenek Kalal, OpenTLD is an open-source computer vision program that simultaneously Tracks, Learns, and Detects an object through streaming video. OpenTLD was chosen by Eye C.U. for the following reasons:

  • OpenTLD can track various objects, including cars, humans, animals, and miscellaneous shapes
  • Long-term tracking abilities (many other computer vision programs fail from accumulated errors)
  • Learning abilities and adaptive short-term tracking
  • Ability to extract data
  • Open source and user friendly


The TLD algorithm utilizes the Lucas-Kanade method for its short-term tracking. This method uses spatial intensity gradient information to direct the search for the tracked object through consecutive frames of video. The Lucas-Kanade method is important to the TLD algorithm because it increases the adaptability of the tracker and, through its sparse motion fields, increases the rate of tracking. Further information on the tracking component of OpenTLD can be found here.


The most unique aspect of the TLD algorithm is its online learning capability: the program improves itself while it is running. Using the algorithm's repeated detections and their errors, the program continuously “grows” and “prunes” the model of the tracked object through bootstrapped binary classifiers and forward-backward error. These growing and pruning events are what give OpenTLD its robustness in long-term tracking. For further reading on OpenTLD's learning methods, Zdenek Kalal's publications can be found here and here.


The detector of the TLD algorithm uses a new method called 2bit Binary Patterns (2bitBP). 2bitBP measures gradient orientation within a small area of the image (a 2 x 1 vertical and a 1 x 2 horizontal pixel arrangement). The algorithm then quantizes this information by layering the horizontal and vertical components, outputting one of four possible orientations. More detailed information on OpenTLD's detection method can be found here.

OpenTLD Source Code

Open Source Code for OpenTLD (matlab)

Open Source Code for OpenTLD (C++)

Eye C.U. Source Code

[code language="cpp"]
#include <fcntl.h>    // open()
#include <termios.h>  // serial port configuration
#include <unistd.h>   // write()
#include <stdio.h>

#include "BB.h"
#include "BBPredict.h"
#include "Median.h"
#include "Lk.h"

// ----------------------------------------------------------------------

// Open the serial port to the servo controller
int fd = open("/dev/ttyACM0", O_RDWR | O_NOCTTY | O_NDELAY);
if (fd == -1) { fprintf(stderr, "\n Could not open port \n"); }
else { fprintf(stderr, "\n Port Opened \n"); }

struct termios options;
tcgetattr(fd, &options);
cfsetispeed(&options, B9600);
cfsetospeed(&options, B9600);
tcsetattr(fd, TCSANOW, &options); // apply the 9600 baud settings to the port

int servoRotx = 48; int servoRoty = 48; // neutral default (assumed); 58/37 drive an axis in opposite directions
int servoNumber0 = 0; int servoNumber5 = 5;

// Center of the bounding box reported by OpenTLD
int centerptsx = 0.5 * (bb[0] + bb[2]);
int centerptsy = 0.5 * (bb[1] + bb[3]);

int cvalx = 340; // ---> new center at (320,240)
int cvaly = 250;

int rightptx = 370; int leftptx = 270;
int toppty = 205; int botpty = 275;

// Tolerance box to scale down servo output when closer to desired outcome
// int tolx = #;
// int toly = #;

printf("\n Center-x: %d \n", centerptsx);
printf("\n Center-y: %d \n", centerptsy);

// Nine-region logic: drive each axis toward the center tolerance box
if (centerptsx < leftptx && centerptsy < toppty)                                 // top left
    { servoRotx = 58; servoRoty = 58; }
else if (leftptx < centerptsx && centerptsx < rightptx && centerptsy < toppty)   // top mid
    { servoRoty = 58; }
else if (centerptsx > rightptx && centerptsy < toppty)                           // top right
    { servoRotx = 37; servoRoty = 58; }
else if (centerptsx < leftptx && toppty < centerptsy && centerptsy < botpty)     // mid left
    { servoRotx = 58; }
else if (leftptx < centerptsx && centerptsx < rightptx &&
         toppty < centerptsy && centerptsy < botpty)                             // mid mid
    { /* do nothing ---> already at desired location */ }
else if (centerptsx > rightptx && toppty < centerptsy && centerptsy < botpty)    // mid right
    { servoRotx = 37; }
else if (centerptsx < leftptx && centerptsy > botpty)                            // bot left
    { servoRotx = 58; servoRoty = 37; }
else if (leftptx < centerptsx && centerptsx < rightptx && centerptsy > botpty)   // bot mid
    { servoRoty = 37; }
else if (centerptsx > rightptx && centerptsy > botpty)                           // bot right
    { servoRotx = 37; servoRoty = 37; }

/*
// Earlier four-quadrant version (no tolerance box):
if (centerptsx > cvalx && centerptsy > cvaly)      { servoRotx = 37; servoRoty = 37; }
else if (centerptsx > cvalx && centerptsy < cvaly) { servoRotx = 37; servoRoty = 58; }
else if (centerptsx < cvalx && centerptsy > cvaly) { servoRotx = 58; servoRoty = 37; }
else if (centerptsx < cvalx && centerptsy < cvaly) { servoRotx = 58; servoRoty = 58; }
*/

// Split each position into 7-bit low and high parts for the controller
unsigned char pos_hi_x = (servoRotx & 0xFF) >> 7;
unsigned char pos_low_x = servoRotx & 0x7F;
unsigned char pos_hi_y = (servoRoty & 0xFF) >> 7;
unsigned char pos_low_y = servoRoty & 0x7F;

// --------- Servo 0 (x) ------------------------------------------------
unsigned char buffx[6];
buffx[0] = 0xAA;         // start byte
buffx[1] = 0x0C;         // device id
buffx[2] = 0x04;         // command number
buffx[3] = servoNumber0; // servo number
buffx[4] = pos_hi_x;     // data1
buffx[5] = pos_low_x;    // data2

// --------- Servo 5 (y) ------------------------------------------------
unsigned char buffy[6];
buffy[0] = 0xAA;         // start byte
buffy[1] = 0x0C;         // device id
buffy[2] = 0x04;         // command number
buffy[3] = servoNumber5; // servo number
buffy[4] = pos_hi_y;     // data1
buffy[5] = pos_low_y;    // data2

int writeServx = write(fd, &buffx, 6);
int writeServy = write(fd, &buffy, 6);
[/code]




Image Description
 The proof of concept gimbal was large and bulky. Though it proved that tracking and following could be achieved through computer vision, it would not be used for the final product which will be attached to a quadrotor.
 With its payload capacity of 1 kg (excluding the battery), the Dualsky 460 Hornet was chosen as the quadrotor for Eye C.U.'s design. The quadrotor is roughly a foot and a half in length and width and, at only $299.99, is inexpensive for its payload.
 A 5-meter-long endoscope was originally bought by Eye C.U. for its low price as well as its low resolution (lower resolution means less computation and processing power). With the outside framing of the endoscope removed and the cord shortened from 5 meters to 32 centimeters, the camera's weight was reduced by 175 grams!
 Eye C.U. chose the PandaBoard ES as its microcomputer for its ability to process at 1.2 GHz. The PandaBoard could also run MATLAB, which was useful to Eye C.U. in its initial programming.
Close-up of Eye C.U.’s camera and gimbal unit
Final assembly ready for action. (Minus a propeller blade)
CNC router machining a pillow block for the original proof-of-concept gimbal



  For any questions or concerns with the Eye C.U. design, please email: