In this article, I present the final part of our computer vision project: moving object detection, tracking, positioning, and speed estimation using only a single camera. This part contains an example of ‘visual aircraft tracking’. The project was completed using Python and the OpenCV library. After listing the required reference libraries and summarizing the general steps, I share the main file, which contains the main Python code.
As I mentioned in the previous posts, the first stage of the project is automatic moving object detection. For this, I use a novel hybrid method that mixes the frame-differencing and background-subtraction approaches. The details can be found in the following post:
From the post above, please download the ‘objectDetector.py’ file into your working directory, because we will need it for aircraft tracking.
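The actual detector lives in ‘objectDetector.py’; as a rough illustration of the hybrid idea (a sketch with made-up thresholds and alpha, not the author's exact code), one can require that a pixel disagrees with both the previous frame and a slowly updated running-average background before flagging it as moving:

```python
import numpy as np

def hybrid_mask(prev, curr, background, alpha=0.95, diff_thresh=25, bg_thresh=25):
    """Combine frame differencing with running-average background subtraction.

    prev, curr: consecutive grayscale frames (uint8 arrays)
    background: running-average background (float32 array)
    Returns the updated background and a binary motion mask (0 or 255).
    """
    # Frame differencing: pixels that changed between consecutive frames
    frame_diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > diff_thresh
    # Background subtraction: pixels that differ from the slow background model
    bg_diff = np.abs(curr.astype(np.float32) - background) > bg_thresh
    # Hybrid: require agreement of both cues, which suppresses noise from either one
    mask = (frame_diff & bg_diff).astype(np.uint8) * 255
    # Update the background with an exponential moving average
    background = alpha * background + (1.0 - alpha) * curr.astype(np.float32)
    return background, mask
```

Connected components of the mask would then become the bounding boxes in the detector's detection list.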
The second step of the project is the object tracking part. The related post is the following:
In that post, I present a simple and lightweight tracker based on template matching, Kalman filtering, and adaptive scaling. The required file from that post is ‘customTracker.py’, which includes both the customTracker and customMultiTracker classes. Please also download this file into your working directory.
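The full tracker is in ‘customTracker.py’; to give a feel for its template-matching core (a brute-force sketch, not the author's implementation, which also adds Kalman filtering and adaptive scaling), one can localize a template in a search image by minimizing the sum of squared differences:

```python
import numpy as np

def match_template_ssd(image, template):
    """Return (row, col) of the top-left corner where template best matches image.

    Brute-force sum-of-squared-differences search over every placement; in
    practice cv2.matchTemplate does this far faster, restricted to a search
    window around the Kalman-predicted position.
    """
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw].astype(np.float32)
            ssd = np.sum((patch - template.astype(np.float32)) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```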
The third step of the project is the 3D positioning of the moving objects using a single camera. In most cases, stereo and depth cameras are used to capture 3D images and the depth information of the objects in the scene. With a single camera, however, it is not easy to extract the depth information accurately. Once it is extracted, one can reconstruct the 3D position of the object by means of the camera properties and mapping formulations. The velocity can then be estimated essentially for free by the Kalman filter. The following article includes the details of this step:
The required file for the post above is ‘localization.py’. Please also download it into your working directory.
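The mapping formulations live in ‘localization.py’; the key single-camera trick can be sketched with a pinhole model (illustrative helper names, not the file's actual API). An object of known real length L that appears l pixels long at focal length f (in pixels) sits at depth Z = f·L / l, and once the depth is known, any pixel can be back-projected to camera coordinates:

```python
def estimate_depth(focal_length_px, real_length_m, apparent_length_px):
    """Pinhole-camera depth from a known object size: Z = f * L / l.

    This is what makes single-camera 3D positioning possible: the known
    real length of the aircraft substitutes for a second camera.
    """
    return focal_length_px * real_length_m / apparent_length_px

def pixel_to_3d(u, v, cx, cy, focal_length_px, depth_m):
    """Back-project pixel (u, v) to camera coordinates at a known depth.

    (cx, cy) is the principal point, usually near the image center.
    """
    x = (u - cx) * depth_m / focal_length_px
    y = (v - cy) * depth_m / focal_length_px
    return (x, y, depth_m)
```

For example, a 40 m aircraft spanning 100 pixels under a 1000-pixel focal length is roughly 400 m away.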
Both the tracking and localization parts use a Kalman filter, which is presented in the following post:
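The actual filter is in ‘kalmanFilter.py’; as a minimal illustration of the predict/update cycle that both the tracker and the localizer rely on (a 1D constant-velocity sketch with assumed noise parameters), note how velocity falls out of the filter even though only position is measured:

```python
import numpy as np

class SimpleKalman1D:
    """Minimal constant-velocity Kalman filter: state = [position, velocity]."""

    def __init__(self, dt=1.0, process_var=1e-2, meas_var=1.0):
        self.x = np.zeros(2)                         # state estimate
        self.P = np.eye(2) * 500.0                   # covariance (large = uncertain)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
        self.H = np.array([[1.0, 0.0]])              # only position is measured
        self.Q = np.eye(2) * process_var             # process noise
        self.R = np.array([[meas_var]])              # measurement noise

    def step(self, z):
        # Predict: propagate state and covariance through the motion model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update: blend in the position measurement z via the Kalman gain
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x.copy()
```

Feeding it positions from an object moving at 2 units per step, the velocity estimate converges to about 2 — this is why the speed estimate comes "for free" once the 3D position is tracked.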
Please also download the ‘kalmanFilter.py’ file from the post given above. We are now ready for visual aircraft tracking. Please create a file named ‘aircraftSpeedEstimation.py’ and copy-paste the following Python code:
import cv2
import numpy as np
import time

from customTracker import customTracker, customMultiTracker
from localization import localization
from objectDetector import objectDetector


class aircraftSpeedEstimation():
    def __init__(self):
        self.file_name = 'aircraft.mp4'
        self.frame_rate = 30
        self.capture = cv2.VideoCapture(self.file_name)
        # Check if the video file opened successfully
        if not self.capture.isOpened():
            print("Error opening video stream or file")
        self.frame_name = 'Aircraft Speed Estimation'
        self.width = int(self.capture.get(cv2.CAP_PROP_FRAME_WIDTH))
        self.height = int(self.capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
        self.is_reading = False
        self.frame = None
        divx = cv2.VideoWriter_fourcc(*'DIVX')
        self.writer = cv2.VideoWriter('aircraft_cap.mp4', divx, 20, (self.width, self.height))
        self.recording = False
        self.real_object_length = 40  # meters (aircraft length)
        self.selected_img = None
        self.tracking = False
        self.multiTracker = None
        self.tracking_pen_color = (255, 255, 255)
        self.tracking_pen_thickness = 1
        self.auto_track = True
        self.localizer_list = []
        self.detector = objectDetector(self.width, self.height)
        self.search_duration = 0
        self.search_limit = 50  # frames

    def tracking_mode(self, frame):
        try:
            success, boxes = self.multiTracker.update(frame)
            if len(boxes) == 0:
                # All tracks are lost, fall back to detection mode
                self.tracking = False
                self.search_duration = 0
                return
            if success and len(boxes) > 0:
                for i, newbox in enumerate(boxes):
                    (x, y, w, h) = [int(v) for v in newbox]
                    cv2.rectangle(frame, (x, y), (x + w, y + h),
                                  self.tracking_pen_color, self.tracking_pen_thickness)
                    self.put_text(frame, 'Tracking', (x - 5, y - 35), 0.3)
                    # Estimate the 3D position and velocity of this track
                    localizer = self.localizer_list[i]
                    pos, vel = localizer.predict(newbox, self.real_object_length)
                    text = 'Pos: {:.2f}, {:.2f}, {:.2f}'.format(pos[0], pos[1], pos[2])
                    self.put_text(frame, text, (x - 5, y - 20), 0.4)
                    speed = np.linalg.norm(vel) * 3.6  # m/s -> km/h
                    text = 'Speed: {:.2f} kph'.format(speed)
                    self.put_text(frame, text, (x - 5, y - 5), 0.4)
        except Exception as e:
            # On any failure, reset the trackers and return to detection mode
            self.localizer_list = []
            self.multiTracker.clear()
            self.tracking = False
            self.search_duration = 0
            print('tracking error : {}'.format(e))

    def put_text(self, frame, text, pos, scale):
        cv2.putText(frame, text, org=pos, fontFace=cv2.FONT_HERSHEY_COMPLEX,
                    fontScale=scale, color=self.tracking_pen_color)

    def init_tracking_mode(self, frame):
        try:
            print('tracks initiated......')
            # Create a fresh multi-tracker and one localizer per detection
            # self.multiTracker = cv2.MultiTracker_create()
            self.multiTracker = customMultiTracker()
            self.localizer_list = []
            for bbox in self.detector.detection_list:
                tracker = customTracker()
                tracker.init(frame, bbox)
                # tracker = cv2.TrackerKCF_create()
                localizer = localization(self.frame_rate, self.width, self.height)
                self.localizer_list.append(localizer)
                self.multiTracker.add(tracker)
                # self.multiTracker.add(tracker, frame, bbox)
        except Exception as e:
            print('tracking init error : {}'.format(e))

    def main_loop(self):
        self.is_reading, self.frame = self.capture.read()
        self.tracking = False
        while self.is_reading:
            time.sleep(1.0 / self.frame_rate)
            frame = self.frame
            try:
                self.state_machine(frame)
                cv2.imshow(self.frame_name, frame)
            except Exception as e:
                print('exception: ', e)
            key = cv2.waitKey(20) & 0xFF
            if key == ord('t') and not self.tracking:
                # 't' starts tracking manually from the current detections
                self.tracking = True
                self.init_tracking_mode(frame)
            if key == ord('s'):
                # 's' stops tracking and returns to detection mode
                self.tracking = False
            if key == ord('r'):
                # 'r' toggles video recording
                self.recording = not self.recording
            if key == ord('q'):
                break
            self.is_reading, self.frame = self.capture.read()
        self.capture.release()
        self.writer.release()
        cv2.destroyAllWindows()

    def state_machine(self, frame):
        if not self.tracking:
            self.detector.find_detections(frame)
            if self.auto_track:
                # After search_limit frames with detections, start tracking automatically
                self.search_duration += 1
                if self.search_duration > self.search_limit and len(self.detector.detection_list) > 0:
                    self.tracking = True
                    self.init_tracking_mode(frame)
        else:
            self.tracking_mode(frame)
        if self.recording:
            self.writer.write(frame)


def main():
    ase = aircraftSpeedEstimation()
    ase.main_loop()


if __name__ == '__main__':
    main()
In the following figure, we see how an aircraft is tracked and its position/speed information estimated. Please note that the detector’s ‘alpha_background’ parameter is set to 0.8 for fast objects such as aircraft.
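Assuming the detector's background model is the common exponential moving average (this form and the function name are my illustration, not necessarily the exact code in objectDetector.py), the ‘alpha_background’ parameter controls how quickly the background adapts:

```python
import numpy as np

def update_background(background, frame, alpha_background=0.8):
    """Assumed exponential-moving-average background update.

    bg_new = alpha * bg_old + (1 - alpha) * frame
    A smaller alpha adapts faster: at 0.8 the background quickly absorbs
    slow scene changes (clouds, lighting), while a fast mover such as an
    aircraft never lingers long enough to blend in, so it stays detectable.
    """
    return alpha_background * background + (1.0 - alpha_background) * frame.astype(np.float32)
```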

The video of the experiment and the raw video can be seen in the following links. That’s all for this project; enjoy your day.
Maybe we could do UFO??? 😀 😀 😀