FPGA-accelerated object detection for Formula Student Driverless

This project describes the object detection system of the autonomous race car from the "StarkStrom Augsburg e.V." Formula Student Driverless team.

Documentation

Autonomous driving was introduced to the Formula Student competition as a new discipline in 2017. Formula Student is a worldwide engineering competition between student teams. In the driverless discipline, the students build autonomous race cars equipped with sensors to perceive the environment around the vehicle. One task is the so-called track drive: the race car has to complete ten laps on a track marked with blue cones on the left and yellow cones on the right.

Our Team

We are the Formula Student Team StarkStrom Augsburg e.V. from the University of Applied Sciences Augsburg. Last year, we developed a fully self-driving race car that won the Formula Student UK FS-AI ADS competition at Silverstone and holds the Formula Student world record for the fastest autonomous acceleration, achieved at the Formula Student Germany event at the Hockenheim race track.

This article focuses on the camera system of the vehicle, which, together with a LiDAR sensor, forms the perception system. The goal of the camera system is to detect the colored traffic cones marking the lane at a high frame rate and low power consumption. Therefore, we chose an FPGA board to accelerate the cone detection.

Hardware components:

  • FPGA: AVNET MicroZed 7020
  • Camera: Basler Dart daA1600-60bc
  • Lens: Evetar Lens F1.8 f5.5mm 1/1.8
  • Basler BCON Carrierboard


System Overview

Our object detection system is divided into two main parts: the computationally heavy pixel-based operations implemented in hardware in the FPGA logic, and the software post-processing running on an ARM core. The detection is built around the Generalized Hough Transform, which can detect arbitrarily shaped objects with the help of a reference shape. The final hardware/software co-design is shown in the picture below:

(Figure: hardware/software co-design)

The image processing chain begins with the camera interface. Specifically, we chose the Basler BCON interface. It is based on LVDS and streams images from the camera directly into our FPGA logic without any intermediate memory access.

Next, we apply a custom color filter that passes only specific colors, suppressing background structures and highlighting the cones.
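As an illustration, the following sketch shows what such a per-pixel filter could look like in C++. The color space and thresholds used on the car are not published; the ratio tests below are placeholders.

```cpp
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Placeholder tests for the two cone colors. Simple comparisons and additions
// are used instead of a full RGB-to-HSV conversion because they map well to
// FPGA logic; the actual thresholds would come from tuning on track data.
bool is_cone_color(RGB p) {
    bool blue   = p.b > 100 && p.b > p.r + 40 && p.b > p.g + 20;
    bool yellow = p.r > 120 && p.g > 100 && p.r + p.g > 2 * (p.b + 60);
    return blue || yellow;
}

// Pass matching pixels through unchanged and black out everything else.
RGB color_filter(RGB p) {
    return is_cone_color(p) ? p : RGB{0, 0, 0};
}
```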

Then we detect edge pixels in the image with the Canny edge algorithm. The edge points and their gradient directions are the inputs to the Generalized Hough Transform.
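The gradient computation underlying this stage can be sketched as follows: Sobel filters yield the gradient, from which magnitude and direction are derived per pixel. The rest of the Canny pipeline (non-maximum suppression, hysteresis thresholding) is omitted here, and `atan2` stands in for what would typically be a CORDIC unit or lookup table in the FPGA fabric.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Compute Sobel gradient magnitude and direction for a grayscale image.
void sobel_gradient(const std::vector<uint8_t>& img, int w, int h,
                    std::vector<float>& mag, std::vector<float>& dir) {
    mag.assign(static_cast<size_t>(w) * h, 0.0f);
    dir.assign(static_cast<size_t>(w) * h, 0.0f);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            auto px = [&](int dx, int dy) {
                return static_cast<float>(img[(y + dy) * w + (x + dx)]);
            };
            // 3x3 Sobel kernels for horizontal and vertical derivatives.
            float gx = px(1, -1) + 2 * px(1, 0) + px(1, 1)
                     - px(-1, -1) - 2 * px(-1, 0) - px(-1, 1);
            float gy = px(-1, 1) + 2 * px(0, 1) + px(1, 1)
                     - px(-1, -1) - 2 * px(0, -1) - px(1, -1);
            mag[y * w + x] = std::hypot(gx, gy);
            dir[y * w + x] = std::atan2(gy, gx);  // gradient direction in radians
        }
    }
}
```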

The Generalized Hough Transform then searches for cones in five different search regions of the image. The detection results are written to memory.
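For illustration, here is a minimal software sketch of the Generalized Hough Transform voting step within one such region, assuming a precomputed R-table that maps quantized gradient directions to displacement vectors pointing from an edge pixel to the reference point of the cone template. The names and data layout are illustrative, not taken from the actual design.

```cpp
#include <cmath>
#include <cstdint>
#include <map>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

struct Vec2 { float x, y; };
// R-table: quantized gradient direction -> displacements to the reference point.
using RTable = std::map<int, std::vector<Vec2>>;

// Map an angle in [-pi, pi] to a bin index in 0..bins-1.
int quantize(float angle, int bins) {
    int b = static_cast<int>((angle + kPi) / (2.0 * kPi) * bins);
    return b < 0 ? 0 : (b >= bins ? bins - 1 : b);
}

// Accumulate votes for candidate cone reference points in one search region.
std::vector<int> ght_vote(const std::vector<uint8_t>& edges,
                          const std::vector<float>& dir,
                          int w, int h, const RTable& rtable, int bins) {
    std::vector<int> acc(static_cast<size_t>(w) * h, 0);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (!edges[y * w + x]) continue;  // only edge pixels vote
            auto it = rtable.find(quantize(dir[y * w + x], bins));
            if (it == rtable.end()) continue;
            for (const Vec2& d : it->second) {
                int rx = static_cast<int>(std::lround(x + d.x));
                int ry = static_cast<int>(std::lround(y + d.y));
                if (rx >= 0 && rx < w && ry >= 0 && ry < h)
                    ++acc[ry * w + rx];  // vote for a candidate reference point
            }
        }
    }
    return acc;  // local maxima above a threshold are cone candidates
}
```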

The Linux system running on the ARM core has access to the same memory. The software reads the detections proposed by the hardware and subjects them to a plausibility check.
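A minimal sketch of this readout on the Linux side, assuming the results are placed in a reserved DDR region that is mapped via `/dev/mem`. The physical address and the record layout below are hypothetical; the actual driver interface of the design is not published.

```cpp
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Hypothetical packed result record written by the FPGA.
struct Detection { uint16_t x, y; uint8_t color; uint8_t score; };

constexpr off_t  kResultPhysAddr = 0x1F000000;  // placeholder reserved region
constexpr size_t kResultSize     = 4096;

int main() {
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    void* mem = mmap(nullptr, kResultSize, PROT_READ, MAP_SHARED, fd,
                     kResultPhysAddr);
    if (mem == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // Assumed layout: a 32-bit detection count followed by packed records.
    const auto* base  = static_cast<const volatile uint8_t*>(mem);
    uint32_t    count = *reinterpret_cast<const volatile uint32_t*>(base);
    const auto* det   = reinterpret_cast<const volatile Detection*>(base + 4);
    for (uint32_t i = 0; i < count; ++i)
        std::printf("cone %u: (%u, %u) color=%u score=%u\n", i,
                    (unsigned)det[i].x, (unsigned)det[i].y,
                    (unsigned)det[i].color, (unsigned)det[i].score);

    munmap(mem, kResultSize);
    close(fd);
    return 0;
}
```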

To estimate the distance of the detected objects, their positions in the image are transformed to a bird's-eye view using a previously calibrated transformation matrix. The result is the set of detected cones in the vehicle's coordinate system.
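This transformation can be sketched as a planar homography: a 3x3 matrix `H` obtained from an offline calibration maps the pixel at a cone's base to metric coordinates on the ground plane. The matrix values below are placeholders, not the car's actual calibration.

```cpp
#include <cstdio>

struct Point2 { double x, y; };

// Apply the homography to an image point and perform the perspective division.
Point2 image_to_vehicle(const double H[3][3], Point2 px) {
    double X = H[0][0] * px.x + H[0][1] * px.y + H[0][2];
    double Y = H[1][0] * px.x + H[1][1] * px.y + H[1][2];
    double W = H[2][0] * px.x + H[2][1] * px.y + H[2][2];
    return {X / W, Y / W};
}

int main() {
    const double H[3][3] = {{0.010,  0.000, -8.0},   // placeholder calibration
                            {0.000, -0.020, 14.0},
                            {0.000,  0.001,  1.0}};
    Point2 cone = image_to_vehicle(H, {812.0, 540.0});  // detected base pixel
    std::printf("cone at (%.2f m, %.2f m) in the vehicle frame\n", cone.x, cone.y);
    return 0;
}
```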

Both the cone color and the position are used to generate an object list, which is published in the ROS network of the race car.
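A minimal roscpp sketch of this publishing step is shown below. The team's actual message type is not published, so `geometry_msgs/PoseArray` serves as a stand-in that carries only the positions; a real cone list would also encode the color, for example in a custom message.

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseArray.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "cone_detection");
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<geometry_msgs::PoseArray>("cones", 10);

    ros::Rate rate(50);  // roughly matches the 40-50 fps detection rate
    while (ros::ok()) {
        geometry_msgs::PoseArray msg;
        msg.header.stamp = ros::Time::now();
        msg.header.frame_id = "base_link";  // vehicle coordinate frame

        geometry_msgs::Pose cone;           // one placeholder detection
        cone.position.x = 5.2;              // metres ahead of the vehicle
        cone.position.y = -1.4;             // metres to the right
        cone.orientation.w = 1.0;
        msg.poses.push_back(cone);

        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}
```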

Result

The result of the presented system is a camera-based cone detection reaching a frame rate of 40-50 fps at a power consumption of 3.9 W (one camera plus FPGA).

