Distributed processing on an embedded vision system

An embedded vision system built by GreenTeam Driverless for a Formula Student race car. The machine vision subsystem is integrated into the autonomous control system to offload non-volatile preprocessing work from the main computer to the cameras. For increased power efficiency and low latency of the vision system, hardware acceleration is used for the image preprocessing inside the camera’s processing platform.

Documentation

GreenTeam Driverless

GreenTeam Stuttgart is a Formula Student racing team founded by students from all disciplines at the University of Stuttgart. After competing as a Formula Student Electric team for 7 years, GreenTeam decided to enter the new Driverless competition at Formula Student Germany in Hockenheim in 2017. The competition requires the cars to compete fully autonomously in a 75 m acceleration run, a figure-of-eight skidpad and a track drive on the race course. As the tracks are marked by nothing but coloured traffic cones, the cars have to rely on robust image processing to accomplish these tasks. Besides the dynamic events, the student-built race cars have to face a jury of experts from professional racing teams and the automotive industry.

Acceleration (top) and Skidpad (bottom) track

Introduction

Autonomous systems use a variety of sensors to estimate their current position and to map the environment in which they operate. While range sensors return already preprocessed 3D point clouds, their response is monochromatic and the real visual appearance of an object cannot be determined. To detect objects the way a human eye does, cameras sensitive to several bands of visible light are necessary. These data streams have to be combined and preprocessed before the actual algorithms can run on them. Standard approaches using USB, GigE or Camera Link interfaces run the whole image processing chain, from data conversion and preprocessing, through dynamic adaptation of the image sensor, to object detection, on the main processing system. But as the resources of the autonomous system control unit (ASCU) are also used by the detection and mapping algorithms, it is very hard to meet the timing constraints required by an autonomous system in a racing situation. Therefore, we decided to integrate camera-specific image processing directly into the camera module, which gives us a modular integration of the machine vision subsystem into the autonomous control system and offloads non-volatile preprocessing work from the main computer to the cameras. For increased power efficiency and low latency of the vision system, hardware acceleration is used for the image preprocessing inside the camera’s processing platform.

Camera Module

Rendering of camera assembly (left); Camera module on car (right)

ROS-compatible Ethernet camera

With the rise of autonomous driving, the open source Robot Operating System (ROS), already a de-facto standard in robotics, has moved into the autonomous driving domain. Because it delivers a lot of well-tested tools and libraries dedicated to the problems of autonomous machines, it is the perfect base for developing autonomous prototype race cars. There are various camera drivers which integrate with the ROS data structures to make cameras ROS-ready. These drivers process the image frames on the main computer system and convert them into the ROS message format. As ROS is designed to be spread over multiple machines connected by an IP network, we decided to convert the images captured by the camera into ROS image messages directly on the camera module.

    while (ros::ok() && camera.IsGrabbing())
    {
        // Wait for an image and then retrieve it. A timeout of 5000 ms is used.
        camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);

        // Image grabbed successfully?
        if (ptrGrabResult->GrabSucceeded())
        {
            set_leds((uint8_t)ptrGrabResult->GetID());

            // Log the frame ID and dimensions of the grabbed image.
            ROS_INFO("Frame ID: %d; Size X: %d Y: %d",
                     (uint8_t)ptrGrabResult->GetID(),
                     (uint16_t)ptrGrabResult->GetWidth(),
                     (uint16_t)ptrGrabResult->GetHeight());

            if ((uint16_t)ptrGrabResult->GetWidth() > 0 && (uint16_t)ptrGrabResult->GetHeight() > 0)
            {
                // Wrap the grab buffer in a cv::Mat and convert it to a ROS image message.
                cv_img_rgb = cv::Mat((uint16_t)ptrGrabResult->GetHeight(),
                                     (uint16_t)ptrGrabResult->GetWidth(),
                                     CV_8UC3,
                                     (uint8_t *)ptrGrabResult->GetBuffer());
                image_msg = cv_bridge::CvImage(std_msgs::Header(), "rgb8", cv_img_rgb).toImageMsg();

                // Publish the grabbed image with ROS.
                ROS_INFO("Publish Image");
                pub.publish(image_msg);
            }
        }
        else
        {
            cerr << "Error: " << ptrGrabResult->GetErrorCode() << " "
                 << ptrGrabResult->GetErrorDescription() << endl;
        }

        // ROS loop sleep
        ros::spinOnce();
        loop_rate.sleep();
    }

Frame grabbing ROS node example

The Programmable Logic (PL) of a Zynq 7000 reads the image data from the camera via the LVDS interface, processes the images and stores the resulting AXI streams in DDR memory using a video direct memory access (VDMA) core. A ROS node implemented on the ARM-based Processing System (PS) takes the shared images from DDR memory and streams the data to the Ethernet network in the car. The ROS node on the embedded ARM processor also allows the camera module to be configured via ROS messages, which it translates into I2C commands on the camera module.
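
As a rough illustration of this configuration path, the sketch below shows a hypothetical ROS node that subscribes to an exposure topic and forwards the value to the image sensor over the Linux i2c-dev interface. The topic name, message type, I2C bus, device address and register layout are assumptions for illustration and do not reflect the actual register map of the sensor or the project's real node.

    // Hypothetical PS-side configuration node: ROS message in, I2C write out.
    #include <ros/ros.h>
    #include <std_msgs/UInt32.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    static int i2c_fd = -1;

    void exposureCallback(const std_msgs::UInt32::ConstPtr &msg)
    {
        // Assumed 16-bit exposure register at address 0x0E: register address
        // followed by the value split into two bytes.
        uint8_t buf[3] = { 0x0E,
                           (uint8_t)((msg->data >> 8) & 0xFF),
                           (uint8_t)(msg->data & 0xFF) };
        if (write(i2c_fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
            ROS_WARN("I2C write to image sensor failed");
    }

    int main(int argc, char **argv)
    {
        ros::init(argc, argv, "camera_config_node");
        ros::NodeHandle nh;

        i2c_fd = open("/dev/i2c-0", O_RDWR);                  // assumed I2C bus
        if (i2c_fd < 0 || ioctl(i2c_fd, I2C_SLAVE, 0x48) < 0) // assumed sensor address
        {
            ROS_ERROR("Could not open I2C device");
            return 1;
        }

        ros::Subscriber sub = nh.subscribe("camera/exposure", 1, exposureCallback);
        ros::spin();
        close(i2c_fd);
        return 0;
    }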

Camera <-> ROS node data flow (left); EV76C570 sensitivity with IR cut-off filter (Source: baslerweb.com) (right)

Furthermore, the main processing unit acts as an external trigger. To achieve simultaneous data acquisition, all camera modules are time-synchronized with the main processing unit and receive trigger messages, which the cameras themselves supervise.
With the unified data structure of our ROS nodes it is possible to visualize, monitor and configure the camera with the same monitoring and logging utilities we designed for the other systems in the car. As a development platform we used the Basler dart BCON for LVDS Development Kit with the EV76C570 camera sensor. To prevent active visual sensors operating in the range from 840 nm to 950 nm from interfering with our cameras, a lens with an IR cut-off filter is used to block these wavelengths (700 nm to 1100 nm). The extensively documented demo project provided by Basler enabled a welcome jump start for developing the camera module. For developing an ‘idea-to-race’ camera system in only 8 months this is essential.

Basler dart BCON for LVDS Development Kit (Source: baslerweb.com)

Stereo Vision in a race car

We use two stereo vision systems in our autonomous race car, optimized either for distance accuracy or for horizontal angular resolution. One pair uses a small field of view (FOV) for increased distance accuracy; the second pair uses a wide FOV to detect traffic cones in an area of approximately 180° around the car. The sweet spot between a wide FOV, a large pixel count and the resulting data size has to be found for accurate and fast object detection and position calculation. Positioning the cameras at the main roll hoop of the car provides an elevated, unobstructed mounting point.
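
The reasoning behind the two FOVs can be sketched with the standard stereo relation Z = f·B/d: for a rectified pair, a narrower FOV means a larger focal length in pixels, so the same one-pixel disparity error translates into a smaller depth error. The numbers below (baseline, focal lengths) are illustrative assumptions, not the car's calibration.

    // Back-of-the-envelope depth error comparison for the two stereo pairs.
    #include <cstdio>

    int main()
    {
        const double baseline_m      = 0.20;    // assumed stereo baseline
        const double focal_narrow_px = 1800.0;  // assumed narrow-FOV focal length (pixels)
        const double focal_wide_px   = 700.0;   // assumed wide-FOV focal length (pixels)
        const double depth_m         = 10.0;    // cone distance of interest

        // Depth error for a +/-1 px disparity error: dZ ~= Z^2 / (f * B).
        std::printf("narrow FOV: ~%.2f m error at %.0f m\n",
                    depth_m * depth_m / (focal_narrow_px * baseline_m), depth_m);
        std::printf("wide   FOV: ~%.2f m error at %.0f m\n",
                    depth_m * depth_m / (focal_wide_px * baseline_m), depth_m);
        return 0;
    }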

Small FOV (left); Wide FOV (right)

Visual system topology

The camera modules are angled slightly downwards for an optimal projection of the cones’ surfaces onto the image sensor, enabling the algorithms to work with the maximum possible pixel count per object. Our compute chain for stereo vision consists of the following stages:

  • preprocessing of the image,
  • object detection,
  • distance calculation and
  • mapping of the objects.

Image processing chain

To optimize detection speed and color detection accuracy, the frames are converted both into a different color space and into a greyscale image. In an additional step the depth image is calculated, and the objects are detected in the greyscale image with a linear Support Vector Machine (SVM) object detector based on Histogram of Oriented Gradients (HOG) features. Afterwards the depth and color information is fused with the detected objects. Finally, the processing pipeline returns a local map of the cones, which outlines the track. The cones’ positions are further refined by tracking cones across multiple camera frames.
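
A minimal sketch of this detection step, using OpenCV’s HOGDescriptor with a linear SVM detector on the greyscale frame, is shown below. The window geometry, the helper name and the way the trained SVM coefficients are passed in are assumptions for illustration; the project trained its own cone detector.

    // Detect cone candidates in a greyscale frame with a HOG + linear SVM detector.
    #include <opencv2/objdetect.hpp>
    #include <vector>

    std::vector<cv::Rect> detect_cones(const cv::Mat &grey,
                                       const std::vector<float> &svm_coefficients)
    {
        // HOG descriptor with a window roughly matching a cone's aspect ratio
        // (illustrative values).
        cv::HOGDescriptor hog(cv::Size(32, 64),   // window size
                              cv::Size(16, 16),   // block size
                              cv::Size(8, 8),     // block stride
                              cv::Size(8, 8),     // cell size
                              9);                 // orientation bins
        hog.setSVMDetector(svm_coefficients);     // trained linear SVM weights

        std::vector<cv::Rect> detections;
        hog.detectMultiScale(grey, detections, 0.0, cv::Size(8, 8),
                             cv::Size(), 1.05);
        return detections;
    }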

Hardware accelerated vision system

While developing the first proof of concept in C++ with the OpenCV library, we moved the camera-specific preprocessing part of the pipeline onto the development board’s Zynq 7000. Xilinx provides a convenient path for porting C code to an FPGA with its SDSoC development environment. We took our OpenCV implementation and ported it to the equivalent SDSoC functions based on the xfOpenCV library. High-Level Synthesis (HLS) was used within the SDSoC environment to generate Vivado blocks, which were then integrated into the example project provided with the Basler Development Kit. The SDSoC environment also provides a test bench in which we could test our code without the time-consuming step of building the binary for the FPGA.
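
To give an idea of what such a hardware function can look like, the sketch below shows a hypothetical SDSoC-style RGB-to-greyscale conversion with sequential streaming access and a pipelined loop. The function name, frame size and pragma choices are illustrative only; the actual project used the xfOpenCV equivalents of its OpenCV calls.

    // Hypothetical SDSoC hardware function: RGB-to-greyscale over a fixed-size frame.
    // Buffers are expected to be allocated with sds_alloc for contiguous DMA.
    #include <stdint.h>

    #define FRAME_WIDTH  1600                      // assumed frame size
    #define FRAME_HEIGHT 1200
    #define FRAME_PIXELS (FRAME_WIDTH * FRAME_HEIGHT)

    #pragma SDS data access_pattern(rgb:SEQUENTIAL, gray:SEQUENTIAL)
    #pragma SDS data copy(rgb[0:3*FRAME_PIXELS], gray[0:FRAME_PIXELS])
    void rgb_to_gray_hw(const uint8_t rgb[3 * FRAME_PIXELS], uint8_t gray[FRAME_PIXELS])
    {
        for (int i = 0; i < FRAME_PIXELS; i++) {
    #pragma HLS PIPELINE II=1
            // Integer approximation of the ITU-R BT.601 luma weights.
            uint16_t r = rgb[3 * i];
            uint16_t g = rgb[3 * i + 1];
            uint16_t b = rgb[3 * i + 2];
            gray[i] = (uint8_t)((77 * r + 150 * g + 29 * b) >> 8);
        }
    }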

Object detection Skidpad

What’s next?

Working with the camera provided by Basler, and especially with the development kit and its tools, allowed us to build usable prototypes of our camera system in a short time. Building a customizable prototype camera system is not the end of the work, however, and a lot more remains to be done; the current team is eagerly working on improving the autonomous system and evaluating promising concepts. While the whole project is still work in progress, we see a lot of potential in our decentralized approach to hardware-accelerated image processing in our Formula Student race car. We thank Basler for the support and hope to continue our cooperation for the next generation of autonomous vehicles.

Code

GitHub Repository

https://github.com/pphuth/dart_lvds_node

Info

Project State

Public Project

Licences

Software Licence: BSD-3-Clause
Hardware Licence: Project has no hardware

Project Tags

Admins

p.huth
