This project is about designing and constructing an autonomous race car for the Formula Student Driverless competition. It is developed by students of the High Octane Driverless Team in Erlangen.
We, the driverless team of High Octane Motorsports e.V. Erlangen (HighOctane), are building an autonomous race car for the Formula Student competition, in which student groups from around the world design and build their own race cars. Since 2017 Formula Student has also featured a driverless class with its own set of dynamic challenges.
In each of the dynamic challenges, the lane is marked by traffic cones in two colors: black and yellow on one side, blue and white on the other. More information about the Formula Student competition and its rules can be found here: FormulaStudent
In the first phase of the project we used a remote-controlled car to get started and to test first software approaches.
In the second phase we are mounting our sensors on a go-kart and constructing the hardware necessary for autonomous driving. This gives us the opportunity to test software and hardware on a system closer to the actual race car.
Our third step is to use the knowledge gained from the go-kart testing to retrofit the race car for autonomous driving. We will take part in the competitions in summer 2019 with this car.
Our base electronic system is reused from our regular race car, which uses STM32 processors for all control units. They are connected by a 1 Mbit/s CAN bus; to interface with this bus we use a Raspberry Pi 3 with an MCP2515 CAN controller.
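On Linux the Pi typically talks to an MCP2515 through SocketCAN, which exchanges raw 16-byte `can_frame` structures. As a minimal sketch of that interface (the example ID and payload are made up; the actual IDs depend on our control units), the frame layout can be packed and unpacked like this:

```python
import struct

# Linux SocketCAN can_frame layout: u32 can_id, u8 dlc, 3 pad bytes, 8 data bytes.
CAN_FRAME_FMT = "<IB3x8s"
CAN_FRAME_SIZE = struct.calcsize(CAN_FRAME_FMT)  # 16 bytes

def pack_can_frame(can_id, data):
    """Pack an ID and up to 8 payload bytes into a raw can_frame."""
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(raw):
    """Return (can_id, payload) from a raw 16-byte can_frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, raw)
    return can_id, data[:dlc]
```

In practice these frames would be sent and received through a `socket(AF_CAN, SOCK_RAW, CAN_RAW)` socket bound to the MCP2515's network interface.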
We will replace the steering, brake system, and throttle with our own hardware, using a stepper motor for accurate control. As our car has a combustion engine, we also have to construct an electronic gear shifter. For these components we will use a High Torque Servo 1005SGT.
As our middle layer we will use the ROS environment, as it enables very fast prototyping.
The first and most important step is to detect the cones. For this we will use the SICK LIDAR. The point cloud is clustered to extract the positions of the cones, and in the next step the camera is used to determine whether a cone belongs to the left or the right side of the lane.
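The clustering step can be sketched with a naive single-linkage grouping of ground-plane points: points closer than a threshold are chained into one cluster, and each cluster centroid is a cone candidate. This is an illustrative stand-in, not our actual pipeline; the threshold `eps` and the 2D projection are assumptions.

```python
from math import hypot

def cluster_points(points, eps=0.2):
    """Group 2D points whose chain distance is below eps (naive single-linkage).

    points: list of (x, y) tuples, e.g. LIDAR returns projected to the ground plane.
    Returns a list of clusters; each cluster's centroid is a cone candidate.
    """
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if hypot(points[i][0] - points[j][0],
                             points[i][1] - points[j][1]) < eps]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append([points[i] for i in cluster])
    return clusters

def centroid(cluster):
    """Mean position of a cluster -- the estimated cone position."""
    xs, ys = zip(*cluster)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

The real implementation would run on the full point cloud (e.g. via PCL's Euclidean clustering in ROS), but the idea is the same.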
In the next step we will use a RANSAC algorithm with a clothoid model to compute the lane. Based on this, a trajectory is computed using an MPC (model predictive control) approach.
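The RANSAC idea can be illustrated with a straight line standing in for the clothoid model (swapping in the clothoid only changes the model fit and residual computation). Sample a minimal set of cone positions, fit the model, and keep the hypothesis with the most inliers; all parameters here are illustrative.

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = m*x + b to noisy cone positions with RANSAC.

    Repeatedly fits a model to a random minimal sample and scores it by
    counting points whose residual is below tol; returns the best model
    and its inliers.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample, cannot fit y = m*x + b
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers
```

Outliers (e.g. a misdetected cone) simply never collect enough inliers, which is why RANSAC is robust for lane estimation.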
The general method we want to use is visual odometry: we compare successive frames of the camera and LIDAR to estimate the movement of the car and, from that, the positions of the cones. For this we have to synchronize the viewing angles of the LIDAR and the camera to get an exact result. Code snippets and further explanation will be added after some testing.
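Until those snippets exist, the core frame-to-frame computation can be sketched as follows: given cone positions matched between two consecutive frames, the SVD-based Kabsch method recovers the rigid transform between them, whose inverse is the car's ego-motion. This assumes correspondences are already known, which the real pipeline has to establish first.

```python
import numpy as np

def estimate_rigid_motion(prev_pts, curr_pts):
    """Estimate rotation R and translation t with curr ~= R @ prev + t.

    prev_pts, curr_pts: (N, 2) arrays of matched cone positions in two
    consecutive frames (Kabsch / orthogonal Procrustes via SVD).
    """
    p_mean = prev_pts.mean(axis=0)
    c_mean = curr_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (prev_pts - p_mean).T @ (curr_pts - c_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = c_mean - R @ p_mean
    return R, t
</antml_never>```

Chaining the inverses of these transforms over all frames yields the odometry track used to place the cones in a common map frame.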
After the first exploration lap, once the map has been computed, we only need to detect landmarks to navigate through the course. For this we will use the camera, as it has a higher frame rate than the LIDAR.
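Navigating against a known map boils down to data association: matching each newly detected cone to a mapped landmark. A minimal sketch is nearest-neighbor matching with a gating distance (the function name, `max_dist` gate, and the assumption that detections are already transformed into the map frame are all illustrative):

```python
from math import hypot

def associate_landmarks(detections, map_landmarks, max_dist=1.0):
    """Match each detected cone to the nearest mapped cone.

    detections and map_landmarks are lists of (x, y) in the map frame
    (detections assumed already transformed by the current pose estimate).
    Returns (detection_index, landmark_index) pairs; detections farther
    than max_dist from every landmark stay unmatched.
    """
    matches = []
    for i, (dx, dy) in enumerate(detections):
        j_best, d_best = None, max_dist
        for j, (lx, ly) in enumerate(map_landmarks):
            d = hypot(dx - lx, dy - ly)
            if d < d_best:
                j_best, d_best = j, d
        if j_best is not None:
            matches.append((i, j_best))
    return matches
```

The matched landmarks then feed the pose estimator, while unmatched detections are discarded as misdetections during the race laps.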