Advanced Lane Detection


Self-driving vehicles will make mobility safer and faster. For example, they can increase accessibility for people who cannot drive for various reasons. However, a large number of challenges need to be tackled before this can become a reality. Several applications have already been implemented in Advanced Driver Assistance Systems (ADAS) to improve road safety in modern vehicles, such as Forward Collision Warning, Parking Assistance, and Lane Centering, among many others.

Udacity - Self-Driving Car NanoDegree

In this article, we present a classical computer vision approach to lane detection based on a bird's-eye view of the road. The developed strategy is as follows:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images

  • Apply a distortion correction to raw images

  • Use color transforms and gradients to create a thresholded binary image

  • Apply a perspective transform to rectify the binary image ("bird's-eye view")

  • Detect lane pixels and fit to find the lane boundary

  • Determine the curvature of the lane and vehicle position with respect to the center

  • Warp the detected lane boundaries back onto the original image

  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position
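As a rough sketch of the first two steps, camera calibration from chessboard images and distortion correction can be done with OpenCV along these lines; the 9x6 inner-corner count and the camera_cal/ and test_images/ paths are illustrative assumptions, not necessarily what the repository uses.

```python
import glob

import cv2
import numpy as np

# Assumed setup: 9x6 inner chessboard corners, calibration images in camera_cal/ (hypothetical path)
nx, ny = 9, 6
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

objpoints, imgpoints = [], []  # 3D points in the world, 2D points in the image
for fname in glob.glob("camera_cal/calibration*.jpg"):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# Camera matrix and distortion coefficients reused by the rest of the pipeline
_, mtx, dist, _, _ = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)

# Distortion correction applied to a raw road image
raw = cv2.imread("test_images/test1.jpg")
undistorted = cv2.undistort(raw, mtx, dist, None, mtx)
```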

You can find a thorough description here; the rubric points describe what was expected to be accomplished.
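For the thresholding and bird's-eye-view steps, a minimal sketch could combine an HLS S-channel threshold with a Sobel-x gradient threshold and then warp the result with a perspective transform. The threshold ranges and the source/destination points below are assumed values for a 1280x720 frame, not the exact ones used in the project.

```python
import cv2
import numpy as np

def threshold_binary(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
    """Combine an HLS S-channel threshold with a Sobel-x gradient threshold."""
    s_channel = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 2]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))  # x gradient picks up near-vertical lines
    scaled = np.uint8(255 * sobelx / np.max(sobelx))

    binary = np.zeros_like(s_channel)
    binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
    binary[(scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])] = 1
    return binary

def birds_eye(binary, src, dst):
    """Warp the thresholded image to a top-down view; Minv is kept to warp results back later."""
    h, w = binary.shape[:2]
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)
    return cv2.warpPerspective(binary, M, (w, h), flags=cv2.INTER_LINEAR), Minv

# Assumed source/destination points for a 1280x720 frame
src = np.float32([[580, 460], [700, 460], [1096, 720], [200, 720]])
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])
```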

lane detection animation

Complete Video

The pipeline description shows how the algorithms are combined, and the video pipeline processes a video stream frame by frame. The details of the algorithms are in the utility folder, and the unit tests and integration tests are in the tests folder.
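To give an idea of what the lane-pixel search in such a pipeline roughly looks like, here is a simplified histogram-plus-sliding-window scan over the warped binary image, followed by a second-order polynomial fit per lane line. The window count, margin, and minimum-pixel values are assumptions, and this is a sketch of the general technique rather than a copy of the code in the utility folder.

```python
import numpy as np

def fit_lane_lines(warped_binary, nwindows=9, margin=100, minpix=50):
    """Histogram + sliding-window pixel search, then fit x = A*y^2 + B*y + C per line."""
    h, w = warped_binary.shape
    # A histogram of the bottom half locates the base of each lane line
    histogram = np.sum(warped_binary[h // 2:, :], axis=0)
    leftx_cur = np.argmax(histogram[:w // 2])
    rightx_cur = np.argmax(histogram[w // 2:]) + w // 2

    nonzeroy, nonzerox = warped_binary.nonzero()
    window_height = h // nwindows
    left_inds, right_inds = [], []

    for win in range(nwindows):
        y_low = h - (win + 1) * window_height
        y_high = h - win * window_height
        good_left = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                     (nonzerox >= leftx_cur - margin) & (nonzerox < leftx_cur + margin)).nonzero()[0]
        good_right = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                      (nonzerox >= rightx_cur - margin) & (nonzerox < rightx_cur + margin)).nonzero()[0]
        left_inds.append(good_left)
        right_inds.append(good_right)

        # Re-center the next window on the mean x position of the pixels just found
        if len(good_left) > minpix:
            leftx_cur = int(nonzerox[good_left].mean())
        if len(good_right) > minpix:
            rightx_cur = int(nonzerox[good_right].mean())

    left_inds = np.concatenate(left_inds)
    right_inds = np.concatenate(right_inds)
    left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    return left_fit, right_fit
```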

The code can be found in the GitHub repository.
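For reference, once the polynomials are fitted, the curvature and vehicle-offset step can be sketched with the standard radius-of-curvature formula. The pixel-to-meter scales below (about 30/720 m per pixel in y and 3.7/700 m per pixel in x) are commonly used assumptions, not measured values from this project.

```python
import numpy as np

YM_PER_PIX = 30 / 720   # assumed meters per pixel along y
XM_PER_PIX = 3.7 / 700  # assumed meters per pixel along x

def curvature_and_offset(left_fit, right_fit, image_width, image_height):
    """Radius of curvature R = (1 + (2Ay + B)^2)^(3/2) / |2A|, evaluated at the vehicle."""
    y_eval = image_height * YM_PER_PIX
    radii = []
    for fit in (left_fit, right_fit):
        # Rescale the pixel-space fit x = A*y^2 + B*y + C to meters
        A = fit[0] * XM_PER_PIX / YM_PER_PIX ** 2
        B = fit[1] * XM_PER_PIX / YM_PER_PIX
        radii.append((1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A))

    # Vehicle offset: lane center vs. image center at the bottom of the frame
    left_x = np.polyval(left_fit, image_height)
    right_x = np.polyval(right_fit, image_height)
    offset = ((left_x + right_x) / 2 - image_width / 2) * XM_PER_PIX
    return radii, offset
```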

Conclusions

The approach used in this project requires several steps to process each image, warping it to a bird's-eye view and back, which makes it computationally expensive.

Even though this is a big improvement over the basic line-detection approach, many variables still make the algorithm unstable, for example rain, poor lighting, or faded or missing lane markings on the street.

One option is to maintain a set of algorithms and test the external context to decide between them. Another good option is to include sensors for light (day/night), rain, and similar conditions to help select the proper algorithm for each contextual situation.

On the performance side, it would be good to profile the code and see whether using a GPU or rewriting hot spots in C++ can improve performance.
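A first step toward that could be a quick cProfile run over the video pipeline; process_video here is a hypothetical entry point standing in for the project's actual video function.

```python
import cProfile
import pstats

# process_video() is a hypothetical entry point for the full video pipeline
cProfile.run("process_video('project_video.mp4', 'output.mp4')", "pipeline.prof")
pstats.Stats("pipeline.prof").sort_stats("cumulative").print_stats(15)  # 15 most expensive calls
```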

There is still room to improve detection in problematic zones, since the sliding windows are calculated only in the first frame of the video; one option is to detect problematic polynomial fits and re-run the sliding-window search.
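One way to do that, sketched below under assumed thresholds, is a cheap sanity check on the two fitted polynomials; whenever it fails, the pipeline would discard the previous frame's lane positions and fall back to a fresh sliding-window scan.

```python
import numpy as np

def fits_look_reasonable(left_fit, right_fit, image_height,
                         min_width_px=300, max_width_px=900):
    """Hypothetical sanity check: the two lane lines should stay roughly parallel
    and a plausible distance apart over the height of the warped image."""
    ys = np.linspace(0, image_height - 1, num=10)
    widths = np.polyval(right_fit, ys) - np.polyval(left_fit, ys)
    return widths.min() > min_width_px and widths.max() < max_width_px

# If the check fails, re-run the full sliding-window search (fit_lane_lines above)
# instead of reusing the previous frame's lane positions.
```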

References

https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013

https://link.springer.com/article/10.1007/s11042-016-4184-6