Part 1-Lane detection using OpenCV

Kumar Brar
3 min read · Jan 8, 2020


Source: https://github.com/StevieG47/Lane-Detection

I am learning about OpenCV and how it is used by cars, more specifically self-driving cars, to detect lanes. It is very important for a car to be able to detect lanes so that it can keep driving within its lane and change lanes when needed. None of the decisions a self-driving car makes are easy ones; a lot needs to be taken into account when making them, so building a self-driving car is not an easy project. We will work on this project step by step, starting from a single image and applying lane detection to it, while discussing the whys, the hows, and the techniques and algorithms behind each step.

In this article, we will discuss the concepts of edges, edge detection, and gradient, and why we convert an image to grayscale. The image used in this article is shown below:

Image source: https://www.geeksforgeeks.org/

Edge Detection

This is an important step in processing the image. Any image is a matrix of numbers ranging from 0 (black, i.e. no intensity) to 255 (white, i.e. maximum intensity), where each number denotes the intensity of one pixel. Edge detection helps us identify sharp changes in the intensity of adjacent pixels, as shown below:

Image (Left) and corresponding array of pixels (Right)
Edge Detection in Gray-Scale image
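To see what "an image is a matrix of numbers" means in practice, here is a minimal sketch using a tiny synthetic array in place of the article's image (the array values are made up for illustration):

```python
import numpy as np

# A tiny 5x5 "image": left columns dark (intensity 0), right columns
# bright (intensity 255), mimicking a bright lane line on dark asphalt.
img = np.zeros((5, 5), dtype=np.uint8)
img[:, 3:] = 255

print(img[0])  # first row: three 0s followed by two 255s

# The jump from 0 to 255 between columns 2 and 3 is exactly the kind of
# sharp intensity change between adjacent pixels that edge detection finds.
```

The same idea scales up: a real road image is just a much larger matrix, and the lane markings show up as these abrupt jumps in intensity.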

Gradient

The gradient is the measure of change in brightness across adjacent pixels. A strong gradient indicates a steep change, whereas a small gradient indicates a shallow change. An example is shown below:

Strong and Small Gradient
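The strong-versus-small distinction can be sketched with two made-up rows of pixel values, using `np.diff` as a one-dimensional stand-in for the image gradient:

```python
import numpy as np

# Two hypothetical rows of pixel intensities (values chosen for illustration).
row_strong = np.array([0, 0, 255, 255], dtype=np.int16)     # steep jump
row_small = np.array([100, 110, 120, 130], dtype=np.int16)  # shallow ramp

# np.diff gives the change in brightness between adjacent pixels:
# a large difference means a strong gradient, a small one a weak gradient.
print(np.diff(row_strong))  # one big jump of 255
print(np.diff(row_small))   # gentle steps of 10
```

The `255` spike in the first row is the kind of value an edge detector keeps, while the steady `10`s in the second row are close enough to be ignored.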

Edge

An edge is simply a rapid change in the brightness of adjacent pixels in an image. A strong gradient helps us detect an edge, whereas a small gradient does not help much, since the neighbouring intensity values are close together.

Why convert an image to grayscale?

We know that images are made up of pixels. In a colored image, each pixel is a combination of three intensities, one per channel: RGB (Red, Green, Blue). So here we have three channels, whereas in a grayscale image each pixel has only one channel, i.e. a single intensity ranging from 0 to 255. Processing a single-channel grayscale image is therefore faster than processing a colored image.

Grayscale image

I hope this article helped you understand the concepts of edges, edge detection, gradient, and grayscale. If you found it helpful, don't forget to share and clap for it. Thanks for reading.
