
The lane detection system, a cornerstone of advanced driver assistance systems (ADAS), tracks lane markings and keeps the vehicle within its lane. However, the performance of such systems varies with conditions such as shadows, road lighting, and visibility. Consequently, it is difficult for both classical computer vision methods and ordinary Convolutional Neural Networks (CNNs) to identify subtle lane features in raw images of general scenes. We formulate lane detection as a binary segmentation problem and propose a novel hybrid CNN architecture that fuses an encoder-decoder network with a dilated convolution mechanism. The encoder-decoder component is a modified SegNet architecture that segments lane pixels. In addition, we propose a parallel branch consisting of dilated convolutional layers to capture spatial information with a large receptive field. The proposed approach fuses the outputs of these two branches using a weighted sum. We conducted an ablation study to assess the effectiveness of the dilated convolution mechanism; the proposed model performs on par with existing state-of-the-art results on the TuSimple lane detection benchmark. To evaluate the system on unstructured roads, we created 4109 labelled images from the India Driving Dataset (IDD). Finally, we tested the model on this Indian lane dataset to assess its effectiveness under diverse road conditions. We report 95.13% and 97.21% accuracy on the TuSimple and IDD datasets, respectively.
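The two mechanisms in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the naive single-channel convolution, the fusion weight `alpha`, and the 0.5 threshold are all illustrative assumptions. It shows (a) how dilation enlarges a kernel's receptive field without adding parameters, and (b) a weighted-sum fusion of two branch probability maps into a binary lane mask.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=2):
    """Naive 'valid' 2-D convolution with a dilated kernel.

    Dilation inserts (dilation - 1) gaps between kernel taps, so a
    3x3 kernel with dilation=2 covers a 5x5 region of the input --
    a larger receptive field with the same number of weights.
    """
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1  # effective (dilated) kernel height
    eff_w = (kw - 1) * dilation + 1  # effective (dilated) kernel width
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slice picks only the dilated tap positions.
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def fuse_branches(p_encdec, p_dilated, alpha=0.6):
    """Weighted-sum fusion of the two branch probability maps
    (encoder-decoder and dilated branch), then thresholding to a
    binary lane mask. alpha is a hypothetical fusion weight."""
    fused = alpha * p_encdec + (1.0 - alpha) * p_dilated
    return (fused > 0.5).astype(np.uint8)
```

For example, `dilated_conv2d(np.ones((5, 5)), np.ones((3, 3)), dilation=2)` produces a single output value of 9.0, since the 3x3 kernel's nine taps are spread across the whole 5x5 input.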