Behavioural Cloning

Mriganka Nath
3 min read · May 28, 2020

The self-driving car is considered one of the next big things in the technology scene. With many big tech giants investing in it, it has become a competition where each company is building its own set of algorithms to make the car drive smoothly, efficiently and, above all, SAFELY.

Machine learning plays a very big part in driving these cars and is the basis of many autonomous-driving algorithms.

This small project is not a full self-driving-car pipeline, but an end-to-end deep learning approach that makes the car learn to drive just by looking at the road and predicting the steering angle. The algorithm tries to replicate the behaviour of the car when a driver is present and to clone it, hence the name.

The project covers all the tasks of a typical machine learning problem. The tasks are divided as:
1. Data collection
2. Training our model
3. Testing the model

1. Data Collection
Udacity has open-sourced its self-driving car simulator, built in Unity, as part of its Self-Driving Car Nanodegree. The simulator is easy to work with. There are two modes: Training and Autonomous (for testing). In training mode, we control the car and drive it around the track; at the same time, the cameras on the front of the car collect images and record the steering angle, throttle, reverse and speed of the car. This is saved as a CSV file and will be used as the data for our model. You can drive on either of the two tracks.
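As a rough illustration, here is how the recorded log could be loaded with pandas. The column names below are assumptions based on the simulator's usual CSV layout (centre/left/right image paths plus the recorded driving values), and the file is assumed to be written without a header row.

```python
# A minimal sketch of loading the recorded driving log.
# Column names are assumptions, not guaranteed to match every simulator version.
import pandas as pd

columns = ['center', 'left', 'right', 'steering', 'throttle', 'reverse', 'speed']
log = pd.read_csv('driving_log.csv', names=columns)

image_paths = log['center'].values        # paths to the centre-camera frames
steering_angles = log['steering'].values  # the labels our model will learn to predict
```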
The type of data we collect also plays a decisive role: the data should cover a variety of situations, otherwise the car will not learn to handle things like sharp bends or how to recover when it goes off-road. The data can be given more variety by applying data augmentation techniques such as flipping, translation and normalisation, according to the need, as sketched below.
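Here is a minimal sketch of two of the augmentations mentioned above, flipping and translation, assuming OpenCV-style images and steering angles in the simulator's range. The shift-to-angle factor is an assumption chosen for illustration.

```python
# Hedged augmentation sketch: flip mirrors the road and negates the angle,
# translate shifts the frame sideways and nudges the angle accordingly.
import cv2
import numpy as np

def flip(image, steering):
    """Mirror the image horizontally and negate the steering angle."""
    return cv2.flip(image, 1), -steering

def translate(image, steering, max_shift=50, angle_per_pixel=0.004):
    """Shift the image sideways; angle_per_pixel is an assumed correction factor."""
    shift = np.random.uniform(-max_shift, max_shift)
    h, w = image.shape[:2]
    M = np.float32([[1, 0, shift], [0, 1, 0]])
    shifted = cv2.warpAffine(image, M, (w, h))
    return shifted, steering + shift * angle_per_pixel
```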

2. Training
We aim to build a model that can predict the steering angle; the speed and throttle will then be decided from that angle. Here we are dealing with image data and a regression-type problem, so we will use a convolutional neural network (CNN), since it works best with image data. A CNN tries to learn the information in the image by observing its pixels. When we stack convolutional layers of different sizes over an image, the network learns properties of the image such as edges, sharp turns and other specific details. Here I have used three models: the first is a custom model trained from scratch, the second uses VGG16 as the base CNN, and the third uses the model architecture developed by Nvidia. While training, we have to watch both the training and validation loss, as we want neither overfitting nor underfitting.
After training it, we save it as an h5 file.
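As a sketch of the third option, here is a Keras model following the layer sizes from Nvidia's "End to End Learning for Self-Driving Cars" architecture. The input shape, activations, dropout and optimizer settings here are assumptions for illustration, not the exact configuration used in this project.

```python
# Hedged sketch of an Nvidia-style steering model in Keras.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Flatten, Dropout, Dense

def build_nvidia_model(input_shape=(66, 200, 3)):   # input shape is an assumption
    model = Sequential([
        # normalise pixel values to roughly [-0.5, 0.5]
        Lambda(lambda x: x / 255.0 - 0.5, input_shape=input_shape),
        Conv2D(24, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(36, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(48, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Flatten(),
        Dropout(0.5),          # assumed regularisation, not part of the original paper
        Dense(100, activation='elu'),
        Dense(50, activation='elu'),
        Dense(10, activation='elu'),
        Dense(1)               # single output: the steering angle
    ])
    # regression problem: mean squared error on the steering angle
    model.compile(optimizer='adam', loss='mse')
    return model

model = build_nvidia_model()
# model.fit(X_train, y_train, validation_split=0.2, epochs=10)  # placeholder data
model.save('model.h5')         # saved as an h5 file, as described above
```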

3. Testing
This is the fun part. The repo from Udacity comes with a drive.py file. When we pass our saved h5 model to drive.py as an argument, it uses the model to predict on new images. So when we open Autonomous mode in the simulator, the car drives on its own, showing you what it has learned.
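Conceptually, the prediction step inside drive.py looks roughly like the sketch below: load the saved h5 model and predict a steering angle for each incoming frame. The resizing step is an assumption tied to the input shape sketched earlier; the real drive.py also handles the simulator's network protocol and throttle control.

```python
# Hedged sketch of the inference step: one frame in, one steering angle out.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model('model.h5')

def predict_steering(frame):
    """frame: a single image from the simulator's front camera."""
    image = cv2.resize(frame, (200, 66))       # match the assumed model input shape
    image = np.expand_dims(image, axis=0)      # add the batch dimension
    return float(model.predict(image, verbose=0)[0][0])
```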

Conclusion and some observations
The results of the first two models were not that great, but the model based on Nvidia's architecture is quite good. I think I should have trained the network for more iterations and also worked more on the data collection part. Since I don't have a GPU :( and the CPU is slow, I couldn't accelerate the training.
Maybe with more training power and better data, you could even surpass Nvidia's pre-trained model.

(Figure: Nvidia's model architecture)

Link to my GitHub page (you will find the files required to run the project there):

https://github.com/mrinath123/Behavioural-cloning

To download the simulator (for Windows):

https://d17h27t6h515a5.cloudfront.net/topher/2017/February/58ae4419_windows-sim/windows-sim.zip

Siraj's video has been my source of inspiration; I recommend you watch it.

Finally, thanks to Udacity for open-sourcing this great simulator, and thank you Nvidia and Siraj for the ideas.

And thank you for reading.
