The development of fully autonomous vehicles represents a paradigm shift in technology, drawing on significant advances in robotics, machine learning, and engineering. Gain the knowledge and expertise utilised by teams working on motion planning for self-driving cars at the most cutting-edge technology companies.
Computer Vision
In this course, you will learn the stages that make up the life cycle of a Machine Learning project, from problem formulation through the selection of metrics, model training, and model improvement. The class concentrates on the camera sensor: you will learn how to process raw digital images so they can be fed into algorithms such as neural networks. You will construct convolutional neural networks with TensorFlow and learn how to detect and categorise objects in digital images. By introducing the entire Machine Learning workflow, this course gives you a solid understanding of the work performed by a Machine Learning Engineer and how it relates to autonomous vehicles.
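The image-processing step described above can be sketched in plain NumPy rather than TensorFlow: the snippet below normalises raw 8-bit pixel values and applies a single hand-written convolution kernel, which is the core operation a convolutional layer performs. The function names and the toy edge image are illustrative assumptions, not part of any course material.

```python
import numpy as np

def normalise(image):
    """Scale raw 8-bit pixel values to [0, 1] floats, the usual
    first step before feeding an image to a neural network."""
    return image.astype(np.float32) / 255.0

def convolve2d(image, kernel):
    """Naive 'valid' 2D cross-correlation (the 'convolution'
    used in deep learning), applied to a single-channel image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1), dtype=np.float32)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Toy 5x5 image: dark left half, bright right half (a vertical edge).
raw = np.zeros((5, 5), dtype=np.uint8)
raw[:, 3:] = 255

# Sobel-style kernel that responds strongly to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)

edges = convolve2d(normalise(raw), sobel_x)
print(edges.shape)  # (3, 3)
```

In a real pipeline a framework such as TensorFlow performs this operation in optimised layers and learns the kernel weights from data; the loop above only makes the mechanics visible.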
In self-driving cars, cameras are complemented by other sensors whose measurement principles differ, which improves the vehicles' robustness and reliability. In the second part of the course, you will learn how to detect objects such as vehicles in 3D lidar point clouds using a deep-learning approach, and how to evaluate detection performance with state-of-the-art metrics. You will then learn how to combine the detections from a camera and a lidar, and how to track objects over time using an Extended Kalman Filter. The course offers hands-on experience with multi-target tracking: you will learn how to initialise, update, and delete tracks, and how to apply data-association techniques. After completing the training, you will have the fundamental knowledge necessary to work as a sensor fusion engineer on autonomous vehicles.

Localisation
In this class you will learn about vehicle localisation, progressing to the use of three-dimensional point cloud maps obtained from lidar sensors. Before working with sensor data, you will first study the bicycle motion model, which uses simple motion equations to estimate the vehicle's location at the subsequent time step.
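The prediction step of such a motion model can be sketched as follows. This is a minimal kinematic bicycle model, assuming a state of position and heading (x, y, theta), a wheelbase L, a speed v, and a steering angle delta; the function name and parameter values are illustrative, not taken from any course material.

```python
import math

def bicycle_step(x, y, theta, v, delta, L, dt):
    """One prediction step of the kinematic bicycle model:
    given speed v (m/s) and steering angle delta (rad), estimate
    the pose (x, y, theta) dt seconds later."""
    x_next = x + v * math.cos(theta) * dt
    y_next = y + v * math.sin(theta) * dt
    # Heading changes at rate v/L * tan(delta) for a wheelbase L.
    theta_next = theta + (v / L) * math.tan(delta) * dt
    return x_next, y_next, theta_next

# Drive straight east at 10 m/s for one second with no steering.
pose = bicycle_step(0.0, 0.0, 0.0, v=10.0, delta=0.0, L=2.5, dt=1.0)
print(pose)  # (10.0, 0.0, 0.0)
```

In a localisation pipeline this predicted pose serves as the prior that is then corrected against sensor measurements, such as scan matching versus a lidar point cloud map.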