Labelling Road Scenes Using Machine Learning and Stereo Vision: Semantic labelling of road scenes using supervised and unsupervised machine learning with LIDAR-stereo sensor fusion

Book by Osgood Thomas
Vehicles capable of sensing their surroundings are of interest not only to car manufacturers for safety systems; the underlying technology also applies to autonomous space exploration, military applications (e.g. the DARPA challenge), and fully autonomous passenger cars. The ability to autonomously detect and avoid pedestrians, for example, would be the next step beyond existing vision-based driver assistance technologies such as road sign detection and lane departure warning systems. The main goal of this work is to explore the full pipeline that turns raw sensor data into a scene description meaningful to a computer. This begins with the selection, configuration and evaluation of current vehicle sensors, followed by the processing and identification of the collected data. The project evaluates a range of techniques currently used in the field of image processing and classification. In areas where information is lacking, such as a direct comparison between classification techniques, further investigation is carried out. Where current techniques do not yield results suited to this application, improvements are suggested.