Sponsored by

Program

*New for this year: demos of actual vehicles from Daimler and V-Charge in the parking lot.

Time         Activity
09:00-09:05  Opening notes from Workshop Organizers
09:05-09:45  Invited Speaker: Vision for Low-Cost Autonomy with the Oxford University RobotCar, Will Maddern, Oxford University, United Kingdom
09:45-10:45  Contributed works (15 min per talk)
09:45-10:00  Ten Years of Pedestrian Detection, What Have We Learned?, Rodrigo Benenson, Mohamed Omran, Jan Hosang, Bernt Schiele
10:00-10:15  Fast 3-D Urban Object Detection on Streaming Point Clouds, Attila Börcs, Balázs Nagy, Csaba Benedek
10:15-10:30  Relative Pose Estimation and Fusion of Omnidirectional and Lidar Cameras, Levente Tamas, Robert Frohlich, Zoltan Kato
10:30-10:45  Good Edgels To Track: Beating The Aperture Problem With Epipolar Geometry, Tommaso Piccini, Mikael Persson, Klas Nordberg, Michael Felsberg, Rudolf Mester
10:45-11:15  Coffee Break
11:15-11:55  Invited Speaker: Localization in Urban Canyons using Cadastral 3D City Models, Srikumar Ramalingam, MERL, USA
11:55-12:15  Demo talk: Stixmantics: Real-time semantic segmentation of street scenes, Uwe Franke / Timo Scharwächter, Daimler, Germany
12:15-14:00  Lunch Break
14:00-14:40  Invited Speaker: Is the self-driving car around the corner? Mobileye's work on a computer-vision-centric approach to self-driving at consumer-level cost, Amnon Shashua, Mobileye, Israel
14:40-15:00  Demo talk: Multi-Camera Systems in the V-Charge Project: Fundamental Algorithms, Self-Calibration, and Long-Term Localization, Paul Furgale, V-Charge / ETH Zurich, Switzerland
15:00-15:40  Invited Speaker: Intelligent Drive & Pedestrian Safety 2.0, Dariu Gavrila, Daimler, Germany
16:00-18:00  Posters / demos of actual vehicles from Daimler and V-Charge in the parking lot

Invited Speakers / Demos

Topics of Interest

Analyzing road scenes using cameras could have a crucial impact on many domains, such as autonomous driving, advanced driver assistance systems (ADAS), personal navigation, mapping of large-scale environments, and road maintenance. Indeed, vehicle infrastructure, signage, and the rules of the road have been designed to be interpreted fully by visual inspection. As the field of computer vision matures, practical solutions to many of these tasks are coming within reach. Nonetheless, a wide gap still remains between what the automotive industry needs and what is currently possible with computer vision techniques.

The goal of this workshop is to allow researchers in the fields of road scene understanding and autonomous driving to present their progress and discuss novel ideas that will shape the future of this area. In particular, we would like this workshop to bridge the large gap between the community that develops novel theoretical approaches for road scene understanding and the community that builds working real-life systems performing in real-world conditions. To this end, we encourage submissions of original and unpublished work in the area of vision-based road scene understanding. The topics of interest include (but are not limited to):

We encourage researchers to submit not only theoretical contributions, but also work more focused on applications. Each paper will receive three double-blind reviews, which will be moderated by the workshop chairs.

Important Dates

Organizing Committee

Program Committee

Academia

Industry

Paper Submission

Papers should describe original and unpublished work on the above or closely related topics. Each paper will receive double-blind reviews, moderated by the workshop chairs. Authors should take into account the following: