Call for Papers
Analyzing road scenes using cameras could have a crucial impact on many domains, such as autonomous driving, advanced driver assistance systems (ADAS), personal navigation, large-scale environment mapping and road maintenance. Indeed, vehicle infrastructure, signage and the rules of the road were designed to be interpreted entirely by visual inspection. As the field of computer vision matures, practical solutions to many of these tasks are coming within reach. Nonetheless, a wide gap remains between what the automotive industry needs and what current computer vision techniques can deliver.
The goal of this workshop is to allow researchers in road scene understanding and autonomous driving to present their progress and discuss the novel ideas that will shape the future of the area. In particular, we would like the workshop to bridge the gap between the community that develops novel theoretical approaches to road scene understanding and the community that builds real-life systems performing in real-world conditions. To this end, we aim to host invited speakers from several continents, drawn from both academia and industry.
We encourage submissions of original and unpublished work in the area of vision-based road scene understanding. The topics of interest include (but are not limited to):
- Road scene understanding in mature and emerging markets
- Deep learning for road scene understanding
- Prediction and modeling of road scenes and scenarios
- Semantic labeling, object detection and recognition in road scenes
- Dynamic 3D reconstruction, SLAM and ego-motion estimation
- Visual feature extraction, classification and tracking
- Design and development of robust and real-time architectures
- Use of emerging sensors (e.g., multispectral, RGB-D, LIDAR and LADAR)
- Fusion of RGB imagery with other sensing modalities
- Interdisciplinary contributions across computer vision, optics, robotics and other related fields
We encourage researchers to submit not only theoretical contributions, but also application-focused work. Each paper will receive double-blind reviews, moderated by the workshop chairs.
Important Dates
- Submission Deadline: July 4th, 2018.
- Notification of Acceptance: August 5th, 2018.
- Camera-ready Deadline: August 25th, 2018.
- Workshop: September 14th, 2018.
Invited Speakers
- Prof. Dariu Gavrila, TU Delft, The Netherlands
- Prof. Mohan Trivedi, UCSD, USA
- Prof. Arnaud de la Fortelle, MINES ParisTech, France
- Dr. Oscar Beijbom, nuTonomy, USA
- Dr. Henning Hamer, Continental, Germany
Organizing Committee
- Dr. Mathieu Salzmann, EPFL, Switzerland
- Dr. Jose Alvarez, NVIDIA, USA
- Dr. Lars Petersson, Data61 CSIRO, Australia
- Prof. Fredrik Kahl, Chalmers University of Technology, Sweden
- Dr. Bart Nabbe, Aurora, USA
Program Committee
TBA
Paper Submission
Papers should describe original and unpublished work on the above or closely related topics. Each paper will receive double-blind reviews, moderated by the workshop chairs. Authors should take the following into account:
- All papers must be written in English and submitted in PDF format.
- Papers must be submitted online through the CMT submission system. The submission site is: https://cmt3.research.microsoft.com/CVRSUAD2018.
- The maximum paper length is 14 pages (excluding references); shorter submissions are also welcome. The workshop paper format guidelines are the same as for the main conference papers.
- Submissions will be rejected without review if they exceed 14 pages (excluding references) or violate the double-blind or dual-submission policy. The author kit provides a LaTeX2e template for submissions and an example paper demonstrating the format. Please refer to this example for detailed formatting instructions.
- A paper ID will be allocated to you during submission. Please replace the asterisks in the example paper with your paper's ID before uploading your file. More detailed instructions can be found on the main conference website.