
A glimpse into the future of Mapmaking with OSM

Philipp Kandal
General Manager EU / Head of OpenStreetMap @ Telenav




Over the last 12 months we have started to look extensively at how we can leverage AI and deep learning to improve OpenStreetMap, and today we want to share a few details about how we envision the future of map making as well as what we are already doing. We see the emergence of self-driving vehicles as a game-changer, and one key requirement for those vehicles is accurate and up-to-date maps. Commercial map providers currently remap each region roughly every 12-24 months, in a costly process that relies on expensive, high-precision survey vehicles. Our goal is a map that is updated on a minutely basis, with key streets covered at least once every day. This is what we set out to achieve by helping make OSM ready for this use case.

Using OSM for Navigation Maps

At Telenav (and before that at skobbler) I have been actively involved in OSM for almost 10 years now, and it is truly remarkable how massively OSM has grown in that period: from a map used mostly by passionate enthusiasts to one used by hundreds of millions of users and by large companies such as Toyota, TripAdvisor and Apple, to name just a few, to power their consumer products. Despite this success, navigation maps still need many additional attributes that are not well covered in OSM, such as signposts, speed limits, turn restrictions and lane information, in order to provide the best possible guidance.

Figure: Speed limit coverage

Figure: Turn restriction coverage, United States

To close the turn restriction gap in particular, we use (anonymised) GPS probe data from our millions of customers and from partners like Inrix to detect, based on turn behaviour, where turn restrictions are likely. This data is shared with the community via ImproveOSM, and for the most likely cases we also put a high penalty on those turns in our own routing so that our customers avoid those manoeuvres where possible. This way we have detected 139,181 turn restrictions and increased coverage meaningfully.
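The sketch below is a simplified illustration of this idea, not our production pipeline: turns that are geometrically possible but almost never taken, despite heavy traffic on the approach road, are flagged as likely restrictions. The function name and thresholds are illustrative assumptions.

```python
# A minimal sketch of inferring likely turn restrictions from anonymised,
# map-matched GPS probe data (not the actual Telenav pipeline).
from collections import Counter

def likely_turn_restrictions(possible_turns, observed_turns,
                             min_approach_traffic=500, max_turn_share=0.002):
    """possible_turns: set of (from_segment, to_segment) pairs allowed by geometry.
    observed_turns: iterable of (from_segment, to_segment) pairs seen in probe traces."""
    turn_counts = Counter(observed_turns)
    approach_counts = Counter(frm for frm, _ in observed_turns)

    candidates = []
    for frm, to in possible_turns:
        total = approach_counts[frm]          # how much traffic enters from this segment
        taken = turn_counts[(frm, to)]        # how often this particular turn was taken
        if total >= min_approach_traffic and taken / total <= max_turn_share:
            # Plenty of drivers arrive here, yet essentially nobody makes this turn:
            # a strong hint that the turn is restricted.
            candidates.append({"from": frm, "to": to,
                               "observed": taken, "approach_traffic": total})
    return candidates
```

Candidates found this way would still be shared via ImproveOSM and confirmed by mappers before any edit is made.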

Next step: Higher accuracy with Computer Vision

Speed limits, lanes and signposts are significantly trickier, as they cannot be identified from GPS probe data alone. This is why we started our OpenStreetView project to capture those images: there was no truly open street-level imagery project that we could use (when we approached Mapillary, they asked for hundreds of thousands of dollars in license fees, which was not an option for us).

In parallel to the OpenStreetView project, we have invested heavily in computer vision algorithms and established a cooperation with the Technical University of Cluj to draw on their more than 15 years of experience in the field. Our goal is to use computer vision to automatically build maps from these images.

In the last year we have made very significant progress, and we are now able to detect speed limits, turn signs and signposts (including OCR of the text on those signs). These detections are reviewed by our editors and then added directly to OSM.
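As a rough illustration of the detect-then-read step (not our actual models), the sketch below assumes a pre-trained sign detector is available and uses an off-the-shelf OCR engine to read the text on each detected panel; `detect_signs` is a hypothetical placeholder for the real detector.

```python
# Simplified detect-then-OCR illustration for signposts in street-level imagery.
import cv2
import pytesseract

def read_signposts(image_path, detect_signs):
    """detect_signs(image) -> list of (x, y, w, h, label) boxes; assumed to be
    a pre-trained sign detector trained on street-level imagery."""
    image = cv2.imread(image_path)
    results = []
    for x, y, w, h, label in detect_signs(image):
        crop = image[y:y + h, x:x + w]
        # Grayscale + Otsu binarisation usually helps OCR on high-contrast sign panels.
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(binary).strip()
        results.append({"label": label, "text": text, "box": (x, y, w, h)})
    # Nothing goes into OSM automatically: results are queued for human review.
    return results
```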

Figure: Sign panel detection

We have built a map editor that allows our team of 20+ mappers to review those detections internally and add them to OSM. With this tool we have so far added 19,798 map features (turn restrictions, one-ways, signs) to OSM, and every week we add hundreds of new turn restrictions and other signs to make the map better.
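For readers unfamiliar with how such edits reach OSM, here is a hedged sketch of the standard OSM API 0.6 flow (open a changeset, create a restriction relation, close the changeset). It is not our editor's actual code; authentication and error handling are simplified, and the comment/tool names are placeholders.

```python
# Sketch of writing a human-reviewed turn restriction to OSM via API 0.6.
import requests

API = "https://api.openstreetmap.org/api/0.6"

def upload_turn_restriction(auth, from_way, via_node, to_way, restriction="no_left_turn"):
    # 1. Open a changeset describing the edit.
    changeset_xml = (
        '<osm><changeset>'
        '<tag k="comment" v="Add turn restriction detected from probe data, human reviewed"/>'
        '<tag k="created_by" v="example-review-tool"/>'
        '</changeset></osm>'
    )
    cs_id = requests.put(f"{API}/changeset/create", data=changeset_xml, auth=auth).text.strip()

    # 2. Create the restriction relation: from way, via node, to way.
    relation_xml = (
        f'<osm><relation changeset="{cs_id}">'
        f'<member type="way" ref="{from_way}" role="from"/>'
        f'<member type="node" ref="{via_node}" role="via"/>'
        f'<member type="way" ref="{to_way}" role="to"/>'
        '<tag k="type" v="restriction"/>'
        f'<tag k="restriction" v="{restriction}"/>'
        '</relation></osm>'
    )
    requests.put(f"{API}/relation/create", data=relation_xml, auth=auth)

    # 3. Close the changeset.
    requests.put(f"{API}/changeset/{cs_id}/close", auth=auth)
```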

Figure: Map editor tool

Advanced level: Creating high-accuracy maps (ADAS / HD maps)

The next level of this challenge is to create the high-accuracy maps needed by self-driving cars and ADAS (Advanced Driver Assistance Systems) applications. Those maps need an accuracy of under 2 m, which OSM typically does not provide consistently and which, as we learned through a lot of trial and error, is very hard to achieve from GPS probes alone. We looked into how we could do better, and the natural choice was to leverage data that is already available in the car. We therefore connected our OpenStreetView application to the vehicle via the OBD2 port (available on practically every car manufactured in the last ~20 years), combining our phone-based data with data coming directly from the car, such as speed or, on some models, even steering wheel angle via OpenXC. With this we have achieved an accuracy 5-10x higher than what phone GPS alone can deliver, and with several passes over the same road we can create truly high-accuracy maps.
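The toy sketch below shows one reason vehicle data helps, under assumed inputs (projected GPS fixes, OBD2/OpenXC speed, heading): wheel speed is far less noisy than phone GPS, so a simple complementary blend of a dead-reckoned prediction with the GPS fix already smooths the track considerably. It is an illustration, not the fusion we ship.

```python
# Illustrative complementary filter: blend noisy phone GPS with dead reckoning
# driven by vehicle speed (OBD2/OpenXC) and heading.
import math

def fuse_track(gps_fixes, vehicle_speeds, headings, dt=1.0, gps_weight=0.3):
    """gps_fixes: list of (x, y) in metres (projected coordinates);
    vehicle_speeds: m/s from the vehicle bus; headings: radians."""
    x, y = gps_fixes[0]
    fused = [(x, y)]
    for gps, speed, heading in zip(gps_fixes[1:], vehicle_speeds, headings):
        # Dead-reckoning prediction from the vehicle's own speed and heading.
        x_pred = x + speed * dt * math.cos(heading)
        y_pred = y + speed * dt * math.sin(heading)
        # Complementary blend: trust the smooth prediction more than the noisy GPS fix.
        x = gps_weight * gps[0] + (1 - gps_weight) * x_pred
        y = gps_weight * gps[1] + (1 - gps_weight) * y_pred
        fused.append((x, y))
    return fused
```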

Our vision of the future of map making

We believe that if enough consumers help record the necessary images via OpenStreetView, maps can be created in near real-time at unprecedented accuracy. This would be a major enabler for self-driving cars and up-to-date navigation systems. To make that possible, we are also working, at an early stage, with several car manufacturers to use data from their on-board cameras for these detections in the future. Our hope is that millions of cars from our OEM partners will eventually contribute through this technology, and that we can share this data with the OSM community to create even higher quality maps than we have today.

Over the next few weeks we will go deeper on this blog into the individual modules we have built to make this future happen, and we look forward to feedback from the community.
