Human-like Autonomous Vehicles

Drawbacks of non-Verless methods:

The end goal of AV companies is to make AVs more human-like. However, none of them consider what human drivers perceive through their eyes and ears, or what they do in response with their hands and feet.

Even after so many years, companies still have to collect AV training data from geofenced areas covered by HD maps and manually program it into their software. Current training data consists of Known-Knowns (KKs) and data from disengagement situations. This applies to Tesla as well (although Tesla does not require geofenced regions or HD maps).

Non-Verless AV training data gathering practices:

  • Need HD maps.
  • Limited to geofenced areas.
  • Limited to rational, organized, and strictly legal driving conditions.
  • Mostly ignores China and India: 35% of the world's population, with a combined GDP projected to be 400% bigger than that of the USA in 10 years — a huge market. However, it is difficult to gather data in such regions to train AVs through traditional means.
  • Need to employ test drivers.
  • Need expensive and specially equipped cars in large numbers.
  • Liability issues surrounding company cars and employing test drivers.
  • Cannot quickly scale to millions of miles.
  • Cannot gather data from a wide variety of vehicles under a wide variety of conditions, and in a wide variety of regions.
  • Slow data gathering on very specific routes.
  • Need to manually sort and program perception objects.
  • Geared towards object detection, not event detection or signature extraction.

Tesla

Tesla vehicles are, to a large extent, programmed for lane detection and obstacle avoidance. The knowledge base behind Tesla's AV software is built from Known-Knowns, disengagements, accidents, and sub-optimal performance. Tesla manually analyzes data from different scenarios, manually extracts the relevant data, and then manually re-programs the AV software. Moreover, data from all drivers is analyzed, not just from good drivers.

The reason event detection and signature extraction for re-programming have to be done manually is that human sensors are not used by Tesla or by anyone else. If eye-trackers and hand and foot sensors were used, the wealth of human driving knowledge, experience, and logic accumulated over decades and spread across a wide variety of geographies and populations could be automatically acquired and incorporated into AV software. This could be done with minimal human involvement, at much higher efficiency and speed, and at very little cost.
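To make the idea of automatic event detection and signature extraction concrete, here is a minimal sketch. It assumes a hypothetical foot-pedal pressure stream sampled at a fixed rate; the threshold, window size, and sensor values are illustrative assumptions, not part of any existing Verless or Tesla pipeline.

```python
# Hypothetical sketch: detect a driving "event" (here, hard braking) in a
# foot-pedal pressure stream and extract a signature window around it.
# All sensor values and thresholds are illustrative assumptions.

def detect_events(pedal_pressure, threshold=0.8, window=5):
    """Return (index, signature) pairs for samples where pedal pressure
    exceeds `threshold`; the signature is the surrounding window of samples."""
    events = []
    for i, p in enumerate(pedal_pressure):
        if p >= threshold:
            lo = max(0, i - window)
            hi = min(len(pedal_pressure), i + window + 1)
            events.append((i, pedal_pressure[lo:hi]))
    return events

# Simulated brake-pedal trace: gentle driving, then a hard stop.
trace = [0.1, 0.1, 0.2, 0.1, 0.9, 0.95, 0.3, 0.1]
events = detect_events(trace)
print(len(events))  # two samples exceed the 0.8 threshold
```

A production system would of course merge consecutive above-threshold samples into a single event and fuse multiple sensor streams (gaze, steering, pedals), but the principle — flag an event, capture its surrounding signature, feed it to training automatically — is the same.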

Overcoming drawbacks of the current AV development efforts

To overcome the drawbacks of current AV trial methods, it is best to crowdsource the process: recruit thousands of drivers, spread across different countries, who drive their own cars during their routine lives. There is no capital cost, risk, or liability, except for the cost of the sensors.

Such a method has a low capital requirement, since costs for AV companies would be limited to (a) software and data (smartphone app, computation, data warehousing) and (b) a roughly $200 sensor kit per driver. In return, the company carries no liabilities and gathers millions of miles monthly from a wide variety of vehicles and conditions, from around the globe.
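A minimal sketch of what the smartphone app might upload per sample is shown below. All field names and units are hypothetical assumptions chosen for illustration; the source does not specify a data format.

```python
# Hypothetical per-sample record a crowdsourced driver app might upload.
# Field names and units are illustrative assumptions only.
from dataclasses import dataclass, asdict
import json

@dataclass
class DriveSample:
    driver_id: str
    timestamp_ms: int
    gaze_x: float          # eye-tracker: horizontal gaze angle, degrees
    gaze_y: float          # eye-tracker: vertical gaze angle, degrees
    steering_angle: float  # hand-sensor proxy, degrees
    brake_pressure: float  # foot sensor, normalized 0..1
    speed_kmh: float

sample = DriveSample("driver-001", 1700000000000, 3.2, -1.0, 12.5, 0.0, 48.0)
payload = json.dumps(asdict(sample))  # JSON body for upload to the data warehouse
print(payload)
```

Keeping the record small and self-describing like this is what lets millions of miles of heterogeneous data from many vehicle types flow into one warehouse without per-vehicle integration work.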

In the Verless system, there would be no need to create HD maps, and data gathering would not be limited to geofenced areas. Such a scheme would be available to users in all chosen countries, including China and India, enabling quick scale-up as the program becomes popular. In many countries, traffic rules are not always followed, leading to chaotic traffic. However, there is a method to this madness: millions of drivers travel these roads every day. We need to mimic the good drivers among these populations so that AVs specific to these countries can be developed.