How is AV training currently being done by other companies?
AVs are trained to follow road rules. These rules are public and familiar to every licensed driver; nothing about them is secret, incomprehensible or unexpected. That is, we know all of these known rules. These are known knowns (KK).
AVs are also programmed to stop if a human or other obstruction appears in front of the car, or to slow down if a vehicle approaches head-on in the same lane. These are known dangerous situations, ones we are sure about, and are therefore also known knowns (KK).
Numerous such known knowns (KK) have already been programmed into AVs, including the ones that control test vehicles. When an unknown or unexpected situation arises, a disengagement occurs: the human test driver takes over the AV to handle the situation. The data gathered from such disengagements is then used to further refine the AV software. That is, an unknown has been converted into a known.
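The disengagement-to-retraining loop described above can be pictured as a simple data pipeline: log each takeover event, group recurring scenarios, and feed them back into training. The event fields and function names below are illustrative assumptions, not any vendor's actual logging format:

```python
from dataclasses import dataclass, field

@dataclass
class DisengagementEvent:
    # Illustrative fields; real disengagement logs carry far richer sensor data.
    timestamp: float
    location: tuple               # (latitude, longitude)
    scenario: str                 # short label for the surprise situation
    sensor_snapshot: dict = field(default_factory=dict)

def collect_disengagements(events):
    """Group disengagement events by scenario so each 'unknown'
    can be labeled and folded back into the training set."""
    by_scenario = {}
    for ev in events:
        by_scenario.setdefault(ev.scenario, []).append(ev)
    return by_scenario

events = [
    DisengagementEvent(0.0, (37.77, -122.42), "ball on road"),
    DisengagementEvent(5.2, (37.78, -122.41), "ball on road"),
    DisengagementEvent(9.9, (37.79, -122.40), "wrong-way cyclist"),
]
grouped = collect_disengagements(events)
print(len(grouped["ball on road"]))  # 2 -- recurring scenarios become retraining priorities
```

Scenarios that recur across the fleet surface first, which is one way an "unknown" gets promoted to a "known".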
So, the next question is: could we have predicted this unknown situation? We never had the need or opportunity to program it into the AV until we encountered it in real life. Only after encountering it did we realize that the situation was previously known to us, but we had forgotten all about it; thanks to the disengagement, we have rediscovered it and can now program the AV to address it in the future. This is an unknown known (UK). Much of our knowledge rests on things we are not aware we have: instincts, intuitions, and factors we dismiss as trivial. Driving has become second nature to us, so we do not notice that these exist. These are unknown knowns (UK).
There are also situations we knew were within the realm of possibility, but whose specific variations we do not know. These are known unknowns (KU). For example, an AV expects obstructions on the road, and when one is detected it slows down and stops. If a ball rolled across the road, the AV would slow down until the ball had passed. However, a child often comes running after the ball. Human drivers anticipate this and are prepared if a child rushes into the road. An AV would disengage when such an event occurs, and the resulting data becomes training material. A known unknown (KU) has thus been converted into a known known (KK).
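The ball-then-child pattern is an example of a precursor cue: one observation raises the likelihood of a riskier event that has not yet appeared. A minimal sketch of such a rule follows; the cue list, speed cap, and function name are invented for illustration:

```python
# Precursor cues: objects whose appearance predicts a higher-risk follow-up.
PRECURSOR_CUES = {
    "ball": "a child may follow",
    "open_car_door": "an occupant may step out",
}

def target_speed(current_speed_kmh, detected_objects):
    """Slow preemptively when a precursor cue is seen, even after
    the cue itself (e.g. the ball) has cleared the road."""
    if any(obj in PRECURSOR_CUES for obj in detected_objects):
        return min(current_speed_kmh, 15.0)  # crawl until the scene is verified clear
    return current_speed_kmh

print(target_speed(50.0, ["ball"]))   # 15.0: anticipate the child
print(target_speed(50.0, ["truck"]))  # 50.0: no precursor cue present
```

The point of the sketch is the asymmetry: the hazard that matters (the child) is never directly detected; the rule reacts to its precursor, which is what a human driver does instinctively.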
Finally, there are situations we neither know about nor expect. These take us completely by surprise, never having seemed within the realm of possibility until we encountered them. This is an unknown unknown (UU). For example, a woman under the influence of drugs pushing a grocery-laden bicycle across a four-lane highway is not an anticipated event. Stopping distances cannot be hardcoded into AV software for every such situation and its variations. An experienced human driver, however, can discern that a collision is imminent and stop the vehicle in time. If visual, hand and foot sensors are installed in such a non-AV, the event is detected automatically at the first instance, and a signature is captured and used to train AVs to anticipate similar situations and slow down. A crowdsourced group of thousands of drivers would quickly capture most such situations and their variations under different driving conditions, helping to rapidly build a robust database for training AVs.
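The idea of pairing driver sensors with the camera feed can be pictured as capturing a "signature" whenever the human reacts sharply. The field names, thresholds, and trigger logic below are assumptions made for illustration only, not a description of any actual sensor product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DriverSignature:
    gaze_target: str        # what the eye-tracking sensor says the driver looked at
    brake_pressure: float   # 0.0 .. 1.0, from the foot sensor
    steering_delta: float   # degrees of sudden wheel movement, from the hand sensor
    camera_frame_id: int    # camera frame to pair with this human reaction

def capture_if_reaction(gaze, brake, steer, frame_id,
                        brake_threshold=0.6, steer_threshold=20.0):
    """Record a signature only when the human reacts strongly,
    e.g. hard braking or a sharp swerve, so the collected dataset
    concentrates on surprise events rather than routine driving."""
    if brake >= brake_threshold or abs(steer) > steer_threshold:
        return DriverSignature(gaze, brake, steer, frame_id)
    return None  # routine driving: nothing worth uploading

sig = capture_if_reaction("pedestrian pushing bicycle", 0.9, 2.0, 1042)
print(sig is not None)  # True: hard braking triggered a capture
```

Filtering at capture time matters for a crowdsourced fleet: thousands of drivers generate mostly routine footage, and a reaction-triggered filter keeps only the moments where human instinct fired.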
What is needed?
In summary, known knowns (KK) have already been programmed into AVs; the rest (KU, UK, UU) are being discovered during AV training. It will take billions of miles of training data across all kinds of roads, weather conditions, traffic conditions, countries, cities, villages, suburbs, weekdays, weekends, school days, holidays, etc., to come anywhere close to the skill level of an ordinary human driver. Verless provides the quickest and best way to do this, using a crowdsourced group of drivers.
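The four categories used throughout the article can be restated compactly in one structure. This is just a summary of the text's own taxonomy, not a formal standard:

```python
from enum import Enum

class Knowledge(Enum):
    KK = "known known: already programmed (road rules, obvious obstacles)"
    UK = "unknown known: forgotten intuition rediscovered after a disengagement"
    KU = "known unknown: expected class of event, unseen variation (ball, then child)"
    UU = "unknown unknown: complete surprise (bicycle pushed across a highway)"

for k in Knowledge:
    print(k.name, "->", k.value)
```

Only the KK quadrant can be programmed up front; the other three must be harvested from real driving, which is the argument for crowdsourced data collection.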
No one other than Verless uses human sensors. How can the workings of the brain be deduced without knowing what the eyes are looking at, and how the feet and hands respond?