Automatic Detection of Events
An example of an unexpected event outside the vehicle: a child at the edge of the road, unaccompanied by an adult, causes the driver to become alert and slow down.
An unexpected outside event, defined: an event that causes the vehicle to perform an action that is neither expected nor required when a map of the road is consulted. Examples: stopping the car where there is no stop sign, yield sign, or pedestrian crossing; slowing down suddenly when the map does not require it.
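In code, this definition reduces to a simple predicate: an action was performed, and the map offers no reason for it. Here is a minimal Python sketch; the names (MapContext, VehicleAction, is_unexpected_outside_event) are illustrative assumptions, not the actual Verless interface:

```python
from dataclasses import dataclass

@dataclass
class MapContext:
    """What the map says the road requires at the vehicle's position."""
    has_stop_sign: bool = False
    has_yield_sign: bool = False
    has_pedestrian_crossing: bool = False

    def requires_slowing(self) -> bool:
        return (self.has_stop_sign or self.has_yield_sign
                or self.has_pedestrian_crossing)

@dataclass
class VehicleAction:
    """What the driver actually did over a short window."""
    stopped: bool = False
    slowed_suddenly: bool = False

def is_unexpected_outside_event(action: VehicleAction, ctx: MapContext) -> bool:
    """True when the action occurred but the map gives no reason for it."""
    return (action.stopped or action.slowed_suddenly) and not ctx.requires_slowing()

# Sudden slowing on a stretch with no stop sign, yield sign, or crossing:
print(is_unexpected_outside_event(VehicleAction(slowed_suddenly=True),
                                  MapContext()))  # True
```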
It is important that data be analyzed and categorized without human assistance, and crucial to this is the automatic detection of unexpected outside events. Wouldn’t drivers or external observers have to make this decision by manually observing millions of miles of data, and then manually mark and classify each unexpected event? How does the Verless system do all of this automatically, on the fly?
Let’s start by understanding what “becoming alert” means when an unexpected outside event occurs:
- Eyes dwelling on and forming a region of interest (ROI) around an object.
- Slowing down the car: releasing the accelerator pedal or starting to depress the brake pedal.
- Foot hovering over the brake pedal, ready to depress it.
- The hover distance of the foot above the brake pedal can indicate the degree of alertness.
- Steering wheel: transitioning from a single hand on the steering wheel to both hands.
- Steering wheel: increasing grip force or contact area on the wheel.
- Hearing an unexpected sound and performing the foot and hand actions listed above.
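Each of these cues is measurable, so the onset of alertness can be approximated by simple thresholding over cabin sensor streams. The sketch below is a rough illustration; the signal names and threshold values are assumptions, not Verless specifics:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CabinSignals:
    gaze_dwell_ms: float             # time the eyes dwell on one ROI
    accel_pedal_pos: float           # 0.0 = released, 1.0 = fully depressed
    brake_hover_mm: Optional[float]  # foot-to-brake distance; None = foot elsewhere
    hands_on_wheel: int              # 0, 1, or 2
    grip_force_n: float              # grip force on the wheel, in newtons

def alertness_score(s: CabinSignals) -> int:
    """Count how many of the listed alertness cues are present."""
    cues = 0
    cues += s.gaze_dwell_ms > 500                                   # eyes dwelling on an ROI
    cues += s.accel_pedal_pos < 0.05                                # accelerator released
    cues += s.brake_hover_mm is not None and s.brake_hover_mm < 30  # foot near the brake
    cues += s.hands_on_wheel == 2                                   # both hands on the wheel
    cues += s.grip_force_n > 40                                     # increased grip force
    return cues

alert = CabinSignals(gaze_dwell_ms=800, accel_pedal_pos=0.0,
                     brake_hover_mm=15, hands_on_wheel=2, grip_force_n=55)
print(alertness_score(alert))  # 5 -> strongly alert
```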
Verless starts by categorizing the above as:
- Perception: Primary Human Events: (a) eye movements (called eye-events); (b) hearing unexpected sounds (called aural-events).
- Actuation: Secondary Human Events: (a) foot movements (called foot-events); (b) hand movements (called hand-events).
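Expressed as data structures, this two-level taxonomy might look like the following Python sketch (class and field names are hypothetical):

```python
from dataclasses import dataclass
from enum import Enum

class PrimaryEvent(Enum):
    """Perception: the driver noticing something."""
    EYE = "eye-event"      # eye movements, e.g. forming an ROI
    AURAL = "aural-event"  # hearing an unexpected sound

class SecondaryEvent(Enum):
    """Actuation: the driver reacting to what was noticed."""
    FOOT = "foot-event"    # pedal actions
    HAND = "hand-event"    # steering-wheel actions

@dataclass
class HumanEvent:
    kind: PrimaryEvent | SecondaryEvent
    timestamp_s: float     # when the event was observed

print(HumanEvent(PrimaryEvent.EYE, 12.4))
```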
The steps involved are:
- The eyes forming a region of interest around an unaccompanied child at the edge of the road is an eye-event, and removing the foot from the accelerator pedal to slow down the car is a foot-event.
- The human brain is the controller. The accompanying road map shows no traffic sign or any other reason for the eyes to form an ROI around the child, yet the brain caused it to happen anyway.
- So we compare the eye-event and the foot-event to the map, realize that neither was expected, and tag this as an unexpected outside event (the detection loop is sketched in code after this list).
- Then we look at the road-facing camera video and determine which object the eyes formed the ROI around. Pattern recognition identifies it as a child without an accompanying adult.
- The moment the primary human event first occurred marks the start of the outside event; the event ends when both the primary and the secondary human events once again agree with the map.
- The signature of this time period is saved, and it includes eye, foot, and hand information, identified-object information, GPS start and end coordinates, speed profiles, weather conditions, and so on.
- With a large collection of similar signatures, an AV can be trained to react the same way when it encounters a similar situation.
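Putting these steps together, the detection loop can be sketched end to end: open a window at the first primary human event the map cannot explain, close it when behavior agrees with the map again, and save a signature for the window. Everything below (Event, Signature, extract_signatures, classify_roi) is a hypothetical illustration, and the closing condition is simplified relative to the description above:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    t: float            # timestamp in seconds
    primary: bool       # True = perception (eye/aural), False = actuation
    map_expected: bool  # does the map explain this event?

@dataclass
class Signature:
    start_t: float
    end_t: float
    object_label: str   # from road-facing camera pattern recognition
    events: list = field(default_factory=list)

def extract_signatures(events: list[Event], classify_roi) -> list[Signature]:
    signatures, window = [], None
    for ev in sorted(events, key=lambda e: e.t):
        if window is None:
            # Start of an outside event: a primary event the map cannot explain.
            if ev.primary and not ev.map_expected:
                window = Signature(ev.t, ev.t, classify_roi(ev.t), [ev])
        else:
            window.events.append(ev)
            # Simplified: the description closes the window once both primary
            # and secondary events agree with the map; here any map-consistent
            # event closes it.
            if ev.map_expected:
                window.end_t = ev.t
                signatures.append(window)
                window = None
    return signatures

# Example: eye-event and foot-event unexplained by the map, then recovery.
timeline = [Event(10.0, True, False),   # eyes form an ROI around the child
            Event(10.4, False, False),  # foot comes off the accelerator
            Event(14.0, False, True)]   # driving matches the map again
for sig in extract_signatures(timeline, classify_roi=lambda t: "unaccompanied child"):
    print(sig.start_t, sig.end_t, sig.object_label)  # 10.0 14.0 unaccompanied child
```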
The more times such a signature is acquired, the stronger the learning reinforcement. Using such signatures reduces, or removes entirely, the need for a human programmer to manually code these driver actions and reactions into AV software.
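One simple way to realize “the more occurrences, the stronger the reinforcement” is to weight training examples by how often similar signatures recur. A toy sketch, assuming signatures are grouped by their identified object label (a real similarity key would be much richer):

```python
from collections import Counter
from types import SimpleNamespace

def reinforcement_counts(signatures) -> Counter:
    """Count how often each situation recurs; higher counts mean stronger
    reinforcement for the matching reaction when training the AV."""
    return Counter(sig.object_label for sig in signatures)

sigs = [SimpleNamespace(object_label="unaccompanied child")] * 3 + \
       [SimpleNamespace(object_label="ball rolling into road")]
print(reinforcement_counts(sigs))
# Counter({'unaccompanied child': 3, 'ball rolling into road': 1})
```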
Such signatures carry a wealth of human knowledge, experience, and logic accumulated over years and spread across a wide variety of geographies and populations, together with their trade-offs in rationalization and risk management, enabling safe, fast, efficient, and pleasant transportation.
As societies transition toward non-human vehicle operators, all of this is preserved as signatures for use by AVs, rather than being lost to time.