What does Tesla have in common with a baby's brain? And why is it a good example of health economics?
Two major players
There are two major players in the autonomous-car field, and the way they do it is insanely interesting, so stay with me! Of course there is also Google with its Waymo, but today I will only cover two holy grails that do the same thing entirely differently: Tesla and Comma.ai.
First I want to point out that Tesla and Comma.ai both train their neural networks on the same type of data: photos and video. In the process the cars learn how to behave on the road. That's it, but it is not so simple at all.
Tesla: a baby's brain with limitless reach
The first step Tesla takes is feeding its neural networks all the small details and all the possible tasks its engineers can imagine. Since this is done separately, there are many interconnected networks solving minor puzzles all at once: what are the road signs, road markings, cyclists, traffic lights, and so on? Everything you can encounter while driving and everything the driver could ever think of: the Highway Code, the destination, maps, perspective, weather conditions, etc.
A Tesla car makes decisions based on all the small details it has learned. Depending on the situation, it synthesizes a decision by combining all the relevant knowledge about the particular setting. Changing the decision means running through all the details and information once again.
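To make the idea concrete, here is a minimal toy sketch of a modular, multi-task pipeline: several small "networks" (stubbed as plain functions, since this is an illustration of the concept, not Tesla's actual architecture) each solve one sub-task, and a planner combines their outputs into a single driving decision. All function names and rules here are hypothetical.

```python
# Toy sketch of the "all the details at once" approach.
# Each sub-task would be its own trained network in a real system;
# here they are stubbed as lookups into a fake camera frame.

def detect_traffic_light(frame):
    # stub: a dedicated classifier would report the light's state
    return frame.get("light", "green")

def detect_obstacles(frame):
    # stub: another network would list cyclists, cars, pedestrians
    return frame.get("obstacles", [])

def detect_lane_offset(frame):
    # stub: a third network would measure drift from the lane center
    return frame.get("lane_offset", 0.0)

def decide(frame):
    """Synthesize one decision by combining every sub-task's output."""
    if detect_traffic_light(frame) == "red":
        return "brake"
    if detect_obstacles(frame):
        return "slow_down"
    if abs(detect_lane_offset(frame)) > 0.5:
        return "steer_back_to_lane"
    return "keep_driving"

print(decide({"light": "red"}))            # brake
print(decide({"obstacles": ["cyclist"]}))  # slow_down
print(decide({"lane_offset": 0.8}))        # steer_back_to_lane
```

Note how "changing the decision" really does mean re-running every detail: `decide` consults each sub-task again on every call.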
The words characterizing this method are "all the details at once". For comparison, imagine that you do not know what a house looks like, but you are familiar with a window, a roof, walls, and a sill. When asked to picture a house, you would manage, better or worse. This is exactly how Tesla's AI works and how a baby's brain works: the baby does not know what mom or a dog is, but it can compare the distinctive small details and features of mom versus dog well enough to tell one from the other.
Comma.ai: a holistic approach and the decisions of a mature brain
And on the other side of the table sits Comma.ai. They train their neural networks without parsing the information into small details and tasks; instead they feed the networks full photo and video material. The networks then learn from the driver's view and from the car's surroundings (six cameras attached to the car). This means the networks learn what a particular situation should look like visually and what the car should do to change or maintain that view: decisions are made by steering the current visual toward the desired one.
The words characterizing this method are "full visual data". For comparison, imagine that the neural networks learn that a car should stay right behind the vehicle in front, and at the same time learn how far it should be from other objects: the markings, barriers, walls. All of that without knowing what a lane, a marking, a sign, or a barrier is. They are not even told what a red traffic light looks like. The networks learn this holistic visual representation of the situation themselves, and the details behind it stay semi-hidden.
It does not mean that Comma.ai's networks do not consider the details of a situation. It means the networks choose the details behind a decision almost by themselves. They may be unable to label a window, a roof, or a sill, but they would still know what a house looks like. This is exactly how our mature human brain makes decisions: we do not feel the complex calculations in our head when we automatically glance in the mirror during a maneuver. We don't need to actively think about mom walking on two legs and the dog having fur to tell them apart. But those calculations over all the small details are constantly spinning in our brain.
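The contrast with the modular approach can be sketched the same way. Here one opaque function maps the raw camera view directly to a control output, with no hand-labelled sub-tasks anywhere. This is a hypothetical toy, not Comma.ai's real code; a trivial brightness rule stands in for the learned network weights, purely to show the shape of "full visual data in, control out".

```python
# Toy sketch of the end-to-end approach: the whole image goes in,
# a control command comes out, and no sub-task (lane, sign, light)
# is ever named. A real system would learn this mapping from
# driving video; a stand-in rule plays the role of the weights.

def end_to_end_policy(pixels):
    """Map a flat list of pixel brightness values to a steering command.

    The 'network' was never told what a lane line is; any use of lane
    geometry is implicit in the learned mapping. Here, it simply steers
    toward balancing the left and right halves of the view.
    """
    half = len(pixels) // 2
    left = sum(pixels[:half])
    right = sum(pixels[half:])
    if left > right:
        return "steer_right"
    if right > left:
        return "steer_left"
    return "straight"

print(end_to_end_policy([9, 9, 1, 1]))  # steer_right
print(end_to_end_policy([5, 5, 5, 5]))  # straight
```

The point of the sketch: there is nowhere you could point and say "this is the lane detector". The details are inside the single mapping, semi-hidden, just as the article describes.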
The future of car crashes
So what did I have in mind when saying that autonomous driving is a very special application of AI? Well, at some point a future autonomous car will have to make a decision: wreck itself in order to prevent a bigger, more severe crash. I am curious about reverse-engineering such a decision to understand the motives and calculations behind it. Tesla's approach would bring us far more meaningful insights here, since its decision is assembled from explicit, inspectable details rather than one semi-hidden visual mapping.
Maybe in the future a car will be able to decide that the 80-year-old driver should die but the 30-year-old pregnant passenger should live, and design the crash accordingly? Or maybe cars will be able to calculate all the trajectories and prevent death in exchange for a short- or long-term disability? The main market problem then would not be the method of training neural networks; we would be stuck explaining to a computer the principles and priorities of life and death. That, my friends, would become one great example of health economics!
Read @ my personal blog https://aleknavicius.com/tesla_babys_brain