17 December 2015


The ultimate Turing test: I trust a (self-driving) algorithm with my life, therefore it is intelligent

From Tesla's "Correction to article: 'The First Person to Hack the iPhone Built a Self-Driving Car'":

This is the true problem of autonomy: getting a machine learning system to be 99% correct is relatively easy, but getting it to be 99.9999% correct, which is where it ultimately needs to be, is vastly more difficult.

So Tesla is aiming for a system that is correct 99.9999% of the time; put differently, one that is wrong 0.0001% of the time.

A daily 1-hour commute translates to 900,000 seconds of driving per year (5 days/week * 50 weeks/year * 1 h/day * 3600 s/h). Theoretically, such a self-driving system would then be 'wrong' for about 1 second per year (0.0001% of 900,000 seconds is 0.9 seconds).
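A quick back-of-the-envelope sketch of that figure, assuming the commute pattern above and an error rate applied uniformly per second of driving (neither assumption comes from Tesla):

```python
# Back-of-the-envelope check of the "one wrong second per year" figure.
# Assumptions (illustrative, not from Tesla): 1 h of driving per day,
# 5 days/week, 50 weeks/year, and an error rate spread evenly over time.

DAYS_PER_WEEK = 5
WEEKS_PER_YEAR = 50
COMMUTE_HOURS_PER_DAY = 1
SECONDS_PER_HOUR = 3600

seconds_per_year = (DAYS_PER_WEEK * WEEKS_PER_YEAR
                    * COMMUTE_HOURS_PER_DAY * SECONDS_PER_HOUR)
error_rate = 1 - 0.999999  # 99.9999% correct -> 0.0001% wrong

wrong_seconds = seconds_per_year * error_rate
print(f"{seconds_per_year:,} driving seconds/year "
      f"-> {wrong_seconds:.1f} 'wrong' seconds/year")
# prints: 900,000 driving seconds/year -> 0.9 'wrong' seconds/year
```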

Now, what does this one wrong second mean? What does it mean for a self-driving system to be wrong? Does it fail to turn when it should (a false negative)? Does it turn when it should not (a false positive)? In either case, does it result in a crash? Do I get killed during that one wrong second? Does somebody else get killed? Who is responsible?

For me, what matters are the implications of that one wrong second. It means I am willing to trust my life (and the lives of others, inside or outside my car) to an algorithm. Of course, we already delegate a lot of trust when driving a car. For example, I trust that when I push the brake pedal, the car will slow down. The chain of events is quite straightforward: pushing the pedal forces brake fluid down the brake hoses, pressing the brake pads against the discs. But trusting a (machine learning) algorithm is a whole different story. We do not control the flow of events. That is the whole point of machine learning: not having to explicitly encode the system's representations and states.

Maybe this is the ultimate Turing test: I trust a (self-driving) algorithm with my life, therefore it is intelligent.
