
Machine learning is a silver bullet, but silver bullets never won a war.

A lot of people accuse me of being a pessimist, but I think we are at least 10 years away from self-driving cars good enough to be unleashed upon the roads, and those will be a blood-drenched 10 years filled with interesting ethical court cases.

At the moment, despite all the hype, Tesla’s cars crash at badly painted lines[1] and Uber’s car just killed somebody its cut-price sensors missed. There's clearly a cognitive gulf between the utopia of self-driving cars and the status quo of experimental death traps driving on ideal Arizona roads.

Part of the irrational exuberance over our ability to perform these tasks and replace human drivers in the near future comes from the momentum and excitement around developments in machine learning.

It would be idiocy to call these developments anything less than revolutionary. Deep learning and new neural based approaches are transforming what is computationally feasible.

And yet these approaches are just another tool in an already stocked toolbox. A better tool, for sure, but still a tool.

What deep learning allows us to do is to pattern match incredibly well. This is something that humans are very good at but has traditionally been an Achilles heel for computers.

Only a short time ago, if I'd asked you to write a program to identify objects in a picture, you would have had to gently explain the infeasibility of such a task.

XKCD 1425

With machine learning, this becomes a realistic weekend project.

Take, for example, the task of identifying faces in pictures, an incredibly economically valuable task. Until very recently the state of the art was an algorithm called Viola-Jones feature detection[3]. This was a very good algorithm that could speedily detect whether a feature like a face exists in an image. But if you wanted to detect all faces, you needed separately trained data for the front of the face, the side of the face, the nose, and so on.
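The speed of Viola-Jones comes from a trick called the integral image, which lets the rectangular "Haar-like" features its weak classifiers threshold on be computed in constant time. Here's a minimal sketch of that core idea in pure Python; the 4x4 image and the single left-minus-right feature are made up for illustration, and a real detector cascades thousands of such features.

```python
# Toy sketch of the integral-image trick at the heart of Viola-Jones.
# Any rectangular pixel sum (and hence any Haar-like feature) costs
# only four array lookups, no matter how big the rectangle is.

def integral_image(img):
    """Padded table where cell (y, x) holds the sum of all pixels
    above and to the left of (y, x) in the original image."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of a pixel rectangle in O(1) via four lookups."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])

def two_rect_feature(ii, top, left, height, width):
    """A simple Haar-like feature: left half minus right half.
    Dark-next-to-light responses like this are what the cascade's
    weak classifiers threshold on."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

img = [
    [9, 9, 1, 1],
    [9, 9, 1, 1],
    [9, 9, 1, 1],
    [9, 9, 1, 1],
]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 4, 4))          # 80: sum of the whole image
print(two_rect_feature(ii, 0, 0, 4, 4))  # 64: strong dark-left/light-right edge
```

The point of the sketch is the shape of the approach: hand-designed features, each cheap to evaluate, combined into a cascade, which is exactly the per-feature engineering that a deep network replaces.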

With a deep learning approach, this can all be trained into the same model, and it takes only a small amount of extra work to detect whether the face is smiling or sad.

Pattern matching is now basically a solved problem.

And the applications for this breakthrough are still being discovered. There's an incredibly interesting recent paper[2] by Jeff Dean et al. at Google where they use machine learning to optimize data structure lookups. In a field where the smartest people have spent decades hunting for 1% improvements, deep learning trained on the distribution of the data found significant gains.
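The core idea of that paper can be sketched in a few lines: instead of walking a B-tree, learn a model that predicts a key's position in a sorted array, then correct the prediction within a known error bound. This toy version uses a least-squares line as the "model" (the paper uses small neural networks), and the keys are made up for illustration.

```python
# Toy sketch of a learned index: a model predicts where a key sits in
# a sorted array, and we only search within the model's worst-case
# training error of that guess.

def fit_line(keys):
    """Least-squares fit of position ~ slope * key + intercept."""
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2  # positions are 0..n-1, so their mean is known
    cov = sum((k - mean_k) * (p - mean_p) for p, k in enumerate(keys))
    var = sum((k - mean_k) ** 2 for k in keys)
    slope = cov / var
    return slope, mean_p - slope * mean_k

def build_index(keys):
    slope, intercept = fit_line(keys)
    predict = lambda k: int(round(slope * k + intercept))
    # Max prediction error over the keys bounds the search window,
    # so lookups of stored keys are guaranteed to succeed.
    err = max(abs(predict(k) - p) for p, k in enumerate(keys))
    return predict, err

def lookup(keys, predict, err, key):
    """Predict a position, then scan only within +/- err of it."""
    guess = predict(key)
    lo = max(0, guess - err)
    hi = min(len(keys) - 1, guess + err)
    for p in range(lo, hi + 1):
        if keys[p] == key:
            return p
    return None

keys = sorted([3, 8, 21, 44, 45, 60, 77, 90, 102, 130])
predict, err = build_index(keys)
print(lookup(keys, predict, err, 60))  # prints 5, the position of key 60
```

The win comes when the data distribution is learnable: the flatter the model's error, the smaller the window, and the fewer comparisons a lookup costs compared with a full tree traversal.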

Clearly the ramifications of what this tool can bring to our industry will continue to reverberate for years. Deep learning and its ilk really are a silver bullet.

However, there are realities.

Driving a car isn’t just a single task that can be coded away. When we drive a car, sure, our eyes are pattern matching incredibly well, using all the genetic skill from our time as prey on the savanna to rapidly identify objects and build a 3D model of our surroundings.

But we’re also doing a lot more, mostly subconsciously.

We notice the body language of the pedestrians and whether they might veer into the road. We see the driver ahead on his phone and know his attention is lessened. We see the wobble in the track of a big rig, and think, perhaps that driver is tired or drunk, better to stay clear.

We feel the road through the steering wheel, and can adapt to the surface or the slipperiness of the weather. We know that in certain areas the roads will be worse, and to slow down.

I haven’t described any tasks that we can’t program into our self driving robot, but I want to illustrate the sheer number of variables we are talking about, and then the fact that we have to weigh the importance of each of them, constantly and in real time, while travelling in a metal battering ram.

If a Tesla can’t handle faded paint, how will it handle forest fires, or hail storms, or torrential rain? If an Uber can’t handle night time, how will it handle thick fog, dirt roads, or a moose jumping out in front in the snow?

The world that cars have to drive in is a big one, and even limiting the scope to freeways, or first-world roads, leaves a problem domain that is daunting.

I don’t see a shortcut to long careful data acquisition, testing and improvement, and I don’t see this taking less than a decade. Silver bullets take down a single enemy, but battles can rage for years.

  1. https://electrek.co/2018/04/03/tesla-autopilot-crash-barrier-markings-fatal-model-x-accident/
  2. The Case for Learned Index Structures
  3. I gave a talk on this algorithm in 2003 in Portland but can’t find a video, if somebody has one, I’d love a link!