Andrej Karpathy, Senior Director of AI at Tesla, presented at Matroid's Scaled Machine Learning Conference 2020 back in February. Yesterday, the recording of that talk was published on YouTube.
During the presentation, Karpathy details the differences between Tesla's computer-vision approach to autonomous driving and the lidar and HD maps that others like Waymo rely on.
One of the more interesting parts of the challenge of reaching full autonomy is simply the number of edge cases the AI has to accommodate. Take STOP signs: there's a staggering variety in the ways they are implemented in the world.
Humans do an amazing job of dealing with these variations, but computers need to be explicitly trained on each scenario to understand and respond accordingly.
Unlike humans, each car doesn't have to learn individually; instead, Tesla's connected vehicles learn as a collective. Fittingly, another update, 2012.12.10, is rolling out today with more fixes.
The full presentation runs 30 minutes, including a great question-and-answer session at the end. If you're into Tesla and their FSD efforts, or interested in driverless vehicles in general, I highly recommend you take a look.
I've watched a lot of Karpathy's talks, and I find him one of the most engaging and knowledgeable speakers around; I particularly enjoy his ability to break down complex concepts into information that's easy to understand.
The challenge of creating a self-driving car is incredibly hard, but every time I watch a presentation on Tesla's approach of computer vision with AI learning, it confirms to me that it is absolutely the right method to get there as fast as possible.
Given how reliant the system is on labelled data to correct errors, I hope Musk considers my suggestion to enable the world to help solve these problems.