    Dear Elon, let the world help accelerate FSD development with Label at Home

    Tesla is leading the way when it comes to achieving level 4/5 autonomy, but the development isn’t happening fast enough.

    Using computer vision to achieve autonomous driving is definitely not easy. The basic principle is to use the array of cameras around the vehicle, combined with an insanely powerful custom-built chip, to replicate and surpass human ability.

    During the Autonomy Investor Day in April last year, Elon Musk showed a video of a Tesla driving itself through city streets, obeying traffic lights and stop signs, and taking the right exits on the highway. This got everyone, including me, excited that we weren’t far away from what looked very close to a driverless car.

    We’re now a year on from then, and those in the early access program are only now receiving traffic light and stop sign recognition. While Autopilot and FSD are the best on the market, there are still issues merging lanes, there’s no roundabout support, and the car still can’t park itself.

    Musk had hoped the FSD package would be feature-complete by the end of 2019, meaning they’d have the slow-speed stuff like car parks and Summon sorted, the highway driving like Navigate on Autopilot, and everything in between. Unfortunately, it hasn’t played out that way.

    Of course, the technology part is just the start; then there are the regulatory battles to allow driverless vehicles, which have to be fought hundreds of times across the world.

    On the technology challenges, the reality is there are a million edge cases that need to be addressed, and somewhere at Tesla there’s an awfully big to-do list.

    Essentially, the computer vision system that Tesla relies on for Full Self-Driving requires a massive amount of data from the world to understand all the weird and wonderful environments that humans navigate daily.

    During the Autonomy Day presentation, Tesla’s Director of AI, Andrej Karpathy, explained how the system learns from the millions of kilometres of real-world driving that Tesla owners do. When a driver intervenes and takes back control from Autopilot (then re-enables it), that event, along with its location, is registered and can be used for training. The video from all cameras is sent back to Tesla for analysis.
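
    To make that pipeline concrete, here’s a minimal sketch of what one of those intervention events might look like as a record. Tesla hasn’t published its actual schema, so every name and field below is an assumption for illustration only.

        from dataclasses import dataclass, field
        from datetime import datetime

        @dataclass
        class InterventionEvent:
            """One driver take-over event, as described on Autonomy Day.
            All field names are hypothetical; Tesla's real schema isn't public."""
            vehicle_id: str
            timestamp: datetime
            latitude: float
            longitude: float
            autopilot_reengaged: bool  # driver handed control straight back to Autopilot
            camera_clips: list = field(default_factory=list)  # footage from each camera

        def worth_training_on(event: InterventionEvent) -> bool:
            # An intervention followed by a quick re-enable suggests the car
            # misjudged the situation, rather than the driver simply ending the drive.
            return event.autopilot_reengaged and len(event.camera_clips) > 0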

    Given the ridiculous number of incidents like this, they can’t be dealt with as a linear to-do list, worked through from start to finish, because you’d never finish. Instead, when the team is working on solving an issue, footage from similar environments is used to improve Autopilot’s ability to deal with that type of incident in the future.

    One example of this may be gutters. Across the world there’s a crazy number of variations in height, colour, angle and so on, so determining what is a gutter, and whether it can reliably be used as a lane line where painted lines aren’t present, is really important.
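
    As a rough illustration, a targeted data campaign like that might look something like the sketch below. The clip structure, tags and function are hypothetical; the point is simply that footage gets queried by scenario rather than worked through as one long queue.

        from collections import defaultdict
        from itertools import zip_longest

        def build_campaign(clips: list, scenario_tag: str, limit: int = 10_000) -> list:
            """Select intervention clips matching one scenario (e.g. 'gutter_edge'),
            round-robining across regions so the dataset covers gutters of every
            height, colour and angle, not ten thousand frames of one suburb."""
            by_region = defaultdict(list)
            for clip in clips:
                if scenario_tag in clip.get("tags", []) and clip.get("intervention", False):
                    by_region[clip.get("region", "unknown")].append(clip)
            # Interleave one clip per region at a time, then trim to the campaign size
            interleaved = [c for group in zip_longest(*by_region.values()) for c in group if c is not None]
            return interleaved[:limit]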

    This is where I think the Tesla community can help.

    While Tesla doesn’t disclose the size of the autonomy team, it’s a small fraction of their 50,000-strong workforce. With millions of Tesla owners in the world (a majority with HW3), it’s likely many of them would be willing to assist Tesla in their autonomous efforts. Right now there are also plenty of people with plenty of time on their hands, so now may be the perfect time to launch this.

    Tesla should develop an application (let’s call it Label at Home) where people use their computer at home to label data (video frames). That data would be sent back to Tesla for validation (important), helping to speed up the development of FSD and solve the many, many edge cases.
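
    To give a feel for the labelling side, here’s a minimal, hypothetical sketch of the client. The API endpoint, task format and field names are all invented for this article; no such service exists.

        import requests

        API = "https://labelathome.example.com/api"  # placeholder URL, not a real service

        def fetch_task(dataset: str) -> dict:
            """Ask the server for the next unlabelled frame in the chosen dataset,
            e.g. 'gutters' or 'roundabouts'."""
            resp = requests.get(f"{API}/tasks/next", params={"dataset": dataset})
            resp.raise_for_status()
            return resp.json()  # e.g. {"task_id": "...", "frame_url": "..."}

        def submit_labels(task_id: str, polygons: list) -> None:
            """Send the user's polygon outlines back for server-side validation.
            Each polygon is a list of (x, y) points in frame coordinates."""
            resp = requests.post(f"{API}/tasks/{task_id}/labels", json={"polygons": polygons})
            resp.raise_for_status()

        # Label one frame from the hypothetical 'gutters' dataset
        task = fetch_task("gutters")
        submit_labels(task["task_id"], [[(120, 400), (640, 380), (640, 420), (120, 440)]])

    Validation would then happen server-side, by cross-checking multiple people’s labels for the same frame before anything is trusted for training.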

    This is fairly laborious work, which is why I think we need to offer people some variety. Tesla could deliver an interface that lets people choose the dataset they want to work on, based on their own personal challenges (say, an issue on their commute) or a prioritised breakdown of what Tesla deems the most significant issues.

    This doesn’t have to be restricted to Tesla owners who’ve purchased FSD, or even the broader Tesla family (aspirational future owners); it could be open to anyone interested in the broader field of autonomous vehicles.

    Right now, maybe more than ever, the world realises the value of saving human lives, and as the world re-opens, the road toll will inevitably rise again. There’s only one solution that can actually end automotive deaths on our roads, and that’s autonomous vehicles.

    Given what we’ve seen in the past with Tesla fans offering to help with deliveries, there’s a tremendous amount of untapped goodwill, so this work wouldn’t have to be rewarded financially; most would be happy to do it for free. If Tesla really wanted to grow the audience for Label at Home, a new kind of referral program could be created as a reward for the effort put in.

    What are your thoughts on Label at Home and FSD? This is too important to just criticise Tesla for being late; that doesn’t help anyone. I’d much rather be productive and help with the solution.

    Jason Cartwright
    https://techau.com.au/author/jason/
    Creator of techAU, Jason has spent a dozen-plus years covering technology in Australia and around the world. Bringing a background in multimedia and a passion for technology to the job, Cartwright delivers detailed product reviews, event coverage and industry news on a daily basis. Disclaimer: Tesla Shareholder from 20/01/2021.

    11 COMMENTS

    1. Brilliant Jason. If this was at all possible and practical – I do note that you emphasize that Tesla would review all relevant submissions – I hope that Elon gets hold of your article.

    2. Massive problem with privacy concerns. I trust a Tesla employee under contract more than I trust some random stranger on his computer.

    3. The beauty of crowdsourced ML labelling is that you duplicate the same job many times, e.g. give the image to 5 people to mark the gutter and lane position. If they all match, there’s a strong chance it’s accurate. Over time you can also rank your humans’ skills and require less cross-checking (a rough sketch of this scheme follows the comments below).

      I’d love to see a live comparison between how I drive manually vs how the AI would drive. What percentage of the time was our lane position identical, and how did our braking points (regen points), starting reaction times etc. compare?

    4. Check out Tesla Dojo supercomputer. More info on this project later this year. Can’t wait! Pretty much taking over labeling for the entire fleet of Tesla cars.
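
    The cross-checking scheme described in comment 3 is straightforward to sketch. Below is a minimal, hypothetical version: every frame goes to several labellers, a label is accepted once enough of them agree, and each labeller carries a running trust score that could eventually reduce how much cross-checking their work needs. The matching rule and thresholds are illustrative assumptions, not anything Tesla has described.

        def labels_agree(a, b, tolerance=10.0):
            """Two polyline labels 'match' if every corresponding point is within
            `tolerance` pixels. A production system would likely use IoU instead."""
            return len(a) == len(b) and all(
                abs(p[0] - q[0]) <= tolerance and abs(p[1] - q[1]) <= tolerance
                for p, q in zip(a, b)
            )

        def consensus(submissions, required=4):
            """Given e.g. 5 labellers' polylines for one frame, return the first
            label that at least `required` submissions (itself included) agree
            with, or None if the frame needs expert review."""
            for candidate in submissions:
                if sum(labels_agree(candidate, other) for other in submissions) >= required:
                    return candidate
            return None

        def update_trust(trust, labeller_id, agreed):
            """Exponential moving average of how often a labeller matches consensus;
            highly trusted labellers could eventually skip some cross-checking."""
            score = trust.get(labeller_id, 0.5)
            trust[labeller_id] = 0.9 * score + 0.1 * (1.0 if agreed else 0.0)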
