Update: FSD Beta 9 has now gone live and is version 2021.4.18.12.
At midnight Friday PDT (5PM Saturday AEST), Tesla is scheduled to release the latest Full Self-Driving Beta software update to a group of around 2,000 early testers. This release is known as FSD Beta 9 and comes after months of delays, with FSD Beta 8.2 released way back on March 4th. Before that, builds were regularly released every few weeks, so the wait has significantly raised expectations for major improvements in v9.0.
When the FSD Beta first arrived in October 2020, many were blown away by Tesla’s ability to navigate city streets, taking left and right turns (indicating automatically), handling roundabouts and even driving on roads without lane markings. This was a step-change from what we’d experienced with the FSD Preview, available to owners who’ve paid for the software upgrade and future releases.
Since then refinements have been made, but there are still a number of things on the to-do list before FSD can be considered ‘feature complete’, something first expected by the end of 2020.
Moving to Vision-Only
We know from Andrej Karpathy’s recent talk at CVPR that Tesla’s FSD team has been working on moving to a vision-only approach. Controversially, this removes the use of the front-facing radar and instead uses the AI’s perception of depth in the scene across a number of frames to detect objects and respond accordingly.
This move to vision-only is a strategy to remove the noisy signal from the radar, which is often the cause of phantom braking. While not easy to achieve, if you can build a vision system and rely on that alone, you don’t have to spend engineering effort on the decision matrix of which system wins when a conflict occurs.
Imagine the radar detects an object ahead for a split second and sends an alert that the brakes should be applied to avoid hitting the object. Meanwhile, the vision system has visibility of the road ahead, sees no obstacle, and suggests it’s safe to proceed.
If you prioritise the radar’s warning out of an abundance of caution and apply the brakes hard, your customers will likely not be pleased, and you create a potential rear-impact event. A better outcome is to create a vision system capable of identifying solid objects ahead that would cause an issue. By tracking objects over a series of frames (read: time), objects that are getting closer grow larger in the image. They also have attributes that we as humans use to judge whether something is a potential threat or not: a tree branch you need to stop for; a chip packet you don’t.
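To make the idea concrete, here’s a minimal sketch of how tracking an object’s apparent size over frames can feed a braking decision. This is purely illustrative, not Tesla’s actual code: the function names, the simple time-to-contact formula and the class-based threat filter are all assumptions.

```python
# Illustrative sketch: judging whether an object ahead warrants braking by
# (a) its class (a branch is a threat, a chip packet isn't) and
# (b) how fast its apparent size is growing across frames.

def time_to_contact(prev_height_px: float, curr_height_px: float,
                    frame_dt: float) -> float:
    """Approximate time-to-contact from the growth rate of an object's
    apparent height between two frames: tau = h / (dh/dt)."""
    growth = (curr_height_px - prev_height_px) / frame_dt
    if growth <= 0:
        return float("inf")  # not getting larger, so not getting closer
    return curr_height_px / growth

def should_brake(heights_px: list[float], frame_dt: float,
                 threat_classes: set[str], obj_class: str,
                 tau_threshold: float = 2.0) -> bool:
    """Brake only if the object is a known solid threat AND it is
    approaching quickly (time-to-contact under the threshold)."""
    if obj_class not in threat_classes:
        return False  # e.g. a chip packet: ignore, don't phantom-brake
    tau = time_to_contact(heights_px[-2], heights_px[-1], frame_dt)
    return tau < tau_threshold

# A box growing from 44 to 50 pixels in one 36 fps frame is closing fast,
# so a branch triggers braking while a chip packet never does.
print(should_brake([44, 50], 1 / 36, {"branch"}, "branch"))
print(should_brake([44, 50], 1 / 36, {"branch"}, "chip_packet"))
```

The key point the sketch captures is that a single sensor reading never triggers braking on its own: the decision combines what the object is with how it behaves over time.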
To make these determinations about objects in our path, Tesla takes the inputs from the cameras on board the car and processes those video feeds using the on-board FSD chip known as Hardware 3, revealed back at Autonomy Day in April 2019. This chip runs the FSD software, which is essentially a giant inference engine, inferring that object X is a dog and is likely to continue on a given trajectory, based on information in the model.
That model is created using billions of kilometres of training data from Tesla’s fleet of more than a million cars, which is processed by Tesla’s Data Engine, running on one of the world’s largest supercomputers. With more data, the model gets smarter and more capable, and having been iterated on for months out of sight, we’re now expecting big things in terms of its ability to identify the world around Tesla vehicles.
FSD Beta 9 should see Teslas able to operate more confidently, making decisions on turns into traffic and lane merges more smoothly and with less hesitation, while still leaving a buffer for safety.
Traditionally Tesla has processed video input from the cameras as individual frames. In FSD Beta 9, Tesla will move to constructing a ‘surround video’ of the car, which will see images from all 8 cameras around the car stitched together. This gives the path-planning algorithm the ability to make decisions based on inputs from all directions, rather than having individual camera feeds (like from the B-pillar) flag an issue that then needs to be accommodated in the car’s direction of travel.
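The benefit of a single surround view can be sketched in a few lines: detections from each camera are mapped into one vehicle-centred coordinate frame, so the planner reasons about one picture of the surroundings rather than eight separate feeds. The camera names and fixed yaw offsets below are assumptions for illustration, not Tesla’s real configuration.

```python
# Illustrative sketch: fusing detections from multiple cameras into a
# single vehicle-centred frame (x forward, y to the left).
import math

# Assumed mounting yaw of each camera, in degrees clockwise from forward.
CAMERA_YAW_DEG = {
    "main_forward": 0,
    "left_b_pillar": -90,
    "right_b_pillar": 90,
    "rear": 180,
}

def to_vehicle_frame(camera: str, bearing_deg: float, range_m: float):
    """Convert a detection (bearing within a camera's view plus range)
    into x/y coordinates relative to the car."""
    yaw = math.radians(CAMERA_YAW_DEG[camera] + bearing_deg)
    return (range_m * math.cos(yaw), -range_m * math.sin(yaw))

def fuse(detections):
    """detections: list of (camera, bearing_deg, range_m, label).
    Returns one combined list of vehicle-frame obstacles, so the
    planner never has to care which camera saw what."""
    return [(label, *to_vehicle_frame(cam, b, r))
            for cam, b, r, label in detections]
```

With every obstacle expressed in the same coordinates, a cyclist seen by the B-pillar camera and a car seen by the forward camera land in one shared map the planner can act on directly.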
Tesla will also leverage information across a number of frames to give context to decisions. Doing this helps the vision system correctly identify objects and map potential trajectories, which results in a much more accurate outcome than making decisions from single frames.
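One common way to use multi-frame context is to smooth per-frame classifications over time, so a single noisy frame can’t flip what the system believes an object is. The exponential-moving-average approach below is a generic technique chosen for illustration; it is not a claim about Tesla’s actual pipeline.

```python
# Illustrative sketch: smoothing per-frame class confidences over time so
# one noisy frame doesn't override many consistent ones.
from collections import defaultdict

class TemporalSmoother:
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha                 # weight given to the newest frame
        self.scores = defaultdict(float)   # class label -> smoothed confidence

    def update(self, frame_scores: dict[str, float]) -> str:
        """Blend this frame's confidences into the running estimate and
        return the currently most likely class."""
        for cls in set(self.scores) | set(frame_scores):
            new = frame_scores.get(cls, 0.0)
            self.scores[cls] = (1 - self.alpha) * self.scores[cls] + self.alpha * new
        return max(self.scores, key=self.scores.get)

smoother = TemporalSmoother(alpha=0.3)
smoother.update({"dog": 0.8})
smoother.update({"dog": 0.8})
# A single noisy "cat" frame doesn't override two consistent "dog" frames.
print(smoother.update({"cat": 0.9, "dog": 0.2}))
```

The same idea extends to trajectories: a track built over many frames is far more stable than a position estimated from any one of them.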
One of Tesla’s most impressive technologies is its ability to perceive the depth of objects from vision alone, which is key to its strategy of avoiding the use of Lidar.
Having this ability to better understand what’s around the car, all at once, should see Tesla be capable of some new things. Don’t be surprised if we see the ability for cars to begin recognising parking bays and gutters far better than before, both of which are critical to the ability to execute on Smart Summon.
While it hasn’t been a key area of focus, part of delivering autonomous driving, and ultimately robotaxis, is the capacity for the car to drop you off and go park itself. Until this feature is in place, it’s really not possible for Tesla to call FSD ‘feature complete’.
Improved FSD screen, attempting to show the ‘mind’ of the car
Elon also announced that FSD Beta 9 will feature an improved FSD screen and show the ‘mind’ of the car. Expect this to correspond to a completely new visualisation of the world around the car. In the FSD Beta releases, we’ve seen almost a developer view that suggests the car recognises the multiple lanes, signs, lights, people, cars etc. around it.
We’ve known that this is not going to be the UI presented to the end user, so FSD Beta 9 should reveal the design direction Tesla is going in. The interface in the car is important in that it gives drivers (and passengers) confidence that, as we move to more autonomy, the car is ‘seeing’ the world around it and understanding where it can move safely: basically, what is drivable space and what is not.
Once FSD Beta 9 reaches the limited-release group of around 2,000 customers, Musk expects we’ll see a wider release, likely to the broader early access program group, in around a month or so. Following that, everyone who has purchased FSD should be able to get access to it, assuming the development and feedback go well.
We are now in early July, so there are just under six months for Elon to reach the benchmark he set in January: that FSD would be safer than the average driver this year.
As a reminder, Tesla’s website currently lists FSD Capability as offering the following features for A$10,100.
- Navigate on Autopilot
- Auto Lane Change
- Full Self-Driving Computer
- Traffic Light and Stop Sign Control
- Autosteer on city streets
That last one is the main focus for FSD Beta, which, when linked with Smart Summon for low-speed environments and Navigate on Autopilot for high-speed driving, brings us pretty close to having the pieces in place for the car to drive itself.
As per the disclaimer on the FSD purchase page, drivers are still responsible for the vehicle and should continue to monitor the system, particularly while it’s still in beta.