
Eady Capital

Summary

  1. Tesla's upcoming 3D update will bring a sudden increase in AI-assisted driving functionality.
  2. Analysts and investors who have failed to realize Tesla's competitive advantage in autonomous driving technology are in for an abrupt awakening.
  3. Financial models still mostly treat Tesla like a conventional automaker, but that framing will soon no longer be appropriate.

Analysts and investors systematically underrate Tesla's (TSLA) competitive position in autonomous driving. Assessments of the competitive landscape lean far too heavily on qualitative judgments of autonomous vehicle performance in demo videos, as well as other PR, marketing, and branding exercises. Sell-side analysts assigned to Tesla have historically been mostly auto analysts without the time or obsessiveness to dig deeply into deep learning and robotics. This means Tesla's autonomy advantage is underpriced, or simply going unpriced, in the market.

In the three years I've been writing about Tesla, I've been banging the table, insisting that scale of data matters above almost all else. I've also argued again and again that it is a mistake to assume that Tesla's progress on autonomy will be smooth and continuous, rather than lumpy and punctuated by fits, starts, and plateaus. I surmise that deep learning R&D has two phases. The data collection and labeling phase can be relatively rapid. The speed depends on the number of robots a company has in the wild and the number of workers it employs to label data.

Then there is the plodding, unpredictable phase wherein AI scientists and engineers endeavor to build a system that ingests labeled data and outputs useful robot behaviors, such as steering, accelerating, and braking at the right times and in the right magnitudes. Getting data is like filling up a gas tank. The AI scientists' and engineers' job is like building an engine. From the odometer's perspective, progress can be nothing, nothing, nothing, and then, in an instant, the ignition switch is turned and the car zooms off.

One remarkable example of non-linear progress in AI is OpenAI's breakthrough on the classic video game Montezuma's Revenge. This graph tells it all:

In this case, data was collected from the game and labeled automatically, so the dataset-creation phase was much faster than the science and engineering phase. Moreover, overall progress on Montezuma's Revenge in the AI community was the opposite of smooth and continuous.

Tesla has over 900,000 robots on the road. All its competitors combined have fewer than 2,000 robot vehicles in the United States, and, given that most testing occurs in the U.S., the global total is likely not much higher. The performance of deep neural networks scales predictably with the amount of training data, such that a data advantage of this magnitude could yield anywhere from a 2x to 30x performance advantage.
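To make the scaling intuition concrete, here is a minimal sketch, assuming test error follows a power law in dataset size (a common empirical finding); the fleet counts are the approximations above, and the exponents are hypothetical, chosen to span the 2x to 30x range:

```python
# Illustrative sketch only: estimate the performance gap implied by a data
# gap, assuming test error follows a power law in dataset size,
# error ~ N^(-beta). The exponents below are hypothetical placeholders;
# real values vary by task, model, and data quality.

tesla_fleet = 900_000      # approximate Teslas collecting data
competitor_fleet = 2_000   # approximate competitor test vehicles
data_ratio = tesla_fleet / competitor_fleet  # ~450x more data sources

for beta in (0.11, 0.25, 0.55):  # hypothetical power-law exponents
    performance_ratio = data_ratio ** beta
    print(f"beta={beta:.2f}: ~{performance_ratio:.0f}x lower error")
```

The point is not the particular exponents but that, under power-law scaling, even a modest exponent turns a roughly 450x data advantage into a multiple-fold performance advantage.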

A major caveat is that data needs to be labeled, either manually or automatically. As I've written about extensively, Tesla has many promising options for automatically labeling data and for using its vast fleet of cars to make manual labeling far more efficient (that is, to get far more neural network performance out of the same amount of human labor). The challenge for Tesla's AI scientists and engineers is to pursue those options and make them work as well at commercial scale as they do in academic proofs of concept. In other words, to build an engine that can run on the ample fuel available.
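To illustrate the flavor of automatic labeling a large fleet enables, consider a hypothetical sketch (not Tesla's actual pipeline): a "cut-in" label for a neighboring car can be read off from what actually happened a few seconds later, with no human annotator involved.

```python
# Hypothetical sketch of fleet-based auto-labeling, not Tesla's actual
# pipeline: label "did this neighboring car cut into our lane?" by checking
# what the car actually did over the next few seconds. The future outcome
# supplies the label, so no human annotation is needed.

def auto_label_cut_in(track, horizon_s=3.0, lane_half_width_m=1.8):
    """track: list of (t_seconds, lateral_offset_m) samples for a
    neighboring car, with lateral offset measured from the center of the
    ego lane. Returns 1 if the car entered the ego lane within the time
    horizon, else 0."""
    t0, _ = track[0]
    for t, lateral_offset in track:
        if t - t0 > horizon_s:
            break
        if abs(lateral_offset) < lane_half_width_m:
            return 1  # it ended up in our lane: positive "cut-in" label
    return 0

# Example: a car drifting from the adjacent lane into ours over ~2 seconds.
track = [(0.0, 3.5), (0.5, 3.0), (1.0, 2.4), (1.5, 1.7), (2.0, 1.1)]
print(auto_label_cut_in(track))  # 1
```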

Going 3D

Apparently, the Tesla AI team's most significant work right now is shifting from a 2D paradigm in computer vision to a 3D paradigm. Elon Musk first described the concept on the Third Row Tesla podcast:

Musk recently elaborated on the work involved in making this shift:

Whole Mars (@WholeMarsBlog), Jul 2, 2020, replying to @elonmusk:
“how’s the autopilot rewrite going? you guys calling it PlaidNet?”

Elon Musk (@elonmusk):
“Going well. Team is kicking ass & it’s an honor to work with them. Pretty much everything had to be rewritten, including our labeling software, so that it’s fundamentally ‘3D’ at every step from training through inference.”

Tesla Owners East Bay (@TeslaOwnersEBay), Jul 2, 2020, replying to @elonmusk:
“Any updates on reverse summon?”

Elon Musk (@elonmusk):
“A lot of functionality will happen all at once when we transition to the new software stack. Most likely, it will be releasable in 2 to 4 months. Then it’s a question of what functionality is proven safe enough to enable for owners.”
What does it mean to transition from 2D to fundamentally 3D? As best I can surmise, it's about how sensor data is represented to neural networks.

LiDAR builds a 3D representation of surrounding objects by recording a point wherever a laser pulse hits an object and returns to the sensor. This 3D representation of the world is known as a point cloud. It looks like this:
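In code, the geometry behind each point is simple: a laser return is a measured range along a known beam direction, which converts to Cartesian coordinates. A minimal sketch with made-up values:

```python
import math

# Minimal sketch: how a single LiDAR return becomes one point in a point
# cloud. The sensor measures a range along a beam with known azimuth and
# elevation angles; spherical-to-Cartesian conversion gives (x, y, z).
# All values below are made up.

def lidar_return_to_point(range_m, azimuth_rad, elevation_rad):
    """Convert one range measurement to an (x, y, z) point in meters."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# A point cloud is just many such points, one per returned pulse.
print(lidar_return_to_point(25.0, math.radians(30), math.radians(-2)))
# -> roughly (21.6, 12.5, -0.9)
```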

In 2018, computer vision researchers from Cornell University (including Yan Wang and Wei-Lun Chao) published a preprint showing that cameras can be used to create 3D point clouds and, more importantly, that this form of representation, rather than 2D images, improves neural networks' ability to estimate depth from camera input. Cameras can derive points through stereo vision, which is also how humans and some other mammals perceive depth at certain distances. It turns out that what makes LiDAR so effective is not just the lasers but also the common practice of representing LiDAR input as point clouds. Wang et al. called the approach of using cameras to generate point clouds "pseudo-LiDAR." Since 2018, other researchers have built on this work.
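The core transformation behind pseudo-LiDAR is straightforward: once a depth map has been estimated from camera images (e.g., from stereo disparity), every pixel can be back-projected into 3D using the camera intrinsics. Here is a minimal sketch, assuming a pinhole camera model; the intrinsics and the tiny depth map are made-up placeholders:

```python
import numpy as np

# Minimal pseudo-LiDAR sketch: back-project an estimated depth map into a
# 3D point cloud via the pinhole camera model. Intrinsics (fx, fy, cx, cy)
# and the depth values are made-up placeholders, not any real system's.

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx  # horizontal offset scales with depth
    y = (v - cy) * depth / fy  # vertical offset scales with depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# With stereo cameras, depth itself comes from disparity:
# depth = focal_length * baseline / disparity.
depth = np.full((4, 6), 10.0)  # pretend every pixel is 10 m away
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=3.0, cy=2.0)
print(cloud.shape)  # (24, 3): one (x, y, z) point per pixel
```

A downstream 3D object detector can then consume this point cloud much as it would consume real LiDAR points, which is why the representation, not the sensor, does much of the work.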

Today, pseudo-LiDAR and associated approaches like pseudo-LiDAR++ and ViDAR (which stands for visual LiDAR or video LiDAR) are within spitting distance of matching LiDAR on 3D object detection and depth estimation, at least on certain academic benchmarks. In my understanding, pseudo-LiDAR++ is about one-third as accurate as LiDAR on the popular KITTI Vision benchmark.

Earlier this year, Tesla's Senior Director of AI, Andrej Karpathy, publicly disclosed that Tesla is working on a pseudo-LiDAR approach to depth estimation. Going back to Autonomy Day in 2019, Karpathy gave a demonstration of the 3D depth information that can be obtained via stereo vision:

In his most recent talks, Karpathy has shared a glimpse of the accuracy gains from 3D representations vs. 2D representations. These visualizations show curb detection, with the "ground truth" (presumably from LiDAR) on the left, camera-based detection using 2D representations on the right, and camera-based detection using 3D representations in the middle:

If this one qualitative result is truly representative of Tesla's performance gains across the board, then the 3D update will surely bring a vast improvement to Tesla's AI-assisted driving software. From an outside perspective, this improvement will appear sudden and discontinuous.

Here comes the money

When it comes to Tesla and autonomy, most analysts and investors take an "I'll believe it when I see it" approach. That's their prerogative, but, in my view, it means underestimating Tesla's earnings and cash flow in 2021 and beyond. Before full autonomy is within reach, Tesla will continue to press forward with heavily AI-assisted driving. Already, Teslas can automatically stop at traffic lights and stop signs:

AI-assisted driving is arguably Tesla drivers' favorite feature and the most obvious differentiator between Teslas and other vehicles. Competing automakers have been incredibly slow to implement something as basic as wireless software updates, and, as far as I'm aware, none has publicly announced plans to build a deep learning pipeline around its production cars the way Tesla has. In 2021 and beyond, I believe differentiated software will create even more demand for Tesla's vehicles.

For a Model 3 Standard Range Plus with no other add-ons, the $8,000 "Full Self-Driving Capability" option is 17% of the purchase price. That is high-margin revenue since the marginal cost of downloading software is negligible. Musk has repeatedly said that the price will increase as more functionality is added. Moreover, Tesla has plans to sell the software as a monthly subscription. This will surely expand the customer base.
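As rough, back-of-the-envelope arithmetic (the base price is an approximation, and the subscription price is a made-up placeholder, since Tesla has not announced one):

```python
# Back-of-the-envelope sketch of FSD's share of the purchase price and how
# a subscription might compare. The base price is approximate for a Model 3
# Standard Range Plus in mid-2020; the monthly price is a hypothetical
# placeholder, since Tesla has not announced one.

base_price = 37_990   # approximate Model 3 SR+ price, USD
fsd_price = 8_000     # "Full Self-Driving Capability" option, USD

fsd_share = fsd_price / (base_price + fsd_price)
print(f"FSD share of purchase price: {fsd_share:.0%}")  # ~17%

monthly_sub = 100     # hypothetical subscription price, USD/month
print(f"Months to match the upfront price: {fsd_price / monthly_sub:.0f}")
```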

Growing revenue from AI-assisted driving software means analysts should think about Tesla's prospective margins beyond an apples-to-apples comparison to industry peers. Moreover, Tesla's software advantage warrants bullish growth assumptions.

Adam Jonas from Morgan Stanley (MS) recently published a $2,070 bull case for Tesla based on a forecast of 6 million vehicle sales in 2030. In my view, this level of sales volume makes sense, given that Tesla is stepping into the gray area between being a car company and being an AI and robotics company. It isn't clear to me how any competitor is going to merge auto-manufacturing competency with software and AI competency fast enough to slow Tesla's current trajectory of hypergrowth. As such, even at roughly $1,400, I still consider the stock to have significant upside.

We also might think beyond vehicle sales. After the 3D update rolls out to customers, I predict that more analysts and investors will start thinking seriously about robotaxis. Given the uncertainty, it's difficult to know how to price the robotaxi opportunity. However, private market investors have managed to do this with Waymo (GOOG, GOOGL) and Cruise (GM). Why can't the public market do it with Tesla?