MARKET WATCH: What’s Happening? The auto industry is in the headlights following two recent fatalities caused by autonomous and semiautonomous vehicles. But the industry isn’t backing off its ambitious autonomous vehicle timeline: Several automakers and tech companies say that consumers will be able to hail a driverless taxi within the next couple of years, with plenty more promising to have these vehicles in production by 2021.
Our Take: Cracks are beginning to emerge in this roadmap. It will in fact be many years, even decades, before we see driverless cars on the road. Why? Three main reasons: The technology isn’t ready, the regulations aren’t ready, and—perhaps most importantly—the public isn’t ready. Autonomous vehicles will be relegated to use in limited niche scenarios for the foreseeable future.
March 18 marked a grim milestone for the automotive industry: That night in Tempe, Arizona, a pedestrian was struck and killed by a fully autonomous Uber test vehicle, the first such fatality in the United States. Less than a week later, a Tesla Model X driver crashed and died on California’s Bayshore Freeway while using the car’s semiautonomous Autopilot feature.
The incidents are causing trepidation among consumers who wonder what went wrong and whether autonomous and semiautonomous vehicles put people’s lives at risk.
The industry, however, remains undaunted. Several automakers and tech companies promise to have self-driving fleets of taxis on the road servicing consumers within the next couple of years. Others plan to have autonomous vehicles in production by the early 2020s. None are backing off in the wake of the accidents.
But in private, even industry insiders will tell you that the general public won’t be driving fully autonomous vehicles until sometime after 2030. It’s a startling confession, one that reveals how much equity valuations and Wall Street adrenaline, rather than dispassionate analysis, are setting the tempo of the driverless agenda.
Not much is yet known about the Tesla crash. At first glance, it appears to mirror the 2016 collision that claimed the life of Tesla driver Joshua Brown. The latest victim, 38-year-old Walter Huang, crashed into a concrete divider after reportedly ignoring several warnings from the car’s Autopilot system.
Much of what we know about the Uber crash comes from a disturbing video released by local police that shows the victim, 49-year-old Elaine Herzberg, emerging from the darkness into the path of the oncoming vehicle. Uber’s modified Volvo XC90 did not appear to slow down before impact. A separate camera angle showing the vehicle’s interior reveals that the human test operator behind the wheel was looking downward for several seconds immediately prior to the crash, rendering her unable to brake in time.
But why was human intervention necessary? Uber’s test vehicles are outfitted with at least three different detection systems designed to prevent exactly this type of tragedy. The most advanced of these systems is a top-mounted LiDAR unit that provides a three-dimensional, 360-degree scan of the surrounding environment. (Velodyne, manufacturer of the LiDAR used in Uber’s test cars, disclaims any blame for the accident.) Overlapping with the LiDAR system are front- and rear-facing conventional radar units that provide 360-degree coverage. Additionally, each vehicle is equipped with short- and long-range visible light cameras designed to help interpret and classify visible objects.
Experts doubt that the vehicle’s detection systems were at fault. Although LiDAR and radar sensors can sometimes be obscured by weather conditions like rain and snow, the crash occurred on a clear evening. Darkness was not a factor, either: Radar functions equally well at night, while LiDAR actually functions better at night because there’s no possibility of a glare causing interference.
The fault instead probably lies with the “brain” of the vehicle, the software and algorithms responsible for interpreting the data captured by the detection systems. Raj Rajkumar, professor of electrical and computer engineering at Carnegie Mellon, says that the object recognition systems may have been confused “because it was an interesting combination of bicycle, bags, and a pedestrian.”
THE IMMEDIATE AFTERMATH
A few autonomous testing programs have come to a screeching halt in the wake of the crashes. Uber has ceased its testing operations indefinitely in Arizona, Pittsburgh, San Francisco, and Toronto. Likewise, Toyota has pressed pause on its own “Chauffeur” autonomous testing program out of concern for the mental state of its human test operators.
Lawmakers in some locales aren’t waiting for companies to act. Boston officials have asked two local autonomous vehicle startups, Optimus Ride and nuTonomy, to halt their programs temporarily. And in a significant policy reversal, Arizona Governor Doug Ducey has suspended Uber’s autonomous testing program throughout the state. Arizona has long been a haven for autonomous testing because of its favorable weather conditions, miles of open road, and laissez-faire approach to tech regulation.
THE INDUSTRY PLAYS THE LONG GAME
But automakers and tech companies won’t be deterred. Plenty of invested capital is riding on the impending autonomous vehicle revolution.
General Motors has been the most vocal—and the most ambitious—about its driverless aspirations. In December, the company announced plans to deploy a large-scale fleet of driverless taxis in big cities by 2019 (presumably with the help of Lyft, into which GM invested $500 million back in 2016). GM is also hyping up its fourth-generation Cruise AV, which will strip away such “nonessentials” as the steering wheel and pedals. (Planned street-ready date: 2019.) GM CFO Chuck Stevens believes that an autonomous vehicle service could achieve 20% to 30% profit margins by 2025, citing a “total addressable market of several hundreds of billions of dollars.”
Driverless taxis are already a reality for Waymo, a subsidiary of Alphabet, whose early riders program recently became the first-ever ridesharing service to operate without a human behind the wheel. (Only in Phoenix for now.) Waymo also has set its sights on the high-end market: The company plans to roll out 20,000 modified Jaguar I-PACE autonomous vehicles over the next few years. Company executives are unfazed by the Uber crash, with Waymo CEO John Krafcik stating, “We’re very confident that our car could have handled that situation.”
Ford (through its autonomous vehicle arm, Argo) also is targeting the driverless taxi market: The company aims to have a fully autonomous service ready for consumers by 2021. Interestingly, the company is collaborating with Lyft on technology that would let consumers hail a Ford car, which could lead to a tug-of-war with GM over Lyft’s services. Additionally, Ford recently filed a patent for an autonomous police car that could monitor traffic, comb local traffic law databases, and even communicate with nearby vehicles.
On the next rung of the ladder is the “me too” crowd, consisting largely of firms that have made only vague plans concerning autonomous vehicles. Toyota is forming a $2.8 billion research arm to help further its autonomous vehicle agenda and reportedly has been working with Uber on a secretive joint autonomous vehicle effort. Publicly, however, the firm has only promised that it will have highway-capable autonomous vehicles “in production” by 2020. Automakers like Honda, Hyundai, and Renault-Nissan have also set 2020 as their production deadline for highway-capable autonomous vehicles.
And then there are the many automakers—from Audi to BMW to Tesla to Volvo—that have taken a more gradual approach by adding so-called “semiautonomous” features to their vehicles. With semiautonomy, the story goes, you don’t have to give up full control to a machine; you can simply outsource the most monotonous driving tasks (like inching along during a traffic jam or cruising on the highway) to a robot assistant.
On the surface, it seems as if automakers are all-in on bringing autonomous vehicles to market. And maybe they are. But here’s the industry’s dirty little secret: Few experts believe that it’s going to happen anywhere near as quickly as the PR and IR departments would have you believe. Get auto executives alone in a room, and they will tell you as much.
Cue Bloomberg New Energy Finance’s Future of Mobility Summit in Palo Alto, a gathering that featured many of the industry’s movers and shakers. Attendees were asked via live poll to estimate when consumers will actually be able to buy fully autonomous vehicles. The answer was damning: Nearly 75% said the milestone won’t be reached before 2030.
Tech realists have long known this secret—their voices have simply been drowned out by the breathless hype.
Duke University engineering professor Mary Cummings says that we’re 15 to 20 years away from a vehicle that “operates by itself under all conditions, period.” Likewise, author and leading robotics expert David A. Mindell says that his skepticism of a driverless-car takeover is backed up by history. In his 2015 bestseller, Our Robots, Ourselves, Mindell points out that, time and time again, full autonomy rarely turns out to be the answer. “That’s an empirical argument based on everything we’ve seen in the last 40 years of autonomous systems,” he explains. “People are always thinking that full autonomy is just around the corner. But there are 30 to 40 examples in the book, and in every one, autonomy gets tempered by human judgment and experience.”
Why exactly is full autonomy a long way off? Here are three reasons.
The technology isn’t ready. The most obvious problem has to do with the technology powering these vehicles.
It’s troublesome that this far into the autonomous vehicle timeline, we’re still seeing examples of basic technology failures. The fatal Uber crash wasn’t some “edge case” (an industry term referring to tricky, unpredictable scenarios). Seeing and identifying a pedestrian in the middle of the road, and responding accordingly, is at a minimum what these machines are designed to do.
In the past two years alone, autonomous vehicles have been caught running red lights, veering dangerously close to pedestrians, and causing accidents. And that’s not counting the dozens of times that these vehicles would have crashed had a human test operator not intervened. That’s the problem with going “full auto.” If anything goes wrong—whether it’s a faulty sensor, a design oversight, or even a software glitch that freezes the system for a millisecond—there’s no human at the wheel to take over.
This point will be sure to rile up the techno-optimists, who will respond with two counterarguments. The first: Driverless cars should be on the road because they’re safer than human-driven cars. This Silicon Valley-esque viewpoint is a favorite of tech CEOs like Elon Musk, who says that anyone who’s skeptical of self-driving cars is “killing people.” The second: The technology is improving rapidly every day and therefore will be ready soon.
To the first point: Even if driverless cars were safer, people still wouldn’t be comfortable driving them. (More on that later.) And there’s no real way to tell if they are safer in the first place.
Sure, the data collected by these firms look impressive. The industry’s favorite statistic is the so-called “miles per disengagement” rate, which registers the average number of miles that a vehicle is able to drive on its own, without help from a human. According to data provided to the California DMV, Waymo vehicles were able to drive more than 5,000 miles between disengagements, while Cruise (GM’s autonomous unit) vehicles were able to drive 1,250 miles between disengagements.
But the ones defining and recording these “disengagements” are the companies themselves—and as you can imagine, they’re not exactly impartial observers. Google lobbied hard against the California rule requiring companies to report their disengagements. In 2015, once compelled by law to do so, Google reported 341 total disengagements. Google acknowledges that, in reality, human operators took control “many thousands of times.” But the company doesn’t count incidents in which it felt that the machine could have handled the situation. Nissan, likewise, says that it doesn’t count disengagements during the start or end of a test run—in other words, the toughest part.
And that’s not even factoring in the test conditions: How many of the nearly 500,000 miles that Google and GM drove in California last year were truly representative of an average driver’s experience, complete with city driving, bad weather, and bad roads? We’re guessing not many.
Even if you believe the most bullish statistics available, they still don’t paint a promising picture. Let’s take the most optimistic—Waymo’s 5,000-miles-between-disengagements rate—and translate that out over an entire population. If the average driver travels roughly 10,000 miles per year, and 100 million drivers adopt Waymo (less than half of all licensed drivers), that’s 200 million disengagements a year. If even one in a thousand of these disengagements causes a fatality, that’s 200,000 fatalities a year. News flash: That’s not progress.
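The back-of-envelope math above can be sketched out explicitly. Every input here is an assumption taken from the scenario in the text (assumed adoption, assumed mileage, and an assumed fatality fraction), not measured data:

```python
# Back-of-envelope sketch of the disengagement scenario described above.
# All inputs are illustrative assumptions from the text, not real-world data.

MILES_PER_DISENGAGEMENT = 5_000    # Waymo's best reported rate
MILES_PER_DRIVER_PER_YEAR = 10_000 # assumed average annual mileage
DRIVERS = 100_000_000              # assumed adoption (< half of licensed drivers)
FATAL_FRACTION = 1 / 1_000         # assumed share of disengagements that prove fatal

total_miles = DRIVERS * MILES_PER_DRIVER_PER_YEAR          # 1 trillion miles/year
disengagements = total_miles / MILES_PER_DISENGAGEMENT     # 200 million/year
hypothetical_fatalities = disengagements * FATAL_FRACTION  # 200,000/year

print(f"Disengagements per year: {disengagements:,.0f}")
print(f"Hypothetical fatalities per year: {hypothetical_fatalities:,.0f}")
```

The point of the exercise is sensitivity: even a tenfold improvement in the disengagement rate, or a far smaller fatality fraction, still leaves a death toll in the thousands under these assumptions.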
What about the counterargument that the technology is improving all the time? Just like a human driver, an autonomous vehicle “learns” from experience. (OK, it may have to learn at your expense—but be brave here. We’re talking about the greatest good of the greatest number.)
All this is true. But because of the nature of edge cases, progress becomes exponentially tougher the closer you get to 100% perfection. By now, driverless vehicle engineers have gathered most of the low-hanging fruit—the “easy” scenarios like reading road signs, identifying other vehicles, and tracking lanes in good weather. Most of the remaining progress requires getting machines to make sense of situations in which rules disappear and higher-order human judgment takes over.
We’re talking about encountering bad weather; or driving in construction zones, strip malls, and parking garages; or figuring out ambiguous lane markings or traffic signals; or deciphering strange objects and anticipating the movements of animals, children, and other adult pedestrians and drivers. (See “Driverless Cars: Unsafe at Any Speed?” in which we discuss these challenges at greater length.)
Humans navigate most of these situations effortlessly, even unconsciously. A human may drive more cautiously on New Year’s Eve, knowing that there could be drunk drivers on the road. A human may slow down while driving in a neighborhood known to have children, even if no children are visible. An autonomous vehicle has no such innate awareness and must be programmed for each and every possible edge case. Slate contributor Samuel English Anthony sums it up nicely: “Yes, people sometimes misunderstand one another’s intentions on the road. Still, people have an intuitive fluency with this kind of social negotiation. Self-driving cars lack that fluency, and achieving it will be incredibly difficult.”
Edge cases (usually) end up OK when there’s a human test operator ready to spring into action. But as Uber CEO Dara Khosrowshahi stated plainly late last year, “[W]ith autonomy, the edge cases kill you.”
The regulations aren’t ready. Technology isn’t the only problem. Another barrier is the lack of uniform laws and regulations: Legislators haven’t yet thought through the logistical challenges posed by autonomous vehicles.
Currently, 33 states have either enacted or introduced legislation dealing with autonomous vehicles, with the particulars varying widely from state to state. Some states attempt to attract autonomous vehicle companies by exempting their operations from various rules of the road: In Georgia, for example, human operators of autonomous test vehicles don’t even have to hold a driver’s license. Other states do no more than define key terms, such as “autonomous technology.” Still others, like Arizona, have no formal regulations whatsoever regarding autonomous vehicles—a huge plus for companies seeking a safe testing haven. (Sure, they also like Arizona’s near-perfect weather, long straight freeways, and low population density.)
This lack of uniformity hasn’t stopped the House from passing a bill that would permit the deployment of as many as 100,000 autonomous vehicles annually in the United States within the next three years. The bill would also empower the federal government to exempt autonomous vehicle companies from certain safety standards.
The aforementioned bill is now stuck in the Senate thanks to opposition from critics like California Senator Dianne Feinstein (who, notably, has seen autonomous vehicles operating firsthand in her home state). And in the wake of the recent crashes, legislators may need to regroup. While members of Congress want to please the lobbyists representing Detroit and Silicon Valley, they want to get re-elected, too.
The widespread rollout of autonomous vehicles will require more than just a few parameters set down by lawmakers. It will require a fundamental rethinking of the legal framework surrounding the auto industry.
Once autonomous vehicles hit the roadways, our age-old system of personal injury tort law will have to be supplemented by product liability law. Today, if you are injured by a human driver, you can try to sue them. But if you are injured by a vehicle with no driver at the wheel, who then is liable? The automaker? The manufacturer of whatever piece of software or hardware “caused” the accident? Or maybe nobody, if automakers succeed in getting consumers to sign contracts waiving their right to sue in the event of an accident. (If you thought people were upset about forced arbitration clauses in credit card contracts, try telling the family of a car crash victim that their loved one inadvertently signed away their right to legal recourse.) In short, the auto claims process, which underwriters have meticulously honed over decades of experience, will have to be completely revamped.
These logistical issues are already cropping up: A motorcyclist is suing GM over a December 7 collision with one of its autonomous vehicles. The motorcyclist’s lawyer referred to the car’s actions as “unpredictable and dangerous.” Who’s to say? Even if it followed the letter of the law, the vehicle may have behaved differently than a human driver would have, which could indeed seem unpredictable. So long as there are still human drivers on the road, these issues aren’t going away.
The public isn’t ready. Even in the (unlikely) event that autonomous vehicle companies clear the technological and regulatory hurdles, there is one more barrier left: the general public. Most Americans simply aren’t ready for autonomous vehicles.
According to a May 2017 Pew Research survey, more than half of U.S. adults (54%) say they are either “somewhat” or “very” worried about the development of driverless vehicles, compared to just 40% who say they are at least somewhat enthusiastic. Additionally, 56% of adults say they would not personally want to ride in a driverless vehicle, compared to 44% who say they would want to do so. A separate survey commissioned by the Advocates for Highway and Auto Safety finds that most consumers (64%) are worried about even being on the road with driverless cars.
These survey results echo previous findings suggesting that consumers are skeptical of autonomous vehicles. A 2016 University of Michigan survey found that, if given the choice, just 16% of consumers would prefer full autonomy in their personal vehicle—compared to 46% who would prefer a non-autonomous vehicle and 39% who would prefer a semiautonomous vehicle.
Ordinary consumers aren’t even convinced that autonomous vehicles will be a net positive for society. Fully 61% of Pew survey respondents believe that the number of people killed or injured in traffic accidents will either increase or stay about the same if driverless vehicles become widespread, compared to 39% who believe that traffic fatalities and injuries will decrease. Keep in mind as well that these polls were taken before the Uber crash.
Part of the skepticism stems from our inability to process seemingly random twists of fate that take lives. In a car accident caused by a human, there is usually a logical, even relatable, explanation—like texting and driving. However tragic, traffic fatalities are at least something we’ve learned to understand.
With a car accident caused by an autonomous vehicle, there is no such understanding. In the wake of any such accident, people will be wondering how it happened. Did the recognition software freeze for a second? Was there some sort of fatal flaw in the vehicle’s design? Or did the vehicle do a cost-benefit analysis and effectively choose to kill one person over another? We may never know. By their very nature, machine-learning algorithms cannot reveal their intentionality. Not even the algorithm’s designer can say why it “chose” action A instead of action B. And of course the robot itself cannot explain its “motives” to humans.
In other words, autonomous vehicles offer little reassurance to wary consumers and no consolation at all to suffering victims. In the words of Stanford researcher Stephen Zoepf, “Human-caused accidents are often terrible, but at some level we can usually empathize with a driver who fell asleep, drove too fast, or looked down at the wrong moment. Computer systems have fundamentally different strengths and weaknesses than humans do, and some of the accidents of the future will be hard to comprehend.”
Here is where Silicon Valley, in its left-brained hubris, fails to grasp the emotional and cultural challenges facing its latest big project. Yes, as researchers predict, autonomous vehicles may in the long run turn out to be safer than human-driven vehicles. But people don’t judge a fundamental social institution like driving purely by cost-benefit analysis. People have vastly less tolerance for a system that arbitrarily kills people than for one in which deaths are caused by human intentionality.
Our low tolerance of accidents caused by machines is bred of our desire to be in control at the wheel. The Pew survey found that, among consumers who wouldn’t want to ride in a driverless vehicle, the top reason cited is a fear of giving up control (42%). Most Americans are unmoved by traffic fatality statistics because of the positive illusory bias (“I’m better than the average driver, so it will never happen to me!”). Asking people to accept autonomous vehicles is asking them to abandon a driving system that (they imagine) rewards skill and attentiveness in favor of a system that operates by random chance. That’s a tough sell. In a crunch, people want to know that they can take the wheel and get themselves out of trouble. How are they supposed to do that in a GM car with no steering wheel or pedals?
But the “killer lottery” element isn’t the only thing people dislike about autonomous vehicles. America’s ingrained car culture is also working against autonomous vehicle companies.
Generational change moves a lot slower than technological change. And today’s older generations still cherish cars as a slice of Americana. As we’ve pointed out before (see: “America’s Car Culture Shifts Gears”), Boomers and early-wave Gen Xers have participated all their lives in America’s century-long love affair with automobiles. As kids, they longed for the day when they would be able to grab the keys and head for the open road. Cars meant freedom—from overbearing parents, from responsibility, and from society. This sense of longing turned the 16th birthday into an American rite of passage.
To many Americans today over age 45, all that is threatened by autonomous vehicles. Once machines take the wheel, as columnist Robert Moor observes in New York, “The shared experience of American adolescence—much of it spent in cars, acquiring a nuanced understanding of when, and how, it is okay to break certain rules—will simply vanish. In exchange, we will be given a few more minutes each day to stare at screens. Lives will be saved, but life will become duller.”
With today’s cars, all you really need are the keys and a few bucks for gas. But with self-driving cars, this sense of freedom and limitless possibility would no longer exist. The ultimate utopian dream of Elon Musk is a system in which nobody owns cars and people instead book a ride with Tesla when they need to go someplace. Cars, instead of being a form of expression, would become the new public transportation. And if there’s no service where you want to go, you’re not going there, period. The Economist summarizes the tradeoff quite nicely: “Autonomous vehicles offer passengers freedom from accidents, pollution, congestion, and the bother of trying to find a parking space. But they will require other freedoms to be given up in return—especially the ability to drive your own vehicle anywhere.”
For the privacy-centric, an added wrinkle is that some faceless corporation will now know far more about you than you ever intended. Uber can already use rider data to detect one-night stands. Imagine what Uber could tell about you if you depended upon the platform not just for weekend bar-hopping excursions, but for every trip you took everywhere. If there’s one thing we’ve learned in the wake of the Cambridge Analytica scandal, it’s that people don’t like their information being collected and used by third parties—especially when they didn’t know the stakes.
People’s trepidation about digitally driven cars is further heightened by their awareness that these vehicles could be prone to hacking. Someone with a grudge and an Internet connection can already disable your car alarm, unlock your doors, or even cut the power to your car in the middle of the highway. And that’s today, when cars are still mostly analog. Imagine the possible nightmares when every part of your vehicle is synced to the Internet. People fear not just the threat from other individuals—but also from governments and foreign powers: After an EMP burst, only the analog cars have a chance of still working.
Dystopian fiction is already tapping into these fears. In the War of the Worlds reboot, the aliens use an electromagnetic pulse to disable all vehicles in town, turning the residents into helpless victims. (Tom Cruise’s family was only able to escape by a stroke of luck that left their vehicle unharmed.) When Will Smith’s autonomous vehicle is ambushed by robots in I, Robot, he is able to shake them only after taking control of the wheel. Like these protagonists, plenty of Xers surely view their car as their potential last line of defense against a totalitarian state.
To be sure, America’s attachment to the risk, independence, and privacy of car culture has a strong generational dimension. The “drove my Chevy to the levee” generations are aging out, and the Millennials following them have shown far less attachment to car culture. They aren’t getting driver’s licenses at the same rate as their parents did as young adults. They aren’t itching to get away from their parents (they’re actually close with their parents) or go hang out with their friends (they have smartphones). They’re living in cities that make car ownership a hassle. And they aren’t all that concerned with their data being out there (they’ve grown up with the Internet).
Yet despite all these reasons to love digital driving, according to the earlier University of Michigan survey, just 19% of 18- to 29-year-olds say they’d prefer to drive a fully autonomous vehicle, barely above average. Millennials also are more likely to believe that we need a whole new crop of rules and regulations before driverless cars are unleashed onto the public. Millennials are less into autonomy than Gen Xers, but they’re also more personally risk averse. (Americans age 30-44, not those under age 30, are the most likely to favor “completely self-driving” and are the least likely to say “no self-driving.”)
THE BOTTOM LINE
It’s impossible to imagine autonomous vehicle companies clearing all of these hurdles in a few short years. The reality is: Consumers won’t be able to buy a completely road-ready autonomous vehicle that can operate in all conditions for at least a decade—and probably (our best guess) not until sometime in the late 2030s.
At the same time that the auto industry is bravely declaring full speed ahead on full autonomy, it’s very revealing how semiautonomous vehicle makers are actually pulling back on the amount of full autonomy they allow to drivers. Tesla now issues verbal warnings to drivers who go too long without touching the wheel. The Mercedes-Benz E-Class features a semiautonomous Drive Pilot mode that warns drivers to take the wheel after sensing one minute of inactivity. Some automakers have judged that mere haptic feedback isn’t enough: GM’s Super Cruise mode uses an infrared camera to track a driver’s gaze, ensuring that their eyes are checking the road periodically. If not, drivers receive several warnings—before the system eventually slows the car to a stop.
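The escalation pattern these automakers are converging on can be sketched as a simple state machine. The function, state names, and thresholds below are purely illustrative assumptions, not any automaker's actual specification:

```python
# Hypothetical sketch of escalating driver-monitoring logic, in the spirit of
# the systems described above. Thresholds and responses are invented for
# illustration; no automaker's real parameters are implied.

def monitoring_response(seconds_inattentive: float) -> str:
    """Map continuous driver inattention time to an escalating response."""
    if seconds_inattentive < 10:
        return "normal"           # driver recently attentive; no action
    elif seconds_inattentive < 30:
        return "visual_warning"   # e.g., a flashing light bar on the wheel
    elif seconds_inattentive < 60:
        return "audible_warning"  # e.g., a chime or verbal alert
    else:
        return "slow_to_stop"     # final fallback: bring the car to a halt

# Example: a driver who looks away for 45 seconds gets an audible warning.
print(monitoring_response(45))
```

The design point is the final branch: rather than trusting the driver indefinitely, the system assumes the handoff has failed and degrades to a safe stop on its own.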
Sure, full autonomy can achieve near-term profitability in certain niche conditions. It is already proving itself on farms and in mines. Within possibly the next five years, it may begin to take on long-haul trucking, which features hours on end of monotonous, routine highway driving. The possibilities here are endless. Human drivers could lead a “platoon” of autonomous trucks, each following the one ahead. Humans could also find a role as “last-mile” delivery drivers, taking over from autonomous trucks once the terrain gets trickier. Self-driving trucks by startup Embark are already running between Texas and California—and companies like Volvo, Daimler, and Tesla all have their eyes on the space.
But commercial production for everyday use by individual drivers? That’s still a very long way off. Driverless taxi services, likewise, will only work (for the foreseeable future) in small urban areas with well-defined routes, perfect weather, and plenty of expensive human backup.
Investors, therefore, should be highly skeptical. The market has priced in the assumption that automakers will be rolling out consumer-ready services within the next couple of years. Once these companies start missing their self-imposed deadlines (Tesla already missed its 2017 deadline for an autonomous trip from L.A. to New York), investors will back off.
Among automakers, who’s worst-positioned of the bunch? The stock sell-off will likely be proportional to the amount invested and to the degree of ambitiousness. With so much riding on autonomous vehicles, GM’s stock dropped 2.6% on the day of the Uber fatality, more than any other “Big Three” automaker. The me-too crowd, likewise, deserves to be treated with caution—especially companies like Toyota that have plunked down huge amounts of money on autonomous vehicle development without any well-defined plans.
Paradoxically, the overall industry will likely be helped by long delays in the readiness of full auto. These delays will keep a lot more vehicles on the road. After all, if autonomous vehicles take off on schedule, it could trigger a vertiginous decline in total new car sales (as each driverless car “replaces,” so to speak, several old-tech cars). According to the consulting firm RethinkX, the number of cars on the road would drop from 247 million today to a mere 44 million by 2030—an 80% plunge. In this scenario, only one or two global automakers would survive, except where they are nationally protected, and perhaps none of the survivors would be a U.S. firm or a traditional automaker.
Indeed, much of the driverless fever among automakers is a response to this doomsday scenario. Each firm wants to reassure its investors that it is ahead of the pack, or at least in the mix, to become one of the lucky survivors. As a group, the traditional firms would be much better off resisting this revolution, as they did for so many years with airbags and electric vehicles. But this time they face too many ambitious Silicon Valley outsiders (like Tesla, Google, and Uber) to stand pat. They have no choice but to start running or at least to pretend they are moving with the herd.
Yet even if the driverless revolution does not pose the five-year industry apocalypse that so many imagine, the fact remains that this is not a healthy industry. Automakers currently face a laundry list of headwinds, including generational change, the rising average age of cars, higher gas prices, mounting competition from abroad, the inherent cyclicality of consumer durables, and sky-high fixed costs. (See: “Hitting the Brakes: Rough Road Ahead for the Auto Industry.”)
By gearing up for an unlikely scenario, the imminent rollout of autonomous vehicles, automakers leave themselves even more vulnerable to the (much more likely) prospect of encountering a crushing recession sometime in the next couple of years. During the last recession, every U.S. automaker and several auto-parts suppliers had near-death experiences.
So, which seems like the smarter bet: Spending billions trying to beat the entire industry to full autonomy? Or investing those funds in lowering fixed costs and shedding debt in order to survive next time the economy hits the wall? Seems like a no-brainer to us. Investors should long-short accordingly.