NewsWire: 6/14/21

  • Despite grand proclamations by tech CEOs, we’re still nowhere close to getting self-driving cars to market. The first fully self-driving car could be decades away, if it ever manages to launch at all. (The Wall Street Journal)
    • NH: There is hype, and there is reality.
    • Just three years ago, early in 2018, auto and tech CEOs were daring each other to deliver the most audacious prediction about self-driving cars. GM announced that its "fourth generation Cruise AV" (no steering wheel needed!) would be "street-ready" by 2019. Ford declared its driverless taxis would be ready for consumers by 2021. And of course Elon Musk was already insisting that Tesla was just months away from "full autopilot." Anyone who expressed skepticism was branded as a technophobe or a Luddite.
    • Then the mood started to shift, beginning perhaps with the widely publicized accident, on March 18, 2018, in which a fully autonomous Uber test vehicle struck and killed a Tempe, Arizona, pedestrian. A week later, a Tesla Model X driver died on California's Bayshore Freeway after his car ran full speed into an overpass pillar. Journalists at last began interviewing the AI scientists themselves to try to get behind all the ballyhoo. And they discovered that the technicians building these cars weren't nearly as optimistic as the CEOs and IR flaks.
    • This WSJ article offers a good summary of what's really going on.
    • Where are we now? "In vehicles you can actually buy, autonomous driving has failed to manifest as anything more than enhanced cruise control, like GM’s Super Cruise or the optimistically named Tesla Autopilot." On the six-level hierarchy of autonomy, the most advanced cars commercially available are only at level two (partial driving automation)--and maybe not even that. (A rough sketch of the six levels appears at the end of this NewsWire.) Level two should allow the driver to let go of the steering wheel, but even a Tesla on "autopilot" requires drivers to keep their hands on the wheel at all times or else the car will lock them out of Autopilot entirely.
    • Where are we going? Full self-driving capability for most drivers is still the dream. But that's probably decades into the future. For now, the industry is shifting to much more modest goals, like driverless buses or taxis shuttling passengers along well-defined routes or driverless truck convoys along well-prepared stretches of highway. While Elon Musk, undaunted, continues to boast that a "full self-driving Tesla" is just "weeks away," everybody understands by now that this is just his way of projecting an aspirational, larger-than-life personality.
    • Many major obstacles block the rapid advent of the fully autonomous vehicle. Some are legal and institutional: How rapidly can we shift from an age-old regime of tort law and personal insurance to a regime of product liability law and commercial insurance? Some are attitudinal: How rapidly can most Americans--not just type-A techies--be induced to trust these cars with their lives?
    • Yet as the article ably points out, the biggest obstacle remains technological. We're plenty good on sensory input: cameras, radar, and lidar generate vastly more data than any human could use. We just don't yet know how to build the thinking part. Our current AI algorithms, after getting fed all the input, are still nowhere close to demonstrating the sort of "higher-order intelligence" that a true driverless car would need to have.
    • To be sure, our ability to create AI that can track lanes and avoid obvious hazards (in good weather and under "normal" conditions) is really amazing. Still, it implies nothing about our ability to create AI that can think inferentially and use common sense in ways that humans do effortlessly all the time. We're talking about how to negotiate ambiguous situations like frontage roads or strip malls or complex traffic lights; how to interpret disorienting glare or reflections or shadows; how to know the difference between a harmless paper bag and a steel fender lying in the road; or how to comprehend the "mood" or intentionality of another driver, a pedestrian, or a neighborhood.
    • It's not just that today's AI is not quite there yet, agree the AI scientists. It's that we may never get close to a more human-like, higher-order intelligence without completely rethinking our approach to AI design at the most basic level. It's a bit like imagining that the Wright Brothers' first flight at Kitty Hawk cleared the way for a voyage to Mars. Or that decoding the human genome was sufficient to understand and manipulate all the genetic causes of disease. Not hardly. What we thought was a triple turned out to be more like the first few strides toward first base.
    • Such analogies are legion in the AI-tech community. Melanie Mitchell, a computer scientist and professor of complexity at the Santa Fe Institute, has written a superb paper ("Why AI is Harder Than We Think") outlining the serious limitations of our current AI capability. Mary Cummings, director of the Humans and Autonomy Lab at Duke University, has written another.
    • Many years ago, the crux of the issue was aphorized by cognitive scientist Marvin Minsky in the phrase, "the hard things are easy, but the easy things are hard." What he meant was that even the cheapest hand calculator can figure out the square root of two better than any human ("hard things are easy"). But even the easiest human activity, like walking quickly and stealthily across unknown terrain, is virtually impossible for a machine ("easy things are hard"). This is also known as Moravec's Paradox: The earlier a human task was mastered in the history of evolution, the harder it is for AI to simulate.
    • At this point, maybe I should self-advertise a bit: You heard it here first. Back in 2016, when every media outlet was hyping the imminent arrival of driverless tech, I was a loud contrarian. (See "Driverless Cars: Unsafe at Any Speed?") No, I wasn't alone. There were other naysayers in 2016, including this brave WSJ feature writer. But there weren't many of us.
    • My summation: "My take on all this buzz? Check your long exposure. I don’t see a fully autonomous vehicle coming to the general marketplace for two decades at a minimum." In retrospect, my "two decades" may have been overly generous. See also "Have Autonomous Vehicles Hit a Roadblock," "Self-Driving Vehicle Companies Hit the Brakes," and "Arizona Citizens Attacking Self-Driving Cars."
    • I will confess. I too got ridiculed as a technophobe and a Luddite. But I trusted my intuition and refused to be dissuaded by self-interested puffery.
    • You only have to drive a car in New York City for a few minutes to realize that AI could never negotiate that mess without truly understanding all the weird situations it regularly encounters--and responding appropriately. Corollary: Every town in America is like New York City, only a bit less so. Conclusion: Since we cannot yet build AI that contextually "understands" in any human sense and since the cost of even one failure is so high, driverless cars for general use will not be viable for the foreseeable future.
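A quick technical aside for readers who want the six-level hierarchy spelled out. Below is a minimal sketch, written in Python purely for illustration, of the SAE J3016 automation levels the WSJ article refers to; the level names follow the standard SAE terminology, while the class name and example assignments are my own shorthand, not anything drawn from the article itself.

    from enum import IntEnum

    class SAELevel(IntEnum):
        """The six SAE J3016 levels of driving automation."""
        NO_AUTOMATION = 0           # human does all of the driving
        DRIVER_ASSISTANCE = 1       # steering OR speed support (e.g., ordinary adaptive cruise control)
        PARTIAL_AUTOMATION = 2      # steering AND speed support; driver must supervise at all times
        CONDITIONAL_AUTOMATION = 3  # car drives itself in limited conditions; driver must take over on request
        HIGH_AUTOMATION = 4         # no driver needed within a restricted domain (e.g., a geofenced shuttle route)
        FULL_AUTOMATION = 5         # no driver needed anywhere, under any conditions

    # Per the article, nothing you can buy today clears Level 2--and arguably not even that.
    most_advanced_consumer_car = SAELevel.PARTIAL_AUTOMATION
    print(most_advanced_consumer_car.name, most_advanced_consumer_car.value)  # PARTIAL_AUTOMATION 2

The point of laying it out this way is that the gap between Level 2 (what is actually for sale) and Level 4 or 5 (what the CEOs keep promising) is not one rung on a ladder; it is the entire unsolved "thinking part" described above.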
To view and search all NewsWires, reports, videos, and podcasts, visit Demography World.
For help making full use of our archives, see this short tutorial.