Falling for Cartoon Prank, Tesla Crashes Into Wall, Rehashes NHTSA Probe

Before we get into the very serious question of whether Tesla’s Autopilot driver assistance system preemptively shuts itself off, ceding control back to the driver in the moment before a crash, we bring you YouTube proof, of sorts. Mark Rober recently put out a CrunchLabs video highlighting the key drawback of Tesla’s driver assistance technology: The hardware package (its sensors) is woefully lacking given the operational scope of Autopilot (adaptive cruise control paired with a self-steering lane-keep function that can accelerate, brake, and steer the car at a set speed on freeways, with driver supervision) and, on models so equipped, Full Self Driving (which recently had “Supervised” appended to its name to remind users that they still need to pay attention when this not-actually-autonomous feature is running). What are we on about here?

Most other vehicles with similar capabilities (or less) rely on a mix of cameras, short-range ultrasonic sensors, radar, and even lidar to map their surroundings in real time and respond to other traffic, pedestrians, and the roadway. Tesla dropped most of that in favor of a cheaper, simpler array of visual cameras. You needn’t be an engineer to surmise this isn’t a great idea, given that cameras face the same basic limitations as human eyes: They can’t “see” well through fog, rain, snow, or dirt (especially if any of that blocks the camera lens itself), nor do they have any redundant backup to check their work (see: radar, lidar, and other sensors that can offer a different view of an object a camera “sees” or misses). This is why Teslas get confused by sunsets and sunrises, tunnels, and the like. Lidar or radar would know whether something actually is in the vehicle’s path, serving as a check on the cameras’ vision.
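To make that redundancy argument concrete, here’s a minimal, entirely hypothetical sketch in Python. The names (CameraDetection, LidarReturn, path_is_clear) are invented for illustration and reflect no automaker’s actual software; the point is simply that when two sensing modalities disagree, the conservative move is to let either one veto.

```python
# Hypothetical sketch (not Tesla's code): why a second sensor type acts as a
# "check" on cameras. All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class CameraDetection:
    sees_clear_road: bool   # vision system says the path ahead looks drivable

@dataclass
class LidarReturn:
    range_m: float          # distance to nearest solid object in the lane

def path_is_clear(camera: CameraDetection, lidar: LidarReturn,
                  min_safe_range_m: float = 30.0) -> bool:
    """Fuse the two modalities conservatively: either sensor can veto."""
    lidar_clear = lidar.range_m > min_safe_range_m
    # A painted wall fools the camera (sees_clear_road=True) but not the
    # lidar, whose time-of-flight return reports a solid surface in the lane.
    return camera.sees_clear_road and lidar_clear

# Rober's fake-wall scenario: camera says "road ahead," lidar says "wall at 25 m."
print(path_is_clear(CameraDetection(True), LidarReturn(25.0)))  # False -> brake
```

A camera-only system has no second opinion to consult, so when the vision stack is fooled, nothing vetoes it.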

YouTuber Mark Rober slickly demonstrates the weakness of Tesla’s approach by comparing his own Model Y’s Autopilot performance against a lidar-equipped test vehicle in a series of evaluations involving an unfortunate child-sized dummy. We won’t spoil the video (watch it for yourself) save for the big ending, since it is of great importance to NHTSA’s Autopilot investigation. The two vehicles are driven at a wall painted to resemble the road continuing ahead, an optical illusion of sorts familiar to anyone who grew up watching cartoons on Saturday mornings. Even the human eye can detect the ruse, at least once closer to the faux wall, and radar and lidar obviously see it, too, since it’s a solid object in the roadway. The Tesla? Well, it’s fooled, smashing through the (foam!) wall without slowing, demonstrating that its cameras saw only a road continuing on and are thus easily tricked.

While that’s both amusing and damning at the same time, the video captures another phenomenon: Autopilot shuts off just before the Model Y hits the fake wall. As Electrek points out, Tesla fans rushing to react to the CrunchLabs video made several misinformed or misleading assertions about Autopilot’s use in the clip, and the comments-section back-and-forth inadvertently brought attention back to the phenomenon at the center of a NHTSA safety investigation first opened in 2022. The details of that investigation’s expanded findings continue below. But if you want to see the function cited in it in action, watch the YouTube clip above and pay very close attention to the Model Y’s touchscreen just before impact, when the digital “road” animation cuts out, signaling Autopilot has disengaged.

That NHTSA Report…

A NHTSA report on its investigation into crashes in which Tesla vehicles equipped with the automaker’s Autopilot driver assistance feature hit stationary emergency vehicles has unearthed a troubling detail: In 16 of those crashes, Autopilot was running but, “on average,” it “aborted vehicle control less than one second prior to the first impact.”

That line will send Tesla skeptics into a tizzy, given how Tesla (well, its CEO, Elon Musk) has long suggested that crashes involving Autopilot or its newer (and not-ready-for-prime-time) Full Self Driving “autonomous” driving feature are due to driver error or misuse. The obvious subtext is that Tesla seems to be blaming drivers even though, at least in these 16 high-profile crashes (in which Teslas rammed into emergency vehicles stopped alongside roadways or in active lanes, hazards NHTSA found would have been visible to a driver, on average, up to eight seconds ahead of time), Autopilot was running but then shut off less than a second before impact.

Tin-foil-hat types are already claiming this indicates Tesla knowingly programs Autopilot to deactivate ahead of an impending, unavoidable impact so that crash data would show the driver, not Autopilot, was in control at the moment of the crash. So far, NHTSA’s investigation hasn’t uncovered (or publicized) any evidence that the Autopilot deactivations are nefarious; the intent is a mystery. From where we’re sitting, it’d be fairly idiotic to knowingly program Autopilot to throw control back to a driver just before a crash in the hope that black-box data would absolve Tesla’s driver assistance feature of error. Why? Because no person could reasonably be expected to respond in that blink of an eye, and the data would still show the computers were assisting the driver right up to that point of no return.

The NHTSA report also shows that “in the majority of incidents” among the 16 under close investigation, the Teslas activated their forward collision warnings and automatic emergency braking, so it isn’t as though the drivers were given zero time to react, though the report doesn’t say how far ahead of impact those systems kicked in. And in 11 of the crashes, the drivers took no evasive action in the two to five seconds before impact, suggesting they, like Autopilot, didn’t detect the impending collision, either.

Regardless, those who think the ghost in the machine punting control to hapless, inattentive drivers just before impact indicates Tesla is trying to gin up a “blame the driver” defense need remember only one thing: Autopilot, despite its name, is a driver assistance feature. It is intended to be used with driver supervision, meaning the driver, who may not need to intervene in the system’s operation of the accelerator, brakes, and steering, nonetheless needs to keep an eye on the proceedings. In other words, even if Autopilot were programmed to sandbag drivers so Tesla could wash its hands of responsibility in the event of a system failure, the driver is still supposed to be watching the road ahead of the vehicle. That would remain true even if Autopilot kept running all the way until impact, meaning Tesla could still blame the driver in good faith.

Of course, things aren’t that simple. Tesla has repeatedly marketed Autopilot in ways that suggest its capabilities are greater than they are, often by not correcting the popular belief that the system truly is a self-driving, well, autopilot. The company practically had to be dragged into making its in-car driver prompts and pay-attention warnings more prominent.

So, where does that leave the mysterious half-second-or-so Autopilot shutoff just before these crashes? In all likelihood, it’s a simple protocol that disengages the system because a crash is about to occur. Plenty of new cars feature last-ditch shutoffs and other preemptive actions just before or during an impact: Think of seatbelts that cinch up so occupants are better positioned, fuel-line disconnects, or some of the fancy new suspension tricks Audi’s A8 is capable of, such as lifting one side of the car just before it’s T-boned to put more of the crash structure in the path of the impact.
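For illustration, here’s what such a benign pre-impact protocol might look like in the abstract: a minimal Python sketch assuming a crash is deemed unavoidable when the time to collision is shorter than the time needed to brake to a stop. Every name here (pre_impact_protocol, pretension_seatbelts, and so on) is invented for this example; nothing below reflects Tesla’s actual firmware.

```python
# Hypothetical sketch of a last-ditch pre-impact protocol, as speculated above.
# All function and action names are invented for illustration only.

BRAKING_LIMIT_MPS2 = 9.0  # assumed maximum deceleration on dry pavement

def time_to_collision_s(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact at the current closing speed."""
    return range_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def stopping_time_s(speed_mps: float) -> float:
    """Seconds needed to brake to a stop at the assumed braking limit."""
    return speed_mps / BRAKING_LIMIT_MPS2

def pre_impact_protocol(range_m: float, speed_mps: float) -> list[str]:
    """If a crash is unavoidable, hand off and prep passive safety systems."""
    actions = []
    if time_to_collision_s(range_m, speed_mps) < stopping_time_s(speed_mps):
        actions += [
            "disengage_driver_assist",    # the shutoff seen in the video/report
            "pretension_seatbelts",
            "cut_fuel_or_open_contactors",
        ]
    return actions

# Obstacle 5 m away at 18 m/s (~40 mph): impact in ~0.3 s, stopping takes ~2 s,
# so the crash is unavoidable and the protocol fires.
print(pre_impact_protocol(range_m=5.0, speed_mps=18.0))
```

Under that reading, a disengagement less than a second before impact is just one item on a pre-crash checklist, not an attempt to rewrite who was in control.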

Absent any proof that Tesla hoped its Autopilot shutoffs (which, by the way, aren’t claimed to have occurred in every crash under investigation) would let it claim, however implausibly, that the crashes were the drivers’ faults because they were “in control” for the hummingbird-wingflap of an instant just before impact, there’s no “there” there. Just speculation, and since Tesla has no public relations team or other way of answering journalists’ questions (other than tweeting at Elon Musk), we won’t know for sure until NHTSA gets to the bottom of things in its investigation, which has since expanded to cover more than 800,000 2014-2022-model-year Model S, Model 3, Model X, and Model Y vehicles. As part of the original investigation into Autopilot, not the shutoffs directly, NHTSA did find in 2023 that Tesla’s onboard systems for monitoring driver engagement while Autopilot is running weren’t adequate, forcing a recall to institute more robust driver-engagement monitoring. It took another year for NHTSA to finally close that probe, given lingering questions about whether the recall satisfied the government’s demands.

This story was originally published in June 2022 and has since been updated to include new developments.
