Yet another Tesla has hit yet another firetruck, and it's time to have a conversation about how much we can expect from both human and AI drivers.

In January, a woman had her Tesla Model S set to Autopilot when it crashed into a firetruck on a California highway. In Salt Lake City in May, a man in his Model S with Autopilot engaged also careened into a firetruck, despite hitting the brakes himself when he realized the car had accelerated instead of stopping. In this latest incident, the driver was drunk but also said he had engaged the Tesla's Autopilot feature.

The driver's first mistake was obviously the criminal act of drinking and driving. His second was overreliance on Autopilot and the expectation that Autopilot is synonymous with self-driving. Autopilot is a driver-assistance system that can park the car, keep it centered in a lane, change lanes, and handle adaptive cruise control. Stopping to avoid an accident falls within the bounds of that last function, but it is also one of the places where Autopilot is lacking. The system is designed to detect and respond to objects in the car's path, yet, as these accidents show, it can fail when confronted with a stopped firetruck.

This is something Tesla is aware of. One of the many warnings in Tesla manuals states: "Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead. Always pay attention to the road ahead and stay prepared to take immediate corrective action."

Considering how common it is to encounter a sudden stop when cruising along at 50 mph or more, that seems like a pretty big oversight. Yet it's actually a deliberate compromise by engineers who, given the current limits of the technology, are forced to choose between cars that brake for no reason (and could cause accidents) and cars that don't always brake to avoid one.

Although not all consumers are aware of these limitations, there's still an inherent distrust of AI-assisted driving. Elicit Insights found that consumers are almost evenly split between seeing the benefits of AI-assisted driving and worrying about its potential dangers. Asked how they would feel about a self-driving car that makes decisions to minimize damage during unexpected dangerous situations, 51 percent saw it as a benefit while 49 percent did not. In the same group, 44 percent said the idea of such a technology made them happy, but 56 percent said it made them nervous. Those results are borne out by a recent Statista survey, which found that 50 percent of drivers wouldn't feel safe in a self-driving car and that 43 percent are concerned about the car making mistakes.

Aside from distrust of the technology itself and the human errors made by those behind the wheel, the marketing messages coming from carmakers deserve some of the blame. Visit Tesla's page on the Autopilot feature, and the first thing you see is the proclamation "Full Self-Driving Hardware on All Cars," followed by "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."
This is a deliberately misleading statement, further bolstered by hyperbolic copy and illustrations below it detailing how that hardware functions. Or rather, how it would function if the self-driving software to power it existed. Only one bold line disabuses consumers of their expectations of a self-driving Tesla: "Please note that Self-Driving functionality is dependent upon extensive software validation and regulatory approval, which may vary widely by jurisdiction." All the language surrounding the word "Autopilot" on the page supposedly devoted to that feature is deliberately vague.

We are still learning how far we can and can't trust AI, but one thing we should already know is that we can't always trust companies with a product to sell.
