Cruise Stopped for Headlights Off — Why Is This a Big Deal?

In April 2022, San Francisco police pulled over an uncrewed Cruise autonomous test vehicle for not having its headlights on. Much fun was had on social media about the perplexed officer having to chase the car a few meters after it repositioned during the traffic stop; Cruise said the repositioning was intentional behavior. Cruise also said the vehicle "did not have its headlights on because of a human error." (Source: https://www.theguardian.com/technology/2022/apr/11/self-driving-car-police-pull-over-san-francisco)

The traffic-stop behavior indicates that Cruise needs to do a better job of making it easier for local police to conduct traffic stops, but that's not the main event. The real issue here is the headlights being off.

Cruise said in a public statement: "we have fixed the issue that led to this." Forgive me if I'm not reassured. A purportedly completely autonomous car (the kind that is supposed to be nearly perfect compared to those oh-so-flawed human drivers), one that always drives at night, didn't know to turn its headlights on when driving in the dark. Seriously? Remember that headlights at night are not just for the AV itself (which might not need them in good city lighting with a lidar, etc.) but also make the vehicle visible to other road users. So headlights off at night is a major safety problem. (Headlights during the day are also a good idea, or at least white forward-facing running lights, which this vehicle also did not seem to have on.)

Cruise says these vehicles are autonomous — no human driver required. And they only test at night, so needing headlights can hardly be a surprise. But they got it wrong. How can that be? 


This can’t just be hand-waved away. Something really fundamental seems broken, and it is just a question of what it is.

The entire AV industry, including Cruise, has a history of extreme secrecy and, in particular, a lack of transparency about safety. So we can only speculate. Here are the possibilities I can think of. None of them inspire confidence in safety, and they tend to get worse as we go down the list.

1. The autonomy software had a defect that didn't turn headlights on at night. Perhaps, but this seems unlikely. That would (one assumes) affect the entire fleet, and there should be checks built into the software to make sure the headlights turn on. If true, this never should have slipped through quality control, let alone a rigorous safety-critical software design process, and it indicates significant concerns with overall software safety.

2. The headlight switch is supposed to be on at all times. Many (most?) vehicles have "smart" lights. You can turn them off if you want, but in practice you just turn the switch to "on" and leave it there for months or years, and the headlights just do the right thing: switching from daytime running lights to full on automatically, and turning off when the vehicle does. If you're in urban San Francisco, high beams are unlikely to be relevant. So the autonomy software doesn't touch the lights at all. Except: why does the software not check that the vehicle condition is safe in terms of headlights? That seems like a design oversight. How did a hazard like this get missed in hazard analysis? If this is the situation, it really calls into question whether the hazard analysis was done properly for the software. Even if this was fixed, what else did the hazard analysis miss?

2a. A passenger in the vehicle turned the headlights off as a prank. If this is possible, it is even more important for this to be called out for software monitoring in the hazard analysis. But the check for headlights off obviously isn't there now.

3. The software is completely ignorant of headlight state, and a maintenance tech is supposed to turn the lights "on" as part of the daily check-out process that prepares the vehicle to run. This manual headlight-on check didn't get done. This is a huge issue with the Safety Management System (SMS), because if they missed that check, what other checks did they miss? There are plenty of things to check on these vehicles for safety (e.g., sensor calibration, sensor cleaning/cleaner systems, correct software configuration). If they forgot to turn on the headlight switch, what else did they forget? While this might be a "within normal tolerance" deviation, given the lack of safety transparency, Cruise doesn't get the benefit of the doubt. A broken SMS puts all road users at risk. This is a big deal unless proven otherwise. Firing, training, or having a stern talk with the human who made the "error" won't stop it from happening again. Blaming the driver/pilot/human never really does.

4. There is no SMS. That's basically how Uber ATG ended up killing a pedestrian in Tempe, Arizona. If this is the case, it is even scarier.
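To make the software-backstop point concrete, here is a minimal sketch of the kind of pre-drive readiness check the scenarios above suggest was missing. Everything here is hypothetical: the `VehicleStatus` fields and function names are invented for illustration, and a real AV stack would read such state from the vehicle bus rather than a dataclass.

```python
# Hypothetical sketch of an automated pre-drive readiness check.
# All names and fields are invented for illustration; a real AV stack
# would source this state from the vehicle bus and sensor suite.
from dataclasses import dataclass

@dataclass
class VehicleStatus:
    headlights_on: bool
    is_dark_outside: bool      # e.g., from an ambient light sensor or clock
    sensors_calibrated: bool
    software_config_ok: bool

def pre_drive_checks(status: VehicleStatus) -> list[str]:
    """Return a list of failed checks; an empty list means OK to drive."""
    failures = []
    if status.is_dark_outside and not status.headlights_on:
        failures.append("headlights off in darkness")
    if not status.sensors_calibrated:
        failures.append("sensor calibration stale")
    if not status.software_config_ok:
        failures.append("software configuration mismatch")
    return failures

# A process that relies on a human flipping the headlight switch has no
# such software backstop: if the manual step is skipped, nothing catches it.
status = VehicleStatus(headlights_on=False, is_dark_outside=True,
                       sensors_calibrated=True, software_config_ok=True)
print(pre_drive_checks(status))  # → ['headlights off in darkness']
```

The point of a check like this is exactly the one the scenarios raise: a hazard analysis should have identified "headlights off in darkness" as a condition the software must detect and refuse to drive under, rather than trusting a manual checklist step.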


Again, we don't really know the situation because Cruise is trying the "nothing to see here ... move along" response to this incident. But none of these scenarios is comforting. If I had to guess, my money would be on #3, simply because #4 would be too irresponsible to contemplate. But really, we have no way to know what's going on, and it might be another alternative I have not considered.

Cruise should take this as a wake-up call to get their safety house in order before they have a big event. Blaming safety-critical failures on "human error" is generally indicative of a poor safety culture. They have a chance here to turn the ship around before there is harm to a road user.

Is there a scenario I missed that is less of a concern? Maybe Cruise will give us a substantive explanation. If they do I’ll be happy to post it here as a follow-up.