Uber has discovered the reason why one of the test cars in its fledgling self-driving car fleet struck and killed a pedestrian earlier this year, according to The Information. While the company believes the car’s suite of sensors spotted 49-year-old Elaine Herzberg as she crossed the road in front of the modified Volvo XC90 on March 18th, two sources tell the publication that, due to a software bug, the car “decided” it didn’t need to take evasive action, possibly flagging the detection as a “false positive.”

Software designers face a basic tradeoff here. If the software is programmed to be too cautious, the ride will be slow and jerky, as the car constantly slows down for objects that pose no threat to it or aren’t there at all. Tuning the software in the opposite direction will produce a smooth ride most of the time—but at the risk that the software will occasionally ignore a real object. According to The Information’s Amir Efrati, that’s what happened in Tempe in March—and unfortunately, the “real object” was a human being.
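To make the tradeoff concrete, here is a minimal sketch of how a detection-confidence threshold can discard a real obstacle. It does not reflect Uber’s actual software; the threshold value, class labels, and data structure are illustrative assumptions only.

```python
# Illustrative sketch of a detection-confidence threshold tradeoff.
# The threshold value, labels, and structure are assumptions, not Uber's code.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "plastic_bag"
    confidence: float  # classifier confidence in [0, 1]

# A higher threshold means fewer phantom braking events (smoother ride),
# but a greater risk of dismissing a real obstacle as a false positive.
BRAKE_THRESHOLD = 0.8  # hypothetical tuning parameter

def should_brake(detections: list[Detection]) -> bool:
    """Brake if any detected object clears the confidence threshold."""
    return any(d.confidence >= BRAKE_THRESHOLD for d in detections)

# A real pedestrian scored just below the threshold is ignored --
# the failure mode the report describes.
frame = [Detection("pedestrian", 0.75), Detection("plastic_bag", 0.30)]
print(should_brake(frame))  # False: the pedestrian is treated as a false positive
```

Raising or lowering BRAKE_THRESHOLD in this toy example is the same choice the paragraph above describes: comfort on one side, the chance of ignoring something real on the other.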

All of Uber’s self-driving testing efforts have been suspended since the accident, and the company is still working with the National Transportation Safety Board, which has yet to issue a preliminary report on its investigation. An Uber spokesperson said in a statement:

“We have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture.”

Of course, the long-term goal is for self-driving cars to become so good at recognizing objects that false positives and false negatives both become rare. But Herzberg’s death provides a tragic reminder that companies shouldn’t get too far ahead of themselves. Getting fully self-driving cars on the road is a worthwhile goal. But making sure that’s done safely is more important.
