Waymo’s driverless cars were involved in 18 accidents over 20 months

Waymo’s driverless cars have driven 6.1 million autonomous miles in Phoenix, Arizona, including 65,000 miles without a human behind the wheel, from 2019 through the first nine months of 2020. That’s according to data from a new report Waymo published today, which analyzes a portion of the collisions involving Waymo One, the robo-taxi service the company launched in 2018. In total, Waymo’s vehicles were involved in 18 accidents with a pedestrian, cyclist, driver, or other object, and experienced 29 disengagements — instances in which human drivers were forced to take control — that likely would otherwise have resulted in an accident.

Three independent studies in 2018 — by the Brookings Institution, the think tank HNTB, and Advocates for Highway and Auto Safety (AHAS) — found that a majority of people aren’t convinced of driverless cars’ safety. Partners for Automated Vehicle Education (PAVE) reports that a majority of Americans don’t think the technology is “ready for prime time.” These concerns are not without reason. In March 2018, Uber suspended testing of its autonomous Volvo XC90 fleet after one of its cars struck and killed a pedestrian in Tempe, Arizona. Separately, Tesla’s Autopilot driver-assistance system has been blamed for a number of fender benders, including one in which a Tesla Model S collided with a fire truck parked in Culver City. And the automaker’s Full Self-Driving beta program is raising new concerns.

Waymo has so far declined to sign on to efforts like Safety First for Automated Driving, a group of companies that includes Fiat Chrysler, Intel, and Volkswagen and is dedicated to a common framework for the development, testing, and validation of autonomous vehicles. However, Waymo is a member of the Self-Driving Coalition for Safer Streets, which launched in April 2016 with the stated goal of “work[ing] with lawmakers, regulators, and the public to realize the safety and societal benefits of self-driving vehicles.” Since October 2017, Waymo has released a self-driving report each year, ostensibly highlighting how its vehicles work and the technology it uses to ensure safety, albeit in a format some advocates say resembles marketing materials rather than regulatory filings.

Waymo says its Chrysler Pacifica minivans and Jaguar I-Pace electric SUVs — which have driven tens of billions of miles in computer simulations and 20 million miles (74,000 of them driverless) on public roads in 25 cities — were providing a combined 1,000 to 2,000 rides per week in the East Valley portion of the Phoenix metropolitan region by early 2020. (Waymo One reached 100,000 rides served in December 2019.) Between 5% and 10% of these trips were driverless, with no human behind the wheel. Prior to early October, when Waymo made fully driverless rides available to the public through Waymo One, contracted safety drivers rode in most cars to note anomalies and take over in the event of an emergency.

Waymo One, which initially transitioned a group of riders from Waymo’s Early Rider program to driverless pickups, offers rides from a fleet of over 600 autonomous cars at Phoenix-area locations 24 hours a day, seven days a week. The app prompts customers to specify pickup and drop-off points before estimating the time to arrival and the cost of the ride; as in a typical ride-hailing app, users can enter payment information and rate the quality of each ride on a five-star scale.
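To make that flow concrete, here’s a toy sketch of the request-and-quote step. Every name, rate, and speed below is invented for illustration; none of it reflects Waymo One’s actual app, API, or pricing:

```python
import math
from dataclasses import dataclass

# Toy sketch only: hypothetical names, speeds, and rates. This is not
# Waymo One's app or API.

@dataclass
class RideRequest:
    pickup: tuple    # (lat, lon) specified by the rider
    dropoff: tuple   # (lat, lon) specified by the rider

def haversine_miles(a: tuple, b: tuple) -> float:
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3958.8 * 2 * math.asin(math.sqrt(h))

def quote(req: RideRequest, avg_speed_mph: float = 25.0,
          rate_per_mile: float = 1.50) -> dict:
    """Estimate time to arrival and fare before the rider confirms."""
    miles = haversine_miles(req.pickup, req.dropoff)
    return {"eta_min": round(miles / avg_speed_mph * 60, 1),
            "fare_usd": round(miles * rate_per_mile, 2)}

# Example: a short hop between two made-up points in the East Valley.
print(quote(RideRequest((33.3062, -111.8413), (33.3528, -111.7890))))
```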

Using its cloud simulation platform, Carcraft, Waymo says it predicts what might have transpired had a driver not taken over in the event of a near-accident (the company calls these counterfactuals). Waymo leverages the outcomes of these counterfactual disengagement simulations both individually and in aggregate. Engineers evaluate each counterfactual to identify potential collisions, near-misses, and other metrics. If the simulation outcome reveals an opportunity to improve the system’s behavior, they use it to develop and test software changes. The counterfactual is also added to a library of scenarios so that future software can be tested against it.

At an aggregate level, Waymo uses results from counterfactuals to produce metrics relevant to a vehicle’s on-road performance.
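A minimal sketch of that loop, using hypothetical names throughout (Waymo hasn’t published Carcraft’s internals), might look like this:

```python
from dataclasses import dataclass

# Hypothetical names throughout: a sketch of the counterfactual workflow
# described above, not Waymo's actual Carcraft pipeline.

@dataclass
class Counterfactual:
    log_before_takeover: list   # recorded vehicle/object states up to the disengagement
    simulated_outcome: str      # e.g. "no_contact", "near_miss", "collision"
    severity: str = "S0"        # S0-S3, assigned by a separate severity model

scenario_library: list = []     # every counterfactual becomes a regression test

def flag_for_engineering_review(cf: Counterfactual) -> None:
    """Stand-in for the process of turning an event into a software change."""
    print(f"review queued: {cf.simulated_outcome} ({cf.severity})")

def process(cf: Counterfactual) -> None:
    # Evaluate the individual event for potential collisions and near-misses.
    if cf.simulated_outcome in ("collision", "near_miss"):
        flag_for_engineering_review(cf)
    # Keep the scenario so future software builds can be tested against it.
    scenario_library.append(cf)

def aggregate_metrics(cfs: list) -> dict:
    """Roll individual outcomes up into fleet-level performance metrics."""
    collisions = sum(c.simulated_outcome == "collision" for c in cfs)
    return {"counterfactuals": len(cfs),
            "simulated_collision_share": collisions / len(cfs) if cfs else 0.0}

for event in (Counterfactual([], "no_contact"), Counterfactual([], "collision", "S1")):
    process(event)
print(aggregate_metrics(scenario_library))
```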

While conceding that counterfactuals can’t predict exactly what would have occurred, Waymo asserts they can be more realistic than ordinary simulations because they use the actual behavior of the vehicles and objects up to the point of disengagement. Where counterfactuals aren’t involved, Waymo synthesizes sensor data for cars and models scenes in digitized versions of real-world environments. As virtual cars drive through these scenarios, engineers modify the scenes and evaluate possible situations by adding new obstacles (such as cyclists) or by modulating the speed of oncoming traffic to gauge how the vehicle would have reacted.
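In code terms, that scene-variation process resembles a parameter sweep over a base scene. The sketch below is purely illustrative; the scene schema and the run_virtual_driver stub are assumptions, not Waymo’s simulator interfaces:

```python
import copy
import itertools

# Illustrative scene-variation loop. The scene schema and run_virtual_driver
# are hypothetical stand-ins, not Waymo's simulator API.

def run_virtual_driver(scene: dict) -> str:
    """Stand-in for the simulated AV stack; returns an outcome label."""
    return "no_contact"

def variants(base_scene: dict,
             speed_factors=(0.8, 1.0, 1.2),
             extra_agents=(None, "cyclist")):
    """Yield copies of the scene with modulated traffic and added obstacles."""
    for factor, agent in itertools.product(speed_factors, extra_agents):
        scene = copy.deepcopy(base_scene)
        for vehicle in scene["oncoming"]:
            vehicle["speed_mph"] *= factor          # modulate oncoming traffic
        if agent is not None:
            scene["agents"].append({"type": agent, "speed_mph": 12})
        yield scene

base = {"oncoming": [{"speed_mph": 40}], "agents": []}
for candidate in variants(base):
    print(candidate["oncoming"][0]["speed_mph"],
          [a["type"] for a in candidate["agents"]],
          run_virtual_driver(candidate))
```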

In addition, as part of a collision avoidance testing program, Waymo benchmarks its vehicles’ capabilities in thousands of scenarios where immediate braking or steering is required to avoid a collision. The company says these scenarios test competencies crucial to reducing the likelihood of collisions caused by the behavior of other road users.
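The physics behind immediate-braking scenarios reduces to a stopping-distance check: reaction distance plus v^2 / (2a) must fit within the available gap. The numbers below are illustrative, not Waymo’s test parameters:

```python
# Back-of-the-envelope check behind immediate-braking scenarios: can the
# vehicle stop within the available gap? All numbers are illustrative.

def stopping_distance_m(speed_mps: float, decel_mps2: float = 7.0,
                        reaction_s: float = 0.5) -> float:
    """Distance covered during the reaction time plus braking: v*t + v^2/(2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

scenarios = [  # (initial speed in m/s, gap to the obstacle in m)
    (10.0, 15.0),
    (15.0, 20.0),
    (20.0, 40.0),
]
for speed, gap in scenarios:
    outcome = "avoided" if stopping_distance_m(speed) <= gap else "collision"
    print(f"{speed:4.1f} m/s, gap {gap:4.1f} m -> {outcome}")
```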

Waymo analyzes counterfactuals to determine their severity based on the likelihood of injury, the object struck, impact velocity, and impact geometry — methods the company developed using national crash databases and periodically refines to reflect new data. Events are tallied in classes ranging from S0 (no injury expected) through S1 and S2 up to S3 (possible critical injuries). Waymo says it assigns these classes using the change in velocity and direction of force estimated for each vehicle.
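A severity model in that spirit might bin events by estimated change in velocity (delta-v), as sketched below. The thresholds and the geometry multiplier are placeholders, not the values Waymo derived from crash databases:

```python
# Hypothetical severity binning in the spirit described above. The thresholds
# and the geometry multiplier are placeholders, not Waymo's derived values.

def severity_class(delta_v_mph: float, impact_geometry: str) -> str:
    """Map estimated change in velocity and impact geometry to S0-S3."""
    # Treat frontal/angled impacts as riskier than rear-end or sideswipe ones.
    adjusted = delta_v_mph * (1.3 if impact_geometry in ("frontal", "angled") else 1.0)
    if adjusted < 5:
        return "S0"   # no injury expected
    if adjusted < 15:
        return "S1"   # minor injuries possible
    if adjusted < 30:
        return "S2"   # moderate-to-serious injuries possible
    return "S3"       # critical injuries possible

print(severity_class(3, "rear_end"), severity_class(12, "angled"))  # S0 S2
```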

Here’s how the data breaks down, according to Waymo. The collision data covers January 1, 2019 to September 30, 2020 and includes 65,000 miles in driverless mode; the disengagement data covers January 1 to December 31, 2019, during which Waymo’s cars drove the aforementioned 6.1 million miles. (A quick tally of the actual events appears after the lists below.)

S0

  • Waymo cars were involved in one actual and two simulated events (i.e., events triggered by a disengagement) in which a pedestrian or cyclist struck a stationary Waymo car at low speed.
  • Waymo cars were also involved in two “reversing collisions” (e.g., rear-to-front, rear-to-side, rear-to-rear) — one actual and one simulated — at speeds of less than three miles per hour.
  • There was one actual sideswipe and eight simulated sideswipes involving Waymo vehicles. A Waymo car made the lane change during the actual sideswipe, while in seven of the eight simulated sideswipes, other cars made the lane change.
  • Waymo reported 11 actual rear-end collisions involving its cars and one simulated collision. In eight of the actual collisions, another car struck a Waymo car while it was stopped; in two, another car struck a Waymo car moving at slow speeds; and in one, another car struck a Waymo car while it was decelerating. The simulated collision occurred when a Waymo car struck a decelerating car.
  • Waymo vehicles had four simulated angled collisions. Three occurred when another car turned into a Waymo car while both were heading in the same direction; one happened when a Waymo car turned into another car heading in the same direction.

S1

  • While making a lane change, a Waymo vehicle was involved in a simulated sideswipe that didn’t trigger airbag deployment.
  • Waymo cars were involved in one actual and one simulated rear-end collision that didn’t trigger airbag deployment. In the first instance, a Waymo car was struck while traveling slowly; in the second, a Waymo car was struck while decelerating.
  • There were two actual rear-end collisions involving Waymo cars that triggered airbag deployments inside either the Waymo vehicles or other cars, one during deceleration and the other at slow speeds.
  • There were six simulated angled accidents without airbag deployment, as well as one actual angled accident with deployment and four simulated accidents with deployment.
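As promised above, a quick tally confirms the actual (non-simulated) events in the two lists sum to the 18 accidents cited at the top of this story:

```python
# Actual (non-simulated) events from the S0 and S1 lists above; the sum
# matches the 18 accidents cited at the top of the article.
actual_events = {
    "S0 pedestrian/cyclist struck stationary Waymo car": 1,
    "S0 reversing collision": 1,
    "S0 sideswipe": 1,
    "S0 rear-end": 11,
    "S1 rear-end, no airbag deployment": 1,
    "S1 rear-end, airbag deployment": 2,
    "S1 angled, airbag deployment": 1,
}
print(sum(actual_events.values()))  # 18
```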

Waymo points out that the sole event in which a Waymo car rear-ended another car involved a passing vehicle that swerved into its lane and braked hard; that only one actual event triggered a Waymo car’s airbags; and that two events would have been more severe had drivers not disengaged. However, the company also notes that the severities it ascribed to the simulated collisions don’t account for secondary collisions that might occur subsequent to the simulated event.

Falling short

Taken as a whole, Waymo’s new report, along with its newly released safety methodologies and readiness determinations, isn’t likely to satisfy critics advocating for industry-standard self-driving vehicle safety metrics. Tellingly, Waymo didn’t detail accidents from earlier in the Waymo One program or progress in the other regions where it’s actively conducting car and semi-truck tests, including Michigan, Texas, Florida, other parts of Arizona, and Washington — some of which experience more challenging weather conditions than Phoenix. As mandated by law, Waymo was one of dozens of companies that released a California-specific disengagement report in February, which showed that disengagement rates among its 153 cars and 268 drivers in the state dropped from 0.09 per 1,000 self-driven miles (or one per 11,017 miles) to 0.076 per 1,000 self-driven miles (one per 13,219 miles). But Waymo itself has characterized disengagements as a flawed metric because they don’t adequately capture improvements or their impact over time.
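For reference, the two rate formats quoted above are reciprocals of each other, give or take rounding:

```python
# Converting "miles per disengagement" to "disengagements per 1,000 miles".
def per_1000_miles(miles_per_disengagement: float) -> float:
    return 1000 / miles_per_disengagement

print(round(per_1000_miles(11017), 3))  # 0.091, reported as 0.09
print(round(per_1000_miles(13219), 3))  # 0.076
```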

In 2018, the RAND Corporation published an Uber-commissioned report — “Measuring Automated Vehicle Safety: Forging a Framework” — that laid bare some of the challenges ahead. It suggested that local DMVs play a larger role in formalizing the demonstration process and proposed that companies and governments engage in data-sharing. A separate RAND report estimated it would take hundreds of millions to hundreds of billions of miles to demonstrate driverless vehicle reliability in terms of fatalities and injuries, and Waymo CEO John Krafcik admitted in a 2018 interview that he doesn’t think self-driving technology will ever be able to operate in all possible conditions without some human interaction.

In June, the U.S. National Highway Traffic Safety Administration (NHTSA) detailed the Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST) Initiative, a program that purports to provide a robust source of information about autonomous vehicle testing. The ostensible goal is to shed light on the breadth of vehicle testing taking place across the country. The federal government maintains no database of autonomous vehicle reliability records, and while states like California mandate that companies testing driverless cars disclose how often humans are forced to take control of the vehicles, critics assert those disclosures are imperfect measures of safety.

Some of the AV TEST tool’s stats are admittedly eye-catching, like the fact that program participants are reportedly conducting 34 shuttle, 24 autonomous car, and 7 delivery robot trials in the U.S. But they aren’t especially informative. Major stakeholders like Pony.ai, Baidu, Tesla, Argo.AI, Amazon, Postmates, and Motional have seemingly declined to provide data for the tracking tool or have yet to make a decision either way. Moreover, several pilots don’t list the road type (e.g., “street,” “parking lot,” “freeway”) where testing is taking place, and the entries for locations tend to be light on detail. Waymo reports it is conducting “Rain Testing” in Florida, for instance, but hasn’t specified the number or models of vehicles involved.

For its part, Waymo says it evaluates the performance of its cars based on the avoidance of crashes, completion of trips in driverless mode, and adherence to applicable driving rules. But absent a vetting process, Waymo has wiggle room to underreport or misrepresent tests taking place. And because programs like AV TEST are voluntary, there’s nothing to prevent the company from demurring as testing continues during and after the pandemic.

Other federal efforts to regulate autonomous vehicles largely remain stalled. The DOT’s recently announced Automated Vehicles 4.0 (AV 4.0) guidelines request — but don’t mandate — regular assessments of self-driving vehicle safety, and they permit those assessments to be completed by automakers themselves rather than by standards bodies. (Advocates for Highway and Auto Safety also criticized AV 4.0 for its vagueness.) And while the House of Representatives unanimously passed a bill that would create a regulatory framework for autonomous vehicles, dubbed the SELF DRIVE Act, it has yet to be taken up by the Senate. In fact, the Senate two years ago tabled a separate bill (the AV START Act) that made its way through committee in November 2017.

Coauthors of the RAND reports say the software that automates self-driving cars needs to be tested within a broad, agreed-upon framework. Last January, Mcity at the University of Michigan released a white paper laying out safety test parameters it believes could work — an “ABC” test concept of accelerated evaluation (focusing on the riskiest driving situations), behavior competence (scenarios that correspond to major motor vehicle crashes), and corner cases (situations that test the limits of performance and technology). In this framework, on-road testing of completely driverless cars is the last step — not the first.

“They don’t want to tell you what’s inside the black box,” Matthew O’Kelly, coauthor of a recent report proposing a failure detection method for safety-critical machine learning, told VentureBeat. “We need to be able to look at these systems from afar without sort of dissecting them.”

