It was in 2007, during the DARPA Urban Challenge in the Mojave Desert, that self-driving cars first proved they could navigate city streets in traffic, completing what may well have been the world’s slowest auto race. Speeds were capped at posted limits of 30 mph, and parking was part of the course.

I covered the 10-day qualifying event and race day itself for Wired. Back then, the year the iPhone launched and Windows Vista’s glitches and crashes made me a Mac convert, the race cars had to have their trunks stuffed with computers to pull off what was then considered borderline insanity: getting cars to drive themselves in ordinary urban traffic. As an example, see the photo I shot of Stanford University’s car during one of the qualifying events. The Stanford team took second place in the race, behind Carnegie Mellon, and its members went on to lead the autonomous car program at Google.

During the race, the consensus was that self-driving cars would start taking to public roads in significant numbers within 5 to 10 years. Right on schedule, we now have Tesla’s cars running part time on Autopilot around the world, Google’s self-driving cars undergoing long-term testing on streets and highways, and Uber testing self-driving taxis in Pittsburgh.

But any new technology coming out of controlled conditions inevitably shows quirks that can’t be anticipated until they happen. The stakes here are rather higher than for, say, new phones, however, as recent crashes of Tesla vehicles have shown.

Last week, news came to light of a fatal crash in China in January in which a Tesla Model S appears to have driven itself at highway speed into a slow-moving or stopped truck in the Tesla’s lane. Tesla says there’s no way to know whether its car was in Autopilot mode because it was too badly damaged in the crash to send back data to Tesla servers afterwards. But the company has confirmed that a later fatal crash, in Florida in May, did occur while the car was driving itself. In that case, the car apparently couldn’t distinguish the white side of a truck turning in front of it from the sky behind it.

These problems aside, autonomous vehicle technology seems well on its way toward achieving the widespread use we all predicted at the Urban Challenge. In fact, just this week, the U.S. Department of Transportation released the first Federal Automated Vehicles Policy, providing guidelines for manufacturers while still leaving the industry breathing room for innovation.

I asked Bart Selman, professor of computer science at Cornell and director of the university’s Intelligent Information Systems Institute, to give me his take on where we stand now, what might have gone wrong in the China crash, and where he expects the industry to go from here.

For starters, he says, the more common self-driving cars become, the more they will be embraced by the public.

“Broader familiarity with self-driving technology will help change perception. For example, we will start to see more cases (with videos) where self-driving technology or driver assistive technology helps avoid accidents when the vehicle takes defensive action. The Uber trials in Pittsburgh will also help show people how autonomous car technology can be highly reliable and safe.”

At the same time, challenges will remain for some time around unforeseen events (like a parked truck taking up the fast lane of a highway in poor visibility, as was the case in the China crash).

“Highly unusual events can still be problematic because it’s hard to get sufficient data for training on such events. It seems likely that the technology will first need to be deployed in areas with clear driving conditions and clear road markings. Regulation can help introduce the technology in a safe manner.”

Even with these initial limitations, Selman believes that the technology is on track to become 10 times safer than human drivers in the near future, with 100-times-safer self-driving cars not too much farther away. Expect “a factor 10 under clear driving conditions within three years,” says Selman, and “factor 100 within ten years.”

But that high level of safety will come at a price.

“I expect to see full autonomous driving first in commercial transport (including Uber). Full autonomy with high safety requires many sensors on the car, multiple cameras, lidar, and lots of compute power. This can easily add $30K to $50K to the vehicle. So, initially, full autonomy will be mainly for commercial vehicles.”

Then, too, says Selman, autonomous driving needs to come with some measure of common sense.

“I believe there will need to be some minimal requirements on visibility and road conditions for the safe operation of autonomous driving. This is actually in line with human driving. For example, in the Northeast, we do close down schools and other institutions during and after heavy snow storms. There are conditions where visibility and road conditions are simply not safe no matter how good a driver, human or computer, is in the car.”

In the case of the China crash, dashcam video has been released, giving Selman and others an opportunity to engage in some informed speculation about what happened.

Says Selman, “What you see right before the accident is that other cars start moving out of the left-most lane. This is a signal for human drivers that something may be visible to other drivers even if you yourself cannot yet see any issue. People will then generally start following the behavior of the other drivers. (In a snow storm, it’s typical that drivers follow the car right in front of them, often the only thing clearly visible.) The Tesla car has no clue about the behavior of other drivers. So, it happily stayed in its lane and had only its vision and radar system to rely on. Given the low visibility and the very low likelihood of a slow-driving truck on the side of the road, the system made a probabilistic judgment and decided ‘road clear.’ That’s why the car does not even slow down.”
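To make that concrete, here is a deliberately simplified sketch of the kind of probabilistic call Selman is describing. This is my own illustration, not Tesla’s actual software: the prior, sensor reliabilities, and threshold are all invented numbers, chosen only to show how a rare event plus a weak detection in poor visibility can still come out as “road clear.”

```python
# Hypothetical illustration only -- not Tesla's Autopilot logic.
# Bayes' rule: a very low prior for "stopped truck in the fast lane"
# can swamp a weak detection made in poor visibility.

def p_obstacle(prior: float, p_hit_given_obstacle: float,
               p_hit_given_clear: float, hit: bool) -> float:
    """Posterior probability of an obstacle after one sensor reading."""
    if hit:
        num = p_hit_given_obstacle * prior
        den = num + p_hit_given_clear * (1 - prior)
    else:
        num = (1 - p_hit_given_obstacle) * prior
        den = num + (1 - p_hit_given_clear) * (1 - prior)
    return num / den

# Invented numbers: stopped trucks in a fast lane are rare (1 in 1,000),
# and haze makes the camera unreliable (40% hit rate, 5% false alarms).
posterior = p_obstacle(prior=0.001,
                       p_hit_given_obstacle=0.4,
                       p_hit_given_clear=0.05,
                       hit=True)
print(f"posterior = {posterior:.3f}")  # ~0.008, far below any plausible
                                       # braking threshold, so the system
                                       # reads the road as clear
```

Even with a positive detection, this toy model puts the chance of an obstacle under 1 percent, so the car keeps its speed, which is exactly the failure mode Selman describes.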

Even before all the kinks are ironed out, however, Selman expects greater numbers of self-driving cars to reduce overall traffic fatalities significantly. “In fact,” he says, “people may start to insist that at least other cars switch to self-driving mode.”

What can be done to increase safety in the meantime? Selman suggests a mechanism by which cars go into self-driving mode only when road conditions permit safe computer-controlled driving, and return control to a human driver when the situation deteriorates.
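As a rough sketch of what such a mode-switching mechanism might look like, consider the toy state machine below. This is my own illustration, not a description of any shipping system; the mode names, condition checks, and thresholds are all hypothetical.

```python
# Hypothetical sketch of conditional autonomy -- not any vendor's design.
# The car may engage self-driving only when conditions permit, and hands
# control back to the human (with a warning phase) when they deteriorate.

from enum import Enum, auto

class Mode(Enum):
    HUMAN = auto()       # human is driving
    AUTONOMOUS = auto()  # computer is driving
    HANDOVER = auto()    # alerting the human to take back control

# Invented thresholds for "road conditions permit safe operation."
MIN_VISIBILITY_M = 150
MAX_RAIN_MM_PER_HOUR = 10

def conditions_ok(visibility_m: float, rain_mm_h: float,
                  markings_visible: bool) -> bool:
    return (visibility_m >= MIN_VISIBILITY_M
            and rain_mm_h <= MAX_RAIN_MM_PER_HOUR
            and markings_visible)

def next_mode(mode: Mode, ok: bool, driver_ready: bool) -> Mode:
    if mode is Mode.HUMAN:
        # Engage only when conditions permit AND the driver opts in.
        return Mode.AUTONOMOUS if ok and driver_ready else Mode.HUMAN
    if mode is Mode.AUTONOMOUS:
        # Worsening conditions trigger a warned handover, not a sudden drop.
        return Mode.AUTONOMOUS if ok else Mode.HANDOVER
    # HANDOVER: keep driving until the driver confirms they have control.
    return Mode.HUMAN if driver_ready else Mode.HANDOVER

mode = next_mode(Mode.HUMAN, conditions_ok(400.0, 0.0, True),
                 driver_ready=True)
print(mode)  # Mode.AUTONOMOUS: clear conditions, driver opted in
```

The hard part, of course, is the HANDOVER state: a system that has just decided conditions are unsafe must still drive safely for however long it takes a possibly distracted human to re-engage.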

That would seem to require common sense on the part of drivers as well as their vehicles. Until then, I’m looking forward to seeing videos of crashes being averted by self-driving cars. If you’ve already seen them, post a link in the comments.