A few weeks ago, I was driving to work on the main two-lane state highway here in our semi-rural corner of Maryland. It was the Friday before Labor Day weekend, so traffic was lighter than usual. It was also a clear day, with no wet roads or fog.
In other words, a perfect day for driving.
All of a sudden, an oncoming car drifted into our lane. In fact, it appeared to be heading straight for the vehicle about four or five car lengths in front of me.
In the inevitable collision that occurred (thankfully not completely head-on, but sickening enough at 55 mph), there were injuries and ambulances … a closed road for 90 minutes … statements to the police required of me and others … and two wrecks to be towed.
The cause of this accident had to be a case of distracted driving – perhaps the driver was reaching for a smartphone, checking a text message or doing something else that took their eyes off the road just long enough to cause a serious crash.
It got me to thinking about recent news reports touting “self-driving” cars of the future.
Certainly in a case like this, self-driving features like nudging the vehicle back into the correct lane could easily have prevented the collision from ever occurring.
Self-driving vehicles seem like a very nice idea in theory, and in practice they’re not very far off — at least if the news reports are to be believed.
Nissan, Volvo, Daimler and other leading car companies are predicting that commercial models will be a common sight on the road by about 2020 … and by about 2035, a majority of cars operating will have this technology.
But in order to get there from where we are now, we’re going to have to deal with numerous challenges. Here are a few that seem particularly nettlesome:
- Will operators of self-driving cars require a different kind of vehicle training?
- How will highways accommodate vehicles with and without drivers?
- Will self-driving cars perform equally well in different road environments – ordinary roads in addition to super-highways?
- How will insurers determine who is at fault if a self-driving car crashes – the car or the driver?
- How will automotive manufacturers ensure that cars’ onboard computers can’t be hacked?
And here’s another technology challenge: What sort of back-end servers will be required to process the huge amounts of vehicular data … as well as secure ways for cars to communicate in real-time with the cloud and other vehicles? (Daimler has reported that its self-driving test vehicle produces 300 gigabytes of data every hour from its stereo camera alone.)
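To put Daimler's figure in perspective, here's a quick back-of-the-envelope calculation (my own arithmetic, assuming decimal gigabytes) showing the sustained bandwidth just that one sensor demands:

```python
# Back-of-the-envelope: sustained data rate implied by Daimler's
# reported 300 GB per hour from a single stereo camera.
# Decimal units assumed (1 GB = 10^9 bytes).
GB_PER_HOUR = 300
SECONDS_PER_HOUR = 3600

bytes_per_second = GB_PER_HOUR * 1e9 / SECONDS_PER_HOUR
megabytes_per_second = bytes_per_second / 1e6

print(f"~{megabytes_per_second:.0f} MB per second, from one sensor alone")
```

That's roughly 83 megabytes every second – before you even count radar, lidar or the car-to-cloud communications – which gives a sense of the server and networking infrastructure these vehicles will require.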
And lest you become really anxious, don’t think very hard about the kind of data that’s being captured, chronicled and saved on each and every self-driving car’s trip – including “where it’s been when” and “how fast it got there.”
I also wonder about the transition period when there will be a mix of self-driving cars and traditional vehicles sharing the road.
If self-driving cars “react” to other vehicles so easily, won’t it be really tempting for driver-operated vehicles to make end-runs around self-driving cars or otherwise cut them off, knowing that those cars are programmed to move out of the way to prevent a collision?
Roy Goudy, a senior engineer at Nissan, has commented that since “autonomous” cars can react more quickly to potential hazards than can cars driven by people, it will be difficult to have both on the road at the same time.
“What are the rules in that environment, and what do we do to enforce those rules?” Goudy asks.
I think the future of driving is a very intriguing subject. Self-driving vehicles could mean far fewer traffic-related injuries and deaths … and they could bring more mobility and independence to disabled people and the elderly.
We just need to figure out a way to get there.