UberX is now offering a fleet of self-driving Volvo XC90s.
Uber’s driverless SUVs are a pilot program (no pun intended) in San Francisco and Pittsburgh.
The cars will be easy to spot, with roof-mounted lidar and an array of cameras attached.
For now, technicians will ride along as co-passengers, ready to take immediate control in case of emergency.
But the stats are clear: self-driving cars are far safer.
The problem? Everyone else.
How safe will it be for drivers in San Francisco or Pittsburgh if other drivers are distracted by these robots? There are already issues with rubberneckers.
Also, there’s a long-standing observation in artificial intelligence known as Moravec’s Paradox:
“Hard problems are easy and easy problems are hard.”
Essentially, when programming artificial intelligence (which this Uber program is, though not Terminator-style), the tasks that seem simplest to humans, like perception and mobility, turn out to be the hardest to program.
Especially when you’re dealing with human fallibility.
Picture it: an UberX self-driving car approaches a traffic jam and its software kicks in, telling the car to slow down. The human driver next to it, on the other hand, speeds up and tries to swerve around it.
How would Uber’s cars react to an idiot driver?
Perhaps a plethora of scenarios like this are already programmed into these AI machines, along with the sensors to detect them.
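For a feel of what “scenarios programmed in” might look like, here is a minimal, purely hypothetical Python sketch of rule-based fallback logic. The class, function, and thresholds are my own inventions for illustration; they are not Uber’s actual system, and real autonomous-vehicle planners are vastly more sophisticated.

```python
# Hypothetical sketch: how a planner *might* react to a jam ahead
# while an erratic human driver swerves into its lane.
from dataclasses import dataclass

@dataclass
class TrackedVehicle:
    lateral_offset_m: float    # distance from the ego car's lane centre
    closing_speed_mps: float   # how fast it is approaching the ego car
    time_to_collision_s: float # estimated seconds until contact

def plan_speed(current_speed_mps: float, jam_ahead: bool,
               neighbours: list[TrackedVehicle]) -> float:
    """Return a target speed given a jam ahead and nearby traffic."""
    target = current_speed_mps

    if jam_ahead:
        # Normal case: ease off well before the stopped traffic.
        target = min(target, 5.0)

    for v in neighbours:
        cutting_in = abs(v.lateral_offset_m) < 1.5 and v.closing_speed_mps > 0
        if cutting_in and v.time_to_collision_s < 2.0:
            # An erratic driver swerving into our gap: brake rather
            # than contest the space.
            target = 0.0

    return target

# Example: slowing for a jam while a human driver cuts across our lane.
print(plan_speed(20.0, jam_ahead=True,
                 neighbours=[TrackedVehicle(0.8, 3.0, 1.2)]))
```

The point of a sketch like this is that every “idiot driver” behaviour has to be anticipated and encoded (or learned) in advance, which is exactly where human fallibility makes the problem hard.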
But my thought is this: either we make all cars self-driving, which would reduce accidents to next to nothing, or none at all. Having both on the road could be much more dangerous.
What are your thoughts?