Self-Driving Cars

Last month, the already-innovative company Uber launched its first self-driving cars in Pittsburgh, USA. Goodbye sharing economy, hello robot-driven market. Uber joins companies such as Tesla, Google and Ford in the race for the ultimate autonomous car: one that will not only calculate the shortest route to work or control your cruising speed on the highway, but let you take your hands off the wheel entirely. Can such a car truly exist outside the pages of a science-fiction novel? And, perhaps more importantly, is it something we want?

Story by: Olivia Miossec

When you think of driverless cars, your mind may conjure Will Smith valiantly fighting robots aboard his fancy Audi in ‘I, Robot’, or Bruce Willis doing stunts among the flying vehicles in ‘The Fifth Element’. Or maybe you feel a shudder at the thought of Stephen King’s killer car ‘Christine’. However, driverless cars today are nothing like these futuristic examples. First of all, completely driverless cars, such as the model championed by Google, are not available for public consumption, and may never be. Uber’s newly launched ‘autonomous’ taxis come with a driver and a front-seat engineer. While Tesla does have its self-driving Model S on the market, this car is made to facilitate driving, not to replace it. Not really the dream automobile the aforementioned movies envisioned for us.

However, we are getting there. Tesla’s Model S, while not entirely driverless, does offer self-driving features such as steering within a lane, changing lanes, and braking or speeding up without any human input. How does this work? For now, these cars use 360-degree cameras as their eyes. In suboptimal conditions, such as darkness or rain, they also rely on radar and ultrasonic sensors, which detect obstacles the cameras might miss. The car’s software then registers all this information and issues the appropriate commands to the vehicle. In this way, the software mimics the human cortex, but without the interference of human emotions – those that fuel our road rage, distract our attention from the road or cloud our senses. Unfortunately, this technology is still not sophisticated enough to replace us. So while these cars can relieve us of the more mundane tasks of driving, Tesla still requires drivers to keep their hands on the wheel at all times, in case a complex or confusing situation arises. Like a pilot whose plane is on autopilot, we are expected to give up the role of ‘main driver’ for the arguably duller task of ‘supervisor’.
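The pipeline described above – several sensors feeding one piece of software that issues a command – can be sketched in a few lines. This is only an illustrative toy, not Tesla’s actual logic; the function name, sensor arguments and 30-metre safety gap are all invented for the example:

```python
def fuse_and_decide(camera_m=None, radar_m=None, ultrasonic_m=None, safe_gap_m=30.0):
    """Toy 'cortex': trust the closest obstacle reported by any sensor,
    and brake if it falls inside the safe gap. The radar and ultrasonic
    readings matter most when darkness or rain degrades the cameras."""
    readings = [r for r in (camera_m, radar_m, ultrasonic_m) if r is not None]
    nearest = min(readings) if readings else float("inf")
    return "brake" if nearest < safe_gap_m else "maintain_speed"

# In heavy rain the camera may report nothing while the radar
# still sees a vehicle 20 metres ahead:
command = fuse_and_decide(camera_m=None, radar_m=20.0)
```

The point of the sketch is the fusion step: no single sensor is trusted alone, and the most conservative reading wins – a crude stand-in for what the article calls mimicking the human cortex without the emotions.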


Does partial automation then pave the road to driver safety? Can humans successfully collaborate with AI? We are asked to become passive supervisors of our vehicles, without physically interacting with the gears, pedals or wheel. Arguably, this demands skills almost paradoxical to the promise of self-driving: constant alertness and quick thinking. Given how many accidents are already caused by texting while driving, can we trust people with more physical freedom but a greater mental burden? The recent fatal crash of a Tesla Model S, whose software could not distinguish a white truck against the bright morning sky, tragically illustrates the problem. Even though Tesla stipulates that drivers keep their hands on the wheel, this driver is believed to have been watching a film at the moment of the crash. He was thus not alert enough to take control once the software failed. After all, he was only human. A similar phenomenon, and a possible premonition, is the ‘automation addiction’ encountered in aviation. Pilots appear to rely more and more on autopilot, which can lead to decreased alertness and, ultimately, an inability to respond appropriately when the technology fails. An infamous example is the 2009 Rio–Paris crash: while the problem began with a technical failure and the loss of autopilot, it was the pilots’ unprepared and erroneous response that sent the plane into its final dive into the ocean. Would we drivers run similar risks when relinquishing control to technology? Will we still be able to rely on our senses and skills if our self-driving cars fail us?

Is a possible solution, then, to simply take the human out of the equation? This is the ambition of Google and its own fleet of self-driving cars. Unlike Tesla, Google believes that self-driving cars should have no driver and no steering wheel. Besides cameras and radar, its cars rely on ‘Light Detection and Ranging’, or Lidar: a cone-shaped device that rotates atop the vehicle, continuously firing laser beams in every direction. As the light bounces off surrounding objects, a real-time 3D map of the environment is created. One advantage over Tesla’s radar is that Lidar can detect the shape, and not only the presence, of obstacles. This, along with the cameras, the smart software, and redundant systems in case of failure, is hoped to be enough for a car to drive itself. Unfortunately, many problems remain to be solved before we launch into an Isaac Asimov short story. The ‘eyes’ of self-driving cars are imperfect. When lane markings are missing or covered by snow, we humans can adapt and rely on other environmental cues; the ‘eyes’ and ‘brains’ of self-driving cars are not yet flexible enough to do this. Similarly, they struggle with bridges: surrounded by mostly empty space, with few environmental cues, the software cannot situate the car within its internal map. So for now, we can revel in the fact that, while we humans may be easily distracted, at least we can drive across bridges without falling into the water (or most of us can).
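The geometry behind that 3D map is simple: each laser return is an angle pair plus a measured distance, which trigonometry turns into a point in space. A minimal sketch, assuming a sensor-centred coordinate frame (the function name and axis conventions are invented for illustration):

```python
import math

def lidar_return_to_point(azimuth_deg, elevation_deg, range_m):
    """Convert one laser return (beam direction + measured distance)
    into an (x, y, z) point relative to the rooftop sensor."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# One return: a beam fired 90 degrees to the left, level with the
# sensor, that bounced back from 10 metres away.
point = lidar_return_to_point(azimuth_deg=90.0, elevation_deg=0.0, range_m=10.0)
```

A spinning Lidar produces hundreds of thousands of such points per second; stacked together they form the real-time point cloud from which the car infers the shape, not just the presence, of obstacles.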


Another possible limitation in this utopian/dystopian scenario? As always, us humans. Our emotions and irrational behaviour always seem to get in everyone’s way. While Google’s cars have driven themselves around quite successfully, they have reportedly been involved in twenty accidents – fifteen of them caused by being rear-ended by a human-driven car. Similarly, a journalist riding in an autonomous Uber noted that the driver usually had to take over the controls when other, manual drivers did something illegal. It seems the smart software cannot comprehend why a car in a left-turn lane would turn right. The developers evidently forgot to add ‘awareness of human stupidity’ to the algorithm. Furthermore, while human drivers communicate their intent to each other through universal hand gestures and light signals, things may get more complicated once self-driving cars share the road with us. How does one wave a driverless car ahead at an intersection? Would a driverless car understand that it needs to let you back into a parking space? Is the solution, then, to eradicate human-driven cars completely? Indeed, one vision is to have only self-driving cars on the road, sparing these poor cars the imperfect and erratic situations our presence creates. Vehicle-to-vehicle communication could then be implemented so that each car transmits its intent and registers that of others. Driving could become more predictable, harmonious and safe.
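What might that vehicle-to-vehicle exchange look like? A minimal sketch, assuming each car broadcasts a small structured message announcing its next manoeuvre; the message fields, action names and yielding rule below are all hypothetical, invented purely to make the idea concrete:

```python
from dataclasses import dataclass

@dataclass
class IntentMessage:
    """One hypothetical vehicle-to-vehicle broadcast: the sender
    announces what it is about to do, so others need not guess."""
    car_id: str
    action: str  # e.g. "turn_left", "brake", "reverse_into_space"
    lane: int

def should_yield(own_action: str, other: IntentMessage) -> bool:
    # Toy rule: a car intending to proceed gives way when a neighbour
    # announces a manoeuvre that crosses its path.
    crossing = {"turn_left", "reverse_into_space"}
    return own_action == "proceed" and other.action in crossing

# A neighbouring car announces it is backing into a parking space,
# so our car knows to wait rather than misread the situation.
msg = IntentMessage(car_id="car_42", action="reverse_into_space", lane=1)
wait = should_yield("proceed", msg)
```

The appeal of such a scheme is exactly what the paragraph above describes: declared intent replaces hand gestures and guesswork, so the erratic edge cases that confuse today’s software simply stop arising.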

Ultimately, what would these cars provide beyond the important, albeit debatable, feature of safety? They could give greater autonomy to people with disabilities such as blindness or reduced mobility. Drunk driving would be a thing of the past. Children could be picked up from school while their parents are at work. And perhaps most importantly, our minds would be freed from the taxing and repetitive act of driving. We could reach farther destinations in shorter times, the cars driving on as we sleep. The interior could be completely redesigned so that passengers face each other, bond and talk. Cars could become mobile homes, designed for our comfort on day-long road trips.

On the other hand, a fear shared by many is that increased safety while driving comes at a cost. Self-driving cars may indeed encourage us to retreat further from the real world and into our own private bubbles. As manual driving becomes a thing of video games, we may lose even more contact with our surroundings and disappear further into the alternate realities of our smart devices. While today we text, email, tweet or snap on crowded buses and subways, tomorrow we could be doing so in the confinement of our own cars. Is this truly a solution in a world where people already favour screens over faces and software over human relationships? Should we not instead encourage people to connect, empathize and communicate, rather than allow them to ignore each other on the streets as they consult their social media and dating apps? There is also the question of big data. Today, traces of our internet activity are shared with and sold to companies and government agencies. With Google cars on the road, imagine the amount of information accumulated about your daily commute, your monthly trips, your location. Imagine every inch of our society being scanned by those 360-degree cameras! This is an extreme, dystopian and slightly paranoid view of things, but it is still an important one to consider.

Ultimately, while there has been a lot of progress in the development of driverless cars, many questions and uncertainties remain. Only time, and the blood and sweat of Silicon Valley engineers, will tell us the answers.

This article was previously published in Medicor 2016 #3
