We don't know whether Google will ever make or sell cars, but last week the company unveiled its first fully autonomous prototype electric car. It is designed to seat two people and can drive about 100 miles on a single charge. Here is the most important thing about it: Google's car does not have a steering wheel; it is a fully autonomous vehicle. Its cameras constantly watch the road in all directions while the software decides on the speed and steers the car toward its destination.
This is not a brand new venture for Google. It has been working on autonomous vehicles since 2010, and the project it took over has an even longer history. Sebastian Thrun, a former director of Stanford University's Artificial Intelligence Laboratory and an inventor of Google Street View, manages the project at Google. His team built one of the first robotic vehicles, earning awards and funding from the U.S. government in 2005, and it continues to develop autonomous vehicle technology. Since 2011, starting with Nevada, various U.S. states have permitted autonomous cars on their highways. As of September 2012, California also allowed them to operate for test purposes, and Germany, the Netherlands, Spain, and Finland have welcomed them for the same purpose. Google's cars will never have drivers, only passengers; the software takes care of the driving.
This is significant because one might think that we have been leaving decisions to software (computers) for a long time already. However, this is not entirely true. There is always human supervision and extreme care whenever a possibility of "physical impact" exists. It is one thing to have a computerized sprinkler system and something completely different to have a car on a highway with other cars and trucks, many of which are driven by humans.
The implications are significant. Several questions come to mind, mostly dealing with the important legal, ethical and technical challenges ahead. In a previous article I wrote about "high-speed trading," where software decides to trade securities within a few microseconds. Whenever we leave decisions to computers, we need to plan for contingencies.
In the case of high-speed trading, the institutions apply upper and lower limits on the risk involved in each transaction, as the sketch below illustrates. Human speed is no match here, and it is no match in the case of autonomous driving either. It is unlikely that autonomous vehicle makers will allow a passenger to "get involved" if the car veers off course. We can concoct many tricky scenarios in which the software makes a decision that is later deemed wrong from some legal or technical standpoint. For example, who would be liable if the car runs over an animal that is not supposed to be on the road? I can see objections developing to my simple scenario. One could say that an autonomous car, with its many powerful sensors, cameras, and radars, will avoid such occurrences; that autonomous cars will collaborate with wireless road signs, reporting on traffic, road conditions, weather, and unusual events seconds or minutes in advance; and that such incidents will therefore be avoided completely.
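To make the idea of per-transaction limits concrete, here is a minimal, hypothetical sketch in Python. The names, bounds, and structure are my own assumptions for illustration; real trading systems are far more elaborate.

```python
# Hypothetical illustration of a per-order risk check of the kind described above.
# All names and limit values are invented for the example.

from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int   # shares to buy (positive) or sell (negative)
    price: float    # limit price per share

MAX_ORDER_VALUE = 1_000_000.0   # assumed upper bound on exposure per transaction
MIN_ORDER_VALUE = 100.0         # assumed lower bound, filtering out degenerate orders

def within_risk_limits(order: Order) -> bool:
    """Accept only orders whose notional value falls inside the configured bounds."""
    notional = abs(order.quantity) * order.price
    return MIN_ORDER_VALUE <= notional <= MAX_ORDER_VALUE

# The algorithm submits only orders that pass the check; everything else is
# rejected automatically, long before a human could react.
order = Order(symbol="XYZ", quantity=500, price=42.50)
print("submitted" if within_risk_limits(order) else "rejected by risk limits")
```

The point of such a check is not sophistication but speed: the contingency plan has to run at machine speed, because no human can intervene in microseconds.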
However, it is a fallacy to think that a fully informed computer will never err. In fact, one of the tenets of computer theory, the "halting problem," teaches us just that: there are problems that a computer (an algorithm) cannot solve. Along the lines of Gödel's incompleteness theorem, which shows that any sufficiently rich axiomatic system contains propositions that can be neither proved nor disproved, there are problems a computer cannot decide; in this case, there will be situations in which the car's computer fails.
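For readers who want the gist of the halting problem, here is a minimal sketch of the standard argument in Python. The function names are illustrative, and the "oracle" is deliberately left unimplemented, because the whole point of the argument is that it cannot exist.

```python
# Sketch of the classic halting-problem argument. Suppose, for contradiction,
# that a perfect function halts(program, program_input) always answers
# correctly whether program(program_input) eventually halts.

def halts(program, program_input) -> bool:
    """Hypothetical oracle: True if program(program_input) halts."""
    raise NotImplementedError("No such general-purpose oracle can exist.")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts the program does
    # when it is fed its own source.
    if halts(program, program):
        while True:     # loop forever if the oracle says "it halts"
            pass
    else:
        return          # halt immediately if the oracle says "it loops"

# Feeding troublemaker to itself produces a contradiction: if the oracle says
# it halts, it loops forever; if the oracle says it loops, it halts. Hence no
# perfect decision procedure of this kind can exist.
```

The analogy to the car is loose but instructive: no amount of information guarantees a correct decision in every situation.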
Theory aside, there will be many instances of computers failing; far too many. It happens all the time. Therefore, the question I am addressing is not whether an autonomous car will fail (it definitely will) but rather who will be held liable for the failure. Who is responsible: the owner of the car, the passengers, the company, or the engineers who designed and developed the decision-making software?
In some ways we have already faced versions of the same problem in air travel. Passenger airplanes are too complex to be flown by a single pilot, or even a pair of them; computerized systems constantly measure, calculate, and make decisions throughout the flight. We place a lot of trust in the pilot's ability to save the passengers when things go wrong, but often it is either too late for him or her to intervene or the system will not cooperate. Still, air travel is the safest form of travel.