Whether corporations are people or not is a point of ongoing political contention in the United States. The issue, in fact, was legally settled back in 1819, when the U.S. Supreme Court ruled that a corporation is a collective of people doing business together and that its constituent persons retain their constitutional rights.
For instance, corporations can sue and be sued, and they can make contracts. These rights helped the early U.S. economy expand. However, corporations were not granted any constitutional right of political speech. Quite the contrary: in 1907, Congress passed a law banning corporate involvement in federal election campaigns.
Things changed in 1978, when the Supreme Court held that corporations have a First Amendment right to spend money on state referendums. Then came "Citizens United" in 2010, when the Supreme Court decided that corporations may spend funds as they wish in federal, state and local elections. This unleashed a flood of campaign cash and gave birth to controversies that continue today.
This brings us to the subject of autonomous cars. How should Google's self-driving cars be regulated for insurance and liability purposes? Should they be treated like pets or children or something else?
A Google car has no steering wheel and is fully autonomous. Its cameras constantly watch the road in all directions while the car decides on its speed and steers toward its destination. Google's car will never have a driver in the traditional sense; it will only have passengers. The driver is a computer program.
This seems significant. One might object that we have been leaving decisions to computer programs for a long time already, but that is not entirely true: human supervision has remained in place wherever a possibility of physical impact exists. It is one thing to have a computerized sprinkler system and something entirely different to have a car on a highway among other cars and trucks, most of which are driven by humans.
Several questions come to mind, and there are important legal, ethical and technical challenges ahead. Whenever we leave decisions to computers, we need to plan for contingencies. In high-speed trading, for example, institutions apply upper and lower limits to the risk allowed on each transaction, because human reaction speed is no match for the machines. It is no match in the case of autonomous driving either; it is unlikely that autonomous vehicle makers will ever allow passengers to "get involved" if the car veers off course.
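To make the risk-limit idea concrete, here is a minimal sketch in Python of the kind of automated pre-trade check such a system might run; the limit values and order fields are invented for illustration, not any institution's actual rules.

```python
from dataclasses import dataclass

# Hypothetical limits; real institutions tune these per instrument and per desk.
MAX_ORDER_VALUE = 1_000_000            # upper bound on exposure for one order
MIN_PRICE, MAX_PRICE = 0.01, 10_000.0  # sanity band on the quoted price


@dataclass
class Order:
    symbol: str
    quantity: int
    price: float


def within_risk_limits(order: Order) -> bool:
    """Reject any order that falls outside the preset risk band.

    The check runs in microseconds, long before a human supervisor could
    even see the order, let alone intervene.
    """
    if not (MIN_PRICE <= order.price <= MAX_PRICE):
        return False
    if order.quantity <= 0:
        return False
    if order.quantity * order.price > MAX_ORDER_VALUE:
        return False
    return True


# This order exceeds the exposure limit and is blocked automatically.
print(within_risk_limits(Order("XYZ", 200_000, 9.50)))  # False
```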
We can concoct many tricky scenarios in which the software makes an arguably wrong decision from a legal or technical standpoint. For example, who would be liable if the car runs over an animal that is not supposed to be on the road but is there anyway? One could counter that an autonomous car, with its numerous powerful sensors, cameras and radars, will easily avoid such occurrences: autonomous cars will collaborate with wireless road signs, report on traffic, road and weather conditions, and flag unusual conditions seconds or minutes before the event, and thus such events will be completely avoided.
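As a toy illustration of that optimistic picture, here is a hedged sketch of how warnings from a wireless road sign might be fused with onboard sensor detections into a speed decision; the message fields, thresholds and confidence values are all invented.

```python
from typing import NamedTuple


class Hazard(NamedTuple):
    source: str        # "camera", "radar", "v2x_roadsign", ...
    distance_m: float  # distance to the hazard in meters
    confidence: float  # detection confidence, 0.0 to 1.0


def plan_speed(current_speed_mps: float, hazards: list[Hazard]) -> float:
    """Toy planner: slow down according to the nearest credible hazard."""
    credible = [h for h in hazards if h.confidence >= 0.6]
    if not credible:
        return current_speed_mps
    nearest = min(h.distance_m for h in credible)
    if nearest < 10:
        return 0.0                          # emergency stop
    if nearest < 50:
        return min(current_speed_mps, 5.0)  # crawl past the hazard
    return min(current_speed_mps, 15.0)     # cautious approach


hazards = [
    Hazard("v2x_roadsign", 120.0, 0.9),  # road sign broadcasts "animals ahead"
    Hazard("camera", 35.0, 0.7),         # onboard camera spots movement
]
print(plan_speed(27.0, hazards))  # 5.0: the car slows well before impact
```

Every threshold in this toy planner is a choice its designers made in advance, on behalf of passengers who will never see the code.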
However, it is delusional to think that a fully informed computer will never err. There are problems that a computer (an algorithm) cannot fathom, which means there are situations in which the vehicle's computer will fail. Theory aside, there are many instances of computers failing; far too many. It happens all the time.
Therefore, the question is not whether an autonomous car will fail, but who will be held liable when it does. Who will be responsible: the owner of the car, the passengers, the company that produced the car, or the engineers who designed and developed the decision-making software?
To solve this problem, one might treat computer programs as people, by analogy with the way corporations are treated as people. We interact with, and are acted upon by, programs all the time. These entities are not just agents in the sense of being able to take actions; they are also agents in the representative sense, taking autonomous actions on our behalf. Programs often intrude on our privacy, as when Google filters our email; Google argues there is no breach of privacy because no humans are reading its users' mail.
By fitting artificial agents like smart programs into a specific area of the law, similar to the one that makes corporations people, these proxies can be regulated. Programs would then be treated as legal agents of their corporate or governmental principals, capable of acquiring information and knowledge much as humans do.
Recently, controversy erupted when a Mercedes-Benz executive said the company's future autonomous cars would save the lives of the car's driver and passengers even if it meant sacrificing the lives of pedestrians, in a situation where those are the only two options. Mercedes-Benz later retracted the story, saying the executive had been misquoted. However, if programs are indeed treated as people, at least legally, then whatever decisions a car company's program makes would, in turn, make the company responsible.
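If such a policy existed, it would have to be spelled out in code the manufacturer wrote, reviewed and shipped. The sketch below is purely hypothetical (the function, its inputs and its rule are invented for illustration, not Mercedes-Benz's actual software), but it shows how explicit such a choice becomes once it is programmed, and therefore how directly it could be attributed to the company under a "programs as legal persons" regime.

```python
from enum import Enum, auto


class Choice(Enum):
    PROTECT_OCCUPANTS = auto()
    PROTECT_PEDESTRIANS = auto()


def unavoidable_collision_policy(occupants_at_risk: int,
                                 pedestrians_at_risk: int) -> Choice:
    """Hypothetical policy matching the reported (and retracted) stance:
    when harm cannot be avoided, protect the people inside the car,
    regardless of how many people are at risk on either side."""
    return Choice.PROTECT_OCCUPANTS


# The rule itself is one line, but it is a line the manufacturer wrote,
# reviewed and shipped; the decision it encodes is traceable to the company.
print(unavoidable_collision_policy(occupants_at_risk=2, pedestrians_at_risk=3))
```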