In 1905, Professor Frank Chapman Sharp put a dilemma before the undergraduate students at the University of Wisconsin, a dilemma of the kind now known as the Trolley Problem.1 The problem involves a train with failed brakes rushing towards a point where the track branches in two. One branch has a single individual on it; the other has four children. If one were the driver of the train, which track would one choose? Here, it is assumed that we consider the greater good and would therefore choose the track with the single individual, thereby protecting the four children. This supports utilitarianism. That decision is made by our rational minds: human minds have the power to differentiate between good and evil, right and wrong.
With technology ever advancing and enveloping humankind, artificial intelligence is the future. If the trolley problem were posed in the case of a driverless car, would it produce the same decision? Would the car possess the rationality to save the four children? And whom would one hold responsible for the accident: the artificial intelligence within, or the company that manufactured the car?
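The contrast between rational deliberation and programmed behaviour can be made concrete with a toy sketch. The following is entirely hypothetical (the function name and casualty counts are illustrative, not drawn from any real driverless-car system): the car would not "reason" about the dilemma at all, but mechanically apply whatever rule its manufacturer encoded in advance, for example a crude utilitarian count.

```python
# Hypothetical illustration: a crude utilitarian rule a manufacturer
# might program into a driverless car. The car does not deliberate;
# it mechanically applies the rule that was encoded in advance.

def choose_track(casualties_left: int, casualties_right: int) -> str:
    """Steer toward the track with the fewer expected casualties."""
    return "left" if casualties_left <= casualties_right else "right"

# One individual on the left track, four children on the right:
print(choose_track(1, 4))  # -> left
```

The sketch underlines the legal point: the "decision" traces back to a rule chosen by a human programmer, not to any rational mind in the machine.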
AI has brought in a new wave of legal questions that demand extensive research and thought. Under the law, we may only sue a “legal person” who is “18 years of age, is of a rational mind and is not an undischarged insolvent.” The age and insolvency aspects can be set aside, but the concept of a rational mind has raised numerous eyebrows. With constant progress in replicating the mind through neural networks and code, it has become comparatively easy to imitate human behaviour, yet scientists are still trying to rationalise the mind itself. The English physicist and mathematician Sir Roger Penrose of Oxford University, in his book The Emperor’s New Mind,2 argues that decision-making resists formal description: we as humans have still not understood why we make certain decisions, or why certain thoughts come into our heads no matter how rational we are, and such behaviour is difficult to replicate. He argues in the book that human consciousness is non-algorithmic, and thus is not capable of being replicated by a conventional machine. Mention must also be made of Alan Turing,3 whose abstract Turing machine formalised what a conventional machine can compute, and whose codebreaking work helped decipher German ciphers during WWII.
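To illustrate what “algorithmic” means in this argument, a minimal Turing-machine simulator can be sketched (this is our own illustrative code, not drawn from Turing’s paper): every step is a deterministic lookup in a fixed rule table, which is precisely the kind of mechanical rule-following that Penrose contends consciousness does not reduce to.

```python
# Minimal Turing machine simulator (illustrative sketch). Each step is a
# deterministic table lookup: (state, symbol) -> (write, move, next state).
# This mechanical rule-following is what "algorithmic" means here.

def run_turing_machine(tape, rules, state="start", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")           # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example rule table: flip every bit, halt at the first blank cell.
flip_rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine("1011", flip_rules))  # -> 0100
```

Whatever the machine “decides” is fully determined by the rule table it was given, which is exactly the property Penrose argues human consciousness lacks.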
To simplify legal issues, the usual approach would be to sue the company that manufactured the AI. This makes litigation easier and holds an identifiable legal entity responsible. The main question, then, is whether we can ever treat an AI as a legal entity in its own right. Could it take the blame itself? Law has developed from specified rules decided or practised for ages; it is settled by considering behaviour and misconduct, and legal proceedings come to a standstill if the accused is not human. A number of regulatory policies need to be developed for this aspect. The legal questions that arise are: should separate policies or separate laws be framed for AI? Would an AI be treated in the same way as an individual?
1 Philippa Foot, ‘The Problem of Abortion and the Doctrine of the Double Effect’ (Oxford Review, 1967)
2 Roger Penrose, The Emperor’s New Mind (Oxford University Press, New York, 1989) ISBN 0-19-851973-7
3 Alan Turing, ‘Computing Machinery and Intelligence’ (1950) Mind, Vol LIX, Issue 236, 433–460
With the rise of AI, it has become increasingly important to understand whether these legal problems can be solved, and alternative methods should be explored to arrive at a suitable solution. Law relies on predictability; an AI, by contrast, behaves only as it has been programmed, and its conduct cannot be assessed in the way human conduct is. It may still take years to arrive at a legal framework for dealing with AI, but the problem is that these solutions need to be found quickly, for technology waits for no one.