History of AI and Its Ethics

AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies. It is a set of principles and morals that attach duties and responsibilities to engaging with machines in a fair, acceptable and just manner[1]. The field of AI ethics has largely emerged in response to the individual and societal harms that the misuse, abuse, poor design, or negative unintended consequences of AI systems may cause. Embedding ethics in AI systems requires a multidisciplinary team effort. It demands the active cooperation of all team members, both in maintaining a deeply ingrained culture of responsibility and in executing governance that adopts morally sound practices at every point in the innovation and implementation process.

Ethics in Artificial Intelligence has become a prominent concern in recent years, but the concern itself dates back centuries, and philosophy has long been intertwined with it. Four philosophical themes about technology have been acknowledged since ancient Greece. Plato, in Laws X (899a ff.), stated that technology imitates or learns from nature: house building, for instance, was done by imitating the swallows and the spiders. With the Renaissance in the 1500s, thinking about ethics, particularly in the field of technology, started gaining ground[2]. Laws emerged that attempted to safeguard life and community, because technology can easily be misused to create wealth in disregard of morals and virtue, which destabilises society.

Multiple terms, such as ‘moral AI’ and ‘ethical AI’, are in use, and there are many different flavours of approach toward moral AI. Moral reasoning is obviously needed in robots that have the capability for lethal action, but moral AI goes beyond obviously lethal situations, and we can have a spectrum of moral machines. An example of a non-lethal but ethically charged machine would be a lying machine[3]. Machines have to operate under some moral code, and those using them must likewise act ethically and avoid violating the rights of others, whether or not a law exists. This requires that the formal framework the machine uses for reasoning be expressive enough to receive such codes. These ethics stem mostly from two origins: the first is direct, through explicit coding; the second is indirect, with the machine acquiring morals through reading.
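To make the ‘direct’ origin concrete, here is a minimal, purely illustrative Python sketch of explicitly coded morals: the machine consults a hand-written set of forbidden actions before acting. The `MoralGate` class and the action names are hypothetical, invented for this example; real formal frameworks for machine ethics are far more expressive than a simple blacklist.

```python
# A minimal, hypothetical sketch of "direct" moral coding: the moral
# code is written out explicitly by the designers, not learned.

FORBIDDEN_ACTIONS = {"deceive_user", "use_lethal_force"}  # hypothetical codes

class MoralGate:
    """Screens proposed actions against an explicitly coded moral code."""

    def __init__(self, forbidden):
        self.forbidden = set(forbidden)

    def permitted(self, action):
        # The constraint is explicit and inspectable, not implicit in data.
        return action not in self.forbidden

gate = MoralGate(FORBIDDEN_ACTIONS)
for action in ("answer_question", "deceive_user"):
    print(action, "->", "permitted" if gate.permitted(action) else "blocked")
```

The indirect origin would, by contrast, have the machine infer such constraints from what it reads, with no guarantee that the resulting code is inspectable in this way.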

In 1936, Alan Turing[4] developed a universal calculating machine, also known as the Turing Machine, which supported his theory that any problem that can be presented as an algorithm can be solved by machine. He was one of the first pioneers of this field.

Ethics was then introduced into computer technology by Norbert Wiener, an MIT professor, during World War II. In 1950, he published a book titled ‘The Human Use of Human Beings’, which addressed the same subject, though not under the specific head of ‘computer ethics’[5]. This laid a foundation for ethics in technology that still holds in the present day. However, until the mid-1950s this new area remained neglected, both in study and research and in application.

The Logic Theorist[6] was a program designed by Allen Newell and Herbert Simon to imitate the human skill of problem-solving, funded by the RAND Corporation. A groundbreaking artificial intelligence program, the first of its kind, it was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky[7]. The term ‘artificial intelligence’ was coined at this very event in 1956. The hope was that top researchers and scholars from various fields would begin a collaborative effort and a wide discussion on the subject. Unfortunately, there was a lack of agreement on standard methods for the field, but the sentiment that AI was an achievable goal was established. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research. From 1957, AI started flourishing. Computers had more storage space and became accessible as their originally expensive pricing dropped. The ‘General Problem Solver’ made problem-solving through algorithms easier, and ‘ELIZA’ made the interpretation of natural language more accessible. Owing to this considerable progress, the Defense Advanced Research Projects Agency began funding the field.

However optimistic the façade seemed, there was a severe obstacle: the lack of computational power. Computers could not store ample amounts of data or process it at an acceptable speed. Communicating with a computer also required learning varied terms in multiple combinations. The hope of AI becoming as intelligent as a human started to fade, and research and development stalled for about ten years.

In the 1960s, the term ‘hacking’ was used to describe the activity of modifying a product or procedure to alter its normal function or to fix a problem. It was a means of changing certain functions without having to re-engineer the entire device. The intent of hacking became malicious during the 1970s. ‘Phreakers’ impersonated operators of the Bell Telephone Company, dug through its highly sensitive information, and exploited the system by stealing long-distance phone time. It was clear that technology was being used unethically and that there was a lack of law and legislation to regulate and prohibit such behaviour.

Donn Parker, who wrote on technology crimes, suggested that the Association for Computing Machinery create a code of ethics for its members; this came into force in 1973 and was revised in the 1980s and 1990s. In 1985, Moor argued that computer ethics concerns how humans ought to act in certain classes of situations involving computer technology.

The Berkeley lab intrusion took place in 1986, but Clifford Stoll, the systems administrator, used a ‘honeypot’ tactic that lured the hacker back into the network until enough data could be gathered to trace the source of his activity. The first legislation to make hacking illegal, the Computer Fraud and Abuse Act, was passed in the United States in 1986; it imposed terms of imprisonment as well as monetary fines for the crime.

Then the concept of ‘deep learning’, which allowed computers to learn from experience, came into the picture, and work in the field was rejuvenated. Edward Feigenbaum came out with ‘expert systems’, which imitated the decision-making process of a human expert. Such a system was built by learning from experts how to respond in a multitude of situations, and it would then react as it had learnt when dealing with amateurs (a toy sketch of this rule-based style appears below). Industries started using these programs extensively, and the Japanese government heavily funded the approach for its Fifth Generation Computer Project. This inspired many engineers, experts and scientists in the field of AI, but no direct results in terms of real growth were achieved, and the funding ceased.

During the last decade of the 20th century there was considerable development. Computers started running artificially intelligent decision-making programs: Garry Kasparov, the chess grandmaster, lost a match against IBM’s Deep Blue, a chess-playing computer program, and Dragon Systems developed speech recognition software, used on Windows, which helped in the interpretation of spoken language.
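As a rough, hypothetical illustration of how an expert system of that era encoded knowledge, the Python sketch below stores hand-written if-then rules, of the kind elicited from a human expert, and applies whichever rules match the facts a user reports. The car-diagnosis rules and fact names are invented for this example and do not come from any real system.

```python
# A toy rule-based "expert system": knowledge lives in explicit
# if-then rules elicited from experts, applied mechanically to facts.
# All rules and facts here are hypothetical examples.

RULES = [
    ({"engine_cranks": False, "battery_ok": False}, "replace the battery"),
    ({"engine_cranks": True, "fuel_ok": False}, "refuel the car"),
    ({"engine_cranks": True, "fuel_ok": True}, "check the spark plugs"),
]

def diagnose(facts):
    """Return the advice of every rule whose conditions all hold."""
    advice = []
    for conditions, conclusion in RULES:
        if all(facts.get(key) == value for key, value in conditions.items()):
            advice.append(conclusion)
    return advice

# An "amateur" user reports what they observe; the system answers
# exactly as the encoded expertise dictates.
print(diagnose({"engine_cranks": True, "fuel_ok": False}))  # ['refuel the car']
```

The appeal and the limitation of this design are the same: every piece of expertise has to be written down by hand, which is one reason such systems eventually stalled.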

‘Kismet’, a robot developed by Cynthia Breazeal, could recognize as well as display emotions; with it, human emotion became another hurdle that AI had crossed.

The computational problems of limited storage space and slow processing speed were solved by the arrival of the concept of ‘big data’, under which very large quantities of information can be stored and processed. Algorithms themselves have not improved dramatically, but with enough stored data and sufficient computing speed and capacity, AI has gained importance.

The Indian government also had to step in to safeguard the interests of the vulnerable and to ensure that the technological systems in the country were working on ethical lines. The country enacted the Information Technology Act in 2000 and then substantially amended it in 2008.

Ethics in AI has been a constantly rising concern. Very important questions, such as whether machines will replace humans in a few years, whether fake news and misrepresentative media can be curbed, and whether AI will become accessible to those with malicious intent[8], keep arising. Those involved with this field have made constant efforts to preserve the bona fide intent of those accessing the technology, but as AI becomes easier to use, and thereby to exploit and manipulate, it has become difficult to curb and regulate the ill effects of Artificial Intelligence, which mostly originate from a lack of ethics. Ethics are essential for any field to function in a disciplined and sophisticated manner, and only tighter laws will be able to ensure this globally.


[1] (The History of Artificial Intelligence, n.d.)

[2] (Philosophy of Technology (Stanford Encyclopedia of Philosophy), n.d.)

[3] (Siau & Wang, 2018)

[4] (The History of Artificial Intelligence, n.d.)

[5] (A Very Short History of Computer Ethics, The Research Center on Computing & Society, n.d.)

[6] (History of Artificial Intelligence, n.d.)

[7] (Bringsjord & Govindarajulu, 2019)

[8] (Ethical Concerns of AI, n.d.)


Letishiya Chaturvedi

I'm a 20-year-old currently pursuing law at NMIMS, Mumbai, among other passions of mine. I strive to learn, achieve and debate on every topic that interests me.
