
Neena Kapur | The IT Ambassador

Science fiction novelists and filmmakers have long exposed us to futuristic worlds where robots and humans coexist. We have seen robot takeovers in "I, Robot" (2004), docile robotic cleaning mice in Ray Bradbury's "The Martian Chronicles," and now we are seeing robots in our very own real-life world.

The concept of artificial intelligence is well known and has become intertwined with much of the machinery we use today. As technology has advanced, however, machines have grown more autonomous, raising the question: Is artificial intelligence starting to look like real intelligence? The possibility of robots developing capabilities and behavioral patterns akin to those of humans is heavily debated - some deem it impossible, while others deem it inevitable.

In June 2012, The Economist published an article about the morals of machines that stated, "as robots become more autonomous, societies need to develop rules to manage them," suggesting that the actions of individual robots need to be monitored. The article detailed the rise of robots in the military and the ethical dilemmas that could arise if robots gain even more autonomy. Though the article raised interesting questions, it did not address the distinction between scientific autonomy and philosophical autonomy.

Scientific autonomy is certainly developing quickly - robots are becoming more and more advanced, and human control is becoming less necessary. Philosophical autonomy, however, concerns the human trait of free will and the cognitive ability to be creative, spontaneous and empathetic. The robots of today are far from possessing these traits. Sure, a robot can be instructed to rob a bank - but it is not making that choice; its creator is. Robot morality, at this point, revolves more around how humans use robots than around independent robotic action.

However, computing power roughly doubles every 18 months - at this rate, many enthusiastic technologists predict an imminent technological singularity, in which machines will reach - and possibly surpass - the threshold of human intelligence. Research has already indicated that human cognitive functions could potentially be simulated by machines.
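To make the arithmetic behind that doubling claim concrete, here is a minimal Python sketch. The 18-month doubling period is the figure cited above; the time horizons are illustrative assumptions, not projections from the article.

# A minimal sketch of compounding under the column's doubling claim:
# if computing power doubles every 18 months, how much does it grow
# over a given horizon? The 18-month period is the article's figure;
# the horizons below are illustrative assumptions.

DOUBLING_PERIOD_YEARS = 1.5  # 18 months

def growth_factor(years: float) -> float:
    """Multiplicative growth in computing power after `years` years."""
    doublings = years / DOUBLING_PERIOD_YEARS
    return 2.0 ** doublings

for horizon in (3, 9, 15, 30):
    print(f"After {horizon:2d} years: ~{growth_factor(horizon):,.0f}x computing power")
# After 30 years (20 doublings), power grows by a factor of about 1,048,576.

Twenty doublings - about 30 years at that rate - multiply computing power by more than a million, which is why singularity proponents treat the trend as transformative.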

Interestingly, some countries have already begun implementing laws regarding robot ethics. In South Korea, a code of ethics was created to prevent humans from abusing robots and vice versa. Additionally, Japan's Ministry of Economy, Trade and Industry is reportedly creating a code of conduct for robots, especially those used in elderly-care homes.

The Economist article outlined three steps to facilitate a safe presence for robots in modern-day society.

The first step is to determine who is responsible if a robot's actions cause harm - the creator or the robot? This is especially critical given the proliferation of robots used for military purposes. The second step is to ensure that the ethical systems embedded in robots match the societal values that govern human behavior. Though "moral machines" do not exist yet, it is important to build a functional morality - one sensitive to ethics and societal values - into the prototypes currently being developed. Finally, collaboration among policymakers, lawyers and engineers must be established in order to govern machinery consistently.

It is important to draw a clear distinction between science fiction and the real world. Robots that think and act for themselves remain a long way off, and societies should not dwell on the ramifications of technology that may never be produced. Rather, ethical codes of conduct need to be instituted for today's machinery to help guide the development of the robots of tomorrow.

--

Neena Kapur is a sophomore majoring in international relations. She can be reached at Neena.Kapur@tufts.edu.