
Can AI Be Inherently Good? A Look into Roboethics

In a year of a pandemic, murder hornets, and the Pentagon officially releasing Navy footage of unidentified flying objects, imagine my response when I read that Pope Francis would like his Catholic constituents to “…pray that the progress of robotics and artificial intelligence may always serve humankind.”

Each month, the Pope issues a “prayer intention,” and November’s revolves around the idea of ethical robotics. In a YouTube video uploaded by the Vatican, Pope Francis tells his followers, “Robotics can make a better world possible if it is joined to the common good.” Pope Francis isn’t the first person to toy with the idea of artificial intelligence (AI) being morally good, nor will he be the last.

In fact, there is an entire philosophy dedicated to the ethics of designing, producing, and deploying robots: “roboethics,” the study of robots and whether AI poses a threat to humans. “…we have two kinds of legal and ethical questions that we’ve really never wrestled with before,” suggests Peter W. Singer, a strategist and senior fellow at the New America think tank. “The first is machine permissibility. What is the tool allowed to do on its own? The second is machine accountability. Who takes responsibility … for what the tool does on its own?” These are questions that scientists, engineers, and governments are grappling with as robots become increasingly advanced.

Earlier this year, the United States Department of Defense (DoD) published a list of five ethical principles it will follow as it engineers AI for defense purposes. The fifth principle vows that all AI the department creates will be governable. It states, “The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

That’s all well and good, but what happens if a robot becomes so advanced that it can override human control? That is the plot of Melissa McCarthy’s new movie Superintelligence. In the movie, released on HBO Max, an AI studies the most “average person on the planet,” McCarthy’s character Carol Peters, as it decides whether to save or destroy humankind. In an interview with People, McCarthy explained that the movie is about “technology’s dominance over our lives.” She adds that the film, directed by her husband Ben Falcone, is also a “…lovely reminder that people may be flawed but they’re still worth saving.”

Chances are, scientists do not yet have the capability to create an AI that could destroy mankind. But that doesn’t mean they aren’t grappling with the ethics of AI. Today, the ethics of robotics center on privacy and data issues. As robots are used more and more in the workplace to save organizations time and money, governments are taking a closer look at the data these machines collect and the ramifications it could have for humans. While the European Union has passed data privacy law such as the General Data Protection Regulation (GDPR), some researchers in the field do not think it’s enough. “I think we should’ve started three decades ago…” mused Jason Furman, a professor at the Harvard Kennedy School.

As artificial intelligence becomes more advanced, laws and regulations surrounding the ethics of robots need to be written and enforced. If not, movies like Superintelligence and The Terminator may be closer to reality than we thought.

Danielle Loughnane

Danielle Loughnane earned her B.F.A. in Creative Writing from Emerson College and has been working in the marketing and data science field since 2015. 

https://danielleloughnane.com/