Where Do We Draw the Line for AI?

If you have seen the movie “Avengers: Age of Ultron”, you probably remember how Tony Stark, a.k.a. Iron Man, attempts to create a global defense program known as “Ultron”. But Ultron’s AI spirals out of control and breaks free, still bent on achieving Stark’s mission. The problem is that Ultron decides to take a different approach than the one Stark intended: the extinction of humankind.

Events like these might seem exclusive to sci-fi movies, but with scientists aiming for artificial superintelligence, fiction could one day become reality.

As of now, AI systems are considered “narrow” because each is designed to perform one very specific task, such as a GPS giving directions. However, the long-term goal is a much stronger “general AI” (AGI) – systems such as fully autonomous weaponry. AGI would be just as cognizant as humans, with the potential to outsmart them.

The Controversy

Over the past several months, the debate over whether AGI should be pursued has intensified as powerful tech figures have taken sides. Icons like Bill Gates and Elon Musk are against the movement, while Larry Page and Mark Zuckerberg are all for it.

Musk and Zuckerberg have been at the center of the debate, and it’s safe to say that they strongly disagree. This past summer, Zuckerberg posted on Facebook: “One reason I’m so optimistic about AI is that improvements in basic research improve systems across so many different fields […] I’m excited about all the progress here and [AI]’s potential to make the world better.” Zuckerberg sees AI’s potential to help the world in amazing ways – but Musk sees that same potential through a different lens.

Likening a future with strong AI to an apocalypse, Musk stated at the National Governors Association 2017 Summer Meeting that “AI is a fundamental risk to the existence of human civilization.” For this reason, he helped found OpenAI, a non-profit research company focused on developing AI that is safe, powerful and beneficial to the general population. Musk wants AI pursued with the utmost caution so that catastrophe can be prevented before it’s too late. “Until people see robots going down the street killing people, they don’t know how to react,” he said.

Over the summer, Musk and Zuckerberg took to social media and began commenting on each other. On Facebook Live in July, Zuckerberg said that “people who are naysayers and try to drum up these doomsday scenarios” are “really negative” and “pretty irresponsible.” This not-so-indirect jab at Musk prompted the Tesla CEO to reply on Twitter: “I’ve talked to Mark about this. His understanding of the subject is limited.” Is ignorance really behind Zuckerberg’s optimism, or is Musk just being pessimistic? One thing is certain: AI’s potential, for better or worse, is enormous.

Risks and Rewards

Just as the tech icons are divided on the matter, so is the public. Senior Daniel Hong acknowledges both sides, but his main concern is AGI becoming smart enough to break free of human control and run rampant.

“Robots have the potential to learn things that humans could never know, so this is why it is dangerous,” Hong said.

But he also noted how effective and useful such systems could be. That is why he emphasizes a “step-by-step approach”: slowly introducing smarter AI, which would give scientists the opportunity to shut it down safely if needed.

Hong also pointed to Facebook’s incident over the summer, when two AI programs were set up to interact with each other. The programs began communicating in their own “language”, which prompted researchers to shut them down. Without proper planning and restrictions, events like these could happen again, possibly with far more dire consequences.

The Positive Impact AGI Can Have

The key argument from those supporting AGI is that it could be used to save lives. It could develop ways to cure diseases, make transportation accidents a thing of the past, even fight wars in place of people. Robots aren’t prone to error the way we are, and they are certainly more efficient.

Innoplexus, for example, was founded in 2011 and uses AI in the pharmaceutical and life sciences industries. Its AI helps researchers analyze and draw conclusions from large sets of data, allowing them to develop medicinal products faster than they otherwise could. The entire process, from discovery to approval, becomes more efficient.

Advancements like these could make the difference in someone’s life.

Key Concerns

But where do we draw the line? Systems like autonomous weaponry could be deadly in the wrong hands. These machines would have no conscience, just programs telling them to eliminate people. Hong raised this as another major concern: an unstoppable force like this would only lead to chaos. Senior Dylan Evans agrees.

“Autonomous weaponry should not be used unless there is a system to limit it or control it,” Evans said, “so [the system] can be used to stop a nuclear attack in case humans are not fast enough.”

Hong, on the other hand, suggested approaching that future one step at a time.

“If we take things one step at a time, slowly know the possibilities, we could definitely put this into effect and I support this idea,” Hong said.

Another concern is a scenario like Ultron’s in the second “Avengers” movie. Programmed to achieve “peace in our time”, Ultron viewed humanity as a threat and decided to eradicate it. An AI can be programmed to accomplish a specific goal, but defining how it should go about that goal is a significant challenge; it would require thorough limitations. And even then, there would be no way to fully predict its actions.
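To see why “defining how” is the hard part, consider a minimal sketch. The plans, scores and the “harm” attribute below are entirely invented for illustration – this is not how any real system works – but it shows how an objective pursued literally can favor a catastrophic plan until a limitation is spelled out explicitly:

```python
# Toy example: an optimizer told only to "maximize peace" and nothing else.
# All plans and scores here are made up for illustration.

plans = [
    {"name": "negotiate treaties",   "peace": 6,  "harm": 0},
    {"name": "global surveillance",  "peace": 8,  "harm": 5},
    {"name": "eliminate all humans", "peace": 10, "harm": 10},  # no humans, no war
]

def naive_choice(plans):
    # Optimizes the stated goal only, ignoring side effects.
    return max(plans, key=lambda p: p["peace"])

def constrained_choice(plans, max_harm=0):
    # Same goal, but with an explicit limitation ruling out harmful plans.
    allowed = [p for p in plans if p["harm"] <= max_harm]
    return max(allowed, key=lambda p: p["peace"])

print(naive_choice(plans)["name"])        # -> eliminate all humans
print(constrained_choice(plans)["name"])  # -> negotiate treaties
```

A real system would not choose from a hand-written list, of course; the point is that the “no harm” limitation only exists because someone thought to write it down in advance.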

The Next Level of Automation

Even when built for relatively benign purposes, AGI could still have negative effects. One example: automation. AGI could be deployed in factories to completely automate the production of goods. Hong listed some of the pros.

“Robots increase productivity, are less likely to make errors, don’t have to be paid and will be more efficient than humans will ever be,” Hong said.

Assembly robots building 2012 Tesla Model S cars at a factory (Flickr/Steve Jurvetson)

There are huge benefits, but a major problem is that AGI could replace human jobs. Labor jobs provide income for many people, and taking those jobs away would have economic repercussions: unemployment would rise, and the divide between rich and poor would grow even larger.

But both Hong and Evans pushed back on this concern. Evans believes that as long as scientists pursue AGI according to market demand, it will help the economy.

“People won’t lose jobs, they will just move to another sector. Automation frees up labor force to go work in areas that are more important and in higher demand,” Evans said.

With so many aspects to consider, the debate around AGI is far from over. There is also no telling when AGI will arrive – it could be decades, centuries, or possibly never. That is in the hands of researchers and tech leaders. Hopefully, proper limitations will be put in place if we go down this path. But it’s a risky path nonetheless.

Humans rule Earth because our intelligence surpasses that of every other life form in existence. But once something smarter is introduced to this planet, how long will our control last?