A simple switch. And then the machine gun sitting idly on a green hill on the outskirts of a city in South Korea awakens, ready to attack any life form within a two-mile radius with bullets that can stop a truck in its tracks. A simple switch, and then the dog-sized machine takes no instruction from its master.
With the rise of Google cars and Apple’s Siri in recent years, artificial intelligence has swept the media with iconic releases of technology and future projects that all promise to be the “next big thing.” With this movement, however, the dangers of artificial intelligence are often swept under the rug. Somehow, our tunnel vision blurs the two faces of artificial intelligence into one.
Cancer. Seemingly never-ending treatment. Lack of human control.
Space. Seemingly never-ending expanse. Lack of human control.
Through the use of machine learning, artificial intelligence reduces the human error that occurs in the diagnosis of diseases while also quickening the process. Forbes reporter Janet Burns writes that current technology, such as the AI developed by Houston Methodist Hospital, can recognize breast cancer 30 times faster and more accurately, reviewing 500 patient descriptions “in a couple of hours” compared to a human reviewing only 50 patient descriptions in “50 – 70 hours.” This faster, more accurate inspection places control back into the hands of doctors, potentially saving lives.
Human lives are further protected by this technology when looking toward important expeditions. Artificial intelligence has led to the rise of robot explorers in the ocean and in space. Because of factors ranging from human fragility to the expensive precautionary measures humans require, Daniel Britt, professor of Astronomy at the University of Central Florida, finds that manned missions are usually far more expensive than missions that use robots.
Without the cost of risking human lives, these missions become more likely to occur, expanding our knowledge of our environment, which in turn saves lives.
However, with all these benefits to the safety of human lives, what are we losing in exchange for this unregulated productivity and knowledge?
From the home to the battlefield, artificial intelligence poses long-term threats by worsening problems we face today, such as income inequality and the deaths of innocent civilians caught in armed conflict.
Similar to previous trends with new discoveries, artificial intelligence has deepened the wage gap between blue-collar and white-collar workers, a gap that has reached disastrous extents over the past 30 years. In other words, the increasing presence of artificial intelligence in our surroundings has created a dependence on certain specialized skills, promoting those who work in offices over those who perform manual labor.
There’s no doubt that with an increase in automation comes less dependence on certain human labor. Instead of actually asking customers what their concerns are, a job might instead require skills such as maintaining the program that now handles customer service. In fact, Carl Benedikt Frey and Michael Osborne of the Department of Engineering Science at Oxford University found in 2013 that 47 percent of American jobs had a high chance of being automated, making the sluggish employment growth of the past 10 to 15 years even worse.
Another harm of AI is the ease with which it can create dangerous weapons. Currently, AI weaponry such as the Super aEgis II, which tracks and destroys moving targets from miles away, is used in the Middle East and South Korea. Because these weapons can be any machine that is simply reprogrammed, AI weapons can be made cheaply. And their low cost is paralleled by their lack of the human judgment necessary to follow international humanitarian law. An AI weapon would not be able to distinguish between an enemy and a civilian — a situation already complicated enough for the human conscience.
A Non-Artificially Intelligent Outlook
The lack of laws governing artificial intelligence makes this promising technology far more dangerous in the future. In an open letter, experts such as Bill Gates and Stephen Hawking expressed concern that AI could potentially be as dangerous as nuclear weapons. If nuclear weapons are heavily regulated, we are left to wonder how our government can simply expand our defense budget for drones and other artificially intelligent weapons without regulating them at all. As Bill Gates once commented at a convention for AI development, “I don’t understand why some people aren’t concerned.”
Indeed, we aren’t concerned because our tunnel vision echoes only the benefits of artificial intelligence. Although the harms definitely exist, they seem to be blocked out of our vision. AI can only be beneficial when we steer it in the direction it needs to go.
As MVHS junior and avid robotics participant Neelie Shah says, “There should be regulation, maybe not for everyday AI like Siri… but definitely for drones used in the military.”
Her opinion is not alone. Similarly, MVHS sophomore Barry Qi comments, “There should be some amount of government regulation for artificial intelligence as we do with other forms of technology.”
So let’s use artificial intelligence the way it should be used: assisting our progress. For artificial intelligence to be truly beneficial, precautionary measures must be taken so that we as a society can tell Siri, “Let’s get out of this tunnel.”