How far can we go with artificial intelligence?
By Jaehuk Alex Kim, English Columnist
What could possibly stop humanity from developing artificial intelligence (AI)? World War III? Unless something exterminates humanity entirely, we will keep taking artificial intelligence one step forward. We are in a constant race toward a "technological singularity," the point at which artificial intelligence becomes smarter than we are. This raises a question: how far can we take artificial intelligence, and what will it end up looking like?
Experts have coined the term artificial superintelligence (ASI) for the final stage of AI, in which machine intelligence exceeds our own. Nick Bostrom, an expert in the field, describes ASI as any "intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." In contrast, the modern dictionary defines AI simply as the study of how machines can imitate human intelligence. Compared with the image of an ASI, then, the intelligence that current AI possesses is still at a narrow stage.
Picture the level of intelligence as a ladder. The ladder has no visible top, since we are still only beginning to explore the full spectrum of knowledge and intelligence. Although today's AI occupies a narrow rung, if we manage to build an ASI in the near future, it could climb far past us and keep climbing on its own, thanks to its ability to learn without human intervention. In other words, an ASI could explore regions of the intellectual spectrum that we cannot understand or foresee with our own cognitive capacity. Mathematician I. J. Good called this scenario an "intelligence explosion."
Taking this theory further, American neuroscientist Sam Harris argues that AI in the period of an intelligence explosion could affect us negatively. People often imagine AI suddenly turning malevolent and becoming a threat to humanity, but Harris claims that the relationship between humans and ASI would more closely resemble the relationship between humans and ants.
"We don't hate them (ants). We don't go out of our way to harm them. In fact, sometimes, we take pains not to harm them. We just step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, we annihilate them without a qualm. The concern is that we will one day build machines that could treat us with similar disregard." - Sam Harris
Beyond the possibility of disregarding human commands, ASI could also bring enormous upheaval to our society. If such a capable AI replaced humans across many fields, it could cause an unemployment crisis unlike anything we have experienced or witnessed in history. Replacing human labor with AI would further exacerbate wealth inequality, which remains a problem today. These are only some of the possible consequences of artificial superintelligence, and they have led many experts to grow cautious about the future of AI.
“With artificial intelligence we are summoning the demon” - Elon Musk
“AI is likely to be either the best or worst thing to happen to humanity”- Stephen Hawking
However, some are skeptical that an ASI can be developed at all. They argue that the key building blocks of such a future AI system remain far out of reach. Transferring knowledge learned in one domain to another is a skill that still requires much more research. Developing an ASI's ability to learn without human supervision is also a challenging task, since most of the machine learning methods behind current AI depend on data curated by humans. These limitations put a strict bottleneck on the development of ASI, and some experts believe it will not arrive anytime soon.
“Listening to Bill Gates, Elon Musk and Stephen Hawking talking about artificial intelligence reminds me of the Jurassic Park scene where they talk about chaos theory.” - Dave Waters
Still, it seems convincing that although these limitations will make the road longer, developing an artificial superintelligence remains possible someday. The final stage of AI that we are trying to reach is not impossible, just a matter of time. And although the cautious claims about the future of AI rest on theories and predictions, simply overlooking the negative outcomes AI may bring could prove too risky for humanity to tolerate. Debating whether the theories and predictions about future AI are correct should not be the main focus. Instead, it seems important to put our effort into building safer AI and into keeping the pace of AI regulation in step with the pace of AI development.