28.12.2017, by Robin Kuijper
Fear of the Unforeseen
We all know Skynet, the computer system created by the US military during the Cold War to defend humanity from nuclear war. Skynet would have destroyed humanity, had it not been for Sarah Connor and her passion for time travel. Thankfully, the ‘Terminator’ franchise is science fiction, and Skynet is not real. But how close are we to creating our very own Skynet?
To give the most objective answer possible, we will look into two fields: Artificial Intelligence (AI) and Artificial General Intelligence (AGI). AGI is also known under other names, such as “strong AI” or “full AI”, but for the purpose of this article we will use the term AGI. AI is the study of building machines that perform tasks humans do with their brains, such as voice recognition, playing chess, and so forth. AI has been in use for many years, from the gaming industry to the smartphone industry. Almost everyone uses AI on a daily basis, from virtual personal assistants like Cortana, Google Now and Siri to online customer support.
What does AGI actually mean?
Skynet falls into the field of AGI, which aims at developing machines with “generalized” human intelligence: machines with broad cognitive abilities that can perform any intellectual task a human can. In other words, the field of AGI is looking at how to create a machine which, just as any human being would, performs a task based upon its own free will.
AI: Yes to an uncertain future?
People do not fear the AI developments listed above. What they do fear are the developments in the field of AGI. Many have continuously expressed their concern, and some, among them Tesla and SpaceX founder Elon Musk and Apple co-founder Steve Wozniak, have even signed an open letter to the United Nations calling for a ban on “killer robots” and autonomous weapons. It is a catch-22, because AGI is itself a part of AI: advancing AI consequently advances AGI too, and that might be exactly the issue. We are already moving forward with AGI as we move forward with AI.
The important question now seems obvious: are we afraid of AI for no reason, or is such public protest an understandable movement in a matter we should be seriously concerned about in the near future? The answer differs, of course, from individual to individual and with the expectations each person holds. As stated above, many people think we have to be extremely careful in the field of AI, while others think AI is the future and a step forward we only need to get used to. Physicist Stephen Hawking summarises the case perfectly: “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity”. The fact is, we simply cannot tell how AI will impact society in the near future, or how AGI will develop with it. It might be a bad advancement altogether. Or it might just be mankind’s best creation yet.