AI – Are we ready for the next step?

17.03.2020, by Magi Marinova

AI changing our way of life

One of the main concerns people have about AI in the workforce is unemployment. The near future already promises self-driving trucks, and as appealing as a lower risk of accidents sounds, truck drivers are worried about their jobs. Back in 2018, an accident involving a self-driving car and a cyclist raised concerns about the safety of self-driving machines. Such accidents get people thinking not only about job loss but also about what prevents, or could prevent, AI from hurting people. Many people in the US, for example, are disturbed by the possibility that the military could use AI in deadly drones against human beings who look just like the rest of us, regardless of their involvement in military actions.

Other worries concern how much AI might change our minds and ways of thinking, and whether we would become insensitive to human emotions. Such speculation grows stronger when we cannot even tell whether we are chatting with a person or with an AI. Being unable to tell the difference frightens people, because they start doubting their own sensitivity. Will we become dependent on AI, given that its main goal is to make our lives easier? If we grow used to it, we might be helpless should it ever be taken away from us.

The mistakes of AI

Let’s talk about mistakes. AI is created by people, and since people are bound to make mistakes, it is inevitable that AI will make mistakes too (at least in the beginning). We are the teachers of AI, and as such we risk transferring our own errors into its thinking. For example, when AI is used to predict future criminals, the people flagged as more likely to commit a crime are mainly people of colour. Such events push people towards assuming AI might have racist traits, which leads to frustration and rooted misconceptions in the human mind. Even simple AI functions, such as cameras detecting whether a person has blinked, sometimes register a blink when someone merely squints their eyes while smiling.

Regardless of whether we transfer our human mistakes to AI, the technology has flaws of its own. For example, AI does not see objects the way humans do, nor can it recognise a visual hidden within another visual. This is why people are extremely doubtful about AI being used for security measures: such technology can be easily fooled, which gets people concerned for their own security.

Before we stumble upon the mistakes AI could make, Jonathan Shaw of Harvard Magazine believes we should look at how to prevent such events from occurring, before we put AI in a position where a lot of things could go wrong.

We would like to hear your opinion

A big worry for companies, when it comes to AI, is inequality. Some companies have more resources, and if they decide to invest in an AI workforce, they would not face time limitations such as the typical eight-hour work shift. This would inevitably allow companies with an AI workforce to rapidly increase their income and overall wealth, while other companies remain “stuck” without AI at their disposal. Companies with fewer human workers could double, maybe triple, the profits of companies with hundreds of human employees. Do you think we should worry about this problem? Do you think the problem is AI, or the way we make use of it?

Before you leave this article, remember we are discussing how ethical it is to further develop and encourage the usage of AI. Do you have any thoughts? Send us a message via one of our social media channels, or write us an e-mail!

Sign up to become a LabPartner of The Creative Lab and be the first to receive news about workshops, projects and interesting articles in the field of media, technology and innovation.
