Is Artificial Intelligence ethical?

When it comes to Artificial Intelligence, we are spoilt for choice. There are so many options out there at the moment, ranging from simple tools that help us with menial, everyday tasks to machines like driverless cars that have the potential to change life as we know it. Whatever the technology may be, anything that disrupts the old way of doing things brings with it a whole suitcase bursting with ethical questions.
The truth is that AI isn’t just a quick fix for short-term problems. Take spell-checking technology, for example: in 2018 it’s hard to imagine what texting and emailing would be like without that helping hand, and it’s easy to forget the technology has only been around since 1971. Even simple AI is always evolving, but it is here to stay, so it makes sense to think long-term when implementing any AI, regardless of whether you intend to use it for a few months or a few decades.
It’s easy to get lost amongst the ethical questions of AI, so for now we’re tackling just three of the big ones:

Who programs the program?
Many people fear a robotic revolution but, for now, AI depends heavily on humans to create it and to supply the algorithms that allow the machines to perform their roles. Of course, this is great news for those who fear a dystopian technological future, but it also means that the programming can be biased.
Human controllers mean human mistakes, human agendas and human approaches to AI behaviour. Take something as simple as a Google search – a piece of AI so heavily ingrained in our lives that we don’t think twice when we use it – and consider the results you receive when you search ‘Where can I buy a can of lemonade?’ Those results are shaped by companies that want you to buy your drink from their store, so they pay to make sure they come up first. The same thing happens if you ask Siri, Alexa or Google Assistant, and each of those assistants will give you a different top result.
This is just another form of marketing, but it shows how easily AI can influence our decisions without us even knowing we’re being influenced. Perhaps this is not something you’ve thought about before and, for many of us, these kinds of biases and influences are simply expected of AI. If it particularly concerns you, though, it’s wise to find out who is in charge of the programming and what kind of biases you might encounter.

Is it okay to implement AI if the technology is going to replace human jobs?
Bringing AI into everyday operations is something that many organisations have considered in recent years, if they haven’t implemented it already. Many studies project a high rate of automation by the year 2030, with McKinsey estimating that as much as 30% of hours worked globally could be automated by then.
Most organisations deploy some kind of AI to make the lives of their employees easier – not to erase their jobs completely. The uncomfortable truth is that there are always two sides to every story: self-driving trucks will take many drivers off the roads, but they would also decrease the number of accidents and potentially save hundreds of lives.
Right now, AI has taken over jobs in many industries, but we are not yet at the 30% mark (and we still have 12 years to get there). Of course it can be concerning – for some employees more than others. Do you value your people and their contribution to your organisation? The best way to approach AI is to consider how the technology you want to implement might evolve, and how that evolution could affect the people in your organisation. If you’re uncomfortable with losing your people to technology, perhaps steer clear.

Who is responsible when things go wrong?
This is one of the biggest questions, and it has plenty of legislators and users of AI scratching their heads. What happens when – or if – an accident occurs? AI is not yet airtight, so what happens when it makes a mistake? Who can we blame? Is it the manufacturer? The owner or user? Or the AI itself?
You can rest assured that plenty of professionals are trying to nut out this very question as we speak. These are still murky waters, and as AI evolves and new technologies come onto the scene, things can change in the blink of an eye.
At the moment, if you’re concerned about what could go wrong, get informed before you implement a piece of AI. Stay up to date with the current regulations and make sure you are notified of any changes. If it’s not clear who will be at fault when something goes wrong, perhaps rethink your choice of AI. It’s important to always know exactly where the blame should lie, even if it’s determined that it lies with you.
Ethical questions surrounding AI are rife in this new world of work and there is much more to discuss than what we’re able to touch on in a short blog post. AI is constantly evolving and there’ll be plenty more ethical questions to debate as time goes on.

Want to know more about how your organisation can be adapting to the new world of work? Grab a copy of our eBook and get in touch with Pendragon today to see how we can help. Give us a call on 02 9407 8700.

Post by Pendragon
