AI and ethics – cutting through the hype


Ivana Bartoletti

Businesses are right to embrace AI because the technology presents fantastic commercial opportunities. In fields ranging from logistics to digital advertising, it has the power to reduce costs and maximise results, argues data privacy expert Ivana Bartoletti.

There is little doubt that enterprises in all sectors will treat this technology as a core part of their transformation plans over the next few years. But recent high-profile incidents have highlighted that artificial intelligence (AI) – if inadequately governed – poses serious risks.

Last year alone, for instance, a self-driving Uber car killed a pedestrian; the Home Office falsely accused 7,000 foreign students of cheating in their visa exams as a result of a flaw in voice-recognition software; and Amazon was forced to drop its AI-led recruitment programme after it was found to favour male applicants’ CVs.

All of these cases illustrate why companies must treat ethics as a vital concern when deploying AI.

Policy-makers have been particularly active in this field over the past few months. The British government has partnered with the World Economic Forum to define the remit and practicalities of AI regulation and governance. It has also set up the Centre for Data Ethics and Innovation, while numerous advocacy groups and parliamentary initiatives are suggesting how ethical concerns can be turned into pragmatic requirements.

In my view, organisations need to focus on two key aspects as they navigate the ethical, legal and practical complexities surrounding their use of AI. The first is strategic: deciding exactly how AI will serve your business’s goals and values. The second is technical: the algorithms, processes and systems required.

Let’s start with the latter. Algorithmic impact assessments (AIAs) are a vital part of AI governance, as they aim to act both ex ante, by providing a framework for ethics by design, and ex post, by serving as useful audit tools.

AIAs are built on a number of criteria, including:

Accountability: keeping a record of who is working on the algorithms, so that logs of software development, machine-learning training procedures and the like are maintained at all times (a simple illustration of such a log follows this list).

Transparency: complying with GDPR and other privacy laws – and also ensuring, given that cyber security and AI are inseparable, that all data security requirements are met.

Responsibility: ensuring that, if values are embedded into a machine, those values are constrained by human considerations and shared ethics.
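To make the accountability criterion concrete, here is a minimal sketch of what a training audit log might look like in code. It is purely illustrative: the TrainingLogEntry record, the append_log helper and the field names are assumptions of mine, not a prescribed AIA standard, and real tooling will vary by organisation.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class TrainingLogEntry:
    """One hypothetical accountability record for a model-training run."""
    model_name: str  # which model or algorithm was worked on
    version: str     # version of the code or model produced
    author: str      # who ran or approved the work
    dataset: str     # identifier of the training data used
    purpose: str     # stated business purpose of the run
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_log(entry: TrainingLogEntry,
               path: str = "training_audit.jsonl") -> None:
    """Append the entry to a JSON Lines audit file, one record per line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")


if __name__ == "__main__":
    # Example record: the names here are invented for illustration only.
    append_log(TrainingLogEntry(
        model_name="cv-screening-model",
        version="1.3.0",
        author="j.smith",
        dataset="applicants-2018-q4",
        purpose="candidate shortlisting",
    ))
```

An append-only file of this kind gives auditors exactly what the accountability criterion asks for: a durable answer to who trained what, on which data, and why, both before deployment and after the fact.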

AIAs can therefore help organisations to audit their algorithms and mitigate the risks they identify. But AI is much more than technology, so technical fixes alone won’t be enough.

This is where strategy comes in. Companies need to be able to cut through the hype and decide which tasks they intend to use AI for, how automation will augment human capabilities and how the risks identified through AIAs can be managed to meet organisational goals. This approach generates a level of trust that has been lacking in this age of big data and algorithms.

I believe that sector-specific regulation will be required to ensure that the AI industry continues to thrive while remaining constrained by human values.

The more that machines have values embedded into them, the more we may need regulatory measures to ensure that those who produce AI technology are not the only ones who benefit from it.

And, as countries race to equip themselves for the future, we must also nurture the capabilities – far more than technical skills alone – to deal with the challenges ahead.

Ivana Bartoletti is head of privacy and data protection at Gemserv and co-founder of the Women Leading in AI network.

For further information, email dataprotection@gemserv.com
