AI: the end of leadership as we know it?

AI and leadership: robots in the boardroom

They’re not human and they’re here to take all our jobs. That’s what the more sensationalist reporting about advances in robot tech would have us believe, at least. But what is the true potential of AI to transform British business? Director asks some of the science’s leading authorities about everything from machine learning to boardroom droids. Their answers will have you wresting back control of the future, while making a little more space in your strategy for marvellous mechanical minds.

If you’re a hardcore sci-fi fan who’s planning to toast the 21st “birthday” of Skynet, the self-aware machine network from the Terminator franchise, on 29 August – perhaps once you’ve returned from the World Science Fiction Convention in San Jose – then look away now. Artificial intelligence doesn’t exist. Well, not in the Hollywood sense, at least.

When the winner of this year’s Loebner Prize, awarded to the world’s most human-like computer program, is announced in September, it will still be the entry that can most impressively respond to vast amounts of data. Consciousness and the ability to emote remain firmly in the fiction section. No androids will be feeling paranoid any time soon.

This may not seem like a revelation, but recent headlines about robots’ increasing capabilities have proved more effective at unsettling business leaders than a shotgun-toting Schwarzenegger. For instance, a BBC News article entitled “Robot automation will ‘take 800 million jobs by 2030’” cited a study of 800 occupations in 46 countries by McKinsey Global Institute, which predicted that up to 20 per cent of the world’s working-age population would be affected.

Will businesses soon need to shed thousands of jobs in favour of AI to stay competitive? Will whole industries be rendered obsolete at a stroke by sophisticated cybernetics? Will directors be ousted from the boardroom by AI-powered bosses? And are those firms that aren’t acting on the threat already on the conveyor belt to the scrapheap?

None of these things will happen in the short term, according to Matthew Taylor, chief executive of the Royal Society of Arts and a former head of Tony Blair’s Number 10 Policy Unit. At the IoD’s Open House event in March, he warned delegates to take such forecasts “with a pinch of salt”.

Talking exclusively to Director, Taylor says: “Since the birth of mass automation, the record of those who have predicted the economic and social impact of technological change has nearly always been poor. We should remember Amara’s law, which is that we tend to overestimate the effect of technology in the short run and underestimate it in the long run.”

Test of metal

Yet it’s hard to dispute that these are exciting times for AI. Businesses need to keep tabs on its increasing capabilities and be alive to how these might improve productivity and spawn commercial opportunities. It’s no coincidence that five of the world’s biggest tech firms – Twitter, Microsoft, Apple, Google and Amazon – have acquired UK-based AI companies Magic Pony Technology, SwiftKey, VocalIQ, DeepMind and Evi respectively.

“While the industrial revolution replaced human brawn in the late 18th century, now AI – particularly machine learning – is set to compete with the human mind,” says Tej Parikh, the IoD’s senior economist. “Today’s machines can process information with greater accuracy than humans and, most crucially, improve their performance without our input.”

The distinction that Parikh makes between AI and machine learning (an approach to achieving genuine AI) is important, says Kasia Borowska, MD of Brainpool AI, a global network of experts in the field. She explains: “When we talk about AI, most of the time we’re actually referring to machine learning and algorithms that need a lot of data and human intervention to be programmed in a way that enables them to perform specific tasks. Computers that have undergone machine learning can perform only one task. A human, on the other hand, can learn skills by performing one task and then transfer these to another.” Computers, she stresses, are “very far off from being able to do this”.

Does this mean that one of the big fears people have about AI – that it will cause mass redundancies – is unfounded? Yes… and no, according to Borowska. “When we talk about it taking our jobs and so on, what we’re discussing is artificial general intelligence,” she says, referring to the possibility that a machine will one day experience consciousness. “It will take a good few years before we come close to that situation. Three decades ago, people were saying that the internet would take our jobs. To some extent it has – look at retail, for example – but it’s created more jobs than it’s taken.”

Indeed, a soon-to-be-published Accenture study of more than 1,000 large firms that have adopted AI to varying degrees will highlight the emergence of new roles. These include “trainers”, who teach AI systems how to perform; “sustainers”, who ensure that the systems remain fit for purpose; and “explainers”, who liaise between technologists and business leaders.

Nir Oren, a reader in the Department of Computing Science at the University of Aberdeen and a member of the Loebner Prize management team, believes that “perhaps surprisingly, some white-collar jobs are going to be rendered obsolete. AI is proving very effective at tasks such as investment portfolio management, for instance,” he says. “But in many of these areas, at least in the short term, AI will be used to augment human skills rather than replace them.”

Thanks to advances in fields such as computer vision, “data-entry jobs and the like are more at risk, as is manual work such as driving”, Oren predicts, although the news is better for people whose roles require a lot of dexterity. “Robots are still bad at tasks where fine manipulation is required – strawberry-picking, say – although constant advances are being made,” he says. “Tesla has even removed automation because it found that humans were better at some tasks.”

Tesla’s chairman and CEO, Elon Musk, recently conceded that “humans are underrated”, admitting that the main cause of the company’s failure to hit production targets for the Model 3 in the first quarter of this year was its adoption of too much new technology too soon.

Oren goes further than Musk, arguing that humans are irreplaceable in certain capacities, certainly in the medium term. “Jobs that require a lot of human interaction – those of educators and physicians, for instance – will be safe,” he says. “AI will supplement, rather than replace, humans in jobs requiring a deep understanding of their behaviour.”

Productive conversation

Dan Whaley is an AI consultant at Orange Bus, which designs user-friendly interactive technology. He has observed some pioneering work in the service sector on “the automation of cognitive capabilities” over the past few years.

“In contact centres, chatbots or virtual assistants are performing tasks in the front office, while robots are working in the back office. In both areas there have been significant cost savings,” Whaley reports. “Jobs in which humans play a predominantly advisory role, such as that of financial consultant or paralegal, are next in line for automation.”

As artificial neural networks become increasingly sophisticated and more data is generated in every walk of life (the world will be creating 163 trillion gigabytes annually by 2025 – a tenfold increase on this year’s total – according to the International Data Corporation), other, more surprising, AI applications are emerging. Borowska reports that “cognitive scientists are looking into something called choice reduction in the retail industry. On average, we used to be happier with things we’d bought, because there was less choice. Today, we’re more likely to regret our selections. Using machine learning for stock control, you’re now able to evaluate the optimum number of, say, ketchup brands you need in a store to maximise customer satisfaction.”
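For readers who want a feel for how such an assortment model might work, the sketch below fits a simple curve to hypothetical data pairing the number of brands stocked with a customer-satisfaction score, then picks the assortment size the model rates highest. The figures, the quadratic fit and the Python code are purely illustrative assumptions, not Brainpool AI’s actual method.

```python
# Illustrative sketch only: estimating an "optimum" number of ketchup brands
# to stock from hypothetical historical data pairing assortment size with a
# mean customer-satisfaction score. All numbers are made up for illustration.
import numpy as np

# Hypothetical observations: (number of brands stocked, mean satisfaction score)
assortment = np.array([2, 4, 6, 8, 10, 12, 14, 16])
satisfaction = np.array([6.1, 7.4, 8.2, 8.6, 8.5, 8.1, 7.6, 7.0])

# Fit a quadratic: satisfaction tends to rise with choice, then fall again
# as "choice overload" sets in.
coeffs = np.polyfit(assortment, satisfaction, deg=2)
curve = np.poly1d(coeffs)

# Evaluate candidate assortment sizes and pick the one the model scores highest.
candidates = np.arange(2, 17)
best = candidates[np.argmax(curve(candidates))]
print(f"Model's suggested number of brands to stock: {best}")
```

A real system would of course learn from far richer sales and returns data, but the principle – letting a model trade choice against satisfaction – is the same.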

Agriculture, she adds, is another industry in which AI is making notable progress. “One client came to us recently wanting to use vision systems to detect and eliminate unhealthy broccoli crops, for instance. Another farm in Malaysia sought help in maximising the size of its chilli peppers. A machine vision system now analyses drone camera images and makes correlations between the size of the chillies and their growing conditions.”
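Purely as an illustration of the idea, the following Python sketch computes simple correlations between fruit-size estimates (of the kind a drone-and-vision pipeline might produce) and two growing-condition variables. The data, the variable names and the choice of a plain correlation measure are all assumptions, not the Malaysian farm’s actual system.

```python
# Illustrative sketch only: correlating fruit-size estimates (as might be
# derived from drone images) with growing conditions. All per-plot data and
# column names here are hypothetical.
import numpy as np

chilli_size_mm = np.array([52, 61, 58, 47, 66, 70, 55, 63])
irrigation_l_per_day = np.array([4.0, 5.5, 5.0, 3.5, 6.0, 6.5, 4.5, 5.8])
sunlight_hours = np.array([6.2, 7.1, 6.8, 5.9, 7.4, 7.8, 6.5, 7.2])

# Pearson correlation between fruit size and each growing-condition variable.
for name, series in [("irrigation", irrigation_l_per_day),
                     ("sunlight", sunlight_hours)]:
    r = np.corrcoef(chilli_size_mm, series)[0, 1]
    print(f"Correlation between chilli size and {name}: {r:.2f}")
```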

We can also thank computer vision for advances in healthcare. In a recent study at the University of Heidelberg, researchers showed an AI system 100,000 dermoscopic images to teach it to distinguish between benign moles and malignant melanomas. Given 300 further test images, it was able to detect 95 per cent of skin cancer cases correctly. By comparison, the combined success rate of the 58 dermatologists drawn from around the globe to take part in the research was less than 87 per cent.
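The general recipe behind studies of this kind is image classification with a convolutional neural network. The sketch below shows one common way to set that up in Python, fine-tuning a pretrained network to label dermoscopic images as benign or malignant; the model choice, pixel scaling, directory paths and training settings are placeholders and simplifications, not the Heidelberg team’s actual pipeline.

```python
# Illustrative sketch only: training a binary image classifier for
# benign-versus-malignant dermoscopic images via transfer learning.
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained feature extractor; only the small classification head is trained.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),  # simplified pixel scaling
    base,
    layers.Dense(1, activation="sigmoid"),  # benign (0) vs malignant (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Placeholder directory: expects subfolders such as benign/ and malignant/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dermoscopy/train", image_size=(224, 224), batch_size=32,
    label_mode="binary")
model.fit(train_ds, epochs=5)
```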

Boardroom bots?

The overall socioeconomic benefits of AI should inspire, rather than intimidate, then. But what of the prospect of artificially intelligent business leaders? Could a robot really be joining you in the boardroom in the not-too-distant future? The idea isn’t as outlandish as it might sound. Finnish IT software and service company Tieto has appointed an AI app named Alicia T as a full member of the management team (with the capacity to cast votes) in one of its business units.

“More broadly, emerging technologies will disrupt firms’ approaches to sourcing skills,” Parikh says. “With the underlying systems developing at an exponential pace, agile and forward-thinking businesses will be best placed to take advantage. This means rethinking talent, culture and organisational forms. Directors need to map out which skills will still be required in five years’ time and how they can recruit, develop and retain people who can adapt to the technology and prosper alongside it.”

Whaley agrees. “Getting familiar with the fundamentals of AI will help business leaders to see through the hype and spot charlatans peddling impossible dreams,” he says. “It’s particularly important for them to understand the basics of machine learning, extending into deep learning, speech recognition, natural language processing and machine vision.”

Oren says that leaders must learn to discern “where the human touch is still critical”, adding that “large-ish organisations should have a group of people who understand the science and can brief their leader on where it’s going and what optimisations can take place”.

Borowska goes as far as to argue that any firm failing to implement machine learning in the next 10 years risks getting “eaten up by another”, given that competitors will be performing tasks “hundreds or even thousands of times faster and with much greater accuracy”.

Those embracing AI should not expect a completely easy ride: the safety issues and ethical considerations involved – and the regulations that may follow as AI becomes more ubiquitous – are the subject of heated debate. For his part, Taylor says that the Royal Society of Arts is trying to focus the discussion on key issues of human interest, such as ensuring that autonomous systems do not inadvertently harm society and guarding against their misuse by malicious actors. But the overwhelming message from the experts is that AI’s potential benefits vastly outweigh any risks it may pose – and that any other view is, for now, pure science fiction.


About author

Nick Scott

A former editor-in-chief of The Rake and deputy editor of the Australian edition of GQ, Nick has had features published in titles including Esquire, The Guardian, Observer Sport Monthly and Rolling Stone Australia and is a contributing editor to Director magazine. He has interviewed celebrities including Hugh Jackman, Daniel Craig and Elle Macpherson, as well as business people including Sir Richard Branson, Charles Middleton, and Nick Giles and Michael Hayman MBE.
