Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment”. The most critical difference between AI and general-purpose software lies in the phrase “take actions”. AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore cannot anticipate.
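To make the agent framing concrete, the sketch below shows the percept-action loop in miniature: the agent repeatedly receives a percept from its environment and maps it to an action. It is illustrative only; the ThermostatAgent class and its policy are hypothetical, not drawn from Russell and Norvig.

```python
# Minimal sketch of the agent loop: percepts in, actions out.
# ThermostatAgent and its policy are invented for illustration.

class ThermostatAgent:
    """Toy agent: perceives a temperature, acts to move it toward a target."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def act(self, percept: float) -> str:
        # The agent's policy: map the percept to an action.
        if percept < self.target_temp - 0.5:
            return "heat_on"
        if percept > self.target_temp + 0.5:
            return "heat_off"
        return "no_op"

def run(agent: ThermostatAgent, percepts: list[float]) -> None:
    for temp in percepts:
        action = agent.act(temp)  # take an action that affects the environment
        print(f"perceived {temp:.1f} degrees -> {action}")

run(ThermostatAgent(target_temp=20.0), [18.2, 19.9, 21.3])
```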
According to PwC: “the fastest-growing category of AI is machine learning, the ability of software to improve its own activity, based on interaction with the world at large. The spectrum of AI can be divided into three: Assisted Intelligence, widely available today, improves what people and organisations are already doing. Augmented Intelligence, emerging today, enables people to do things they couldn’t otherwise do. Autonomous Intelligence, being developed for the future, establishes machines that act on their own.”
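A minimal sketch of what “improving its own activity, based on interaction with the world at large” can mean in practice: an online learner that adjusts a single weight after every interaction, so each prediction is better informed than the last. The data and learning rate here are invented for illustration.

```python
# Online learning sketch: behaviour improves with every interaction.
# The (input, outcome) pairs and learning rate are toy values.
weight = 0.0
learning_rate = 0.1
for x, outcome in [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]:
    prediction = weight * x
    error = outcome - prediction
    weight += learning_rate * error * x  # adjust behaviour from feedback
    print(f"prediction={prediction:.2f}, updated weight={weight:.2f}")
```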
AI could contribute trillions of dollars to the global economy within a decade. However, new and advancing technologies inevitably raise new controversies and responsibility issues, and AI will undoubtedly have a significant social impact that will need to be carefully managed.
AI is likely to become increasingly powerful and useful, and thus more and more important. Yet experts caution that unless AI is deployed robustly, safely and as part of a well-managed system, businesses will create enormous risks for themselves. To counter this, businesses will need a sound grasp of what the AI agents they deploy do, on what sets of data, and with what likely or possible outcomes.
Whilst AI technologies offer huge potential, they also expose businesses to new risks, including the possibility of falling victim to abuses of power by unscrupulous suppliers. If AI technologies or services are outsourced, the provenance, ownership and control of the third parties involved should be known.
AI’s impact on employment levels is likely to be profound as the number of jobs that can only be done by people diminishes. The proportion of jobs at risk of automation is estimated at 47% in the USA, 35% in the UK and 77% in China, while across the OECD the average is 57%. Foxconn, a key manufacturing partner for Apple, Google and Amazon, is the world’s tenth-largest employer and has already replaced 60,000 workers with robots. McDonald’s former chief executive Ed Rensi told Fox Business that if the minimum wage in the US rose, the fast-food chain would consider robots, according to CLSA’s Seagrim. “It’s cheaper to buy a $35,000 robotic arm than it is to hire an employee who is inefficient making $15 an hour bagging French fries,” he told the broadcaster.
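The economics behind that quote are easy to check with a back-of-the-envelope calculation. In the sketch below, only the $35,000 arm and the $15 hourly wage come from the quote itself; the working hours are an assumed full-time equivalent.

```python
# Back-of-the-envelope payback calculation behind Rensi's comparison.
# Only robot_cost and hourly_wage are quoted figures; the rest is assumed.
robot_cost = 35_000        # one-off purchase (quoted)
hourly_wage = 15.0         # quoted US minimum-wage scenario
hours_per_year = 2_000     # assumption: 40 hours x 50 weeks

annual_labour_cost = hourly_wage * hours_per_year  # $30,000 per year
payback_years = robot_cost / annual_labour_cost
print(f"Payback period: {payback_years:.2f} years")  # ~1.17 years
```

On these assumptions the arm pays for itself in well under two years, even before wage rises, which is why the comparison resonated.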
The World Economic Forum has warned that the rise of robots will lead to a net loss of over 5 million jobs across 15 major developed and emerging economies by 2020. Increasing automation thus poses the challenge of building societies that can cope with this transition, and of building resilience amongst workers. According to Professor Stephen Hawking, the social impact of artificial intelligence may be one of increasing inequality, with the 1% benefiting enormously to the detriment of the 99%. Business can play a role in building economic resilience and new industries amongst those replaced by machines.
The use and collection of data by tech companies – Google, Amazon, Facebook, Apple and others – raises significant issues of privacy and consent, given that it is becoming increasingly infeasible to opt out and still participate fully in society and the economy. The question of who owns data is also raised. Whilst there may be a working assumption that it belongs to the big companies that generate it, this fails to tackle privacy concerns. In Trento, Italy, hundreds of families are living with a ‘New Deal’ on Data: they are notified of, and have control over, data generated about them, and it is securely shared in an auditable way. These people actually share far more than those who lack this control, potentially because, once they are in control, they recognise the value of sharing.
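One way “securely shared in an auditable way” might look in code: record every disclosure in a hash-chained, append-only log that the data subject can inspect, so any tampering with the history is detectable. This is an illustration of the idea only, not a description of the Trento system.

```python
# Illustrative append-only audit log for data disclosures.
# Hash-chaining each entry to its predecessor makes tampering detectable.
# All names and values are hypothetical.
import hashlib, json, time

audit_log: list[dict] = []

def share(subject: str, field: str, recipient: str) -> None:
    entry = {
        "subject": subject,
        "field": field,
        "recipient": recipient,
        "timestamp": time.time(),
    }
    # Include the previous entry's digest so the chain cannot be rewritten.
    prev = audit_log[-1]["digest"] if audit_log else ""
    entry["digest"] = hashlib.sha256(
        (prev + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    audit_log.append(entry)

share("family_42", "energy_usage", "utility_research_project")
print(json.dumps(audit_log, indent=2))  # the data subject can review this log
```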
However, in terms of work life and data, PwC notes the possibility that companies will increasingly monitor employee data to track efficiency. Contracts may require employees to hand over more and more data on their health, performance and possibly private lives in return for job security. PwC notes that, on average, only three out of ten participants in its global survey would be happy with this.
Artificial intelligence and robotics prompt new ethical considerations. For instance, how should a driverless car decide whether to swerve to avoid an accident if the only alternative is crashing into pedestrians? Further, machine learning can lead to systems developing social prejudices. For instance, if artificial intelligence is used to filter job applicants, the system may replicate prejudices embedded in its training data or built into it unconsciously by its creators.
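How a learned filter can inherit prejudice is easy to demonstrate in miniature. In the sketch below, the “training data” encodes past hiring decisions that favoured one group; a naive model that learns from historical acceptance rates simply reproduces that bias. The data and names are fabricated for illustration.

```python
# Sketch of bias replication: a model trained on biased hiring history
# reproduces the bias. All records here are fabricated for illustration.

history = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": False}, {"group": "B", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": True},
]

def hire_rate(group: str) -> float:
    records = [r for r in history if r["group"] == group]
    return sum(r["hired"] for r in records) / len(records)

# The "model": shortlist applicants from groups with a high historical rate.
def shortlist(applicant_group: str, threshold: float = 0.5) -> bool:
    return hire_rate(applicant_group) >= threshold

print(shortlist("A"))  # True  - the past bias is replicated
print(shortlist("B"))  # False - equally able applicants are filtered out
```

Nothing in the model refers to merit; it has merely learned yesterday’s decisions, which is precisely how such systems entrench prejudice at scale.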
These ethical considerations do not mean that artificial intelligence and robotics should not be used, but rather that there is a philosophical and ethical dimension to these technologies that businesses should be aware of and fully engaged with.