Responsible AI

Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment”. The most critical difference between AI and general-purpose software is in the phrase “take action”. AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore cannot anticipate.
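
As a minimal sketch of this percept-and-action loop (the thermostat agent, its target temperature and its readings are illustrative inventions, not drawn from the source):

    # A minimal sketch of the percept -> action loop described above.
    # The thermostat agent and its environment are illustrative only.
    class ThermostatAgent:
        """An agent that receives a temperature percept and chooses an action."""

        def __init__(self, target_temp: float):
            self.target_temp = target_temp

        def act(self, percept: float) -> str:
            # The percept comes from the environment, not the programmer,
            # so the exact sequence of actions cannot be anticipated in advance.
            return "heat_on" if percept < self.target_temp else "heat_off"

    # Readings arrive from the world at large, outside the programmer's control:
    agent = ThermostatAgent(target_temp=20.0)
    for temperature in [17.5, 18.9, 21.2, 19.8]:
        print(temperature, "->", agent.act(temperature))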

According to PwC: “the fastest-growing category of AI is machine learning, the ability of software to improve its own activity, based on interaction with the world at large. The spectrum of AI can be divided into three: Assisted Intelligence, widely available today, improves what people and organisations are already doing. Augmented Intelligence, emerging today, enables people to do things they couldn’t otherwise do. Autonomous Intelligence, being developed for the future, establishes machines that act on their own.”

AI could contribute trillions of dollars to the global economy within a decade. However, new and advancing technologies inevitably raise new controversies and responsibility issues, and AI will undoubtedly have a significant social impact that will need to be carefully managed.

AI is likely to become increasingly powerful and useful, and thus more and more important. Yet experts caution that without ensuring that AI is deployed robustly, safely and as part of a well-managed system, businesses will create enormous risks for themselves. To counter this, businesses will need a sound grasp of what the AI agents they deploy do, what data sets they draw on, and what their likely or possible outcomes are.

Whilst AI technologies offer huge potential, they also open businesses up to new risks, including the possibility of becoming victims of abuses of power by unscrupulous suppliers. If AI technologies or services are outsourced, the provenance, ownership and control of the third parties involved should be known.

AI’s impact on employment levels is likely to be profound as the number of jobs that can only be done by people diminishes. Estimates put the share of jobs at risk of automation at 47% in the USA, 35% in the UK and 77% in China, with an average of 57% across the OECD. Foxconn, a key manufacturing partner for Apple, Google and Amazon, is the world's tenth-largest employer and has already replaced 60,000 workers with robots. McDonald's chief executive Ed Rensi told Fox Business that if the US minimum wage rose, the fast-food chain would consider robots: "It's cheaper to buy a $35,000 robotic arm than it is to hire an employee who is inefficient making $15 an hour bagging French fries."

The World Economic Forum has warned that the rise of robots will lead to a net loss of over 5 million jobs across 15 major developed and emerging economies by 2020. Increasing automation brings the challenge of building societies that can cope with this transition, and of building resilience amongst workers. According to Professor Stephen Hawking, the social impact of artificial intelligence may be one of increasing inequality, with the 1% benefiting enormously to the detriment of the 99%. Business can play a role in building economic resilience and new industries amongst those whose jobs are replaced by machines.

The use and collection of data by tech companies – Google, Amazon, Facebook, Apple and others – raises significant issues of privacy and consent, given that it is becoming increasingly infeasible to opt out and still participate fully in society and the economy. The question of who owns data is also raised. Whilst there may be a working assumption that it belongs to the big companies that generate it, this fails to tackle privacy concerns. In Trento, Italy, hundreds of families are living with a ‘New Deal’ on data: they are notified of, and have control over, the data generated about them, and it is securely shared in an auditable way. These people actually share far more than those who do not have this control, potentially because, once they are in control, they recognise the value of sharing.

However, in terms of working life and data, PwC notes the possibility that companies will increasingly monitor employee data to track efficiency. Contracts may require employees to hand over more and more data on their health, performance and possibly private life in return for job security. On average, only three out of ten participants in PwC's global survey said they would be happy with this.

Artificial intelligence and robotics prompt new ethical considerations. For instance, when should a driverless car swerve to avoid an accident if the only alternative is crashing into pedestrians? Further, machine learning can lead to systems developing social prejudices. If artificial intelligence is used to filter job applicants, for instance, the system may replicate subconscious prejudices built into it by its creators or embedded in the data it was trained on.
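
One way such bias can be surfaced is by comparing a screening model's selection rates across groups. The sketch below is illustrative only: the decisions are invented, and the 80% threshold follows the US "four-fifths" rule of thumb for adverse impact.

    # Illustrative check for disparate impact in an automated applicant filter.
    from collections import defaultdict

    # (group, was_shortlisted) pairs, e.g. as output by a screening model.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in decisions:
        totals[group] += 1
        selected[group] += shortlisted

    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    for group, rate in rates.items():
        if rate < 0.8 * best:  # "four-fifths" rule of thumb
            print(f"Potential adverse impact against {group}: {rate:.0%} vs {best:.0%}")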

These ethical considerations do not mean that artificial intelligence and robotics should not be used, but rather that there is a philosophical and ethical dimension to these technologies that businesses should be aware of and fully engaged in.

Algorithm

A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.
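
A classic illustration (ours, not part of the definition) is Euclid's algorithm for the greatest common divisor: a short, fixed set of rules that a computer can follow mechanically.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: repeat a fixed rule until it terminates."""
        while b:
            a, b = b, a % b
        return a

    print(gcd(48, 18))  # -> 6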

Artificial Intelligence

AI is ‘the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment’. The most critical difference between AI and general-purpose software is in the phrase ‘take action’: AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore cannot anticipate. The fastest-growing category of AI is machine learning, the ability of software to improve its own activity, based on interaction with the world at large.

AI can be divided into three broad types:

  • Assisted Intelligence (also known as “narrow AI”), widely available today, improves what people and organisations are already doing
  • Augmented Intelligence, emerging today, enables people to do things they couldn’t otherwise do (this is similar to Artificial General Intelligence (AGI): the level of intelligence a machine would need to attain to successfully perform any intellectual task that a human being can)
  • Autonomous Intelligence (also known as Artificial Superintelligence (ASI)), being developed for the future, establishes machines that act on their own.

Automation

The use or introduction of automatic equipment so that activities previously done by people can be done by machines.

Big Data

Extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations, especially relating to human behaviour and interactions.

Chain-of-ownership

Whilst companies may buy the use of AI technologies, this does not necessarily mean that they are the ultimate owners of the algorithm, the data that is fed into it, or its insights. The chain-of-ownership refers to who the creators of the AI are, and who the ultimate owners and controllers of the AI, its inputs and its outputs are.

Machine Learning

A type of artificial intelligence (AI) that allows software applications to become more accurate in predicting outcomes without being explicitly programmed. Machine learning algorithms are often categorised as being supervised or unsupervised.
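
A minimal sketch of the supervised/unsupervised distinction, assuming scikit-learn is available (the toy data is invented for illustration):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])

    # Supervised: the algorithm learns from labelled examples.
    y = np.array([0, 0, 0, 1, 1, 1])
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[2.5], [10.5]]))  # expected: [0 1]

    # Unsupervised: the algorithm finds structure without labels.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)  # two clusters; the label numbering is arbitrary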

Answering YES

All Businesses MUST

Describe how they currently use AI and outline any future plans

Confirm that full strategic and governance responsibility for AI is assumed at Board level

Explain if and how their algorithms are developed so that they do not contain inherent biases

Describe their policies and practices for safeguarding data used by AI and managing and controlling any insights gleaned

All Businesses MAY

Explain any HR policies relating to automation and artificial intelligence technologies

Describe any partnerships they have with other organisations to use the data they collect/store for both business and social objectives

Describe which stakeholder groups’ data their AI tools interrogate

Explain how they support research into AI and whether they are part of any collaborative efforts to further public understanding and responsible development of AI

State any philosophies and beliefs they hold relating to data or AI

Answering NO

All Businesses MUST

Explain why they do not meet the requirements to answer YES to the question, listing the business reasons, any mitigating circumstances or any other reasons that apply

All Businesses MAY

Describe any efforts to use data and AI responsibly that do exist, even though all the requirements to answer YES to this question are not met

Mention any future intentions regarding this issue

Answering NOT APPLICABLE

All Businesses MUST

Confirm that they do not use AI in any part of their business operations

All Businesses MAY

Mention any future intentions regarding this issue

DON'T KNOW is not a permissible answer to this question

Version 2

To receive a score of 'Excellent'

Responsible use of artificial intelligence is a key aspect of the business. The company carefully considers the use and consequences of artificial intelligence and maintains good controls over its implementation within the business, having consideration for the needs of the business, its workforce and the economy generally.

Examples of policies and practices which may support an EXCELLENT statement (not all must be observed, enough should be evidenced to give comfort that the statement is the best of the four for the business being scored):

  1. The board understands issues relating to AI and takes full and active responsibility
  2. The board ensures chains of ownership of AI technologies mean that it controls its data, programs and insights
  3. The business ensures that the organisations it sources its algorithms from are reputable and transparent
  4. Employees are made aware of risks to their jobs posed by automation, and are prepared for this
  5. Support is offered for employees to adjust to automation, for instance careers advice or retraining opportunities
  6. The company discloses what types of data it collects and feeds into its AI developments
  7. The company has carefully defined what its sensitive data for AI is and controls who has access to it
  8. The company considers the impact on all employees and society before implementing AI technology
  9. The company engages employees in discussions on AI replacing current jobs and actively takes on board their views
  10. The company carefully considers potential ethical issues with AI technologies, and acts to resolve them
  11. HR policy takes into account how humans and machines can work together
  12. Company data is used to promote other social goods – such as health, public policy or the environment
  13. Company is transparent about how AI is used
  14. Has a dedicated team that is responsible for data input, training and resulting output
  15. Publicly commits to using AI responsibly
  16. Influences others on AI responsibility
  17. Board level responsibility is taken on issues relating to AI
  18. Internal procedures assure accountability of AI developments
  19. The company’s diversity and inclusion policy encompasses AI
  20. The company ensures that the data sets and methodologies used in training AI software have integrity, are unbiased and kept up to date
  21. Takes ownership and responsibility for the output of AI systems and the results of the actions of systems controlled by AI
  22. Understands the consequences of using AI for its business and the wider world
  23. Understands the use of AI within its supply chain and embedded into systems it may acquire or use from others

To receive a score of 'Good'

The company is committed to responsible AI and takes notice of risks concerning AI, both to itself and to society in general; it aims to resolve them effectively and has a high level of oversight over the AI it uses.

Examples of policies and practices which may support a GOOD statement (not all must be observed, enough should be evidenced to give comfort that the statement is the best of the four for the business being scored):

  1. The board understands AI in the context of its business, its risks and implications but no clear chain of responsibility is in place
  2. The company has a large degree of oversight of the chains of ownership of its algorithms, but does not have full control
  3. The company makes efforts to ensure the suppliers of its AI are reputable
  4. Employees are aware of risks to their jobs from automation
  5. The company will share data for social and environmental benefit if directly approached, but does not seek out partnerships
  6. Employees are engaged in discussions on AI replacing current jobs
  7. Ethical issues that may arise due to AI are considered
  8. Data is protected to an industry-standard level, and is deleted when no longer needed
  9. The company discloses what sorts of data it collects for its AI developments

To receive a score of 'Okay'

Whilst not a priority for the company, it is aware of AI risks and recognises the importance of responsible AI OR responsible AI is not relevant to the business

Examples of policies and practices which may support an OKAY statement (not all must be observed, enough should be evidenced to give comfort that the statement is the best of the four for the business being scored):

  1. At senior levels there is understanding of AI
  2. Some efforts are made to have oversight of chains of ownership
  3. The company’s stability, reputation and strategy are not compromised by risky relationships with AI suppliers
  4. The company complies with the Data Protection Act or other relevant legislation
  5. Whilst care is taken to avoid mistakes relating to AI, the broader social impact is not considered
  6. The company is willing to inform employees about risks to their jobs, but only if they actively seek out the information

To receive a score of 'Poor'

The company takes an irresponsible approach to AI and ignores associated risks.

Examples of policies and practices which may support a POOR statement (not all must be observed, enough should be evidenced to give comfort that the statement is the best of the four for the business being scored):

  1. Chains of ownership of AI put the company’s stability, reputation and strategy at risk
  2. There is no clear position of responsibility for AI within the company
  3. The board and other senior figures within the company ignore issues relating to AI
  4. The company has no oversight or control of the data input to the algorithms it uses, or of their impacts and insights
  5. Company fails to prepare workers for AI job losses
  6. Sells on data illegally, or fails to fully protect anonymity
  7. Data is collected and stored on customers without their consent
  8. Stakeholders have no access to their personal data
  9. The company prioritises short term profitability over workers’ welfare and safe use of AI
  10. Ethical issues are considered irrelevant at all levels
  11. Job losses due to automation are abrupt
  12. Workers are not knowledgeable about the risk automation poses to their jobs