Technological advances have increased both the quantity of data that can be collected, processed and stored, and the speed at which this happens. Data is now an extremely valuable resource for organisations, with a wide range of applications. However, with this come ethical considerations that go beyond simply protecting the privacy of those whose data is collected.
Consumers are increasingly concerned about the extent to which companies can collect data about them without their knowledge, how that data is used, and the insights that can be gleaned from seemingly innocuous information. More and more voices are challenging the amount of control companies can wield over individuals’ lives as a result of the data they have gathered.
Businesses that process and store data face considerable fines for breaches or lapses in privacy protection, and the reputational damage and loss of stakeholder trust and confidence that follow may be costlier still. The financial implications of getting things wrong are therefore significant.
At the same time, good data used well has enormous potential for positive impact on people and planet. More and more businesses are looking beyond mere risk and liability minimisation to explore how their data activities might contribute to the public good.
The Responsible Data Forum defines handling data ‘responsibly’ as: “The duty to ensure people’s right to consent, privacy, security and ownership around the information processes of collection, analysis, storage, presentation and reuse of data, while respecting the values of transparency and openness.” Responsible data handling aims to do no harm and to use the outcomes of data processing to create products and services that benefit individuals and society at large.
Using customers’ data can help businesses to target advertising, tailor services and develop new products, so it has a positive impact on the bottom line. Consumers can also benefit through improved services, new products that meet particular needs, or personalised support such as recommendation algorithms and financial planning assistance. However, collecting and processing data has an unavoidable impact beyond the organisation.
The increasing reliance on data can also exacerbate digital inequality. Digital illiteracy or lack of access to technology can mean that some groups cannot share data, cannot control what data is shared, do not benefit from its results and do not understand their rights over the information collected about them. For instance, people on low incomes may not be able to afford smartphones or important security updates, leaving them more exposed to financial crime and less able to access online services. Companies have also been criticised for collecting data on consumers and using it to develop products and services without compensating them. Data is valuable, and the ‘if it’s free, you’re the product’ model raises concerns about who should benefit from an individual’s data.
Similarly, the insights gained from data analysis can entrench divisions. Criticism has been levelled at businesses that seek to make their products addictive in order to monetise users’ time, attention and networks, or that advertise products people do not need or that may harm them, such as debt products.
The gathering and storing of data on individuals and groups may also facilitate human rights abuses. These can range from aiding government surveillance, enabling internment and facilitating discrimination against ethnic minorities or marginalised communities, to directly contributing to genocide. While businesses may not intend these effects, they should be aware of the possible ramifications of collecting certain data, whom they share it with and under what circumstances. Companies have already been criticised for illegally sharing data with governments.
Taking an ethical approach to data starts before any data has been collected, and progresses throughout the project or product design and implementation process. Issues to be considered include:
- Data minimisation. This avoids the collection of data for collection’s sake. By carefully identifying what data is needed, an organisation can avoid gathering, and taking on responsibility for, unnecessary data. This also minimises the risks of data loss and data theft, and the environmental impact of data storage.
- Privacy by design. Building privacy measures in at the beginning of a project is easier and more effective than tacking them on at the end. For this to be successful, privacy is something key individuals across the organisation need to understand fully, not just the IT department or other internal silos.
- Cautious approach. An organisation taking a cautious approach will seek to understand fully the risks and possible repercussions of collecting data and, ultimately, if these cannot be fully grasped, this data may not be gathered. This may mean that some functions cannot be carried out, but ultimately protects data subjects from unforeseen negative effects. This can apply at any stage of the data process.
- Informed consent. For consent to be truly informed, it must meet several criteria. Firstly, the consumer must be genuinely informed, so information must be available and clear. Consent applies not only to the data being gathered, but also to how it will be used. Pre-ticked boxes and opt-outs are not sufficient for gaining truly informed consent. Additionally, refusing consent cannot result in loss of services, so alternatives must be available, including non-digital options. The all-or-nothing options that currently exist, such as those consumers must ‘agree’ to in order to use social media or apps, do not meet these criteria.
- Data security. An organisation must take adequate measures to ensure the privacy of data subjects. This may include protecting hardware as well as databases, and implementing measures such as strong passwords or clear-desk policies to prevent human error. Safeguards and measures such as encryption can help to avoid the worst if errors or breaches do happen. Securing data properly is not just about protecting individuals’ privacy, but may also be necessary to protect groups of people, especially if data on these groups could cause them harm or disadvantage, such as communities at risk of repression, violence or exploitation.
- Sensitive data. It is especially important that companies protect sensitive data, which can include, but is not limited to, medical records, personal financial data and personal communications. Companies should be aware that what counts as ‘sensitive’ data varies from person to person and can change with circumstances.
- Combining datasets. Linking or combining datasets can lead to greater insights and understanding, but the increase in the number of data points can threaten individuals’ anonymity. This is known as the Mosaic Effect. Additionally, initial consent may not extend to this new use, particularly if anonymity is compromised, so if data collected is going to be used in this way, it should be made clear to data subjects.
- Processing. Any kind of processing of data has been subject to human decision making, and therefore risks biases and errors, which can then be magnified and reproduced. Involving diverse groups of people at this stage can go some way to mitigating this, such as consulting groups whose data is being used.
- Value chain. Data security is something that can be compromised at any stage of the value chain, whether this be buying or selling data itself or the tools used to collect, store and analyse it.
- The benefits and risks of sharing data. Data has the potential to provide valuable insights and solutions for any number of challenges, yet keeping these internal to the organisation that has analysed the data limits their impacts. Sharing information between institutions, such as tech companies and governments, can improve public services and increase efficiencies, or publicly owned data can assist organisations in tackling and understanding issues concerning public health, infrastructure or behaviour change. However, this sharing also comes with risks. Citizens are sometimes not aware that public bodies are sharing their data, and this data can be used simply to serve the profits of companies. Alternatively, there are civil and human rights concerns with companies sharing data with governments, repressive or otherwise.
- Ownership. The current model assumes that data is owned by those that collect it. This includes the insights gained from it. An ethical approach to data understands that individuals should have ownership over the data about themselves, and therefore have a say in what happens to it. Insights from aggregate data can also be shared more widely or be made public to serve the common good, ensuring adequate protection for the individuals involved.
- Accountability. Organisations can use the data gathered on subjects to categorise and label them. This information is rarely available to the consumers themselves and means that they cannot challenge or correct this process. Responsible data promotes transparency and accountability in how data is used instead, and allows data subjects more control over this process.
- Secure disposal/archiving. Once data is no longer in active use, or is to be deleted, care must be taken to ensure this happens effectively, and that any archive is protected against breach, error or future changes, such as the company being sold, or future government intervention.
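The data-minimisation principle above can be made concrete in code: decide up front which fields a service genuinely needs, and discard everything else before storage. The sketch below is purely illustrative; the record fields and the required-field list are invented for the example.

```python
# Hypothetical sketch of data minimisation: keep only the fields a
# service actually needs before anything is stored. All field names
# and values here are invented for illustration.

raw_record = {
    "name": "A. Customer",
    "email": "a.customer@example.com",
    "date_of_birth": "1990-01-01",
    "postcode": "AB1 2CD",
    "browsing_history": ["..."],  # tempting to keep, but not needed
}

# The only fields this hypothetical service genuinely requires.
REQUIRED_FIELDS = {"name", "email"}

def minimise(record: dict, required: set) -> dict:
    """Return a copy of `record` containing only the required fields."""
    return {k: v for k, v in record.items() if k in required}

stored = minimise(raw_record, REQUIRED_FIELDS)
print(sorted(stored))  # ['email', 'name']
```

Fields that are never collected cannot be lost, stolen or misused, which is why minimisation also reduces breach risk and storage footprint.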
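One common data-security safeguard alongside encryption is pseudonymisation: replacing direct identifiers with keyed hashes before analysis, so a leaked analytics table does not expose raw identities. A minimal sketch using Python's standard-library HMAC support, assuming the secret key is managed and stored separately from the data (the key shown is a placeholder, not a recommendation):

```python
import hashlib
import hmac

# Assumption: in a real system this key lives in a key vault or HSM,
# never alongside the data it protects. Placeholder value for the sketch.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Return a keyed hash (HMAC-SHA256, hex) of a direct identifier."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

token = pseudonymise("a.customer@example.com")
# The same input and key always yield the same token, so records can
# still be linked for analysis; without the key, the original email
# cannot be recovered or forged from the token.
```

Pseudonymisation is a harm-reduction measure, not full anonymisation: whoever holds the key can still re-link identities, so the key itself needs the same care as the raw data.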
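The Mosaic Effect described under ‘Combining datasets’ can be demonstrated in a few lines: two datasets that look harmless on their own re-identify individuals once joined on shared quasi-identifiers. All names, places and conditions below are invented for the example.

```python
# Hypothetical sketch of the Mosaic Effect. A "de-identified" health
# survey and a public register each seem safe alone, but joining them
# on quasi-identifiers (postcode area + birth year) restores identities.

health_survey = [  # anonymised: no names
    {"postcode": "AB1", "birth_year": 1975, "condition": "diabetes"},
    {"postcode": "CD2", "birth_year": 1990, "condition": "asthma"},
]

public_register = [  # public: names but no medical data
    {"name": "Pat Smith", "postcode": "AB1", "birth_year": 1975},
    {"name": "Sam Jones", "postcode": "CD2", "birth_year": 1990},
]

def reidentify(survey, register):
    """Join the two datasets on their shared quasi-identifiers."""
    index = {(p["postcode"], p["birth_year"]): p["name"] for p in register}
    return {
        index[(r["postcode"], r["birth_year"])]: r["condition"]
        for r in survey
        if (r["postcode"], r["birth_year"]) in index
    }

print(reidentify(health_survey, public_register))
# {'Pat Smith': 'diabetes', 'Sam Jones': 'asthma'}
```

This is why consent given for one dataset does not automatically extend to linked uses: the combined product can reveal far more than either source did.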
While regulations exist regarding data use, these often focus more on minimising corporate liability than on protecting the rights of data subjects, and have been criticised as insufficient or unable to keep up with technological advances. Organisations that go beyond regulatory minimums have clear policies and practices to guide their actions and to define what is and is not acceptable when it comes to data.
The EU’s GDPR (General Data Protection Regulation), set to come into force on 25 May 2018, is a major development in strengthening regulation, granting further rights to the individual, and redressing the power imbalance between data subjects and the companies that hold data on them. The GDPR will affect any organisation that controls or processes data on EU citizens. It will affect how organisations run their day-to-day operations, placing a greater burden on them to demonstrate responsible practice.
Included in the changes are:
- Mandatory breach reporting (with some exemptions), and fines of up to €20 million or 4% of annual global turnover, whichever is higher
- Broader and clearer definitions of personal data, e.g. covering genetic and biometric data
- An individual right to be forgotten, which extends throughout an organisation’s supply chain
- An individual right to data portability, enabling people to share their data with whomever they like and to specify how long it is shared for, e.g. open banking
- A stronger definition of informed consent.
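The fine ceiling in the first change above is a simple whichever-is-higher calculation; the sketch below illustrates it with invented turnover figures.

```python
# Illustrative arithmetic for the GDPR fine cap: up to 20 million euros
# or 4% of annual global turnover, whichever is higher. Turnover figures
# below are invented for the example.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Return the maximum fine for a given annual global turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

max_fine_eur(100_000_000)    # turnover 100m: 4% is 4m, so the 20m floor applies
max_fine_eur(1_000_000_000)  # turnover 1bn: 4% is 40m, the higher figure
```

For all but the largest firms the flat €20 million figure is the binding cap, which is why the 4% clause is aimed squarely at multinationals.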