Data Ethics: Doing the Right Thing with Data

By Rob Meredith (Principal Consultant)

In a recent data governance engagement, we were asked to undertake something not often requested by a private sector client: draw up a data ethics policy. 

These kinds of policies are more often seen in government and research institutions where working with highly sensitive personal data is common and highly regulated.  Our client rightly recognised, though, that trust can be a competitive advantage.  Think of companies like Apple: they’ve made trusting them to do the right thing with your personal information a core part of their brand that – they would contend – contrasts with their competitors.  Competing on trust, especially related to information, is becoming more important across the board with the ubiquity of data-driven marketing, AI’s voracious use of data, and frequent breaches of confidentiality. 

Our client wanted us to go beyond a privacy policy for handling personal data (which they already had).  They wanted to be able to point to a policy document that explicitly called out using and handling all data in an ethical manner.  Our client operates in an industry where trust is central to the services offered, but where high profile breaches of that trust across the sector had come to light.  The data ethics policy was one part of being able to show that they are deserving of their customers’ trust and business. 

What is ethics? 

Ethics isn’t about creating a shopping list of acceptable and unacceptable actions, or a  motherhood statement of corporate values.  Ethics is about providing a rational framework for deciding what’s right and wrong.  While people can have legitimate differences of opinion about what’s right and wrong in a given situation, it’s possible to establish an agreed, repeatable process for arriving at those conclusions.  The data ethics policy we developed set out the ethical principles that provide this decision-making framework in relation to data and information for our client. 

Isn’t doing what’s right just doing what’s legal? 

Making sure an action is legal is the bare minimum standard that people are obliged to meet.  Indeed, in a general sense, it can be argued that doing the right thing may even sometimes breach legal obligations.  In the context of an official corporate policy, though, all ‘right’ actions must be legal.  However, not all legal actions are right.  In other words, doing the right thing is about going beyond just avoiding what we mustn’t do, and actively doing what we should do. 

Our approach 

There are lots of different approaches to establishing an ethical decision-making framework.  Rather than getting caught up in academic debates that have literally been going on for millennia, we turned to other areas of applied ethics and adopted a general model of ethical principles (in this case taken from medical ethics): 

  1. Beneficence – our actions should ‘do good’ in the world, with an active intent to produce some positive outcome. 
  2. Non-maleficence – the classic ‘do no harm’ principle of medical ethics.  Our actions should be intended to remove, prevent or avoid delivering negative outcomes, including foreseeable side-effects. 
  3. Respect for the autonomy of others – ensuring that we don’t manipulate or deceive others and that any action we take on their behalf, or that they are convinced by us to take, is based on informed consent. 
  4. Justice and fairness – our actions should be (and be seen to be) fair and equitable to all, with equity of access to resources and opportunities, and with obligations and norms applied fairly and transparently. 

Data Ethics Principles 

From the general principles above, we derived the following principles specific to actions related to data and information collection, use and management.  These were the principles agreed on with the client – other principles could also be derived, such as a principle of equity of access to information.  Note that the terms data and information were used interchangeably by the client. 

1. Data is collected, used and managed for clearly defined and ethical purposes 

Having a clearly articulated purpose for an information asset is central to everything we do to manage and govern data, whether that’s designing the asset, managing data quality (fitness for purpose), or realising value from the data.  Having an articulated purpose has always been an important part of complying with the Privacy Act, but this principle extends beyond just personal information and meeting obligations under that Act.  

Ethics considers both the means and the ends (or goals) of an action, and while this principle may seem recursive, the actions we take with collecting, using and managing data (the means) are only right if the ends in mind are themselves right.  In other words, the business purpose for data needs to be articulated, and it must itself be legal and ethical before even considering whether the means to those ends are ethical. 

2. All employees are accountable for the way they collect, use and manage data 

The onus is on everyone to proactively ensure that the organisation does the right thing.  This applies not just to the data and information individuals are working with themselves; everyone is also expected to call out situations where they see data and information being misused or inadequately managed.  Regardless of their position in the organisation, every individual is accountable for ensuring the right thing is done. 

3. Data is only collected, used and disclosed with informed consent 

Where data or information is shared with the organisation – whether that be personal information from clients, or other third-party information resources – that information may only be used for its primary purpose, or for secondary or tertiary purposes, with explicit or reasonably implied consent from the sharer.  That consent must be fully informed by relevant considerations, and free of coercion, manipulation or deception. 

4. Data risks, biases and harms are minimised 

Negative outcomes of actions related to information collection, use and management need to be prevented and avoided.  This includes reasonably anticipated side-effects, and covers data risks (such as loss of information, unavailability, breaches of confidentiality, etc.), biases in the use and application of information, as well as other negative impacts on employees, clients, partners and the community.  This is an active obligation to ‘do no harm.’ 

5. Data assets are actively curated and managed for their defined purpose 

This is the positive counterpart of the principle above: there is an active obligation to pursue the purpose for which an information asset has been compiled.  Failure to do so is to fail to realise the value of that asset.  Indeed, unlike physical assets, which may retain their value whether used or not, the value of an information asset can only be derived from its use (and when no longer of use, the risk minimisation principle above dictates that it should be destroyed rather than retained). 

Putting it Into Practice 

Having a policy on paper will not in itself do anything to change practice in an organisation.  Introducing any change, but particularly change related to individual behaviour, involves a cultural shift and requires a number of parallel efforts, from giving individuals the tools and training to change, through to leadership and structural change. 

On the individual front, the ethics framework was embedded into the annual mandatory information management training all staff undertake.  In addition to this, a data ethics ‘explainer’ was developed, translating the ethics framework into four questions staff can ask themselves to work through the decision-making process:  

  1. Do I have a clear legal and ethical purpose for what I’m about to do?  Do I understand why I’m doing what I’m doing, and is it both legal and ethically reasonable?  If the answer is no to either of these, what alternative actions are more appropriate? 
  2. What are the potential negative consequences of my actions?  Is there any way I can achieve the stated purpose in a less harmful, or less risky, way?  Can mitigants be put in place to minimise risks?  Are the negative consequences outweighed by the positives?  Can I use less information to achieve the same or a materially similar outcome? 
  3. Do I have permission to use the information in this way?  If the information was shared with us, has the sharer given informed consent for this purpose, and if not expressly provided, is informed consent reasonably implied?  Do I have permission from the data owner?  Am I compliant with our policies?  If the answer is no to any of these, can consent be obtained, or an alternative approach adopted? 
  4. Will this action add value to our organisation, our customers and our stakeholders?  What are the actual benefits that justify this course of action?  Do those benefits outweigh the negative consequences?  For both positive and negative outcomes, think carefully about how to weigh up competing interests between customers, regulators, investors, other stakeholders and our organisation. 
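As an illustration only: the four-question explainer lends itself to a simple checklist, and some organisations choose to embed such checks into request workflows.  The sketch below is hypothetical (the client’s explainer is a set of reflective questions for staff, not software) and simply shows how the four questions could gate a data-use request in an internal tool:

```python
from dataclasses import dataclass

# Hypothetical sketch, not the client's actual tooling: the four explainer
# questions recorded as yes/no answers on a data-use request.
@dataclass
class DataUseRequest:
    description: str
    clear_legal_ethical_purpose: bool  # Q1: clear legal and ethical purpose?
    harms_minimised: bool              # Q2: risks, biases and harms minimised?
    informed_consent: bool             # Q3: permission/consent for this use?
    adds_value: bool                   # Q4: benefits outweigh the negatives?

    def gaps(self) -> list[str]:
        """Name any questions answered 'no', so the requester knows what to fix."""
        checks = {
            "clear legal and ethical purpose": self.clear_legal_ethical_purpose,
            "risks, biases and harms minimised": self.harms_minimised,
            "informed consent for this use": self.informed_consent,
            "adds value to stakeholders": self.adds_value,
        }
        return [name for name, answered_yes in checks.items() if not answered_yes]

    def approved(self) -> bool:
        """A request proceeds only when all four questions are answered 'yes'."""
        return not self.gaps()


# Example: a secondary use of client data not covered by the original consent.
request = DataUseRequest(
    description="Use client transaction data for a marketing model",
    clear_legal_ethical_purpose=True,
    harms_minimised=True,
    informed_consent=False,
    adds_value=True,
)
print(request.approved())  # False: consent must be obtained first
print(request.gaps())      # ['informed consent for this use']
```

The point of the structure is that a ‘no’ never silently blocks a request: the failed check is named, prompting the alternatives the explainer suggests (obtain consent, reduce the data used, add mitigants).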

Beyond cultural change at the individual level, the following organisational elements are also fundamental to a broader shift in organisational culture: 

Training and education 

While there’s an expectation that every individual is accountable for ensuring ethical data behaviours, it’s unreasonable for this to be the case without appropriate organisational support.  That means giving them knowledge and training in the data ethics principles in the policy, and how to apply those principles in order to make good decisions about their actions. 

Leadership, support and advice 

Embedding cultural practices requires commitment from senior leadership down, including the board, executive leadership team and all other levels of management.  Leaders need to clearly articulate their commitment to ethical data practices, demonstrate that commitment through their own behaviour, and show an expectation that others behave in a similar manner. 

The organisation’s leadership should ensure that everyone is confident that they are supported in considering the ethical aspects of their actions – and in some cases, directly questioning the ethical basis for actions they’ve been asked to perform. Everyone needs to know that they are supported in doing what is right. 

A framework of roles, organisational structures and processes ensuring accountability 

As a principle of good corporate governance, accountability depends on clearly articulated roles, responsibilities and lines of reporting.  This includes general line-of-business, business unit and divisional structures, as well as governance structures for decision-making, including data governance.  With clearly articulated roles and responsibilities, everyone can be confident in knowing who is responsible for addressing which aspects of a data-ethics issue, and how to escalate if necessary. 

Routinely considering data ethics in each stage of the information lifecycle 

Information and data naturally progress through a series of different stages as they are captured or created, stored and actively used, archived, and finally disposed of.  At each stage, the ethical considerations of actions should be assessed.  Should we collect this information, in this way?  Should we use it for this secondary purpose?  Do we really need to retain this information or should we dispose of it?  Are we disposing of information safely and securely? 

Final Thoughts 

In deciding to adopt a data ethics policy, the client considered simply introducing a data use policy instead.  The argument in favour of doing so was that a usage policy is more explicit about what’s permitted and what’s not, and takes a similar approach to, say, IT usage policies.  An ethics policy, by contrast, provides a framework for decision-making that could result in different conclusions given the different values and moral positions of the actor. 

We settled on the ethics approach for a couple of reasons.  While theoretically simpler to apply, a usage policy becomes very cumbersome very quickly as you try to enumerate all of the potential scenarios of what’s permitted and what’s not.  Conceptually, you’re also still applying someone’s values and morals, so you can’t escape the need to do the ethical legwork.  And at the end of the process you still end up with a document that has gaps in coverage: you can’t anticipate, a priori, every single circumstance in order to allow or disallow it.  Questions like ‘should we monetise an information asset?’ or ‘can we use this data for an AI application?’ will always result in an answer of ‘it depends’, with the specifics of the situation being a crucial factor in deciding right or wrong.  An ethics policy can cater for this, whereas a usage policy struggles.