Data-driven decision making & algorithms will improve ethical decision making

In his book “The (Honest) Truth About Dishonesty”, Dan Ariely shows that ethical choices aren’t fixed but context-dependent. In my quest to stimulate discussion about the subjectivity of data and the importance of being transparent about ethics in decision making, I decided to write my two cents’ worth.

“Every decision has an ethical dimension, even when we are not aware of it”

Every decision has an ethical dimension, even when we are not aware of it. Take something as simple as buying a carton of milk. Most of us will not consider the ethics behind that decision: we buy the cheapest brand, the one we always buy, or the one that happens to be available in the store we happen to be in. Others may not agree with our choice for ethical reasons, for example because they think we should only buy local products and the brand we bought comes from another country. Or they may say that using any animal products is ethically unjustifiable. Still others may have ethical issues with the store we buy the milk from, because of how it treats its employees, or with the way the milk is packaged. For every decision there are ethical arguments pro and con. A nice introduction for the interested reader is “Moral Decision Making: How to Approach Everyday Ethics”.

The dark & light side of algorithms

The growth in the use of data-driven algorithms has already ignited the discussion about ethical issues. Books have appeared like “Weapons of Math Destruction”, in which Cathy O’Neil describes how ‘unchecked’ algorithms maintain, reinforce and amplify existing biases, inequalities and injustices. Sam Ransbotham published a good article in MIT Sloan Management Review, “Balance Efficiency With Transparency in Analytics-Driven Business”, that I commented on in my article “Ethics, Decision Making, Algorithms, Automation”.

“The use of algorithms makes it possible to have this deeper discussion on ethics in decisioning that wasn’t possible before”

Most of these discussions center on the issue that historic data presents patterns of historic (biased, prejudiced, unfair, …) decision making that, if unchecked, will show up in the new algorithms and “survive”. As I point out in my article “Ethics, Decision Making, Algorithms, Automation”, they forget that “the use of algorithms makes it possible to have this deeper discussion on ethics in decisioning that wasn’t possible before (and that we definitely need to have) because all the algorithms were hidden in the brains of humans.” They also fail to mention that with automated algorithms the ‘receivers’ of the decisions will be treated more consistently and more equally. The problem is evaluating the algorithms and making the ethical choices transparent.

When evaluating algorithms we need to be aware that algorithms are not neutral. They are always defined with a goal in mind, and goals are based on assumptions, priorities and emotions. Choices have been made in building the algorithm: the techniques employed, the design and programming, and the translation of the technical output into a decision. Finally, there are rogue algorithms: algorithms that contain errors because they were not developed correctly, or that use data you do not want them to use, and that therefore may have effects you do not want. See also “The Ethics of Wielding an Analytical Hammer”.
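To make this concrete, here is a minimal, hypothetical scoring sketch in Python. Everything in it (the features, the weights, the threshold, the function names) is invented for illustration and not taken from any real system; the point is that every commented line marks a choice someone made, and every one of those choices has an ethical dimension:

def credit_score(applicant: dict) -> float:
    # Choice 1: which features count at all (why income, and not rent history?)
    income = applicant["income"]
    years_employed = applicant["years_employed"]
    # Choice 2: how much each feature weighs; these numbers encode priorities
    return 0.7 * (income / 100_000) + 0.3 * (years_employed / 10)

def decide(applicant: dict) -> str:
    # Choice 3: translating the technical output into a decision; moving this
    # threshold trades false approvals against false rejections, which is an
    # ethical trade-off, not a purely technical one
    threshold = 0.5
    return "approve" if credit_score(applicant) >= threshold else "reject"

print(decide({"income": 55_000, "years_employed": 4}))  # approve

None of these choices is visible to the person receiving the decision unless we spell them out.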

Actually, we need to be aware that implementing algorithms introduces unique additional biases, as described in “The Hidden Side Effects of Recommendation Systems” in MIT Sloan Management Review. And we need to take our own actions and behavior into account. Our actions influence the outcome, and therefore we need to be very aware of the limited time a prediction may be used, and we need to take our own actions into account in the final algorithm. When acting on a prediction we either take an action to prevent it from happening, to ensure its stability, or to increase the likelihood that it will happen. We are changing the reality the original algorithm was based upon.
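To illustrate the feedback loop, here is a small simulation sketch with invented numbers, assuming a hypothetical churn model that predicts a 60% churn rate for a high-risk segment, and a retention action that halves the churn of the customers it reaches:

import random

random.seed(42)

PREDICTED_CHURN = 0.60  # hypothetical historical estimate, made before any intervention

def simulate_churn(n_customers: int, base_rate: float, intervene: bool) -> float:
    # Acting on the prediction (e.g. offering a retention discount) changes
    # the very behavior the model was trained to predict
    effective_rate = base_rate * (0.5 if intervene else 1.0)
    churned = sum(random.random() < effective_rate for _ in range(n_customers))
    return churned / n_customers

print(f"Predicted churn:             {PREDICTED_CHURN:.0%}")
print(f"Observed churn, no action:   {simulate_churn(10_000, PREDICTED_CHURN, False):.0%}")
print(f"Observed churn, with action: {simulate_churn(10_000, PREDICTED_CHURN, True):.0%}")

Once we intervene, the new data no longer reflects the world the original model learned from; retraining on it without recording our intervention bakes our own actions into the next model.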

Legal requirements for algorithms (and the data they use) have already been set in the EU (GDPR) and in other countries. In the future we may need independent auditing of algorithms, as proposed in the article “Why We Need to Audit Algorithms” in Harvard Business Review.

A major topic in the discussion of ethics and AI is the “Trolley Problem” as it relates to self-driving vehicles. The “Trolley Problem” is a thought experiment in ethics that was often criticized in the past, but it has become very relevant with the introduction of self-driving vehicles. For an explanation see “Would you sacrifice one person to save five?”. To get a human perspective, MIT started a project, called “Moral Machine”, to collect as many human judgments as possible from across the globe. In a paper in Nature, “The Moral Machine experiment” (summarized here: “Should a self-driving car kill the baby or the grandma? Depends on where you’re from.”), the results show that humanity is divided on the best ethical choice to make.

Philosophers have already started to tackle the ethical issues related to self-driving vehicles. They have started to write algorithms based on several ethical theories, which could be implemented depending on the theory one chooses. See “Philosophers are building ethical algorithms to help control self-driving cars”. There are still other ethical problems to be solved related to self-driving vehicles, including (but not limited to): “could cars be programmed to drive past certain shops?”, “who is responsible if the car is programmed to put someone at risk?”, “drinking could increase once drunk driving isn’t a concern”, “an autonomous car is basically big brother on wheels”, and “if autonomous cars increase road safety and fewer people die on the road, will this lead to fewer organ transplants?”.
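Purely as an illustration of that idea (this is my own toy sketch, not code from the cited article, and the action names, harm counts and ‘duty’ flag are invented), here is how swapping the ethical theory could change a controller’s choice in a trolley-style situation:

from typing import Callable

Action = dict  # e.g. {"name": "swerve", "harmed": 1, "breaks_duty": True}

def utilitarian(actions: list[Action]) -> Action:
    # Minimize total harm, regardless of how the harm is caused
    return min(actions, key=lambda a: a["harmed"])

def deontological(actions: list[Action]) -> Action:
    # Never actively breach a duty (e.g. deliberately redirecting harm);
    # among the permissible actions, still prefer less harm
    permissible = [a for a in actions if not a["breaks_duty"]] or actions
    return min(permissible, key=lambda a: a["harmed"])

def choose(theory: Callable[[list[Action]], Action], actions: list[Action]) -> str:
    return theory(actions)["name"]

trolley = [
    {"name": "stay_course", "harmed": 5, "breaks_duty": False},
    {"name": "swerve", "harmed": 1, "breaks_duty": True},
]
print(choose(utilitarian, trolley))    # swerve
print(choose(deontological, trolley))  # stay_course

The same situation and the same data produce two defensible answers: the ‘theory of choice’ is itself the ethical decision.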

The “Trolley Problem” is a ‘straightforward’ problem compared to day-to-day business problems. A business needs to determine the balance between ethically treating customers, employees, suppliers, shareholders, partners, competitors and other stakeholders. It needs a point of view on how to handle the environment, government regulations, and ethical issues such as human trafficking, sourcing from conflict areas, corruption and more. There are many areas of conflict in striking this balance, but very few companies actually spell out their ‘ethical make-up’. Most tend to fall back on compliance with legal requirements and maximizing shareholder value (whatever that may mean). Some companies, however, have recently instituted the function of Chief Ethics Officer to handle these kinds of issues.

“A business needs to determine the balance between ethically treating customers, employees, suppliers, shareholders, partners, competitors and other stakeholders. It needs a point of view on how to handle the environment, government regulations, and ethical issues such as human trafficking, sourcing from conflict areas, corruption and more.”

When we take all this into account, we may actually end up with better decisions. In the article “Make “Fairness by Design” Part of Machine Learning”, the authors conclude: “By making fairness a guiding principle in machine learning projects, we didn’t just build fairer models — we built better ones, too”. And the author of “Want Less-Biased Decisions? Use Algorithms.” makes a very true statement: “So the next time you read a headline about the perils of algorithmic bias, remember to look in the mirror and recall that the perils of human bias are likely even worse.”
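As a minimal sketch of what one ‘fairness by design’ check could look like (the data, group labels and the single metric are invented for illustration; real projects use richer metrics such as equalized odds or calibration), here is a demographic-parity comparison of a hypothetical model’s decisions:

def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    in_group = [approved for g, approved in decisions if g == group]
    return sum(in_group) / len(in_group)

# (group, model_decision) pairs produced by some hypothetical model
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
print(f"Approval-rate gap between groups: {gap:.0%}")  # 50%

A large gap is a signal to revisit features, weights and thresholds before deployment, which makes the ethical choice explicit and auditable instead of hidden.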

Subjectivity of Data

A major issue in this respect is the subjectivity of data and the ethical boundaries of data collection and usage. Some of the boundaries are set by laws, like the implementation of GDPR in the EU and other (local) privacy and data regulations. A business also has to decide what its own ethical boundaries are in collecting and using data. Does profit/growth/shareholder value/market capitalization justify everything?

“the context of data is always relevant, data cannot manifest the kind of perfect objectivity that is sometimes imagined”

A recent article in “The New Atlantis”, “Why Data Is Never Raw”, makes an important point that most scientists have learned and some have remembered: the context of data is always relevant; data cannot manifest the kind of perfect objectivity that is sometimes imagined. My former colleague Anthony Scriffignano, Chief Data Scientist at Dun & Bradstreet, has written several very good articles on this subject here on LinkedIn. The key is that any data we create and collect are the result of the theory of reality we hold, the choices we make, the priorities we have, the technology and sources we use, and more. We will also need to accept that we will never have all the data relating to a subject, entity, event, … No matter what big-data marketing says: deciding based on complete information will remain a non-starter.
