Algorithm Ethics – The Next Chapter

Who does your computer think I am? In everyday life, every person is digitally represented in a multitude of IT systems built on invisible algorithms that, in a more or less pervasive way, control pieces of our lives: which way to drive, what movies to watch, but also whether offers and applications are accepted or declined, or how medical treatments are prioritized.

Second chapter of our talk on “Algorithm Ethics”
https://www.youtube.com/watch?v=i_Foj…

By Majken Sander and Joerg Blumtritt
Recorded session at Strata & Hadoop World Conference Singapore 2015

Based on data, algorithms infer our preferences and interests and make predictions about our future actions. Recommendation engines, search, and advertising targeting are the most common applications. With data collected from mobile devices and the Internet of Things, these user profiles become algorithmic images of our identities. These images can add deep insight into people's personalities to classic social research – or they might even replace it.
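
To make this concrete, here is a deliberately simplistic sketch, with invented click data rather than anything from the talk, of how a handful of behavioural events gets condensed into a profile that then stands in for the person:

```python
# Deliberately simplistic sketch with invented data: a few click events
# are condensed into a frequency profile that then stands in for the person.
from collections import Counter

clicks = ["sports", "sports", "politics", "cooking", "sports", "cooking"]

profile = Counter(clicks)                  # the "algorithmic image" of the user
top_interest, _ = profile.most_common(1)[0]

print(dict(profile))                       # counts per topic
print("recommend more:", top_interest)     # everything else becomes less visible
```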

We can also use such data-based representations of ourselves to build intelligent agents that act in the digital realm on our behalf: the AlgorithmicMe™.

This raises important questions about the transparency of these algorithms, and about our ability – or, just as important, our lack of means – to change or affect the way an algorithm views us. Every such algorithm bears value judgments: decisions on methods and pre-sets of the program's parameters, choices about how to deal with a task according to social, cultural, or legal rules or personal persuasion.
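
As an illustration, consider a hypothetical application-scoring rule; the scores and thresholds below are assumptions made up for this sketch, but the point is general: the pre-set cut-off is a value judgment about whom to accept and whom to turn away, not a technical necessity.

```python
# Minimal sketch of a hypothetical application-scoring rule.
# Scores and thresholds are illustrative assumptions, not values from the talk.

def decide(score: float, threshold: float) -> str:
    """Accept or decline an application based on a model score."""
    return "accept" if score >= threshold else "decline"

applicants = {"A": 0.55, "B": 0.72, "C": 0.91}

# The pre-set threshold encodes a value judgment: a higher cut-off turns away
# more borderline applicants, a lower one accepts more of them and shifts
# the risk elsewhere.
for threshold in (0.5, 0.7, 0.9):
    print(threshold, {name: decide(s, threshold) for name, s in applicants.items()})
```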

We need to address end users, who need greater awareness, more education, and more insight into the subjective algorithms that affect their lives. We also need to look at ourselves: data consumers, data analysts, and developers who, more or less knowingly, produce subjective answers through our choice of methods and parameters, often unaware of the bias we impose on a product, a company, and its users.
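
Even an apparently neutral choice of method carries such a judgment. A minimal sketch with made-up numbers: whether an analyst reports the mean or the median session length changes which "typical user" a product team ends up designing for.

```python
# Minimal sketch with made-up numbers: the "typical" session length
# depends on which aggregation method the analyst happens to choose.
from statistics import mean, median

session_minutes = [2, 3, 3, 4, 5, 5, 6, 120]   # one heavy-usage outlier

print("mean:  ", mean(session_minutes))    # 18.5, dominated by the outlier
print("median:", median(session_minutes))  # 4.5, closer to the typical session
# Both numbers are "correct"; which one informs the decision is a subjective choice.
```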

We will present some of these value judgments with examples and discuss their consequences. We will also present possible ways to address the problem: algorithm audits and standardized specifications, but also more visionary concepts like an “AlgorithmicMe”, a “data ethics oath”, and “algorithm angels” that could raise awareness and guide developers in building their smart things.