In many ways, our lives depend on implicit value judgements: Search engines return results based on what they consider individually relevant. An algorithm in the ad network’s targeting system selects which ads we see. Image processing – in Instagram as well as in MRI – forms pictures of our environment on our behalf. And as drones are prepared for autonomous kill decisions, the discussion becomes existential.
These ‘decisions’ come down to algorithms, and the “value judgements” attached to them shape our daily lives. We are, however, usually not aware of the judgements that are buried in our many devices.
This session gives an introduction to the three different forms of value judgements in algorithms. It will go beyond the obvious “calculable” value judgements – like credit scoring – and instead address the multitude of “hidden” ethical algorithms that are far more pervasive.
These value judgements include:
1) Choosing a method
2) Setting parameters
3) Dealing with uncertainty and misclassification.
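A toy sketch (hypothetical, not from the session) may make the three judgements concrete: even a trivial keyword-based spam filter embeds a choice of method, developer-set parameters, and a silent rule for borderline cases. The word list, weights, and threshold below are invented for illustration.

```python
# 1) Choice of method: score messages by the share of "suspicious" words.
#    This is a design decision the user never sees.
SUSPICIOUS_WORDS = {"free", "winner", "urgent"}  # 2) a parameter chosen by the developer


def spam_score(message: str) -> float:
    """Fraction of words in the message that appear on the suspicious list."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!") in SUSPICIOUS_WORDS)
    return hits / len(words)


THRESHOLD = 0.2  # 2) another parameter: where exactly is the cut-off?


def classify(message: str) -> str:
    # 3) Dealing with uncertainty: a score exactly at the threshold is
    #    silently resolved in favour of one side -- here, of the sender.
    return "spam" if spam_score(message) > THRESHOLD else "ham"


print(classify("You are a winner! Claim your free prize, urgent!"))  # spam
print(classify("Free coffee in the kitchen"))                        # ham (exactly at threshold)
```

None of these choices is visible to the person whose mail is filtered; only by opening the code – by hacking it – can the presumptions be inspected and contested.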
All three judgements are mostly made implicitly, so for many applications the only way to understand these presumptions is to “open the black box” – to HACK them.
Given all that, I would like to demand three points of action:
– to the developers: keep as many options open as possible and give others a chance to change the pre-sets (and customers: insist on this when you commission the programming of applications);
– to the educational systems: teach people to hack, to become curious about what lies behind things;
– to our legislative bodies: make hacking things legal. Don’t let copyright, DRM and the like be used against people who re-engineer things. Only what gets hacked gets tested. Let us have sovereignty over the things we have to deal with, let us shape our surroundings according to our ethics.