UnBias: Emancipating users against algorithmic biases
This blog post is part of a series written by the award holders of the EPSRC DE Theme TIPS call, which supports user-driven and interdisciplinary research on problems in trust, identity, privacy and security.
In an age of ubiquitous data collection, analysis and processing, how can citizens judge the trustworthiness and fairness of systems that rely heavily on algorithms? News feeds, search engine results and product recommendations increasingly use personalisation algorithms to help us cut through the mountains of available information and find the bits that are most relevant. How can we know whether the information we get really is the best match for our interests?
There is no such thing as a neutral algorithm. As anyone who has ever created something knows, even something as simple as a meal, the act of creating inevitably involves choices that affect the properties of the final product. Despite this truism, recommendations and selections made by algorithms are commonly presented to consumers as if they are inherently free from (human) bias and ‘fair’ because the decisions are ‘based on data’.
During the recent controversy about possible political bias in Facebook’s Trending Topics, the focus was almost exclusively on the role of the human editors, even though 95% or more of the news selection process is done by algorithms. Human judgements, however, are ultimately also based on data.
If there is anything that makes an algorithm-based system more trustworthy than a human-based one, it cannot simply be the use of data alone; rather, it comes down to auditability. An algorithm is a piece of code that can be inspected and analysed. All elements that go into the decision-making process can in principle be revealed. If we know the equation, we can follow the chain of logic that leads from the inputs to the output.
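To make this idea of auditability concrete, here is a minimal sketch (our own illustration, not code from any real recommendation system): a hypothetical relevance-scoring rule whose weights and inputs are all explicit, so an auditor can trace exactly how each input contributes to the final score. The feature names and weights are invented for the example.

```python
# Hypothetical, deliberately simple relevance-scoring algorithm.
# Because the weights and inputs are explicit, anyone inspecting the
# code can follow the chain of logic from inputs to output.

WEIGHTS = {
    "keyword_match": 0.5,   # how well the item matches the query
    "recency": 0.3,         # how recently the item was published
    "popularity": 0.2,      # how often other users engaged with it
}

def score(item: dict) -> float:
    """Weighted sum of the item's features; fully inspectable."""
    return sum(WEIGHTS[f] * item[f] for f in WEIGHTS)

def explain(item: dict) -> dict:
    """Break the score down into per-feature contributions,
    so the reasoning behind the decision can be shown to the user."""
    return {f: WEIGHTS[f] * item[f] for f in WEIGHTS}

article = {"keyword_match": 0.9, "recency": 0.4, "popularity": 0.1}
print(score(article))    # ≈ 0.59
print(explain(article))  # per-feature breakdown of that score
```

Real personalisation systems are, of course, vastly more complex than this three-term sum, and, as discussed below, that complexity is precisely what makes this kind of end-to-end traceability hard to achieve in practice.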
Moreover, all the inputs that are used in the process are in principle identifiable. Clearly, this trustworthiness can only be extended to the subjects of the algorithm’s decision-making if there is transparency. This reasoning is at the heart of legal protections such as Principle 6 of the Data Protection Act: “The right of subject access allows an individual access to information about the reasoning behind any decisions taken by automated means”.
The reality of the user experience, however, is often far removed from such transparency. When using online services, users are generally given next to no information about the algorithms, or even about the data that is used. They are instead expected to blindly trust the service provider. In part this is due to commercial interests, since the algorithms are often key intellectual property. Increasingly, however, the complexity of the algorithms, which can include many hundreds of parameters and may incorporate machine-learning elements, makes it very challenging even for the designers of a system to explain why a specific conclusion was reached.
Furthermore, for transparency to be meaningful it must provide an interpretable understanding of the decision process, not pages upon pages of code or equations accessible only to a handful of experts. Transparency at the level of raw code would be so opaque to most users that the current impenetrability of terms-and-conditions documents would pale in comparison.
Starting in September 2016, the RCUK Digital Economy Theme-funded project “UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy” will look at all of the issues above in much greater detail. A large part of this work will involve user group studies to understand the concerns and perspectives of citizens. UnBias aims to provide policy recommendations, ethical guidelines and a ‘fairness toolkit’, co-produced with young people and other stakeholders. This will include educational materials and resources to support young people’s understanding of online environments, as well as raising awareness among online providers about the concerns and rights of young internet users. The project matters for young people, and for society as a whole, in ensuring that trust and transparency are not missing from the internet. The results will be widely disseminated to a variety of audiences, ranging from academic peer-reviewed journals to community groups of interest such as secondary schools and youth clubs.
- Derek McAuley
- Tom Rodden
- Ansgar Koene
- Elvira Perez Vallejos
- Marina Jirotka
- Helena Webb
- Michael Rovatsos
Ansgar Koene is a Senior Research Fellow at the University of Nottingham’s Horizon Digital Economy Research Institute and a Co-I on the UnBias project, where he focuses on the policy implications of algorithm-mediated information flows. Check out the UnBias project website and follow Ansgar on Twitter at @. Don’t forget to follow Digital Catapult too @DigiCatapult.