Join the discussion - trust, technology and truth-claims

The University's new Trust & Technology Initiative is setting the agenda on issues of trust and technology, and driving the creation of new technologies and applications.

The Initiative brings together and drives forward interdisciplinary research exploring the dynamics of trust and distrust in relation to internet technologies, society and power. It aims to better inform trustworthy design and the governance of next-generation tech at the research and development stage, using research from Cambridge and beyond.

To mark the launch of the Trust & Technology Initiative late last month, researchers across Cambridge shared their perspectives on the subject. Now, we would like to hear your thoughts about truth-claims in the digital age.

Start by reading this short article by Dr Ella McPherson of the Department of Sociology and then tell us how you personally evaluate the trustworthiness and credibility of online content that you are reading.

Trust, technology and truth-claims

Dr Ella McPherson

My research focuses on the production, evaluation and contestation of truth-claims in the digital age, and my path into this tangled topic is the empirical case of human rights fact-finding. Because it is so high-risk and so contested, this practice is a canary in the coalmine for wider professions and publics struggling to get to grips with the new information order. Indeed, human rights practitioners working with digital evidence were sounding alarm bells about fake news well before the problem became mainstream and are at the cutting edge of verification methodologies. A concern with trust (and with the associated concepts of trustworthiness and credibility) is at the centre of their work – and thus it is at the centre of mine.

This concern has many dimensions, but I would like to highlight two here that are particularly relevant to the launch of our new and exciting Trust and Technology Initiative. First, we should reflect on the methods we use to evaluate and establish trustworthiness and credibility. As we increasingly encounter unknown sources of information in our hyper-mediated world, we increasingly need to use these methods. Verification is, however, resource-intensive; it requires time and knowledge. Technologists have therefore been seeking and implementing ways of building credibility and trustworthiness cues into information and communication technologies (ICTs). These practices have significant implications for inequalities in our societies, a second key concern of my research – yet we are so caught up in protecting ourselves from bad intentions and deceptions that we often overlook these implications.

I often use the example of Twitter’s blue verified badge to explain this: a user who has the badge has been verified by Twitter as ‘authentic’, and as a result, the badge may be used as an identity verification shortcut by fact-finders evaluating a tweet’s truth-claim. But who gets the badge? Twitter says the verified user must have ‘an account of public interest. Typically this includes accounts maintained by users in music, acting, fashion, government, politics, religion, journalism, media, sports, business, and other key interest areas.’ So it is a pretty elite (and gendered) subset who have the privilege of this shortcut to credibility. As these verification technologies proliferate, we should be mindful of whose cultural understandings of trustworthiness and credibility are built into them, who can meet these standards, who is excluded, and what the implications are for truth-claims in the public sphere.

The second dimension of the relationship between trust and technology I wish to explore briefly is how technologies interfere with, and even displace, interpersonal trust, which is often built over time through demonstrations of performance and reciprocity. Though new ICTs connect human rights fact-finders to previously inaccessible information, fact-finders still state that face-to-face interviews with witnesses are the gold standard for gathering evidence. This is in part because the information exchange between human rights fact-finder and witness depends on a mutual trust supported by being in each other’s presence. By mediating across time and place, ICTs can interfere with this trust-building – so much so that some fact-finders interviewed by The Whistle team said they eschew technology out of concern that it turns information exchange into information extraction.

Other technologies are deliberately developed to replace trust by decreasing the risks we use trust to overcome. As Onora O’Neill explains so well, we trust when we don’t have guarantees. We used to have to trust that our children would walk home safely from school – specifically, we would have to trust not only our children but also all the people they encountered on that walk. Now, we can track them in real time on our iPhones with the Find My Friends app; we can guarantee their locations, or at least the locations of their phones. The displacement of trust by technologies is of significant consequence when trust is good for the citizens of a society (which it is not always).

Because of its interdisciplinarity and its reach, the Trust and Technology Initiative is well poised to explore these dimensions as they relate to both research and practice. I am delighted to be a part of it!

This Cambridge perspective piece by Dr Ella McPherson of the Department of Sociology was first published on the Trust & Technology Initiative's website.

Dr Ella McPherson is a Lecturer in the Sociology of New Media and Digital Technology, the Anthony L Lyster Fellow of Queens' College and Co-Director of the Centre of Governance and Human Rights. She leads The Whistle, an academic start-up that supports human rights reporting in the digital age.

Get involved

How do you evaluate the trustworthiness and credibility of online content that you're reading? Tell us by joining the discussion in the official Cambridge Alumni LinkedIn group.

You will need to be a member of the University's official Cambridge Alumni LinkedIn group to take part. If you're not already one of the more than 15,000 members and would like to become one, please log in to LinkedIn, go to the group and select 'Request to join'. We try to expedite these requests, but please bear with us – it may take up to seven days at times of high demand.

The Trust & Technology Initiative

The Trust & Technology Initiative is unique in considering the interplays and feedback loops between technology fundamentals, societal impact and the governance of next-generation systems as those systems are developed. Its particular strength – connecting cutting-edge deep technology with social science and humanities expertise – enables it to explore emergent use cases and to envisage and experiment with realistic future scenarios. It promotes informed, critical and engaging voices that support individuals, communities and institutions in light of technology’s increasing pervasiveness in societies.

At the heart of this multifaceted topic, the newly established Initiative is a strategic research network and a ‘big tent’, bringing people together, facilitating collaboration and engaging industry, civil society, government and the public.

Find out more about the Initiative