Please find below the publications and papers on privacy and online harms produced by the REPHRAIN team.
March 2024
Anticipating the Use of Robots in Domestic Abuse: A Typology of Robot Facilitated Abuse to Support Risk Assessment and Mitigation in Human-Robot Interaction
Prepared by Katie Winkle and Natasha Mulvihill.
Abstract: Domestic abuse research demonstrates that perpetrators are agile in finding new ways to coerce and to consolidate their control. They may leverage loved ones or cherished objects, and are increasingly exploiting and subverting what have become everyday ‘smart’ technologies. Robots sit at the intersection of these categories: they bring together multiple digital and assistive functionalities in a physical body, often explicitly designed to take on a social companionship role. We present a typology of robot facilitated abuse based on these unique affordances, designed to support systematic risk assessment, mitigation and design work. Whilst most obviously relevant to those designing robots for in-home deployment or intrafamilial interactions, the ability to coerce can be wielded by those who have any form of social power, such that our typology and associated design reflections may also be salient for the design of robots to be used in the school or workplace, between carers and the vulnerable, elderly and disabled and/or in institutions which facilitate intimate relations of care.
A copy of this paper can be found here.
January 2024
Harm, Injustice & Technology: Reflections on the UK’s subpostmasters’ case
Prepared by Michael McGuire and Karen Renaud.
Abstract: One of the more striking recent miscarriages of justice was perpetrated by the UK’s Post Office when subpostmasters and subpostmistresses were prosecuted for fraud that actually arose from malfunctioning software. Over 700 were victimised, losing homes and livelihoods. We first use a zemiological lens to examine the harms caused by these events at both a first- and second-order range – referred to as ‘ripples’. Yet the zemiological analysis, while useful in identifying the personal harms suffered by postmasters, is less successful in capturing some of the wider costs – especially to the justice system itself. Additional tools are required for identifying how technology might be culpable in the damage that unfolded. We use a technological injustice lens to augment the zemiological analysis, revealing how and why technology can harm, especially when appropriate checks and balances are missing and a naïve belief in the infallibility of technological solutions prevails.
A copy of this paper can be found here.
September 2023
Cashing in on Contacts: Characterizing the OnlyFans Ecosystem
Prepared by Pelayo Vallina, Ignacio Castro and Gareth Tyson.
Abstract: Adult video-sharing has undergone dramatic shifts. New platforms that directly interconnect (often amateur) producers and consumers now allow content creators to promote material across the web and directly monetize the content they produce. OnlyFans, the most prominent example of this new trend, is a content subscription service where creators earn money from users who subscribe to their material. In contrast to prior adult platforms, OnlyFans emphasizes creator-consumer interaction for audience accumulation and maintenance. This results in a wide cross-platform ecosystem geared towards bringing consumers to creators’ accounts. In this paper, we inspect this emerging ecosystem, focusing on content creators and the third-party platforms they connect to.
A copy of this paper can be found here.
November 2022
On the privacy of mental health apps: An empirical investigation and its implications for app development
Prepared by Leonardo Horn Iwaya, M. Ali Babar, Awais Rashid and Chamila Wijayarathna.
Abstract: An increasing number of mental health services are now offered through mobile health (mHealth) systems, such as mobile applications (apps). Although there is unprecedented growth in the adoption of mental health services, partly due to the COVID-19 pandemic, concerns about data privacy risks due to security breaches are also increasing. Whilst some studies have analyzed mHealth apps from different angles, including security, there is relatively little evidence for data privacy issues that may exist in mHealth apps used for mental health services, whose recipients can be particularly vulnerable. This paper reports an empirical study aimed at systematically identifying and understanding how data privacy is incorporated in mental health apps. We analyzed 27 top-ranked mental health apps from the Google Play Store. Our methodology enabled us to perform an in-depth privacy analysis of the apps, covering static and dynamic analysis, data sharing behaviour, server-side tests, privacy impact assessment requests, and privacy policy evaluation. Furthermore, we mapped the findings to the LINDDUN threat taxonomy, describing how threats manifest in the studied apps. The findings reveal important data privacy issues such as unnecessary permissions, insecure cryptography implementations, and leaks of personal data and credentials in logs and web requests. There is also a high risk of user profiling, as the apps do not provide foolproof mechanisms against linkability, detectability and identifiability. Data sharing with third parties and advertisers in the current app ecosystem aggravates this situation. Based on the empirical findings of this study, we provide recommendations to be considered by the different stakeholders of mHealth apps in general and app developers in particular. We conclude that while developers ought to be more knowledgeable in considering and addressing privacy issues, users and health professionals can also play a role by demanding privacy-friendly apps.
A copy of this paper can be found here.
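To give a concrete flavour of the kind of static analysis the study describes, the sketch below is a minimal illustration rather than the authors’ actual pipeline: it uses the androguard library to list the permissions an Android app’s manifest requests and flags a hand-picked subset of Android’s “dangerous” permissions. The file name app.apk and the permission subset are assumptions made purely for illustration.

```python
# Minimal permission-audit sketch (illustrative, not the study's pipeline).
# Assumes androguard 3.x is installed (pip install androguard) and that
# app.apk is a local APK file; the file name is hypothetical.
from androguard.core.bytecodes.apk import APK

# Hand-picked subset of Android's "dangerous" permission group,
# chosen here purely for illustration.
DANGEROUS = {
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.RECORD_AUDIO",
    "android.permission.CAMERA",
    "android.permission.READ_SMS",
}

def audit_permissions(apk_path: str) -> None:
    """Print every permission the app's manifest requests, flagging risky ones."""
    apk = APK(apk_path)
    for perm in sorted(apk.get_permissions()):
        marker = "DANGEROUS" if perm in DANGEROUS else "normal"
        print(f"{marker:9s} {perm}")

if __name__ == "__main__":
    audit_permissions("app.apk")
```

A flagged permission is only a starting point: a permission is “unnecessary” relative to what the app actually needs to do, so each flag would still have to be checked against the app’s stated functionality.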
June 2022
An Investigation Into the Sensitivity of Personal Information and Implications for Disclosure: A UK Perspective
Prepared by Rahime Belen-Saglam, Jason Nurse and Duncan Hodges.
Abstract: The perceived sensitivity of information is a crucial factor in both security and privacy concerns and in the behaviors of individuals. Furthermore, such perceptions influence how people disclose and share information with others. We study this topic using an online questionnaire in which a representative sample of 491 British citizens rated the sensitivity of different data items in a variety of scenarios. The sensitivity evaluations revealed in this study are compared to prior results from the US, Brazil and Germany, allowing us to examine the impact of culture. In addition to discovering similarities across cultures, we also identify new factors overlooked in current research, including concerns about reactions from others, personal safety or mental health and, finally, the consequences of disclosure for others. We also highlight a difference between the regulatory perspective and the citizen perspective on information sensitivity. We then operationalized this understanding within several example use-cases exploring disclosures in the healthcare and finance industries, two areas where security is paramount. We explored disclosures made through two different interaction means: directly to a human, or mediated by a chatbot (given that an increasing amount of personal data is shared with such agents in industry). We also explored the effect of anonymity in these contexts. Participants showed a significant reluctance to disclose information they considered “irrelevant” or “out of context”, regardless of other factors such as the interaction means or anonymity. We also observed that chatbots proved detrimental to eliciting sensitive disclosures in the healthcare domain; within the finance domain, however, there was less of an effect. This article’s findings provide new insights for those developing online systems intended to elicit sensitive personal information from users.
A copy of this paper can be found here.
May 2022
Personal information: Perceptions, types and evolution
Prepared by Rahime Belen-Saglam, Jason Nurse and Duncan Hodges.
Abstract: Advances in technology have made us, as a society, think more about cyber security and privacy, particularly how we consider and protect personal information. Such developments have introduced a temporal dimension to the definition of personal information, and we have also witnessed new types of data emerging (e.g., phone sensor data, stress level measurements). These rapid technological changes introduce several challenges, as legislation is often inadequate, and questions therefore regularly arise pertaining to whether information should be considered personal or sensitive and thereby better protected. In this paper, we therefore look to significantly advance research in this domain by investigating how personal information is regarded in governmental legislation/regulations, the privacy policies of applications, and academic research articles. Through an assessment of how personal information has evolved and is perceived differently (e.g., in the context of sensitivity) across these key stakeholders, this work contributes to the understanding of the fundamental disconnects present and also the social implications of new technologies. Furthermore, we introduce a series of novel taxonomies of personal information which can significantly support and help guide how researchers and practitioners work with, or develop tools to protect, such information.
A copy of this paper can be found here.
January 2022
Nothing to Be Happy About: Consumer Emotions and AI
Prepared by Mateja Durovic and Jonathan Watson.
Abstract: Advancements in artificial intelligence and Big Data allow a range of goods and services to determine and respond to a consumer’s emotional state of mind. Considerable potential surrounds the technological ability to detect and respond to an individual’s emotions, yet such technology is also controversial and raises questions surrounding the legal protection of emotions. Despite the highly sensitive and private nature of emotions, this article highlights their inadequate protection in aspects of data protection and consumer protection law, arguing that the contribution made by the recent proposal for an Artificial Intelligence Act is not only unsuitable to overcome such deficits but also does little to support the assertion that emotions are highly sensitive.
Keywords: AI; consumer law; new technologies; regulation; emotions; EU Law
A copy of this paper can be found here.
January 2021
Manipulation and liability to defensive harm
Prepared by Massimo Renzo.
Abstract: Philosophers working on the morality of harm have paid surprisingly little attention to the problem of manipulation. The aim of this paper is to remedy this lacuna by exploring how liability to defensive harm is affected by the fact that someone posing an unjust threat has been manipulated into doing so. In addressing this problem, the challenge is to answer the following question: Why should it be the case (if it is, indeed, the case) that being misled into posing an unjust threat by manipulation makes a difference to one’s liability, as compared to being misled into doing so by natural events or by someone’s honest attempt to persuade us? To answer this question, I first outline an account of manipulation and then use it to defend what I shall call the “Pre-emption Principle”.
A copy of this paper can be found here.