With users reporting high levels of privacy problems connected to the sharing of photo and video content online, and with image scraping and facial recognition technology becoming more widespread, an effective, easy-to-use solution is needed.
Mauro Cherubini is a professor of information systems. His research focuses on the use of results from psychology and behavioural economics for the design of mobile interfaces.
While incidents of revenge porn may capture media headlines, online image-related privacy issues run far wider than this type of malicious activity. In a world where software programs trawl the internet extracting image data for companies to use for commercial purposes, and where artificial intelligence can identify individuals from facial features, everyone who appears in a video or photo is a potential victim. Whether it is cyberstalking, cyberbullying, identity theft or discrimination, there is a degree of risk involved as soon as our image is posted online. A recent study shows, for example, that employers frequently view candidates’ profiles on social media. Understandably, then, calls for more personal control over how and where our image appears online are growing more insistent.
The Universal Declaration of Human Rights gives everyone the right to privacy and the protection of the law against interference with that privacy. In most developed economies, there is usually some form of legislative framework that covers online privacy issues. However, the protection provided to individuals in multiparty privacy conflicts (MPCs) – the uploading of a person’s image without their consent – varies considerably (it is often directed primarily at severe and malicious cases, for example). Given the millions of photos and videos posted online every day, tackling MPCs successfully requires technical solutions to support any legislation.
The problem is widespread: a survey of 1,792 US social media users by Prof. Mauro Cherubini, Prof. Kévin Huguenin and Research Associate Dr Kavous Salehzadeh Niksirat of HEC Lausanne (UNIL), together with their research colleagues, reveals that some 85% of users have shared content online in which other people appeared. In the 12 months prior to the study, 16% of users had caused an MPC as an uploader, almost a quarter had been victims of an MPC, and some 7% had experienced serious consequences, such as public shaming, discrimination or revenge porn.
It is the seriousness and scale of the problem that prompted Prof. Cherubini, Prof. Huguenin and Dr Niksirat to search for a practicable solution to MPCs that could be implemented by online service providers. Any solution had to be able to prevent MPCs and safeguard the privacy of individuals (data subjects) whose image appeared in content shared on a social media platform by someone else (the uploader). It also had to do so in a way that was effective, relatively simple and easy to use, and generally acceptable to users (in particular, the data subjects).
This prompted the authors to assess users’ preferences across a variety of privacy protection mechanisms, leading to the development of three so-called precautionary mechanisms. FaceBlock is a privacy mechanism in which users verify their identity to the content-sharing service provider using a selfie and documented ID. When content is uploaded, the face of everyone other than the uploader is blurred in the photo unless they give consent after being alerted by automatic notification. ConsenShare is similar to FaceBlock and is based on earlier research, but its initial identity-management stage is performed by a third-party service, separate from the content-sharing platform. (ConsenShare has been covered in a separate HECimpact article.) The third mechanism, Negotiation, like FaceBlock, automatically flags content to data subjects, allowing each of them to choose from four settings – ‘private’, ‘only friends in common’, ‘only friends’ and ‘public’ – after which the responses are combined and the most restrictive setting is applied overall.
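The final step of the Negotiation mechanism – combining the data subjects’ responses and applying the most restrictive – can be sketched as follows. This is a minimal illustration based on the description above, not the authors’ implementation; the function names and the default for non-responding subjects are assumptions.

```python
# Sketch of Negotiation's combination step: each data subject picks a
# visibility setting, and the platform applies the most restrictive one.
# Setting names and their ordering follow the article; everything else
# (function names, defaults) is illustrative.

# Settings ordered from most to least restrictive.
SETTINGS = ["private", "only friends in common", "only friends", "public"]
RANK = {s: i for i, s in enumerate(SETTINGS)}

def combine_responses(responses):
    """Return the most restrictive setting chosen by any data subject.

    `responses` maps each data subject to their chosen setting. If nobody
    responds, a cautious design might default to 'private' (an assumption,
    not something the article specifies).
    """
    if not responses:
        return "private"
    return min(responses.values(), key=lambda s: RANK[s])

choices = {"alice": "public", "bob": "only friends", "carol": "public"}
print(combine_responses(choices))  # -> only friends
```

Ordering the settings once and taking the minimum rank keeps the rule trivially auditable: one person’s stricter choice always wins over everyone else’s looser ones.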
To further test the suitability and practicability of their approach, the researchers also sought the views of the users of online social media (broadly defined to include instant messaging apps and multimedia sharing websites), both uploaders and data subjects. In particular, they asked users about two main types of strategy for dealing with MPCs:
a) precautionary mechanisms: i.e. mechanisms that use technology (algorithms and AI) to automate the solution and give control to the people shown in the image. The US social media users included in the study were asked about the FaceBlock, ConsenShare and Negotiation solutions described above.
b) dissuasive mechanisms, i.e. measures that use behavioural science techniques to discourage uploaders from sharing content without obtaining consent. Often known as ‘nudges’, these types of intervention have proved a cost-effective, user-friendly and relatively successful means of achieving public policy aims. With MPCs, the aim is to show the uploader a ‘warning’ message before they upload content, in an attempt to get them to reflect on the consequences of sharing it. Ideally, this prompts them to seek consent before uploading content and dissuades them from doing so without it. Of the five on-screen dissuasive messages shown to users, four were designed to use fear and/or shame to deter people from sharing pictures without consent. These threatened, for example, to immediately notify the uploader’s contacts of an offence, or warned of the possibility of fines and legal action. A fifth message attempted to get uploaders to empathise with potential victims, reminding them that other people might object to the content being shared, even if the uploader considered it acceptable.
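A dissuasive mechanism of this kind can be pictured as a simple gate placed before the upload action. The sketch below is an assumption-laden illustration of that flow, not the study’s interface: the message texts paraphrase the two styles described above, and all names are hypothetical.

```python
# Illustrative sketch (not from the paper) of a dissuasive 'nudge':
# before content showing other people is posted, the uploader sees a
# warning and must confirm that consent was obtained.

WARNINGS = {
    # Paraphrases of the two message styles described in the article.
    "fear": "Sharing images of others without consent may lead to "
            "fines or legal action.",
    "empathy": "Others may object to this content being shared, "
               "even if you find it acceptable.",
}

def attempt_upload(content, has_other_subjects, confirm_consent,
                   style="empathy"):
    """Return True if the upload proceeds.

    `confirm_consent` is a callback that displays the warning text and
    returns whether the uploader confirms having obtained consent.
    """
    if not has_other_subjects:
        return True  # no one else appears, so no nudge is needed
    # Show the warning; the uploader may back out or confirm consent.
    return bool(confirm_consent(WARNINGS[style]))

# Simulated uploader who backs out after seeing the warning:
print(attempt_upload("photo.jpg", True, lambda msg: False))  # -> False
```

The callback keeps the nudge logic separate from the user interface, so the same gate could present any of the five tested messages.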
The feedback from US social media users made it clear that there is strong support for finding a way to prevent MPCs on social media platforms. Overall, the survey respondents favoured precautionary measures over dissuasive mechanisms. This appeared to be because the precautionary methods removed the uncertainty involved in deciding whether content was appropriate to share, enforced collaboration, could warn users whether non-consensual uploads were circulating, and generally provided more control and protection to social media users. While so-called precautionary measures did not restrict the ability of uploaders to share content, they did enable each person appearing in an item of shared content to vet that content’s suitability for sharing.
A practicable solution
While the majority of users favoured precautionary mechanisms, further research by the authors confirmed that there is also a role for dissuasive mechanisms in practicable MPC solutions.
The next step was to consider solutions other than the ones originally envisaged. The authors therefore involved social media users in a participatory design workshop to create solutions to MPCs. Overall, the solutions put forward were close to those tested by the researchers, and there is potential for a practicable, hybrid solution that combines elements from the various proposals.
However, another key idea emerged from the workshop, namely mediation between uploaders and potential victims, either before posting, to agree on how content can be shared, or afterwards, as a means of reconciliation. Mediation of this kind is difficult to implement if it involves human beings, however, given the vast number of cases to be dealt with on the web. The solution could therefore be a chatbot, and this will be the focus of the researchers’ future work in this field. It is possible, for example, to envisage a solution in which the uploader automatically receives a warning reminding them that they need to obtain prior consent. If the uploader persists in publishing the content without consulting the data subjects, the data subjects are informed and can either give their consent or enter into mediation with the uploader via a chatbot.
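The notify-then-mediate flow envisaged above can be summarised as a small state machine. The states and transitions below are inferred from the article’s description and are purely illustrative; a real system would hand the mediation state over to a chatbot rather than resolving it in code.

```python
# Hypothetical sketch of the warn -> notify -> consent-or-mediate flow.
# All state names and the resolve() helper are assumptions drawn from
# the prose, not the researchers' design.

from enum import Enum, auto

class State(Enum):
    WARNED = auto()         # uploader reminded to obtain prior consent
    SUBJECT_ASKED = auto()  # uploader persisted; data subjects notified
    PUBLISHED = auto()      # data subjects consented; content goes live
    MEDIATION = auto()      # chatbot-assisted negotiation between parties

def resolve(uploader_persists, subject_consents):
    """Walk the flow once and return the resulting state."""
    if not uploader_persists:
        return State.WARNED  # uploader sought consent offline or gave up
    # Data subjects are informed and asked for consent.
    if subject_consents:
        return State.PUBLISHED
    return State.MEDIATION

print(resolve(True, False).name)  # -> MEDIATION
```

Making mediation an explicit state, rather than a failure case, matches the workshop’s framing of it as a path to reconciliation rather than a dead end.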
There is no question that MPCs are a serious and growing issue for providers and users of online social media. While some details of the solutions suggested by the authors still need addressing, the good news is that they have shown that effective and convenient mechanisms to prevent MPCs are possible. The challenge is less about removing any remaining flaws in the proposed solutions than about the lack of public awareness of the problem, and whether social media platforms will agree to implement meaningful solutions.
Related research paper: Cherubini M., Salehzadeh Niksirat K., Boldi M.-O., Keopraseuth H., Such J. M., Huguenin K. (2021). When Forcing Collaboration is the Most Sensible Choice: Desirability of Precautionary and Dissuasive Mechanisms to Manage Multiparty Privacy Conflicts. Proceedings of the ACM on Human-Computer Interaction.
Featured image by: Kindel Media / Pexels.com