Privacy issues associated with the online sharing of data, from photos to genetic information, have been the subject of much discussion recently. One particular problem is how to protect the privacy of people who are featured in shared data, such as group photos, but who are not the uploaders of that data and may not even be aware that it is being shared.
Kévin Huguenin (HEC Lausanne-UNIL), Alexandra-Mihaela Olteanu (HEC Lausanne-UNIL & EPFL), and their co-researchers address this problem by creating a secure consent-based system that service providers can implement to protect people where their privacy may be affected by the online sharing of data.


In April 2018, Mark Zuckerberg, Facebook CEO and founder, gave evidence to US politicians on Capitol Hill about the way Facebook handles user data. These hearings merely served to highlight the public discussion taking place about the use of personal data by organizations that host or gather that data. Since the advent of the Internet, people have freely posted personal data online, often with little thought about the consequences. However, with the rise of social networks, billions of people are now sharing data, and there are increasing concerns about how that information is controlled and used.
Some data may seem strictly personal, but actually reveals information about others
In principle, it is broadly accepted that individuals should be able to exercise control via consent over information about their identity. However, the issue becomes more complicated when the data involved has privacy implications for people other than the uploader. This type of data is often referred to as multiple-subject personal data; examples include group photos and videos in which multiple persons appear. Equally, data that seems strictly personal but actually reveals information about others, known as interdependent personal data, presents problems as well. An example here would be genomic data, which might have implications for an uploader's relatives.
It would make sense to ensure that, whenever someone shares multiple-subject personal data (a group photo, for example), anyone who wishes to have their privacy preserved is able to do so, including people who are unaware that the data is being uploaded. Indeed, some countries have introduced laws to protect privacy and prevent online abuse of material, such as the laws that deal with revenge pornography. However, providing people with control over such material is a difficult technical challenge.
Identity protection
This is the challenge that Kévin Huguenin, Alexandra-Mihaela Olteanu, and their co-researchers address with the ConsenShare system they have developed (as detailed in their paper presented at the Network and Distributed System Security Symposium 2018 in San Diego, CA, USA).
This system is able to protect people who are unaware that their privacy is at risk of being breached
Their solution achieves a number of key objectives. Although tested with the uploading of images, it is capable of working with a variety of different types of data (audio, video, co-location, genomic). It is able to identify individuals who may be affected by disclosure of data, collect their consent before the data is disclosed, and process and share the data, all the while preserving privacy. Crucially, it is able to protect people who are unaware that their privacy is at risk of being breached, and even to protect the identity and privacy of individuals from the platform providers that host the service.
In basic terms, ConsenShare relies on two entities. The first is an Identity Management Service (IMS) that deals with identifying and contacting individuals. The second is a Content Management Service (CMS) that handles the uploading and sharing of the data, whether that is a picture or a video, as well as the consent process, including any necessary changes to the data. In the case of photos, for example, Flickr might be the CMS and Facebook the IMS. The protocols governing the communication between these two main elements ensure that the information learned by the CMS, the IMS, and the individuals involved is minimal, particularly where some of the individuals involved do not give their permission.
Take the uploading of photos. Initially, users register for the service, providing information to confirm their identity. Then, if any content is uploaded via the service, the system detects the users that are affected, other than the uploader, contacts those individuals, offers them the opportunity to provide consent, and then publishes the content with the appropriate restrictions depending on whether consent was granted or not. Users can set policies to govern the level of consent they wish to provide. Non-registered users, who are unable to give consent, have their details in the image obfuscated (e.g., their faces are blurred). By uploading a few pictures of their face to the IMS, a user can benefit from being asked for consent when pictures and videos featuring them are to be shared on the CMS.
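The consent step described above can be sketched in code. This is a minimal illustration, not the authors' actual protocol: the function names, the `Decision` values, and the callback are all hypothetical, and the real system additionally encrypts and minimizes what each party learns.

```python
from enum import Enum

class Decision(Enum):
    CONSENT = "consent"        # publish this person's face as-is
    OBFUSCATE = "obfuscate"    # blur this person's face before publishing

def collect_decisions(detected_faces, registered_users, ask_consent):
    """Collect a per-face decision for an uploaded photo.

    detected_faces: mapping face_id -> recognized user_id (or None)
    registered_users: user ids enrolled with the identity service (IMS)
    ask_consent: callback that privately contacts a registered user
                 and returns their Decision
    """
    decisions = {}
    for face_id, user_id in detected_faces.items():
        if user_id in registered_users:
            decisions[face_id] = ask_consent(user_id)
        else:
            # Unregistered (or unrecognized) people cannot consent,
            # so their faces are obfuscated by default.
            decisions[face_id] = Decision.OBFUSCATE
    return decisions

# Usage: Bob is registered and consents; Carol is not registered,
# so her face will be blurred in the published photo.
faces = {"f1": "bob", "f2": "carol"}
result = collect_decisions(faces, {"alice", "bob"},
                           lambda uid: Decision.CONSENT)
# result == {"f1": Decision.CONSENT, "f2": Decision.OBFUSCATE}
```

The key design point is the default: absence of consent is treated the same as refusal, which is what protects people who never hear about the upload.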
The system manages to detect the registered users associated with any particular content
In doing so, the system detects the registered users associated with any particular content using the appropriate technology (face recognition for photos, for example) and contacts them privately, providing only as much information as needed to make an informed decision about consent, while hiding information that requires consent from other users. It also offers a range of actions depending on the nature of the consent decision: it can remove someone from a photo, for example, or obfuscate all or part of that person. In this way, the system reconciles different consent decisions from different people relating to a single piece of content.
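Reconciling several decisions over one photo amounts to applying each person's choice only to their own region of the image. The sketch below is a hypothetical illustration of that idea (the function and action names are invented for this example, and a real implementation would operate on pixel data):

```python
def reconcile(face_regions, decisions):
    """Turn per-subject consent decisions into per-region edit actions,
    so a single photo can honor several different choices at once.

    face_regions: mapping face_id -> bounding box (x0, y0, x1, y1)
    decisions: mapping face_id -> "consent" | "obfuscate" | "remove"
    """
    actions = []
    for face_id, region in face_regions.items():
        choice = decisions.get(face_id, "obfuscate")  # no answer -> blur
        if choice == "remove":
            actions.append(("erase", region))   # cut the person out
        elif choice == "obfuscate":
            actions.append(("blur", region))    # e.g. Gaussian blur
        # "consent" needs no edit: that region is published as-is
    return actions

# Usage: one subject consents, the other asks to be removed.
regions = {"f1": (0, 0, 40, 40), "f2": (50, 0, 90, 40)}
acts = reconcile(regions, {"f1": "consent", "f2": "remove"})
# acts == [("erase", (50, 0, 90, 40))]
```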
Huguenin, Olteanu and their co-researchers designed the system to ensure that it is robust and effective in terms of security and privacy. This included, for example, checking that fake accounts cannot be created, and that malicious uploaders are not able to bypass the system. They also made sure that the system is practical in terms of the demands it places on the users’ CPU and network connection at peak use.
User demand
A survey conducted by the research team suggests a need and a demand for the ConsenShare service. The results showed that at least 60 per cent of the more than 300 participants who responded were concerned about the sharing of location, multimedia and genomic data. Ten per cent had been victims of discrimination or prejudice as a result of the posting of online content about them. Furthermore, only 40 per cent of Facebook users in the survey tagged or linked their friends' profiles, meaning those friends could be unaware of content that featured them. A third of participants had contacted friends to ask them to remove posted content. And over half of the participants (54 per cent) indicated that they would certainly use a system like ConsenShare.
As the authors note, there are many reasons for deploying the ConsenShare service. For organizations there is a strong business case: adoption may help to avoid investigation or litigation, and implementing a service that users value may well increase the user base and therefore boost revenues indirectly, while there may also be opportunities to monetize the service more directly. For policymakers, the knowledge that such solutions exist creates greater legislative choice where privacy protection is concerned.
Related research paper: Consensual and Privacy-Preserving Sharing of Multi-Subject and Interdependent Data. In Proceedings of the 25th Network and Distributed System Security Symposium (NDSS), San Diego, CA, United States, Feb. 2018. DOI: 10.14722/ndss.2018.23002
Featured image by Ldprod | Dreamstime.com