Summary of PETS 2020

I recently attended the Privacy Enhancing Technologies Symposium 2020 (PETS). PETS is one of the leading venues for privacy research, with an acceptance rate of around 23%. All accepted papers are published in the journal Proceedings on Privacy Enhancing Technologies (PoPETs).

Given the COVID-19 situation, almost all research conferences were organized online, with researchers presenting and attending virtually!

Even though online venues are less engaging and offer fewer networking opportunities, they still have advantages: if one misses a session (e.g., because of parallel sessions), it is possible to watch the recorded streams on YouTube later. Needless to say, the registration fees are much cheaper, which makes it easier for young students and researchers to attend top-tier venues.

My purpose in writing this blog post is to summarize the interesting research projects I saw at PETS.

Keynote: PRIVACY THREATS IN INTIMATE RELATIONSHIPS

I am working on the Multiparty Privacy Conflict (MPC) project, which investigates how technological interventions can deter social media users from non-consensual multimedia sharing. Such privacy conflicts mainly happen between users with close relationships, such as close friends or ex-partners. I attended an excellent keynote by Karen Levy from Cornell University. The keynote shares a similar motivation with the MPC project: how people in intimate relationships (e.g., parents, close family members, partners, friends) can threaten each other’s privacy. Given that people in intimate relationships usually share a great deal of common information, disclosure of such data can cause serious privacy risks.

The keynote speaker identified four features (motivations) explaining why privacy threats can happen in intimate relationships:

  1. Personal benefits or, most of the time, emotional reasons such as love might drive partners to disclose each other’s private information.
  2. Living in the same place, or sometimes sharing the same electronic device, can facilitate privacy breaches.
  3. Power differentials, such as financial dependence, might lead the more powerful partner to feel entitled to violate the privacy of the dependent one.
  4. Co-owning resources such as photos, videos, diaries, or secrets makes it easier to violate the other’s privacy.

The speaker provided several design implications on how to avoid such issues. The most interesting ones were:

  1. Intimate monitoring is not always bad; for parental control in particular, it may even be required.
  2. Information that is transmitted visually is riskier.
  3. Notifications of ‘changes’ to default settings are highly sensitive. Many people never change the default settings, but in an intimate relationship, changing a default setting for privacy reasons can create suspicion. Technology should therefore not broadcast setting changes among partners (a minimal sketch of this idea follows the list).
  4. The relationship between partners can change over time, so technology should change accordingly. For example, if two people break up, a social networking site such as Facebook should let them adjust their privacy settings regarding their ex-partner.
  5. IoT devices, and in particular household technologies such as smart TVs (e.g., running Netflix) or smart speakers (e.g., Amazon Echo), are usually shared among people living in the same place. Such services should let users protect their privacy through multiple accounts with a password for each user.
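
To make implication 3 concrete, here is a minimal sketch in Python of how a service could apply privacy-related setting changes silently while still announcing benign ones. All names (PRIVACY_SENSITIVE_SETTINGS, notify_connections, and so on) are hypothetical, not taken from any real platform:

```python
from dataclasses import dataclass, field

# Settings whose changes should never be broadcast to a user's
# connections; the names are illustrative, not from any real platform.
PRIVACY_SENSITIVE_SETTINGS = {"post_visibility", "location_sharing", "read_receipts"}

@dataclass
class User:
    name: str
    settings: dict = field(default_factory=dict)

def notify_connections(user, message):
    # Stand-in for a real platform's notification fan-out.
    print(f"[to {user.name}'s connections] {message}")

def apply_setting_change(user, setting, value):
    """Apply a setting change; announce it only if it is not privacy-sensitive."""
    user.settings[setting] = value
    if setting in PRIVACY_SENSITIVE_SETTINGS:
        # Applied silently: broadcasting a privacy change could create
        # suspicion in an intimate relationship (implication 3).
        return
    notify_connections(user, f"{user.name} changed their {setting}")

alice = User("Alice")
apply_setting_change(alice, "profile_photo", "beach.jpg")  # announced
apply_setting_change(alice, "location_sharing", "off")     # applied silently
```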

The content of the talk was published in the Journal of Cybersecurity. (for more details)

---

Besides the keynote talk, I also noted several interesting studies:

PRIVACY AT A GLANCE: The User-Centric Design of Data Exposure Visualizations for an Awareness-Raising Screensaver

Daricia Wilkinson (Clemson University), Paritosh Bahirat (Clemson University), Moses Namara (Clemson University), Jing Lyu (Clemson University), Arwa Alsubhi (Clemson University), Jessica Qiu (Clemson University), Pamela J. Wisniewski (University of Central Florida), and Bart Knijnenburg (Clemson University)

This paper studies how the visual granularity of information depicted on smartphones influences users’ perception of utility, and how obfuscating such data can help protect privacy. The authors studied four levels of granularity (low, medium, high, very high) in the smartphone visual design.

They found that moderate granularity offers better glanceability, meaning that users can perceive the information quickly and with minimal cognitive effort. On the other hand, high granularity is better for comprehension, i.e., a deeper understanding of the data. As a takeaway, less granular data can deliver information that is easier to consume. Thus, showing less information in a smartphone visual design can support utility while also benefiting privacy; for example, users can hide their information from adversaries who shoulder surf! (for more details)
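
To make the granularity idea concrete, here is a minimal sketch in Python that renders the same data-exposure fact at four levels of detail. The levels, wording, and example fields are my own assumptions, not the paper’s actual screensaver design:

```python
# Hypothetical sketch: render a data-exposure fact at different
# granularity levels, from a coarse glanceable summary to full detail.

def render_exposure(app, data_type, destination, granularity):
    """Return a one-line visualization of an app's data exposure."""
    if granularity == "low":
        return f"{app} shared some data"
    if granularity == "medium":
        return f"{app} shared your {data_type}"
    if granularity == "high":
        return f"{app} shared your {data_type} with {destination}"
    # "very high": most detail, best for comprehension but least glanceable
    return f"{app} shared your {data_type} with {destination} (third-party tracker)"

for level in ("low", "medium", "high", "very high"):
    print(f"{level:>9}: {render_exposure('MapsApp', 'location', 'ads.example.com', level)}")
```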

WHEN SPEAKERS ARE ALL EARS: Characterizing Misactivations of IoT Smart Speakers

Daniel J. Dubois (Northeastern University), Roman Kolcun (Imperial College London), Anna Maria Mandalari (Imperial College London), Muhammad Talha Paracha (Northeastern University), David Choffnes (Northeastern University), and Hamed Haddadi (Imperial College London)

Smart speakers such as Amazon Alexa can be misactivated by mispronunciations or similar-sounding words. For example, calling out to a friend named Alex might misactivate the device (which mishears it as ‘Alexa’), causing it to record part of a private conversation. This paper studied how often such incidents happen, how they differ across devices (e.g., Alexa vs. Siri) and accents (e.g., UK vs. US English), and which words can lead to misactivations.

As a Human-Computer Interaction (HCI) researcher who usually relies on human subjects to evaluate technology, it was interesting for me to see how such research questions could be addressed without involving users in the experiment.

The authors placed different smart speakers inside a small room and played popular TV shows for several hours. They observed internet traffic (through network and cloud analysis) and misactivations (through camera observation). In sum, the authors found that misactivations happen frequently and can last a long time, but they found no evidence of deliberate or malicious misactivations. (for more details)
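
As a rough illustration of the network-analysis part of this methodology, here is a minimal sketch in Python that flags possible activations as sustained bursts of upstream traffic from a speaker’s IP address. The thresholds and the toy trace are assumptions; the actual study combined network and cloud analysis with camera observation:

```python
# Hypothetical sketch: flag possible smart-speaker activations as bursts
# of upstream traffic. Thresholds and the trace below are illustrative.

IDLE_BYTES_PER_SEC = 200   # assumed background chatter (keep-alives, etc.)
BURST_THRESHOLD = 5_000    # assumed bytes/sec suggesting an audio upload
MIN_BURST_SECONDS = 2      # ignore one-off spikes

def find_activations(samples):
    """samples: list of (second, upstream_bytes) from one speaker's IP.
    Returns (start, end) windows of sustained high upstream traffic."""
    bursts, start = [], None
    for second, upstream in samples:
        if upstream >= BURST_THRESHOLD:
            start = second if start is None else start
        else:
            if start is not None and second - start >= MIN_BURST_SECONDS:
                bursts.append((start, second))
            start = None
    if start is not None:
        bursts.append((start, samples[-1][0] + 1))
    return bursts

# Toy trace: idle traffic with one burst around seconds 10-14.
trace = [(t, IDLE_BYTES_PER_SEC) for t in range(10)] \
      + [(t, 8_000) for t in range(10, 14)] \
      + [(t, IDLE_BYTES_PER_SEC) for t in range(14, 20)]
print(find_activations(trace))  # -> [(10, 14)]
```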

THE PRICE IS (NOT) RIGHT: Comparing Privacy in Free and Paid Apps

Catherine Han (University of California, Berkeley), Irwin Reyes (Two Six Labs / International Computer Science Institute), Álvaro Feal (IMDEA Networks Institute / Universidad Carlos III de Madrid), Joel Reardon (University of Calgary / AppCensus, Inc.), Primal Wijesekera (International Computer Science Institute / University of California, Berkeley), Narseo Vallina-Rodriguez (IMDEA Networks Institute / International Computer Science Institute / AppCensus, Inc.), Amit Elazari (University of California, Berkeley), Kenneth A. Bamberger (University of California, Berkeley), and Serge Egelman (International Computer Science Institute / University of California, Berkeley / AppCensus, Inc.)

What do you expect from a service provider when you pay for a mobile app? The answer might be: “If I pay for an app, I should not see any advertisements, and I expect the service provider to better preserve my privacy.”

The paper surveyed 1,000 respondents, asking them to choose which app (e.g., Facebook) they would install: a free version with advertisements, or a paid version (0.99 USD) without advertisements. 40% of respondents preferred the paid version. The main reason, as expected, was removing the advertisements, followed by access to better features. 30% of respondents believed their data would be treated differently, for example with respect to tracking; in short, they thought the paid version would better protect their data and privacy.

But what about reality? The study collected over 5,000 apps on Google Play that have both free and paid versions and then scrutinized their features and privacy policies. Surprisingly, in most cases the paid and free versions treat user privacy similarly, and for 4% of the apps, the paid version even contains advertisement libraries that do not exist in the free version. Thus, “paying for privacy” is indeed a misconception among users (for more details).
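
As a rough illustration of this kind of pairwise comparison, here is a minimal sketch in Python that checks which third-party libraries a paid app removes, keeps, or even adds relative to its free counterpart. The package names are made up; the actual study extracted libraries from real APKs on Google Play:

```python
# Hypothetical sketch: compare third-party libraries detected in the free
# and paid versions of the same app. Package names are made up.

def compare_versions(free_libs, paid_libs):
    """Return which libraries the paid version removed, kept, or added."""
    free_libs, paid_libs = set(free_libs), set(paid_libs)
    return {
        "removed_in_paid": sorted(free_libs - paid_libs),
        "shared": sorted(free_libs & paid_libs),
        "added_in_paid": sorted(paid_libs - free_libs),  # the surprising 4% case
    }

free = ["com.ads.networkA", "com.analytics.trackerB"]
paid = ["com.analytics.trackerB", "com.ads.networkC"]  # still tracks, new ad lib!
for category, libs in compare_versions(free, paid).items():
    print(f"{category}: {libs}")
```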

Author: Kavous SALEHZADEH NIKSIRAT

© Thumbnail photo by Dayne Topkin on Unsplash