BEST PRESENTATION AWARD
Compliance as baseline, or striving for more? How Privacy Engineers Work and Use Privacy Standards,
by Zachary Kilhoffer, Devyn Wilder and Masooda Bashir.

Proceedings

July 8th

09:00-09:10 Welcome, introductions and opening remarks
09:10-10:20 Session 1. Chair: Isabel Wagner
-Context-Aware Detection of Personal Data with LLMs, Invited Talk by Stefano Bennati and Elena Vidyakina (HERE). Building a high-quality global digital map requires processing multiple data types from various sources. With more providers and data types, the risk increases that personal data inadvertently enters the map unnoticed. We built a suite of AI-based tools to identify personal data during ingestion and remove it before it gets into the map. This talk will cover our work on fine-tuning an LLM to classify personal and business names, e.g. the business owner's name being mistakenly reported as the business's name. This classification is especially tricky because it heavily depends on context, which might not be available during analysis, e.g. “Baker” could refer to a surname or a profession.
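As a rough illustration of the context-dependent classification this talk describes (not HERE's actual pipeline; the prompt wording and the stubbed model call are assumptions made for the sketch), a fine-tuned LLM might be invoked per record with whatever map context is available:

```python
# Minimal sketch of context-aware name classification, assuming a
# fine-tuned LLM reachable behind a classify call. The model is stubbed
# with a trivial heuristic so the example runs standalone.

def build_prompt(name: str, context: dict) -> str:
    """Pack the name and any available map context into one prompt."""
    ctx = ", ".join(f"{k}={v}" for k, v in context.items()) or "none"
    return (
        f"Classify the following map attribute as PERSONAL or BUSINESS.\n"
        f"Name: {name}\nContext: {ctx}\nLabel:"
    )

def llm_classify(prompt: str) -> str:
    """Stub for the fine-tuned LLM; a real system would call the model."""
    # Placeholder heuristic: business-type context outweighs the bare name.
    return "BUSINESS" if "category=" in prompt else "PERSONAL"

record = {"name": "Baker", "context": {"category": "bakery"}}
label = llm_classify(build_prompt(record["name"], record["context"]))
print(label)  # "Baker" attached to a bakery POI is likely a business name
```

The same name with no context would fall back to "PERSONAL", which is exactly the ambiguity the talk highlights.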
-SPIDER: Interplay Assessment Method for Privacy and Other Values by Zoltan Adam Mann, Jonathan Petit, Sarah Thornton, Michael Buchholz and Jason Millar. This paper proposes SPIDER, a methodology for the systematic assessment of the interplay between privacy and other values. With SPIDER, system designers can investigate, quantify, and visualize the type (positive/neutral/negative) and strength of the interplay between privacy and other values, from different stakeholders’ points of view. This helps identify areas where further improvement of the design is needed to resolve tensions between privacy and other values.
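The paper's method is richer than this, but the core artifact can be pictured as an interplay matrix over stakeholders and values; the stakeholder and value names below are invented for illustration:

```python
# Sketch of a SPIDER-style interplay matrix: for each stakeholder, record
# the type (+1 positive, 0 neutral, -1 negative) and strength (0-3) of the
# interplay between privacy and another value. Names are illustrative.

interplay = {
    ("passenger", "safety"):       (-1, 2),  # privacy measure weakens a safety feature
    ("passenger", "transparency"): (+1, 1),  # privacy measure supports transparency
    ("operator",  "safety"):       (-1, 3),
    ("operator",  "cost"):         (0, 0),
}

# Flag the tensions that most need design attention: strong negative interplay.
for (stakeholder, value), (kind, strength) in interplay.items():
    if kind < 0 and strength >= 2:
        print(f"tension: privacy vs {value} for {stakeholder} (strength {strength})")
```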
10:20-10:35 Coffee Break
10:35-12:35 Session 2. Chair: Meiko Jensen
-That’s rather inappropriate, dear: the challenges of determining ‘appropriate’ measures for personal data deletion, Industry Talk by Cat Easdon (Dynatrace). Data protection laws require us to take ‘appropriate’ and ‘reasonable’ technical and organizational measures. But who decides what’s appropriate? Legal teams, regulators, and market best practices can all provide guidance, but privacy engineering professionals are still faced with tough choices when weighing up potential risks to data subjects against competing concerns such as security and business agility. In this talk, we’ll explore these tough choices in two case studies of personal data deletion. How far should you go with deletion – far enough that you violate other compliance requirements? Far enough that not even a trace of indirectly identifiable data is left? Come down the deletion rabbit hole with us, and let’s start a discussion about what degree of deletion really is appropriate.
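To make the "degrees of deletion" in this talk concrete (a toy sketch under assumed record layouts, not Dynatrace's approach), consider the difference between hard-deleting a user's record and merely pseudonymizing it, which may satisfy other compliance needs but can leave indirectly identifiable traces behind:

```python
# Toy sketch contrasting two "degrees" of personal data deletion.
# A hard delete removes the record entirely; pseudonymization keeps the
# row for other purposes but may leave quasi-identifiers behind.
import hashlib

users = {"u42": {"name": "Ada Lovelace", "city": "London", "plan": "pro"}}

def hard_delete(user_id: str) -> None:
    users.pop(user_id, None)  # nothing left, but audit trails may break

def pseudonymize(user_id: str) -> None:
    record = users[user_id]
    record["name"] = hashlib.sha256(record["name"].encode()).hexdigest()[:12]
    # "city" and "plan" survive: combined, they may still identify someone.

pseudonymize("u42")
print(users)  # the name is gone, but indirectly identifying data remains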
-Compliance as baseline, or striving for more? How Privacy Engineers Work and Use Privacy Standards by Zachary Kilhoffer, Devyn Wilder and Masooda Bashir. This paper presents the results of an interview study with privacy engineers, to better understand the work they do and how privacy standards fit in.
-Data Retention Disclosures in the Google Play Store: Opacity Remains the Norm by David Rodríguez Torrado, Celia Fernández-Aller, José M. Del Álamo and Norman Sadeh. This paper presents an evaluation of Android applications’ privacy policies, focusing on how they articulate and disclose data retention periods.
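A drastically simplified version of such a disclosure check (the regular expressions and sample policies are assumptions; the paper's methodology is far more thorough) might scan policy text for concrete versus vague retention statements:

```python
# Minimal sketch of scanning a privacy policy for retention disclosures.
# Patterns are illustrative, not the paper's actual methodology.
import re

CONCRETE = re.compile(
    r"\b(retain|store|keep)\b.{0,60}?\b(\d+)\s*(day|month|year)s?\b", re.I | re.S
)
VAGUE = re.compile(r"as long as (necessary|needed|required)", re.I)

def retention_disclosure(policy_text: str) -> str:
    if CONCRETE.search(policy_text):
        return "concrete period disclosed"
    if VAGUE.search(policy_text):
        return "vague disclosure"
    return "no disclosure found (opaque)"

print(retention_disclosure("We retain account data for 24 months after closure."))
print(retention_disclosure("We keep your data as long as necessary."))
```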
-Privacy Fails: Trackers in the Wild, Industry Talk by Tobias Kupek (Good Research). This talk presents work on semi-automated deep dives into the network traffic of a wide range of U.S.-focused web and mobile applications. The talk will discuss some unexpected, and often unintentional, privacy leaks discovered during the analysis of tracker behavior and third-party data flows, including PII leakage by analytics tools and mobile app SDKs collecting data for various purposes. The goal is to raise awareness for more careful implementations of third-party SDKs and data minimization in general.
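As a minimal illustration of the kind of check such an analysis automates (the patterns, hostnames, and sample requests are invented), one can scan captured network requests for PII heading to third parties:

```python
# Minimal sketch of flagging PII in captured network traffic.
# Patterns and sample data are illustrative only.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

captured_requests = [
    {"host": "analytics.example.net", "body": "uid=7&email=jane.doe@mail.com"},
    {"host": "api.example.com", "body": "page=/home"},
]

for req in captured_requests:
    for kind, pattern in PII_PATTERNS.items():
        if pattern.search(req["body"]):
            print(f"possible {kind} leak to third party: {req['host']}")
```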
12:35-13:30 Lunch
13:30-15:15 Session 3. Chair: Isabel Barberá
-A Qualitative Analysis Framework for mHealth Privacy Practices by Thomas Cory, Wolf Rieder and Thu-My Huynh. This paper introduces a novel framework for the qualitative evaluation of privacy practices in mHealth apps, particularly focusing on the handling and transmission of sensitive user data.
-Navigating Privacy Patterns in the Era of Robotaxis by Ala'A Al-Momani, David Balenson, Zoltán Ádám Mann, Sebastian Pape, Jonathan Petit and Christoph Bösch. This paper investigates the applicability of privacy patterns in the context of robotaxis, a use case in the broader Mobility-as-a-Service (MaaS) ecosystem.
-Privacy Patterns: Past, Present, and Future?, Panel by David Balenson. The privacy patterns at privacypatterns.org were developed collaboratively over a decade ago as an open set of design patterns to help facilitate “privacy-by-design” for software engineers. The goal of the patterns was to create a “living community” of users, and they have not only seen considerable application and use but have also been the subject of substantial analysis. However, it’s not clear there has been a sustained effort to maintain and expand the patterns to keep pace with technology development. The goal of this panel is to engage some of the original team members and other privacy researchers to revisit the motivation for the privacy patterns, discuss their practical use and analysis in real-world privacy engineering scenarios, and explore future opportunities for expanding the patterns and their use, including in specific domains.
15:15-15:35 Coffee Break
15:35-17:05 Session 4. Chair: Victor Morel
-Differentially Private Multi-label Learning Is Harder Than You’d Think, by Benjamin Friedl, Anika Hannemann and Erik Buchmann. This paper explores the intricacies of applying the Private Aggregation of Teacher Ensembles (PATE) framework to multi-label learning and identifies the challenges involved.
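For intuition about why the multi-label setting is harder (a bare-bones sketch of PATE-style noisy vote aggregation, not the paper's method; the teacher votes here are random), note that each label needs its own noisy aggregation, so the privacy cost of a single query compounds across labels:

```python
# Bare-bones sketch of PATE-style noisy vote aggregation for multi-label
# learning: each of L labels gets its own noisy majority vote over the
# teachers, so privacy loss accumulates once per label per query.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_labels = 50, 5
votes = rng.integers(0, 2, size=(n_teachers, n_labels))  # binary label votes

def noisy_label_aggregate(votes: np.ndarray, noise_scale: float) -> np.ndarray:
    counts = votes.sum(axis=0).astype(float)          # yes-votes per label
    counts += rng.laplace(0.0, noise_scale, size=counts.shape)
    return (counts > votes.shape[0] / 2).astype(int)  # noisy majority per label

print(noisy_label_aggregate(votes, noise_scale=10.0))
# One query here consumes privacy budget for all 5 labels, not one class.
```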
-An Applied Perspective: Estimating the Differential Identifiability Risk of an Exemplary SOEP Data Set by Jonas Allmann, Saskia Nuñez von Voigt and Florian Tschorsch. Using real-world study data usually requires contractual agreements under which research results may only be published in anonymized form. Formal privacy guarantees, such as differential privacy, could help data-driven projects comply with data protection requirements. However, deploying differential privacy in consumer use cases raises the need to explain its underlying mechanisms and the resulting privacy guarantees. In this paper, the authors review and extend an existing privacy metric and show how to compute this risk metric efficiently for a set of basic statistical queries.
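The paper's metric is more involved, but the flavor of the computation can be sketched: under ε-differential privacy, a Bayesian adversary's posterior belief that a target individual is in the data is bounded, and that bound can be evaluated for a composed query budget (the per-query budgets and prior below are made up for illustration):

```python
# Sketch of one way to turn an epsilon budget into an identifiability-style
# risk number: the standard Bayesian bound on an adversary's posterior that
# a target is in the dataset, given prior p, under epsilon-DP.
# The query budgets and prior are illustrative, not the paper's metric.
import math

def posterior_bound(epsilon: float, prior: float) -> float:
    e = math.exp(epsilon)
    return (e * prior) / (e * prior + (1.0 - prior))

query_budgets = [0.1, 0.3, 0.5]        # per-query epsilons
total_epsilon = sum(query_budgets)     # basic sequential composition
print(f"posterior bound: {posterior_bound(total_epsilon, prior=0.5):.3f}")
```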
-Attaxonomy: Unpacking Differential Privacy Guarantees Against Practical Adversaries by Rachel Cummings, Shlomi Hod, Jayshree Sarathy and Marika Swanberg. Differential Privacy (DP) is a mathematical framework designed to mitigate privacy risks associated with machine learning and statistical analyses. Despite the growing adoption of DP, its technical privacy-loss parameters do not easily map to notions of real-world privacy risks. One approach for contextualizing these parameters is via defining and measuring the success of technical attacks, but so far the guarantees of DP only protect against strong adversaries. In this work, we offer a detailed taxonomy of attacks, showing the various dimensions of attacks and highlighting that many real-world settings have been understudied. Using this taxonomy, we generalize the notion of reconstruction robustness of Balle et al. [3] to a less-informed adversary that is armed with distributional, rather than dataset-level, auxiliary knowledge. We extend the worst-case guarantees of DP to this average-case setting. Along the way, we offer insights towards making informed choices around DP’s privacy-loss parameters.
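One could encode the kind of attack dimensions such a taxonomy organizes as plain types; only the distributional versus dataset-level knowledge distinction and the reconstruction goal come from the abstract, the other values are assumptions added for illustration:

```python
# Illustrative encoding of attack dimensions like those the taxonomy covers.
# Only "distributional vs dataset-level knowledge" and "reconstruction"
# appear in the abstract; the other values are assumptions.
from dataclasses import dataclass
from enum import Enum

class Knowledge(Enum):
    DATASET_LEVEL = "dataset-level auxiliary knowledge"    # worst-case adversary
    DISTRIBUTIONAL = "distributional auxiliary knowledge"  # average-case adversary

class Goal(Enum):
    MEMBERSHIP = "membership inference"
    RECONSTRUCTION = "record reconstruction"

@dataclass(frozen=True)
class Attack:
    knowledge: Knowledge
    goal: Goal

# The less-informed adversary studied in the paper's extension:
attack = Attack(Knowledge.DISTRIBUTIONAL, Goal.RECONSTRUCTION)
print(attack)
```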
17:05 Closing remarks