The 2025 European Symposium on Usable Security

Dates and Location: September 10 & 11, 2025, Manchester, United Kingdom.

Experience EuroUSEC 2025 in Manchester – the vibrant northern English city that lives and breathes football! Join leading minds in usable privacy and security as we unite to explore, learn, and drive innovation in cybersecurity and privacy. Be part of the future, where research meets real-world impact!

The European Symposium on Usable Security (EuroUSEC) is a forum for research and discussion on human factors in security and privacy. EuroUSEC solicits previously unpublished work offering novel research contributions in any aspect of human-centered security and privacy. EuroUSEC aims to bring together researchers, practitioners, and students from diverse backgrounds, including computer science, engineering, psychology, the social sciences, and economics, to discuss issues related to human-computer interaction, security, and privacy.

EuroUSEC is part of the USEC family of events: https://www.usablesecurity.net/USEC/index.php

We want EuroUSEC to be a community-driven event and would love to hear any questions, comments, or concerns you might have regarding changes from last year—please just email the programme chairs.

We thank the Research Institute for Sociotechnical Cyber Security (RISCS) for sponsoring us. Their support provides free access to the conference for a limited number of UK PhD students and covers other expenses, helping keep registration costs to a minimum.

We also thank Lancaster University for providing on-the-ground administrative support.



Keynote Speakers

Dr. Marc J. Dupuis

Talk Title: Faith, Fear, and Fallibility: A Human-Centered Vision for Cybersecurity Inspired by Religion and Beyond

Talk Abstract: Despite decades of effort, organizations continue to struggle with the human side of cybersecurity. Security policies are written, awareness campaigns are launched, and training is delivered; yet, employees still make mistakes, disregard guidance, or act contrary to expectations. These behaviors are not merely signs of negligence but reflections of the complexity of human nature. Where cybersecurity frameworks often lean heavily on rules and compliance, they tend to lack depth in addressing the emotional, psychological, and even existential dimensions of human behavior.

This presentation explores a broader, more empathetic vision for cybersecurity by drawing upon insights not only from world religions but also from research on fear, shame, regret, and grace—core emotional and moral experiences that influence decision-making. Religions, which have long grappled with human fallibility, offer practices and paradigms for encouraging right action while making space for forgiveness, growth, and redemption. Similarly, emotional states like fear, shame, regret, forgiveness, and grace can either hinder or help secure behavior, depending on how they are engaged. By integrating perspectives from organized religion, psychology, and behavioral science, we propose a more human-centered cybersecurity vision—one that recognizes the limits of punitive models and instead explores what it means to care for users who inevitably make mistakes. This approach is not merely for the sake of demonstrating compassion for those who make mistakes, but equally for the organization, so that it may be more secure and resilient from a cybersecurity perspective.

We present findings from our qualitative research with religious leaders, along with feedback from cybersecurity professionals on this expanded model, as well as insight gleaned from our other studies on the use of emotion to engender behavioral change. This work aims to spark a new conversation in the field—one that reimagines “best practices” not merely in terms of efficiency or compliance, but through a lens of empathy, accountability, and the messy reality of being human.

Biography: Marc J. Dupuis, Ph.D., is an Associate Professor within the Division of Computing and Software Systems at the University of Washington Bothell where he also serves as the Graduate Program Coordinator. Dr. Dupuis earned a Ph.D. in Information Science at the University of Washington with an emphasis on cybersecurity. Prior to this, he earned an M.S. in Information Science and a Master of Public Administration (MPA) from the University of Washington, as well as an M.A. in Political Science at Western Washington University.

His research area is cybersecurity with an emphasis on the human factors of cybersecurity. The primary focus of his research involves the examination of psychological traits and their relationship to the cybersecurity and privacy behavior of individuals. This has included an examination of antecedents and related behaviors, as well as usable security and privacy. His goal is to both understand behavior as it relates to cybersecurity and privacy, and discover what may be done to improve that behavior.

More recently, Dr. Dupuis and his collaborators have been exploring the use of fear appeals, shame, regret, forgiveness, and grace in cybersecurity, including issues related to their efficacy and the ethics of using such techniques to engender behavioral change.

Dr Jason R.C. Nurse

Talk Title: It's *not* all about the Benjamins: The real harms of cyber attacks

Talk Abstract: Cyber-attacks pose a significant threat for organisations and individuals, with ransomware itself devastating countless lives across the world. This keynote talk seeks to move beyond focusing on the cyber-attacks themselves to explore the depth and breadth of harms experienced by victims of these crimes. Drawing on insights from extensive interviews with victims, incident responders, negotiators, law enforcement, and government officials, we uncover a range of severe consequences that extend beyond monetary loss. This is particularly true in the case of ransomware attacks.

As we will discuss, organisations face significant risks of business interruption and data exposure, which can lead to substantial financial penalties, reputational damage, and potential legal repercussions. For employees – and specifically thinking about the human factor – the impact can be equally devastating. The psychological toll of a ransomware attack, for instance, is profound, leading to increased stress, anxiety, and even post-traumatic stress disorder. Furthermore, the physical consequences, such as disrupted work routines and extended work hours, can exacerbate these mental health challenges.

This talk also explores the factors that can either mitigate or exacerbate these harms, including organisational preparedness, leadership culture, and effective crisis communication. By understanding these dynamics, organisations can develop robust strategies to minimise the impact of ransomware attacks and support their employees during and after such incidents. This presentation aims to shift the narrative and research surrounding cyber-attacks, highlighting the human cost of these attacks. By recognising the multifaceted nature of cyber-attack harms, we can advocate for more comprehensive and effective response strategies, ultimately protecting organisations and their employees from this growing threat.

Biography: Dr Jason R.C. Nurse is a Reader in Cyber Security in the Institute of Cyber Security for Society and the School of Computing at the University of Kent. He also holds the roles of Associate Fellow at The Royal United Services Institute (RUSI), Visiting Fellow in Defence and Security at Cranfield University, and Research Member of Wolfson College, University of Oxford.

His research interests include human aspects of cyber security, cyberpsychology, cyber harms, security culture, ransomware, cyber insurance, and corporate communications and cyber security.

Dr Nurse has published over 120 peer-reviewed articles in prestigious security journals, and his research has been featured in national and international media including the BBC, Associated Press, The Wall Street Journal, The Washington Post, Newsweek, Wired, The Telegraph, and The Independent. Prior to joining Kent in 2018, Dr Nurse was a Senior Researcher in Cyber Security at the University of Oxford and before that, a Research Fellow in Psychology at the University of Warwick.

Call for Papers

We invite you to submit a paper and join us in Manchester, UK at EuroUSEC 2025.

We welcome submissions containing unpublished original work describing research, visions, or experiences in all areas of usable security and privacy. We also welcome systematization of knowledge (SOK) papers with a clear connection to usable security and privacy. Well-executed replication studies are also welcomed. We appreciate a variety and mixture of research methods, including both qualitative and quantitative approaches.

Topics include, but are not limited to:

  • usable security and privacy implications or solutions for specific domains (such as IoT, ehealth, and vulnerable populations)
  • methodologies for usable security and privacy research
  • role of AI/Generative AI technologies in improving usable security and privacy
  • field studies of security or privacy technology
  • longitudinal studies of deployed security or privacy features
  • new applications of existing privacy/security models or technology
  • innovative security or privacy functionality and design
  • usability evaluations of new or existing security or privacy features
  • security testing of new or existing usability features
  • lessons learned from the deployment and use of usable privacy and security features
  • reports of failed usable privacy/security studies or experiments, with the focus on the lessons learned from such experience
  • papers with negative results
  • reports of replicating previously published important studies and experiments
  • psychological, sociological, cultural, or economic aspects of security and privacy
  • studies of administrators or developers and support for security and privacy
  • studies on the adoption or acceptance of security or privacy technologies
  • systematization of knowledge papers
  • impact of organizational policy or procurement decisions on security and privacy

We aim to provide a venue for researchers at all stages of their careers and at all stages of their projects.

All submissions will undergo double-blind review by at least two reviewers. Submissions will receive one of three decisions: Accept, Shepherding, or Reject. Papers receiving a shepherding decision will be assigned a shepherd; a revised version must be prepared and approved by the shepherd before the paper is accepted. During the shepherding phase, the identities of the authors and one shepherding reviewer will be disclosed for communication purposes. Shepherding will take place outside the conference management system, and authors will be responsible for reaching out to their shepherd.



Important Dates (Anywhere on Earth (AoE))

Paper registration deadline (mandatory): Monday, 13th May, 2025
Paper submission deadline: Friday, 16th May, 2025 (hard deadline)
Authors' notification: Monday, 23rd June, 2025

Revision period (for shepherded papers): Tuesday, 24th June to Monday, 7th July, 2025
Authors' notification (for shepherded papers): Monday, 14th July, 2025

Camera-ready submission for all papers: Monday, 4th August, 2025


Submission Instructions

Upload your submission via this link:

Disclaimer: The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

  1. All submissions must report original work written in English.
    • Using text generated by large language models such as ChatGPT for purposes other than editing the authors' own text is not allowed. While we do not plan to run checking tools on all submissions, we will investigate submissions brought to our attention and will reject those found in violation.
    • Authors must clearly document any overlap with previously or simultaneously submitted papers from any of the authors (email the chairs a PDF document outlining this).
  2. Submissions should be anonymized for review. No author names or affiliations should be included in the title page or the body of the paper. Acknowledgments should also be removed, and papers should not reveal authors' identities.
  3. Refer to your own related work in the third person, as though someone else had written it. This also includes, e.g., data sets: "We received data from Smith et al. [31] in our experiment." Do not blind citations except in extraordinary circumstances. If in doubt, contact the chairs.
  4. All submissions should be at most 10 pages (double column, excluding bibliography), reporting on mature work. Appendices do not count towards the page limit, but note that reviewers will not necessarily read them: the main text should be sufficient without the appendices. If accepted, authors can include a link to appendices in their paper (hosted on a service such as OSF).
  5. Papers must be typeset in A4 format (not "US Letter") using the IEEE conference proceeding template with the appropriate options [Templates here]. Failure to adhere to the page limit and/or formatting requirements will be grounds for desk rejection.
  6. Systematization of Knowledge paper titles must begin with "SOK:".
  7. Replication studies must include "Replication" in the title.

Simultaneous submission of the same paper to another venue with proceedings or a journal is prohibited. Authors may, however, post pre-prints; please consult the guidelines for further information. Serious infringements of these policies may result in the paper's rejection, and the authors may be put on a warning list, even if we only become aware of the violation after the paper has been accepted. If you have questions about this policy, contact the EuroUSEC chairs.

The conference has an agreement with CPS to handle production of conference proceedings content. This conference is not sponsored by the IEEE. Accepted papers will be submitted for possible inclusion into IEEE Xplore. All conference content submitted to IEEE Xplore is subject to review based on meeting IEEE scope and quality requirements. If the conference is found not to meet these requirements, content may not appear in IEEE Xplore.

At least one author of each accepted paper must register and attend to present the paper IN PERSON. We will only permit virtual presentations in exceptional circumstances.

Contact EuroUSEC chairs if there are any questions.

General Chairs

Programme Committee Chairs

The chairs can be contacted at pc.chairs.eurousec

Programme Committee

  • Adam Jenkins, King's College London (UK)
  • Agnieszka Kitkowska, Jönköping University (Sweden)
  • Alaa Nehme, Mississippi State University (USA)
  • Alvi Jawad, Carleton University (Canada)
  • Anna Leschanowsky, Fraunhofer Institute for Integrated Circuits IIS (Germany)
  • Anna-Marie Ortloff, University of Bonn (Germany)
  • Anne Hennig, Karlsruhe Institute of Technology (Germany)
  • Anuj Gautam, University of Illinois at Urbana-Champaign (USA)
  • Arianna Rossi, Sant'Anna University of Advanced Studies (Italy)
  • Bernardo Breve, University of Salerno (Italy)
  • Bilal Naqvi, Lappeenranta University of Technology (Finland)
  • Christian Eichenmüller, Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany)
  • Christian Tiefenau, University of Bonn (Germany)
  • Christine Utz, Radboud University (Netherlands)
  • Cigdem Sengul, Brunel University (UK)
  • Claudia Negri-Ribalta, University of Luxembourg (Luxembourg)
  • Collins Munyendo, The George Washington University (USA)
  • Daniel Thomas, University of Strathclyde (UK)
  • Diana Freed, Brown University (USA)
  • Divyanshu Bhardwaj, CISPA Helmholtz Center for Information Security (Germany)
  • Elham Al Qahtani, University of Jeddah (Saudi Arabia)
  • Eman Alashwali, King Abdulaziz University (Saudi Arabia)
  • Emiram Kablo, University of Paderborn (Germany)
  • Emma Nicol, University of Strathclyde (UK)
  • Gaston Pugliese, Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany)
  • Habiba Farzand, University of Bristol (UK)
  • Hana Habib, Carnegie Mellon University (USA)
  • Hazel Murray, Munster Technological University (Ireland)
  • Ingolf Becker, University College London (UK)
  • James Nicholson, Northumbria University (UK)
  • Jan Nold, Ruhr University Bochum (Germany)
  • Jan-Willem Bullee, University of Twente (Netherlands)
  • Jason Jaskolka, Carleton University (Canada)
  • Jide Edu, University of Strathclyde (UK)
  • Jingjie Li, University of Edinburgh (UK)
  • Joakim Kävrestad, Jönköping University (Sweden)
  • Julie Wunder, Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany)
  • Jurlind Budurushi, Baden-Wuerttemberg Cooperative State University (Germany)
  • Karoline Busse, University of Applied Administrative Sciences Lower Saxony (Germany)
  • Kavous Salehzadeh Niksirat, EPFL (Switzerland)
  • Kévin Huguenin, University of Lausanne (Switzerland)
  • Kieron Ivy Turk, Surrey University (UK)
  • Lorin Schoni, ETH Zurich (Switzerland)
  • Mainack Mondal, IIT Kharagpur (India)
  • Maksim Kalameyets, Newcastle University (UK)
  • Marvin Ramokapane, University of Bristol (UK)
  • Muriel-Larissa Frank, University of Luxembourg (Luxembourg)
  • Nicola Zannone, Eindhoven University of Technology (Netherlands)
  • Nicolas E. Díaz Ferreyra, Hamburg University of Technology (Germany)
  • Noé Zufferey, ETH Zurich (Switzerland)
  • Ola Michalec, Bristol University (UK)
  • Pavlo Burda, ICT Institute (Netherlands)
  • Raphael Serafini, University of Cologne (Germany)
  • Robert Biddle, Carleton University (Canada)
  • Ruba Abu-Salma, King's College London (UK)
  • Ryan Gibson, University of Strathclyde (UK)
  • Sabid Bin Habib Pias, Indiana University (USA)
  • Sara Haghighi, University of Maine (USA)
  • Scott Harper, Surrey University (UK)
  • Shan Xiao, Gonzaga University (USA)
  • Shijing He, King's College London (UK)
  • Simon Parkin, Delft University of Technology (Netherlands)
  • Sotirios Terzis, University of Strathclyde (UK)
  • Stefanos Evripidou, University of Glasgow (UK)
  • Stephan Wiefling, swiefling.de & Vodafone (Germany)
  • Thomas Gross, Newcastle University (UK)
  • Weijia He, University of Southampton (UK)
  • Xiaowei Chen, University of Luxembourg (Luxembourg)
  • Yasmeen Abdrabou, Technical University of Munich (Germany)

Organising/Finance Chair

Publicity Chairs

  • Scott Harper, Newcastle University (UK)
  • Huiyun Tang, Luxembourg University (Luxembourg)

Technical Support

  • Frazer Sandison, Strathclyde University (UK)

Steering Committee

  • Oksana Kulyk, IT University of Copenhagen (Denmark)
  • Karen Renaud, University of Strathclyde (UK)
  • Peter Mayer, University of Southern Denmark (Denmark)
  • Angela Sasse, Ruhr University Bochum (Germany)
  • Melanie Volkamer, Karlsruhe Institute of Technology (Germany)
  • Charles Weir, Lancaster University (UK)
  • Farzaneh Karegar, Karlstad University (Sweden)


Program

All times in the program are given in British Summer Time (UTC+01:00). You can use this link to convert the times to any time zone you wish.

The preliminary program is available below.

Wednesday 10th September 2025
8:30 - 9:00 | Registration

9:00 - 9:15 | Welcome and Greetings

9:15 - 10:15 | Keynote 1: Dr. Marc J. Dupuis, "Faith, Fear, and Fallibility: A Human-Centered Vision for Cybersecurity Inspired by Religion and Beyond"

10:15 - 10:30 | Coffee Break

10:30 - 12:00 | Technical Paper Session 1: Social and Psychological Aspects of Cybersecurity (20 minutes per paper + 10 minutes for questions)

Session Chair: Michelle O'Keeffe
“I tell him everything that I do”: An investigation of privacy and safety implications of AI companion usage Anine Henriksen (University of Southern Denmark); Raha Asadi (IT University of Copenhagen); Oksana Kulyk (IT University of Copenhagen); Peter Mayer (University of Southern Denmark); Anne Gerdes (University of Southern Denmark)

The advances of generative AI and in particular large language models (LLM) resulted in the proliferation of AI chatbots designed for a variety of functions. One example of such chatbots are the so-called AI companion apps that allow creating an anthropomorphised character one can interact with. Indeed, AI companions become an increasingly common part of people's daily lives resulting in increased risks of adverse privacy and safety consequences. In this work we investigate the experiences of users of the Replika chatbot, an AI companion app that is advertised as "the AI companion who cares". We analyse 111 Reddit posts of Replika users, focusing on data shared with the app as well as harms users experience from interacting with the app. Our analysis shows that Replika is commonly seen as a simulation of human relationships, which results in users being attached to their chatbot in a similar way they would be attached to their romantic partner or a close friend. Such an attachment leads to significant amounts of sensitive data shared with the Replika app such as details about one's personal life, mental health issues, or sexual preferences. On the other hand, unexpected changes in Replika's workings, e.g., due to new restrictions or bugs introduced by software updates, elicit strong reactions among its users who report harms akin to feeling betrayed or abandoned by a real-life companion. Our research concludes that there is a need for further investigation of relationships to AI companions and of possible ways to mitigate these privacy and safety risks.

SOK: Cognitive Dissonance Theory in the Cybersecurity Domain Paul Van Schaik (Teesside University); Karen Renaud (University of Strathclyde)

Cognitive dissonance occurs when people reject new evidence that contradicts existing knowledge or behaviours. This is relevant to cybersecurity, a domain that often needs to persuade people to adopt new practices or change the way they currently behave. We carried out a scoping review of the literature to determine the extent to which cybersecurity researchers have engaged with this concept. We find that whereas many papers mention the phenomenon, very few report on actual studies into this phenomenon in cybersecurity. We conclude by suggesting a number of directions for research.

"Even after two years, we still have a bad feeling": Two Comparative Case Studies of the Effects of a Cyberattack on Fear, Trust in the IT Department and Security-Related Stress. Markus Schoeps (Ruhr-University Bochum); Sangavi Shanthakumar (Ruhr-University Bochum); Philipp Müntefer (Ruhr-University Bochum); Martina Angela Sasse (Ruhr-University Bochum)

When organizations fall victim to a cyberattack, it can have psychological consequences on employees, the IT department, and the relationship between them. Some prior studies investigated these effects. Still, the areas of trust and security-related stress remain largely unexplored, even though these two variables can have serious effects on IT security behavior and employee well-being. We conducted a study in two research organizations in a German-speaking country, one of which was breached by a serious cyberattack. We used a questionnaire to gather data about employees' fear of cyberattacks, trust in the IT department, and security-related stress, among others (n = 149), and conducted semi-structured interviews with n = 7 IT employees of both organizations, asking about their perceived employee-trust, security-related stress and experience of the attack. We find that participants of the breached organization report less trust in the IT department, worse communication with the IT department and more security-related stress. We found positive correlations between trust in the IT department and the assessment of the communication of the IT department, and negative correlations between the two aforementioned variables and security-related stress. Results from the interviews show that IT employees report a high level of security-related stress, attach great importance to communication to improve employee trust, and describe a hardening of rules and an increase in communication after the attack. We further discuss the impact of our findings and conclude that a cyberattack can harbor the risk of an increase in security-related stress for all employees because of stricter security requirements, and that this can decrease employee trust, even though it is often meant otherwise.


12:00 - 13:00 | Lunch Break

13:00 - 14:30 | Technical Paper Session 2: Authentication and User Perspectives (20 minutes per paper + 10 minutes for questions)

Session Chair: Anne Hennig
Are Passkeys the Key? Older Adults’ Perceptions of Passwordless Authentication Brandon Willis-Arnold (Northumbria University); Paul Vickers (Northumbria University); James Nicholson (Northumbria University)

Older adults face unique challenges with password-based authentication, often due to age-related memory decline, leaving their digital accounts vulnerable to compromise. Passkeys—a novel authentication method that replaces passwords with device-based tokens and biometrics—may offer an accessible alternative for this demographic. We conducted three qualitative workshop sessions (n=23) with community-dwelling older adults, discussing their current authentication practices, demonstrating passkey functionality, and eliciting their views on its usability and adoption barriers. Participants responded positively to the concept of passkeys, appreciating the elimination of memory burden, yet concerns emerged regarding device compatibility, transferring passkeys across devices, and potential risks if others accessed their hardware. Importantly, we found that older adults are generally reluctant to experiment with new authentication mechanisms on their own, emphasising the need for careful introduction and support to enable adoption. Our findings identify both opportunities and critical challenges for designing passkey systems that are inclusive of older users.

"Lying makes me look suspicious": Users' Perspectives and Analysis of Security Questions Xin Sun (Concordia University); Kavous Salehzadeh Niksirat (Max Planck Institute for Security and Privacy); Kévin Huguenin (University of Lausanne); Mohammad Mannan (Concordia University); Amr Youssef (Concordia University)

Despite recommendations from security authorities, such as those of the US NIST, against the use of security questions for online authentication, these methods are still used for login and account recovery processes. Although research on security questions has a long history, key gaps remain, particularly regarding user perceptions and the requirements used by websites for selecting and answering security questions. In this paper, we address these gaps through a two-part study: (i) a user survey (N = 292) capturing insights from a diverse US sample, and (ii) an analysis of an extensive set of 26 security requirements across 73 websites, totaling 1913 security questions. Our findings reveal notable user misconceptions, such as users believing that websites already possess correct answers to personal security questions. Additionally, we identify widespread insecure practices, such as accepting single characters and allowing identical answers across multiple security questions. By addressing both user perceptions and website security requirements, we provide a comprehensive understanding of weaknesses in current security question practices and contribute to the ongoing discourse on strengthening authentication methods.

Beyond the End User: A Stakeholder-Centred Mapping of XR Authentication Research Christina Katsini (University of Warwick); Gregory Epiphaniou (University of Warwick); Carsten Maple (University of Warwick)

The rapid growth of Extended Reality (XR) technologies has intensified the need for authentication mechanisms that are secure, usable, and responsive to the contextual and organizational complexities of immersive environments. Although existing research often focuses on technical performance and end-user usability, it rarely accounts for the broader ecosystem in which XR authentication mechanisms are designed, deployed, and regulated. This paper presents a first systematic exploration into how XR authentication research engages with key stakeholder types whose requirements and constraints shape the viability and adoption of user authentication mechanisms. We adopt a mixed-method approach comprising expert interviews to identify stakeholder types, a systematic mapping of 59 publications to assess their representation in the literature, and the development of a formal stakeholder model to capture their roles, interactions, and relationships, offering a structured foundation for aligning user authentication design with multi-stakeholder requirements and objectives, contributing to the development of more inclusive, deployable, and context-aware XR security solutions.


14:30 - 15:00 | Coffee break

15:00 - 16:30 | Technical Paper Session 3: Misinformation and Credibility Assessment (20 minutes per paper + 10 minutes for questions)

Session Chair: Kévin Huguenin
When Trust Overrides Caution: Investigating Spear Phishing in Personal Contexts Among Young and Older Adults Rhian Lukins (Northumbria University); Neeranjan Chitare (Birmingham City University); Nalin Arachchilage (RMIT); Lynne Coventry (Abertay University); James Nicholson (Northumbria University)

Spear phishing messages are highly tailored attacks designed to obtain confidential information or funds from individuals, yet systematically studying these attacks in non-organisational settings is challenging. This study conducted a realistic simulated spear-phishing campaign aimed at the general public. Among 20 younger adults (aged 18–25) and 21 older adults (aged 65 and above), 65% of younger participants and 90% of older participants entered their personal information on a ‘fake’ website after receiving the spear-phishing email. While some participants recognised signs of a potential scam, they dismissed these warnings due to their trust in the sender and the belief that someone they knew could not be spoofed by a malicious actor. These findings highlight how personal trust in an individual, rather than a recognised organisation, can override suspicion. We discuss the implications of our results and the ethical considerations of gathering such in-the-wild data using deceptive methods.

Between Privacy and Transparency: Exploring the Ways of Communicating Credibility Assessment Huiyun Tang (University of Luxembourg); Björn Rohles (Digital Learning Hub Luxembourg); Yuwei Chuai (University of Luxembourg); Gabriele Lenzini (University of Luxembourg); Anastasia Sergeeva (University of Luxembourg)

Reducing the spread of misinformation remains a complex problem, especially on encrypted social messaging platforms. AI-based fact-checking systems offer a promising alternative to manual verification, enabling faster and more scalable responses. However, how these systems communicate their findings to users is still an open design problem. Current approaches, such as binary warning labels, often fail to capture more subtle or partially misleading content. At the same time, users’ limited attention and the overwhelming volume of online information constrain how much and what kind of verification feedback can be delivered. This study explores how two key dimensions of feedback, source (content- or context-based) and granularity (binary vs. fine-grained assessments), affect users’ trust in the system, perceptions of usefulness, and judgments of content accuracy. In a pre-registered online experiment (n = 537), we tested how these design factors influence user responses. We found that credibility heuristics based on both content and context sources of credibility are valuable to users for making decisions, and that short-heuristic-based explanations are useful to users. In addition, we found that acknowledgement of system certainty about the verdict also helps users to tailor their opinions about the information. Our findings suggest that context-acknowledged short feedback based on heuristics may be a promising design direction to support users in assessing misinformation, even on platforms with limited content visibility, such as encrypted messaging apps.

Nunti-Score: Support Users in the Assessment of News Article Previews in OSNs Stephan Escher (Saxon Data Protection and Transparency Officer); Sebastian Pannasch (TU Dresden); Thorsten Strufe (KIT)

The rise of fake news and populism online increases the demand for tools to counter misinformation and better inform users about the credibility of posts in social media. Effective information with high user acceptance is anticipated to increase competence in assessing credibility and assigning trust to posts encountered online. Misjudgment due to the increasing difficulty of evaluating the credibility of news sources can lead to societal risks, even to the point of threatening democratic societies, given the increasingly incidental information acquisition of citizens on the internet. In this paper, we investigate the utility and acceptance of an approach to enrich news article previews with context information. It visualizes the information quality of linked news articles with a rating, which is based on automatically extracted background information. Based on results from an online experiment with 455 participants, we obtained indications that users were better able to judge the credibility of news articles in OSNs with higher certainty. Additional feedback confirmed that transparency and comprehensibility of the rating were fundamental for its acceptance.


From 19:30 | Dinner. Place: Browns Brasserie & Bar Manchester.

Thursday 11th September 2025
9:15 - 10:15 | Keynote 2: Dr Jason R.C. Nurse, "It's *not* all about the Benjamins: The real harms of cyber attacks"

10:15 - 10:30 | Coffee Break

10:30 - 12:00 | Technical Paper Session 4: Organizational Cybersecurity Practices (20 minutes per paper + 10 minutes for questions)

Session Chair: James Nicholson
Do (Not) Tell Me About My Insecurities: Assessing the Status Quo of Coordinated Vulnerability Disclosure in Germany Amid New EU Cybersecurity Regulations Sebastian Neef (Technische Universität Berlin); Cenk Schlunke (Technische Universität Berlin); Anne Hennig (Karlsruhe Institute of Technology)

"In our interconnected world, good IT security practices are necessary to avoid vulnerabilities and data breaches. Providing security contacts, e.g., via Coordinated Vulnerability Disclosure (CVD) programs or security.txt files, is an important practice for businesses to facilitate vulnerability reporting by external parties.Within a longitudinal study, we analyzed Germany's DAX 40 companies' adoption, challenges and experiences with CVD programs. In addition to monitoring publicly available information about their CVD programs, we sent out questionnaires via email and postal mail in 2023 and 2025, and we received answers from 20% of the companies. The adoption rates show a significant increase from 50% (2023) to over 90% (2025), with ten new CVD programs and 25 new security.txt files being available.The survey answers reveal that, for example, legal obligations (e.g., NIS2 and CRA) drive the adoption of CVD practices, but lack of (human) resources and varying report quality are considered drawbacks. As the first study to survey German DAX companies on their CVD practices, our results can help foster the adoption and understanding of security programs by SMEs and other companies, or provide insights for policy makers in practical challenges and experiences from the industry.

Towards an Employee-Centric Framework of Cybersecurity Alexandra von Preuschen (Justus-Liebig-University Giessen); Roman Henke (Justus-Liebig-University Giessen); Manpreet Kaur (Justus-Liebig-University Giessen); Julian Nickel (Justus-Liebig-University Giessen); Monika Schuhmacher (Justus-Liebig-University Giessen)

"While the critical role of employees in cybersecurity has long been recognized, there is a lack of a comprehensive and holistic understanding regarding the key points of contact of employees with cybersecurity, beyond mere preventive actions. To address this gap, we conducted an exploration of key points of contact between employees and cybersecurity using semistructured interviews (n = 20) and identified employee aspects within the functions of the NIST Cybersecurity Framework (NIST-CSF). We demonstrate that particular perceptions, emotions, and social dynamics are relevant to employees’ perspectives on cybersecurity in comparison to the rather technical inclusion of the employees within the NIST-CSF. By aligning the employees’ perspectives with the functions/subcategories of the NIST-CSF, we take a step toward an employee-centric framework for cybersecurity that highlights these essential key points of contact between employees and cybersecurity and points out gaps for the integration of employees in organizational cybersecurity. We conclude the paper by giving recommendations for practitioners and future research.

Measuring and Benchmarking Incident Response Readiness Muntathar Abid (University of Technology Sydney); Priyadarsi Nanda (University of Technology Sydney)

Small-to-medium enterprises (SMEs) remain disproportionately vulnerable to cyber incidents due to constrained resources and underdeveloped operational practices. While many maintain incident response plans (IRPs) to meet regulatory requirements, these plans are often untested and poorly integrated into operational workflows, resulting in delayed containment, unclear escalation, and inconsistent response actions. This disconnect between documentation and execution represents a critical readiness gap that can significantly increase the impact and duration of cyber events. To address this challenge, this paper introduces the Incident Response Readiness Score (IRRS), a scenario-based assessment framework designed to empirically evaluate an organisation's incident response capability under simulated conditions. The IRRS applies a structured scoring rubric calibrated through a Scenario Risk Index, enabling proportional evaluation of performance across diverse incident types. By transforming qualitative incident response actions into a reproducible and risk-weighted metric, the IRRS offers a practical and scalable means of assessing and improving cybersecurity readiness for different types of organisations.


12:00 - 13:00 | Lunch

13:00 - 14:30 | Technical Paper Session 5: Cybersecurity for Specific User Groups (20 minutes per paper + 10 minutes for questions)

Session Chair: Paul van Schaik
Cybersecurity Concerns of Older Irish Adults: A Comparison Across High, Medium and Low Tech Usage Michelle O'Keeffe (Munster Technological University); Jacob Camilleri (Munster Technological University); Ashley Sheil (Munster Technological University); Sanchari Das (George Mason University); Melanie Gruben (Munster Technological University); Moya Cronin (Munster Technological University); Miriam Curtin (Munster Technological University); Aoife Long (University College Cork); Hazel Murray (Munster Technological University)

Older adults in Ireland face critical cybersecurity challenges that impact their digital security posture, risk perception, and online engagement. Through a qualitative analysis of semi-structured interviews with 77 participants aged 60 and above, we identify key barriers, including cybersecurity knowledge gaps, usability challenges with authentication mechanisms, and a lack of accessible security support. Participants report concerns about online fraud, privacy risks, and the overwhelming nature of security advice, which often results in digital disengagement. By categorizing participants by technology usage (low, medium, and high), we uncover distinct cybersecurity behaviors. Greater technology use is often accompanied by increased awareness of cyber threats; however, this does not necessarily translate into secure practices, as many continue to struggle with two-factor authentication (2FA) and secure password management. Medium-usage participants exhibit high anxiety over scams and rely on family for security support, while low-usage individuals, though less exposed to threats, remain at risk due to a lack of foundational digital skills and reliance on avoidance-based security strategies. Our findings highlight the urgent need for targeted cybersecurity education, simplified authentication systems, and age-appropriate security interventions to enhance digital confidence, cyber resilience, and online inclusion among older adults, ensuring their secure and equitable participation in an increasingly digital society.

SoK: AI Support for Analyst Situation Awareness in Security Operation Centres Navodika Madushan Karunasingha Karunasingha Gedara (UNSW); Mohan Baruwal Chhetri (CSIRO); Surya Nepal (CSIRO); Cecile Paris (CSIRO); Salil S. Kanhere (UNSW)

"Cyber situation awareness (CSA) is critical for effective decision-making in Security Operations Centres (SOCs). However, existing research lacks a structured understanding of how AI systems support analyst CSA across different decision-making modes. In this paper, we present a systematisation of knowledge (SoK) that examines how AI-enabled tools contribute to the three levels of CSA: perception, comprehension and projection, across three decision-making modes: automated, augmented, and collaborative. We introduce a three-dimensional framework to assess over 70 selected studies, capturing the landscape of AI support across these CSA levels, decision-making modes, and the diversity of SOC tasks. Our analysis reveals a dominant focus on perception-level support in automated settings, limited attention to higher-level CSA in augmented mode, and no work addressing collaborative decision-making. Based on these findings, we outline research directions for designing AI systems that comprehensively enhance analyst CSA in SOCs."

HTTPS-Only Modes: Improving warnings in Tor Browser and beyond Killian Davitt (King's College London); Steven Murdoch (UCL)

HTTPS-Only modes are new browser security features that present users with a warning page before proceeding to non-HTTPS websites. Despite these modes being available in most major browsers, little to no work has been done researching what these modes should be aiming to do, or how users react to these warnings. SSL Stripping attacks, which these modes mitigate are common in the Tor network. As a result, we studied these warnings in the context of Tor Browser. We deployed a survey of Tor experts and gathered their thoughts on these browser modes in general, as well as gaining specific feedback on 3 current warning pages. We report a number of potential improvements to HTTPS-Only mode warning pages. Future warning pages should mention specific types of attack that could occur. Warnings should also include discussion about the integrity of web content, not just confidentiality. The context of the website being visited is also not mentioned by current warning pages. Participants also highlighted that the warning as it appears in Tor Browser should feature some Tor specific advice. Finally, prompted by some participant responses, we engage in a discussion about whether the warnings should aim to deter non-HTTPS connections fully, or seek to empower users to make a determination themselves.


14:30 - 15:00 | Coffee break

15:00 - 16:30 | Technical Paper Session 6: IoT Security and Privacy (20 minutes per paper + 10 minutes for questions)

Session Chair: Charles Weir
IoT labels’ impact on security and privacy concerns Yi-Shyuan Chiang (University of Illinois at Urbana-Champaign); Pardis Emami-Naeini (Duke University); Camille Cobb (University of Illinois at Urbana-Champaign)

Countries are launching Internet of Things (IoT) cybersecurity label programs to help consumers make more informed purchasing decisions and motivate manufacturers to create more secure IoT products. In such programs, products that meet program requirements can be sold with a special label to signal cybersecurity compliance. Currently, there is no evidence-based guidance or standardized implementation of labels or label-awarding program policies. We conducted an online survey to understand the impact of IoT labels and choices such as validation requirements (i.e., whether manufacturers need to self-attest or seek third-party audits to validate their products' compliance) on participants' security and privacy concerns. Our research provides empirical evidence to guide policy choices by effective label-awarding programs. We find that the presence of IoT labels alleviated both security and privacy concerns; however, we did not find differences between other program implementation choices. We provide recommendations for IoT cybersecurity label programs and discuss the potential societal impacts of label programs.

The Illusion of Control in Smart Homes: How IoT App Privacy Statement and Device Interaction Complexity Impact User Security Practices Leena Marghalani (KFUPM); Walid Aljoby (KFUPM); Ahmad Asadullah (Loughborough Business School)

"While security experts have extensively identified risks in smart homes with interconnected Internet of Things (IoT) devices, little work has examined user-perceived control over security practices. Particularly how the complexity of IoT device interactions and the content of device app privacy statements influence user perceptions. To fill this gap, we developed a threat and mental model grounded in the Illusion of Control (IoC) theory and empirically evaluated how IoT privacy statements and interaction complexity shape users' security practices. We surveyed 102 participants to measure how security knowledge, security attitude, perceived controllability, understanding of privacy statement, and perceived IoT interaction complexity influence security practices. Our findings confirm three key insights. First, overconfidence in security management weakens the adoption of secure practices. Second, users who understand and trust the privacy statement of IoT applications are more likely to engage in secure practices. Third, the results suggest that users who perceive IoT interoperability security as too complex are less likely to adopt protective measures. Based on our findings, we provide recommendations to IoT security experts to develop more effective IoT security and privacy measures."

XSec Companions: Exploring the Design of Cybersecurity Companions Sarah Delgado Rodriguez (University of the Bundeswehr Munich); Sarah Elisabeth Winterberg (LMU Munich); Franziska Bumiller (LMU Munich, University of the Bundeswehr Munich); Felix Dietz (University of the Bundeswehr Munich); Florian Alt (LMU Munich); Mariam Hassib (RWTH Aachen)

Cyberattacks frequently target humans, for example, by using social engineering to trick them into revealing sensitive information or by exploiting insecure behavior. Traditional security awareness training and guidelines have proven insufficient to address this issue, as they are not tailored to individual usage conditions and are disconnected from real-world situations. Additionally, these trainings and guidelines do not adapt to users’ changing needs or evolving knowledge. Contrary to traditional training, personal cybersecurity companions, whether digital or tangible, provide a new, adaptive, and integrated way to assist users in understanding security concerns and behaving securely in cyberspace. In this paper, we explore the space of cybersecurity companions through ideation workshops (N = 12), particularly focused on privacy in IoT and phishing. Through the analyses of the end-user visions built during our workshops, we conceptualize and present the XSec Companions design framework. Our work can guide future researchers in developing both digital and tangible xSec Companions whilst providing an overview of the opportunities and challenges in this space.


16:15 - 16:45 | Closing remarks and Farewell





Registration

Registration is mandatory for participation in EuroUSEC. Please register using the following link: Register Now

At least one author for each accepted paper must register by August 26th, 2025. No onsite registration is available!

The prices for the registration are as follows.

Author 400 GBP
Other Participants 400 GBP


NOTE: Each paper must have at least one registration under the "Author" option. Please note that authors are expected to present their papers in person at the conference. The online option is reserved for those facing legitimate travel difficulties; however, merely needing a visa to travel to the UK is not considered a valid reason. We strongly recommend that authors requiring a visa to travel to the UK apply as early as possible. See the Visa/ETA section for further information or contact the Conference Chairs.

Event Logistics

EuroUSEC 2025 will be held on September 10 and 11 in Manchester, UK.

Event location: Digital Security Hub (DiSH) 47 Lloyd Street, Manchester M2 5LE, Floor G, Heron House. The building is known as "Manchester Registry Office".

Travelling to Manchester: Manchester has many transport links, including rail, coach, and car. Situated at the heart of the M60 ring road, it is connected to motorways to the north, south, east, and west.

Travelling within Manchester: Manchester's main methods of public transport are bus, tram, and train, with a large number of dedicated cycle lanes throughout the city centre. This includes a dedicated free bus route around the city. The transport links are detailed here. Car parking is hard to find in the city centre!

Accommodation: We will not arrange any hotel reservations for the attendees.

The conference takes place in the heart of Manchester's city centre. Information on where to stay can be found here. Reservations for nearby stays can also be made through Airbnb or Booking.com.

Visa/ETA

Please note that anyone traveling to the UK will need an Electronic Travel Authorisation (ETA). These are being introduced in several stages for different nationalities, over the next few months. This link is the best overview, where you can find official information and apply.

Key deadlines for groups of visitors (by country) are detailed at this link.

Visas and Certificates of Attendance:

We may provide visa support letters to attendees as well as authors with accepted papers; however, we do not issue formal invitation letters for visas.

Please keep in mind that anyone requesting a visa support letter must pay the registration fee first; the letter can only be sent after payment has been made.

Certificates of attendance may be requested in the registration form, and will be issued at the end of the conference.

Social Contract

To make EuroUSEC as effective as possible for everyone, we ask that all participants commit to our social contract:

  1. Engage and actively participate (to the degree you feel comfortable) with each talk.
  2. Be sure your feedback is constructive, forward-looking, and meaningful.
  3. The usable security & privacy community has earned a reputation for being inclusive and welcoming to newcomers; please keep it that way.
  4. We encourage attendees to aim to meet at least three new people from this year's EuroUSEC. The meal breaks and the participatory activity are the perfect opportunities for this.
  5. We strongly encourage tweeting under the hashtag "#EuroUSEC2025" and otherwise spreading the word about work you find exciting at EuroUSEC.
  6. EuroUSEC 2025 follows the USABLE events Code of Conduct.