Dates and Location: September 10 & 11, 2025, Manchester, United Kingdom.
Experience EuroUSEC 2025 in Manchester – the vibrant northern England city that lives and breathes football! Join leading minds in usable privacy and security as we unite to explore, learn, and drive innovation in cybersecurity and privacy. Be part of the future – where research meets real-world impact!
The European Symposium on Usable Security (EuroUSEC) is a forum for research and discussion on human factors in security and privacy. EuroUSEC solicits previously unpublished work offering novel research contributions in any aspect of human-centered security and privacy. EuroUSEC aims to bring together researchers, practitioners, and students from diverse backgrounds, including computer science, engineering, psychology, the social sciences, and economics, to discuss issues related to human-computer interaction, security, and privacy.
EuroUSEC is part of the USEC family of events: https://www.usablesecurity.net/USEC/index.php
We want EuroUSEC to be a community-driven event and would love to hear any questions, comments, or concerns you might have regarding changes from last year—please just email the programme chairs.
We thank the Research Institute for Sociotechnical Cyber Security (RISCS) for sponsoring us. Their support provides free access to the conference for a limited number of UK PhD students and helps cover other expenses to keep registration costs to a minimum.
We also thank Lancaster University for providing on-the-ground administrative support.
Talk Abstract: Despite decades of effort, organizations continue to struggle with the human side of cybersecurity. Security policies are written, awareness campaigns are launched, and training is delivered; yet, employees still make mistakes, disregard guidance, or act contrary to expectations. These behaviors are not merely signs of negligence but reflections of the complexity of human nature. Where cybersecurity frameworks often lean heavily on rules and compliance, they tend to lack depth in addressing the emotional, psychological, and even existential dimensions of human behavior.
This presentation explores a broader, more empathetic vision for cybersecurity by drawing upon insights not only from world religions but also from research on fear, shame, regret, and grace—core emotional and moral experiences that influence decision-making. Religions, which have long grappled with human fallibility, offer practices and paradigms for encouraging right action while making space for forgiveness, growth, and redemption. Similarly, emotional states like fear, shame, regret, forgiveness, and grace can either hinder or help secure behavior depending on how they are engaged. By integrating perspectives from organized religion, psychology, and behavioral science, we propose a more human-centered cybersecurity vision—one that recognizes the limits of punitive models and instead explores what it means to care for users who inevitably make mistakes. This approach is not merely for the sake of demonstrating compassion for those who make mistakes, but equally for the organization, so that it may be more secure and resilient from a cybersecurity perspective.
We present findings from our qualitative research with religious leaders, along with feedback from cybersecurity professionals on this expanded model, as well as insight gleaned from our other studies on the use of emotion to engender behavioral change. This work aims to spark a new conversation in the field—one that reimagines “best practices” not merely in terms of efficiency or compliance, but through a lens of empathy, accountability, and the messy reality of being human.
Biography: Marc J. Dupuis, Ph.D., is an Associate Professor within the Division of Computing and Software Systems at the University of Washington Bothell, where he also serves as the Graduate Program Coordinator. Dr. Dupuis earned a Ph.D. in Information Science at the University of Washington with an emphasis on cybersecurity. Prior to this, he earned an M.S. in Information Science and a Master of Public Administration (MPA) from the University of Washington, as well as an M.A. in Political Science from Western Washington University.
His research area is cybersecurity with an emphasis on the human factors of cybersecurity. The primary focus of his research involves the examination of psychological traits and their relationship to the cybersecurity and privacy behavior of individuals. This has included an examination of antecedents and related behaviors, as well as usable security and privacy. His goal is to both understand behavior as it relates to cybersecurity and privacy, and discover what may be done to improve that behavior.
More recently, Dr. Dupuis and his collaborators have been exploring the use of fear appeals, shame, regret, forgiveness, and grace in cybersecurity, including issues related to their efficacy and the ethics of using such techniques to engender behavioral change.
Talk Abstract: Cyber-attacks pose a significant threat to organisations and individuals, with ransomware alone devastating countless lives across the world. This keynote talk seeks to move beyond focusing on the cyber-attacks themselves to explore the depth and breadth of harms experienced by victims of these crimes. Drawing on insights from extensive interviews with victims, incident responders, negotiators, law enforcement, and government officials, we uncover a range of severe consequences that extend beyond monetary loss. This is particularly true in the case of ransomware attacks.
As we will discuss, organisations face significant risks of business interruption and data exposure, which can lead to substantial financial penalties, reputational damage, and potential legal repercussions. For employees – and specifically thinking about the human factor – the impact can be equally devastating. The psychological toll of a ransomware attack, for instance, is profound, leading to increased stress, anxiety, and even post-traumatic stress disorder. Furthermore, the physical consequences, such as disrupted work routines and extended work hours, can exacerbate these mental health challenges.
This talk also explores the factors that can either mitigate or exacerbate these harms, including organisational preparedness, leadership culture, and effective crisis communication. By understanding these dynamics, organisations can develop robust strategies to minimise the impact of ransomware attacks and support their employees during and after such incidents. This presentation aims to shift the narrative and research surrounding cyber-attacks, highlighting the human cost of these attacks. By recognising the multifaceted nature of cyber-attack harms, we can advocate for more comprehensive and effective response strategies, ultimately protecting organisations and their employees from this growing threat.
Biography: Dr Jason R.C. Nurse is a Reader in Cyber Security in the Institute of Cyber Security for Society and the School of Computing at the University of Kent. He also holds the roles of Associate Fellow at The Royal United Services Institute (RUSI), Visiting Fellow in Defence and Security at Cranfield University, and Research Member of Wolfson College, University of Oxford.
His research interests include human aspects of cyber security, cyberpsychology, cyber harms, security culture, ransomware, cyber insurance, and corporate communications and cyber security.
Dr Nurse has published over 120 peer-reviewed articles in prestigious security journals, and his research has been featured in national and international media including the BBC, Associated Press, The Wall Street Journal, The Washington Post, Newsweek, Wired, The Telegraph, and The Independent. Prior to joining Kent in 2018, Dr Nurse was a Senior Researcher in Cyber Security at the University of Oxford and before that, a Research Fellow in Psychology at the University of Warwick.
We invite you to submit a paper and join us in Manchester, UK at EuroUSEC 2025.
We welcome submissions containing unpublished original work describing research, visions, or experiences in all areas of usable security and privacy. We also welcome systematization of knowledge (SoK) papers with a clear connection to usable security and privacy. Well-executed replication studies are also welcome. We appreciate a variety and mixture of research methods, including both qualitative and quantitative approaches.
Topics include, but are not limited to:
We aim to provide a venue for researchers at all stages of their careers and at all stages of their projects.
All submissions will undergo a double-blind review by at least two reviewers. Submissions will receive one of three decisions: Accept, Shepherding, or Reject. Papers receiving a shepherding decision will be assigned a shepherd, and a revised version must be prepared and approved by the shepherd before the paper is accepted. During the shepherding phase, the identities of the authors and one shepherding reviewer will be disclosed for communication purposes. Shepherding will take place outside the conference management system; the authors are responsible for reaching out to the shepherd.
Paper registration deadline (mandatory): | Monday, 13th May, 2025 |
Paper submission deadline: | Friday, 16th May, 2025 (Hard Deadline) |
Author's notification: | Monday, 23rd June, 2025 |
Revision period (For shepherded papers): | Tuesday, 24th June to Monday 7th July, 2025 |
Author's notification (For shepherded papers): | Monday, 14th July, 2025 |
Camera-ready submission for all papers: | Monday, 4th August, 2025 |
Upload your submission via this link:
Disclaimer: The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.
Simultaneous submission of the same paper to another venue with proceedings or a journal is prohibited. Authors may post pre-prints; please consult the guidelines for further information. Serious infringements of these policies may result in the paper's rejection, and the authors may be put on a warning list, even if we only become aware of the violation after the paper has been accepted. If you have questions about this policy, contact the EuroUSEC chairs.
The conference has an agreement with CPS to handle production of conference proceedings content. This conference is not sponsored by the IEEE. Accepted papers will be submitted for possible inclusion in IEEE Xplore. All conference content submitted to IEEE Xplore is subject to review based on meeting IEEE scope and quality requirements. If the conference is found not to meet these requirements, content may not appear in IEEE Xplore.
At least one author of each accepted paper must register and attend to present the paper IN PERSON. We will only permit virtual presentations in exceptional circumstances.
Contact the EuroUSEC chairs if you have any questions.
The chairs can be contacted at pc.chairs.eurousec
All times in the program are given in British Summer Time (UTC+01:00). You can use this link to convert the times to any time zone you wish.
The preliminary program is available below.
The advances of generative AI, and in particular large language models (LLMs), have resulted in the proliferation of AI chatbots designed for a variety of functions. One example of such chatbots is the so-called AI companion app, which allows creating an anthropomorphised character one can interact with. Indeed, AI companions are becoming an increasingly common part of people's daily lives, resulting in increased risks of adverse privacy and safety consequences. In this work we investigate the experiences of users of the Replika chatbot, an AI companion app that is advertised as "the AI companion who cares". We analyse 111 Reddit posts of Replika users, focusing on the data shared with the app as well as the harms users experience from interacting with it. Our analysis shows that Replika is commonly seen as a simulation of human relationships, which results in users being attached to their chatbot in a similar way they would be attached to a romantic partner or a close friend. Such attachment leads to significant amounts of sensitive data being shared with the Replika app, such as details about one's personal life, mental health issues, or sexual preferences. On the other hand, unexpected changes in Replika's workings, e.g., due to new restrictions or bugs introduced by software updates, elicit strong reactions among its users, who report harms akin to feeling betrayed or abandoned by a real-life companion. We conclude that relationships with AI companions, and possible ways to mitigate these privacy and safety risks, require further investigation.
Cognitive dissonance occurs when people reject new evidence that contradicts existing knowledge or behaviours. This is relevant to cybersecurity, a domain that often needs to persuade people to adopt new practices or change the way they currently behave. We carried out a scoping review of the literature to determine the extent to which cybersecurity researchers have engaged with this concept. We find that although many papers mention the phenomenon, very few report actual studies of it in cybersecurity. We conclude by suggesting a number of directions for research.
When organizations fall victim to a cyberattack, it can have psychological consequences for employees, the IT department, and the relationship between them. Some prior studies have investigated these effects; still, the areas of trust and security-related stress remain largely unexplored, even though these two variables can have serious effects on IT security behavior and employee well-being. We conducted a study in two research organizations in a German-speaking country, one of which had been breached by a serious cyberattack. We used a questionnaire to gather data about employees' fear of cyberattacks, trust in the IT department, and security-related stress, among others (n = 149), and conducted semi-structured interviews with n = 7 IT employees of both organizations, asking about their perceived employee trust, security-related stress, and experience of the attack. We find that participants of the breached organization report less trust in the IT department, worse communication with the IT department, and more security-related stress. We found positive correlations between trust in the IT department and the assessment of the communication of the IT department, and negative correlations between the two aforementioned variables and security-related stress. Results from the interviews show that IT employees report a high level of security-related stress, attach great importance to communication to improve employee trust, and describe a hardening of rules and an increase in communication after the attack. We further discuss the impact of our findings and conclude that a cyberattack can harbor the risk of an increase in security-related stress for all employees because of stricter security requirements, and that this can decrease employee trust, even though the opposite is often intended.
Older adults face unique challenges with password-based authentication, often due to age-related memory decline, leaving their digital accounts vulnerable to compromise. Passkeys—a novel authentication method that replaces passwords with device-based tokens and biometrics—may offer an accessible alternative for this demographic. We conducted three qualitative workshop sessions (n=23) with community-dwelling older adults, discussing their current authentication practices, demonstrating passkey functionality, and eliciting their views on its usability and adoption barriers. Participants responded positively to the concept of passkeys, appreciating the elimination of memory burden, yet concerns emerged regarding device compatibility, transferring passkeys across devices, and potential risks if others accessed their hardware. Importantly, we found that older adults are generally reluctant to experiment with new authentication mechanisms on their own, emphasising the need for careful introduction and support to enable adoption. Our findings identify both opportunities and critical challenges for designing passkey systems that are inclusive of older users.
Despite recommendations from security authorities, such as the US NIST, against the use of security questions for online authentication, these methods are still used for login and account recovery processes. Although research on security questions has a long history, key gaps remain, particularly regarding user perceptions and the requirements used by websites for selecting and answering security questions. In this paper, we address these gaps through a two-part study: (i) a user survey (N = 292) capturing insights from a diverse US sample, and (ii) an analysis of an extensive set of 26 security requirements across 73 websites, totaling 1913 security questions. Our findings reveal notable user misconceptions, such as users believing that websites already possess correct answers to personal security questions. Additionally, we identify widespread insecure practices, such as accepting single characters and allowing identical answers across multiple security questions. By addressing both user perceptions and website security requirements, we provide a comprehensive understanding of weaknesses in current security question practices and contribute to the ongoing discourse on strengthening authentication methods.
The rapid growth of Extended Reality (XR) technologies has intensified the need for authentication mechanisms that are secure, usable, and responsive to the contextual and organizational complexities of immersive environments. Although existing research often focuses on technical performance and end-user usability, it rarely accounts for the broader ecosystem in which XR authentication mechanisms are designed, deployed, and regulated. This paper presents a first systematic exploration of how XR authentication research engages with the key stakeholder types whose requirements and constraints shape the viability and adoption of user authentication mechanisms. We adopt a mixed-method approach comprising expert interviews to identify stakeholder types, a systematic mapping of 59 publications to assess their representation in the literature, and the development of a formal stakeholder model to capture their roles, interactions, and relationships. This offers a structured foundation for aligning user authentication design with multi-stakeholder requirements and objectives, contributing to the development of more inclusive, deployable, and context-aware XR security solutions.
Spear phishing messages are highly tailored attacks designed to obtain confidential information or funds from individuals, yet systematically studying these attacks in non-organisational settings is challenging. This study conducted a realistic simulated spear-phishing campaign aimed at the general public. Among 20 younger adults (aged 18–25) and 21 older adults (aged 65 and above), 65% of younger participants and 90% of older participants entered their personal information on a ‘fake’ website after receiving the spear-phishing email. While some participants recognised signs of a potential scam, they dismissed these warnings due to their trust in the sender and the belief that someone they knew could not be spoofed by a malicious actor. These findings highlight how personal trust in an individual, rather than a recognised organisation, can override suspicion. We discuss the implications of our results and the ethical considerations of gathering such in-the-wild data using deceptive methods.
Reducing the spread of misinformation remains a complex problem, especially on encrypted social messaging platforms. AI-based fact-checking systems offer a promising alternative to manual verification, enabling faster and more scalable responses. However, how these systems communicate their findings to users is still an open design problem. Current approaches, such as binary warning labels, often fail to capture more subtle or partially misleading content. At the same time, users' limited attention and the overwhelming volume of online information constrain how much and what kind of verification feedback can be delivered. This study explores how two key dimensions of feedback, source (content-based vs. context-based) and granularity (binary vs. fine-grained assessments), affect users' trust in the system, perceptions of usefulness, and judgments of content accuracy. In a pre-registered online experiment (n = 537), we tested how these design factors influence user responses. We found that credibility heuristics drawing on both content and context sources are valuable to users for making decisions, and that short heuristic-based explanations are useful to them. In addition, we found that acknowledgement of the system's certainty about the verdict also helps users to tailor their opinions about the information. Our findings suggest that context-acknowledged short feedback based on heuristics may be a promising design direction to support users in assessing misinformation, even on platforms with limited content visibility, such as encrypted messaging apps.
The rise of fake news and populism online increases the demand for tools to counter misinformation and better inform users about the credibility of posts in social media. Effective information with high user acceptance is anticipated to increase competence in assessing credibility and assigning trust to posts encountered online. Misjudgment due to the increasing difficulty of evaluating the credibility of news sources can lead to societal risks, even to the point of threatening democratic societies, given citizens' increasingly incidental information acquisition on the internet. In this paper, we investigate the utility and acceptance of an approach to enrich news article previews with context information. It visualizes the information quality of linked news articles with a rating based on automatically extracted background information. Based on results from an online experiment with 455 participants, we obtained indications that users were better able to judge the credibility of news articles in OSNs with higher certainty. Additional feedback confirmed that transparency and comprehensibility of the rating were fundamental to its acceptance.
"In our interconnected world, good IT security practices are necessary to avoid vulnerabilities and data breaches. Providing security contacts, e.g., via Coordinated Vulnerability Disclosure (CVD) programs or security.txt files, is an important practice for businesses to facilitate vulnerability reporting by external parties.Within a longitudinal study, we analyzed Germany's DAX 40 companies' adoption, challenges and experiences with CVD programs. In addition to monitoring publicly available information about their CVD programs, we sent out questionnaires via email and postal mail in 2023 and 2025, and we received answers from 20% of the companies. The adoption rates show a significant increase from 50% (2023) to over 90% (2025), with ten new CVD programs and 25 new security.txt files being available.The survey answers reveal that, for example, legal obligations (e.g., NIS2 and CRA) drive the adoption of CVD practices, but lack of (human) resources and varying report quality are considered drawbacks. As the first study to survey German DAX companies on their CVD practices, our results can help foster the adoption and understanding of security programs by SMEs and other companies, or provide insights for policy makers in practical challenges and experiences from the industry.
"While the critical role of employees in cybersecurity has long been recognized, there is a lack of a comprehensive and holistic understanding regarding the key points of contact of employees with cybersecurity, beyond mere preventive actions. To address this gap, we conducted an exploration of key points of contact between employees and cybersecurity using semistructured interviews (n = 20) and identified employee aspects within the functions of the NIST Cybersecurity Framework (NIST-CSF). We demonstrate that particular perceptions, emotions, and social dynamics are relevant to employees’ perspectives on cybersecurity in comparison to the rather technical inclusion of the employees within the NIST-CSF. By aligning the employees’ perspectives with the functions/subcategories of the NIST-CSF, we take a step toward an employee-centric framework for cybersecurity that highlights these essential key points of contact between employees and cybersecurity and points out gaps for the integration of employees in organizational cybersecurity. We conclude the paper by giving recommendations for practitioners and future research.
Small-to-medium enterprises (SMEs) remain disproportionately vulnerable to cyber incidents due to constrained resources and underdeveloped operational practices. While many maintain incident response plans (IRPs) to meet regulatory requirements, these plans are often untested and poorly integrated into operational workflows, resulting in delayed containment, unclear escalation, and inconsistent response actions. This disconnect between documentation and execution represents a critical readiness gap that can significantly increase the impact and duration of cyber events. To address this challenge, this paper introduces the Incident Response Readiness Score (IRRS), a scenario-based assessment framework designed to empirically evaluate an organisation's incident response capability under simulated conditions. The IRRS applies a structured scoring rubric calibrated through a Scenario Risk Index, enabling proportional evaluation of performance across diverse incident types. By transforming qualitative incident response actions into a reproducible and risk-weighted metric, the IRRS offers a practical and scalable means of assessing and improving cybersecurity readiness for different types of organisations.
Older adults in Ireland face critical cybersecurity challenges that impact their digital security posture, risk perception, and online engagement. Through a qualitative analysis of semi-structured interviews with 77 participants aged 60 and above, we identify key barriers, including cybersecurity knowledge gaps, usability challenges with authentication mechanisms, and a lack of accessible security support. Participants report concerns about online fraud, privacy risks, and the overwhelming nature of security advice, which often results in digital disengagement. By categorizing participants by technology usage (low, medium, and high), we uncover distinct cybersecurity behaviors. Greater technology use is often accompanied by increased awareness of cyber threats; however, this does not necessarily translate into secure practices, as many continue to struggle with two-factor authentication (2FA) and secure password management. Medium-usage participants exhibit high anxiety over scams and rely on family for security support, while low-usage individuals, though less exposed to threats, remain at risk due to a lack of foundational digital skills and reliance on avoidance-based security strategies. Our findings highlight the urgent need for targeted cybersecurity education, simplified authentication systems, and age-appropriate security interventions to enhance digital confidence, cyber resilience, and online inclusion among older adults, ensuring their secure and equitable participation in an increasingly digital society.
"Cyber situation awareness (CSA) is critical for effective decision-making in Security Operations Centres (SOCs). However, existing research lacks a structured understanding of how AI systems support analyst CSA across different decision-making modes. In this paper, we present a systematisation of knowledge (SoK) that examines how AI-enabled tools contribute to the three levels of CSA: perception, comprehension and projection, across three decision-making modes: automated, augmented, and collaborative. We introduce a three-dimensional framework to assess over 70 selected studies, capturing the landscape of AI support across these CSA levels, decision-making modes, and the diversity of SOC tasks. Our analysis reveals a dominant focus on perception-level support in automated settings, limited attention to higher-level CSA in augmented mode, and no work addressing collaborative decision-making. Based on these findings, we outline research directions for designing AI systems that comprehensively enhance analyst CSA in SOCs."
HTTPS-Only modes are new browser security features that present users with a warning page before proceeding to non-HTTPS websites. Despite these modes being available in most major browsers, little to no work has been done researching what these modes should be aiming to do, or how users react to these warnings. SSL stripping attacks, which these modes mitigate, are common in the Tor network. As a result, we studied these warnings in the context of Tor Browser. We deployed a survey of Tor experts and gathered their thoughts on these browser modes in general, as well as specific feedback on three current warning pages. We report a number of potential improvements to HTTPS-Only mode warning pages. Future warning pages should mention specific types of attack that could occur. Warnings should also include discussion about the integrity of web content, not just confidentiality. The context of the website being visited is also not mentioned by current warning pages. Participants also highlighted that the warning as it appears in Tor Browser should feature some Tor-specific advice. Finally, prompted by some participant responses, we engage in a discussion about whether the warnings should aim to deter non-HTTPS connections fully, or seek to empower users to make a determination themselves.
Countries are launching Internet of Things (IoT) cybersecurity label programs to help consumers make more informed purchasing decisions and motivate manufacturers to create more secure IoT products. In such programs, products that meet program requirements can be sold with a special label to signal cybersecurity compliance. Currently, there is no evidence-based guidance or standardized implementation of labels or label-awarding program policies. We conducted an online survey to understand the impact of IoT labels and of program choices such as validation requirements (i.e., whether manufacturers self-attest or seek third-party audits to validate their products' compliance) on participants' security and privacy concerns. Our research provides empirical evidence to guide policy choices for effective label-awarding programs. We find that the presence of IoT labels alleviated both security and privacy concerns; however, we did not find differences between other program implementation choices. We provide recommendations for IoT cybersecurity label programs and discuss the potential societal impacts of label programs.
"While security experts have extensively identified risks in smart homes with interconnected Internet of Things (IoT) devices, little work has examined user-perceived control over security practices. Particularly how the complexity of IoT device interactions and the content of device app privacy statements influence user perceptions. To fill this gap, we developed a threat and mental model grounded in the Illusion of Control (IoC) theory and empirically evaluated how IoT privacy statements and interaction complexity shape users' security practices. We surveyed 102 participants to measure how security knowledge, security attitude, perceived controllability, understanding of privacy statement, and perceived IoT interaction complexity influence security practices. Our findings confirm three key insights. First, overconfidence in security management weakens the adoption of secure practices. Second, users who understand and trust the privacy statement of IoT applications are more likely to engage in secure practices. Third, the results suggest that users who perceive IoT interoperability security as too complex are less likely to adopt protective measures. Based on our findings, we provide recommendations to IoT security experts to develop more effective IoT security and privacy measures."
Cyberattacks frequently target humans, for example, by using social engineering to trick them into revealing sensitive information or by exploiting insecure behavior. Traditional security awareness training and guidelines have proven insufficient to address this issue, as they are not tailored to individual usage conditions and are disconnected from real-world situations. Additionally, these trainings and guidelines do not adapt to users’ changing needs or evolving knowledge. Contrary to traditional training, personal cybersecurity companions, whether digital or tangible, provide a new, adaptive, and integrated way to assist users in understanding security concerns and behaving securely in cyberspace. In this paper, we explore the space of cybersecurity companions through ideation workshops (N = 12), particularly focused on privacy in IoT and phishing. Through analysis of the end-user visions built during our workshops, we conceptualize and present the XSec Companions design framework. Our work can guide future researchers in developing both digital and tangible XSec Companions whilst providing an overview of the opportunities and challenges in this space.
Registration is mandatory for participation in EuroUSEC. Please register using the following link: Register Now
At least one author of each accepted paper has to register by August 26th, 2025. No onsite registration is available!
The registration fees are as follows.
Registration category | Fee |
---|---|
Author | 400 GBP |
Other Participants | 400 GBP |
NOTE: Each paper must have at least one registration under the "Author" option. Please note that authors are expected to present their papers in person at the conference. The online option is reserved for those facing legitimate travel difficulties; however, the mere need for a visa to travel to the UK is not considered a valid reason. We strongly recommend that authors requiring a visa to travel to the UK apply as early as possible. See the Visa/ETA section for further information or contact the Conference Chairs.
EuroUSEC 2025 will be held on September 10 and 11 in Manchester, UK.
Event location: Digital Security Hub (DiSH) 47 Lloyd Street, Manchester M2 5LE, Floor G, Heron House. The building is known as "Manchester Registry Office".
Travelling to Manchester: Manchester has many transport links, including rail, coach, and car. Situated at the heart of the M60 ring road, it is connected to motorways to the north, south, east, and west.
Travelling within Manchester: Manchester's main methods of public transport are bus, tram, and train, with a large number of dedicated cycle lanes throughout the city centre. This includes a specific free bus route around the city. The transport links are detailed here. Car parking is hard to find in the city centre!
Accommodation: We will not arrange any hotel reservations for the attendees.
The conference takes place in the heart of Manchester's city centre. Information on where to stay can be found here. Reservations for nearby stays can also be made through Airbnb or Booking.com.
Please note that anyone travelling to the UK will need an Electronic Travel Authorisation (ETA). These are being introduced in several stages for different nationalities over the coming months. This link provides the best overview, where you can find official information and apply.
Key deadlines for groups of visitors (by country) are detailed at this link.
Visas and Certificates of Attendance:
We may provide visa support letters to attendees as well as authors with accepted papers; however, we do not issue formal invitation letters for visas.
Please keep in mind that anyone requesting a visa support letter must pay the registration fee first, and the letter can only be sent after the payment has been made.
Certificates of attendance may be requested in the registration form, and will be issued at the end of the conference.
To make EuroUSEC as effective as possible for everyone, we ask that all participants commit to our social contract: