NSS 2023: 17th International Conference on Network and System Security
SocialSec 2023: 9th International Symposium on Security and Privacy in Social Networks and Big Data
University of Kent, Canterbury, UK
August 14-16, 2023
Julia Hesse, IBM Research Zurich, Switzerland
Despite numerous predictions by security experts, passwords remain the most widely used authentication method on the internet in 2023. The average number of password-protected accounts of US and European citizens is well beyond 50. Passwords are inherently insecure in use, as we humans tend to reuse the few strong passwords we manage to remember. Protecting passwords from breaches is hence vital to prevent attackers from taking over our accounts, our online resources, and our private data. Still, countless password breaches occur every day, and despite our efforts to choose secure passwords, we occasionally find them on https://haveibeenpwned.com/Passwords. How can that happen? In this talk, the speaker will explain how password verification is deployed in practice, and why even our most secure passwords are prone to breaches. She will then show simple and efficient solutions from the field of cryptography that prevent password breaches, and discuss their quantum-safety and adoption in real-world systems.
Julia Hesse is a cryptographer at IBM Research Zurich, working on the provable security of real-world protocols. She is particularly interested in securing human authentication systems based on biometrics or passwords. She is the recipient of a Swiss National Science Foundation Ambizione grant, which in particular supports her work on further developing cryptographic tools for secure biometric authentication. Before joining IBM, she was a postdoctoral researcher at TU Darmstadt and ENS Paris. She received her PhD from Karlsruhe Institute of Technology in 2016 under the supervision of Dennis Hofheinz, working on the relevance of multilinear maps for building advanced cryptographic primitives. Julia regularly serves on the program committees of both cryptography and security conferences, chaired the Security Standardization Research conference in 2023, and will chair Eurocrypt 2024. She has been a member of the Internet Research Task Force (IRTF) CFRG panel since 2020. Julia lives in Switzerland with her husband and two kids. In her free time she enjoys playing tennis or the piano, meeting friends, and going on hikes. Secret skill: she is a cat *and* dog person.
Nishanth Sastry, University of Surrey, UK
The Web and social media have become a morass of online harms such as hate and partisan speech. Almost every website we visit subjects us to ubiquitous tracking by a highly developed ad-driven commercial ecosystem. This talk will present our recent efforts to conduct large-scale, multi-country measurement studies that show how these negative aspects of our online lives are affecting web users in major countries around the world such as the UK, the USA, India and China. We will discuss whether the movement to “redecentralise” the web and social media can address these concerns, identify potential limitations, and suggest possible future directions.
Nishanth Sastry is Professor of Computer Science and Director of Research at the Department of Computer Science, University of Surrey, UK. His research spans a number of topics relating to social media, content delivery and networking, and online safety and privacy. He is Joint Head of the Distributed and Networked Systems Group and co-leads the Surrey Security Network. He is also a Surrey AI Fellow and a Visiting Researcher at the Alan Turing Institute, where he co-leads the Social Data Science Special Interest Group. Nishanth holds a Bachelor's degree (with distinction) from R.V. College of Engineering, Bangalore University, a Master's degree from the University of Texas at Austin, and a PhD from the University of Cambridge, all in Computer Science. Previously, he spent over six years in industry (Cisco Systems, India, and IBM Software Group, USA) and industrial research labs (IBM TJ Watson Research Center). He has also spent time at the Massachusetts Institute of Technology Computer Science and AI Laboratory. His honours include a Best Paper Award at SIGCOMM Mobile Edge Computing in 2017, a Best Paper Honorable Mention at WWW 2018, a Best Student Paper Award at the Computer Society of India Annual Convention, a Yunus Innovation Challenge Award at the Massachusetts Institute of Technology IDEAS Competition, a Benefactor's Scholarship from St. John's College, Cambridge, a Best Undergraduate Project Award from R.V. College of Engineering, a Cisco Achievement Program Award and several awards from IBM. He has been granted nine patents in the USA for work done at IBM.
Lorenzo Cavallaro, University College London (UCL), UK
No day goes by without reading machine learning (ML) success stories across various application areas. Systems security is no exception, where ML's tantalizing performance leaves one to wonder whether there are any unsolved problems left. However, machine learning has no real clairvoyant abilities, and once the magic wears off, we're left in uncharted territory. Is machine learning truly capable of ensuring systems security? In this talk, we will highlight the importance of reasoning beyond mere performance by examining the consequences of adversarial attacks and distribution shifts in realistic settings. Where relevant, we will also delve into behind-the-scenes aspects to encourage reflection on the reproducibility crisis. Our goal is to foster a deeper understanding of machine learning's role in systems security and its potential for future advancements.
Lorenzo Cavallaro is a Full Professor of Computer Science at University College London (UCL), where he leads the Systems Security Research Lab (https://s2lab.cs.ucl.ac.uk). He grew up on pizza, spaghetti, and Phrack, and soon developed a passion for underground and academic research. Lorenzo's research vision is to enhance the effectiveness of machine learning for systems security in adversarial settings. With his team, he investigates the interplay between program analysis abstractions, representations, and ML models, and their crucial role in creating Trustworthy AI for Systems Security. Despite his love for food, Lorenzo finds his Flow in science, music, and family.