Leo Zhang is an Associate Professor in the School of Information and Communication Technology at Griffith University, Australia, where he also serves as Program Director for the Bachelor of Cybersecurity. He earned his Ph.D. in Electrical Engineering from City University of Hong Kong in 2016 and subsequently worked as a Research Fellow in the university's Department of Computer Science. From 2018 to 2023, Zhang was a faculty member in the School of Information Technology at Deakin University. He joined Griffith University as a Senior Lecturer in March 2023 and was promoted to Associate Professor in December 2025.
Zhang's research focuses on cybersecurity, particularly trustworthy artificial intelligence and applied cryptography. He is a core member of the TrustAGI Lab, which focuses on endowing machines with human-level intelligence while ensuring trustworthiness and transparency. A member of IEEE and ACM, Zhang has published extensively in top venues including the IEEE Symposium on Security and Privacy (Oakland), ICLR, NeurIPS, ICML, IJCAI, USENIX Security, CVPR, and AAAI.

Key publications include "ARES: Scalable and Practical Gradient Inversion Attack in Federated Learning through Activation Recovery" (2026, IEEE S&P); "Nasty Adversarial Training: A Probability Sparsity Perspective for Robustness Enhancement" (2026, ICLR); "Vanish into Thin Air: Cross-prompt Universal Adversarial Attacks for SAM2" (2025, NeurIPS, spotlight paper); "BiMark: Unbiased Multilayer Watermarking for Large Language Models" (2025, ICML); "DarkSAM: Fooling Segment Anything Model to Segment Nothing" (2024, NeurIPS); "DarkFed: A Data-Free Backdoor Attack in Federated Learning" (2024, IJCAI); "PointCRT: Detecting Backdoor in 3D Point Cloud via Corruption Robustness" (2023, ACM MM); and "Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning" (2023, IJCAI). His publications have received over 5,876 citations on Google Scholar, and he was recognized as a Notable Reviewer for ICLR 2025.
