A research program at the intersection of AI excellence and societal responsibility.
My research pursues two complementary goals: building powerful generative AI for real-world applications, and systematically studying the ethical risks such systems pose, such as unfair and harmful biases. The overarching aim is to develop context-sensitive AI systems that are technically strong and responsibly deployable. My professorship is funded by the Excellence Initiative of the German federal and state governments.

Before starting my appointment as a Full Professor of Trustworthy Artificial Intelligence in the Department of Informatics, I was Associate Professor of Data Science at the University of Hamburg Business School. Previously, I was a Postdoctoral Researcher in the Natural Language Processing group at Bocconi University (Milan, Italy), where I worked on introducing demographic factors into language processing systems with the aim of improving algorithmic performance and system fairness. I obtained my Ph.D., awarded with the highest honors (summa cum laude), from the Data and Web Science group at the University of Mannheim (Germany), where my research focused on the interplay between language representations and computational argumentation. During my studies, I also conducted research internships at, and later became an independent research contractor for, Grammarly Inc. (New York City, U.S.) and the Allen Institute for Artificial Intelligence (Seattle, U.S.).
Core themes.
My research portfolio spans foundational questions about trustworthy AI as well as ambitious applications, e.g., in the sciences.
Safe, fair, and inclusive generative AI
Studying harmful biases, discriminatory behavior, safety failures, and mitigation strategies in generative AI systems, with a focus on realistic usage scenarios and ecologically valid evaluation.
Multilingual, multicultural, and multimodal AI
Understanding how language models and their multimodal extensions (e.g., vision-language models and audio-language models) behave across languages (including smaller linguistic varieties such as dialects) and cultures, especially where benchmarks and systems have historically marginalized underrepresented communities and their knowledge.
Interpretability, benchmarking, and robustness
Developing datasets and methods to probe what models learn, where they fail and why, and how robustly they behave under contextual variation.
AI for expert domains and scientific discovery
Exploring how generative and agentic AI can support complex scientific and technical environments, including particle accelerators and laser operations, through knowledge-rich and context-sensitive system design.
Selected third-party-funded projects.
E4-MALM — Evaluating, Explaining, and Enabling Ethical Multi-Agent Systems of Large Language Models
Project within the DFG Priority Programme SPP 2556. The project focuses on how unfair bias emerges and amplifies in multi-agent LLM systems, and how such dynamics can be explained and controlled.
Funder: Deutsche Forschungsgemeinschaft
Contested Climate Futures: Discursive Powerplay in the Media
Project within the Cluster of Excellence “Climate, Climatic Change, and Society” (CLICCS) on multimodal media analysis of climate discourse, using novel, efficient, and fair multimodal and multilingual AI methods.
Funder: Deutsche Forschungsgemeinschaft
AI-Powered XFEL Laser Operations: Boosting Uptime with Language Models
Project on language-model-based support for scientific operations in complex laser infrastructure, connecting AI methods with demanding technical environments.
Funder: Data Science in Hamburg - Helmholtz Graduate School for the Structure of Matter
GeFMT — Gender-Fair Language in German Machine Translation
Project developing resources and methods to evaluate gender inclusion in German machine translation.
Selected research highlights.
Research on fairness, inclusiveness, multilinguality, multimodality, interpretability, and scientific applications of AI. My full list of publications is available on Google Scholar.
Recent highlights
arXiv preprint
EACL 2026
EMNLP 2025
NAACL 2025 · Outstanding Paper Award
NAACL 2025
Science Advances 2025
Representative earlier work
EACL 2024 · Social Impact Award
ACL 2023
EMNLP 2022
COLING 2022
ACL-IJCNLP 2021
AAAI 2020
Awards and media recognition.
My work has been recognized for both scientific quality and social impact, and has reached broad public visibility.
Awards
Selected public resonance
Let’s connect.
For collaborations, invited talks, student opportunities, media requests, or research exchange, feel free to get in touch.
Impressum.
Anne Lauscher
Trustworthy AI Lab
University of Hamburg
Bundesstrasse 56b
20146 Hamburg
anne dot lauscher at uni-hamburg dot de