Project Description
Recent large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks, yet their underlying linguistic competence remains comparatively underexplored. As these models increasingly inform research and applications in the social sciences and beyond, a systematic evaluation of their linguistic abilities becomes critical. This project investigates the extent to which LLMs capture core aspects of linguistic knowledge, including syntax, semantics, pragmatics, and sociolinguistic variation. Drawing on methods from linguistics, cognitive science, and natural language processing, we design targeted evaluation tasks to probe specific linguistic phenomena, develop new benchmarks, and identify systematic strengths and weaknesses in current models. Our aim is to contribute to a more rigorous understanding of LLM capabilities and limitations, providing insights that are essential for both theoretical modeling and practical deployment.
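To make the idea of a targeted evaluation task concrete, the sketch below illustrates one common form such a probe can take: a minimal-pair comparison in which a causal language model's log-likelihoods for a grammatical and an ungrammatical sentence are compared. The model name, example sentences, and helper function are illustrative assumptions for this sketch, not the project's actual benchmarks or code.

```python
# Illustrative sketch (not the project's pipeline): score a syntactic minimal
# pair by comparing sentence log-likelihoods under a causal language model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM available on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Sum of token log-probabilities the model assigns to the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood per predicted token;
    # the first token has no prediction target after the internal shift.
    n_predicted = inputs["input_ids"].size(1) - 1
    return -outputs.loss.item() * n_predicted

# Hypothetical minimal pair probing subject-verb agreement.
grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."

ll_good = sentence_log_likelihood(grammatical)
ll_bad = sentence_log_likelihood(ungrammatical)
print(f"grammatical: {ll_good:.2f}, ungrammatical: {ll_bad:.2f}")
print("model prefers grammatical variant:", ll_good > ll_bad)
```

Aggregating such pairwise preferences over many items per phenomenon yields an accuracy score per linguistic construction, which is one way systematic strengths and weaknesses can be identified.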
Project Team
Name | Email
---|---
Ma, Bolei | bolei.ma@lmu.de
Publications
- Bolei Ma. 2024. Evaluating Lexical Aspect with Large Language Models. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 123–131, Bangkok, Thailand. Association for Computational Linguistics.
- Bolei Ma, Yuting Li, Wei Zhou, Ziwei Gong, Yang Janet Liu, Katja Jasinskaja, Annemarie Friedrich, Julia Hirschberg, Frauke Kreuter, Barbara Plank. 2025. Pragmatics in the Era of Large Language Models: A Survey on Datasets, Evaluation, Opportunities and Challenges. Preprint.