Annotated Data Quality

Project Description

This line of research is dedicated to understanding what drives and affects the process of human annotation of Machine Learning (ML) training data. Building on literature from survey methodology, social psychology, and computer science, we conduct experimental research to detect sources of bias in the annotation process. Our studies indicate that annotation outcomes are sensitive to slight changes in the annotation task design, the order of tasks, and certain annotator demographics.
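As an illustration of the kind of analysis such experiments involve, the sketch below is a minimal, hypothetical example (not taken from any of the project's studies; the counts and condition names are invented). It compares the label distributions produced under two task-order conditions with a chi-square test, a simple way to check for an order effect.

    # Illustrative sketch only: hypothetical counts, not data from this project.
    # Tests whether annotators' label distribution differs between two
    # task-order conditions (a basic check for order effects).
    from scipy.stats import chi2_contingency

    # Rows: task-order conditions; columns: counts of "label A" vs. "label B"
    label_counts = [
        [120, 380],  # condition 1: item shown early in the task sequence
        [165, 335],  # condition 2: item shown late in the task sequence
    ]

    chi2, p_value, dof, expected = chi2_contingency(label_counts)
    print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Label distributions differ across task orders (possible order effect).")
    else:
        print("No significant difference between task-order conditions.")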

Publications

Contact persons

Jacob Beck
Bolei Ma