
Upcoming Events
We maintain an active research agenda and are pleased to share our ideas and results with the broader research community. Below you will find our upcoming events.
October 2025
Getting Started with LLM Evaluation: A Primer for Psychometricians.
A 2-hour workshop to be presented by Jodi Casabianca.
Abstract: This session introduces psychometricians and assessment scientists to evaluation methods for large language model (LLM) applications in education. Through a combination of lecture and hands-on activities, participants will explore key evaluation techniques—including error analysis, human review, and the use of LLMs as evaluators ("LLM-as-a-judge"). The session will emphasize how psychometric principles can strengthen the rigor, validity, and interpretability of LLM application pipelines.
Can AI-generated rationales provide evidence that AI scores are valid?
Paper presentation; to be presented by a colleague.

Recent Events
| Title | Event Type | Meeting | Location | Date |
|---|---|---|---|---|
| Evaluating Rationales: A Comparative Study of LLMs and Human Raters in Assessing Language Learners’ Essays | Paper | National Council on Measurement in Education | Denver, CO, USA | 26/04/2025 |
| Validity Evidence for Use and Interpretation of Scores from Generative AI | Paper | National Council on Measurement in Education | Denver, CO, USA | 25/04/2025 |
| The Where, What, and How of the Job Market for Measurement Professionals | Panel Discussion | National Council on Measurement in Education | Denver, CO, USA | 24/04/2025 |
| Best Practices for AI Scoring | Workshop | National Council on Measurement in Education | Denver, CO, USA | 23/04/2025 |
| Best Practices for AI Scoring of Constructed Responses | Workshop | International Association for Educational Assessment | Philadelphia, PA, USA | 22/09/2024 |