Ideas in Testing Research Seminar Schedule, October 20, 2017

Coffee & Networking (9:15 — 9:45)


Welcome and Introduction (9:45 — 10:00)


Computer Adaptive Testing (CAT) (10:00 — 11:00)

Item exposure mediation in CAT using temporary enemy item relationships (Becker, Pearson) abstract

Self-efficacy and the CAT environment (Mark Brow, UIC) abstract

A Predicted Standard Error Reduction Stopping Rule for Multidimensional Computer Adaptive Tests (Neopolitan, Morris, Bass, Lauristen) abstract slides

Break (11:00 — 11:10)


Item Analysis & DIF (11:10 — 12:10)

Investigating item characteristics and DIF in MIMIC (Reboucas & Cheng, Notre Dame) abstract

Exploring the linguistic characteristics of DIF (Jorion, Pearson) abstract

Using Response Time to Detect Speededness Based on CUSUM (Yu & Cheng, Notre Dame) abstract

Lunch (12:10 — 1:00)


Research Discussion (1:00 — 1:45)

An agenda for psychometric research (Mead, Talent Algorithms Inc.; Becker, Pearson; & Morris, IIT) abstract

Computational Modeling (1:45 — 2:45)

A computational model of targeted recruiting (Morris, IIT) abstract

A machine learning "Rosetta Stone" for psychologists and psychometricians (Mead, Talent Algorithms Inc.; Huang, Amazon) abstract slides

Precise estimation of type I error inflation from questionable research practices (Hernandez) abstract

Break (2:45 — 3:00)


Applications (3:00 — 4:00)

Exploring Adolescent Personality with a Sliding Response Scale (Yankov, Bowling Green State University & Testify Software Solutions) abstract

Relative Index Score Report based on Estimated Domain Score (Denbleyker, Houghton Mifflin Harcourt) abstract

The psychometrics of Likert surveys: Lessons learned from analyses of the 16pf Questionnaire (Mead, Talent Algorithms Inc.) abstract slides

Closing comments (4:00)

Questions about the seminar may be directed to Alan Mead, Scott Morris, or Kirk Becker. We hope you will join us.
