Effects of respondent training on self-report personality assessment: an item response theory approach
Zhang, Luyao
Permalink
https://hdl.handle.net/2142/102913
Description
Issue Date
2018-11-26
Director of Research (if dissertation) or Advisor (if thesis)
Drasgow, Fritz
Doctoral Committee Chair(s)
Drasgow, Fritz
Committee Member(s)
Chang, Hua-Hua
Fraley, R. Chris
Newman, Daniel
Roberts, Brent
Rounds, James
Department of Study
Psychology
Discipline
Psychology
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Rater training
item response theory (IRT)
self-report personality assessment
the ideal-point model
intermediate items
Abstract
Within the item response theory (IRT) framework and inspired by the rater training literature, this study examined the effects of a short online respondent training on personality item interpretation and responding, and of the number of response categories (polytomous vs. dichotomous), on item performance, model-data fit, and criterion-related validity. Participants recruited from MTurk (n = 1,977) were randomly assigned to one of four groups that differed in training (training vs. no training) and response scale (4-point Likert vs. dichotomous), and their responses to dominance and ideal-point personality measures were analyzed with the GGUM, SGR, and 2PL models. Results indicated that, when a dichotomous response scale was used, training was associated with more well-performing intermediate items on the ideal-point scales, and these items were more discriminating and informative. The dichotomous scale was generally associated with better model-data fit, whereas criterion-related validity was unaffected by both training and the response scale. Participants reported having been confused by personality items in the past and responded positively to the online training, consistent with the finding that trained participants spent, on average, 32 seconds less completing the ideal-point surveys. Implications for future research and practice are discussed.