Retellings and Semi-Structured Interviews for Assessing Reading Comprehension of Standardized Test Passages and of the Illinois Inventory of Educational Progress Passages as Compared to Scores on Multiple-Choice Test Items
Seda-Santana, Ileana
Permalink
https://hdl.handle.net/2142/69222
Description
Issue Date
1988
Doctoral Committee Chair(s)
Pearson, P. David
Department of Study
Education
Discipline
Education
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Education, Language and Literature
Education, Tests and Measurements
Education, Elementary
Education, Reading
Abstract
The literature on reading comprehension points to a disparity between current knowledge about the reading process and current assessment practices. Prevalent theoretical views favor a constructivist cognitive framework in which text meanings are determined by a composite of reader-related and text-related factors. In contrast, current assessment practices maintain single, pre-specified correct text interpretations, with literal-level understanding as their primary focus. In addition, the literature points to the limited relevance of test scores for making instructional decisions.
Twenty-eight third graders and 25 sixth graders, all of whom had been part of the Reading Assessment Initiatives in the State of Illinois project, participated in the study. They had been tested with both a traditional and a novel multiple-choice test. For the present study, subjects individually re-read narrative and expository passages, retold the passages, and answered semi-structured interview questions. Protocols were scored holistically and analytically, and the resulting ratings and rankings were compared with those obtained on the multiple-choice tests. Qualitative (case studies) and quantitative (correlations and frequencies) results indicated that subjects' ratings and rankings differed with the type of assessment used. Results also pointed to discrepancies between subjects and test questions in the use of text characteristics and salient text information. These findings support the notion that different assessment formats assess either different processes or different components of a single process. In a survey, subjects indicated a preference for novel assessment formats (interviews, or multiple-choice items with more than one correct answer) over traditional multiple-choice formats with a single correct answer. The study raises crucial issues such as the importance of task demands, text characteristics, reader factors, and the source of responses. Future research should continue to explore alternative assessment procedures in order to bridge the present gap between assessment, theory, and practice.