This item is only available for download by members of the University of Illinois community. Students, faculty, and staff at the U of I may log in with their NetID and password to view the item. If you are trying to access an Illinois-restricted dissertation or thesis, you can request a copy through your library's Inter-Library Loan office or purchase a copy directly from ProQuest.
Permalink
https://hdl.handle.net/2142/47619
Description
Title
Improving the Assessment of Student Code
Author(s)
Tischer, Matthew A
Contributor(s)
Lumetta, Steven S.
Issue Date
2013-05
Keyword(s)
automatic testing
software testing
software design
debugging
fault detection
genetic mutations
systems engineering
Abstract
Current methods for automatically grading student code have significant flaws. While methods that use test sets to determine code correctness successfully identify perfect or extremely flawed code, they may not effectively classify code that falls between these two extremes; furthermore, they may not identify inputs that will crash improperly implemented student code. I/O-based testing is also unable to identify mistakes made within a program; it can only inspect the program's outputs. Our research aims to improve this situation by creating tools that enable instructors to assess student code quality more easily (and fairly) without costly manual review. We also hope to use the results of this research to teach students how to test software effectively.
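To illustrate the kind of fault I/O-based testing can miss, consider the following sketch (a hypothetical student submission, invented for illustration rather than taken from the thesis). A test set whose inputs are all non-empty reports the function as correct, yet an untested input crashes it:

    #include <stdio.h>

    /* Hypothetical student code: average of n grades. Output is
     * correct for every non-empty input, so an I/O-based test set
     * without an n == 0 case classifies it as perfect -- yet it
     * divides by zero when n is 0. */
    int average(const int *grades, int n) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += grades[i];
        return sum / n;      /* latent fault: no guard for n == 0 */
    }

    int main(void) {
        int grades[] = {90, 80, 70};
        printf("%d\n", average(grades, 3));  /* passes: prints 80 */
        printf("%d\n", average(grades, 0));  /* undefined behavior:
                                                crashes on most
                                                platforms */
        return 0;
    }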
As one part of this research, we tested the symbolic execution tool KLEE on a set of student programs to determine how long it would take KLEE to identify errors in the student code. We found that if KLEE successfully determines inputs that will crash simple student programs, it does so quickly (within one second for the code we examined).
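For context, the usual way to run a program under KLEE is to mark its inputs symbolic so that KLEE explores all feasible paths and reports concrete inputs that trigger errors. The harness below is a generic sketch of that workflow; the student function and its bug are invented for illustration:

    #include <klee/klee.h>

    /* Hypothetical student code with an off-by-one error: when the
     * key is absent, the loop reads table[n], one past the end. */
    int lookup(const int *table, int n, int key) {
        for (int i = 0; i <= n; i++)
            if (table[i] == key)
                return i;
        return -1;
    }

    int main(void) {
        int table[4] = {1, 2, 3, 4};
        int key;
        /* Make the input symbolic; KLEE then searches for values of
         * key that trigger errors, such as the out-of-bounds read. */
        klee_make_symbolic(&key, sizeof(key), "key");
        return lookup(table, 4, key);
    }

Compiled to LLVM bitcode (e.g., with clang -emit-llvm -c -g) and run under klee, this harness yields a concrete key value, any value absent from the table, that reaches the out-of-bounds read.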
We also investigated and attempted to define the characteristics of a good test set. As one basic measure of quality, we created a simple tool that determines the implications between tests in a test set.
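A minimal sketch of such an implication check follows, under our assumption (not necessarily the thesis's exact definition) that test a implies test b when every observed program that passes a also passes b:

    #include <stdio.h>
    #include <stdbool.h>

    #define NPROGS 4   /* hypothetical number of student programs */
    #define NTESTS 3   /* hypothetical number of tests in the set */

    /* passed[p][t] records whether program p passed test t; a real
     * tool would build this matrix by running the test set. */
    static const bool passed[NPROGS][NTESTS] = {
        {true,  true,  true },
        {true,  true,  false},
        {false, true,  false},
        {false, true,  true },
    };

    /* Test a implies test b if every observed program that passes a
     * also passes b. */
    static bool implies(int a, int b) {
        for (int p = 0; p < NPROGS; p++)
            if (passed[p][a] && !passed[p][b])
                return false;
        return true;
    }

    int main(void) {
        for (int a = 0; a < NTESTS; a++)
            for (int b = 0; b < NTESTS; b++)
                if (a != b && implies(a, b))
                    printf("test %d implies test %d\n", a, b);
        return 0;
    }

An implied test adds little discriminating power over the programs observed, which is one way a redundant test can be flagged.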
We are currently developing a first-order mutation testing tool for the C language to evaluate the quality of instructor test sets. We intend to compare test sets' performance on student code to their performance on mutated code, to determine whether mutation testing serves as an accurate indicator of test set performance.
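To make the idea concrete: a first-order mutant differs from the original program by a single small change, and a test kills the mutant when the original and the mutant produce different results on it. The sketch below (our illustration, not the thesis tool) shows one arithmetic-operator mutant and how test quality affects whether it is killed:

    #include <stdio.h>

    /* Original function under test: sum of the first n elements. */
    static int sum(const int *a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* First-order mutant: a single arithmetic-operator replacement
     * (+= becomes -=). */
    static int sum_mut(const int *a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s -= a[i];
        return s;
    }

    int main(void) {
        /* A test kills the mutant when original and mutant disagree.
         * An all-zero input misses this mutant; a nonzero input
         * kills it. */
        int weak[]   = {0, 0, 0};
        int strong[] = {1, 2, 3};
        printf("weak test kills mutant:   %s\n",
               sum(weak, 3) != sum_mut(weak, 3) ? "yes" : "no");
        printf("strong test kills mutant: %s\n",
               sum(strong, 3) != sum_mut(strong, 3) ? "yes" : "no");
        return 0;
    }

The fraction of mutants a test set kills (its mutation score) can then stand in for how well the set would detect real faults in student code, which is the comparison the proposed tool is meant to support.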