Computer science assessments may be overestimating student readiness

Nov 07, 2022 | By Scott Schrage | University Communication and Marketing

Welcome to Pocket Science: a glimpse at recent research from Husker scientists and engineers. For those who want to quickly learn the “What,” “So what” and “Now what” of Husker research.

What?

Even for computer science majors, introductory programming courses can prove a challenge. The sizable percentage of undergrads who receive D’s and F’s, or who withdraw before earning a final grade, cannot progress to the more advanced coursework needed to earn a degree in the field. And research has indicated that even students earning C’s in an intro class often struggle with higher-level computer science courses.

So what?

With its first-year students not immune to those struggles, Nebraska’s School of Computing asked Ryan Bockmon and Stephen Cooper to develop an assessment that could evaluate whether students were ready for Computer Science I or might instead benefit from a pre-intro class.

The Husker duo chose to combine two existing instruments, both of which had been shown to effectively predict performance in computer science courses. Bockmon and Cooper then ran a study in which students enrolling in Computer Science I could voluntarily complete the new assessment at the start of the fall 2020 semester. Of the 459 enrolled students, 202 decided to participate.

To their surprise, the researchers found no meaningful link between performance on the assessment and performance in the introductory course. But the team did notice that their voluntary sample contained remarkably few students who went on to withdraw or receive D’s and F’s at the end of the semester. Curious, Bockmon and Cooper compared the final grades of the 202 students who took the assessment against the 257 who chose not to. (Students who withdrew were assigned a 0, the equivalent of an F, on a 4.0 scale.)

The results were striking:

  • Students who took the assessment averaged a 3.1 (B) in Computer Science I, whereas students who did not averaged a 2.3 (C+).

  • Students who did not take the assessment were 5.5 times more likely to withdraw from the course and 2.7 times more likely to fail it (a comparison sketched below).
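For readers curious about the arithmetic behind those figures, here is a minimal Python sketch of how such a comparison might be computed. Only the group sizes, 202 and 257, come from the story; the withdrawal and failure counts below are hypothetical placeholders, chosen solely so the ratios land near the reported 5.5 and 2.7, since the actual counts are not given here.

    # Hypothetical outcome counts (NOT the study's data): only the group
    # sizes, 202 and 257, are reported. In the grade averages described
    # above, withdrawals were coded as 0.0, the equivalent of an F on a
    # 4.0 scale.

    def rate(events: int, total: int) -> float:
        """Fraction of a group experiencing a given outcome."""
        return events / total

    took = {"n": 202, "withdrew": 4, "failed": 7}       # took the assessment
    skipped = {"n": 257, "withdrew": 28, "failed": 24}  # did not take it

    for outcome in ("withdrew", "failed"):
        ratio = (rate(skipped[outcome], skipped["n"])
                 / rate(took[outcome], took["n"]))
        print(f"{outcome}: non-participants were {ratio:.1f}x more likely")
    # With these made-up counts, the script prints ratios of about 5.5
    # (withdrew) and 2.7 (failed).

The point of the calculation is that the comparison is between rates, not raw counts: a group's withdrawal or failure rate is divided by the other group's, so the different group sizes wash out.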

Those findings pointed to the presence of a participation bias, meaning that the participants likely differed from the non-participants in ways that affected their performance in Computer Science I. And because the sample of participants was not representative of the class as a whole, the otherwise-valid assessment failed to predict that performance.

Now what?

Bockmon and Cooper suspect that participation bias is a problem for computer science departments around the country, many of which also offer voluntary pre-course assessments. Making those assessments mandatory, or increasing incentives for participation, could mitigate the problem — and ultimately better identify students who could use some early help.

View the original story in Nebraska Today.