The Rise of Program Auto-grading in Introductory CS Courses: A Case Study of zyLabs

Chelsea Gordon, Roman Lysecky, Frank Vahid

Research output: Contribution to journal › Conference article › peer-review

Abstract

In recent years, hundreds of college courses have switched how they grade programming assignments, from grading manually and/or using batch scripts, to using commercial cloud-based auto-graders that give students immediate score feedback and the ability to debug and resubmit for a higher score. This paper provides data on the rise in usage of one of the most widely-used program auto-graders, zyLabs, as one indicator of the strong shift in college course grading to the auto-grading paradigm. The number of courses, instructors, and students using zyLabs has increased dramatically since it was first introduced: from 2016 to 2020, the number of courses per year grew from 284 to 2,175, the number of students per year from 24,216 to 132,121, and the number of instructors per year from 364 to 2,866. Most instructors state that they previously graded programs by hand and that auto-grading saves them an average of 9 hours per week. The result is a substantial shift in classroom dynamics that enables instructors and students to spend more time on quality teaching and learning.

Original language: English (US)
Journal: ASEE Annual Conference and Exposition, Conference Proceedings
State: Published - Jul 26 2021
Event: 2021 ASEE Virtual Annual Conference, ASEE 2021 - Virtual, Online
Duration: Jul 26 2021 - Jul 29 2021

ASJC Scopus subject areas

  • Engineering(all)
