At the core of Minerva’s active learning philosophy is the concept of frequent, detailed feedback on every element of the student performance, which includes not just assignment submissions but poll answers, worksheets completed in breakout groups, and even spoken contributions made during class.
The sheer amount of data generated by any given student in a typical week of classes is overwhelming. The design challenge was daunting: How could we turn this data into scored assessments with actionable feedback for the student, while also letting faculty members grade with as little friction as possible?
While the feedback generated by the tools would need to be presented in a way that was actionable by the students, the main users of the grading tools would be faculty members and TAs who would need to assess outcomes against a set of rubrics across written and verbal contributions made inside and outside of the classroom.
I was the Product Designer on this initiative, working closely with our Chief Learning Scientist, our engineering team, our Founding Dean, and faculty members.
The timeline for the project was similar to that of the classroom project, in that it needed to be completed before classes began at Minerva in the fall of our inaugural year. We were constrained by the relatively small size of the product team as well as a small pool of faculty to research and test with. Undaunted, we set to work.
To begin we broke the assessment tool process into two separate products, as they had vastly different inputs and needs. One product was focused on assessing items generated in the classroom. The other was primarily focused on assessing written assignments completed outside of the classroom.
We began with the tool required for assessing student performance generated within the classroom.
Research began by discussing the product needs with our deans. It became clear that each item would need to be capable of having a score assigned to it from a predefined rubric, as well as an optional comment from the person doing the grading.
As I began talking with the faculty who would be using the tool regularly, a fear emerged: the moments in class that needed to be assessed could become challenging to find in the sea of data that each class generates. A common refrain was: “Let me quickly find a particular student’s contributions. Don’t make me hunt for the needle in the haystack.”
I began by sketching a variety of potential layouts and, after feedback from faculty, settled on an overall strategy of structuring the screen into thirds: a main column to house the elements to grade (either the classroom video with its “transcript” list of accessible classroom moments, or the poll answers), a column to filter the results shown in the main column, and a column to assess the selected element.
With this sketch as a guide, I created higher fidelity designs and eventually created a clickable prototype that tested well with faculty. Buoyed with the feeling that I was on the right path, the Chief Learning Scientist and I worked with engineering to get the first version of the tool into production.
Once actual classroom data could be added into the tool, I scheduled several usability sessions where I observed graders attempting to assess an entire classroom’s worth of data. It quickly became apparent that finding what to grade was still a laborious process that could be improved.
To combat this I designed and tested a few enhancements. One was the ability to speed up video playback so that watching the classroom recording became more efficient.
To reduce the pool of possible moments to grade, I also designed a way to filter the list of items in the classroom transcript by length, so that only spoken contributions longer than X seconds or typed contributions longer than Y words would be displayed. This filtered out shorter, less substantive elements, leaving a selection of items more likely to be worthy of assessment.
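The length filter described above amounts to a simple predicate over the transcript. The sketch below illustrates the idea; the `Moment` structure and the default thresholds are hypothetical stand-ins, not the tool's actual data model:

```python
# A minimal sketch of the transcript length filter. The Moment fields
# and the threshold defaults are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Moment:
    student: str
    kind: str          # "spoken" or "typed"
    duration_s: float  # length of a spoken contribution, in seconds
    text: str          # content of a typed contribution

def is_substantive(m: Moment, min_seconds: float = 10, min_words: int = 15) -> bool:
    """Keep spoken moments over min_seconds and typed moments over min_words."""
    if m.kind == "spoken":
        return m.duration_s >= min_seconds
    return len(m.text.split()) >= min_words

def filter_transcript(moments: list[Moment]) -> list[Moment]:
    """Return only the moments likely to be worth assessing."""
    return [m for m in moments if is_substantive(m)]
```

Raising or lowering the two thresholds lets a grader trade recall for focus, which matches the "needle in the haystack" concern the filter was built to address.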
The other enhancement added at this stage was to tie in the bookmark function from the classroom. When a memorable moment occurred in class, the professor could note that moment with a keyboard shortcut and then quickly re-identify these moments in the grading tool after class.
One final improvement we added was efficient keyboard shortcuts. We realized that once an element was selected for grading, the process of choosing an outcome, assigning a number score, adding a written comment, and saving the grade could all be done easily without reaching for the mouse.
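The keyboard-only flow above can be sketched as a small state machine that maps keystrokes to grading actions. The bindings, outcome names, and form fields here are illustrative assumptions, not the actual shortcuts used in the Minerva tool:

```python
# Hypothetical sketch of a cursor-free grading flow: number keys pick
# an outcome then a score, "c" focuses the comment box, ENTER saves.
class GradeForm:
    def __init__(self, outcomes: list[str]):
        self.outcomes = outcomes   # rubric outcomes for this class
        self.outcome = None        # chosen with the first number key
        self.score = None          # chosen with the next number key
        self.comment = ""          # typed after pressing "c"
        self.saved = False
        self._commenting = False

    def handle_key(self, key: str) -> None:
        if self._commenting:
            if key == "ESC":
                self._commenting = False   # leave the comment box
            else:
                self.comment += key        # free text goes into the comment
        elif key.isdigit() and self.outcome is None:
            self.outcome = self.outcomes[int(key) - 1]  # 1-9 select an outcome
        elif key.isdigit():
            self.score = int(key)          # next digit sets the score
        elif key == "c":
            self._commenting = True        # focus the comment field
        elif key == "ENTER":
            self.saved = True              # persist the assessment
```

A grader could thus assess a selected moment with a handful of keystrokes, which is where the efficiency gain comes from.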
With these features incorporated, the grading process became much more efficient. Faculty could find the best classroom moments quickly and assign grades to them seamlessly. The number of assessments per student went up 40% after the feature enhancements were introduced, leading to a stronger feedback loop and increased student performance.
After class participation, assignments were going to be responsible for the next-largest percentage of a student’s final grade. Careful thought needed to go into the design of the required tool.
We began with the core requirements for the product: A tool that could take PDFs of student-written assignments and allow the faculty to select portions of these to assign contextual graded outcomes and comments.
To remain consistent with the layout in place for the class grading tool, I settled on an overall structure that included a toolbar to select the student and assignment to be graded, a main column to display the selected work, and a column for assessing the submitted work.
To allow portions of the submission to be graded in context, I worked with our Chief Learning Scientist on a feature that allowed the instructor to highlight a portion of text which would then cause a modal to appear.
Within the modal, the grader would be able to select from a list of outcomes particular to the class, assign a grade to it, and then add an optional comment. We borrowed from the pattern designed for the class grader here to establish parity between the two grading applications.
The low fidelity sketches quickly became higher fidelity mockups that were then presented to and tested with faculty. After receiving positive feedback and discovering no major usability issues, the product underwent coding for production.
The tool was released in time for the start of classes, and while it performed as designed, several unmet needs were uncovered.
One was the issue of plagiarism. Without detection built into the tool, it became too easy for a student to submit an assignment that they didn’t write. To check for plagiarism, faculty were required to download all submitted assignments in bulk, upload them to a third-party service, and process the results on a case-by-case basis. This required a lot of time and effort on the part of faculty and wasn’t a sustainable solution.
To solve this, we partnered with the plagiarism-detection service Unicheck, and I designed a way to incorporate an “originality score” into the grader to quickly warn grading faculty of potential plagiarism issues.
As our grading policies evolved, it became required that each student be graded against a small list of “foreground” learning outcomes for any given assignment. As initially designed, this required the faculty member to track this manually on a student-by-student basis. While doable, this required far too much effort. I designed and tested the concept of a “Foreground Learning Outcome” tracker. This was well received by faculty and was incorporated into the tool.
A custom module allowed instructors to ensure that every student was assessed on the most important learning outcomes for that class session.
The class and assignment grader tools enjoyed relatively smooth launches, but both did suffer from initial efficiency issues that required intense iteration.
Today, these tools are used by Minerva and global partners to successfully grade 5,000 classes and over 20,000 assignments per semester, helping accelerate the lives of students.
An important personal learning from this experience was one of context: grading is only a portion of a faculty member’s day, and the feedback style required by our pedagogy means that faculty spend more time on grading than their peers at other institutions. Consequently, every little bit of enhanced efficiency we design into the product pays dividends in the lives of the faculty.
© Matt Regan 2021