
online trait-based marking

August 12, 2015

In order to provide useful feedback rapidly on those assessment items whose motivation is at least partly formative, I’m thinking about a variety of new dodges to try next year.

I expect I’ll have about 100 first year students taking the class I pretend is an introduction to hardware but which is actually an introduction to functional programming and semantics. There’s a lab session each week in which I experiment on them, and processing the data resulting from those experiments is a major headache. Feedback needs to be rapid if it’s to inform remedial action.

My workflow is pretty poor. So far, the test questions have been done on paper, even though each student is at a computer. My comrades and I mark those pieces of paper, but then it’s not over, because I have to type in all the scores. It takes too long. It often goes wrong.

solution traits

One method I adopted last year was to identify solution traits. Few mistakes are peculiar to one individual: many good or bad properties of solutions, traits, as I call them, are shared across submissions. I learned to save time by giving the traits a coding system. Markers just write trait codes on the scripts; the meaning of those codes is given once, on the web page associated with the assessment item. Markers also deliver a score for each part of the exercise and a total score. We tried to map traits to scores consistently, but that's not always easy to do in one pass, so some backtracking was required. When we had multiple markers, we'd share an office, and the shared trait description list would grow on the whiteboard as new things came up. I discovered that I could be bothered to type in both the score and the trait list for each student, but it was quite a bit of work.

The idea of solution traits as salient aspects of student submissions struck me as something from which we could extract more work. I ought to be mining trait data. Maybe later.

from traits to scores

Markers should not score solutions directly. If we can be bothered to classify traits, we should merely identify the subset of recognized traits exhibited by each question part. Separately, we should give the algorithm for computing a score from that trait subset. That way, we apply a consistent scoring system, and we can tune it and see what happens. Here’s the plan. Each trait is a propositional variable. A scoring scheme is given by a maximum score and a list of propositional formulae each paired with a tariff (which may be positive or negative). The score is the total of the tariffs associated with formulae satisfied by the trait set, capped below by 0 and above by the indicated maximum. We should have some way to see which solutions gain or lose as we fiddle with the scheme.
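Concretely, here's a minimal Haskell sketch of how such a scheme might be represented and evaluated; the types and names (Trait, Formula, Scheme and so on) are illustrative inventions, not part of any existing system.

```haskell
import Data.Set (Set)
import qualified Data.Set as Set

type Trait = String

-- Each trait is a propositional variable; formulae combine them.
data Formula
  = Has Trait            -- the solution exhibits this trait
  | Not Formula
  | And Formula Formula
  | Or  Formula Formula

-- Does a given trait set satisfy a formula?
satisfies :: Set Trait -> Formula -> Bool
satisfies ts (Has t)   = Set.member t ts
satisfies ts (Not f)   = not (satisfies ts f)
satisfies ts (And f g) = satisfies ts f && satisfies ts g
satisfies ts (Or  f g) = satisfies ts f || satisfies ts g

-- A scheme is a maximum score and a list of formulae paired with
-- tariffs, which may be positive or negative.
data Scheme = Scheme { maxScore :: Int, tariffs :: [(Formula, Int)] }

-- Total the tariffs of the satisfied formulae, capped below by 0
-- and above by the indicated maximum.
score :: Scheme -> Set Trait -> Int
score (Scheme cap fs) traits =
  max 0 (min cap (sum [ n | (f, n) <- fs, satisfies traits f ]))
```

Retuning the scheme is then just a matter of rerunning score over the stored trait sets, which is exactly what makes the fiddling cheap.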

hybrid online marking jobs

Each marking job is presented to the marker as a web page showing the question, the candidate solution, the specimen solution, and the list of checkboxes corresponding to the traits identified so far. It may be that some magic pixie helpers (be they electronic or undergraduate) have precooked the initial trait assignment to something plausible. That is, online marking jobs have the advantage that we can throw compute power at them whenever solutions can be algorithmically assessed, but we don't have to construct exercises entirely on that basis. If all is well, the marker need merely make any necessary adjustments to the trait assignment and ship it. The marker may also need to add a trait, so that should be an option, or to request a discussion with the question setter, flagging the job for revisiting after that discussion. Concurrent trait addition may result in the need to merge traits: i.e., to define one trait as a propositional formula in terms of the others, with conflicting or cyclic definitions demanding discussion. Oh, transactions.
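Merging might look something like the following sketch, reusing the Formula type from above: a defined trait is eliminated by substituting its definition wherever it occurs, and a cheap mentions check flags definitions that refer to the trait they define. (Chasing definitions transitively would catch longer cycles; the names are again mine.)

```haskell
-- Replace (Has t) by the given definition throughout a formula.
substitute :: Trait -> Formula -> Formula -> Formula
substitute t def (Has u)
  | u == t    = def
  | otherwise = Has u
substitute t def (Not f)   = Not (substitute t def f)
substitute t def (And f g) = And (substitute t def f) (substitute t def g)
substitute t def (Or  f g) = Or  (substitute t def f) (substitute t def g)

-- The traits a formula mentions, for cycle checking.
mentions :: Formula -> [Trait]
mentions (Has t)   = [t]
mentions (Not f)   = mentions f
mentions (And f g) = mentions f ++ mentions g
mentions (Or  f g) = mentions f ++ mentions g

-- A definition that mentions the trait it defines is immediately
-- suspect and should be sent back for discussion, not applied.
directCycle :: Trait -> Formula -> Bool
directCycle t def = t `elem` mentions def
```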

making marking jobs online

How do I get these marking jobs online in the first place? Well, the fact that the students do the problems in labs where each has a computer is some help. Each exercise has a web page, and whenever it makes sense to request a solution which fits easily into a box in an HTML form, we can do that, whether it’s to be marked by machines or by people. But there may be components of solutions which are not so easily mediated: diagrams, mainly. I have previous at forcing students to type ASCII art diagrams in parsable formats, much to their irritation, but I would never dream of making such a requirement under time pressure. I need a way to get 100 students to draw part of an assignment on paper, then make that drawing appear as part of the online marking job with the minimum of fuss.

banners and scanners

I prepare the paper on which they will draw. It has a printed banner across the top, which consists of three lines of text, each with one long word and sixteen three-letter words, lined up in seventeen columns of fixed-pitch text. The three long words in the left column vary with and identify the task. The 48 three-letter words are fixed for the whole year. Each student has an identity code given by one three-letter word from each line, and the web page for the exercise reminds them what this code is. Each position in each line stands for a distinct number in 0..15, and the sum of the three positions is 0 (mod 16), so a clear indication of the chosen word in just two of the lines is sufficient to identify the student. I can print out a master copy of the page, then photocopy it 100 times and hand the pages out. The student individuates their copy by obliterating their assigned words, e.g., by crossing them out (more than once, please).
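The checksum arithmetic is tiny; a sketch, with positions as plain Ints:

```haskell
-- Positions on each line run 0..15, and a valid identity code has
-- positions summing to 0 mod 16, so any two determine the third.
thirdPosition :: Int -> Int -> Int
thirdPosition p q = negate (p + q) `mod` 16

validIdentity :: Int -> Int -> Int -> Bool
validIdentity p q r = (p + q + r) `mod` 16 == 0
```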

I collect in the pages at the end and I take them to the photocopy room, where the big fast photocopier can deploy its hidden talent: scanning. I get one enormous pdf of the lot. Unpacking the embedded images with pdfimages, I get a bitmap for each page. Using ImageMagick, I turn each bitmap into both a jpeg (for stuffing into the marking job) and a tiff (cropped to the banner) which I then shove through Tesseract, a rather good and totally free piece of optical character recognition software, well trained at detecting printed words in tiffs. The long words present in the scanned text tell me which exercise the jpeg belongs to; the short words missing from the scanned text tell me (with high probability) whose solution it is. Solutions not individuated by this machinery are queued for humans to assess, but experiments so far leave me hopeful. The workflow should be: stack paper in the document feeder, select scan-to-email, select the relevant daemon's email address, press go. And we're ready to bash through the marking jobs online.
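The decoding step might look like this sketch, assuming the OCR output has been boiled down to a plain list of recognised words; the function names are illustrative, and a cleanly obliterated word should show up as exactly one gap in its line.

```haskell
-- Given the three fixed banner lines of sixteen words each, find the
-- positions in each line of words the OCR failed to see.
gaps :: [[String]] -> [String] -> [[Int]]
gaps banner seen =
  [ [ i | (i, w) <- zip [0 ..] line, w `notElem` seen ] | line <- banner ]

-- Identify a student when at least two lines have a unique gap,
-- recovering or checking the third position via the mod-16 checksum;
-- anything messier is queued for a human.
identify :: [[String]] -> [String] -> Maybe (Int, Int, Int)
identify banner seen = case gaps banner seen of
  [ps, qs, rs] -> pick ps qs rs
  _            -> Nothing
  where
    pick [p] [q] [r]
      | (p + q + r) `mod` 16 == 0 = Just (p, q, r)
      | otherwise                 = Nothing
    pick [p] [q] _ = Just (p, q, third p q)
    pick [p] _ [r] = Just (p, third p r, r)
    pick _ [q] [r] = Just (third q r, q, r)
    pick _ _ _     = Nothing
    third a b = negate (a + b) `mod` 16
```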

mark retrieval by students

Once we markers have done all we can and it’s time to give the students their feedback, we push the release button which bangs the associated dinner gong. Online, the students are faced with a marking job, showing the question, their solution, the specimen solution, and an empty checkbox trait list, all in a form with a submit button. We oblige them to mark their own work in order to be told their given score. When they hit the submit button, they get to see their score, and on which traits their marker’s opinion diverges from their own. If they are at least two-thirds in agreement, a small donation is made to their reputation score (as distinct from their score in the topic of the assessment).
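Concretely, agreement might be measured as the fraction of the job's listed traits on which the two box-tickings coincide; a minimal sketch, on that assumption:

```haskell
-- True when student and marker agree (tick or no tick) on at least
-- two thirds of the traits listed for the job.
donationDue :: [String] -> [String] -> [String] -> Bool
donationDue allTraits student marker =
  3 * matches >= 2 * length allTraits
  where
    matches = length [ t | t <- allTraits
                         , (t `elem` student) == (t `elem` marker) ]
```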

pixie helpers of the carbon kind

Tutorial homeworks can and should also give rise to online marking jobs in just the same way. Part of the exercise can be a web form and the rest done by uploading an image. There are scanners in the lab, but we can also arrange to allow submission of images by email from a phone or a tablet. Once a student has submitted a solution, marking jobs on the exercise become available to them, starting with their own submission, but extending to other people's. After the tutorial, each student should certainly confirm their self-assessment, but preferably also revisit any other marking they did prior to the tutorial. Reflection is reputation-enhancing. Peer-to-peer marking is reputation-enhancing. Note that tutors will have the ability to eyeball homework submissions (if only to detect their absence) but are not paid to spend time marking them.

Expert students who have nothing to gain by taking a test (because they already have full marks in the test’s topic) may, if they have free time at the right time, collect reputation by marking their colleagues’ test submissions. The results they deliver must be moderated, but they do at least help to precook marking jobs for the official markers. Of course, reasonable accuracy is necessary for payment.

traits in relationship to the curriculum graph

An exercise (or parts thereof) should be associated with a topic and constitute evidence towards mastery of said topic. If some parts of an exercise conform to a standard scheme, that's also useful information. Some traits will relate to the topic (e.g., typical misunderstandings) and some to the scheme, so we gain a good contribution to the trait set for a brand new question just by bothering to make those links. We might also seek to identify regions in the teaching materials for the relevant topic which either reinforce positive traits or help to counteract negative ones: crowdsourcing such associations might indeed be useful.

the workflow is a lot, but it isn’t everything

My main motivation is to try to improve the efficiency of the marking process in order to give feedback more rapidly with less effort but undiminished quality. Thinking about technological approaches to the management of marking is something I enjoy a great deal more than marking itself, so I often play the game despite the little that’s left of the candle. I should watch that. But the shift to marking as an online activity also opens up all sorts of other possibilities to generate useful data and involve students constructively in the process. It’s a bit of an adventure.

comments
  1. August 14, 2015 8:36 pm

    Do you read the comments?

  2. gasche permalink
    August 24, 2015 5:11 pm

    I like the idea of asking students to grade their own submissions. I've done "small rewards for secretly didactic task X" stuff in the past and, to my surprise, students can be motivated to do amounts of work disproportionate to the reward, especially the young ones. Going back with the program they submitted for their course project with the deal "if you successfully factorize those two blurbs of code into one function, you'll get 'small reward'" was a raging success.
