Sam Leith

Scrapping Oxford’s ‘traditional’ exams won’t make things fairer


Are exams… racist? Are exams snobs? If a report in yesterday’s Sunday Telegraph is to be credited, academics at Oxford and Cambridge are taking this question seriously. In the hopes of closing the ‘achievement gap’ between white middle-class students (who scoop more of the firsts and 2.1s) and students from disadvantaged backgrounds or other ethnicities, Russell Group universities are said to be considering replacing traditional exams with more ‘inclusive assessments’ such as open-book papers or even ‘take-home exams’. 

Call me a reactionary, but this sounds quite nuts. We can acknowledge that an achievement gap exists. We can certainly credit, too, that it might be in the interests of everybody to seek to understand and remedy it. But it’s a big – a giant, a 100-league-boot – leap from those two propositions to the idea that sitting finals in an exam room is the problem.  


Unseen examination papers under timed conditions are as good a test as we currently have of accumulated knowledge, ability under pressure and reasoning on the hoof. That they’re considered – in one of the more eye-opening phrases from yesterday’s report – ‘threats to self-worth’ is in some ways part of the point. Anything that rigorously tests how good you are at something is always going to be a threat to self-worth.  

And it is bizarrely patronising to suggest that there’s something intrinsic about that way of testing that disadvantages people on grounds of ethnicity or socioeconomic background. A university has three full years to teach its undergraduates how to do these exams. That’s a lot of the point of a university. Are we seriously suggesting that it’s harder to teach black or brown people, or working-class people, to sit exams? 

The achievement gap that the statistics show will no doubt be what Freudians would call ‘overdetermined’: i.e. there will be lots of overlapping reasons for it. One is that overeducated public schoolboys, trained and groomed at considerable private cost from early adolescence, are going to have a head start. But that’s a case for changing the system at the beginning of the process rather than at the end of it. 

And unless there is hard data to support the idea that traditional exams are the, or even a, major contributor, scrapping them makes no sense. Isn’t it much more likely that it’s not the form of the exams that is the problem, but that the abilities they test are, for whatever reason, not there in some candidates? The problem isn’t the litmus test: the problem is the solution on the blotting paper.  

It is distinctly open to question whether unseen examinations disadvantage certain ethnic or socioeconomic groups – what conceivable explanation could there be for such a finding? – and even if you were to replace them with something else, you’d need some robust data to show that the replacements would be an improvement. The vagueness of these proposals – open-book? take-home? examine in interpretive dance? – certainly doesn’t seem to imply that such data exists.  

Open-book papers under exam conditions, in certain subjects, may have a useful place in assessment: the ability to rote-learn quotations, for instance, isn’t the most important life-skill and it doesn’t tell an examiner much about your analytical and reasoning abilities. But it’s hardly a magic bullet to solve disparities between middle-class students and those from less advantaged backgrounds. 

University lecturers are reporting epidemic levels of AI use in student essays, while detecting plagiarism and corner-cutting is getting harder and harder and taking up more and more of academics’ time. So in an age when cheating with ChatGPT is coming to be one of the biggest problems in higher education, the idea of ‘take-home’ papers being the solution to anything at all is just bizarre. 

At least with the knowledge that your finals will be unseen papers completed under exam conditions, there’s one remaining impediment to students being launched on the world of work with creditable-looking qualifications and no knowledge or skill whatever. Knowing you’ll have to sit finals remains a disincentive to letting AI write all your essays for you. The students who let ChatGPT take their courses for them, under the present system, will at least be found out in the end. If ChatGPT can take their finals for them too, the jig’s up.  

That dismaying vagueness applies, too, to the language around these proposed changes. A spokesman for the Office for Students is quoted as suggesting the ‘evidence that current assessment models may not be fair’ is that:

We encourage universities to consider whether their assessments are working properly for all students because we know that some students are more likely to attain lower grades than their peers, even when their prior academic performance is the same.

The first half of that statement – though no doubt the OfS has some stats that will support it – seems very sweeping. It’s remarkable if it’s true. But you wonder: prior academic performance assessed how? As for the leap to the idea that a timed, unseen paper completed under examination conditions and marked blind can be ‘not fair’? As academics sometimes say: citation needed.  
