Establishing an Impostor — Quantitative Assessment in Academia
A cluster of recent discussions has highlighted how quantitative assessment in academia risks systematically creating toxic, hyper-competitive environments, undermining our credibility in making decisions, and fundamentally undervaluing individuals.
Let us take a couple of ideas to explore this theme — cheating & academic misconduct; and academic metrics.
Cheating & Academic Misconduct
There are recent cases of students being accused of cheating in online tests. For those of us setting assessments in these chaotic pandemic times, discussion of the pivot to open-book examinations often centres on the students who cheat.
Please do not misunderstand me — cheating is wrong. This is academic misconduct, and should not happen.
Yet, I suspect we create an environment that encourages cheating, simply because we leave students with few other options. Furthermore, as academics we want to ‘cheat’ in evaluating our students through these simple numbers-based exercises.
Human beings will often chase shortcuts; it’s an embodiment of us finding the most ‘efficient’ way to do things. Cheating is at the extreme end of this game-playing, where people find methods to achieve higher marks in a quantitative assessment of their abilities.
These marks may mean something in regard to the assessment exercise, but often they are used beyond that, in a way that is incongruent with how they are constructed.
For example, a summative assessment of knowledge in a particular technical area may then be used to funnel students between programmes, to allocate merit-based scholarships, to secure a job in an unrelated field, or simply to survive in a time of great uncertainty.
In this example, are we surprised that some people, and perhaps more people ‘than usual’, are cheating? Have we simply stressed our cohorts of students too much?
Of course, each case of cheating has an individual story. I’ve investigated many individual cases. Each investigation is unpleasant, deeply personal, and uncomfortable. As the academic accusing the individual of plagiarism, or more, I can feel virtuous hiding behind our academic conduct pledge. As a course director, I could report the number of incidents with dread (or pride) at our end of year meetings.
Sometimes cheating, and academic malpractice, is simply not knowing what the rules of the game are. We have also turned the entirety of the University experience into the game. We may have failed to provide routes to educate beyond this — and to highlight why better practice exists. We know that better practice is rewarded in the long term, and this is seen with formative (i.e. along the way, forming knowledge) assessment exercises.
In cases of summative testing, it is less clear and may be too late. I suspect this is also why students may be especially driven towards cheating, particularly if marks ‘achieved’ are used in ways that are disconnected from their intended purpose.
In studying and considering pathways through academia, I remain staggered at the reliance on grades as ‘gates’ within the academic system (and beyond). These grades are used devoid of all meaning, apart from highlighting ‘excellence’, and this is found with follow-on courses, access to scholarships, job recruitment and more.
In many instances, the hyper-pressure to achieve and to be ‘the best’ is enshrined in the worth of the recruiting entity. Fundamentally, the assessors cheat in their evaluation of the human being they wish to recruit, simply by relying on the blunt instrument of a grade.
Thriving as a student (or person) in academia is not fair or equal, and I’d argue that the population of people who are simply surviving is even less balanced.
This imbalance is worsened in these unusual times, and perhaps we should also reflect that the re-prioritisation of time and energy, and our pivot (as instructors and students) towards new technologies, creates new ways and new motivations for people to cheat.
In creating this ‘best of the best’ culture, we disproportionately reward cheaters.
We also foster a continual one-up-person-ship between peers, where these hyper-competitive environments again encourage people to engage in poor academic practice. Instructors see this day to day — there is a continual ‘nag’ about ‘that extra mark’ — when deep down, from an educational perspective, we know that the totality of understanding is lost within the granularity, subjectivity, and pure chance of our marking systems.
In the overall system of higher education, we are forced to mark because we must provide a grade, because that’s ‘how we add value’ — reinforcing the cycle, and cheating at embedding knowledge, rewarding curiosity, and providing skills training.
As our students progress into research careers, they move from a hyper-competitive and quantitative environment into a world that is more uncertain, where answers remain unknown.
Assessment practice dramatically shifts into a ‘good enough — not good enough’ culture for formal assessments, and meanwhile they are left with a research landscape where ideas are complex and unknown.
Researchers scrabble in the dark here, and look for ‘who to trust’. This is where citation worship comes into its own. Recruitment, retention, and promotion committees struggle in evaluating the ‘quality’ of a human being with regard to their potential to do a wide-ranging and poorly defined job (show me an academic job description that makes sense).
These struggles lend themselves to the insertion of (false) objectivity through the use of poorly formed metrics. In essence, we often find ourselves outsourcing our decision-making processes to third parties, be they an improper tracking of publication records or the (societally-and-institutionally) biased use of teaching evaluations.
Especially in STEM, we are trained to ‘trust the numbers’, but we know that there remains uncertainty and context for any metric. Fundamentally, this is why we write a paper to tell a story about our observations, and we do not simply publish a table of data.
The same ‘story telling’ aspect must be true of the human beings who are evaluated via peer review, e.g. for academic fellowships, promotion and more.
Yet, we must also be aware that these stories will be tainted by the biased brushes of the environments in which we each exist. The misogynistic, cis-heteronormative, ableist, and (in the global north) broadly English-speaking, white, European-colonial landscape exists both within and outside the academic ivory tower.
Generation of the Impostor
In all of this, the impostor runs wild.
Yes, there may be literal impostors in our midst, but I would suggest these are few and far between.
Many more folk are left ‘feeling like impostors’ — i.e. paralysed by impostor syndrome — as there is too much to excel in, too much randomness, bias and chance, and not enough resource to enable people to survive on their own merits. In an ideal world, this metrics-based achievement would be based upon skills given fair opportunity, but the world is fundamentally flawed due to inequity.
Linking our themes together, how do we come full circle, from an incident of alleged cheating to the evaluation of academic skills and know-how?
In both cases, people try to cut corners due to lack of resources, lack of focus, and lack of skill in navigating complex systems. It no longer feels reasonable to fail your way to success, as the world moves faster than we can manage, and we do not have the courage or foresight to stand proud simply as ourselves — and in many cases we are never given this opportunity.
This uncertainty is toxically fuelled by an unquestioned thirst to be ‘better than our peers’ (which is a peculiar phrasing if you pause to think about it). Hyper competition, and losing sight of the importance of knowledge, fair judgement, and more is enshrined at an early stage in our careers.
This starts with the academic goldfish bowl of improper assessment in an academic ecosystem, which begins (in the UK) before University, but it is further enshrined and established as the way things are. Feedback is often distilled into simple marks that sate our self-worth, but fail to address a more holistic understanding of our learning and development needs.
More complex feedback may be provided, but how much is that taken to heart? How much of that is translatable beyond the confines of the individual who is being evaluated, and how much of that is shared honestly with others to enhance our (individual and collective) development?
This leads us towards more ‘senior’ stages in our careers. We cheat at weighing up the competition, and ourselves, with our simple numbers, as we are never sure enough in the ‘soundness’ of our own decisions. This is driven in part by the need for a rationale that translates into a narrative fit for others, where we simplify complexity.
For example, we may weigh up individuals via the total (physical) weight of papers in their publication history, or perhaps their grant income; we count our citation churn, or we peer at our teaching evaluations. The list continues. Fundamentally, each of these metrics is distorted by the society we live in, and they are inequitable (though they can be useful, sometimes, perhaps).
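As an aside, even the most ‘objective’ of these metrics compresses an entire record into a single figure. The h-index (a widely used citation metric, offered here as my own illustration rather than something discussed above) makes the point neatly: a researcher has index h if h of their papers have at least h citations each, and two very different careers can collapse to the same number.

```python
def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of per-paper citation counts.

    A researcher has index h if h of their papers each have
    at least h citations.
    """
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this paper still 'supports' an index of its rank
        else:
            break
    return h


# Two very different publication records, one identical number:
print(h_index([100, 90, 3, 2, 1]))  # 3 — two landmark papers, then a tail
print(h_index([3, 3, 3]))           # 3 — three modestly cited papers
```

The collapse of such different records to the same value is exactly the loss of ‘story’ that peer review is meant to restore.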
Time and time again, we hear how key decisions are taken out of our hands and justified through these simple metrics. Each of which can be gamed, cheated and distorted beyond recognition, especially in the hyper competitive world in which we live.
And yet, we’ve been trained to trust the numbers, as after all, the numbers would never be impostors, right?
p.s. Credit to Dr Jo Sharp who informed me that impostor was spelt with an ‘o’, because as a dyslexic academic I genuinely had no idea 🙃.
Dr Ben Britton is an academic Material Scientist and Engineer, with other hobbies and interests. When he’s not doing that, he’s busy validating his ideas by refreshing the clicks, citations and reads of his online activities and scholarly outputs.
If he has time beyond these core business activities, Ben can be found composing thoughts into 280 characters or less as @bmatb.