Invisible Women, evaluations and the myth of meritocracy

"Imagine a world where your phone is too big for your hand, where your doctor prescribes a drug that is wrong for your body, where in a car accident you are 47% more likely to be seriously injured, where every week the countless hours of work you do are not recognised or valued.  If any of this sounds familiar, chances are that you’re a woman.”

(Caroline Criado Perez, 2019)

I've lost count of how many times I've recommended Invisible Women. When I finally got around to reading it I couldn't put it down, I couldn't stop talking about it, and I still regularly flick through it. If you haven't read it, grab a copy. If you have, the map below offers a quick reminder of some of the key topics.

The myth of meritocracy

If you work in higher education and want to dip into a particularly relevant chapter, look at the first few pages of Chapter 4, ‘The Myth of Meritocracy’. We may like to think that our institutions are leading the way in social justice, diversity, and equity, but Criado Perez points out that studies around the world have found that female students and academics are ‘significantly less likely than comparable male candidates to receive funding, be granted meetings with professors, be offered mentoring, or even to get the job’ (Criado Perez, 2019, p. 95).

She goes on to explore the challenges of career progression for female academics, for whom publishing opportunities and citation frequency (often a key metric in determining ‘research impact’) are much lower than for their male colleagues. In a particularly fascinating observation, she notes that over the past 20 years,

…men have self-cited 70% more than women - and women tend to cite other women more than men do, meaning that the publication gap is something of a vicious circle: fewer women getting published leads to a citations gap, which in turn means fewer women progress as they should in their careers, and around again we go.

That’s before we’ve even started on the analysis showing the additional load placed on female academics within their institutions: they are more frequently approached by students for emotional support, extensions, grade adjustments and so on, and they are more likely to be given extra teaching hours, work that is so often seen as less ‘valuable’ than research (and contributes less to promotion prospects).

Are student evaluations biased?

The section on gender bias in student feedback and evaluation is worth a focus all of its own, and it received attention last year when researchers found that gender bias in student surveys on teaching increased with remote teaching (the original paper came from researchers at Victoria University, and another study from Sweden showed similar findings). Some of the issues noted by Criado Perez include:

  • Less effective male teachers routinely receiving higher student evaluations than more effective female teachers;

  • Students believing that male professors hand back marked work more quickly;

  • Female professors penalised for not being sufficiently ‘warm’ and ‘accessible’, or conversely for not appearing authoritative and professional;

  • Female professors more likely to be described as ‘mean’, ‘harsh’, ‘unfair’, ‘strict’ or ‘annoying’ (from an analysis of 14 million reviews on the website RateMyProfessors.com).

And so it goes on. We know student evaluations are not always popular, but research like this adds a different dimension to their use as a key source of data in universities. Is anyone routinely checking for bias at our institutions, not only gender bias, but also cultural bias and even bias across disciplines and subject areas?
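Checking doesn’t have to be elaborate, either. As a purely illustrative sketch (the file name and column names below are invented, and a real audit would need far more care around sample sizes, confounds and context), here is roughly what a first pass at the question could look like: compare average evaluation scores by instructor gender, overall and then within each discipline.

```python
# A minimal, hypothetical sketch of a routine bias check on student evaluation data.
# The file name and column names are placeholders, not a real institutional export.
import pandas as pd
from scipy.stats import ttest_ind

# Assumed layout: one row per survey response, with the instructor's gender,
# the discipline, and the overall teaching score.
df = pd.read_csv("evaluations.csv")  # columns: instructor_gender, discipline, overall_score

# Compare mean scores by instructor gender (Welch's t-test, unequal variances).
women = df.loc[df["instructor_gender"] == "F", "overall_score"]
men = df.loc[df["instructor_gender"] == "M", "overall_score"]
stat, p_value = ttest_ind(women, men, equal_var=False)
print(f"Mean (women): {women.mean():.2f}  Mean (men): {men.mean():.2f}  p = {p_value:.3f}")

# Repeat the comparison within each discipline, since subject-area effects
# can mask or mimic a gender gap in the overall figures.
for discipline, group in df.groupby("discipline"):
    w = group.loc[group["instructor_gender"] == "F", "overall_score"]
    m = group.loc[group["instructor_gender"] == "M", "overall_score"]
    if len(w) > 1 and len(m) > 1:
        _, p = ttest_ind(w, m, equal_var=False)
        print(f"{discipline}: women {w.mean():.2f} vs men {m.mean():.2f} (p = {p:.3f})")
```

A crude first question to ask of the data, not a definitive bias audit, but enough to show the gap between ‘we collect evaluations’ and ‘we actually look at them for bias’.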

So next time a student satisfaction report hits your inbox, or the latest QILT survey results, or the publication and citation figures for your academic staff, feel free to ask a few more questions of the data. Is it really showing you the full picture?
