The language you use to describe candidates is probably revealing more than you think
Implicit bias training has gone out of fashion in recent years, and for good reason. Too many employers used to believe that simply making their staff aware of this phenomenon (whereby you subconsciously attribute certain qualities to an individual member of a social group) would act as a magic wand to solve all their issues around Diversity, Equity and Inclusion (DEI). Yet the evidence for implicit bias training having a meaningful impact is strikingly scant.
But, whilst implicit bias training has failed us, regular implicit bias audits are a vital duty for anyone with hiring responsibilities.
Our company, Society, is an executive search firm deeply committed to DEI. We care about our actions and our culture as an employer, and we also obsess about getting progressively better DEI outcomes for our clients. For that reason, we are always asking ourselves how the implicit bias phenomenon manifests itself in our headhunting process.
Recently we have been looking at how we describe candidates in writing, and we have made some remarkable discoveries. We believe that the lessons we are learning could be useful across our industry, and within employers themselves.
How our experiment worked
We decided to look in detail at the two-page written interview notes or ‘commentaries’ that we produce when recommending candidates for a final shortlist.
To start with, we took a selection of recent commentaries written by different colleagues across the business. We ensured that they covered a good spread of sector practices and levels of seniority, and that they were split equally between male/female candidates and between white/BAME candidates. I should note that we use the term BAME (Black, Asian, and Minority Ethnic) here as a practical necessity, but are fully aware that it is a reductive label.
Next, we removed the candidates’ names, and deleted the sections about their current employer, career history, interest level, and remuneration expectations. This left only the sections about their suitability and their personal style.
Then we de-gendered their pronouns, changing ‘she/her/hers’ or ‘he/him/his’ to ‘they/them/their’ in every case.
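As a rough illustration, the mechanical part of that substitution can be sketched in a few lines of Python. This is a hypothetical sketch, not our actual tooling, and it deliberately leaves the hard cases to a human pass:

```python
import re

# Hypothetical sketch of the de-gendering step, not production tooling.
# "her" is mapped to "their" here, though as an object pronoun it should
# become "them" -- ambiguous cases like that, and verb agreement
# ("she is" -> "they are"), still need a human pass.
PRONOUN_MAP = {
    "she": "they", "he": "they",
    "her": "their", "him": "them",
    "his": "their", "hers": "theirs",
}

_PATTERN = re.compile(r"\b(" + "|".join(PRONOUN_MAP) + r")\b", re.IGNORECASE)

def degender(text: str) -> str:
    def swap(match: re.Match) -> str:
        word = match.group(0)
        neutral = PRONOUN_MAP[word.lower()]
        # Preserve capitalisation at sentence starts.
        return neutral.capitalize() if word[0].isupper() else neutral
    return _PATTERN.sub(swap, text)

print(degender("She impressed the panel, and his questions sharpened the discussion."))
```

Even a simple pass like this makes clear why we then re-read every commentary by hand: singular "they" changes verb forms, and "her" can be either possessive or object case.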
Finally, once this collection of anonymised, stripped-down, and (seemingly) de-gendered commentaries was prepared, we distributed them across the business. Each colleague in the firm was asked to read at least six commentaries and to answer the following questions:
- What gender do you think this person is? (Options: Female, Male, Unsure)
- What ethnicity do you think this person is? (Options: BAME, White, Unsure)
To avoid the results being warped by particular individual perspectives, we ensured that each document had several pairs of eyes on it. Together, we read through a total of 84 commentaries, so that we would have a meaningful sample size.
What we learned
Our headline results were pretty striking:
- We could predict the gender of the candidates with 56% accuracy.
- We could predict the ethnicity of the candidates with 62% accuracy.
That’s shocking. Because the sample was split equally by gender and ethnicity, random guessing would have scored around 50% on each question, so both figures sit meaningfully above chance. It suggests that powerful clues to attributes that should be utterly immaterial to someone’s suitability for a job were encoded within our use of language.
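Accuracy figures like these are simple to compute. A minimal sketch, assuming (the article doesn’t specify this) that ‘Unsure’ answers are excluded from the calculation rather than counted as wrong:

```python
# Minimal sketch of scoring reviewer predictions against true attributes.
# Assumption (not stated in the article): "Unsure" answers are excluded
# from the accuracy calculation rather than counted as incorrect.
def accuracy(records):
    """records: list of (actual, predicted) pairs."""
    scored = [(a, p) for a, p in records if p != "Unsure"]
    correct = sum(1 for a, p in scored if a == p)
    return correct / len(scored) if scored else 0.0

# Made-up example records, for illustration only.
gender_calls = [
    ("Female", "Female"), ("Male", "Female"),
    ("Male", "Male"), ("Female", "Unsure"),
]
print(f"{accuracy(gender_calls):.0%}")  # 2 of the 3 non-Unsure calls are correct
```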
“Wait”, we said to ourselves. “What if our findings were skewed by one or two particular colleagues writing in a particularly obvious way?” No, that wasn’t it. We checked back through the results, looking at those commentaries where groups of colleagues had made predictions with 90%+ accuracy. What we found was that every consultant in the business had at least one document they’d written amongst that pile. So that led to another inescapable conclusion:
- This is a systemic issue, not just a case of a few rogue outliers.
And, if this afflicts us, as a very DEI-aware firm, then we can reasonably suppose that others will be equally or more susceptible.
Then we got really curious and asked ourselves what would happen if we applied a different lens, looking at the results based upon the gender and ethnicity of our own colleagues. Here’s what we discovered:
- Although male and female colleagues were equally good at predicting candidate gender, our BAME colleagues could predict candidate ethnicity with 67% accuracy, versus only 59% for our white colleagues.
This suggests that not only are there some troubling clues encoded within our language, but that those with lived experience of particular attributes have better developed antennae for detecting them.
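Slicing the same scoring by a reviewer attribute is equally straightforward to sketch. Again this is hypothetical, with made-up records, and ‘Unsure’ answers excluded by assumption:

```python
from collections import defaultdict

# Hypothetical sketch: accuracy per reviewer group, with "Unsure" excluded
# (an assumption -- the article doesn't say how Unsure answers were scored).
def accuracy_by_group(records):
    """records: list of (reviewer_group, actual, predicted) triples."""
    tallies = defaultdict(lambda: [0, 0])  # group -> [correct, counted]
    for group, actual, predicted in records:
        if predicted == "Unsure":
            continue
        tallies[group][1] += 1
        if predicted == actual:
            tallies[group][0] += 1
    return {g: correct / counted for g, (correct, counted) in tallies.items()}

# Made-up example records, for illustration only.
records = [
    ("BAME reviewer", "BAME", "BAME"), ("BAME reviewer", "White", "White"),
    ("White reviewer", "BAME", "White"), ("White reviewer", "White", "White"),
]
print(accuracy_by_group(records))
```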
What we recommend
There’s not nearly enough room in this short article to unpack all the lessons we have learned from this exercise. Suffice to say that we have fundamentally reassessed how we structure and prepare our commentaries, and how we use language. For everyone else, we make the following recommendations:
- Try this exercise yourself. Colleagues will hopefully be shocked into action by the results.
- Challenge your search partners on whether they are regularly auditing their own practice in this way. If they aren’t, don’t work with them.
- Download a copy of our free Inclusive Recruitment Toolkit, which contains more practical and innovative ways to improve DEI in your hiring process.