This post is an annotated bibliography. Rather than list citations in alphabetical order, I’ve organised them under subheadings framed as research questions. In doing so, I provide a possible framework for a research project.


An alphabetised, non-annotated bibliography corresponding to this post is available for download here:


Subheadings

About the overarching EDI initiative in Canadian universities

Canadian Universities, Implicit Bias & EDI, examples

Why are some reluctant to criticise EDI programmes?

Hunting Implicit Biases (e.g. Anonymous Reporting and Bias Incident Response Teams)

Questions and Controversy Around the Accuracy of Tests, the Efficacy of Training, and the Autonomous Effects of Each

More on Implicit Bias from Previous Posts on this Blog


*Note that I use implicit bias and unconscious bias interchangeably.

Search any Canadian university and you’ll discover that implicit bias tests and training are broadly regarded by those who design, direct, and implement EDI and EDI-related programmes as powerful tools for mitigating negative racial and gender stereotypes. However, not only have these EDI-involved people overstated their claims about implicit bias and its possible remedies, but they may well be wrong. I’m not purporting here to make a definitive case against implicit bias testing and training; I claim only that such worries are warranted.

One worry is that those involved in all facets of EDI and EDI-related programmes tend to ignore evidence that disconfirms their EDI-supportive research, thereby eschewing scholarly rigour.

“Commitment to equity, diversity and inclusion …Through these means the agencies will work with those involved in the research system to develop the inclusive culture needed for research excellence and to achieve outcomes that are rigorous, relevant and accessible to diverse populations [bolding mine].”

Hence their claims that EDI leads to “research excellence” might be a performative contradiction.

Recruiting, Best Practices, “Have those involved in the hiring process complete EDI training, including instruction on how to recognize and combat unconscious, implicit, overt, prejudicial and any other kinds of bias (e.g., see the “dirty dozen” explained in chapter 11 of The Equity Myth).

Mentoring, Best Practices, “Ensure mentors receive unconscious bias training and/or other EDI training as necessary (e.g., microaggressions, antiracism training).”


About the overarching EDI initiative in Canadian universities:


NSERC

EDI begins at the federal level under the granting agency for Canadian science and innovation research, the Natural Sciences and Engineering Research Council of Canada (NSERC),

“We work with universities, colleges, businesses and not-for-profits to remove barriers, develop opportunities and attract new expertise to make Canada’s research community thrive.”

NSERC, along with two other federal granting agencies, the Social Sciences and Humanities Research Council (SSHRC) and the Canadian Institutes of Health Research (CIHR), together form the Tri-agencies.

SSHRC

"The Social Sciences and Humanities Research Council (SSHRC) is the federal research funding agency that promotes and supports research and training in the humanities and social sciences.” 

CIHR

“The Canadian Institutes of Health Research (CIHR) is Canada's federal funding agency for health research. Composed of 13 Institutes, we collaborate with partners and researchers to support the discoveries and innovations that improve our health and strengthen our health care system.”

Tri-Agencies

About: “Collaborations between federal research funding organizations,” The tri-agencies, Canadian Institutes of Health Research, Government of Canada, https://cihr-irsc.gc.ca/e/46884.html, accessed January 6, 2023

Included with the Tri-agencies is the,

CFI,

“Canada Foundation for Innovation (CFI) …, a federally funded organization that enables this research, training and innovation through investments in state-of-the-art infrastructure.”

Grants and Awards: An overview of the grants and awards available under the EDI initiative, including a non-renewable Equity, Diversity, and Inclusion Institutional Capacity Building grant worth up to $200,000 per year for up to two years. (*corrected 25 February 2022)


A number of Canadian Colleges and Universities have received the full amount of the EDI Institutional Capacity Building Grant, including the University of Lethbridge and Lethbridge Community College:


The Dimensions project is a chartered EDI initiative, one of a number of similar international programs, and is supported by the Tri-agencies,

“The [Dimensions] program is the result of cross-country consultations to make it uniquely adapted to the Canadian realities.”

Dimensions Charter: 


The Canada Research Chairs (CRC) program is also a partner in the EDI initiative. 

The CRC site is too extensive to summarize here, but I draw your attention to the Bias in Peer Review Training Module, which is recommended by almost every Canadian university, if not all:

  • (If anyone has 10-15 minutes to complete the training module, please share your impressions in the comment section.)  

Canadian Universities, Implicit Bias & EDI, examples:

Note that most of these examples include links to the Harvard Implicit Association Test (IAT), also known as Project Implicit, as well as to the CRC training module.

1. University of British Columbia (UBC)

2. Simon Fraser University (SFU)

“These pages are the result of extensive collaboration between the SFU Library, the SFU EDI Administrative group, and other SFU stakeholders, and this work is ongoing.”

3. University of Calgary (U of C)

4. University of Alberta (U of A)

  • See subheading: Equity, Diversity, and Inclusion Training
"The University of Alberta provides training on equity, diversity and inclusion - including instructions on limiting the impact of unconscious bias to all individuals involved in the chair recruitment process. All members of selection/hiring committees must complete training on equity, diversity, and inclusion."
"Training is offered by the University of Alberta Equity, Diversity, Inclusion (EDI) group. Hiring committee members must contact the EDI group for information about upcoming sessions. The EDI group offers many helpful training courses regarding this issue but the most relevant training is "Introduction to Unconscious Bias." Each participant in selection/hiring committees should attend, at least, one such training session in order to meet this requirement."

5. University of Manitoba (U of M, UM)

6. University of Waterloo (Waterloo, UW)

Note: The Research Hub now requires a sign-in:

"The Inclusive Research Resource Hub is a cross-disciplinary document library of Equity, Diversity, Inclusion (EDI) and Indigenous Research resources to facilitate access for University of Waterloo faculty members, students and staff to incorporate EDI into research design and team planning. Topics range from resource guides to educational opportunities and institutional documents that support embedding EDI into research practices. Users need a uwaterloo.ca email to log in." 

7. Toronto Metropolitan University (TMU) Formerly: Ryerson University (Ryerson, RyeU, RU)

8. University of Toronto (U of T) 

"We are writing to inform you of exciting new online training on Unconscious Bias that will be available to all faculty, librarians, and staff at the University of Toronto."

9. Dalhousie University (Dal) 

  • Includes,
Harvard Implicit Association Test: Unconscious Bias Assessment Tool. This test measures implicit attitudes and beliefs that people are unwilling or unable to report. The test can be done to identify biases related to age, race, sexuality, skin-tone, gender, countries and weight.

10. Queen’s University (QueensU)

Unconscious Bias,
Sign in to begin training.

Enter your Staff or Student Number to identify yourself for the training. If you do not have a Staff or Student Number, you may identify yourself with an email address instead.


Why are some reluctant to criticise EDI programmes?

Some fear being labelled a racist, bigot, and/or Alt-Right for criticising EDI and EDI-related phenomena. In many cases, their fears are justified.  

The following excerpt from a Brandon University news article states, “We cannot be fooled by vague language that hides divisiveness and hatred.” Apparently they can be: “divisiveness and hatred” are themselves examples of vague language.

“No place is immune to objectionable and distasteful sentiments. We cannot be fooled by vague language that hides divisiveness and hatred. We are proud of the committed students, faculty and staff who stand together to support universal human rights.

We recognize that the hurtful or hateful actions of a small number of individuals can have an outsize effect on marginalized groups and we reiterate that white supremacy, racism, xenophobia, misogyny, hate speech and discrimination of all kinds have no place at Brandon University. We condemn it and it will not be tolerated.

We know that disturbing expressions can have emotional impacts that require care and attention. We remind our entire BU community that we have services here to support you.”

Also from Brandon University (BU), The BU Statement on Inclusion,

"Brandon University affirms an unwavering and unambiguous commitment to diversity, inclusion and universal human rights. We are stronger and richer together, and we celebrate the unique contributions brought to our community through everyone’s individual circumstances, perspectives and life experiences.

Around the globe, and occasionally here at home, we must sometimes face xenophobia and racism. This often masquerades as nationalism, pride, or concerns about cultural purity. Bigots may deliberately use vague language or misappropriate the struggles of marginalized groups to advance their offensive cause. Their language is couched in pretend innocence that is designed to convince the naïve and to provoke divisive reactions. We are not fooled. We condemn hate speech of all kinds.

The paradox of tolerance reminds us that no accommodations can be made for intolerance. Hate speech is not free speech. Prejudice is not pride. Bigotry is not up for debate.

These distasteful opinions are to be found everywhere, and the Brandon University campus is no exception..." 

Some Other Pressures Not to Criticise Implicit Bias

Lee Jussim outlines six reasons why implicit bias training is so popular, notwithstanding its ineffectiveness. Roughly,

1) Overstated claims at the outset of implicit bias research 

2) Implicit bias provides a simple explanation for continuing inequality (especially when appealing to ‘hidden forces’) 

3) Virtue Signalling 

4) It gives activists a veneer of scientific credibility 

5) PR and Insurance 

6) Consultants make big bucks 

Hunting Implicit Biases: 

This sketch is my own. Please credit me if you use it.


Institutions such as Queen’s University and the University of Toronto are collecting anonymous, self-reported information as evidence of recurring patterns of harassment, discrimination, and bias/hate incidents. Because such reports cannot be verified, they are non-falsifiable.

Bibliographer’s note. I’ve recently published a more in-depth analysis of anonymous reporting: Lindsay, Pamela. Policing Humour in Canadian Universities, Part 1 of 3, Universities Solicit Negative Reports from Students, Staff, and Faculty, Keeping an Eye on EDI, https://keepinganeyeonedi.ca/2023/07/12/policing-humour-at-canadian-universities-part-1-of-3/


I’ve included two references about the Bias Incident Response Teams (BIRTs, or BARTs) that have proliferated in the US:

“Executive Summary: Over the past several years, the Foundation for Individual Rights in Education (FIRE) has received an increasing number of reports that colleges and universities are inviting students to anonymously report offensive, yet constitutionally protected, speech to administrators and law enforcement through so-called “Bias Response Teams.” These teams monitor and investigate student and faculty speech, directing the attention of law enforcement and student conduct administrators towards the expression of students and faculty members…”

Questions and Controversy Around the Accuracy of Tests, the Efficacy of Training, and the Autonomous Effects of Each

Recall that I’m not purporting here to make a definitive case against implicit bias testing and training. The following is a far-from-exhaustive bibliography of popular articles and technical papers.

I’ve made a few of my own comments, but I’ve largely drawn on quotes that will give you the gist of the article or technical paper.


Abstract: In a prior publication, I used structural equation modeling (sic) of multimethod data to examine the construct validity of Implicit Association Tests. The results showed no evidence that IATs measure implicit constructs (e.g., implicit self-esteem, implicit racial bias). This critique of IATs elicited several responses by implicit social-cognition researchers, who tried to defend the validity and usefulness of IATs. I carefully examine these arguments and show that they lack validity. IAT proponents consistently ignore or misrepresent facts that challenge the validity of IATs as measures of individual differences in implicit cognitions. One response suggests that IATs can be useful even if they merely measure the same constructs as self-report measures, but I find no support for the claim that IATs have practically significant incremental predictive validity. In conclusions, IATs are widely used without psychometric evidence of construct or predictive validity.
  • Schimmack, Ulrich. “Invalid Claims About the Validity of Implicit Association Tests by Prisoners of the Implicit Social-Cognition Paradigm.” Perspectives on Psychological Science 16.2 (2021): 435-442. doi:10.1177/1745691621991860, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8167921/, accessed August 25, 2023

"Racial bias is a reality, Schimmack says, but the problem is too many discussions of the issue are based on research findings that rely on flawed IAT measures. For example, some argue implicit bias training is not useful because it doesn’t change IAT scores. But if IAT scores are not valid in the first place, they are not likely to be effective evaluation tools."

“But while implicit bias trainings are multiplying, few rigorous evaluations of these programs exist. There are exceptions; some implicit bias interventions have been conducted empirically among health care professionals and college students. These interventions have been proven to lower scores on the Implicit Association Test (IAT), the most commonly used implicit measure of prejudice and stereotyping. But to date, none of these interventions has been shown to result in permanent, long-term reductions of implicit bias scores or, more importantly, sustained and meaningful changes in behavior (i.e., narrowing of racial/ethnic clinical treatment disparities)."
"Even worse, there is consistent evidence that bias training done the “wrong way” (think lukewarm diversity training) can actually have the opposite impact, inducing anger and frustration among white employees. What this all means is that, despite the widespread calls for implicit bias training, it will likely be ineffective at best; at worst, it’s a poor use of limited resources that could cause more damage and exacerbate the very issues it is trying to solve.” 

"Yet hundreds of studies dating back to the 1930s suggest that antibias training does not reduce bias, alter behavior or change the workplace.(p48)"

Abstract: In this review, I provide a pessimistic assessment of the indirect measurement of attitudes by highlighting the persisting anomalies in the science of implicit attitudes, focusing on their validity, reliability, predictive power, and causal efficiency, and I draw some conclusions concerning the validity of the implicit bias construct.

"One question crucial to the metaphysics of implicit bias is whether the relevant psychological constructs should be thought of as stable, trait-like features of a person’s identity or as momentary, state-like features of their current mindset or situation (§2.4). While current data suggest that implicit biases are more state-like than trait-like, methodological improvements may generate more stable, dispositional results on implicit measures."

"Future research on epistemology and implicit bias may tackle a number of questions, for example: does the testimony of social and personality psychologists about statistical regularities justify believing that you are biased? "

"One noteworthy intersection of theoretical ethics with forthcoming empirical research will focus on the interpersonal effects of blaming and judgments about blameworthiness for implicit bias." 


Comment 2, Anon PhD: "I second much of Daniel Kaufman's comment [1], with one exception. The reply is often not just "move along" [nothing to see here, re: comment 1] but something much more pernicious, e.g. an implication that your rejection of the IAT is indicative of deeper moral flaws (I can cite relevant examples, though anyone who has been keeping up with this "debate" is surely familiar). positively, it would be nice to see a discussion among /professional/ philosophers regarding retraction norms for philosophical work. If, for example, a published piece of philosophy relies heavily upon discredited and/or retracted empirical work, presumably the philosophical work should also be discredited and/or retracted. Presumably one justification for retracting discredited work is that this norm incentivizes scholarly care and precision, two qualities that are conspicuously lacking in much of the philosophical work that incorporates the IAT."

Lopez, German. “For years, this popular test measured anyone’s racial bias. But it might not work after all,” Vox, March 7, 2017, https://www.vox.com/identities/2017/3/7/14637626/implicit-association-test-racism, accessed August 25, 2023

  • Lopez provides a thoroughgoing overview of the literature, researchers, and points of contention up to March 2017. 
  • Lopez raises a critical point. Most EDI university websites encourage visitors to take the Harvard IAT to determine whether they have an implicit bias. But taking the test once establishes nothing either way, whatever the results indicate.

"I saw a similar reluctance to criticize implicit bias among friends and colleagues. Taking the test, and buying into the concept of implicit bias, feels both open-minded and progressive."
"There’s little doubt we all have some form of unconscious prejudice. Nearly all our thoughts and actions are influenced, at least in part, by unconscious impulses. There’s no reason prejudice should be any different."
"But we don’t yet know how to accurately measure unconscious prejudice. We certainly don’t know how to reduce implicit bias, and we don’t know how to influence unconscious views to decrease racism or sexism. There are now thousands of workplace talks and police trainings and jury guidelines that focus on implicit bias, but we still we have no strong scientific proof that these programs work."
“A lot of folks see the IAT as a golden path to the unconscious, a tool that perfectly captures what’s going on behind the scenes and it’s not,” says Lai. “It’s a lot messier than that. The truth, as often, is a lot more complicated.”

"First, there is very limited [IAT] test-retest reliability (Gawronski et al., 2017). What this means is that the relationship is very low between an individual’s score taking an IAT at one time, and then repeating the test at a later time. What this also implies is that, either the biases aren’t stable over time, or the test doesn’t reliably measure what it purports to measure, or potentially both depending on how sceptical one is about the status of unconscious bias as a phenomenon."
"To the extent that the IAT might even detect unconscious biases, which from the earlier discussion is already under contention, the IAT doesn’t predictable discriminatory behaviours. This issue is particularly problematic for EDI training programmes for the reason that if the IAT doesn’t reliably detect unconscious biases, and doesn’t predict objective behaviours that might be assumed to be causally associated with harbouring unconscious biases towards a particular group, then it also cannot be used as an objective test of the efficacy of unconscious bias training."
"The general conclusions from this work are that there are deep problems with unconscious bias training (e.g., Carter et al., 2020; Dobbin et al., 2011; Noon, 2018). Several studies have examined the extent to which unconscious bias training itself leads to any objective changes in the work practices of those exposed to the training; this is critical to addressing assumption 4. The findings suggest that unconscious bias training had no impact at all (Behavioural Insights Team, 2020;Chang et al., 2019; Duguid & Thomas-Hunt, 2015). Worse still, it is also liable to producing backfiring effects, particularly if the training is made mandatory (Carter et al., 2020; Dobbin & Kalev, 2018)."
"By focusing on basic cognitive biases rather than specific social biases, can serve several useful functions, including decreasing inter-group tensions amongst those on DEI training methods. Here the evidence suggests that, in DEI training initiatives such as unconscious bias training, identifying a group that holds unconscious biases towards another group, can lead to backfiring effects, such as increased tensions between different groups."
"As this pilot study hopefully shows, people can vary with respect to several core interpretations of core concepts, such as bias. If DEI methods, such as unconscious bias training are to be used, then it is important to recognise that recipients of the training ought not to be treated as a homogenous group with similar attitudes and opinions towards the training or their views on biases and where and how they appear. Finally, given that the biggest disconnect is between the evidence base regarding unconscious bias training and the public’s view of its efficacy, clearly this needs to be addressed."
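The test-retest reliability problem quoted above can be made concrete with a minimal sketch. The reliability coefficient is simply the Pearson correlation between the same individuals’ scores on two test occasions; the scores below are invented for illustration, not real IAT data.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation: covariance over the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical IAT D-scores for the same eight people on two occasions.
session_1 = [0.62, 0.15, 0.48, -0.10, 0.33, 0.71, 0.05, 0.40]
session_2 = [0.20, 0.55, -0.05, 0.30, 0.10, 0.44, 0.62, 0.01]

# A measure treated as a stable trait is conventionally expected to show
# r of roughly 0.7 or higher; published race-IAT estimates are far lower.
print(f"test-retest reliability r = {pearson(session_1, session_2):.2f}")
```

If the two sessions produce a low correlation, then either the underlying bias is unstable over time or the test is not reliably measuring it, which is exactly the dilemma the quoted passage describes.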

Harvard Implicit Association Test (IAT), aka Project Implicit, https://implicit.harvard.edu/implicit/canada/, accessed August 25, 2023

  • If you take the test, please leave your impressions in the comment section.
  • Note that the IAT was never intended for one-off use by an individual to determine one’s implicit bias. But universities encourage people to take the test in just this manner.

Branson, Adam. “UK government follows US with ban on unconscious bias training,” Global Government Forum, December 16, 2020, https://www.globalgovernmentforum.com/uk-government-follows-us-with-ban-on-unconscious-bias-training/, accessed August 24, 2023

  • This article includes a link to the Unconscious Bias Training Report by The Behavioural Insights Team.
  • The report notes that some think diversity training might serve to “raise awareness” about biases. My worry is that “raise awareness” requires an indexical: raise awareness in whom and about WHAT? And having done so, what are the autonomous effects of this consciousness-raising, for better and worse? E.g. does raising awareness make some stereotypes more salient, thereby amplifying rather than mitigating them? This is the kind of worry that many activists tend to miss.
    • I suspect the apparent concession that unconscious bias training might have some usefulness in raising awareness is a species of virtue signalling.
  • The report is also available here:

 


Jussim is a little loose in the following excerpt — who vets a “deserving person” and by which criteria? But I take his point. The money can be better used. How? Perhaps by hiring another faculty member in a short-staffed department.

"Claiming the mantle of "science" for false claims and misinformation, no matter how earnest or well-intended is bad. Misinformation is one harm; opportunity costs are another. The time and money spent on implicit bias training could surely be better spent doing more constructive things...A university could do more to reduce inequality simply by taking that fee and creating a fellowship for a student from a low-income background or marginalized group. Then, at least, they would know for a positive fact that one deserving person was actually helped."

  • Singal, Jesse. “Psychology’s favourite tool for measuring racism isn’t up to the job,” The Cut, New York Magazine, https://www.thecut.com/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html , accessed August 25, 2023
    • This is a long article, so reserve 10-15 minutes of your time for the undertaking. It’s worth the read. And it’s worth noting that at the time EDI in Canada was getting off the ground, worries about the limits of implicit bias measures and training were already circulating.


The problem, as I showed in a lengthy rundown of the many, many problems with the test published this past January, is that there’s very little evidence to support that claim that the IAT meaningfully predicts anything. In fact, the test is riddled with statistical problems — problems severe enough that it’s fair to ask whether it is effectively “misdiagnosing” the millions of people who have taken it, the vast majority of whom are likely unaware of its very serious shortcomings. There’s now solid research published in a top journal strongly suggesting the test cannot even meaningfully predict individual behavior. And if the test can’t predict individual behavior, it’s unclear exactly what it does do or why it should be the center of so many conversations and programs geared at fighting racism.

Excerpt from the conclusion: "Nosek and Greenwald (2009, p. 375) note that “the most important considerations in appraising validity of psychological measures are those that speak to the measure’s usefulness in research and application”. Whilst there have been many concerns regarding the IAT’s veracity and usefulness (see Blanton et al., 2009; Krause et al., 2010; Mitchell & Tetlock, 2017; Oswald et al., 2015; Rae & Olson, 2018), there has been no clear estimate for the component of error variance in IAT scores. The present study has provided clarity on this issue, demonstrating that the IAT effect scores were comprised of over 80% combined random and systematic error variance, allowing little opportunity for trait ‘implicit attitudes’ to be revealed through the noise, and requiring significant statistical modifications and processing to obtain even population-level ‘insights into our implicit biases’. To put it simply, the IAT was shown to be inadequately honed to provide insights into our implicit biases and its ‘usefulness in research and application’ is questionable, if not at times, potentially misleading. The sheer magnitude of error variance has serious implications for the use and interpretation of IAT effect scores." 
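To see why 80% error variance matters, consider the classical attenuation formula from test theory: the correlation observable between a test and any criterion is capped at the true correlation times the square root of the product of the two measures’ reliabilities. The sketch below uses illustrative numbers only; the 0.20 reliability follows from the quoted error-variance estimate, while the other two figures are assumptions chosen for the example.

```python
import math

# Classical test theory sketch (illustrative numbers only):
# if ~80% of IAT score variance is error, reliability is about 0.20.
iat_reliability = 0.20        # 1 minus the error-variance proportion
criterion_reliability = 0.80  # assume a reasonably reliable behavioural criterion
true_correlation = 0.50       # even granting a substantial true association

# Spearman's attenuation formula: observed r = true r * sqrt(rel_x * rel_y)
observed_ceiling = true_correlation * math.sqrt(iat_reliability * criterion_reliability)
print(f"maximum observable correlation = {observed_ceiling:.2f}")  # prints 0.20
```

In other words, even a genuinely strong association would look weak through so noisy a measure, which is why the study above concludes the IAT is “inadequately honed to provide insights into our implicit biases.”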

"Using a novel technique known as network meta-analysis, we synthesized evidence from 492 studies (87,418 participants) to investigate the effectiveness of procedures in changing implicit measures, which we define as response biases on implicit tasks. We also evaluated these procedures’ effects on explicit and behavioral measures. We found that implicit measures can be changed, but effects are often relatively weak (|ds| < .30). Most studies focused on producing short-term changes with brief, single-session manipulations. Procedures that associate sets of concepts, invoke goals or motivations, or tax mental resources changed implicit measures the most, whereas procedures that induced threat, affirmation, or specific moods/emotions changed implicit measures the least. Bias tests suggested that implicit effects could be inflated relative to their true population values. Procedures changed explicit measures less consistently and to a smaller degree than implicit measures and generally produced trivial changes in behavior. Finally, changes in implicit measures did not mediate changes in explicit measures or behavior. Our findings suggest that changes in implicit measures are possible, but those changes do not necessarily translate into changes in explicit measures or behavior. (APA PsycInfo Database Record (c) 2019 APA, all rights reserved)"

Forscher, P. S., Lai, C. K., Axt, J. R., Ebersole, C. R., Herman, M., Devine, P. G., & Nosek, B. A. (2019). A meta-analysis of procedures to change implicit measures. Journal of Personality and Social Psychology, 117(3), 522–559. https://doi.org/10.1037/pspa0000160, accessed August 24, 2023
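The “|ds| < .30” in the Forscher et al. abstract refers to Cohen’s d, the standardized mean difference between conditions. A minimal sketch of the computation, with invented pre/post-intervention scores chosen to land in the “small effect” range the meta-analysis reports:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: difference between group means in pooled-standard-deviation units."""
    n_a, n_b = len(a), len(b)
    pooled_sd = (((n_a - 1) * stdev(a) ** 2 + (n_b - 1) * stdev(b) ** 2)
                 / (n_a + n_b - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical IAT D-scores: a control group and a group given a brief intervention.
control   = [0.45, 0.62, 0.38, 0.55, 0.61, 0.39]
treatment = [0.44, 0.60, 0.35, 0.52, 0.58, 0.39]

# Conventional benchmarks: ~0.2 "small", ~0.5 "medium", ~0.8 "large".
print(f"d = {cohens_d(control, treatment):.2f}")  # prints d = 0.19
```

An effect of this size means the score distributions of the two groups overlap almost entirely, which is why the meta-analysis treats such changes as weak, and why they need not translate into any change in behaviour.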


Tinna C. Nielsen and Lisa Kepinski argue that ‘awareness’ is not a reason to continue unconscious bias testing and training. In fact, awareness-raising is at best ineffective and at worst liable to create a backlash. 

Nielsen and Kepinski note that,

“Over-reliance on unconscious bias awareness training as ‘the solution’ has created a multi-billion dollar-a-year industry that is profiting from many thinking this approach will ‘fix the problem’. Yet often, the outcomes of these bias trainings are not effective and the problem persists. It may even get bigger!” 

These authors detail a number of reasons awareness backfires, such as Mental Overload where,

“Having to be consciously aware of the unconscious comes at the risk of creating mental overload, which has been proven to strengthen the impact of bias. Furthermore, when knowing (system 2) but not having the ability to act on that knowledge, it can paralyse us (system 1) and then we rely even more on default and biased behaviour. So, you see this creates a vicious circle.”

They suggest modes other than raising awareness for targeting and (re)-training the unconscious mind, which, on their view,

“steers people to make better choices. This ‘pushes’ (nudges) the unconscious mind in a non-intrusive way to change behaviour without taking away the freedom to choose something else.” 

One example of a nudge these authors give is the practice of anonymising candidates for a symphony orchestra by having them audition behind a screen, and removing any subtle hints to their identities, such as having the women remove high heels that would tellingly clack across the floor as they walk to their audition positions. (Bibliographer’s note: This nudge is akin to the call to leave out language in academic letters of reference that would identify an applicant as female, such as “nice.” But as an applicant, I might want to identify as female if, in order to game the system, I believe that doing so advantages me in a quota system.) 

*Bibliographer’s Note: Nielsen and Kepinski’s “nudges” still rest upon certain assumptions, e.g. that backlash to implicit bias testing results from people (‘old white males’, say) being pushed on their biases and, as in other literature, from a perceived threat to their privileges. 

Missing from explications of backlash is the possibility that some academics are responding as scholars/researchers simpliciter, worried about sound scholarship and research practices rather than feeling threatened or made uncomfortable by racial and other such negative stereotypes. In other words, the bias is against perceived intellectual sloppiness, not against race, gender, or any other designated disadvantaged group. 


More on Implicit Bias from Previous ‘Keeping an Eye on EDI’ Posts:

Open Science Framework OSF, Articles Critical of the IAT and Implicit Bias

Ethical Considerations of the Harvard Implicit Association Test. Are Canadian Universities Complying With These Guidelines?

“Mandatory Implicit Bias Training is a Bad Idea.” (Plus Some Examples from Canadian Universities)