Artificial Intelligence (AI) Resource Guide

About This Page

This section will provide an overview of resources to get you started.

Getting Started

AI & Diversity

Biases exist in societal structures and in human thought. Because AI and Generative AI systems are trained on data we provide, biases will appear in the work they do. It is our responsibility to be aware of these biases, just as we should be aware of our own. Bias can also be introduced during data labeling, Generative AI training, and the initial and continuing development of AI and Generative AI systems.

To learn more about bias in research, see our research guide.

Implicit and Explicit Biases:

  • Explicit Biases:
    • A person's conscious (within our awareness) prejudices or beliefs about a specific group.
  • Implicit Biases:
    • Unconscious (outside our awareness) attitudes or stereotypes that affect our understanding, actions, and decisions. These are harmful because they remain deeply ingrained even when we consciously reject them.

Biases in AI and Generative AI

  • Confirmation Bias:
    • We tend to trust information that confirms our existing beliefs and discard information that doesn't. Generative AI can rely too heavily on prior trends in its training data, reinforcing biases and stereotypes.
  • Selection Bias:
    • The data used to train Generative AI is not representative of the whole population, or the sampling of that data is not appropriately random.
  • Stereotyping Bias:
    • Generative AI reinforces stereotypes. Examples include a facial recognition system that does not identify people of color as accurately as white people, or a system that defaults to gender-stereotypical language.
  • Outgroup Bias:
    • Based on in-group favoritism: members of the "in" group are viewed favorably, while members of the "out" group are seen as less desirable. The "out" group tends to be people we do not identify with (socially, culturally, professionally, etc.). Generative AI is less capable of distinguishing between individuals outside the majority group, which can cause inaccuracy for minority groups.

Other biases exist beyond those listed here.

It is important to note that a major issue facing AI and Generative AI (besides academic integrity) is their potential harm to diversity, equity, inclusion, and belonging (DEIB).

Here are some examples:

  • Facial recognition: Systems are less accurate in identifying people of color, leading to potential misuse in law enforcement.
  • Recruitment software: Algorithms that prioritize certain keywords or educational backgrounds can unfairly disadvantage candidates from underrepresented groups.
  • Loan applications: Generative AI models may make biased decisions based on factors like zip code or credit history, perpetuating existing inequalities.

There is also a lack of diversity in the AI field. "The high tech sector employed a larger share of whites (63.5 percent to 68.5 percent), Asian Americans (5.8 percent to 14 percent) and men (52 percent to 64 percent), and a smaller share of African Americans (14.4 percent to 7.4 percent), Hispanics (13.9 percent to 8 percent), and women (48 percent to 36 percent)" (EEOC, 2023).

Other points to note:

  • Unintended consequences: Generative AI systems created with good intentions can still have unintended consequences for marginalized groups.
    • Example: An algorithm used to predict recidivism rates in criminal justice may disproportionately target people of color, even if it was not explicitly programmed to do so.
  • Misuse: Generative AI systems could be misused to further discriminate against marginalized groups.

There are also potential benefits:

  • Identifying and mitigating bias: Generative AI can be used to analyze data and identify potential biases in existing systems.
  • Promoting diversity: Generative AI can personalize education and training, making it more accessible to diverse learners.
  • Increasing inclusion: AI-powered tools can help people with disabilities access information and services.
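To make the "identifying and mitigating bias" point above concrete, here is a minimal Python sketch of one common fairness check, the demographic parity difference: the gap in favorable-outcome rates between two groups. The decisions and group labels are invented for illustration; in practice this check would run over a real model's outputs.

```python
# Hypothetical binary model decisions (1 = approved) and the
# protected-attribute group of each applicant. Invented data.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["X", "X", "X", "X", "X", "Y", "Y", "Y", "Y", "Y"]

def approval_rate(decisions, groups, target):
    """Fraction of favorable decisions for one group."""
    picked = [d for d, g in zip(decisions, groups) if g == target]
    return sum(picked) / len(picked)

rate_x = approval_rate(decisions, groups, "X")
rate_y = approval_rate(decisions, groups, "Y")

# Demographic parity difference: a gap near zero means the two
# groups receive favorable outcomes at similar rates.
parity_gap = rate_x - rate_y
print(f"approval rate X: {rate_x:.2f}, Y: {rate_y:.2f}, "
      f"gap: {parity_gap:+.2f}")
```

Demographic parity is only one of several fairness definitions (others compare error rates rather than approval rates), but even this simple gap makes disparities visible that raw accuracy numbers hide.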

As with DEI more broadly, AI and neurodiversity present both challenges and opportunities in how these tools are created and used.

Challenges:

  • Discrimination: Generative AI algorithms with preexisting biases can lead to discriminatory outcomes for neurodiverse individuals, for example in hiring algorithms or facial recognition systems.
  • Accessibility: Some interfaces are not designed with neurodiversity in mind, creating barriers for individuals with cognitive differences or sensory needs.
  • Lack of representation: Neurodiverse individuals are often underrepresented in the AI field, both as developers and as users.

Opportunities:

  • Diversity of thought: By bringing together diverse perspectives, including those of neurodiverse individuals, we can foster innovation and develop more inclusive and equitable AI solutions.
  • Accessibility tools: AI can be used to develop assistive technologies for neurodiverse individuals.
    • Example: AI-powered communication aids can help individuals with speech difficulties.
  • Increased awareness: As the conversation around AI, Generative AI, and neurodiversity continues, organizations and individuals are becoming more aware of the need for inclusion and accessibility. This can lead to positive changes in policies and practices.