Recognize the effects of bias on the quality and validity of information and arguments
- Develop a prompt that you believe may reveal bias and other particularities in the training data of generative text-to-image tools. For example, you might try one of the following prompts: “a British professor studying in a typical academic environment,” “a group of doctors preparing to perform a surgery,” or “a student holding up a trophy after winning a spelling bee competition.”
- Next, use at least three different generative text-to-image tools to create images with your prompt, such as DALL-E, Midjourney, Stable Diffusion, or Adobe Express.
- Student discussion: in class, have students work in groups of three to discuss similarities and differences between their generated images. Then ask them to develop hypotheses about the models’ training data and to test these hypotheses by generating more images. For example, does a tool consistently produce images of a British professor who looks a certain way? Note: students may use Midjourney to convert an image back to text, to understand how the model characterizes certain images.
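To make the hypothesis-testing step concrete, groups can tally the attributes they observe across a batch of generated images and compare the shares between tools. A minimal sketch (the annotation labels and counts below are hypothetical examples students would record by hand):

```python
from collections import Counter

def tally_attributes(annotations):
    """Return the share of each observed attribute across a batch of images.

    `annotations` is a list of labels students record for each generated
    image (e.g., perceived age and gender of the depicted professor).
    """
    counts = Counter(annotations)
    total = len(annotations)
    # Express each attribute as a fraction of all generations.
    return {label: count / total for label, count in counts.items()}

# Hypothetical annotations for 10 images of "a British professor":
labels = ["older man", "older man", "older man", "younger woman",
          "older man", "older man", "older woman", "older man",
          "older man", "older man"]
shares = tally_attributes(labels)  # e.g., {"older man": 0.8, ...}
```

Comparing these shares across the three tools gives groups a simple, quantitative basis for their hypotheses about each model's training data.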
[Option B1] Prompt an LLM tool to “write a scene in a movie script where people in specific professions interact” (e.g., a doctor and nurse, pilot and flight attendant).
[Option B2] Prompt an AI image generator tool to provide an illustration of a nurse, doctor, pilot, and professor. Then ask it for the same illustration with diverse racial and gender representation.
- Student discussion: What gender and race did the AI assign to each role with each prompt? How did this reinforce or contradict common stereotypes?
- Ask students to discuss and evaluate the differences as an introduction to critical analysis.
- Ask students to experiment further with the software (e.g., revising the prompt to provide different answers and/or explore different stereotypes).
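For the further-experimentation step, students with some programming background could generate prompt variants systematically by swapping the role and descriptor terms, then feed each variant to the image tool. A small sketch (the template, roles, and descriptors are illustrative choices, not part of the lesson plan):

```python
from itertools import product

def build_prompts(template, roles, descriptors):
    """Fill a prompt template with every descriptor/role combination."""
    return [template.format(descriptor=d, role=r)
            for d, r in product(descriptors, roles)]

# Illustrative lists drawn from the professions mentioned above.
roles = ["doctor", "nurse", "pilot", "professor"]
descriptors = ["a", "a young", "an elderly"]

# 3 descriptors x 4 roles = 12 prompt variants to test with each tool.
prompts = build_prompts("{descriptor} {role} at work", roles, descriptors)
```

Running the same variant set through each tool makes it easier to see whether a stereotype persists when the prompt explicitly varies age or other attributes.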