A Truth, a Lie, and a Blurry Line

Explore the problems and possibilities posed by synthetic images created with generative artificial intelligence, focusing specifically on the relationship between photography and reality.

LEARNING OBJECTIVES

Gain an understanding of the history of photographic manipulation and its impact on public perception 
Critically analyze and verify the authenticity of photographs by employing research techniques and recognizing the ethical implications of manipulation in various contexts
Apply this knowledge to create and evaluate synthetic images, remaining critical of biases in the models while avoiding the perpetuation of implicit biases


PRE-WORK

Ensure that students’ text-to-image generators of choice are not locked behind a paywall and do not require logins. Some campuses may have access to a specific tool through an enterprise account, such as Adobe Firefly, Google Gemini, Microsoft Designer, Canva, or OpenAI’s DALL-E 3. Stable Diffusion 3 and Craiyon are publicly available options that do not require accounts. Having a variety of tools on hand works well in case a website is down or slow due to high traffic.
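For instructors who are comfortable with Python, the sketch below shows one way to run an open text-to-image diffusion model locally so the activity does not depend on a hosted service being available. It is a minimal example that assumes the open-source diffusers and torch packages are installed; the checkpoint name is only an illustrative, publicly hosted Stable Diffusion model, and the prompt and output filename are placeholders, so substitute whatever compatible checkpoint your campus prefers.

    # Minimal local text-to-image sketch (assumes: pip install diffusers transformers torch).
    # The checkpoint below is an example of a publicly hosted Stable Diffusion model;
    # swap in any compatible checkpoint available to your campus.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,  # use torch.float32 if running on CPU
    )
    pipe = pipe.to("cuda")  # or "cpu" if no GPU is available (much slower)

    # Prompts can borrow period-specific vocabulary, as suggested in the instructions below.
    prompt = ("1890s studio portrait of an unidentified railway engineer, "
              "albumen print, sepia tone, soft focus, scratched glass plate")
    image = pipe(prompt).images[0]
    image.save("synthetic_archival_portrait.png")

Running a model locally sidesteps paywalls, logins, and traffic slowdowns, but it requires a reasonably capable machine, so it is best treated as a backup rather than the default classroom workflow.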


INSTRUCTIONS

  1. Begin by providing students with a brief overview of the history of photographic manipulation techniques, such as spirit photography, double exposure, hand coloring, and compositing.
    • Display and discuss examples of composited images from disciplines such as art, journalism, and science, including their ethical considerations.
    • Prompt students to consider how these manipulations reveal or obscure the “truth” and why context matters.
  2. This historical framing lays the foundation for discussing how images are created with text-to-image generators, paying special attention to diffusion models, the architecture behind most of today’s tools. Students are then presented with a series of archival photographs and captions and asked to verify their veracity. Some of these images are authentic with factual captions, some are real pictures with false captions, and others are completely fabricated.
  3. The instructor then demonstrates how to use Google reverse image search as the images are reviewed. Special care is taken to discuss the pitfalls of relying on ableist and ineffective analysis techniques, such as pointing out disfigured faces, hands, or bodies, and to focus instead on elements like fabric textures, background objects, or historical inaccuracies.
  4. Now it’s the students’ turn to synthesize and analyze their own fake historical photographs. Students work in small groups to create a set of three captioned images: (1) a real photograph with an authentic story, (2) a fake photograph with a fabricated story, and (3) a real photograph with a misleading story. Example images:

    Left (completely real), Center (completely fake), Right (real image, misleading narrative)
    • To source their real photographs, students are required to use trusted archives such as the image galleries of the Smithsonian, the Library of Congress, the Getty Center, or other special collections databases.
    • Encourage students to select lesser-known or entirely unknown historical figures, with particular attention to communities that might be underrepresented in such archives.
    • When students create their fake images with AI text-to-image tools, share a list of terms for historical photographic techniques to help them craft more effective prompts that will, ideally, generate convincing photographs from a particular time period.
    • Once they have gathered a set of three photos (two real and one fake), students create short Wikipedia-style captions describing the people in their images. They may use tools like ChatGPT to draft these descriptions, but it is not required.
  5. When this portion of the activity is complete, students share their images and captions through a collaborative presentation tool such as Google Slides, Zoom Whiteboard, or FigJam.
  6. As a class, students practice their research skills to verify, debunk, and correct each other’s images and captions. You might conduct this portion as a small-group activity in which groups pair off, exchange their sets of images, and work together to determine which images are real and which are AI-generated.
  7. Once students have accurately identified the real, fabricated, and misleading content, pose some of the following discussion questions to facilitate critical reflection:
    • Was it easy to distinguish between the real and AI-generated photographs? Why or why not?
    • How do historical and contemporary photographic manipulation techniques impact our understanding of reality? How can the context in which an image is presented alter its perceived truthfulness?
    • In what ways can synthetic images created with generative AI both reveal and obscure the truth? How might biases in AI models affect the generation and interpretation of synthetic images?
    • What are the ethical implications of using AI-generated images in various fields such as journalism, science, and art? What responsibilities do creators and consumers of synthetic images have in preventing misinformation?
    • How can we ensure that the techniques we use to verify the authenticity of an image evolve in tandem with advancements in AI and photographic manipulation?
