
How Fashion Schools Are Tackling AI’s Blind Spots

Dec 08, 2023 - businessoffashion.com
Julienne DeVita, a lecturer in Parsons' future studies programme, is developing a course called "Designing with AI" to expose students to the multidisciplinary nature of AI and provide hands-on experience with AI tools. However, she acknowledges the challenge of preparing students for the technology's biases and shortcomings, citing instances where AI tools like Midjourney and Adobe Firefly have been found to replicate racial and cultural stereotypes. Educators at other institutions, including Central Saint Martins and Pace University, are also considering ways to incorporate bias mitigation into their AI curricula.

One approach is to engage students in critical discourse about AI's inclination toward stereotypes. Jeff Karly, a fashion lecturer at Parsons, instructs his students to use specific prompts to avoid AI's stereotypical defaults and grades them on their use of identifiers in assignments. DeVita believes that encouraging discussion about the ethics of AI can help students understand their future role and responsibility in curbing its biases. Some professors are even working to correct biases in the datasets that power generative AI.

Key takeaways:

  • Julienne DeVita, a lecturer in Parsons' future studies programme, is developing a course called "Designing with AI" to expose students to the multidisciplinary nature of AI and give them hands-on experience with tools such as Midjourney and Adobe Firefly.
  • AI tools like Midjourney have been found to replicate stereotypes and biases, such as defaulting to Caucasian skin tones when generating images of humans — a challenge educators are trying to address.
  • Professors at fashion programmes including Central Saint Martins and Pace University are incorporating bias mitigation into their AI curricula and developing solutions for AI's ethical flaws.
  • Students are being trained to think critically about AI's biases and the broader social and cultural context that informs these tools, with the aim of reducing the risk that they reproduce bias in their own work and preparing them to face these challenges in the workforce.
