One approach is to engage students in critical discourse about AI’s inclination toward stereotypes. Jeff Karly, a fashion lecturer at Parsons, instructs his students to use specific prompts to counter AI’s stereotypical defaults, and grades them on how they use identifiers in their assignments. DeVita believes that encouraging discussion about the ethics of AI can help students understand their future role and responsibility in curbing its biases. Some professors are even working to correct biases in the datasets that power generative AI.
Key takeaways:
- Julienne DeVita, a lecturer at Parsons’ future studies programme, is developing a course called “Designing with AI” to expose students to the multidisciplinary nature of AI and give them hands-on experience with AI tools such as Midjourney and Adobe Firefly.
- AI tools like Midjourney have been found to replicate stereotypes and biases, such as defaulting to Caucasian skin tones when generating images of humans — a challenge educators are trying to address.
- Professors at fashion programmes including Central Saint Martins and Pace University are incorporating bias mitigation into their AI curriculum and developing solutions for AI’s ethical flaws.
- Students are being trained to think critically about AI’s biases and the broader social and cultural context that informs these tools, with the aim of reducing the risk of them reproducing bias in their work and preparing them to face these challenges in the workforce.