The author argues that the widespread adoption of LLMs could have massive consequences, particularly as they are used in high-stakes matters such as housing decisions, loan approvals, and criminal proceedings. The author calls for investigations by Congress, the EEOC, HUD, the FTC, and other bodies, and suggests that LLM manufacturers should recall their systems until they find an adequate solution to this problem. The author has also started a petition on change.org to address the issue.
Key takeaways:
- A new paper from the Allen Institute for AI, Stanford, and others reveals shocking results about covert racism in large language models (LLMs).
- The study found that while the systems rarely display overt racism, they display a high level of covert racism, especially when prompted in African American English.
- The widespread adoption of LLMs could have massive consequences, particularly because scaling the models up makes the covert racism worse.
- The author suggests that LLM companies should recall their systems until they find an adequate solution to this problem, and encourages Congress and other bodies to investigate.