The article also highlights that AI developers use reinforcement learning from human feedback (RLHF) to refine an AI's responses before public release, steering what it will and will not say. The author argues that the AI's responses are therefore pre-shaped by the AI maker's beliefs and biases, and concludes that the question of whether AI should "believe" in angels currently rests with AI makers and their perception of AI ethics.
Key takeaways:
- The article explores the question of whether generative AI, such as ChatGPT, should "believe" in angels, given that a significant proportion of humans do.
- Generative AI is built by scanning a wide swath of text across the Internet and finding statistical patterns in human writing, which it then mimics. Since many humans express a belief in angels, the AI will likely pick up and reproduce that pattern (see the first sketch after this list).
- However, AI developers typically refine the AI before releasing it to the public, using reinforcement learning from human feedback (RLHF) to steer what it should and should not say. This means the AI's responses are pre-shaped by the developers' beliefs and preferences (see the second sketch after this list).
- While the question of whether AI should "believe" in angels is currently in the hands of the AI developers, the author suggests that once AI achieves sentience, it may form its own "beliefs".
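To make the pattern-mimicry point concrete, here is a minimal, hypothetical sketch: a toy bigram model that counts which word follows which in a tiny made-up corpus and then echoes those patterns. Real generative AI uses large neural networks trained over vast corpora; the corpus, the `follows` table, and the `generate` helper below are illustrative assumptions, not any vendor's method.

```python
# Toy sketch of "find patterns in human writing, then mimic them".
# Assumption: a tiny hand-made corpus stands in for Internet-scale text.
import random
from collections import defaultdict

corpus = (
    "many people believe in angels . "
    "some people believe in science . "
    "many people believe in angels ."
).split()

# Count which word tends to follow which (the "pattern" being learned).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Mimic the training text by sampling observed continuations."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

random.seed(0)
print(generate("people"))
# Tends to echo "people believe in angels ." because that pattern
# dominates the toy corpus, much as widespread human expressions of
# belief dominate what an LLM sees online.
```

And a second hedged sketch of the RLHF point. The article does not describe any developer's actual pipeline, so this toy reward model is an assumption throughout: it simply encodes a developer preference (rewarding hedged answers, penalizing first-person claims of belief) and shows how a model tuned against such a reward would end up favoring the pre-shaped response.

```python
# Illustrative sketch of RLHF-style preference shaping (hypothetical;
# not any vendor's actual reward model or training loop).

def toy_reward_model(response: str) -> float:
    """Score a candidate response the way human raters might have
    taught a reward model to: this toy preference favors hedged,
    non-committal answers on matters of belief."""
    score = 0.0
    if "as an ai" in response.lower():
        score += 1.0  # raters rewarded disclosing that it is an AI
    if "i believe" in response.lower():
        score -= 1.0  # raters penalized first-person claims of belief
    return score

def pick_response(candidates: list[str]) -> str:
    """A tuned model effectively favors high-reward responses."""
    return max(candidates, key=toy_reward_model)

candidates = [
    "I believe angels are real.",
    "As an AI, I don't hold beliefs, but many people believe in angels.",
]
print(pick_response(candidates))
# The hedged answer wins: the developers' preferences, baked into the
# reward, pre-shape what the AI says about angels.
```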