Creating a moral AI is a complex challenge, both because morality is inherently subjective and because of the limitations of current AI technology. AI systems trained on web data tend to reflect the values of Western, educated, and industrialized nations and can internalize a range of biases. Moreover, AI lacks a genuine grasp of ethical concepts and of the reasoning and emotion involved in moral decision-making. Despite these challenges, the Duke researchers aim to develop an algorithm that can accurately predict human moral judgements.
Key takeaways:
- OpenAI is funding a project at Duke University aimed at developing algorithms that can predict human moral judgements, as part of a larger, three-year, $1 million grant.
- The research is led by Walter Sinnott-Armstrong and Jana Schaich Borg, who have previously explored AI's potential to serve as a 'moral GPS' and developed a 'morally-aligned' algorithm for allocating kidney donations.
- The goal of the OpenAI-funded work is to train algorithms to predict human moral judgements in scenarios involving conflicts in medicine, law, and business (see the sketch after this list for how such a task might be framed).
- Developing such an algorithm is complicated by the inherent subjectivity of morality and by the biases AI internalizes from its training data, which often reflects the values of Western, educated, and industrialized nations.
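Neither the grant announcement nor the researchers have published technical details, but a task like this is commonly framed as supervised text classification: scenario descriptions paired with aggregated human judgement labels. The sketch below is a hypothetical, deliberately simple baseline using scikit-learn; the scenarios, labels, and modelling choices are all invented for illustration and are not the Duke project's actual method.

```python
# Hypothetical sketch: moral-judgement prediction framed as supervised
# text classification. All data and choices here are invented for
# illustration; the Duke project's data and methods are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: scenario descriptions paired with a majority human
# judgement. Real work would require large, demographically diverse
# annotation to mitigate the biases noted above.
scenarios = [
    "A doctor lies to a patient to spare them distress.",
    "A lawyer reports a client's plan to harm someone.",
    "A manager takes credit for an employee's work.",
    "A nurse breaks protocol to save a patient's life.",
]
judgements = ["unacceptable", "acceptable", "unacceptable", "acceptable"]

# Bag-of-words features plus a linear classifier: a minimal baseline,
# not the grant's actual approach.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgements)

print(model.predict(["An executive hides safety data to protect profits."]))
```

Even this toy setup surfaces the concern raised above: the classifier has no ethical understanding of its own and simply reproduces whatever values its annotators held.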