Cook emphasizes the importance of securing data at both the individual and aggregate levels, and the need for HR systems to provide both detailed and summarized views of data. He warns that without proper security measures, AI systems could inadvertently reveal sensitive information about employees. He concludes by advising business leaders to ensure that any AI vendors they work with can effectively secure their data.
Key takeaways:
- New legislation coming into force in New York will hold employers legally accountable for how they deploy AI in their businesses, including ensuring that AI systems are bias-free and operate as specified.
- Generative AI (GenAI) models, which return written answers to typed inputs, need to be trained on company-specific data, potentially involving sensitive employee data, and will be subject to audit and validation.
- While the legal implications and sensitivity of people data may lead CHROs to reduce their use of AI-based systems, it's crucial for organizations to leverage technology to stay competitive, reduce costs, and improve delivery.
- The best way to balance AI system performance against employees' rights over their data is through security: the right infrastructure and security processes, GDPR compliance, and the ability to secure data at both the individual and aggregate levels.
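One common way to protect data at the aggregate level, as the last point suggests, is small-cell suppression: group-level statistics are withheld whenever a group is too small to report without risking the identification of individual employees. The sketch below uses a hypothetical data set and an assumed minimum group size of five; the article does not prescribe a specific technique or threshold.

```python
# Hedged sketch: small-cell suppression for aggregate HR reporting.
# Group statistics are suppressed when the group is smaller than k,
# so that an aggregate figure cannot expose a single employee's data.
from collections import defaultdict

K_THRESHOLD = 5  # assumed minimum group size; real policies vary


def safe_aggregate(records, group_key, value_key, k=K_THRESHOLD):
    """Return the mean of value_key per group, suppressing groups smaller than k."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_key]].append(rec[value_key])

    result = {}
    for group, values in groups.items():
        if len(values) >= k:
            result[group] = sum(values) / len(values)
        else:
            result[group] = None  # suppressed: too few employees to report safely
    return result


# Hypothetical example data
employees = [
    {"dept": "Engineering", "salary": 95000},
    {"dept": "Engineering", "salary": 88000},
    {"dept": "Engineering", "salary": 102000},
    {"dept": "Engineering", "salary": 91000},
    {"dept": "Engineering", "salary": 99000},
    {"dept": "Legal", "salary": 120000},  # only one person: must be suppressed
]

print(safe_aggregate(employees, "dept", "salary"))
# The "Legal" group has a single member, so its average is suppressed (None).
```

The same pattern extends naturally to any aggregate view an HR system exposes: the individual-level records stay behind access controls, while the reporting layer only releases figures that clear the minimum-group-size check.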