Tad Roselund, a senior partner at Boston Consulting Group, says efforts to use AI responsibly are moving "nowhere near as fast as they should be," and that implementing such programs demands significant resources and time. Navrina Singh, the founder of Credo AI, argues that investors need to fund the tools and resources that responsible AI programs require. Despite legislative efforts such as the EU's Artificial Intelligence Act and the Biden Administration's executive order mandating greater transparency, the rapid pace of AI innovation may outstrip current regulations, exposing companies to risks and complications.
Key takeaways:
- Companies are rapidly adopting generative AI technology to boost productivity, but experts warn that efforts to manage the risks of AI are lagging.
- Responsible AI programs should cover governance, data privacy, ethics, and trust and safety, but these programs have not kept pace with AI innovation.
- Investors need to play a larger role in funding the tools and resources for responsible AI programs, since demand for AI governance and risk experts is outpacing supply.
- Despite legislative efforts like the EU's Artificial Intelligence Act and the Biden Administration's executive order demanding transparency from AI developers, AI innovation may advance faster than regulations can ensure companies protect themselves.