The report also discusses AI's achievements and potential, noting that AI systems have surpassed human performance on several benchmarks, and highlights the increasing use of AI in scientific discovery and real-world medical applications. At the same time, it points to several challenges and concerns, including the lack of robust and standardized evaluations for AI, the ease of creating political deepfakes, and a rising number of incidents involving the misuse of AI.
Key takeaways:
- The Stanford Institute for Human-Centered Artificial Intelligence (HAI) 2024 AI Index report discusses the rise of multimodal foundation models, increasing investment in generative AI, an influx of regulations, and shifting opinions on AI around the globe.
- U.S. tech companies dominate the AI landscape, with Google alone releasing 18 foundation models in 2023. Private industry accounted for 72% of the foundation models released that year.
- The cost of training state-of-the-art AI models has reached unprecedented levels, one reason academia and governments have been edged out of AI development. Google’s Gemini Ultra, for instance, required an estimated $191 million worth of compute to train.
- AI systems have surpassed human performance on several benchmarks and are increasingly being used for real-world medical purposes. However, significant challenges remain, including a lack of robust and standardized evaluations for LLM responsibility and issues surrounding transparency and security.