The author also discusses the issues plaguing LLMs, such as toxic content generation and hallucination, the copyright debates surrounding AI, how LLMs are evaluated, and how companies generate revenue from generative AI. The article also touches on the creation of fake images and videos, companies closing off free API access, and the use of RLHF and DPO in LLMs. The author concludes with predictions for 2024, including the transformation of STEM research by LLMs, the development of custom AI chips, and the rise of DPO in open-source models.
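Since DPO comes up both in the review and in the 2024 predictions, a minimal sketch of its objective may help. This is not code from the article; the function name, tensor shapes, and the default beta value are illustrative assumptions, and the loss follows the standard DPO formulation over preference pairs.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss for a batch of preference pairs.

    Each argument is a 1-D tensor of summed sequence log-probabilities,
    one value per (prompt, response) pair in the batch.
    """
    # Log-ratios of the trainable policy against the frozen reference model
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps

    # The policy is rewarded for widening the margin between the
    # preferred ("chosen") and dispreferred ("rejected") responses.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```

Unlike RLHF, this objective needs no separate reward model or RL loop, which is one reason it has been attractive for open-source fine-tuning.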
Key takeaways:
- The AI industry in 2023 saw upgrades to existing technologies rather than fundamentally new ones: GPT-3.5 became GPT-4, DALL-E 2 became DALL-E 3, and Stable Diffusion 2.0 became Stable Diffusion XL.
- There is a trend of industry researchers sharing less information in their papers, making the architectures of models like GPT-4 closely guarded secrets.
- Open-source AI had an active year of breakthroughs and advancements, with small, efficient models closing the gap with large proprietary models.
- Issues such as the creation of fake content, copyright debates, and the hallucination problem in LLMs continue to plague the AI industry.