Amodei also addressed concerns about the limitations of LLMs, such as their difficulty with certain tasks. He expressed skepticism that such limitations are fundamental, or even reliably measurable, suggesting that with different training or fine-tuning, LLMs could overcome these challenges. He predicts that the industry will not see diminishing returns for at least the next three to four years.
Key takeaways:
- Anthropic CEO Dario Amodei believes there are no barriers to the continued growth in size and capability of large language models (LLMs).
- Amodei expects the scale of neural nets used to train LLMs to keep increasing, leading to better performance.
- While some researchers suggest that LLMs may struggle with certain tasks regardless of their size, Amodei is skeptical of any fundamental limits on what LLMs can achieve.
- Amodei predicts that the growth of LLMs will not see diminishing returns for at least the next three to four years.