The author predicts that the real breakthrough will come when voice assistants on phones are effectively implemented with large language models (LLMs). They see the potential for this technology to assist older people, like their father, who struggle with current tech ecosystems but could easily interact with a voice interface. They urge readers to explore the technology's full potential before dismissing it.
Key takeaways:
- Even if technological advancement hits a wall, there are still many applications enabled by it that have yet to be developed.
- Multi-modal models, especially those that can describe images, have promising use cases.
- The author is looking forward to having GPT-4-level quality on their local machine, open source, so it can't be taken away.
- There is a prediction that large language models (LLMs) will be a critical part of the first effective voice assistant, which could be particularly beneficial for older people who struggle with current technology interfaces.