The article further explores the capabilities of LLMs and generative AI, which are trained on a large portion of the internet and can generate well-formed sentences. These models can "remember" context spanning tens of thousands of words, which makes them appear surprisingly smart at question-answering. As an example, the author describes how his company uses LLMs to condense long medical notes into shorter, more comprehensible instructions for patients. Despite these advances, the author maintains that today's AI is still just fancy math, and that the rise of self-aware machines remains decades away.
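The article itself includes no code, but a minimal sketch of that kind of note-simplification step might look like the following, assuming access to the OpenAI Python SDK and a chat-completion model; the author's actual tooling, model, and prompts are not specified.

```python
# Minimal sketch of LLM-based note simplification. Assumes the OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY environment
# variable. The author's actual stack and prompts are not specified.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def simplify_medical_note(note: str) -> str:
    """Condense a clinical note into plain-language patient instructions."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the following medical note as short, "
                    "plain-language instructions a patient can follow."
                ),
            },
            {"role": "user", "content": note},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    note = "Pt to continue metformin 500mg BID; recheck HbA1c in 3 mo."
    print(simplify_medical_note(note))
```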
Key takeaways:
- The rise of deep learning and convolutional neural networks in 2012 revolutionized object recognition in photos, with applications ranging from Facebook's auto-tagging to organizing photo libraries by content (a minimal classification sketch follows this list).
- Large language models (LLMs) and generative AI, such as GPT-4, are trained on a vast portion of the internet, which makes them appear surprisingly smart at question-answering and capable of generating human-like responses.
- Generative AI is beginning to use context to shape its output, mimicking human communication more closely. It remains fundamentally mathematical, however, and possesses neither self-awareness nor emotion.
- Despite concerns about AI replacing jobs or posing a threat to humanity, the development of self-aware machines is still far off and would require solving complex problems that few researchers have even begun to address.
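To make the first takeaway concrete, here is a minimal sketch of object recognition with a pretrained convolutional network. The article names no specific model or library; the choice of torchvision's ResNet-50 here is an assumption for illustration.

```python
# Illustrative only: classify an image with a pretrained CNN using
# torchvision (pip install torch torchvision). The article names no
# specific model or library; ResNet-50 here is an assumption.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# Preprocessing pipeline (resize, crop, normalize) matching the model.
preprocess = weights.transforms()


def classify(image):
    """Return the predicted category name for a PIL image."""
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(batch)
    label_idx = logits.argmax(dim=1).item()
    return weights.meta["categories"][label_idx]
```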