The author goes on to explain how Cody works: it supplies the LLM with the necessary context, such as the relevant code, before asking it to answer a question. The author calls this process "cheating," and it is what makes LLMs effective at answering code-related questions; it depends on a good code search engine backed by comprehensive code intelligence. The article concludes on an optimistic note about the potential of AI and LLMs, despite the likelihood of failures along the way.
Key takeaways:
- The author suggests that to effectively use an LLM-based product like Cody, users need to build a mental model of how it works, much like how people have learned to use Google.
- LLMs are compared to book-smart Harvard graduates who can answer many questions well but cannot answer ones that depend on information they have never seen. To work around this, users need to include the necessary information in their questions.
- Cody "cheats" by rephrasing the user's question to include their code, giving the LLM the context it needs to answer. This requires a good code search engine that can quickly and comprehensively return, for example, every instance of a function.
- Sourcegraph is well positioned to build amazing things by combining AI with its code intelligence and search, since it can feed that context into a smart LLM. Cody is essentially a power user of Sourcegraph: it can run multiple searches and read multiple code files in an instant, giving users an answer or the right code within seconds.
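The "cheating" loop described above can be sketched in a few lines: search the codebase, fold the matching snippets into the question, then send the enriched prompt to the model. This is a minimal illustration, not Cody's actual implementation; the function names (`search_code`, `build_prompt`, `ask_llm`) and the toy in-memory codebase are all hypothetical stand-ins.

```python
# Hypothetical sketch of context injection ("cheating"): retrieve relevant
# code via search, then rephrase the user's question to include it before
# asking the LLM. Names and data here are illustrative, not Cody's real API.

def search_code(codebase: dict, query: str) -> list:
    """Naive stand-in for a code search engine: return files mentioning the query."""
    return [(path, src) for path, src in codebase.items() if query in src]

def build_prompt(question: str, snippets: list) -> str:
    """Rephrase the question so it carries code the LLM has never seen."""
    context = "\n\n".join(f"# {path}\n{src}" for path, src in snippets)
    return f"Given this code:\n\n{context}\n\nAnswer this question: {question}"

def ask_llm(prompt: str) -> str:
    """Stub for the model call; a real system would send the prompt to an LLM."""
    return f"[answer grounded in {len(prompt)} chars of prompt context]"

# Toy "repository" standing in for a real codebase.
codebase = {
    "auth/login.py": "def check_password(user, pw): ...",
    "auth/tokens.py": "def issue_token(user): ...",
    "billing/invoice.py": "def render_invoice(order): ...",
}

snippets = search_code(codebase, "check_password")
answer = ask_llm(build_prompt("What does check_password do?", snippets))
```

The quality of the final answer hinges on the retrieval step: if `search_code` misses the relevant file, no amount of model intelligence can recover it, which is why the author stresses fast, comprehensive code search.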