The author suggests that these issues can be mitigated by supplementing generative AI with other transparent models that reason based on industry-specific knowledge. For instance, an AI legal assistant could consult industry-accepted legal documents and reason like a lawyer, producing ready-to-use legal memos while leveraging ChatGPT's conversational fluency. This would allow the AI to surpass the expertise of the average human lawyer, while any errors could be easily traced and corrected.
Key takeaways:
- Generative AI, like OpenAI's ChatGPT, is impressive but has a fundamental limitation: it cannot reason or think as humans do. Instead, it determines its output by learning statistical relationships between words and predicting the most likely response.
- Generative AI's potential for professional support sets it apart from previous innovations. However, to live up to its hype, it needs to be supported by additional layers of AI that can reason and produce reliable outputs for specific industries.
- Used in a professional setting, such as medicine or law, ChatGPT lacks the relevant context and can make mistakes. Professionals using it have no way to understand why it made a mistake or how it arrived at its output.
- The potential of AI assistants for every industry is significant. By bolstering generative AI models with other transparent models that reason based on their industry’s gold standard of knowledge, AI assistants can surpass the expertise of the average human professional.
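The next-word-prediction behavior described above can be illustrated with a deliberately tiny sketch. This is not how ChatGPT is implemented (large models use neural networks over billions of tokens), but a toy bigram model shows the core idea the takeaways refer to: the output is whatever word most often followed the input word in the training data, with no reasoning involved. The corpus and function names here are illustrative assumptions, not anything from the article.

```python
from collections import Counter, defaultdict

# Toy training corpus; a real model would train on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (bigram counts).
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen.

    There is no understanding here: the prediction is purely a
    frequency lookup over the training data.
    """
    followers = bigram_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("the"))   # "cat" -- it follows "the" most often above
print(predict_next("fish"))  # None  -- "fish" never precedes anything here
```

The model is fluent within its data but brittle outside it, which mirrors the problem the takeaways describe: a purely statistical predictor offers no trace of *why* it produced an answer, which is what the proposed transparent, industry-specific reasoning layers are meant to supply.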