The author suggests that software developers should strive for truly explainable AI with testable components. They should insist on AI components that are monitorable, reportable, repeatable, explicable, and reversible. The author also emphasizes the importance of being able to correct any false beliefs held by an LLM. While acknowledging the current challenges, the author expresses hope that these issues can be addressed in the future, but warns against treating AI as a "holy relic" that cannot be questioned or tested.
Key takeaways:
- Current AI systems, particularly Large Language Models (LLMs), lack internal structure that relates meaningfully to their functionality, making them difficult to develop further or to reuse as components.
- LLMs are problematic in the software development lifecycle: they cannot be decomposed or explained, and they are inseparable from their training data.
- LLMs also raise business and legal concerns, including security and privacy risks, unresolved questions of legal ownership, and a high carbon footprint.
- Software developers should aim for truly explainable AI with testable components, and any necessary training should be monitorable, reportable, repeatable, explicable, and reversible (see the sketch after this list).
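As a purely illustrative reading of that last takeaway, the sketch below wraps a hypothetical LLM-backed component, `classify_ticket`, in ordinary unit tests. The function, its `seed` parameter, and the metadata fields are assumptions for the sake of the example, not anything described in the article; the point is only that properties such as "repeatable" and "reportable" can be expressed as checks a build pipeline can run.

```python
# Sketch of what "testable AI components" could look like in practice.
# `classify_ticket`, its `seed`/`temperature` parameters, and the metadata
# fields are hypothetical; they stand in for any LLM-backed component whose
# behaviour a team wants to pin down.

def classify_ticket(text: str, *, seed: int = 0, temperature: float = 0.0) -> dict:
    """Hypothetical LLM-backed classifier.

    Returns the label plus enough metadata (model version, prompt hash, seed)
    to make each call reportable and repeatable.
    """
    # The real model call is stubbed out for this sketch.
    return {"label": "billing", "model": "v1.2", "seed": seed, "prompt_hash": "abc123"}


def test_repeatable():
    # Repeatable: the same input with the same seed yields the same output.
    first = classify_ticket("I was charged twice", seed=42)
    second = classify_ticket("I was charged twice", seed=42)
    assert first == second


def test_reportable():
    # Reportable/monitorable: every call records which model, prompt, and seed
    # produced the answer, so results can be audited later.
    result = classify_ticket("I was charged twice", seed=42)
    assert {"label", "model", "seed", "prompt_hash"} <= result.keys()
```

Tests like these could run under pytest alongside any other unit tests, which is what would make such a component testable in the ordinary software-engineering sense.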