Boiten suggests that the current trajectory of AI development, which focuses on increasing training data and computational effort, is a dead end: it offers only modest improvements in plausibility without enhancing reliability. He advocates a shift towards compositional approaches and hybrids of symbolic and intuition-based AI, which could generate explicit knowledge models or confidence levels. Such systems could be integrated, with limited scope, into larger systems where their outputs can be managed, or deployed in contexts such as weather prediction where stochastic predictions are expected.
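The integration pattern described above can be sketched in code. This is a minimal, hypothetical illustration (the `Prediction` type, `gated` wrapper, and `toy_model` are all assumptions, not from the source): an AI component reports a confidence level, and a wrapper only accepts its output when that confidence clears a threshold, otherwise falling back to a managed default.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    label: str
    confidence: float  # in [0, 1], reported by the model (assumed capability)

def gated(model: Callable[[str], Prediction],
          threshold: float = 0.9,
          fallback: str = "defer-to-human") -> Callable[[str], str]:
    """Wrap a stochastic model so low-confidence outputs are managed
    by the surrounding system rather than passed through unchecked."""
    def wrapped(x: str) -> str:
        p = model(x)
        return p.label if p.confidence >= threshold else fallback
    return wrapped

# Toy stand-in model, purely for illustration.
def toy_model(x: str) -> Prediction:
    return Prediction(label=x.upper(),
                      confidence=0.95 if len(x) > 3 else 0.4)

classify = gated(toy_model)
print(classify("hello"))  # high confidence: model output accepted
print(classify("hi"))     # low confidence: managed fallback used
```

The point of the sketch is that the larger system, not the AI component, decides what happens with low-confidence outputs; the component's scope stays limited and its failures stay manageable.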
Key takeaways:
- Current AI systems, particularly those based on large neural networks, are unmanageable and unsuitable for serious applications due to their lack of transparency, accountability, and explainability.
- The emergent behavior of AI systems contradicts the principles of compositionality in software engineering, making them difficult to develop, verify, and reuse as components.
- Verification of AI systems is challenging due to their large input and state spaces, stochastic nature, and lack of intermediate models, leading to reliance on whole-system testing, which cannot adequately cover such spaces.
- Faults in AI systems are hard to predict and fix, and retraining does not guarantee error correction, making them unreliable for critical applications.
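A back-of-envelope calculation illustrates the "large input spaces" point above. Assuming a deliberately tiny input format (a 28x28 8-bit grayscale image, an assumption chosen for illustration), the number of distinct inputs is already astronomically beyond what whole-system testing could ever cover:

```python
import math

# A small 28x28 image with 256 grey levels per pixel (illustrative assumption).
pixels = 28 * 28
levels = 256

# levels ** pixels is far too large for a float, so compute the
# order of magnitude via logarithms instead.
exponent = pixels * math.log10(levels)
print(f"distinct inputs ~ 10^{exponent:.0f}")

# Even at a billion tests per second for the age of the universe
# (~4.35e17 seconds), coverage is a vanishing fraction.
tested = math.log10(1e9) + math.log10(4.35e17)
print(f"fraction exhaustively testable ~ 10^{tested - exponent:.0f}")
```

For realistic inputs (larger images, text, sensor streams) the gap is far worse, which is why testing can only ever sample a negligible sliver of behavior.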