Does current AI represent a dead end?

Dec 27, 2024 - bcs.org
Eerke Boiten, a Professor of Cyber Security, argues that current AI systems, particularly those based on large neural networks like LLMs, are unsuitable for serious applications due to their unmanageable nature and lack of accountability. He highlights that these systems lack internal structures that relate meaningfully to their functionality, making them difficult to develop, verify, or reuse as components. The emergent behavior of AI systems, combined with their stochastic nature, poses significant challenges for verification and fault correction, as they lack intermediate models and explicit knowledge representations. This results in a reliance on post-hoc verification, which is inadequate due to the vast input and state spaces of AI systems.
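A back-of-the-envelope calculation makes the scale of that verification problem concrete. The figures below are hypothetical (a small 64x64 RGB input and a generous testing rate; neither comes from the article), but they show why exhaustive post-hoc testing of such input spaces is out of reach:

    # Hypothetical illustration: the raw input space of even a small image
    # classifier dwarfs any feasible test suite.
    pixels = 64 * 64 * 3                      # 12,288 eight-bit values per input (assumed size)
    distinct_inputs = 256 ** pixels           # number of distinct raw inputs
    budget = 10**9 * 60 * 60 * 24 * 365       # tests per year at an assumed 1e9 tests/second
    print(f"distinct inputs ~ 10^{len(str(distinct_inputs)) - 1}")
    print(f"years to enumerate them ~ 10^{len(str(distinct_inputs // budget)) - 1}")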

Boiten suggests that the current trajectory of AI development, which focuses on increasing training data and computational effort, is a dead end that offers only modest improvements in plausibility without enhancing reliability. He advocates for a shift towards compositional approaches and hybrids between symbolic and intuition-based AI, which could generate explicit knowledge models or confidence levels. Such systems could be integrated into larger systems with limited scopes where their outputs can be managed, or in contexts like weather prediction where stochastic predictions are expected.
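One way to picture that kind of integration is a component whose learned output is accepted only when it reports sufficient confidence, with an explicit rule-based path or human escalation otherwise. The sketch below is a hypothetical illustration of this pattern, not code from the article; the hybrid_decision interface, the Prediction type and the 0.9 threshold are assumptions:

    # Minimal sketch of a confidence-gated hybrid component (hypothetical API).
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Prediction:
        label: str
        confidence: float  # model-reported confidence in [0, 1]

    def hybrid_decision(
        x: str,
        model: Callable[[str], Prediction],    # intuition-based (learned) component
        rules: Callable[[str], Optional[str]], # symbolic/explicit fallback
        threshold: float = 0.9,                # assumed acceptance threshold
    ) -> str:
        pred = model(x)
        if pred.confidence >= threshold:
            return pred.label          # learned output accepted within a limited scope
        fallback = rules(x)
        if fallback is not None:
            return fallback            # explicit knowledge takes over
        return "defer-to-human"        # neither path is confident: escalate

    # Example usage with trivial stand-ins for the learned and symbolic parts:
    result = hybrid_decision("some input",
                             lambda s: Prediction("spam", 0.42),
                             lambda s: "not-spam")

The point of the pattern is that the surrounding system, not the neural network, decides when a stochastic output is good enough to act on.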

Key takeaways:

  • Current AI systems, particularly those based on large neural networks, are unmanageable and unsuitable for serious applications due to their lack of transparency, accountability, and explainability.
  • The emergent behavior of AI systems contradicts the principles of compositionality in software engineering, making them difficult to develop, verify, and reuse as components.
  • Verification of AI systems is challenging due to their large input and state spaces, stochastic nature, and lack of intermediate models, leaving only whole-system testing after the fact, which is inadequate.
  • Faults in AI systems are hard to predict and fix, and retraining does not guarantee error correction, making them unreliable for critical applications.