The author also highlights the work of various researchers and their contributions to the field. For instance, Judea Pearl's work on Bayesian networks is credited with rebuilding AI on a new foundation of statistical reasoning. The article also discusses the development of natural language processing, noting IBM's shift away from encoding linguistic knowledge in explicit rules toward training models automatically. The article concludes with a discussion of the return of neural networks, led by researchers like Geoffrey Hinton, and their application to tasks such as digit recognition.
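As a rough illustration of the kind of statistical reasoning Bayesian networks enable (this toy example and its probabilities are not from the article), the snippet below builds a two-node network, Rain → WetGrass, and inverts it with Bayes' rule to infer the probability of rain given wet grass:

```python
# Toy Bayesian-network-style inference (illustrative only, not from the article):
# a two-node network, Rain -> WetGrass, with hand-chosen probabilities.
# We compute P(Rain | WetGrass) by enumeration and Bayes' rule.

P_rain = 0.2                                 # P(Rain = True)
P_wet_given_rain = {True: 0.9, False: 0.1}   # P(WetGrass = True | Rain)

# Joint probabilities P(Rain = r, WetGrass = True) for both values of Rain
joint = {r: (P_rain if r else 1 - P_rain) * P_wet_given_rain[r] for r in (True, False)}

# Marginalize to get P(WetGrass = True), then condition on the evidence
p_wet = sum(joint.values())
p_rain_given_wet = joint[True] / p_wet

print(f"P(Rain | WetGrass) = {p_rain_given_wet:.3f}")  # ~0.692
```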
Key takeaways:
- Ted Chiang, a science fiction writer, argued that the term "artificial intelligence" is a misnomer that has caused confusion, suggesting "applied statistics" would have been more appropriate.
- AI has not been statistical throughout its history: the field was founded in the 1950s, and the dominant approaches from the 1960s through the 1980s had no connection to statistics or probability.
- AI research later shifted toward probabilistic methods, a shift that was not initially driven by neural networks; when neural networks did return, the work was more likely to be branded as machine learning than as AI.
- Most systems currently branded as AI are built around training simulated neural networks on large datasets, continuing the big-data approach, although the mathematical nature of the models differs from that of earlier statistical methods (see the sketch below).
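As a minimal sketch of the "train a simulated neural network on data" approach mentioned above (this example is mine, not the article's; it assumes scikit-learn is installed and uses its small built-in digits dataset rather than the large datasets modern systems rely on):

```python
# Train a small multilayer perceptron on 8x8 digit images.
# The model's behaviour is fit from examples rather than encoded in explicit rules.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # flattened 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                    # weights learned from training data

print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```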