AI ‘breakthrough’: neural net has human-like ability to generalize language

Oct 25, 2023 - nature.com
Scientists have developed a neural network that can generalize language in a human-like manner, a key aspect of human cognition. The AI system performs as well as humans in integrating new words into an existing vocabulary and using them in new contexts. This is a significant advancement over chatbot models like ChatGPT, which, despite their conversational abilities, perform poorly on such tasks. The study, published in Nature, could lead to AI systems that interact with people more naturally.

The researchers trained the neural network on a pseudo-language and a series of tasks that required applying abstract rules to newly learned words. The network learned from its mistakes and even reproduced the error patterns seen in the human test results, whereas GPT-4 struggled with the same task. The study suggests that instilling this kind of systematicity in neural networks could make them more efficient learners, reducing the amount of data needed for training and minimizing inaccuracies.
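
To make the task concrete, here is a minimal sketch in Python of the kind of pseudo-language test described above. The specific nonsense words, rules, and outputs below are illustrative assumptions, not the study's actual grammar, materials, or training code:

# A minimal, hypothetical sketch of a compositional pseudo-language test.
# All word names, rules, and outputs here are invented for illustration;
# they are not the study's actual grammar or training procedure.

PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}  # nonsense word -> output symbol

def interpret(phrase: str) -> list[str]:
    """Ground-truth interpreter for the toy grammar (hypothetical rules):
       "<prim>"           -> [symbol]
       "<prim> fep"       -> [symbol, symbol, symbol]   # 'fep' = repeat three times
       "<a> blicket <b>"  -> [a, b, a]                  # 'blicket' = surround pattern
    """
    words = phrase.split()
    if len(words) == 1:
        return [PRIMITIVES[words[0]]]
    if len(words) == 2 and words[1] == "fep":
        return [PRIMITIVES[words[0]]] * 3
    if len(words) == 3 and words[1] == "blicket":
        a, b = PRIMITIVES[words[0]], PRIMITIVES[words[2]]
        return [a, b, a]
    raise ValueError(f"unparseable phrase: {phrase!r}")

# Phrases shown during learning: all primitives, but only some combinations.
train_phrases = ["dax", "wif", "lug", "dax fep", "wif fep", "dax blicket wif"]

# Held-out combinations: the test is whether a learner that has induced the
# abstract rules can interpret these without ever having seen them.
held_out_phrases = ["lug fep", "wif blicket lug"]

print("Training phrases:")
for p in train_phrases:
    print(" ", p, "->", interpret(p))

print("Held-out phrases (compositional generalization test):")
for p in held_out_phrases:
    print(" ", p, "->", interpret(p))

The point of such a setup is that a learner which has induced the abstract rules from the training phrases can interpret held-out combinations it has never encountered; the paper reports that both people and the meta-trained network manage this, while standard chatbot models often do not.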

Key takeaways:

  • Scientists have developed a neural network that can make generalizations about language, a key aspect of human cognition, performing as well as humans at integrating new words into existing vocabulary and using them in new contexts.
  • The AI model underlying the chatbot ChatGPT performed significantly worse on the same task, despite its ability to converse in a human-like manner, highlighting the gaps and inconsistencies in large language models.
  • The study involved testing 25 people on how well they applied newly learned words from a pseudo-language to new situations. The neural network was trained on similar tasks, learning from its mistakes and reproducing the error patterns observed in the human test results.
  • The research could make neural networks more efficient learners, reducing the large amount of data needed to train systems like ChatGPT and minimizing 'hallucination', where AI perceives non-existent patterns and creates inaccurate outputs.