Why everyone seems to disagree on how to define Artificial General Intelligence

Oct 18, 2023 - fastcompany.com
The TED AI conference in San Francisco featured discussions about Artificial General Intelligence (AGI), but no consensus emerged on its definition or timeline. Some believe AGI refers to systems that can learn any intellectual task humans can perform, while others say it refers to systems that can learn completely new tasks without explicit instructions. The definition matters because it could affect how quickly AI companies focus on building safety features into their models. Meanwhile, Stanford's Institute for Human-Centered Artificial Intelligence (HAI) released its Foundation Model Transparency Index (FMTI), grading companies on their disclosure of 100 different aspects of their foundation models.

DeepMind co-founder Mustafa Suleyman's book, "The Coming Wave," criticizes Silicon Valley's "naive optimism" toward AI, which ignores the technology's potential ill effects. Investor Marc Andreessen's recent piece, "The Techno-Optimist Manifesto," is seen as an example of this optimism: Andreessen argues that any deceleration of AI will cost lives and that the precautionary principle is deeply immoral. Critics note, however, that Andreessen's piece does not mention "unintended consequences," "global warming," or "climate change."

Key takeaways:

  • At the TED AI conference in San Francisco, there was little consensus on when Artificial General Intelligence (AGI) systems will arrive and how they should be defined. Some believe AGI refers to systems that can learn any intellectual task humans can perform, while others say it refers to systems that can learn completely new tasks without explicit instructions or examples.
  • Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) has released its inaugural Foundation Model Transparency Index (FMTI), which grades companies on their disclosure of 100 different aspects of their foundation models. The index found substantial room for improvement in transparency, with Meta being the only company that scored higher than 50%.
  • DeepMind co-founder Mustafa Suleyman's book, "The Coming Wave," criticizes the "naive optimist" in Silicon Valley who ignores the potential negative effects of new technology and pushes forward without considering safeguards. This criticism appears to be directed at investors like Marc Andreessen, who recently published a piece strongly advocating for the acceleration of AI development.
  • There is concern that the rush to develop AGI and other AI technologies could lead to a lack of safety features and potential misuse. The economic incentive to build bigger, more performant models currently overwhelms the idea of developing AI in slower, safer ways.