Meanwhile, AI skeptic Gary Marcus continues to critique the current trajectory of generative AI, arguing that large language models (LLMs) are fundamentally flawed and unlikely to fulfill Silicon Valley's grand promises. He advocates instead for a neurosymbolic approach, which he believes is a more plausible path to human-level intelligence than LLMs alone. Marcus also warns of privacy risks: as generative AI's limitations become apparent, companies may turn to monetizing user data. Despite the hype, he sees the technology's practical applications as limited, chiefly to areas where occasional errors are tolerable, such as coding and brainstorming.
Key takeaways:
- The New York Times has signed a deal with Amazon to use its content for training AI models, marking its first generative AI licensing agreement.
- Gary Marcus, a prominent skeptic of generative AI, criticizes current models like ChatGPT, arguing they are fundamentally flawed and unlikely to prove transformative.
- Marcus advocates for neurosymbolic AI, an approach that combines neural networks with explicit symbolic reasoning, as a better path to achieving human-level intelligence.
- Marcus warns that companies like OpenAI may resort to monetizing user data due to the limitations of generative AI, raising concerns about privacy and surveillance.