Numenta's approach is based on the concept of sparse computing, which mimics how the brain forms connections between neurons. The startup applies its "secret sauce" to general-purpose CPUs to unlock the efficiency gains of sparse computing in AI models, and it delivers its NuPIC service as Docker containers that can run on a company's own servers. This could repurpose CPUs already deployed in data centers for AI workloads, an appealing prospect given the lengthy wait times for Nvidia's industry-leading A100 and H100 GPUs.
Key takeaways:
- Numenta has demonstrated that, with its novel approach applied, Intel Xeon CPUs can vastly outperform other CPUs and even GPUs on AI workloads.
- Numenta uses a set of techniques based on the idea of sparse computing, which mimics how the brain forms connections between neurons.
- The startup looks to unlock the efficiency gains of sparse computing in AI models by applying its "secret sauce" to general-purpose CPUs rather than chips built specifically for AI-centric workloads.
- Numenta delivers its NuPIC service using Docker containers, which can run on a company's own servers, potentially repurposing CPUs already deployed in data centers for AI workloads.
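Numenta has not published its CPU kernels, but the core intuition behind sparse computing is straightforward: if most weights in a layer are zero, only the surviving connections need multiply-accumulate work. The sketch below is a rough, self-contained illustration of that arithmetic savings; the matrix sizes and the 90% sparsity figure are assumptions chosen for the example, not Numenta's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense layer: every weight participates in the matrix multiply.
dense_w = rng.standard_normal((512, 512))

# Sparse layer: zero out roughly 90% of the weights. Only the
# remaining ~10% of connections require multiply-accumulate work,
# which is the headroom sparse kernels can exploit on CPUs.
mask = rng.random((512, 512)) < 0.10
sparse_w = dense_w * mask

dense_macs = dense_w.size                       # 262144 multiply-accumulates
sparse_macs = int(np.count_nonzero(sparse_w))   # roughly a tenth of that

print(f"dense MACs:  {dense_macs}")
print(f"sparse MACs: {sparse_macs}")
```

In practice, realizing this speedup requires kernels that skip the zeros rather than multiplying by them, which is where specialized software like Numenta's comes in: a naive dense matrix multiply does the same work regardless of how many weights are zero.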