
Microsoft-backed startup debuts task-optimized enterprise AI models that run on CPUs

Nov 12, 2024 - venturebeat.com
San Francisco-based startup Fastino has emerged from stealth with a promise to deliver 'task-optimized' AI models that offer better performance at a lower cost. The company, which has raised $7 million in pre-seed funding from Insight Partners, Microsoft's venture fund M12, and GitHub CEO Thomas Dohmke, is developing its own suite of enterprise AI models and developer tools. Unlike most large language models (LLMs), Fastino's models are designed to run efficiently on general-purpose CPUs, eliminating the need for high-cost GPUs.

Fastino's models are task-optimized, focusing on specific enterprise functions rather than being generalist models. This approach, according to the company, results in higher accuracy and reliability. The models excel in structuring textual data, supporting retrieval-augmented generation pipelines, task planning and reasoning, and generating JSON responses for function calling. The company is currently working with industry leaders in consumer devices, financial services, and e-commerce, including a major North American device manufacturer for home and automotive applications.
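For readers unfamiliar with JSON-based function calling, the sketch below shows the general shape of such an interaction: rather than free-form prose, the model returns a JSON object naming a function and its arguments, which the application can parse and route directly to code. The function name, schema, and simulated response are illustrative assumptions only; Fastino has not published its API.

```python
import json

# Hypothetical illustration of JSON function calling. The schema, field
# names, and simulated model output below are assumptions for the sake of
# the example, not Fastino's actual interface.

# A function the application exposes to the model.
def create_support_ticket(customer: str, product: str, issue: str) -> str:
    return f"Ticket opened for {customer}: {product} - {issue}"

# Simulated model output: a task-optimized model emits a structured JSON
# object specifying which function to call and with which arguments.
model_output = """
{
  "function": "create_support_ticket",
  "arguments": {
    "customer": "Acme Corp",
    "product": "smart thermostat",
    "issue": "device drops Wi-Fi after firmware update"
  }
}
"""

# The application parses the JSON and dispatches to the matching function.
call = json.loads(model_output)
handlers = {"create_support_ticket": create_support_ticket}
result = handlers[call["function"]](**call["arguments"])
print(result)
```

The appeal of structured output like this is that the calling application never has to scrape values out of free-form text, which is what makes it useful in retrieval-augmented generation and automation pipelines.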

Key takeaways:

  • Fastino, a new enterprise AI startup, has raised $7 million in pre-seed funding and is developing 'task-optimized' models that provide better performance at lower cost.
  • Unlike the offerings of most other large language model (LLM) providers, Fastino's models run well on general-purpose CPUs and do not require high-cost GPUs.
  • Fastino's models are task-optimized rather than being generalist models, focusing on specific tasks to achieve higher accuracy and reliability.
  • Because the models run on CPUs rather than GPU AI accelerators, they could significantly lower enterprise AI costs.