Fastino's models are task-optimized, focusing on specific enterprise functions rather than being generalist models. This approach, according to the company, results in higher accuracy and reliability. The models excel in structuring textual data, supporting retrieval-augmented generation pipelines, task planning and reasoning, and generating JSON responses for function calling. The company is currently working with industry leaders in consumer devices, financial services, and e-commerce, including a major North American device manufacturer for home and automotive applications.
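To make the function-calling use case concrete, here is a generic sketch of the pattern: a model emits a JSON object naming a function and its arguments, which application code then parses and dispatches. The schema and function name below are purely illustrative assumptions, not Fastino's actual output format.

```python
import json

# Hypothetical JSON response a task-optimized model might emit for
# function calling. The "get_order_status" function and its fields are
# invented for illustration only.
model_output = """
{
  "function": "get_order_status",
  "arguments": {
    "order_id": "A-1042",
    "include_shipping": true
  }
}
"""

# The application parses the structured response and can then dispatch
# on the requested function name with the supplied arguments.
call = json.loads(model_output)
print(call["function"])                  # which function to invoke
print(call["arguments"]["order_id"])     # argument extracted from JSON
```

Because the output is structured JSON rather than free text, the calling application can validate and route it programmatically, which is one reason this pattern matters for enterprise pipelines.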
Key takeaways:
- Fastino, a new enterprise AI startup, has raised $7 million in pre-seed funding to develop "task-optimized" models that it says deliver better performance at lower cost.
- Unlike most large language model (LLM) providers, Fastino builds task-optimized rather than generalist models, focusing on specific enterprise tasks to achieve higher accuracy and reliability.
- Fastino's models run well on general-purpose CPUs and do not require costly GPU accelerators, which could significantly lower enterprise AI costs.