
Show HN: Unify – Dynamic LLM Benchmarks and SSO for Multi-Vendor Deployment

Feb 07, 2024 - news.ycombinator.com
Unify has launched its Model Hub, a collection of Large Language Model (LLM) endpoints with live runtime benchmarks plotted over time. The founder of Unify argues that static tabular runtime benchmarks for LLMs are ineffective, and that a time-series perspective should be taken instead. The hub currently hosts 21 models from various providers, including Anyscale, Perplexity AI, Replicate, Together AI, OctoAI, Mistral AI and OpenAI. The benchmarks are run across different regions with varied concurrency and sequence lengths, and the results are plotted over time to highlight the stability and variability of the different endpoints.

The benchmarking code is open source, and the unified API makes it easy to test and deploy different endpoints in production. The Model Hub is a work in progress, with new features being released weekly. A demo video has been recorded to help users get started. As a promotion, the code "HACKERNEWS" can be used to claim $5 per week in free credits, usable across the expanding list of LLM providers.
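The time-series benchmarking idea described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Unify's open-source benchmarking code: `call_endpoint` is a hypothetical stub standing in for a real provider call, and the sampling logic simply shows why timestamped latency samples (rather than a single static number) let you see an endpoint's stability over time.

```python
# Illustrative sketch of a time-series runtime benchmark: repeatedly time an
# endpoint call and keep timestamped samples, so latency can be plotted over
# time instead of being collapsed into one static table entry.
import statistics
import time


def call_endpoint(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM endpoint call."""
    time.sleep(0.001)  # simulate network/inference latency
    return f"echo: {prompt}"


def benchmark(n_samples: int = 5) -> list[tuple[float, float]]:
    """Return (timestamp, latency_seconds) pairs for n_samples calls."""
    samples = []
    for _ in range(n_samples):
        start = time.perf_counter()
        call_endpoint("hello")
        latency = time.perf_counter() - start
        samples.append((time.time(), latency))
    return samples


samples = benchmark()
latencies = [lat for _, lat in samples]
print(f"mean={statistics.mean(latencies):.4f}s "
      f"stdev={statistics.stdev(latencies):.4f}s")
```

Plotting many such samples per endpoint, region, and concurrency level is what turns a static leaderboard into the time-series view the post advocates.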

Key takeaways:

  • Unify has released their Model Hub, a collection of LLM endpoints with live runtime benchmarks plotted across time.
  • The Hub currently features 21 models from various providers, and tests across different regions with varied concurrency and sequence length.
  • The benchmarking code is fully open source and the unified API makes it easy to test and deploy different endpoints in production.
  • Unify is offering a promo code for HN readers to claim $5 per week in free credits, usable across their list of LLM providers.
