First Impressions of Early-Access GPT-4 Fine-Tuning | Supersimple

Apr 01, 2024 - supersimple.io
Supersimple, a data analytics platform, recently gained access to the GPT-4 fine-tuning API and found that a fine-tuned GPT-4 outperforms a fine-tuned GPT-3.5 by more than 50% on their use case. The company uses these models to answer users' natural-language questions about their data, generating reports and the underlying queries. The models were fine-tuned for this domain-specific task, and the company pairs them with a custom high-level domain-specific language (DSL) that is token-efficient and well-aligned with pre-trained language models.
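To make the setup concrete, here is a minimal sketch of what fine-tuning on natural-language-question-to-DSL pairs looks like with OpenAI's fine-tuning API. The file name, system prompt, example DSL syntax, and base model identifier are all illustrative assumptions; Supersimple's actual DSL and training data are not public.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each JSONL line pairs a user question with the DSL program that answers it,
# e.g. (the DSL syntax here is invented for illustration):
# {"messages": [
#   {"role": "system", "content": "Translate questions into the analytics DSL."},
#   {"role": "user", "content": "Weekly signups by plan for the last quarter"},
#   {"role": "assistant", "content": "from signups | last 90d | group week, plan | count"}
# ]}
training_file = client.files.create(
    file=open("nl_to_dsl_train.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4-0613",  # assumed base model for early-access GPT-4 fine-tuning
)
print(job.id, job.status)  # poll the job until status == "succeeded"
```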

Despite the performance improvements, the models still struggle with broad, open-ended queries, and the company observed diminishing returns: the gain from fine-tuning GPT-4 was smaller than the gains seen on earlier models. To work around these limitations, they combine a mix of specialized models, prompts, and heuristics to improve both accuracy and response time. The main drawback of the largest models is higher latency, which is why the fine-tuned GPT-4 (GPT-4-FT) is used only for a subset of questions and for some of the most critical reasoning steps, as sketched below.
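A minimal sketch of that routing idea, assuming hypothetical fine-tuned model IDs and a toy heuristic in place of whatever classifiers Supersimple actually uses: only hard, reasoning-heavy questions go to the slower fine-tuned GPT-4, while the cheaper fine-tuned GPT-3.5 handles the rest.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical model IDs; real fine-tuned IDs look like "ft:<base>:<org>::<id>".
FT_GPT4 = "ft:gpt-4-0613:acme::abc123"      # slower, most capable
FT_GPT35 = "ft:gpt-3.5-turbo:acme::def456"  # fast, handles routine questions

def is_critical(question: str) -> bool:
    """Toy stand-in for the routing heuristic: long or multi-part
    questions are treated as critical reasoning steps."""
    return len(question.split()) > 25 or " and " in question

def answer(question: str) -> str:
    model = FT_GPT4 if is_critical(question) else FT_GPT35
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Translate the question into the analytics DSL."},
            {"role": "user", "content": question},
        ],
        temperature=0,  # deterministic output suits query generation
    )
    return resp.choices[0].message.content
```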

Key takeaways:

  • Supersimple has been using fine-tuned versions of OpenAI's GPT-3.5 and GPT-4 for their data analytics platform, with GPT-4 showing more than 50% improvement in their use case compared to GPT-3.5.
  • The models are used to answer users' natural language questions about their data, generating reports and queries based on the questions asked.
  • Despite the performance improvements, the models still struggle with broad and open-ended queries, and there is a trend of diminishing returns from fine-tuning.
  • Due to higher latency and cost, GPT-4 is only used for a subset of questions and critical reasoning steps, while other models, including GPT-3.5, are used for the rest.