Instead of fine-tuning an LLM as a first approach, try prompt architecting instead | TechCrunch

Sep 18, 2023 - techcrunch.com
Victoria Albrecht, CEO of Springbok AI, discusses the challenges businesses face when implementing large language models (LLMs) like ChatGPT. Rather than building an LLM from scratch or fine-tuning an existing one, she argues that most companies should develop a comprehensive “prompt architecture”: a designed sequence of prompts that guides the LLM’s responses. For most businesses, this approach is more cost-effective and efficient.

The article also distinguishes prompt architecting from fine-tuning: fine-tuning modifies the LLM by retraining it on a new dataset, while prompt architecting leverages the existing model without any modification. For companies with strict data-privacy requirements, such as banks, fine-tuning may still be the more appropriate option.

Key takeaways:

  • Large Language Models (LLMs) can be tailored to specific data or domain information without modifying the model itself or its training data, by using the right sequence of prompts.
  • Building a comprehensive “prompt architecture” is a cost-effective approach to maximize the value extracted from LLMs, enhancing API-powered tools.
  • Fine-tuning, which involves retraining a segment of an LLM with a large new dataset, is a more costly process and is only necessary in a minority of cases.
  • Fine-tuning is most appropriate for companies with stringent data privacy requirements, such as banks, as it imbues the LLM with domain-specific knowledge.
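The prompt-architecture idea described above can be sketched in a few lines: a fixed sequence of prompt templates injects instructions and domain context into an unmodified LLM at request time, instead of retraining the model. This is a minimal illustrative sketch, not the approach Springbok AI ships; the `call_llm` function is a placeholder for whatever LLM API a team actually uses, and the bank-themed system prompt is an invented example.

```python
# Minimal sketch of a "prompt architecture": domain knowledge enters via
# the prompt sequence, not via model weights. All names here are
# illustrative assumptions, not a real product's API.

SYSTEM_PROMPT = (
    "You are an assistant for a retail bank. Answer only from the "
    "provided context. If the context is insufficient, say so."
)

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Assemble the final prompt: instructions, then context, then question."""
    context = "\n---\n".join(context_docs)
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM API call (e.g. an HTTP request
    # to a hosted model). Returned text is stubbed for illustration.
    return f"[model output for a {len(prompt)}-character prompt]"

def answer(question: str, context_docs: list[str]) -> str:
    # The "architecture" is this fixed pipeline; the model itself is untouched.
    return call_llm(build_prompt(question, context_docs))
```

Note the design trade-off the article highlights: everything domain-specific lives in `build_prompt`, so swapping the underlying model or updating the knowledge requires no retraining, only new context documents or templates.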
