GitHub - Ads-cmu/WhatsApp-Llama: Finetune a LLM to speak like you based on your WhatsApp Conversations

Sep 09, 2023 - github.com
The article discusses a repository that fine-tunes the Llama 7b chat model to replicate a user's personal WhatsApp texting style. Given a user's exported WhatsApp conversations, the model can be trained to respond as that user would. The author tested the model by having friends guess which responses were theirs and which were the model's; the model fooled 10% of the friends. The author believes that with more compute, the model's effectiveness could increase to around 40%.

The article also provides a step-by-step guide to setting up the repository and creating a customized dataset: exporting WhatsApp chats, preprocessing the dataset, validating it, configuring the model, and training. The author concludes that this adaptation of the Llama model is a fun way to see how well a language model can mimic personal texting styles, but reminds users to use AI responsibly.
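The preprocessing step above turns raw WhatsApp exports into prompt/response pairs. A minimal sketch of that idea is below; it is not the repository's actual code. The line format assumed here is the Android-style export (`DD/MM/YY, HH:MM - Name: message`), which varies by platform and locale, and the `parse_chat`/`to_examples` helpers are hypothetical names:

```python
import re

# Assumed Android-style export line, e.g. "12/09/23, 10:15 - Alice: hey"
# (iOS exports and other locales use a different timestamp format).
LINE_RE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2} - ([^:]+): (.*)$")

def parse_chat(lines):
    """Yield (speaker, message) tuples, merging multi-line messages."""
    current = None
    for line in lines:
        m = LINE_RE.match(line.rstrip("\n"))
        if m:
            if current:
                yield current
            current = (m.group(1), m.group(2))
        elif current:
            # Lines without a timestamp continue the previous message.
            current = (current[0], current[1] + "\n" + line.rstrip("\n"))
    if current:
        yield current

def to_examples(messages, me):
    """Pair each friend's message with the user's reply as a training example."""
    examples = []
    for (s1, m1), (s2, m2) in zip(messages, messages[1:]):
        if s1 != me and s2 == me:
            examples.append({"prompt": m1, "response": m2})
    return examples
```

A real pipeline would also need to handle media placeholders, consecutive messages from the same sender, and privacy filtering before training.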

Key takeaways:

  • The Llama 7b chat model can be fine-tuned to replicate a personal WhatsApp texting style, using a fork of the `facebookresearch/llama-recipes` repository.
  • The fine-tuned model can learn texting nuances quickly, generating more words and accurately replicating common phrases and emoji usage.
  • A Turing Test with friends showed that the model could fool 10% of them, and some of its responses closely matched the user's own.
  • The repository provides a step-by-step guide to set up and create a customized dataset, including exporting WhatsApp chats, preprocessing the dataset, validating the dataset, configuring the model, and training the model.