Intel neural-chat-7b Model Achieves Top Ranking on LLM Leaderboard!

Dec 03, 2023 - community.intel.com
The Intel neural-chat-7b model has topped the Hugging Face Open LLM Leaderboard for 7-billion-parameter models, with an average score of 59.06. The model, which underpins the NeuralChat chatbot in the Intel® Extension for Transformers, is available in three precisions: half-precision floating-point (FP16), bfloat16, and 4-bit integer (int4). Despite sitting at the lower end of large language model sizes, it achieves accuracy comparable to models 2-3x its size.
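To give a sense of what the int4 variant involves, here is a toy sketch of symmetric 4-bit weight quantization: each float weight is mapped to a signed integer in [-8, 7] with a shared scale, then approximately recovered. This is an illustration of the general technique, not the actual recipe Intel Neural Compressor uses.

```python
# Toy round-trip for 4-bit symmetric quantization (illustrative only; the
# real int4 model uses Intel Neural Compressor's compression pipeline).

def quantize_int4(weights):
    """Map floats to signed 4-bit integers in [-8, 7] with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7.0  # 7 = largest positive int4 value
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int4 codes."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07, -0.21]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)
```

Storing 4-bit codes plus one scale cuts memory roughly 4x versus FP16, at the cost of a small rounding error per weight (at most half the scale here).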

The model is based on the open-source Mistral-7B-v0.1 transformer model from Mistral AI and was fine-tuned using a pipeline in Intel Extension for Transformers. The fine-tuning process used direct preference optimization (DPO) to align the model with human preference data. The model can be deployed on a wide range of compute platforms and is suitable for both academic and commercial use. Intel Extension for Transformers and Intel Neural Compressor are both available as part of Intel's end-to-end AI software suite.
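The DPO objective mentioned above can be sketched in a few lines: for each preference pair, the loss is the negative log-sigmoid of a scaled difference between how much the policy favors the chosen response and how much it favors the rejected one, each measured relative to a frozen reference model. The numbers below are made up for illustration; this is the standard DPO formula, not Intel's specific fine-tuning code.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (chosen ratio - rejected ratio))."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp      # log p_policy/p_ref, chosen
    rejected_ratio = policy_rejected_logp - ref_rejected_logp  # same, rejected
    margin = beta * (chosen_ratio - rejected_ratio)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss falls as the policy favors the chosen response more strongly
# than the reference model does (all values are toy log-probabilities).
loss_aligned = dpo_loss(-10.0, -14.0, -12.0, -12.0)     # policy prefers chosen
loss_misaligned = dpo_loss(-14.0, -10.0, -12.0, -12.0)  # policy prefers rejected
```

Minimizing this loss pushes the policy's preference margin in the human-preferred direction without training a separate reward model, which is what makes DPO attractive as a fine-tuning step.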

Key takeaways:

  • The Intel neural-chat-7b model has achieved the top ranking for 7-billion-parameter models on the Hugging Face Open LLM Leaderboard, with an average score of 59.06.
  • The model is the foundation for the NeuralChat chatbot available within Intel® Extension for Transformers, which is built on Hugging Face Transformers and uses Intel® Neural Compressor for model compression.
  • The model is based on the open-source Mistral-7B-v0.1 transformer model from Mistral AI and was fine-tuned using a pipeline available in Intel Extension for Transformers, applying direct preference optimization (DPO).
  • Intel Extension for Transformers and Intel Neural Compressor are both available, along with a full suite of end-to-end AI software from Intel, and can be used on a wide range of compute platforms.
