
Domain-Focused Models: Code LLMs

Mar 18, 2024 - alexsandu.substack.com
The article discusses various Large Language Models (LLMs) developed for coding tasks, featuring 10 series from 7 companies and university research groups. These include StarCoder 2 from the Big Code Project, Code Llama from Meta AI, DeepSeek-Coder from DeepSeek AI, StableCode 3B from Stability AI, WizardCoder from Microsoft and Hong Kong Baptist University, Magicoder from researchers at the University of Illinois at Urbana-Champaign and Tsinghua University, CodeGen 2.5 from Salesforce AI Research, Phi-1 1.3B from Microsoft, CodeT5+ from Salesforce AI Research, and SantaCoder from the Big Code Project. The article provides detailed information about each model, including their release dates, developers, versions, training data, performance scores, and availability.

The article also highlights the performance of coding LLMs on Python and other programming languages, noting that performance is generally lower for languages other than Python. It identifies opportunities for improvement in coding LLMs, such as improved support for programming languages other than Python, support for natural language interaction in languages other than English, and the creation of more advanced benchmarks that include more complex debugging and coding problems.

Key takeaways:

  • The article provides a comprehensive overview of various coding Large Language Models (LLMs) developed by different companies and research groups, including StarCoder 2, Code Llama, DeepSeek-Coder, StableCode 3B, WizardCoder, Magicoder, CodeGen 2.5, Phi-1 1.3B, CodeT5+, and SantaCoder.
  • These LLMs have been trained on extensive datasets and support multiple programming languages, with varying performance scores on benchmarks like HumanEval and MBPP.
  • The performance of coding LLMs on Python is similar to that of the largest General Purpose LLMs, but their performance on other programming languages is generally lower and varies significantly.
  • There are several opportunities for improvement and expansion in the field of coding LLMs, including better support for programming languages other than Python, support for natural language interaction in languages other than English, and the creation of more advanced benchmarks for complex debugging and coding problems.
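The HumanEval and MBPP benchmarks mentioned above are typically scored with the pass@k metric. As a point of reference (this estimator comes from the original HumanEval benchmark paper, not from the article itself), a minimal sketch of the standard unbiased pass@k computation looks like this:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (HumanEval paper): the probability
    that at least one of k samples drawn without replacement from
    n model generations is correct, given that c of them pass the
    unit tests."""
    if n - c < k:
        # Fewer incorrect samples than k, so any k-sample draw
        # must include at least one correct generation.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 200 generations per problem, 37 pass the tests.
# For k = 1 this reduces to the plain success rate c / n.
print(pass_at_k(200, 37, 1))  # 0.185
```

For k > 1 the combinatorial form avoids the bias that naively resampling k generations would introduce; the numbers in the example are illustrative, not taken from the article.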
