However, there are concerns about the potential risks of such tools. A research team found that engineers using AI coding assistants are more likely to introduce security vulnerabilities into their apps. There are also legal concerns about these tools being trained on code that is copyrighted or under a restrictive license. Despite these risks, Meta has placed minimal restrictions on how developers can deploy Code Llama, as long as they agree not to use the model for malicious purposes.
Key takeaways:
- Meta has open-sourced Code Llama, a machine learning system that can generate and explain code from natural language prompts, similar to GitHub Copilot and Amazon CodeWhisperer.
- Code Llama can complete and debug code across various programming languages, and is available in several versions, including one optimized for Python and another for following instructions.
- Despite potential risks such as generating insecure code or infringing on intellectual property, Meta places minimal restrictions on how developers can deploy Code Llama, as long as they agree not to use the model for malicious purposes.
- Meta believes that open-sourcing AI models, especially those for coding, can facilitate the development of new technologies that improve people's lives, and allows the community to evaluate their capabilities, identify issues, and fix vulnerabilities.