Meta has announced Llama 2, an improved version of its open source language model. The model runs on cloud services such as Microsoft Azure and AWS, but also works on local hardware such as PCs and high-end smartphones.
Llama 2 is available immediately and is free for both research and commercial use. Meta says this second version has been trained on forty percent more data than Llama 1 and provides answers faster. With the large language model, companies and researchers can build generative AI applications. Meta is releasing the model as open source so that developers and researchers can identify errors for Meta to fix.
The language model comes with model weights and starter code to turn it into a conversation tool. Meta is introducing the model together with Microsoft and is making Llama 2 immediately available in Microsoft’s Azure AI catalog. The model also runs on cloud services from, for example, Amazon Web Services and Hugging Face.
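To give an idea of what the released starter code involves, the chat-tuned Llama 2 checkpoints expect a specific prompt markup with `[INST]` and `<<SYS>>` tags, as documented in Meta's reference code. The sketch below builds such a prompt; the function name and the example messages are illustrative, and actually generating text additionally requires downloading the weights (which involves accepting Meta's license).

```python
# Minimal sketch of the chat prompt markup that Llama 2's chat-tuned
# variants expect (based on Meta's llama reference code). The function
# name is our own; special tokens such as BOS are normally added by the
# tokenizer, so only the instruction markup is built here.

def build_llama2_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user message in Llama 2 chat markup."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "Explain what an open source language model is.",
)
print(prompt)
```

A prompt formatted this way would then be tokenized and passed to the model for generation, for example via the Hugging Face `transformers` library.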
Meta also says that the language model can be used on local hardware, such as Windows PCs. In a separate announcement, Qualcomm says that flagship smartphones and PCs with Qualcomm SoCs will be able to run Llama 2 locally from 2024. This means users don’t have to rely on cloud services to use generative AI based on Llama 2, which Qualcomm says is better for security, privacy, reliability, and cost. Qualcomm says the language model will enable “intelligent virtual assistants, productivity applications, content creation tools and entertainment.”
According to Meta, the LLM has been tested for safety by internal and external teams. The company says it will continue this testing and release improved language models based on the results. Meta released Llama 1 in February; that model was intended only for scientific research. The company says it received one hundred thousand requests for access to the first language model.