Meta is going all in on open-source AI. The company is today unveiling LLaMA 2, its first large language model that's available for anyone to use, for free. It hopes that making LLaMA 2 open source might give it the edge over rivals like OpenAI.

Meta is not releasing information about the data set it used to train LLaMA 2 and cannot guarantee that it didn't include copyrighted works or personal data, according to a company research paper shared exclusively with MIT Technology Review. LLaMA 2 also has the same problems that plague all large language models: a propensity to produce falsehoods and offensive language. The idea is that by releasing the model into the wild and letting developers and companies tinker with it, Meta will learn important lessons about how to make its models safer, less biased, and more efficient.

What do you think about it? #ai #generativeai #opensource #llm