Llama 2 could be a “watershed moment,” Matt Bornstein, a partner at venture capital firm Andreessen Horowitz, said on Twitter. The model’s capabilities rival recent versions of OpenAI’s tools, he added.
The move could help spur more competition in the booming AI space — which is already dominated by OpenAI, Microsoft and Google. Smaller companies without the money to pay those AI leaders for access to their algorithms could benefit from Llama 2. At the same time, criminals, governments and other malicious actors could use the tech to create powerful AI tools of their own. Other open source AI models have already been used to create child sexual abuse imagery.
The decision will deepen the divide forming in the tech world over whether to make new AI tech open source or not. Google and OpenAI have rejected full transparency, citing the risks of bad actors using the tech or developing it in ways that increase risks to people. Facebook and a group of start-ups including Hugging Face and Stability AI have said open source is key to making sure the powerful new technology doesn’t further entrench the tech giants and stifle competition. Facebook lacks the cloud software business that Google and Microsoft have, which allows them to integrate AI tools into their existing products and charge money for them.
“Meta has sort of been in the shadow, so this gives Meta a chance to at least be a player,” said Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University. “The irony here is that this was pretty much the model that Google used with its Android operating system when it was trying to play catch-up with Apple’s iOS.”
Like Google, Microsoft and OpenAI, Facebook has invested huge amounts of money in AI over many years. Its AI lab is widely acknowledged as a world leader and is run by Yann LeCun, an outspoken and highly respected AI researcher known as a pioneer of the field. While executives at other companies have warned that AI could become an existential risk to humanity if it surpasses human intelligence, LeCun and other Meta leaders have said such concerns are overblown and risk prompting regulators to clamp down on a technology that could benefit people.
“Open source drives innovation because it enables many more developers to build with new technology,” Meta CEO Mark Zuckerberg said in a Facebook post on Tuesday. “It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues. I believe it would unlock more progress if the ecosystem were more open, which is why we’re open sourcing Llama 2.”
But critics say open-sourced AI models could lead to the technology being misused. Earlier this year, Meta released Llama to a select group of researchers, only for the model to be leaked and later used for applications ranging from drug discovery to sexually explicit chatbots. In June, Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) wrote to Zuckerberg arguing that in the short time generative artificial intelligence applications have been widely available, they have already been misused to produce problematic content, from pornographic deepfakes of real people to malware and phishing campaigns.
“Meta’s choice to distribute LLaMA in such an unrestrained and permissive manner raises important and complicated questions about when and how it is appropriate to openly release sophisticated AI models,” the senators wrote.
Meta said Tuesday its latest AI model has gone through “red-teaming” exercises, where human testers try to get it to make mistakes or produce offensive content, then train it to avoid those kinds of answers. The company also asks potential users to promise not to use it to promote terrorism, create child sex abuse material or discriminate against people.
“If I’m a regulator,” said Chakravorti, the Tufts business dean, “I’m looking at this and I’m wondering: Is the genie being let out of the bottle here?”
Microsoft CEO Satya Nadella also mentioned the partnership to distribute Facebook’s AI through its cloud business during a company event on Tuesday. Nadella announced a version of its Bing chatbot that would allow business customers to ask the bot questions about their company’s internal data and use it more fluidly at work.
Microsoft also announced pricing for some of its AI tools, a development industry analysts had been waiting for to see what the financial impact of AI might be on the company. Microsoft stock jumped nearly 5 percent after the announcement.
Meta, which has dropped out of the ranks of the world’s most valuable tech companies in recent years, is pushing to show that it can be a leader in the generative AI boom surrounding the new crop of chatbots and image generators. In recent months, Zuckerberg and other executives have been touting the company’s investment in AI research and computing infrastructure, as well as new products such as an internal productivity assistant, a generative AI-based advertising product and a new photo-generation tool.
The AI announcements follow months of sluggish financial performance and a litany of challenges facing Meta’s business. New privacy rules from Apple, rising inflation and a post-pandemic slump in e-commerce growth hurt the company’s digital advertising business. Over the past six months, Meta has laid off more than 20,000 workers as part of a larger effort to flatten the workforce and become more efficient. Still, the company’s stock price has risen sharply this year amid its belt-tightening efforts.
Meta has also been vocal in pushing back against scenarios proposed by a rising number of prominent AI leaders, including Elon Musk and Google DeepMind CEO Demis Hassabis, who say the tech is advancing so quickly it might surpass human intelligence within 10 years.
Meta Global Affairs President Nick Clegg has urged regulators not to fear the doomsday scenarios and rush to clamp down on AI models altogether, arguing that some of the potential “existential threats” critics have raised are merely hypothetical and still a long way off. Instead, Clegg has argued that AI should be regulated in a way that keeps the technology open and available.
“No one thinks the kind of models that we’re looking at [with] Llama one or Llama version two are even remotely knocking on the door of these kind of high-capability [AI models] that might require some specialized regulatory licensing treatment,” Clegg told The Washington Post earlier this month.