Senators Criticize Meta's "Seemingly Minimal" Protections Against Fraud and Cybercrime in LLaMA AI Model



Key Highlights:

1. Meta released its advanced AI model, LLaMA, with seemingly little consideration for safeguards against misuse, creating a real risk of fraud, privacy intrusions, and cybercrime.
2. U.S. Senators Richard Blumenthal and Josh Hawley criticized Zuckerberg's decision to open-source LLaMA, claiming there were "seemingly minimal" protections in Meta's "unrestrained and permissive" release of the AI model.
3. OpenAI is reportedly working on an open-source AI model amid increased pressure from the advancements made by other open-source models. Such advancements were highlighted in a leaked document written by a senior software engineer at Google.
4. Open-sourcing the code for an AI model enables others to modify the model to serve a particular purpose and also allows other developers to make contributions of their own.




     United States Senators Richard Blumenthal and Josh Hawley have recently questioned Meta CEO Mark Zuckerberg over the tech giant's artificial intelligence model, LLaMA, claiming that it is potentially dangerous and could be used for criminal activities. In a June 6th letter, the senators criticized Meta for its “unrestrained and permissive” release of the AI model, claiming that there were “seemingly minimal” protections in place to fight against fraud and cybercrime.

     Meta released its advanced AI model, LLaMA, to a limited audience of researchers, but the model was later leaked in full by a user of the image board site 4chan in late February. While the senators acknowledged the benefits of open-source software, they concluded that Meta’s “lack of thorough, public consideration of the ramifications of its foreseeable widespread dissemination” was ultimately a “disservice to the public.”

     The senators expressed their concern that LLaMA could be easily adopted by spammers and cybercriminals to facilitate fraud and other “obscene material.” They contrasted LLaMA with OpenAI’s GPT-4 and Google’s Bard — two closed-source models — to emphasize how easily the former can generate abusive material. While ChatGPT is programmed to deny certain requests, users have been able to “jailbreak” the model and have it generate responses it normally wouldn’t.

     In the letter, the senators asked Zuckerberg a number of questions about LLaMA’s release, including whether any risk assessments were conducted prior to its release, what Meta has done to prevent or mitigate damage since its release, and whether Meta uses its users’ personal data for AI research.

     Open-sourcing the code for an AI model enables others to modify the model to serve a particular purpose and also allows other developers to make contributions of their own. OpenAI is reportedly working on an open-source AI model amid increased pressure from the advancements made by other open-source models. Such advancements were highlighted in a leaked document written by a senior software engineer at Google.

     The senators were dissatisfied with the “seemingly minimal” protections against fraud and cybercrime in Meta’s AI model. They asked for more transparency from Meta in order to ensure that the AI model is not being used for malicious purposes. Open-source AI models can provide great benefits, but they must be released responsibly and with proper safeguards in place.



Continue Reading at Source: Cointelegraph