
NTIA Gives Nod to Unrestricted Open AI Model Access

Government Must Prioritize Risk Evaluation of Dual-Use AI Models

The United States government cautiously endorsed unrestricted access to open artificial intelligence foundation models, while warning that users should be prepared to actively monitor risks.


The National Telecommunications and Information Administration in a Tuesday report said open-weight models can make generative AI accessible to small companies, researchers, nonprofits and individual developers. It recommended that there be no restrictions on access to the open models - at least until evidence emerges that restrictions are warranted.

Open-weight AI models are essentially ready-to-use molds for developers to build applications on. Unlike open-source models, their code is not fully transparent. "Openness of the largest and most powerful AI systems will affect competition, innovation and risks in these revolutionary tools," said NTIA Administrator Alan Davidson.

"At the time of this report, current evidence is not sufficient to definitively determine either that restrictions on such open-weight models are warranted, or that restrictions will never be appropriate in the future," the report says.

Other federal agencies have been vocal about the need for open models. The Federal Trade Commission supports their use. Agency Chair Lina Khan recently said open models allow small players to bring their ideas to market, and that such models could decentralize control and promote healthy competition.

Still, model abuse could pose risks to national security, privacy and civil rights, the NTIA report says. Foundation models can be exploited to spread disinformation, create deepfakes and automate cyberattacks - highlighting their potential to serve as what the government calls "dual-use" technology. Bad actors can also manipulate foundation models to amplify biases in their training data, leading to unfair outcomes in areas where fairness is critical, such as hiring, law enforcement and lending. Such manipulation can cause model owners to lose control over a model's behavior, and the mishandling of personal data can result in privacy breaches.

Nation-state actors could use the technology to develop advanced weapons, such as autonomous drones or cyberwarfare tools, which reportedly is already occurring in the Russia-Ukraine conflict in the form of "killer algorithms" for target selection and "warbot" armies.

In the report, the NTIA advises the government to create a program aimed at gathering and assessing evidence on the risks and benefits of open AI models. It recommends research into the safety aspects of different AI models, support for risk mitigation studies and the development of "risk-specific" indicators to determine when policy changes may be needed.

Evidence collection could include encouraging the industry to set up standards, audits, disclosures and transparency for dual-use foundation models. The process could also include conducting and supporting research on the safety, security and future capabilities of these models.

Evaluating the evidence could include developing benchmarks and definitions for monitoring and potential action in cases of escalating risk, including access restrictions on models and other methods of risk mitigation.

The report comes at a time when AI regulation in the U.S. comprises guidelines rather than stringent rules, in contrast with Europe's recently approved AI Act. The Biden administration has extracted promises of secure and trustworthy development from Silicon Valley heavyweights. An October executive order calls for developers of generative AI foundation models that could pose a "serious risk" to national security, national economic security or national public health to notify the government when they're training the model. Developers must also share the results of all red-team safety tests with the government.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She previously worked at TechCircle, formerly owned by News Corp, as well as at business daily The Economic Times and The New Indian Express.



