Apple Signs White House AI Safeguards

Apple has signed up for the Joe Biden administration’s voluntary AI safeguards that seek to ensure the emerging technology is developed in a safe and secure way, the White House said.

Apple joins 15 other companies that have signed the voluntary AI safeguards over the past year.

The initial group of companies signing onto the programme in July 2023 included Amazon, Anthropic, Google, AI firm Inflection, Meta, Microsoft and OpenAI, developer of the ChatGPT chatbot.

The administration said at the time it had secured “voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology”.

Image credit: US government

Watermarks

The move came amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.

In September 2023 a further eight firms signed the commitments, including Adobe, IBM and AI accelerator chip maker Nvidia.

Companies that are developing these emerging technologies have a responsibility to ensure their products are safe, the White House said.

It said the commitments underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI.

Among the measures companies agree to is the use of watermarks on AI-generated content such as text, images, audio and video, amid concern that deepfake content can be used for fraudulent and other criminal purposes.

Companies also commit to internal and external security testing before the release of their AI systems and publicly reporting their AI systems’ capabilities.

AI safety

In October 2023 the administration released a wide-ranging executive order on AI that, amongst other measures, obliged companies developing the most powerful models to submit regular security reports to the federal government.

The 111-page document built on an AI “Bill of Rights” issued in late 2022 that similarly sought to address some of the technology’s main potential drawbacks while pushing to explore its benefits.

Last November the UK hosted the first AI Safety Summit which issued an international declaration that “for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”.

Tens of thousands of Hollywood video game actors began a strike on Friday over concerns that generative AI could be used to put them out of work.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
