The UK government is failing to protect workers against artificial intelligence (AI), which is already being used to make life-changing decisions across the economy, the Trades Union Congress (TUC) warned on Tuesday.

The TUC singled out the government’s Data Protection and Digital Information Bill, which reached its second reading in parliament on Monday, saying it would dilute important protections.

One of the bill’s provisions would narrow current restrictions on the use of automated decision-making without meaningful human involvement, while another could limit the need for employers to give staff a say in the introduction of new technologies through impact assessments, the TUC said.

This, combined with a “vague and flimsy” government white paper on the technology published last month, raises concerns that “guard rails” in the workplace are becoming nonexistent, the TUC said.


Hands-off approach

The TUC said the government’s hands-off approach to AI meant there was “no additional capacity or resource to cope with rising demand”.

The TUC said it had found AI being used at every stage of the employment process, from the initial sifting of CVs through team allocation, the allocation of work and disciplinary measures, to termination.

AI could “set unrealistic targets that then result in workers being put in dangerous situations that impact negatively on both their physical health and mental wellbeing”, the body warned.

The government responded that the TUC’s assessment was “wrong”, arguing that AI was “set to drive growth and create new highly paid jobs throughout the UK, while allowing us to carry out our existing jobs more efficiently and safely”.

‘Safely and responsibly’

The government said it was “working with businesses and regulators to ensure AI is used safely and responsibly in business settings” and that the Data Protection and Digital Information Bill included “strong safeguards” employers would be required to implement.

The AI white paper proposed spreading regulation of the technology across different existing bodies rather than creating a new dedicated agency or new laws.

The approach contrasts with that of the EU, which is working on far-reaching regulation called the AI Act.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDnet and other leading publications.
