Targeting AI: Responsible AI means regulation, ethical use

AI users should adhere to ethical standards, particularly with generative AI chatbots, but the promise of the latest technology is too great not to embrace.

Generative AI is setting off alarms.

While AI software has taken full root in the business world and consumers are having fun with the swarm of new chatbots like ChatGPT and Google Bard, many AI experts are worried about generative AI.

One of them is Michael Bennett, director of education curriculum and business lead for responsible AI at Northeastern University in Boston.

A Harvard-educated lawyer who has litigated AI copyright cases, Bennett is at once a critic of AI and an advocate for its responsible use. He helped craft New York City's automated employment decision tools legislation, Local Law 144, which took effect July 5.

Bennett pointed out that AI algorithms are commonly used for decisions about people's employment; finances, such as mortgage applications; and even bail in court cases -- decisions that often affect marginalized communities.

"AI is sufficiently powerful and sufficiently black boxed that it's causing concern," he said in an interview on the TechTarget News podcast, "Targeting AI," referring to the locked-up, unexplainable algorithms that power many AI systems.


"Even some of the most adept experts in the space are struggling to explain how something like ChatGPT does what it does. We, at the same time, are seeing various types of AI approaches folded into various sensitive processes in society," Bennett continued.

"You can see how troubling this situation is for many people. And so responsible artificial intelligence, ultimately, is intended to be a kind of solution for those types of concerns," he said.

Local Law 144 aims to address some of these concerns by regulating the use of AI in hiring. It prohibits employers from using automated employment decision tools unless the tools undergo an annual bias audit.

Bennett also delves into a host of other AI hot topics he's involved in, including educating lawmakers about the risks of AI and advising business clients about safe and effective ways to use it. He also discusses how the technology is intersecting with the arts, including painting, music, film and photography.

Movie and TV screenwriters, as well as actors, are on strike against the Hollywood studios and major streaming platforms. One of their most immediate grievances concerns generative AI, he noted.

AI is sufficiently powerful and sufficiently black boxed that it's causing concern.
Michael Bennett, director of education curriculum and business lead for responsible AI, Northeastern University

Professional writers "are really concerned that they are increasingly in jeopardy as AI comes onto the scene to compete against them," said Bennett, a former arts and culture commissioner of Tempe, Ariz.

"And that's actually a part of the negotiations right now: what to do about the fact that AI is able to generate scripts now, whereas it was only humans that could do this, say, five years ago," he added.

Bennett also emphasized his institute's commitment to the increasingly popular "human-in-the-loop" approach of putting people in charge of supervising AI systems.

"We don't endorse AIs driving all decision-making on their own," he said.

Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas.

Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems.

Together, they host the "Targeting AI" podcast series.
