Marcelius Braxton’s Post


Director of Center for Social Change and Belonging, Affiliate Professor, and DEI Consultant

See below for Dr. Wurth’s ideas for proactively reducing bias in AI. Because AI is everywhere, addressing bias in AI is an emerging issue, especially within DEI work.

Renee Wurth, PhD, RD

Research Leader | Advisor | Board member | Embedding data with empathy

AI is EVERYWHERE, being built faster than we can catch the harmful biases it creates. So how do we *proactively* reduce the bias in AI?

I am teaching a course on responsible AI and was delighted that nearly every group project noted that the AI tool it evaluated was built on biased datasets. The data was often not even representative of the population the tool was intended to serve. That’s a BIG problem.

So what can be done? Most data that feeds AI algorithms is what we call secondary data: scraped or pulled from public sources, but not intentionally sampled for that AI use case. The unsexy work of fixing this is being intentional about collecting primary data on populations who are often not captured.

I’ve had the opportunity to work with the nonprofit Buddy System MIA, and it surprised me that they have never been valued for their excellent data collection work among the most vulnerable and often omitted communities. One way to reduce bias in AI is to elevate groups that are already collecting high-quality data as a means to serve and support underrepresented communities.

Let’s be more proactive in combating bias. If you’re working in AI and looking to reduce bias, or are a nonprofit wishing to understand how to showcase or improve the value of your data, definitely reach out (here or at https://lnkd.in/dd5qkEJ6) as I’d love to collaborate.
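To make the representativeness point concrete, here is a minimal sketch of the kind of check a team could run before training on a scraped dataset: compare each group’s share of the data against a reference population (e.g., census shares) and flag shortfalls. This is an illustration only; the function name, group labels, shares, and tolerance are all hypothetical.

```python
# Minimal sketch: flag demographic groups that are underrepresented in a
# dataset relative to a reference population, before the data feeds a model.
# All labels and numbers below are hypothetical examples.
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
    """Return groups whose share of `records` falls short of the
    reference population share by more than `tolerance`."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if ref_share - observed > tolerance:
            gaps[group] = {"observed": round(observed, 3),
                           "expected": ref_share}
    return gaps

# Hypothetical usage: census-style reference shares vs. a scraped dataset.
reference = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
dataset = ([{"group": "group_a"}] * 74
           + [{"group": "group_b"}] * 24
           + [{"group": "group_c"}] * 2)
print(representation_gaps(dataset, "group", reference))
# -> {'group_c': {'observed': 0.02, 'expected': 0.15}}
```

A check like this doesn’t fix bias on its own, but it makes the gap visible and points to where intentional primary data collection is needed.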
