Addressing bias in data analytics

A biased machine learning model can harm the very people it is designed to help, writes data scientist Amy Newman. "In addition, ML algorithms are implemented across more sectors than can be easily quantified."

Building fair machine learning models is critical to equitable policy research. At Mathematica, we've identified and implemented several approaches for building fair ML models. Along with our partners and clients, we’ve learned that creating equitable models requires being proactive about how you (1) select and prepare data, (2) choose models, (3) build in transparency, and (4) monitor results and applications over time.
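To make the fourth step concrete, here is a minimal sketch, not Mathematica's actual tooling, of how one might monitor a trained binary classifier for disparities across a protected group. The `group` array, the synthetic data, and the two gap metrics (demographic parity and equal opportunity) are illustrative assumptions, not details from the post.

```python
# Minimal sketch of a fairness audit for a trained binary classifier.
# Assumptions (not from the original post): predictions and labels are
# 0/1 numpy arrays, and `group` marks membership in one of two groups.

import numpy as np

def fairness_report(y_true, y_pred, group):
    """Compare selection rates and true positive rates across two groups."""
    report = {}
    for g in np.unique(group):
        mask = group == g
        selection_rate = y_pred[mask].mean()          # share predicted positive
        positives = mask & (y_true == 1)
        tpr = y_pred[positives].mean() if positives.any() else float("nan")
        report[g] = {"selection_rate": selection_rate, "tpr": tpr}

    a, b = list(report)                               # the two group labels
    report["demographic_parity_gap"] = abs(
        report[a]["selection_rate"] - report[b]["selection_rate"]
    )
    report["equal_opportunity_gap"] = abs(report[a]["tpr"] - report[b]["tpr"])
    return report

# Example with synthetic data; in practice this would be re-run on each
# new batch of predictions to watch the gaps drift over time.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
print(fairness_report(y_true, y_pred, group))
```

Keeping a check like this in the monitoring loop turns "monitor results over time" into a concrete, repeatable measurement rather than a one-time review.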

Find out more about our approach.
