Lyssn’s Post


AI guardrails in health and human services are not just recommended, but essential. We loved the recommendations Mathematica shared with the Administration for Children and Families (ACF) on leveraging AI to enhance operations while ensuring equity, governance, privacy, and security: "The pace of AI development and deployment is rapid, and without rigorous equity, governance, privacy, and security controls, as well as research and evidence on the impacts of AI, these advances could result in more harm than good," wrote Mathematica. "AI should be viewed as an additive technology that supports human decision making by helping place relevant and practical insights at the fingertips of agency staff, practitioners, and policymakers."

The barrier to entry for developing AI tools is now lower than ever, which makes it crucial for individuals and organizations to become informed consumers of AI. While AI holds significant potential to support caseworkers in the child welfare sector, the AI solutions implemented must be safe, robust, and trained for the specific needs of health and human services. Prioritizing these considerations is what will enable AI to make a meaningful, positive impact.

We've seen firsthand that building #EthicalAI for this use case is possible! At Lyssn, we've worked with many state and local agencies to bring specialized AI training and quality assurance to child welfare teams, improving onboarding, meeting FFPSA requirements, and increasing service quality. If you would like to learn more, please feel free to send us a message!
