Isn’t it interesting how the more complex new technology gets, the more its development and deployment rely on multidisciplinary teams, team members with diverse backgrounds, and engagement with broader societal actors? Launching new products isn’t just about knowing the customer anymore. It’s promising to see the many varied approaches around the world to setting guardrails for artificial intelligence. This offers an excellent laboratory for countries and economic regions to test and learn from each other. Values reflect cultural context. I wonder: as companies and regulatory bodies around the world advance their ethical guardrails for AI, might we see a convergence of global values as we learn from each other?

Relatedly, check out “Bring Human Values to AI” in the latest HBR (Mar-Apr 2024) by Jacob Abernethy, François Candelon, Theodoros Evgeniou, Abhishek Gupta, and Yves Lostanlen. The authors emphasize that AI value alignment is not just a regulatory issue but also a product differentiator. The article offers concrete recommendations for companies across six challenges in bringing AI-enabled products and services to market.

1. Define values for your product
· Embed established principles. Anthropic uses the United Nations’ Universal Declaration of Human Rights as the principles guiding its AI assistant, Claude.
· Articulate your own values. Set up a diverse team of experts, or enlist customers and employees in the process.

2. Write the values into the program
· There are different methods for constraining the behavior of AI systems, such as privacy by design and safety by design, along with building in the necessary feedback loops.
· Generative AI will require more formal red lines (company + regulators) for adhering to defined values.
· Because guardrails and regulations evolve over time, they must be embedded in the AI’s programming so that any changes apply across systems.
· Tracking compliance with values includes tracking users’ conduct, as with aggressive behavior in online gaming or terrorist material on social media platforms.

3. Assess the trade-offs
· Privacy vs. accuracy
· Time to market vs. risk of values misalignment
· Oversight boards can be effective for weighing value-driven decisions.

4. Align your partners’ values
· Companies must understand the values underlying the models and training data they use from partners.
· Owners of foundation models have little control over how partners modify them for their own purposes.

5. Ensure human feedback
· Ensuring values alignment requires processing large volumes of data and human review.

6. Prepare for surprises
· Companies must deploy a robust system for detecting and limiting harmful or unexpected behavior of their AI systems.
· Participation in AI-incident databases (e.g., OECD or Partnership on AI) is recommended for continued learning.
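To make the second challenge concrete, here is a minimal sketch (my own illustration, not from the article) of one way to encode value guardrails as a single, versioned rule set, so that when a rule changes, the change applies to every system that loads the rules. The rule names and the `check_output` helper are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class GuardrailRule:
    name: str
    # Returns True when the given text violates this rule.
    check: Callable[[str], bool]

# One centralized, versioned rule set: editing this list (and bumping the
# version) updates every consumer, rather than patching systems one by one.
GUARDRAILS_VERSION = "2024-04"
RULES: List[GuardrailRule] = [
    GuardrailRule("no_personal_data", lambda t: "ssn:" in t.lower()),
    GuardrailRule("no_threats", lambda t: "i will hurt" in t.lower()),
]

def check_output(text: str) -> List[str]:
    """Return the names of any rules a model output violates."""
    return [rule.name for rule in RULES if rule.check(text)]

print(check_output("Your SSN: 123-45-6789"))  # ['no_personal_data']
print(check_output("The weather is nice."))   # []
```

Real deployments would replace the keyword checks with classifiers and route violations into the human-review feedback loop the article calls for, but the design point is the same: values live in one auditable place, not scattered through application code.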
I would certainly love it if AI guardrails led to a cross-cultural values discussion. It could serve as a catalyst for connecting within and beyond AI. Thanks for sharing, Tracey!
So true. The added challenge is that alignment between humans and AI starts with the alignment of human values and values, which starts offline 🌀