From the course: Ethics in the Age of Generative AI

Understanding Vilas' ethical AI framework

- I'm so excited about how quickly we're building new generations of AI tools, but I know that we need to make sure we're designing tools that support the future we want to create: equitable, sustainable, and thriving. And to do this, we're going to have to come up with new frameworks for ethical creation just as quickly as we advance the frontier of innovation. So how do we translate intuitions and hopes into clear principles for decision making? I'd like to share with you a three-part framework that I use for evaluating and advising organizations on the creation of new, ethically grounded AI tools, and it works equally well for technologists and non-technologists. The three pillars of the framework are responsible data practices, well-defined boundaries on safe and appropriate use, and robust transparency.

Let's start by talking about responsible data practices. This is the starting point for all ethical AI tools. Any new technology is only as ethical as the underlying data it's trained on. For example, if the majority of our consumers to date have been of a particular race or gender, then when we train the AI on that data, we'll continue to design products and services that serve only the needs of that population. As you consider building or deploying any new tool, you should ask: What's the source of the training data? What's been done to reduce explicit and implicit bias in that dataset? How might the data we're using perpetuate or increase historic bias? And what opportunities are there to prevent biased decision making in the future?

The second part of the framework is the importance of creating well-defined boundaries for safe and ethical use. Any new tool or application of AI should begin with a focused statement of intention about the organization's goals and an identification of the population we're trying to serve. Take, for example, a new generative AI tool that can write news articles. It could be used to help tell the stories of a wider range of underrepresented voices, or to publish in new languages, or it could perpetuate misinformation. When considering ethical use, you should ask: Who's the target population for this tool? What are their main goals and incentives? And what's the most responsible way to make sure we're helping them achieve those goals?

The third part of the framework is robust transparency. We need to consider how transparent the recommendations of the tool are, and that includes how traceable those outcomes are, which allows for human auditing and ethical accountability. When it comes to transparency, you should ask: How did the tool arrive at its recommendation? Sometimes it's not possible to know, and if so, what other ways do we have of testing its fairness? Is it possible for decision makers to easily understand the inputs, analysis, outputs, and process of the tool? And finally, have you engaged with a broad range of stakeholders to make sure this tool promotes equity in the world?

As you embark on building and using increasingly complex AI tools, this framework of responsible data, well-defined boundaries, and robust transparency should provide you with a foundation for making smarter, more informed decisions.