From the course: Ethics in the Age of Generative AI

Consulting your customers in building AI

- In the previous videos, we've explored the importance of technology teams, C-suites, and boards of directors in ensuring responsible and ethical AI practices. But what about the most important stakeholder of all, our customers? Designing great products means that we have to understand and incorporate their preferences, their needs, and their wants into our product design. I'd like to share with you a powerful framework for listening to our customers, an acronym that I call LISA.

First, we listen to users before we start to build. Developing and launching new technologies requires a clear understanding of our users' goals, their needs, and their fears. It's difficult to create a product when we haven't heard what our customers expect. Research has shown that users care deeply about the experience and usability of the technology products they use. In a recent survey conducted by the Nielsen Norman Group, 85% of respondents said they would not return to a website or a product after a poor user experience.

The second part of the LISA framework: how do we involve our customers in design decisions? We know that our customers want to feel that their opinions matter, and including them in design decisions can be crucial to building products that meet their needs. This is especially helpful when we're seeking to ensure that our decisions reflect the full diversity of our user base. Here's an example. In 2016, Airbnb launched its Community Commitment initiative, gathering input from users on ways to make the platform more inclusive and welcoming for people from diverse backgrounds. This simple practice led to brand-new features, such as filters for gender-neutral pronouns and the ability to search for wheelchair-accessible listings. Another way to involve customers in design decisions is to create a user advisory board, a group of users who are invited to provide feedback and input on new features, designs, and other aspects of product development, even while you're still in the design phase. For example, Microsoft has done this with a customer advisory board made up of customers from a range of industries and backgrounds who provide feedback on Microsoft's products and services, even while they're still in development. By including users from diverse backgrounds and experiences, we get a much wider range of perspectives on how to design and build products that actually meet the needs of all users.

The third part of the LISA framework is sharing simple and transparent privacy policies. By prioritizing user privacy, we build trust with our users and create a more loyal user base. This is so important. According to a survey by the Pew Research Center, 79% of adults in the US are concerned about how companies use their data. That concern can be a barrier that keeps users from engaging with your products, even when those products might actually help them improve their lives. Ways to do this include using plain language to explain data collection practices, providing customers with clear opt-in and opt-out options, and building privacy-by-design principles into your core technology development process.
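To make that last point concrete, here's a minimal Python sketch of what privacy-by-design consent handling might look like. The data-use names, the ConsentRecord class, and the plain-language descriptions are all illustrative assumptions, not code from the course or from any company mentioned here.

```python
# A minimal sketch of privacy-by-design consent handling.
# All names here are illustrative assumptions, not a real product's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Each data use gets a plain-language description, shown at the opt-in prompt.
DATA_USES = {
    "analytics": "We count feature usage to improve the product.",
    "personalization": "We use your activity to tailor recommendations.",
    "model_training": "We may use your content to improve our AI models.",
}

@dataclass
class ConsentRecord:
    user_id: str
    # Privacy by design: every data use defaults to False (opted out).
    choices: dict = field(
        default_factory=lambda: {use: False for use in DATA_USES}
    )
    updated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def set_choice(self, use: str, opted_in: bool) -> None:
        """Record an explicit opt-in or opt-out for one data use."""
        if use not in DATA_USES:
            raise ValueError(f"Unknown data use: {use}")
        self.choices[use] = opted_in
        self.updated_at = datetime.now(timezone.utc)

    def allowed(self, use: str) -> bool:
        # Callers must check consent before collecting or processing data.
        return self.choices.get(use, False)

record = ConsentRecord(user_id="u-123")
record.set_choice("analytics", opted_in=True)
print(record.allowed("analytics"))       # True: the user explicitly opted in
print(record.allowed("model_training"))  # False: off until the user opts in
```

The design choice that matters is the default: consent is never assumed, it's granted explicitly, and it can be withdrawn just as easily as it was given.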
The final part of the LISA process is auditing our work and inviting outsiders in to help hold us accountable.

Every existing and new technology product should be audited on a regular cycle, a process where you review the purpose of the product, the potential risks to users, and maybe most importantly, the possible unintended consequences of that product. Here's an example where this works well. Google developed an AI principles framework that guides its development and use of AI technology. That framework includes principles like fairness, privacy, and accountability, and it's used as a guide for conducting regular audits of Google's AI systems, identifying potential risks, and developing remediation plans.

There are a number of risks we should be aware of, including bias, data privacy concerns, and even security vulnerabilities. Organizations that bring in users and audit these risks do a far better job of responding to them. At OpenAI, the trust and safety team is responsible for identifying potential risks associated with AI technologies. The team includes experts in computer science, law, and philosophy who work together to ensure that OpenAI's technology is ethical and responsible. Once potential risks are identified, we have to step back and conduct a risk assessment to evaluate the likelihood and possible impact of those risks. This assessment should consider the potential impact on users as well as on the business. For example, IBM has a separate AI governance board that's responsible for conducting risk assessments for AI systems. The board evaluates the potential risks and makes recommendations to mitigate those risks and improve safety for users.
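To make the likelihood-and-impact step concrete, here's a minimal Python sketch of how an audit team might score and triage risks. The 1-to-5 scales, the thresholds, and the example findings are illustrative assumptions, not a published methodology from Google, OpenAI, or IBM.

```python
# A minimal sketch of likelihood-times-impact risk triage for an AI audit.
# The scales, thresholds, and findings are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe harm to users or the business)

    @property
    def score(self) -> int:
        # Classic risk-matrix scoring: likelihood multiplied by impact.
        return self.likelihood * self.impact

def triage(risks: list[Risk]) -> list[str]:
    """Rank risks by score and attach a next action for the audit report."""
    report = []
    for risk in sorted(risks, key=lambda r: r.score, reverse=True):
        if risk.score >= 15:
            action = "mitigate before the next release"
        elif risk.score >= 8:
            action = "assign an owner and a remediation plan"
        else:
            action = "monitor on the regular audit cycle"
        report.append(f"{risk.score:>2}  {risk.name}: {action}")
    return report

audit_findings = [
    Risk("biased outputs for underrepresented users", likelihood=4, impact=5),
    Risk("training data retained without consent", likelihood=2, impact=5),
    Risk("prompt injection exposes internal tools", likelihood=3, impact=3),
]
for line in triage(audit_findings):
    print(line)
```

The exact numbers matter less than the discipline: every identified risk gets a score and a concrete next step, so nothing surfaced by the audit quietly disappears.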
Building great products means listening to our customers. The framework we've described here, affectionately termed LISA, lets us listen to our customers, involve them in decision-making, share simple privacy practices, and sustain a regular cycle of audits and accountability. These practices let us build deeper trust with our customers and ensure that our technologies meet the needs and preferences of our communities, not just the ones we serve today, but the ones we aspire to serve in the future.