How do we protect what is human while working with innovation and AI? 🤖 In this podcast episode, Brené Brown and Dr. Joy Buolamwini discuss how we can fight bias in algorithms, the accuracy of AI powered gender classification products, and a creative perspective on tech. Check it out here ➡ ... Every week we share content that inspires us in our work or, in some way, moves us to become better professionals, colleagues, or leaders. #AI #innovation #creativity #bias
EGGS, Part of Sopra Steria’s Post
-
Founder, Diverse AI and Bedrock AI | Ex-Google | Author “Building Responsible AI Algorithms” | Responsible AI Advisor | Speaker | Entrepreneur
🎙️What do facial recognition and missing people have in common? Well, about 600,000 people are reported missing in the U.S. every year, and facial recognition could be used to help reduce this problem: reuniting families and resolving unending, painful, stressful searches. 🚩 So why isn’t it being used? I had a chat with Dina Temple-Raston on the Click Here Show to discuss facial recognition, its possibilities and challenges, including the persistent bias in AI systems and how the U.S. government could prioritise the adoption of facial recognition technologies. Before adoption, these systems should go through rigorous safety tests, with guardrails and mitigation measures in place and overall Responsible AI principles applied, to ensure accurate outputs. Here’s a brief Q&A covering the podcast. Link to podcast in the comments. https://lnkd.in/ea2fPkk9 #ai #responsibleai #facialrecognition
Trying to wring the bias out of AI algorithms — and why facial recognition software isn’t there yet
therecord.media
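The "rigorous safety tests" the post calls for usually start with disaggregated evaluation: reporting accuracy per demographic group rather than a single overall number, the approach behind well-known audits of gender classifiers. A minimal sketch in Python (the data and group labels below are invented for illustration, not taken from any real audit):

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Report classification accuracy separately for each demographic group,
    so a high overall accuracy can't hide poor performance on one subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Invented example: overall accuracy is 6/8 = 0.75, which masks
# a large gap between the two groups.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]
print(accuracy_by_group(y_true, y_pred, groups))
# → {'lighter': 1.0, 'darker': 0.5}
```

The point of reporting per-group numbers is exactly the one the post makes: a system can look "accurate" in aggregate while failing badly for the people most affected.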
-
Episode 4 of Humans of AI is out now! This week, we're joined by David Ryan Polgar, founder of All Tech Is Human, as he shares gripping stories from the early days of social media that led him to the intersection of tech and human rights, AI, and ethics. As a lawyer and educator, he's at the forefront of a movement "altering the DNA of tech development." Learn how David is dedicated to creating spaces and communities for human conversations and connections, shaping the future of AI responsibly. #HumansOfAI #Podcast #Ethics
Humans of AI: Presented by Writer
share.postbeyond.com
-
Solution Architect and Product Manager specializing in Field Service | Studying the pursuit of excellence in business and life | Views are reflection of my own quirky persona, not my employer's corporate image.
I recently tuned into Brené Brown’s podcast with guest Dr. Joy Buolamwini, a Canadian-American computer scientist educated at the Massachusetts Institute of Technology. I was aware that biases exist in AI, and this opened my eyes even more to specific, ethics-related instances that need to be addressed. They discussed her book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines. The book highlights the author’s journey as an AI researcher, focusing on her efforts to uncover and address bias and discrimination in AI systems and advocating for inclusive and ethical AI development. With AI gaining more attention in everyday society, there are many ethical concerns we need to be aware of, and likely many we haven’t yet discovered. How can we, as a society, better advocate for fairness and transparency in AI - do you think that is even possible? Please share your thoughts in the comments below. https://lnkd.in/ggVaUTxC #AI #Ethics #Society #Technology
Dr. Joy Buolamwini on Unmasking AI: My Mission to Protect What Is Human in a World of Machines
brenebrown.com
-
AI is sexist. It mirrors our society’s gender bias, and it's worth pointing out when it sneaks into daily life. Couch Potato Salad is an entertaining daily word puzzle that prompts users to describe an AI-generated image using two common phrases linked by a shared word. Today's AI-generated image is a group of doctors: ALL are male. In reality, women account for ~38% of active physicians, according to the AAMC. It's a small catch, but indicative of bigger potential issues around representation. Only the puzzle creator knows the AI prompt used, but I believe the oversight was unintentional, which is the point! When we increase our awareness, we are less likely to ingest whatever AI produces in the future as pure fact. I'm curious about what other biases have caught your eye in AI content... Ruth D. Kirstin Pesola McEachern, Ph.D. Sara Randazzo, M.A. Jessica L. Horton, Ph.D Femily Howe Natalie Duvall, EdD, MFA - any insights? #AI #genderbias #couchpotatosalad
-
In February’s episode of Diversifying Data, host and Senior Partnerships Manager Rakhi Sharma M.Sc, is joined by Associate Principal Mirela Gyurova to discuss all things AI, more specifically how bias shows up along with the effect of this bias. During the podcast Rakhi and Mirela explore the growing concern surrounding AI ethics and bias, delving into recent examples that illustrate these issues. From seemingly innocuous biases to more impactful ones, the discussion sheds light on the multifaceted ways bias manifests in AI systems. Examples range from biased facial recognition algorithms to discriminatory hiring practices perpetuated by AI-driven tools. Despite the ongoing discussion, progress in addressing AI bias has been relatively slow. Various factors contribute to this, including the lack of diversity within the AI field itself. Diversifying the AI community could enhance its ability to identify and mitigate bias effectively. Moreover, improving transparency around algorithms, incorporating fairness definitions into the training process, and implementing better auditing practices are crucial steps toward addressing these issues. Lastly, Mirela and Rakhi mention how at Kubrick we are training the next generation of data workers. By educating future professionals about AI ethics and bias, Kubrick can contribute to creating a more ethically responsible and inclusive AI ecosystem. Thank you to our wonderful speakers for exploring such an important topic. To watch or listen to the podcast episode in full, click here: https://lnkd.in/eqUGBg2G
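The "better auditing practices" mentioned above can be made concrete with a simple fairness metric. As a hypothetical sketch (not from the episode; the hiring example and function name are invented for illustration), demographic parity compares a model's positive-outcome rate across groups, which is one common way an audit would surface the discriminatory hiring patterns the episode describes:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, rates): the largest difference in positive-prediction
    rate between any two groups, plus the per-group rates themselves.

    predictions: iterable of 0/1 model outputs (e.g. 1 = "invite to interview")
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Invented example: a screening model that favours group "a".
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
# group "a" is selected 4/5 of the time, group "b" only 1/5: gap = 0.6
```

A gap near zero is one (imperfect) signal of parity; in practice auditors combine several such definitions, since they can conflict with one another.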
-
🤦Will AI replace me...? A question that pops up in my mind more often every day. With this new tech there are a lot of implications that we rarely think about. I really enjoyed filming the new episode of Diversifying Data. Rakhi & Mirela provided some incredible insight into the field of AI and its future. I'd definitely recommend having a listen or a watch here if you feel the same way about AI as I do: https://lnkd.in/dFKcnrq6
-
A new article from Wharton Magazine dives into the future of #AI with Ethan Mollick and Kevin Werbach. From Mollick's new book, "Co-Intelligence: Living and Working with AI," to Werbach's podcast, "The Road to Accountable AI," learn how these professors at The Wharton School are shaping how we think about AI in education and work. https://lnkd.in/ePZF2hb6
Wharton Faculty Examine Our AI Present and Future
magazine.wharton.upenn.edu
-
Transforming Businesses with AI & Automation | Advocate for Inclusive, Ethical, and Purpose-Driven Growth
AI and Gender Bias: Why Are Men the Default? Every time I create a new GPT, it suggests male figures; when I then ask for an image without men, it replies that there are “no people in the image”, a frustrating reminder of the bias in AI systems. It’s time we challenge these defaults and push for AI that represents everyone. Let’s work towards AI that’s truly inclusive and reflects the diversity of our world. #aiethics #genderequality #biasinai #inclusiveAI
-
Place Futurist | Management Consultant for Place | Board Advisor | People-Centric | Emerging Tech | Regenerative Leadership
If you are someone who spends time thinking about inclusion, identity and bias, then I strongly recommend you check out this podcast from TED AI. ✨TLDR: our immediate reaction may be that we need to eliminate bias and stereotypes within AI, but sometimes those biases may be a result of a specific culture. 🌍How do we get more specific, geographically accurate results when LLMs are trained on ever more global data? 🌏And where models are trained to correct some of our inherent biases, do we need to spend more time perfecting our prompts? 🌎Don’t forget that when you’re using AI, you’re also training AI! #AI #AIethics #ethics #bias #culture https://lnkd.in/e_tVzPeE
TED Tech: The TED AI Show: Why we can't fix bias with more AI w/ Patrick Lin on Apple Podcasts
podcasts.apple.com
-
Part of the conversation that intrigues me is the difference between needing an accurate reflection of historical events vs. showing us images that may be more reflective of a world without bias. Is it important to represent historic events “accurately” (bearing in mind that, depending on how far back you look, you may be relying on the writing and/or bias of someone who had their own agenda at the time)? #interestingdebate #AI #ethics #accuratehistory