AI: He/She/It

In the realm of pop culture, AI often takes on a life of its own—think Arnold Schwarzenegger's Terminator or Scarlett Johansson's character in Her. These Hollywood portrayals, while entertaining, inadvertently complicate the conversation about AI by anthropomorphizing the technology. Anthropomorphizing is the act of attributing human characteristics or behavior to non-human entities, such as animals, plants, or machines. As kids, we grow up anthropomorphizing our toys and surroundings. '90s kids will remember Thomas the Tank Engine, singing teacups, and toys coming to life. I anthropomorphize my car by naming it “Stella” and my robot vacuum by christening it “Tom Brady.” This anthropomorphizing is also often referred to as “humanizing.” Humanizing can be a source of creativity, but it can also lead to misunderstanding or emotional attachment.

When we anthropomorphize or humanize AI, we inadvertently enter a conversation about whether AI is an individual with its own rights and emotions. That is a fascinating philosophical discussion, but it overlooks a critical current issue: not all humans are afforded the same level of respect and dignity. Individuals living with disabilities are regularly dehumanized and marginalized in our society. Women and people of color consistently experience discrimination. Across the globe, different demographics are treated as less than human. There are still many challenges and gaps in accessibility and inclusion in how we design our world and our AI systems. Before we leap into discussing the personhood of AI, we must first resolve the disparities and injustices toward our fellow humans.

Human-centricity in AI provides a key pivot in the conversation. Human-centricity prioritizes the individual, ensuring that artificial intelligence is used as a tool to promote agency and equity. It is about giving people more opportunities to have control over their lives and make choices that align with their values and goals. This approach requires a commitment to diversity, ethics, and accountability in AI development. We must strive for AI that reflects the rich tapestry of human experience and gives voice to all perspectives. Inclusivity is not just a buzzword; it is a guiding principle for the future of responsible innovation. A lack of inclusivity in AI can have dire consequences, from reinforcing stereotypes to inadvertently perpetuating discrimination.

At SAS, we talk about promoting human well-being, agency, and equity, ensuring accessibility, and including diverse perspectives and experiences. We are dedicated to fostering an environment where AI serves as a tool for enhancing human well-being, promoting individual agency, and ensuring equity. This inclusive approach is crucial for developing ethical AI, ensuring that technology is a tool for progress, not a source of exclusion or bias.

As we delve deeper into the world of artificial intelligence, it's crucial that we ask ourselves a fundamental question: Are we truly prioritizing the well-being and agency of individuals when creating and using AI, or are we inadvertently elevating AI to the status of an individual? While I love my car and am grateful for the service of my robot vacuum, I will never argue that Stella and Tom should have rights equal to my own or those of a friend with a different lived experience. Let's remember that technology should serve as a tool to uplift and empower individuals, not replace or diminish our intrinsic worth. Let's make sure AI remains a way to enhance human potential, not define or replace it.


Kristi Boyd, CIPT is a Senior Trustworthy AI Specialist with SAS' Data Ethics Practice (DEP) and supports the Trustworthy AI strategy with a focus on the pre-sales, sales & consulting teams. She is passionate about responsible innovation and has an R&D background as a QA engineer and product manager.

