It's not all doom and gloom when it comes to the predictions of economist Dr. Nouriel Roubini. The "MegaThreats" author, also known as "Dr. Doom," joins Ronan Ryan and John Ramsay on the latest episode of #BoxesAndLines to share his thoughts on how the U.S.'s ongoing growth and advancements in AI will likely help it remain the biggest global power. Listen to the full conversation here: https://lnkd.in/e3QBsEKj
IEX’s Post
More Relevant Posts
-
#AI, #SocialMedia, #ConfirmationBias, and their impact on society and #K12 #Students in the near future are addressed really well in the latest scary episode of #DoctorWho. I would love to hear your thoughts on this episode! Even if you are not a fan, this is the one episode you have to see.
-
🎧 Flashback Friday: Have you ever wondered how the metaverse is enabling a global talent marketplace? Tune in to hear our past guest, Josh Drean, explore this fascinating topic and much more, including:
💼 How AI is transforming our relationship with work
☀ Why "employment is dead" but work will soon be more enjoyable
🤖 The divide between those embracing AI and those at risk of obsolescence
🕵️‍♂️ Why employee surveillance is a flawed strategy
🌟 Why "passion is future-proof"
Don't miss out on this insightful episode! [Link to episode in the comments below!]
-
Increase Your Value! What insights does the experienced professor bring to light about your intellectual capital? Think back to the 90s, when Swedish IT companies like Framfab and Icon Medialab grew and expanded, much like AI companies are today. How can we, moving forward, recognize and measure not only the value of machines but also human contributions, to build sustainable businesses and a sustainable society?

We're revisiting that era and the discussions that led our guest to write the groundbreaking book "Intellectual Capital: Realizing Your Company's True Value by Finding Its Hidden Brainpower" (1997), together with Michael Malone.

In our fast-paced world, the rate of change in business and research is accelerating, largely driven by AI. Yet with this rapid change comes a human resistance that many renowned researchers acknowledge: most people don't welcome swift change. And there's a human limitation that computers don't share. We need to pivot toward highlighting our unique abilities and potential as humans, enabling us to collaborate effectively with technology while increasing our own value. One way to do this is by focusing on our Relational Intelligence and Relational Capital, turning them into part of our structural capital.

That's why we (Christian Altenius and I) believe our Swedish podcast, With AI to RI (Relational Intelligence): How to Build Your Relational Capital, has never felt more relevant than when we recorded this episode at CONVENDUM.

Leif Edvinsson brings a rich background in economics and management, with experience in both academia and industry. He was the world's first "Director of Intellectual Capital," pioneering ways to assess and visualize intangible assets like human and structural capital. Through his work, he has helped companies recognize the value of intellectual capital for achieving long-term competitive advantage and sustainable growth.

If you understand Swedish and can spare 20 minutes, listen to some intriguing Q&A on our podcast.
And if you’d like to join the discussion, feel free to comment or share your thoughts on our LinkedIn page, Relational Capital. Enjoy the rest of your weekend! https://lnkd.in/dC8fKKvJ
-
This is the third podcast in our series on living beyond human scale, and I have NOT stopped thinking about my conversation with Dr. S. Craig Watkins since we spoke last week. Our team got an early listen, and we were blown away by Craig's ability to walk us through what is known in the AI community as the "alignment problem" — who needs to be at the table in order to build systems that are aligned with our values as a democratic society? He uses powerful examples of what happens when we start unleashing AI in high-stakes environments like education, healthcare, and criminal justice. The question is: What guardrails, policies, and ethical principles do we need to make sure that we're not scaling injustice?

Here's one of the many, many quotes that shook me by the shoulders: "If we're going to build AI that matters — AI that impacts the world in a significant way — we've got to expand who's contributing to that conversation and who's driving how we design and deploy these systems. This is no longer a problem that's adequately solved by just computer scientists or engineers. Doing that has gotten us to the point now where we see that that's insufficient, inadequate, and increasingly indefensible."

Discussing the podcast would be an incredible lunch-and-learn for teams in organizations! https://lnkd.in/gyDDVef5
-
AI continues to be THE topic in the tech industry, but as advancements progress at an exponential rate, we're now "living beyond human scale," in the words of Brené Brown. She has launched a new podcast series as part of her Unlocking Us podcast to explore this fascinating subject (link below). If you're not familiar with Brené Brown, she's a researcher and storyteller whose TED Talk on the power of vulnerability is my all-time favorite. https://lnkd.in/eigdbHhm #getthefutureyouwant #unlockingus #artificialintelligence #livingbeyondhumanscale
-
A must-listen! It breaks down the reality of AI's potential and pitfalls.
-
Speaker/Coach | HOW TO PEOPLE™: Expanded Rules of Engagement | Offering virtual masterclasses and live workshops/keynotes | Author, "Creating Cultures of Neuroinclusion" (Nov. 2024) | Forbes Neurodiversity Expert
Seriously, y'all, drop everything and listen to this shape-shifting, mind-blowing, and positively SCARY conversation between Brené Brown and Craig Watkins about the systemic injustices and dangers inherent in AI. Then read "Unmasking AI" by Joy Buolamwini, the conscience of the AI revolution, and feel your mind explode further as you explore algorithmic justice through the lens of intersectionality and the tech industry. Don't look away, friends. #AI #intersectionality #SystemicOppression #Justice #SystemicRacism
-
I listened to this last night, and I highly recommend checking out this podcast about ethical decision-making and "fairness" when it comes to #AI.
-
Procurement Transformation Leader | Driving Digital & Sustainable Procurement | Champion of ESG & Supplier Diversity
Are You Ready for the Rise of AI? Confused about AI? Is it a runaway train? This insightful podcast hosted by Brené Brown tackles the development of AI, challenging our assumptions about its impact and potential. The future is ours to shape. Will AI be our downfall or the dawn of a new era? Let's choose what gets amplified and walk consciously into this new age. #AI #futureofwork #ethics