At our next Stanford HAI seminar, copyright meets generative AI. How will legal rulings affect the future of machine learning? Join us for an in-depth talk by Pamela Samuelson on Nov. 13 as she explores the legal challenges of fair use in AI. https://lnkd.in/gVe8crSp
Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Higher Education · Stanford, California · 104,079 followers
Advancing AI research, education, policy, and practice to improve humanity.
About us
At Stanford HAI, our vision for the future is led by our commitment to studying, guiding, and developing human-centered AI technologies and applications. We believe AI should be collaborative and augmentative, enhancing human productivity and quality of life. Stanford HAI leverages the university’s strength across all disciplines, including business, economics, genomics, law, literature, medicine, neuroscience, philosophy, and more. These complement Stanford’s tradition of leadership in AI, computer science, engineering, and robotics. Our goal is for Stanford HAI to become an interdisciplinary, global hub for AI thinkers, learners, researchers, developers, builders, and users from academia, government, and industry, as well as leaders and policymakers who want to understand and leverage AI’s impact and potential.
- Website: http://hai.stanford.edu
- Industry: Higher Education
- Company size: 11-50 employees
- Headquarters: Stanford, California
- Type: Nonprofit
- Founded: 2018
Locations
- Primary: Stanford, California 94305, US
Updates
- “AI isn’t replacing doctors,” says Stanford researcher Ethan Goh. A recent study finds that ChatGPT can make diagnosis more efficient, but that greater physician trust and workflow integration are needed before it meaningfully improves clinical reasoning. https://lnkd.in/gcekCP4S
- What makes a story go viral? Using sentiment analysis, Stanford scholars analyzed nearly 30 million tweets from over 180 news organizations between 2011 and 2020 and found what fuels the viral spread of news on social media. Now, what to do about it? https://lnkd.in/g4ttUm-Z
The Data Behind Your Doom Scroll: How Negative News Takes Over Your Feed (hai.stanford.edu)
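For readers curious what sentiment analysis looks like in practice, here is a minimal, hypothetical sketch of headline sentiment scoring using NLTK's off-the-shelf VADER analyzer. This is not the Stanford study's actual pipeline or data; the example headlines are invented for illustration.

```python
# Minimal sketch of lexicon-based sentiment scoring with NLTK's VADER.
# Hypothetical example: not the study's pipeline; headlines are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

headlines = [
    "Economy adds jobs at fastest pace in a decade",
    "Floods devastate coastal towns, thousands displaced",
]

for text in headlines:
    # "compound" ranges from -1.0 (most negative) to +1.0 (most positive);
    # aggregating such scores across millions of posts is one way to study
    # how negativity relates to sharing behavior.
    score = analyzer.polarity_scores(text)["compound"]
    print(f"{score:+.3f}  {text}")
```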
- Stanford HAI Distinguished Education Fellow Peter Norvig outlines actionable approaches to make the development and deployment of AI agents safe and responsible. https://bit.ly/3C3lV50
OpenAI Fast-Tracks AI Agents. How Do We Balance Benefits With Risks? (forbes.com)
- Stanford Institute for Human-Centered Artificial Intelligence (HAI) reposted this
Meet some of Stanford University's leading experts in Generative AI. Our new online, self-paced program, created in collaboration with Stanford Institute for Human-Centered Artificial Intelligence (HAI), features an incredible teaching team of 26 thought leaders across disciplines, from computer science and engineering to law and ethics. You'll gain insights from the very minds shaping the future of generative AI. Learn more and enroll: https://lnkd.in/gAuSi4dE #GenerativeAI
- Scholars are making progress in developing generative AI for multimodal biomedicine. Learn more at the next Stanford HAI Seminar on Nov. 6 with guest speaker Sheng Wang, assistant professor of computer science and engineering at the University of Washington, Seattle. https://lnkd.in/gsiPQnxB
- Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the Stanford Robotics Center just announced a new partnership to advance the responsible application of AI in robotics. Co-led by Fei-Fei Li, John Etchemendy, James Landay, and Oussama Khatib, the initiative will draw on interdisciplinary research and focus on helping policymakers understand and govern these technologies.
“We’re thrilled to be collaborating on this exciting venture. We’re just beginning to understand the exciting ways AI will drive robotics to new capabilities, and now is the time to talk about its effective governance,” said Khatib.
“We’re watching the robotics field accelerate in ways we’d never dreamed even a few years ago. For this to be done safely, fairly, and successfully, we need to work together now to understand its potential and its dangers,” said Landay.
Read the full announcement: https://lnkd.in/gGqcuN-R
- We are pleased to announce Yolanda Gil and Toby Walsh as the newest members of the AI Index Steering Committee!
Gil, a distinguished computer scientist at the USC Information Sciences Institute, joins as Chair-Elect. She initiated and led the W3C Provenance Group, which produced a widely used standard that provides the foundations for trust on the Web. As president of the Association for the Advancement of Artificial Intelligence (AAAI), she co-chaired the 20-Year Artificial Intelligence Research Roadmap for the US, which made key strategic recommendations based on extensive community engagement. She is a fellow of AAAI, the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), the Cognitive Science Society (CSS), and the American Association for the Advancement of Science (AAAS).
Walsh is the chief scientist of UNSW.AI, UNSW's new AI institute. He is a strong advocate for limits that ensure AI is used to improve our lives, having spoken on this topic at the UN and to heads of state, parliamentary bodies, company boards, and many others. He is a fellow of AAAI and the Australian Academy of Science, and was named on the international "Who's Who in AI" list of influencers. He has written four books on AI for a general audience, most recently "Faking It! Artificial Intelligence in a Human World."
- Join this upcoming workshop on “The Future of Third-Party AI Evaluation” this Monday, October 28. Learn more here: https://lnkd.in/gktYpYdz
What should the future of third-party AI evaluation look like? Join us on October 28 for a virtual workshop on safe harbor, vulnerability disclosure, and the design of third-party evaluations. Our workshop will:
❌ highlight barriers to adversarial red teaming
🌳 lay out how the third-party evaluation ecosystem can grow
⚖️ bring law and policy experts into conversation with techies
🤖 identify how these issues may differ for large AI models
⛏️ help shape responsible reporting of flaws in AI systems
In March, my team released a paper in which we argued for "A Safe Harbor for AI Evaluation and Red Teaming." A lot has changed since then, and we're excited to get together with an amazing group of experts to think about the future of these emerging and interconnected fields.
🔗 see below for links with full details
Thanks to my co-organizers Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Michelle Sahar, Dr. Rumman Chowdhury, Arvind Narayanan, and Percy Liang; to Stanford Institute for Human-Centered Artificial Intelligence (HAI), CITP Princeton, and Massachusetts Institute of Technology; and to Nicolas Carlini, Lama Ahmad, Avijit Ghosh, PhD, Deborah Raji, Casey Ellis, Jonathan (Jono) Spring, Harley Geiger, Ilona Cohen, and Amit Elazari, Dr. J.S.D. Can't wait to see everyone on the 28th!
- At our next Stanford HAI seminar, learn how specialized models are shaping the future of healthcare. University of Washington Assistant Professor Sheng Wang will introduce recent work toward building multimodal biomedical foundation models. Save your spot here: https://lnkd.in/g5MyBnNv