Foresight Institute

Advancing transformative technology towards futures of Existential Hope

About us

Foresight Institute is a leading think tank and public interest organization focused on transformative future technologies. Founded in 1986, its mission is to discover and promote the upsides, and help avoid the dangers, of nanotechnology, AI, biotech, and similar life-changing developments.

Industry: Civic and Social Organizations
Company size: 2-10 employees
Headquarters: San Francisco, CA
Type: Nonprofit
Founded: 1986

Updates

The 2024 Space Futures & Governance Workshop will take place September 20-21 at the Chabot Space & Science Center in Oakland, CA. Accelerating progress in space science and technology has recently opened new frontiers, along with governance challenges and opportunities to make our future in space accessible, peaceful, and flourishing. This workshop invites leading space-oriented scientists, technologists, and governance experts to explore emerging opportunities at the intersection of space technology and governance. Secure your participation here: https://lnkd.in/eg4ASuNr

Yesterday evening we kicked off the 2024 LBF and Foresight Longevity Workshop with a VIP reception in San Jose with our speakers, sponsors, and fellows. Thank you to our hosts, the Longevity Biotech Fellowship and the Stanford University School of Medicine, and to our media partner Lifespan.io (Lifespan Extension Advocacy Foundation). This event wouldn't have been possible without our sponsors: Protocol Labs, AgingBiotech.info, 100 Capital, NFX Bio, OpenCures, Deep Origin, and the Michael Antonovich Charitable Foundation. We look forward to exploring new ideas for critical pathways with you at CANOPY and LKSC at Stanford University over the next two days.

See Steven Stone present at our recent Intelligent Cooperation workshop. Stone, a data security expert at Zero Labs, examines the critical role of AI in managing the growing complexity of data security. He emphasizes that organizations must prepare for an exponential increase in data volume, citing growth of 42% every 18 months from a base of 240 backend terabytes and projecting as much as 7x growth over the next five years. This massive data expansion necessitates robust security strategies. Stone argues that generative AI holds significant promise for enhancing these strategies by efficiently identifying, classifying, and tracking data movements. He believes AI can help organizations adapt their defense mechanisms to keep pace with the fast-evolving data landscape, effectively mitigating potential vulnerabilities. Full summary and video here: https://lnkd.in/dGnviDqP
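
For context on those figures, here is a quick compounding check in Python (our own illustration, not from the talk). Compounding the post's 42%-per-18-months rate from the 240 TB baseline yields roughly 3.2x over five years, so a 7x outcome implies a substantially faster rate, around 79% per 18 months:

```python
def project_volume(base_tb: float, rate: float, period_months: int,
                   horizon_months: int) -> float:
    """Compound a fixed growth rate over a time horizon."""
    periods = horizon_months / period_months
    return base_tb * (1.0 + rate) ** periods

# Figures from the post: 240 TB baseline, +42% every 18 months, 5 years out.
projected = project_volume(240, 0.42, 18, 60)
print(f"{projected:.0f} TB, {projected / 240:.1f}x growth")  # ~772 TB, ~3.2x

# A 7x outcome over the same horizon corresponds to ~79% per 18 months:
rate_for_7x = 7 ** (18 / 60) - 1
print(f"{rate_for_7x:.0%} per 18 months")  # ~79%
```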

Watch the proposal from one of the working groups at our recent AI safety workshop. This group explored Differential Cyber Defense.

Participants:
• Aleksandra Singer, Altos Labs
• Austin Liu, Chao Society
• Evan Miyazono, Atlas Computing
• Matt Slater, Stateless Ventures
• Ryan Singer, VEX

This working group focused on cyber defense, developing an approach to secure the future of cybersecurity against AI-associated risks. They proposed creating an Epoch AI-like research group dedicated to cyber posture forecasting in an AGI future, aiming to provide trustworthy, publicly available data to demonstrate future risks and increase awareness among cybersecurity policy professionals. The group emphasized empowering defense over offense to ensure greater stability, suggesting the use of machine learning approaches to improve the overall security posture of the internet.

Their plan involves building AI tools to identify vulnerabilities in open-source software and create patches to secure it (see the sketch below). The approach includes assembling a team of security experts, fine-tuning frontier models, and developing automated tools for vulnerability identification and responsible disclosure. By encouraging widespread adoption of this platform, the group aims to facilitate better decision-making and contribute to a more secure cyber future in the age of AGI.

Watch here: https://lnkd.in/dDv28_cP
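
Purely as an illustration of the scan-and-patch loop described above, here is a minimal sketch (our own construction, not the working group's code); the toy regex scanner stands in for the fine-tuned models and automated tooling the group proposes:

```python
import re
from dataclasses import dataclass
from pathlib import Path

# Toy stand-in for the proposed model-driven scanner: flag C library calls
# that are classic sources of buffer overflows. A real pipeline would pair
# static analysis with the fine-tuned frontier models the group describes.
RISKY_CALLS = re.compile(r"\b(strcpy|strcat|gets|sprintf)\s*\(")

@dataclass
class Finding:
    path: Path
    line_no: int
    snippet: str

def scan_repository(repo: Path) -> list[Finding]:
    """Walk a checkout of an open-source project and flag risky call sites."""
    findings = []
    for source in repo.rglob("*.c"):
        for line_no, line in enumerate(
                source.read_text(errors="ignore").splitlines(), start=1):
            if RISKY_CALLS.search(line):
                findings.append(Finding(source, line_no, line.strip()))
    return findings

def draft_patch(finding: Finding) -> str:
    """Hypothetical: a model would propose a concrete fix; here we only
    record a recommendation for human review."""
    return (f"{finding.path}:{finding.line_no}: replace unsafe call "
            f"with a bounds-checked variant ({finding.snippet})")

def disclose_responsibly(report: str) -> None:
    """Hypothetical: a real system would file a private report with the
    maintainers, per responsible-disclosure norms, instead of printing."""
    print("DRAFT PRIVATE REPORT:", report)

if __name__ == "__main__":
    for finding in scan_repository(Path(".")):
        disclose_responsibly(draft_patch(finding))
```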

Watch Marta Belcher's presentation from our recent AI safety workshop, which underscores the need for a nuanced debate on the regulation of artificial intelligence and machine learning technologies. Belcher highlights a pivotal issue: the balancing act between protecting intellectual property rights and preserving civil liberties in the face of rapidly advancing technologies. Her insights stress the importance of regulating AI activities rather than the technologies themselves. This distinction could play a crucial role in maintaining the integrity of intellectual property rights while fostering innovation. Watch here: https://lnkd.in/daRdxbBk

Watch the proposal from one of the working groups at our recent AI safety workshop. This group explored Systemic Risk of AI.

Participants:
• Brandon Sayler, University of Pennsylvania
• Colleen McKenzie, AI Objectives Institute
• Jeremiah Wagstaff, Humaic Labs
• Max Reddel, ICFG
• Milan Griffes, Lionheart Ventures
• Philip Chen, Lionheart VC
• Vassil Tashev, Independent

This working group developed a comprehensive approach to identifying and addressing cascading systemic risks associated with AI development. They created a taxonomy of interconnected risks spanning cybersecurity, economics, geopolitics, and social dynamics, proposing the creation of "risk observatories" to monitor early warning signs of potential problems (see the sketch below). This approach aims to centralize and automate risk detection, allowing issues to be identified and addressed quickly as they arise.

Key risk areas included AI-driven job displacement, erosion of trust due to deepfakes, cybersecurity threats, and potential AI-enabled totalitarianism. Their analysis culminated in a flow chart illustrating how these issues could lead to three major endpoints: extinction, excessive state control, or anarchy. By implementing their proposed risk observatory system, the group aims to mitigate these systemic risks and prevent cascading negative outcomes.

Watch here: https://lnkd.in/d8ThkFCC
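
As a minimal illustration of the risk-observatory idea, here is a sketch (our own, with made-up indicator names, values, and thresholds) of how early-warning signals might be centralized and checked automatically:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Indicator:
    """One early-warning signal tracked by the observatory."""
    name: str
    read_metric: Callable[[], float]  # swap in a real data feed here
    threshold: float                  # level at which the signal fires

def observatory_report(indicators: list[Indicator]) -> dict[str, bool]:
    """Centralize the check: which warning signs are currently firing?"""
    return {ind.name: ind.read_metric() >= ind.threshold for ind in indicators}

# Illustrative indicators with dummy feeds, loosely following the risk
# areas named in the post.
indicators = [
    Indicator("deepfake_share_of_flagged_media", lambda: 0.12, 0.10),
    Indicator("ai_driven_job_displacement_rate", lambda: 0.03, 0.05),
    Indicator("critical_cyber_incidents_per_month", lambda: 7.0, 10.0),
]
print(observatory_report(indicators))
# {'deepfake_share_of_flagged_media': True,
#  'ai_driven_job_displacement_rate': False,
#  'critical_cyber_incidents_per_month': False}
```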
