Subconscious AI’s Post


Prepper Sam Altman regularly jokes about existential risk:

"I have like structures, but I wouldn't say a bunker. None of this is gonna help if AGI goes wrong." - Sam Altman

"AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies." - Sam Altman

Now Sam is asking for $7 trillion to increase existential risk while committing only $10 million to safety. This is dangerous and not human-aligned.

What if we could increase knowledge and understanding of human behavior without increasing existential risk? Subconscious.ai is building human-level AI (specifically not superhuman) for social research, market research, and product design.

Want to test it yourself? Join our waitlist: https://lnkd.in/dW4VD_Ve

#ResponsibleAI #HumanAlignedAI

Chris Chambers, MBA

Harvard Business Review Advisory Council | Chief Technology Officer | Chairman of the Board | Doctor of Business Administration Candidate | Future PhD Candidate in Management, Operations, Technology, or Strategy

6mo

Love the product and concept; it has tremendous practical application in the right hands from a business operations perspective.

