Proudly resharing PsySafe AI, an open-source initiative by our founder focused on psychological safety in AI. At Koios AI, we're committed to personalised AI that truly supports users! 🚀
🎉 Excited to share a personal build-in-public project I've been working on — 𝗣𝘀𝘆𝗦𝗮𝗳𝗲 𝗔𝗜, an open-source initiative dedicated to psychological safety in AI applications.

🚀 AI-powered mental health, therapy, coaching, customer support and companion apps are booming, yet there's surprisingly little out there to ensure these interactions remain psychologically safe.

⚠️ LLMs are powerful yet unpredictable. While many apps diligently prevent harmful content and hallucinations, most rely entirely on the AI's intuition to handle sensitive mental health scenarios.

🙀 That's risky. Without clear guardrails, we're simply guessing — and hoping for the best — when the stakes for users' mental health are incredibly high.

👉 That's why I'm launching 𝗣𝘀𝘆𝗦𝗮𝗳𝗲 𝗔𝗜 — a first step toward safer AI interactions. My goal is straightforward: give developers immediate access to carefully researched guardrail prompts, so everyone can confidently build AI applications that genuinely respect and support users' mental well-being.

📌 I've just released the first iteration of the guardrail, focused on vulnerability and informed by extensive study of legal frameworks and academic research. It's an early start — I'll continue to expand the repository regularly over the coming weeks.

⭐️ If this resonates with you, please start by starring the repo on GitHub (link in comments). I'd also love to hear your thoughts — which guardrails would benefit your application? And if you're interested in collaborating on research directly, let me know!

👋 Building safe, empathetic AI isn't easy — but let's strive to make it the new standard, not the exception.

#AI #OpenSource #MentalHealth #ResponsibleAI #SafetyFirst #LLMs