United We Care

Wellness and Fitness Services

Los Angeles, California · 12,206 followers

On a Mission to Make Mental Health Affordable, Accessible, & Adaptable for Billions, Powered by Gen AI with a Human Touch

About us

Empowering Billions with AI-powered Mental Wellness | Making Holistic Health Accessible for All

At United We Care, we're a deep tech Generative AI startup with a human touch. We're on a mission to transform mental health and wellness, making it affordable, accessible, and adaptable for billions.

- World's most advanced virtual wellness coach: our AI companion provides personalized support, mindfulness exercises, and resources, all in the palm of your hand.
- Holistic business solutions: boost employee well-being, reduce absenteeism, and unlock your team's full potential with our innovative EAPs.
- Human-centered approach: we combine the power of AI with the care and expertise of real professionals, ensuring a compassionate and effective experience.
- Comprehensive mental health and wellness solutions for enterprises, healthcare systems, insurance companies, and consultants.

Join us on the journey to a healthier, happier future for everyone.

#GenerativeAI #MentalHealthMatters #WellbeingForAll #DeepTech

Connect with us today! Website: www.unitedwecare.com

Industry: Wellness and Fitness Services
Company size: 51-200 employees
Headquarters: Los Angeles, California
Type: Privately Held
Founded: 2020
Specialties: mental health, employee assistance, virtual Gen AI coach, artificial intelligence, deep tech, emotional wellness, mindfulness, mental wellness, emotional health, employee wellness, and work-life balance


Updates

  • United We Care

    Social Media: Friend or Foe? Therapists Reveal the Shocking Truth!

    Is social media making you depressed? Learn how therapists help clients navigate the digital world, with five strategies to combat FOMO, cyberbullying, and more.

    Social media connects us, entertains us, and informs us. But what about its impact on mental health? Therapists are seeing a rise in anxiety, depression, and loneliness linked to social media use. So, is it time to ditch our phones? Not necessarily! But we need to be smart about how we engage with these platforms. This article explores the dark side of social media and offers practical strategies therapists can use to help clients develop healthier habits.

    Read the full article to learn:
    1. How social media fuels comparison and FOMO.
    2. The dangers of cyberbullying and addiction.
    3. How therapists guide clients towards a more balanced digital life.

    Let's work together to make social media a force for good, not anxiety!

    #mentalhealth #socialmedia #therapy #wellbeing #digitaldetox #anxiety #depression #fomo #cyberbullying

    Ritu Mehrotra (She/Her) Ravi Kikan Sourav Banerjee Arti Khanijo

    Mental Health Implications of Social Media Use: Understanding the Impact and How Therapists Can Address These Issues

    United We Care on LinkedIn

  • United We Care

    Is AI Taking Over Therapy? The Future of Mental Health Care

    Are you tired of waiting weeks for a therapy appointment? Imagine having 24/7 access to personalized support. In the digital age, mental health care is undergoing a massive transformation. From AI chatbots to virtual reality, technology is changing the way we access and experience therapy. But is technology replacing human connection? Join us as we explore the exciting world of digital mental health and discover how it's empowering both patients and professionals.

    Read the full article to learn more.

    #mentalhealth #technology #digitaltransformation #therapy #ai #virtualreality #healthcare #innovation

    Ritu Mehrotra (She/Her) Ravi Kikan Sourav Banerjee Arti Khanijo

    Digital Transformation in Mental Health Practices - How digital transformation is reshaping mental health services

    United We Care on LinkedIn

  • United We Care

    Can AI ease the day-to-day pressures clinicians face? Explore how artificial intelligence can lighten the load by automating tasks, improving patient care, and reducing burnout. Join our free webinar, 'Empowering Clinicians with AI: Enhancing Care, Not Replacing It', to discover the real impact AI can have on clinical workflows. Don't miss this opportunity to see how technology can transform healthcare. Register today!

    Registration link: https://lnkd.in/gbRKDyw6

    #AIinHealthcare #ClinicianSupport #HealthcareInnovation

    • Empowering Clinicians with AI: Enhancing Care, Not Replacing It
  • United We Care

    AI in Healthcare: A Double-Edged Sword ⚔️

    Is AI revolutionizing patient care, or is it a privacy nightmare? Discover how AI is transforming clinical note-taking and charting while exploring the critical concerns of data security and patient consent. Learn about best practices to harness AI's potential while safeguarding patient information.

    Read the full article to understand the risks and rewards of AI in healthcare.

    #AIinHealthcare #PatientPrivacy #DataSecurity #ClinicalDocumentation #HealthcareTechnology

    Ritu Mehrotra (She/Her) Ravi Kikan Arti Khanijo Sourav Banerjee

    Privacy & Security in AI Note Taking & Charts: A Clinician's Guide

    United We Care on LinkedIn

  • United We Care

    Is AI enhancing clinical judgment, or is it set to replace it entirely? This pressing question is at the heart of today's evolving healthcare landscape. As AI-driven tools become more sophisticated, many wonder how this will impact the role of clinicians and their decision-making process. Join us for this critical conversation and find out how AI can coexist with or challenge traditional clinical approaches. Register now for our free webinar and be part of this insightful discussion!

    Registration link: https://lnkd.in/gbRKDyw6

    #AIinHealthcare #HealthcareInnovation #ClinicalAI

  • United We Care

    Mental health apps have made mental health resources more accessible, especially for individuals in remote or underserved areas. They provide immediate support and resources without the need for physical travel to a clinic. Many apps offer personalized treatment options, allowing users to track their symptoms and progress. This can lead to better engagement and outcomes as users receive tailored advice and interventions.

    The rise of teletherapy has expanded access to mental health services, allowing us to receive care from the comfort of our homes. Studies suggest that teletherapy can be as effective as in-person therapy for many conditions. Social media, too, allows us to find support networks and communities that resonate with our experiences. Yet it can also contribute to feelings of inadequacy, anxiety, and depression, particularly through the comparison culture it fosters. We may experience decreased self-esteem and body image issues as a result of curated online personas.

    Sadly, not all mental health apps are created equal; some lack evidence-based practices and may not be safe or effective. Users need to be cautious and informed when choosing apps. There is also a risk of becoming overly reliant on apps for support, potentially neglecting traditional forms of therapy or social interaction. The constant influx of notifications and the pressure to remain connected can lead to burnout and mental fatigue, impacting overall well-being.

    So, how do we find a way through this?
    👉🏼 Establish specific limits on screen time and designate tech-free periods, particularly before bedtime, to improve sleep quality and mental clarity.
    👉🏼 Take breaks from technology to engage in activities that do not involve screens, such as reading, exercising, or spending time outdoors.
    👉🏼 Be intentional about technology use. Set specific goals for online activities, such as limiting social media checks to certain times of the day.
    👉🏼 Engage in face-to-face interactions with friends and family to strengthen social bonds and reduce feelings of isolation.
    👉🏼 If technology use is negatively impacting your mental health, consider consulting a mental health professional for support and guidance.

    We are here to bridge that gap for you. Visit our website or app if you need more information on destressing, unplugging, or even just someone to talk to. We're here for you.

    Take the first step towards a healthier mind. Explore our resources today at: https://lnkd.in/gpzd7VUX

    #DigitalMentalHealth #Teletherapy #MentalHealthApps #MindfulTechnology #MentalWellbeing

    Ritu Mehrotra (She/Her) Sourav Banerjee Ravi Kikan sheetal sharma Sonakshi D. Torin Nicholas Ayushi Agarwal Arti Khanijo Bhavya Vats Sumit Khanna Anubhab Giri

  • United We Care

    Whenever we need extreme precision, we turn to machines. Experience has taught us to blindly trust machines for their 100% accuracy, and this trust extends to the ubiquitous LLM. Our new research paper, "LLMs Will Always Hallucinate, and We Need to Live With This", shows that this trust in LLMs is unfounded.

    LLMs cannot help but hallucinate: the tendency to make mistakes and produce inaccurate generations is built into their mathematical structure. We rely on the greats, Gödel and Turing, to help us see this. Our proofs, built on Gödel's Incompleteness Theorem and Turing's model of the ideal computer, show that LLMs will always hallucinate. Through these tools, our paper provides a novel theoretical lens on LLMs, as opposed to the more usual empirical results. The result is a general theory that applies even to the most powerful LLM, modelled as an ideal Turing machine with unbounded memory and time. Hence, our paper argues that even with unlimited computational resources and iterative checking, the fundamental nature of LLMs means that hallucinations can never be completely eliminated.

    To highlight this, we introduce the concept of Structural Hallucinations: hallucinations that occur due to the inherent mathematical and logical structure of LLMs. We go on to argue that all hallucinations are structural hallucinations, and that they happen at every stage of LLM generation. We address these stages in our assertions:

    1. Training data is inherently incomplete.
    2. Accurate information retrieval is undecidable.
    3. Intent classification is undecidable.
    4. Text generation by LLMs is a priori unknowable.
    5. Fact-checking mechanisms can never be completely accurate.

    Hence the claim, as Andrej Karpathy has said since 2023: hallucination is what LLMs do ( https://lnkd.in/gU2SpH-c ).

    This has unprecedented consequences for modern industry applications of LLMs. Sensitive fields like medicine, policy, and law, among others, will need to proceed with caution and comprehension whenever they employ LLMs for sensitive tasks.

    Download the full paper here: https://lnkd.in/gX4ieGXJ

    A huge thank you to the brilliant minds behind this research: Sourav Banerjee Ayushi Agarwal Saloni Singla Ritu Mehrotra (She/Her) Ravi Kikan Arti Khanijo sheetal sharma Vinti Agarwal Sumit Khanna Anush Mahajan Ayush kumar Bar Promila Ghosh Syed Zaib Farooq Bhavya Vats Rohan kapoor Aditi Chawla Sonakshi D. Torin Nicholas DIPANJAN CHAKRABORTY

    #LLMsWillHallucinate #ArtificialIntelligence #DeepTech #hallucinations #LLM
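    The full proofs live in the paper itself; as a rough intuition for the undecidability claims behind assertions 2, 3, and 5, the sketch below restates the textbook halting-problem diagonalization in Python. Every name in it (perfect_halt_checker, contrarian) is hypothetical and purely illustrative, not code from the paper.

    ```python
    # Classic diagonalization argument, written as code: assume a total,
    # always-correct halting checker exists, then derive a contradiction.
    # All names here are hypothetical illustrations, not code from the paper.

    def perfect_halt_checker(program_source: str, program_input: str) -> bool:
        """Hypothetical oracle: True iff running program_source on program_input halts.
        Assumed (for contradiction) to be total and always correct."""
        raise NotImplementedError("No such total, always-correct checker can exist.")

    def contrarian(program_source: str) -> None:
        """Feeds a program to the oracle with itself as input, then does the opposite."""
        if perfect_halt_checker(program_source, program_source):
            while True:   # oracle says "halts" -> loop forever instead
                pass
        else:
            return        # oracle says "loops" -> halt immediately instead

    # Running contrarian on its own source contradicts the oracle either way:
    # if the oracle predicts it halts, it loops; if it predicts it loops, it halts.
    # So perfect_halt_checker cannot exist, and by similar reductions a perfect
    # retrieval, intent-classification, or fact-checking step cannot exist either.
    ```

    The paper's contribution is to carry this style of argument through each stage of the LLM pipeline; the sketch only shows the underlying mechanism.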

  • United We Care

    Our key research paper has gone live: "LLMs Will Always Hallucinate, and We Need to Live With This" has been published.

    This work introduces the concept of "Structural Hallucinations" as an intrinsic property of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.

    A big hug to the team, and especially Sourav Banerjee Ayushi Agarwal & Saloni Singla, for building this up and sharing it.

    Ritu Mehrotra (She/Her) Ravi Kikan Arti Khanijo Sonakshi D. Deepanshu Nasa Bhavya Vats

    #MentalHealth #Deeptech #LLMs #Research

    View profile for Ravi Kikan

    Loves Transforming & Scaling Startups into Successes | Board Advisor - Nasscom Community | Driving Growth & Innovation as a Proven CXO in AI, SaaS, FinTech, HRTech, HealthTech, Healthcare, Mental Health, DeepTech & More

    So proud to announce that our key research paper, "LLMs Will Always Hallucinate, and We Need to Live With This", has been published and is now live.

    As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically. This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems. We demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs. It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms. Our analysis draws on computational theory and Gödel's First Incompleteness Theorem, invoking the undecidability of problems like the Halting, Emptiness, and Acceptance Problems. We demonstrate that every stage of the LLM process, from training data compilation to fact retrieval, intent classification, and text generation, will have a non-zero probability of producing hallucinations. This work introduces the concept of "Structural Hallucinations" as an intrinsic property of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.

    This was led by our fantastic leadership team and cofounder Sourav Banerjee, along with Ayushi Agarwal and Saloni Singla. You can read more about it here: https://lnkd.in/gsybNN4W

    Ritu Mehrotra (She/Her) United We Care Deepanshu Nasa Sonakshi D. Torin Nicholas

    #ArtificialIntelligence #generativeAI #LLM #LargeLanguageModels #DeepTech #AI #startups #hallucination #Mentalhealth
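    As a rough way to read the "non-zero probability at every stage" claim, here is a back-of-envelope Python sketch. The per-stage error rates are invented for illustration and are not taken from the paper, whose argument is structural rather than a simple probability calculation; the point is only that small, non-zero error rates at each stage compound across a pipeline and across many generations.

    ```python
    # Back-of-envelope illustration with made-up numbers (not from the paper):
    # if every pipeline stage carries even a small, non-zero error probability,
    # a single generation is never guaranteed clean, and the chance of at least
    # one hallucination across many generations approaches 1.

    stage_error_rates = {                 # hypothetical per-stage error probabilities
        "training_data_gaps": 0.01,
        "information_retrieval": 0.02,
        "intent_classification": 0.01,
        "text_generation": 0.03,
        "fact_checking": 0.02,
    }

    p_clean_single = 1.0
    for p_err in stage_error_rates.values():
        p_clean_single *= (1.0 - p_err)            # every stage must succeed

    p_hallucinate_single = 1.0 - p_clean_single    # roughly 0.09 with these numbers

    n_generations = 1_000
    p_at_least_one = 1.0 - p_clean_single ** n_generations  # effectively 1.0

    print(f"P(hallucination in one generation) ~ {p_hallucinate_single:.3f}")
    print(f"P(>=1 hallucination in {n_generations} generations) ~ {p_at_least_one:.6f}")
    ```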

  • United We Care

    Thank you so much to Pascal Biese for sharing our latest research paper on hallucinations in LLMs! This allows our findings to reach a wider audience, helping us contribute to the ongoing conversation about AI safety.

    Everyone can access and read our research at https://lnkd.in/g7iKiRHk

    Sourav Banerjee Ayushi Agarwal Saloni Singla Ritu Mehrotra (She/Her) Ravi Kikan sheetal sharma Bhavya Vats Arti Khanijo DIPANJAN CHAKRABORTY Sumit Khanna Vinti Agarwal

    #LLMs #ArtificialIntelligence #AIHallucination #GenerativeAI

    View profile for Pascal Biese

    Daily AI highlights for 60k+ experts 📲🤗 AI/ML Engineer

    Hallucinations in LLMs: a feature, not a bug? 🤔

    As large language models (LLMs) become more powerful and pervasive, it's crucial that we understand their limitations. A new paper argues that hallucinations - where the model generates false or nonsensical information - are not just occasional mistakes, but an inherent property of these systems.

    While the idea of hallucinations as features isn't new, the researchers' explanation is. They draw on computational theory and Gödel's incompleteness theorems to show that hallucinations are baked into the very structure of LLMs. In essence, they argue that the process of training and using these models involves undecidable problems - meaning there will always be some inputs that cause the model to go off the rails.

    This would have big implications. It suggests that no amount of architectural tweaks, data cleaning, or fact-checking can fully eliminate hallucinations.

    So what does this mean in practice? For one, it highlights the importance of using LLMs carefully, with an understanding of their limitations. It also suggests that research into making models more robust and understanding their failure modes is crucial. No matter how impressive the results, LLMs are not oracles - they're tools with inherent flaws and biases.

    ↓ Liked this post? Join my newsletter with 45k+ readers that breaks down all you need to know about the latest LLM research: llmwatch.com 💡

  • United We Care

    Sometimes it is ok not to be ok.
    Don't choke from the inside.
    Don't let anyone else go through it either.
    Speak out. Share. Unburden yourself. Release your thoughts.
    Pass on the positive vibes.

    #mentalhealthmatters #mentalhealth

    View profile for Ravi Kikan

    Loves Transforming & Scaling Startups into Successes | Board Advisor - Nasscom Community | Driving Growth & Innovation as a Proven CXO in AI, SaaS, FinTech, HRTech, HealthTech, Healthcare, Mental Health, DeepTech & More

    This is why #mentalhealth is important and critical for everyone. Things, emotions, and experiences that never come out of your mind and body might just keep troubling you unless you lighten yourself in the ebb of life... you flow, but never flow away.

    Whether you are middle class, a celebrity, a startup founder, a hustler, a leader, an entrepreneur or an aspiring one, you need to address this issue as of yesterday. Mental health is not just a stigma; at times it is the call of your inner self to heal from anything that is disturbing and troubling you.

    If you are someone who has gone through it, is going through it, or is seeing someone suffer silently: seek help, talk to experts, decouple your negative energies and convert them into positive ones. Remember, a smiling face might be a facade hiding an emotional dam that is waiting to burst at any time; it is just that you never know when.

    Help yourself, help someone today. Godspeed success to you ❤️

    Pic credits: Pinterest

    #mentalhealth #WorldSuicidePreventionDay #MentalHealthMatters United We Care



Funding

United We Care: 1 total round

Last Round: Seed, US$ 1.5M

See more info on Crunchbase