Lyssn’s Post

With ChatGPT-4o incorporating new emotional intelligence capabilities, many are concerned that this may introduce algorithmic bias and other ethical issues. At Lyssn, we break these concerns into four critical areas that must be assessed in any AI that interacts with humans in a health or behavioral health setting: safety, bias, transparent validation, and robustness. Over the next few months, we will be sharing much more on these topics. Follow our page to stay up to date! #chatgpt #ethicalAi #mentalhealthtech #aidebate

Are you 80% angry and 2% sad? Why ‘emotional AI’ is fraught with problems

theguardian.com
