On this Ada Lovelace Day, I’m thrilled to be hosting a special event with Speechmatics tonight, where we’ll explore the crucial topic of gender bias in speech technology. Ada Lovelace's legacy as the world’s first computer programmer inspires us to continue breaking barriers and driving innovation, especially in spaces where diversity still has room to grow. Our panel of experts, including Benedetta Cevoli, Lauren King, Bella Taylor and Teri Drummond, will be diving into the impacts of bias and the importance of inclusivity in tech. I can't wait to moderate this discussion and share insights from the Speechmatics Hackathon. What better way to honor Ada’s pioneering spirit than by pushing for a more inclusive future in tech? How are you celebrating Ada Lovelace Day? #AdaLovelaceDay #WomenInTech #GenderBiasInTech #SpeechRecognition #Innovation #DiversityInSTEM
Really great event, Katherine Codlin. Lovely to see Catherine Breslin, reunited again after we did our Cambridge WIT talks back in... 2018 or something?! Really thought-provoking insights from the Speechmatics team members, and I would so love to see this kind of discussion at the big Cambridge tech events... not sidelined in a separate panel on the fringe, but with considerations around responsible tech woven into the conversation as a matter of course.
It was a most meaningful toast to Ada Lovelace by her successoresses. Is this a new word for the LM? You have done it again, Katherine Codlin, with Cambridge Women In Tech! And lovely to meet new friends.
Such a great event! Thanks for organising, Kat, and thanks to all the speakers!
Have a great evening everyone!! Sorry I couldn't be there!!
It was a great event!
Although this event focused on gender bias in AI, other kinds of bias also arose during the discussion, including age, religion, and accents in spoken English. I'm sure that research will uncover others as well. Here's one of the many things I took away from it: as long as LLMs produce content that can end up in the training data for future LLMs, the problem could spiral. (Or am I being alarmist?) Folks are working to reduce the bias in the content, with the aim of slowing (or ideally reversing) the trend. It's a sticky problem.