"Isn't this a cat and mouse game? Won't you always be behind bad actors?"

This is one of the most common questions we get at Loccus.ai about how effective our voice authenticity verification models are when faced with voices from new AI voice tools we have never seen before.

Every security solution is a cat and mouse game; that's a fact. The point is to be a "smart cat" so that this problem is minimised. Our models are built so that they do not rely on having seen specific data (AI voices) during training in order to detect them properly. We employ advanced feature extraction techniques aimed at generalising our detection, allowing us to detect voices from new AI tools that might come out in the future and that we have never seen before.

Obviously, every time a new AI voice tool is published, we test our models against it.

Today, OpenAI announced Voice Engine, a model that can clone anyone's voice from a 15-second sample of the original speaker. It is reportedly already in beta use by HeyGen and Age of Learning. https://lnkd.in/gckAN46u

Although they have not released the tool publicly (yet), they have published several voice samples in the article. We ran them through our models, and we detected 100% of them as AI-generated. Some of the results are in the first comment.

This time, the cat was still faster than the mouse :)
Promising, indeed. Smart Cat!
This is amazing and not an easy test. My human ear couldn't detect this one. Smart Cat 💪