AI is rapidly advancing, and one outcome of these advancements has been the emergence of deepfakes. Whilst deepfakes can have beneficial uses across marketing, entertainment, retail, education, healthcare, and cultural applications, they also pose severe risks, including identity fraud, non-consensual manipulation, privacy violations, the spread of disinformation, and threats to national security.
Last month, the Saudi Data & AI Authority (SDAIA) published its ‘Deepfakes Guidelines’, a comprehensive framework addressing the implications of deepfake technologies, with the aim of mitigating their associated risks. The Guidelines are divided into distinct sections, with guidance specific to developers, content creators, regulators and consumers.
Other regulatory frameworks governing deepfakes or AI have predominantly focused on high-risk situations and on obligations upon developers and creators. Interestingly, the SDAIA Guidelines not only highlight malicious uses but also discuss beneficial uses of deepfakes, and go one step further by providing recommendations for consumers.
One of the most interesting aspects of the Guidelines is the section dedicated to consumers and how people can potentially detect deepfakes. The Guidelines recommend that consumers assess the message, analyse audio-visual elements such as blinking patterns and lip-syncing, and, where possible, authenticate the content. It is also strongly recommended that consumers report a deepfake where it has been deployed maliciously.
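The "authenticate the content" step can take many practical forms. One simple, illustrative approach (not prescribed by the Guidelines themselves, and using hypothetical content and a publisher-provided digest as assumptions) is to compare a file's SHA-256 fingerprint against one released by the original source — a mismatch shows the content has been altered, though it cannot by itself prove a deepfake:

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw content bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_published_digest(data: bytes, published: str) -> bool:
    """Compare content against a digest the original publisher released.

    A match means the bytes are identical to what the publisher signed off;
    a mismatch means the content changed somewhere along the way.
    """
    return hmac.compare_digest(sha256_digest(data), published.lower())

# Hypothetical example: 'published' would normally come from the
# original publisher's website or a content-credential record.
published = sha256_digest(b"original clip bytes")
print(matches_published_digest(b"original clip bytes", published))  # True
print(matches_published_digest(b"tampered clip bytes", published))  # False
```

This is only a sketch of one verification technique; richer provenance schemes (such as embedded content credentials) serve the same goal of letting consumers authenticate what they are viewing.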
Some of our key observations:
1. The Guidelines establish ethical principles for deepfake technology developers and clear guidance for content creators. This approach closely aligns with the EU AI Act: developers are obliged to promote the responsible creation and implementation of deepfake technologies, with an emphasis on transparency, consent and respect for privacy, while content creators are instructed to adhere to ethical standards and legal requirements.
2. For deepfake technology developers and content creators, the Guidelines recommend implementing strong data protection measures, ensuring that consent is obtained before personal data is used, and maintaining transparency regarding how deepfakes are generated.
3. Notably, when reviewing the Guidelines, their alignment with international standards and regulatory frameworks is clear: many of the principles and provisions they contain have been mapped against, and shown to align with, the GDPR and national data protection laws.
White Label Consultancy has extensive experience supporting organisations with cyber security advisory and leadership. Reach out or schedule a call to learn more about our service offerings and how we can support your organisation.
#securityleadership #cybersecurity #cybersecuritymaturity #cyberleadership #cybersecurityframework