Foundational AI models developed in the EU and the US are predominantly trained on Western data. When these models are applied in a different geographic context, such as the MENA region, cultural biases become evident.
This is what I shared during my panel participation at the PRECRISIS Project international conference in Sofia this Tuesday, where I also presented our work at Imagga and on the CounteR Project.
The main topic of the panel was AI applications in the security sector. In such high-risk applications, where a single inaccurate prediction could compromise the safety of individuals, it is the responsibility of solution providers to actively mitigate the biases inherited from the foundational models they use.
Furthermore, as we do in the CounteR Project, real-world pilot testing of these applications is essential before they reach the market, in order to identify potential risks and gaps in the technology.
It was an honour to join industry experts like Alex White, Martina Bogdanova, Lorenzo Vaquero Otal, Maja Halilovic-Pastuovic, Alex Townsend-Drake, and many more.
A special thank you to Borislav Mavrov, PhD, Apostol Apostolov, and the entire team of The European Institute (EI) Foundation, not only for the invitation to participate but also for their flawless organization of the conference.
P.S. It is quite funny that the EU flag was projected onto my face during the panel, as if symbolizing our commitment (both at Imagga and Kelvin Health) to advancing deep tech innovation within the EU. :)