Bachelor's theses presented at a world conference: In July 2024, the second World Conference on eXplainable Artificial Intelligence (xAI) took place in Malta. Right in the middle of it were two students from the Lucerne University of Applied Sciences and Arts, Finn Schürmann and Antonio Mastroianni, who had the special honour of presenting their bachelor's theses from the Mobility, Data Science and Economics degree programme on the international stage in Malta.
What their bachelor's theses are about, why attending the xAI World Conference is an honour for bachelor's students, and what role their lecturer Dr. Sibylle Sager-Müller played, you can find out in this post: https://lnkd.in/euy7YRTV World Conference on eXplainable Artificial Intelligence #hslu #hsluwirtschaft #malta
Attending the world conference on #explainableAI is my yearly highlight 🎆
I truly enjoyed the mix of applied and methodological #xAI sessions and came back from Malta full of inspiration, with many notes, new papers to read, and ideas to test.
Some observations about the field from this year:
4 things that #stayed unchanged:
✅ Explainable AI is very interdisciplinary
✅ Explainable AI is applied in ML development at all stages, from early model building to productive monitoring
✅ Explainable AI methods depend on the target audience. There are methods better suited to data scientists and those designed for end users
✅ Finance, as the classical area for explainable AI, keeps testing new methods and their applications
4 topics that got #stronger:
🔝 Explainable AI is closely linked to responsible AI and is an integral part of the regulatory #AIAct discussions
🔝 With the number of xAI methods growing, #benchmarking of explainable AI methods is more prominent than ever
🔝 Explainable AI sees #LLMs both as models to be explained and tools that can help in the explanation process
🔝 Explainability of computer vision models is prominent in medical and manufacturing applications
4 new developments:
🆕 Explainability research in the process analytics community
🆕 Explainable AI for recommendation models
🆕 Explainability #engineering and #explainableAI as part of software engineering. Many data scientists and researchers are bringing explainability to production to improve existing decision systems and make these systems explainable by design. To me, this topic is first of all about the application of xAI in monitoring and MLOps solutions. But explainability engineering is more than that: it also includes aspects such as explainable documentation and the explainability of non-AI components.
🆕 Prototypes are big in explainable AI, both as a method to achieve explainability and as a way to communicate results to end users
A huge thanks to Luca Longo and the organizers for setting the World Conference on eXplainable Artificial Intelligence in motion and creating a unique community. It's exciting to see the explainable AI field grow.
Alexandre Goossens, Kseniya Sahatova, Björn Milcke, Pascal Dinglinger, Marharyta Domnich, Claudia Sessa and many more incredible people I met - thank you for the technical, ethical, and philosophical xAI discussions and for the wonderful time at the conference.
Had the pleasure of attending World Conference on eXplainable Artificial Intelligence in beautiful Malta last week🇲🇹
It was a lovely conference, with so many interesting people attending. Amazing to see a community come together like this. 🤩
I myself presented our work on evaluating the benefits of model inspection for end users, to verify that the XAI methods we develop have a measurable impact in helping humans!🧪🧑🔬
If you find this interesting please have a look:
https://lnkd.in/d9gEWPgV
🎉 We are proud to take part in the World Conference on eXplainable Artificial Intelligence 2024 in Malta to support the exchange of knowledge and the #research and #innovation activities that benefit our region.
The World Conference on eXplainable Artificial Intelligence is an annual event that brings together researchers, academics and practitioners to discuss and share knowledge, perspectives and experiences in the field of eXplainable Artificial Intelligence (XAI). This multidisciplinary gathering addresses the practical, social and ethical aspects of explaining #AI models, involving experts in computer science, psychology, philosophy, the social sciences and other fields.
The growing attention to #XAI is essential for tackling the complexity of data-driven models and for meeting the legal obligations imposed by various national and international jurisdictions.
Thanks to Luca Longo of Technological University Dublin.
#XAIWorldConference #AI #Innovation
Today I had the pleasure of attending the World Conference on eXplainable Artificial Intelligence in Valletta and of taking part in a panel discussion on "The role of eXplainable AI (XAI) for regulations & its alignment with the EU AI Act" together with Dr. Ian Gauci, Mr. Trevor Sammut, Dr. Paul Micallef Grimaud, and the panel chair Dr. Annalise Vassallo Seguna.
We had a very interesting discussion on a number of topics about the recently approved EU AI Act and its link to XAI. In particular, I discussed:
1) How the industry is preparing for the EU AI Act;
2) KPMG's approach in helping our clients prepare for the EU AI Act with respect to AI readiness, Trusted AI Risk and Control, and AI Governance;
3) How the AI community at large, including AI specialists, lawyers, executive directors, academics, and regulators, can better prepare to achieve Europe's Digital Decade target for 2030 of having 75% of EU companies using AI; and
4) How XAI methods can help us achieve this ambitious digital target for 2030.
Big well done to the organisers and helpers of the conference, especially the Local Chair Dr. Charlie Abela, for the great organisation of the event and for the invitation to participate in such an interesting and insightful panel. #AI #xai_world #XAI #explainableai #EUAIAct #KPMG
Today, I was honoured to participate in the World Conference on eXplainable Artificial Intelligence, an international event held in #Malta. The event is exceptionally timely given the recent legislative developments, including the AI Act in the EU, Council of Europe Convention, and emerging regulations like those in California and Colorado, all emphasising the importance of explainable AI.
The people and students I met who are active in this industry are the real experts, and I was truly humbled to be part of their marvellous world. As it happened, my intervention was on the AI Act and other international laws touching on the explainability of AI. Our panel, composed of myself along with Dr Paul Micallef Grimaud, Dr Annalise Vassallo Seguna, Trevor Sammut and Dr Keith Cortis, discussed different angles of explainability.
During my session, I had the opportunity to:
(1) elucidate the intricate differences between explainability, explicability, and interpretability in AI and under the AI Act, including certain inherent limitations and risks around these principles.
(2) express concern that we risk facing significant challenges and subjective interpretations without clear industry standards for implementing these concepts.
(3) stress that we must establish robust, objective parameters to ensure transparency and accountability in AI systems, which must also be harmonised and ascertained.
I am looking forward to engaging with thought leaders and contributing to this critical discourse on AI.
Massive thanks to the University of Malta & Dr Charlie Abela for spearheading this event and inviting me to attend.
#ExplainableAI #AIAct #AIRegulation #Transparency #Accountability #WorldConferenceOnXAI #Malta2024 #AIIndustry #LegalTech #Innovation GTG Malta Digital Innovation Authority Tech.mt
I had a great time during the World Conference on eXplainable Artificial Intelligence in Malta.
I discovered that people working in explainability are much more eager to explain everything to you, even from scratch. Is it because of the domain?
I returned home with plenty of new ideas. It was a pleasure to meet so many representatives of the XAI community.
I am happy to share the paper "CNN-based explanation ensembling for dataset, representation and explanations evaluation", which I co-authored with Luca Longo and Przemyslaw Biecek and which received excellent reviews. (The link is in the comment.)