Responsible Innovation in Action: Building Trustworthy & Ethical AI systems
Photo by Johannes Plenio via Pexels


Technology shapes our world, often in surprising ways. AI, for example, optimizes hospital supplies, personalizes fan experiences, and even guides government decisions. This power demands responsible innovation. At SAS, we believe responsible innovation means asking "should we?" alongside "could we?". Our core values – human-centricity, inclusivity, accountability, transparency, robustness, and privacy & security – guide every step, from concept to creation.


Human-Centricity: Everybody Is Human

AI systems are powerful, but the people they affect can be forgotten. Human-centric design ensures these models prioritize people: data systems should empower individuals, not replace them. We design with dignity, fairness, and well-being in mind. By maintaining human oversight, we can ensure analytics serve us, not the other way around.
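One practical way to maintain human oversight is to let a model act only on high-confidence cases and route everything else to a person. The sketch below is a minimal illustration under assumed names and thresholds, not a description of any SAS product.

```python
# Minimal human-in-the-loop gate (illustrative; function name and threshold are assumptions).

def route_prediction(prediction: str, confidence: float, threshold: float = 0.85) -> dict:
    """Act automatically only when the model is confident; otherwise defer to a person."""
    if confidence >= threshold:
        return {"action": "auto_approve", "decision": prediction}
    # Low-confidence cases are queued for human review rather than decided by the model.
    return {"action": "human_review", "decision": None, "model_suggestion": prediction}

print(route_prediction("approve_claim", 0.92))  # handled automatically
print(route_prediction("approve_claim", 0.61))  # escalated to a human reviewer
```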


Inclusivity: Omit No One 

Data-driven systems are only as good as the data they're built on. Inclusivity is key to avoiding bias and ensuring our analytics tools work for everyone. Early on, incomplete data can skew results. Just like checking the ingredients on your food, we need to assess the data's quality – is it complete, unique, and does it represent a variety of perspectives? As we build the models, mismatched values and outliers can also lead to unintended consequences. By being inclusive in our data collection and analysis, we can ensure our systems are fair and work for everyone. 
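To make the "check the ingredients" idea concrete, here is a minimal data-quality report in pandas that summarizes completeness, duplicates, and group representation. The column names and sample values are hypothetical.

```python
import pandas as pd

# Hypothetical sample data; in practice this would be the real training dataset.
df = pd.DataFrame({
    "age": [34, 51, None, 29, 29],
    "region": ["north", "south", "south", None, "north"],
    "outcome": [1, 0, 0, 1, 1],
})

def data_quality_report(frame: pd.DataFrame, group_col: str) -> dict:
    """Summarize completeness, uniqueness, and representation of a grouping column."""
    return {
        "completeness": (1 - frame.isna().mean()).round(2).to_dict(),  # share of non-missing values per column
        "duplicate_rows": int(frame.duplicated().sum()),               # exact duplicate records
        "representation": frame[group_col].value_counts(normalize=True, dropna=False).round(2).to_dict(),
    }

print(data_quality_report(df, group_col="region"))
```

A report like this surfaces gaps before modeling starts: low completeness or a skewed representation split is a prompt to collect better data, not to train anyway.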


Transparency: Nothing to Hide 

Responsible innovation thrives on openness. Transparency in data-driven systems builds trust by letting users understand how decisions are made. We can achieve this by being clear about the model's goals, what it can (and can't) do, and any limitations it may have. Think of it like showing your work – users deserve to see the data's journey, from its source to its impact. Tools like data lineage help us achieve this by giving users a clear view of how data flows through the system, building trust and ensuring everyone's on the same page. 
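One lightweight way to picture data lineage is an append-only log that records what went into each step and what came out. This is a hypothetical sketch, not SAS's lineage tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageLog:
    """Append-only record of each step a dataset passes through."""
    steps: list = field(default_factory=list)

    def record(self, step: str, inputs: list, outputs: list) -> None:
        self.steps.append({
            "step": step,
            "inputs": inputs,
            "outputs": outputs,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

lineage = LineageLog()
lineage.record("ingest", inputs=["claims_2023.csv"], outputs=["raw_claims"])
lineage.record("clean", inputs=["raw_claims"], outputs=["claims_deduped"])
lineage.record("train", inputs=["claims_deduped"], outputs=["risk_model_v1"])

for step in lineage.steps:  # the data's journey, from source to model
    print(step["step"], "->", step["outputs"])
```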


Accountability: Own the Outcome 

Data-driven systems are a team effort. Everyone involved, from developers to users, plays a part in the model's outcome. Accountability means taking ownership of those results and working together to minimize any potential harm. Building a clear decision workflow is key here. This workflow allows users to track the model's decisions, from creation to implementation. By having clear oversight at each step, we can all be accountable for responsible use of this powerful technology. 
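A decision workflow becomes accountable when every outcome can be traced to a model version and a named owner. The sketch below shows one hypothetical way to keep such an audit trail; the field names are assumptions.

```python
import json
from datetime import datetime, timezone

def log_decision(audit_file: str, model_version: str, inputs: dict, decision: str, owner: str) -> None:
    """Append one decision record so each outcome can be traced to a model version and an owner."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "accountable_owner": owner,  # the person or team who owns this outcome
    }
    with open(audit_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "risk_model_v1", {"credit_score": 712}, "approve", owner="lending-team")
```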


Robustness: Reliably Resilient & Safe 

Imagine a system that gives different answers depending on the day of the week. Not exactly reliable, right? That's where robustness comes in. Responsible data systems need to be dependable and deliver consistent results. Think of it as building a bridge – you want it strong and reliable, not swayed by the weather. 

One key to achieving robustness is model monitoring. It's like having a built-in quality check, constantly evaluating the model's accuracy, fairness, and relevance. Plus, we need to rigorously test our systems against a variety of situations. Just like stress-testing a building, this helps us identify and address any potential issues before they impact users. 
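In practice, model monitoring often reduces to recomputing a few agreed metrics on fresh data and raising an alert when they drift past a threshold. The check below is a minimal sketch; the metrics and thresholds are assumptions, not a SAS API.

```python
# Illustrative monitoring check (assumed metrics and thresholds).

def check_model_health(accuracy: float, positive_rate_a: float, positive_rate_b: float,
                       min_accuracy: float = 0.80, max_rate_gap: float = 0.10) -> list:
    """Flag accuracy degradation and large outcome gaps between two groups."""
    alerts = []
    if accuracy < min_accuracy:
        alerts.append(f"accuracy {accuracy:.2f} fell below the minimum {min_accuracy:.2f}")
    if abs(positive_rate_a - positive_rate_b) > max_rate_gap:
        alerts.append("positive-rate gap between groups exceeds threshold (possible fairness drift)")
    return alerts

# Hypothetical numbers from this week's production data.
print(check_model_health(accuracy=0.76, positive_rate_a=0.42, positive_rate_b=0.27))
```

Running the same check against deliberately stressful scenarios (shifted inputs, rare subgroups) is the software analogue of stress-testing a building.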


Privacy & Security: Control Your Exposure

Responsible innovation goes hand-in-hand with data privacy and security. We need to respect individual privacy and meet all regulatory requirements when handling data. This principle is all about respecting your right to control your information. 

Organizations that prioritize privacy can draw on information privacy capabilities. These tools identify potentially sensitive data points, allowing users to handle that information with extra care. Data masking techniques can also be applied when the best course of action is to hide specific data values. By prioritizing privacy and security, we empower users and ensure data is treated responsibly.
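As a simple illustration of data masking, sensitive fields can be replaced with irreversible tokens before data leaves a controlled environment. The field list and hashing rule below are assumptions made for the sketch.

```python
import hashlib

SENSITIVE_FIELDS = {"email", "ssn"}  # assumed set of fields that should never appear in full

def mask_record(record: dict) -> dict:
    """Hash sensitive values and pass everything else through unchanged."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            masked[key] = hashlib.sha256(str(value).encode("utf-8")).hexdigest()[:12]  # irreversible token
        else:
            masked[key] = value
    return masked

print(mask_record({"email": "ada@example.com", "ssn": "123-45-6789", "age": 37}))
```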


Vrushali Sawant is a data scientist with SAS' Data Ethics Practice (DEP), steering the practical implementation of fairness and trustworthiness principles into the SAS platform.

Renato Bozzaotra

Architect-designer, self-employed

3mo

The image you published today shows an Edison light bulb, so called because it was Edison himself who set us on the path to the light of that era and of the future. The same lamp also appears in a conceptual work of mine, a series of 24 images presented at the Quadriennale di Roma in 1970 as part of the artistic review "The New Generation", and preserved at the Quadriennale's editorial headquarters. That aside, I want to speak about AI, a much-discussed topic, especially these days. There are certainly questions of ethics, individual freedom, and the replacement of people for economic gain, beyond the convenient and now indispensable simplifications it brings. Many more points could be added, but consider the twenty-four hours of daily life: how would we divide them if so much free time became available? By what means would we improve a life lightened by new socio-economic conditions? Here is the real problem: either limit the use of AI, or devise new democratic regulations to be shared across the business world and civil society. New technologies have already been adopted by large industries in every sector, from medicine to research to surgery.

Oliviero Casale

Innovation Manager - Innovation Manager Certified UNI 11814 - Committee Member ISO TC 279/WG3 - UNI/CT 016/GL 89 Gestione dell'innovazione

3mo

I read with interest the SAS article about responsible innovation in AI, which emphasizes the importance of developing ethical and transparent AI systems. I was struck by the focus on fairness, transparency, robustness, and privacy protection, all fundamental themes for building trust in artificial intelligence systems. Many of the aspects discussed in the article can be effectively addressed by implementing the ISO/IEC 42001:2023 standard. This international standard provides a comprehensive framework for establishing, implementing, maintaining, and continually improving an AI management system within organizations. #ResponsibleInnovation #AI #ISO42001 #AIManagement #Transparency #Fairness #Privacy #Security #AISystems
