How prepared is the engineering profession for AI ethical issues?
Artificial intelligence is really exciting. It opens up so many possibilities to make our lives better, but alongside that untapped potential sits the question of ethics. Where there is huge potential for AI to do good, there is also the potential for it to cause harm, whether intentionally or otherwise. AI ethics involves building in concepts of right and wrong to guide those developing and using AI.
There are plenty of examples where the unintended consequences of AI have created issues such as bias. With digital amplification, any intrinsic bias becomes a significant problem. When Amazon attempted to use AI to improve its recruitment success rate, for example, it ran into exactly this issue. The system taught itself that male candidates were preferable, because the company had historically employed more men, and so it developed a bias against selecting women. Amazon eventually ditched the project because it was unable to remove all bias from the system.
Privacy is right up there too. The ability to collect and connect people’s information from a variety of sources opens up huge ethical questions. In 2018 Mark Zuckerberg appeared before the US Congress to answer questions about data privacy and transparency around how that data is used. Specifically, he was questioned on political consulting firm Cambridge Analytica’s misuse of Facebook user data to influence the 2016 US presidential election.
Examples like these highlight the risks of using AI. AI doesn’t just scale solutions; it also scales problems. When you get it wrong, the impact is large. That’s quite a scary thought when you consider AI applied to engineering design: imagine an erroneous design algorithm operating at scale, producing tens of thousands of under-conservative buildings or bridges.
Professionals (doctors, lawyers, engineers) have the ability to cause harm at scale by the nature of the work they do. This is why they already work under codes of ethics. A professional code of ethics builds confidence in the profession’s trustworthiness and provides a common understanding of acceptable practice. But how well prepared are our professional codes of ethics for the unique ethical issues posed by AI technologies?
Quite well, as it turns out. At least a B+ pass.
I recently completed a course with Marco Iansiti and Karim Lakhani from Harvard Business School (this is their book). Using their framework, I compared the Engineering Code of Ethics with a framework developed specifically for AI. The Engineering Code of Ethics is based on professional competence, personal integrity and social responsibility, providing broad and holistic guidance for its members. Below is a comparison between the two frameworks, along with comments on any gaps.
The Engineering Code of Ethics is ahead on environmental issues and is strong on privacy and security. It needs more work to provide guidance in the other areas, and in particular there is a significant competence gap in engineers’ knowledge of AI. It is very difficult to judge our own competency, or that of our colleagues, when practical understanding of AI across the engineering profession is minimal.
Working in the new world of AI technology is exciting, and the possibilities for good are enormous. As an engineering firm we’ll be working to build out our competency framework, and we will continually assess the power of AI against an ethical framework. I’m keen to hear from others working in this space too.