“Another flaw in the human character is that everybody wants to build and nobody wants to maintain.” - Kurt Vonnegut.
I see AI ethics as a problem of maintenance.
The Hype Cycle around AI naturally privileges a surge in development. But ethical considerations — bias, security, privacy, misinformation, etc. — are what require vigilance and maintenance.
For the most part, I respect the efforts the companies behind the frontier large language models (ChatGPT, Claude, Gemini) are making to stay moral. Navigating this Byzantine haze warrants some degree of empathy.
Despite the many lawsuits levied at OpenAI; despite the Scarlett Johansson “Sky” voice debacle; despite Perplexity and Forbes’ latest dispute; despite the miasma of resentment between the artistic community and Stable Diffusion; despite piracy and illegal scraping and pernicious Instagram settings and James Blake and Suno and Udio and AI art competitions and anti-AI rallies and Air Canada refund policies and Musk v. Altman and Clearview v. Social Media… and all the rest of it. Despite that, I’ve been pleasantly surprised by the way the AI community have attempted to address ethical issues.
Anthropic’s principle-led “Constitutional AI” charter exemplifies a human-centric approach, acknowledging complexities and motives behind queries. Ask Claude if the Earth is flat and you’ll see what I mean.
Gemini and ChatGPT, while criticised for a lack of transparency in their training data, strive to mitigate misinformation through robust content filters, disclaimers, and context. What they lack in bias mitigation they make up for in practical security: encryption, access controls, regular audits, and compliance with GDPR and CCPA.
Governing this space is intrinsically challenging, with our regulatory desires clashing with the rapid pace of innovation. While no one wants to die at the hands of The Singularity, overly restrictive laws could stifle the exploration of the world at our fingertips. When the CEO of OpenAI himself says, “my worst fears are that we cause significant harm to the world,” yet continues pushing for GPT-5, it underscores the complexity of balancing innovation with ethical responsibility.
Yes, it’s somewhat alarming that the “Superalignment” team at OpenAI imploded, and that the company has since been haemorrhaging safety-conscious employees. But at the same time the company famously promises to defend its enterprise customers from all copyright battles, no matter the cost, akin to Airbnb insuring its hosts up to $1m.
It’s a confusing smoosh of priorities. But I enjoy these contradictions. They’re human. And as Ethan Mollick’s post shows, we should be looking at the entire AI endeavour through a social-scientific lens anyway — it’s not just the machines that are awkward, weirdly behaving black boxes, but the humans behind them too.
“Computer scientists working on LLMs should look to social science methodology. We work with stochastic, black-box systems (yes, humans), too, & smart folks have developed a ton of methods for analyzing this sort of data in rigorous ways. AI is not people, but the methods will help.
Also social scientists working on LLMs could borrow a few ideas from computer scientists as well - ablation tests, for example, are a clever way of examining mechanisms.”
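To make the borrowed idea concrete, here is a minimal sketch of what an ablation test looks like in practice — remove one component at a time and measure how much performance drops. The dataset and model here (scikit-learn’s breast-cancer data, a logistic regression) are my own illustrative choices, not anything from Mollick’s post; the same logic applies whether you’re ablating a feature, a module, or an attention head.

```python
# Minimal feature-ablation sketch: delete one input column at a time,
# retrain, and measure the change in held-out accuracy. A large drop
# suggests the ablated feature matters to the mechanism; a small one
# suggests it doesn't.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy(X_tr, X_te):
    model = LogisticRegression(max_iter=5000).fit(X_tr, y_train)
    return model.score(X_te, y_test)

baseline = accuracy(X_train, X_test)
print(f"baseline accuracy: {baseline:.3f}")

for i in range(X.shape[1]):
    # Ablate feature i by removing its column from train and test sets.
    drop = baseline - accuracy(
        np.delete(X_train, i, axis=1),
        np.delete(X_test, i, axis=1),
    )
    print(f"feature {i:2d}: accuracy drop {drop:+.3f}")
```

The clever part, as the quote suggests, is that this gives you causal-ish evidence about mechanism from a black-box system — exactly the kind of rigour social scientists have spent decades building for humans.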