DeepSeek's emergence as a frontier AI player has put China's AI ecosystem on the map around the world – just ask the U.S. stock market. But amidst growing concern about the risks increasingly powerful Chinese models pose, many have overlooked a fascinating development unfolding since Christmas: DeepSeek and a host of other Chinese companies have rolled out promises to safeguard their AI. And their commitments look remarkably similar to the ones mostly Western companies signed onto at the AI Seoul Summit in May 2024.

The similarities in wording and focus are striking. Both sets of commitments emphasize:
- Risk assessment across the AI lifecycle
- Safety-focused organizational structures
- Clear risk mitigation processes
- Transparency about model capabilities and limitations

Of course, differences remain. The Chinese commitments include explicit provisions for open source models, for example, while the Seoul commitments emphasize specific risk thresholds. And Chinese industry has more explicit government backing through affiliations with the Ministry of Industry and Information Technology.

But this convergence could be significant. The release of these AI Safety Commitments follows a familiar pattern in China's AI governance: launching domestic initiatives that foreshadow international consensus. Similar dynamics played out when China released its Global AI Governance Initiative domestically before signing the Bletchley Declaration as part of a global effort.

As frontier models become more powerful - and potentially riskier - finding space for coordination on safety becomes increasingly critical. The Paris AI Action Summit in February will be a key test: will more Chinese companies sign onto the Seoul Commitments?

My latest analysis for Carnegie explores what these commitments are, how they came about, and what they mean for international AI governance: https://lnkd.in/esgg_qyt