Elon Musk and Other Doomers Gave the AI World an Aneurysm This Week

There's a growing dispute in the tech world about the direction AI regulations should take. And Elon claims he's ready to roll out his AI product.

Welcome to AI This Week, Gizmodo's weekly roundup where we do a deep dive on what's been happening in artificial intelligence.

As governments fumble for a regulatory approach to AI, everybody in the tech world seems to have an opinion about what that approach should be, and most of those opinions do not resemble one another. Suffice it to say, this week presented plenty of opportunities for tech nerds to yell at each other online, as two major developments in AI regulation took place and immediately spurred debate.

The first of those big developments was the United Kingdom's much-hyped artificial intelligence summit, which saw the UK's prime minister, Rishi Sunak, invite some of the world's top tech CEOs and leaders to Bletchley Park, home of the UK's WWII codebreakers, in an effort to suss out the promise and peril of the new technology. The event was marked by a lot of big claims about the dangers of the emergent technology and ended with an agreement on security testing of new software models. The second (arguably bigger) event this week was the unveiling of the Biden administration's AI executive order, which laid out some modest regulatory initiatives surrounding the new technology in the U.S. Among many other things, the EO also included a corporate commitment to security testing of software models.

However, some prominent critics have argued that the US and UK's efforts to wrangle artificial intelligence have been too heavily influenced by a certain strain of corporate-backed doomerism, one they see as a calculated ploy on the part of the tech industry's most powerful companies. According to this theory, companies like Google, Microsoft, and OpenAI are using AI scaremongering in an effort to squelch open-source research into the tech and to make operating too onerous for smaller startups, all while keeping the technology's development firmly within the confines of their own corporate laboratories. The allegation that keeps coming up is "regulatory capture."

This conversation exploded out into the open on Monday with the publication of an interview with Andrew Ng, a professor at Stanford University and the founder of Google Brain. "There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they're creating fear of AI leading to human extinction," Ng said in the interview. Ng also argued that two equally bad ideas had been joined together via doomerist discourse: that "AI could make us go extinct" and that, consequently, "a good way to make AI safer is to impose burdensome licensing requirements" on AI producers.

More criticism swiftly came down the pike from Yann LeCun, Meta's top AI scientist and a big proponent of open-source AI research, who got into a fight with another techie on X about how Meta's competitors were attempting to commandeer the field for themselves. "Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment," LeCun said, in reference to the top AI executives at OpenAI, Google, and Anthropic. "They are the ones who are attempting to perform a regulatory capture of the AI industry. You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D," he said.

After Ng and LeCun's comments circulated, Google DeepMind's current CEO, Demis Hassabis, was forced to respond. In an interview with CNBC, he said that Google wasn't trying to achieve "regulatory capture" and added: "I pretty much disagree with most of those comments from Yann."

Predictably, Sam Altman eventually decided to jump into the fray to let everybody know that no, actually, he's a great guy and this whole scaring-people-into-submitting-to-his-business-interests thing is really not his style. On Thursday, the OpenAI CEO tweeted:

there are some great parts about the AI EO, but as the govt implements it, it will be important not to slow down innovation by smaller companies/research teams. i am pro-regulation on frontier systems, which is what openai has been calling for, and against regulatory capture.

"So, capture it is then," one person commented beneath Altman's tweet.

Of course, no squabble about AI would be complete without a healthy mouthful from the world's most opinion-filled internet troll and AI funder, Elon Musk. Musk gave himself the opportunity to provide that mouthful this week by somehow getting the UK's Sunak to conduct an interview with him, which was later streamed to Musk's own website, X. During the conversation, which amounted to Sunak looking like he wanted to take a nap while sleepily asking the billionaire a roster of questions, Musk managed to get in some classic Musk-isms. His comments weren't so much thought-provoking or rooted in any sort of serious policy discussion as they were dumb and entertaining, which is more the style of rhetoric he excels at.

Among Musk's comments was the claim that AI will eventually create what he called "a future of abundance where there is no scarcity of goods and services" and where the average job is basically redundant. However, the billionaire also warned that we should still be worried about some sort of rogue AI-driven "superintelligence," and that "humanoid robots" that can "chase you into a building or up a tree" were another potential threat.

When the conversation rolled around to regulations, Musk claimed that he "agreed with most" regulations but said, of AI: "I generally think it's good for government to play a role when public safety is at risk. Really, for the vast majority of software, public safety is not at risk. If an app crashes on your phone or laptop it's not a massive catastrophe. But when we talk about digital superintelligence, which does pose a risk to the public, then there is a role for government to play." In other words, whenever software starts resembling that thing from the most recent Mission: Impossible movie, then Musk will probably be comfortable with the government getting involved. Until then…ehhh.

Musk may want regulators to hold off on any sort of serious policies since his own AI company is apparently debuting its technology soon. In a tweet on X on Friday, Musk announced that his startup, xAI, planned to "release its first AI to a select group" on Saturday and that this tech was in some "important respects" the "best that currently exists." That's about as clear as mud, though it'd probably be safe to assume that Musk's promises are somewhere in the same neighborhood of hyperbole as his original comments about the Tesla bot.

The Interview: Samir Jain on the Biden Administration's first attempt to tackle AI

Photo: Center for Democracy and Technology

This week we spoke with Samir Jain, vice president of policy at the Center for Democracy and Technology, to get his thoughts on the much anticipated executive order from the White House on artificial intelligence. The Biden administration's EO is being looked at as the first step in a regulatory process that could take years to unfold. Some onlookers praised the administration's efforts; others weren't so thrilled. Jain spoke with us about his thoughts on the order as well as his hopes for future regulation. This interview has been edited for brevity and clarity.

I just wanted to get your initial response to Biden's executive order. Are you pleased with it? Hopeful? Or do you feel like it leaves some stuff out?

Overall we are pleased with the executive order. We think it identifies a lot of key issues, in particular current harms that are happening, and that it really tries to bring together different agencies across the government to address those issues. There's a lot of work to be done to implement the order and its directives. So, ultimately, I think the judgment as to whether it's an effective EO or not will turn to a significant degree on how that implementation goes. The question is whether those agencies and other parts of government will carry out those tasks effectively. In terms of setting a direction, in terms of identifying issues and recognizing that the administration can only act within the scope of the authority that it currently has…we were quite pleased with the comprehensive nature of the EO.

One of the things the EO seems to be tackling is the idea of long-term harms around AI and some of the more catastrophic potentialities of the way it could be wielded. It seems like the executive order focuses more on those long-term harms than on the short-term ones. Would you say that's true?

I'm not sure that's true. I think you're characterizing the discussion correctly, in that there's this idea out there that there's a dichotomy between "long-term" and "short-term" harms. But I actually think that, in many respects, that's a false dichotomy. It's a false dichotomy in the sense that we shouldn't have to choose one or the other; and, also, a lot of the infrastructure and steps that you would take to deal with current harms are also going to help in dealing with whatever long-term harms there may be. So if, for example, we do a good job with promoting and entrenching transparency, in terms of the use and capability of AI systems, that's going to also help us when we turn to addressing longer-term harms.

With respect to the EO, although there certainly are provisions that deal with long-term harms, there's actually a lot in the EO, I would go so far as to say the bulk of it, that deals with current and existing harms. It's directing the Secretary of Labor to mitigate potential harms from AI-based tracking of workers; it's calling on the Department of Housing and Urban Development and the Consumer Financial Protection Bureau to develop guidance around algorithmic tenant screening; it's directing the Department of Education to figure out some resources and guidance about the safe and non-discriminatory use of AI in education; it's telling the Health and Human Services Department to look at benefits administration and to make sure that AI doesn't undermine equitable administration of benefits. I'll stop there, but that's all to say that I think it does a lot with respect to protecting against current harms.

More Headlines This Week

The race to replace your smartphone is being led by Humane's weird AI pin. Tech companies want to cash in on the AI gold rush, and a lot of them are busy trying to launch algorithm-fueled wearables that will make your smartphone obsolete. At the head of the pack is Humane, a startup founded by two former Apple employees that is scheduled to unveil its much anticipated AI pin next week. Humane's pin is actually a tiny projector that you attach to the front of your shirt; the device is equipped with a proprietary large language model powered by GPT-4 and can supposedly answer and make calls for you, read back your emails, and generally act as a communication device and virtual assistant.

News groups release research pointing to how much news content is used to train AI algorithms. The New York Times reports that the News Media Alliance, a trade group that represents numerous large media outlets (including the Times), has published new research alleging that many large language models are built using copyrighted material from news sites. This is potentially big news, as there's currently a fight brewing over whether AI companies infringed on the rights of news organizations when they built their algorithms.

AI-fueled facial recognition is now being used against geese, for some reason. In what feels like a weird harbinger of the end times, NPR reports that the surveillance state has come for the waterfowl of the world. That is to say, academics in Vienna recently admitted to writing an AI-fueled facial recognition program designed for geese; the program trawls through databases of known goose faces and seeks to identify individual birds by distinct beak characteristics. Why exactly this is necessary I'm not sure, but I can't stop laughing about it.
