What if artificial intelligence isn’t the apocalypse?
A mix of fear and trendiness has cast AI as the driver of a radical change in our society. But the technology may actually be overhyped
In just six months, Google searches for “artificial intelligence” have increased fivefold. ChatGPT — launched on November 30, 2022 — already has tens of millions of users. And Sam Altman, the CEO of OpenAI, the company that created ChatGPT, has already appeared before the United States Congress to answer questions about the impact of AI. By comparison, it took Mark Zuckerberg 14 years to go to Washington to talk about Facebook’s role in society.
Altman has been oddly blunt about the technology his firm produces. “My worst fears are that we can cause significant harm to the world… I think if this technology goes wrong, it can go quite wrong,” he said while testifying. However, some analysts have noted that his talk of “fears” may be carefully calculated: stringent regulation would raise barriers for competitors to OpenAI, which already dominates the sector.
Heavy, bombastic phrases about the explosion of AI have already spawned their own memes. The term “criti-hype” — coined in 2021 to describe criticism that takes a technology’s hype at face value and amplifies it — has been popularized thanks to ChatGPT. A pioneering example of criti-hype was the Cambridge Analytica scandal, in which the company was accused of harvesting Facebook data to understand and influence the electorate during the 2016 presidential election: accusations that took for granted that its tools were as powerful as advertised.
The high point of these warnings came when Geoffrey Hinton — known as the “godfather of AI” — left Google so that he could speak freely about the dangers of AI: “From what we know so far about the functioning of the human brain, our learning process is probably less efficient than that of computers,” he told EL PAÍS in an interview after his departure.
Meanwhile, the U.K. government’s outgoing chief scientific adviser has just said that AI could “be as big as the Industrial Revolution was.” Groups of workers are already trying to organize so that their trades are not swept away by this technology.
There are too many prophecies and fears about AI to list. But there’s also the possibility that the impact of this technology will turn out to be bearable. What if everything ends up going more slowly than predicted, with fewer shake-ups in society and the economy? This view is plausible, but it hasn’t been deeply explored amid all the hype. While it’s hard to deny the impact of AI in many areas, changing the world isn’t so simple. Previous revolutions have profoundly changed our way of life, yet humans have managed to adapt without much turbulence. Could AI also end up being a subtle revolution?
"I think if this technology goes wrong, it can go quite wrong."
— The Associated Press (@AP) May 16, 2023
Sam Altman, CEO of ChatGPT parent company OpenAI, shared his biggest fears about artificial intelligence before Congress Tuesday.https://t.co/ao01hIx3DS pic.twitter.com/L1ZOk3Y6op
“At the very least, [AI has caused] a big structural change in what software can do,” says Benedict Evans, an independent analyst and former partner at Andreessen Horowitz, one of Silicon Valley’s leading venture capital firms. “It will probably allow a lot of new things to be possible. This makes people compare it to the iPhone. It could also be more than that: it could be more comparable to the personal computer, or to the ‘graphical user interface,’” which allows interaction with the computer through the graphical elements on the screen.
These new AI and machine-learning (ML) technologies already carry a lot of weight in the tech world. “My concern is not that AI will replace humans,” says Meredith Whittaker, president of Signal, a popular messaging app, “but I’m deeply concerned that companies will use it to demean and diminish the position of their workers today. The danger is not that AI will do the job of workers: it’s that the introduction of AI by employers will be used to make these jobs worse, further exacerbating inequality.”
It must be noted that the new forms of AI still make a lot of mistakes. José Hernández-Orallo — a researcher at the Leverhulme Centre for the Future of Intelligence at Cambridge University — has spent years studying these so-called “hallucinations”: answers that sound plausible but are simply false. “At the moment, [AI is] at the level of a know-it-all brother-in-law. But in the future, [it may be] an expert, perhaps knowing more about some subjects than others. This is what causes us anxiety, because we don’t yet know in which subjects [the AI] is most reliable,” he explains.
“It’s impossible to build a system that never fails, because we’ll always be asking questions that are more and more complex. At the moment, the systems are capable of the best and the worst… They’re very unpredictable,” he adds.
But if this technology isn’t so mature, why has it had such a sudden and broad impact in the past few months? There are at least two reasons, says Hernández-Orallo: first, commercial pressure. “The biggest problem comes because there is commercial, media and social pressure for these systems to always respond to something, even when they don’t know how. If higher thresholds were set, these systems would fail less, but they would almost always answer ‘I don’t know,’ because there are thousands of ways to summarize a text.”
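To make the trade-off Hernández-Orallo describes concrete, here is a minimal toy sketch in Python (not how ChatGPT or any real system actually works; the confidence distributions and the 70% accuracy figure are invented for illustration) of how raising an abstention threshold trades wrong answers for “I don’t know” responses:

```python
import random

# Toy illustration of an abstention threshold (hypothetical numbers, not any
# real system's mechanism): each simulated answer carries a confidence score,
# and correct answers tend to score higher than wrong ones.
def simulate_answers(n=10_000, seed=0):
    rng = random.Random(seed)
    answers = []
    for _ in range(n):
        correct = rng.random() < 0.7  # assume 70% of raw answers are right
        # Assumed confidence distributions: correct answers cluster high,
        # wrong answers cluster around the middle.
        confidence = rng.betavariate(8, 2) if correct else rng.betavariate(4, 4)
        answers.append((correct, confidence))
    return answers

def evaluate(answers, threshold):
    """Below the threshold, the system abstains and says 'I don't know'."""
    answered = [(c, conf) for c, conf in answers if conf >= threshold]
    abstain_rate = 1 - len(answered) / len(answers)
    errors = sum(1 for c, _ in answered if not c)
    error_rate = errors / len(answered) if answered else 0.0
    return abstain_rate, error_rate

answers = simulate_answers()
for threshold in (0.0, 0.5, 0.8, 0.95):
    abstain_rate, error_rate = evaluate(answers, threshold)
    print(f"threshold={threshold:.2f}  abstains {abstain_rate:6.1%}  "
          f"errors among answers given {error_rate:6.1%}")
```

Raising the threshold makes the visible error rate fall, but at the cost of the system refusing to answer much of the time: exactly the outcome that the commercial pressure he describes pushes against.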
The second reason, he notes, is human perception: “We have the impression that an AI system must be 100% correct, like a mixture of a calculator and an encyclopedia.” But this isn’t the case. “For language models, generating a plausible but false text is easy. The same happens with audio, video, code. Humans do it all the time, too. It’s especially evident in children, who respond with phrases that sound good, but may not make sense. With kids, we just tell them ‘that’s funny,’ but we don’t go to the pediatrician and say that ‘my son hallucinates a lot.’ In the case of both children and certain types of AI, [there is an ability] to imitate things as best as possible,” he explains.
The outsized impact on the labor market will fade once it becomes clear that there are tasks AI cannot complete properly. Similarly, disillusionment will set in whenever the AI is questioned and we are left unsure whether its answer can be trusted. For instance, if a student asks a chatbot about a specific book that they haven’t read, it may be difficult for them to determine whether the synopsis is completely reliable. In some cases, even a margin of doubt will be unacceptable. It’s likely that, in the future, humans using AI will come to assume (and accept) that the technology will make certain errors. But with all the hype, we haven’t reached that stage yet.
Even if AI’s impact proves limited in the long run, the main fear — that AI will become more advanced than human intelligence — won’t simply go away. In the collective imagination, this fear takes the shape of a machine seizing control of the world’s software and destroying humanity.
“People use this concept for everything,” Hernández-Orallo shrugs. “The questions [that really need to be asked when thinking about] a general-purpose system like GPT-4 are: how much capacity does it have? Does it need to be more powerful than a human being? And what kind of human being — an average one, the smartest one? What tasks is it specifically going to be used for? All of [the answers to these questions] are very poorly defined at this point.”
Matt Beane, a professor at UC Santa Barbara, opines that “since we’ve imagined machines that can replace us, we now fear them. We have strong evidence that shows how we rely on criticism and fear — as well as imagination and assertiveness — when it comes to thinking about new technologies.”
Fear has been the most recurrent emotion surrounding this issue. “We seem to fall into a kind of trance around these [AI] systems, telling these machines about our experiences,” says Whittaker. “Reflexively, we think that they’re human… We begin to assume that they’re listening to us. And if we look at the history of the systems that preceded ChatGPT, it’s notable that, while those systems were much less sophisticated, the reaction was often the same. People locked themselves into surrogate intimate relationships with these systems when they used them. And back then — just like today — the ‘experts’ were predicting that these systems would soon (always ‘soon,’ never ‘now’) be able to replace humans entirely.”