Lies, BS and ChatGPT
Martin Reeves and Abhishek Gupta
For decades, the elusive metric of effectiveness for artificial intelligence (AI) was the Turing Test, which measured whether a human could judge if their counterpart in a dialogue was a human or a machine. ChatGPT and other large language models (LLMs) have arguably met or exceeded this standard, generating plausibly human text, a remarkable achievement with far-reaching implications. But technological progress often brings new risks alongside its benefits.
This is certainly all true of LLMs like ChatGPT. The ability to generate persuasive text at scale can inform, but it can just as easily mislead.
Models like ChatGPT use massive training datasets (such as The Pile) to build prediction models that attempt to create plausibly human answers to prompts. They are not systematically untruthful, but rather are built without regard to the truth. Analytic philosopher Harry Frankfurt explored this idea in his book On Bullshit. He explained that both the truth teller and the liar are concerned with the truth, one to reveal it and the other to conceal it. But a bullshitter aims only to persuade, without regard to the truth. He saw bullshit (BS) as potentially more damaging than lies.
Of course, sometimes the objective is transparently only to persuade, as in the case of advertising. Few would regard the claims made in advertisements as complete and provably true. But in other contexts, we might be misled into thinking that a statement is made with an intent of accuracy, balance, and truthfulness, when it has merely been optimized for plausibility and persuasiveness.
One might object that even the scientist never knows for sure whether a theory is true. Karl Popper proposed that the best science could do was to propose falsifiable statements and then apply the scientific method to attempt to disprove them. But in the application of LLMs today, there is nothing akin to a criterion of falsifiability or a scientific process of challenge and falsification. This could have real human consequences. Imagine that a mother were to rely on a very persuasive text claiming that toxic household chemicals were the cure for an infant’s sickness.
A related challenge is that the persuasiveness of AI is a property of the model, not of the person who deployed it. In normal social relations we judge the character, knowledge, and credibility of a person in part by observing what they say. And we intuitively apply the same heuristics to what we read online. Even if we surmount the challenge of identifying the person who makes the post, we can’t easily know whether the content and argumentation were created by them or by a machine, undermining our ability to contextualize and qualify what is being said.
The historian of technology Carlota Perez has noted that the full impact of a technology is rarely obtained until there is an accompanying social innovation to unlock its value. The electric motor did not transform factory productivity until we reorganized factories and workflows to unlock its potential. Brian Arthur, in his book The Nature of Technology, explains that we almost never have perfect foresight into new problems and solutions, and that technology evolves according to a cumulative, serendipitous process in which parts of existing solutions are assembled in new combinations, only some of which turn out to be highly useful. Nevertheless, we can anticipate this process and at least front-load dealing with the challenges we already know about, such as the ones outlined here.
Without feigning perfect foresight, it’s reasonable to suggest that we will almost certainly need secondary innovations to unlock the value of new language technologies, and that these will likely entail education, technology, and regulation. Many schools are already teaching children that they can’t trust everything they read online and how to qualify and triangulate sources. We will all need to learn new diligence measures for qualifying what we read.
However, when the models themselves have been trained to be persuasive to humans, this will be hard to carry out without new tools to assist with identity and process verification.