AI PR & comms technologist. Focus areas: AI, data, measurement, SEO, Analytics, Social Media. Consultant and trainer [3000+ organisations helped]
Generative AI presents a case study of how intellectual technologies interact with System 1 and System 2 thinking, posing both opportunities and challenges for human cognition. Here's how it applies:

Generative AI's Appeal to System 1:

➡️ Ease and Fluency: Generative AI produces text, images, and other content with remarkable fluency and speed, creating a highly engaging and satisfying user experience. This appeals to System 1's preference for cognitive ease and can lead to uncritical acceptance of the information presented.

➡️ Confirmation Bias: Generative AI can easily be used to reinforce existing biases. Users can craft prompts that elicit responses aligning with their preconceptions, creating echo chambers and reinforcing pre-existing beliefs.

➡️ Truthiness and Misinformation: Generative AI, due to its probabilistic nature, can produce plausible-sounding but factually incorrect or misleading information. This "hallucination" problem can fuel the spread of misinformation, particularly when users rely on AI as a source of truth without verifying its outputs.

Generative AI's Impact on System 2:

➡️ Over-Reliance and Deskilling: The ease of using generative AI for tasks like writing, summarising, and brainstorming can lead to an over-reliance on the tool, causing a decline in human skills and critical thinking abilities. Just as calculators can reduce our mental arithmetic skills, relying on AI for routine cognitive tasks might diminish our capacity for independent thought and creative problem-solving.

➡️ Reduced Effort and Engagement: Generative AI can automate the effortful work of information gathering and content creation, reducing the need for System 2 engagement. This can lead to a more passive mode of learning and a shallower understanding of the information being processed.
➡️ Weakening of Critical Evaluation: The authoritative tone and persuasive presentation of AI-generated content can make users less likely to critically evaluate the information, leading to a passive acceptance of the AI's outputs as truth, even when they contain errors or biases.

What do you think? Let me know in the comments below.

🔔 Visit my profile and "ring" the bell icon for notifications on my latest posts (assuming you find them useful or interesting)

#publicrelations #pr #ai #aiinpr

*AI transparency declaration: I used Google Gemini to help summarise and synthesise quotes I had manually identified while reading various books on my Kindle to help write the final post. The image has been generated via MidJourney.
Andrew Bruce Smith - your list of appeals and impacts is on point, but it is largely focused on the negative. There is certainly a risk that an over-reliance on GenAI can lead to an overuse of System 1 thinking (fast, reactionary, uncritical, taking the easy route, etc.). However, the potential benefits of GenAI for System 2 thinking are immense. It can be used as a thought partner, a critic, an opponent, or whatever POV you want to leverage to achieve your goals. Even the simple ability to share a draft with ChatGPT and ask "am I missing anything here?" or "is there anything here that stakeholder X would be concerned with?" gives you an immediate way to see things from another perspective, which in turn gets your System 2 thinking firing on all cylinders. Always love a good application of the late, great Daniel Kahneman's thinking! #ThinkingFastAndSlow
I don't think my critical analysis is being deskilled when using AI tools, as I'm constantly questioning and checking the output. AI has helped speed up a lot of tasks, and with good training and years of experience in applying critical thinking to comms outputs, I apply the same scrutiny to draft AI output. Far from deskilling me, AI has increased my creativity as I explore its applications: how it can help, where it hinders, where it's flawed. I've forgotten all the maths I learned at school because day-to-day I don't need those skills; a calculator does the job perfectly well. I think the same about AI: as it develops further, some things I've had to do myself I can delegate to it, freeing me up to do more interesting work and use my brain even more.
The fundamental dilemma of using AI as a problem-solving tool is that humans need to already know the answer to be alert to AI hallucinations. I told my class of 21-year-old undergraduates that they will be a remarkable generation: the last adults to have a mature appreciation of their world before AI. I'm fearful that humans, being cognitive misers (aka we're lazy), will magnify their incompetencies by adopting even lazier heuristics.
That's really interesting - thanks for sharing. What are your experiences of tools that can help tackle some of these problems, especially around confirmation bias and misinformation?
Professional advisor and researcher supporting agencies and in-house teams across a range of management, corporate communications and public relations issues
The main benefit I've discovered is increased cognitive function in research. I have a literature management system (Paperpile) containing almost 1k records, 300 of which I regularly use as part of my studies. There's no way I'd be able to keep abreast of so much information without good data management (Paperpile + Notion and military self-discipline), AI summarisation (Notion and Claude), and pattern identification. I wholly agree with the need for critical thinking. This needs to be trained and is more important than ever. Thanks for the post Andrew Bruce Smith!