Daniel Kahneman's Systems 1 & 2 are useful models for the future of Artificial Intelligence

In his seminal book Thinking, Fast and Slow, psychologist Daniel Kahneman popularised the concepts of System 1 and System 2 thinking to describe two modes of human cognition. System 1 is fast, intuitive, and automatic, while System 2 is slow, deliberate, and analytical. These concepts provide a useful framework for comparing different types of artificial intelligence, specifically large language models (LLMs) and knowledge graph-based AI.

The conclusion? We need both! Just as humans cannot function without both modes of thinking, AI development should not focus solely on LLMs.

 

Large Language Models and System 1 Thinking

Large language models, such as OpenAI's GPT-4, excel at tasks that call for rapid, intuitive responses: the goal is to react quickly with a fairly high likelihood of "being right", while accepting, therefore, some chance of "being wrong" - i.e. when the model hallucinates. These models are trained on vast amounts of data, enabling them to generate seemingly coherent and contextually relevant responses in a matter of seconds.

This process appears remarkably similar to System 1 thinking in humans, and even Kahneman himself used the analogy in his conversation on the Lex Fridman podcast.

Our System 1 thinking drives our immediate reactions and instinctive decision-making, based on previously experienced patterns stored in our brains as neural pathways. System 1 is remarkably powerful, helping us avoid immediate danger, carry out much of our communication, and recall facts we have learned (including basic maths like 2x2=4) - all to a high degree of accuracy, and always accompanied by a warm, positive "feeling" that our instinctive reaction is the right one.

But it can also let us down. Often, our brains know when they need to switch from instinctive System 1 thinking to considered System 2, but sometimes they do not. Allegiance to political parties is a good example. Many voters stay loyal to a party because that loyalty has been programmed into their neural pathways over many years. When deciding how to vote, this System 1 response often overrides any deployment of System 2 because the feeling is so strong - even if a careful System 2 consideration of how each party's policies might affect them, and society at large, would lead them to vote for another party.

The clash between Systems 1 & 2 in humans is called "cognitive dissonance". This is the deeply uncomfortable feeling we experience when our two modes of cognition disagree with each other, perhaps when we have a long-held belief which gets blown apart by new evidence. Experiencing cognitive dissonance is unpleasant but important. Without it, we would not be able to retrain our brains - we would never really learn!

Meanwhile, when an LLM is asked a question or given a prompt, it quickly parses the input, draws on the patterns learned from its vast training data, and produces an immediate response - the response that "feels" the most right in those few seconds. This speed and "feeling" are akin to how humans recognize a familiar face or verbally complete a common phrase without conscious effort. And, like System 1 thinking in humans, it is sometimes wrong. I'm sure everyone who has used ChatGPT or similar models will have experienced a mix of perfect answers along with some strange hallucinations. However, unlike System 1 thinking in humans, the LLM does not currently have a cognitive dissonance mechanism with which it can challenge itself when producing the most likely answer from its training data.
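To make that single-pass, System 1 character concrete, here is a minimal sketch of asking an LLM a question - assuming the OpenAI Python client, with the model name and prompt chosen purely for illustration. One call produces one immediate answer; nothing in this loop asks the model to stop and check itself.

```python
# A minimal sketch, assuming the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Finish the phrase: a bird in the hand..."}],
)

# One pass, one immediate answer - the "most likely" completion, with no
# built-in mechanism for the model to challenge its own output.
print(response.choices[0].message.content)
```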


Knowledge Graph-Based AI and System 2 Thinking

While LLMs are eerily similar to System 1, knowledge graph-based AI operates more like System 2. Knowledge graphs are structured databases that store information as a network of interconnected nodes representing concepts, joined by edges representing the relationships between those concepts. This form of AI excels at tasks that require deep understanding, logical reasoning, deterministically defined relationships, and the integration of diverse pieces of information.
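As a rough illustration of that structure, here is a small, hypothetical knowledge graph sketched in Python with the networkx library; the concepts and relationship names are invented for the example. Every fact is an explicit, typed edge between two nodes, so a lookup is deterministic rather than probabilistic.

```python
# A toy knowledge graph: nodes are concepts, edges are explicitly typed
# relationships between them (all names here are illustrative only).
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("GPT-4", "large language model", relation="is_a")
kg.add_edge("GPT-4", "OpenAI", relation="developed_by")
kg.add_edge("large language model", "System 1 thinking", relation="analogous_to")
kg.add_edge("knowledge graph", "System 2 thinking", relation="analogous_to")

# A deterministic lookup: list every relationship "GPT-4" participates in.
for _, target, data in kg.out_edges("GPT-4", data=True):
    print(f'GPT-4 --{data["relation"]}--> {target}')
```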

When tackling a complex query, knowledge graph-based AI methodically searches through its interconnected data, analysing relationships and drawing on multiple sources of information to generate a well-considered response. This is similar to how humans engage in System 2 thinking when, for example, tackling a complex maths problem. While you may have stored a simple mathematical fact like 2x2 in your System 1, you have probably not stored 27x14. You have to engage your brain in some sort of deliberate process to work out the answer - perhaps 27x10 + 27x4 = 270 + 108 = 378. That is a fairly speedy form of System 2, but longer versions might include planning a detailed project, or making a significant life decision.

Interestingly, the deliberate and structured approach taken to any of those problems produces a well-considered answer or plan, the reasoning for which the human would be able to explain afterwards. Similarly, most knowledge graph-based systems would also be auditable and explainable after the event in a way that LLMs and System 1 thinking are not.
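To show what "auditable and explainable after the event" might look like in practice, here is a hedged sketch of a multi-hop query that records every edge it follows, so the chain of reasoning can be replayed afterwards. The graph, the relation names, and the traversal function are all invented for illustration.

```python
# A small sketch of explainable multi-hop reasoning over a toy graph;
# every hop taken is recorded so the answer can be audited afterwards.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("GPT-4", "large language model", relation="is_a")
kg.add_edge("large language model", "neural network", relation="is_a")
kg.add_edge("neural network", "machine learning model", relation="is_a")

def explain_is_a(graph, start, goal):
    """Follow 'is_a' edges from start towards goal, recording every hop."""
    steps, current, visited = [], start, {start}
    while current != goal:
        hops = [v for _, v, d in graph.out_edges(current, data=True)
                if d["relation"] == "is_a" and v not in visited]
        if not hops:
            return None  # no deterministic chain could be found
        steps.append(f"{current} --is_a--> {hops[0]}")
        current = hops[0]
        visited.add(current)
    return steps

chain = explain_is_a(kg, "GPT-4", "machine learning model")
if chain:
    print("\n".join(chain))  # the full, replayable reasoning path
```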

System 2 thinking is also "effortful". It costs you something. Daniel Kahneman probably never used the phrase "no pain, no gain" in this context, but I doubt he would disagree! System 2 thinking requires us to engage our brains actively (at the expense of other simultaneous thought) and to spend time working through logic and abstractions, allowing us the possibility of a higher-quality decision than if we had relied on our System 1 instincts alone.

It seems only fair, therefore, that the machine equivalents of System 2 thinking should be similarly effortful in order to achieve similarly deeply-considered results. The generation and maintenance of knowledge graphs can be significantly automated (indeed partly in collaboration with LLMs) but will always require direct, expert, human-in-the-loop supervision to determine certain structures and values. This considered human involvement can make things a bit slower but is undoubtedly a feature, not a bug, of knowledge graph AI.
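One possible shape of that human-in-the-loop collaboration, sketched under the assumption that an LLM proposes candidate facts and a human expert approves or rejects each one before it enters the graph (the extraction function below is a hypothetical stand-in for a real LLM call):

```python
# Hedged sketch: LLM-proposed triples are only added to the graph once a
# human reviewer approves them. propose_triples is a hypothetical stand-in
# for an LLM extraction step; here it returns hard-coded candidates so the
# sketch runs on its own.
import networkx as nx

def propose_triples(text: str) -> list[tuple[str, str, str]]:
    # In practice this would prompt an LLM to extract (subject, relation, object)
    # candidates from the text.
    return [("GPT-4", "developed_by", "OpenAI"),
            ("GPT-4", "is_a", "large language model")]

kg = nx.MultiDiGraph()
source_text = "OpenAI's GPT-4 is a large language model."

for subject, relation, obj in propose_triples(source_text):
    answer = input(f"Accept '{subject} --{relation}--> {obj}'? [y/n] ")
    if answer.strip().lower() == "y":
        kg.add_edge(subject, obj, relation=relation)  # curated fact enters the graph
    # rejected candidates are simply dropped, or sent back for re-extraction
```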


An important caveat

The point of this article is not to say that knowledge graphs can, by themselves, provide a robotic solution for System 2 thinking in humans. There are many reasons why that is not the case now, and may never be, but the most important is probably that System 2 allows human beings to think in layers of abstraction, including imagining counterfactual things that have not happened. This is one of the human brain's extraordinary superpowers. Knowledge graphs might give you codified human logic which could be deployed for some form of advanced machine reasoning, but they do not have the wondrous imagination of the human mind, and therefore cannot deal with much abstraction.

Bridging the two systems and leaving room for the human essence

Nevertheless, having established the difference between System 1 and System 2 thinking in humans, and the difference between AI that infers (using LLMs) and AI that determines (using knowledge graphs), it seems logical to bring them back together. Clearly, humans would be useless without both forms of thinking. System 2 would not save you from a speeding car at a pedestrian crossing. System 1 would not allow you to process the complex criteria and abstract scenario-planning required to choose a house! So let's start working on a future for AI that brings the two sides together as well.
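As one hedged sketch of what such a bridge could look like - not a description of any existing system - the LLM gives its fast, System 1 answer, the knowledge graph checks whatever claims it can, and any disagreement is surfaced rather than silently returned. The helper functions below are hypothetical placeholders for real LLM and claim-extraction calls.

```python
# A speculative sketch of a System 1 / System 2 hybrid: fast LLM draft,
# deterministic graph check, explicit flag when the two disagree.
# ask_llm and extract_claims are hypothetical placeholders.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("GPT-4", "OpenAI", relation="developed_by")  # curated, trusted fact

def ask_llm(question: str) -> str:
    return "GPT-4 was developed by OpenAI."  # placeholder for a real LLM call

def extract_claims(answer: str) -> list[tuple[str, str, str]]:
    return [("GPT-4", "developed_by", "OpenAI")]  # placeholder for claim extraction

def answer_with_checking(question: str) -> str:
    draft = ask_llm(question)  # fast, System-1-style draft
    for subject, relation, obj in extract_claims(draft):
        known = {(v, d["relation"]) for _, v, d in kg.out_edges(subject, data=True)}
        if (obj, relation) not in known:
            # a crude analogue of cognitive dissonance: flag the clash, don't hide it
            return f"UNVERIFIED (graph disagrees or is silent): {draft}"
    return draft  # every checkable claim held up against the graph

print(answer_with_checking("Who developed GPT-4?"))
```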

Meanwhile, I will continue to cherish the uncomfortable feeling of cognitive dissonance - which might just be the crucial thing that separates us from even the most advanced machines in the farthest corners of our System 2 imagination!

Akansha Goyal

Product Lead @ LSEG Workspace | Cambridge MBA

3mo

This is a great mental model Oliver Hunt! Can’t wait to read more!! 😁

Chris Tinker

Founding partner - Libra Investment Services

3mo

A good point Oliver Hunt, and as the Chicago Booth paper that Geoff Robinson MFM FMVA referenced notes, there is a clear pathway emerging to approach traditional income statement and balance sheet analysis with LLM (System 1) methods. Personally, I see this as potentially providing higher-quality, systematic input data sets into System 2 models, e.g. #KnowledgeGraph-type approaches to investment decision tools, such that the feedback loops one can create between System 1 and 2 thinking provide for further improvements in training and design.

Christophe Cop

Making your company Data Driven

3mo

Add Judea Pearl's methods of causal reasoning, and add probabilistic truth-values for quantified accuracy of statements. If you can do that at speed and scale, you'll be very close to AGI. Of course, you would probably want to add motivations and a multivariate moral landscape as well. (If you don't: avoid giving it agency and keep a human on/in the loop.)

Geoff Robinson MFM FMVA

Founder, TheInvestmentAnalyst.com | 10x #1 ranked analyst Institutional Investor Survey | UBS Managing Director | 25yr+ instructional experience | 20,000+ analysts taught | EdTech Platform Builder | PADI Dive Instructor

3mo

Nice article Oliver Hunt. I look at LLM models like having a super-keen intern or analyst on the team doing the research and getting into the weeds. Sometimes errors are made or an analytical angle is missed, and this is where education, knowledge and experience come in to review the content and make edits. This could be done manually or by re-engineering the command prompts to better direct the LLM to a conclusion. System 1 leading to the engagement of System 2. There was an interesting paper published last week by Chicago Booth brains that ran analysis through an LLM and found the analysis to be up there with the best analysts.
