Unbridled Scientific Curiosity
Image created by Bing Image Creator (Unbridled Scientific Curiosity)

I recently completed the Dead Space video game remake. At the same time, ChatGPT exploded onto the scene, creating competing points of view about what this technology means for the future. There were posts about how it would deliver new levels of efficiency and productivity, and posts about how it would destroy jobs or end up hurting people.

For those not familiar with the series, here is a very brief synopsis (thanks, ChatGPT 😊):

In Dead Space, the player assumes the role of Isaac Clarke, an engineer who is sent to investigate a mining spaceship, the USG Ishimura, after it loses contact with Earth. Upon arriving, Isaac discovers that the ship has been overrun by an alien race known as Necromorphs, reanimated human corpses that have been mutated by an extraterrestrial virus.

Isaac must fight his way through the ship, battling Necromorphs and solving puzzles to uncover the truth behind the virus and the fate of the Ishimura's crew. Throughout the game, Isaac encounters various characters who help him or hinder his progress, including his girlfriend Nicole, who was a member of the Ishimura's crew.

As Isaac progresses through the game, he discovers that the virus is actually a biological weapon created by a religious organization called Unitology, which seeks to use the Necromorphs as a means of achieving eternal life. Isaac ultimately confronts the leader of Unitology, who has been manipulating events on the Ishimura to further his agenda.

What I found interesting during the game was that most of the characters in Dead Space meant well; they never intended to create problems or hurt people (there were exceptions). Rather, the characters I encountered or read about were guided by unbridled scientific curiosity. The government found alien technology, reverse-engineered it without understanding the risks, realized it was dangerous after many people died, and tried to cover it up. Other people investigated the cover-up, either to reveal the truth or because they thought the technology would help them, and more people died.

For students of history, this plot may sound familiar:

· People who worked on past weapons projects may oppose their use once the project is “successful.” They claim there was no way for them to anticipate such a use, despite historical examples to the contrary (the Manhattan Project).

· People working on the latest technology get caught up in the excitement of discovery. It is only after the goal is achieved that they take the time to reflect on what the discovery will mean, at which point it is too late to put the genie back in the bottle. We have already seen this with Large Language Models, as pointed out here.

The recent departure of “The Godfather of AI” from Google has brought renewed attention to the question of what AI / ML means for the future. I believe we will see three approaches to AI / ML going forward:

The “Quick Buck” approach:

One thing we can count on with any new scientific breakthrough is that someone is going to sell it as the solution to all our problems. Since ChatGPT was launched, the number of “experts” advertising products and services that supposedly leverage LLMs, AI, and/or ML to save money or solve problems has multiplied. If you investigate these products and services, you may be hard pressed to find evidence that they can deliver what they claim.

What we should expect: For the near term, we should expect to see more companies posting about how their solution(s) leverage AI / ML to solve problems (typically saving money, making money, or both). We should expect new companies to emerge claiming that they have leveraged these technologies to “leap ahead” of the competition (seemingly overnight) and provide a far superior service to their competitors. Finally, we should see an uptick in AI / ML “experts” who can solve everyone’s problems. In most cases, these promises will not be backed by hard data or guarantees (and with good reason). Some people will purchase these tools hoping for a quick fix to their problems; others will see them for what they are: snake oil.

This should not be a surprise. Anytime a new technology becomes popular, there will be people who try to repackage and market it to make money quickly, hoping to get rich before the fad expires. Consider how many “crypto experts” were advertising their services when crypto first became popular. As unfortunate as this is, it is unlikely that these organizations will bring about the end of our species.

The “Unbridled Scientific Curiosity” approach:

While we may hope for a “Thoughtful Progress” approach, what we often end up with is “Unbridled Scientific Curiosity.” Organizations following this approach usually mean well, but they become so focused on the prize that they fail to account for the costs.

What we should expect: This is the approach we seem to be taking with AI / ML (I would love someone to prove me wrong). Companies, in a rush to get to market with a new AI / ML product or service lest their competitors beat them to the punch, push the product out the door without fully understanding the ramifications. Investors, who are more concerned with making money than with the long-term risks of unproven technology, push leaders to move quickly without understanding the consequences. Plans to address risks are either lacking or amount to “we will figure it out later.”

We should expect the AI / ML products that appear over the next few years to run the gamut from “way too soon” to “not bad, but needs improvement.” “Way too soon” could take the form of technology that is adopted, ends up hurting people, and, once it has hurt enough people, becomes visible enough for its creators to change or withdraw it.

This should not be a surprise, even if it is disappointing. The book Engineering a Safer World analyzes recent examples where the failure to stop and assess the risks of a system holistically ended up costing people their lives. The book Army of None presents examples where people trusted the technology they used to the point where they no longer questioned the machine. This has caused serious accidents and will continue to do so.

Why this is worrisome: AI / ML will follow its programming. If the programming says something can be done, it makes no difference to the machine whether it is “right” or “ethical”; those concepts are not relevant to AI / ML. A human might reconsider an instruction they were given if they felt it was wrong; an AI / ML will do no such thing. As AI / ML continues to embed itself into our lives, we will become more comfortable with the technology. This will lead to increased confidence that it is “right” without questioning the method the AI / ML used to reach its conclusion.

Imagine a conversation with an AI / ML system that is advising people on issues they face every day. How would a conversation between a doctor and an AI / ML system go regarding how best to treat a patient? What about a conversation between an executive and an AI / ML system about the possibility of layoffs? What happens when a product manager asks an AI / ML system about a new product or feature that might negatively impact customers but make the company money? What happens when a world leader consults an AI / ML system about the possibility of going to war, perhaps as a preemptive measure?

The scenarios above may result in significant “unanticipated collateral damage,” depending on how the human interprets the outputs of the AI / ML system. Many of us would not consider this a surprise, but we should acknowledge that we could, and should, do better.

We have created many technologies in the past that have had significant unintended consequences. The evidence is there; we should learn from the past and adjust our approach.

The “Thoughtful Progress” approach:

This is the approach we hope for. It is comforting to think that the greatest minds of humanity are carefully considering the risks and trade-offs of the work they do every day. It makes us feel good to believe that experts in their field are driven by the greater good, not by some selfish desire for money or power. We sleep better at night imagining that the people working on breakthrough technologies are taking careful, measured steps in their daily work, never cutting corners, never taking short cuts, always aware of the risks.

What we might expect: In this approach, organizations working on AI / ML take a measured, risk-based approach. They carefully consider the pros and cons of their technologies before releasing them. They put metrics in place to ensure that the technology works as intended, and if it does not, they immediately discontinue it to mitigate damage. They gather feedback from customers, governments, and others to encourage active debate on the pros and cons. They share data, including what they do and do not know about what their products are capable of. They create rollback plans, so that if the technology does not work as anticipated, the older technology can be reintroduced easily, allowing them to refine the new technology before trying again. Safety is considered holistically, with leaders of the organization driving accountability from the top rather than relying on individual (and often disconnected) teams to make the right decision with limited information.

In parallel, citizens inform themselves about AI / ML and hold elected officials accountable. They demand laws protecting them from the unanticipated consequences of AI / ML. Elected officials enact pro-active policies and plan for the worst; they do not wait for a new technology to cause catastrophic damage and scramble to respond.


Lest this post be dismissed as the ravings of a technophobe and Luddite, I want to be clear: I am not against AI, ML, LLMs, ChatGPT, or any such tools. I live a far more comfortable life than my ancestors did thanks to technological innovation. I have made my living in the IT field, so it would be hypocritical of me to condemn it. What I oppose is a lack of accountability for AI / ML.

If a human decides to do something foolish, our legal system is (somewhat) capable of addressing that. However, it is not clear who is accountable for what an AI / ML does, or whether anyone is accountable at all. When we put robots with weapons on the battlefield and they end up killing the wrong people, who is held accountable? The people who created the AI / ML algorithm, the maintenance technician who installed it, the person who deployed the robot onto the battlefield, the company that made the robot, or no one?

While the government might step in, it is run by humans and can make decisions that cause unintended consequences. The people who make such decisions are not always held accountable, and those hurt by those decisions do not always get justice (The Three Trillion Dollar War).

I am not saying we should stop work on AI / ML, but we should insist on accountability. We should ask organizations that profit from these technologies who stands to benefit from them, and who might be harmed. We should ask them, if people get harmed by their technologies, who will be held accountable? If the organizations cannot answer these questions, we should proceed with caution, lest unbridled scientific curiosity get the better of us.

Now if you will excuse me, I need to ask ChatGPT where I can purchase a plasma cutter…just in case…
