From Acceleration to p(Doom) and Back Again: How AI may help and harm a grid in transition
Context:
At a DOE-sponsored conference in early February 2024, we had a medium-sized room for a panel and workshop considering whether, and how much, various AI technologies might help us solve pressing problems facing the North American grid: the coming, much higher percentages of renewables, along with other present and looming operational and cybersecurity challenges.
My two highly credentialed panelists, Dr. Colin Ponce of Lawrence Livermore National Lab and Dr. Christopher Lamb of Sandia National Lab, were both brilliant and eloquent. Their bios revealed a mix of deep thought in computer science, mathematics, operational cybersecurity, and AI. Over the two hours we spent together with the 100+ folks who attended, in response to my questions as moderator as well as those lobbed at them by workshop participants, both revealed themselves to be more sanguine than I am about the prospects for integrating AI tech into grid systems (including, in some cases, control scenarios). And they seemed to move those in the room: in a show of hands at closing time, attendees revealed they were more positive about what AI will do for grid management, and/or less worried, than they had been at the beginning of the session.
For you, dear reader, who weren’t in the room, or for those who were and want to partially relive the experience, I bring you the opening monologue and a few of the questions we worked through. Perhaps you will find some of what follows not only interesting, but helpful ... that’s my hope, anyway.
Opening Monologue:
Imagine a boxing ring. In one corner is a school of thought that holds that AI research, development and deployment should proceed as fast as possible, unimpeded by excessive caution or regulation. Going by the name Accelerationism, and led by Guillaume Verdon, it:
Advocates for propelling rapid technological progress as the ethically optimal course of action for humanity. Its proponents believe that progress in AI is a great social equalizer. Followers see themselves as a counterweight to the cautious view that AI is highly unpredictable, potentially dangerous, and needs to be regulated.
In the other corner we have polymath Eliezer Yudkowsky of p(doom) fame. In a three-hour podcast with MIT's Lex Fridman shortly after the release of GPT-4, Yudkowsky makes a well-supported case that AI is an existential threat to humanity and that bad times are coming sooner rather than later. As they approach the end of the interview, Lex asks him, "What gives you hope?" To which Eliezer replies:
That I’m wrong.
How is this going to play out, do you think? On an aging grid with many more renewables, with transmission congestion and capacity shortfalls, and, according to the often smart if sometimes crazy Elon Musk, three times as much demand to serve by mid-century?
We’re building renewables (and maybe soon, SMRs, i.e., small modular reactors) to reduce emissions for climate reasons. We are talking about AI because of its apparent potential to help operate the grid in the presence of a much higher percentage of variable resources. We’re also talking about AI because of what it seems poised to do for offensive and defensive cyber teams, including those defending and targeting the grid.
Questions:
Here is a subset of the questions that engendered the most conversation ... and there was A LOT of conversation. We didn’t get to all of them, and attendees volleyed quite a few of their own at us as well. You might try using these to stretch or shape your own thinking and to socialize related concepts in the organizations you care about, including your own.
Comments:
Utility Professional:
Thank you for moderating the panel discussion; it was an exceptionally lively and informative session. Unexplainable, unpredictable and unexplainable.
It was an insightful session. Clearly there are many opinions about AI. This is my second run at the AI wave, so I'm taking a wait-and-see approach, especially because of the potential legal issues with copyright infringement. Great job moderating, Andy. You would have made a great play-by-play sports commentator. Now we need to take a picture at a Celtics game to complete the Boston connection. Cheers and safe travels home.
Andrew Bochman - this was an awesome session to attend; greatly appreciated the insights provided and the dialog, debate, and banter that ensued. Thank you for hosting, and for the short summary.
I protect against supply chain threats for critical infrastructure | Founder | Security Architect | Cyber Informed Engineering | Author | SANS SEC547 Instructor:
On point 7, I'd say yes, but with the caveat that we have a mature approach for doing so. I can't see the CIP regulators I know of doing this, but perhaps the labs could help. The problem here, though, is the massive backlog in getting products tested through current programs. This would require a huge investment, or perhaps it is time to (in a controlled fashion) commercialize and license CyTRICS to industry to handle the load?