Why developers should consider automated threat modeling

Traditional threat modeling is hard. Can automated threat modeling make development and security teams' lives easier?

Successful threat modeling identifies the security vulnerabilities of a system and creates the opportunity to correct them. When it comes to threat modeling applications and code, however, many developers come up short: they are unsure of security terms and vulnerabilities, as well as how exactly to conduct threat modeling. This creates a disconnect between development and security teams -- and leaves systems vulnerable to malicious actors.

To help break down silos between the teams, Izar Tarandach, principal security engineer at Squarespace, and Matthew J. Coles, senior principal product security engineer at Dell Technologies, wrote Threat Modeling: A Practical Guide for Development Teams, published by O'Reilly. In the book, the authors break threat modeling down into simple language anyone can understand. Armed with their insights, developers, testers and DevSecOps professionals involved in the development process will learn how to review system security and discover potential application and code issues before going to market.

Because not all companies have the budget for the specialized talent threat modeling demands, automation must come into play. In this excerpt from Chapter 4 of Threat Modeling, read up on the benefits of automated threat modeling, and get introduced to two automated threat modeling methodologies.

Threat modeling isn't a static operation -- systems grow in complexity over time. Learn how automated threat modeling can make keeping a system model up to date easier for all involved.

Check out the accompanying interview, in which Tarandach and Coles discuss why they wrote a book on threat modeling for developers after finding that most available books are aimed at security practitioners, who already understand the basic concepts that developers may not.

Why automate threat modeling?

Let's face it -- threat modeling the traditional way is hard, for many reasons:

  • It takes rare and highly specialized talent -- to do threat modeling well, you need to tease out the weaknesses in a system. This requires training (such as reading this or other primers on threat modeling) and a healthy dose of pessimism and critical thinking when it comes to what is and what could be (and how things could go wrong).
  • There is a lot to know, which requires breadth and depth of knowledge and experience. As your system grows in complexity, or changes are introduced (such as the digital transformation many companies are going through these days), changes in technology bring an accelerating number of weaknesses: new weaknesses and threats are identified and new attack vectors created, so security staff must be constantly learning.
  • There are myriad options to choose from, including the tools and methodologies for performing threat modeling and analysis, the model representations to use, and how to record, mitigate, or manage findings.
  • Convincing stakeholders that threat modeling is important can be difficult, in part because of the following:
    • Everyone is busy (as mentioned previously).
    • Not everyone in the development team understands the system as specified and/or as designed. What is designed is not necessarily what was in the specification, and what is implemented may not match either. Finding the right individuals who can correctly describe the current state of the system under analysis can be challenging.
    • Not all architects and coders have a complete understanding of what they are working on; except in small, highly functioning teams, not all team members will have cross-knowledge of one another's areas. We call this the Three Blind Men and the Elephant development methodology.
    • Some team members (hopefully, only a small number) have less-than-perfect intentions, meaning they may be defensive or provide intentionally misleading statements.
  • While you may be able to read the code, code alone does not show you the whole picture. If you already have code to read, you may have missed your chance to avoid potentially serious mistakes introduced by the design, mistakes that coding cannot mitigate. And it can be hard to derive the overall design from code alone.
  • Creating a system model requires time and effort. And since nothing is ever static, maintaining a system model takes time. A system's design will change as the system requirements are modified in response to implementation, and you need to keep the system model in sync with any changes.

These are some of the reasons that longtime members of the security community have expressed concerns about the practical use of threat modeling as a defensive activity during the development life cycle. And to be honest, these challenges are real.

But fear not! The security community is a hardy bunch, never shy about taking on a challenge to address a real-world problem, especially one that causes pain, anguish, and sleepless nights. And automation can help address these concerns (see Figure 4-1).

Figure 4-1. 'Very small shell script' (source: https://oreil.ly/W0Lqo)

The difficult part of using automation is the complexity of systems, coupled with the fact that the human brain still beats a program at one key task: pattern recognition. The challenge is to express the system in a way a computer can understand without actually building the system. As a result, two related approaches are available:

Threat modeling from code

Writing computer code, whether in a general-purpose programming language or in a purpose-defined domain-specific language (DSL), that, when executed, performs an analysis of threats on a model representing the input data provided.
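As a rough sketch of this first approach, consider a minimal, hypothetical embedded DSL in Python. All of the class names and the single threat rule below are invented for illustration; they are not taken from any specific tool:

from dataclasses import dataclass, field

@dataclass
class Element:
    name: str

@dataclass
class Dataflow:
    source: Element
    sink: Element
    label: str
    is_encrypted: bool = False
    crosses_trust_boundary: bool = False

@dataclass
class Model:
    name: str
    flows: list = field(default_factory=list)

    def analyze(self):
        # Executing the model applies its threat rules. Real tools ship
        # large rule sets; one rule is enough to show the shape.
        findings = []
        for flow in self.flows:
            if flow.crosses_trust_boundary and not flow.is_encrypted:
                findings.append(
                    f"{flow.label}: unencrypted dataflow crosses a trust boundary"
                )
        return findings

# Describe the system as code; running this script performs the analysis.
web = Element("web client")
api = Element("api server")
model = Model("sample system")
model.flows.append(
    Dataflow(web, api, "login request",
             is_encrypted=False, crosses_trust_boundary=True)
)

for finding in model.analyze():
    print(finding)

The defining trait here is that the model exists only as code: executing the code is what produces the findings.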

Threat modeling with code (aka threat modeling in code)

Using a computer program to interpret and process information provided to it in order to identify threats or vulnerabilities.
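By contrast, here is a minimal sketch of the second approach, again with invented field and rule names. The system description arrives as plain data (a dict standing in for a JSON or YAML document), and a separate program interprets it against a small rule set:

# The description is inert data; all analysis logic lives in the program.
system_description = {
    "elements": [
        {"name": "browser", "type": "external"},
        {"name": "web server", "type": "server", "authenticates_calls": False},
    ],
    "flows": [
        {"name": "login", "from": "browser", "to": "web server",
         "encrypted": False},
    ],
}

# Each rule pairs a section of the description with a predicate and a finding.
rules = [
    ("flows", lambda f: not f.get("encrypted", False),
     "dataflow carries data in cleartext"),
    ("elements",
     lambda e: e.get("type") == "server" and not e.get("authenticates_calls", True),
     "server accepts unauthenticated calls"),
]

for section, predicate, message in rules:
    for item in system_description[section]:
        if predicate(item):
            print(f"{item['name']}: {message}")

The distinction from the previous sketch is that the description here contains no logic of its own; the interpreting program supplies all of the analysis.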

Both approaches can be effective as long as you resolve the garbage in, garbage out (GIGO) problem: the results you get bear a direct relationship to the quality of the input (the description of the system and its attributes) you give the automation. Both methods also require the algorithms and rules used in the analysis to be "correct," such that a given set of inputs generates valid and justifiable outputs. Either implementation can eliminate the need for specialized talent to interpret a system model and understand information about elements, interconnections, and data in order to identify the indicators of a potential security concern. Of course, this requires that the framework or language supports this analysis and is programmed to do it correctly.

We will talk first about the construction of a system model in a machine-readable format, then present the theory behind each type of automated threat modeling and the commercial and open source projects that implement them. Later in the chapter (and in the next chapter), we leverage these concepts to deliver information on further evolutionary threat modeling techniques that strive to work within the rapidly accelerating world of DevOps and CI/CD.

Fundamentally, threat modeling relies on input: information that contains or encodes data sufficient for analysis, enabling you to identify threats. When using code rather than human intelligence to perform threat modeling, you describe the system to be evaluated (e.g., the entities, flows, or sequences of events that make up the system, along with the metadata necessary to support analysis and documentation of findings), and the application analyzes that system representation to produce results and, optionally, renders the representation as diagrams.
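To make the rendering half concrete, here is a small, hypothetical sketch: given a machine-readable description of entities and flows (a format invented here for illustration), it emits Graphviz DOT text that a tool such as dot can turn into a dataflow diagram:

# A machine-readable description of the system's entities and flows.
description = {
    "elements": ["browser", "web server", "database"],
    "flows": [
        ("browser", "web server", "HTTP request"),
        ("web server", "database", "SQL query"),
    ],
}

# Emit Graphviz DOT text; piping the output through `dot -Tpng`
# produces a diagram of the modeled system.
lines = ["digraph system {"]
for element in description["elements"]:
    lines.append(f'    "{element}";')
for source, sink, label in description["flows"]:
    lines.append(f'    "{source}" -> "{sink}" [label="{label}"];')
lines.append("}")

print("\n".join(lines))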

About the authors

Izar Tarandach is a principal security engineer at Squarespace. Previously, he was a senior security architect at Bridgewater Associates, lead product security architect at Autodesk and security architect for enterprise hybrid cloud at Dell EMC, following a long stint in the Dell EMC Product Security Office as a security advisor. He was a core contributor to SAFECode and a founding contributor to the IEEE Center for Secure Design. Tarandach was an instructor in digital forensics at Boston University and in secure development at the University of Oregon.

Matthew J. Coles is a product security program leader and architect, previously at Dell EMC, Analog Devices and Bose. He applies more than 15 years of experience to building security into products, connected ecosystems and the processes that enable and support them. Coles has contributed to community security initiatives, including Common Weakness Enumeration, Common Vulnerability Scoring System and SAFECode, and he was an instructor in software security at Northeastern University.
