How do we right the CMMC ship?
Previously I wrote the CMMC Trip to Tartarus story under the banner "CMMC is impossible, and here is why!" I did not receive many comments saying this assertion was incorrect, far fewer than I expected, in fact, and I received many more "thank goodness someone finally said that" comments than anticipated.
One comment that struck home, however, came from Brian Thompson, who urged offering constructive solutions rather than just problems. I agree completely with Brian that we need to offer solutions, but in CMMC Trip to Tartarus I was trying to raise awareness of how the reality of the assessment process has become disconnected from the original vision of a maturity model built mainly around basic and moderate goals.
Diagnose First
The first key to offering solutions should be diagnosing the problem. All too often we offer solutions before we have really figured out what the cause of the casualty actually is. This, of course, leads to a lot of energy spent doing things that don't actually fix the problem. It may seem too obvious to mention, but really understanding the problem is a great start, and as we debate and examine solutions, perhaps not enough time is being spent on diagnosis.
So what is the problem? Is there one at all? Please add your thoughts and comments below; I am interested in the outlook of the professional community. At the very least there are a number of challenges! The strategic problem is the low level of cybersecurity maturity across the Defense Industrial Base (DIB).
Ultimately, the entire purpose of CMMC is to raise the bar for the DIB in order to better protect America's precious information assets. To the extent that CMMC helps solve that particular problem, CMMC is good; to the extent that it hurts or lowers the security bar, it is bad. This is the definition of success, or should be.
I would argue that raising the bar does not mean perfect security. It does not mean security that covers every angle and fully documents the inheritance of every control. DOCUMENTATION IS NOT SECURITY! This is one of my biggest issues with the way the Federal government has established the governance of its own cybersecurity.
There seems to be a presumption, on the part of both the Government and many professionals with long experience inside government, that the current USG approach to cybersecurity, established in FedRAMP, NIST SP 800-53, FISMA, and OMB Circular A-130, is good and effective. I beg to differ. It has substituted bureaucracy for security. The more important something is, and cyber is certainly at or near the top of the US priority list, the more bureaucracy we throw at it. This is expensive and not at all effective. We cannot enter the debate on the best way ahead for CMMC with the presumption that the current USG model and controls are gold-plated good. They are not. Far from it.
This opinion crystallized for me several years ago when I was given the opportunity to review the last two years of "accomplishments" for the top cyber defense organization of a US Military Service. As I read the two documents I became more and more depressed. Every single bullet was bureaucratic, not substantive. We completed an ATO for this, an MOU for that, and an Instruction for this other thing. Bureaucracy, bureaucracy, bureaucracy. This is NOT cyber defense. We are hiding the fact that we are failing to defend our networks and our nation behind a sea of paper, and trying to convince ourselves this is worthwhile work.
We should not and cannot lose sight of raising the bar for real security in pursuit of unattainable zero-risk solutions with perfect bureaucratic coverage. Assessment is a means to drive maturity. Assessments and the work that goes along with them are tolerated overhead that should be minimized where possible. The goal is to do more real security, not more paperwork.
Two Problems with CMMC raising the bar
The two primary problems with CMMC assessments that I laid out in the CMMC Trip to Tartarus blog were the adoption of the 171A assessment objectives and the requirement to score 100% to pass. Do you see other specific "problems" with the assessment approach? Do you think my diagnosis of the two main problems is incorrect? I welcome your comments below.
Addressing 171A
Let's take the first of my identified problems: 171A. Briefly, for anyone relatively new to these conversations, 171A refers to NIST SP 800-171A, the assessment guide created by NIST for NIST SP 800-171, which is the baseline standard for "moderate" security under the federal regulations for protecting Controlled Unclassified Information (CUI). 171 has been incorporated verbatim into CMMC, and 171A forms the basis for the CMMC Assessment Guides, which incorporate its recommendations verbatim as requirements.
On the surface, this seems an obvious and direct choice. It links closely to the derivative requirements of existing regulations (FISMA and its progeny) that apply to federal networks. Many in the Federal Government, and many long-time professionals in the government cyber space, think this guidance for federal networks should be adopted for contractor networks. Some have even argued, incorrectly in my view, that these requirements already apply to contractor networks.
Trying to recreate, or apply, the Federal way of securing its own networks to contractors is a major mistake. The Federal mechanisms for cybersecurity compliance, as I rant above, are at once enormously expensive and often of dubious benefit to real security. In my personal view, the current Federal mechanisms are enormously expensive ways of papering over risk and hiding it, not mitigating it. They offer a way to expend the enormous amounts of Federal dollars appropriated for cyber defense on the things the Federal enterprise knows how to do, bureaucracy, paperwork, forms, assessments, and studies, while not really doing what needs to be done: real security operations. See my post here on cyber ops.

Bureaucracy is much easier to build a program around, much easier to measure, and much easier to write reports from to show we are doing something, but far less capable of doing the work that actually needs to be done. Some bureaucracy is needed. We have to have it, and CMMC needs assessment. But we must NOT let the bureaucratic requirements get out of control and suck up already limited DIB resources on far more paperwork than real security.

Bottom line: we are walking down the road of applying what we do in the Federal space to contractors, and we should step back and realize that many of the Federal methods are abysmal at providing real security and will not necessarily raise the cyber bar for the DIB.
171A not written as a 100% Audit standard
There are two interesting quotes in NIST SP 800-171A that seem to have been lost in the assessment approach DoD and DIBCAC are using to set the standard.
“The assessment procedures are flexible and can be customized to the needs of the organizations and the assessors conducting the assessments. Security assessments can be conducted as self-assessments; independent, third-party assessments; or government sponsored assessments and can be applied with various degrees of rigor, based on customer defined depth and coverage attributes. The findings and evidence produced during the security assessments can facilitate risk-based decisions by organizations related to the CUI requirements.” Page ii, under Abstract, 800-171A
And
“CAUTIONARY NOTE
The generalized assessment procedures described in this publication provide a framework and a starting point for developing specific procedures to assess the CUI security requirements in NIST Special Publication 800-171. The assessment procedures can be used to generate relevant evidence to determine if the security safeguards employed by organizations are implemented correctly, are operating as intended, and satisfy the CUI security requirements. Organizations have the flexibility to specialize the assessment procedures by selecting the specific assessment methods and the set of assessment objects to achieve the assessment objectives. There is no expectation that all assessment methods and all objects will be used for every assessment. There is also significant flexibility on the scope of the assessment and the degree of rigor applied during the assessment process. The assessment procedures and methods can be applied across a continuum of approaches—including self-assessments; independent, third-party assessments; and assessments conducted by sponsoring organizations (e.g., government agencies). Such approaches may be specified in contracts or in agreements by participating parties.” pg iv, NIST SP 800-171A
We have lost sight of this and set the standard at meeting every assessment objective with two forms of evidence. Reportedly, DIBCAC required three forms of evidence for each Assessment Objective (AO) listed in NIST SP 800-171A. This is well beyond the envisioned application of the guide as originally produced.
In a way, this is requiring Level 5 maturity against Level 1 controls. I was talking with a colleague this evening about this, and we shared impressions from conducting NIST Cybersecurity Framework (NIST CSF) assessments. In general, we apply the five levels of a maturity model to all of the controls. Organizations are then rated on their maturity against those controls from 1 to 5, or sometimes 0 to 5. With CMMC we have taken the approach of adding controls, not maturity, and are in essence requiring Level 5 maturity (perfectly documented, perfectly implemented, and perfectly operating) for every implemented control. Honestly, this does not work well. Companies at a basic or moderate level of maturity are in general not able to implement any of their controls at Level 5 maturity. They just are not there yet.
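To make the contrast concrete, here is a minimal sketch of the two scoring philosophies. The control IDs, ratings, and the idea of mapping pass/fail to "rating below 5" are all hypothetical illustrations, not anything prescribed by CMMC or the CSF.

```python
# Hypothetical sketch: contrast all-or-nothing pass/fail scoring with a
# CSF-style 0-5 maturity rating. Control IDs and ratings are invented.

# Each control maps to a 0-5 maturity rating an assessor might assign.
ratings = {
    "AC.1.001": 4,  # access control: solid, minor documentation gaps
    "IA.1.076": 5,  # identification & authentication: fully mature
    "SC.1.175": 3,  # comms protection: implemented, thin evidence trail
}

def pass_fail(ratings, required=5):
    """All-or-nothing: one control below the bar fails the whole level."""
    return all(r >= required for r in ratings.values())

def maturity_score(ratings):
    """Maturity-model view: average rating, reported on the 0-5 scale."""
    return sum(ratings.values()) / len(ratings)

print(pass_fail(ratings))       # False: one 3-rated control sinks the level
print(maturity_score(ratings))  # 4.0: a credible moderate maturity overall
```

The same organization fails outright under the binary model but shows a respectable moderate maturity under the graded one, which is the disconnect the paragraph above describes.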
Recommendation for 171A
For all of this, I think the solution is to drop the use of the 171A assessment objectives altogether in the CMMC Assessment Guides. Define basic and moderate maturity for each "security requirement" as defined in 800-171 (or control, as used in 800-53), and give the company and the assessor room for risk and judgment. This is not perfect and is not meant to be. Perfection and zero risk are not achievable goals.
No Risk Acceptance and Scores must be 100%
We need a mechanism that allows for scores of less than 100%. The current stated policy (the lack of written official references for these interpretations is another major problem, for another blog) is that failing a single assessment objective fails the entire control, and failing a single control fails the assessment for that level. This will not work in the real world. It certainly does NOT work in the world where the USG applies cyber rules to its own networks.
There are several potential ways to approach this in the Assessment Guides, and each likely has its own positives and negatives. We must adjust the approach, though. A scoring approach similar to the DCMA self-assessment methodology, with a minimum score per level, might be one option. Additional allowances for risk management and alternative controls might be another.
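As a sketch of what a minimum-score-per-level approach could look like, the following is loosely modeled on the DoD Assessment Methodology for 800-171 (start from a maximum score and subtract a weight for each unmet requirement). The pass threshold, the specific requirement IDs, and their weights here are invented for illustration, not actual policy.

```python
# Hypothetical minimum-score-per-level scheme, loosely modeled on the
# DoD Assessment Methodology (max score minus weighted deductions).
# The threshold and the requirement weights below are illustrative only.

MAX_SCORE = 110        # one point available per 800-171 requirement
PASS_THRESHOLD = 80    # hypothetical minimum score to certify the level

# Unmet requirements and the weight deducted for each; heavier weights
# for gaps with larger security impact.
unmet = {
    "3.1.12": 5,   # remote access sessions not controlled
    "3.3.1": 3,    # audit logging incomplete
    "3.12.4": 1,   # SSP incomplete, plan of action exists
}

score = MAX_SCORE - sum(unmet.values())
print(score)                    # 101
print(score >= PASS_THRESHOLD)  # True: certifies despite three open gaps
```

Under a scheme like this, an organization with a handful of residual gaps can still certify while the deductions make the remaining risk visible, rather than the current all-or-nothing fail.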
Recommendation for Risk Acceptance
We should utilize the working groups already available within the CMMC-AB and rework the assessment guides. This should incorporate the maturity-model concept, and the input from Carnegie Mellon, that underlies the CMMC premise but has been essentially abandoned. What we have now is no longer a scalable maturity approach; it is an audit against security controls that requires 100% conformance or failure. There are a number of available constructs from highly experienced professionals in this space. They will not all agree, of course, and we will have to pick options with flaws.
Please put your ideas on how we might best address this aspect in the comments below.
Conclusion: CMMC can work
This can be done. CMMC can be administered, inside the current regulations, as a mechanism for raising the cybersecurity bar across the DIB without dumping billions into paperwork drills that do not enhance security and that crush small business. The risk to small businesses must be mitigated, not just talked about. Those risks can be reduced, and real security and risk reduction can be achieved through this program. We need to be thoughtful and very deliberate in the approach, and realize that while we cannot drive instantaneous change, we can drive steady change.
CMMC Town Crier | Ask me about NIST security controls | Smashing compliance frameworks for fun and profit | Cyber policy wonk |
Your article conflates your opinions on the origin of CMMC assessment objectives and the need for adequate depth and coverage to satisfy assessment objectives. Assessment objectives are a function of control statements. The first step in our analysis should be determining whether or not a control should be present. Many people decry the aggregate burden of CMMC, 800-171, and other frameworks, but don't say which specific controls need to be eliminated. Assuming we make it through that process, how would those control statements be decomposed into a testable objective? Thus, the next step is determining which assessment objectives are unreasonable (a difficult task since we already have a set of reasonable control statements). Your issue doesn't seem to (and shouldn't) stem from the raw number of assessment objectives, but the level of depth and coverage required. You quote 171A and claim that current guidance requires too much, but current CMMC guidance (2 assessment objects from at least 2 assessment methods) and DIBCAC guidance (3 objects) are consistent with 171A because they don't require every method and every object.
Security Compliance Aficionado | Certified CMMC Professional | Governance, Risk & | CCA, CCP, CMMC-RP, CISSP, CCSP, CCSK | NIST, CMMC, FedRAMP, HITRUST | Patent Infringement Expert
One idea for fixing the cost problem: a tax credit for SMBs doing under $1M in federal business. Make it multi-year and begin with a CSF assessment in year 1.
Weaving security & compliance into business processes.
Vincent Scott, well-thought and well-written piece. The documentation requirements of CMMC ML3 are extremely burdensome for small biz. My number one wish for improving the CMMC would be the allowance for Compensating Controls! The PCI DSS has had this from the beginning, and it's a good way to manage risk without wasting resources.
Fooling with Words and Identities
At the same time, you can have all the technical controls and the systems, but the nuisance variables, silly humans, always mess up the best models. You need basic governance and policy, but you can never govern culture, so one must strike a balance. My thoughts (and I am no security expert; I am an actual imposter, not someone with imposter syndrome):

1. Data-driven decisions. We will never see it, but all those SPS and SRPS scores... I hope DoD is mining the crap out of them to look for patterns that inform cybersecurity frameworks as living documents. I have no idea the psychometric decisions behind the scoring of the 171A methodology, but the number of permutations and random cut scores (70 here or 75 there, when everyone prolly is negative) drives me batty.

2. Have assessors rate practices holistically and require 100% compliance. In the end you build stuff that blows up, or goes into stuff that delivers stuff that blows up. Our enemies want to steal it. They are stealing it from you. This has happened and is happening. You should not be surprised the data owner has a high baseline. Some of you will choose to walk away from the DoD supply chain rather than harden your networks. I am kinda okay with this (but your insurance company will demand the same level of cybersecurity soon, BTW). But assessors need flexibility. That is why we have laid out a rigorous three-class and three-test training program: so we can trust the results. 100% compliance hurts validity and invites fraud.

3. Have the assessor rate the objectives prescriptively and use this to inform the holistic practice rating. We need this data for threat analysis and vulnerability scanning (see point one; read it one more time, and then again).

4. The 99s make no sense. All of the 99, 98, 97 objectives are just copy/pasta. That raises such validity concerns. Why not, I dunno, write access control policy objectives that are about access control policy? You do the same for asset management policy... Oh wait, in CMMC Asset Management is just CUI, but all of 171 is protecting CUI, and CMMC is all of 171...

5. Stop creating a culture of pedantic arguing over controls and put the onus on the OSC to spew and organize observable evidence in situ as they try to do business better every day.

6. Use machine learning to assess policy. It will all just be SANS templates anyways. Use assessors to go interview and observe staff to see if policy matters.

7. Provide exemplar templates of how the 99s should be assessed.

8. Create a policy reciprocity guide if other ISO certs already assess the same policy and procedures.

9. Count all your crap. I really wish inventory (yes, I know it cuts across IDAM all over) was its own domain. You should be able to use objectives in a maturity model to help you grow. Very little scaffolding in levels 1-3.

10. Turn on MFA, please.
CEO at Sentinel Blue | Paramedic | Host of The Watchers Podcast
"This is no longer a scalable maturity approach. This is an audit against security controls that requires 100% conformance or fail." I'd argue, as I'm sure you and many others would, that CMMC was never its namesake of a "maturity model". I frankly don't see anything about CMMC that is a maturity model other than the fact that there are levels, which don't seem anything like a maturity progression. CMMC is a system security standard, and that's fine. We need that. But we need that to be a bit more relaxed on its prescriptions, like the FIPS validation requirement. But then we need an actual organizational maturity model, and I'd like to see that be NIST CSF.