I have been thinking about betaPERT (which I’ll refer to simply as PERT from here on) and its potential to meaningfully influence security within the organizations it is applied to. Unfortunately, I can’t say that its results seem to align with operational security, or ‘actuality’ in practice, as it pertains to aggregate information risk. Let’s start with a little background on who SIRA is and what they represent.
From the SIRA website: “The Society of Information Risk Analysts (SIRA) is dedicated to continually improving the practice of information risk analysis. We endeavor to do this by supporting the collaborative efforts of our members through research, knowledge sharing and member-driven education”. SIRA, in my opinion, has a valid mission objective, and the professionals who make up the group are keen thinkers. However, I’d like to challenge the value that PERT is claimed to bring to the table, specifically as it applies to information security.
SIRA does a good job of describing the PERT approach in the series “Some Thoughts about PERT and other distributions”. Feel free to read the series and return for this response.
First and foremost, there’s no implication from SIRA that PERT is perfect. In fact it’s clearly stated in a SIRA blog post that: “under certain circumstances PERT distributions do not yield good information”, “there are other distributions that might be more useful than PERT”, and “it’s a learning process”.
That being said, I think there are other considerations to take into account before spending time modeling with tools like PERT, in the best interest of the organization you’re representing.
First, let’s step back for a minute and consider how modeling is used within insurance, specifically car insurance. There are known factors within car insurance that statistically allow an insurance company to remain profitable based on historical information. These factors include attributes of the insured person: age, gender, marital status, etc. We also have factors outside the insured person: car type, geographical locale, laws and regulations, etc. None of these guarantee an ideal candidate, because there are always outside influences: think uncontrollable environmental events (you’re rarely driving in a pristine and known environment) and the general human tendency toward mistakes. But overall, thanks to the law of large numbers, there is predictable accuracy over the group as a whole. This is where it gets tricky when you try to do the same thing with information risk.
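The law of large numbers point can be illustrated with a toy simulation. A minimal sketch, with entirely invented figures: suppose each insured driver has a 5% chance of filing a $10,000 claim in a given year. An individual driver is unpredictable, but the average loss per policy across a large pool converges tightly on the expected $500.

```python
import random

random.seed(7)

def pool_loss_per_policy(n_policies, p_claim=0.05, claim_cost=10_000):
    """Simulate one year of claims for a pool and return average loss per policy."""
    claims = sum(1 for _ in range(n_policies) if random.random() < p_claim)
    return claims * claim_cost / n_policies

# The per-policy average is noisy for a small pool and stable for a huge one.
for n in (100, 10_000, 1_000_000):
    print(n, round(pool_loss_per_policy(n), 2))
```

The expected loss per policy is 0.05 × $10,000 = $500; the simulated average at one million policies lands within a few dollars of that, which is precisely the predictability an insurer prices against.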
Let’s stay with car insurance for comparison. Part of its advantage right out of the gate is that we have more applicable known bounds. (For the nitpicker: this example plays better in developed nations, where driving a car is a typical task for most of the population.) That aside, we start with a set of known bounds: strict rules and regulations that enforce age, training and testing requirements; cars that are required to meet safety laws and standards; roads that are generally well maintained, backed by a law-enforcement framework that keeps the common driver within safe bounds; and the clincher (less a hard bound than a shared understanding): most drivers inherently grasp the cause and effect that governs driving success and failure. It could safely be stated that people know to drive slower, keep proper distance and increase awareness of road conditions when traveling on a snow-covered road. While there are many more understood facets to driving, the point is that the task is well understood for a few simple reasons: humans can grasp simple physics, laws are enforced, and the controls (the car itself) are standardized. Would a PERT distribution be a good choice to model values likely to occur within the context of driving? Definitely. There is significant data to represent the minimum, maximum and most likely values, and, more importantly, the landscape of driving does not change drastically year over year.
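For readers who haven’t worked with PERT directly, here is a minimal sketch of the mechanics: the three-point estimate (minimum, most likely, maximum) is mapped onto a scaled Beta distribution, which can then be sampled Monte Carlo style. The loss figures below are invented purely for illustration.

```python
import random

def pert_sample(minimum, mode, maximum, lam=4.0):
    """Draw one sample from a betaPERT distribution.

    The three-point estimate is converted to Beta shape parameters;
    lam=4 is the classic PERT weighting of the most likely value.
    """
    span = maximum - minimum
    alpha = 1 + lam * (mode - minimum) / span
    beta = 1 + lam * (maximum - mode) / span
    return minimum + span * random.betavariate(alpha, beta)

# Hypothetical annual-loss estimate: min $10k, most likely $50k, max $400k.
random.seed(42)
samples = [pert_sample(10_000, 50_000, 400_000) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# Theoretical PERT mean is (min + 4*mode + max) / 6 ≈ 101,667 for these inputs.
print(round(mean))
```

Note how few inputs the model needs; that economy is exactly what makes it attractive for well-bounded problems like driving, and exactly what makes it suspect when the bounds themselves are unknown.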
Things change when we try to do this with information security, because the complexity skyrockets. While there are some rules and regulations around controls that must be implemented for specific types of data, they are shit at best. Think: no speed limits; optional seat belts (if your car even has them); roads with different destinations daily; law enforcement once or twice a year (with cops who are easily paid off, can’t reason about the laws, or don’t understand how different cars could pass the requirements); cars with completely new controls from one year to the next; and physics that applies to some and not to others because, well, we now have a phenomenon known as ‘0-day’ physics. And we’re just getting started. We can break the system down into three main components: the Internet, organizational infrastructure and assets, and the human element.
At this point let’s set up a component of the argument. I’ve seen PERT risk analysis touted for very specific projects, technologies and so on. But this doesn’t always work well in, say, a large organization with 150k endpoints, 200k employees, 20 data centers, 40 Internet points of presence, and no fewer than 100 technology vendors (i.e. a complex context). So let’s say you are doing product selection for your next-gen firewall solution as it pertains to your particular organization. Things you need to consider, at a high level, encompass performance, functionality, manageability, and financials. These high-level components are the basis that can contain everything pertinent to the environment the product selection applies to, and this is how I run proof-of-concept and feasibility analysis for my employer (truth in transparency: an organization tucked within the Fortune 25). While I’m not about to give away the entire secret sauce, the framework, as designed, takes into account a few main ideas: hundreds or thousands of data points covering every component measured over the analysis (think: the ability to use this data as an operational baseline after the fact); a weighting component, so that metrics don’t have to be directly skewed for the different stakeholders looking at the individual technology (think: a feedback loop from stakeholders into the selection-process metrics); and finally a high-level metric that combines the raw data and the weights, so that stakeholders aren’t engulfed in a confusing decision (think: a culmination of the above two for fast data analysis and selection abstraction).
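The raw-data / weighting / roll-up structure described above can be sketched in a few lines. This is not the actual framework; every metric name, weight and score here is invented to show the shape of the idea: stakeholders tune the weights, not the raw measurements.

```python
# Hypothetical vendor-scoring sketch: raw 0-100 metric scores per vendor,
# stakeholder-adjustable weights, and a single roll-up score for decision making.
def weighted_score(metrics, weights):
    """Collapse raw metric scores into one weighted roll-up value."""
    total_weight = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in weights) / total_weight

# Weights are the stakeholder feedback loop; raw scores stay untouched.
weights = {"performance": 0.4, "functionality": 0.3,
           "manageability": 0.2, "financials": 0.1}

vendors = {
    "vendor_a": {"performance": 82, "functionality": 74,
                 "manageability": 90, "financials": 65},
    "vendor_b": {"performance": 91, "functionality": 60,
                 "manageability": 70, "financials": 80},
}

for name, metrics in vendors.items():
    print(name, round(weighted_score(metrics, weights), 1))
```

The design point is separation of concerns: the raw data survives as an operational baseline after selection, while different stakeholder groups can re-weight the same data without anyone skewing the underlying measurements.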
Now let’s focus on one of those four main sections: performance. As an example, part of the performance component of our firewall analysis is that, under normal load, inspection components identified as pertinent to operational security (e.g. a database farm) need to operate at 100% against 80% of known in-the-wild vulnerabilities (the remaining 20% will be covered by a different compensating control, so don’t focus on the missing percentage here). This metric is arbitrary, chosen purely for illustration. The point I’m trying to showcase is that this is one small slice of the entire component analysis. Would PERT be good to visualize this specific context? Maybe. But what if I have outliers: multiple Internet connections feeding this particular segment; the segment happens to be in eastern Europe; half of the processing farm runs Windows 2003 and the other half CentOS 6.2; a compensating control includes IPsec tunnels, but there is a management segment with direct console access; etc., etc. The point is that the risk environment here is very unique, very specific, and full of unknowns. Taking the time to understand the applicability and to insert feedback loops for stakeholders (in other words, the people who may understand the environment better than the security-centric lines of business) is crucial to actually visualizing and understanding ‘actuality’ (as I like to call it). Min, max and most likely values now seem very obtuse and vague by comparison. And, at the end of the day, they don’t really represent anything, except when they’re used to make a model look like the reality of the situation after the fact. At that point, game over.
Does data modeling have value? Yes, definitely, in the context of well-knowns. But analysis through simulation and understanding goes much further in the realm of security risk analysis. Security is a multitude of complex systems, and when those complex systems simplify themselves we call that failure. Don’t simplify yourself right out of the box. SIRA is a good resource, but they seem heavily invested in modeling as the be-all and end-all, which I can’t fully agree with.
This is my opinion, but constructive feedback is always appreciated.
Tea: Mighty Leaf – Organic African Nectar