Anyone who has ever studied information security or completed any of the available information security certifications has surely read that risk is the product of impact multiplied by probability. The formula is easy to remember and seems to make sense, at least on the surface. The problems start, however, when you leave the classroom, where you have been provided values to work with, and begin performing risk assessments in the real world. At that point you are confronted with seemingly simple questions that are extremely difficult to answer. The first is: Which impact are we talking about? Is it financial loss, human injuries, production downtime, reputation, etc.? Do we combine these all together in a single value and, if so, how? The second is: What is the probability this risk will be realized, that is, what is the likelihood that a threat, with sufficient skill and determination, will be able to exploit a vulnerability to damage our organization? To understand the various impacts that might result from any operational, legal, political, or human Threat Event we can generally speak with subject matter experts in the organization. To understand the probability of environmental risks we can refer to heat maps from various government and insurance organizations. What, however, is the probability that an Advanced Persistent Threat (APT) will choose to attack your organization today? What is the probability that a script kiddie will download a toolkit and run a DDoS attack against your corporate website? What is the probability that a disgruntled employee will circumvent your encryption and sell your database full of personally identifiable information (PII)? I know of no way to reliably calculate the probability of these types of random events.
I’ve been performing Information Security Risk Management (ISRM) for international organizations for several years but have generally used their in-house tools to ensure consistency with their existing risk management programs. While I do not always agree with the results the tools provide, they do have the benefit of allowing these organizations to compare apples with apples; that is, if all of their risk assessments suffer from the same logic error(s), the results, though potentially inaccurate, can still be used to prioritize their risks. When I began developing my iGsrc Tool, I realized that I needed a risk management process that is both easy to use and delivers useful results. As I worked on the topic, I came to realize that, in spite of years of ISRM experience, bringing ease of use and realistic risk assessments together in one tool was going to be much harder than I expected.
In order for an assessment of your Information Security Risk to be useful, you must have some sort of weighting scheme. Traditional Qualitative Risk Management will generally use terms such as high, medium, and low or colors such as red, yellow, and green. Unfortunately, I missed the section in my math course where we were able to obtain a meaningful result by multiplying “Red” times “Medium”. It is possible, of course, to create a heat map using the results high, medium, and low, but I am not a big fan as I prefer being able to point to a numeric value when explaining how I arrived at a given result. Quantitative Risk Management, on the other hand, attempts to assign numeric values to impacts and probabilities to allow one to prioritize risks accordingly. The Single Loss Expectancy (SLE) and Annual Loss Expectancy (ALE) are terms used to express losses (impacts) while the Annual Rate of Occurrence (ARO) represents probability. Unfortunately, I suspect that few organizations are able to accurately determine the impact, in dollars and cents, of a given threat event, and surely no one expected three major hurricanes in the U.S. in one season, so ARO is also something of a guess in spite of the volumes of historical weather data available.
A combination of Qualitative and Quantitative Risk Management can be leveraged to produce better results than either would on its own. To this end, I use Qualitative Risk Management terms like high, medium, and low and combine these with numeric values that allow me to perform mathematical functions on my data. This approach allows me to adjust the numeric values in the background, but the results are still presented as high, medium, and low with their associated colors, which makes reporting easier and more consistent. This approach also allows risk and security experts to test the results and agree on a common weighting that accounts for organizational differences. Changing the weighting in the backend automatically updates the risk reports on the front end to ensure consistency across all reports. Of course, one should generate hard copies of previous risk analyses to compare against the analyses produced after changes have been made.
When performing a risk analysis, I use the following terms, each of which is assigned a value from 0 to 32 and is defined below as it applies to the Threat Profile, Threat Motivation, Exploitation Complexity, Impact Values, and Control Effectiveness (a minimal sketch of this mapping follows the list):
- None
- Very Low
- Low
- Medium
- High
- Very High
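To make the weighting concrete, here is a minimal sketch, in Python, of how the qualitative labels could be backed by numeric weights. The doubling scale (1 through 32) mirrors the Threat Profile Values listed later in this paper; the `Level` enum and `label` helper are illustrative names, not part of iGsrc, and the weights are exactly the kind of backend values an organization would tune to its own needs.

```python
from enum import IntEnum

class Level(IntEnum):
    """Qualitative ratings backed by adjustable numeric weights.

    The doubling scale mirrors the Threat Profile Values section below
    (None = 1 ... Very High = 32); an organization could substitute its own
    weights in the backend without changing how results are reported.
    """
    NONE = 1
    VERY_LOW = 2
    LOW = 4
    MEDIUM = 8
    HIGH = 16
    VERY_HIGH = 32

def label(value: float) -> str:
    """Map a numeric result back to the nearest qualitative label for reporting."""
    for level in sorted(Level, reverse=True):
        if value >= level:
            return level.name.replace("_", " ").title()
    return "None"

print(label(Level.MEDIUM * 1.5))  # values that fall between bands still report as "Medium"
```

Because the numeric values live behind the labels, the backend weights can be adjusted without changing how results are presented in reports.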
In the following section I will walk you through the various terms you will commonly encounter when discussing IT Risk Management and how these were used in iGsrc. I’ve modified the definitions of some terms from what you might find in Information Security literature to more accurately reflect how I view them from a practitioner’s point of view.
- Maximum Principle – The maximum principle is something that I have not seen discussed in the English-language security literature I’ve read, but it is a common concept in German security circles. When referring to the maximum principle, we do not mean the mathematical definition but, rather, that the highest classification of information on a system, for example, will be used as the classification for the overall system. This is sometimes incorrectly applied in risk management as well. For example, I’ve been told by a few Information Security Risk Managers that risk is the equivalent of the highest potential impact. The result of this thinking is that all risk assessments end up with a high or very high risk, making it difficult to rank your organization’s risks. I prefer to use the average of multiple impacts to calculate the impact score for any given threat event.
- Threat Event (TE) – A threat event is a simple description of the risk you are trying to evaluate and is the starting point for your risk analysis. For example, “an EF5 Tornado destroys our primary data center” or “an APT steals confidential information and sells it to our competition.” I’ve seen risk analyses start with controls, then try to identify the threats those controls are meant to address, and finally evaluate the vulnerabilities that might be exploited if the controls were lacking. This can work but requires that you have identified all of your vulnerabilities which, in my experience, will never be the case. Moreover, you may find yourself investing money to implement a control for a threat and vulnerabilities that do not apply to your organization. I feel it is better to begin with the threat event because this will encourage people to think about what could happen, which may help your organization uncover unknown vulnerabilities. For example: Your primary data center is hardened against an EF5 Tornado in an area that has never experienced a tornado but sits on a 100-year flood plain near a major river.
- Threat (T) – A threat is someone or something that might intentionally or unintentionally perform an action that could adversely impact your organization. A threat can be a hurricane, a terrorist, a careless employee, a foreign government, snow, etc. The threat value is calculated by multiplying the threat profile by the threat motivation (T = TP * TM), both of which are described below. As you read about threat profiles and threat motivations, consider the following: a sophisticated threat exploiting a known, unpatched vulnerability that has no controls should always result in a threat value that is higher than an unsophisticated threat attempting to exploit a known vulnerability that has been managed using proper controls.
- Threat Profile (TP) – The threat profile is an evaluation of the ability of the threat to realize its goals in the case of human threats, or of its potential severity in the case of non-human threats such as environmental or technical events.
- Each of the threat profiles discussed below is further broken down into 8 areas of consideration: Environmental, Operational, Technical, Political, Human, Physical, Legal, and Financial. I’ve included examples which you would need to evaluate and modify according to your organizational needs. When considering the threat profile, there are a couple of things to consider:
- You must keep the threat event in mind as this is what you are evaluating the threat profile against. Not every threat profile area will be applicable to every threat, hence the name “areas of consideration”, and
- When developing your threat profiles, care must be taken to ensure that any given threat falls into only one of the 8 areas. For example, natural events should only be evaluated as environmental threats while IT equipment failures should only be evaluated as technical threats.
- Threat Profile Areas of Consideration:
- Environmental: These are threats which result from the environment in which you operate: flood, fire, tornado, power outage, crime, neighbors, nuclear power plant failure, etc. Environmental threats should be considered as part of your information security risk assessment because they can result in the loss of data centers, a failure of your access management, the unintended exposure of confidential information, etc.
- Operational: These are threats that have an impact on your operations: Defective equipment, vendor delivery issues, material quality issues, insufficient forklifts, etc.
- Technical: These are threats that impact your IT hardware, software, or industrial equipment. These can range from a failed hard drive in a server to code changes that destroy your warehouse inventory. Technical also includes industrial machinery that may be required to perform your operations. Technical threats also include all types of malware as well as cyber-attacks.
- Political: These are threats that result from political decisions / political upheaval. State sponsored cyberwarfare or terrorism fall under this category as do regulations which place an added burden on your operations (think GDPR) or unstable governments. Terrorism (including cyber terrorism) is also a political threat and, as we’ve seen, not one that should be underestimated or ignored.
- Human: These threats include everything from an advanced hacker to strikes, careless or disgruntled employees, workers’ councils and unions, etc.
- Physical: Physical threats could be anything from the building housing your data center collapsing to an intruder gaining physical access to your servers.
- Legal: Legal threats include those related to compliance with regulations, meeting contractual obligations, etc.
- Financial: These threats include fines for non-compliance, an inability to raise needed capital, insufficient spending on security requirements, an inability to make payroll or meet other financial obligations.
- Threat Profile Values (the financial thresholds used below are illustrated in a short sketch after this list):
- None (1)
- Environmental: Dust storms, located on a 1000-year flood plain, very low crime rate combined with effective law enforcement
- Operational: There are no production issues, and data entry errors are logically prevented using effective controls.
- Technical: Operational equipment is functioning within acceptable parameters; There are no issues with production equipment, code is effectively managed by the SDLC process, hardware has been implemented using fault tolerance where required.
- Political: The government(s) where the organization operates are stable, no major legislation that could impact the organization is planned
- Human: Employees are well-trained and generally happy, the organization’s turnover rate is extremely low, and employees positively identify with the organization. The workers council proactively works with the organization in a positive manner. HR and the workers council are involved in the sanctioning of employees who fail to adhere to company policies. The organization is always able to arrive at an amicable agreement with sanctioned employees.
- Physical: The buildings that house your data are made of steel-reinforced concrete with solid walls which go from the floor to the physical ceiling, you have adequate physical barriers to prevent vehicles from getting too close to the building, you have dual access controls to prevent unauthorized individuals from entering the data centers, your server racks are locked and the keys are not stored in the server room. Your servers are located in organizationally owned and protected data centers.
- Legal: Your organization is always in compliance with regulatory requirements, your organization always honors the spirit of its contractual obligations, your organization always practices good corporate citizenship
- Financial: No financial impact
- Very Low (2)
- Environmental: EF1 Tornado, RS 4-4.9 Earthquake, smoldering ashtray, very short power outage that does not impact critical equipment, very low crime rate and/or effective law enforcement, dampness in cellars
- Operational: Small number of vendor mistakes, small number of non-critical production issues, few data entry errors which are easily caught and rectified.
- Technical: Minor issues with non-production equipment, very minor code issues that do not result in data quality issues
- Political: Newly elected government of the previous governing party, increases in the terrorism warning level with no specific threat to your business, minor changes to legislation that can be implemented with minimal costs
- Human: Employees are well-trained and are satisfied with their employer. Turnover is minimal and does not have a noticeable effect on the organization. The workers council raises few complaints, and these are handled swiftly and amicably resolved. HR and the workers council are involved in the sanctioning of employees who fail to adhere to company policies. Employees rarely file lawsuits, and none of these has resulted in a reversal of sanctions.
- Physical: The buildings that house your data have steel reinforced concrete external walls and internal walls which go from the floor to physical ceiling, there is no vehicle parking near the data center and there are fences or bollards to prevent a vehicle from being parked there, you have dual access controls to prevent unauthorized individuals from entering the data centers, your server racks are locked and the keys are not stored in the server room, the rack cages are collocated with other reputable customers.
- Legal: Your organization is generally in compliance with regulatory requirements, your organization generally honors the spirit of its contractual obligations, your organization generally practices good corporate citizenship
- Financial: Financial impacts that are less than 10% of your organization’s materiality level.
- Low (4)
- Environmental: EF2 Tornado, RS 5-5.9 Earthquake, non-violent crime level below the national average, law enforcement able to respond to all emergency calls, daily power outages that do not impact production systems; standing water in cellars
- Operational: Vendor mistakes which can be easily rectified, production issues which may result in missing non-critical deadlines, data entry issues which are missed but do not adversely impact operations
- Technical: Minor issues with non-production IT equipment, code issues that require updates during the next change cycle
- Political: Newly elected government with a change of political parties; Increases in the terrorism warning level with unspecified threats to the industry or region in which you operate; minor changes to legislation which require few operational changes and/or budgetary changes to implement.
- Human: Turnover is average for the industry in which the organization operates. Employee training is provided but attendance is not tracked. Some employee discontent or minor agitation by workers councils. Direct supervisors are responsible for the sanctioning of employees who fail to adhere to company policies. Employee lawsuits rarely result in a reversal of sanctions.
- Physical: The buildings that house your data have steel-reinforced concrete external walls and internal walls which go from the floor to the physical ceiling, there is no vehicle parking near the data center but there are no fences or bollards to prevent a vehicle from being parked there, you have dual access controls to prevent unauthorized individuals from entering the data centers, your server racks are locked and the keys are not stored in the server room, the rack cages are collocated with other customers of unknown reputation.
- Legal: Regulatory compliance is determined by the likelihood and impact of any potential penalties, contractual obligations are honored to the letter of the contract, payments to creditors are sometimes delayed, your organization talks about but does not practice good corporate citizenship
- Financial: Financial impacts that are less than 30% of your organization’s materiality level.
- Medium (8)
- Environmental: EF3 Tornado, RS 6-6.9 Earthquake, non-violent crime level comparable to the national average, law enforcement understaffed, power outages that require production equipment to be shut down for a short period of time (< 1 hour); minor flooding in the local area
- Operational: Issues with production equipment which negatively impact your operations; production issues resulting in missed critical deadlines; data entry issues which require an emergency change to avoid impacting operations, vendors deliver the wrong parts and/or quantities.
- Technical: Major issues with production IT equipment that causes a production slowdown, code issues that require an emergency patch to be deployed after production has completed.
- Political: Increases in the terrorism warning level with specific threats to the industry or region in which you operate; major changes to legislation which require significant operational changes and/or budgetary changes to implement.
- Human: Employee discontent and minor agitation by a union. Turnover is higher than the industry average and is having a negative impact on the organization’s ability to deliver services. Employee training is provided on an ad-hoc basis, generally as a result of mistakes that have been made. Senior Management is responsible for the sanctioning of employees who fail to adhere to company policies. Employee lawsuits occasionally result in a reversal of sanctions.
- Physical: The buildings that house your data have cinder block external walls and internal walls which do not go from the floor to physical ceiling, there is vehicle parking near an outside wall of the data center, access requires swiping a badge with photo to prevent unauthorized individuals from entering the data center, the server racks are unlocked, the rack cages are collocated with other customers of unknown reputation. Some customer cages are disorganized and/or have visibly noticeable trash.
- Legal: Regulatory compliance is seen as a burden rather than an opportunity to improve the organization and any means of avoiding compliance are seen as fair, contractual obligations are honored to the letter of the contract and a large legal staff is retained to manage lawsuits, payments to creditors are often delayed, earning the company a poor reputation with vendors and customers, your organization does not even pay lip service to good corporate citizenship
- Financial: Financial impacts that are equal to or exceed your organization’s materiality level.
- High (16)
- Environmental: EF4 Tornado, RS 7-7.9 Earthquake, significant non-violent crime, law enforcement chronically understaffed and underfunded, power outages that require production equipment to be shut down for one hour or more; significant flooding in the local area
- Operational: Issues with production equipment which shut down operations resulting in financial loss; production issues resulting in missed critical deadlines and extra costs to expedite shipping; data entry issues which must be backed out and reentered, vendors miss deliveries of critical parts.
- Technical: Major issues with production IT equipment that causes a production outage, code issues that stop production while an emergency patch is developed and deployed.
- Political: Your organization is the subject of negative political rhetoric, increases in the terrorism warning level with specific threats to your organization; major changes to legislation with severe penalties and significant costs to comply.
- Human: Significant employee dissatisfaction and/or localized strikes. Turnover is significant and has a negative impact on the organization’s bottom line and/or is resulting in customer dissatisfaction. Organizational training is an inside joke amongst employees. There is a process for the sanctioning of employees who fail to adhere to company policies, but this is not consistently implemented. Employee lawsuits generally result in a reversal of sanctions.
- Physical: The buildings that house your data have cinder block external walls and internal walls which do not go from the floor to the physical ceiling, there is vehicle parking near an outside wall of the data center, access requires swiping a badge without a photo to prevent unauthorized individuals from entering the data center, the server racks are unlocked, the rack cages are collocated with other customers of unknown reputation. The racks of other customers are unlocked, have no cable management, old equipment is stacked in corners, cages are disorganized and/or have visibly noticeable trash.
- Legal: Your organization does not have a compliance department and is unaware of its compliance requirements, contractual obligations are frequently not honored and a large legal staff is retained to manage numerous lawsuits, payments to creditors are often delayed and/or reduced, earning the company a poor reputation with vendors and customers, your organization is seen as having a negative effect on the environment, supporting poor political decisions, or disregarding human rights in the countries in which it operates.
- Financial: Financial impacts of up to two times your organization’s materiality level.
- Very High (32)
- Environmental: EF5 Tornado, RS 8 or above Earthquake, violent crime, ineffective law enforcement, power outages that result in damaged production equipment; flooding on the organizational premises
- Operational: Issues with production equipment which shut down operations for an indeterminate period of time; production issues resulting in missed critical deadlines and sanctions from customers; data entry issues which result in significant integrity issues, vendors unable or unwilling to deliver material for an indefinite period of time.
- Technical: Major issues with production IT equipment that causes a production outage for an indeterminate period of time, code issues that stop production and require a return to a previous version of the application losing required functionality.
- Political: Your organization is subject to public humiliation by senior politicians, threats of kidnapping or murder are directed at the CEO, CFO, etc., a terrorist or criminal plot is identified by law enforcement with specific threats to your organization; major changes to legislation with which the organization cannot comply.
- Human: Widespread strikes and/or turnover of significant senior-level employees resulting in significant loss of customer trust. No employee training is provided, resulting in significant processing errors. There is no formal process for the sanctioning of employees who fail to adhere to company policies. Employee lawsuits generally result in a reversal of sanctions.
- Physical: The buildings that house your data have wooden or sheet metal external walls and internal walls which do not go from the floor to the physical ceiling, there is vehicle parking near an outside wall of the data center directly next to the area where the server cages are located, access relies on an individual checking a list of authorized users against badges with no photo, not all customers have cages around their server racks, racks have poor cable management and may or may not be locked, some customers have known negative reputations, and the common trash bins are full of paper or other flammable materials.
- Legal: Your organization does not have a compliance department, is unaware of its compliance requirements, and ignores those it is aware of; contractual obligations are frequently not honored and a large legal staff is retained to manage numerous lawsuits; payments to creditors are often delayed and/or reduced, earning the company a poor reputation with vendors and customers; your organization is seen as actively polluting the environment, producing products which are dangerous to human health, or exploiting poor people at home and abroad.
- Financial: Financial impacts that exceed two times your organization’s materiality level.
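The financial entries above lend themselves to a simple rule. Below is a hedged sketch, in Python, of how the financial area of a threat profile could be scored against the organization’s materiality level using the thresholds listed above; the function name is illustrative, and the handling of impacts between 30% and 100% of materiality (a band the scale above does not explicitly define) is an assumption of mine.

```python
def financial_profile(impact: float, materiality: float) -> int:
    """Return the financial threat-profile weight (1-32) for a given impact.

    Thresholds follow the Financial entries in the Threat Profile Values list;
    impacts between 30% and 100% of the materiality level are not explicitly
    covered there, so this sketch assumes they score as Medium (8).
    """
    if impact <= 0:
        return 1                  # None: no financial impact
    ratio = impact / materiality
    if ratio < 0.10:
        return 2                  # Very Low: less than 10% of materiality
    if ratio < 0.30:
        return 4                  # Low: less than 30% of materiality
    if ratio <= 1.0:
        return 8                  # Medium: at (or, by assumption, approaching) materiality
    if ratio <= 2.0:
        return 16                 # High: up to two times materiality
    return 32                     # Very High: more than two times materiality

print(financial_profile(impact=150_000, materiality=1_000_000))  # 4 (Low)
```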
- Threat Motivation (TM) – The threat motivation attempts to document the motivation of any given threat. Using financial gain as an example of motivation, a human would generally be more motivated to do something for one million dollars versus one thousand. Environmental threats, on the other hand, are not motivated by economic gain but certain traits such as location, environment, etc. can provide clues to help one roughly determine if an earthquake, for example, will be small and barely noticeable or major and life threatening.
- Vulnerability (V) – A vulnerability can be environmental, human, technical, etc. A vulnerability represents a weakness that a threat could exploit. For example, if you have a building in San Francisco that was not built to withstand an earthquake, you have a vulnerability. If you are using WPA2 on your wireless devices and have not deployed the latest vendor patches, you have a vulnerability. If you have a password policy that requires users to use complex passwords consisting of numbers, letters, and special characters but your log-on mechanism does not enforce this requirement, you have a vulnerability. If your financial system allows a single person to create vendors and make payments to those vendors, you have a vulnerability. I could go on, but I think you probably get the point. A quick check on the internet will provide dozens of lists that include hundreds of vulnerabilities for you to choose from; the Common Vulnerabilities and Exposures (CVE) list from mitre.org is a standard for use in Information Security.
- Exploitation Complexity (EC) – The exploitation complexity is an estimation of how difficult it would be for a threat to exploit a vulnerability. Using the example vulnerabilities above, a building with no protection against earthquakes in San Francisco might not be damaged by a light earthquake, say RS 3 or below, but might be leveled by an RS 6. The difference between motivation and exploitation complexity with regard to environmental threats is that if you are located on the San Andreas Fault then an earthquake has a higher motivation score (likelihood) than it would have in central Michigan. By the same token, if you are located on the San Andreas Fault, your buildings are probably constructed to withstand a much stronger earthquake than those in central Michigan. An internal employee may not have the skillset to exploit a failure to enforce password complexity rules, but an advanced hacker surely would. A failure to implement segregation of duties to avoid a situation where a single employee can create and pay vendors might be irrelevant to the average trusted employee but could become a huge risk if we are talking about a disgruntled finance department employee. As a general rule, the higher the exploitation complexity, the less likely the vulnerability will be exploited.
- Impact (I) – Assuming the threat is successful, the impact is the cost of that success to your organization. To help one better understand and quantify those costs, I have broken impact down into 11 areas which are evaluated individually. It might be possible, for example, that an organization’s image may be tarnished without the organization violating any regulations. Sadly, the reverse can probably also be said, that is, the violation of a regulation, even if made public, may not result in an impact to the organization’s image. Once all 11 impact areas have been evaluated, their values are added together and the sum becomes the impact value (I = IC + II + IA + INR + IRV + IPV + IPI + IDP + IO + IFL + IID); see the sketch following the list of impact areas below.
- Confidentiality Impact (IC) – Confidentiality means that data are secure and can only be accessed by authorized individuals. A loss of confidentiality means that information has been made available to those who should not see it. This could be caused by failures to properly manage access, data being stolen, or something as mundane as returning a leased copier without properly erasing the hard drive. Examples of lost confidentiality could be executive salaries being accidentally sent to the entire organization or, as we’ve recently seen, half of the American population’s financial data being stolen, potentially opening them up to identity theft.
- Integrity Impact (II) – Integrity means that data are correct and complete. Integrity can be negatively impacted when, for example, there are system errors during processing or if an employee manually enters the wrong data. Integrity issues can be very difficult to discover. Consider, if you will, an integrity issue that causes an organization’s employees to be paid an extra ten euros in one month. The employees probably would not notice this amount of money, especially after taxes are deducted, but in a large international organization with centralized payroll this could amount to tens of thousands of euros in overpayments and quite likely the same amount again to correct the problem.
- Availability Impact (IA) – Availability means that data are available for use when they are required. Availability can be negatively impacted by a production system crashing or by a careless employee who erroneously deletes a critical file. Availability can be improved by using redundant systems and by ensuring that a functional backup system is in place and regularly tested.
- Non-Repudiation (INR) – Non-Repudiation means that a person cannot deny that they performed an action. This means that the person must be positively identified before a transaction is performed. In the modern world, where an increasing number of sensitive activities are performed online, non-Repudiation is critical. This includes everything from the orders you place on Amazon to your online banking. While I cannot speak for all banks worldwide, mine not only makes me log in using my password, I must also provide a TAN for each and every transaction I perform. This ensures that I cannot deny that I performed the transaction.
- Regulatory Violation (IRV) – Regulatory violations include a wide range of compliance requirements and these are best evaluated by working with your compliance department. Violations of compliance requirements can result in significant fines. Rather than attempting to evaluate the impact of violating each and every regulation with which the organization must comply, a difficult task at best, I have settled for a single value which represents the combined impact of all potential violations that could result from any given threat scenario.
- Personal Rights Violation (IPV) – A violation of a person’s rights could include the person not being hired/promoted based on race, religion, sex, etc. This could also include a person being wrongly accused of a crime or an erroneous loss of privileges. Depending on the laws in the country in which you operate, a violation of a person’s rights may involve payments of millions of dollars or simply a slap on the wrist. As we are currently seeing in the U.S., violating a person’s rights, regardless of how powerful you may think you are, can destroy your career and life even years after the violation occurred.
- Personal Injury (IPI) – Personal injury literally covers injuries to one or more people. This is generally rated from light injuries to a small number of people up to the death of multiple individuals. While this may not seem like something one would associate with cybersecurity, consider, if you will, the results of a cyber-attack that causes the failure of a nuclear power plant, the derailment of a train, or a power loss to a hospital. The ultimate goal of any security and/or risk management initiative is the preservation of human life. Aside from the deaths that may occur, organizations may, again depending on the country in which they operate, find themselves paying anywhere from nothing to millions in wrongful death compensation.
- Data Privacy (IDP) – Before the advent of the EU General Data Protection Regulation (GDPR) I would have included Data Privacy under Regulatory and Personal Rights violations. Given the current high level of interest in Data Privacy it seems like a good idea to call it out separately. When performing a risk assessment which might involve Data Privacy violations, one must ensure that it is not counted twice as a Regulatory and Personal Rights violation. Since the GDPR, for example, includes theoretical fines of up to 4% of global turnover, it is critical for organizations to evaluate their Data Privacy practices and manage the associated risks accordingly.
- Operational (IO) – Operational impact includes all of the costs associated with operational outages. These could involve a few employees or entire production facilities. These might even involve customer outages as a result of your outage. I’ve seen operational outages calculated in minutes of plant downtime, missed shipments, customer downtime, and so forth. The key is to agree with operations how the operational impact will be calculated, tracked, and reported. As part of your risk management practices, you should track all operational outages in a centralized system so you can understand their impact, frequency, and most importantly, their root cause.
- Financial Loss (IFL) – Financial loss could include the fines that result from a regulatory violation, the charges from customers due to downtime your organization caused, remediation costs after a data breach, lost revenue due to image damage (consider VW), or direct losses as the result of fraud or general mismanagement. In my experience, it is often difficult to nail down the exact costs, which is why I use ranges in iGsrc. This value will need to be tailored to each organization to represent its pain threshold when it comes to financial loss. For example, a large international “iCompany” could potentially lose a million dollars and not even notice it while a small doctor’s office somewhere in Europe would probably have to close its doors if it lost a fraction of that amount.
- Image (Reputational) Damage (IID) – Image damage occurs when an organization does something that results in negative press. The impact from image damage could be immediate (your organization is, for example, boycotted) or longer term (resulting in slowly declining sales).
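As referenced above, once each of the 11 areas has been rated, the scores are summed. Here is a minimal sketch, in Python, of that aggregation; the field names are illustrative, and the assumption that an area with no impact scores 1 (mirroring the “None” value in the Threat Profile scale) is mine, not iGsrc’s.

```python
from dataclasses import dataclass, fields

@dataclass
class Impact:
    """The 11 impact areas, each rated on the qualitative scale (assumed 1-32)."""
    confidentiality: int = 1       # IC
    integrity: int = 1             # II
    availability: int = 1          # IA
    non_repudiation: int = 1       # INR
    regulatory_violation: int = 1  # IRV
    personal_rights: int = 1       # IPV
    personal_injury: int = 1       # IPI
    data_privacy: int = 1          # IDP
    operational: int = 1           # IO
    financial_loss: int = 1        # IFL
    image_damage: int = 1          # IID

    def total(self) -> int:
        """I = IC + II + IA + INR + IRV + IPV + IPI + IDP + IO + IFL + IID"""
        return sum(getattr(self, f.name) for f in fields(self))

# Example: a hypothetical data breach scenario
breach = Impact(confidentiality=16, regulatory_violation=16, data_privacy=16,
                financial_loss=8, image_damage=8)
print(breach.total())  # 16*3 + 8*2 + 1*6 = 70
```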
- Probability (P) – I loved statistics in college. Unfortunately, the formula I learned way back then was that probability is equal to the number of ways an event can happen divided by the total number of outcomes. For example, when flipping a coin, the event “heads” can happen in only one way, whereas there are two possible outcomes (heads and tails). The probability of a coin landing on heads is therefore 1 divided by 2, or 0.50 (50%). Alas, in information security it is often impossible to say how many ways an event can happen, and it is equally difficult to say with certainty what the outcomes might be.
From an Information Security perspective, probability is the likelihood that a threat will be able to exploit a vulnerability in a given period of time. For example, the probability that a tornado will flatten my house in the next year is roughly 0% since I live in an area that does not experience tornados. For environmental threats you can reference government or insurance charts; these are fairly straightforward and relatively reliable. As a risk management practitioner, I ran into problems when trying to determine, for example, the likelihood that an unhappy employee would violate company policy or that an external threat would take down our website. Often the results we came up with were nothing more than a SWAG (Silly Wild-Assed Guess). Alas, I am not a very good fortuneteller and I do not like giving my customers results that I cannot back up. To improve our chances of delivering correct results, I propose the following formula: Probability is the quotient of the Threat divided by the Exploitation Complexity (P = T / EC). To me, the higher the Exploitation Complexity, the lower the likelihood that any given threat will be able to exploit a vulnerability. Conversely, the lower the Threat value (Threat Profile times Threat Motivation), the less likely the Threat will be successful, even if the Exploitation Complexity is also low. While this does not directly answer the question of how often per year the Threat may exploit a given vulnerability, it does give us a starting point, and while the results may not be perfect, they are consistent and surely better than the average SWAG.
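The effect of this formula is easy to see with a couple of hypothetical ratings. The sketch below (Python, with illustrative numbers of my own choosing) computes T = TP * TM and P = T / EC, showing how a capable, motivated threat facing an easy-to-exploit vulnerability scores far higher than an unsophisticated one facing a hardened target.

```python
def threat(threat_profile: int, threat_motivation: int) -> int:
    """T = TP * TM, both rated on the qualitative scale."""
    return threat_profile * threat_motivation

def probability(threat_value: int, exploitation_complexity: int) -> float:
    """P = T / EC: the harder a vulnerability is to exploit, the lower the probability."""
    return threat_value / exploitation_complexity

# Hypothetical ratings: a Very High-profile, High-motivation APT against a
# trivially exploitable vulnerability vs. a Low-profile, Medium-motivation
# script kiddie against a well-hardened target.
apt = probability(threat(32, 16), exploitation_complexity=2)            # 256.0
script_kiddie = probability(threat(4, 8), exploitation_complexity=32)   # 1.0
print(apt, script_kiddie)
```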
- Inherent Risk (IR) – Inherent Risk is the risk the organization faces from any given threat before controls have been implemented. Inherent risk is calculated by multiplying Probability times Impact (IR = P * I), which brings us back to the standard formula that most of us have been taught over the years. Unfortunately, many of the tools I have seen in use stop here and fail to consider controls which reduce the amount of risk the organization faces. As such, they end up with a skewed view of their overall risk. Moreover, if an organization does not consider the controls it has in place when evaluating risks, it may incorrectly prioritize its risks and fail to implement critical controls where they are needed most.
- Control Value (CV) – iGsrc includes a long list of controls which are suggested by various Authoritative Sources (frameworks, regulations, etc.). In iGsrc these are referred to as external controls. Similar external controls from several Authoritative Sources are related to a single internal control. In this manner, an organization can evaluate the effectiveness of an internal control and see its compliance with similar controls in the various Authoritative Sources. During the risk assessment process, the risk manager must determine which controls are relevant to the threat under evaluation. It is possible that an organization does not have any controls in place to manage a given threat; this situation would, however, be less than ideal.
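To illustrate the external-to-internal mapping described above, here is a purely hypothetical sketch of how such a relationship could be represented; the identifiers, sources, and fields are invented for illustration and are not taken from iGsrc or from any particular framework.

```python
# Hypothetical structure relating one internal control to the similar
# "external" controls suggested by various Authoritative Sources.
# All identifiers below are invented for illustration.
internal_controls = {
    "IC-ACCESS-01": {
        "description": "Logical access to production systems is restricted and reviewed",
        "external_controls": [
            ("Framework A", "Access control requirement"),
            ("Regulation B", "Logical access article"),
        ],
        # Effectiveness on the same qualitative scale (here Medium = 8);
        # this value would feed into the Control Value (CV) during assessment.
        "effectiveness": 8,
    },
}
```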
- Control Effectiveness (CE1, CE2, CE….) – Control effectiveness is a measure of how well any given control will be able to reduce the impact of a Threat. For example, a shelter designed to withstand an EF2 tornado would be very effective against an EF1 tornado but potentially useless against an EF5 tornado. Each control is scored based on its effectiveness and then all of the controls identified for any given threat are added together to give us the CV score (CV = CE1 + CE2 + CE3 + CE….)
- Residual Risk (RR) – Residual Risk is the amount of risk left after the controls have been applied. If no controls are identified, the Residual Risk will equal the Inherent Risk. While not optimal, this may be the case when the impact of a successful attack is lower than the cost of implementing a control. For example, I could purchase Tornado insurance for my house which would cost me about €200/year. Since, however, I live in an area that has never experienced any tornado activity, the damage I am likely to incur during any given year is €0. As such, my choice is simply to accept the risk and use the money for another purpose.
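Putting the last three definitions together, a minimal sketch (again in Python, with illustrative function names) of Inherent Risk, Control Value, and Residual Risk might look like the following; note the fallback to CV = 1 when no controls apply, which keeps the Residual Risk equal to the Inherent Risk as described above.

```python
def inherent_risk(probability: float, impact: float) -> float:
    """IR = P * I (risk before controls are considered)."""
    return probability * impact

def control_value(effectiveness_scores: list[float]) -> float:
    """CV = CE1 + CE2 + ...; defaults to 1 when no controls apply,
    so that Residual Risk equals Inherent Risk."""
    return sum(effectiveness_scores) or 1

def residual_risk(ir: float, cv: float) -> float:
    """RR = IR / CV (risk remaining after controls are applied)."""
    return ir / cv

print(residual_risk(inherent_risk(32.0, 70), control_value([8, 4])))  # 186.67
print(residual_risk(inherent_risk(32.0, 70), control_value([])))      # 2240.0 (no controls)
```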
Bringing it all together
Based on the discussion above, we have the following formulae:
- A Threat (T) is the product of the Threat Profile (TP) multiplied by the Threat Motivation (TM): T = TP * TM
- Probability (P) is the quotient of the Threat (T) divided by the Exploitation Complexity (EC): P = T / EC
- The Impact (I) is the sum of the impacts to Confidentiality (IC), Integrity (II), Availability (IA), Non-Repudiation (INR), Regulatory Violations (IRV), Personal Rights Violations (IPV), Personal Injury (IPI), Data Privacy Violations (IDP), Operational Impacts (IO), Financial Loss (IFL), Image Damage (IID): I = IC + II + IA + INR + IRV + IPV + IPI + IDP + IO + IFL + IID
- Inherent Risk (IR) is the product of Probability (P) multiplied by the Impact (I): IR = P * I
- The Control Value (CV) is the sum of the Control Effectiveness (CE) of the applicable controls. It should be noted that there may be no applicable controls, in which case CV = 1 (because division by zero is not allowed): CV = CE1 + CE2 + CE3 + …
- Residual Risk is the quotient of Inherent Risk (IR) divided by the Control Value (CV): RR = IR / CV
Taken together this gives us the following formula: RR = IR / CV = (P * I) / CV = ((TP * TM / EC) * I) / CV
While imperfect and incomplete, this formula allows us to calculate Information Security Risk in a consistent manner. Moreover, when an auditor asks me to explain how I arrived at the risk ratings I am presenting, I can talk them through this formula. I am continuing to investigate whether these formulae and the values I’ve used in the tool deliver practical real-world results (so far they do), but, ultimately, iGsrc is only a tool and this is only a formula. Without a solid understanding of Information Security Risk Management, the tool will not yield useful, repeatable results.
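To show the whole chain in one place, here is a short worked example using hypothetical ratings of my own choosing; the scenario and numbers are illustrative only, and how the resulting Residual Risk value is banded back into high, medium, or low is exactly the adjustable weighting discussed earlier.

```python
# A hedged, end-to-end walk through RR = ((TP * TM / EC) * I) / CV
# using hypothetical ratings; the scenario and numbers are illustrative only.

TP = 16            # Threat Profile: High (capable external attacker)
TM = 8             # Threat Motivation: Medium
EC = 4             # Exploitation Complexity: Low (unpatched, internet-facing system)
impacts = [4, 8, 16, 2, 8, 1, 1, 8, 4, 8, 8]   # the 11 impact areas, each rated 1-32
CE = [8, 4]        # effectiveness of the two applicable controls

T = TP * TM                     # Threat        = 128
P = T / EC                      # Probability   = 32.0
I = sum(impacts)                # Impact        = 68
IR = P * I                      # Inherent Risk = 2176.0
CV = sum(CE) or 1               # Control Value = 12
RR = IR / CV                    # Residual Risk ~ 181.3
print(f"IR={IR}, CV={CV}, RR={RR:.1f}")
```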
In this paper I have outlined the methodology implemented in iGsrc to calculate Information Security Risk. I’ve also gone through the vocabulary required to discuss Information Security Risk Management (ISRM).