15 changes: 15 additions & 0 deletions docs/howto/gathering_info/automatable.md
@@ -0,0 +1,15 @@
# Gathering Information about Automatable

An analyst should be able to sketch the automation scenario and how it either does or does not satisfy each of the four kill chain steps.
Once one step is not satisfied, the analyst can stop and select [*no*](automatable.md).
Code that demonstrably automates all four kill chain steps certainly satisfies as a sketch.
We say sketch to indicate that plausible arguments, such as convincing pseudocode of an automation pathway for each step, are also adequate evidence in favor of a [*yes*](automatable.md) to *Automatable*.

Like all SSVC decision points, *Automatable* should capture the analyst's best understanding of plausible scenarios at the time of the analysis.
An answer of *no* does not mean that it is absolutely inconceivable to automate exploitation in any scenario.
It means the analyst is not able to sketch a plausible path through all four kill chain steps.
“Plausible” sketches should account for widely deployed network and host-based defenses.
Liveness of Internet-connected services means quite a few overlapping things [@bano2018scanning].
For most vulnerabilities, an open port does not automatically mean that reconnaissance, weaponization, and delivery are automatable.
Furthermore, discovery of a vulnerable service is not automatable in a situation where only two hosts are misconfigured to expose the service out of 2 million hosts that are properly configured.
As discussed in [Reasoning Steps Forward](../../topics/scope.md), the analyst should consider *credible* effects based on *known* use cases of the software system to be pragmatic about scope when providing values to decision points.
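
The stop-at-first-failure walk through the kill chain described above can be sketched as follows; the step names and per-step judgments are illustrative inputs supplied by the analyst, not part of SSVC itself:

```python
# Sketch of the stop-at-first-failure kill chain walk described above.
# The step names and analyst judgments are illustrative assumptions.
KILL_CHAIN_STEPS = ["reconnaissance", "weaponization", "delivery", "exploitation"]

def automatable(step_is_automatable: dict) -> str:
    """Return "yes" only if every kill chain step can plausibly be automated."""
    for step in KILL_CHAIN_STEPS:
        if not step_is_automatable.get(step, False):
            # One unsatisfied step is enough: stop and select *no*.
            return "no"
    return "yes"

# Example: delivery cannot plausibly be automated for this vulnerability.
print(automatable({"reconnaissance": True, "weaponization": True,
                   "delivery": False, "exploitation": True}))  # no
```

Note that the function defaults a missing step to "not automatable," mirroring the guidance that the analyst must be able to sketch a plausible path through all four steps before answering *yes*.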
23 changes: 23 additions & 0 deletions docs/howto/gathering_info/exploitation.md
@@ -0,0 +1,23 @@
# Gathering Information About Exploitation

[@householder2020historical] presents a method for searching the GitHub repositories of open-source exploit databases.
This method could be employed to gather information about whether *PoC* is true.
However, part (3) of *PoC* would not be represented in such a search, so more information gathering would be needed.
For part (3), one approach is to construct a mapping of CWE-IDs that always represent vulnerabilities with well-known methods of exploitation.
We provide a list of possible CWE-IDs for this purpose below.
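
A minimal sketch of such a mapping-based check, with placeholder CWE-IDs standing in for the actual list:

```python
# Sketch of the CWE-ID mapping check for part (3) of *PoC*.
# The IDs below are illustrative placeholders, not the actual SSVC list.
WELL_KNOWN_EXPLOIT_CWES = {"CWE-798", "CWE-259"}  # hard-coded credential weaknesses

def poc_via_cwe(vulnerability_cwes: set) -> bool:
    """True if any CWE assigned to the vulnerability implies a
    well-known method of exploitation."""
    return bool(vulnerability_cwes & WELL_KNOWN_EXPLOIT_CWES)

print(poc_via_cwe({"CWE-798"}))  # True
print(poc_via_cwe({"CWE-79"}))   # False
```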

Gathering information for *active* is a bit harder.
If the vulnerability has a name or public identifier (such as a CVE-ID), a search of news websites, Twitter, the vendor's vulnerability description, and public vulnerability databases for mentions of exploitation is generally adequate.
However, if the organization has the ability to detect exploitation attempts—for instance, through reliable and precise IDS signatures based on a public *PoC*—then detection of exploitation attempts also signals that *active* is the right choice.
Determining which vulnerability a novel piece of malware uses may be time consuming, requiring reverse engineering and a lot of trial and error.
Additionally, capable incident detection and analysis capabilities are required to make reverse engineering possible.
Because most organizations do not conduct these processes fully for most incidents, information about which vulnerabilities are being actively exploited generally comes from public reporting by organizations that do conduct these processes.
As long as those organizations also share detection methods and signatures, the results are usually quickly corroborated by the community.
For these reasons, we assess public reporting by established security community members to be a good information source for *active*; however, one should not assume it is complete.

The description for *none* says that there is no **evidence** of *active* exploitation.
This framing admits that an analyst may not be able to detect or know about every attack.
Acknowledging that *Exploitation* values can change relatively quickly, we recommend conducting these searches frequently: if they can be automated to the organization's satisfaction, perhaps once a day (see also [Guidance on Communicating Results](../../howto/bootstrap/use.md)).
An analyst should feel comfortable selecting *none* if they (or their search scripts) have performed searches in the appropriate places for public *PoC*s and *active* exploitation (as described above) and found *none*.
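
The way the evidence discussed in this section combines into an *Exploitation* value can be sketched as follows; the boolean inputs are assumptions standing in for the search and detection results described above:

```python
# Sketch of how gathered evidence maps to an *Exploitation* value.
# The boolean inputs stand in for the searches and detections above.
def exploitation_value(active_reports: bool, exploit_detected: bool,
                       public_poc: bool, cwe_implies_poc: bool) -> str:
    if active_reports or exploit_detected:
        return "active"
    if public_poc or cwe_implies_poc:
        return "poc"
    # No evidence found in the appropriate places: select *none*,
    # and repeat the searches frequently (perhaps daily if automated).
    return "none"

print(exploitation_value(False, False, True, False))  # poc
```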
9 changes: 9 additions & 0 deletions docs/howto/gathering_info/mission_impact.md
@@ -0,0 +1,9 @@
# Gathering Information About Mission Impact

The factors that influence the mission impact level are diverse.
The material here does not exhaustively discuss how a stakeholder should answer a question; that is a topic for future work.
At a minimum, understanding mission impact should include gathering information about the critical paths that involve vulnerable components, viability of contingency measures, and resiliency of the systems that support the mission.
There are various sources of guidance on how to gather this information; see for example the FEMA guidance in Continuity Directive 2 [@FCD2_2017] or OCTAVE FORTE [@tucker2018octave].
This is part of risk management more broadly.
It should require the vulnerability management team to interact with more senior management to understand mission priorities and other aspects of risk mitigation.

24 changes: 24 additions & 0 deletions docs/howto/gathering_info/system_exposure.md
@@ -0,0 +1,24 @@
# Gathering Information About System Exposure

*System Exposure* is primarily used by Deployers, so the question is about whether some specific system is in fact exposed, not a hypothetical or aggregate question about systems of that type.
Therefore, it generally has a concrete answer, even though it may vary from vulnerable component to vulnerable component, based on their respective configurations.

*System Exposure* can be readily informed by network scanning techniques.
For example, if the vulnerable component is visible on [Shodan](https://www.shodan.io) or by some other external scanning service, then it is *open*.
Network policy or diagrams are also useful information sources, especially for services intentionally open to the Internet such as public web servers.
An analyst should also choose *open* for a phone or PC that connects to the web or email without the usual protections (IP and URL blocking, updated firewalls, etc.).

Distinguishing between *small* and *controlled* is more nuanced.
If *open* has been ruled out, some suggested heuristics for differentiating the other two are as follows.
Apply these heuristics in order and stop when one of them applies.

- If the system's networking and communication interfaces have been physically removed or disabled, choose *small*.
- If [*Automatable*](automatable.md) is [*yes*](automatable.md), then choose *controlled*. The reasoning behind this heuristic is that if reconnaissance through exploitation is automatable, then the usual deployment scenario exposes the system sufficiently that access can be automated, which contradicts the expectations of *small*.
- If the vulnerable component is on a network where other hosts can browse the web or receive email, choose *controlled*.
- If the vulnerable component is in a third party library that is unreachable because the feature is unused in the surrounding product, choose *small*.
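
The ordered, stop-at-first-match application of these heuristics can be sketched as follows, with boolean inputs standing in for the analyst's findings:

```python
# Sketch of the ordered heuristics for *small* vs. *controlled*,
# applied only after *open* has been ruled out. Inputs are stand-ins
# for the analyst's findings.
def small_or_controlled(interfaces_disabled: bool, automatable_yes: bool,
                        neighbors_browse_or_email: bool,
                        library_unreachable: bool) -> str:
    if interfaces_disabled:
        return "small"       # networking physically removed or disabled
    if automatable_yes:
        return "controlled"  # automatable access contradicts *small*
    if neighbors_browse_or_email:
        return "controlled"
    if library_unreachable:
        return "small"       # vulnerable code unreachable in the product
    return "undetermined"    # no heuristic applied; judge directly

print(small_or_controlled(False, True, False, False))  # controlled
```

The order matters: an *Automatable* value of *yes* takes precedence over the unreachable-library heuristic, which is why the sketch checks the heuristics in sequence rather than scoring them independently.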

The unreachable vulnerable component scenario may be a point of concern for stakeholders like patch suppliers who often find it more cost-effective to simply update the included library to an existing fixed version rather than try to explain to customers why the vulnerable code is unreachable in their own product.
In those cases, we suggest the stakeholder review the decision outcomes of the tree to ensure the appropriate action is taken (paying attention to [*defer*](../../howto/supplier_tree.md) vs. [*scheduled*](../../howto/supplier_tree.md), for example).

If you have suggestions for further heuristics, or potential counterexamples to these, please describe the example and reasoning in an issue on the [SSVC GitHub](https://github.com/CERTCC/SSVC/issues).

16 changes: 16 additions & 0 deletions docs/howto/gathering_info/technical_impact.md
@@ -0,0 +1,16 @@
# Gathering Information About Technical Impact

Assessing *Technical Impact* amounts to assessing the degree of control over the vulnerable component the attacker stands to gain by exploiting the vulnerability.
One way to approach this analysis is to ask whether the control gained is *total* or not.
If it is not total, it is *partial*.
If an answer to one of the following questions is _yes_, then control is *total*.
After exploiting the vulnerability,

- can the attacker install and run arbitrary software?
- can the attacker trigger all the actions that the vulnerable component can perform?
- does the attacker get an account with full privileges to the vulnerable component (administrator or root user accounts, for example)?

This list is an evolving set of heuristics.
If you find a vulnerability that should have *total* *Technical Impact* but that does not answer yes to any of
these questions, please describe the example and what question we might add to this list in an issue on the
[SSVC GitHub](https://github.com/CERTCC/SSVC/issues).
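
The three questions above reduce to a simple any-of check, sketched here with analyst-supplied booleans:

```python
# Sketch of the *total* vs. *partial* check: answering yes to any of
# the three questions above means control is *total*.
def technical_impact(installs_arbitrary_software: bool,
                     triggers_all_actions: bool,
                     gets_full_privilege_account: bool) -> str:
    if (installs_arbitrary_software or triggers_all_actions
            or gets_full_privilege_account):
        return "total"
    return "partial"

print(technical_impact(False, True, False))  # total
```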
15 changes: 15 additions & 0 deletions docs/howto/gathering_info/value_density.md
@@ -0,0 +1,15 @@
# Gathering Information About Value Density

The heuristics presented in the *Value Density* definitions involve whether the system is usually maintained by a dedicated professional, although we have noted some exceptions (such as encrypted mobile messaging applications).
If there are additional counterexamples to this heuristic, please describe them and the reasoning why the system should have the alternative decision value in an issue on the [SSVC GitHub](https://github.com/CERTCC/SSVC/issues).

An analyst might use market research reports or Internet telemetry data to assess an unfamiliar product.
Organizations such as Gartner produce research on the market position and product comparisons for a large variety of systems.
These generally identify how a product is deployed, used, and maintained.
An organization's own marketing materials are a less reliable indicator of how a product is used, or at least how the organization expects it to be used.

Network telemetry can inform how many instances of a software system are connected to a network.
Such telemetry is most reliable for the supplier of the software, especially if software licenses are purchased and checked.
Measuring how many instances of a system are in operation is useful, but having more instances does not mean that the software is a densely valuable target.
However, market penetration greater than approximately 75% generally means that the product uniquely serves a particular market segment or purpose.
This line of reasoning is what supports a determination that a ubiquitous encrypted mobile messaging application should be considered to have a *concentrated* Value Density.
16 changes: 0 additions & 16 deletions docs/reference/decision_points/automatable.md
@@ -43,22 +43,6 @@ Due to vulnerability chaining, there is some nuance as to whether reconnaissance
This automates the _reconnaissance_ of vulnerable systems.
In this situation, the analyst should continue to analyze vulnerability A to understand whether the remaining steps in the kill chain can be automated.

!!! tip "Gathering Information About Automatable"

An analyst should be able to sketch the automation scenario and how it either does or does not satisfy each of the four kill chain steps.
Once one step is not satisfied, the analyst can stop and select [*no*](automatable.md).
Code that demonstrably automates all four kill chain steps certainly satisfies as a sketch.
We say sketch to indicate that plausible arguments, such as convincing pseudocode of an automation pathway for each step, are also adequate evidence in favor of a [*yes*](automatable.md) to *Automatable*.

Like all SSVC decision points, *Automatable* should capture the analyst's best understanding of plausible scenarios at the time of the analysis.
An answer of *no* does not mean that it is absolutely inconceivable to automate exploitation in any scenario.
It means the analyst is not able to sketch a plausible path through all four kill chain steps.
“Plausible” sketches should account for widely deployed network and host-based defenses.
Liveness of Internet-connected services means quite a few overlapping things [@bano2018scanning].
For most vulnerabilities, an open port does not automatically mean that reconnaissance, weaponization, and delivery are automatable.
Furthermore, discovery of a vulnerable service is not automatable in a situation where only two hosts are misconfigured to expose the service out of 2 million hosts that are properly configured.
As discussed in [Reasoning Steps Forward](../../topics/scope.md), the analyst should consider *credible* effects based on *known* use cases of the software system to be pragmatic about scope when providing values to decision points.

## Prior Versions

```python exec="true" idprefix=""
24 changes: 0 additions & 24 deletions docs/reference/decision_points/exploitation.md
@@ -9,30 +9,6 @@ print(example_block(LATEST))

The intent of this measure is the present state of exploitation of the vulnerability. The intent is not to predict future exploitation but only to acknowledge the current state of affairs. Predictive systems, such as EPSS, could be used to augment this decision or to notify stakeholders of likely changes [@jacobs2021epss].

!!! tip "Gathering Information About Exploitation"

[@householder2020historical] presents a method for searching the GitHub repositories of open-source exploit databases.
This method could be employed to gather information about whether *PoC* is true.
However, part (3) of *PoC* would not be represented in such a search, so more information gathering would be needed.
For part (3), one approach is to construct a mapping of CWE-IDs that always represent vulnerabilities with well-known methods of exploitation.
We provide a list of possible CWE-IDs for this purpose below.

Gathering information for *active* is a bit harder.
If the vulnerability has a name or public identifier (such as a CVE-ID), a search of news websites, Twitter, the vendor's vulnerability description, and public vulnerability databases for mentions of exploitation is generally adequate.
However, if the organization has the ability to detect exploitation attempts—for instance, through reliable and precise IDS signatures based on a public *PoC*—then detection of exploitation attempts also signals that *active* is the right choice.
Determining which vulnerability a novel piece of malware uses may be time consuming, requiring reverse engineering and a lot of trial and error.
Additionally, capable incident detection and analysis capabilities are required to make reverse engineering possible.
Because most organizations do not conduct these processes fully for most incidents, information about which vulnerabilities are being actively exploited generally comes from public reporting by organizations that do conduct these processes.
As long as those organizations also share detection methods and signatures, the results are usually quickly corroborated by the community.
For these reasons, we assess public reporting by established security community members to be a good information source for *active*; however, one should not assume it is complete.

The description for *none* says that there is no **evidence** of *active* exploitation.
This framing admits that an analyst may not be able to detect or know about every attack.
Acknowledging that *Exploitation* values can change relatively quickly, we recommend conducting these searches frequently: if they can be automated to the organization's satisfaction, perhaps once a day (see also [Guidance on Communicating Results](../../howto/bootstrap/use.md)).
An analyst should feel comfortable selecting *none* if they (or their search scripts) have performed searches in the appropriate places for public *PoC*s and *active* exploitation (as described above) and found *none*.

## CWE-IDs for *PoC*

The table below lists CWE-IDs that could be used to mark a vulnerability as *PoC* if the vulnerability is described by the CWE-ID.
9 changes: 0 additions & 9 deletions docs/reference/decision_points/mission_impact.md
@@ -30,15 +30,6 @@ Private sector businesses may better align with [operational and financial impac
While the processes, terminology, and audience for these different frameworks differ, they all can provide a sense of the criticality of an asset or assets within the scope of the stakeholder conducting the cyber vulnerability prioritization with SSVC.
In that sense they all function quite similarly within SSVC. Organizations should use whatever is most appropriate for their stakeholder context, with Mission Essential Function analysis serving as a fully worked example in the SSVC documents.

## Gathering Information About Mission Impact

The factors that influence the mission impact level are diverse.
The material here does not exhaustively discuss how a stakeholder should answer a question; that is a topic for future work.
At a minimum, understanding mission impact should include gathering information about the critical paths that involve vulnerable components, viability of contingency measures, and resiliency of the systems that support the mission.
There are various sources of guidance on how to gather this information; see for example the FEMA guidance in Continuity Directive 2 [@FCD2_2017] or OCTAVE FORTE [@tucker2018octave].
This is part of risk management more broadly.
It should require the vulnerability management team to interact with more senior management to understand mission priorities and other aspects of risk mitigation.

## Prior Versions

```python exec="true" idprefix=""
Expand Down