Commit 5644994

docs: add a specification for finding determination

The finding specification details how to use the EvaluationPlan and deconfliction strategies to determine findings for Layer 5 action.

Assisted by: Cursor Agent
Signed-off-by: Jennifer Power <[email protected]>

1 parent f8cc1d1

2 files changed, +252 −0 lines changed
spec/README.md

Lines changed: 11 additions & 0 deletions

# Specifications

This directory contains specifications for Gemara.

## Specifications

- **[Finding Determination Specification](finding-determination.md)** - Defines how assessment results are aggregated and how conflicts between multiple evaluators are resolved. Covers result types, conflict resolution strategies, and use cases for GRC Engineering workflows.

## Overview

The specifications in this directory describe the machine-readable formats and behavioral rules for the Gemara model layers.

spec/finding-determination.md

Lines changed: 241 additions & 0 deletions

# Finding Determination Specification

<!-- TOC -->
* [Finding Determination Specification](#finding-determination-specification)
  * [Abstract](#abstract)
  * [Overview](#overview)
  * [Notations and Terminology](#notations-and-terminology)
    * [Notational Conventions](#notational-conventions)
    * [Terminology](#terminology)
      * [Finding](#finding)
      * [Evaluator](#evaluator)
      * [Conflict Resolution Strategy](#conflict-resolution-strategy)
      * [Authoritative Evaluator](#authoritative-evaluator)
  * [Result Types](#result-types)
  * [Conflict Resolution Strategies](#conflict-resolution-strategies)
    * [Strict Strategy](#strict-strategy)
    * [ManualOverride Strategy](#manualoverride-strategy)
    * [AuthoritativeConfirmation Strategy](#authoritativeconfirmation-strategy)
  * [Use Cases](#use-cases)
    * [Strategy Selection Anti-Patterns](#strategy-selection-anti-patterns)
<!-- TOC -->
## Abstract

This specification defines how findings are determined when multiple assessment evaluators and procedures provide results for the same assessment requirement in Layer 4 Evaluation Plans.
It specifies conflict resolution strategies and result type semantics to ensure consistent finding determination across implementations.

## Overview

Layer 4 Evaluation Plans support multiple assessment evaluators running assessment procedures to evaluate control requirements.
When multiple evaluators run the same procedure, or multiple procedures evaluate the same requirement, conflict resolution strategies determine how their results are combined into a single finding determination.
This specification provides a formal definition of these strategies to ensure consistent and predictable behavior across implementations.
## Notations and Terminology

### Notational Conventions

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119).

### Terminology

This specification defines the following terms:
#### Finding

A finding is a documented observation that a Layer 2 control requirement, as referenced by a Layer 3 policy, is not being met or is not implemented correctly.

A finding is determined when one or more evaluator results indicate non-compliance with the assessed control requirement. Only results of `Failed`, `Unknown`, or `NeedsReview` constitute findings;
`Passed` and `NotApplicable` results indicate compliance or inapplicability and do not produce findings.
Findings **MUST** be supported by evidence from assessment logs that document the evaluation results, and they serve as the basis for Layer 5 enforcement actions and remediation activities.
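The result classification above can be sketched as a small predicate (an illustration only; the function and constant names are assumptions of this sketch, not part of the specification):

```python
# Results that constitute a finding, per the definition above.
FINDING_RESULTS = {"Failed", "Unknown", "NeedsReview"}

def is_finding(result):
    """Return True when a single evaluator result indicates non-compliance."""
    return result in FINDING_RESULTS
```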
#### Evaluator

An evaluator is a tool, process, or person that executes an assessment procedure and produces a result.

#### Conflict Resolution Strategy

A conflict resolution strategy is an algorithm that determines how multiple evaluator results are combined to produce a single finding determination.
#### Authoritative Evaluator

Evaluator authoritativeness determines how an evaluator participates in conflict resolution when using the `AuthoritativeConfirmation` strategy.
Evaluators can be marked as `authoritative` (can trigger findings independently) or `non-authoritative` (requires confirmation from authoritative evaluators to trigger findings).

The distinction between authoritative and non-authoritative evaluators maps directly to enforcement readiness:

- **Authoritative evaluators** represent procedures that are ready for enforcement. When authoritative evaluators report failures, findings are determined and can trigger enforcement actions (blocking, remediation, alerts).
- **Non-authoritative evaluators** represent procedures that collect risk data but are not ready to trigger enforcement. Non-authoritative evaluators provide risk visibility and inform decision-making, but their failures do not independently trigger enforcement actions.

**Default Behavior**: If the `authoritative` field is not explicitly set, evaluators default to `authoritative: false` (non-authoritative).
However, since the default conflict resolution strategy is `Strict`, this default has no effect on finding determination: `Strict` ignores the `authoritative` field and treats all evaluators equally.
## Result Types

The following result types are used in Layer 4:

- **NotRun**: The assessment was not executed
- **Passed**: The assessment passed successfully
- **Failed**: The assessment failed
- **NeedsReview**: The assessment requires manual review
- **NotApplicable**: The assessment is not applicable to the current context
- **Unknown**: The assessment result is unknown or indeterminate
## Conflict Resolution Strategies

For aggregating results within a single log (e.g., multiple steps within an assessment), implementations **MUST** use severity-based determination, where the most severe result takes precedence according to the hierarchy: `Failed > Unknown > NeedsReview > Passed > NotApplicable`.
If all results are `NotRun`, a finding **MUST NOT** be determined. This severity-based aggregation strategy **MUST** be used if a finding determination is inconclusive using the other strategies.

Three multi-source conflict resolution strategies are defined in this specification. Implementations **MUST** support all three.
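The single-log aggregation rule can be sketched as follows (a minimal illustration, not a normative implementation; representing results as plain strings and the name `aggregate` are assumptions of this sketch):

```python
# Severity hierarchy from the specification, least to most severe.
SEVERITY = ["NotApplicable", "Passed", "NeedsReview", "Unknown", "Failed"]

def aggregate(results):
    """Severity-based aggregation for a single log: the most severe
    result wins. NotRun results are excluded; if every result is
    NotRun, no determination is made (returns None)."""
    executed = [r for r in results if r != "NotRun"]
    if not executed:
        return None
    return max(executed, key=SEVERITY.index)
```

For example, `aggregate(["Passed", "Unknown", "NeedsReview"])` yields `"Unknown"`, which constitutes a finding per the result-type semantics above.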
### Strict Strategy

The Strict strategy determines that a finding exists if ANY evaluator reports a failure, regardless of other evaluator results. This strategy provides zero tolerance for failures and is the simplest conflict resolution approach.

**Security-First Design**: `Strict` applies uniform zero-tolerance logic to all non-passing results (`Failed`, `Unknown`, `NeedsReview`). This makes `Strict` ideal for organizations that want predictable, consistent behavior and absolute zero tolerance for security violations, ensuring that any evaluator reporting a problem triggers a finding.

**Process**: When using the Strict strategy, a finding **MUST** be determined according to the following priority order:

1. If ANY evaluator reports `Failed`, then **Finding exists** (Failed)
2. Else if ANY evaluator reports `Unknown`, then **Finding exists** (Unknown)
3. Else if ANY evaluator reports `NeedsReview`, then **Finding exists** (NeedsReview)
4. Else if ALL evaluator results are `Passed`, then **No finding** (Passed)
5. Else if ALL evaluator results are `NotApplicable`, then **No finding** (NotApplicable)

Evaluators with `NotRun` results **MUST** be excluded from the determination process. All evaluators are treated equally when using the Strict strategy.
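The priority order above can be sketched as follows (an illustration only; a mix of `Passed` and `NotApplicable` results is not covered by steps 4-5, so this sketch resolves it via the severity fallback, which yields `Passed` - an assumption, not normative text):

```python
def strict_finding(results):
    """Strict strategy sketch. Returns (finding_exists, determined_result),
    or None when every result is NotRun."""
    executed = [r for r in results if r != "NotRun"]  # NotRun excluded per spec
    if not executed:
        return None
    # Steps 1-3: any non-passing result triggers a finding.
    for severe in ("Failed", "Unknown", "NeedsReview"):
        if severe in executed:
            return (True, severe)
    # Step 5: all results are NotApplicable.
    if all(r == "NotApplicable" for r in executed):
        return (False, "NotApplicable")
    # Step 4, plus mixed Passed/NotApplicable resolved by the severity fallback.
    return (False, "Passed")
```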
### ManualOverride Strategy

The ManualOverride strategy gives precedence to manual review evaluators over automated evaluators when determining findings from conflicting results.

**Process**: When using the ManualOverride strategy:

1. Separate results into manual and automated evaluator results based on the evaluator's ExecutionType (`Manual` vs `Automated`).
2. If manual evaluators exist:
   - If any manual evaluator reports `Failed`: **Finding exists** (Failed)
   - Else if any manual evaluator reports `Unknown`: **Finding exists** (Unknown)
   - Else if any manual evaluator reports `NeedsReview`: **Finding exists** (NeedsReview)
   - Else if all manual evaluators report `Passed`: **No finding** (Passed)
   - Else: **No finding** (NotApplicable)
3. If no manual evaluators exist, the determination falls back to the Strict strategy over the automated evaluator results.

Evaluators with `NotRun` results **MUST** be excluded from the determination process.
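A sketch of this process (illustrative only; modeling each evaluation as an `(execution_type, result)` pair is an assumption of this sketch, and the all-automated case falls back to Strict-style logic as noted in the anti-patterns section below):

```python
def manual_override_finding(evaluations):
    """ManualOverride sketch. `evaluations` is a list of
    (execution_type, result) pairs, e.g. ("Manual", "Failed").
    Returns (finding_exists, determined_result), or None if nothing ran."""
    executed = [(t, r) for t, r in evaluations if r != "NotRun"]
    if not executed:
        return None
    manual = [r for t, r in executed if t == "Manual"]
    # Manual results take precedence; with none present, all results
    # are considered (the Strict fallback).
    pool = manual if manual else [r for _, r in executed]
    for severe in ("Failed", "Unknown", "NeedsReview"):
        if severe in pool:
            return (True, severe)
    if all(r == "Passed" for r in pool):
        return (False, "Passed")
    return (False, "NotApplicable")
```

Note how a manual `Passed` overrides an automated `Failed`: the automated result never reaches the decision pool.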
### AuthoritativeConfirmation Strategy

The AuthoritativeConfirmation strategy treats non-authoritative evaluators as requiring confirmation from authoritative evaluators before triggering findings.

**Process**: When using the AuthoritativeConfirmation strategy:

1. Separate evaluators into authoritative and non-authoritative groups based on their `authoritative` field. Evaluators without an explicit `authoritative` field default to `authoritative: false` (non-authoritative).
2. Authoritative evaluators trigger findings using Strict logic:
   - If any authoritative evaluator reports `Failed`: **Finding exists** (Failed)
   - Else if any authoritative evaluator reports `Unknown`: **Finding exists** (Unknown)
   - Else if any authoritative evaluator reports `NeedsReview`: **Finding exists** (NeedsReview)
   - Else if all authoritative evaluators report `Passed`: Continue to step 3
3. Non-authoritative evaluators require confirmation:
   - If only non-authoritative evaluators report `Failed`: **No finding** (non-authoritative cannot trigger alone)
   - If a non-authoritative evaluator reports `Failed` AND an authoritative evaluator reports `Passed`: **No finding** (contradicted)
   - If a non-authoritative evaluator reports `Failed` AND an authoritative evaluator reports `Failed`: **Finding exists** (confirmed)
   - If a non-authoritative evaluator reports `Failed` AND an authoritative evaluator reports `Unknown` or `NeedsReview`: **Finding exists** (escalated for investigation)
   - If all evaluators (authoritative and non-authoritative) report `Passed`: **No finding** (Passed)
   - If all evaluators report `NotApplicable`: **No finding** (NotApplicable)

Evaluators with `NotRun` results **MUST** be excluded from the determination process.

**Key Behaviors**:

- **Authoritative evaluators** can trigger findings independently using zero-tolerance logic. If any authoritative evaluator reports a non-passing result, a finding is immediately determined, regardless of non-authoritative evaluator results.
- **Non-authoritative evaluators** can only trigger findings when:
  - They confirm an authoritative evaluator's failure (both report `Failed`)
  - They escalate an unclear authoritative result (authoritative reports `Unknown`/`NeedsReview` and non-authoritative reports `Failed`)
- **Non-authoritative evaluators cannot**:
  - Trigger findings independently (if only non-authoritative evaluators report failures, no finding is determined)
  - Override authoritative evaluators (if authoritative evaluators report `Passed`, non-authoritative failures are ignored)
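Because any non-passing authoritative result triggers a finding in step 2, the "confirmed" and "escalated" cases of step 3 are already covered there, and the process reduces to a short sketch (illustrative only; modeling evaluations as `(authoritative, result)` pairs is an assumption of this sketch):

```python
def authoritative_confirmation_finding(evaluations):
    """AuthoritativeConfirmation sketch. `evaluations` is a list of
    (authoritative, result) pairs. Returns True when a finding exists,
    False when not, and None when every result is NotRun."""
    executed = [(a, r) for a, r in evaluations if r != "NotRun"]
    if not executed:
        return None
    authoritative = [r for a, r in executed if a]
    # Step 2: any non-passing authoritative result triggers a finding
    # (this includes the confirmed and escalated cases of step 3).
    if any(r in ("Failed", "Unknown", "NeedsReview") for r in authoritative):
        return True
    # Step 3: no authoritative problem was reported, so non-authoritative
    # failures are either contradicted or cannot trigger a finding alone.
    return False
```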
## Use Cases

Layer 4 Evaluation Plans serve as the foundation for Layer 5 enforcement decisions. The evaluation results inform what enforcement actions should be taken, such as blocking deployments, triggering remediation, or generating alerts.
However, not all procedures in an Evaluation Plan need to trigger enforcement actions.

Use the following decision matrix to select the appropriate conflict resolution strategy:

| Scenario | Recommended Strategy | Rationale |
|----------|----------------------|-----------|
| **All evaluators are equally trusted and validated** | `Strict` | Zero tolerance for failures. Any evaluator reporting a problem triggers a finding. Simplest and most predictable behavior. |
| **Multiple evaluators, need human judgment for final determination** | `ManualOverride` | Manual review takes precedence. Automated tools provide initial screening, but human reviewers make final decisions. |
| **Gradual rollout of new enforcement** | `AuthoritativeConfirmation` | Start with evaluators as non-authoritative to collect baseline data. Promote to authoritative once validated. |
| **Experimental or unvalidated evaluators** | `AuthoritativeConfirmation` | Mark experimental evaluators as non-authoritative. They provide visibility but don't trigger enforcement until confirmed by authoritative evaluators. |
| **Simple, predictable behavior needed** | `Strict` | No complex logic. Any failure = finding. Easiest to understand and maintain. |
| **Automated tools need human verification** | `ManualOverride` | Automated tools flag issues, but require manual confirmation before triggering findings. |
| **Collecting metrics before enforcing** | `AuthoritativeConfirmation` | Run evaluators non-authoritatively to understand violation patterns, then promote to authoritative for enforcement. |

**Quick Decision Guide:**

```
Do you need human judgment for final decisions?
├─ YES → Use ManualOverride
└─ NO → Do you need to distinguish enforcement-ready from experimental evaluators?
    ├─ YES → Use AuthoritativeConfirmation
    └─ NO → Use Strict (default)
```
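The quick decision guide can also be phrased as a tiny helper (purely illustrative; the function and parameter names are assumptions of this sketch):

```python
def choose_strategy(need_human_judgment, distinguish_enforcement_readiness):
    """Mirror the quick decision guide: human judgment first, then
    enforcement readiness, otherwise the Strict default."""
    if need_human_judgment:
        return "ManualOverride"
    if distinguish_enforcement_readiness:
        return "AuthoritativeConfirmation"
    return "Strict"  # default
```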
A typical workflow for promoting procedures from non-authoritative to authoritative:

1. **Add to Evaluation Plan (Non-Authoritative)**
   - Add new procedure with evaluators (default is `authoritative: false`, so no explicit setting needed)
   - Use `AuthoritativeConfirmation` strategy to enable authoritative/non-authoritative behavior
   - Run evaluations to collect baseline data
   - Understand violation patterns and impact
   - Validate that the evaluator produces accurate results

2. **Assess and Remediate**
   - Analyze non-authoritative evaluator results
   - Fix critical violations discovered during baseline collection
   - Validate procedure accuracy and false positive rates
   - Ensure evaluator is ready for enforcement

3. **Promote to Authoritative**
   - Change evaluator `authoritative` field to `true` (explicit opt-in required)
   - Now triggers findings and enforcement actions independently
   - Monitor enforcement impact
   - Verify that enforcement actions are appropriate

4. **Maintain Some Procedures as Non-Authoritative**
   - Keep informational/audit-only procedures as non-authoritative (default `authoritative: false`)
   - Maintain experimental procedures as non-authoritative until validated
   - Use non-authoritative for risk visibility without enforcement
   - With `AuthoritativeConfirmation` strategy, non-authoritative evaluators require confirmation
**Example: Understanding Defaults**

```yaml
# Example 1: Default behavior (security-first)
procedures:
  - id: check-branch-protection
    evaluators:
      - id: security-scanner
        # No authoritative field = defaults to false (but ignored by Strict)
# No strategy = defaults to Strict
# Result: Any failure from security-scanner triggers a finding (Strict ignores authoritative)
```

```yaml
# Example 2: Explicit non-authoritative (opt-out)
procedures:
  - id: experimental-check
    evaluators:
      - id: new-tool
        authoritative: false  # Explicitly opt out
    strategy:
      conflict-rule-type: AuthoritativeConfirmation
# Result: new-tool provides visibility but doesn't trigger findings alone
```
### Strategy Selection Anti-Patterns

**Avoid these patterns:**

- **Using `AuthoritativeConfirmation` with all evaluators as non-authoritative** - No findings will ever be triggered. At least one authoritative evaluator is required.
- **Using `ManualOverride` when all evaluators have ExecutionType `Automated`** - Falls back to `Strict`, defeating the purpose of manual override. At least one evaluator with ExecutionType `Manual` is required for ManualOverride to function as intended.
- **Mixing strategies inconsistently** - Use the same strategy at both the Assessment and Procedure levels unless there's a clear reason for different behavior.
- **Setting `authoritative: false` without using `AuthoritativeConfirmation`** - The field is ignored by other strategies, which can be confusing.
