Goals
Use the optional goals section to record intended outcomes and relationships between them.
Elements
- `goal`: High-level objectives with `@id`, `title`, optional `priority` (must|should|may), `status`, `ownerRef`, plus `statement` and optional `rationale`.
- `qgoal`: Quality-specific goals with `@id`, `title`, optional `priority`/`status`, `statement`, and optional `metric`.
- `obstacle`: Risks to goal attainment with `@id`, `title`, optional `likelihood` and `severity`, plus `statement` and optional `mitigation`.
- `goalLink`: Edges connecting goals/obstacles via `@from`, `@to`, `type` (TraceType), optional `confidence` (0–1), and `@id`.
Authoring tips
- Keep `@id` stable; reference goals from `actors` or `requirements` using `refs`.
- Use `goalLink` to model refinement and conflict (e.g., `refines`, `conflictsWith`, `mitigates`) before deriving requirements.
- Add `metric` to `qgoal` when verifiability matters (e.g., response time, availability).
Example
<goals>
  <goal id="GOAL-AVAIL" title="High availability" priority="must" status="draft">
    <statement>Maintain payment API availability during peak shopping.</statement>
    <rationale>Protect revenue during events.</rationale>
  </goal>
  <qgoal id="QGOAL-LATENCY" title="Low latency" priority="should">
    <statement>Keep API latency low for checkout.</statement>
    <metric>p95 latency ≤ 500ms under 200 rps.</metric>
  </qgoal>
  <obstacle id="OBS-DB" title="DB contention" likelihood="medium" severity="high">
    <statement>Single DB cluster could throttle writes.</statement>
    <mitigation>Shard by merchant and add write queue.</mitigation>
  </obstacle>
  <goalLink id="GL-1" from="OBS-DB" to="GOAL-AVAIL" type="threatens" confidence="0.7"/>
</goals>
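A model like the one above also lends itself to mechanical consistency checks. The following is a minimal sketch, not part of any published API: the `GoalNode`/`GoalLink` types and `validateGoalLinks` function are illustrative names for verifying that every `goalLink` resolves to a declared `@id` and that `confidence` stays within 0–1.

```typescript
// Illustrative in-memory model of the goals section.
interface GoalNode { id: string; }
interface GoalLink { id: string; from: string; to: string; type: string; confidence?: number; }

// Returns human-readable problems; an empty list means the links are consistent.
export function validateGoalLinks(nodes: GoalNode[], links: GoalLink[]): string[] {
  const ids = new Set(nodes.map((n) => n.id));
  const problems: string[] = [];
  for (const link of links) {
    if (!ids.has(link.from)) problems.push(`${link.id}: unknown @from "${link.from}"`);
    if (!ids.has(link.to)) problems.push(`${link.id}: unknown @to "${link.to}"`);
    if (link.confidence !== undefined && (link.confidence < 0 || link.confidence > 1)) {
      problems.push(`${link.id}: confidence must be in 0-1`);
    }
  }
  return problems;
}
```

Run over the example model (`GOAL-AVAIL`, `QGOAL-LATENCY`, `OBS-DB`, `GL-1`), this returns no problems; a dangling `@from` or an out-of-range `confidence` each produce one entry.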
Code generation examples
LLMs can translate goals into architectural and operational code:
Quality goal monitoring:
// From QGOAL-LATENCY: p95 latency ≤ 500ms under 200 rps
// MetricsClient is an illustrative interface; substitute your observability SDK.
interface MetricsClient {
  histogram(name: string, value: number, tags: Record<string, unknown>): Promise<void>;
  getPercentile(name: string, p: number): Promise<number>;
  alert(message: string, context: Record<string, unknown>): void;
}

export class LatencyMonitor {
  constructor(private metrics: MetricsClient) {}

  // Record one request's duration and alert if the rolling p95 breaches the goal.
  async recordRequest(duration: number): Promise<void> {
    await this.metrics.histogram('api.latency', duration, {
      goal: 'QGOAL-LATENCY',
      threshold: 500,
    });
    const p95 = await this.metrics.getPercentile('api.latency', 0.95);
    if (p95 > 500) {
      this.metrics.alert('QGOAL-LATENCY violation', { p95 });
    }
  }
}
Availability infrastructure:
// From GOAL-AVAIL: High availability during peak shopping
export const availabilityConfig = {
  replicas: 5, // for GOAL-AVAIL
  healthCheck: {
    path: '/health',
    interval: 10000,
    timeout: 2000,
  },
  autoScaling: {
    minReplicas: 3,
    maxReplicas: 20,
    targetCPU: 70,
  },
};
Obstacle mitigation:
// From OBS-DB: DB contention mitigation via sharding
export class ShardedPaymentRepository {
  constructor(private shards: Map<string, DatabaseConnection>) {}

  // Route each merchant to a stable shard; fail loudly on a misconfigured shard map.
  getShardForMerchant(merchantId: string): DatabaseConnection {
    const shard = this.shards.get(this.hashMerchant(merchantId));
    if (!shard) throw new Error(`No shard configured for merchant ${merchantId}`);
    return shard;
  }

  // Stable hash onto the configured shard keys (illustrative scheme).
  private hashMerchant(merchantId: string): string {
    const h = [...merchantId].reduce((a, c) => (a * 31 + c.charCodeAt(0)) >>> 0, 0);
    return `shard-${h % this.shards.size}`;
  }
}
Test generation examples
Goals inform test strategy and performance benchmarks:
- Quality goal tests: Performance/load tests targeting metrics from qgoals (e.g., p95 latency tests)
- Availability tests: Chaos engineering tests, failover scenarios, health check validation
- Obstacle scenarios: Tests that simulate obstacles and verify mitigations work
- Goal conflict tests: Tests that verify trade-offs are handled appropriately
- Metric collection tests: Verify monitoring and alerting for quality goals
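As a concrete instance of the first bullet, the `metric` on QGOAL-LATENCY can be turned directly into an executable assertion over measured latencies. A minimal sketch, assuming latency samples in milliseconds from a load tool; the nearest-rank `percentile` helper is an illustrative choice, not a standard API:

```typescript
// Nearest-rank percentile of a sample, p in (0, 1].
export function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.min(sorted.length - 1, Math.ceil(p * sorted.length) - 1);
  return sorted[Math.max(0, rank)];
}

// From QGOAL-LATENCY: p95 latency ≤ 500ms under 200 rps.
export function meetsLatencyGoal(latenciesMs: number[]): boolean {
  return percentile(latenciesMs, 0.95) <= 500;
}
```

A test runner would feed `meetsLatencyGoal` the samples collected at 200 rps and fail the build when the goal's threshold is exceeded, keeping the benchmark traceable to QGOAL-LATENCY.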
Theory
- Goals represent stakeholder intentions; refining goals into requirements follows KAOS and i* goal-oriented RE practices.
- Quality goals need measurable criteria (ISO/IEC 25010 quality attributes) to avoid vagueness.
- Obstacles and conflicts align with risk/threat modeling; links capture rationale and traceability (IEEE 29148).
- Bibliography: KAOS, i* Framework, ISO/IEC 25010, IEEE 29148-2018.