There is no single universal KPI for non-conformance (NC) management effectiveness in aerospace. Mature sites rely on a small, coherent set of metrics across three areas: defect occurrence, NC process performance, and corrective/preventive effectiveness. Exact targets and thresholds are site-specific and depend heavily on data quality, integration, and process discipline.
1. Defect occurrence & non-conformance volume
These KPIs show how often non-conformances are created and where they come from. They measure outcome quality, not process speed.
- NC rate per unit / per operation
Examples: NCs per aircraft, per engine, per 1,000 hours of labor, or per 1,000 operations. Useful to normalize across programs and volumes.
- First-pass yield (FPY) / rolled throughput yield (RTY)
FPY and RTY are not NC-specific metrics, but sustained low FPY combined with high NC volume usually indicates ineffective prevention and weak process capability.
- NCs by source and severity
Breakdown by process, cell, commodity, supplier, design vs manufacturing origin, and criticality class (e.g., safety/flight-critical vs cosmetic). This shows whether your NC system is surfacing meaningful risk or just low-impact issues.
- Repeat NC rates by characteristic or failure mode
Percentage of NCs tied to previously seen defect codes, characteristics, or failure modes. A high repeat rate suggests weak corrective/preventive action.
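The occurrence metrics above reduce to simple ratios. A minimal sketch in Python, where the defect codes, counts, and function names are illustrative rather than taken from any specific QMS:

```python
import math

# Normalized NC rate: NCs per `per` operations (or labor hours, units, ...).
def nc_rate(nc_count: int, denominator: int, per: int = 1000) -> float:
    return nc_count * per / denominator if denominator else 0.0

# Rolled throughput yield: product of per-step first-pass yields.
def rolled_throughput_yield(step_fpy: list[float]) -> float:
    return math.prod(step_fpy)

# Repeat NC rate: share of NCs whose defect code appeared on an earlier NC,
# given codes in chronological order.
def repeat_nc_rate(defect_codes_in_order: list[str]) -> float:
    seen: set[str] = set()
    repeats = 0
    for code in defect_codes_in_order:
        if code in seen:
            repeats += 1
        seen.add(code)
    return repeats / len(defect_codes_in_order) if defect_codes_in_order else 0.0

rate = nc_rate(18, 12_000)                           # 1.5 NCs per 1,000 operations
rty = rolled_throughput_yield([0.98, 0.95, 0.97])    # ~0.903
repeat = repeat_nc_rate(["D-104", "D-221", "D-104", "D-330", "D-221"])  # 0.4
```

Note that RTY drops quickly as steps accumulate, which is why a single-station FPY can look healthy while the rolled yield does not.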
2. Non-conformance workflow performance
These KPIs reflect how efficiently and consistently NCs are processed from detection through disposition, in the context of aerospace controls and approvals.
- NC cycle time (end-to-end)
Median and distribution from detection to closure, segmented by severity and part criticality. Long tails may reflect engineering bottlenecks, MRB overload, or system integration gaps. Targets must account for required reviews, signoffs, and regulatory documentation.
- Time in each stage
Detection to NC creation; creation to containment; containment to disposition; disposition to implementation/verification. Useful to see whether delays come from data entry, engineering review, MRB, or shop-floor execution.
- Open NC backlog and aging
Number of open NCs and aging buckets (e.g., 0–7 days, 8–30, 31–90, >90), separated by risk level. Aging critical NCs can point to systemic capacity or governance issues.
- NC rework / scrap proportion
Percentage of NCs resulting in rework, repair, scrap, use-as-is, or concession. Shifts over time can indicate changes in design robustness, process capability, or MRB behavior.
- Cost of poor quality (COPQ) attributable to NCs
Labor, material, and indirect cost tied to NC-related rework, scrap, concessions, and delays. COPQ accuracy strongly depends on accounting granularity and integration between MES, ERP, and quality systems.
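The cycle-time, aging, and disposition-mix metrics above can be sketched from basic NC records. Severity labels, bucket edges, and disposition names below mirror the text but are otherwise illustrative:

```python
from collections import Counter, defaultdict
from statistics import median

# Median end-to-end cycle time (days), segmented by severity class.
def median_cycle_time(records: list[tuple[str, float]]) -> dict[str, float]:
    by_sev: dict[str, list[float]] = defaultdict(list)
    for severity, days in records:
        by_sev[severity].append(days)
    return {sev: median(days) for sev, days in by_sev.items()}

# Aging bucket for an open NC (day 7 folds into the first bucket).
def aging_bucket(age_days: int) -> str:
    if age_days <= 7:
        return "0-7d"
    if age_days <= 30:
        return "8-30d"
    if age_days <= 90:
        return "31-90d"
    return ">90d"

# Disposition mix as fractions of all dispositioned NCs.
def disposition_mix(dispositions: list[str]) -> dict[str, float]:
    counts = Counter(dispositions)
    return {d: c / len(dispositions) for d, c in counts.items()}

cycle = median_cycle_time([("critical", 12.0), ("critical", 30.0),
                           ("minor", 3.0), ("minor", 4.0), ("minor", 5.0)])
aging = Counter(aging_bucket(a) for a in [2, 9, 45, 120, 5])
mix = disposition_mix(["rework", "scrap", "rework", "use-as-is"])
```

Reporting the median rather than the mean keeps a few long-running MRB investigations from masking typical flow, while the aging histogram surfaces the long tail explicitly.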
3. Escape, containment, and risk control
In aerospace, one of the clearest signals of NC system effectiveness is how well it prevents and manages escapes, especially on safety and airworthiness characteristics.
- Escape rate
Number of defects detected at downstream stations, at the customer, or in service that should have been caught by existing controls, per delivered unit. Often stratified by internal vs external escapes and by severity.
- Late discovery of NCs
NCs detected after major cost accumulation points (e.g., after assembly, after test, at delivery). High late-discovery rates indicate inadequate in-process controls or weak traceability.
- Emergency/containment actions per period
Count of line stops, quarantines, and urgent containment activities initiated by NCs, highlighting how often non-conformances create systemic risk or disruption.
- NCs related to special process or key characteristic failures
Proportion of NCs affecting special processes, key characteristics, or flight-safety parts. Even low volumes here can be more important than high-volume cosmetic issues.
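A stratified escape rate is a simple normalized count. As a sketch, assuming each escape record carries a "found at" label (the labels and volumes here are illustrative):

```python
from collections import Counter

# Escape rate per delivered unit, stratified by where the escape was found:
# "internal" = downstream station, "external" = customer or in service.
def escape_rates(found_at: list[str], delivered_units: int) -> dict[str, float]:
    if delivered_units == 0:
        return {}
    return {k: v / delivered_units for k, v in Counter(found_at).items()}

rates = escape_rates(["internal", "internal", "external"], delivered_units=50)
# internal: 0.04 escapes per delivered unit; external: 0.02
```

The same stratification can be repeated per severity class, so that a single cosmetic external escape is not averaged together with a flight-critical one.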
4. Corrective & preventive action (CAPA) effectiveness
Non-conformance management is not just disposition; effectiveness is largely measured by how well the NC process feeds into and closes the loop with CAPA.
- Repeat NCs after CAPA closure
Percentage of NCs (by code, characteristic, or failure mode) that recur after an associated CAPA has been closed. A low rate, with consistent definition and traceability, is one of the best indicators that root cause analysis and corrective actions are effective.
- CAPA closure cycle time
Time from CAPA initiation (often triggered by NC trends) to verified effectiveness. Requires careful interpretation: very fast closure can mean superficial actions; very slow can mean overburdened teams or scope creep.
- CAPA implementation compliance
Rate at which defined corrective actions (e.g., process change, tooling update, training, inspection plan change) are implemented and reflected in controlled documents and systems (MES routes, work instructions, QMS procedures).
- NC trend reversal following CAPA
Measured change in NC rate, severity, and escape rate for the targeted failure mode over an agreed monitoring period. This depends on analytics maturity and reliable defect coding.
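The repeat-after-closure metric above depends on consistent defect coding and a dated CAPA closure. A minimal sketch, with illustrative record shapes and codes:

```python
from datetime import date

# Repeat rate after CAPA closure for one failure mode. An NC counts as a
# post-closure repeat when it carries the same defect code and was detected
# after the CAPA's verified-closure date.
def repeat_rate_after_closure(ncs: list[tuple[str, date]],
                              defect_code: str, capa_closed: date,
                              monitored_units: int) -> float:
    repeats = sum(1 for code, detected in ncs
                  if code == defect_code and detected > capa_closed)
    return repeats / monitored_units if monitored_units else 0.0

ncs = [("D-104", date(2024, 1, 10)),   # pre-closure occurrence
       ("D-104", date(2024, 4, 2)),    # post-closure repeat
       ("D-330", date(2024, 5, 1))]    # different failure mode
rate = repeat_rate_after_closure(ncs, "D-104", date(2024, 3, 1),
                                 monitored_units=200)
# -> 0.005 repeats per monitored unit over the monitoring window
```

Normalizing by monitored units (rather than raw counts) keeps the metric comparable across programs with different production volumes during the agreed monitoring period.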
5. Data quality and system integration indicators
Many NC KPIs are only meaningful if the underlying data, coding, and system landscape are robust. In brownfield aerospace environments, this is often a limiting factor.
- NC classification completeness
Percentage of NCs with fully populated required fields (defect code, operation, part, root cause category, disposition, responsible area). Low completeness undermines all higher-level KPIs.
- NC-to-CAPA linkage rate
Share of significant or recurring NCs that are formally linked to CAPAs, engineering change requests, or design problem reports. Fragmented QMS/MES/PLM stacks can depress this linkage unless integration and governance are strong.
- Traceability of decisions
Proportion of NCs with complete electronic trace of MRB decisions, calculations, and approvals. This is essential for audit readiness and for learning from past non-conformances.
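Both data-quality indicators reduce to field-population checks over NC records. A sketch, where the required-field names mirror the text but real QMS schemas will differ:

```python
# Required fields for a fully classified NC (illustrative names).
REQUIRED_FIELDS = ("defect_code", "operation", "part", "root_cause_category",
                   "disposition", "responsible_area")

# Share of NCs with every required field populated (non-empty).
def classification_completeness(ncs: list[dict]) -> float:
    if not ncs:
        return 1.0
    full = sum(1 for nc in ncs if all(nc.get(f) for f in REQUIRED_FIELDS))
    return full / len(ncs)

# Share of significant NCs carrying at least one formal CAPA/ECR link.
def capa_linkage_rate(significant_ncs: list[dict]) -> float:
    if not significant_ncs:
        return 1.0
    linked = sum(1 for nc in significant_ncs if nc.get("linked_capa_ids"))
    return linked / len(significant_ncs)

full_record = {f: "x" for f in REQUIRED_FIELDS}
gap_record = {**full_record, "root_cause_category": None}
completeness = classification_completeness(
    [full_record, gap_record, full_record, full_record])          # 0.75
linkage = capa_linkage_rate(
    [{"linked_capa_ids": ["CAPA-12"]}, {"linked_capa_ids": []}])  # 0.5
```

Running checks like these continuously, rather than at reporting time, tends to surface coding-discipline problems before they distort the higher-level KPIs.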
6. Tradeoffs and common pitfalls
When defining NC effectiveness KPIs in aerospace, several tradeoffs and constraints are typical:
- Volume vs severity
A simple “fewer NCs = better” view is misleading. Sustained low NC counts in a high-risk environment may reflect underreporting or a weak reporting culture, not process excellence. It is often better to target a stable or even increasing NC capture rate, with improved containment and decreasing severity and escape rates.
- Speed vs rigor
Pushing NC cycle times aggressively down can conflict with required engineering analysis, MRB activities, and documentation expectations. KPIs should differentiate normal disposition flow from complex investigations on critical hardware.
- Global vs program-specific metrics
Programs, platforms, and suppliers can have fundamentally different baseline defect rates. Comparing them directly without context or normalization (e.g., by complexity, maturity, supplier mix) can drive the wrong behavior.
- Brownfield system coexistence
In many aerospace plants, NC data is split across legacy MES, standalone QMS, PLM, and spreadsheets. Attempting a full system replacement just to improve NC KPIs often fails due to validation burden, integration complexity, and downtime risk. Incremental integration, better coding standards, and improved workflows within existing systems typically yield more reliable KPIs faster.
7. Practical starting set of NC effectiveness KPIs
A pragmatic set for most aerospace sites, assuming data is available, might include:
- NC rate per 1,000 operations (by severity and process area)
- NC end-to-end cycle time and open NC aging (by severity/criticality)
- Rework/scrap mix and NC-attributable COPQ
- Escape rate (internal and external) and late-discovery NCs
- Repeat NC rate after CAPA closure for top failure modes
- NC classification completeness and NC-to-CAPA linkage rate
The exact definitions, thresholds, and reporting cadence should be tailored to your programs, regulatory context, and system landscape, and validated through change control so that they remain stable and auditable over time.