In ISO 22400 terminology, a performance indicator and a key performance indicator (KPI) use the same basic building blocks (measured values, times, events), but they differ in how they are selected and used by the organization.
ISO 22400 view: what they have in common
Under ISO 22400, both performance indicators and KPIs:
- Are quantitative measures derived from standardized manufacturing data (e.g., order times, machine states, quantities, scrap counts).
- Follow defined calculation rules, including numerator, denominator, time base, and aggregation logic.
- Can be computed by MES, historians, or analytics tools, and then surfaced in dashboards or reports.
- Must be traceable and reproducible, which is critical in regulated environments and during audits or investigations.
In other words, the standard does not say that KPIs use a different kind of data or math. The distinction is about relevance and governance, not about arithmetic.
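To make the "same math, different governance" point concrete, here is a minimal sketch of an indicator computed with an explicit numerator, denominator, and time base, in the spirit of ISO 22400 calculation rules. The record layout and field names are illustrative assumptions, not something the standard mandates.

```python
from datetime import datetime, timedelta

# Hypothetical production records; field names are illustrative, not from ISO 22400.
production_log = [
    {"timestamp": datetime(2024, 5, 6, 6, 15), "produced": 120, "scrap": 3},
    {"timestamp": datetime(2024, 5, 6, 14, 40), "produced": 115, "scrap": 7},
    {"timestamp": datetime(2024, 5, 7, 6, 5), "produced": 130, "scrap": 2},
]

def scrap_ratio(records, start, end):
    """Scrap ratio = scrap quantity / produced quantity over an explicit time base."""
    in_window = [r for r in records if start <= r["timestamp"] < end]
    produced = sum(r["produced"] for r in in_window)  # denominator
    scrap = sum(r["scrap"] for r in in_window)        # numerator
    if produced == 0:
        return None  # undefined when nothing was produced in the window
    return scrap / produced

day_one = datetime(2024, 5, 6)
ratio = scrap_ratio(production_log, day_one, day_one + timedelta(days=1))
print(round(ratio, 4))  # 0.0426 (10 scrap / 235 produced on day one)
```

Whether this value is treated as a local indicator or elevated to a KPI changes nothing in this function; it changes who owns the definition and how tightly changes to it are controlled.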
What is a performance indicator in ISO 22400?
A performance indicator in ISO 22400 is any defined metric that characterizes some aspect of manufacturing performance. Examples include:
- Machine utilization percentage for a cell.
- Number of interruptions per shift.
- Scrap rate by operation.
- Average setup time for a machine group.
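An indicator like "number of interruptions per shift" is just an aggregation of events over a time base. The sketch below assumes a three-shift pattern (06:00, 14:00, 22:00); the shift model and event layout are illustrative, not part of ISO 22400.

```python
from collections import Counter
from datetime import datetime

# Hypothetical machine-stop events for one cell; timestamps are illustrative.
interruptions = [
    datetime(2024, 5, 6, 7, 12),
    datetime(2024, 5, 6, 13, 48),
    datetime(2024, 5, 6, 15, 30),
    datetime(2024, 5, 6, 23, 5),
]

def shift_of(ts):
    """Map a timestamp to a shift label using an assumed 06:00/14:00/22:00 pattern."""
    if 6 <= ts.hour < 14:
        return (ts.date(), "early")
    if 14 <= ts.hour < 22:
        return (ts.date(), "late")
    # Simplification: the 22:00-06:00 shift is booked to the calendar date of the event.
    return (ts.date(), "night")

per_shift = Counter(shift_of(ts) for ts in interruptions)
```

Local teams can use a count like this directly; it only becomes a KPI if the organization designates it as one and puts a controlled definition behind it.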
Key points about performance indicators:
- They can be local or narrow in scope (e.g., one resource, one product family, one area).
- You can have many performance indicators; they are the full toolbox of potential measures.
- In practice, they are often used by local teams (cell leads, planners, maintenance, quality engineers) to diagnose and optimize processes.
- They may not be reported to top management or regulators, even if they are well defined.
What is a KPI in ISO 22400?
A key performance indicator (KPI) is a performance indicator that the organization explicitly designates as critical for monitoring and controlling its manufacturing operations against strategic, contractual, or regulatory objectives.
In ISO 22400 terms, KPIs form a selected subset of the performance indicators, with additional expectations:
- It is aligned with higher-level goals such as delivery performance, capacity utilization, cost, safety, or quality objectives.
- It has an agreed target or threshold, and typically a clear escalation path when it goes out of tolerance.
- It is used in formal review mechanisms (e.g., S&OP reviews, management reviews, customer or regulatory reporting) rather than only local troubleshooting.
- It is more tightly controlled in terms of data integrity, calculation validation, and change control, because it influences decisions and external commitments.
Operationally, you might compute dozens of indicators in your MES or analytics stack, but only a smaller, governed set is treated as KPIs with documented definitions and owners.
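The "many indicators, few governed KPIs" pattern can be sketched as a registry in which only explicitly selected metrics carry owners, targets, and escalation. All names, values, and tolerance rules below are assumptions for illustration, not ISO 22400 requirements.

```python
# Many computed indicator values (e.g., from MES or an analytics layer).
indicator_values = {
    "availability_cell_7": 0.91,
    "scrap_ratio_line_2": 0.034,
    "setup_time_avg_mill_group": 41.5,  # minutes; local indicator only, never escalated
    "interruptions_per_shift_cell_7": 3,
}

# Governed KPI set: explicit selection, with an owner and a target per KPI.
kpi_registry = {
    "availability_cell_7": {"owner": "ops_manager", "target_min": 0.85},
    "scrap_ratio_line_2": {"owner": "quality_lead", "target_max": 0.05},
}

def kpi_breaches(values, registry):
    """Return the KPIs that are out of tolerance; non-KPI indicators are ignored."""
    breaches = []
    for name, rules in registry.items():
        value = values[name]
        if "target_min" in rules and value < rules["target_min"]:
            breaches.append(name)
        if "target_max" in rules and value > rules["target_max"]:
            breaches.append(name)
    return breaches
```

The point of the separation is visible in `kpi_breaches`: only the governed subset can trigger escalation, while the wider indicator library stays available for local diagnosis.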
Practical differences in regulated, brownfield environments
In a typical aerospace or similarly regulated plant with mixed systems (legacy MES, multiple ERPs, QMS, historians), the difference usually shows up in governance and risk, not technology:
- Scope and visibility: Many performance indicators never leave the area or department. KPIs are elevated, aggregated, and often cross-plant or cross-site.
- Governance: KPIs usually have documented definitions, owners, and change control. Any change to a KPI’s formula, data source, or time base may require impact assessment, revalidation, and communication to stakeholders. Most local indicators will not be managed that tightly.
- Validation expectations: Because KPIs are used in management reviews and sometimes in support of customer or regulatory conversations, their data pipelines and calculations are more likely to undergo formal verification and validation. This is especially true if MES/analytics outputs feed into QMS, audit evidence, or financial reporting.
- System coexistence: The same underlying data (machine states, order events, quality records) may feed both indicators and KPIs across multiple tools. Consolidating all indicator logic into a single platform is rarely realistic in brownfield contexts, given integration debt, qualification burden, and downtime risk. Plants typically layer KPI computation and visualization on top of existing systems rather than rewriting everything.
How to treat the difference when designing your metrics
When applying ISO 22400 in your own environment, a practical approach is:
- Define a broad indicator library: Based on ISO 22400, identify the performance indicators that your systems can realistically calculate from available, reliable data sources (MES, SCADA, ERP, QMS).
- Select a small KPI set: From that library, explicitly select the few that will function as KPIs for plant leadership. These should map clearly to business, customer, or regulatory needs.
- Document and control KPIs: For each KPI, maintain a controlled definition that includes:
- Exact formula and units.
- Data sources and system of record.
- Time base and aggregation rules.
- Owner, review cadence, and targets.
- Allow more flexibility for non-key indicators: Local teams can evolve performance indicators faster for improvement work, as long as they are not represented as governed KPIs or used for formal external commitments.
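The controlled-definition checklist above can be captured as a simple record type. This is an illustrative sketch, not an ISO 22400 artifact; the field contents are hypothetical, and the frozen dataclass mimics change control by requiring a new record (a new version) rather than an in-place edit.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KpiDefinition:
    """Controlled KPI definition; fields mirror the checklist above (illustrative)."""
    name: str
    formula: str              # exact formula, stated unambiguously
    unit: str
    data_sources: tuple       # e.g., ("MES", "QMS")
    system_of_record: str
    time_base: str            # e.g., "calendar day"
    aggregation: str          # e.g., "ratio of summed quantities"
    owner: str
    review_cadence: str
    target: float

# Hypothetical example entry in the governed KPI library.
scrap_ratio_kpi = KpiDefinition(
    name="scrap_ratio_line_2",
    formula="scrap quantity / produced quantity",
    unit="dimensionless ratio",
    data_sources=("MES", "QMS"),
    system_of_record="MES",
    time_base="calendar day",
    aggregation="ratio of summed quantities",
    owner="quality_lead",
    review_cadence="weekly",
    target=0.05,
)
```

Because the record is frozen, any change to the formula, data source, or time base forces the creation of a new definition, which is a natural hook for the impact assessment and stakeholder communication described above.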
This separation lets you leverage ISO 22400’s structure for consistency without over-burdening every local metric with full KPI-level validation and change control.
Summary
In ISO 22400, a performance indicator is any well-defined quantitative measure of manufacturing performance. A KPI is a performance indicator that has been explicitly selected as “key,” tied to higher-level objectives, and governed more tightly. The calculation logic can be identical; the difference is in criticality, governance, and how the measure is used in decision-making and oversight.