FAQ

How do I handle KPI definition changes in long-term trend dashboards?

In long-lifecycle, regulated operations you should assume KPI definitions will change. To keep long-term dashboards meaningful and audit-defensible, treat KPI definitions as versioned, governed objects, not as static formulas buried in a report.

1. Treat KPI definitions as versioned master data

Do not hide the KPI definition solely inside a BI-tool calculation. Instead:

  • Maintain a KPI catalog (often in a database, not just a spreadsheet) with fields such as KPI name, purpose, formula, data sources, filters, aggregation logic, owner, effective_from and effective_to dates.
  • Assign a KPI ID and a definition version (for example: OEE_V1, OEE_V2).
  • Put definition changes under change control with documented rationale and approvals, especially where KPIs tie into management review, bonuses, or regulatory commitments.
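As a sketch, a catalog entry could be modeled like this (the schema and field names are illustrative, not a standard; adapt them to your own catalog):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class KpiDefinition:
    """One versioned row in the KPI catalog (illustrative schema)."""
    kpi_id: str                    # stable identifier, e.g. "OEE"
    version: str                   # definition version, e.g. "V2"
    formula: str                   # human-readable formula text
    owner: str
    effective_from: date
    effective_to: Optional[date]   # None = currently active

# Hypothetical V2 definition of OEE after a change to the availability base.
oee_v2 = KpiDefinition(
    kpi_id="OEE",
    version="V2",
    formula="availability * performance * quality; availability excludes planned maintenance",
    owner="Ops Engineering",
    effective_from=date(2024, 1, 1),
    effective_to=None,
)
```

Making the record immutable (`frozen=True`) mirrors the change-control idea: a definition change produces a new versioned row rather than an edit to an existing one.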

2. Time-bound KPI definitions in the data model

Dashboards can only handle definition changes cleanly if the data model knows which definition applied when. Typical approaches:

  • Effective dating: Store KPI values with a KPI_ID and the date/time they were calculated. Join to a KPI definition table using effective_from and effective_to so the dashboard can show the correct definition per time period.
  • Versioned KPI columns or measures: For critical KPIs, keep separate fields or measures (for example: oee_v1, oee_v2) and label them clearly in dashboards.
  • Metadata tagging: Include definition_version as a dimension so users can filter, color-code, or segment by definition version.

The right pattern depends on your data warehouse, BI tool, and how your MES/ERP/SCADA sources are integrated. In brownfield plants with multiple systems, you may need different tactics per source system at first.
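Effective dating reduces to a lookup that, given the versioned catalog, returns the definition active on a given date. A minimal sketch, assuming half-open `[effective_from, effective_to)` intervals and illustrative dates:

```python
from datetime import date
from typing import Optional

# Each entry: (kpi_id, version, effective_from, effective_to); None = open-ended.
CATALOG = [
    ("OEE", "V1", date(2020, 1, 1), date(2024, 1, 1)),
    ("OEE", "V2", date(2024, 1, 1), None),
]

def active_version(kpi_id: str, on: date) -> Optional[str]:
    """Return the definition version active for kpi_id on the given date."""
    for kid, version, start, end in CATALOG:
        if kid == kpi_id and start <= on and (end is None or on < end):
            return version
    return None

# Each fact row joins to the definition that applied when it was measured.
print(active_version("OEE", date(2023, 6, 1)))  # V1
print(active_version("OEE", date(2024, 6, 1)))  # V2
```

In a warehouse this is the same join expressed in SQL against the KPI definition table; the half-open interval convention avoids double-counting values measured on the changeover date.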

3. Decide how to handle historical data when definitions change

When a KPI definition changes, you have three main options. Each has tradeoffs.

  • A. No backfill: keep old values as-is

    • What it means: Past KPI values stay based on the old definition. Only future data uses the new formula.
    • Pros: No reprocessing, easy to implement, preserves what leadership saw at the time.
    • Cons: Long-term trends will cross a definition boundary; comparisons can be misleading if the change is not clearly marked.
    • When to use: When the change is relatively small or when recalculation would be technically risky, expensive, or could break historical reports that are part of audit records.
  • B. Parallel series: run both definitions for an overlap period

    • What it means: For some period, calculate both old and new versions side by side.
    • Pros: Users can see the delta and understand the impact of the new definition; good for building trust.
    • Cons: Requires more ETL and dashboard design; may confuse users if not well labeled.
    • When to use: For major changes (for example, new OEE loss model, inclusion of rework, change to scrap definition) or where incentives/regulatory reporting depend on the KPI.
  • C. Historical backfill under the new definition

    • What it means: Recalculate historical KPIs using the new definition, at least for a defined time window.
    • Pros: Clean, continuous time series; easier for high-level management comparisons.
    • Cons: Practically difficult in brownfield environments; source data may not exist or may be inconsistent. Recalculation can invalidate prior management decisions or audit trails if not clearly documented. May require revalidation of reporting processes.
    • When to use: Only when underlying raw data is available, stable, and traceable, and when you can justify the effort and risk. Often limited to a recent period (for example, last 12–24 months).

In regulated environments, option B (parallel series) plus a clearly documented definition boundary in the chart is often the safest compromise.
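Option B can be sketched as computing both versions over the overlap window so the delta is visible. The formulas below are illustrative only; the hypothetical V2 change excludes planned maintenance from the availability base:

```python
def oee_v1(runtime_h: float, planned_h: float, perf: float, quality: float) -> float:
    """V1: availability counts planned maintenance as lost time."""
    return (runtime_h / planned_h) * perf * quality

def oee_v2(runtime_h: float, planned_h: float, planned_maint_h: float,
           perf: float, quality: float) -> float:
    """V2 (hypothetical change): planned maintenance excluded from availability."""
    return (runtime_h / (planned_h - planned_maint_h)) * perf * quality

# Overlap period: publish both series side by side, clearly labeled.
v1 = oee_v1(runtime_h=400, planned_h=500, perf=0.95, quality=0.98)
v2 = oee_v2(runtime_h=400, planned_h=500, planned_maint_h=40, perf=0.95, quality=0.98)
print(f"OEE_V1={v1:.3f}  OEE_V2={v2:.3f}  delta={v2 - v1:+.3f}")
```

Publishing the delta explicitly during the overlap makes it obvious how much of any apparent improvement comes from the definition change itself.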

4. Make definition boundaries visible in dashboards

Whatever data strategy you choose, the dashboard should make definition changes obvious. Tactics include:

  • Vertical line or marker on time-series charts where the definition changed, with a short label (for example, “OEE V2: availability now excludes planned maintenance”).
  • Tooltips that show KPI version and a summary of the active definition for a given data point or date range.
  • Toggle or filter to switch between definitions where parallel series exist.
  • Footnotes near key charts summarizing major KPI definition changes and when they occurred.

This matters when KPIs feed into performance reviews, supplier scorecards, or customer-facing metrics. Otherwise, apparent improvements may be due to definition changes rather than real process gains.
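A simple way to make boundaries data-driven rather than hard-coded is to attach the most recent definition-change note to every plotted point, so tooltips and chart annotations all read from the same source. A sketch with a hypothetical boundary label:

```python
from datetime import date
from typing import Optional

# Definition boundaries with short labels for chart markers (illustrative).
BOUNDARIES = [
    (date(2024, 1, 1), "OEE V2: availability now excludes planned maintenance"),
]

def annotate_point(measured_on: date, value: float) -> dict:
    """Build a tooltip payload: the value plus the most recent definition change."""
    note: Optional[str] = None
    for boundary, label in BOUNDARIES:
        if measured_on >= boundary:
            note = label  # later boundaries overwrite earlier ones
    return {"date": measured_on.isoformat(), "value": value, "definition_note": note}

print(annotate_point(date(2024, 3, 1), 0.81))
```

The same `BOUNDARIES` list can drive the vertical markers on the chart itself, so annotations and tooltips never drift apart.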

5. Align KPI changes with governance, validation, and audit needs

Because KPIs are often cited in audits, customer presentations, and management reviews, the change process itself needs structure:

  • Change control: Treat KPI definition changes like any other controlled configuration change: documented proposal, impact assessment, approvals, and implementation record.
  • Traceability: Keep a history of who changed what, when, and why. This can be in a QMS, BI governance tool, or a simple controlled repository, as long as it is auditable.
  • Validation of new logic: For KPIs derived from MES/ERP/SCADA integrations, test the new calculation logic against known scenarios and compare to manual calculations before releasing to production dashboards.
  • Communication: Brief operations, quality, and finance stakeholders on the change, its expected numerical impact, and how it will appear in dashboards.

6. Design for brownfield and mixed-system realities

In most plants you cannot standardize every KPI across all systems immediately. Common constraints include legacy MES/ERP, inconsistent tags in historians, and fragmented QMS or production logs. Practical steps:

  • Start by versioning KPI definitions in your analytics layer even if upstream systems remain inconsistent.
  • Where different plants or lines use different definitions, treat them as separate KPI versions explicitly rather than pretending they are the same metric.
  • Avoid full replacement of all existing KPI/reporting systems just to harmonize metrics. In aerospace-grade and similar environments, the qualification burden, downtime risk, and integration complexity often outweigh the benefit of a single unified KPI stack.
  • Incrementally harmonize: standardize the top few enterprise KPIs first (for example, OEE, scrap rate, on-time delivery), then expand as integrations and data quality improve.

7. Practical minimal pattern to adopt

If you need something pragmatic rather than perfect, a workable pattern is:

  1. Create a KPI catalog with IDs, definitions, and effective dates.
  2. Ensure all KPI fact tables store KPI_ID and timestamp, not just a naked number.
  3. When changing a definition, create a new KPI version ID; do not overwrite the old one.
  4. Mark definition changes in dashboards with visible annotations and, for major changes, show both versions for at least a few months.
  5. Put KPI definition updates through your existing change control process and keep evidence for audits.
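Steps 1 through 3 above reduce to a small invariant: the catalog is append-only per version, and every stored value carries its KPI ID, version, and timestamp. A minimal in-memory sketch (structures and formula strings are illustrative):

```python
from datetime import datetime

catalog = {}   # (kpi_id, version) -> formula text; append-only
facts = []     # each value stored with KPI_ID, version, and timestamp

def define(kpi_id: str, version: str, formula: str) -> None:
    """Step 3: a new version is a new key; existing versions are never overwritten."""
    key = (kpi_id, version)
    assert key not in catalog, "KPI versions are append-only"
    catalog[key] = formula

def record(kpi_id: str, version: str, value: float, measured_at: datetime) -> None:
    """Step 2: store identity and time with every value, not just a number."""
    facts.append({"kpi_id": kpi_id, "version": version,
                  "value": value, "measured_at": measured_at})

define("OEE", "V1", "runtime / planned * perf * quality")
define("OEE", "V2", "runtime / (planned - planned_maint) * perf * quality")
record("OEE", "V2", 0.81, datetime(2024, 3, 1, 6, 0))
```

In production the same invariant is usually enforced with a unique constraint on `(kpi_id, version)` in the catalog table and non-null version columns in the fact tables.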

This keeps long-term trend dashboards usable and trustworthy without forcing a risky overhaul of your entire reporting stack.

Get Started

Built for Speed, Trusted by Experts

Whether you're managing 1 site or 100, Connect 981 adapts to your environment and scales with your needs—without the complexity of traditional systems.
