FAQ

How do I handle resistance when new KPIs don’t match legacy numbers?

Start by assuming the resistance is rational. If a new KPI does not match a legacy number, the problem is usually not attitude alone. It is often a mismatch in definition, timing, source data, filtering rules, event capture, or master data. In regulated and brownfield environments, those differences are common.

The practical answer is to treat this as a metric reconciliation exercise before treating it as a change management problem. Do not ask teams to trust the new number until you can explain why it differs.

What to do first

  • Freeze the definitions. Document exactly how the legacy KPI is calculated and how the new KPI is calculated. Include numerator, denominator, exclusions, time boundary, unit of measure, system of record, and refresh timing.

  • Run both KPIs in parallel. Keep the legacy and new metric visible for a defined period. This reduces political friction and gives operations, quality, and IT a chance to see the variance pattern instead of arguing from anecdotes.

  • Reconcile to source events. Compare a sample of shifts, lots, work orders, machines, or jobs back to the underlying transactions. Differences usually come from status mapping, late postings, duplicate records, manual overrides, scrap treatment, rework handling, or missing downtime codes.

  • Classify the gap. Determine whether the new KPI is measuring the same thing differently, measuring a better version of the same thing, or measuring something else entirely. Those are three different situations, and each calls for a different conversation.

  • Set a controlled cutover rule. Do not switch incentive plans, escalation thresholds, or executive reporting to the new KPI until the variance is understood and approved.
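As a concrete illustration of the parallel-run and gap-classification steps, here is a minimal sketch in Python. The scenario, field names, and formulas are hypothetical: it assumes the legacy yield definition counts rework as good output while the new definition is first-pass yield. Running both over the same sample of source transactions makes the definitional gap visible and explainable rather than a matter of opinion.

```python
from dataclasses import dataclass

@dataclass
class WorkOrder:
    # Hypothetical source-transaction fields for illustration only.
    good_units: int
    scrap_units: int
    rework_units: int

# A small sample reconciled back to underlying transactions.
orders = [
    WorkOrder(good_units=95, scrap_units=3, rework_units=2),
    WorkOrder(good_units=88, scrap_units=7, rework_units=5),
    WorkOrder(good_units=99, scrap_units=1, rework_units=0),
]

def legacy_yield(orders):
    """Assumed legacy definition: rework counted as good output."""
    good = sum(o.good_units + o.rework_units for o in orders)
    total = sum(o.good_units + o.rework_units + o.scrap_units for o in orders)
    return good / total

def new_yield(orders):
    """Assumed new definition: first-pass yield, rework excluded."""
    good = sum(o.good_units for o in orders)
    total = sum(o.good_units + o.rework_units + o.scrap_units for o in orders)
    return good / total

gap = legacy_yield(orders) - new_yield(orders)
print(f"legacy={legacy_yield(orders):.3f} "
      f"new={new_yield(orders):.3f} gap={gap:.3f}")
```

In this example the gap is entirely explained by rework treatment, which would classify it as "the same thing measured differently" rather than a data quality problem or a performance change.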

How to respond to resistance

Do not frame the conversation as legacy versus modern. Frame it as traceability and fitness for use.

  • If the legacy KPI is operationally useful but loosely defined, say that plainly. It may still be valid for local management, but not reliable enough for cross-plant comparison or automated escalation.

  • If the new KPI is technically cleaner but depends on weak integrations, say that too. A better formula does not help if event capture is incomplete or delayed.

  • If the numbers differ because the new system exposes hidden loss, expect pushback. People may read the change as performance deterioration when it is actually measurement tightening.

  • If the new KPI rolls up across systems, explain the integration assumptions. In brownfield plants, ERP, MES, historians, QMS, and spreadsheets often disagree on timing and status. That is a systems reality, not user irrationality.

Resistance usually drops when people can see three things: where the number comes from, why it changed, and what decisions it should and should not drive.

What not to do

  • Do not declare the old number wrong without evidence.

  • Do not retire a legacy KPI before the new one is stable.

  • Do not mix old and new definitions in the same trend line without marking the change point.

  • Do not tie compensation, supplier scorecards, or audit-facing narratives to a new KPI before reconciliation and approval.

  • Do not assume a vendor default definition matches your plant reality.

Governance matters more than persuasion

The durable fix is governance, not messaging. Put KPI ownership, definition changes, mapping rules, and calculation logic under formal change control. Keep version history. Record who approved the metric, what changed, when it changed, and which reports are affected. That matters in regulated operations because performance measures often feed investigations, CAPA prioritization, release decisions, staffing choices, and management review.
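The change-control record described above can be sketched as an append-only version history. This is an illustrative Python structure, not a prescribed schema; the field names and approval flow are assumptions, and a real regulated system would implement this inside its formal change-control tooling.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class KpiDefinition:
    # Illustrative fields: who approved the metric, what it computes,
    # when it changed, and which reports are affected.
    name: str
    version: int
    numerator: str
    denominator: str
    exclusions: tuple
    approved_by: str
    approved_on: date
    affected_reports: tuple

history: list[KpiDefinition] = []

def approve(defn: KpiDefinition) -> None:
    """Append-only: definitions are versioned, never edited in place."""
    if history and defn.version != history[-1].version + 1:
        raise ValueError("versions must be sequential")
    history.append(defn)

approve(KpiDefinition(
    name="first_pass_yield", version=1,
    numerator="good_units",
    denominator="good_units + rework_units + scrap_units",
    exclusions=("engineering lots",),
    approved_by="quality_lead", approved_on=date(2024, 3, 1),
    affected_reports=("daily_ops", "management_review"),
))
```

The point of the frozen dataclass and append-only list is the audit property: any report can be traced to the exact definition version that produced it.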

If you need one rule of thumb, use this: no KPI should become official until operations, engineering, quality, and IT can all trace it from dashboard to source transaction and explain known limitations.

Tradeoffs to accept

There is no risk-free path.

  • Long parallel runs improve confidence but slow standardization.

  • Fast cutovers reduce reporting clutter but increase credibility risk.

  • Tighter definitions improve comparability but may break historical continuity.

  • Local exceptions preserve plant reality but weaken enterprise rollups.

In many regulated, long-lifecycle environments, full replacement of legacy reporting logic is not realistic in one step. Qualification burden, validation effort, downtime constraints, integration complexity, and existing evidence trails usually make phased coexistence the safer approach.

Get Started

Built for Speed, Trusted by Experts

Whether you're managing 1 site or 100, Connect 981 adapts to your environment and scales with your needs—without the complexity of traditional systems.
