FAQ

What is the best way to handle serialized parts in KPI calculations?

The best way is to calculate KPIs at the right grain and keep serialized units separate from simple quantity-based reporting when needed. In practice, that means using the serial number as the primary reporting object for unit history, while still aggregating to order, operation, work center, program, or period for management reporting.

If you treat serialized parts like interchangeable pieces, KPI results often become misleading. A single serialized unit may pause, split, loop through rework, move between routings, or accumulate inspection and concession activity that does not fit cleanly into a basic completed-quantity model.

Practical approach

  • Use dual KPI logic: keep unit-level metrics for serialized behavior and flow-level metrics for line or cell performance.
  • Anchor calculations to the serial number: first-pass yield, rework rate, touch time, queue time, cycle time, and genealogy-dependent quality metrics should be traceable to each serialized unit.
  • Define event rules explicitly: specify what counts as start, complete, pass, fail, hold, rework entry, rework exit, scrap, replace, merge, split, and shipment.
  • Separate physical completion from booking completion: ERP completion timestamps, MES operation signoffs, and quality disposition dates are often different. Do not assume they are interchangeable.
  • Report rolled-up KPIs carefully: aggregate serialized results only after the unit-level logic is stable and governed.
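The event-rule and timeline discipline above can be sketched in code. This is a minimal illustration, not a real system's schema: the event vocabulary and the `(serial, timestamp, event_type)` tuple shape are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical event vocabulary; a real deployment defines its own codes.
ALLOWED_EVENTS = {
    "start", "complete", "pass", "fail", "hold", "rework_entry",
    "rework_exit", "scrap", "replace", "merge", "split", "ship",
}

def build_serial_timelines(events):
    """Group raw events by serial number, rejecting undefined event types.

    `events` is an iterable of (serial, timestamp, event_type) tuples.
    Returns {serial: [(timestamp, event_type), ...]} sorted by time,
    so unit-level KPIs can be derived from each serial's history.
    """
    timelines = defaultdict(list)
    for serial, ts, etype in events:
        if etype not in ALLOWED_EVENTS:
            # Undefined events are rejected rather than silently kept,
            # which forces the event model to stay explicit and governed.
            raise ValueError(f"undefined event type: {etype!r}")
        timelines[serial].append((ts, etype))
    return {s: sorted(t) for s, t in timelines.items()}
```

Rejecting unknown event types at ingestion, rather than tolerating them, is one simple way to enforce the "define event rules explicitly" principle.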

What usually works best for common KPI types

Yield and first-pass yield: calculate at the serialized-unit level first. A part should count once for the relevant operation or route step, with a clear rule for whether re-entry after failure changes first-pass status. If the same serial can revisit an operation, you need an explicit policy for how repeated pass/fail results are treated.
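As a sketch, first-pass yield under one possible re-entry policy (an assumption for this example: only the first attempt counts, and later rework passes do not restore first-pass status):

```python
def first_pass_yield(op_results):
    """Compute first-pass yield for one operation at the serial level.

    `op_results` maps serial -> ordered list of 'pass'/'fail' results
    at that operation. Each serial counts exactly once, based on its
    FIRST attempt; a later pass after rework does not change it.
    """
    if not op_results:
        return 0.0
    first_pass = sum(1 for r in op_results.values() if r and r[0] == "pass")
    return first_pass / len(op_results)
```

A different policy (for example, counting each attempt as a new opportunity) would change both numerator and denominator, which is exactly why the rule has to be written down before the KPI is trusted.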

Cycle time and lead time: use serial-level start and finish timestamps, then summarize distributions, not just averages. Serialized work often has extreme variance due to inspection waits, engineering holds, nonconformance review, and outside processing. Averages alone can hide operational risk.
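A small sketch of summarizing serial-level cycle times as a distribution rather than a single average (timestamps in hours and the p90 choice are assumptions for illustration):

```python
import statistics

def cycle_time_summary(spans):
    """Summarize serial-level cycle times as a distribution.

    `spans` maps serial -> (start, finish) timestamps in hours.
    Reporting median and p90 alongside the mean keeps long-tail
    serials (holds, outside processing) visible instead of averaged away.
    """
    times = sorted(finish - start for start, finish in spans.values())
    n = len(times)
    p90 = times[min(n - 1, int(0.9 * n))]
    return {
        "mean": statistics.mean(times),
        "median": statistics.median(times),
        "p90": p90,
    }
```

In the usage below, one serial with a 100-hour hold pulls the mean far above the median, which is the operational risk the text warns about.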

WIP and aging: treat each serial as an individual WIP object with current status, current operation, and days in state. This is often more useful than unit counts because one aging serialized assembly can matter more than many standard parts.
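Treating each serial as an individual WIP object might look like this (field names and day-number timestamps are assumptions made to keep the sketch self-contained):

```python
def wip_aging(wip, today):
    """Rank in-process serials by days in current state.

    `wip` maps serial -> (current_operation, state_entry_day); `today`
    is the current day number. Returns (serial, operation, days_in_state)
    rows sorted oldest-first, so one aging assembly surfaces above many
    fresh units instead of disappearing into a unit count.
    """
    aged = [
        (serial, op, today - entered)
        for serial, (op, entered) in wip.items()
    ]
    return sorted(aged, key=lambda row: row[2], reverse=True)
```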

Throughput: use completed serialized units for finished throughput, but distinguish between good completions, conditional releases, and units awaiting final quality disposition if that distinction matters in your environment.
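Keeping the disposition split in throughput is a one-liner; the disposition labels here ('good', 'conditional', 'pending') are assumptions, not a standard:

```python
from collections import Counter

def throughput_by_disposition(completions):
    """Count completed serials per quality disposition.

    `completions` is an iterable of (serial, disposition) pairs.
    Keeping the split lets 'good' throughput be reported separately
    from conditional releases and units awaiting final disposition.
    """
    return Counter(d for _, d in completions)
```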

OEE-adjacent metrics: be careful. Serialized part complexity can distort simple performance assumptions. If routing content differs by serial, quantity-per-hour may not be comparable without normalization by standard hours, operation content, or planned labor.
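Normalization by standard hours can be sketched as follows; the routing-id mapping and hour values are illustrative assumptions:

```python
def earned_standard_hours(completed_serials, std_hours_by_routing):
    """Normalize mixed-routing output into earned standard hours.

    `completed_serials` maps serial -> routing id; `std_hours_by_routing`
    maps routing id -> planned standard hours. When routing content
    differs by serial, earned hours per actual hour compares more
    fairly across cells than raw units per hour.
    """
    return sum(std_hours_by_routing[r] for r in completed_serials.values())
```

Two cells that each finished two units can then be compared on earned hours: a cell completing two 10-hour serials earned more content than one completing two 4-hour serials.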

Scrap and rework: do not count only transaction quantities. Tie scrap and rework to serial status history and disposition events. Otherwise, replacement activity and partial recovery can produce false rates.
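Deriving scrap and rework rates from serial status history, rather than from transaction quantities, might look like this minimal sketch (status codes are assumptions consistent with the event list above):

```python
def scrap_and_rework_rates(histories):
    """Derive scrap and rework rates from serial status histories.

    `histories` maps serial -> ordered list of status events. A serial
    counts as scrapped only if its history ENDS in 'scrap', and as
    reworked if it ever entered rework, so replacement activity and
    partial recovery do not inflate or hide the rates.
    """
    n = len(histories)
    scrapped = sum(1 for h in histories.values() if h and h[-1] == "scrap")
    reworked = sum(1 for h in histories.values() if "rework_entry" in h)
    return {"scrap_rate": scrapped / n, "rework_rate": reworked / n}
```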

Key design decisions that affect KPI accuracy

  • Granularity: serial, lot, work order, operation, machine, shift, or program.
  • Rework policy: whether repeated operation attempts count as new opportunities or as continuation of the original unit path.
  • As-built structure changes: how substitutions, removed components, and serialized subassembly replacements affect denominator and completion logic.
  • Quality state handling: whether held, deviated, concessioned, or conditionally accepted units are included in standard output KPIs.
  • Timestamp precedence: which system is authoritative for operational events versus inventory movements versus quality dispositions.
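The design decisions above can be made explicit and governed by writing them down as configuration rather than leaving them implicit in report logic. The field names and default values below are illustrative assumptions, not a recommended policy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KpiPolicy:
    """One explicit, versionable KPI policy (field names are illustrative)."""
    grain: str                  # e.g. "serial", "work_order", "operation"
    rework_counts_as_new: bool  # repeated attempts = new opportunities?
    include_conditional: bool   # conditionally accepted units in output?
    timestamp_authority: str    # e.g. "MES" for operational events

# Example policy; frozen=True keeps it immutable once agreed.
DEFAULT_POLICY = KpiPolicy(
    grain="serial",
    rework_counts_as_new=False,
    include_conditional=False,
    timestamp_authority="MES",
)
```

Passing such a policy object into every KPI calculation makes disagreements visible as policy diffs instead of dueling spreadsheets.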

Brownfield reality

In most plants, serialized part data lives across MES, ERP, QMS, test systems, and sometimes spreadsheets or local databases. The best KPI method depends on whether serial events are synchronized consistently across those systems. If they are not, KPI disputes usually reflect data model and process-control problems, not reporting problems.

A full rip-and-replace is rarely the best answer in regulated, long lifecycle environments. It often fails because qualification effort, validation cost, downtime risk, integration complexity, and change-control burden are higher than expected. A more realistic path is to establish a governed event model for serialized units, map source-system ownership clearly, and improve calculation logic incrementally.

What to avoid

  • Do not mix serialized and non-serialized production in one denominator without adjustment.
  • Do not use ERP completion transactions alone as proof of actual process completion.
  • Do not let operators or analysts interpret KPI rules differently by program or shift.
  • Do not collapse rework loops unless you are doing it intentionally and documenting the tradeoff.

Bottom line

The best method is to model serialized parts as individually traceable units, calculate quality and time-based KPIs from serial event history, and then roll those metrics up under controlled rules. If your event definitions, system interfaces, or master data are inconsistent, the KPI will not be reliable regardless of the dashboard.
