The best way is to calculate KPIs at the right grain and keep serialized units separate from simple quantity-based reporting when needed. In practice, that means using the serial number as the primary reporting object for unit history, while still aggregating to order, operation, work center, program, or period for management reporting.
If you treat serialized parts like interchangeable pieces, KPI results often become misleading. A single serialized unit may pause, split, loop through rework, move between routings, or accumulate inspection and concession activity that does not fit cleanly into a basic completed-quantity model.
Yield and first-pass yield: calculate at the serialized-unit level first. A part should count once for the relevant operation or route step, with a clear rule for whether re-entry after failure changes first-pass status. If the same serial can revisit an operation, you need an explicit policy for how repeated pass/fail results at that operation are counted.
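As a minimal sketch of one such policy, assuming a "first result wins" rule (the event shape and labels here are illustrative, not from any specific MES):

```python
from collections import defaultdict

def first_pass_yield(events):
    """events: list of (serial, operation, result) in time order.
    Policy assumption: a serial's FIRST recorded result at an operation
    fixes its first-pass status; later re-entries (rework loops) do not
    change it. Returns {operation: first_pass_yield}."""
    first_result = {}
    for serial, op, result in events:
        key = (serial, op)
        if key not in first_result:          # only the first visit counts
            first_result[key] = result
    by_op = defaultdict(lambda: [0, 0])       # op -> [first_pass, total serials]
    for (serial, op), result in first_result.items():
        by_op[op][1] += 1
        if result == "pass":
            by_op[op][0] += 1
    return {op: passed / total for op, (passed, total) in by_op.items()}
```

The point of the explicit `first_result` step is that the policy lives in code, not in whichever analyst last wrote the query.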
Cycle time and lead time: use serial-level start and finish timestamps, then summarize distributions, not just averages. Serialized work often has extreme variance due to inspection waits, engineering holds, nonconformance review, and outside processing. Averages alone can hide operational risk.
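A small sketch of a distribution summary from serial timestamps (input shape and field names are assumptions for illustration):

```python
import math
from datetime import datetime
from statistics import median

def cycle_time_summary(serial_times):
    """serial_times: {serial: (start_iso, finish_iso)}.
    Reports mean, median, and a nearest-rank p90 in hours, so that
    long-tail serials (holds, NCR review, outside processing) remain
    visible instead of being averaged away."""
    hours = sorted(
        (datetime.fromisoformat(f) - datetime.fromisoformat(s)).total_seconds() / 3600
        for s, f in serial_times.values()
    )
    k = max(0, math.ceil(0.9 * len(hours)) - 1)  # nearest-rank p90 index
    return {
        "mean_h": sum(hours) / len(hours),
        "median_h": median(hours),
        "p90_h": hours[k],
    }
```

A mean of 34 hours with a median of 13 tells a very different story than the mean alone.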
WIP and aging: treat each serial as an individual WIP object with current status, current operation, and days in state. This is often more useful than unit counts because one aging serialized assembly can matter more than many standard parts.
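A sketch of a per-serial aging view, oldest first (the record fields are illustrative assumptions):

```python
from datetime import datetime

def wip_aging(serials, now):
    """serials: list of dicts with serial, status, operation, and
    entered_state (ISO timestamp). Returns rows with days_in_state,
    sorted oldest-first, so one aging assembly surfaces immediately
    rather than vanishing into a unit count."""
    rows = []
    for s in serials:
        entered = datetime.fromisoformat(s["entered_state"])
        rows.append({**s, "days_in_state": (now - entered).days})
    return sorted(rows, key=lambda r: r["days_in_state"], reverse=True)
```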
Throughput: use completed serialized units for finished throughput, but distinguish between good completions, conditional releases, and units awaiting final quality disposition if that distinction matters in your environment.
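If that distinction does matter, the split can be as simple as counting completions by disposition category (the category labels below are assumed, not standard):

```python
from collections import Counter

def throughput_by_disposition(completions):
    """completions: list of (serial, disposition), where disposition is
    e.g. 'good', 'conditional_release', or 'awaiting_disposition'.
    Keeps 'finished' from silently including units not yet dispositioned."""
    counts = Counter(d for _, d in completions)
    counts["total"] = len(completions)
    return counts
```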
OEE-adjacent metrics: be careful. Serialized part complexity can distort simple performance assumptions. If routing content differs by serial, quantity-per-hour may not be comparable without normalization by standard hours, operation content, or planned labor.
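One common normalization is earned standard hours per available hour, sketched below (routing names and the standard-hours table are hypothetical):

```python
def normalized_output(completions, std_hours, hours_available):
    """completions: list of (serial, routing); std_hours: {routing: planned
    standard hours}. Returns earned standard hours per available hour, so
    serials with different routing content become comparable."""
    earned = sum(std_hours[routing] for _, routing in completions)
    return earned / hours_available
```

Two units per shift on routing R-A is not the same performance as two units on a much heavier R-B; earned hours make that visible.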
Scrap and rework: do not count only transaction quantities. Tie scrap and rework to serial status history and disposition events. Otherwise, replacement activity and partial recovery can produce false rates.
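A sketch of rates derived from per-serial disposition history rather than raw transaction quantities (event labels are illustrative assumptions):

```python
def scrap_rework_rates(histories):
    """histories: {serial: [disposition events in time order]}.
    A serial counts as scrapped only if its FINAL disposition is 'scrap';
    a serial that was reworked and later passed counts toward rework, not
    scrap, so replacement activity and partial recovery do not inflate
    the scrap rate."""
    total = len(histories)
    scrapped = sum(1 for ev in histories.values() if ev and ev[-1] == "scrap")
    reworked = sum(1 for ev in histories.values() if "rework" in ev)
    return {"scrap_rate": scrapped / total, "rework_rate": reworked / total}
```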
In most plants, serialized part data lives across MES, ERP, QMS, test systems, and sometimes spreadsheets or local databases. The best KPI method depends on whether serial events are synchronized consistently across those systems. If they are not, KPI disputes usually reflect data model and process-control problems, not reporting problems.
A full rip-and-replace is rarely the best answer in regulated, long lifecycle environments. It often fails because qualification effort, validation cost, downtime risk, integration complexity, and change-control burden are higher than expected. A more realistic path is to establish a governed event model for serialized units, map source-system ownership clearly, and improve calculation logic incrementally.
The best method is to model serialized parts as individually traceable units, calculate quality and time-based KPIs from serial event history, and then roll those metrics up under controlled rules. If your event definitions, system interfaces, or master data are inconsistent, the KPI will not be reliable regardless of the dashboard.