To make OEE, FPY, and on-time delivery comparable across products, lines, plants, and suppliers, you need formal, written metric specifications. These must define the exact formula, data sources, time base, and inclusion/exclusion rules, and be controlled under your normal change and validation processes.
1. General principles for comparable metrics
For each metric (OEE, FPY, on-time delivery), the formal specification should at minimum define the following (a machine-readable sketch follows the list):
- Purpose and scope: Where the metric applies (plant, value stream, product family) and what decisions it is intended to support.
- Time base: Shift, day, week, month; calendar vs production time; how you handle holidays, shutdowns, and rework periods.
- Numerator and denominator: Exact definitions, including units (parts, orders, hours) and counting rules.
- Inclusions and exclusions: What is counted and what is explicitly not (e.g., engineering trials, training runs, quarantine stock, rework lots).
- Data sources and system of record: MES, ERP, QMS, LIMS, historians, manual logs; and which system is authoritative when data conflict.
- Timestamp rules: Which timestamp is used (planned ship date, ATP date, first good piece time, etc.).
- Responsibility and governance: Metric owner, change control process, and validation/verification expectations.
- Known limitations: For example, “OEE excludes unplanned utilities outages” or “On time delivery excludes export holds outside plant control.”
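One way to reduce ambiguity is to hold each specification as structured data alongside the controlled document, so automated reports can reference a versioned definition. The sketch below is a minimal illustration in Python; all field names are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class MetricSpecification:
    """One controlled metric definition (illustrative fields only)."""
    name: str                    # e.g. "OEE", "FPY", "OTD"
    version: str                 # tied to the change-control record
    owner: str                   # accountable metric owner
    purpose: str                 # decisions the metric supports
    time_base: str               # e.g. "shift", "day", "month"
    numerator: str               # exact counting rule, in words
    denominator: str
    system_of_record: str        # authoritative source when data conflict
    timestamp_rule: str          # which timestamp is used
    inclusions: list[str] = field(default_factory=list)
    exclusions: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
```

Freezing the dataclass and carrying an explicit version field mirrors the change-control expectation: a revised definition is a new version, not an in-place edit.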
In regulated environments, treat any metric used for release decisions, regulatory submissions, or management reviews as part of your controlled quality/operations documentation. Changes to definitions should go through documented review, impact assessment, and, where applicable, system revalidation.
2. Formally specifying OEE
OEE is frequently inconsistent across sites because each plant interprets Availability, Performance, and Quality differently. To ensure comparability, you must standardize at the factor level.
A typical top-level formula is:
- OEE = Availability × Performance × Quality
Your specification should define at least the following.
2.1 Availability
- Formula: Availability = (Planned Production Time − Planned Downtime − Unplanned Downtime) / (Planned Production Time − Planned Downtime); a worked sketch follows this list.
- Planned Production Time: Exactly what counts (e.g., scheduled staffed time for that asset or line).
- Planned Downtime: Preventive maintenance, setup/changeover, cleaning, regulatory inspections; and whether they are excluded from the denominator.
- Unplanned Downtime: Equipment failures, unplanned maintenance, material shortages, IT outages; and whether external causes (e.g., power failures) are included.
- Minimum event duration: Threshold for logging an event (e.g., any stop > 1 minute) and how micro-stops are handled.
- Data sources: Line control system, MES downtime module, manual operator logs, and the hierarchy for conflict resolution.
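As a worked illustration of the Availability formula above, here is a minimal sketch. The function and variable names are assumptions, and all durations are taken to be minutes within one asset's time window.

```python
def availability(planned_production_min: float,
                 planned_downtime_min: float,
                 unplanned_downtime_min: float) -> float:
    """Availability for one asset and time window, all durations in minutes.

    Planned downtime is removed from both numerator and denominator,
    matching a spec that excludes it from loading time.
    """
    loading_time = planned_production_min - planned_downtime_min
    if loading_time <= 0:
        raise ValueError("no loading time in window; Availability undefined")
    run_time = loading_time - unplanned_downtime_min
    return run_time / loading_time

# Example: 480 min shift, 30 min planned changeover, 25 min breakdowns
# -> (480 - 30 - 25) / (480 - 30) = 425 / 450 ≈ 0.944
```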
2.2 Performance
- Formula: Performance = (Total Processed Units × Ideal Cycle Time) / Net Operating Time; a matching sketch follows this list.
- Ideal Cycle Time: How it is defined (e.g., validated equipment capability at nominal settings) and how often it may be revised.
- Total Processed Units: Whether this includes reworked units that pass back through the equipment, scrapped units, or test pieces.
- Net Operating Time: Planned Production Time minus all downtime events counted as “lost time” in Availability.
- Speed losses: How you treat intentional slow-running for quality reasons or process validation; whether those are considered performance loss or excluded by design.
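A matching sketch for Performance, under the same illustrative naming; whether reworked or scrapped units count in total_units is whatever your written specification decides.

```python
def performance(total_units: int,
                ideal_cycle_time_min: float,
                net_operating_time_min: float) -> float:
    """Performance for the same window.

    total_units: every unit processed in the window; whether rework
    passes or scrap are included here is fixed by the written spec.
    net_operating_time_min: loading time minus the downtime already
    counted as lost time in Availability.
    """
    if net_operating_time_min <= 0:
        raise ValueError("no net operating time in window")
    return (total_units * ideal_cycle_time_min) / net_operating_time_min

# Example: 800 units at an ideal 0.5 min/unit over 425 min of run time
# -> (800 * 0.5) / 425 ≈ 0.941
```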
2.3 Quality (for OEE)
- Formula: Quality (OEE context) = Good Units / Total Units Produced in that time window; a sketch completing the OEE example follows this list.
- Good Units: Units that fully meet specification and are not sent to rework, hold, or concession.
- Defective Units: Whether you count scrap only, scrap + rework, or include units accepted under deviation/concession.
- Inspection timing: How you handle units that are produced within the time window but inspected later (e.g., batch testing, release by QMS).
- Data source: MES nonconformance records, QMS, SPC systems, manual quality logs.
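Completing the factor-level example, a minimal Quality sketch and the top-level composition; the counts are illustrative.

```python
def quality(good_units: int, total_units: int) -> float:
    """Quality in the OEE sense: good units over total produced.

    This sketch treats units sent to rework, hold, or concession as
    not good, per the definition above; adjust if your spec differs.
    """
    if total_units <= 0:
        raise ValueError("no units produced in window")
    return good_units / total_units

def oee(a: float, p: float, q: float) -> float:
    """OEE = Availability x Performance x Quality."""
    return a * p * q

# Continuing the worked example: 780 good units out of 800 produced
# Quality = 780 / 800 = 0.975
# OEE ≈ 0.944 × 0.941 × 0.975 ≈ 0.866
```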
Make clear that OEE can legitimately differ by product mix, batch size, regulatory hold times, and required in-process testing. Comparisons across unlike assets (e.g., high-speed packaging vs manual assembly) should be qualified and not treated as direct benchmarks unless the definitions and operating contexts are closely matched.
3. Formally specifying FPY
FPY becomes non-comparable when plants count different stages, use different units, or treat rework differently. A written FPY definition should fix these points.
3.1 Baseline FPY definition
- Common formula: FPY = (Units exiting process step without any rework or repair) / (Units entering that process step).
- Scope: Step-level FPY, line-level FPY, plant-level FPY; and explicit mapping of which steps are included.
- Unit definition: Piece, assembly, batch, lot, order; define the level for counting and keep it consistent.
3.2 Treatment of rework and repair
- Rework policy: FPY usually counts only units that pass the step the first time without rework. Specify that any unit requiring rework, repair, or deviation approval is counted as a fail for FPY.
- Loops: How you handle units that pass the same station multiple times (e.g., solder touch-up, re-test). Typically, you count them once in the denominator at first entry and treat any additional passes as evidence of FPY failure; a counting sketch follows this list.
- Concessions/deviations: Whether concession-accepted units are considered good for FPY (many sites still count these as FPY failures, even if shipped).
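To show how the first-pass and loop rules interact, here is a minimal counting sketch. It assumes a hypothetical event record with a unit ID, a step name, and a single pass/fail flag; real MES route data will be richer.

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    unit_id: str
    step: str
    passed_clean: bool  # True only if no rework, repair, or concession

def step_fpy(results: list[StepResult], step: str) -> float:
    """Step-level FPY: units counted once in the denominator at first
    entry; any repeat pass or non-clean result is an FPY failure."""
    entered: set[str] = set()
    failed: set[str] = set()
    for r in results:
        if r.step != step:
            continue
        if r.unit_id in entered:
            # A repeat pass through the same station implies rework,
            # so the unit fails FPY regardless of the later outcome.
            failed.add(r.unit_id)
        else:
            entered.add(r.unit_id)
            if not r.passed_clean:
                failed.add(r.unit_id)
    if not entered:
        raise ValueError(f"no units entered step {step!r}")
    return (len(entered) - len(failed)) / len(entered)
```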
3.3 Multi-step FPY / rolled throughput yield
- Step list: Controlled list of process steps included in the rolled FPY calculation.
- Aggregation method: Whether you will use direct counting across the full route or the product of step-level FPY values; the multiplicative option is sketched after this list.
- Exclusions: Engineering runs, validation lots, training batches, NPI prototypes, or certain outside-processing steps.
- Data sources: MES route data, QMS nonconformance records, test systems, and the convention for mapping defects to the “responsible” step.
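For the multiplicative aggregation option, a minimal sketch; a direct-counting implementation would instead follow each serialized unit across the full controlled route.

```python
from math import prod

def rolled_throughput_yield(step_fpys: list[float]) -> float:
    """Rolled FPY as the product of step-level FPY values.

    Valid only if the steps come from the controlled step list, share
    one unit of count, and spec exclusions (engineering runs,
    validation lots) were filtered out before each step FPY was computed.
    """
    if not step_fpys:
        raise ValueError("step list is empty")
    return prod(step_fpys)

# Example: three steps at 0.98, 0.95, and 0.99 -> RTY ≈ 0.921
```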
Because FPY is often used in quality management reviews, its definition should be aligned with your nonconformance and CAPA processes. If steps are added, removed, or split in the routing, the FPY specification and any automated calculations must be updated and, where validated systems are involved, reverified.
4. Formally specifying on-time delivery
On-time delivery is often the hardest of the three to compare across sites and suppliers because there are so many plausible date definitions. The specification must be very explicit about which dates count and what is in scope.
4.1 Core definition elements
- Basic formula: On-time delivery (OTD) = (Number of orders delivered on or before the committed date) / (Total number of orders due in the period); a minimal sketch follows this list.
- Object of measure: Order line, shipment, customer order, internal work order; be precise and keep it consistent.
- Period: Defined by due date or ship date. For example: “Orders with committed ship dates within the calendar month.”
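A minimal sketch of this core definition, measured at order-line level against the committed date; the record layout is an assumption for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OrderLine:
    committed_date: date      # confirmed date from ATP/planning
    actual_date: date | None  # per the spec's actual-date rule; None if undelivered

def otd(lines: list[OrderLine], period_start: date, period_end: date) -> float:
    """OTD at order-line level, measured against the committed date,
    over lines whose committed date falls inside the period."""
    due = [l for l in lines if period_start <= l.committed_date <= period_end]
    if not due:
        raise ValueError("no order lines due in period")
    on_time = sum(1 for l in due
                  if l.actual_date is not None
                  and l.actual_date <= l.committed_date)
    return on_time / len(due)
```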
4.2 Date definitions
- Requested date: Date requested by the customer or internal demand signal.
- Committed date: Date the supplier or plant has confirmed (often from ATP or planning/MRP systems).
- Measurement date: Specify whether OTD is measured against requested date or committed date. For comparability, choose one standard (often committed date) and apply it consistently.
- Actual date: Define whether this is the ship date from ERP, the receipt date at customer site, or quality-acceptance date at customer.
- Shipping terms context: Recognize that Incoterms (FOB, DDP, etc.) change what "delivered" means. Your specification should clarify whether plant OTD is based on readiness to ship, handoff to carrier, or confirmed receipt.
4.3 Scope, partials, and exclusions
- Partial shipments: Whether a partial shipment counts as on time if at least X% of the quantity is shipped by the committed date, and how follow-on shipments are treated; a classification sketch follows this list.
- Rescheduled orders: Rules for when and how a committed date can be changed without being counted as late, and how late reschedules are reported to avoid gaming.
- Customer-driven changes: How you treat orders where customers pull in, push out, or delay acceptance.
- External holds: Export control holds, credit holds, customer-site access issues, or regulatory release delays not caused by manufacturing. Define whether these are excluded or reported separately.
- Cancellations: How customer-initiated and supplier-initiated cancellations affect the numerator and denominator.
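These scope rules can be applied as a classification step before the ratio is computed, so exclusions stay visible instead of silently shrinking the denominator. The threshold and hold-reason codes below are illustrative assumptions, not standards.

```python
PARTIAL_ON_TIME_THRESHOLD = 0.95  # spec choice: >= 95% shipped counts as on time
EXCLUDED_HOLD_REASONS = {"EXPORT_HOLD", "CREDIT_HOLD", "CUSTOMER_ACCESS"}

def classify_line(qty_shipped_by_commit: float,
                  qty_ordered: float,
                  hold_reason: str | None,
                  cancelled_by: str | None) -> str:
    """Classify one order line as ON_TIME, LATE, or EXCLUDED.

    Excluded lines are reported separately rather than silently
    dropped, so reviewers see how much volume the exclusions carry.
    """
    if cancelled_by == "customer":
        return "EXCLUDED"  # customer cancellation leaves both counts
    if hold_reason in EXCLUDED_HOLD_REASONS:
        return "EXCLUDED"  # external hold outside plant control
    if qty_ordered <= 0:
        raise ValueError("order line has no quantity")
    if qty_shipped_by_commit / qty_ordered >= PARTIAL_ON_TIME_THRESHOLD:
        return "ON_TIME"
    return "LATE"
```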
In regulated contexts, you should also specify how batches that pass manufacturing but await quality release or regulatory testing affect OTD. Many organizations keep separate metrics for “manufacturing OTD” and “customer OTD” to avoid hiding systemic release delays inside a shop-floor metric.
5. Ensuring cross-plant and supplier comparability
To make these metrics truly comparable, you need more than formulas. You need standardized governance:
- Global metric specification documents: Controlled documents describing OEE, FPY, and OTD definitions, including worked examples and edge cases.
- Implementation guides per system: Mapping the metric definitions to specific MES, ERP, QMS, and data warehouse fields, including any transformations or filters.
- Change control: Any change to definitions, data sources, or logic must go through formal review, alignment across plants, and (where systems are validated) documented verification/validation.
- Auditability: Ability to trace published metric values back to raw events/transactions, with versioned logic and configuration.
- Training and examples: Standard training materials, including examples of what is and is not counted as “on time,” “good unit,” “downtime,” etc.
- Exception reporting: When a site cannot fully follow the global definition (e.g., due to legacy system limitations), require documented exceptions and clear flags in consolidated reports.
Brownfield environments make strict standardization harder because different sites use different systems and data models. Where full harmonization is not immediately feasible, prioritize:
- Defining a minimum common specification that all plants can meet, even if some track additional local variants.
- Documenting site-specific deviations and adjusting cross-site comparisons accordingly.
- Using data integration layers that normalize field names, units, and event types while preserving traceability back to source systems (a minimal sketch follows).
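As one illustration of such a layer, a minimal field-name normalization sketch; both site schemas and field names are invented for the example.

```python
# Hypothetical per-site mappings from local field names to the global model.
SITE_FIELD_MAP = {
    "plant_a": {"dt_start": "downtime_start", "dt_code": "reason_code"},
    "plant_b": {"stop_begin": "downtime_start", "cause": "reason_code"},
}

def normalize_event(site: str, raw: dict) -> dict:
    """Rename site-local fields to the global names while keeping the
    raw record and its origin, so values stay traceable to the source."""
    mapping = SITE_FIELD_MAP[site]
    normalized = {global_name: raw[local_name]
                  for local_name, global_name in mapping.items()}
    normalized["_source_site"] = site
    normalized["_source_record"] = raw  # preserved for the audit trail
    return normalized
```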
6. Tradeoffs and failure modes to watch for
When you formalize these metrics, expect tradeoffs:
- Strict comparability vs local relevance: A global OEE definition may not reflect important local realities (e.g., long cleaning cycles for aseptic processes). Consider a core global metric plus local supplements, rather than allowing silent local redefinitions.
- Data quality vs coverage: Some older lines or suppliers may lack the instrumentation or system integration needed to support detailed definitions. Decide whether to invest in data capture upgrades or accept that those operations will be excluded from certain comparisons.
- Simplicity vs precision: Highly complex rules can be precise but hard to explain and maintain. Balance clarity with enough detail to avoid gaming.
- Metric gaming: If incentive plans are tied to OEE, FPY, or OTD, ambiguous definitions invite manipulation (reclassifying downtime, shifting due dates, avoiding difficult orders). Clear written rules and periodic audits are essential.
None of these definitions guarantee better performance, regulatory outcomes, or audit results. They simply make performance measurement more credible and comparable, which is a prerequisite for sound decisions in complex, regulated, multi-site environments.