Use a hybrid model. Precompute KPI grains that are stable, heavily reused, and either costly to recalculate or risky to have calculated differently across tools. Compute on the fly when the question is exploratory, the slice is uncommon, or the user needs flexibility more than speed.
In practice, most regulated manufacturers should precompute the lowest business-safe grain that supports repeatable reporting, then let analytics tools aggregate from there. That usually means event or transaction facts where possible, plus a controlled set of conformed rollups such as shift, day, asset, line, work order, operation, lot, batch, or part family. The exact answer depends on source system quality, timestamp fidelity, late-arriving data, and how much semantic governance you actually have.
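One way to make that "controlled set of conformed rollups" concrete is a small declarative registry that names each governed grain, its conformed keys, and how far back late data may rewrite it. The sketch below is a minimal Python illustration; the grain names, source facts, and restatement windows are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernedGrain:
    name: str                    # e.g. "shift_by_asset"
    dimensions: tuple[str, ...]  # conformed keys the rollup is grouped by
    source_fact: str             # atomic fact the grain is rebuilt from
    restatement_days: int        # how far back late data may rewrite history

# Illustrative entries only; a real registry would live under change control.
GRAIN_REGISTRY = (
    GovernedGrain("shift_by_asset", ("shift_id", "asset_id"),
                  "machine_state_events", restatement_days=3),
    GovernedGrain("day_by_line", ("prod_date", "line_id"),
                  "production_confirmations", restatement_days=7),
    GovernedGrain("work_order_by_operation", ("work_order_id", "operation_id"),
                  "production_confirmations", restatement_days=30),
)
```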
Good candidates for precomputation:
Frequently used operational rollups: shift, day, week, asset, line, cell, work center, site, work order, operation, SKU or part number, lot or batch. These are the grains executives and supervisors ask for repeatedly.
Metrics with non-trivial business rules: OEE variants, first pass yield, schedule attainment, scrap classifications, downtime categorization, labor efficiency, and queue or wait time metrics. If every dashboard calculates them differently, trust erodes quickly; see the OEE sketch after this list.
Cross-system reconciled facts: measures that blend MES, ERP, QMS, CMMS, historian, or manual data. These usually need controlled joins, survivorship rules, unit normalization, and exception handling.
High-cost aggregations: metrics built from dense event streams, machine telemetry, or long time windows. Precomputing avoids repeated heavy queries and reduces dashboard variance.
Period-close or evidence-oriented snapshots: approved daily production summaries, genealogy-linked quality summaries, and month-end KPI snapshots. In regulated settings, a reproducible number for a defined cutoff matters more than theoretical real-time purity.
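To illustrate the "one governed definition" point, here is a hedged sketch of a classic OEE calculation (Availability × Performance × Quality) at the shift-by-asset grain. The argument names and units are assumptions; a real pipeline would pull these from governed downtime and count facts rather than raw arguments.

```python
def oee(planned_time_min: float, runtime_min: float,
        ideal_cycle_time_min: float, total_count: int,
        good_count: int) -> float:
    """Classic OEE = Availability x Performance x Quality."""
    if planned_time_min <= 0 or runtime_min <= 0 or total_count <= 0:
        return 0.0
    availability = runtime_min / planned_time_min           # uptime vs plan
    performance = (ideal_cycle_time_min * total_count) / runtime_min
    quality = good_count / total_count                      # first-pass good share
    return availability * performance * quality

# Example shift: 480 planned minutes, 430 running, 0.8 min ideal cycle,
# 500 produced, 480 good -> 0.80 OEE.
print(round(oee(480, 430, 0.8, 500, 480), 3))
```

Because performance uses the ideal cycle time, values above 1.0 can appear when cycle standards are stale; that is exactly the kind of rule drift a single governed pipeline catches and a dozen dashboard formulas will not.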
Better computed on the fly:
Ad hoc slicing: unusual combinations of filters, drill-downs, or user-defined cohorts that are not part of standard operating reviews.
Prototype metrics: early-stage KPIs still being debated. Do not harden them into pipelines too early if definitions are still moving.
Low-volume or infrequent analyses: engineering investigations, temporary improvement studies, or one-off root cause reviews.
Derived visualizations: percent-of-total, ranking, moving averages, and drill-path calculations that can safely sit in the BI layer if the base facts are governed, as in the sketch below.
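A minimal pandas sketch of that split, assuming a hypothetical governed day-by-line fact: the base good_units numbers come from the warehouse, while percent-of-total and a moving average are derived at analysis time.

```python
import pandas as pd

# Hypothetical governed day-by-line fact (column names are assumptions).
daily = pd.DataFrame({
    "prod_date": pd.date_range("2024-06-01", periods=6, freq="D").repeat(2),
    "line_id": ["L1", "L2"] * 6,
    "good_units": [940, 610, 955, 590, 900, 640, 970, 600, 930, 620, 960, 615],
})

# Percent-of-total within each day: safe to derive because the base fact is governed.
daily["pct_of_day"] = (daily["good_units"]
                       / daily.groupby("prod_date")["good_units"].transform("sum"))

# 3-day moving average per line: display-level smoothing, not a governed metric.
daily["ma3"] = (daily.sort_values("prod_date")
                     .groupby("line_id")["good_units"]
                     .transform(lambda s: s.rolling(3, min_periods=1).mean()))

print(daily.head(4))
```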
Precompute when most of the following are true:
The KPI is reviewed routinely in tier meetings, management reviews, or customer-facing performance discussions.
The metric requires joins across systems or complicated logic.
Users need consistent numbers across reports and sites.
Query latency matters.
The measure may be used as an auditable operational record or evidence input.
Recalculation from raw data is expensive or sensitive to late corrections.
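If it helps to operationalize that checklist, a toy rubric like the one below can at least force the conversation; the signal names and the "most of" threshold are illustrative only, and real decisions also weigh validation cost and team capacity.

```python
PRECOMPUTE_SIGNALS = frozenset({
    "reviewed_routinely",   # tier meetings, management reviews
    "cross_system_logic",   # joins across MES/ERP/QMS and friends
    "needs_consistency",    # same number across reports and sites
    "latency_sensitive",    # dashboards must load fast
    "audit_evidence",       # may serve as an operational record
    "costly_to_recompute",  # expensive or sensitive to late corrections
})

def should_precompute(signals: set[str], threshold: int = 4) -> bool:
    """'Most of the following are true' as a blunt count; tune to taste."""
    return len(signals & PRECOMPUTE_SIGNALS) >= threshold

print(should_precompute({"reviewed_routinely", "cross_system_logic",
                         "needs_consistency", "audit_evidence"}))  # True
```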
Compute on the fly when most of the following are true:
The question changes often.
The audience is analytical rather than operational.
The base facts are already trustworthy and well modeled.
Fast response is helpful but not operationally critical.
The logic is simple and transparent.
A common pattern is:
Store atomic events where feasible: machine states, production confirmations, quality results, labor transactions, inventory moves, and genealogy events.
Precompute governed fact grains that map to how the plant runs: shift-by-asset, day-by-line, work-order-by-operation, lot-by-step, and period snapshots.
Let BI compute lighter aggregations from those governed facts for dashboards and analysis.
This gives you traceability back to source events without forcing every dashboard to rebuild KPI logic from scratch.
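A compact sketch of the middle step, assuming a simplified machine-state schema: atomic events roll up to a governed shift-by-asset fact, and the event ids are carried along so every precomputed row can be traced back to its source records.

```python
import pandas as pd

# Hypothetical atomic machine-state events (schema is an assumption).
events = pd.DataFrame({
    "event_id": [101, 102, 103, 104, 105],
    "shift_id": ["2024-06-01-D"] * 5,
    "asset_id": ["A1", "A1", "A1", "A2", "A2"],
    "state":    ["RUN", "DOWN", "RUN", "RUN", "DOWN"],
    "minutes":  [200.0, 35.0, 195.0, 410.0, 20.0],
})

# Governed shift-by-asset grain: one row per shift and asset.
summed = (events.groupby(["shift_id", "asset_id", "state"])["minutes"].sum()
                .unstack(fill_value=0.0)
                .rename(columns={"RUN": "runtime_min", "DOWN": "downtime_min"}))

# Lineage: keep the contributing event ids on every rollup row.
lineage = (events.groupby(["shift_id", "asset_id"])["event_id"]
                 .agg(tuple).rename("source_event_ids"))

fact_shift_by_asset = summed.join(lineage).reset_index()
print(fact_shift_by_asset)
```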
Too much precomputation creates data sprawl, brittle pipelines, long backfills, and metric proliferation. It also increases validation and change control overhead.
Too much on-the-fly calculation creates performance issues, inconsistent definitions, and endless arguments about whose number is correct.
Late-arriving data can make precomputed rollups wrong unless you support restatement rules, versioning, or controlled refresh windows.
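One common mitigation is a controlled refresh window: only the last few partitions of a precomputed rollup are rebuilt on each run, and older periods change only through an explicit, versioned restatement. A minimal sketch, with the window length as an assumption:

```python
from datetime import date, timedelta

RESTATEMENT_DAYS = 3  # assumed window for late MES/ERP postings

def partitions_to_rebuild(run_date: date,
                          days: int = RESTATEMENT_DAYS) -> list[date]:
    """Daily rollup partitions rebuilt each run: [run_date - days, run_date)."""
    return [run_date - timedelta(days=days - i) for i in range(days)]

# A run on 2024-06-10 rebuilds June 7, 8, and 9; anything earlier
# stays frozen unless a versioned restatement is approved.
print(partitions_to_rebuild(date(2024, 6, 10)))
```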
Master data drift can distort history if asset hierarchies, routing versions, or part mappings change without governance.
Site-to-site variation can make a global KPI grain look standardized while hiding incompatible local meanings.
In brownfield plants, KPI grain is not just a data modeling choice. It is constrained by legacy MES transaction design, ERP posting timing, historian quality, QMS coding practices, and the practical cost of validating transformations. Full replacement to get a cleaner KPI stack often fails because the qualification burden, integration complexity, downtime risk, and change control impact are too high relative to the reporting problem being solved.
That is why many organizations do better with an incremental approach: preserve source-system authority, define a canonical KPI layer outside the transactional systems, precompute only the governed grains that matter operationally, and keep lineage to raw records. If your MES and ERP disagree on production completion time, or your downtime model is only partially coded, precomputing faster will not fix the underlying trust problem.
Precompute governed, repeatable, cross-system KPI grains that the business depends on. Compute on the fly for exploration and lightweight derived analysis. If definitions, timestamps, or data ownership are still unstable, fix those first. The wrong grain strategy usually reflects unresolved data governance issues, not just a performance tuning problem.