Start with a simple rule: attribute directly traceable quality costs to the specific program, part, lot, supplier event, work order, or customer requirement that caused them. Only allocate costs across multiple programs or customers when direct attribution is not credible or would cost more to maintain than the insight is worth.
In practice, most organizations need a two-layer model.
Direct costs: scrap, rework labor, replacement material, expedited freight, containment activity, test reruns, supplier chargebacks, and concession processing that can be linked to a specific nonconformance, order, serial, or customer requirement.
Shared or pooled costs: central quality engineering, common inspection resources, enterprise CAPA effort, system administration, broad training, audit preparation, and recurring overhead tied to multiple programs.
Those pooled costs should be assigned using a documented allocation basis that is stable, explainable, and reviewable. Common drivers include production hours, direct labor hours, inspection hours, transaction counts, units processed, revenue, or program mix. No single basis is universally correct. The best choice depends on what the cost actually follows and what data you can defend later.
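The proportional spread described above can be sketched in a few lines. This is a minimal illustration, assuming a pool allocated by inspection hours; the program names, pool amount, and hour figures are invented for the example.

```python
def allocate_pool(pool_cost, driver_units):
    """Spread a shared quality cost pool across programs in
    proportion to each program's share of the allocation driver
    (here, inspection hours). Inputs are illustrative."""
    total = sum(driver_units.values())
    if total == 0:
        raise ValueError("allocation driver has no activity")
    return {program: pool_cost * units / total
            for program, units in driver_units.items()}

# Pool: central inspection resources; driver: inspection hours.
inspection_hours = {"Program A": 600, "Program B": 300, "Program C": 100}
shares = allocate_pool(50_000.0, inspection_hours)
# Program A carries 600/1000 of the pool: 30000.0
```

The key property to preserve is that the shares always sum back to the pool, so nothing is double-counted or dropped in reconciliation.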
For most regulated manufacturing environments, the least problematic approach is:
Capture the originating quality event at the lowest practical level of traceability.
Book all directly attributable costs to that event first.
Define a limited number of shared quality cost pools.
Assign each pool one approved allocation driver.
Review the policy on a fixed cadence under change control rather than changing it case by case.
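The first two steps above, capturing the originating event and booking direct costs to it first, can be sketched as a simple record structure. The field and category names are illustrative assumptions, not a required schema.

```python
from dataclasses import dataclass, field

@dataclass
class QualityEvent:
    """Originating quality event captured at the lowest practical
    level of traceability. Field names are illustrative."""
    event_id: str
    program: str
    work_order: str
    direct_costs: list = field(default_factory=list)

    def book(self, category, amount):
        # Directly attributable cost is booked to the event first,
        # before any pooled allocation is considered.
        self.direct_costs.append((category, amount))

    def total(self):
        return sum(amount for _, amount in self.direct_costs)

ev = QualityEvent("NCR-1042", "Program A", "WO-7731")
ev.book("scrap", 1_200.0)
ev.book("rework_labor", 850.0)
```

Anything that cannot be booked this way falls through to one of the limited shared pools with its single approved driver.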
This prevents a common failure mode where teams retroactively move quality costs to protect program margins, customer relationships, or monthly performance reporting. That creates noise in the data and weakens trust in the numbers.
If the cost pool is driven mainly by inspection demand, inspection hours or inspection transactions are usually more defensible than revenue. If the pool is driven by production complexity, routing steps or labor hours may fit better. If the cost is tied to supplier-related escapes, supplier incident counts or receiving inspection volume may be more meaningful.
Revenue-based allocation is easy, but it often hides operational causality. It may be acceptable for high-level financial reporting, but it is usually weak for root cause analysis or program improvement decisions.
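One way to keep the driver choice causal and reviewable is to record it as data, one approved driver per pool, as the preceding paragraphs suggest. The pool and driver names below are illustrative, not prescriptive.

```python
# One approved, causal allocation driver per shared pool.
# All names here are examples, not a required taxonomy.
POOL_DRIVERS = {
    "central_inspection":   "inspection_hours",
    "enterprise_capa":      "capa_actions_opened",
    "supplier_quality_eng": "supplier_incident_count",
    "receiving_inspection": "receiving_inspection_volume",
}

def driver_for(pool):
    # Fail loudly rather than silently defaulting to revenue.
    if pool not in POOL_DRIVERS:
        raise KeyError(f"no approved driver for pool {pool!r}")
    return POOL_DRIVERS[pool]
```

Making the mapping explicit also makes it auditable: a revenue-based driver would have to appear in this table and survive review, rather than being a silent default.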
This only works if your data model supports it. Many plants have fragmented NCR, ERP, MES, QMS, and labor systems, so the underlying event, labor, material, and disposition data do not align cleanly. In that case, a more sophisticated attribution model can create false precision.
If your systems cannot reliably link nonconformance records to work orders, lots, serials, labor bookings, and material issues, keep the method simpler and make the limitations explicit. A defensible rough-cut model is usually better than a detailed model no one can validate.
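Whether the detailed model is defensible can itself be measured, for example by checking what fraction of nonconformance records actually link to a work order. This is a rough sketch; the record layout and the 95% threshold are assumptions for illustration.

```python
def linkage_coverage(ncr_records, key="work_order"):
    """Fraction of NCR records that carry a usable link to a
    work order. Record shape is an illustrative assumption."""
    if not ncr_records:
        return 0.0
    linked = sum(1 for r in ncr_records if r.get(key))
    return linked / len(ncr_records)

ncrs = [
    {"id": "NCR-1", "work_order": "WO-10"},
    {"id": "NCR-2", "work_order": None},   # orphaned record
    {"id": "NCR-3", "work_order": "WO-12"},
    {"id": "NCR-4", "work_order": "WO-12"},
]
coverage = linkage_coverage(ncrs)          # 0.75 here

# Below a documented threshold, keep the rough-cut model and
# state the limitation instead of manufacturing false precision.
use_detailed_model = coverage >= 0.95
```

Publishing the coverage number alongside the results is one way to make the limitations explicit, as the text recommends.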
Also, customer-specific treatment may be constrained by contract structure, internal finance policy, and whether the quality issue was caused by internal execution, supplier performance, design instability, or customer-driven change. Do not assume operational attribution and contractual recoverability are the same thing. They often are not.
Do not assume you need a full system replacement to improve attribution. In brownfield environments, that is often the wrong move. Replacing ERP, MES, QMS, or PLM just to get cleaner cost attribution usually fails because of qualification burden, validation effort, integration complexity, downtime risk, and the need to preserve traceability across long equipment and program lifecycles.
More often, the practical path is coexistence:
ERP remains the financial book of record.
QMS or NCR workflows remain the quality event record.
MES or labor systems provide execution and time data where available.
A governed reporting or costing layer performs the attribution logic.
That approach is less elegant, but usually more achievable and less disruptive.
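The governed reporting layer above is essentially a join: ERP cost lines matched to QMS events by a shared identifier, with each system remaining the book of record for its own data. This sketch assumes records are keyed by an NCR id; the shapes are invented for illustration.

```python
def attribute_costs(erp_cost_lines, qms_events):
    """Reporting-layer attribution: join ERP cost lines to QMS
    quality events by NCR id. Lines with no matching event fall
    through to pooled allocation. Record shapes are illustrative."""
    events_by_id = {e["ncr_id"]: e for e in qms_events}
    attributed, unattributed = [], []
    for line in erp_cost_lines:
        event = events_by_id.get(line.get("ncr_id"))
        if event is not None:
            attributed.append({**line, "program": event["program"]})
        else:
            unattributed.append(line)  # candidate for pooled allocation
    return attributed, unattributed

erp = [{"ncr_id": "NCR-7", "amount": 400.0},
       {"ncr_id": None,    "amount": 90.0}]
qms = [{"ncr_id": "NCR-7", "program": "Program B"}]
hit, miss = attribute_costs(erp, qms)
```

Keeping the join logic in one governed layer, rather than scattered across spreadsheets, is what makes the attribution reviewable without touching the systems of record.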
Your attribution policy should define:
which quality costs are direct versus pooled,
approved allocation drivers for each pool,
required source records,
who can override default attribution,
how overrides are documented and approved,
how often the model is reviewed, and
how restatements are handled if source data changes.
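A policy with those elements can be expressed as versioned data rather than tribal knowledge, which is what makes review under change control practical. Everything below is an illustrative sketch, not a required schema.

```python
# Minimal attribution policy as data, so it can be versioned,
# diffed, and reviewed on a fixed cadence. All names are
# illustrative assumptions.
POLICY = {
    "direct_categories": ["scrap", "rework_labor", "expedited_freight",
                          "containment", "test_rerun"],
    "pools": {
        "central_inspection": {"driver": "inspection_hours"},
        "enterprise_capa":    {"driver": "capa_actions_opened"},
    },
    "required_sources": ["NCR", "work_order", "labor_booking"],
    "override_approvers": ["quality_manager", "site_controller"],
    "review_cadence_months": 12,
    "restatement_rule": "restate_current_period_only",
}

def is_direct(category):
    """Direct-versus-pooled is decided by policy, not case by case."""
    return category in POLICY["direct_categories"]
```

Because overrides and restatements are named in the policy, deviations become documented exceptions instead of silent renegotiations of the numbers.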
Without that governance, the model becomes a negotiation tool instead of a management tool.
Attribute what you can directly. Allocate only what you must. Use causal drivers, document the policy, and preserve traceability back to the originating quality event. If your systems and processes are immature, say so and keep the model simple enough to validate. A less granular model with reliable evidence is usually more useful than a detailed model built on weak links between systems.