Delivery counts are highly visible in aerospace, but they reveal little about true production capability. This article explains how throughput, flow, rework, and system stability provide a more accurate view of manufacturing performance in regulated environments.

In aerospace, delivery numbers dominate headlines and executive reviews. Monthly and yearly totals are easy to understand and easy to compare. But if you are responsible for an aircraft, missile, or space hardware production line, you already know the problem: deliveries say almost nothing about how hard the system had to work to ship that hardware, or whether it can do it again next month.
The core issue is the same one explored in the misleading aerospace scoreboard: we are using surface-level output metrics to judge systems that are fundamentally constrained by complexity, regulation, and coordination. If you manage an AS9100 production environment, you need a different scoreboard—one that measures throughput, flow, and system health instead of just deliveries.
Delivery charts are compelling because they compress a complicated story into a single line. Executives can see trends at a glance. Investors can compare OEMs. Programs can be ranked and benchmarked. A rising line signals momentum; a falling line triggers concern.
Inside factories and across the supply chain, these same charts shape behavior. Teams feel pressure to “hit the number” by quarter-end. Ship dates become fixed reference points, even when upstream realities are changing daily. The simplicity of deliveries makes them attractive, but that simplicity comes with cost: almost all context is stripped away.
Public aerospace companies rely on delivery counts as one of the few operational metrics that can be disclosed consistently across programs and time. Analysts model revenue and cash flow around shipments. Backlog and deliveries become shorthand for competitiveness.
This external framing seeps back into internal management. Senior leaders report deliveries upward, so functional teams naturally optimize for them. But an AS9100-regulated factory is not a commodity volume line. The effort required to ship one serial number can vary by orders of magnitude, depending on design maturity, supplier stability, and quality status. When all of that nuance is collapsed into a single count, the delivery number becomes a distorted lens rather than a clear one.
A delivery record tells you that a configuration passed a specific gate at a specific time. It does not tell you how much rework, expediting, or overtime was required to get there, or whether the system can repeat the performance next month.
Two factories can both ship 10 aircraft in a month. One may do so with high first-pass yield, predictable cycle times, and low overtime. The other may rely on firefighting, expediting, and hidden backlog. The delivery count is identical; the production systems are not.
In high-mix aerospace environments, true throughput is not simply “units per hour.” It is the rate at which conforming, configuration-correct hardware moves through the value stream over time. That rate is constrained by test and inspection capacity, engineering dispositions, supplier stability, and the differing cadences of individual work centers, not by assembly speed alone.
Unlike in high-volume industries, takt time is rarely a single fixed number. Different work centers operate on different cadences, with long dwell times around inspections and tests. Measuring throughput requires looking at the whole flow across cells, not just a nominal cycle time at a single station.
Regulated aerospace production inserts multiple non-optional steps into the flow: in-process inspection, functional test, environmental test, flight test, conformity checks, and regulatory or customer acceptance. Each of these can become a limiting constraint, particularly when demand fluctuates or when a quality escape triggers additional sampling.
Throughput therefore must be measured across these verification gates, not just across assembly operations. A line that can mechanically assemble hardware quickly but waits days or weeks for test capacity does not have high throughput. It has local speed and system-level delay.
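The distinction between local speed and system-level throughput can be sketched in a few lines. This is a minimal illustration, assuming per-unit gate timestamps are available; the serial numbers, dates, and gate names (`assembly_complete`, `acceptance`) are invented for the example, not a standard schema.

```python
from datetime import date

# Illustrative gate-timestamp records for three serial numbers.
# Gate names and dates are assumptions; substitute your own verification gates.
units = [
    {"sn": "SN-001", "assembly_complete": date(2024, 3, 1), "acceptance": date(2024, 3, 20)},
    {"sn": "SN-002", "assembly_complete": date(2024, 3, 3), "acceptance": date(2024, 3, 28)},
    {"sn": "SN-003", "assembly_complete": date(2024, 3, 5), "acceptance": None},  # stuck in test
]

def gate_throughput(units, gate, start, end):
    """Count units that crossed a given gate within a window. System-level
    throughput is measured at verification gates, not at assembly stations."""
    return sum(
        1 for u in units
        if u.get(gate) is not None and start <= u[gate] <= end
    )

assembled = gate_throughput(units, "assembly_complete", date(2024, 3, 1), date(2024, 3, 31))
accepted = gate_throughput(units, "acceptance", date(2024, 3, 1), date(2024, 3, 31))
# Three units assembled, only two accepted: local speed, system-level delay.
print(assembled, accepted)
```

Measuring at the acceptance gate rather than the assembly gate is what surfaces the backlog sitting in front of test capacity.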
Rework is where the disconnect between deliveries and real throughput becomes most obvious. A unit that passes final inspection after three major rework cycles shows up as a single delivery. In the system, though, it consumed the equivalent capacity of multiple units in labor, test time, and engineering attention.
Concession-heavy programs can appear to be delivering acceptably while quietly burning massive capacity on hidden work. Without metrics that distinguish first-pass throughput from total output, leadership cannot see the erosion of real capability until it is severe.
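The erosion is easy to quantify once rework is expressed in clean-build equivalents. The labor-hour figures below are purely hypothetical assumptions chosen for illustration, not benchmarks.

```python
# Hypothetical figures: a clean build takes 400 h; each major rework cycle
# adds teardown, repair, and retest hours on top of it.
CLEAN_BUILD_HOURS = 400
REWORK_CYCLE_HOURS = 150  # assumption for illustration only

def equivalent_units_consumed(rework_cycles):
    """Capacity a single delivery actually consumed, expressed in
    clean-build equivalents."""
    total = CLEAN_BUILD_HOURS + rework_cycles * REWORK_CYCLE_HOURS
    return total / CLEAN_BUILD_HOURS

# A unit shipped after three major rework cycles counts as one delivery,
# but it consumed more than two units' worth of capacity.
print(equivalent_units_consumed(3))
```

The delivery chart records a 1; the capacity ledger records 2.125. That gap is the hidden work the metric never shows.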
Every major aerospace program carries a tail of non-conformances and deviations. A single shipset might involve dozens of quality records across subassemblies, hardware substitutions, and process deviations. Each record requires investigation, root cause analysis, and documented disposition.
On the shop floor, this translates into repair loops: units leave the main line, move to rework areas, wait on engineering or supplier input, and eventually return for retest and reintegration. From a delivery perspective, all of this collapses into a single date. From a throughput perspective, it represents a major diversion of flow and capacity.
Suppliers introduce another layer of hidden work. When critical components arrive late, out of tolerance, or incomplete, internal teams respond with expediting, out-of-sequence builds, hardware substitutions, and last-minute reshuffling of priorities.
These tactics protect the delivery schedule in the short term, but they damage throughput. Flow becomes unpredictable, WIP grows in odd places, and future deliveries inherit the disruption. Without a clear view of these patterns, leaders may misinterpret on-time delivery as evidence of a healthy system when it is actually the result of unsustainable expediting.
In AS9100 environments, paperwork and digital traceability are as important as physical build status. When travelers, inspection records, or certificates of conformance are incomplete, teams often scramble near ship dates to reconstruct the digital thread.
This reconstruction work rarely appears explicitly in any metric. Engineers and planners dig through email, shared drives, and spreadsheets to close gaps. The unit ships; the delivery target is met. But the underlying system is signaling a problem: execution and traceability are not aligned. True throughput should account for this late-stage effort, because it represents real cost and risk.
First-pass yield (FPY) measures the percentage of units that complete a process or flow without requiring rework. In aerospace, you can define FPY at multiple levels: operation-level, cell-level, or end-to-end configuration-level. High FPY indicates that work instructions, training, tooling, and design stability are aligned.
Right-first-time rates are powerful because they convert quality into a flow metric. A line that delivers 95% of units right-first-time has far more real capacity than one that delivers the same total output but with 60% FPY. Dashboards that highlight FPY by constraint area let teams attack the factors that erode throughput long before delivery numbers slip.
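The capacity claim above can be made concrete with a small model. This is a sketch under a stated assumption: each reworked unit burns an extra fraction of a clean build's capacity (the 0.5 figure is illustrative, not measured).

```python
def fpy(units_completed, units_reworked):
    """First-pass yield: fraction of units that completed without rework."""
    if units_completed == 0:
        return 0.0
    return (units_completed - units_reworked) / units_completed

def capacity_per_shipped_unit(fpy_rate, rework_fraction=0.5):
    """Average clean-build equivalents consumed per shipped unit, assuming
    each reworked unit costs an extra `rework_fraction` of a clean build
    (an illustrative assumption, not a measured figure)."""
    return 1.0 + (1.0 - fpy_rate) * rework_fraction

high = capacity_per_shipped_unit(0.95)  # ~1.03 builds per delivery
low = capacity_per_shipped_unit(0.60)   # 1.2 builds per delivery
print(high, low)
```

At the same delivered volume, the 60% FPY line is quietly spending roughly 17% more capacity per unit, which is exactly the headroom it will lack when rate increases.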
Touch time is how long a technician, inspector, or operator is actively working on a unit. Queue time is how long that unit is waiting—for materials, paperwork, quality sign-off, engineering decisions, or test slots. In many aerospace factories, queue time dwarfs touch time.
Measuring both is essential. You may discover that a critical assembly spends 80% of its lead time waiting between operations or sitting in front of a single constrained special process. Improving documentation flow or decision turnaround at those points can increase throughput without adding headcount or equipment.
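Separating the two is a simple bookkeeping exercise once operation logs exist. The operations and hour values below are invented for illustration.

```python
# Illustrative operation log for one assembly:
# (operation, touch_hours, queue_hours). All values are assumptions.
ops = [
    ("drill", 4, 20),
    ("fasten", 6, 30),
    ("inspect", 2, 48),   # waiting on quality sign-off
    ("test", 8, 72),      # waiting for a test slot
]

touch = sum(t for _, t, _ in ops)
queue = sum(q for _, _, q in ops)
queue_share = queue / (touch + queue)

print(f"touch={touch}h queue={queue}h queue share={queue_share:.0%}")
```

In this sketch the unit is actively worked for 20 hours but waits for 170, so roughly 89% of its lead time is queue. Attacking the sign-off and test-slot queues moves throughput far more than speeding up the drill.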
Healthy systems limit WIP and keep it moving. When WIP ages—units sit in the same status for days or weeks—it signals a flow break. Tracking WIP aging by operation, shop, and supplier reveals where the system is actually constrained.
Bottleneck analysis in aerospace is more dynamic than in simple lines. Constraints move between internal cells, external suppliers, test facilities, and engineering. A robust metric set tracks where the constraint is this week, how much capacity it has, and how variability is affecting it. That is the level of visibility required to convert schedule promises into actual throughput.
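A first cut at WIP aging needs only a status snapshot: each serial number, its current operation, and when it entered that status. The records below are invented; the heuristic of treating the oldest WIP cluster as the constraint candidate is a simplification, not a full bottleneck analysis.

```python
from datetime import date
from collections import defaultdict

today = date(2024, 4, 1)

# (serial, current operation, date it entered that status) — illustrative data.
wip = [
    ("SN-010", "machining", date(2024, 3, 28)),
    ("SN-011", "final_test", date(2024, 3, 10)),
    ("SN-012", "final_test", date(2024, 3, 14)),
    ("SN-013", "mrb_review", date(2024, 2, 20)),
]

aging = defaultdict(list)
for sn, op, entered in wip:
    aging[op].append((today - entered).days)

# Where the oldest WIP sits is a strong hint at this week's constraint.
worst = max(aging.items(), key=lambda kv: max(kv[1]))
print(dict(aging), "constraint candidate:", worst[0])
```

Run weekly, the same query shows the constraint migrating between cells, suppliers, and review boards, which is precisely the dynamic picture a static bottleneck label misses.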
Throughput is not just about average speed. It is about predictability. In a constrained, regulated environment, a stable 14-week lead time may be healthier than a nominal 10-week lead time that routinely fluctuates between 8 and 20 weeks.
Measuring lead time stability—for example, via standard deviation or on-time-complete metrics across internal milestones—gives you a truer sense of capability than delivery counts alone. Customers and program managers can plan around a stable system; unstable throughput forces constant replanning and erodes trust.
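The stability comparison is a standard-library calculation. The two lead-time series below are hypothetical, chosen to mirror the 14-week-stable versus 8-to-20-week-volatile example.

```python
import statistics

# Hypothetical lead times in weeks for two lines shipping the same volume.
stable = [14, 13, 14, 15, 14, 14]
nominal_fast = [8, 10, 20, 9, 18, 10]

def describe(lead_times):
    """Mean, sample standard deviation, and coefficient of variation."""
    mean = statistics.mean(lead_times)
    sd = statistics.stdev(lead_times)
    return mean, sd, sd / mean

for name, lt in [("stable", stable), ("fast-but-volatile", nominal_fast)]:
    mean, sd, cv = describe(lt)
    print(f"{name}: mean={mean:.1f}w sd={sd:.1f}w cv={cv:.2f}")
```

The volatile line has a lower mean but a standard deviation roughly eight times larger; a program manager can plan around the first series and can only firefight around the second.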
ERP systems are optimized for planning and financial posting, not for real-time execution. They know what should have happened, which operations are planned, and when a work order is financially complete. But they often lack granular timestamps, partial completions, or rich status reasons for delay.
The result is a binary view of the world: not started, in process, complete. That may be enough for material requirements planning, but it is insufficient for understanding true throughput. The system does not natively distinguish a unit smoothly moving through flow from one oscillating between rework, MRB, and waiting on engineering.
Many aerospace manufacturers have implemented an MES for specific lines or processes, often around automated equipment or final assembly. But coverage is rarely universal. Manual, mixed-model, and prototype work frequently lives outside the MES in travelers, whiteboards, and local databases.
These gaps fragment the execution picture. You may have good visibility in a test cell, but no structured data on how long units waited for that test or how many were diverted to repair before arriving. Without end-to-end coverage, throughput and flow metrics become partial and misleading.
To compensate, teams build their own visibility layers: spreadsheets for WIP tracking, PowerPoint-based status boards, and informal messaging channels. These tools are flexible and fast to change, but they are also fragile and non-authoritative.
From a metrics standpoint, manual layers break the digital thread. You cannot reliably compute FPY, WIP aging, or constraint utilization from a set of unlinked spreadsheets and hallway conversations. At best, you get snapshots; at worst, you get conflicting versions of reality across engineering, production, and quality.
A connected execution layer fills the gap between planning systems and the shop floor. It does not replace ERP or existing MES where they work well. Instead, it connects work orders, operations, and quality events into a coherent, real-time view of WIP.
In practice, this means every unit or serial number carries a live status: where it is, what operation it is on, who is working it, and what it is waiting on. With that foundation, throughput metrics are no longer estimates. You can see exactly how many conforming units are crossing key gates per day, week, or month and how that rate changes as conditions shift.
When quality events are integrated into the same execution layer, non-conformances, concessions, and repairs become part of the flow picture instead of being tracked separately. Each quality record is attached to specific work, operations, and components.
This enables new metrics: rework hours per shipped unit, FPY by operation and configuration, and the impact of specific defect modes on overall throughput. Leaders can identify which recurring issues are eroding capacity and prioritize corrective actions based on system-wide impact, not just defect count.
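Once quality records carry the operation and hours they consumed, ranking defect modes by capacity impact is straightforward. The defect modes, hours, and shipped-unit count below are invented for the example.

```python
from collections import defaultdict

# Illustrative quality records attached to work:
# (operation, defect_mode, rework_hours). All values are assumptions.
quality_events = [
    ("fastening", "torque_out_of_spec", 12),
    ("sealing", "void_detected", 30),
    ("fastening", "torque_out_of_spec", 14),
    ("wiring", "mislabel", 4),
]
shipped_units = 10  # assumed count for the period

rework_per_shipped = sum(h for *_, h in quality_events) / shipped_units

impact = defaultdict(int)
for op, mode, hours in quality_events:
    impact[(op, mode)] += hours

# Prioritize by capacity impact, not raw defect count.
ranked = sorted(impact.items(), key=lambda kv: kv[1], reverse=True)
print(f"rework hours per shipped unit: {rework_per_shipped}")
print("top capacity drain:", ranked[0])
```

Note the inversion: fastening has the most defect records, but the single sealing void costs more hours, so it ranks first. That is the difference between counting defects and measuring their throughput impact.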
A connected execution layer can also extend beyond a single site. When suppliers participate—even with limited, well-scoped data sharing—you can visualize where work is actually piling up: internal assembly, external machining, special processes, or test labs.
Rather than treating supplier delivery performance as a black box, you see WIP stages, queues, and cycle times in aggregate. This supports more productive conversations: instead of demanding “faster deliveries,” OEMs and tier-ones can collaborate with suppliers on specific constraint relief that improves throughput for both sides.
In many aerospace organizations, each function has its own view of performance. Engineering tracks change implementation. Quality tracks findings and audits. Production tracks schedule adherence. Without a shared execution layer, these views diverge and debates about “what is really happening” consume time.
When all three functions work from the same operational data—live WIP, integrated quality events, and configuration-aware status—the conversation changes. Instead of arguing about numbers, teams can focus on constraints, trade-offs, and systemic improvements that raise real throughput.
Deliveries will always matter. Customers, warfighters, and mission operators depend on them. The goal is not to dismiss delivery metrics, but to put them in the right context. A modern aerospace scoreboard balances deliveries against first-pass yield, lead time stability, WIP aging, and throughput at the constraint.
When these metrics move together, you know the system is getting healthier. When deliveries improve while FPY and stability worsen, you know you are borrowing from the future.
Throughput metrics can also serve as leading indicators of execution maturity. Examples include falling rework hours per shipped unit, shrinking WIP age at constraint operations, and narrowing lead-time variance.
These indicators do not show up on investor slides, but they predict whether the system can handle increased rate, new configurations, or regulatory scrutiny without breaking.
Externally, aerospace organizations face a tension: markets want simple numbers; operations need nuanced ones. The path forward is not to publish every internal metric, but to frame deliveries and backlog as outcomes of an execution system—and to explain how that system is being strengthened.
That might mean discussing investments in connected execution, traceability, and supplier integration as part of program updates, or highlighting improvements in stability and right-first-time performance alongside shipment counts. Over time, the industry can move away from a one-dimensional scoreboard and toward a more accurate understanding of what real capability looks like in regulated aerospace manufacturing.
For manufacturers across the supply chain, the underlying message is consistent: if you rely on deliveries alone to judge performance, you will miss the early signals. Throughput, flow, and system health live in the execution layer—and that is where competitive advantage is now being built.