Yes. MES-based AI projects usually need additional data security controls beyond a standard MES or reporting deployment, because they often aggregate sensitive production, quality, routing, maintenance, and operator data into new pipelines, storage layers, and model workflows.
The exact controls depend on what data the AI uses, where inference runs, whether data leaves the plant network, and whether regulated technical data or customer-restricted information is involved. There is no single checklist that fits every site.
AI projects create new exposure points that many plants do not have in traditional MES transactions:
Bulk extraction of MES records into data lakes, feature stores, or external platforms
Copying or transforming genealogy, quality, and machine data into less controlled environments
Use of cloud services, external model APIs, or vendor-managed platforms
New service accounts, connectors, and middleware with broad access
Training datasets that can retain sensitive product, process, or operator information long after the source record changes
Model outputs that influence operations without the same review controls as formal MES transactions
That means the security scope is not just the MES itself. It includes extraction paths, staging layers, integration middleware, model repositories, prompt or query interfaces, output distribution, and change control around models and data pipelines.
Classify MES-connected data before any AI work starts. Separate routine production telemetry from controlled technical data, quality evidence, operator records, customer-specific information, and any export-controlled or contract-restricted content. Many projects fail here by assuming all MES data is operational and low risk.
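One way to make that classification enforceable rather than aspirational is to encode it as an allow-list the extraction tooling checks before anything moves. A minimal sketch, where the field names, class labels, and the "operational only" export policy are all illustrative assumptions:

```python
# Sketch: tag MES fields with a sensitivity class before any extraction.
# Field names and class labels are illustrative, not a standard taxonomy.
CLASSIFICATION = {
    "machine_cycle_time": "operational",
    "oee_availability": "operational",
    "inspection_result": "quality_evidence",
    "operator_id": "personal",
    "customer_part_spec": "customer_restricted",
}

# Assumed policy: only routine operational telemetry may enter the AI pipeline.
ALLOWED_FOR_AI_EXPORT = {"operational"}

def exportable(fields):
    """Return only fields whose class is approved for AI export.
    Unclassified fields are excluded by default (fail closed)."""
    return [f for f in fields if CLASSIFICATION.get(f) in ALLOWED_FOR_AI_EXPORT]

print(exportable(["machine_cycle_time", "operator_id", "inspection_result"]))
# → ['machine_cycle_time']
```

The useful property is the fail-closed default: a field nobody classified cannot leave, which forces the classification conversation to happen before the pipeline runs.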
Minimize data movement. Do not export full MES history if the use case only needs a subset. Reduce fields, time ranges, identities, and attachments. AI teams often ask for more data than the use case can justify.
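Minimization is easiest to hold when extraction requests are built from an explicit spec rather than a blanket dump. A sketch, assuming a hypothetical approved-field allow-list and a bounded look-back window:

```python
from datetime import datetime, timedelta, timezone

# Assumed allow-list of fields approved for this specific use case.
APPROVED_FIELDS = {"line_id", "cycle_time_s", "alarm_code", "timestamp"}

def scoped_extract_request(requested_fields, days_back):
    """Build a minimal extraction spec instead of exporting full MES history:
    intersect requested fields with the approved list, bound the time range,
    and exclude attachments by default."""
    fields = sorted(set(requested_fields) & APPROVED_FIELDS)
    since = datetime.now(timezone.utc) - timedelta(days=days_back)
    return {"fields": fields, "since": since.isoformat(), "include_attachments": False}

# A request for an unapproved field (operator_name) is silently narrowed.
req = scoped_extract_request(["cycle_time_s", "operator_name", "alarm_code"], days_back=90)
```

Note the asymmetry: the AI team asks for what it wants, but the spec only ever grants the intersection with what the use case justified.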
Segment environments. Keep plant execution systems, OT networks, historian layers, analytics zones, and enterprise AI platforms separated with controlled interfaces. Direct, broad connectivity from AI platforms into MES or OT creates unnecessary risk.
Enforce least-privilege access. Service accounts, data engineers, data scientists, vendors, and support teams should not inherit broad MES privileges. Read-only still needs scoping by plant, line, work center, product family, and data type where possible.
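Scoped read-only access can be expressed as a per-account scope table that connectors consult on every request. A simplified sketch, where the account names, plant codes, and data types are hypothetical:

```python
# Assumed scope table: each service account is bounded by plant and data type.
SCOPES = {
    "svc-ai-oee": {"plants": {"P01"}, "data_types": {"telemetry"}},
}

def authorized(account, plant, data_type):
    """Check a service account's scope instead of granting blanket MES read access."""
    scope = SCOPES.get(account)
    return bool(scope) and plant in scope["plants"] and data_type in scope["data_types"]

print(authorized("svc-ai-oee", "P01", "telemetry"))   # → True
print(authorized("svc-ai-oee", "P02", "telemetry"))   # → False
```

In practice this scoping usually lives in the database or API gateway rather than application code, but the shape of the check is the same.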
Control where training and inference occur. If data leaves the site, document exactly what leaves, why, how it is protected, and whether the destination environment is acceptable for that data class. Some sites will require on-premises or tightly bounded private environments for certain workloads.
Protect data in transit and at rest. This is basic, but in MES-AI projects it must cover connectors, intermediate files, caches, notebooks, backups, and exported datasets, not just the MES database.
Use strong identity, credential, and secret management. Hard-coded credentials in scripts, notebooks, and integration jobs are common failure modes in pilot AI projects.
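The minimum fix for hard-coded credentials is to resolve them at runtime from the environment or a secret manager, and refuse to run if they are missing. A sketch, with assumed variable names:

```python
import os

def get_mes_credentials():
    """Resolve MES connector credentials at runtime from the environment
    (or, better, a secret manager), never from source code or notebook cells.
    The variable names are assumptions for this sketch."""
    user = os.environ.get("MES_RO_USER")
    password = os.environ.get("MES_RO_PASSWORD")
    if not user or not password:
        raise RuntimeError("MES credentials not provisioned; refusing to run")
    return user, password
```

Failing loudly when credentials are absent is deliberate: it prevents the common pilot-project shortcut of pasting a password into the script "just to get it working".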
Log and retain evidence. You need auditability for data extraction, model version changes, prompt or query access where relevant, output delivery, and privileged actions. In regulated environments, undocumented model and pipeline changes become a control problem quickly.
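Auditability is simpler to retrofit if every privileged action emits a structured, append-only record from the start. A minimal sketch of one such record; the schema is an assumption:

```python
import json
from datetime import datetime, timezone

def audit_record(actor, action, target, detail):
    """Produce one JSON line per privileged action for an append-only audit log.
    The field names here are illustrative, not a defined standard."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "target": target,
        "detail": detail,
    }, sort_keys=True)

line = audit_record("svc-ai-extract", "bulk_export", "mes.work_orders", {"rows": 120000})
```

JSON lines are easy to ship to whatever log platform the site already uses, and sorted keys keep diffs and integrity checks stable.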
Apply change control to models and pipelines. Treat AI logic that affects production decisions as a governed change, not an informal analytics update. Retraining, feature changes, threshold changes, and prompt changes can all alter behavior.
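A lightweight way to make retraining visible to change control is to pin every released model artifact to a content hash and a change reference, so a silently retrained model no longer matches its release record. A sketch, with hypothetical approver and change-ticket values:

```python
import hashlib

def model_release_record(artifact_bytes, version, approved_by, change_ref):
    """Pin a model artifact to its SHA-256 digest so retraining, feature
    changes, or threshold changes cannot slip into production unrecorded."""
    return {
        "version": version,
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "approved_by": approved_by,   # hypothetical approver identity
        "change_ref": change_ref,     # hypothetical change-control ticket
    }

record = model_release_record(b"model-weights", "1.3.0", "qa.lead", "CHG-1042")
```

At inference time, the deployed artifact can be re-hashed and compared against the record; a mismatch means an ungoverned change.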
Validate interfaces and intended use. If AI outputs are shown inside MES workflows, drive operator guidance, or affect release, inspection, scheduling, or routing decisions, the integration and use case may need formal validation based on site procedures and risk.
Set retention and deletion rules. AI datasets and model artifacts often persist longer than source MES records. That can create conflicts with record control, contract requirements, privacy obligations, or technical-data handling rules.
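Retention rules only work if something periodically checks artifacts against them. A sketch of that sweep, where the artifact schema ({'name', 'created'}) and the retention window are assumptions:

```python
from datetime import datetime, timedelta, timezone

def expired_artifacts(artifacts, retention_days):
    """Return the names of AI datasets or model artifacts that have outlived
    their retention window and are due for review or deletion."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    return [a["name"] for a in artifacts if a["created"] < cutoff]

old = datetime.now(timezone.utc) - timedelta(days=400)
due = expired_artifacts([{"name": "training_set_v1", "created": old}], retention_days=365)
```

In a real deployment the retention window would vary by data class, which is another reason the classification step has to come first.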
Review third-party model and platform terms carefully. A major risk is allowing a vendor platform to retain prompts, uploaded files, or training data in ways the plant did not intend.
In regulated manufacturing, the issue is not only confidentiality. It is also traceability and data integrity.
Preserve system of record boundaries. The AI layer should not quietly become the unofficial source for route status, genealogy, inspection disposition, or work instruction content.
Maintain evidence trails. If a model recommends an action, you may need to show what data informed that recommendation, which model version generated it, and whether a human reviewed it.
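Those three elements — what data, which model version, and who reviewed — can be captured as one evidence record at the moment the recommendation is generated. A sketch with an assumed schema:

```python
import hashlib
import json

def recommendation_evidence(model_version, input_payload, recommendation, reviewer=None):
    """Record what informed a recommendation: model version, a digest of the
    input data, the output, and the human reviewer (None until reviewed)."""
    digest = hashlib.sha256(
        json.dumps(input_payload, sort_keys=True).encode()
    ).hexdigest()
    return {
        "model_version": model_version,
        "input_sha256": digest,
        "recommendation": recommendation,
        "human_reviewed_by": reviewer,
    }

evidence = recommendation_evidence("2.1.0", {"lot": "A1", "cpk": 1.12}, "hold_lot")
```

Hashing the input rather than storing it keeps the evidence record small and avoids duplicating sensitive data, while still proving which data the model saw.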
Protect approved content. If AI is allowed to summarize or generate operator-facing instructions from MES, PLM, or QMS content, approval status and version governance must remain explicit.
Separate decision support from automated execution. The security and validation burden is usually lower when AI provides bounded recommendations than when it writes back to MES transactions automatically.
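That boundary can be made structural: route every recommendation through a gate that only touches MES after an explicit human approval. A minimal sketch, where the write-back callable and approval flag are assumptions about the integration:

```python
def apply_recommendation(rec, mes_writeback, human_approved):
    """Gate between decision support and execution: a recommendation reaches
    the MES write-back path only with an explicit human approval."""
    if not human_approved:
        return {"status": "pending_review", "written": False}
    mes_writeback(rec)  # hypothetical write-back hook into the MES interface
    return {"status": "applied", "written": True}

writes = []
result = apply_recommendation({"action": "hold_lot"}, writes.append, human_approved=False)
# nothing written; the recommendation waits for review
```

The point of making the gate code rather than procedure is that a pilot cannot drift into automated execution without someone deliberately changing the gate.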
No security design eliminates these concerns entirely; a sound one reduces exposure and improves recoverability if something goes wrong. Common failure modes include:
Exporting too much MES history into a general-purpose AI environment
Using shared analyst accounts or overprivileged service credentials
Allowing ad hoc notebooks or scripts to bypass approved integration paths
Sending controlled files, traveler content, or part-specific process data to external model services without proper review
Skipping model and prompt change control because the project started as a pilot
Letting AI outputs influence quality or production decisions without traceable review
Assuming enterprise IT controls automatically cover plant-floor integrations and OT-adjacent systems
Most MES-based AI projects run in mixed environments with legacy MES, ERP, PLM, QMS, historians, custom interfaces, and long-lived equipment. In that setting, security problems usually come from the seams between systems, not from one platform alone.
That is why full replacement is often the wrong answer. Replacing MES or adjacent systems to create a cleaner AI architecture can trigger qualification work, revalidation, downtime risk, interface rewrites, retraining, and disruption of evidence trails. In long-lifecycle, regulated operations, those costs and risks often outweigh what the AI benefit case can justify.
A more realistic approach is controlled coexistence: keep core systems as systems of record, add tightly scoped extraction and inference layers, validate high-impact integrations, and phase controls in based on data sensitivity and operational risk.
Before approving an MES-AI project, most sites should require at least:
a data classification review
a defined system architecture showing every data hop
an access model for humans, service accounts, and vendors
a decision on on-premises versus cloud processing
logging and retention requirements
change control for models, prompts, and pipelines
a validation and risk review for any workflow that affects production or quality decisions
If those basics are missing, the project is not ready, even if the AI model itself looks promising.
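That gate is simple enough to automate as part of project intake. A sketch, where the gate names are paraphrases of the list above:

```python
# Assumed gate names, paraphrasing the prerequisites listed above.
REQUIRED_GATES = [
    "data_classification_review",
    "architecture_with_all_data_hops",
    "access_model_defined",
    "hosting_decision_made",
    "logging_and_retention_defined",
    "model_and_pipeline_change_control",
    "validation_and_risk_review",
]

def project_ready(completed_gates):
    """Return (ready, missing): the project proceeds only when every
    prerequisite gate is complete, regardless of how promising the model is."""
    missing = [g for g in REQUIRED_GATES if g not in completed_gates]
    return (not missing, missing)

ready, missing = project_ready({"data_classification_review"})
```

The readability of the missing-gates list is the point: it turns "not ready" from a judgment call into a concrete to-do list.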