The Shelfware Problem: When Models Don't Connect to Work
There’s a pattern that plays out in nearly every MBSE adoption story, and it goes like this.
An organization decides to get serious about Model-Based Systems Engineering. They select a tool. They hire a consultant. Or three. They train a cohort of systems engineers. The systems engineers build a model. The model captures the architecture, the requirements, the interfaces. It’s a good model. It follows the methodology. It expresses the system accurately.
And then nothing happens.
The developers keep working from their issue tracker tickets. The test engineers keep maintaining their test plans in Excel. The program manager keeps running status reviews from PowerPoint slides built by hand. The model exists in a parallel universe. Complete, correct, and completely disconnected from the work.
This is the shelfware problem. And it’s not a failure of modeling. It’s a failure of integration.
Five ways integration fails
When we talk to practitioners about why their MBSE model isn’t connected to their engineering workflow, we hear five distinct failure modes. They’re worth naming because they’re not all the same problem, and they don’t all have the same solution.
1. No interoperability
The most basic failure: the tools literally can’t talk to each other. Your system model lives in one tool. Your requirements live in another. Your test cases live in a third. Your risk register lives in a SharePoint spreadsheet that somebody’s executive assistant updates quarterly.
Each tool has its own data model, its own API (if it has one at all), and its own proprietary format. There is no shared language, no standard exchange format that actually works in practice, and no common data model that spans the lifecycle. XMI was supposed to solve this for UML and SysML. It didn’t. ReqIF was supposed to solve this for requirements, and it sort of did, for a narrow set of use cases. The reality is that most MBSE tools are information silos, islands of structured data surrounded by seas of unstructured chaos.
2. No data synchronization
Even when tools can exchange data, they rarely stay in sync. A requirements engineer updates a requirement in the requirements management tool. That change needs to propagate to the system model, to the test plan, to the risk assessment. In theory, the tools support this. In practice, synchronization is manual, error-prone, and time-consuming.
Somebody runs an export. Somebody else runs an import. A third person compares the two versions to figure out what changed. A fourth person updates the downstream artifacts by hand. This isn’t a digital thread. It’s a game of telephone played in Excel.
The result is that the model drifts. Within weeks of deployment, the system model no longer reflects reality. It reflects the state of the system as it was understood at the time someone last bothered to update it. And once a model drifts, trust erodes. Once trust erodes, people stop looking at the model. Once people stop looking at the model, it’s shelfware.
3. Prohibitive cost
Integration in the MBSE world is expensive. Not “buy a SaaS connector” expensive. “Hire a systems integrator for a six-month engagement” expensive.
Connecting your modeling tool to your requirements tool requires middleware, custom configuration, and ongoing maintenance. Connecting either of those to your simulation environment requires different middleware, different configuration, and different expertise. Each integration is a project unto itself, with its own timeline, budget, and failure risk.
For large defense programs with nine-figure budgets, this is painful but survivable. For a mid-size engineering firm trying to adopt MBSE for the first time? The integration cost alone can exceed the tool licensing cost. And the dirty secret is that many of these integrations are expensive and finicky, or just straight up don’t work reliably in production.
But the dollar figure isn’t even the most damaging part. The real cost is velocity.
In every other software-adjacent discipline, integration has gotten radically cheaper. SaaS connectors cost tens of dollars a month. iPaaS platforms ship pre-built connectors for thousands of tools. AI agents now generate code, draft requirements, and propose tests at the speed of thought. The rest of the engineering organization is operating on AI-cycle timelines.
MBSE integration timelines have not budged in twenty years.
That asymmetry is fatal, because safety-critical engineering is supposed to be the discipline that proves correctness can keep pace with change. That is the entire promise of the digital thread: when something moves, everything connected to it updates, and the verification evidence stays current. The model is supposed to be the mechanism that lets a regulated industry move fast safely.
What actually happens is the opposite. By the time a six-month integration project ships, the system has already changed. The hazard analysis is referencing an architecture that no longer exists. The verification matrix is anchored to a requirements baseline that has drifted. The safety case is built on yesterday’s snapshot, dressed up to look like today’s reality.
This is the failure that matters in the AI age. It is not that MBSE tools are slow. It is that the integration cost forces engineering organizations into a choice nobody should ever have to make: move fast and abandon the model, or maintain the model and fall behind. Move fast and accept that nothing is traceable, verified, or auditable. Maintain the model and watch competitors ship three iterations while you wait for the middleware vendor to release a patch.
Neither option is “moving fast safely.” Both are failure modes. And the AI-driven acceleration of every adjacent discipline is widening the gap by the month.
4. Customization hell
MBSE tools are famously customizable. You can create custom stereotypes, custom profiles, custom plugins, custom views. This sounds like a feature until you realize what it means in practice: every organization’s MBSE installation is a unique snowflake.
Organization A’s model structure is incompatible with Organization B’s. The consultant who set up the tool left, and nobody knows why the profile is configured the way it is. The plugin that handled the requirements tool integration was written in Java by a contractor in 2019 and hasn’t been updated since.
Customization creates technical debt. It makes upgrades risky, support difficult, and collaboration across organizational boundaries nearly impossible. When different architects on the same program use different tools and different approaches on the same model, you don’t have a shared model. You have multiple models wearing a trenchcoat.
5. Cultural misalignment
This is the failure mode that nobody wants to talk about: the model doesn’t connect to work because the organization doesn’t actually work from models.
MBSE is often adopted as a mandate: a contract requirement, a process improvement initiative, a strategic directive from leadership. The mandate creates the model. But it doesn’t change the workflows. Engineers still do their real work in the tools they’ve always used. The model becomes a deliverable to be produced, not a tool to be used. It’s compliance theater: a high-level documentation exercise that satisfies a checkbox without actually changing how decisions get made.
The deeper issue is that MBSE tools were conceived as standalone environments, not as nodes in a digital thread. They were designed to be the center of the engineering universe. But engineers already have a center of their universe. It’s the tool where they do their actual work. Asking them to maintain a separate model on top of their actual work is asking them to do double duty. And they will resist it, quietly and effectively, until the model is abandoned.
The duplication tax
All five failure modes lead to the same outcome: duplication.
Engineers maintain the model AND the real artifacts. They write the requirement in the requirements tool AND update it in the system model. They create the test plan in their test management tool AND trace it in the model. They build the architecture in the model AND describe it again in the PowerPoint for the design review.
Every duplicated artifact is a synchronization problem. Every synchronization problem is an opportunity for the model to drift from reality. Every drift is a reason for people to stop trusting the model. The duplication tax is the mechanism by which MBSE tools become shelfware.
This is worth stating directly: when an MBSE tool creates more work than it eliminates, it has failed. Not because modeling is wrong, but because the tool has not earned its place in the workflow. It exists as an additional burden rather than a load-bearing structure.
What a connected digital thread actually looks like
The Department of Defense talks about the “digital thread,” a continuous flow of data from requirements through design, manufacturing, testing, and operations. INCOSE’s Vision 2035 describes “continuous virtual exploration from initial design through decommissioning.” These are good visions. But they’re meaningless without implementation.
Here’s what a connected digital thread actually looks like in practice:
Requirements to architecture. A requirement is written. It’s immediately linked to the system element it constrains. When the requirement changes, the systems engineer gets a notification. Not an email, not a status meeting update. A live alert in context. When the architecture changes, the affected requirements are flagged for review automatically.
Architecture to risk. System elements carry risk metadata. A hazard analysis links hazards to specific architectural decisions. When a design changes, the risk assessment updates. Not because somebody remembered to run a report, but because the risk model is wired to the architecture model. ASIL derivations, FMEA scores, and STPA loss scenarios are all connected to the elements they analyze, all updating when those elements change.
Risk to verification. Every risk mitigation has a verification activity. Every verification activity has acceptance criteria. Every test result links back to the requirement it verifies, the risk it mitigates, and the architectural element it exercises. Coverage isn’t a spreadsheet someone maintains by hand. It’s a live computation over the connected data.
Verification to operations. Test results, performance data, and field observations feed back into the model. The system’s operational profile informs the next design iteration. The model isn’t a static snapshot. It’s a living record of the system’s entire lifecycle.
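To make that connectedness concrete, here is a minimal sketch of the idea in Python, using a toy in-memory data model. Every class, field, and function name here is an illustrative assumption, not the schema of any particular tool; the point is that change-impact flagging and verification coverage fall out of the links themselves rather than being maintained by hand.

```python
from dataclasses import dataclass, field

# Minimal sketch of a connected traceability model. All names are hypothetical
# illustrations, not any real tool's schema.

@dataclass
class Requirement:
    rid: str
    text: str

@dataclass
class SystemElement:
    eid: str
    name: str
    satisfies: list[str] = field(default_factory=list)  # requirement ids this element satisfies

@dataclass
class TestResult:
    tid: str
    verifies: str   # id of the requirement this test verifies
    passed: bool

def affected_requirements(changed_element: SystemElement,
                          requirements: list[Requirement]) -> list[Requirement]:
    """When an architectural element changes, flag the requirements it satisfies for review."""
    return [r for r in requirements if r.rid in changed_element.satisfies]

def verification_coverage(requirements: list[Requirement],
                          results: list[TestResult]) -> dict[str, bool]:
    """Coverage computed live over the links, not kept by hand in a spreadsheet."""
    verified = {t.verifies for t in results if t.passed}
    return {r.rid: r.rid in verified for r in requirements}

# Example: one requirement, one element, one passing test.
reqs = [Requirement("R-101", "The brakes shall engage within 150 ms.")]
controller = SystemElement("E-7", "Brake controller", satisfies=["R-101"])
results = [TestResult("T-42", verifies="R-101", passed=True)]

print([r.rid for r in affected_requirements(controller, reqs)])  # ['R-101'] flagged when E-7 changes
print(verification_coverage(reqs, results))                      # {'R-101': True}
```

In a real digital thread these objects would live in a shared, versioned repository and the queries would run continuously rather than in a script, but the shape of the computation is the same: follow the links, derive the status.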
This is not hypothetical. Every one of these connections exists in some form, in some tool, for some organization. The problem is that they’ve never existed together, in one environment, without requiring a seven-figure integration project.
Why “another integration” isn’t the answer
The traditional approach to the shelfware problem is to build more integrations. Connect your modeling tool to your requirements tool. Connect that to your project tracker. Connect the tracker to your test tool. Build a dashboard that aggregates data from all four.
This approach fails because it treats the symptom, not the disease.
The disease is architectural: the model doesn’t live where the work lives. If the model is in one tool and the work is in four other tools, no amount of integration will make the model feel like a natural part of the workflow. It will always be the thing you update after you do the real work. It will always be one sync failure away from obsolescence.
The vendor lock-in dimension makes this worse. Proprietary data formats mean your model is trapped inside the tool that created it. Want to switch to a different tool? Good luck migrating a decade of model data stored in a proprietary format that no other tool can fully ingest. This is by design. It’s not interoperability. It’s a switching cost masquerading as a feature.
The answer isn’t more integrations between more tools. The answer is a tool that IS the connective tissue. An environment where requirements, architecture, risk, verification, and operations coexist natively, where the model is the work rather than a parallel artifact describing the work.
Data gravity
There’s a concept in cloud computing called data gravity: data attracts applications and services. The larger the dataset, the harder it is to move, and the more things get built around it. The data doesn’t move to the application. The application moves to the data.
The same principle applies to engineering workflows. Engineers will gravitate toward the tool where their data lives. If the requirements are in the requirements tool, that’s where requirements engineers will work. If the architecture is in the modeling tool, that’s where systems engineers will work. If the test plans are in Excel, that’s where, God help us, test engineers will work.
The MBSE shelfware problem is a data gravity problem. The model is in one place. The work is in another. And the work always wins.
The solution is to change the gravitational center. Build a tool where the model IS the work environment. A place where requirements, architecture, risk, test planning, and program visibility all live in the same system, operate on the same data model, and update the same source of truth. Not because you’ve stitched five tools together with middleware, but because the tool was designed from the ground up to be the place where engineering happens.
When the model is the work, nobody has to be told to update the model. They’re already in it. The duplication tax disappears. The drift problem disappears. The shelfware problem disappears.
That’s not a fantasy. That’s a design choice. And it’s the design choice the MBSE industry has so far refused to make.
The moment of truth
MBSE is at a crossroads. The models are getting better. SysML v2 is maturing. AI is making model creation and querying more accessible. But none of that matters if the model remains an island.
The question isn’t whether models are valuable. They are. The question is whether models can be operational, embedded in the daily work of every engineer, analyst, and manager who touches the system. If they can, MBSE delivers on its promise. If they can’t, MBSE remains what it too often is today: a high-fidelity document that nobody reads.
The model has to live where the work lives. Everything else is shelfware.
This is Part 3 of “The MBSE Reckoning,” a 10-part series from Luvian on the state and future of Model-Based Systems Engineering.
Series navigation:
- Part 1: The MBSE Reckoning - Why the Industry Is at a Breaking Point
- Part 2: Your MBSE Tool Was Designed for the Wrong Person
- Part 3: The Shelfware Problem - When Models Don’t Connect to Work (you are here)
- Part 4: The Maturity Myth - Why Nobody Knows Where They Are
Subscribe to our newsletter to get each article as it publishes.
Build better systems, faster.
Luvian is the AI system design platform for modern engineering teams. Join the waitlist for early access.