Your MBSE Tool Was Designed for the Wrong Person
There is a test you can run on any MBSE tool. Open it. Set a timer. See how long it takes before you can do something useful.
Not “load a metamodel.” Not “configure your stereotype profile.” Something useful. Create a block. Name it. Connect it to another block. See the relationship.
In Figma, you can draw your first frame in under ten seconds. In Notion, you’re typing within two. In Linear, you’ve created and assigned an issue before your coffee gets cold.
In today’s MBSE tools, you’re waiting for Java to allocate heap memory.
This is Part 2 of The MBSE Reckoning, and it’s about the most fundamental failure in the MBSE tooling ecosystem: the tools were designed for the wrong person.
The 1990s called. They want their UI back.
Let’s be specific about the state of the dominant MBSE tools.
The most widely deployed MBSE tools share a common DNA: Java-based desktop applications with interface paradigms dating back to the late 1990s. Engineers working in these environments routinely have to increase heap size and allocate 32GB of RAM just to keep their model from freezing, with java.lang.OutOfMemoryError: Java heap space surfacing as a regular part of the workflow rather than an edge case. When those freezes turn into crashes, teams lose hours - and sometimes days - of modeling work.
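For context, the standard workaround looks something like this: a hand-edited launch configuration that raises the JVM heap ceiling. (Illustrative only — the flags are standard JVM options, the 32GB figure mirrors the reports above, and the jar name is a placeholder, not any vendor’s actual launcher.)

```shell
# Raise the maximum heap (-Xmx) and initial heap (-Xms) so a large model can open.
# "modeling-tool.jar" is a placeholder for the tool's launcher.
java -Xmx32g -Xms8g -jar modeling-tool.jar
```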
This isn’t a minor inconvenience. It’s a fundamental architecture problem. When your tool requires a high-end workstation to open a model, you’ve already excluded most of the people who need to interact with that model. The program manager who needs to check requirement coverage? Locked out. The test engineer who needs to trace verification activities? Locked out. The stakeholder who approved the system concept and wants to see where it stands? Definitely locked out.
This isn’t limited to commercial tools. Open-source alternatives - the ones that were supposed to democratize MBSE - have their own catalogues of usability failures. Users report being unable to perform basic operations, such as carrying updated element names between architectural phases. Formatting options that should be trivial require workarounds or simply don’t exist. These tools assume you already know their underlying methodologies and punish you if you don’t.
Then there are the integrations. Practitioners describe MBSE tool integrations as “expensive, finicky, or just straight up don’t work.” Connecting your model to your project tracker, to your requirements tool, to your simulation environment, to your test management system - each connection is a custom integration project with its own maintenance burden, its own failure modes, and its own budget line item that somebody has to justify every year.
The design failure nobody talks about
Here is the uncomfortable truth: MBSE tools were designed by modeling language experts for modeling language experts. They optimize for ontological completeness - the ability to express every conceivable SysML construct in its full formal glory - not for the act of engineering.
The result is tools that are inaccessible to 90% of the people who need to interact with system data.
Think about who actually needs information from a system model across a program lifecycle:
- Systems engineers who build and maintain the architecture (maybe 5-10% of the team)
- Requirements engineers who write, trace, and validate requirements
- Test engineers who need to align verification activities with design intent
- Safety analysts who assess hazards against the system architecture
- Program managers who track scope, risk, and readiness
- Developers who need architecture context for implementation
- Stakeholders who approved the concept and need visibility into progress
Current MBSE tools serve the first group. Everyone else either learns to read SysML diagrams - a skill that takes months to develop - or relies on static exports, PowerPoint translations, and secondhand interpretations.
This isn’t a training problem. The learning curve isn’t a feature. It’s a design failure.
What modern software actually feels like
Compare the MBSE tooling experience to what engineers in adjacent disciplines use daily.
A designer opens Figma. The canvas loads instantly in a browser. They can share a link with anyone - developers, product managers, executives - and each person sees the same live artifact. Comments are inline. Changes are real-time. Version history is automatic. There’s no installation, no licensing headache, no RAM calculation.
A developer opens their IDE. IntelliSense completes their code. Errors are underlined in real time. They push a commit, and CI/CD runs automatically. Their work is version-controlled, diffable, reviewable, and deployable - all within the same flow.
A product manager opens Linear. They see every issue, its status, its dependencies, who’s working on it, and whether the sprint is on track. They didn’t need a training course. The tool’s structure teaches them how to use it.
Now imagine asking any of those people to open a desktop MBSE tool and “just check the requirements traceability matrix.” The gap isn’t incremental. Engineering tooling is 15 years behind the rest of the software world, and the MBSE segment is at the far trailing edge.
The sacred complexity myth
There is a defense you’ll hear from MBSE tool apologists, and it goes like this: “Systems engineering is inherently complex. The tools reflect that complexity. If you want simple, go use a drawing tool.”
This is the sacred complexity myth, and it’s wrong.
Engineers don’t need complexity. They need power with usability. These are not the same thing.
Git is powerful. It manages the version history of every significant software project on earth. Its conceptual model is genuinely complex - directed acyclic graphs, content-addressable storage, three-tree architecture. But most developers interact with it through interfaces that abstract that complexity: pull, commit, push, merge. GitHub made it collaborative. GitLens made it visual. The complexity exists, but it’s disclosed progressively based on what you actually need.
This is the principle of progressive disclosure - a well-established UX pattern that MBSE tools have almost completely ignored. The idea is straightforward: show the user what they need for their current task, and reveal deeper functionality as they need it. A requirements engineer opening a model should see requirements, their traces, and their quality metrics - not a class diagram editor with 47 toolbar buttons.
Progressive disclosure doesn’t mean dumbing things down. It means designing multiple on-ramps to the same underlying data. The safety analyst gets a risk view. The program manager gets a readiness dashboard. The systems engineer gets the full architectural canvas. Same model. Different lenses. Zero SysML prerequisite for the people who don’t need it.
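As a rough sketch of the “same model, different lenses” idea - hypothetical data and field names, not any real tool’s API - the same underlying record can feed a requirements view and a program-manager dashboard:

```python
# Hypothetical in-memory model: one source of truth, multiple role-specific views.
model = {
    "requirements": [
        {"id": "REQ-1", "text": "Vehicle shall accelerate 0-100 km/h in under 8 s",
         "traced_to": ["Engine"], "verified": True},
        {"id": "REQ-2", "text": "Transmission shall support 6 forward gears",
         "traced_to": [], "verified": False},
    ],
}

def requirements_view(model):
    """What a requirements engineer sees: IDs, trace status, verification status."""
    return [
        {"id": r["id"], "traced": bool(r["traced_to"]), "verified": r["verified"]}
        for r in model["requirements"]
    ]

def readiness_dashboard(model):
    """What a program manager sees: one coverage number, no SysML required."""
    reqs = model["requirements"]
    verified = sum(r["verified"] for r in reqs)
    return {"total": len(reqs), "verified": verified,
            "coverage_pct": round(100 * verified / len(reqs))}

print(requirements_view(model))
print(readiness_dashboard(model))
```

Both functions read the same data; neither user ever sees the other’s view, and neither needs the full modeling canvas.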
What SysML v2 gets right (and what it doesn’t)
SysML v2, the long-awaited revision to the Systems Modeling Language, introduces something genuinely promising: a textual notation.
Instead of requiring a graphical editor to create model elements, SysML v2 lets you express system architecture in text:
part def Vehicle {
    part engine : Engine;
    part transmission : Transmission;
    connect engine.output to transmission.input;
}
This matters more than it might seem. Text is version-controllable. Text is diffable. Text can be authored in any editor. Text can be generated by AI. Text can be reviewed in a pull request. Text opens the door to treating system models with the same engineering rigor that software developers apply to code.
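To make “diffable” concrete, here is a minimal sketch of diffing two versions of a textual model with nothing but standard tooling - the model fragment is the illustrative Vehicle example, not output from any particular tool:

```python
import difflib

# Version 1 of a hypothetical SysML v2 fragment
v1 = [
    "part def Vehicle {\n",
    "    part engine : Engine;\n",
    "}\n",
]
# Version 2 adds a transmission part
v2 = [
    "part def Vehicle {\n",
    "    part engine : Engine;\n",
    "    part transmission : Transmission;\n",
    "}\n",
]

# A standard unified diff: the same format a pull-request review would show
diff = "".join(difflib.unified_diff(v1, v2, fromfile="vehicle.sysml (v1)",
                                    tofile="vehicle.sysml (v2)"))
print(diff)
```

The added line shows up as a one-line `+` change, reviewable by anyone who can read a pull request - no graphical editor required.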
SysML v2’s textual notation could be the UX breakthrough that the MBSE community has been waiting for - not because text is inherently better than graphics, but because it unlocks workflows that graphical-only tools can’t support.
But here’s where the “v2 or bust” mentality becomes dangerous. SysML v2 is still a complex specification. If tool vendors implement it with the same UI philosophy they applied to SysML v1 - expose every construct, require expertise for every interaction, optimize for completeness over usability - then v2 will fail for exactly the same reasons v1 is failing.
The language isn’t the problem. The design philosophy is the problem. A well-designed tool can make SysML v2 accessible. A poorly designed tool can make even a simple notation feel impenetrable.
The cost of getting this wrong
The usability crisis isn’t just an annoyance. It has measurable consequences.
When tools are hostile, adoption craters. Organizations invest six or seven figures in MBSE tool licenses and training, and within months the tools sit unused. Engineers who were forced into MBSE training report that it takes just as long - or longer - to develop systems digitally as it did with older methods. That’s not because digital is inherently slower. It’s because the tools impose so much friction that any theoretical efficiency gain is consumed by the tooling overhead.
When tools exclude non-experts, the model becomes isolated. It turns into a silo maintained by a small priesthood - the three or four people on the team who actually know how to use the tool - while everyone else works from exports, spreadsheets, and hallway conversations. The model exists, but it doesn’t function as a source of truth because most of the organization can’t read it.
When tools don’t integrate, work gets duplicated. Engineers maintain the model AND the “real” artifacts - the Word documents, the Excel spreadsheets, the issue tracker tickets, the test plans. The model becomes another deliverable to maintain rather than the connective tissue that reduces deliverables. This is how MBSE goes from productivity tool to productivity tax.
What the right tool looks like
The right MBSE tool doesn’t start with a metamodel. It starts with a question: who is trying to do what, and what’s the fastest path to getting it done?
It looks like this:
- Browser-based. No installation. No RAM anxiety. Share a link, and anyone can see the model.
- Role-aware. The program manager sees a dashboard. The test engineer sees a verification matrix. The systems engineer sees the full architecture. Same model, different views, zero SysML prerequisite.
- Progressively disclosed. Create a block in one click. Add ports when you need them. Define constraints when you’re ready. The tool grows with your sophistication.
- Natively collaborative. Real-time editing. Inline comments. Change tracking. Not “export and email” - actual collaboration, the way Figma and Google Docs taught us it should work.
- Connected by default. Requirements, architecture, risk, verification, and operations in one environment - not because you bought five integration licenses, but because the tool was designed as connective tissue from day one.
This isn’t science fiction. Every one of these capabilities exists in tools engineers use in other domains. The MBSE world just hasn’t demanded it yet.
It’s time to start demanding it.
This is Part 2 of “The MBSE Reckoning,” a 10-part series from Luvian on the state and future of Model-Based Systems Engineering.
Series navigation:
- Part 1: The MBSE Reckoning - Why the Industry Is at a Breaking Point
- Part 2: Your MBSE Tool Was Designed for the Wrong Person (you are here)
- Part 3: The Shelfware Problem - When Models Don’t Connect to Work
- Part 4: The Maturity Myth - Why Nobody Knows Where They Are
Subscribe to our newsletter to get each article as it publishes.
Build better systems, faster.
Luvian is the AI system design platform for modern engineering teams. Join the waitlist for early access.