
TL;DR — Most artifacts that claim to speak "business language" — DDD context maps, OWL ontologies, Gherkin scenarios, DSLs — are built from the technical side and then travel toward business readability. They flow against the current. Genuine business language originates in the natural sentences domain experts actually use, and is formalized from there toward technical implementation. The direction of derivation is not a detail. It determines whether a model is owned by the business or merely legible to it.
Solution Architects are trained to design. They come to a project with experience, patterns, and hard-won instincts about how systems should be structured. When a formal model lands in their hands, their first question is a reasonable one: does this tell me how to build it? And when it seems to, the friction begins.
This is the misreading that costs organizations dearly. FCO-IM artifacts — the outputs of CaseTalk's fact-oriented conceptual modeling process — are not architectural mandates. They are not database schemas dressed up in business language. They are something more fundamental, and more valuable: a precise, validated record of how the business talks about its data.
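To make that concrete, here is a deliberately small sketch of what such a record holds. The fact type, role names, and example sentences below are invented for illustration; real CaseTalk artifacts carry far more detail (identification, uniqueness and totality constraints, validated populations). But the essence is visible: the business wording comes first, and everything else hangs off it.

```python
from dataclasses import dataclass

@dataclass
class FactType:
    """A sentence pattern with placeholders, plus the concrete facts
    domain experts signed off on. A simplified stand-in for an
    FCO-IM information-grammar entry, not CaseTalk's actual format."""
    pattern: str
    examples: list[str]

    def verbalize(self, **roles: str) -> str:
        # Reproduce a concrete fact in the business's own wording.
        sentence = self.pattern
        for role, value in roles.items():
            sentence = sentence.replace(f"<{role}>", value)
        return sentence

order_placement = FactType(
    pattern="Order <order_nr> was placed by customer <customer_nr>.",
    examples=[
        "Order 20417 was placed by customer 8831.",
        "Order 20418 was placed by customer 8831.",
    ],
)

print(order_placement.verbalize(order_nr="20419", customer_nr="9002"))
# Order 20419 was placed by customer 9002.
```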
"FCO-IM doesn't design your solution — it defines the business reality your solution must respect."
That distinction matters enormously. A Solution Architect is free to implement a star schema, a graph database, a microservice mesh, or an event-driven platform. The FCO-IM model does not care. What it does care about — what it was built to preserve — is whether the distinctions that matter to the business survive the translation into technology.
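As a sketch of that freedom, consider rendering one and the same invented fact type as a relational table and as a property-graph edge. Both mappings below are naive placeholders for what a real transformation would produce; the point is only that the fact type itself does not pick a side.

```python
# Two naive renderings of the same invented fact type. The conceptual
# record is identical; only the implementation target changes.
fact = {"name": "order_placement", "roles": ["order_nr", "customer_nr"]}

def as_relational(fact: dict) -> str:
    cols = ",\n  ".join(f"{role} VARCHAR NOT NULL" for role in fact["roles"])
    return f"CREATE TABLE {fact['name']} (\n  {cols}\n);"

def as_graph_edge(fact: dict) -> str:
    src, dst = fact["roles"]
    # A Cypher-flavoured pattern for a property-graph mapping.
    return f"(:Order {{nr: ${src}}})-[:PLACED_BY]->(:Customer {{nr: ${dst}}})"

print(as_relational(fact))
print(as_graph_edge(fact))
```

The architect chooses the rendering; the fact type only insists that order and customer remain distinguishable on the other side.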
How entity modeling quietly discarded the meaning AI is now desperately trying to recover
There is a quiet crisis at the heart of enterprise AI adoption, and almost nobody is naming it correctly.
Organizations are investing heavily in semantic layers, knowledge graphs, ontologies, and AI-powered data catalogues. The pitch is always some variation of the same idea: we will make our data understandable — to machines, to analysts, to the business. We will surface meaning from our data assets.
What is rarely said out loud is the uncomfortable premise underneath all of that investment: the data doesn't already know what it means.
That is not a technology problem. It is not something a better LLM will fix, or a richer graph schema, or a more expressive ontology language. It is a modeling problem — specifically, a problem that was baked in decades ago when organizations chose how to represent information, and what to keep and what to throw away.
This article is about what was thrown away, why it is so hard to get back, and why there is a family of modeling approaches that never threw it away in the first place.
In an era where organizations are drowning in data yet starving for meaning, there's a methodology developed decades ago that addresses a problem more relevant today than ever: how do we ensure that the people building IT systems truly understand what the business needs?
Marco Wobben has been working on fact-based modeling since the early 2000s, when a university professor handed him the source code of a modeling tool and asked him to maintain it. "I had to learn it from the inside out," he explains. "And now, with a lot of professors retired and the young people not having caught on yet, I'm kind of being considered the expert."
Data modelers trained in the conceptual/logical/physical paradigm often struggle to grasp what fact-oriented modeling — and FCO-IM in particular — actually offers. This isn't a failure of intelligence; it's a collision of mental models. Traditional data modeling treats structure as the primary artifact. Fact-oriented modeling treats semantics as primary, with structure as a derivable consequence.
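Here is a toy illustration of "structure as a derivable consequence", under heavy simplification: given fact types expressed as sentence patterns over object types, a mechanical pass can group those keyed on the same object type into tables. Real fact-oriented transformations handle identification, constraints, nesting, and much else; none of that is modeled here, and all names are invented.

```python
from collections import defaultdict

# Each fact type: (key object type, attribute role, sentence pattern).
fact_types = [
    ("Employee", "name",       "Employee <nr> has name <name>."),
    ("Employee", "department", "Employee <nr> works in department <department>."),
    ("Project",  "budget",     "Project <code> has budget <budget>."),
]

# Group fact types on their common key object type: one table per key,
# one column per attribute role. Structure falls out of the sentences.
tables: dict[str, list[str]] = defaultdict(list)
for key, attribute, _pattern in fact_types:
    tables[key].append(attribute)

for table, columns in tables.items():
    print(f"{table}({', '.join(['id'] + columns)})")
# Employee(id, name, department)
# Project(id, budget)
```

The sentences came first; the tables are a consequence.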
That distinction becomes even more consequential as we move from project-level modeling to enterprise-scale concerns.
Untangling the vocabulary of data and information modeling
If you've spent any time in the data modeling space, you've likely encountered a bewildering array of terms: data model, conceptual model, logical model, physical model, concept model, semantic model, information model, fact-oriented model. These terms are sometimes used interchangeably, sometimes mean completely different things depending on who's speaking, and often cause more confusion than clarity.
This article aims to untangle these terms, trace their historical origins, and explain why the distinctions matter—especially as organizations grapple with integrating systems across departments and making sense of decades of accumulated data.
The semantic grounding problem that glossaries, ontologies, and vector databases can't solve on their own
You've probably noticed the pattern by now. You feed your data to an AI, ask it a question, and get an answer that's technically plausible but semantically wrong. The AI confuses "customer" in your sales system with "client" in your support tickets. It hallucinates relationships between fields that share similar names but mean entirely different things. Ask the same question twice with slightly different phrasing, and you get contradictory responses.
Welcome to semantic drift—the silent killer of AI reliability.
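A contrived sketch makes the failure mode visible. Two invented system vocabularies share overlapping labels; matching on labels equates terms whose business definitions differ, while anything that compares the definitions themselves — which is what fact-based grounding provides — catches the mismatch.

```python
# Two invented system vocabularies. The labels overlap; the business
# definitions behind them do not.
sales = {
    "customer": "a party that has placed at least one paid order",
}
support = {
    "client": "any party with an active support contract, paying or not",
}

def naive_match(term_a: str, term_b: str) -> bool:
    # What label-level matching effectively does: treat near-synonyms
    # as the same concept.
    synonyms = {frozenset({"customer", "client"})}
    return term_a == term_b or frozenset({term_a, term_b}) in synonyms

def grounded_match(def_a: str, def_b: str) -> bool:
    # Grounding compares definitions (crudely here, by equality); a
    # fact-based model would compare the underlying fact types.
    return def_a == def_b

print(naive_match("customer", "client"))                     # True  -> drift
print(grounded_match(sales["customer"], support["client"]))  # False -> caught
```

Label similarity is cheap; shared definitions are what actually has to be established, and that is precisely what fact-oriented models write down.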