
Solution Architects are trained to design. They come to a project with experience, patterns, and hard-won instincts about how systems should be structured. When a formal model lands in their hands, their first question is a reasonable one: does this tell me how to build it? And when it seems to, the friction begins.
This is the misreading that costs organizations dearly. FCO-IM artifacts — the outputs of CaseTalk's fact-oriented conceptual modeling process — are not architectural mandates. They are not database schemas dressed up in business language. They are something more fundamental, and more valuable: a precise, validated record of how the business talks about its data.
"FCO-IM doesn't design your solution — it defines the business reality your solution must respect."
That distinction matters enormously. A Solution Architect is free to implement a star schema, a graph database, a microservice mesh, or an event-driven platform. The FCO-IM model does not care. What it does care about — what it was built to preserve — is whether the distinctions that matter to the business survive the translation into technology.
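To make that freedom concrete, here is a minimal sketch. The fact verbalization, names, and storage shapes are all invented for illustration; this is not CaseTalk output, just a toy showing how one business fact can be realized as a relational row or as a graph edge while the distinctions the fact expresses (who, which project, in what role) survive either way.

```python
# A hypothetical FCO-IM-style fact verbalization (invented for illustration):
#   "Employee 'E123' is assigned to Project 'Apollo' in the role of 'Lead'."

# Realization 1: a relational-style row in an assignments table.
assignments_table = [("E123", "Apollo", "Lead")]

# Realization 2: a graph-style edge with the role as a relationship property.
graph_edges = [("E123", "ASSIGNED_TO", "Apollo", {"role": "Lead"})]

def verbalize_row(emp, proj, role):
    """Recover the business fact from the relational realization."""
    return f"Employee '{emp}' is assigned to Project '{proj}' in the role of '{role}'."

def verbalize_edge(src, _rel, dst, props):
    """Recover the same business fact from the graph realization."""
    return verbalize_row(src, dst, props["role"])

# Both implementations round-trip back to the identical verbalized fact:
assert verbalize_row(*assignments_table[0]) == verbalize_edge(*graph_edges[0])
```

The point is not the storage shape; it is that each realization can still reproduce the verbalized fact the business validated, which is exactly the test the FCO-IM model imposes on any architecture.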
How entity modeling quietly discarded the meaning AI is now desperately trying to recover
There is a quiet crisis at the heart of enterprise AI adoption, and almost nobody is naming it correctly.
Organizations are investing heavily in semantic layers, knowledge graphs, ontologies, and AI-powered data catalogues. The pitch is always some variation of the same idea: we will make our data understandable — to machines, to analysts, to the business. We will surface meaning from our data assets.
What is rarely said out loud is the uncomfortable premise underneath all of that investment: the data doesn't already know what it means.
That is not a technology problem. It is not something a better LLM will fix, or a richer graph schema, or a more expressive ontology language. It is a modeling problem — specifically, a problem that was baked in decades ago when organizations chose how to represent information, and what to keep and what to throw away.
This article is about what was thrown away, why it is so hard to get back, and why there is a family of modeling approaches that never threw it away in the first place.
In an era where organizations are drowning in data yet starving for meaning, there's a methodology developed decades ago that addresses a problem more relevant today than ever: how do we ensure that the people building IT systems truly understand what the business needs?
Marco Wobben has been working on fact-based modeling since the early 2000s, when a university professor handed him the source code of a modeling tool and asked him to maintain it. "I had to learn it from the inside out," he explains. "And now, with a lot of professors retired and the young people not having caught on yet, I'm kind of being considered the expert."