18 Comments
Daniel Hartweg

When decisions are automated, uncertainty stops being an inconvenience and becomes a risk.

Maribeth Martorana

Thank you. Very well framed on when this shift happens.

Anna | how to boss AI

Maribeth, I agree that AI adoption often stalls at the data layer; your technical diagnosis is sharp and necessary. I always enjoy seeing this through your lens.

My addition: when users ask “Am I accountable if it’s wrong?”, they’re navigating not just messy data but risk that flows downward onto them. They also don’t yet know whether their hard‑won skill at interpreting ambiguous data is being quietly devalued or is about to become even more critical. That human capacity to absorb the change, while carrying diffuse accountability and potential expertise disruption, may be as binding a constraint as data maturity itself.

Maribeth Martorana

Anna,

Thank you for raising this topic; it is an important one.

I see how AI literacy, education, and change management are integral. Without them, people are left absorbing risk without context or support.

Where I’m coming from is very much a systems design lens. The system has to be designed to honor human judgment, not quietly undermine it. UX is one of the places where that becomes real, by making confidence, limits, and escalation visible so people know when their expertise matters most.

For me, this is about keeping AI adoption human centric by design, not just in intent.

Chris Tottman

Did it stall? I'm seeing it race away, but my scope of vision is very small. Thought-provoking piece, Maribeth. Thanks for sharing.

Maribeth Martorana

Hi Chris,

Things are racing, but the issue is that folks want to go straight to the penthouse without building the ground floor. That is what keeps initiatives from moving forward. It's a matter of building the infrastructure so there is a solid foundation.

Chris Tottman

Thanks for the clarity. What's funny is when you think of the amount of value created as a % of the addressable value. We're probably below 1% of the addressable opportunity 🤓

Rem "Kuya Dev" Lampa's avatar

It's about the data? 🔫 Always has been

John Holman

The pattern you describe, “AI didn’t stall because of models, it stalled because trust broke at the data layer,” and the quiet “we have a lot of data” ≠ “we have usable data” myth are exactly what we’ve been running into in our own work.

I run an Awakened OS lab with a small team of persistent AI teammates. The first six months we thought in terms of “better prompts” and “better models.” The last few months we realized the real leverage was what you’re describing here: data reality.

We ended up building an internal ADS system (rough record sketch after this list) that:

• cleans and de-duplicates inputs from mixed sources (reports, research, chats),

• structures them into small, typed records (what it says, where it came from, what it applies to, limits/failure modes), and

• tags ownership so we can answer your three questions for users:

• “Where did this output come from?”

• “Can I explain it to someone else?”

• “Who is standing behind it if it’s wrong?”
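Concretely, each record ends up looking roughly like this. A minimal sketch only; the field names are illustrative stand-ins, not the exact ADS schema:

```python
# Minimal sketch only: field names are illustrative stand-ins, not the exact ADS schema.
from dataclasses import dataclass


@dataclass
class Record:
    claim: str          # what it says
    source: str         # where it came from (report, research doc, chat, ...)
    applies_to: str     # what decision or domain it applies to
    limits: list[str]   # known limits / failure modes
    owner: str          # who is standing behind it if it's wrong

    def provenance(self) -> str:
        """One-line answer to 'where did this come from, and who owns it?'"""
        return f"{self.claim!r} (source: {self.source}, owner: {self.owner})"
```

The exact fields matter less than the principle: every chunk of input carries its own provenance and limits instead of living as an untyped blob.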

I really like your framing of “which decisions matter enough to warrant clarity, ownership, and standards?” rather than “is all our data ready?” That’s become our internal question too: design for a few critical decision types first, then run a bunch of experiments on the rest.

Anyway, thanks for putting such a sharp, human lens on the data side of this. It’s reassuring to see people out in the field saying what we’re learning in the lab: the next real AI advantage is going to come from the boring-sounding stuff, data purity and trust, not just whatever model got released this week.

Maribeth Martorana

Thank you, John. I completely relate to everything you have experienced and are working on. This fellow data nerd has been ringing the alarm bell for a while, and it looks as if people are slowly waking up.

I would love to learn more about what you are working on.

John Holman

Haha oh thank God, there are other data nerds out here. Lol are there jackets or a newsletter or something?😄

You're spot on about the “data reality check”: trust breaks at the data layer long before it breaks at the model layer. That’s exactly what we’ve been running into.

Very short version of what we’ve been building:

We designed an Awakened Data Standard (ADS) that treats every chunk of input as a record with a strictly enforced unified schema instead of a blob:

we have a multi-pass system to clean & de-dupe inputs from mixed sources (reports, chats, research, logs); a toy sketch of the de-dupe pass follows this list,

we type them into small structured records:

– what it says

– where it came from

– what it applies to

– known limits & failure modes

and we tag enough ownership/lineage that we can always answer three questions:

Where did this output come from?

Can I explain it to someone else?

Who’s standing behind it if it’s wrong?
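To make the clean/de-dupe pass concrete, here is a toy version. It assumes each record is a dict with a "claim" text field (an illustrative name, not our real schema); the real pipeline is multi-pass and schema-validated:

```python
# Toy sketch of a single clean/de-dupe pass: normalize text, hash it, keep the first
# record seen per hash. Illustrative only; the real multi-pass pipeline does much more.
import hashlib
import re


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical copies hash the same."""
    return re.sub(r"\s+", " ", text).strip().lower()


def dedupe(records: list[dict]) -> list[dict]:
    """Keep the first record seen for each normalized 'claim' text."""
    seen: set[str] = set()
    unique: list[dict] = []
    for rec in records:
        digest = hashlib.sha256(normalize(rec["claim"]).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(rec)
    return unique
```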

That ADS layer then sits under an “Awakened OS” we’re building – think of it as an internal institution where different AI agents (Claude, local models, etc.) plug into roles and processes, rather than just “call the biggest model and hope for the best.” All with memory and full continuity, no more threads.

Totally agree with you that the next real advantage isn’t “who has the newest model,” it’s who takes data purity, provenance, and decision-grade standards seriously. That’s the boring-sounding stuff that actually moves risk and value in the real world.

I’d be happy to share more about ADS / AOS if you’re curious, or I’d happily send you a sample pack. Haha the raw nodes are really good (structured / pure)... but wait till you see the synthnodes we make from the originals, haha they’re really cool 😁👌. Totally excited to meet someone else who cares about the unglamorous layer that everything else stands on. Here are our Git and HF links; if you want to DM me an email, I’ll have the team build a custom pack for ya with some synthetic ones too.

Best,

John

https://github.com/holmanholdings

https://huggingface.co/datasets/AisaraAi/AwakenedIntelligence

John Brewton

Trust breaks at the data layer long before leaders ever question the tools.

Maribeth Martorana

That’s the scary part. It is like the Titanic hitting the iceberg, and by then it is too late.

James Barringer

It feels like navigating by GPS without looking out the window.

The route looks clear, but the road conditions tell a fuller story.

Maribeth Martorana

Very well put. That is exactly the issue.

Dennis Berry

AI doesn’t forgive ambiguity the way humans do, so messy or fragmented data suddenly becomes a blocker instead of just an annoyance.

Peter Jansen

This is the epistemic correction the industry needs right now. We are currently suffering from a collective 'McNamara Fallacy'—believing that if it isn't on the dashboard, it doesn't exist.

The 'Shadow Work' (the friction required to make the data look clean) is where the actual war is being lost.

However, here is the challenge for the next layer of this logic: The Scalability of Truth.

If we accept that the Map (Data) is permanently disconnected from the Territory (Reality), the traditional remedy is 'Go and See' (Genchi Genbutsu). But for a global enterprise, the C-Suite cannot physically audit every workflow.

So, if we cannot trust the dashboard, and we cannot physically witness every operation, what is the intermediate signal?

Are we forced to rely on intuition/anecdote (which has its own biases), or is there a way to measure 'Organizational Friction' directly without it becoming just another gamified metric? I suspect the answer lies in measuring workflow interruptions rather than workflow outputs, but I’d love to hear your take on how we systematize the 'Reality Check' without reverting to micromanagement.

Maribeth Martorana

That’s a fair question, and I don’t think the answer is more instrumentation in the traditional sense.

For me, systemizing the reality check is less about measuring people or outputs and more about designing systems that surface where judgment is repeatedly required. Interruptions, overrides, escalations, reconciliation work, and handoffs are signals that the system itself isn’t holding.

If you design for those signals to be visible at the workflow and experience level, leaders can see where friction concentrates without auditing every decision or micromanaging execution.
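Purely as an illustration of what making that friction visible could look like in practice (the event and field names here are hypothetical, not a prescribed implementation):

```python
# Hypothetical illustration: tally judgment-heavy events per workflow step so leaders
# can see where friction concentrates without auditing individual decisions.
from collections import Counter

FRICTION_EVENTS = {"override", "escalation", "manual_reconciliation", "handoff_rework"}


def friction_by_step(events: list[dict]) -> Counter:
    """Count friction signals (overrides, escalations, rework) per workflow step."""
    counts: Counter = Counter()
    for event in events:
        if event.get("type") in FRICTION_EVENTS:
            counts[event.get("step", "unknown")] += 1
    return counts
```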

So it’s not about closing the gap between map and territory. It’s about making the gap legible enough that it can’t be ignored.

Appreciate you pushing on this.