View this post on the web at https://opinionatedintelligence.substack.com/p/what-it-actually-takes-to-trust-ai
Picture this. It’s your fourth day at your new job. You’re sitting in a large team meeting, still figuring out how to connect to the office Wi-Fi, when the GM pulls up a chart and asks the room: “Retention dropped 5% last month. What’s going on?”
You glance at your laptop. You’ve been poking around the data warehouse since Day 1 because you’re eager and you’re smart. You actually have an answer. You found a dashboard, ran some queries, and you’re pretty sure you know what’s going on.
So you raise your hand.
“It looks like the Android cohort drove the decline. Their D7 retention fell significantly compared to the prior month.”
The room goes quiet. Your manager clears her throat.
“We… uh… changed the definition of ‘active user’ three weeks ago. It used to be any event and now it’s a meaningful engagement event. The drop isn’t real.”
You slowly lower your hand.
Here’s the thing: you’re eager and you’re smart. Your SQL was fine. Your chart was clean. Your explanation was coherent and plausible.
You were just missing context that no amount of raw intelligence could compensate for.
This is what AI does.
Subscribe to Opinionated Intelligence to get new essays on the future of analysis, decision-making + AI.
AI is the world’s smartest new hire
Today’s AI models are extraordinary. They write SQL so robust it’ll make your data engineer weep with respect. They generate charts as pristine as a McKinsey slide deck on presentation day. They can produce multi-paragraph explanations that boom with the full-regalia confidence of your most Senior, most Tenured, Crotchety Analyst.
But they don’t actually have that confidence.
When you point ChatGPT, Gemini, or Claude at your data warehouse and ask “why did retention drop?”, you are handing the question to a brilliant person who has never worked at your company, has never attended your team’s meetings, doesn’t know your metric quirks, and doesn’t know nearly half the things you do about your business.
The problem is not intelligence. The problem is everything else.
Today, dozens of cutting-edge companies have told us that their AI agents are right maybe 70-85% of the time. This is game-changing for data practitioners, who can now do analyses in a fraction of the time. If sometimes the responses smell fishy… Well, these data ninjas have the know-how to check, veto or rework their AI’s responses.
But to let business users loose with the promise of data democratization via natural language chat? Unfortunately 70-85% just isn’t good enough.
So how do we close the remaining gap and get something actually trustworthy?
4 things your best analyst knows that your AI should too
Think about that amazing senior analyst on your team that everyone trusts. The one whose Slack messages get screenshotted and forwarded to the exec team. What do they actually know that makes them so good?
1. Which numbers matter
Your company has 14 dashboards with some version of “retention” on them. Three of them use different definitions. One was built by someone who left two years ago and nobody’s touched it since. One is the “official” board metric.
Your best analyst knows which one is canonical. They know that the dashboard labeled “Retention - Master” is, ironically, the wrong one (it’s the one called “retention_jake_final” that the CFO actually uses, because Jake built it to match the board definition before he left, and nobody’s ever renamed it).
Generic AI probably doesn’t know this. It’ll grab the table that sounds most correct and spit that number back via a snazzy “Ask AI in Slack” interface that everyone’s been using, and your PM will be none the wiser as they copy it into their next presentation, where it becomes the number everyone argues about for the next 20 minutes.
2. What changed in the business
Revenue jumped 30% last quarter, hooray! The latest model with access to your CRM and Slack might actually piece together a solid story: it finds the enterprise deal that closed, sees the Slack thread where the sales team went crazy on emojis, and pulls the contract value from Salesforce.
But this deal was unusual: the VP of Sales gave the client 18 months of free onboarding support to close, a concession made in the final stages of a late-night phone call. The revenue is real, but the margin story is entirely different, and the strategic rationale lives in someone’s head.
Piping in context from Slack, Linear and your CRM bridges part of the gap. But there will always be judgment calls, side conversations, and unwritten context that no integration automatically captures. These misalignments are often discovered the human way, when one day some part of a story doesn’t make sense and questions are asked. Your best analyst is always listening and paying attention to the goings-on of the business.
3. How to actually think about the problem
Give AI and a senior analyst the same question: “Why did retention drop?”, and watch what happens. AI opens the data, starts slicing, and follows whatever looks interesting. It’ll build a beautiful cohort analysis, then do a segmentation deep-dive, and come back with a full report that technically answers the question but doesn’t actually move the decision.
Your best analyst takes a different tack. They start by asking who cares and why. They scope the problem before they touch the data. They work backwards from what decisions need to be made, and then assess whether a directional answer is sufficient or a deep, precise one is needed. They have entire playbooks for how to distill a business question into data questions, and how to separate the signal from the noise.
This is the accumulated judgment of someone who has done hundreds of analyses at this company, for these particular stakeholders, with all the weird quirks of this specific ecosystem.
Generic AI, like a junior analyst, investigates what’s asked and hands you an encyclopedic answer that’s technically impressive but may be practically useless, the analytical equivalent of answering “What should we have for dinner?” with a complete nutritional breakdown of every restaurant within ten miles.
4. What happened last time
Every January, your numbers dip. Every January, someone panics. And every January, your best analyst says: “It’s January. It always dips. It’ll bounce back by the third week.”
They know this because they investigated it the first time, were right, and watched what happened next. Over three years of doing this, they’ve built a calibrated sense of when to worry and when to wait.
Generic AI starts from zero, and it never closes the loop. It produces an answer and moves on. It doesn’t know which of its past recommendations were right, which were subtly wrong, or which ones led to a decision that backfired. It has no way to learn that the churn analysis it produced last quarter actually missed the real driver, or that the context it was given about metric definitions introduced a new error somewhere else.
Your best analyst is always updating their mental model with new information: this worked, that didn’t, this source is reliable, this one is error-prone. Most AI systems today have none of that. Teams add context ad hoc, fix the errors they notice, and have no systematic way to know if accuracy is improving or degrading.
What this looks like in real life
In one recent example, we gave 50 real questions from actual users to a state-of-the-art model running on top of a clean set of tables in a warehouse. These tables even had a semantic layer!
The model scored slightly better than 80%. This is great if you are a data-proficient analyst who can read SQL or Python as fluently as a bookseller reads novels; you are now vastly more empowered in your work.
This is not nearly good enough if you are a business user! Person after person told us how they’d rather rely on a human analyst since they couldn’t be sure AI was right. Even data leaders agreed: the downstream impact wasn’t just embarrassing meetings but bad business decisions leading to thrash across the organization and greater overall skepticism of AI analysis.
What made up the remaining 20% of errors? Practically all of them were failures of context.
In a few cases, the system treated sign-ins as signups because event data used those terms in a non-standard way. In another, it interpreted “month X revenue” as a rolling 28-day window when the business expected a calendar month. In another, it pulled registration counts from the wrong source because cumulative registrations and daily registrations were defined differently, including how deleted users were handled.
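To make one of these failures concrete, here is a minimal sketch of the two readings of “month X revenue.” The function names and the daily-revenue mapping are illustrative inventions, not code from the evaluation:

```python
from datetime import date, timedelta

def calendar_month_revenue(daily_revenue: dict, year: int, month: int) -> float:
    """What the business usually means: every day in the named calendar month."""
    return sum(v for d, v in daily_revenue.items()
               if d.year == year and d.month == month)

def rolling_28d_revenue(daily_revenue: dict, as_of: date) -> float:
    """What the model assumed: the 28 days ending on `as_of`."""
    start = as_of - timedelta(days=27)
    return sum(v for d, v in daily_revenue.items() if start <= d <= as_of)
```

For a 31-day month of $1/day, the first definition returns $31 and the second returns $28: a 10% discrepancy that has nothing to do with the model’s intelligence.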
Once the appropriate context was applied, the same set of questions jumped to 98% accuracy.
Same model; same warehouse; same underlying data.
What changed was not intelligence. It was institutional know-how, as well as a heaping dose of careful monitoring, measurement, and iteration.
What can you do about this?
Here are five practical places to start:
1. Define your canonical metrics.
Create one trusted source for your most important business metrics. Call it your “golden set.” Only one definition allowed. If there are confusingly named metrics, or you observe AI pulling the wrong one, feed that context in.
2. Log major business changes.
Campaign launches, pricing tests, onboarding redesigns, policy changes, instrumentation changes, metric definition changes — these should live somewhere machine-readable and easy to retrieve.
3. Capture analytical playbooks.
What do your best analysts check first when retention drops? Which cuts matter? Which segments are strategic? What questions are usually noise? Write that down.
4. Continuously update your memory.
When your team investigates a recurring issue, do not let the answer disappear into Slack. Store the conclusion, the evidence, and what was ruled out so the system can use it next time.
5. Measure whether it’s working.
This is the step that matters most as your system matures. Track accuracy over time. Understand which questions your AI gets right, which it gets wrong, and why. Without this, enrichment becomes a game of whack-a-mole: improving context in one spot can cause a regression in another. You need a feedback loop of enrichment → measurement → observability, not just growing piles of context.
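As a sketch of what this measurement loop can look like, here is a toy golden-set tracker. All names (`GoldenQuestion`, `AccuracyTracker`) are hypothetical, not any real product’s API; the point is only the shape of the loop — score every run against known answers, keep history, flag regressions:

```python
from dataclasses import dataclass, field

@dataclass
class GoldenQuestion:
    question: str
    expected: str  # the canonical answer, from your "golden set"

@dataclass
class AccuracyTracker:
    golden_set: list
    history: list = field(default_factory=list)

    def score_run(self, answers: dict) -> float:
        """Score one evaluation run: fraction of golden questions answered correctly."""
        correct = sum(1 for q in self.golden_set
                      if answers.get(q.question) == q.expected)
        accuracy = correct / len(self.golden_set)
        self.history.append(accuracy)
        return accuracy

    def regressed(self) -> bool:
        """True if the latest run scored worse than the one before it."""
        return len(self.history) >= 2 and self.history[-1] < self.history[-2]
```

Real systems need fuzzier matching than exact string equality, but even this toy version catches the whack-a-mole problem: a context change that fixes one answer and silently breaks another shows up as a drop between runs.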
The gap is closing, but not how you think
The funny thing about this problem is that it’s not going to be solved by meatier models.
GPT-6 or Claude 5 or whatever comes next will be even more capable at reasoning, even more fluent, even more powerful in its working memory. But out of the box, it still won’t know your business better.
Part of the answer is giving the model better context. It’s getting easier and easier to connect and ingest company details, and teams that do this well will see real improvements.
But context alone also hits a ceiling. Every team that’s pushed past 85% accuracy learns the same lesson: ad-hoc enrichment is fragile.
The teams that actually close the gap aren’t just adding more context, they’re measuring whether the context is actually working to improve quality. Enrich, then measure accuracy, then observe where things break, then decide what to enrich next. It’s a continuous loop.
In a world where every company has access to the same frontier models and the same integration tools, the differentiator is the system around it. How well does AI know your business? How well does it keep track of what’s changing? How well can you tell whether its trustworthiness is increasing or decreasing?
The best AI analysts won’t be the smartest.
They’ll be the ones running the tightest loop between what they know, what they measure, and what they fix next.
(P.S., if you want a system that does this… that’s why we built Sundial.ai)
What It Actually Takes to Trust AI With Your Data
opinionatedintelligence@substack.com · 3/26/2026
View this post on the web at https://opinionatedintelligence.substack.com/p/why-ai-analysis-gives-you-generic
The Metrics Trap
Consider a simple example. You’re told that a product has:
5 million monthly active users (MAU)
DAU/MAU of 80%
Most people would immediately assume this is a strong product. Five million users represents meaningful scale, and an 80% DAU/MAU ratio shows that four out of five monthly users come back every single day, signaling exceptional engagement. With only these two data points, both a human analyst and an AI system would reasonably conclude: this company is doing well.
But watch what happens when you add another piece of information.
Average daily session length: 30 seconds.
Now the picture shifts. For most products, especially consumer and social ones, a thirty-second session means users aren’t doing much. They might be opening the app out of habit, glancing at a notification, and leaving. The high DAU/MAU ratio suddenly looks less like deep engagement and more like shallow, reflexive behavior.
Except — what if the product is a payments app? Something like Venmo or Zelle or UPI? In that case, thirty seconds is perfectly natural. Users open the app, send money, confirm, and close it. A short session length isn’t a weakness; it’s a feature of the product category.
This is the metrics trap: any individual metric, taken in isolation, supports multiple contradictory interpretations. The number itself doesn’t tell you whether the company is thriving or struggling. Only context does.
Why AI Falls Into This Trap
LLM systems reason by pattern matching at enormous scale. When you present a set of facts, the model searches its learned representations for the most coherent explanation that fits those facts.
When context is rich and specific, this process works remarkably well. The model can narrow down to a single plausible interpretation and reason about it with precision.
But when context is thin, many different stories remain equally plausible and the model has no way to distinguish between them. In that situation, it does the only thing it can: it selects the interpretation that is most common in its training data and presents it as though it were the obvious conclusion.
This isn’t a bug. It’s the fundamental mechanism. And it means that vague inputs reliably produce generic outputs.
If you tell an AI “DAU/MAU is 80%” and nothing else, the model doesn’t know if the product has a hundred users or a hundred million. It doesn’t know if it’s a game, a banking app, or an enterprise tool. It doesn’t know if engagement is organic or subsidized. So it picks the most typical scenario, probably a consumer app with decent traction, and builds its analysis around that assumption, without telling you it’s assuming.
The Concept of Orthogonal Context
The solution is what we can call orthogonal context: independent pieces of information that describe the situation from different, non-overlapping dimensions.
The word “orthogonal” comes from geometry. It means “at right angles,” or more broadly, independent. In this context, it means each new piece of information you provide should reduce ambiguity in a direction that the other pieces don’t already cover.
Here’s a practical example. Consider these four data points:
DAU/MAU = 80% → tells you about engagement frequency
MAU = 5 million → tells you about scale
Average session length = 10 seconds → tells you about engagement depth
Product category = payments app → tells you about expected user behavior
Each one describes a different dimension of the product. None of them is redundant with the others. Together, they paint a specific and coherent picture: a payments app at meaningful scale with high-frequency, low-duration usage, which is exactly what you’d expect from a well-functioning product in that category.
Now compare that with providing four data points that all describe the same dimension:
DAU/MAU = 80%
Weekly active users / MAU = 90%
D7 retention = 75%
D30 retention = 70%
These are all engagement metrics. They’re correlated with each other. Providing all four gives you more precision on one axis, but it doesn’t help the model understand the broader picture. You know engagement is high, but you still don’t know at what scale, in what product category, or whether the engagement is organic.
The principle is straightforward: breadth of context matters more than depth on a single axis to construct a unique story.
AI Needs a Unique Story
Here’s a useful way to think about what happens inside the model when you give it information.
AI is implicitly trying to construct a single coherent narrative that explains all the data points simultaneously. The fewer data points you provide, the more narratives remain plausible. The more orthogonal context you add, the more candidate stories get eliminated, until ideally, only one remains.
Think of it like a detective solving a case. One clue (the suspect was in town that day) leaves hundreds of possibilities open. Two clues (they were in town and had a motive) narrows it down. Five independent clues might point to exactly one person.
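The elimination process can be sketched as a toy filter. The candidate stories and clues below are invented for illustration:

```python
# Each candidate "story" is a set of attributes; each clue is a predicate
# that eliminates the stories inconsistent with it.
stories = [
    {"category": "social",   "engagement": "deep"},
    {"category": "social",   "engagement": "shallow"},
    {"category": "payments", "engagement": "shallow"},
]

def apply_clue(candidates, clue):
    """Keep only the stories consistent with this clue."""
    return [s for s in candidates if clue(s)]

# Clue 1: 30-second sessions → engagement looks shallow (two stories survive)
remaining = apply_clue(stories, lambda s: s["engagement"] == "shallow")
# Clue 2: it's a payments app (exactly one story survives)
remaining = apply_clue(remaining, lambda s: s["category"] == "payments")
```

Each orthogonal clue cuts along a different axis, which is why two independent facts can do what four correlated engagement metrics cannot.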
Watch this play out with a concrete company. Story A:
Ride-sharing app
8M MAU
DAU/MAU 70%
Average 4.5 rides per week per active user
This looks like a product with strong product-market fit. High frequency, solid scale, healthy engagement. AI would likely benchmark it against Uber’s early growth and project a promising trajectory.
Story B — same facts, plus one:
Average rider subsidy: $8 per ride
Now the original story crumbles. Users aren’t choosing the product, they’re choosing the discount. At 4.5 rides per week, the company is burning roughly $36 per user per week to maintain those engagement numbers. The DAU/MAU ratio isn’t measuring product love but rather price sensitivity. When the subsidy shrinks, so will every metric on this dashboard.
One additional orthogonal fact completely changed the story.
This is why context completeness matters more than the sophistication of the question you ask.
A brilliant question with sparse context will produce a mediocre answer. A simple question with rich, orthogonal context will produce a sharp one.
How to Read AI’s Output as a Diagnostic Tool
There’s an important corollary to all of this: the quality of AI’s output tells you something about the quality of your input.
If AI gives you a response that feels generic, confident, and unsurprising, that’s usually a signal. It’s not that the AI isn’t doing a good job, but that it likely didn’t have enough context to do anything other than default to the most common pattern.
Generic output is a symptom of ambiguous input.
When you see this happening, don’t try to fix it by asking a more clever follow-up question. Instead, go back and examine what context is missing. Ask yourself:
Does the AI know the scale of what I’m describing?
Does it know the category or domain?
Does it know about external factors — incentives, constraints, competitive dynamics?
Have I given it information that distinguishes my situation from the typical case?
If the answer to any of these is no, that’s where the gap is.
Conversely, when AI produces an insight that feels genuinely specific and non-obvious, it usually means you’ve provided enough orthogonal context for the model to converge on a single story. That’s the signal that the system is working well.
Practical Guidelines for Working with AI
If you want AI to produce high-quality analysis, focus less on crafting the perfect prompt and more on assembling the right context. Here’s how:
1. Provide multiple independent metrics
Don’t hand the model a single signal and expect it to work backwards to a full picture. Combine data points that cover different dimensions:
Scale: MAU, revenue, headcount
Engagement: DAU/MAU, session length, actions per session
Retention: D1, D7, D30 cohort retention
Economics: Unit economics, LTV/CAC, gross margin
Each category tells the model something the others don’t.
2. Always specify the product category and use case
This is one of the highest-leverage pieces of context you can provide, because it sets the baseline for what “good” looks like.
Ten seconds of daily usage in a payments app is excellent. Ten seconds in a social network is a disaster. Ten seconds in a meditation app is confusing. The exact same number means completely different things depending on what the product is supposed to do.
If you don’t specify the category, the model will guess. And it will usually guess “generic consumer tech product,” which may be completely wrong for your situation.
3. Surface incentives, subsidies, and external drivers
Engagement metrics are easy to distort. Common drivers that change interpretation include:
Promotional offers and sign-up bonuses
Referral rewards
Forced usage
Advertising spend driving installs
Seasonal effects
If any of these factors exist, the model needs to know. Otherwise, it will interpret artificially inflated metrics as organic signals and build its analysis on a false foundation.
4. Name what makes your situation unusual
AI defaults to the typical case. If your situation is atypical in any important way, you need to say so explicitly. This might include:
Operating in a regulated industry
Serving a niche market
Having an unusual business model
Facing a specific competitive threat
Being at an unusual stage of growth
The model can reason well about unusual situations, but only if it knows they’re unusual.
5. Reduce ambiguity before asking for analysis
Before asking the AI to draw conclusions, check whether you’ve given it enough information to rule out alternative interpretations.
The goal: give AI enough independent facts that only one story makes sense. That’s when analysis becomes sharp.
In Summary
Most people try to get better output from AI by writing better prompts. They tweak the phrasing, add instructions, ask the model to “think step by step.” These things help at the margins, but they’re optimizing the wrong variable.
The far higher-leverage move is assembling better context before you ever type the question. Give the model enough independent facts (scale, category, engagement depth, external drivers) that only one story makes sense. When you do that, you don’t need a clever prompt. The analysis sharpens itself.
The next time AI gives you a generic answer, don’t ask a better question. Ask yourself what you forgot to tell it.
Why AI Analysis Gives You Generic Answers
opinionatedintelligence@substack.com · 3/18/2026
View this post on the web at https://opinionatedintelligence.substack.com/cp/190639594
What Excellent Growth Teams See That Others Miss
opinionatedintelligence@substack.com · 3/11/2026
View this post on the web at https://opinionatedintelligence.substack.com/p/the-data-job-isnt-dying-because-the
Every few years, the same fear resurfaces: “This time, the tooling is so good that the role itself disappears.”
We heard that 15 years ago when dashboards exploded onto the scene. “Oh no! With people self-serving their own data, who needs data practitioners?”
Actually, the opposite happened. The appetite for data just grew. Data teams doubled and tripled in size.
Now, the same narrative rears its head again:
AI can write SQL!
AI can generate dashboards!
AI can produce explanations that sound confident and coherent!
From the outside, it looks like that’s the entire job. But this conclusion is shortsighted.
Code can be a black box. Data cannot.
In software, correctness is observable.
If the login succeeds, the payment gets processed, the page renders, an entire suite of tests passes, and a bunch of white-hat hackers can’t get in, it’s all good. You don’t really need to know the exact details of how the code was written.
Data analysis is an entirely different beast.
The SQL may run without error. The dashboard may load a pretty chart. An explanation may read beautifully.
And yet it can be wrong.
There is no way to guarantee, just by looking at the results, how trustworthy the conclusion is.
One must validate the steps of the work itself.
This is how trust is earned in data.
The bottleneck has moved
Yes, AI dramatically lowers the cost of getting data and producing analysis. But more data and more analysis do not automatically mean faster decisions.
The business of analysis has always had two parts: 1) generating outputs, and 2) deciding which outputs deserve belief.
For years, analytics teams were constrained by the first. “Who can pull together an analysis on the health of the suburban teens segment?”
“Well, Jared’s queue is 16 requests long, so it’s going to be about three weeks.”
Today, J.ai.red can handle thousands of such requests within a day. But as answers become cheap, judgment becomes the bottleneck.
In most organizations today, even without AI, teams already struggle with two dashboards showing different retention numbers, or a board conclusion that doesn’t match the growth model, or an experiment result that contradicts a prior narrative.
Now imagine multiplying the volume of analysis by 10× or 100×. Poor Jared is now getting dozens of requests to the tune of: “Hey, does this look right?”
Good judgment is not easy to come by, and it asks meaningfully harder questions:
Are the underlying assumptions valid?
Is the data lineage stable?
Is the signal statistically meaningful or just noise?
Is this explanation consistent with our broader understanding of the business?
These questions ask for accountability rather than mere execution.
The new data role
Within data, the salient question is no longer: “Who can get the answer fastest?” It is “Who can decide what is true?”
This role requires:
Knowing which metrics are canonical and why.
Understanding which tables are authoritative.
Recognizing when an output violates prior institutional knowledge.
Detecting when a result is technically correct but strategically irrelevant.
Knowing who needs to act on which information.
Call this role an arbiter, a steward, a tastemaker. Or, my personal favorite: a data curator.
The rest of the org will know this group as “the data people we trust” and expect them to ensure answers hold up under scrutiny.
As analysis volume increases, we should expect greater volatility in the quality of answers. Without a trusted layer of curation, organizations will find themselves mired in even more noise and less signal, leading to decision paralysis or even worse: uninformed misalignment.
An example
Let’s say an executive asks a simple question: “Why did retention drop last week?”
Today’s AI can produce five plausible explanations:
A cohort mix shift
A recent feature launch
Competitive market pressures
A seasonality artifact
A marketing deluge
Each explanation includes supporting charts. Each sounds reasonable.
But can we trust that the AI is aware…
…a logging schema changed two weeks ago.
…the definition of “active user” was modified last quarter.
…a large enterprise customer churned and is distorting aggregates.
…a marketing campaign temporarily shifted acquisition mix.
…a prior experiment created a lagged retention artifact.
A strong data curator sees immediately:
One explanation is outright incorrect.
Three are technically true but misleading.
Only one meaningfully changes strategy.
They also know how to update the system with richer semantic definitions, crisper documentation and tighter canonical dashboards, so that the next AI-generated answer improves.
In the era of AI, jobs move up the stack
If there’s one thing you take away from this, let it be this: the data function is not disappearing.
The data job is moving up the stack, away from pure execution and toward interpretation, curation, and institutional memory.
The role becomes less about “Can I answer this?” and more about “What are the important questions for our organization to ask, and how can I curate a system that delivers fast, high-quality answers to those questions?”
In environments where decisions carry real cost, organizations will always prefer accountable interpretation over unowned output.
The future data leader is not the fastest producer of analysis. It is the person whose judgment the organization is willing to stand behind. When answers are abundant, trust becomes like a precious gem: increasingly rare and all the more valued.
The Data Job isn't Dying Because the Trust Problem is Exploding
opinionatedintelligence@substack.com · 2/17/2026
Thank you for subscribing to Opinionated Intelligence
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏ ͏
͏ ͏ ͏ ͏
Opinionated Intelligence
[https://substackcdn.com/image/fetch/$s_!9iYB!,w_80,h_80,c_fill,f_auto,q_auto:good,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3838e081-5cf4-43a7-906b-3ff319e9cd0a_481x481.png]
[https://substackcdn.com/image/fetch/$s_!Cznk!,w_1100,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f72707b-9fde-4d16-b1b9-36dc29e2e698_1100x220.png]https://email.mg-d0.substack.com/c/eJx0kc2O3SAMhZ8m7BKB-btZsKhU5TUiE5wM0wBRIJ1On77KvarUWXR3JMTn488LNtrK-enWk6j_oH0piVhwRuKqAyMnrDEcrBWCUcK4zxtlOrFRmLH982qsYm-OkONj1NKCNMLYhxCISviglnWUo0EWHXDQAsSDS6EFH-SASpqHthq5VGDMY8jv2_meCTrF09YHPtTL14bLj2EpicU6313vLq6dF7HdvbV21E5-62DqYCpHzLHku2LMjfY9bpQX-kLpYIodTFcOtMZMoZNTTFsnv79QoG-YvnEapr8f-6PU1icKEYcqB0z4u2T8qC-ghum4_B6XZ4wJN6rPOK4WLLe-H9dAvQrC9F74sZcmLDASkBkfsxCc_wLgw5E3dlx-XkpKV47tc6aMfqfwWvY1AlsseY7BaamtMIKdzneKP6UMW0klXPv5tFUvH0rCmN1_tLD29fRXpfMmKzBaaK44--ngTwAAAP__ypCx8g
You’ll start receiving new posts right here in your inbox. You can also log into
the website to read the full archives or access the blog from Substack app.
~ Julie and Chandra, from sundial.so
[https://email.mg-d0.substack.com/c/eJxUkMuO4yAQRb_G7GIB5uEsWMwmv2EVUHbI8LB4zCh_33J6070u1T1Hx0HHo9S32Svi7T9GVxISb9QCu_QEDdNKUa41YwQThLgdmLFCR79B_3FVWpCnYYLuXu5aS3cX3Hqwiint0d1B76vkJBhOuWScrXRhktF5mUEsapVaAl0EV2qd8-uor4x8EjQdN0_nNmzr4P7OriQS2na5Xi6m14EkmmfvZ5uWPxN_TPzRRvYB4tzKxB_kHHZzJaWRQ39vmMFG9N-P57AxOOih5C14IxepmWKkGjsJ-gHMR0nFj1g_5DasLwlCNuUMOZR8VQi5Y4zhwOyQ9N8ZR8N6LQuuJJNUUPLP8K8AAAD__yWGeK4]
© 2025 Julie Zhuo