View this post on the web at https://opinionatedintelligence.substack.com/p/data-has-a-discovery-problem

You don’t discover things on Google. You go to Google when you already know what you’re looking for: top places to go in Rome; <name’s> net worth; how to cook asparagus to impress my in-laws. You type a question, get an answer, and leave. It’s a tool for intent.

Subscribe to Opinionated Intelligence for essays on the art and future of data analysis.

Discovery happens somewhere else. It’s Instagram showing you a restaurant you didn’t know existed in a neighborhood you walk through every day. It’s X telling you that the latest model has dropped and it’s scary good. It’s TikTok teaching you a cooking technique (en papillote!) at 11pm that you never would have searched for because you didn’t know there was a name for it. Nobody asked for any of that. It was pushed. And it was relevant anyway.

Now look at how every company interacts with its data today: someone has a question, they query a dashboard, they get an answer. A VP pings a data analyst at 4:47pm on a Friday: “Can you pull the numbers on power user engagement for the board meeting next week?” Query, chart, send, squint, follow-up. Everyone is doing Google Search.

Pull works when someone knows what to ask. But what if the question that would actually change that VP’s behavior isn’t the one she asked? What if it’s the one she didn’t know to ask? Maybe mid-market healthcare deals are closing 40% faster this quarter and nobody’s reallocated pipeline. Maybe the Southeast rep who just hit 200% of quota is running a completely different playbook than everyone else. Those insights don’t emerge from Google Search. They sit in the warehouse like letters nobody opens.

The two modes of knowing

The data industry has been building for one mode of interaction for twenty years. It’s worth stepping back and noticing that there’s a second mode we’ve almost entirely ignored.

“I have a question” is ad-hoc analysis. Why did conversion drop? What’s our CAC by channel? How well is our latest campaign performing? This is search. This is pull. It’s valuable, it’s not going away, and the industry is very good at it.

“Tell me what I should know” is something else entirely. It’s not answering a question; it’s telling you which questions you ought to be asking. It’s the system saying: here’s something worth your attention that you haven’t thought to look for.

Think about the difference. One is reactive: a human has a hypothesis and goes to validate it. The other is proactive: the system surfaces a pattern the human hasn’t formed a hypothesis about yet. Twenty years of data tooling, billions of dollars of infrastructure, and nearly all of it serves the first mode.

The second mode is where the highest-leverage insights live, because the things you don’t know you don’t know are, almost by definition, the things that matter most. A conversion metric declining 3% per week for six weeks while everyone’s focused on the product launch. A customer segment churning in a pattern that only appears if you cut the data a way nobody thought to cut it. A region outperforming every other region that nobody’s discussed once. Those are the insights that change strategy. And they require push.

The three tiers of push

Push isn’t one thing. It’s a spectrum, and the mistake most people make is equating push with “more alerts.” Alerts are just the loudest, most annoying flavor.
There are actually three tiers, each with a different job and a different cadence.

Tier 1: Alerts

Interruptive. Buzzing phone. Something needs attention now. This is the fire alarm. A good Tier 1 is your Slack lighting up at 2pm on a Tuesday with a note: “Step 3 of your onboarding funnel dropped sharply on iOS with the latest release, indicating a possible bug.” It tells you what happened, why, and where to look next.

Most companies are stuck at Tier 1, and doing it badly. Their alerts are dumb threshold triggers with no context, no explanation, no signal about whether something is a blip or a trend. The result is alert fatigue, which is why “push” has a bad reputation. People hear “proactive insights” and picture Slack turning into an unreadable wall of bot messages.

The fix: If it’s not something someone should stop what they’re doing to address, it’s not Tier 1. Most teams have 50+ alerts. They should have 0-5 interruptive ones a day.

Tier 2: Scheduled briefings

Predictable. Your morning newspaper for business data. A good Tier 2 arrives at 8am with an intelligent summary that leads with what’s different: “Pipeline coverage for Q3 dropped below 3x for the first time this quarter. APAC added $1.2M in new opps overnight, offsetting a slowdown in EMEA. Three deals over $500K are scheduled to close this week, all in healthcare.” Coffee, briefing, first meeting. It replaces the ritual of opening four dashboards and trying to remember what the numbers looked like yesterday.

A dumb scheduled report shows you the same charts whether or not anything changed. A smart Tier 2 briefing knows what changed and leads with that.

The fix: Start with one briefing for one team. Define the five metrics that matter, set the delivery window, and iterate on signal-to-noise weekly. If people aren’t reading it after two weeks, the content needs improvement.

Tier 3: Ambient discovery

This is the feed. Not urgent, not scheduled. There when you have a moment to browse, like standing in line for coffee and checking in on what’s interesting between meetings. Imagine opening a feed that says: “Customers who activate Feature X within their first week retain at 2x the rate, but only 12% of new accounts are doing it.” Or: “Support tickets mentioning ‘billing confusion’ are up 60% since the pricing page redesign. Here are the top complaint clusters.” Or: “The APAC region quietly overtook EMEA in net-new ARR for the first time in three years.” Some of these might be analyses other people have done that haven’t made it over to your world yet. Others are proactively surfaced by an agent.

Tier 3 is the most underbuilt tier and the most valuable. It’s where serendipity lives. It’s the Instagram of your business data: you open it with no specific intent and you leave with a better mental model of your business.

The fix: Hardest to build, easiest to test. Have your best analyst spend some time every week writing “three things I noticed in the data that nobody asked about.” Send it to a channel. If people start forwarding it, you’ve validated the concept. Now automate it.
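To make the cadence differences concrete, here is a minimal Python sketch (mine, not from the post) of how a system might route a surfaced insight to one of the three tiers. The `Insight` fields and severity thresholds are hypothetical illustrations, not a prescribed design:

```python
from dataclasses import dataclass

@dataclass
class Insight:
    summary: str            # human-readable finding
    severity: float         # 0.0 (curiosity) to 1.0 (fire alarm); hypothetical scale
    actionable_now: bool    # does someone need to act immediately?

def route(insight: Insight) -> str:
    """Route an insight to a delivery tier.

    Tier 1 is reserved for interrupt-worthy emergencies; Tier 2 batches
    orient-worthy findings into the morning briefing; Tier 3 is the
    ambient feed for everything else worth browsing.
    """
    if insight.actionable_now and insight.severity >= 0.9:
        return "tier1_alert"      # interrupt someone now
    if insight.severity >= 0.5:
        return "tier2_briefing"   # hold for the 8am summary
    return "tier3_feed"           # ambient discovery feed
```

The point of the thresholds is the scarcity constraint described above: almost nothing should qualify for Tier 1.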
Why now?

It’s worth asking why push is the future. The answer is that we’ve entered the agent era. Look at what’s happening in consumer software. OpenClaw went from zero to hundreds of thousands of users in a few months. Why? Because people want agents that think and act on their behalf, so they can wake up to a morning summary prepared by their agent, or get messaged when something matters, or scroll a feed of tasks their agent completed overnight. Three tiers, running autonomously, in the background of someone’s personal life.

That paradigm is coming to business data. Not as a chatbot sitting in the corner of your BI tool waiting to be asked a question, but as a system that runs continuously in the background, understands what your metrics mean, and tells you what you need to know before you ask. We’re not talking about better BI; we’re talking about a different kind of software entirely: software that operates on your behalf instead of waiting to be operated.

What this requires from you

If the future of data is automatic, the most salient question for data leaders is: “What do I need to decide so the system can run itself?” Ask yourself these questions:

What counts as an emergency? This defines Tier 1. Not the specific thresholds but the principle. What rises to the level of actually interrupting someone’s flow state? Is it churn above a certain rate? A deal backsliding late in the quarter? A production incident affecting revenue?

What does your team need to orient every morning? This defines Tier 2. If your CEO had three minutes with coffee, what would they need to know? What’s the comparison window that reveals meaningful change? What’s the right delivery time and channel?

What parts of the business should the system be watching? This defines Tier 3. Is it monitoring user behavior? Pipeline health? Feature adoption and retention? Support trends? Revenue mix? The more territory you hand over, the more your team will discover.

These are questions of strategy and business context. Your job is to tell the system what matters; the system intelligently mines the depths to tell you what’s happening, why, and what you may want to do.

Where this goes

Five years from now, the idea that a human had to open a dashboard and manually notice a problem will feel as quaint as printing a spreadsheet to review in a meeting. The daily rhythm will be: Tier 1 for emergencies, Tier 2 for morning context, Tier 3 for discovery, and ad-hoc pull for the occasional deep dive. The question stops being “did you check the dashboard?” and becomes “did you see the latest?”

The future of data is push. Not more dashboards. Not faster queries. Not better charts. A fundamentally different relationship between people and the information they need to make decisions. This is what we’re building at Sundial.ai. Pull isn’t going away, but it’s going to stop being the center of gravity. Google Search had its decade; now it’s time for the feed.

Subscribe to Opinionated Intelligence for essays on the art and future of data analysis.

Stop Asking Your Data Questions

opinionatedintelligence@substack.com · 4/27/2026
View this post on the web at https://opinionatedintelligence.substack.com/p/learned-carelessness-bias-in-the

AI has become extraordinarily powerful at analysis. Its reasoning can feel compelling, its hypotheses often sound insightful, and its pattern recognition is genuinely remarkable. It synthesizes ideas at a speed and scale no individual can match. But that strength conceals a critical failure mode: AI is just as adept at surfacing patterns that don’t exist as it is at identifying ones that do. It generates hypotheses that appear highly plausible on the surface yet collapse under deeper scrutiny. The confidence and fluency of its output can obscure fundamental flaws in the underlying logic. I’ll be the first to admit I’ve been caught by this myself. I have both the learned-carelessness and automation biases.

It Is Both Brilliant and Stupid at the Same Time

This is the paradox that makes AI uniquely dangerous to work with: it can be breathtakingly right and breathtakingly wrong in the same breath, on the same topic, with the same tone of voice.

AlphaGo – The Movie is one of the clearest illustrations of this duality. In the award-winning documentary, DeepMind’s AlphaGo plays Lee Sedol, one of the greatest Go players in history. In Game 2, AlphaGo plays Move 37, a move so unusual that commentators initially thought it was a mistake. No human would have played it. It turned out to be a stroke of genius that redefined how experts understood the game. Brilliant.

And then came Game 4. Lee Sedol played his own unexpected move, Move 78, and AlphaGo spiraled. It made a string of bizarre, clearly losing moves, as if it had no understanding of the position at all. The same system that had just played the most creative move in Go history couldn’t recover from a single surprise. Stupid.

That’s the pattern. Not a gradual decline from good to mediocre. A sudden, invisible cliff from exceptional to nonsensical.

Consider a few more examples:

On the GPQA Diamond benchmark, 198 graduate-level science questions deliberately designed to be “Google-proof,” PhD domain experts score around 65-70%. The best AI models now score above 90%, outperforming the very experts who wrote the questions. These same models can then turn around and confidently tell you there are two R’s in “strawberry,” or fail to count the number of items in a short list. A system that reasons about quantum physics at a level beyond most PhDs cannot reliably do what a six-year-old can.

Google’s medical AI, Med-PaLM 2, demonstrated expert-level accuracy on medical licensing exam questions, yet in open-ended clinical conversations, it occasionally fabricated drug interactions and cited nonexistent studies with complete confidence. A system that passes the doctor’s exam still invents treatments.

AI coding assistants can architect sophisticated systems, write clean code, and explain complex algorithms, then introduce a subtle off-by-one error or reference a library function that doesn’t exist, wrapped in perfectly formatted, well-commented code that looks more trustworthy than most human output.

In every case, the failure shares the same signature: there is no change in tone, no hedging, no tell. The wrong answer arrives with exactly the same confidence as the right one.
Why Our Instincts Fail Us

Our entire lives are built on trust. We leave our phone on the charger and expect it to be there when we come back. When Amazon says the package arrives today, we plan around it. When a colleague gets something right ninety-nine times, we stop double-checking the hundredth. This may be a form of laziness, but it’s how humans function. Trust is the operating system that lets us navigate a complex world without re-verifying every single thing from scratch. And for the most part, it works beautifully.

We also know how to scope our trust. When a Nobel Prize-winning economist discusses inflation, we inherently trust them. We might not trust them to make a soufflé, but on economics, they’ve earned our confidence, and rightly so. Human expertise has boundaries, and we’re intuitively good at mapping them. We trust the economist on economics, the surgeon on surgery, the mechanic on engines. Domain expertise is reliable within its domain.

AI breaks this model entirely. AI does not degrade gracefully. It can be 99% correct in a domain and still produce outputs that are confidently, precisely wrong, within that same domain, on that same topic. There is no gradual degradation. With a human expert, failure at the boundary of their knowledge is expected and recognizable. They hedge, they slow down, they say “I’m not sure.” AI does none of this. Our instincts are not calibrated for a collaborator that oscillates between Move 37 and a total meltdown with zero warning. The result is systematic over-trust.

And here’s what makes it truly unnerving: even if you know AI is wrong 1% of the time, you don’t know which 1%. There’s no pattern, no warning label, no category of question where you can say “this is where it falls apart.” The errors are scattered randomly across the full range of its competence, hiding in plain sight among the 99% that’s flawless. You can’t selectively distrust it. You have to maintain vigilance across everything, which is precisely the thing human brains are not built to do.

This Isn’t New. The Social Sciences Saw It Coming.

What we’re experiencing with AI has a name: automation bias, the well-documented human tendency to favor suggestions from automated systems over contradictory information, even when the contradictory information is correct. The term comes from decades of research in social psychology, cognitive science, and human-factors engineering, originally studied in aviation cockpits, nuclear power plants, and intensive care units.

The research reveals an uncomfortable truth: expertise doesn’t protect you. Studies have shown that a 25-year veteran and a new hire are roughly equally likely to defer to a machine’s recommendation. Automation bias isn’t about naivety or technical inexperience. It’s rooted in how the brain manages limited attention and working memory. The human brain is a cognitive miser; it constantly looks for ways to spend less mental energy. When a system gives you an answer, accepting it is far easier than gathering your own evidence and reaching an independent conclusion.

Worse, there’s a compounding effect that researchers call “learned carelessness.” When automated systems prove highly reliable over time, our monitoring degrades further. High accuracy builds trust, and high trust makes us less likely to catch the rare failure. The 99% that’s right trains us to stop checking, which is precisely when the 1% that’s wrong does the most damage.
What makes AI particularly insidious is that this bias was already dangerous with simple, rule-based automation: flight management systems, spell-checkers, diagnostic alerts. Those systems at least failed in predictable, often detectable ways. AI fails unpredictably, fluently, and with the full appearance of expertise. It doesn’t flash a warning light. It writes you a paragraph.

Why This Is So Hard to Guard Against

If the problem were simply “AI sometimes gets things wrong,” it would be manageable. We deal with unreliable information all the time. What makes AI different is that it attacks the very mechanism we use to detect unreliability.

The fluency problem. We use language quality as a proxy for thinking quality. Typos, hedging, and disorganized arguments signal uncertainty. They invite scrutiny. AI produces none of them. Its worst outputs are grammatically flawless, logically structured, and presented with the same polish as its best. The heuristic we’ve relied on for our entire lives, “if it sounds like they know what they’re talking about, they probably do,” breaks down completely.

The volume problem. AI lets you produce more, faster. But every additional output is another thing you need to verify. The temptation is to let volume outrun scrutiny, and most of us give in to it without even noticing. When you’re reviewing the twentieth AI-generated analysis of the day, your verification standards are not what they were on the first.

The expertise inversion. Counterintuitively, the areas where AI is most impressive may be where it’s most dangerous. When AI produces an output that’s clearly beyond your own expertise, an analysis you couldn’t have written yourself, you have the least ability to evaluate it. You’re in awe of the quality, and you lack the domain knowledge to spot the flaw. The moment you’re most impressed is often the moment you should be most skeptical.

The speed trap. The whole point of using AI is to go faster. Verification slows you down. This creates a constant tension between the reason you adopted AI in the first place and the discipline required to use it safely.

Recalibrate

So, how can this be fixed? There’s no silver bullet here. No checklist that makes this easy. The honest truth is that guarding against this bias requires something uncomfortable: recalibrating a trust instinct that has served you well your entire life. That instinct that says “this has been reliable, so I can relax” isn’t wrong. It’s just miscalibrated for this particular tool. And recalibration is hard precisely because it means overriding something that feels right. You’re not fighting ignorance. You’re fighting a lifetime of well-earned intuition. But there are ways to start.

Change your mental model. The single most effective shift is to stop thinking of AI as a senior expert and start thinking of it as a brilliant but unreliable intern: talented, fast, impressive on a good day, but you’d never submit their work without reading it yourself. That reframe alone could change how you engage with the output.

Separate generation from evaluation. Use AI to produce analysis. Put the output down. Come back to it with the explicit goal of finding what’s wrong, not confirming what’s right. The shift from “creator” to “critic” is small but powerful.

Ask AI to argue against itself. After AI gives you an answer, ask it to make the strongest possible case that its own answer is wrong. This won’t catch every error, but it surfaces the ones hiding behind confident phrasing. If the counterargument is more compelling than the original, you’ve found a crack.
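As a concrete illustration of that last technique (my sketch, not from the post), here is a minimal Python pattern for separating generation from critique. It assumes a generic, hypothetical `ask(prompt) -> str` function wrapping whatever model you use:

```python
def ask(prompt: str) -> str:
    """Placeholder for your model call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

def generate_then_critique(question: str) -> dict:
    # Step 1: generate the analysis as usual.
    answer = ask(question)

    # Step 2: in a separate call, force the model into the critic role.
    critique = ask(
        "Here is an analysis:\n" + answer +
        "\nMake the strongest possible case that this analysis is wrong. "
        "Focus on load-bearing claims and unstated assumptions."
    )

    # Step 3: a human reads both; if the critique is more compelling
    # than the answer, that's the crack worth investigating.
    return {"answer": answer, "critique": critique}
```

Keeping the two calls separate matters: asking for an answer and its critique in one breath tends to produce a token rebuttal rather than a genuine adversarial pass.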
Verify the load-bearing claims. You can’t check everything, nor should you try. Instead, identify the one or two claims that the entire output depends on and verify those independently. If the foundation holds, the structure is more likely sound. If it doesn’t, nothing built on top of it matters.

Build verification into the workflow, not after it. If checking happens at the end, when you’re tired, when the deadline is close, when the output already looks done, it won’t happen well. Design your process so that critical verification steps occur during the work, not as a final gate that’s easy to rush through.

None of this is easy. Recalibration never is. It means slowing down when the tool is designed to speed you up, and maintaining skepticism toward something that has earned your trust ninety-nine times out of a hundred. But that’s the work now. The teams and individuals that will define this era won’t be the ones that adopt AI the fastest. They’ll be the ones that learn to recalibrate the fastest: trusting AI enough to benefit from it, while questioning it enough to survive it.

Learned Carelessness Bias in the Age of AI

opinionatedintelligence@substack.com · 4/16/2026
View this post on the web at https://opinionatedintelligence.substack.com/p/the-value-of-correlations-validation

Introduction

In the previous piece, we explored how orthogonal context helps AI construct a unique, accurate story about a business. Independent signals such as user count, engagement depth, product category, and incentive structures give the model enough dimensionality to converge on a single coherent interpretation rather than defaulting to the most generic one. But there’s a second class of signals that plays an equally important role, and it works in the opposite direction. Orthogonal metrics help AI build the story. Correlated metrics help AI pressure-test whether the story is true.

Correlated metrics are measurements that naturally move together because they describe different facets of the same underlying phenomenon. In product analytics, these relationships are everywhere: DAU, WAU, and MAU are mathematically nested. Revenue tracks with transaction volume and average order value. New user signups and total active users are behaviorally linked.

There’s a deeper reason why correlations are so pervasive in business data: human behavior is not random. People are social creatures. We mimic each other, follow similar patterns, respond to similar incentives. The theoretical spectrum of possible behaviors is vast, but the behaviors that actually occur in nature cluster tightly. Users don’t independently invent unique ways of interacting with a product; they gravitate toward common patterns, which is precisely why metrics that measure different aspects of that behavior tend to move in lockstep.

Because these metrics are connected, they form a kind of internal accounting system. When everything is working normally, they move in predictable harmony. When they don’t, that is, when metrics that should move together start diverging, that anomaly is a signal. It might indicate a data integrity problem, a measurement bug, or a genuine shift in user behavior. But it almost always means something worth investigating.

The ability to detect these divergences is uniquely valuable because it enables a different kind of decision: not just “what is happening?” but “should I trust what the data is telling me?” and “has something fundamentally changed?” It’s one of the most powerful capabilities AI can bring to the table if it has access to the right correlated signals.

Part One: Correlations as a Validation Layer

Before any analysis can be trusted, the data itself has to be trustworthy. This sounds obvious, but it’s one of the most common failure modes in practice. Teams build sophisticated analyses on top of numbers that are silently broken: a tracking pixel that stopped firing, a deduplication bug in the data pipeline, a timezone mismatch between two systems. Correlated metrics provide a natural defense against this. They act as a built-in consistency check: if two metrics that should move together are moving apart, something in the data layer may be broken before you even get to the interpretation layer.

Example 1: DAU, WAU, and MAU

Daily, weekly, and monthly active users are inherently nested. By definition, every daily active user is also a weekly active user and a monthly active user. This means certain mathematical relationships must always hold: DAU ≤ WAU ≤ MAU. Always. Without exception.
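To make the hard constraint concrete, here is a minimal Python sketch (mine, not from the post) of the kind of invariant check a system could run continuously; the shape of the `metrics` dictionary is a hypothetical assumption:

```python
def check_active_user_nesting(metrics: dict) -> list[str]:
    """Flag violations of the hard constraint DAU <= WAU <= MAU.

    `metrics` is assumed to look like
    {"DAU": 120_000, "WAU": 450_000, "MAU": 1_200_000}.
    """
    violations = []
    if metrics["DAU"] > metrics["WAU"]:
        violations.append("DAU exceeds WAU: impossible by definition")
    if metrics["WAU"] > metrics["MAU"]:
        violations.append("WAU exceeds MAU: impossible by definition")
    return violations  # an empty list means the nesting invariant holds

# Any non-empty result points at the data layer (tracking, pipeline,
# deduplication), not at user behavior.
```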
Beyond that hard constraint, there are softer behavioral expectations. If DAU increases steadily over several weeks, WAU should generally rise as well. Now imagine you see this pattern in your dashboard:

Metric  Trend
DAU     Increasing
WAU     Decreasing
MAU     Flat

This should stop you in your tracks. More people are showing up every day, but fewer people are showing up every week? While this can theoretically happen (overlap across days keeps increasing), it rarely ever happens in nature. The most likely explanations are a tracking bug, a pipeline issue, or an error in how users are being deduplicated across time windows. The point is that you can catch this problem before you waste time interpreting what the trend “means” for the product. The correlated metrics told you the data was inconsistent before you even asked an analytical question.

This is a case where AI adds clear value: it can continuously monitor the nested mathematical relationships between DAU, WAU, and MAU across every product line and segment, flagging violations the moment they appear. No one needs to remember to check.

But there’s an important asymmetry here. If the deduplication bug has always existed, that is, if WAU has never properly reflected the nesting relationship because the pipeline has been broken since launch, then the historical data will look internally consistent. AI will learn the broken pattern as normal. The persistent absence of a correlation that should exist is much harder to detect than a sudden change in one that did exist. That’s the kind of gap where a data scientist’s mental model matters: someone who knows these metrics must nest a certain way can look at the data and ask “why don’t they?” This is a question AI won’t generate on its own without that expectation supplied as a prior.

Example 2: DAU vs New Users

Here’s a subtler case. Suppose you observe:

New user signups increased 7% over the past three weeks
New users represent roughly 10% of the total active user base

Under normal conditions, you’d expect DAU to tick upward. The math is simple: 7% growth in a segment representing 10% of the total should produce roughly 0.7% growth in overall DAU, assuming existing user behavior remains constant. But instead, you see:

Metric     Change
New Users  +7%
DAU        0%

The overall number didn’t move. That means something absorbed the new user growth. Either existing users churned at a rate that exactly offset the new arrivals, or the new users themselves aren’t actually showing up as active, which could mean an activation problem, a measurement gap between “signed up” and “active,” or a tracking issue on one side of the equation.

Without looking at these two metrics side by side, you’d see new user growth and feel good about acquisition. You’d see flat DAU and feel neutral. Neither metric alone raises an alarm. The alarm only becomes visible when you check whether the two metrics are behaving consistently with each other.

An AI system monitoring these two metrics over time would notice if they stop tracking together. That is, if new users spike but DAU doesn’t follow, the divergence from the historical pattern is detectable. But imagine a company where this offset has been the norm for months: new user counts have always been slightly inflated relative to DAU because of a gap between “signed up” and “counted as active.” In that case, there’s no divergence for AI to detect. The data looks stable. It takes a human asking “shouldn’t a 7% increase in 10% of the base produce a measurable DAU lift?” to surface the problem.
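The expected-lift arithmetic is simple enough to encode directly. A minimal sketch (mine, with the hypothetical numbers from the example above):

```python
def expected_dau_lift(segment_growth: float, segment_share: float) -> float:
    """Expected overall DAU growth from growth in one segment,
    assuming behavior in the rest of the base stays constant."""
    return segment_growth * segment_share

# 7% new-user growth in a segment that is 10% of the base:
lift = expected_dau_lift(0.07, 0.10)   # -> 0.007, i.e. ~0.7% DAU growth

observed_dau_growth = 0.0              # what the dashboard actually shows
gap = lift - observed_dau_growth       # ~0.7 points of missing growth
# A persistent gap means churn is offsetting acquisition, or the
# "signed up" vs "active" definitions are measuring different things.
```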
The analytical value isn’t in the pattern; it’s in the expectation that the pattern should look different than it does.

Example 3: Transactions versus Revenue

Financial metrics offer some of the cleanest examples of correlated validation, because the relationships are often arithmetic rather than just behavioral. Revenue = Number of Transactions × Average Order Value. This isn’t a loose correlation. These are exactly related by an equation. So when the relationship breaks, the signal is unambiguous:

Metric               Change
Transactions         +15%
Average Order Value  Stable
Revenue              +2%

Fifteen percent more transactions, at the same price per transaction, should produce something close to fifteen percent more revenue. A two percent increase means roughly thirteen percentage points went missing somewhere. The possible explanations are specific and testable: a spike in refunds, a revenue attribution error, a reporting lag, or perhaps a mix shift toward lower-value transaction types that isn’t captured in the “average” figure. Whatever the cause, the gap between expected and observed revenue is concrete and measurable. This is exactly the kind of discrepancy that demands investigation before anyone presents these numbers in a board deck.

This is one of the cleanest illustrations of the detection asymmetry. If transactions and revenue have been moving in lockstep for six months and then suddenly diverge, an AI system monitoring correlation stability will flag it immediately, and can do so across hundreds of metric pairs simultaneously, which is something no analyst can replicate manually. That’s the automation upside. But if the relationship has never been tight, say, because a longstanding refund process has always decoupled the two, or because revenue attribution has been broken since the pipeline was built, then AI has no baseline to deviate from. It will treat the broken state as normal. A data scientist who knows that Revenue = Transactions × AOV will look at the same data and immediately ask why the arithmetic doesn’t hold. That prior knowledge, the expectation of what the correlation should be, is exactly the kind of input that makes AI systems dramatically more powerful when it’s supplied.
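Because the relationship here is an identity rather than a statistical tendency, the check is trivial to automate. A minimal sketch (mine; the tolerance and the example figures are hypothetical):

```python
def revenue_identity_gap(transactions: int, avg_order_value: float,
                         reported_revenue: float,
                         tolerance: float = 0.02) -> float | None:
    """Return the relative gap between expected and reported revenue,
    or None if Revenue = Transactions * AOV holds within tolerance."""
    expected = transactions * avg_order_value
    gap = (expected - reported_revenue) / expected
    return gap if abs(gap) > tolerance else None

# 15% more transactions at stable AOV, but revenue only +2%:
gap = revenue_identity_gap(transactions=115_000, avg_order_value=40.0,
                           reported_revenue=4_080_000.0)
# expected = 4,600,000; gap is about 0.113, i.e. ~11% of expected revenue
# unaccounted for. Refunds, attribution errors, reporting lag, and mix
# shift are the testable suspects.
```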
Part Two: Correlations as an Anomaly Detection System

Validation catches data problems. But correlation analysis does something more interesting too: when the data is clean and correlated metrics still diverge, that divergence often contains the most important insight in the entire dataset. These are real anomalies, not bugs. These are genuine shifts in how the system is behaving. They’re the moments where something changed in the product, the market, or user behavior, and the fingerprint of that change shows up as a broken correlation.

Growing Acquisition, Flat Engagement

Metric     Trend
New Users  Increasing rapidly
DAU        Flat

This is one of the most common and most important anomaly patterns in product analytics. Acquisition is working. Marketing is driving signups. But the users aren’t staying. They sign up, maybe open the app once, and disappear. The total active user count doesn’t grow because new arrivals are being offset by new churn.

The possible causes cluster around a few themes. The acquisition channel may have shifted to a lower-quality source. Paid campaigns may be driving installs but attracting users with weak intent. The onboarding experience may be failing, losing users before they reach the product’s core value. Or there may be a product-market fit gap for the new audience being targeted.

Here’s why the correlation matters: if you only looked at acquisition metrics, you’d see a growth story. If you only looked at DAU, you’d see stagnation. Neither view alone tells you that the problem is specifically in the conversion from new user to retained user. The broken correlation between the two metrics is what localizes the problem.

Flat Sessions, Rising Time Spent

Metric            Trend
Sessions          Flat
Total Time Spent  Increasing

People are opening the app the same number of times, but spending more time each visit. Session length is growing. The optimistic reading: the product is getting more engaging. Users are finding more to do, going deeper, consuming more content per session. If you recently launched a new feature (a feed, a recommendation engine, longer-form content), this pattern might be confirmation that it’s working.

The less optimistic reading: users are spending longer because the experience is getting harder. A slower-loading interface, a confusing navigation change, or a search function that requires more attempts to find the right result can all increase time spent without increasing satisfaction. Time is going up, but it’s frustrated time, not engaged time.

The correlation pattern alone can’t tell you which interpretation is correct. But it does something essential: it surfaces the question. Without noticing the divergence between sessions and time spent, you might never ask why session length is changing. The anomaly is the starting point for investigation, not the conclusion.

Rising DAU, Stable New Users

Metric     Trend
New Users  Stable
DAU        Increasing

This is often the healthiest pattern a product can show. Acquisition isn’t growing, but active users are. That means existing users are coming back more often, staying longer, or reactivating after periods of inactivity. The product is getting stickier without relying on a growing top of funnel. Possible drivers include improved retention from product changes, a new feature that increases usage frequency, network effects kicking in as the user base matures, or seasonal tailwinds that favor the product category.

This pattern is particularly valuable because it suggests the growth is organic and sustainable. It’s not being purchased through marketing spend or inflated by a promotional cycle. The product itself is generating more engagement from its existing base, which is usually a strong signal of deepening product-market fit.

This is also a pattern where AI’s continuous monitoring shines. A human analyst might check retention dashboards periodically, but the moment DAU begins outpacing new user growth, even by a small margin, is easy to miss in a weekly review. AI watching the correlation daily can surface the divergence early, often weeks before it would become obvious in a dashboard trend line. Early detection of this pattern matters because it can inform decisions about whether to double down on retention investments or shift marketing spend.

Part Three: When Correlations Themselves Change

There’s a more advanced concept worth understanding: the relationships between metrics are not permanent. They evolve as products and businesses mature. In an early-stage product, new user growth is usually the primary driver of DAU. The user base is small, retention patterns haven’t stabilized, and each new cohort represents a large percentage of total activity. At this stage, new users and DAU will be tightly correlated. In a mature product, the relationship loosens.
The existing user base is large enough that DAU is primarily a function of retention and engagement frequency, not new signups. A 10% increase in new users might barely register in overall DAU because new users represent such a small fraction of the total.

This shift is natural and expected. But it means that the benchmarks for what constitutes a normal correlation need to evolve over time. A divergence between new users and DAU that would be alarming in an early-stage product might be completely unremarkable in a mature one.

This is where the interplay between AI and human judgment becomes most nuanced. AI is well-equipped to detect that a correlation has weakened over a defined time period. It can measure the rolling correlation coefficient between new users and DAU, notice that it dropped from 0.85 to 0.40 over six months, and surface that as a finding. What AI is less equipped to do unprompted is distinguish between a correlation that weakened because the business matured (normal and expected) and one that weakened because something broke (abnormal and urgent). Both look the same in the data. The difference lies in whether the analyst expected the shift. This requires understanding the product’s lifecycle stage, strategic direction, and what “normal maturation” looks like for this type of business. Supplying that frame is what turns AI’s detection capability into genuine analytical insight.

The same principle applies to other metric pairs. Revenue and transaction count may be tightly coupled when a company has a single product at a fixed price point. As the company introduces new products, pricing tiers, or subscription models, the relationship becomes more complex and the historical correlation weakens, not because something is wrong, but because the underlying system has changed. Monitoring changes in correlations over time can therefore reveal structural shifts in the business: transitions from growth-led to retention-led DAU, shifts in monetization mix, changes in the composition of the user base. These are slow-moving but high-impact changes, and they’re often invisible in any single metric trend.

The Framework

In summary, orthogonal and correlated metrics serve complementary roles in how AI and humans should reason about complex systems:

Orthogonal metrics provide the independent dimensions needed to construct a specific, non-generic interpretation. They help AI build the story.

Correlated metrics provide the internal consistency checks needed to verify that the story holds up. They help AI validate the story.

Correlation shifts, changes in the historical relationships between metrics, surface structural changes that neither individual metrics nor static correlations would reveal. They help AI detect when the system itself has changed.

When AI has access to all three types of signals, it can do something that looks remarkably like good analytical judgment: construct a coherent narrative, verify it against multiple independent checks, and flag the moments where historical patterns break down. Those breakdowns, the places where metrics that should agree start disagreeing, are almost always where the most valuable insights are hiding.

But there is a critical asymmetry in what AI can and cannot do here. AI excels at detecting changes in correlation: when two metrics that historically moved together suddenly diverge, that’s a statistical signal it can pick up automatically across hundreds of metric pairs simultaneously.
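For the detection half, the mechanics are standard. Here is a minimal sketch (mine, using pandas, with hypothetical column names, window sizes, and thresholds) of the rolling-correlation monitoring described above:

```python
import pandas as pd

def correlation_drift(df: pd.DataFrame, col_a: str, col_b: str,
                      window: int = 90, alert_drop: float = 0.3) -> pd.Series:
    """Compute the rolling correlation between two daily metrics and
    flag days where it has dropped sharply versus ~six months earlier.

    `df` is assumed to have one row per day with columns such as
    "new_users" and "dau".
    """
    rolling_corr = df[col_a].rolling(window).corr(df[col_b])
    # Compare today's rolling correlation to the value ~180 days ago.
    drift = rolling_corr - rolling_corr.shift(180)
    return drift[drift < -alert_drop]  # e.g. 0.85 -> 0.40 is a -0.45 drift

# Whether a flagged drift means "the business matured" or "something
# broke" is the judgment call the analyst still has to supply.
```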
What AI is far less likely to detect is the persistent absence of an expected correlation. This is the case where two metrics that should be related have never been related in the data, because of a longstanding bug, a broken pipeline, or a structural issue that predates the available history. In those cases, the data looks internally consistent. There’s no divergence to flag. The insight comes only from someone who knows what the relationship should look like and notices that it doesn’t.

The best analysts have always known this intuitively. The opportunity with AI is to make this kind of multi-dimensional, correlation-aware reasoning systematic, automatic, and continuous, applied not just to the metrics someone thought to check, but to every relationship in the system, all the time. And the opportunity for data scientists is to supply the priors that AI cannot generate on its own: the expectations about what correlations should exist, what ranges are normal, and what absences are suspicious. That combination of AI’s breadth of detection plus human domain knowledge about what should be true is where the real analytical power lies.

The Value of Correlations: Validation and Anomaly Detection

opinionatedintelligence@substack.com · 4/15/2026
View this post on the web at https://opinionatedintelligence.substack.com/p/what-it-actually-takes-to-trust-ai

Picture this. It’s your fourth day at your new job. You’re sitting in a large team meeting, still figuring out how to connect to the office Wi-Fi, when the GM pulls up a chart and asks the room: “Retention dropped 5% last month. What’s going on?”

You glance at your laptop. You’ve been poking around the data warehouse since Day 1 because you’re eager and you’re smart. You actually have an answer. You found a dashboard, ran some queries, and you’re pretty sure you know what’s going on. So you raise your hand. “It looks like the Android cohort drove the decline. Their D7 retention fell significantly compared to the prior month.”

The room goes quiet. Your manager clears her throat. “We… uh… changed the definition of ‘active user’ three weeks ago. It used to be any event, and now it’s a meaningful engagement event. The drop isn’t real.”

You slowly lower your hand.

Here’s the thing: you’re eager and you’re smart. Your SQL was fine. Your chart was clean. Your explanation was coherent and plausible. You were just missing context that no amount of raw intelligence could compensate for. This is what AI does.

Subscribe to Opinionated Intelligence to get new essays on the future of analysis, decision-making + AI.

AI is the world’s smartest new hire

Today’s AI models are extraordinary. They write SQL so robust it’ll make your data engineer weep with respect. They generate charts as pristine as a McKinsey slide deck on presentation day. They can produce multi-paragraph explanations that boom with the full-regalia confidence of your most Senior, most Tenured, Crotchety Analyst. But they don’t actually have that confidence. When you point ChatGPT, Gemini, or Claude at your data warehouse and ask “why did retention drop?”, you are handing the question to a brilliant person who has never worked at your company, has never attended your team’s meetings, doesn’t know your metric quirks, and doesn’t know half the things you do about your business. The problem is not intelligence. The problem is everything else.

Today, dozens of cutting-edge companies have told us that their AI agents are right maybe 70-85% of the time. This is game-changing for data practitioners, who can now do analyses in a fraction of the time. If a response sometimes smells fishy, well, these data ninjas have the know-how to check, veto, or rework their AI’s output. But to let business users loose with the promise of data democratization via natural language chat? Unfortunately, 70-85% just isn’t good enough. So how do we close the remaining gap and get something actually trustworthy?

4 things your best analyst knows that your AI should too

Think about that amazing senior analyst on your team that everyone trusts. The one whose Slack messages get screenshotted and forwarded to the exec team. What do they actually know that makes them so good?

1. Which numbers matter

Your company has 14 dashboards with some version of “retention” on them. Three of them use different definitions. One was built by someone who left two years ago and nobody’s touched it since. One is the “official” board metric. Your best analyst knows which one is canonical. They know that the dashboard labeled “Retention - Master” is, ironically, the wrong one. (It’s the one called “retention_jake_final” that the CFO actually uses, because Jake built it to match the board definition before he left, and nobody’s ever renamed it.)
Generic AI probably doesn’t know this. It’ll grab the table that sounds more correct and spit that number back via a snazzy “Ask AI in Slack” interface that everyone’s been using, and your PM will be none the wiser as they copy it into their next presentation, where it becomes the number everyone argues about for the next 20 minutes.

2. What changed in the business

Revenue jumped 30% last quarter, hooray! The latest model with access to your CRM and Slack might actually piece together a solid story: it finds the enterprise deal that closed, sees the Slack thread where the sales team went crazy on emojis, and pulls the contract value from Salesforce. But this deal was unusual: the VP of Sales gave the client 18 months of free onboarding support to close, a concession made in the final stages of a late-night phone call. The revenue is real, but the margin story is a different picture, and the strategic rationale lives in someone’s head.

Piping in context from Slack, Linear, and your CRM bridges part of the gap. But there will always be judgment calls, side conversations, and unwritten context that no integration automatically captures. These misalignments are often discovered the human way: one day some part of a story doesn’t make sense, and questions are asked. Your best analyst is always listening and paying attention to the goings-on of the business.

3. How to actually think about the problem

Give AI and a senior analyst the same question, “Why did retention drop?”, and watch what happens. AI opens the data, starts slicing, and follows whatever looks interesting. It’ll build a beautiful cohort analysis, then do a segmentation deep-dive, and come back with a full report that technically answers the question but doesn’t actually move the decision.

Your best analyst takes a different tack. They start by asking who cares and why. They scope the problem before they touch the data. They work backwards from what decisions need to be made, and then assess whether a directional answer is sufficient or deep, precise answers are needed. They have entire playbooks for how to distill a business question into data questions, and how to separate the signal from the noise. This is the accumulated judgment of someone who has done hundreds of analyses at this company, for these particular stakeholders, with all the weird quirks of this specific ecosystem.

Generic AI, like a junior analyst, investigates what’s asked and hands you an encyclopedic answer that’s technically impressive but may be practically useless: the analytical equivalent of answering “What should we have for dinner?” with a complete nutritional breakdown of every restaurant within ten miles.

4. What happened last time

Every January, your numbers dip. Every January, someone panics. And every January, your best analyst says: “It’s January. It always dips. It’ll bounce back by the third week.” They know this because they investigated it the first time, were right, and watched what happened next. Over three years of doing this, they’ve built a calibrated sense of when to worry and when to wait.

Generic AI starts from zero, and it never closes the loop. It produces an answer and moves on. It doesn’t know which of its past recommendations were right, which were subtly wrong, or which ones led to a decision that backfired. It has no way to learn that the churn analysis it produced last quarter actually missed the real driver, or that the context it was given about metric definitions introduced a new error somewhere else.
Your best analyst is always updating their mental model with new information: this worked, that didn’t, this source is reliable, this one is error-prone. Most AI systems today have none of that. Teams add context ad hoc, fix the errors they notice, and have no systematic way to know if accuracy is improving or degrading.

What this looks like in real life

In one recent example, we gave 50 real questions from actual users to a state-of-the-art model running on top of a clean set of tables in a warehouse. These tables even had a semantic layer! The model got slightly better than 80%. This is great if you are a data-proficient analyst who can read SQL or Python as well as a bookseller reads novels; you are now vastly more empowered in your work. This is not nearly good enough if you are a business user! Person after person told us how they’d rather rely on a human analyst since they couldn’t be sure AI was right. Even data leaders agreed: the downstream impact wasn’t just embarrassing meetings but bad business decisions, leading to thrash across the organization and greater overall skepticism of AI analysis.

What were these 20% errors? They were practically all failures of context. In a few cases, the system treated sign-ins as signups because event data used those terms in a non-standard way. In another, it interpreted “month X revenue” as a rolling 28-day window when the business expected a calendar month. In another, it pulled registration counts from the wrong source because cumulative registrations and daily registrations were defined differently, including how deleted users were handled. Once the appropriate context was applied, the same set of questions jumped to 98% accuracy. Same model; same warehouse; same underlying data. What changed was not intelligence. It was institutional know-how, as well as a heaping dose of careful monitoring, measurement, and iteration.

What can you do about this?

Here are five practical places to start:

1. Define your canonical metrics. Create one trusted source for your most important business metrics. Call it your “golden set.” Only one definition allowed. If there are confusingly named metrics, or you observe AI pulling incorrectly, feed that context in.

2. Log major business changes. Campaign launches, pricing tests, onboarding redesigns, policy changes, instrumentation changes, metric definition changes: these should live somewhere machine-readable and easy to retrieve (see the sketch after this list).

3. Capture analytical playbooks. What do your best analysts check first when retention drops? Which cuts matter? Which segments are strategic? What questions are usually noise? Write that down.

4. Continuously update your memory. When your team investigates a recurring issue, do not let the answer disappear into Slack. Store the conclusion, the evidence, and what was ruled out so the system can use it next time.

5. Measure whether it’s working. This is the step that matters the most as your system matures. Track accuracy over time. Understand which questions your AI gets right, which it gets wrong, and why. Without this, enrichment becomes a game of whack-a-mole, because improving context in one spot can cause a regression in another. You need a feedback loop of enrichment → measurement → observability, not just growing piles of context.
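On point 2, here is a minimal Python sketch (mine, not from the post) of what a machine-readable business-change log entry could look like; all field names and values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessChange:
    """One machine-readable entry in a business-change log."""
    date: str                  # ISO date the change took effect
    kind: str                  # e.g. "metric_definition", "pricing_test"
    summary: str               # one-line human description
    affected_metrics: list[str] = field(default_factory=list)

changelog = [
    BusinessChange(
        date="2026-04-06",
        kind="metric_definition",
        summary="'Active user' changed from any event to a meaningful "
                "engagement event.",
        affected_metrics=["retention", "dau", "wau", "mau"],
    ),
]

# An AI analyst asked about a retention drop can first retrieve entries
# whose affected_metrics overlap the metrics in the question, which is
# exactly the context the new hire in the opening anecdote was missing.
```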
The gap is closing, but not how you think

The funny thing about this problem is that it’s not going to be solved by meatier models. GPT-6 or Claude 5 or whatever comes next will be even more capable at reasoning, even more fluent, even more powerful in its working memory. But out of the box, it still won’t know your business better.

Part of the answer is giving the model better context. It’s getting easier and easier to connect and ingest company details, and teams that do this well will see real improvements. But context alone also hits a ceiling. Every team that’s pushed past 85% accuracy learns how fragile ad-hoc enrichment is. The teams that actually close the gap aren’t just adding more context; they’re measuring whether the context is actually working to improve quality. Enrich, then measure accuracy, then observe where things break, then decide what to enrich next. It’s a continuous loop.

In a world where every company has access to the same frontier models and the same integration tools, the differentiator is the system around it. How well does AI know your business? How well does it keep track of what’s changing? How well can you tell whether its trustworthiness is increasing or decreasing? The best AI analysts won’t be the smartest. They’ll be the ones running the tightest loop between what they know, what they measure, and what they fix next.

(P.S., if you want a system that does this… that’s why we built Sundial.ai)

Subscribe to Opinionated Intelligence to get new essays on the future of analysis, decision-making + AI.

What It Actually Takes to Trust AI With Your Data

opinionatedintelligence@substack.com · 3/26/2026
View this post on the web at https://opinionatedintelligence.substack.com/p/why-ai-analysis-gives-you-generic

The Metrics Trap

Consider a simple example. You’re told that a product has:

5 million monthly active users (MAU)
DAU/MAU of 80%

Most people would immediately assume this is a strong product. Five million users represents meaningful scale, and an 80% DAU/MAU ratio shows that four out of five monthly users come back every single day, signaling exceptional engagement. With only these two data points, both a human analyst and an AI system would reasonably conclude: this company is doing well.

Subscribe to Opinionated Intelligence to get new essays on the future of analysis, data, and decision-making.

But watch what happens when you add another piece of information. Average daily session length: 30 seconds.

Now the picture shifts. For most products, especially consumer and social ones, a thirty-second session means users aren’t doing much. They might be opening the app out of habit, glancing at a notification, and leaving. The high DAU/MAU ratio suddenly looks less like deep engagement and more like shallow, reflexive behavior.

Except: what if the product is a payments app? Something like Venmo or Zelle or UPI? In that case, thirty seconds is perfectly natural. Users open the app, send money, confirm, and close it. A short session length isn’t a weakness; it’s a feature of the product category.

This is the metrics trap: any individual metric, taken in isolation, supports multiple contradictory interpretations. The number itself doesn’t tell you whether the company is thriving or struggling. Only context does.

Why AI Falls Into This Trap

LLM systems reason by pattern matching at enormous scale. When you present a set of facts, the model searches its learned representations for the most coherent explanation that fits those facts. When context is rich and specific, this process works remarkably well. The model can narrow down to a single plausible interpretation and reason about it with precision. But when context is thin, many different stories remain equally plausible, and the model has no way to distinguish between them. In that situation, it does the only thing it can: it selects the interpretation that is most common in its training data and presents it as though it were the obvious conclusion. This isn’t a bug. It’s the fundamental mechanism. And it means that vague inputs reliably produce generic outputs.

If you tell an AI “DAU/MAU is 80%” and nothing else, the model doesn’t know if the product has a hundred users or a hundred million. It doesn’t know if it’s a game, a banking app, or an enterprise tool. It doesn’t know if engagement is organic or subsidized. So it picks the most typical scenario, probably a consumer app with decent traction, and builds its analysis around that assumption, without telling you it’s assuming.

The Concept of Orthogonal Context

The solution is what we can call orthogonal context: independent pieces of information that describe the situation from different, non-overlapping dimensions. The word “orthogonal” comes from geometry. It means “at right angles,” or more broadly, independent. In this context, it means each new piece of information you provide should reduce ambiguity in a direction that the other pieces don’t already cover.

Here’s a practical example.
Consider these four data points:
- DAU/MAU = 80% → tells you about engagement frequency
- MAU = 5 million → tells you about scale
- Average session length = 30 seconds → tells you about engagement depth
- Product category = payments app → tells you about expected user behavior

Each one describes a different dimension of the product. None of them is redundant with the others. Together, they paint a specific and coherent picture: a payments app at meaningful scale with high-frequency, low-duration usage, which is exactly what you'd expect from a well-functioning product in that category.

Now compare that with providing four data points that all describe the same dimension:
- DAU/MAU = 80%
- Weekly active users / MAU = 90%
- D7 retention = 75%
- D30 retention = 70%

These are all engagement metrics. They're correlated with each other. Providing all four gives you more precision on one axis, but it doesn't help the model understand the broader picture. You know engagement is high, but you still don't know at what scale, in what product category, or whether the engagement is organic. The principle is straightforward: to construct a unique story, breadth of context matters more than depth on a single axis.

AI Needs a Unique Story

Here's a useful way to think about what happens inside the model when you give it information. AI is implicitly trying to construct a single coherent narrative that explains all the data points simultaneously. The fewer data points you provide, the more narratives remain plausible. The more orthogonal context you add, the more candidate stories get eliminated, until, ideally, only one remains.

Think of it like a detective solving a case. One clue (the suspect was in town that day) leaves hundreds of possibilities open. Two clues (they were in town and had a motive) narrow it down. Five independent clues might point to exactly one person. The same dynamic plays out with product metrics.

Story A: a ride-sharing app.
- 8M MAU
- DAU/MAU of 70%
- An average of 4.5 rides per week per active user

This looks like a product with strong product-market fit: high frequency, solid scale, healthy engagement. AI would likely benchmark it against Uber's early growth and project a promising trajectory.

Story B: the same facts, plus one more.
- Average rider subsidy: $8 per ride

Now the original story crumbles. Users aren't choosing the product; they're choosing the discount. At 4.5 rides per week, the company is burning roughly $36 per user per week to maintain those engagement numbers. The DAU/MAU ratio isn't measuring product love but rather price sensitivity. When the subsidy shrinks, so will every metric on this dashboard.

One additional orthogonal fact completely changed the story. This is why context completeness matters more than the sophistication of the question you ask. A brilliant question with sparse context will produce a mediocre answer. A simple question with rich, orthogonal context will produce a sharp one.

How to Read AI's Output as a Diagnostic Tool

There's an important corollary to all of this: the quality of AI's output tells you something about the quality of your input. If AI gives you a response that feels generic, confident, and unsurprising, that's usually a signal. It's not that the AI isn't doing a good job; it likely didn't have enough context to do anything other than default to the most common pattern. Generic output is a symptom of ambiguous input. When you see this happening, don't try to fix it by asking a more clever follow-up question. Instead, go back and examine what context is missing.
Ask yourself:
- Does the AI know the scale of what I'm describing?
- Does it know the category or domain?
- Does it know about external factors (incentives, constraints, competitive dynamics)?
- Have I given it information that distinguishes my situation from the typical case?

If the answer to any of these is no, that's where the gap is. Conversely, when AI produces an insight that feels genuinely specific and non-obvious, it usually means you've provided enough orthogonal context for the model to converge on a single story. That's the signal that the system is working well.

Practical Guidelines for Working with AI

If you want AI to produce high-quality analysis, focus less on crafting the perfect prompt and more on assembling the right context. Here's how:

1. Provide multiple independent metrics. Don't hand the model a single signal and expect it to work backwards to a full picture. Combine data points that cover different dimensions:
- Scale: MAU, revenue, headcount
- Engagement: DAU/MAU, session length, actions per session
- Retention: D1, D7, D30 cohort retention
- Economics: unit economics, LTV/CAC, gross margin
Each category tells the model something the others don't.

2. Always specify the product category and use case. This is one of the highest-leverage pieces of context you can provide, because it sets the baseline for what "good" looks like. Ten seconds of daily usage in a payments app is excellent. Ten seconds in a social network is a disaster. Ten seconds in a meditation app is confusing. The exact same number means completely different things depending on what the product is supposed to do. If you don't specify the category, the model will guess. And it will usually guess "generic consumer tech product," which may be completely wrong for your situation.

3. Surface incentives, subsidies, and external drivers. Engagement metrics are easy to distort. Common drivers that change interpretation include:
- Promotional offers and sign-up bonuses
- Referral rewards
- Forced usage
- Advertising spend driving installs
- Seasonal effects
If any of these factors exist, the model needs to know. Otherwise, it will interpret artificially inflated metrics as organic signals and build its analysis on a false foundation.

4. Name what makes your situation unusual. AI defaults to the typical case. If your situation is atypical in any important way, you need to say so explicitly. This might include:
- Operating in a regulated industry
- Serving a niche market
- Having an unusual business model
- Facing a specific competitive threat
- Being at an unusual stage of growth
The model can reason well about unusual situations, but only if it knows they're unusual.

5. Reduce ambiguity before asking for analysis. Before asking the AI to draw conclusions, check whether you've given it enough information to rule out alternative interpretations. The goal: give AI enough independent facts that only one story makes sense. That's when analysis becomes sharp.

In Summary

Most people try to get better output from AI by writing better prompts. They tweak the phrasing, add instructions, ask the model to "think step by step." These things help at the margins, but they're optimizing the wrong variable. The far higher-leverage move is assembling better context before you ever type the question. Give the model enough independent facts (scale, category, engagement depth, external drivers) that only one story makes sense. When you do that, you don't need a clever prompt. The analysis sharpens itself.
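As a concrete illustration of these guidelines, here is a minimal Python sketch of an orthogonal context payload assembled before the question is asked. The field names and values are hypothetical, echoing the running example:

    # Hypothetical example: each field covers a different, non-overlapping
    # dimension, so together they pin the model to a single coherent story.
    context = {
        "scale": "MAU = 5,000,000",
        "engagement_frequency": "DAU/MAU = 80%",
        "engagement_depth": "average session = 30 seconds",
        "category": "peer-to-peer payments app",
        "external_drivers": "no sign-up bonuses or referral promos running",
        "unusual_factors": "regulated market; growth is fully organic",
    }
    question = "How healthy is this product's engagement?"

    prompt = "Context:\n" + "\n".join(
        f"- {dimension}: {fact}" for dimension, fact in context.items()
    ) + f"\n\nQuestion: {question}"

Given that payload, only one interpretation survives: a payments app at scale with healthy, high-frequency, short-session usage. A sparse version of the same prompt would leave dozens of stories open.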
The next time AI gives you a generic answer, don't ask a better question. Ask yourself what you forgot to tell it.

Subscribe to Opinionated Intelligence to get new essays on the future of analysis, decision-making + AI.

Why AI Analysis Gives You Generic Answers

opinionatedintelligence@substack.com · 3/18/2026
View this post on the web at https://opinionatedintelligence.substack.com/cp/190639594

What Excellent Growth Teams See That Others Miss

opinionatedintelligence@substack.com · 3/11/2026
View this post on the web at https://opinionatedintelligence.substack.com/p/the-data-job-isnt-dying-because-the

Every few years, the same fear resurfaces: "This time, the tooling is so good that the role itself disappears." We heard it 15 years ago when dashboards exploded onto the scene. "Oh no! With people self-serving their own data, who needs data practitioners?" Actually, the opposite happened. The appetite for data just grew. Data teams doubled and tripled in size.

Subscribe to Opinionated Intelligence for the latest in the future of data + AI

Now the same narrative rears its head again: AI can write SQL! AI can generate dashboards! AI can produce explanations that sound confident and coherent! From the outside, it looks like that's the entire job. But this conclusion is shortsighted.

Code can be a black box. Data cannot.

In software, correctness is observable. If the login succeeds, the payment gets processed, the page renders, an entire suite of tests passes, and a bunch of white-hat hackers can't get in, it's all good. You don't really need to know the exact details of how the code was written.

Data analysis is an entirely different beast. The SQL may run without error. The dashboard may load a pretty chart. An explanation may read beautifully. And yet it can be wrong. There is no way to guarantee, just by looking at the results, how trustworthy the conclusion is. One must validate the steps of the work itself. This is how trust is earned in data.

The bottleneck has moved

Yes, AI dramatically lowers the cost of getting data and producing analysis. But more data and more analysis do not automatically mean faster decisions. The business of analysis has always had two parts: 1) generating outputs, and 2) deciding which outputs deserve belief.

For years, analytics teams were constrained by the first. "Who can pull together an analysis on the health of the suburban teens segment?" "Well, Jared's queue is 16 requests long, so it's going to be about three weeks." Today, J.ai.red can handle thousands of such requests within a day. But as answers become cheap, judgment becomes the bottleneck.

In most organizations today, even without AI, teams already struggle with two dashboards showing different retention numbers, or a board conclusion that doesn't match the growth model, or an experiment result that contradicts a prior narrative. Now imagine multiplying the volume of analysis by 10× or 100×. Poor Jared is now getting dozens of requests to the tune of: "Hey, does this look right?"

Good judgment is not easy to come by, and it asks meaningfully harder questions:
- Are the underlying assumptions valid?
- Is the data lineage stable?
- Is the signal statistically meaningful or just noise?
- Is this explanation consistent with our broader understanding of the business?
These questions ask for accountability rather than mere execution.

The new data role

Within data, the salient question is no longer "Who can get the answer fastest?" It is "Who can decide what is true?" This role requires:
- Knowing which metrics are canonical and why.
- Understanding which tables are authoritative.
- Recognizing when an output violates prior institutional knowledge.
- Detecting when a result is technically correct but strategically irrelevant.
- Knowing who needs to act on which information.

Call this role an arbiter, a steward, a tastemaker. Or, my personal favorite: a data curator. The rest of the org will know this group as "the data people we trust" and expect them to ensure answers hold up under scrutiny.
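What might that curation look like mechanically? Here is a minimal Python sketch of a curator-maintained registry of canonical definitions that AI-generated analyses could be checked against. The structure, table names, and definitions are all hypothetical, purely to make the idea concrete:

    # Hypothetical sketch: canonical definitions owned by the curator,
    # used to flag analyses that drift away from them.
    CANONICAL_METRICS = {
        "retention_d30": {
            "definition": "users active on day 30 / users in signup cohort",
            "authoritative_table": "analytics.cohort_retention",
        },
        "active_user": {
            "definition": "session of 10+ seconds in the trailing 7 days",
            "authoritative_table": "analytics.daily_active_users",
        },
    }

    def audit_analysis(metrics_used: dict) -> list[str]:
        """Return warnings for metrics that bypass canonical definitions."""
        warnings = []
        for name, source_table in metrics_used.items():
            canon = CANONICAL_METRICS.get(name)
            if canon is None:
                warnings.append(f"'{name}' has no canonical definition on file")
            elif source_table != canon["authoritative_table"]:
                warnings.append(
                    f"'{name}' was computed from {source_table}, not the "
                    f"authoritative {canon['authoritative_table']}"
                )
        return warnings

For example, audit_analysis({"retention_d30": "scratch.tmp_retention"}) would flag the non-authoritative source, which is exactly the kind of check a curator runs before an answer reaches an executive.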
As analysis volume increases, we should expect greater volatility in the quality of answers. Without a trusted layer of curation, organizations will find themselves mired in even more noise and less signal, leading to decision paralysis or, even worse, uninformed misalignment.

An example

Let's say an executive asks a simple question: "Why did retention drop last week?" Today's AI can produce five plausible explanations:
- A cohort mix shift
- A recent feature launch
- Competitive market pressures
- A seasonality artifact
- A marketing deluge

Each explanation includes supporting charts. Each sounds reasonable. But can we trust that the AI is aware that…
- a logging schema changed two weeks ago?
- the definition of "active user" was modified last quarter?
- a large enterprise customer churned and is distorting aggregates?
- a marketing campaign temporarily shifted acquisition mix?
- a prior experiment created a lagged retention artifact?

A strong data curator sees immediately that one explanation is outright incorrect, three are technically true but misleading, and only one meaningfully changes strategy. They also know how to update the system with richer semantic definitions, crisper documentation, and tighter canonical dashboards, so that the next AI-generated answer improves.

In the era of AI, jobs move up the stack

If there's one thing you take away from this, let it be this: the data function is not disappearing. The data job is moving up the stack, away from pure execution and toward interpretation, curation, and institutional memory. The role becomes less about "Can I answer this?" and more about "What are the important questions for our organization to ask, and how can I curate a system that delivers fast, high-quality answers to those questions?"

In environments where decisions carry real cost, organizations will always prefer accountable interpretation over unowned output. The future data leader is not the fastest producer of analysis. It is the person whose judgment the organization is willing to stand behind. When answers are abundant, trust becomes like a precious gem: increasingly rare and all the more valued.

Think someone should read this and talk about it? Share it below.

The Data Job isn't Dying Because the Trust Problem is Exploding

opinionatedintelligence@substack.com · 2/17/2026
Thank you for subscribing to Opinionated Intelligence. You'll start receiving new posts right here in your inbox. You can also log into the website to read the full archives or access the blog from the Substack app.
~ Julie and Chandra, from sundial.so

© 2025 Julie Zhuo, 548 Market Street PMB 72296, San Francisco, CA 94104

Thanks for signing up!

opinionatedintelligence@substack.com · 12/18/2025