
View this post on the web at https://demandloops.substack.com/p/your-ai-powered-marketing-team-is

I can't lie, this week has just been so heavy for me. It feels like it's all compounding in our little B2B bubble right now, so I figured I might as well write it out.

A friend of mine got laid off two weeks ago. Her boss told her they were replacing her with Claude. Not a "we're restructuring," not a "your role is being eliminated." Legit: we're replacing you with an AI tool. She's a senior demand gen manager with eight years of experience. She ran their ABM program, built it from scratch, and managed a $200K/quarter paid media budget. And her company decided an untrained robot could do her job, today.

I'm embedded in multiple B2B SaaS companies right now, and I can feel this wave building. More CEOs are looking at AI output demos and asking the same question: "If AI can do all that, why do we need such a big marketing team? Let's trim it."

I use AI more than almost anyone I know (that doesn't mean I always use it well, but I spend the majority of my days building, testing, iterating). And I think most companies are about to get this very, very wrong. This is my list of warnings, or at least considerations.

👋 Hi, I'm Kaylee Edmondson [ https://substack.com/redirect/fb66b37c-f196-48c4-ac0b-41baa2bee3ed?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]. Looped In lands in your inbox every Sunday with one goal: to give you a sharper way to think about demand gen and growth in B2B SaaS. 2k+ marketers are already reading it. If you're not subscribed yet, fix that below.

The "Full-Stack Marketer" Mirage

I keep hearing this phrase in conversations: "We need full-stack marketers." AI handles the execution; you just need a handful of strategic generalists who can prompt their way through any channel or function. The full-stack marketer everyone imagines rarely exists, and certainly doesn't exist at the salary they want to pay.
Someone who can genuinely operate across paid media, ABM, marketing ops, content strategy, product marketing, brand, field, analytics, and campaign execution is a senior director with 10+ years of experience at minimum, and they're not taking your $120K IC role. The generalist who can "do it all with AI" is a bet that AI closes the depth and context gap. And right now, it just doesn't.

The Sea of Sameness

Here's what I think happens when every company compresses its marketing team and leans on AI for output: everything starts sounding the same. Because it is the same. The same models, trained on the same data, prompted by people following the same playbooks, producing content that reads like it all came from one room.

I already see it in my feed. The LinkedIn posts all read the same. The blog structures are interchangeable. The ad copy uses the same hooks. The AI-powered marketing team that leadership is so excited about is going to produce the exact same output as every other AI-powered marketing team. And when everyone's content sounds identical, none of it will work.

The whole point of marketing is to stand out. AI, by default, converges to the mean. And at some point I'm starting to worry that it forgets how to learn. Or worse, that we forget how to learn.

This is the part that should worry CEOs the most, because it's the hardest problem to see from the top. The output will look professional. It's grammatically clean. It hits the right keywords. But it has no edge. No POV. Nothing that makes a buyer stop scrolling and think, "this company gets it." We've never been able to A/B test our way to that. And I really believe we won't be able to prompt our way to it either.

Taste Is the New Moat

There's a word that keeps coming up in every AI conversation I'm in right now: taste. When production is basically free, the ability to produce stops being valuable. What becomes valuable is knowing what's good and what to cut.
Knowing when something technically works but feels off. Knowing that your competitor's new positioning is weak even though it checks every messaging framework box. That's taste. And taste lives in experience, not in models (at least not yet).

The companies that compress their teams down to a handful of AI-prompters are going to produce more content than they ever have. They're also going to produce the most forgettable content they've ever published. Because nobody on the team has the experience or the authority to say "this is mid, kill it," or "this is close but the angle is wrong," or "the market is tired of this framing, try something nobody else is doing."

Taste is the editorial layer that separates a brand with a point of view from one that's just adding to all the noise we're facing. I don't think you can hire for it at the salary ranges I'm seeing (stacked with all the other requirements), and I'm pretty confident we're not going to automate it anytime soon.

Creation Without Distribution

There's another gap that compression makes worse. Very few marketing teams have figured out both creation and distribution. Most are decent at one and terrible at the other.

AI helps a lot on the creation side. I've seen it. I use it every day. We can produce more content, more ad variations, more email sequences, more landing pages in a fraction of the time. But distribution still requires human judgment, relationships, and a deep understanding of your buyer's behavior. Getting the right content in front of the right people at the right time through the right channels has always been the harder half, and I don't see AI solving that part yet.

Speed of output is different from speed of judgment. When the team gets compressed, the people left are trying to do both. And I think they're going to default to the side AI makes easier: creation.
Which means you end up with a team producing a mountain of content that nobody sees, because distribution strategy got deprioritized the moment headcount shrank. More content, and less pipeline. That's the outcome I'd predict for most compressed teams within six months.

The Tribal Knowledge Problem

I see versions of this at every company I'm embedded in. The people who built the systems are the people who understand the systems. When the team gets compressed, you don't just lose headcount. You lose the institutional knowledge of why things are set up the way they are. And that knowledge lives in people's heads, not in documentation, because people rarely document this stuff. I've never walked into a client engagement where the MOPs infrastructure was well-documented. Not once. In 10+ years.

If this compression wave hits the way I think it will, companies are going to fire the people who built their systems and then spend the next six months paying contractors like me a premium to figure out what those people already knew.

What I Think Happens Next

I haven't watched this play out yet. Not fully. But the signals are everywhere, and here's my prediction for how it goes at most companies that compress too fast:

The reorg gets announced. "We're building a lean, AI-powered marketing team." The people who stay feel chosen. Cautious optimism.

Within a few weeks, the cracks show. Nobody can figure out why the lead routing broke. The scheduled reports stopped running and nobody knows which connector was triggering them. The team starts drowning in maintenance, and they haven't even gotten to the part where they're supposed to be building new things.

Pipeline starts slipping. Not because the team isn't working hard, but because campaigns that were running on autopilot actually needed someone monitoring them. The content being produced is higher volume but substantially lower quality, and it's blending into the same AI-generated sea as everyone else's.
Leadership brings in a contractor to "help stabilize things." The contractor spends the first three weeks doing discovery on what's broken. This is effectively paying a premium for someone to rebuild context that walked out the door.

To Every CEO Considering This…

Document everything before touching the org chart. Do it while the people who built the systems are still around to verify the documentation is right.

Audit what the team spends their time on, automate the repetitive work, and measure real time savings.

Compress through attrition, not through RIFs. When someone leaves, let the team try to absorb the work with AI. If it works, that's true efficiency. If it doesn't, you've learned something about what that role drives for your business that AI can't (yet).

And invest in taste. If you're going to run a smaller team, those people need to be true unicorns. Likely not a junior hire, or a $120K generalist, but someone with enough experience to know what good looks like and enough authority to kill what doesn't meet the bar.

The companies that figure this out will end up with smaller, sharper teams that use AI as a multiplier. I have no doubt about that. The ones that just cut headcount and hope AI fills the gap are going to spend the next year wondering why their pipeline is flat (or declining) and their content sounds so mundane.

See ya next week,
Kaylee ✌

P.S.
Next week I promise to be back on the AI adoption train, but this week's entry needed to be a diary of the thoughts shuffling around in my brain 😅

Your AI-Powered Marketing Team Is Going to Sound Like Everyone Else's

demandloops@substack.com · 4/27/2026
View this post on the web at https://demandloops.substack.com/p/my-most-used-claude-skill

The patterns worth automating only become visible after a few weeks of doing the work. But by then, you're too deep in the weeds to notice them. The skill I built to solve this runs every Monday morning, reads the last seven days of my activity, and hands me a ranked list of the skills I should build next.

…yep. It's a skill for skills. Whatever works, right?

A skill only pays off if you run it 5+ times a month. The build step is the easy part. Noticing what to build a skill around is harder (at least it was for me), and that's where most operators stall. Memory is terrible at it. Your tools remember better than you do.

"If you can't describe what you are doing as a process, you don't know what you're doing." (W. Edwards Deming)

👋 Hi, I'm Kaylee Edmondson [ https://substack.com/redirect/18b86bfc-0df3-47f2-9ce0-89cc47b88b8a?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]. Looped In lands in your inbox every Sunday with one goal: to give you a sharper way to think about demand gen and growth in B2B SaaS. 2k+ marketers are already reading it. If you're not subscribed yet, fix that below.

A One-Time Setup

Before the weekly audit ever runs, the skill onboards you with a 2-minute interview. Five questions, asked one at a time:

1. What's your role, and which clients are you working on?
2. Which tools do you use most? (Slack, Gmail, Monday.com, Granola, Google Calendar, HubSpot, Google Drive, LinkedIn Ads, Google Ads, Salesforce)
3. What feels like it takes the most time in your week?
4. Do you already have any skills, templates, or automations set up?
5. What day of the week do you want the audit to run?

My answers got saved to a markdown file in my workspace called skill-audit-profile.md. Every future weekly run reads that file first. Self-reported time sinks get weighted heavier in the ranking. Existing automations get excluded from recommendations.
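For a concrete picture, here's roughly what a profile file like this could look like. The section names and contents are my illustration, not the skill's actual file format:

```markdown
# skill-audit-profile.md (illustrative sketch, not the actual format)

## Role & Clients
Demand gen consultant; clients: [client A], [client B]

## Primary Tools
Granola, Gmail, Slack, Monday.com, Google Calendar, Google Drive

## Self-Reported Time Sinks
ABM kickoff prep, weekly client status updates

## Existing Skills / Automations
(none yet; anything listed here is excluded from future recommendations)

## Audit Day
Monday
```

The point is that the file is plain markdown the audit can re-read on every run, so your answers only have to be given once.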
Then the skill schedules itself using a cron expression to fire at 9:00 AM on whatever day I picked (for Monday at 9:00 AM, that's 0 9 * * 1). I picked Monday because I want the report waiting for me when I open my laptop at the top of a fresh week.

The Weekly Skill Audit: 3 Phases

Every Monday at 9:00 AM, the skill runs automatically. Three phases. I don't have to prompt it. I don't have to remember. By the time I'm pouring coffee, the report is sitting in my workspace. Phase 1 is the scan. Phase 2 is the scoring. Phase 3 is the report.

Phase 1: Scan

The audit reads my profile first so it knows my role, my clients, my primary tools, and my self-reported time sinks. Then it scans the last 7 days of activity across whichever tools I marked as primary. Each tool has its own scan behavior:

- Granola: queries recent meetings, scans titles and transcripts for patterns
- Gmail: searches sent emails (in:sent newer_than:7d) for templated language and recurring email types
- Slack: searches recent messages for repeated phrases, status updates, recurring explanations
- Monday.com: checks recent board activity and item creation patterns
- Google Calendar: reviews the week's events for recurring meeting types
- HubSpot: checks recent CRM activity for repeated workflows
- Google Drive: checks recently created or modified files for naming patterns

If a tool isn't connected, the audit skips it and notes the gap in the final report.

Phase 2: Score

The audit looks for six specific pattern types.
These are the categories baked into the skill:

1. Repeated creation (same type of deliverable made regularly)
2. Multi-step workflows (predictable sequences across tools)
3. Recurring communication (templated emails or messages)
4. Data gathering rituals (pulling info from multiple places on schedule)
5. Context switching overhead (bouncing between tools predictably)
6. Knowledge transfer moments (explaining the same thing to different people)

Every pattern that surfaces gets scored on five dimensions:

- Frequency: how many times it showed up this week
- Time per occurrence: estimated minutes per instance
- Complexity: how complicated the workflow is
- Automatable portion: what fraction a skill could realistically handle
- Estimated weekly time saved: the ranking number

The patterns get ranked by estimated weekly time saved. The audit also cross-references every candidate against my profile. Self-reported pain points get weighted heavier. Anything I already have a skill for gets dropped from the list before it even hits the report.

The scoring works as a filter. It removes the candidates that would burn a weekend for low payoff. The patterns that survive the rank order are the ones worth a Monday of building work. Most weeks only one or two patterns make the cut. That's correct.

Phase 3: Report

The audit writes a markdown file to my workspace called weekly-skill-audit-[YYYY-MM-DD].md. The structure is fixed:

```
# Weekly Skill Audit — [date]

**Tools scanned:** [list]
**Activity window:** Last 7 days

## New Skill Opportunities This Week
(Ranked by estimated weekly time saved. Only NEW patterns not flagged in previous reports.)

### 1. [Skill name] — [est. weekly time saved]
**Pattern:** [what was observed]
**Evidence:** [paraphrased examples]
**What the skill would do:** [2-3 sentences]
**Complexity to build:** [Low / Medium / High]

## Recurring Patterns (Still Present)
(Flagged before, still showing up. Brief status update.)

## Gaps
(Tools not connected, insufficient data, light weeks.)
```
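The ranking logic reduces to simple arithmetic. Here's a minimal sketch, assuming "estimated weekly time saved" is frequency times minutes per occurrence times automatable fraction, with a boost for self-reported pain points. The field names, the 1.5x boost, and the formula itself are my illustration of the idea, not the skill's actual internals:

```python
# Rank candidate skill patterns by estimated weekly time saved.
# Field names, formula, and the pain-point boost are illustrative
# assumptions, not the actual skill's scoring.

def estimated_weekly_savings(pattern, pain_points):
    minutes = (pattern["frequency"]
               * pattern["minutes_per_occurrence"]
               * pattern["automatable_fraction"])
    # Self-reported pain points get weighted heavier in the ranking.
    if pattern["category"] in pain_points:
        minutes *= 1.5  # arbitrary boost, for illustration only
    return minutes

def rank_patterns(patterns, pain_points, existing_skills):
    # Anything already covered by an existing skill is dropped
    # before it ever hits the report.
    candidates = [p for p in patterns if p["name"] not in existing_skills]
    return sorted(candidates,
                  key=lambda p: estimated_weekly_savings(p, pain_points),
                  reverse=True)

patterns = [
    {"name": "abm-strategy", "category": "knowledge transfer",
     "frequency": 3, "minutes_per_occurrence": 60, "automatable_fraction": 1.0},
    {"name": "status-update", "category": "recurring communication",
     "frequency": 5, "minutes_per_occurrence": 10, "automatable_fraction": 0.8},
]
ranked = rank_patterns(patterns, pain_points={"knowledge transfer"},
                       existing_skills=set())
print(ranked[0]["name"])  # → abm-strategy
```

The useful property of scoring this way is that it's a filter as much as a ranker: low-frequency, low-savings patterns fall to the bottom and never get built.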
After saving the report, the skill sends me a short summary message in conversation. Three or four sentences, no more. Top one or two new findings only. If I want the details I open the file.

What the report does not include: a drafted SKILL.md file, trigger phrases, worked examples, or a ready-to-ship skill. Building the skills it surfaces is a completely separate job I do later in the week, with a different tool, only if I agree with the audit's ranking.

I Ran It on Myself

A normal Monday. The week behind it: three ABM kickoff prep sessions, a stack of ad reviews, twelve Granola meetings, and one Slack thread where I'd spent half an hour walking a new team member through tier logic. The audit covered Granola, Gmail, Slack, Monday.com, Google Calendar, and Google Drive.

The top opportunity from the report:

```
## New Skill Opportunities This Week

### 1. abm-strategy — est. 180 min/week saved
**Pattern:** Knowledge transfer moments + multi-step workflow + recurring communication
**Evidence:** 3 ABM kickoff prep sessions this week each working through the same ~8 questions (tier structure, budget allocation, BDR trigger thresholds, ad format mix by stage, intent signal handoff). One Slack thread explaining the same budget math to a new team member. Two Granola calls where the first 20 minutes were spent re-covering tier logic.
**What the skill would do:** Intake the 8 standard ABM questions, return a draft tier structure, budget allocation per tier, BDR trigger rules, ad format recommendations by stage, and an ads-to-outbound signal framework. One shot, based on my existing ABM playbook.
**Complexity to build:** Medium
```

The summary message the skill sent me after the report saved: "1 new opportunity worth ~3 hours/week: ABM strategy intake (Medium complexity, est. 180 min/week). Full report in workspace."

The audit's job stops at description. It captures the opportunity in 2-3 sentences, ranks for complexity, and shows me the evidence.
I built abm-strategy the following week. Took me about an hour, in part because the audit had already done the work of articulating what the skill needed to do. The audit has since flagged four more adjacent patterns (quarterly ABM performance review, account list QA, tier migration logic, ABM-to-outbound feedback loop) that I'd never have connected to each other without seeing them stacked in one report. Each one earned its own line item in subsequent weekly audits.

What does this mean? (but hummed to the tune of 😏) [ https://substack.com/redirect/b86d49ec-002d-45af-a347-30a845d8fbad?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]

1. Discovery and building are different jobs. The audit describes opportunities. It does not build skills. That separation is deliberate. When you mash discovery and building together, you prioritize whatever task you feel like automating in the moment, which is almost never the highest-leverage one. Forcing a beat between "this is worth building" and "I'm building it" gives you the chance to ask whether the audit's #1 candidate is actually the one you should ship next. Sometimes I disagree with the rank. It can't see into the future, after all.

2. Your tools remember better than you do. Your sent folder, your calendar, your call transcripts, and your Slack history are an honest record of your week. My memory is almost always a vibe check. The audit reads the actual artifacts, which is why it surfaces opportunities your brain would have missed.

3. The profile is the cheat code. The one-time interview creates skill-audit-profile.md. Every future weekly run reads that file first. Self-reported time sinks get weighted heavier in the ranking. Existing automations get excluded from recommendations. Without the profile the audit still works, but with it, the audit picks skills you'll build because they touch work you already admitted is painful. Two minutes of interview, six months of better recommendations. Worth it.
The objection I hear most often: "You need to be a developer to do this." You don't. SKILL.md is a markdown file with frontmatter. The audit's scheduled task is set up for you by the skill itself. If you can answer five questions in an onboarding interview and pick a day of the week, you can run this. Writing SKILL.md is straightforward.

If you want this in your own setup, grab the /weekly-skill-audit SKILL.md template [ https://substack.com/redirect/ced80673-0ef7-4e7e-a72b-89760b124c72?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]. Run the interview, pick your day, and let it scan your weeks. And reply to this email with the skill you're building next. I'll feature the best replies in a future issue.

See ya next week,
Kaylee ✌

My most-used Claude Skill

demandloops@substack.com · 4/20/2026
View this post on the web at https://demandloops.substack.com/p/marketing-built-the-dashboard-but

DemandLoops is embedded in six B2B SaaS companies right now. Different industries, different stages, different tech stacks. And we keep running into the same problem at every single one of them. Marketing has signals. Lots of them, really. Intent data flowing, website visitors getting deanonymized, engagement being tracked across channels. The signal supply chain is…supplied. And sales isn't doing as much with it as they probably could.

TL;DR: Marketing teams have gotten really good at sourcing signals in the last year or so, but not yet great at prioritizing and orchestrating them.

👋 Hi, it's Kaylee Edmondson [ https://substack.com/redirect/5a18fbdf-cf72-4ab5-a889-eb447fe00067?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] and welcome to Looped In, my newsletter exploring demand gen and growth frameworks in B2B SaaS. Subscribe to join 2k+ readers who get Looped In delivered to their inbox every Sunday.

The Orchestration Layer

Over the last few years, B2B marketing teams got really good at signal sourcing. Intent data vendors matured. Deanonymization tools at the contact level became available. Engagement tracking got more granular. Product usage signals started flowing into CRMs. Praise be. Most marketing teams with any budget have some version of this infrastructure in place now.

But sourcing signals was only the first job. The second job was building the orchestration layer: deciding which signals matter, how they combine, when they should trigger action, and routing them to the right motion with a defined owner and a timeline. Most are still working on building that part. Marketing teams shipped the signal infrastructure, showed the dashboard to sales leadership, and mentally checked the box. "We gave them the data." Job done. Except raw signals without orchestration get ignored by our sales friends 100% of the time.
Separate Your Signals First

Before you build any orchestration, you need to sort your signals into three categories. You've probably seen some version of this framework, but stick with me, because how you separate these determines whether your tiering works downstream.

Fit signals are static-ish. Industry, employee count, ARR range, tech stack. They tell you whether an account even belongs in your universe, and they don't change week to week. But they're worth revisiting quarterly or so. (Unless tech stack investment is a major signal for you; then prioritize it accordingly.)

Relevance signals are where it gets interesting, because these are time-decaying. A new VP of Demand Gen got hired six weeks ago. The company just posted a role for an ABM Manager. They raised a Series B last quarter. Each of these tells you something about why now, but they all have a shelf life. A hiring signal from three months ago is very different from one that posted last Tuesday.

Engagement signals are the clearest in-market indicators: pricing page visits, repeat sessions on your site, ad clicks, community activity. These are the signals that say "this account is aware of us and actively doing something about it."

A fit signal alone tells you an account could be a customer. A relevance signal alone tells you something changed at the company. An engagement signal alone tells you someone clicked on something. None of those individually are worth your time. But a fit signal + a relevance signal + an engagement signal, all firing within a defined window? That's a compound signal. That account is likely in an active buying window. The combination is the insight.

Tiering, Not Scoring

Once your signals are separated, the next step is tiering. And I specifically mean tiering, not scoring. Tiering is simpler. Every account in your universe gets placed into a tier based on signal density, and each tier maps to a specific marketing motion.
Tier A: Fit confirmed + two or more relevance signals + at least one engagement signal, all within your defined window (I typically use 60 days, but this depends on your sales cycle). These accounts get your highest-touch motion. Immediate action.

Tier B: Fit confirmed + one relevance signal or one engagement signal. These accounts get an ABM motion. Progressive, multi-touch, account-specific. You're building toward Tier A.

Tier C: Fit confirmed, but no active signals yet. Always-on demand gen. You're keeping your brand in front of them so that when signals do fire, they already know who you are.

Tier D: Doesn't meet fit criteria. Stop spending money and time here.

The signals determine the tier; the tier determines what happens next, how fast, and who owns it.

On GTM alpha. Clay coined the term "go-to-market alpha" to describe the unique tactical advantages in your GTM strategy that your competitors haven't found yet, borrowed from the finance concept of alpha as outperformance over a benchmark. Your GTM alpha lives in your specific signal combinations, the fit + relevance + engagement patterns that predict pipeline for your business. You can't copy someone else's signal architecture and expect it to work. Your ICP is different. Your sales cycle is different. Your data is different.

The Play Menu

Ok, so an account hits Tier A. Signals are converging. The system routes it to the BDR team. Now what?

At most companies, "now what" is a Slack notification that says something like "high intent account: Acme Corp." Maybe there's a link to the intent dashboard. Maybe there's a lead score attached. And then the BDR has to figure out what to do with it. What do they send? What angle do they take? How urgent is it? They're making these decisions from scratch, every time, for every account.

This is where marketing needs to finish the job. The handoff to sales shouldn't be a Slack notification. It should be a brief with three components: 1.
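The tier rules above are mechanical enough to sketch in code. A minimal illustration (the function name and inputs are mine; the counts are assumed to be signals observed within your defined window, e.g. the last 60 days):

```python
def assign_tier(fit_confirmed, relevance_signals, engagement_signals):
    """Place an account in a tier based on signal density.

    Signal counts are those observed within the defined window
    (e.g. the last 60 days). Rules mirror the tier definitions above.
    """
    if not fit_confirmed:
        return "D"  # doesn't meet fit criteria: stop spending here
    if relevance_signals >= 2 and engagement_signals >= 1:
        return "A"  # highest-touch motion, immediate action
    if relevance_signals >= 1 or engagement_signals >= 1:
        return "B"  # progressive ABM motion, building toward Tier A
    return "C"      # always-on demand gen: stay visible until signals fire

print(assign_tier(True, 2, 1))   # → A
print(assign_tier(True, 1, 0))   # → B
print(assign_tier(True, 0, 0))   # → C
print(assign_tier(False, 3, 2))  # → D
```

Notice there's no weighted score anywhere: each account falls into exactly one tier, and each tier maps to exactly one motion. That's the whole argument for tiering over scoring.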
Here's what they've done. The specific signals: "Their new VP of Demand Gen started eight weeks ago. They visited your pricing page and integrations page twice in the last ten days. And they just posted an open role for an ABM Manager." Context a BDR can use. They can reference the hiring context in their outreach. They can speak to the integrations the prospect was researching. The signals become the talk track.

2. Here's why it matters. This is the context layer that marketing is positioned to provide, because marketing generated most of these signals in the first place. What does it typically mean when a new demand gen leader is hired, the company is evaluating your integrations, and they're building out an ABM function simultaneously? It means they're standing up a demand gen engine from scratch. They're in build mode. They need help, and they need it now, while the new leader still has a mandate to make changes. That context gives the BDR confidence. They understand the "why" behind the outreach, and that shows up in how they write and how they talk.

3. Here's the play. Instead of leaving sales to decide what to do, marketing should be building a play menu: a pre-built set of outreach motions, each one mapped to a specific signal combination and persona. Think of it like a matrix. Signal combination on one axis. Persona on the other. The play in each cell.

Building This with AI

The reason this orchestration layer hasn't existed at most companies is that it was genuinely hard to build manually. Monitoring five to ten signal sources, cross-referencing them against your ICP, figuring out which combinations are firing on the same account within the same window, then generating a contextualized brief for sales? Nobody had time for that. That's changed. I'm using Claude (both Cowork and Code) to build this across my client portfolio right now. I'll share more on the skills and resources in the coming weeks.

Finding your GTM alpha in your pipeline data.
Take your last 12 months of closed-won deals and feed them into Claude with your signal data. Ask it to surface the patterns. Which combinations of fit + relevance + engagement were present in the accounts that actually closed? That output becomes your tier definitions. That becomes your GTM alpha. And it takes hours instead of a quarter-long analysis project.

Building the signal-to-brief pipeline. I have agents running that pull from multiple signal sources, cross-reference what's firing against my tier definitions for each client, and auto-generate briefs with all three components: what the account has done, why it matters, and which play from the menu to run. A rep opens their morning with the work already prioritized. The brief meets them where they are.

Generating the play menu itself. I feed Claude the signal taxonomy, the ICP definitions, and the tier structure, and it drafts play options mapped to each signal combination and persona. I edit these heavily. The plays need to sound human and reflect real sales conversations, not templates. But the scaffolding, the structure, the coverage across every tier and persona combination: that's what AI handles.

Quarterly recalibration. Your tier definitions and play menus can't be static. Every quarter, I run the same pipeline pattern-matching exercise on recent data to check whether the signal combinations that were predictive six months ago are still working. Signals shift. A relevance signal that used to be strong (like Bombora surges in a specific topic cluster) can get noisier as more companies game it. The system needs to stay alive.

Let's get into it

Sourcing signals was step one. Most of us got stuck there. The actual work, the part that moves pipeline, is everything that comes after: separating your signals, tiering your accounts, building a play menu that arms your reps with context instead of alerts, and using AI to make the whole thing run without bumping into resource constraints.
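At its core, the closed-won pattern-matching exercise is a frequency count over signal combinations. A rough sketch of the idea, assuming you've already exported closed-won accounts with their signal lists (the data shape and signal names here are hypothetical, and in practice I hand this to Claude rather than hand-rolling it):

```python
from collections import Counter
from itertools import combinations

def surface_signal_patterns(closed_won_accounts, combo_size=3):
    """Count which signal combinations co-occur most often across
    closed-won accounts. The most frequent combos are candidates
    for tier definitions (your 'GTM alpha')."""
    counts = Counter()
    for account in closed_won_accounts:
        signals = sorted(set(account["signals"]))
        for combo in combinations(signals, combo_size):
            counts[combo] += 1
    return counts.most_common()

# Hypothetical export: each closed-won deal with its observed signals.
deals = [
    {"account": "Acme", "signals": [
        "fit:icp", "relevance:new_vp", "engagement:pricing_page"]},
    {"account": "Globex", "signals": [
        "fit:icp", "relevance:new_vp", "engagement:pricing_page",
        "relevance:hiring"]},
    {"account": "Initech", "signals": [
        "fit:icp", "relevance:series_b", "engagement:ad_click"]},
]
top_combo, count = surface_signal_patterns(deals)[0]
print(top_combo, count)
```

The real version adds a control group (signal combos in lost or stalled deals) so you're finding combinations that predict closing, not just combinations that are common. That comparison is the part worth letting an LLM reason over.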
If your marketing team has built the signal infrastructure but pipeline isn't moving, look at the gap between detection and action. That's your orchestration layer. And in my opinion, it's marketing's job to build it.

See ya next week,
Kaylee ✌

Marketing built the dashboard. But nobody checks it.

demandloops@substack.com · 4/13/2026
View this post on the web at https://demandloops.substack.com/p/taking-the-week-off-but-the-archive

I'm writing this from my couch, still half-covered in sunscreen, recapping a week I fully needed. Spring break with my daughters. Set a global 'away' status on Slack, didn't check LinkedIn, only one small-ish client fire drill. Just sisterly disputes over trivial things, and the kind of touching grass that reminds you demand gen will, in fact, survive without you for eight days.

I didn't write anything new this week on purpose. So instead of forcing something, I pulled three of the most-read posts from the archive. If you've been around a while, maybe you missed one. If you're newer, these are a solid entry point to what this newsletter is actually about.

👋 Hi, I'm Kaylee Edmondson [ https://substack.com/redirect/795e9418-9f34-4a9a-a040-12483b2d1a58?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]. Looped In lands in your inbox every Sunday with one goal: to give you a sharper way to think about demand gen and growth in B2B SaaS. 2k+ marketers are already reading it. If you're not subscribed yet, fix that below.

The Top 3 from the archive:

1. What 12 Months of ABM Data Reveals About What Actually Works [ https://substack.com/redirect/f7f26dc0-5f49-49cd-add4-8be610f07dbe?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]
This one hit harder than I expected. Most ABM programs fail before they ever launch because teams are optimizing for the wrong things. I broke down what the data shows after a full year of running these programs across multiple clients. If you run any kind of account-based motion, start here.

2. The Great ABM Unbundling Is Here [ https://substack.com/redirect/c68859ea-bcd0-4a65-b8a2-42a96dafe0eb?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]
Your all-in-one ABM platform is expensive, underused, and designed for 2016.
This piece is about why the “one big platform” era is ending and what a modern, unbundled ABM stack looks like — with real budget allocations and tool recommendations.

3. ABM Isn’t Dead–It Just Got Smarter: The 2025 Modern ABM Playbook [ https://substack.com/redirect/eafd003c-20bc-4131-8ff5-f455cd89303d?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] I wrote this one after a client asked me point-blank: “How can ABM still work after all the 6sense hype?” Here’s exactly what I told him. A practical breakdown of what modern ABM looks like right now (or at least in 2025 when I wrote this…which does feel like 20 years ago now in AI-world).

I’ll be back next week with fresh content. I’ve got a couple of ideas I’m working through that I can’t wait to get out of my brain and into your inbox. See ya next week, Kaylee ✌

Taking the week off, but the archive is open 📖

demandloops@substack.com · 4/6/2026
View this post on the web at https://demandloops.substack.com/p/every-demand-gen-use-case-im-running I spent 4 hours last week training a client’s marketing team on Claude. Not the “here’s how to write a blog post with AI” kind of training. The kind where we built a shared marketing brain (very similar to the one I shared here last week), loaded it with ICP definitions, competitive battle cards, messaging guidelines, and editorial standards, and then showed the team how to use it to augment the parts of their job that are most repetitive and could be 80% automated with this new brain. By the end of the session, we had talked through additional potential use cases, and it felt like it was finally starting to click for a room full of people who had been using Claude to “help me rewrite this email.” I’ve been playing and building in Claude Code and Claude Cowork for a few months now. I keep finding new use cases that save me time, help me think differently, iterate on concepts faster. And like I shared on LinkedIn earlier this week, I feel like I’m spending every waking moment possible in this new stack, yet still feel more behind in my craft than I ever have. So I’ll say this: if you’re building, exploring, testing in a new tool this week, you’re right where you’re supposed to be. There’s so much hype, especially on LinkedIn these days, and these tools are shipping new models, functionality, and features faster than ever. That combination makes all of us feel like we’re falling behind. But as long as we keep building, and sharing what we’re learning, I think we’ll all turn out just fine. So, let me open up a bit about what I’ve been testing and learning. 👋 Hi, it’s Kaylee Edmondson and welcome to Looped In, my newsletter exploring demand gen and growth frameworks in B2B SaaS. Subscribe to join 2k+ readers who get Looped In delivered to their inbox every Sunday. 
Why Most Marketers Are Getting 10% of What Claude Can Do Most demand gen teams I talk to are using AI for two things: writing first drafts of copy, and summarizing meetings. That’s fine. But you’re leaving a lot on the table. The gap between “I use Claude” and “Claude runs half my workflows” really comes down to context. I built what I call a “marketing brain.” It’s a set of markdown files that load automatically at the start of every Claude session: who I am, who my clients are, how I write, how I run campaigns, what tools I use, my ICP definitions, my messaging house. Claude reads all of it before I type a single word. That means when I ask it to do something, it already knows my business. It’s not starting from zero every time. (I wrote about the marketing brain concept last week, so I won’t belabor it. If you missed it, go read that one first.) Here are the 11 use cases I’m running (or building/finessing) right now. Some of these save me 30 minutes a week. A couple of them eliminated entire workstreams. 1. Discovery Call Research Briefs Before every discovery call with a potential client, Claude pulls together a research brief. Company background, the prospect’s LinkedIn activity, any public talks or posts they’ve done, likely pain points based on role and company stage. Last week I had a call with a Head of Demand Gen at a customer experience platform. Claude surfaced that she’d spoken publicly about whether marketing attribution is broken, that she was actively hiring a Demand Gen Manager and ABM Manager (suggesting the engine is early-stage), and flagged that she likely was the budget holder. So I should position DemandLoops as complementary to her hiring plan, not competitive with it. 90 seconds. That used to be 20-30 minutes of LinkedIn stalking and Googling. 2. Weekly Data Pulls from HubSpot + Salesforce I’m embedded in a client right now where the HubSpot-to-Salesforce integration is... let’s just say it’s a project. 
Every week, Claude pulls data from HubSpot, runs VLOOKUPs against Salesforce records, cleans up naming conventions, deduplicates contacts, and flags anything that looks off. Not glamorous. But this used to eat 3-4 hours a week, and when it got skipped, we had no idea what to optimize for pipeline. 3. CRM Property Mapping Across 5 Systems Same client. They run HubSpot, Salesforce, Vitally, NetSuite, and PandaDoc. Five systems. Trying to figure out which property maps to what across all of them in a spreadsheet made me want to quit consulting. (I’m being dramatic. But only slightly.) I built a Claude assistant that takes the property lists from each system and creates a unified mapping doc. Markdown file for quick reference, structured spreadsheet for the full picture. Now when someone asks “where does this data live?” I can answer in seconds instead of opening five admin panels. This is one of those use cases where Claude is serving as a band-aid solution. Eventually these systems will all be cleaned up, or replaced entirely, synced to the data warehouse, and integrated appropriately. But for now, while we’re in the messy middle post-M&A (we’ve all been there), this Claude task is doing some heavy lifting. 4. Competitive Intel, Weekly I have a competitive intel workflow that runs every week. Claude pulls from competitor websites, checks their ad libraries on Meta and LinkedIn, and flags what changed: messaging shifts, new product positioning, campaign themes, creative formats they’re testing. The output is a structured report. What changed, what it probably means, whether we need to respond. Typically, quarterly competitive reviews are already stale by the time they ship. This approach keeps you within a week-ish of what competitors are doing. 5. Personalized ABM Ad Copy and Landing Pages For one client’s ABM program, we’re building hundreds of individualized ads and landing pages. And I mean individualized. Not “Hi {Company Name}” personalization. 
Actually different messaging by industry, company size, and persona. Claude generates the copy variations using our ICP definitions and messaging house as the foundation. We pipe account data through Clay for enrichment. The output is account-specific ad copy and landing page content. 6. Lead Scoring and Account Tiering I’m building what I’m calling an enterprise appetite scoring matrix for a client. Six components for now: company size signals, tech stack indicators, buying intent, engagement depth, organizational complexity, and budget authority signals. I’ll also add in their GTM Alpha. Claude will weigh each one and assign a tier. The part I find most useful is what I’ll call “synthetic attributes” for now. Data points that don’t actually exist in your CRM but can be inferred from combinations of other fields. For example: you might not have a “budget authority” field, but you can infer it from title seniority + company size + the presence of a procurement process. Claude is surprisingly good at this kind of inference when you give it a clear framework to work within. 7. Salesforce Flow Documentation If you’ve ever inherited a Salesforce instance with 40+ automation flows and zero documentation, you know this pain. Claude analyzes the flows, documents what each one does, flags redundancies, and identifies which ones are actually firing vs. sitting dormant. What would have been a two-week documentation project took about three hours. My output for the first run was far from perfect, but probably 60-70% there. 8. Campaign Consistency Checks Messaging drift is real. Especially when you have three or four people writing copy across email, ads, landing pages, and social. I built a workflow where Claude checks any new piece of copy against a client’s campaign strategy doc and messaging house before I launch it. Flags anything off-brand, off-message, or inconsistent with what they’ve already published. Takes about 10 seconds. 
Replaces what used to be a “can you review this” Slack thread that took a day to resolve. 9. Newsletter Topic Development I use Claude to help me develop newsletter topics, but probably not in the way you’d expect. I don’t ask it to “give me 10 newsletter ideas.” Instead, I have it pull from my meeting notes (via Granola), my Slack conversations, and current industry trends, then find the intersections. Where does my lived experience this week overlap with what the market is talking about? This newsletter is a good example. Claude surfaced that the intersection of “I just trained a client team on AI workflows” and “the industry is obsessed with AI in marketing but nobody’s sharing specific use cases” was a strong topic. The pattern-matching was collaborative. The writing is mine. 10. Automated Content Maintenance For clients with large content libraries, I’m building a series of Claude agents that pull existing site content, find what needs updating (outdated stats, broken cross-links, FAQ gaps), make the changes, and push them back for approval. Nobody wants to do this work. I’m finding it sits undone for months. But it compounds. Outdated stats kill credibility. Broken links hurt SEO. FAQ pages that don’t reflect the current product confuse prospects. Claude is great for this grunt work. 11. Meeting Prep and Follow-Up Every morning, Claude pulls my calendar, cross-references it with my meeting notes from prior conversations with the same people, and gives me a prep brief. After meetings, it processes the transcript and drafts follow-up emails, action items, and internal notes for my team. The follow-up emails are where the time savings really add up. Claude knows my voice, knows the client context, and knows what was discussed. The drafts need light editing, not full rewrites. I was spending 15-20 minutes per follow-up before. Now it’s 2-3 minutes of editing. 
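A quick aside on the “synthetic attributes” idea from use case 6: the inference can be approximated with plain rules before you ever hand it to Claude. This is an illustrative sketch only; the field names, title tokens, and thresholds are all hypothetical, not taken from any real CRM schema or from the workflow described above.

```python
# Illustrative sketch of a "synthetic attribute": inferring a
# budget-authority flag that doesn't exist in the CRM from fields
# that do. All field names and thresholds here are hypothetical.

SENIOR_TITLE_TOKENS = ("vp", "vice president", "head of", "director", "chief")

def infer_budget_authority(contact: dict) -> bool:
    """Infer likely budget authority from title seniority, company size,
    and whether a formal procurement process gates purchases."""
    title = contact.get("title", "").lower()
    is_senior = any(token in title for token in SENIOR_TITLE_TOKENS)
    employees = contact.get("company_employees", 0)
    has_procurement = contact.get("has_procurement_process", False)

    if not is_senior:
        return False
    # At smaller companies, a senior title alone usually implies authority.
    if employees < 200:
        return True
    # At larger companies, a procurement process means the title holder
    # influences budget but doesn't unilaterally control it.
    return not has_procurement
```

The hard-coded version is brittle on purpose: the point of giving Claude a framework like this is that it can apply the same inference across messy, free-text fields where a fixed rule would miss.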
How I’d Prioritize If You’re Starting From Scratch

If you’re looking at this list and wondering where to start, here’s the honest answer: start with the boring stuff. Data pulls. Research briefs. Competitive monitoring. Copy QA. The work you do every single week that requires knowing your business well but follows a predictable pattern. After that, move to the structured creative work. ABM copy, account scoring, content ideation. These still need your judgment. But Claude handles the 80% that’s pattern-matching, and you focus on the 20% that requires taste. Claude has very little natural taste. Last priority: the one-off projects like documentation. The through-line across everything I listed: the marketing brain is the multiplier. Every single use case works dramatically better because Claude already knows my clients, my ICPs, my voice, and my tech stack before I ask it to do anything. Without that context layer, you’re prompting from scratch every time.

What’s Next

I’m sure everything will change on us again tomorrow. And I’m also certain someone is going to reply to this and ask why I’m not using X, Y, Z hot new tool to do one of these workflows instead of Claude. For now, I’m just trying to familiarize myself with as much as possible while keeping my day job and still finding time to touch grass. There are surely other tools out there that could do these jobs better; if you have opinions here, please tell me, because I’d love to try them out. And if you’re a demand gen operator building workflows in Claude or any AI tool, I want to hear about it. Reply to this email and tell me what you’ve built. I’ll feature the best ones in a future issue. I’m taking next week off to spend spring break with my kiddos. Hope you all are finding time to recharge, too! And then I’ll catch you back here the week after next! 
Kaylee ✌

Every Demand Gen Use Case I’m Running in Claude Right Now

demandloops@substack.com · 3/30/2026
View this post on the web at https://demandloops.substack.com/p/turns-out-claude-needs-a-brain You can't open LinkedIn without being sold AI. The cold email in your inbox was probably written by AI. The newsletter about AI best practices was probably AI-assisted. It’s become all-consuming. My feed looks and feels like this lately. But what I’m observing inside marketing orgs right now – across clients, conversations, and peer communities – is a different story. Almost everyone is feeling behind. Almost no one can point to a concrete workflow where AI is producing better outcomes than what they had before. The mandate is real. The results mostly aren’t yet. DG surveyed 100 CMOs recently and found that 85% named AI adoption as their top priority for 2026, yet almost none felt far enough along. His read: “There’s a huge gap between perception and reality. What people are saying about AI on LinkedIn is dramatically different from what is actually happening inside a marketing org.” This matches exactly what I’m seeing. We’ve been here before. Just not quite like this. B2B marketing has a long history of hype cycles, and they all follow the same arc: a new category emerges with genuinely compelling use cases, early adopters get results, the trade press picks it up, LinkedIn turns the volume to eleven, the board starts asking questions, budgets get allocated, tools get purchased, rollouts happen — and then, about 12-18 months later, most teams are sitting on a piece of expensive tech debt and wondering what went wrong. I’ve watched it happen enough times to recognize the pattern by feel. HubSpot and the inbound marketing wave in the early 2010s. The promise was that you could replace interruption-based marketing with content that buyers would come to you for. Real for some companies, absolutely. For many others: a blog nobody read, an ebook that generated zero pipeline, and a CMS contract that outlasted three CMOs. Conversational marketing and the Drift era. 
The promise was that AI-powered chat would transform how B2B companies captured and qualified pipeline. Real for some companies. For many others: a chatbot icon on the website, a sales team that ignored the alerts, and a multi-year contract nobody wanted to renew. Dark social. Zero-click content. ABM platforms at $150K+ annually. Each one has had a hype cycle. We are marketers after all. 😏 Which brings us to AI. I want to be careful here, because I genuinely believe this one is different. Not different in kind (the hype cycle is running the same playbook), but different in scope and intensity. The investment levels are bigger. The board pressure is more universal. The category is broader, touching every function, every role, every workflow simultaneously. And the FOMO is more acute than anything I’ve ever seen, because unlike ABM or dark social, AI feels existential in a way that a marketing channel never did. So what’s the diagnosis, doc? Most companies aren’t buying AI tools because they’ve identified a specific problem and need to scale the solution. They’re just not. They’re doing it because they’re terrified of being the one who didn’t. When you buy to reduce FOMO, you buy the wrong thing, for the wrong reasons, before you’re ready to use it. This was true for non-AI tech stacks, and I believe it carries through to this era, too. Devin Reed put it cleanly in a recent newsletter: “You can’t scale a process you haven’t built. You can’t automate thinking you haven’t documented.” Generic prompt in. Generic content out.

Turns out this thing needs a brain

I’ve spent the last several weeks obsessing over Claude Cowork, and through much trial and error I’ve found the output is substantially better if you give Claude a “brain”. Claude is only as useful as what you tell it. Out of the box, it’s a generalist. 
But if you spend time upfront loading your context into Claude Cowork’s CLAUDE.md file, it becomes something closer to a second brain that knows your stack, your audience, your voice, and your opinions. This is the prompt structure I’ve used. The good news is you can and should make this entirely your own. You can also go back and build new phases over time once you get into a new project and realize Claude has no idea what you’re talking about. To start, there are XX phases in this initial prompt. It interviews you one phase at a time, builds a structured file after each phase, waits for your approval, then moves on. When it’s done, you have a complete marketing brain that makes Claude useful from the first message of every future session. And if you go implement this and have learnings, feedback, questions, etc., hit reply! I really read every one.

Phase Map

The Prompt

Copy everything and paste it into Claude Cowork as your first message.

👋 Hi, I’m Kaylee Edmondson [ https://substack.com/redirect/bb01253d-f61d-4d3e-9f92-256779d7e053?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]. Looped In lands in your inbox every Sunday with one goal: to give you a sharper way to think about demand gen and growth in B2B SaaS. 2k+ marketers are already reading it. If you’re not subscribed yet, fix that below.

Turns out Claude needs a brain

demandloops@substack.com · 3/22/2026
View this post on the web at https://demandloops.substack.com/p/your-icp-is-static-or-doesnt-exist A few weeks ago, I was deep into an SOW conversation with a prospective client. We’d done the capabilities overview, talked through scope, agreed on a starting point. Then their Head of Marketing came in and scratched ICP work from the SOW. She mentioned their ICPs were already defined, to which I countered: “I hear you that ICPs are defined, but are they also operationalized? Meaning, are those definitions actually wired into your systems, your scoring, your routing? They’re breathing, not static.” I’ve been thinking about it ever since. ICPs are almost always either nonexistent, far too broad, or living (and dying) in a spreadsheet. 👋 Hi, I’m Kaylee Edmondson [ https://substack.com/redirect/876dd250-7999-4fef-97dc-b28a3b83626f?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]. Looped In lands in your inbox every Sunday with one goal: to give you a sharper way to think about demand gen and growth in B2B SaaS. 2k+ marketers are already reading it. If you're not subscribed yet, fix that below. A few months back I wrote about rethinking how we segment ICPs [ https://substack.com/redirect/8cb9439a-3a4f-4dc3-ba80-b99d7a3330c3?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] – specifically why company size is often the wrong organizing principle. Today I want to go one level deeper: not just how you define your ICP, but whether your ICP definition actually does anything. And how to use AI to make the whole thing work the way it was supposed to. I’ve found most companies have an ICP. Almost none have operationalized it. And nearly all of them will keep running on that static definition – until something forces them not to. That “something” varies. Sometimes it’s a huge miss on pipeline one quarter. Sometimes it’s a churn problem that suddenly becomes impossible to ignore. 
Sometimes it’s a campaign that just...doesn’t work, and nobody can explain why. Sometimes it’s all three at once. ICP is often static, until it’s not. Why ICPs drift (and why neither reason is obvious) There are two distinct failure modes here, and they’re both common. The first: the ICP was wrong from the start, and nobody knew it. You built the definition off intuition, early wins, or a competitive benchmark, but not off rigorous analysis of your actual customers. Everything looked fine until churn started accumulating in accounts that fit the definition perfectly on paper. That’s not a sales problem. That’s a definition problem. The second: the ICP was right once, and drifted. The business shifted its go-to-market. The market changed. The product expanded into new use cases. The pricing moved upmarket. Any of those transitions will change who your real ICP is, but if nobody updates the definition, your systems keep targeting the old version of the customer while the business is trying to sell to a different one. Both versions of this problem have the same symptom: demand gen that feels like it should be working, but isn’t producing the results you’d expect. And the fix for both starts in the same place. Step zero: validate the definition before you build anything Before you wire your ICP into any system, you need to know whether the definition you’re working with is actually correct. I see teams skip this all the time. They operationalize a definition that was never right to begin with, and then wonder why the machine isn’t producing. Here’s the process I use when I come into a new engagement and suspect the ICP is off. 
Pull these lists from your CRM:

The “good” cohorts:
- Closed-won customers
- Longest-standing customers
- Highest-paying customers
- Highest cross-sell / upsell customers

The “bad” cohorts:
- Churned customers
- Shortest contract length customers
- Lowest-paying customers
- Customers with the highest support ticket volume and/or lowest NPS

Or anything else that’s specifically relevant to your business that you’d want included as part of the ICP analysis. The goal here is to build a side-by-side picture of your customer base. What do your best customers actually have in common? What do your worst ones have in common? And critically…what do those two groups not share? Enrich everything with your enrichment tool of choice before you analyze anything. The data quality problem is consistent across every brand I’ve worked with. Your CRM exports will have company names, maybe industry and size, and not much else. That’s not enough to do meaningful pattern analysis. Before you do anything analytical, run all eight cohort lists through Clay/ZoomInfo/Apollo/etc. to add firmographics, technographics, funding history, hiring signals, and whatever else is relevant for your business. Then load everything into Claude (or analyze it manually either way). This is where it’s been getting interesting for me, and to be transparent, this is a workflow I’m actively building and refining, not something I’ve run a hundred times. But the approach is sound, and I think it’s where a lot of demand gen teams are going to land over the next year. Once you have your enriched cohort exports, drop them into a Claude Cowork session. You’re not asking Claude to make up attributes or hallucinate patterns; just give it your first-party data and ask it to find what you’d miss doing this manually, or what would take you two days in Excel. Before you run any prompts, set up the project with three context files. The prompts below will work without them. 
But they’ll work significantly better with them — especially when you’re running this analysis across multiple clients and need consistent, immediately usable output every time. icp-analysis-framework.md [ https://substack.com/redirect/2c0f4e4c-a841-43a6-a0ef-c6a949a4b326?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] — a reusable file that tells Claude how to think about ICP analysis: what dimensions to look at, how to weight signals, when to flag something as uncertain versus confident, and what to watch out for (correlation vs. causation, survivorship bias, missing data). Build this once, drop it into every ICP project. icp-output-template.md [ https://substack.com/redirect/ab4c5150-43df-4720-914f-d987f94cf80d?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] — the exact output structure you want back. Narrative description, positive and negative fit signal tables across firmographics/technographics/demographics, fit scoring framework with point values and tier thresholds, and a plain-language summary card you can hand to a sales rep. Claude matches this format exactly instead of inventing something new each time. client-context.md [ https://substack.com/redirect/45da735a-b7dc-4aa5-8514-e3071e8ee391?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] — fill this out per engagement. Product description, current ICP hypothesis, average ACV and deal cycle, known hard disqualifiers, tech stack dependencies, any attributes the team already suspects matter, and what the output will be used for. This stops Claude from asking questions you already know the answers to and focuses the analysis on what’s actually uncertain. Here are the prompts I’ve been using: Prompt 1: Surface the patterns “I’ve uploaded X CSV files representing different customer cohorts. The good cohorts are: closed-won customers, longest-standing customers, highest-paying customers, and highest expansion/upsell customers. 
The bad cohorts are: churned customers, shortest contract customers, lowest-paying customers, and customers with the highest support volume and/or lowest NPS scores. Please analyze all cohorts and tell me:
1. The firmographic attributes that appear most consistently in the good cohorts but not the bad ones
2. The firmographic attributes that appear most consistently in the bad cohorts but not the good ones
3. Any technographic patterns (tech stack, tools) that differentiate good from bad
4. Any signals that appear to have been present at the time of sale for good vs. bad accounts
Weight each attribute by how strongly it differentiates good from bad cohorts. Flag any patterns where the sample size is too small to draw reliable conclusions.”

Prompt 2: Write the ICP definition

“Based on your analysis, please write a revised ICP definition that includes:
- A narrative description of our ideal customer profile
- Positive fit signals (demographic, firmographic, technographic), ranked by predictive strength
- Negative fit signals, ranked by predictive strength
- Positive behavioral and intent signals that suggest readiness to buy
- Negative signals that suggest poor fit or poor timing
- A suggested fit scoring framework with point values for each signal”

The output you’re looking for is a narrative with a structured breakout of patterns. The reason I’d expect Claude to do this better than a human doing manual analysis isn’t speed (though it is faster); it’s that Claude will surface cross-attribute correlations that are nearly impossible to spot manually. It’s not just “a lot of churned accounts are in healthcare.” It’s “healthcare companies that were Series B or earlier and didn’t have a dedicated RevOps function at time of sale.” This narrative should give you a solid output, backed by data, to use as a conversation starter with the internal committee of stakeholders who care about your ICP. Typically that means people from Product, Sales, RevOps, Marketing, and Customer Success. 
The goal is to leverage this process, the data, the documentation to gain internal alignment (which typically isn’t an easy feat). Now operationalize it Once you have a definition you, and your committee, trust, here’s where it needs to get wired in. 1. The CRM field Your CRM is almost always the single system of record for your business. Which means if your ICP definition doesn’t live there, as a queryable, filterable, reportable field or set of fields, it effectively doesn’t exist operationally. What that looks like is different for every company. Maybe it’s a single ICP Tier field with a simple picklist: Tier 1, Tier 2, Not ICP. Maybe it’s a series of fields capturing individual fit dimensions (industry fit, size fit, tech stack fit) that roll up into an overall score. There’s no single right answer. What matters is that there’s a method, it’s consistently applied, and anyone on the team can pull a report against it without exporting and shuffling things around manually. In most companies I’ve inherited: none of that exists. The ICP is a document somewhere, and the CRM has no idea it was written. The fix is purely mechanical. Decide on your structure, build the fields, and assign a value to every account in your database. Every downstream workflow like scoring, routing, TAL management, reporting, now has something to reference. 2. Scoring that accounts for fit, signals, and engagement Most scoring models I inherit are 100% behavioral. Visited pricing page: +20. Downloaded a guide: +10. Hit 50 points: MQL. The problem is that behavior without fit is noise. A VP of Operations at a 50-person out-of-ICP company who visits your pricing page four times is not a better lead than a VP of Supply Chain at a $2B target account who visited once. 
Your model needs three dimensions: fit score (firmographic, technographic, demographic - built from your new ICP definition), signal score (hired a new critical role, just raised a round of funding, is slacking on their security posture, etc.), and engagement score (behavioral - 1st party signals, typically from deanonymized activity on your website). Gate MQL status on a minimum fit threshold plus behavioral activity. Your MQL volume will drop. Your pipeline quality will go up.

3. Routing that reflects account value

If a Tier 1 account submits a demo request and hits the same queue, SLA, and rep assignment as a non-ICP startup, your ICP is not working hard enough for you. You defined it and then built a system that ignores it. Tier 1 inbounds should route differently. Senior reps, faster SLA, Slack alert, different sequence. I set up a routing build at one client where Tier 1 inbounds had a 15-minute contact SLA during business hours, strictly because their historical data showed a median first-response time of 11 minutes for deals that closed.

4. A maintained TAL

Most target account lists get built once, saved to a shared drive, and not touched again. But a TAL is just a snapshot of your ICP in time. Companies raise funding. Headcount hits a threshold. A startup on your “watch” list announces a $40M Series B and is now squarely in your sweet spot. Which is all great…but if that TAL isn’t shifting, you’re likely not working an ICP. More likely working a snapshot. I’d say you want at least a quarterly review cadence. Define three signals that move an account from “watch” to “active” (funding round, headcount milestone, a specific hire). Build a Clay or Apollo view that surfaces accounts hitting those signals monthly.

5. Reporting that filters by ICP fit

Most demand gen reporting: total leads, total MQLs, total pipeline, total revenue. Broken out by channel or campaign. Almost never by ICP fit. 
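Circling back to the scoring model in step 2, the fit-gating logic can be sketched in a few lines. The weights and thresholds below are illustrative placeholders I made up for the example, not recommendations; tune them against your own closed-won data.

```python
# Minimal sketch of the three-dimension scoring model from step 2.
# Weights and thresholds are illustrative placeholders only.

FIT_FLOOR = 40       # minimum fit score before any behavior counts
MQL_THRESHOLD = 70   # combined score required to flag an MQL

def combined_score(fit: int, signal: int, engagement: int) -> float:
    """Blend the three dimensions, weighting fit most heavily."""
    return 0.5 * fit + 0.2 * signal + 0.3 * engagement

def is_mql(fit: int, signal: int, engagement: int) -> bool:
    """Gate MQL status on a fit floor plus total score, so a poor-fit
    account can never qualify on engagement alone."""
    return fit >= FIT_FLOOR and combined_score(fit, signal, engagement) >= MQL_THRESHOLD

# The out-of-ICP VP who hit the pricing page four times never qualifies:
out_of_icp = is_mql(fit=10, signal=20, engagement=95)
# The $2B target account with one visit and strong buying signals can:
tier_one = is_mql(fit=90, signal=60, engagement=50)
```

The exact math isn't the point; the point is that fit acts as a hard gate while signal and engagement only rank what passes it.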
I’ve seen campaigns that look like wins on blended pipeline numbers, where filtering for Tier 1 shows barely any ICP engagement at all. Without the filter you run it again. With the filter you kill it or completely rethink the targeting. Add ICP Tier as a dimension in every report.

The part most people skip: keeping it dynamic

What I haven’t seen written about much, and what I think is the unlock: an ICP isn’t something you define once and operationalize. It’s an ongoing puzzle. The market shifts. Your clients’ needs shift. Your business shifts. Any of those will change who your real ICP is, and if you’re not actively maintaining the definition, you’ll drift back into the same problem you just fixed. Make this a recurring project that you prioritize.

I’m working on building a project in Claude Code that helps solve for some of the manual parts of this process. I’ll report back on whether it stands up to the test.

Are you using AI for ICP analysis yet, or is that still on the “someday” list? Reply and tell me where you are with it. I’m genuinely curious how people are approaching this, and I’m building my own workflow in real time.

See ya next week,
Kaylee ✌

your ICP is static, or doesn't exist at all

demandloops@substack.com · 3/15/2026
View this post on the web at https://demandloops.substack.com/p/70-plays-for-your-demand-gen-library

Most B2B marketing teams are running campaigns. Very few are running plays.

I’ve spent years inside demand gen orgs and consulting with them, and the pattern holds pretty consistently. Teams build out a content calendar, align on messaging for the quarter, get the creative approved, schedule the emails, and launch. Rinse and repeat the next quarter. Maybe they layer in a little segmentation. Maybe they have a nurture track or two. And when pipeline is slow, they do more of it. More campaigns, more emails, more LinkedIn spend. Push the message out harder. R.A.M. (random acts of marketing) all around.

The problem isn’t that campaigns are bad. Campaigns are necessary. The problem is that most teams are only running campaigns when they should also be running plays, and they’re treating those two things as if they’re the same motion. They’re not. At least not in my mind.

A campaign is how your brand talks to a market. It’s your point of view, broadcast to a segment. A campaign tells a story about what you believe, what problem you solve, and why your category matters. Done well, it builds awareness, shapes perception, and warms up your total addressable market over time. Think: a product launch, a seasonal push, an industry report rollout, a brand awareness series. These are all campaigns. They go out to a defined audience regardless of what that audience is doing right now. The message is relatively fixed. The trigger is the calendar.

A play is how you respond to a moment.
Something specific happens with a specific account or person, and you act on it. That’s the whole premise. A play exists because a signal exists. Take away the signal, and you have nothing to send. The activation (an email, an ad, a nurture, a dinner invite, etc.) is triggered by an insight you’ve gained.

A play looks like: your target account’s Head of Marketing just posted on LinkedIn about struggling with pipeline attribution. You have a relevant take on that problem. You reach out directly, referencing what they shared. I’d call that a play, not a campaign.

Or: a free trial user at one of your top 10 target accounts hit 80% of the usage threshold that typically predicts conversion. The play is triggered. Sales gets a task. A personalized email goes out.

Or: your best customer champion just started a new job at another company in your ICP. The moment they update their LinkedIn, a play fires.

These things can’t be scheduled. A play requires a specific response to a specific moment. That doesn’t mean it has to be manually written every time. Just that the activation should be designed around the trigger. The message, the timing, the ask, and the content all need to connect back to the trigger. If you swap out the trigger and the outreach still makes sense, you don’t really have a play.

The building blocks of a play

Ideally, every play needs four things to function:

A trigger. The specific condition that activates the play. Job change, pricing page visit, webinar attendance, competitive tool in their stack, funding announcement. No trigger, no play. This is the element most teams underdefine. “Website visitor” is too broad. “Pricing page visitor from a Tier 1 account with two or more visits in seven days” is a viable trigger.

A target. The play applies to a specific account, contact, or segment. Combined with your tier framework, this is how you control who gets what level of effort.
A Tier 1 account hitting the same trigger as a Tier 3 account should get a meaningfully different play, or at least a different level of personalization within the same play.

An action. The actual thing you’re activating or launching. Could be an email, a LinkedIn message, an SDR task, a direct mail drop, a personalized landing page. The action has to be proportional to the signal. A pricing page visit from a cold account doesn’t warrant an exec-to-exec letter. A multi-visit, multi-stakeholder pattern at a named account does.

A clear connection between trigger and message. The prospect should be able to read your outreach and feel like it arrived at the right moment, even if they can’t articulate why. The message doesn’t always need to explicitly reference what they did; I personally prefer to take the “serendipitous” route. It should feel relevant to where they are right now.

The plays most teams are missing

After mapping this out with a handful of clients and building a play library over the past few months, a few categories consistently come up as gaps.

Most teams have some version of inbound signal response. If someone fills out a demo request, someone follows up. When someone signs up for a webinar, they get a transactional calendar invite. Those are plays, even if teams don’t call them that.

Other, less commonly adopted, plays:

Job change plays. A former champion switches companies and lands somewhere in your ICP. This is one of the warmest possible signals you’ll ever see. They already know your product. They likely have an opinion on it. And they just stepped into a new role where they have both the mandate to make changes and the political capital to push something new through.

Proactive outreach plays with a real reason to reach out. Not “just checking in.” Not a generic sequence dressed up with a first name variable. An actual reason. Your team member is traveling to their city. They posted something on LinkedIn about a challenge you solve.
Their competitor just went through a product sunset. Reason-to-reach-out plays are wildly underused.

Customer and expansion plays. The entire post-sale motion is usually absent from any plays discussion. A health score drop, an NPS promoter who hasn’t been asked for a referral, a champion who just got promoted, a usage spike that signals upgrade readiness. All of these are plays. All of them generate real revenue. Almost no demand gen team is running them systematically.

Multi-threading plays. A new decision-maker joins an account you’ve been working. A second contact is identified at a stalled deal. These moments are often caught by sales, but they rarely exist as a defined play with clear activation logic and outreach assets ready to go.

Campaigns and plays aren’t competing priorities

One thing worth being direct about: you still need campaigns. Campaigns build the brand awareness and category credibility that make your plays land better. If someone’s never heard of you and you fire a play at them because there’s a potential partnership advantage, you’re going to get a much colder response than if you’d been building presence in their feed for the past few months. Leverage them both.

The ratio matters depending on where you are. If you’re early stage and relatively unknown, more of your energy goes into campaigns. As you build a bigger installed base, more signal data, and more brand presence in your category, plays start carrying more of the pipeline weight. But even at the earliest stage, you should have plays. At minimum: inbound follow-up, job change for champions, and at least one proactive outreach play for your top accounts.

A plays library

I’ve been building out a reference library of plays that any B2B demand gen team can pull from, organized by signal type, account tier fit, and whether they’re core (run these regardless of stack maturity) or advanced (require specific tooling or higher personalization effort).
There are 70+ plays across five categories: 1st party signals, 2nd party signals, 3rd party signals, proactive outreach, and customer and expansion. Each one includes the specific trigger that activates it and a starting point for tier fit so you can figure out which accounts get what level of effort.

I’m sharing the full library here as a download. Start with the core plays. Get those running. Then layer in the advanced ones. It goes without saying that every company is different: use this template as a starting point and customize based on what you know about your company.

[Plays Library [ https://substack.com/redirect/86a90a1e-8e3e-4782-913b-7f9b061713ab?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]]

*The goal is not necessarily to run all 76.

See ya next week,
Kaylee ✌

70+ Plays for your Demand Gen Library

demandloops@substack.com · 3/8/2026
View this post on the web at https://demandloops.substack.com/p/your-signal-stack-should-be-unique

Every team I talk to right now is building a signal-based stack. Job changes. G2 reviews. Web visits. Funding rounds. Hiring spikes. The tools are everywhere and the category is exploding. But another thing I’m seeing is that most teams are running the same plays off the same signals as every other company chasing their ICP. And then they wonder why the conversion rates fall flat.

I watched this play out with two clients last year. Different companies, same target personas. Both had decent signal coverage. Both were triggering outreach sequences off the same intent data. Both were getting okay-but-not-great results, and neither could figure out why. TL;DR: the problem was they’d never stopped to ask, “which signals matter for us, specifically?”

There’s a difference between building a signal stack and building your signal stack. This article is about the second one.

Why Most Signal Stacks End Up Looking Identical

Everyone starts from the same “menu”. Intent data platforms hand you a list of signals to track. Your ABM tool has recommended triggers built in. You look at what your competitors are doing and reverse-engineer their plays. Before long, you’ve got a list of 15-20 signals and a rough sense that you should be “acting on” all of them.

But the signals aren’t the edge. The edge is which signals are predictive for your product, your ICP, and your motion specifically.
If you’re showing up in communities asking, “what signals are working for everyone rn?”…you’re asking the wrong question.

Clay wrote a piece last year on what they call GTM alpha - the idea that winning teams use data others don’t have, in plays others can’t run. That framing is right. But most demand gen teams interpret that as “we need more signals” when the actual answer is almost always “we need fewer, better ones.”

Start With Your Best Customers, Not a Signal Menu

The process has to go backwards. Before you open a signal tool or build a play, you need to understand what was actually true about the accounts that closed, stayed, and expanded.

This is the work most teams skip because it feels slow. Or because companies are convinced they can’t learn anything from the past. It feels like research instead of execution. But skipping it is what lands you in generic plays that everyone else is also running.

Here’s the exercise I walk clients through: Pick 5-10 accounts you’d clone if you could. The ones where the deal moved fast, the champion was engaged, the expansion came without you having to chase it. Write them down. Then answer these questions for each one:

What was happening at that company in the 60-90 days before they came into your pipeline? Not just “they visited the website.” What was the business context? Were they in a growth phase? Had leadership changed? Were they mid-stack consolidation? Had they just shipped something new?

What did your sales team already know about them before the first call? What research had your AE done that gave them an edge in discovery?

If you had 10 interns researching an account before outreach, what would you have them look for? What information, if you had it, would change your message or your timing?

When you do this across 5-10 accounts, patterns emerge. They almost always do. You’ll start to see 3-5 behavioral or contextual signals that show up consistently in your best accounts. Those become your hypotheses.
Everything else is noise until proven otherwise.

Creating Signal Tiering

Once you have your hypotheses, you need a way to organize them. Not all signals carry equal weight for your motion, and treating them like they do is how you end up with a 20-signal stack firing off random acts of marketing. I use a simple three-tier framework.

Tier 1: High-conviction signals

These are specific, time-sensitive, and directly connected to a problem your product solves. They’re rare, likely for a smaller audience set, but when they fire, they mean something. An example for a company that sells onboarding tooling: a company just hired its third Customer Success Manager in 60 days. That timing probably isn’t a coincidence; it’s your sign the team is scaling a function with a real, immediate problem you solve. The signal is specific. The timing matters. There’s a clear message to build around it.

Tier 1 signals are your plays. They’re what you build creative around and automate with care.

Tier 2: Supporting signals

These add context and confirm fit, but they’re not strong enough to trigger a play on their own. A Series B raise plus active SDR hiring is interesting. Layer it on top of a Tier 1 signal and it sharpens your targeting. Use it alone and you’re competing with everyone else who has the same data. Tier 2 signals belong in your enrichment layer. They help you score and prioritize, but they don’t drive plays.

Tier 3: Noise signals

Every team has a few of these. They felt promising when you added them. You’ve been tracking them for 12-18 months. They haven’t correlated to anything meaningful. But nobody has had the conversation about cutting them because it feels like giving up. Cut them. The cognitive overhead of managing signals that don’t convert is real, and it crowds out the space you need to think clearly about the ones that do.

Start by mapping everything you’re currently tracking into these three tiers.
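One way to make that mapping concrete is a simple lookup where only Tier 1 signals are allowed to fire plays, while Tier 2 stays in enrichment and Tier 3 gets ignored. A minimal sketch; the signal names are hypothetical examples, not a recommended list:

```python
# Hypothetical mapping of tracked signals to the three tiers.
SIGNAL_TIERS = {
    "hired_3rd_csm_in_60_days": 1,    # high-conviction: fires a play
    "series_b_raise": 2,              # supporting: enrichment/scoring only
    "generic_topic_intent_surge": 3,  # noise: candidate to cut
}

def plays_to_fire(active_signals: list[str]) -> list[str]:
    """Return only the Tier 1 signals; everything else is context or noise."""
    return [s for s in active_signals if SIGNAL_TIERS.get(s) == 1]

# A Series B raise alone fires nothing; the CSM hiring spike does.
assert plays_to_fire(["series_b_raise", "hired_3rd_csm_in_60_days"]) == [
    "hired_3rd_csm_in_60_days"
]
```

Writing the tiers down in one place like this also makes the quarterly audit mechanical: demoting a signal is a one-line change instead of an argument.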
Most teams will find they’re heavily over-indexed on Tier 3, lightly invested in Tier 1, and confused about where Tier 2 fits.

Building Plays Around Your Tier 1 Signals

When you know which signals are predictive, play design gets cleaner. A well-built signal-driven play has four components. Before you build anything, you need answers to all four.

The trigger: What exact condition fires the play? “Job change” is not a trigger. “New VP of Revenue Operations hired from a company with $50M+ ARR, into a company currently using a fragmented data stack” is a trigger. The more specific you can get here, the more relevant your outreach will be. Specificity is not over-engineering. Specificity is respect for your prospect’s time.

The context layer: What do you need to know about this account before you reach out? What enrichment should happen automatically before the sequence fires? Think about what a great AE would research before a cold call. Some of that can be automated now. Build it in.

The message: What is the one thing you want to communicate based on this signal? One. If you’re trying to say three things in your first touch, you’re not clear on why the signal matters. Go back and sharpen it.

The timing window: When does this signal stop being relevant? Most signals have a 2-4 week window before the context shifts. A new hire settles in. A compliance event gets handled. If you’re not building timing into your plays, you’re leaving a lot of relevance on the table.

A quick example from a previous client. They sell to mid-market HR teams. Their signal stack was pulling job change data on HR leaders and triggering generic sequences. Response rates were mid at best. We went back through 12 months of closed-won deals and found that the accounts that moved fastest had one thing in common: the HR leader had been promoted into the role internally, rather than hired externally.
Internal promotions meant they were inheriting a tech stack they didn’t choose and were actively evaluating what to keep. Turns out this was a Tier 1 signal. It was specific, time-sensitive, and directly tied to a buying moment. They rebuilt the play around that one signal. The message became more specific and resonant. The timing changed. Results improved.

Maintaining Signal Hygiene Over Time

Signals decay. Honestly, probably faster than we’re even estimating. A play that worked six months ago may be producing diminishing returns now because the market shifted, because prospects have gotten wise to the trigger, or because three of your competitors started running the same sequence off the same data. This is the reality of signal-based marketing. Nothing stays alpha forever.

Quarterly signal audits, at minimum, are how you stay ahead of decay. This doesn’t have to be a big process. Here’s what to do:

An easy way to start is to map signals to campaigns. Then pull down your campaign data to see which signals are correlating to pipeline and closed won. If a signal has been active with plays running against it for 90+ days and you can’t draw a line to revenue, it has likely become a Tier 3 signal that needs to be demoted or cut entirely.

Review anything you’re tracking but not acting on. Either build something around it or stop tracking it.

Talk to your sales team every quarter about what patterns they’re seeing in discovery. New signals show up there first. An AE who’s done 30 discovery calls in the last 90 days knows things about your buyers that no intent tool can surface.

And treat your signal stack the way you’d treat your tech stack. Regular pruning. Add new things intentionally. Question anything that’s been there a long time without proving itself.

One thing’s for sure: the signal-based marketing noise is only going to get louder. Every team is getting access to more data, better tooling, and faster workflows.
The advantage isn’t in having more signals, but in knowing which 2-3 signals are yours to own, and building plays that nobody else can replicate because nobody else did the work to find them.

See ya next week,
Kaylee ✌

Your Signal Stack Should Be Unique to You

demandloops@substack.com · 2/22/2026

Why Your Best Hires Stop Sharing Opinions

demandloops@substack.com · 2/15/2026
View this post on the web at https://demandloops.substack.com/p/demandloops-2025-annual-letter

This week I was attempting to hit inbox zero and came across a letter from Immad Akhund, CEO at Mercury. He later ended up sharing it to his LinkedIn community [ https://substack.com/redirect/8163c6a1-8bd7-4fca-8a5f-eda0580a67c1?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ], too. But I read the whole thing. Twice.

The first time through I was reading as an avid member of their community. I’m very bought into the product they’re building and the problems they’re solving for founders. But the second time through I was reading with my own “founder” hat on. I love the vulnerability, the transparency, the rhythm of doing something like this annually. So…I thought welp, why the heck not. And now I’ve spent half my day writing my own version of an annual letter for DemandLoops.

I’m hoping this serves a few purposes:

To anyone wanting to leave W-2 life: I’m not a “founder” or CEO or inventor. I’m just an operator who got good at one thing (demand gen) and built a business around it. If I can do this, you can too.

To anyone trying to up-level in their current role: The gaps we’re solving across clients aren’t unique. Pattern matching, org-wide visibility, managing up - these skills matter everywhere, not just in consulting.

To anyone we might work with: This is how we think about problems and why we operate the way we do.

So, here it goes.

Dear DemandLoops community, readers, friends,

I started DemandLoops in mid-2023 with a belief that felt obvious but kept getting ignored. I believe growth is not linear. Careers aren’t. Companies aren’t. Demand gen definitely isn’t.
But most of the industry treats all three like straight lines:

More tools → better outcomes
More leads → more revenue
More effort → more progress

But I believed back then, and still believe today, that progress actually comes from compounding feedback loops:

What you measure changes behavior
Behavior shapes systems
Systems reinforce outcomes
Outcomes then justify the original measurement

Companies then, and now, keep hiring new talent, or new agencies, to “fix” demand gen. They get new campaigns. Fresh creative. Maybe better automation. Six months later, nothing fundamental has changed. Sales still complains about lead quality. Attribution still makes no sense. Pipeline forecasts are still vibes and spreadsheets.

What I kept seeing was this: AI was scaling bad data. Sophisticated tools were amplifying broken processes. Short‑term pipeline pressure made it impossible to build anything sustainable. So I built DemandLoops to be the opposite.

Originally DemandLoops was purely a solo venture. A chance for me to break the cultural norms of having to climb the corporate ladder in order to be successful, time to focus on client projects that kept my interest, and a position where clients paid for an honest perspective on their growth opportunities.

What We Keep Finding In Client Systems

Account selection is still the Wild West

Most companies confuse their TAM with their ICP. And worse, they confuse their ICP with whoever came inbound last quarter. I wrote about this in October because I kept seeing the same pattern across engagements. Marketing would proudly show me their “ICP definition” - usually a neat little chart with company size, industry, tech stack. Then I’d ask sales what their best deals looked like. Completely different answer.

One client came to us saying they couldn’t get inbound to work. When we dug in, they were targeting their entire TAM (tens of thousands of companies) in their paid programs. No prioritization. No tiering.
Just “spray and hope someone converts.” The fix isn’t complicated, but it requires admitting something uncomfortable: you can’t go after everyone, even if they technically could buy from you.

Here’s what we’ve seen work:

Start with your best customers (highest LTV, shortest sales cycle, actual success with your product)
Document the firmographics, technographics, demographics, AND behavioral signals they share
Build account tiers based on fit, not just size
Accept that some accounts in your TAM shouldn’t get the same attention as others

The hardest part isn’t the analysis, but getting leadership to agree that tier 3 accounts don’t deserve the same resource investment as tier 1. That conversation is where most companies get stuck.

The measurement problem is always an alignment problem

This one showed up in literally every engagement in 2025. Sales and marketing weren’t really fighting about lead quality; they were fighting because they were measuring success using different definitions. Marketing measured MQLs, form fills, campaign influence. Sales measured closed-won deals and pipeline created. Neither team’s metrics connected to the other’s.

One CMO told me: “We hit 120% of our MQL goal last quarter and sales still said we weren’t performing.” When we mapped it out, turns out those MQLs had a 3% conversion rate to closed-won. Marketing was generating volume. Sales needed velocity.

The pattern I keep seeing: companies layer on more sophisticated attribution tools thinking that will solve alignment. It doesn’t. Because both teams are still optimizing for different outcomes.
What changed things for clients:

Getting sales and marketing leadership in the same room to define entrance/exit criteria
Building metrics both teams believed in (usually around qualified pipeline/revenue, not MQLs generated)
Creating visibility into what happens after the handoff, not just before
Admitting that some of their “best performing” campaigns were only best at generating activity, not revenue

The breakthrough moment usually happens when we show them their top 10 campaigns by MQL volume next to their top 10 campaigns by closed-won revenue. Almost never the same list.

AI is exposing every process gap you’ve been ignoring

This is the one that’s evolving fastest and causing the most chaos. Every client conversation in Q4 included some version of: “We’re implementing AI tools but not seeing the efficiency gains we expected.” Well, AI is really good at doing what you tell it to do. If your process is broken, AI just executes that broken process faster.

So things like:

Clean CRM data (accounts properly structured, duplicates merged, fields actually used)
Clear signal definitions (what counts as intent, what’s just noise)
Documented workflows (so AI can follow a process that works)
Measurement systems that connect activity to outcomes

…are becoming more critical than ever. The AI stuff everyone’s excited about - signal orchestration, predictive scoring, automated personalization - only works if the foundation is solid.

The Shape of What We’re Building

The clients we said no to (and why that matters)

27 times in 2025, we turned down potential clients. Clients that were ICP fit, but either they weren’t ready for the work, or we didn’t have capacity. On the work front, the pattern was always the same. Leadership wanted better pipeline numbers. They thought the solution was new campaigns, better tools, maybe some AI magic. They wanted tactical fixes to organizational problems.

One VP of Marketing called in March.
Their CEO wanted to “fix demand gen in 90 days.” Had this hard out he was adamant about including in the contract. When I asked about their current attribution setup, they said “we track everything in Salesforce.” When I asked how sales and marketing defined a qualified lead, they said “that’s actually been a point of tension.” That’s code for: nobody agrees on what success looks like, but leadership wants it fixed by Q2.

I could have taken that engagement. Delivered some new campaigns. Generated some activity. Collected the check. But six months later, they’d still have the same tension, just with new dashboards.

The clients we said yes to had something different. Not bigger budgets. Not simpler problems. They had leadership who was willing to hear that their measurement was wrong, their ICP needed work, or their tech stack was amplifying their dysfunction. It’s some uncomfortable stuff to admit. But you can’t fix what you won’t acknowledge is broken.

The red flags we learned to spot:

Leadership using phrases like “quick win” or “low hanging fruit” in the discovery call
Sales and marketing leaders not both involved from day one
Timeline pressure that doesn’t allow for learning (everything needs to be “done by end of quarter”)
Unwillingness to change measurement, only tactics

Every no felt so expensive in the moment. But every yes to the wrong client would have meant less capacity for the companies ready to do this work differently.

An attempt at productization

In Q1 2025, I decided to productize some of my processes. The thinking made sense: clients kept asking for the same initial audit. I’d built a mostly-repeatable framework. Why not turn it into a self-serve offering? Scale the business without scaling my hours. I spent several nights and weekends thinking through and documenting how I wanted to build it out. What the experience might look like. How I’d market it. It literally never left my drafts. That was a year ago now.
In that moment I was still working solo, trying to keep too many irons in the fire. The irony was that I needed to take some of my own advice and focus. Do less, better.

The hire that changed everything

I’d become this accidental cheerleader for going solo. Built a big part of my online presence around it. So the idea of hiring felt like... I don’t know, admitting I couldn’t hack it? Or something equally stupid.

I’ll never forget it. I was on the front porch, multi-tasking while the kids played, and within 30 minutes, I said “I think this might be too good to be true.” And just like that, took the leap from solo to company of two.

I’d resisted hiring for almost two years. Solo was simple. No management overhead. No risk of a bad culture fit. But solo also meant turning down good clients because I was maxed out. It meant every vacation required either overworking before I left or disappointing clients while I was gone. It meant my business could only grow as much as I personally could scale. And it meant making every decision alone.

The hiring process was simpler than expected. I found the perfect hire in my network. Someone who could think strategically and also execute. Who understood the technical side of MarTech and could talk to executives about business outcomes. Who was comfortable with ambiguity because I was still figuring things out.

The part I didn’t anticipate: the founder-sales problem. People got used to seeing me in their feeds. Reading this newsletter. Had heard me on podcasts, etc. When they realized I wouldn’t be on every client account, they got nervous. Early contracts had addendums guaranteeing my involvement in weekly calls, strategy sessions, anything to give them that safety net.

That lasted about three months. By December, something shifted. Clients realized she wasn’t just qualified - she was bringing perspective and skills I couldn’t. The quality of our work improved because I wasn’t context switching between four client calls a day.
And I could actually think about where this business should go instead of just surviving the current quarter. Looking back, I should have hired sooner. But I also don’t think I could have hired well until I felt that constraint. Sometimes you need to hit the ceiling before you’re ready to raise it. The Weird Choices We Made About Growth Here’s what we decided about how DemandLoops would grow: Bootstrapped, profitable Work-life integration, not balance Learning in public, including the failures Intentionally small team, high leverage Those choices absolutely capped certain growth paths. They also attracted clients who were ready to fix root problems instead of leaning on RAM (random acts of marketing). What We’re Building Toward Next In 2026, outside of maintaining our standard for client delivery, we’re focusing on three things. Making the positioning switch official We’re fully committed to building a small, intentional team. Not scaling to 20 people. Not becoming a traditional agency. Just a tight group of operators who can solve hard problems for clients ready to do the work. That means updating the website. Rewriting messaging. Finally announcing who joined the team and what she brings to clients. All the positioning work I’ve been quietly putting off because I wasn’t sure how to talk about it yet. Now I am. Hiring again (and we already found them) This year we’re making our second hire so we can say yes to more clients. Still selective. Still high-touch. Just not so limited by capacity anymore. The best news? We’ve already signed the offer for the perfect candidate. 🤗 Building more candidly in public With all the AI slop crowding my feed, I’ve been thinking about what got me here in the first place. I got my start sharing pretty candidly on LinkedIn. The good, the bad, the ugly. From there came the Demand Gen Chat podcast at Chili Piper. 
That format of connecting with the community, learning from others, sharing wins, elevating voices so people could be heard above all the noise - that’s how I learned and grew in this craft. I really haven’t been on camera much since leaving Chili Piper. Definitely outside my natural comfort zone. But I built my platform through writing and podcasting because it helped me stay connected to this work and this community. Getting back into that rhythm is my goal for 2026. So expect some video content this year. Probably a small series on LinkedIn. Maybe some uncomfortable first attempts. But that’s kind of the point. This probably doesn’t read like a traditional annual letter. But traditional isn’t really working in demand gen right now. If you’re dealing with any of the patterns I mentioned - misaligned ICPs, broken measurement, AI amplifying dysfunction - I’d genuinely love to hear about it. Hit reply. I read everything. Here’s to growth, Kaylee ✌

DemandLoops 2025 Annual Letter

demandloops@substack.com · 2/8/2026
View this post on the web at https://demandloops.substack.com/p/your-100k-abm-platform-wont-save Sooo many B2B SaaS companies are still locked into multi-year, six-figure contracts (problem #1) AND still targeting the wrong accounts (problem #2). They inherit a 6sense or Demandbase contract. Or the CFO finally approves that big ABM investment. The team gets excited, builds out a massive TAM, creates beautiful account tiers, and loads 2,500 dream accounts into the platform. Then they wait for pipeline to materialize. Six months later, they’ve got maybe 3-5 qualified opportunities and a CFO asking very pointed questions about ROI. The problem isn’t just the platform - though it is really easy to blame the platform. ABM platforms work - especially modern ones! I’ll do a thorough write-up soon of the recent stack combos I’m seeing for ABM. But the problem is that most companies point them at cold accounts who’ve never heard of them and expect the tech to magically create demand. 👋 Hi, it’s Kaylee Edmondson and welcome to Looped In, my newsletter covering demand gen and growth in B2B SaaS. Subscribe to join 2k+ readers who get Looped In delivered to their inbox every Sunday. There’s a Faster Path, but Few Take It Every B2B SaaS company has a CRM stuffed with closed-lost opportunities collecting dust. These are accounts that already qualified themselves. They took demos. They had real pain your product could solve. They got far enough in the sales process to seriously consider buying. Then they said no. Maybe they chose a competitor. Built something in-house. Budget got frozen. Timing was off. Your product was close at the time, but the features they really wanted were still on your product roadmap. 
But you have documented evidence about who they are, what pain they had, and exactly why they passed. These should be a primary tier in your target list. Not cold accounts. The numbers tell the story. I pulled data recently from a client of mine showing the average cycle time for closed lost re-engagement deals versus net new deals. Closed lost averaged 4.2 months. Net new took 11.8 months. That’s 64% faster. Just saying. The reasons are obvious once you think about it. Awareness already exists. Trust already exists. The problem you solve hasn’t magically disappeared. You know the exact objections because they’re in your CRM. And circumstances change constantly—budget gets approved, DIY solutions break, competitors underdeliver. Everyone Avoids This Because… Going back to accounts that rejected you feels…desperate. Sales teams especially hate revisiting losses. But think about your own buying behavior. How many SaaS tools have you evaluated, passed on, then implemented 12 months later because circumstances changed? I’ve been on the other side of this dozens of times. Evaluated a tool, chose a competitor, realized six months later it wasn’t working, went back to the original vendor. The companies that stayed visible (without being annoying) won our business eventually. The mistake is thinking closed lost means permanently lost. Prioritizing the Graveyard Recent closed lost (last 6 months-ish): These are your highest-probability targets. The pain is fresh. They haven’t fully implemented whatever they chose. Hit them with personalized outreach addressing the specific objections from your CRM notes. Offer a second look with better terms. Show them all the new product updates that have launched. Older closed-lost (6-18 months-ish): They’ve lived with their decision long enough to know if it’s working. Lead with customer stories from similar companies. Use intent signals to identify if they’re actively researching again. 
Multi-touch campaigns combining ads, email, and direct mail work well. If you were lucky enough to have them tell you which vendor they chose over you, leverage your battlecards citing specifics on why you win over XYZ competitor. Ancient closed-lost (18+ months-ish): These likely need warming before direct sales outreach. Retargeting ads with thought leadership content. Webinar invitations. Quarterly check-ins with industry insights. Wait for high intent signals before sales gets involved. I’m not saying don’t spend on net-new accounts, too. Just make sure you’re not completely overlooking your closed-losts. Making the Business Case Your CEO wants those massive enterprise logos. They won’t love hearing you want to focus on deals you already lost. Frame it as risk mitigation, right? You’re maximizing ROI on the program by targeting accounts with the highest conversion probability. (Whether they’re new or graveyard.) Or say this: “We have 200 accounts that already took demos and reached consideration stage. They know us. They have the pain we solve. They just need the right message at the right time. Give me XX days to prove we can win back a percentage of these.” Most executives will take that bet. See ya next week, Kaylee ✌

Your $100K ABM Platform Won’t Save You From a Bad Target List

demandloops@substack.com · 2/1/2026
View this post on the web at https://demandloops.substack.com/p/rethinking-icp-segmentation I’ve been wrong about something for years. Well, probably several things fit into that statement, but for the sake of specificity, we’ll keep this article to just one topic in particular. 😅 Not entirely wrong. Just wrong enough that it’s been bothering me. For the last decade, I’ve been preaching a pretty standard gospel when it comes to segmenting ICPs for Account-Based programs: start with geography, layer in company size, add vertical specificity, then customize messaging for enterprise versus mid-market problem sets. It’s worked. I’ve built successful programs this way at Chili Piper, Campaign Monitor, Brightwheel, and for dozens of clients since going solo. But lately? I keep running into situations where this framework breaks. 👋 Hi, it’s Kaylee Edmondson and welcome to Looped In, my newsletter covering demand gen and growth in B2B SaaS. Subscribe to join 2k+ readers who get Looped In delivered to their inbox every Sunday. The Cracks in Traditional Segmentation A few months ago, I was working with a client selling GTM software. We’d built out their ICP segmentation the “right” way: companies with 10,000+ employees got one set of campaigns, companies with 1,000-9,999 got another, and 150-999 another. But something weird started happening. A 180-person company that had tripled headcount in 18 months was asking the exact same questions, facing the exact same problems, and responding to the exact same messaging as an 8,000-person company that had barely grown in three years. Meanwhile, another 180-person company that had been stable for five years needed completely different positioning. 
They weren’t even close to the same buyer journey as the high-growth 180-person company. The common denominator wasn’t size. It was growth velocity and the operational chaos that comes with it. It seems obvious as I write it, but truth be told, I’d just never thought about it this way. What We’re Actually Segmenting When you segment by company size, you’re making assumptions about: Budget availability Decision-making complexity Operational maturity Tech stack sophistication Team structure But these assumptions fall apart when you account for growth rate. A 200-person company growing 200% year-over-year is dealing with: Hiring and onboarding chaos Process breakdown at scale Tool sprawl and integration nightmares Cross-functional alignment issues An 8,000-person company with 5% annual growth might have: Established processes Mature tech stack Clear roles and responsibilities Predictable operations Same problem intensity. Very different company sizes. The Problem-State Framework What if we stopped segmenting primarily by static firmographics and started segmenting by problem intensity and operational state? (Or whatever this might translate to based on what you’re selling.) This has me starting to think about segmentation in three dimensions instead of the traditional one: 1. Growth Velocity Hyper-growth (>100% YoY) High-growth (50-100% YoY) Steady-growth (<50% YoY) Maintenance mode (flat or declining) 2. Problem Acuteness Critical pain (operations breaking, revenue at risk) Active pain (inefficiencies causing measurable issues) Latent pain (aware of problem, not urgent) Future pain (anticipating upcoming challenges) 3. Operational Maturity Building (establishing first processes) Scaling (processes exist but breaking) Optimizing (processes work, seeking efficiency) Transforming (rebuilding for next phase) A company’s position across these three dimensions could tell you way more about their buying journey than their employee count might. 
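For the ops-minded readers: here’s a rough sketch of what scoring accounts on these three dimensions could look like in practice. The thresholds, field names, and example accounts below are all hypothetical, not from any client’s data.

```python
# A rough sketch of three-dimension, problem-state segmentation.
# All thresholds, field names, and example accounts are hypothetical.

def growth_velocity(yoy: float) -> str:
    """Bucket year-over-year growth into the four velocity bands."""
    if yoy > 1.00:
        return "hyper-growth"    # >100% YoY
    if yoy >= 0.50:
        return "high-growth"     # 50-100% YoY
    if yoy > 0:
        return "steady-growth"   # <50% YoY
    return "maintenance"         # flat or declining

def segment(account: dict) -> str:
    """Compose a problem-state segment instead of a size bucket."""
    return " / ".join([
        growth_velocity(account["yoy_growth"]),
        account["problem_acuteness"],  # critical / active / latent / future
        account["maturity"],           # building / scaling / optimizing / transforming
    ])

# The two 180-person companies from the story land in different segments,
# even though a size-based model would treat them identically:
fast = {"yoy_growth": 2.00, "problem_acuteness": "critical", "maturity": "scaling"}
flat = {"yoy_growth": 0.00, "problem_acuteness": "latent",   "maturity": "optimizing"}

print(segment(fast))  # hyper-growth / critical / scaling
print(segment(flat))  # maintenance / latent / optimizing
```

The point of the sketch: employee count never appears in the segment key, but size can still live alongside it for contract-value and CAC decisions.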
What This Looks Like in Practice Let me show you how this shifts messaging for that GTM client: Old approach: Mid-market: “Get visibility into your pipeline” Enterprise: “Enterprise-grade analytics for complex sales orgs” New approach: Hyper-growth + Critical pain: “Stop losing deals in the chaos of rapid scaling” Steady-growth + Latent pain: “See patterns you’re missing in your sales data” Mature + Optimizing: “Fine-tune what’s already working” Same product. But different convos based on where the company is in their growth and operational journey. The Messy Reality Here’s where I admit this gets complicated fast. You can’t just flip a switch and start segmenting by problem-state instead of firmographics. The data isn’t always readily available. Your CRM probably isn’t set up for it. Your sales team is used to the old buckets. And yes, company size still matters for things like: Contract value potential Sales cycle complexity Implementation resources needed CAC thresholds for campaign budgets I’m not suggesting we throw out firmographics entirely. What I’m suggesting is that we stop letting them be the primary organizing principle when they might not be the most predictive variable. Still Figuring This Out I’m sharing this because I’m actively working through it, not because I have it all figured out. Maybe in six months I’ll have more conviction. Maybe I’ll realize this only applies to certain product categories or GTM motions. Maybe traditional segmentation works just fine for most companies and I’m overthinking it. 👆 It could very well be this. IYKYK. But I do know this: when a 150-person company and an 11,500-person company are asking me the exact same questions in discovery calls, our segmentation model is missing something important. And that’s worth exploring. Ooh, also worth exploring! 
A dear friend of mine, Jason Widup [ https://substack.com/redirect/373f202a-8b54-4d55-afc4-7e084a97d6e7?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ], is climbing 5 volcanoes this summer to raise money for youth mental health. Please consider sharing his story, donating to this critically important cause, or just leaving him a note of encouragement on his journey. 🫶 See ya next week, Kaylee ✌

Rethinking ICP Segmentation

demandloops@substack.com · 1/25/2026
View this post on the web at https://demandloops.substack.com/p/the-great-abm-unbundling-is-here Social trends this week are pulling us straight back to 2016. Your feed is probably full of VSCO-edited, Snapchat-filtered, oddly cropped 2016 highlight reels like we’re reliving our glory days. But in B2B marketing, ABM never left 2016. We’re still running the same bloated playbooks, still drowning in 6-figure platform contracts, still spending months prepping campaigns that could launch in weeks. The tech evolved. Buyer behavior changed completely. Budgets got slashed and scrutinized. But somehow, our ABM approach stayed frozen in time. I’ve had three calls this week alone with CMOs at B2B SaaS companies asking the same questions: Should we renew our ABM platform? How do we get our sales team to actually use this thing? Why are we spending $200K annually on tools we barely touch? Is there a better way? These aren’t outlier conversations. This is the norm right now. Or at least it sure feels that way to me. 👋 Hi, it’s Kaylee Edmondson and welcome to Looped In, my newsletter covering demand gen and growth in B2B SaaS. Subscribe to join 2k+ readers who get Looped In delivered to their inbox every Sunday. Insert…The ABM Maturity Gap Most companies fall into one of three camps when I audit their ABM setup: Camp 1: The Holdouts Still running on gut feel and spreadsheets. They know they need something more sophisticated but haven’t taken the plunge. Budget constraints, team capacity, fear of complexity. Whatever the reason, they’re stuck watching competitors move faster. Camp 2: The Over-Investors Went all in on a massive ABM platform 2-3 years ago. Signed a multi-year contract at $150K+ annually. 
Now they’re using maybe 30% of the features while their sales team ignores the alerts because they’re too noisy. Classic overshooting problem. Camp 3: The Experimenters Testing point solutions across the stack. RB2B for website identification, Clay for enrichment, Warmly for conversational ABM. Moving fast but struggling with integration and data consistency. They have the right instincts but lack the connective tissue. None of these camps are wrong. They’re just operating with outdated assumptions about what ABM requires in 2026. The gap exists because we built ABM practices during a different era. When 6sense and Demandbase launched their platforms, they solved the problems we knew then. Intent data was scarce. Personalization at scale was hard. Integration across tools was a nightmare. Today? Intent data is everywhere. Personalization tools are abundant and affordable. Integration got 10x easier with tools like Clay and Zapier. The old all-in-one model doesn’t make sense anymore, but most teams haven’t caught up. 3 Mistakes Slowing Down Your ABM Motion After working with dozens of B2B teams over the past two years, I keep seeing the same patterns that kill ABM velocity: Mistake 1: Treating ABM Like a Separate Program You spin up an “ABM initiative” with its own budget, its own tool stack, its own Slack channel. Meanwhile, your regular demand gen motion continues unchanged. Sales operates in their own world. Customer success runs separate plays. This creates three versions of customer engagement that rarely sync up. An account might be in your top tier ABM list while sales is ignoring them because they don’t show up as an MQL in Salesforce. Or CS is trying to expand an account that marketing just sent a generic nurture email to. 
One client a few quarters back had their tier 1 ABM accounts receiving: Personalized SDR outreach (appropriate) Generic paid social ads (not appropriate) Mass email nurtures (definitely not appropriate) Customer marketing emails meant for existing customers (wildly inappropriate) Nobody coordinated. Everyone optimized for their own metrics. The account experience was chaotic. Mistake 2: Building Before You Have Signal Clarity Companies rush to implement ABM infrastructure before figuring out which signals actually matter for their business. They track everything because they can, then wonder why their sales team tunes out the noise. I worked with a Series B company that was tracking 47 different intent signals across their target accounts. Forty-seven. When I asked which ones correlated with actual closed deals, the team couldn’t answer. They had data everywhere but zero clarity on what mattered. We spent two weeks analyzing their last 24 months of closed-won deals. Turns out, only 4 signals consistently showed up before deals closed: Multiple stakeholders viewing pricing within 7 days Technical decision-maker reviewing API docs after demo Finance stakeholder on pricing page multiple times Job posting, or existing FTE, for a role that their product enables Everything else was noise for them. Expensive, time-consuming noise. Mistake 3: Optimizing for Volume Over Velocity The old ABM playbook says “identify 100-500 target accounts, then hit them with everything.” More touches, more channels, more content. Spray and pray, just with tighter targeting. This approach made sense when data was scarce and personalization was hard. Now it just creates noise and burns out your team. A growth-stage company I advised was running 12 different ABM campaigns simultaneously across their tier 1 accounts. Email sequences, LinkedIn ads, direct mail, SDR outreach, webinar invites, event sponsorships. The works. Problem? Their tier 1 list had 400 accounts. 
With 2-3 target contacts per account, they were trying to orchestrate 12 plays across 900+ people. The team was underwater. The accounts were overwhelmed. Conversion stayed flat. We cut it down to 3 plays, focused on 50 accounts, and conversion rate doubled in 8 weeks. A New ABM Model for 2025 The companies winning with ABM right now aren’t using the 2016 playbook. They’re operating with different assumptions entirely: Assumption 1: Start with Fit, Layer in Intent Old model: Identify accounts showing intent, then qualify for fit. New model: Map your entire TAM by fit first, then use intent to trigger action. This flip matters because it changes how you structure everything. Your CRM becomes a living map of your market, not just a repository of inbound leads. You can proactively track accounts through their buying journey instead of waiting for them to raise their hand. This concept of account progression is baseline in my opinion. One client mapped 2,000 accounts into their CRM based on ICP fit and deal size. Then we set up intent monitoring across those accounts. When high-intent signals fired, we knew exactly which tier that account belonged to and which play to run. Compare that to the old model where you’re reacting to intent signals from accounts you know nothing about, scrambling to figure out if they’re even worth pursuing. Assumption 2: Orchestration Over Programs Stop thinking in campaigns. Start thinking in orchestrated plays across channels. A campaign has a start and end date. You launch it, measure it, optimize it, maybe run it again next quarter. This creates gaps. Accounts fall through the cracks between campaigns. Orchestration runs continuously. You define plays for different account stages and tiers, then trigger them based on signals. An account moves from one play to another based on their behavior, not your campaign calendar. This requires different infrastructure. 
You need: Clear account staging (not just lead stages) Signal tracking that feeds into your CRM Workflow automation that respects engagement rules Content that maps to account stage, not campaign theme Assumption 3: Point Solutions Beat Platforms The great unbundling of ABM is happening. The all-in-one platforms that cost $200K annually are losing to nimble point solutions that cost $5K-$30K each. You can now build a sophisticated ABM stack that outperforms legacy platforms. Here are a few tools on the market I keep coming back to: Keyplay for account mapping and ICP modeling Clay for enrichment, signal tracking, play development Vector for website de-anonymization + audience builds Userled for 1:1 ads and microsites that accelerate enterprise deals Cargo for GTM orchestration, workflow automation, and agents for sales v0 + AI models for rapid campaign prototyping and experimentation Your existing MAP for marketing orchestration The trade-off, though, is that you need someone on your team who can stitch these together. RevOps or a technical marketing ops person. But that’s a one-time setup cost, not an annual platform fee. Budget Allocation for Modern ABM Most companies overspend on platforms and underspend on execution. Here’s a generalized recommendation for how I’d break down an ABM budget: Events (~45%): High-touch, high-return. Events drive pipeline, strengthen deals, expand customers, and deepen relationships. Paid Ads (~20%): Personalized ads that engage and convert top-tier accounts, accelerate deals, and expand existing customers. Contractors (~15%): AI doesn’t mean more headcount, but it doesn’t mean understaffed either. Some necessary resources, like web, design, and video, are best done with experts. Tooling (~10%): This is spent on orchestration (ABX, GTM, workflows). Campaigns (~10%): Customer and influencer campaigns, experiments, tactical bets on new formats, channels, or creative ideas – often used alongside AI and gifting. 
This should without a doubt be tailored to your unique market and marketing advantages, too. Just because something similar worked for someone else, doesn’t mean it will work for you. ABM Orchestration in Practice Let me walk through a real example of how modern ABM orchestration works. A Series B company selling to mid-market SaaS companies had 800 accounts in their target market. They tiered them like this: Tier 1: 50 accounts - Highest fit, highest deal size, active buying signals Tier 2: 200 accounts - High fit, good deal size, moderate signals Tier 3: 550 accounts - Solid fit, smaller deal size, low/no signals Tier 1: Premium Investment Willing to accept 2-3x your normal CAC Justify with higher ACV and LTV potential Budget allows for human touch at every stage Custom everything—research, outreach, content Tier 2: Balanced Investment Aim for 1.5x your normal CAC ceiling Mix of automation and personalization Templated but customized approaches Shared resources across similar accounts Tier 3: Strict Efficiency Must hit standard CAC targets Automation-first, human touch only when earned Leverage existing content and campaigns Minimal incremental spend per account The math is simple: if Tier 1 accounts close at 3x the deal size of Tier 3, you can afford to spend 3x more acquiring them. 
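That “spend in proportion to deal size” logic is easy to sanity-check with back-of-the-envelope math. The baseline CAC and ACV figures below are made up for illustration; only the 3x relationship comes from the example above.

```python
# Quick math on tiered CAC ceilings: if a tier closes at N times the deal
# size of your baseline tier, you can justify roughly N times the spend.
# The baseline CAC target and ACV figures here are hypothetical.

baseline_cac = 10_000                                    # Tier 3 CAC target
acv = {"Tier 1": 150_000, "Tier 2": 90_000, "Tier 3": 50_000}

for tier, deal_size in acv.items():
    multiple = deal_size / acv["Tier 3"]                 # 3.0x, 1.8x, 1.0x
    print(f"{tier}: up to ${baseline_cac * multiple:,.0f} CAC ({multiple:.1f}x)")
```

With these made-up numbers, Tier 1 works out to a $30,000 CAC ceiling at 3.0x the baseline, which is exactly why concentrating budget on the smallest tier isn’t backwards.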
What This Looks Like in Practice Tier 1 Plays: Unknown stage: 1:1 ad creative, dedicated research budget Aware stage: Multi-channel (email + LinkedIn + direct mail/gifting) Engaged stage: High-touch (executive briefings, reverse demos, workshops) Considering stage: Custom deliverables (ROI analysis, technical reviews) Tier 2 Plays: Unknown stage: Targeted ads with segment-level personalization Aware stage: Automated nurture + SDR outreach Engaged stage: Group demos, case study packages Considering stage: Templated resources, standard sales process Tier 3 Plays: Unknown stage: General demand gen, no incremental spend Aware stage: Retargeting + automated nurture Engaged stage: Self-serve options (interactive demos, product tours) Considering stage: Standard sales process, earn your way to human time Budget Allocation Rule of Thumb Most companies should allocate roughly: 50-60% of ABM budget to Tier 1 (your smallest group, highest investment per account) 30-35% to Tier 2 (your middle group, balanced approach) 10-15% to Tier 3 (your largest group, automation-first) This feels backwards at first. You’re spending the most money on the fewest accounts. But that’s the point. KPIs You Should be Tracking Stop measuring ABM success by marketing metrics. Start measuring it by revenue metrics. Traditional ABM metrics teams track: Target account engagement rate Advertising impressions on target accounts Email open rates from target accounts Website visits from target accounts These are activities, not outcomes. They tell you if your machinery is running, not if your machinery is working. Better metrics: Coverage rate - What percentage of your target accounts are actively engaged with your brand right now? Velocity - How quickly do accounts move from first engagement to opportunity stage? Win rate by tier - Are you actually winning more deals in your tier 1 accounts than tier 2/3? Deal size differential - Do ABM-influenced deals close at higher ACV than non-ABM deals? 
Account penetration - How many contacts within target accounts are engaged vs. just one champion? One client tracked all the traditional metrics religiously. High engagement rates, great open rates, lots of website visits. But their win rate in tier 1 accounts was actually lower than tier 2. Turns out they were over-indexing on activities that made them feel productive without actually influencing buying decisions. We shifted focus to quality signals and engagement from actual decision-makers. Win rate improved by 23% in the next quarter. Making the Shift If you’re stuck in 2016 ABM mode, here’s how to start moving forward: Audit your current state Map out every ABM tool, workflow, and campaign you’re running right now. Be honest about what’s working and what’s not. Define your signal framework Pull your last 2 quarters of closed-won deals. Document what signals appeared before they closed. Those are your high-value signals. Tier your target accounts Map your entire TAM into tiers based on fit and deal size, and anything else that’s uniquely relevant to your biz. Not just your “ABM list” of 100 accounts. Design 3 plays One play per account tier. Keep them simple. You can add complexity later. Run a pilot Pick 25, 50, 70 tier 1 accounts. Whatever you realistically have the buy-in, budget, and resourcing for. Run your new play. Measure everything. Iterate weekly. Scale what works Once you prove the model on tier 1, expand to tier 2 and 3 with appropriate modifications. The companies that move fastest on this transition will build significant competitive advantage. ABM in 2025 doesn’t look like ABM from 2016. Smaller budgets, nimble tools, continuous orchestration, and tight alignment between marketing and sales. Time to update your playbook. 
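The “define your signal framework” step can be sketched in a few lines: take recent closed-won deals, count which tracked signals appeared before close, and rank them by prevalence. The deal records and signal names below are made up for illustration.

```python
# A minimal sketch of a signal audit over closed-won deals.
# Deal records and signal names are hypothetical.
from collections import Counter

closed_won = [
    {"account": "A", "signals": {"pricing_multi_view", "api_docs_after_demo"}},
    {"account": "B", "signals": {"pricing_multi_view", "relevant_job_posting"}},
    {"account": "C", "signals": {"pricing_multi_view", "api_docs_after_demo",
                                 "webinar_attended"}},
]

counts = Counter(s for deal in closed_won for s in deal["signals"])
wins = len(closed_won)

# Signals that precede a majority of wins are your high-value candidates;
# the long tail is probably expensive noise.
for signal, n in counts.most_common():
    print(f"{signal}: {n}/{wins} wins ({n / wins:.0%})")
```

Even this crude count is enough to separate the handful of signals worth routing to sales from the dozens that only generate alerts.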
See ya next week, Kaylee ✌

The Great ABM Unbundling Is Here

demandloops@substack.com · 1/18/2026
View this post on the web at https://demandloops.substack.com/p/yes-but-only-if I almost took a job that would have been a disaster. This was about four years into my marketing career. A Series B startup, decent funding, impressive leadership team. The VP of Demand Gen role was a huge step up from where I was. They wanted me to build their demand gen function from scratch. I said yes to the interview. Full, enthusiastic, unconditional yes. I made it through their five rounds of interviews, but then during the offer negotiation, I started asking some more precise questions. Turns out “building from scratch” meant I’d have zero budget for six months while they “proved out the ROI.” What does that even mean? I’d be reporting to a CEO who wanted daily metrics updates but wouldn’t commit to any specific headcount. And the “impressive leadership team” had turned over three demand gen leaders in eighteen months. I backed out. Felt terrible about it. Worried I’d burned a bridge. This leadership team is well connected in this space. Everyone knows their names. But backing out of a bad yes taught me something more valuable than any role could have: the power of conditional acceptance. 👋 Hi, it’s Kaylee Edmondson and welcome to Looped In, my newsletter covering demand gen and growth in B2B SaaS. Subscribe to join 2k+ readers who get Looped In delivered to their inbox every Sunday. The Unconditional Yes Trap We’re taught that ambitious people say yes. That growth comes from taking on more. That opportunities favor the eager. And look, there’s some truth there. I didn’t get from marketing coordinator to VP in six years by turning down challenges. 
But the marketers I see thriving right now aren’t the ones with the longest task lists. Look at any of them. I’m sure you’ve got a few top of mind right now. They’ve all mastered strategic, conditional acceptance.

“Yes, but only if...”

This phrase has become my filter for everything from client work to speaking opportunities to that random LinkedIn coffee chat request. And the CMOs I work with who struggle most are usually drowning in commitments they made without conditions.

One CMO I worked with recently had said yes to launching ABM, building a content engine, running a rebrand, AND expanding into two new markets. All in Q1. With the same team size and budget as the previous year. When I asked why she’d agreed to all of it, she said “I didn’t want to seem like I wasn’t up for the challenge.” That’s the trap. We confuse conditional acceptance with weakness.

Career Opportunities: Protecting Your Trajectory

Let me tell you about when I joined Chili Piper as Director of Demand Gen. They approached me about building their demand function as one of the founding marketers. Zero to one build, direct report to CEO, all the sexy startup stuff. I wanted it. Bad. But I didn’t just say yes. I said “yes, but only if...”

I needed to report directly to the CEO, not through another layer. I needed confirmed budget for at least the first two quarters. I needed clarity on what success looked like in months 3, 6, and 12. And I needed to know that if I built something that worked, I’d have a path to build the team underneath it. Oh, and on a personal note, it was Q1 of 2020 (IYKYK) and two days before accepting the role, I’d just found out I was pregnant. So I needed those conditions to be fully accepted as well.

All of those conditions were met. And that role became one of the most formative experiences of my career. Compare that to a friend who took a “VP of Growth” role at a startup. She said yes immediately, excited about the title bump.
Three months in, she realized the role meant demand gen, product growth, customer marketing, AND partnerships. With a team of zero. She lasted six months before burning out.

The conditions you should set for career opportunities:

Authority matching responsibility. If they want you to own outcomes, you need decision-making power. “Yes, but only if I have final say on budget allocation and team structure.”

Resources matching expectations. Ambitious goals are great, but they require fuel. “Yes, but only if we can commit to X budget and Y headcount within the first quarter.”

Clear success metrics. Vague goals lead to vague performance reviews. “Yes, but only if we align on specific KPIs and how success will be measured.”

Alignment with your 2-3 year plan. Every role should build toward where you want to be. “Yes, but only if this gives me experience in [specific area] that I need for my next move.”

The good opportunities can meet these conditions. The bad ones reveal themselves immediately.

Work Responsibilities: Guarding Your Capacity

About four months into going solo, a client asked if I could “quickly” help them with a rebrand project. Old me would have said yes immediately. Need to keep clients happy, right? New me said “yes, but only if we can push the demand gen strategy work to next month, or if you want to bring in additional resources for the rebrand.” They chose to bring in a brand specialist. Project went great. I stayed focused on what I actually do best.

When I was managing a $XXM demand gen budget at Campaign Monitor, the requests were constant. “Can you hop on this customer call?” “Can DG own this launch?” “Can you scale this across two new channels?” Every single request seemed reasonable in isolation. But unconditional yeses meant my team was scattered across a million different priorities.
I started requiring conditions for any new ask: “Yes, but only if we can deprioritize the content calendar refresh we planned for this quarter.” “Yes, but only if success is defined as X measurable outcome, not just ‘launch the thing.’”

Some people pushed back. Said I was being difficult. But the requests that were actually important found a way to meet the conditions. The ones that were just someone else’s problem being punted to marketing quietly went away.

The conditions you should set for work requests:

Resource reallocation. Nothing is free. “Yes, but only if we agree on what comes off the roadmap to make room.”

Defined success criteria. If you can’t measure it, you can’t manage it. “Yes, but only if we align on what done looks like before we start.”

Appropriate timeline. Urgency doesn’t mean immediately. “Yes, but only if we have realistic time to do this right, not just fast.”

Clear ownership. Someone needs to be the DRI. “Yes, but only if I have final decision-making authority OR if someone else is clearly the lead.”

These conditions force the asker to clarify what they actually need. Half the time, once conditions are stated, the request changes completely.

Personal Responsibilities: Protecting Your Energy

I get probably 15-20 LinkedIn messages a week asking to “pick my brain” about demand gen, solopreneurship, or career stuff. I want to help people. But I learned pretty quickly that an unconditional yes to all these requests was destroying my ability to do actual client work. So I started using “yes, but only if...”

“I’d love to help! I do most of my mentoring through this newsletter, but I’m happy to do a quick async exchange if you can send me your three most specific questions first.”

About 60% of requests never follow up. Which tells me they weren’t that serious to begin with. The 40% who do follow up? Those conversations are focused, valuable for both of us, and don’t require me to block out 30-60 minutes I don’t have.
Same thing with speaking opportunities. I love sharing what I’ve learned. But not every conference or podcast is worth the time investment. “Yes, but only if the audience is primarily B2B SaaS marketers in demand gen or growth roles.” “Yes, but only if I can do it remotely (since I’m balancing client work and family).” These are my conditions that ensure mutual value. The right opportunities can meet them easily.

The conditions you should set for personal requests:

Mutual value. Your time is finite. “Yes, but only if we can make this valuable for both of us.”

Specific parameters. Vague asks lead to vague outcomes. “Yes, but only if you can send me specific questions in advance.”

Realistic time investment. Not everything needs to be a 60-minute call. “Yes, but only if we can do this async/in 15 minutes/with a clear start and end time.”

Your Move

Next time someone asks you to take on a new project, consider a role, or help with something outside your core work, try this:

Pause before answering. Even if that pause is just “let me think about that and get back to you in 24 hours.” Ask yourself what would need to be true for this to be a clear yes. Then communicate those conditions clearly.

You’re not being difficult. You’re being strategic about where your finite time and energy go. The right opportunities will meet your conditions. The wrong ones will filter themselves out. And you’ll end up with a career and workload built on intentional choices, not accidental obligations.

Here’s to growth.
See ya next week, Kaylee ✌

Yes, but only if...

demandloops@substack.com · 1/11/2026
View this post on the web at https://demandloops.substack.com/p/book-highlights-from-2025

I read more this year than I have in the last five combined. Not because I suddenly had more time. I didn’t. What changed was realizing that most marketing advice on LinkedIn is just recycled takes from the same three frameworks everyone already knows. My feed became more of an echo chamber earlier in the year and it wasn’t sparking the same inspo… so back to books I went.

Books force you to sit with an idea long enough to actually understand it. And I think that was the main switch-up I was really needing. So I started blocking 30 minutes every morning. Before client calls, before Slack, before the daily fire drill of fractional work. Typically curled up in a chair in my office, with my Kindle and some coffee. I found myself returning to marketing classics from years ago, and a few new ones that I left lots of highlights in.

👋 Hi, it’s Kaylee Edmondson [ https://substack.com/redirect/c4861944-5f6b-48b7-b1a7-8a9e25e5977f?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] and welcome to Looped In [ https://substack.com/redirect/3a44f8d6-f149-4e32-b616-ff641409f0cd?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ], my newsletter covering demand gen and growth in B2B SaaS. Subscribe to join 2k+ readers who get Looped In delivered to their inbox every Sunday.

I read 30+ business books in 2025, but to save you from another comprehensive list of titles, these are the ones I either found myself re-reading or reading for the first time but loving. The ones with the most highlights logged in my Kindle. The ones I kept coming back to when I was stuck on a positioning problem or trying to explain why a client’s demand gen strategy wasn’t working.
The Marketing Fundamentals Revisited

🔗 Storynomics [ https://substack.com/redirect/26527856-ed83-4545-a569-a3a5092bec9e?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] by Robert McKee & Thomas Gerace

Every CMO I work with thinks they need better content. But really they need a better story. This book breaks down why facts don’t persuade but stories do. McKee’s framework for narrative structure applies directly to B2B messaging. I started using his “inciting incident” concept in client positioning work and it’s been a total shift in how we talk about the problem our product solves.

🔗 Obviously Awesome [ https://substack.com/redirect/aea80fd7-77e5-45ae-bf49-d4a73ffed423?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] by April Dunford

I’ve recommended this book to probably 15 clients. Dunford’s positioning framework is the most practical thing I’ve found for startups who know their product is good but can’t explain why someone should buy it. Her five-component framework (competitive alternatives, unique attributes, value, target market, market category) is now the first thing I walk through in discovery calls. If you can’t answer these five questions clearly, your demand gen strategy won’t matter.

🔗 How Brands Grow [ https://substack.com/redirect/b64b2147-adfa-449c-8065-93265e21517f?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] by Byron Sharp

This book makes me question everything I thought I knew about B2B marketing. Sharp’s research proving that brand growth comes from reaching more buyers, not from increasing loyalty in existing customers, goes against every ABM playbook out there. His data on mental and physical availability explains why companies with massive tier 1 account lists and zero brand awareness struggle to hit pipeline. I now start every client engagement by asking how many people in their TAM have even heard of them. Usually the answer is depressing. That’s the real problem to solve.
The Books That Made Me Better at the Craft

🔗 Hey Whipple, Squeeze This [ https://substack.com/redirect/386b0ecc-bd80-4a2d-98c4-f2d4603e1a47?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] by Luke Sullivan

This is technically an advertising book but it’s the best thing I’ve read on creative strategy. Sullivan’s chapter on how to write a headline applies directly to email subject lines, LinkedIn posts, and landing page copy. His rule about never using puns unless they make the idea clearer has saved me from a lot of bad campaign names. The section on testing your idea by explaining it to someone outside marketing is something I do before every campaign launch now.

🔗 Very Good Copy [ https://substack.com/redirect/bc3ed825-62fe-4f61-bcc7-36a6bcdbfe26?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] by Eddie Shleyner

What I love about this book is how Shleyner deconstructs actual marketing copy line by line. He shows you the before and after. He explains the exact word choices that make something click. I actually keep a physical copy of this one on my desk so that I can flip to whatever technique I’m struggling with in the moment. The chapter on writing subject lines that get opened without being clickbait has probably saved me from a dozen terrible A/B tests.

🔗 Steal Like An Artist [ https://substack.com/redirect/60a5ff4b-1496-417a-8624-8bd6c1c4b80d?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] by Austin Kleon

This book is about creativity but the principles apply to marketing as much as anything else. Kleon’s idea that you should steal from multiple sources and remix them into something new is exactly what good marketers do. I started keeping a swipe file of competitor campaigns, customer stories, and successful plays from other industries. When I’m stuck on campaign strategy, I pull patterns from what’s working elsewhere and adapt them. Creative theft beats staring at a blank page.
The Ones About Working With People

🔗 Likeable Badass [ https://substack.com/redirect/43216642-6a71-47a8-b357-999c521f9c36?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] by Alison Fragale

I picked this up based on a glowing recommendation from a friend. Fragale’s framework for combining warmth with competence is something I now think about in every stakeholder meeting. Her research on how women especially get penalized for being too assertive but also for being too nice is something every woman in B2B marketing should read. Changed how I structure difficult conversations.

🔗 Busting Silos [ https://substack.com/redirect/7e7d610f-9de6-4502-a284-797125660b15?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ] by Hillary Carpio and Travis Henry

Most marketing teams I work with are accidentally working against each other. Demand gen optimizes for SQLs, product marketing optimizes for launches, brand optimizes for awareness. Carpio and Henry break down how Snowflake built a marketing function where every team pulls in the same direction. Their framework for anticipating customer needs across the entire journey, not just your team’s slice of it, is something I reference in every org design conversation now. Also a brilliant look into designing a “one-team” ABM program.

The Guides Worth Reading

🔗 Perplexity at Work [ https://substack.com/redirect/d503bf1f-c85c-423a-b045-96b5232d8fc3?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]: A Guide to Getting More Done

I was skeptical about another AI productivity guide but this one is different. It’s not about prompt engineering or trying to get AI to write your emails. It’s about using AI for research, data analysis, and exploration. The section on how to validate AI-generated insights before using them in client work is something I reference weekly. Made me way better at market research and competitive analysis.
🔗 Claude’s Prompt Library [ https://substack.com/redirect/4b33fd5d-edc0-4d52-98ad-d6e2bbd0555f?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]

If you’re still writing prompts from scratch every time, maybe try this prompt library instead. Anthropic’s prompt library has templates for everything from data extraction to content analysis. I adapted their “entity extraction” prompt for pulling signals from CRM data. Their “meeting transcript analysis” template saves me hours after client calls. These aren’t magic, more just well-structured starting points that you can customize.

🔗 Lovable Tips and Tricks [ https://substack.com/redirect/7fde131a-64a2-400f-8b39-634f091067b3?j=eyJ1IjoiNzF4cDQwIn0.VLQsNiiAawz-DS2VtWTrcrG2IFeLIxnWNFcK9akSjpY ]

I started using Lovable for quick prototypes and MVPs this year. Their best practices guide is surprisingly good at explaining how to think about building with AI. The section on when to use no-code versus traditional development changed how I scope projects. Their tips on testing and iteration apply beyond just their platform. Honestly just an insane tool for changing the pace from ideation to shipping.

This Year Felt Different

I stopped reading to finish books and started reading to solve problems. When I hit a positioning challenge with a client, I’d grab Obviously Awesome. When I needed to explain why their brand awareness budget mattered, I’d revisit How Brands Grow. When I was stuck on campaign copy, I’d flip through Very Good Copy for examples.

Books became tools instead of trophies. I logged more Kindle highlights, dog-eared pages, wrote in margins, referenced them in client presentations. Some of these I read cover to cover. Others I skimmed for the one framework I needed right then.

What I’m Reading Next

I’ve got a stack for 2026 that’s already too ambitious.
Some at the top of the stack include:

- Exit Path by Touraj Parang (to better understand valuations)
- The Mom Test by Rob Fitzpatrick (customer research is still my weakest skill)
- Play Bigger by Al Ramadan, Dave Peterson, Christopher Lochhead, and Kevin Maney (category design has always been an interesting debate and I want to read their take)

But I’m also giving myself permission to abandon books that aren’t useful. Life’s too short to finish something just because you started it.

What were your best reads this year? Anything I should add to the 2026 stack? Hit reply and let me know.

Here’s to growth! See ya next week, Kaylee ✌

Book Highlights from 2025

demandloops@substack.com · 1/4/2026
Thank you so much for subscribing to Looped In! It really means the world to me. I wanted to create a space to share demand gen and growth learnings and insights, and a bit about career development and acceleration, too. So that’s exactly what you’ll get — plus tons of free tips, resources, and guides to help you in your day-to-day.

WHAT TO EXPECT

You can expect an email from me every week covering topics like:

Career acceleration:
* Building your career as a product [https://email.mg-d1.substack.com/c/eJx00LluhDAUBdCvwR3INuClcJGG30BeHowVvMhLpMnXR0yaSZH66b57dK1ucKbyVD0ePvr6ADfWbqotPjefInJqlgYLyxAowtkqBcaYIAjaX_sJEYpu4Hbd3q4zkeihQHDNgVHLLZfEWLZKK2csj2PBmglAXlFMV0IJJ8sdm-aJuUU4y6k9sBOr1NN3BZMazcOCwzk6Mt24pu3nZFNAvu5HgZdFtdIBXerRWq7D_DHQbaCbg6Cju1LK9U9yoFse6Ga6v5yP5_hMvYxWF4Ay6jrqMZfkum0od7PbFEKPvj13iNpc4H67cjeXt_peafdOESEo4xwVZYYFv0zTmUJy_SovbO3GpaB9VG8q1P5dvlco9-OFspWseMHoS9GfAAAA__8XS5F2]
* The rise of solopreneurship [https://email.mg-d1.substack.com/c/eJx00DvO3SAQBeDVmM4WYJtHQZHmbsMCZnyNYh7iEenP6iPfNH-K1KMz59PxtuM71y8z0hlSaBfC3IZrvobSQ04EzKodVV4QNEyKXStKKSMYbbiPNyastiMctn-7rkyTy2gBVNHdryDUvjK9SpCImp-CWqk5I8FwynfGmWTbE1vWRcCmwEvuTwpq13b53dDlzsu00fiegS0Prlv_c_E5ktCOs-LHYnodSG5z9V7atP6Y-GviL8BoE9w5l_ZPcuKvMvFXv3CuoeGcz7nlO5eKaWBtVyikDHf4HONIoX8dmKy7Ef62lOHu4O2zzxHAMKW4kJJU46aNfjTLO8cM464fZhsOcrQhmW8e0v-7-WhYn8cbFzvb6UbJL8P_BAAA___g9I7K]

Building your team:
* A guide to hiring your first B2B SaaS demand gen marketer
[https://email.mg-d1.substack.com/c/eJx00LuOrSAUxvGnoQMBb1hQnMbXMHxgKxkBw2US5-knTpvpTP7J-vJfWqQ9ph-1-MNFXx5o9Wmi2Ozv4mJAVg6TplKPCGQnxnGdCSGMYHDhOnYfIKmCdlP1fTpSvjggOKYT18D4bLkeuZiHgWgiByK4nAVI5BXFlFFKBR3f2dD3k5ukWjVbxokwxVfVfWVUscLVDMTv2LLpxlWln72JHrm87Qmfllhz62inPXK5Sjt8NGxpYGkQdHRXjLn8SXZsuTq2ZNDuDveU8R1rbgt-l_Z5v90xtF45tWMiV9WrCd7X4Mpjw6D0ifb17ar6dEbdPa3OSiYEjNNEktTNQJ-qbo8-2nqmJ1xapWkbvXJBvrlQ_bf7ViDfh0eYOOV0oORLwm8AAAD__2_qkZU]
* Structuring your demand gen interview [https://email.mg-d1.substack.com/c/eJx00DtuhDAQBuDT4A5kDxhM4SIN10B-DGAF28iPKJvTR-w2myL1zD_z6Teq4B7TQ9awueDygbbNVWeT3FVcDMTKftZUmJGgZNPIZ0EpZQS9cue6Y8CkCtpVlbdpz2ZySORCU245HbZpNsAYoEC6zbqfGIVRECeBAmfAJjbcsa7vRjsIayYwG7WCz6r7yahjgasZqN9by7obV5T57Ez0xOV1S_i0yJIqklMepVy56T8aWBpYLHoV7Bnjlf8kG1iuBpbDJRf29hFragN-l_a13-4YWq-C2jGRq-rVRO9rcOWxYlD6RPv6dlV9OqPunlZnJRMCxmkiSepmoE9Vt0cfbT3Tk5urttErF-Sbi5R_u68Z0314gJEzTgdKviT8BgAA__9vqpGV]

Tactical growth tips:
* Enabling your SDRs: Intent Data + Social Listening [https://email.mg-d1.substack.com/c/eJx00M8OnCAQx_GnkZsGWFQ8cOjF1zADM7qkAoY_TbZP32gv20PP5Mt88nNQ6Uj5Y1rcffTlTdiXZovL_qo-RYbmtViu3cTIiHkaF805F4wC-HM7KFKGSrhB_Xp9iYW9jd3tBEJZ0DROXCMJuSBpi6DdKJVm3kguRyHFLNSdDa9hQqXRzdLtHPW4wPC7kE1VXp3i4ehRDDeugvs5uBSYL9ue6bGYmhux07xrvUr3-tHJtZMrUoCIZ0pX-afs5Hp1cqUI9vTx6D-p5b5gLr2PlWLtESr0JTkPJ7ua3VwKoUVfP9vTEP69dzV7egf3UptHI7SW0zyzbGyn-OMajhQStjM_4NIspgA-mi8Zq_9dvxXK98dKTqMYueLsl5F_AgAA__8jSZOb]
* Rethinking event strategies [https://email.mg-d1.substack.com/c/eJx00DtuhDAQBuDT4A5kGwx24SIN10B-DGAtfsiPlTanj9g0myL16J_55jeqwhHzS7awu-DKCbYvTReTXaouBmTlKDTmZkYgyTIzwTHGBIFX7toOCJBVBbup-jEdiUCnpCMxIAQsjHNFxM44VYxZoZd9BIEJcpJiygglC5nu2DAOs524NQs1O7acCTV8F9Cx0tRN2B-9JcONq8o8BhM9cmXbM7wtsuYG6JJnral041dH146uFrwK9ooxlT_Jjq6po2uGerrwcOHo4Qmh9qXe7xwOCkpNbyZ634Krrw2C0hfY3yup6csZdfezOSsJ53ReFpSl7ib81gxH9NG2K7-ZpWkbvXJBfnhQ_bfzViDfiyc6M8LwhNFT0p8AAAD__32hj1g]

The hope for this newsletter is to be something you look forward to reading because you know it will spark inspiration, action, and help
you build confidence in your career. 💙

So with that, one last thing: Can you take 30 seconds to respond to this email? Hit reply and tell me how you found my newsletter and what you’re hoping to learn. I seriously read and reply to every single one.

Thanks again for being here.

All the best, Kaylee

© 2025 Kaylee Edmondson, 548 Market Street PMB 72296, San Francisco, CA 94104

You're in the loop!

demandloops@substack.com · 12/17/2025