2026.17: Let the Machine Lift. You Lead.
AI compresses time. It does not compress responsibility.
Happy Friday from sunny Phoenix!
Years ago, I worked with a utility company that had just lost a worker in the field.
Their ask was simple. Move our SOPs from 3-ring binders to iPads. Substitute a screen for the paper. Ship it. Done.
We didn’t stop there.
We built something that changed how the work happened, not just where it was stored.
That engagement taught me something I keep using. The technology mattered. The judgment about how to use the technology mattered more. Technology without the judgment behind it would have just been a worse binder.
I’ve been thinking about that engagement a lot this month. Three articles I read all sound the same warning, in different vocabulary, about the moment we’re in with AI. Different sources, different audiences, same line in the sand.
That’s a Signal worth slowing down for.
Let’s break it down.
Signal:
The pressure is real, and it’s pushing leaders sideways
Boards ask about AI. Customers ask. Investors ask. Most leaders I talk to feel like they’re behind, even when they’re using it more than they admit.
That pressure produces a specific failure mode: people start handing off the calls they get paid to make. Not the prep work. Not the data crunching. The judgment.
Wrong trade. AI compresses time. It does not compress responsibility. The faster you move, the more deliberately you have to think about what you’re still on the hook for.
Dr. Ruben Puentedura’s SAMR model has been kicking around education for decades. Anyone who knows me has heard me beat that drum for most of them. Substitution, Augmentation, Modification, Redefinition. Most digital transformation work, in my experience, stops at Substitution. We took a paper binder and made it a PDF on an iPad. Same job, different surface. We took an in-person meeting and made it a Zoom call. Same meeting, lower lighting.
Same thing’s happening with AI right now. People are using it as a substitute for thinking instead of as a tool that changes what thinking can produce. That’s a missed opportunity at best. At worst, well, I’d rather not think about it.
Digital transformation has largely been performative. AI transformation is heading the same way unless leaders draw a clearer line.
Here’s what each of the 3 articles says about that line, and what to do about it.
Article 1: Get literate enough to ask the right question
Source: MIT Sloan Executive Education, “AI and Leadership: Navigating Strategy, Ethics, and Opportunity” (April 24, 2026)
The MIT Sloan Executive Education team makes a useful argument for leaders who feel like they need a computer science degree to keep up. They cite Gallup data showing 69% of leaders now use AI at work, and 19% use it daily. That’s a lot of people relying on outputs they don’t fully understand.
Their position: you don’t need to build models to lead well in this environment. You need enough literacy to interpret what the model is telling you, and to connect it to what the business is actually trying to do. They call it AI literacy. It’s the difference between knowing how a tool works and knowing whether the tool is solving a real problem.
The trap they flag is one I see every week. AI initiatives that aren’t tied to clear business priorities tend to produce insights nobody can act on. Pilots get launched. Decks get presented. Nothing changes. 6 months later, the tool is forgotten and the budget is gone.
Whenever a student or a client tells me they want to use AI, my first question is the same. What problem are you trying to solve? If they can’t answer, we’re not having a serious conversation. We’re doing technology in search of a problem, which is the most expensive form of theatre I know.
Takeaway: Get literate enough to ask the right questions. Tie every AI initiative to a business outcome you can measure. Then build the team’s ability to use it, not just yours.
Article 2: Speed is not the same as quality
Source: Loeb Leadership, “Why Human Judgment Is the Ultimate Competitive Advantage in the AI Era” (April 27, 2026)
The Loeb piece makes a sharper point. AI outputs sound authoritative because they’re confident, fluent, and well-packaged. They feel like answers.
They’re not. They’re patterns the system found inside the data and the design choices it was given. Every output reflects somebody’s assumptions about what counts as a good answer. The model doesn’t tell you that. You have to remember it.
Their warning lands hard: when you compress decision time, the margin for error compresses too. Speed without scrutiny gets you to the wrong conclusion faster than you used to get there. That’s not progress. That’s risk dressed up as productivity.
Loeb pulls out 4 habits worth keeping in this environment:
Frame the trade-offs clearly
Look at second-order consequences
Sort signal from noise
Resist the pull to certainty
None of those are technical. They’re the same calls a good operator has always made. The only thing that changed is the speed of the inputs hitting your desk.
The other piece of their argument I want to flag is governance. AI governance gets called a compliance issue or an IT problem. It’s neither. The piece argues, and I agree, that AI governance is a leadership job. Who validates outputs before they shape decisions. What escalates and to whom. How you watch for bias and drift. Where accountability sits when something goes sideways. Those are leader questions. They don’t get answered by a vendor or a Slack channel for the IT team.
If you haven’t had this conversation with your senior team yet, that’s your tell. The question isn’t “Are we using AI enough?” It’s “Do we know who is on the hook when AI gets it wrong?” The answer needs to be a person, not a policy.
I’ll say this part plainly. When one of the big consulting firms hands you an 800-page report on how AI will transform your business but spends no actual time inside your business understanding how the work works, that report is bullshit. Governance is not delegated. It’s yours.
Takeaway: AI outputs are not neutral. Treat them like a draft from a smart but biased analyst. Speed is fine. Skipping the second look is not. And governance lives in your office, not down the hall.
Article 3: Tasks belong to software. Trust belongs to you.
Source: MIT Sloan Management Review, “When Not to Use AI” by Benjamin Laker (March 30, 2026)
This is the cleanest framing I’ve read on the subject. Laker splits your work into 2 buckets: tasks and trust.
Tasks are repeatable. They benefit from speed. Hand those off. Let the model draft the timeline. Let it crunch the numbers. Let it generate the slide skeleton. That’s what it’s good at.
Trust is the human currency of management. Feedback. Relationships. Difficult news. Hiring calls where you’re reading resilience between the lines of how someone tells a story. Strategy decisions that hit how the team feels about the future. Don’t hand those off.
His test, which I think is the best line in any of these 3 articles: would I stand by this if my name were on it alone? Would I say it out loud to someone I respect? If the answer is no, you slow down and re-engage.
Laker also flags a sneaky failure. AI tools are designed to be agreeable. They produce arguments that support whatever direction you’re already leaning. That feels like decisiveness. It’s actually a narrowing of your thinking. You ask a leading question, you get a confirmation. You feel sharp. You’re actually getting dumber, slowly.
His fix is the one I’d steal first. Occasionally instruct the tool to argue the other side. If you’re ready to reorganize a team, ask for the strongest case against. If you’re set on hiring someone, ask for the reasons it might not work. Force the counter-argument before you commit.
The line that stuck with me from his piece is about leaving the lifting to the machine and the leading to you. That’s the whole game right there.
Takeaway: Sort your week into tasks and trust. Software does the tasks. You do the trust. Once a week, ask the model to disagree with you, hard.
Scale:
The mistakes I see most often
Across the 3 articles, the same failure modes show up. Worth naming them plain.
Sending what the model wrote. If the output is going out under your name, your hands need to be on it. Skimming doesn’t count. Edit until you’d defend every sentence to the person on the receiving end.
Treating velocity as virtue. Faster decisions aren’t better decisions. They’re just faster. Board pressure to “move on AI” doesn’t override your job to think. Decision quality has to outpace decision velocity, and that gap is where you actually earn your title.
Letting the model agree with you. If you only ask AI to support your view, you’re using it as a mirror. Mirrors don’t sharpen thinking. Counter-arguments do.
Outsourcing the relational calls. Performance feedback. Layoffs. Promotions. The opening of a hard conversation. None of these belong to a model. They belong to the person who has to live with the result.
Treating governance as somebody else’s problem. Compliance is not governance. IT policy is not governance. The questions about who validates, who escalates, and who’s accountable when things go wrong have to be answered at the leadership table. If they’re not, you have a hole. The hole shows up later, usually in front of customers or regulators.
Try this for a week
No new platform required. Better hygiene on what you’re already doing will get you most of the way there.
Pick 3 decisions on your calendar for the coming week.
For the first one, go AI-free. No prompts, no drafts, no summaries. Think it through yourself, with whatever inputs you’d normally have. Notice what changes when you have to carry the full weight. Most leaders find their reasoning gets sharper, not just slower.
For the second, let AI do the prep. The data pull, the timeline, the meeting agenda. Then make the call yourself, in your own words, with your name on it. Keep the lifting and the leading separate. On purpose.
For the third, push back on yourself. Ask AI for the strongest argument against the direction you’re leaning. Read it carefully. If it doesn’t change your mind, you’ll know more clearly why. If it does, you just saved yourself a bad call.
That’s a workable first week. Snow melts from the edges. The leaders who handle this well will not be the ones who shouted “AI everywhere” the loudest. They’ll be the ones who quietly figured out where it belonged in their work and where it didn’t, and who kept their hands on the wheel where it counted.
The 3 articles converge on one idea, even though none of them say it quite this way. AI is not the threat to leadership. The threat is leaders who quietly stop leading because the tool is doing enough of the work to look fine from the outside.
It’s not fine. The people on your team can tell the difference between a message you wrote and a message you forwarded. The candidate you interviewed can tell whether you were actually listening. The customer reading your apology email knows whether somebody meant it.
Let the machine lift. You lead.
Deep Dive:
No deep dive this week. The 3 articles above are doing the work, and I’d rather you spend your time with them than with another piece of mine layered on top.
Go read the 3 articles. I’ll meet you back here next week.
Thanks for reading!
Where have you seen this go right? Where have you seen it go sideways?
I’m genuinely curious. The patterns are still being written, and the ones we name out loud are the ones we get better at handling.
Drop a comment, send me a note at jt@jasontate.ca, or push back on anything I got wrong. The newsletter isn’t the conversation.
The conversation is the conversation.
See you next Friday.
Best,
JT
PS - If someone forwarded this to you and you want it in your inbox directly, subscribe HERE.

