The AI Workflow Problem
Last week I was in a workshop with a product team from a software company. They build highly technical, domain-specific products for technical buyers, but they're shifting toward non-technical customers and trying to move faster with AI tools.
What struck me wasn't the AI discussion itself. It was watching teammates hear, for the first time, how the person sitting next to them actually uses AI.
The designer had been producing frontend code and mockups with Bolt. The product manager was drafting requirements in ChatGPT but frustrated with how verbose the output was. Engineers were writing test plans for AI-generated code but struggling to communicate them to an offshore QA team. Everyone was using AI. No one had talked about it. The tools were completely individualized: personal productivity hacks that never surfaced into shared practice.
The pattern I kept hearing: AI helped individuals move faster, but not the team. Which created its own kind of frustration.
The assumption that isn't holding
There's an implicit theory behind most AI adoption: faster individuals make faster teams. If everyone accelerates their own work, the whole organization speeds up.
But that's not how collaborative work actually functions.
When a designer ships frontend code the engineer didn't know was coming, that's not acceleration. It's surprise. When AI-generated requirements are verbose in ways that create more work downstream, the PM saves time and the engineer loses it. When test plans exist but don't reach the offshore team in usable form, individual productivity creates collective friction.
Casey Newton's latest Platformer piece surfaces data that makes more sense through this lens. A METR study found experienced developers using AI completed tasks 19 percent slower, but reported feeling 20 percent faster. A Section survey found two-thirds of workers say AI saves them zero to two hours weekly, while executives claim eight or more. PwC reports 56 percent of companies are getting "nothing" from AI investments.
The conventional explanation is that workers are undertrained or resistant. But there's a simpler one: we're measuring individual output when the work that matters is collective.
Where the time actually goes
A Workday study found that time saved using AI was largely offset by extended reviews of AI-generated content. Newton calls this output "workslop," AI-generated work that looks like good work but lacks substance.
The burden doesn't disappear. It shifts. Executives use AI to produce more slides, more emails, more documents. Someone downstream reviews and corrects it. The executive feels productive. The worker feels buried.
This is what I saw in that workshop, just at the team level instead of the org chart. Everyone was producing more. No one was sure the production was landing anywhere useful. The tools hadn't changed how work flowed between people. They'd just increased the volume flowing through unchanged channels.
The real unlock isn't individual speed
Here's what I think the team I observed was bumping into: AI doesn't just let you do your job faster. It lets you do parts of someone else's job. The designer can produce code. The PM can generate test scenarios. The engineer can draft user-facing copy.
This is genuinely new. But it's disorienting if you're still organized around rigid role boundaries and handoff-based workflows.
The teams I've seen actually capture AI's value aren't asking "How can AI help each person move faster?" They're asking "What can we do together now that we couldn't before?" That's a different question. It requires rethinking how work flows, who does what, and what "done" means at each stage.
That's hard. It's change management, not tool deployment. Most organizations don't want to do that work, so they give everyone AI access and hope individual acceleration compounds into team performance. The research, and what I saw last week, suggest it doesn't.
The trust problem is real
I should be careful here. The obvious response is to redesign workflows so AI handles more work end-to-end without human review. But that runs into a barrier organizations can't wish away: AI isn't reliable enough for autonomous operation in most contexts.
Models hallucinate. They miss context. They optimize for plausible over correct. In domains with real consequences, letting AI work without oversight isn't transformation. It's recklessness.
So there's a genuine dilemma. Human review of AI output adds work. Skipping review creates quality problems. Neither path delivers what everyone expected.
The way through is being honest about where transformation is actually possible right now: where trust in AI output exists, where the capability matches the task, where risk is bounded, and where the workflow can genuinely change. That's a smaller set of use cases than the hype suggests. But it's not empty.
What I'm paying attention to
I don't think the story ends with "AI disappoints." I think we're in the awkward middle period where tools have outpaced how we organize work.
The teams figuring this out aren't treating AI as individual productivity enhancement. They're using it to renegotiate boundaries. Between roles, between stages of work, between what requires human judgment and what doesn't. That's slower to implement than handing out ChatGPT licenses. But it's where the actual value lives.
Back in that workshop, the most useful part wasn't the AI discussion itself. It was people finally talking to each other about how they work. The tools had been siloed because the conversations had been siloed.
Maybe that's the real unlock. Not faster individuals, but teams that actually understand how their work connects and can redesign those connections with AI as a catalyst rather than just an accelerant.