The Work We're Losing

The companies effectively using AI aren't automating everything—they're strategically preserving the expertise-building work that creates competitive advantage when technology inevitably fails.

On July 19, 2024, a routine software update from CrowdStrike triggered the largest IT outage in history. Airlines worldwide scrambled to recover from system failures that grounded flights and stranded passengers. Most major carriers—United, American, Southwest—restored operations within hours using manual backup procedures.

Delta Air Lines took five days.

The airline that had most aggressively optimized for automated efficiency suffered the most devastating failure. While competitors quickly switched to manual crew scheduling, Delta's staff had lost the expertise to match pilots and flight attendants to aircraft without their automated systems. The company lost $550 million in five days and cancelled 7,000 flights, affecting 1.4 million passengers.

The failure revealed something profound about optimization strategy. Delta hadn't just automated crew scheduling—they had automated away the human expertise needed when systems failed. The airline that prided itself on operational excellence had optimized away the capabilities that create true resilience.

This pattern—where the pursuit of automation inadvertently eliminates crucial human capabilities—has become a defining challenge of our technological moment. Organizations face a question they often don't realize they're asking: Which work should we preserve for humans, not because machines can't do it, but because doing it builds the capabilities we need?

The Intentional Friction Principle

The pressure to automate is rational and urgent. Every quarter brings headlines about competitors achieving efficiency gains through AI. Investors expect technology ROI. Teams are stretched thin. When AI can handle tasks that consume hours of human time, the business case often seems obvious.

Yet research from user experience design reveals something counterintuitive: removing all friction from processes often makes them worse, not better. Studies consistently show that strategic obstacles—what researchers call "intentional friction"—can improve decision-making, reduce errors, and enhance learning.

In interfaces, friction prevents mindless automatic interactions and prompts reflection. In workflows, it creates opportunities for pattern recognition and skill development. The principle applies broadly: the optimal level of automation is rarely 100%.

This insight challenges a fundamental assumption driving many AI implementations. We assume that eliminating human effort always improves outcomes. But what if some human effort serves a dual purpose—producing immediate outputs while building the expertise that enables future innovation and adaptation?

The Hidden Curriculum

In every job, there's a formal curriculum and a hidden one. The formal curriculum consists of explicit responsibilities—tasks that appear on job descriptions and performance reviews. The hidden curriculum consists of everything else people learn while doing those tasks: pattern recognition, contextual judgment, and the ability to navigate situations that don't fit standard procedures.

A customer service representative doesn't just resolve complaints; they develop intuition about which customers are likely to churn and why. A financial analyst doesn't just run reports; they build instincts about market dynamics that inform strategic decisions. A nurse doesn't just follow protocols; they develop clinical judgment that catches subtle signs the guidelines might miss.

This expertise accumulates gradually through what psychologist K. Anders Ericsson termed "deliberate practice"—sustained engagement with problems that stretch current capabilities. Ericsson's research suggests that expert performance typically requires about ten years of sustained, purposeful practice, with individual differences in performance tracking accumulated hours of such engagement.

When AI handles these tasks completely, it eliminates what researchers call the "practice ground" where expertise develops. The immediate efficiency gains are visible and measurable. The long-term capability losses are neither.

The Pattern of Reversal

Delta's failure illustrates a broader pattern across industries. The past three years have produced a series of high-profile automation reversals that follow predictable stages: initial efficiency gains, growing dependence on automated systems, and catastrophic vulnerability when those systems fail.

Amazon removed its "Just Walk Out" technology from all U.S. Amazon Fresh stores in April 2024 after it emerged that the supposedly automated system relied on over 1,000 human contractors in India to manually review transactions. What was marketed as seamless automation was actually human labor rendered invisible and relocated offshore.

Tesla has recalled over 2 million vehicles due to Autopilot system failures, with ongoing federal investigations into crashes that occurred even after recall updates. The promise of self-driving capabilities has repeatedly confronted the complexity of real-world scenarios that exceed current AI capabilities.

McDonald's ended its three-year AI drive-thru experiment in July 2024 after viral failures where the system misinterpreted orders and frustrated customers. The technology was quietly removed from over 100 locations—a complete reversal of what was supposed to revolutionize fast food ordering.

These failures share common characteristics: they occurred not because the AI was poorly designed, but because organizations had eliminated the human expertise needed to handle exceptions, emergencies, and edge cases that fall outside algorithmic training data.

The Research Evidence

MIT's comprehensive meta-analysis of human-AI collaboration provides the most systematic evidence to date about optimal automation levels. Analyzing 106 experiments across multiple domains, researchers found something surprising: hybrid human-AI teams often performed worse than the best individual performance of either humans or AI alone.

However, the study revealed crucial nuances. Task design mattered enormously. When AI handled information processing while humans maintained decision authority, performance improved. When AI replaced human judgment entirely, performance often declined. The researchers concluded that successful collaboration requires careful attention to which capabilities each party contributes.

This aligns with decades of research on automation in safety-critical industries. NASA studies of airline pilots found that automation-induced skill decay follows predictable patterns, with cognitive abilities like spatial awareness and troubleshooting deteriorating faster than physical skills. Medical research shows that healthcare providers' clinical skills can begin to decay after as little as three months without practice, with significant deterioration by six months.

The aviation industry has responded with explicit regulation. The FAA now encourages airlines to provide more opportunities for manual flying, stating that "maintaining and improving the knowledge and skills needed for manual flight operations is necessary for safe flight operations." This shift followed incidents like Air France Flight 447, where pilots lacking sufficient manual flying skills couldn't recover from an automation failure.

Three Categories of Work

The most successful AI implementations I've observed distinguish between three types of work before making automation decisions:

Category One: Pure Efficiency Work 

Tasks that are routine, well-defined, and contribute little to capability development. Processing invoices, scheduling appointments, data entry, basic report generation. These are ideal candidates for full automation because they free human capacity without sacrificing learning opportunities.

Category Two: Expertise-Building Work 

Tasks that require judgment, pattern recognition, or contextual decision-making. Complex problem diagnosis, difficult customer relationships, strategic analysis, creative problem-solving. Automating these entirely eliminates opportunities for humans to develop capabilities the organization needs for adaptation and innovation.

Category Three: Collaborative Work 

Tasks where AI can handle information processing while humans maintain decision authority and learning opportunities. AI analyzes data patterns while humans interpret strategic implications. AI drafts initial responses while humans provide contextual refinement. AI surfaces anomalies while humans investigate root causes.

The key insight: Category Two work should be preserved not because humans are better at it today, but because doing it builds the expertise that enables competitive advantage tomorrow.
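The triage above can be sketched as a toy decision helper. This is a minimal illustration of the article's framework, not a real tool; the attribute names (`routine`, `builds_expertise`) are hypothetical labels for the questions each category hinges on.

```python
# Toy sketch of the three-category automation triage described above.
# The two boolean attributes are illustrative stand-ins for a fuller assessment.

def categorize_task(routine: bool, builds_expertise: bool) -> str:
    """Suggest an automation posture for a task.

    routine          -- is the task well-defined and repetitive?
    builds_expertise -- does doing it develop judgment or pattern recognition?
    """
    if routine and not builds_expertise:
        return "Category One: automate fully"       # pure efficiency work
    if builds_expertise and not routine:
        return "Category Two: preserve for humans"  # expertise-building work
    return "Category Three: collaborate"            # AI processes, humans decide

# Example triage
print(categorize_task(routine=True, builds_expertise=False))   # e.g. invoice processing
print(categorize_task(routine=False, builds_expertise=True))   # e.g. complex diagnosis
print(categorize_task(routine=False, builds_expertise=False))  # mixed cases default to collaboration
```

The default-to-collaboration branch reflects the article's argument: when in doubt, keep humans in the decision loop rather than removing them entirely.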


Before automating any workflow, organizations should ask: "What does a human learn by doing this work, and do we need that learning organizationally?"

Consider two approaches to automating financial analysis:

Approach A (Full Replacement): AI analyzes market data, identifies trends, and generates investment recommendations. Humans implement the recommendations. This maximizes immediate efficiency but eliminates the practice ground where analysts develop market intuition.

Approach B (Augmented Expertise): AI processes vast amounts of market data and surfaces patterns for human analysis. Humans investigate anomalies, develop theories about market behavior, and make investment decisions with AI support. This preserves the learning while eliminating routine data processing.

Both approaches use the same AI capabilities. The difference is strategic intent—whether the goal is to replace human judgment or enhance it.

Research consistently demonstrates that augmented approaches often outperform full automation in complex domains. Harvard Business Review studies show that warehousing operations blending human labor with robotics achieve greater efficiency than full automation alone. Companies like DHL report not just productivity gains but enhanced worker satisfaction and reduced fatigue.

A Cautionary Tale

Klarna's recent reversal illustrates what happens when this distinction gets overlooked. In early 2024, CEO Sebastian Siemiatkowski announced that AI could handle "the work of 700 customer service agents." Response times dropped from 11 minutes to under 2 minutes. The company projected $40 million in annual savings.

By May 2025, Siemiatkowski was publicly acknowledging the strategy had failed. "We went too far," he admitted, noting that "cost unfortunately seems to have been a too predominant evaluation factor," resulting in "lower quality" service.

The technical performance wasn't the issue—the AI handled routine inquiries efficiently. The problem was strategic. Klarna had eliminated the learning environment where future customer service leaders develop judgment about complex financial situations and build relationships that create customer loyalty.

IBM data shows this pattern is widespread: only 25% of AI projects deliver expected ROI, with 55% of companies now regretting decisions to replace human workers entirely.

Collaborative Intelligence

The organizations succeeding with AI aren't just implementing technology. They're designing new forms of human-machine collaboration that preserve expertise development while achieving efficiency gains.

This requires rethinking job design around complementary capabilities. AI excels at information processing, pattern detection in large datasets, and executing well-defined procedures. Humans excel at contextual interpretation, creative problem-solving, and adapting to unprecedented situations.

A clinical team I worked with illustrates this approach. Instead of having AI make diagnostic recommendations, they designed a system where AI analyzes patient data and highlights unusual patterns for physician review. Doctors spend less time on routine data analysis but maintain full engagement with complex diagnostic reasoning. The result has been improved efficiency without sacrificing the clinical judgment that develops through deliberate practice.

The European AI Act of 2024 now mandates human oversight for high-risk applications, requiring that human operators can fully understand AI outputs, intervene when necessary, and override AI decisions. This regulatory framework reflects growing recognition that human oversight isn't a limitation to overcome but a critical capability to preserve.

Rethinking The Metrics

Traditional metrics often obscure the trade-off between immediate efficiency and long-term capability. Most organizations track productivity gains, cost reductions, and task completion times—all of which favor full automation.

But these metrics miss crucial questions: How quickly can the organization adapt when market conditions change? What happens when AI systems encounter unprecedented situations? Is the organization building or losing institutional knowledge?

The most strategically minded organizations supplement efficiency metrics with capability indicators: employee skill development rates, problem-solving effectiveness, and adaptability to new challenges. They measure not just what gets done faster, but what capabilities are being preserved or developed.
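One way to make that supplement concrete is a blended scorecard that weights capability indicators alongside efficiency metrics. The sketch below is a hypothetical illustration of the idea; the metric names, scales, and weights are assumptions for demonstration, not an established methodology.

```python
# Minimal sketch of a balanced automation scorecard, as discussed above.
# All metric names and the 50/50 default weighting are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AutomationScorecard:
    cost_reduction: float      # efficiency: fraction of cost saved (0-1)
    cycle_time_gain: float     # efficiency: fraction of time saved (0-1)
    skill_growth: float        # capability: employee skill development rate (0-1)
    exception_recovery: float  # capability: success rate handling novel cases (0-1)

    def balanced_score(self, capability_weight: float = 0.5) -> float:
        """Blend efficiency and capability instead of tracking efficiency alone."""
        efficiency = (self.cost_reduction + self.cycle_time_gain) / 2
        capability = (self.skill_growth + self.exception_recovery) / 2
        return (1 - capability_weight) * efficiency + capability_weight * capability

# A project with big efficiency wins but eroding capability scores lower
# than a balanced one once capability is weighted in.
full_auto = AutomationScorecard(0.9, 0.9, 0.1, 0.2)
augmented = AutomationScorecard(0.6, 0.6, 0.7, 0.8)
print(full_auto.balanced_score())  # 0.525
print(augmented.balanced_score())  # 0.675
```

With `capability_weight=0`, the scorecard collapses to the traditional efficiency-only view, which is exactly what makes full automation look unambiguously better.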

This distinction matters because competitive advantage increasingly depends on organizational learning speed. In rapidly changing markets, the companies that perform best are those that can quickly develop new capabilities, adapt to unexpected challenges, and innovate beyond what their current tools can handle.

Research on "collaborative intelligence" shows that organizations combining automation with deliberate expertise preservation achieve up to 85% error reduction compared to either humans or AI working alone. They maintain the human capabilities needed for innovation while freeing capacity from routine work.

Organizations that automate indiscriminately may achieve impressive short-term efficiency gains while inadvertently weakening their ability to adapt and innovate over time.


I'm not arguing that we should resist automation or avoid AI. I'm arguing that we should be strategic about what to automate and what to preserve.

Start by mapping the learning that happens in different roles. Which tasks build judgment, pattern recognition, or contextual understanding? Which are purely routine? Where might AI augment human learning rather than replace it?

Design AI implementations around these insights. Automate the work that consumes time without building capability. Preserve the work that develops expertise. Create collaborative approaches where AI handles information processing while humans maintain decision authority and learning opportunities.

The goal isn't to slow down efficiency gains—it's to achieve them without sacrificing the human capabilities that create long-term competitive advantage.

What expertise-building work is your organization at risk of automating away? And how might you design AI collaboration that preserves learning while achieving efficiency gains?