In the 1830s, an Indian widow wrote to Samachar Darpan, a Bengali newspaper, begging for help after British textile mills destroyed her livelihood as a hand-spinner. The machines offered 10x-100x leverage on labor—one mill worker could produce what hundreds of hand-spinners produced. The economics were irresistible.
Leverage determines winners. Always has.
Software engineers treating AI coding as the destination? We’re repeating history.
The Pattern We Keep Missing
Across the history of civilization, builders and specialists were always the first to adopt new technology. When computers emerged in the mid-20th century, it wasn’t “computer scientists” who first used them—that profession didn’t exist yet. It was mathematicians and physicists who needed computational power to solve problems in their domains: simulating differential equations, calculating ballistic trajectories, analyzing atomic structures.
They didn’t say “computers are for math and physics.” They just happened to be the builders who needed what computers could do. They couldn’t foresee we’d be using them to play Candy Crush and look at cat videos.
The same pattern is unfolding with AI. Software engineers are the early adopters, and we naturally see coding as the problem domain to solve. We’re building better code completion, smarter debuggers, automated test generation. But AI coding tools like Claude Code and GitHub Copilot, despite starting as “code agents,” are actually generic platforms—you can equip them with tools and procedural knowledge to do pretty much anything, especially knowledge work, though not limited to it. This isn’t the destination—it’s just the starting line.
The Universal Technology Adoption Pattern
Throughout history, transformative technologies follow a predictable path:
```mermaid
graph TD
    A[New Technology Emerges] --> B[Specialists/Builders Adopt First]
    B --> C[Specialists Think: This is FOR Our Domain]
    C --> D[Economics Drive Broader Adoption]
    D --> E[Every Domain Adopts the Technology]
    E --> F[Resisters Are Displaced, Adapters Thrive]
    style A fill:#1e3a8a,stroke:#1e40af,color:#fff
    style B fill:#4338ca,stroke:#4f46e5,color:#fff
    style C fill:#d97706,stroke:#f59e0b,color:#fff
    style D fill:#0f766e,stroke:#0d9488,color:#fff
    style E fill:#047857,stroke:#059669,color:#fff
    style F fill:#be123c,stroke:#e11d48,color:#fff
```
| Example | Adoption path |
| --- | --- |
| Computers | Math/Physics → Everyone |
| AI | Software Engineering → Every Profession |
With AI, we've already started moving past step 3. The tools are no longer just "coding agents": domain-specific offerings like Claude for Life Sciences and Claude for Financial Services tailor AI to professional domains beyond software development.
The Historical Precedent: Two Cautionary Tales
History provides stark examples of what happens when technology disrupts established crafts. The human cost is real, but so is the inevitability.
The Indian Widow’s Plea (1830s)
In the early 19th century, a widow wrote to Samachar Darpan, a Bengali newspaper, with a desperate plea. She begged readers to understand what was happening to her livelihood. Machine-made yarn from British mills in Manchester had flooded the Indian market. The yarn was cheaper, more consistent, and produced at a scale that made hand-spinning economically impossible.
Her letter wasn’t asking for sympathy—it was documenting the destruction of an entire way of life. The machines won. Not because they were morally right, but because the economics were irresistible.
The Lyon Weavers’ Resistance (1800s)
When Joseph Marie Jacquard invented his automated loom in Lyon, France, the weavers’ response was violent. They destroyed the machines. They threatened Jacquard’s life. They saw automation as an existential threat to their craft and their identity as skilled artisans.
But the resistance couldn’t last. Eventually, some weavers made a different choice: they learned to work with the automated looms instead of against them. Those weavers—the ones who embraced the technology—transformed Lyon into a manufacturing powerhouse. The ones who refused? They disappeared from history.
The lesson isn’t that automation is good or bad. It’s that when the economics shift, resistance is temporary. Adaptation is survival.
Why Software Engineers Think We’re Special
We’re conflating who adopts first with what the technology is for. Just like mathematicians in the 1950s didn’t define the computer’s destiny, software engineers in the 2020s don’t define AI’s destiny.
The Real Future: AI for Everyone, Everything
Every profession will use AI. Not because software engineers evangelized it, but because the economics will be irresistible. Consider:
Doctors: AI analyzing medical imaging, suggesting diagnoses, predicting patient outcomes based on millions of cases no single physician could ever see.
Lawyers: AI reviewing case law, drafting contracts, identifying precedents across decades of legal documents.
Teachers: AI personalizing curriculum for each student’s learning style, generating practice problems, providing instant feedback at 3 AM when the teacher is asleep.
Designers: AI generating variations, testing accessibility, optimizing layouts based on user behavior patterns.
Architects: AI simulating structural integrity, optimizing energy efficiency, generating design alternatives that meet complex constraints.
The pattern is universal: domain experts using AI to amplify their expertise, not replace it. Just like mathematicians used computers to solve bigger equations, not to eliminate mathematics.
The Real Work Ahead
If coding isn’t the destination, what is?
Everything.
We’re not building “AI for coding.” We’re building the foundational layer that makes AI accessible to everyone, for everything. The infrastructure work includes:
- Security and privacy models that let sensitive domains (healthcare, finance, legal) trust AI with their data
- Agentic workflow patterns that work across operating systems and applications
- Context management systems that let AI understand domain-specific problems without requiring software engineering expertise
- Tool ecosystems like the Model Context Protocol (MCP) that let any domain expert connect AI to their specialized tools
- User experience paradigms that make AI accessible to people who’ve never written a line of code
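Stripped to its essence, the tool-ecosystem idea in that list is a registry: a domain expert exposes specialized capabilities under discoverable names, and an agent lists and invokes them with structured arguments. The sketch below is purely illustrative (the class and method names are invented, not the actual MCP API), with a trivial clinical calculation standing in for a real domain tool:

```python
# Minimal sketch of a tool ecosystem: domain experts register tools,
# agents discover and invoke them by name. All names here are
# illustrative, not the actual Model Context Protocol API.
from typing import Callable


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, tuple[Callable, str]] = {}

    def register(self, name: str, fn: Callable, description: str) -> None:
        """A domain expert exposes a specialized tool under a discoverable name."""
        self._tools[name] = (fn, description)

    def list_tools(self) -> list[tuple[str, str]]:
        """What an agent sees when it asks what it can do here."""
        return [(name, desc) for name, (_, desc) in sorted(self._tools.items())]

    def invoke(self, name: str, **kwargs):
        """The agent calls a tool by name with structured arguments."""
        fn, _ = self._tools[name]
        return fn(**kwargs)


# A clinician's assistant might expose a simple domain tool:
registry = ToolRegistry()
registry.register(
    "bmi",
    lambda weight_kg, height_m: round(weight_kg / height_m**2, 1),
    "Body-mass index from weight in kg and height in m.",
)
print(registry.list_tools())
print(registry.invoke("bmi", weight_kg=70, height_m=1.75))
```

The point of the sketch is that nothing in it is specific to software engineering: the registered function could wrap an imaging pipeline, a case-law search, or a curriculum generator just as easily.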
This is the real work. The work that determines whether AI becomes available to doctors, teachers, and architects—or stays locked in development environments.
The “I Know Kung Fu” Moment: Skills and Commoditized Expertise
We're already seeing this transformation accelerate with GitHub Copilot Skills and similar approaches like Anthropic's Agent Skills. These systems package knowledge in a form that's shareable across agents, commoditizing expertise itself.
Think of it as the “I know Kung Fu” moment from The Matrix. Neo downloads martial arts expertise in seconds. Similarly, an agent can acquire a “design UI” skill and suddenly have capabilities that previously required years of training.
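Mechanically, the "download" is less mysterious than it sounds: a skill can be as simple as packaged procedural knowledge that the agent pulls into its context on demand. A minimal sketch, using an invented format rather than any vendor's actual skill schema:

```python
# Minimal sketch of skill acquisition: expertise packaged as text the
# agent loads into context on demand. The format is invented for
# illustration, not any vendor's actual skill schema.
SKILLS = {
    "design-ui": "When designing a UI: establish a type scale, align to a spacing grid, check contrast ratios.",
    "review-contract": "When reviewing a contract: flag indemnity clauses, check termination terms, list defined terms.",
}


def acquire(skill_name: str) -> str:
    """The 'I know Kung Fu' moment: inject packaged expertise into the agent's context."""
    return f"[skill loaded] {SKILLS[skill_name]}"


print(acquire("design-ui"))
```

The expertise itself is just text plus any bundled tools; what makes it feel like the Matrix is that loading it takes seconds rather than years.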
The irony: Code becomes central to how agents work, even when the task has nothing to do with coding.
Consider an agent writing a book. It might use skills and a code sandbox to:
- Lay out text with images by writing layout code
- Generate images by making API calls to models
- Format chapters by manipulating document structures
- Create data visualizations using Python plotting libraries
The agent writes and executes code in a sandboxed environment, leveraging the vast Python/JavaScript ecosystem—not because it’s a “coding task,” but because code is the universal language for getting things done programmatically.
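As a toy illustration of that last point (the chapter data and helper below are invented, not output from any specific agent framework), the kind of code an agent might write and run in its sandbox while "writing a book" could be as mundane as laying out a table of contents:

```python
# Toy example of sandbox code an agent might generate while writing a
# book: the task is publishing, but the medium is ordinary Python.
chapters = [
    {"title": "The Pattern We Keep Missing", "words": 1200},
    {"title": "Two Cautionary Tales", "words": 1800},
]


def format_toc(chapters, words_per_page=300):
    """Lay out a table of contents, estimating page numbers from word counts."""
    lines, page = [], 1
    for ch in chapters:
        lines.append(f"{ch['title']} ..... p.{page}")
        page += -(-ch["words"] // words_per_page)  # ceiling division
    return "\n".join(lines)


print(format_toc(chapters))
```

Nothing about the task is "a coding task," yet code is how the agent gets it done.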
This is both powerful and unsettling. I don't need a designer if an agent can design my UI well enough using design skills. The specialists aren't all eliminated, but many are: their expertise is captured, packaged, and distributed at scale, much like artisans who threw pottery by hand were largely displaced by mass-produced, machine-made ware.
The Lyon weavers who learned to work with Jacquard looms didn’t just survive—they became more productive than they could have imagined. But the ones who refused? They became redundant. Skills represent the same inflection point for knowledge workers today: adapt and leverage the technology, or become obsolete.
The Question Isn’t “If”—It’s “Who Builds It”
The Indian widow couldn’t stop the machines from Manchester. The Lyon weavers couldn’t stop the Jacquard loom. And software engineers won’t determine whether AI spreads beyond coding.
Why should we build for that future? Economics and leverage.
Machines offered 10x-100x leverage on labor—one mill worker could produce what hundreds of hand-spinners produced. The same pattern drives AI adoption today. A doctor using AI to analyze medical imaging has 10x leverage. A lawyer using AI for case research can cover 100x more precedents. A teacher using AI for personalized curriculum can serve 1000x more learning paths.
But here’s the critical balance: leverage amplifies both productivity and mistakes. That doctor with 10x leverage also has 10x the impact when the AI misses a diagnosis. The lawyer who processes 100x more precedents can make 100x more harmful errors if the AI hallucinates case law. The teacher serving 1000x more learning paths could misdirect 1000x more students if the AI generates incorrect content.
Leverage determines winners—but only when balanced with the cost of getting things wrong. The teams building infrastructure that maximizes leverage while managing the amplified consequences will capture the value.
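One way to make that balance concrete is a back-of-the-envelope expected-value model. All numbers below are invented for illustration (they are not clinical or legal statistics): leverage multiplies both the value of each task and the cost of each miss, so the same leverage that pays off at a low error rate destroys value at a high one.

```python
# Illustrative expected-value model: leverage multiplies both output
# and mistakes. All numbers are invented for illustration only.
def net_value(leverage, value_per_task=1.0, error_rate=0.02, cost_per_error=10.0):
    """Expected value of `leverage` tasks, net of amplified error costs."""
    return leverage * (value_per_task - error_rate * cost_per_error)


print(net_value(10))                   # low error rate: leverage pays
print(net_value(10, error_rate=0.15))  # high error rate: leverage amplifies the damage
```

The sign flip is the whole argument in miniature: infrastructure that raises leverage without lowering the error term just scales the harm faster.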
What I’m Watching For
At enterprise scale, I’m tracking these signals:
- Healthcare AI adoption rates - Are doctors using AI tools directly for diagnosis and treatment planning?
- Legal tech startups - How fast are law firms adopting AI for case research and contract analysis?
- Education platforms - Are teachers using AI to personalize curriculum and generate adaptive learning?
- Design tool integration - Are designers orchestrating AI as part of their creative process?
These adoption curves will tell us whether we built the right infrastructure—whether we enabled AI for everyone, or just for ourselves.
The Uncomfortable Truth
We’re the builders, so we naturally focus on building better tools. But “better code completion” is to AI what “faster differential equation solvers” was to computers—important for the first adopters, irrelevant to the eventual transformation.
The real transformation happens when every domain expert can orchestrate AI for their problems—when doctors, lawyers, teachers, and designers use AI as naturally as they use computers today.
What domain outside software are you seeing adopt AI fastest? Connect with me on LinkedIn to share your observations.