The New Small Team
Most people think AI gives startups more code. I think the bigger gift is process — and that changes everything about what a small team can be.
Here is something that surprised me about building a company with a very small team: the hardest part was never writing the code.
I know that sounds wrong. If you asked most people what AI is going to do for startups, they would say it helps them write code faster. And it does. But that is like saying the printing press helped people write faster. Technically true, but it misses almost everything interesting about what actually changed.
The thing that actually kills small teams is not the speed at which they can build features. It is everything else. The review that did not happen. The test that nobody wrote. The migration that got forgotten in the deploy. The postmortem that never took place because everyone was already sprinting toward the next thing. The documentation that exists only in one founder's head. The edge case that seemed unlikely until it brought down production on a Sunday evening.
I think most people in the technology industry have this backwards. They look at AI and see a tool that writes code. What I see is a tool that can finally give small teams something they have never been able to afford: process.
And that, it turns out, matters a lot more.
What Process Really Means
Let me tell you what I mean by process, because the word has a terrible reputation among startup people, and honestly, for good reason.
When most founders hear "process," they picture the worst version of corporate life. Meetings about meetings. Jira boards with seventeen columns. A change review committee that meets biweekly. Somebody whose entire job is to update a Confluence page that nobody reads. The whole apparatus of big-company engineering that exists, as far as anyone can tell, primarily to slow things down.
And here is the thing: that instinct is not wrong. Most process in most organizations is bad. It accretes over time, like barnacles on a hull. Each individual rule was created in response to some specific incident, but nobody ever removes rules, so the system only gets heavier. Eventually you end up with a company where it takes three weeks to ship a button color change, and everyone involved will explain to you, with genuine conviction, why each step in that three-week journey is absolutely necessary.
So startups rightly run from all of that. "Move fast and break things" was not just a slogan. It was a survival strategy. When you are four people trying to find product-market fit, you cannot afford the overhead of a mature engineering organization. You just have to ship and see what happens.
But there is a cost to this that people do not talk about enough.
How Startups Actually Die
I have been thinking about this a lot since we started building DenchClaw, and especially since going through Y Combinator. If you watch enough startups closely enough, you start to notice a pattern in how the ones that struggle tend to struggle.
It is rarely a talent problem. YC companies are filled with absurdly talented people. The median technical ability in any YC batch is remarkably high. These are people who can build almost anything. That is not the bottleneck.
It is also rarely an idea problem, at least not in the way people usually mean. Most YC companies are working on something reasonable. The idea might need refinement, but the core insight is usually there.
What I have seen kill startups, or at least severely wound them, is something more mundane: they ship things that are not quite ready, in ways they do not quite understand, and then spend enormous amounts of time firefighting problems that were entirely preventable.
A founder pushes a database migration on Friday afternoon without a rollback plan. The migration has a subtle bug that corrupts a column for about twelve percent of users. They do not notice until Monday because there is no monitoring. They spend Monday through Wednesday fixing it, which blows the Thursday deadline they had promised a key customer. The customer churns. The founder tells themselves the problem was "we need to hire someone to handle ops." But the actual problem was simpler: nobody reviewed the migration before it went out.
This kind of thing happens constantly. Not the dramatic, movie-version kind of startup failure where the founders have a falling out or the market disappears. Just a steady accumulation of small wounds, each one caused by skipping a step that everyone knows they should not have skipped.
I call this process debt, by analogy with technical debt. And like technical debt, it compounds.
The Economics of Attention
Why do small teams skip these steps? It is not because they are lazy or because they do not know better. It is pure economics.
Think about what a large engineering organization actually buys you. Not just more hands to write code. That is part of it, but there is something subtler going on. A big team buys you specialized attention.
At a company with a hundred engineers, there is someone whose job it is to think about database migrations. Not just to run them, but to think about what could go wrong. There is someone who reviews every deploy for security implications. There is someone who maintains the test infrastructure. Someone who writes the runbooks. Someone who monitors the canary deployment. Someone who does the weekly retrospective. Someone who keeps the documentation current.
None of these people are doing anything that a smart founder could not also do. The problem is not capability. The problem is that a smart founder is already doing seventeen other things. When you are the CEO, the CTO, the head of product, the support team, and the person who fixes the printer, you do not also have time to carefully review every migration for edge cases. You know you should. You just do not have the hours.
This is what I mean when I say big teams have a hidden advantage. It looks like their advantage is having more people who can write code. But really, their advantage is having more people who can pay attention to different things at the same time.
And attention, it turns out, is the thing that actually determines whether software is good or bad. Not the raw coding ability. The attention.
I think about it like this: imagine two restaurants. One has a single chef who is extraordinarily talented. The other has a good chef plus a sous chef, a pastry specialist, someone managing the line, someone doing quality control on every plate before it goes out, and a manager watching the whole operation. The single extraordinary chef might produce a better dish on their best night. But the restaurant with the team will produce a more consistently excellent experience across every table, every night, every week. Because the second restaurant has specialized attention covering every part of the operation. Nothing falls through the cracks.
That is what headcount buys large engineering organizations. Not genius. Consistency. And consistency, in the long run, is what customers actually care about.
When Attention Gets Cheap
The reason this matters right now is that for the first time in the history of software, the cost of specialized attention is collapsing.
This is not a small thing. This is one of those shifts that is easy to underestimate because it does not look dramatic from the outside. There is no single moment where everything changes. It just gradually becomes possible to do things that were previously impossible, and the people who notice first get an enormous advantage.
It has happened before. Several times, actually.
Think about what happened when personal computers became cheap enough for small businesses to afford. Before that, you needed a mainframe and a staff of operators to do serious computation. That meant only big companies could do things like real-time inventory management or sophisticated financial modeling. When PCs arrived, suddenly a two-person accounting firm could do things that previously required a department of twenty. The capability was democratized. But what specifically was democratized? Not the ability to add numbers faster. Calculators already existed. What was democratized was the ability to maintain complex, structured processes around the numbers: spreadsheets, databases, automated reports. The process around the work.
Or think about what the internet did to publishing. Before the web, publishing required editors, typesetters, printers, distributors, and retailers. The New York Times had hundreds of people involved in getting a story from a reporter's notebook to a reader's doorstep. When blogging arrived, a single person could publish to the entire world. But again, the interesting change was not that writing became faster. People could always write fast. The interesting change was that one person could now handle the entire publication process — editing, layout, distribution, reader feedback — that used to require a team.
In both cases, what technology actually did was collapse the minimum team size required to achieve a certain level of quality and rigor. Not by making individuals faster at their specific task, but by making the surrounding process achievable by fewer people.
I think AI is doing the same thing to software development right now. And like those previous shifts, most people are focusing on the wrong part of it.
Rigor on Demand
Here is a concrete example from our own work.
We built DenchClaw on top of something called gstack, which is a structured workflow that breaks the development process into stages: Think, Plan, Build, Review, Test, Ship, Reflect. Each stage has what we call specialist roles — about eighteen of them — that approximate the kind of specialized attention a larger team would provide.
So when a developer is working on a feature, they are not just writing code and pushing it. Before building, there is a scoping step that forces the work to get sharper. Like a good staff engineer sitting across from you, asking uncomfortable questions. What exactly are you trying to accomplish? What are you explicitly not doing? What could go wrong?
Before merging, there is a review that looks at the change from multiple angles: data safety, trust boundaries, scope drift. Not just "does the code work" but "should this code exist in this form." This is the kind of review that in a big company would be done by a senior engineer who has been burned enough times to know where the bodies are buried.
Before shipping, there is a QA pass and a release checklist. After shipping, there is a step that captures what changed and what was learned. Over time, these reflections accumulate into institutional knowledge, which is something small teams have traditionally been terrible at building.
Now, the names of these stages do not matter. You could call them anything. The insight is the structure underneath: the idea that a solo developer or a two-person team can invoke specific kinds of attention at specific moments in the development cycle. Not constantly. Not as standing overhead. Just when it is needed.
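To make that shape concrete, here is a toy sketch in Python. None of these names reflect gstack's actual API; they are hypothetical stand-ins. The point is only the structure: stages as containers, roles as named lenses you can invoke at a chosen moment rather than as standing headcount.

```python
from dataclasses import dataclass, field
from typing import Callable

# A "role" is a named lens: a check applied to an artifact at a chosen moment.
# It takes the work product (here, a diff as a string) and returns concerns.
Check = Callable[[str], list[str]]

@dataclass
class Stage:
    name: str
    roles: dict[str, Check] = field(default_factory=dict)

    def invoke(self, role: str, artifact: str) -> list[str]:
        # Summon one specific kind of attention, only when it is needed.
        return self.roles[role](artifact)

# Hypothetical example checks: crude, but they show the idea of a
# perspective being applied on demand, like a linter.
def security_review(diff: str) -> list[str]:
    return ["unescaped user input?"] if "request." in diff else []

def scope_review(diff: str) -> list[str]:
    # Flag a "small" change that actually touches many files.
    return ["diff touches more files than the stated scope"] if diff.count("+++") > 3 else []

review = Stage("Review", {"security": security_review, "scope": scope_review})
concerns = review.invoke("security", "query = request.args['q']")
```

A solo developer runs `review.invoke("security", diff)` only at the moment it matters, the way they would run a type checker. No standing meeting, no dedicated reviewer on payroll.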
This is profoundly different from both the startup approach (ship and pray) and the big company approach (process as permanent overhead). It is more like having a team of specialists on call. You do not pay for them full-time. But when you need the security reviewer's perspective, or the QA engineer's eye, or the staff engineer's architectural judgment, you can summon it.
I sometimes describe it as the difference between "I shipped it and hope it works" and "I shipped it with the rigor of a team of ten." And that distinction, once you have experienced it, is hard to go back from.
Let me be more specific about what I mean by "rigor" here, because it is an abstract word that can mean almost anything.
Here is a thing that happened to a friend's startup last year. They were a two-person team, moving fast, doing well. They had a feature that let users export their data as a CSV file. Straightforward stuff. One of the founders added a new field to the export and pushed it to production. He did not write a test for the export because, well, the export had always worked and the change was trivial.
The new field happened to contain commas in some of the data, and the export code was not escaping them. So for about ten percent of users, the export was generating malformed files. Some of those users were importing the data into other systems. Those systems broke.
The founders did not hear about it for three days because their error monitoring was not set up to catch this kind of silent data corruption. When they did hear about it, it was through angry emails from two enterprise customers who had built workflows on top of the export. One of those customers left.
The total engineering time to fix the actual bug was about twenty minutes. The business cost was one churned enterprise customer and significant reputation damage.
Would a code review have caught this? Probably. Any experienced engineer looking at the diff would have thought "wait, are we escaping commas in the CSV output?" It is exactly the kind of thing a second pair of eyes catches. But there was no second pair of eyes because there was no second engineer.
Would a test have caught this? Definitely. A test that checked the export with data containing special characters would have failed immediately. But nobody wrote that test because the feature "already worked."
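The missing test is tiny. Here is a sketch in Python; the `export_csv_*` functions are hypothetical stand-ins for the startup's export code, with the buggy version joining fields by hand and the fixed version using the standard `csv` module, which quotes any field containing the delimiter.

```python
import csv
import io

def export_csv_buggy(rows):
    # Naive join: a comma inside a field silently corrupts the output.
    return "\n".join(",".join(row) for row in rows)

def export_csv_fixed(rows):
    # The stdlib csv writer quotes fields that contain delimiters.
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

def test_export_handles_commas():
    rows = [["name", "company"], ["Ada", "Acme, Inc."]]
    out = export_csv_fixed(rows)
    parsed = list(csv.reader(io.StringIO(out)))
    # Round-trips cleanly: the comma stays inside one field.
    assert parsed == rows
```

Five minutes to write, and it fails instantly against the buggy version, because the comma splits `"Acme, Inc."` into two fields.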
Would a QA step before deploy have caught this? Almost certainly. Running the export with a few different data shapes is the most basic thing a QA person would do.
Any one of these safety nets would have prevented the problem. The startup had zero of them. Not because the founders were sloppy — they were genuinely talented engineers — but because they did not have the bandwidth to maintain all of those nets while also building new features, talking to customers, raising money, and doing everything else a two-person startup requires.
This is the kind of story I hear constantly. Not dramatic failures. Just small, preventable incidents that bleed a startup's energy and credibility, one paper cut at a time.
And this is exactly what on-demand process can fix. Not by replacing the founders' judgment about what to build or how to build it. Just by making sure the boring, important steps actually happen.
What AI Cannot Do
There is something important I want to be honest about here, because I think the AI discourse right now is plagued by overclaiming, and I do not want to add to it.
AI cannot supply taste.
It seems to me that this is the single most important thing to understand about what AI can and cannot do for a team. AI is very good at applying rules. It is very good at checking things against criteria. It is very good at being thorough in a way that tired humans are not.
But it is not good at deciding what matters. It cannot tell you whether your product idea is worth pursuing. It cannot tell you whether a feature is elegant or merely functional. It cannot tell you when your interface design is technically correct but emotionally wrong. It cannot feel the difference between software that people tolerate and software that people love.
Taste, conviction, judgment about what to build in the first place — these remain stubbornly human skills. And I think they will remain human skills for a long time, possibly forever, though I am not confident enough to make that claim absolutely.
This is why I get nervous when people talk about AI replacing entire engineering teams. What it can replace is the mechanical, systematic parts of the engineering process. The review, the testing, the documentation, the deployment checks, the monitoring. These are things that require consistency and thoroughness, not creative insight.
But the creative insight — the decision to build this thing rather than that thing, the judgment call about what is good enough to ship, the instinct for when a feature is going to confuse users despite being logically sound — that is where humans earn their keep. And I would argue that in an AI-augmented world, taste becomes more important, not less. Because when the cost of building goes down, the cost of building the wrong thing stays the same. If you can ship ten features in the time it used to take to ship one, the question of which ten features to ship becomes proportionally more important.
So AI does not eliminate the need for great people. It changes what great people spend their time on. Less time on the mechanical execution. More time on the decisions that actually matter. And honestly, I think that is a much better version of the job.
The Minimum Viable Team
I want to take a slight digression here because there is a historical pattern that I find really interesting and relevant.
Every time a new technology makes some aspect of production dramatically cheaper, there is a period where people focus entirely on the direct cost savings, and they miss the structural changes that end up mattering much more.
When the assembly line was invented, the immediate story was "cars are cheaper now." And they were. But the much bigger story was that the assembly line changed the minimum viable manufacturing operation. Before Ford, building a car was an artisanal process. A small team of skilled craftsmen would build each car mostly by hand. The assembly line made it possible to produce cars with less-skilled workers, but it also required massive capital investment and huge teams. So the minimum viable car company went from "small workshop" to "enormous factory."
Now here is what is interesting: AI is doing the opposite. Instead of raising the minimum team size needed for quality output, it is lowering it. Dramatically.
This has happened a few times in the history of technology, and when it does, the effects are profound. When the personal computer replaced the mainframe, the minimum team size for serious computation dropped from dozens to one. When SaaS replaced on-premises software, the minimum team size for running a business application dropped from a whole IT department to one person with a credit card. When AWS replaced physical data centers, the minimum team size for running a global web service dropped from a whole ops team to a single developer.
Each of these transitions created an explosion of new companies. Not because the technology was better in some abstract sense, but because it lowered the barrier to entry enough that many more people could attempt to build something. And when more people attempt to build things, more good things get built. That is just statistics.
I think AI-powered process is the next version of this. When a two-person team can operate with the rigor of a twenty-person team, a lot of problems that used to require twenty people to solve become accessible to two-person teams. And there are a lot more two-person teams in the world than twenty-person teams.
I have now seen YC from two vantage points: first as a founder in S24, and then from the alumni community, watching subsequent batches closely. And I can see the shift happening in real time.
Five years ago, a typical YC company at demo day had somewhere between two and five people. The technical cofounder was usually buried in code, moving as fast as humanly possible. The process was basically: write code, push it, see if anything breaks, fix whatever breaks, repeat. Maybe there was a Slack channel where they pasted error logs. Maybe there was a GitHub issues list that functioned as a to-do list. But there was no formal review, no systematic testing, no deployment process beyond "push to main and pray."
This was fine, honestly. At the earliest stages, speed matters more than rigor. You are trying to find out if anyone even wants what you are building. Getting the answer to that question one week faster is worth a lot. Getting the answer with perfect code quality is worth basically nothing.
But here is the thing. There was always a transition point — usually around the time a company gets its first serious customers, or raises its Series A, or both — where the startup approach stops working and you need to start building real process. And that transition was always incredibly painful. Because you have to slow down to speed up. You have to hire people specifically for process roles. You have to convince engineers who joined a startup because they wanted to "move fast" that they now need to write tests and do code reviews and follow deployment checklists. It is a culture shift, and culture shifts are hard.
What I see happening now is that companies are able to skip, or at least dramatically smooth out, that transition. Because they can have the rigor from the beginning, without the overhead. A two-person team in 2026 can have automated review, automated testing, deployment checks, and post-ship reflection as part of their natural workflow, invoked on demand, without hiring anyone.
This means the difference between a two-person team and a twenty-person team is shrinking. Not in terms of raw coding output — that gap is also shrinking, but that is the less interesting part. In terms of quality, consistency, and process maturity. The two-person team can now operate like a much more mature organization than its headcount would suggest.
And I think this is going to be one of the most important changes in the startup landscape over the next few years.
Let me talk about what this means for the twenty-person team, because that is the question people always ask, and it is a fair one.
If a two-person team can now have the rigor of a twenty-person team, what happens to the actual twenty-person teams?
I think the honest answer is: some of them shrink, and the ones that do not shrink become capable of much more ambitious things.
There is a version of this story that is purely about cost-cutting, and I do not find that version very interesting. Yes, some companies will use AI to do the same things with fewer people. That will happen. It is already happening.
But the version I find more compelling is the one where the twenty-person team does not shrink to two. Instead, it stays at twenty but operates at a level that previously would have required two hundred. The scope of what they can attempt expands dramatically because all twenty people now have AI-powered process supporting them. The reviews are better. The tests are more comprehensive. The deployments are more reliable. The documentation is more complete. Every person on the team is more effective because the surrounding systems are better.
This is what usually happens with technology-driven productivity improvements, by the way. When spreadsheets replaced manual bookkeeping, we did not end up with fewer accountants. We ended up with accountants who could do vastly more sophisticated work. When word processors replaced typewriters, we did not end up with fewer writers. We ended up with writers who could revise more, experiment more, and produce more polished work.
I think the same thing will happen here. Some teams will get smaller. But many teams will stay the same size and get dramatically more capable. The total amount of software in the world will increase, and its average quality will increase too, because the process around the building will be better at every team size.
There is a specific thing about the gstack approach that I want to highlight because I think it illustrates a deeper point.
The workflow has about eighteen specialist roles. Office Hours, CEO Review, Engineering Review, Design Review, Staff Code Review, QA, Benchmark, Ship, Deploy, Canary, Weekly Retro, and others. Each of these is a perspective, a way of looking at the work through a specific lens.
What strikes me about this is how it mirrors the way great teams actually work. If you watch a high-functioning engineering team at a good company, you will see that the team members are constantly switching between these roles. During a planning meeting, someone is thinking like a product manager. During code review, someone is thinking like a security engineer. During a postmortem, someone is thinking like a systems reliability engineer.
The magic of a great team is not that each person is great at everything. It is that the team collectively has all of these perspectives covered, and the culture ensures that the right perspective gets applied at the right time. The senior engineer knows when to put on the "what could go wrong" hat. The product manager knows when to push back on scope. The QA person knows when to insist on more testing.
What AI can do is make these perspectives available without requiring dedicated people to embody them full-time. You do not need a full-time QA person to ask the QA questions. You do not need a full-time security reviewer to think about security implications. You need the questions to be asked and the analysis to be done. The role is what matters, not the headcount.
This is a subtle but important distinction. We are not talking about replacing people. We are talking about making roles portable and on-demand. A solo developer can invoke the "security reviewer" perspective when they need it, just like they might invoke a linter or a type checker. It is a tool that applies a specific kind of attention to the work.
And I think this way of thinking about it — roles as attention patterns rather than job titles — is going to change how we think about team structure in general. Not just in software, but in many knowledge work domains.
I want to share something personal here because I think it is relevant to understanding why I care about this so deeply.
When we were building DenchClaw, we were a tiny team. And we hit every single one of the problems I have been describing. We skipped reviews because there was nobody to review. We skipped tests because writing tests felt slower than just testing manually. We pushed things to production that were not really ready because the pressure to ship was immense.
And we paid for it. We had incidents that cost us days of debugging. We had bugs that users found before we did. We had features that shipped with subtle problems that were embarrassing in retrospect. Each one of these was preventable. Each one was caused by skipping a step that we knew, intellectually, we should not have skipped.
The irony is that we were building a tool to solve exactly this problem. We were building the thing that would have prevented our own mistakes. Which, if you think about it, is a pretty common story in the history of software. Larry Wall created Perl because he needed a better text processing tool for the report generation work he was already doing. Linus Torvalds created Git because he needed a better version control system for Linux kernel development. You build the tool you need because you feel the pain of not having it.
The thing that changed for us was when we started actually using our own workflow. Dog-fooding, as the industry calls it. And the difference was immediate and obvious. Not because the AI was writing better code than we could. It was not. But because the process around the code — the review, the testing, the deployment checks, the documentation — was actually happening. Consistently. Every time. Not just when we remembered, or when we had time, or when we felt like it.
It felt like having teammates. Not teammates who wrote code for us, but teammates who made sure we did not skip the important stuff. The nagger who reminds you to write a test. The reviewer who asks if you considered the edge case. The ops person who checks the deployment plan. All of those people, available on demand, without the overhead of managing a larger team.
I remember one specific incident. I was pushing a change to our data layer that seemed straightforward. The review step flagged a potential issue with how we were handling concurrent writes. I had not thought about it because the concurrent case seemed unlikely. But the review was right — under load, the race condition would have caused data loss. Fixing it took about an hour. If it had shipped to production and caused data loss, fixing it would have taken days or weeks, plus the trust damage with users.
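For anyone who has not been bitten by one, here is the shape of that kind of race in miniature, in Python. This is a generic illustration of a lost update, not our actual data layer: without coordination, two threads can both read the same old value, and one write is silently lost.

```python
import threading

class Counter:
    """A stand-in for any shared record that gets read, modified, written."""

    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment_racy(self):
        # Read-modify-write with no coordination: under concurrent load,
        # two threads can read the same old value, and one update vanishes.
        v = self.value
        self.value = v + 1

    def increment_safe(self):
        # The lock makes the read-modify-write atomic, so no update is lost.
        with self._lock:
            self.value += 1

def hammer(method, per_thread=5_000, threads=4):
    # Drive the method concurrently from several threads.
    ts = [threading.Thread(target=lambda: [method() for _ in range(per_thread)])
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
```

With `increment_safe`, four threads doing 5,000 increments each always land on exactly 20,000. The racy version has no such guarantee, and "unlikely under light load" is exactly the kind of reasoning the review step refused to accept.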
That is the kind of moment that makes you a believer. Not the productivity gains, not the speed improvements. The disaster that did not happen.
There is a broader philosophical point here that I keep coming back to.
For most of the history of software engineering, we have been told that there is a fundamental tradeoff between speed and quality. Move fast or be thorough. Ship quickly or ship carefully. This is so deeply embedded in the culture of our industry that it feels like a law of nature.
But I do not think it is a law of nature. I think it is an artifact of the cost structure of attention. When attention is expensive — when the only way to get a review is to hire a reviewer, the only way to get QA is to hire a QA engineer, the only way to get ops coverage is to hire an ops person — then yes, there is a real tradeoff between speed and quality. Every person you add to the team adds process overhead, communication overhead, coordination costs. There is a natural tension between having enough people to be thorough and having few enough people to be fast.
But when AI makes attention cheap, that tradeoff changes. You can have the review without the reviewer's salary. You can have the QA without the QA engineer's standup meetings. You can have the ops coverage without the ops team's on-call rotation. The attention is there, on demand, without the overhead.
This does not completely eliminate the tradeoff — there is still a time cost to going through the process steps, even when they are AI-powered. But it dramatically changes the equation. The cost of rigor drops by an order of magnitude, and suddenly it makes sense to be rigorous about things that you previously would have skipped.
It is like the transition from film photography to digital. When every photo cost money to develop, you were careful about what you shot. You composed carefully, waited for the right moment, and only pressed the shutter when you were fairly confident the shot would be good. When photos became free, people started shooting everything. Many of those shots were garbage, but the ones that were good were often better than what they would have gotten with the careful approach, because they were experimenting more and catching moments they would have missed.
Similarly, when process becomes cheap, teams can be thorough about things they previously would have gambled on. And many of those thoroughness investments will pay off in bugs prevented, incidents avoided, and customers retained.
I want to be clear about something: I am not arguing that AI is going to make all teams equally good. That would be naive.
The teams that will benefit most from this shift are the ones that already have good judgment about what to build and how to build it. AI-powered process amplifies good decisions. It also amplifies bad ones. If you are building the wrong thing, having a rigorous process for building the wrong thing does not help you. It might actually hurt you, because you will build the wrong thing more completely and with more confidence.
This is why taste matters so much. The teams that will thrive in this new environment are the ones that combine human judgment about what matters with AI-powered rigor in execution. They will make fewer mistakes in the "how" while bringing better judgment to the "what."
And I think this points to something interesting about what the ideal team member looks like in this new world. It is not the person who can write the most code the fastest. It is the person who has the best instincts about what to build, the best judgment about what is good enough, and the discipline to use the available tools effectively.
In other words, the new premium skill is taste. Not taste in the abstract, aesthetic sense — though that matters too — but taste in the engineering sense. The ability to look at a system and see what is right and what is wrong. The ability to look at a feature idea and predict whether users will find it intuitive or confusing. The ability to look at a technical architecture and sense whether it will scale gracefully or collapse under load.
These are skills that cannot be automated. They come from experience, from paying attention, from caring about the details. And they are going to be more valuable than ever, precisely because AI has made everything else cheaper.
Ten Years from Now#
I sometimes wonder what software development will look like in ten years. Predictions are dangerous, and most of them turn out to be wrong, so take this with appropriate skepticism. But here is what I think might happen.
I think the average team size for a successful software product will drop significantly. Not because companies want fewer people, but because fewer people will be needed to achieve the same level of quality and process maturity. A startup that would have needed fifteen engineers and five ops people will need five engineers and maybe one ops person, because the AI-powered process will absorb much of the work the rest of the team used to do.
I think the quality floor will rise. Today, there is a huge variance in software quality. Some products are beautifully built and thoroughly tested. Others are held together with duct tape and prayers. When process becomes cheap and on-demand, the duct-tape products will get a lot better, because even the most resource-constrained teams will be able to afford basic rigor.
I think we will see many more software products in the world, because the barrier to building a good one will be lower. Right now, there are a lot of problems that nobody has built a solution for, not because the problems are hard, but because building a reliable, well-tested, well-documented solution requires more people and more process than anyone has been willing to commit. When the cost of that process drops, more of those solutions will get built.
And I think the role of the software engineer will shift. Less time spent on the mechanical parts of the job — the testing, the reviewing, the deploying, the documenting — and more time spent on the creative parts. Deciding what to build. Designing how it should work. Making judgment calls about tradeoffs. Talking to users. Thinking about the problem deeply.
Honestly, that sounds like a better job to me.
The title of this essay is "The New Small Team," and I want to end by saying clearly what I think the new small team actually looks like.
It used to be a few unusually scrappy people doing everything themselves. Working eighty-hour weeks. Skipping tests because there is no time. Skipping reviews because there is no one to review. Pushing to production and hoping for the best. Heroic effort, inconsistent results. Some of these teams built amazing things. Many of them burned out or made avoidable mistakes that cost them everything.
The new small team is different. It is a few people with good taste and strong judgment, surrounded by systems that help them think, review, test, ship, and document with much more discipline than their headcount would suggest. They still move fast, but they move fast with a safety net. They still make judgment calls, but those judgment calls are better informed. They still ship aggressively, but what they ship has been through a process that catches the mistakes they would have missed on their own.
The new small team does not pretend to be a big company. It does not adopt big-company process wholesale. It borrows the useful parts — the specialized attention, the systematic checks, the structured reflection — while keeping the speed, the informality, and the creative intensity of a small team.
It is the best of both worlds. Not perfectly. Not without tradeoffs. But close enough that the difference between being small and being big, in terms of the quality of what you can ship, has narrowed dramatically.
I think this is one of the most significant changes happening in technology right now. Not the most visible change — the flashy demos and the chatbots and the image generators get more attention. But possibly the most important one. Because it changes the fundamental economics of building software, and the fundamental economics of building software determine what gets built and who gets to build it.
For the first time, being small does not have to mean being sloppy. And that matters more than any amount of faster code.
