March 25, 2026
The underlying assumption of Big Tech engineering organizations over the past decade has been that execution is expensive. Writing code is expensive, design is expensive, testing is expensive, integration is expensive. Because execution is expensive, division of labor follows: frontend, backend, iOS, Android, design, QA — each link in the chain staffed by specialists, stitched together through hierarchy and process. Your role’s value is, to a significant extent, built on top of this division of labor: you are the person responsible for a particular link, and that link is complex enough to require your specialized skill.
AI is pushing execution costs toward a tipping point. The marginal cost of code production is already approaching zero. The speed at which design mockups, prototypes, and test cases can be generated has increased several-fold in the past year alone. When execution costs drop sharply, the division-of-labor logic built on execution complexity starts to loosen. If a single engineer, aided by AI tools, can complete work that previously required three people coordinating, then headcount, reporting relationships, and coordination overhead all need to be recalculated.
This is not a hypothetical exercise. In March 2026, a developer tools team of roughly 1,000 people inside Meta’s Reality Labs took a concrete step in this direction: eliminating traditional function-based titles, restructuring into small cross-functional pods, and introducing AI-assisted performance evaluation and promotion decisions. The experiment changes three things at once: role definitions, team composition, and management practices.
This report is not a news summary. The question it attempts to answer is: if the direction of Meta’s experiment is adopted more widely, what happens to the day-to-day work, evaluation criteria, and career paths of engineers, PMs, and managers? What should you be preparing for?
The way most companies introduce AI is to keep existing organizational structures and processes intact and embed assistive tools at the execution layer. Engineers continue working within their original team boundaries and reporting lines, simply gaining a Copilot when writing code. The gains from this approach are real, but the ceiling is clear — efficiency improvements at the tool layer will be partially consumed by organizational friction.
The direction Meta is testing in Reality Labs is different. It is simultaneously altering role definitions (three titles replacing the old functional divisions), team composition (small pods replacing large feature teams), and management practices (AI-assisted performance evaluation and promotion decisions). Moving all three at once means the change is happening at the system level, not the tool level.
A more precise way to understand it: Meta is attempting to rewrite the engineering management control plane for the AI era. The pod is the execution surface; what is being replaced underneath is the way context is distributed, how team interfaces are designed, and how the evaluation and performance cycle operates. The leaked internal memo explicitly states that the goal is to achieve a step change in engineering productivity and product quality. That goal points not to making existing processes run faster, but to replacing the processes themselves.
A distinction is worth drawing here: what Meta is doing is best understood as an AI-native engineering management experiment, not simply organizational flattening, and not the disappearance of manager roles. There are fewer layers, but the demands on the management system have increased. This point is expanded below.
This section is organized by role. Regardless of which Big Tech company you work at, if your company is seriously pursuing AI tool adoption, the directional shifts here are worth considering.
If you are an IC. The most immediate change is an expanded expectation of capability scope. In the pod model, every member needs to cover more ground beyond their core specialty. This does not mean becoming a full-stack engineer, but the range of situations where “that’s not my responsibility” applies is shrinking. Being able to use AI tools to quickly enter unfamiliar domains and deliver work at 70–80% quality is shifting from a nice-to-have to a baseline capability. At the same time, in a pod of 3–5 people, each person’s contributions and gaps are difficult to dilute — output visibility increases dramatically.
The deeper signal concerns a migration of where personal value is anchored. The act of writing code is steadily losing its scarcity as AI drives the cost of code production toward zero. Under that trend, the scarcer capability is judgment: defining what is worth building, evaluating output quality, choosing the best path among multiple viable options. The pod model accelerates this shift, because in a small team there are not enough layers of hierarchy to make those judgments for you. You must judge for yourself, and the quality of your judgment will be directly visible in the pod’s output.
If you are a manager. The pod model is not a signal that management positions are disappearing; it is a signal that the content of management work is being redefined. Day-to-day task allocation and progress tracking — the most concrete and visible parts of a traditional engineering manager’s work — are being displaced by self-organization within pods and AI tools. What remains, and what is harder to replace, is: designing evaluation criteria, building cross-team coordination mechanisms, resolving ambiguous priority conflicts, and developing people. If your core value rests on daily coordination, that position is indeed being weakened. But if your core value rests on system design and people development, this is actually an opportunity for expanded influence.
If you are a tech lead. Technical decision-making authority may become more distributed. In a traditional team, a tech lead has relatively concentrated influence over the technical direction of a domain. In the pod model, technical decisions are dispersed across individual pods, which demands a shift from “making decisions yourself” to “designing decision frameworks that enable others to make good decisions.” Cross-pod technical consistency becomes something that must be actively maintained, rather than a natural byproduct.
A signal that cuts across all roles. In an AI-native organization, the ability to produce and transmit context becomes critically important. The effectiveness of AI tools is highly dependent on the quality of input context — good prompts, clear specifications, knowledge bases amenable to structured retrieval. Being able to translate ambiguous business requirements into context that AI can consume is a capability with significant leverage in this new organizational form. Put differently, your value within the organization increasingly depends on the quality of context you can provide for AI (and for your colleagues), rather than how many lines of code you can personally produce.
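To make “machine-consumable context” concrete, here is a minimal sketch in Python. Everything in it (the field names, the TaskSpec structure, the example task) is invented for illustration and is not drawn from Meta’s memo; the point is only that a vague request becomes reviewable, testable context once it is forced into an explicit structure.

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    """Illustrative structured context for an AI-assisted task (all field names are hypothetical)."""
    goal: str                                                      # the outcome, stated in product terms
    constraints: list[str] = field(default_factory=list)           # non-negotiables: APIs, budgets, style guides
    acceptance_criteria: list[str] = field(default_factory=list)   # observable checks that define "done"
    references: list[str] = field(default_factory=list)            # specs, prior art, design docs

    def to_prompt(self) -> str:
        """Render the spec as a block an AI coding tool (or a colleague) can consume."""
        lines = [f"Goal: {self.goal}", "Constraints:"]
        lines += [f"- {c}" for c in self.constraints]
        lines.append("Acceptance criteria:")
        lines += [f"- {a}" for a in self.acceptance_criteria]
        lines.append("References:")
        lines += [f"- {r}" for r in self.references]
        return "\n".join(lines)

# A vague ask like "make onboarding faster" becomes explicit, checkable context:
spec = TaskSpec(
    goal="Reduce median time-to-first-project in the onboarding flow below 5 minutes",
    constraints=["No new backend endpoints", "Reuse existing design system components"],
    acceptance_criteria=["p50 time-to-first-project < 5 min in staging telemetry",
                         "No regression in signup conversion"],
    references=["(internal design doc link)"],
)
print(spec.to_prompt())
```

The same structure works as well as a brief for a colleague as it does as input for an AI tool, which is precisely the leverage described above.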
To understand the substance of this experiment, the pod model needs to be unpacked into the mechanism changes behind it.
Role compression and boundary blurring. The AI Builder title eliminates functional labels among engineers, designers, and product managers. Division of work within a team is driven by tasks, not job titles. When a pod has only a few people, the cost of waiting for “the right person” to handle a particular step is too high; everyone needs to extend one layer beyond their core competency. AI tools serve to lower the barrier to that extension: you do not need to be a professional frontend engineer, but you do need to be able to complete frontend tasks with AI assistance.
Unbundling of management authority. A traditional engineering manager carries two types of work simultaneously: day-to-day execution coordination (project progress, task assignment, blocker removal) and people management (performance evaluation, promotion, career development). The pod model separates these — the Pod Lead handles the former, the Org Lead handles the latter, with AI assistance explicitly introduced into the latter. The assumption behind this split is that these two types of management work require different information densities and decision cadences, and handling both within a single context and a single person is not optimally efficient.
Restructuring the evaluation system. When the Org Lead uses AI to assist with performance evaluation, the underlying assumption is that information previously dependent on a manager’s subjective memory and personal observation can be collected and organized more systematically. This points toward an “evaluation-first” management paradigm — first define what good output looks like, build an observable metrics framework, then design management processes around that framework. The specific implementation details remain highly opaque in publicly available information, but the direction merits attention.
Changes in context distribution. In a traditional large team, context flows primarily through hierarchy: VP tells Director, Director tells Manager, Manager tells IC. The pod model compresses hierarchy, which means context must find new distribution paths. Without accompanying context infrastructure (documentation systems, knowledge bases, AI-assisted information retrieval), compressing hierarchy does not cause context to flow automatically to those who need it — it simply creates an information vacuum.
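As a rough illustration of what “context infrastructure” means in practice, the sketch below shows a toy retrieval layer over team documentation. It uses plain keyword overlap purely for brevity; a production system would more likely rely on embeddings and a vector index, and nothing here reflects Meta’s actual tooling.

```python
# Minimal sketch of a retrieval layer over team documentation.
# The organizational point: context lives in a queryable store,
# not in a manager's head or a hierarchy of hand-offs.

docs = {
    "pod-interfaces.md": "How pods declare dependencies and negotiate integration points.",
    "eval-rubric.md": "Observable criteria the Org Lead uses for performance evaluation.",
    "design-system.md": "Shared component library and cross-pod UI consistency rules.",
}

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[tuple[str, float]]:
    """Rank documents by the fraction of query terms they contain (toy scoring)."""
    terms = set(query.lower().split())
    scored = []
    for name, text in corpus.items():
        hits = sum(1 for t in terms if t in text.lower())
        scored.append((name, hits / max(len(terms), 1)))
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

# An IC in any pod pulls context directly instead of waiting for it to trickle down:
print(retrieve("how do pods handle integration dependencies", docs))
```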
In the broader picture, Meta’s capital investment in AI continues to accelerate. Actual capital expenditure in 2025 was $72.2 billion, with the 2026 guidance range at approximately $115 billion to $135 billion. Zuckerberg has publicly indicated a direction toward internal AI agentification, including having AI serve as a CEO-level advisor. Meta’s acquisition of the AI agent company Manus is part of a broader strategic positioning effort, but there is no evidence that it is causally connected to, or deliberately synchronized with, the pod reorganization. During the same period, Meta conducted layoffs across multiple divisions, but there is no public evidence that these layoffs are a direct result of the pod reorganization. The two overlap in timing; causation cannot simply be inferred.
This is the part of the experiment most easily misread.
“Flattening” and “AI replacing managers” are the labels that travel fastest in media narratives. But if you look carefully at what Meta is actually doing, the direction is the opposite: the pod model raises the bar on the management system.
In a traditional hierarchical organization, a large amount of management work happens implicitly: a manager learns an IC’s status in one-on-ones, distributes context in team meetings, resolves small cross-team frictions in hallway conversations. This work does not appear on any process diagram, yet it keeps the organization running. When hierarchy is compressed and teams are shrunk, this implicit management work does not disappear — it needs to be made explicit, systematized, or replaced by new mechanisms.
Specifically, the pod model increases the load on the management system in at least the following areas.
First, cross-pod coordination. When a 1,000-person organization is broken into hundreds of small pods, the number of interfaces between pods increases dramatically. Who defines the dependencies between pods? Who resolves priority conflicts? Who ensures that the outputs of different pods can be integrated? In large teams, these questions are absorbed by the management hierarchy; in the pod model, they require new coordination mechanisms.
Second, quality standards and consistency. Small teams have strong internal consistency, but cross-team standards drift is a well-known problem. When every pod has a high degree of autonomy, code style, design language, and technology choices can diverge quickly. Without enforced alignment mechanisms, that divergence becomes technical debt.
Third, people development and promotion. When management authority is split between Pod Lead and Org Lead, who is responsible for an AI Builder’s long-term development? The Org Lead makes promotion decisions but lacks day-to-day observation; the Pod Lead has day-to-day observation but lacks promotion authority. This separation of information and authority requires extremely careful institutional design to avoid becoming a vacuum where no one is truly accountable.
Fourth, the legitimacy of AI-assisted performance evaluation. Using AI to assist with evaluation and promotion decisions is a technically viable direction, but it introduces new governance challenges: Are the data sources comprehensive? How are model biases audited? Do the people being evaluated trust the system? If an AI performance system is perceived by employees as a black box, it may erode rather than strengthen organizational cohesion.
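One way to picture the governance work this fourth challenge implies: an audit that compares AI-assisted evaluation scores across groups and flags skew for human review. The sketch below is purely hypothetical; the data, the threshold, and even the existence of such a check at Meta are assumptions, not reported facts.

```python
# Illustrative governance check: flag group pairs whose mean AI-assisted
# evaluation scores diverge beyond a threshold. All data and thresholds
# are invented for illustration.
from statistics import mean

scores_by_group = {
    "pod_a": [3.4, 3.8, 3.1, 3.6],
    "pod_b": [2.9, 3.0, 2.7, 3.2],
}

def flag_skew(groups: dict[str, list[float]], max_gap: float = 0.4) -> list[str]:
    """Flag group pairs whose mean scores differ by more than max_gap."""
    means = {g: mean(vals) for g, vals in groups.items()}
    names = sorted(means)
    flags = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            gap = abs(means[a] - means[b])
            if gap > max_gap:
                flags.append(f"{a} vs {b}: gap {gap:.2f}")
    return flags

# A flagged gap would warrant human review, not automatic action.
print(flag_skew(scores_by_group))
```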
The existence of these challenges does not mean the pod model will fail; it means the conditions for success are far more complex than “making teams smaller.” The fact that Meta chose to roll this out in a single 1,000-person division rather than company-wide may itself reflect this understanding.
Meta’s pod experiment is currently in its early stages, piloted in a relatively self-contained division. Whether it will expand to more teams and ultimately reshape Meta’s overall organizational form depends on several observable indicators.
The first signal is expansion scope. If, over the next two to three quarters, Meta’s core product teams (Feed, Ads, Messaging) begin adopting a similar organizational form, it would indicate that internal assessment of the experiment is positive. If it remains confined to the pilot, it may suggest the pod model has applicability boundaries.
The second signal is transparency around the AI-assisted performance evaluation system. If Meta begins publicly discussing the design principles, data sources, or audit mechanisms of this system, it would indicate they are seriously addressing the legitimacy question. If this area remains opaque, a degree of skepticism about the system’s sustainability is warranted.
The third signal is actual output change. The leaked memo states the goal is a step change in engineering productivity and product quality. This claim is inherently difficult to verify externally, but indirect observation is possible: whether Reality Labs’ product release cadence and quality show perceptible changes over the next 6–12 months.
The fourth signal is industry follow-on. Consulting firms like Bain are already recommending similar AI pod models to the life sciences industry. Gallup’s data shows that expanding manager span of control is an industry-wide trend. If more companies begin experimenting with the combination of small cross-functional teams and AI-assisted management, then Meta’s experiment represents a directional signal in organizational evolution, rather than an isolated Big Tech story.
Meta is not the first large company to attempt small cross-functional teams. The past two decades offer enough case studies to help calibrate expectations.
Spotify’s squad model is the most recent point of reference. Squads were also small, cross-functional teams with a high degree of autonomy. The core problem Spotify later acknowledged was that autonomy, without alignment, slides toward fragmentation. Individual squads made locally optimal decisions, but cross-squad coordination costs kept rising, eventually offsetting the speed advantages that small teams provided. Meta’s pod model faces the same pressure, and with pods that are even smaller, the frequency of cross-pod coordination will only be higher.
Zappos’ holacracy experiment provides a more extreme reference point. Zappos attempted to replace management hierarchy with self-organization. What they found was that management work (assigning responsibilities, resolving conflicts, making trade-offs) did not disappear — it simply migrated from people with formal authority to people without it, becoming more hidden and less efficient. The lesson: compressing management hierarchy and eliminating the need for management are two entirely different things.
Amazon’s organizational flattening in recent years took yet another path. It increased manager span of control, reduced intermediate layers, but the core motivation leaned more toward cost optimization than a transformation in how work is done. Amazon’s model proved that large companies can operate with fewer management layers, but it also exposed a side effect: when a manager oversees 15–20 people, the quality of coaching and career development deteriorates noticeably.
Buurtzorg in the Netherlands is a success case — small self-managing nursing teams of roughly 12 people each, with a high degree of autonomy. But the preconditions for success were: members were highly experienced professionals, workflows were relatively standardized, and an extremely lean back-office support system was in place. Transplanting this model into software engineering requires revalidation on every one of those preconditions.
Netflix’s experience illustrates yet another dimension: a critical condition for flattening and high autonomy to work is exceptionally high talent density. Netflix maintains this condition through high compensation and high attrition, which itself implies that this model places demanding requirements on team composition.
Taken together, the organizational primitives of small teams, cross-functionality, and high autonomy are demonstrably effective under specific conditions, but no case supports the conclusion that changing organizational form automatically delivers efficiency gains. The successful cases almost universally involved rebuilding supporting systems — coordination mechanisms, evaluation frameworks, information infrastructure. Whether Meta’s pod experiment succeeds depends on how much it invests in those supporting systems, not on the size of the pods themselves.
Ultimately, the core bet of the pod model is not whether AI tools can increase individual productivity — there is already substantial empirical evidence on that question. The real bet is whether AI can enable a company to run engineering through smaller execution units combined with a stronger results-oriented management system. The answer to that question still requires time and practice to determine.
This report is based on publicly accessible information and documented sources. The following outlines the main boundaries between fact and inference in this text.
Confirmed facts. Business Insider reported that a developer tools team of approximately 1,000 people in Reality Labs transitioned to an AI-native pod model using three titles (AI Builder, AI Pod Lead, AI Org Lead). The leaked memo explicitly referenced the goal of a “step change in engineering productivity and product quality.” Cross-functional collaboration and role boundary blurring are explicit elements of the design. The Pod Lead is responsible for day-to-day operations; the Org Lead is responsible for performance evaluation and promotion decisions, with the latter incorporating AI assistance. Meta’s actual capital expenditure in 2025 was $72.2 billion, with 2026 guidance at approximately $115 billion to $135 billion.
Partially confirmed information requiring caution. The specific pod size (3–5 people) has been mentioned in reporting but has not been fully corroborated across public sources. Layoffs during the same period overlap temporally with the pod reorganization, but no public evidence supports a direct causal link. Details on the implementation of the AI-assisted performance evaluation system are very limited in public sources.
Inference and analytical judgment in this report. The discussion of the pod model’s specific impact on the work of ICs, managers, and tech leads is based on mechanism analysis rather than official Meta statements. The discussions of the importance of context infrastructure, the legitimacy challenges of the evaluation system, and cross-pod coordination costs are analytical judgments grounded in organizational design principles and historical cases. The opening framework of “declining execution costs causing division-of-labor logic to loosen” is this report’s core analytical lens, not Meta’s official narrative.
Maintaining clarity on these boundaries is intended to help readers distinguish which information can be cited directly and which judgments require further validation against their own circumstances.