Image: A scene from Office Space. The "Two Bobs" interrogate Tom Smykowski: "What would you say you do here?"

The End of the Product Manager (As We Knew It)

There’s a scene in Office Space where two consultants, the infamous “Two Bobs,” ask Tom Smykowski a deceptively simple question: “What would you say you do here?” Tom bristles. He explains that he talks to customers, translates between teams, and keeps things from falling apart. “I have people skills,” he insists, and still fails the interview. Initech is bloated, inefficient, and badly run; the Two Bobs are there to reduce headcount, strip out bureaucracy, and show quick savings. But their logic only worked because technology and process standardization were already absorbing coordination and oversight work. What once required multiple roles could now be combined, or eliminated.

The film predates the economic shifts now underway with generative AI, but the pattern is familiar. As tools become more capable, work that once required multiple specialized roles begins to recombine. The work itself isn’t disappearing, but the categories we use to describe it are breaking down as responsibilities coalesce. Product management isn’t going away; it’s becoming more demanding and more technical. As AI tools absorb coordination and translation work, product managers are increasingly responsible for judgment, ethical tradeoffs, and hands-on experimentation. The historical boundary between defining software and building it is collapsing—and product managers are increasingly expected to operate on both sides of that line.

For more than a decade, I worked in international democracy and civic technology programs at the National Democratic Institute, where product work rarely looked like Silicon Valley product management. Budgets were tight, users were diverse, failure carried political and reputational consequences, and technology had to function inside institutions that moved slowly and had low risk tolerance. I was often cast as “the people person,” responsible for translating between program teams, technical constraints, and real-world use. I served as product manager for the DemTools suite—a set of open-source tools NDI hosted and maintained as a shared service for civil society and political actors—defining roadmaps and requirements, managing vendors, and taking responsibility for whether tools actually worked in practice, not just in theory. This was product management in the classical sense, shaped by the realities of international development and democracy support.

While my perspective is grounded in the international development, non-profit, and government sectors, the consolidation of product roles applies equally to the for-profit and tech sectors. Indeed, tech-sector product managers are likely the vanguard of this trend, among the first to face the need for deeper technical capabilities as AI tools mature.

When the Trump Administration abruptly ended most foreign assistance, I was laid off, along with many others in my sector. That moment forced a reevaluation of my value in the job market—which kinds of work remained in demand as institutions retrenched. It also created space. For the first time, I could spend sustained time working directly with the tools now accelerating this consolidation. At NDI, I had been invited into an internal AI working group, but hands‑on use of contemporary AI coding tools was largely prohibited in day‑to‑day work. Outside those constraints, the shift was clear: even without formal computer science training, these tools have allowed me to expand what product management itself entails. And this experience reflects a broader market trend: as software development becomes more accessible, roles consolidate, and product managers are increasingly expected to build, not just define, the tools they own.

Building Without a Buffer

After my layoff, I began experimenting seriously with AI‑assisted coding tools to solve problems I had previously only managed indirectly. Working inside an integrated development environment (IDE)—the software workspace where code is written, run, and debugged—with a coding agent that can read my codebase, refactor logic, and respond to tightly scoped instructions, I was able to move from defining requirements to implementing and testing them myself. 

I took on work I had previously only specified or reviewed: writing data-cleaning scripts to normalize inconsistent datasets; building small backend services and database schemas; wiring together APIs, authentication, and basic front-end components; and deploying a functioning open-source web application. Work that once required contracts, budgets, and months of coordination now happens in days. As a result, I spend less time coordinating handoffs and more time interrogating outputs—testing assumptions, pressure-testing model behavior against real-world constraints, and deciding where automation ends and responsibility begins. That experience has given me a clearer sense of how to embed institutional policies into practical system behavior: shaping product direction, advising teams on appropriate uses of AI, and setting guardrails that organizations can actually stand behind.
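To make the first of those concrete, here is a rough sketch of the kind of data-cleaning script I mean, written in Python with pandas. The file name, column names, and cleanup rules are purely illustrative, not taken from any specific project.

```python
# Illustrative sketch of a data-cleaning script for inconsistent spreadsheet
# exports. File and column names here are hypothetical.
import pandas as pd

def normalize(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)

    # Standardize column names: lowercase, underscores instead of spaces.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

    # Trim stray whitespace in free-text fields.
    for col in df.select_dtypes(include="object").columns:
        df[col] = df[col].str.strip()

    # Coerce dates recorded in mixed formats into a single, consistent type.
    if "date" in df.columns:
        df["date"] = pd.to_datetime(df["date"], errors="coerce").dt.date

    # Drop exact duplicate rows introduced by repeated exports.
    return df.drop_duplicates()

cleaned = normalize("survey_responses.csv")  # hypothetical input file
cleaned.to_csv("survey_responses_clean.csv", index=False)
```

The specific transformations matter less than the fact that a product manager can now write, run, and verify this kind of script directly, instead of specifying it in a requirements document and waiting for someone else to build it.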

AI hasn’t turned me into a senior engineer, and I wouldn’t ship production‑level code without review. But it has allowed me to turn conceptual understanding into working systems while retaining responsibility for product decisions. At the same time, these tools hollow out traditional entry points on the engineering side. Junior‑level work—boilerplate, scaffolding, translation between systems—is increasingly easy to automate. The developer, product manager, and project manager roles aren’t vanishing; rather, they’re collapsing inward, concentrating responsibility in fewer hands.

A Failure That Taught Me More Than the Wins

My first serious attempt to build something more ambitious—an Easy Read generator, a tool meant to convert complex documents into the simplified, image-supported Easy Read format used by people with intellectual disabilities—failed for a number of reasons. First, I started with a product mistake. Instead of defining clear, minimal functional requirements and testing a narrow MVP, I tried to build everything I thought the tool eventually needed to be. I collapsed “prototype” and “platform” into the same effort before validating the core idea.

That mistake collided with a harder, technical constraint: current AI tools are still extremely weak at generating Easy Read–style images that actually support reading comprehension for people with intellectual disabilities. The requirement exceeded what the technology can responsibly deliver today—and it also exceeded my abilities as a solo developer. Closing that gap would have required orders of magnitude more time and effort, up to and including training a custom image-generation model—well beyond the practical scope for this project.

The failure wasn’t just technical; it was conceptual. Building directly with AI tools made that misalignment impossible to ignore. There was no vendor buffer and no sprint cycle to hide behind—the system simply stopped cooperating. When you work this close to implementation, bad assumptions fail immediately. Either the requirement was flawed, or I lacked the technical depth to solve it. In this case, it was both.

Human Connection Still Matters

As roles collapse and responsibilities concentrate, human collaboration becomes even more critical. In my own work, this has taken a few concrete forms: regular collaboration with former colleagues who are practicing software developers, and outreach to others working on similar problems. Sometimes this looks like show-and-tell; other times it takes the form of short, informal working sessions to compare approaches. The emphasis isn’t on tools for their own sake. It’s on clarifying what we’re actually trying to build, catching weak assumptions early, deciding what not to attempt, and making sense of rapidly changing technology together.

Those interactions do work that AI tools don’t. Coding agents accelerate implementation, but they don’t independently challenge framing, surface blind spots, or carry context across decisions. When you’re simultaneously acting as developer, product manager, and project manager, peer-level human feedback becomes the primary check on overconfidence and misjudgment. AI may compress roles, but it also reduces opportunities for feedback. As those feedback loops shrink, collaboration has to become more intentional. Without it, the risk is the accumulation of unrecognized mistakes—problems you don’t realize you’re creating until they surface downstream.

Conclusion (As We Know It)

When I talk about the end of the product manager, I’m not predicting the disappearance of a job title. I’m describing the collapse of a boundary. As tools change the economics of building, the old division of labor—between defining work and implementing it—no longer holds. What’s ending isn’t product work itself, but the idea that it can remain insulated from the act of building.

AI-assisted coding compresses the distance between intent and execution. Product managers who can’t get close to the code risk losing contact with reality; developers who can’t reason about requirements inherit decisions they didn’t make. Responsibility concentrates and feedback loops shrink; without intentional human collaboration, mistakes surface later.

This isn’t a story about replacing expertise or celebrating lone builders. The tools only work when grounded in real technical understanding—and they fail fast when that foundation is missing. What changes is who is expected to carry that understanding, and how early.

The end of the product manager isn’t the end of product work. It’s the end of pretending that thinking and building can be cleanly separated. What comes next belongs to people willing to hold both sides of that responsibility at once.
