A program has a start date and an end date. A system has a feedback loop. That distinction sounds academic until you realize that the entire $2.6 trillion American nonprofit sector -- the largest in the world, according to the Urban Institute -- is built almost entirely on programs. And 70% of grant-funded programs end when the funding ends. Not because they failed. Because they were never designed to survive without the grant.

This is the distinction nobody in the social impact space wants to confront directly: the difference between serving people and changing the conditions that created the need. A program serves 200 people and writes a report. A system asks why those 200 people needed help in the first place and builds infrastructure so the next 200 don't.

The Program Trap

Here's how the cycle works. A foundation or government agency identifies a problem -- youth unemployment, housing instability, food insecurity. They issue a request for proposals. Organizations compete for the grant. The winner designs a twelve-month or twenty-four-month program, hires staff, recruits participants, delivers services, and submits quarterly reports documenting how many people were served, how many hours of training were delivered, how many certificates were issued.

At the end of the grant period, the funder evaluates the program based on those output metrics. If the numbers look good, maybe they renew for another cycle. If priorities shift -- and priorities always shift -- the program ends. Staff are laid off. Participants lose access to services. The community is back where it started, except now with the added cost of trust erosion. People showed up, engaged, and then watched another initiative disappear.

According to the Urban Institute's research on nonprofit sustainability, roughly 70% of grant-funded programs do not survive beyond their initial funding cycle. That number isn't a failure rate. It's a design feature. The programs were built to run on external fuel. When the fuel stops, the engine stops. Nobody built the engine to generate its own power.

We don't have a funding problem. We have a sustainability architecture problem. The money comes. The money goes. Nothing structural remains.

What a System Looks Like

A system is different from a program in three fundamental ways. First, a system has feedback loops -- it collects data on what's working and what isn't and adjusts in real time, not at the end of a grant cycle. Second, a system has multiple inputs and outputs -- it doesn't depend on a single funding source or a single organization to function. Third, a system persists beyond any individual initiative because it's embedded in infrastructure, not attached to a budget line.

Consider the difference in workforce development. A program trains 200 people in manufacturing skills over twelve months. A system would connect employers, training providers, community colleges, and workforce boards into a shared data infrastructure where employer demand signals flow directly to curriculum design, training pipelines adjust quarterly based on actual hiring patterns, and retention data feeds back to improve both training quality and job placement. The program depends on the grant. The system depends on the relationships and data flows between institutions that exist whether or not any single grant is active.

Detroit illustrates this distinction with painful clarity. Brookings Institution research has documented that a mid-size American city can have 30 or more workforce and social service providers operating in the same geography, serving overlapping populations, with no shared data systems, no coordinated intake processes, and no unified outcome tracking. Each provider runs its own program. Each program has its own metrics. Each funder gets its own report. Nobody has a picture of the whole system because nobody built one.

The Coordination Deficit

The real cost of the program model isn't just the programs that end. It's the coordination that never happens. When thirty organizations in one city are each running independent programs for the same population, the duplication is staggering. Three organizations might be doing job readiness training within a five-mile radius, none of them aware of the others' curricula, schedules, or outcomes. A person in need of services might visit four different intake offices, fill out four different forms, and receive four different assessments before accessing a single hour of actual support.

This isn't incompetence. It's architecture. The funding model creates it. Grants go to individual organizations, not to coordinated systems. Each organization optimizes for its own survival -- its own metrics, its own funder relationships, its own brand visibility. The incentive to coordinate with other providers is zero. In many cases, the incentive is negative -- other providers are competitors for the same limited funding pool.

I've watched this play out in Detroit for years. Good organizations, staffed by committed people, doing genuine work -- and none of them talking to each other in any structured way. A young person might be enrolled in a youth employment program at one nonprofit, receiving mentoring from another, and accessing housing assistance from a third. None of those organizations share data. None of them know what the others are providing. The young person is the only person who holds the complete picture of their own service engagement, and they have no mechanism to communicate it.

Thirty providers in one city, serving the same population, with no shared data and no coordinated strategy. That's not a service delivery model. That's organized fragmentation.

Why Programs Feel Safer Than Systems

Programs persist because they're legible. A funder can point to a program and say: we funded this, it served this many people, here are the outputs. That's clean. That's reportable. That fits in a board presentation and an annual report. A system is messy. A system involves multiple organizations, shared governance, data infrastructure, long time horizons, and outcomes that can't be attributed to any single funder's investment. Systems don't make for good press releases.

There's also a power dynamic. Programs keep funders in control. The funder sets the terms, the timeline, the metrics. The funded organization delivers against those terms. If the funder doesn't like the results, they don't renew. This is a clean power relationship. Systems require funders to share power -- to invest in infrastructure they don't control, to support coordination among organizations that might have competing interests, to accept that outcomes will take longer to materialize and will be harder to attribute.

Most funders aren't ready for that. Not because they're cynical, but because their own boards and stakeholders are asking the same question: what did our money do? Programs answer that question neatly. Systems answer it honestly. And honesty is harder to put on a slide.

What Detroit Teaches Us

Detroit has been the testing ground for program-based approaches for decades. After the 2013 bankruptcy, billions in philanthropic and public investment flowed into the city. Much of it was structured as programs. Workforce training programs. Youth development programs. Neighborhood revitalization programs. Small business support programs. Each one had a funder, a timeline, a set of deliverables, and an end date.

Some of those programs produced real results for real people. That's not in question. What's in question is whether those results accumulated into systemic change. Whether the conditions that created the need -- the misaligned workforce pipelines, the fragmented service delivery, the lack of coordinated data infrastructure -- actually shifted. The evidence suggests they didn't. The same problems that prompted the investment in 2014 are still being described in grant applications in 2026. The language is updated. The underlying architecture is not.

The handful of initiatives in Detroit that have produced lasting change share a common characteristic: they built systems, not programs. They created shared data infrastructure. They established coordination mechanisms across organizations. They designed feedback loops that allowed real-time adjustment. They invested in institutional relationships that persist beyond any single funding cycle. These efforts are harder to fund, harder to measure, and harder to explain. They also work.

The Question That Changes Everything

The standard evaluation question for a social program is: did the program work? Did it meet its targets? Did it serve the projected number of people? Did participants report positive outcomes? These are reasonable questions. They're also the wrong questions if the goal is systemic change.

The right question is: did the system change? After the program ran, after the money was spent, after the reports were filed -- is the underlying infrastructure different? Are institutions coordinating that weren't before? Is data flowing between organizations that used to operate in isolation? Are the conditions that created the need being addressed, or are we just managing the symptoms more efficiently?

A program that serves 200 people is a good thing. A system that reduces the number of people who need that service by 200 is a better thing. The $2.6 trillion nonprofit sector has the resources to do either. Right now, it's overwhelmingly doing the first. The question isn't whether we can afford to build systems. The question is whether we can afford to keep building programs that don't outlive their budgets.

The question isn't "did the program work?" It's "did the system change?"