The Reality of Performance Funding in Higher Ed

By Dan Lang

The government of Ontario will soon introduce a new “performance” fund that will function alongside a slightly modified version of the program-weighted, enrolment-driven funding formula introduced in the late 1960s. This will not be Ontario’s first deployment of performance funding. A “Key Performance Indicators” program – three indicators for universities and five for colleges – was introduced in 1998. Initially no funding was attached to the KPIs, as they soon came to be known. The idea was economic but not fiscal. Perhaps recognizing the extreme asymmetry of higher education markets, the government sought to level the playing field by requiring, and making public, standardized information about institutional performance. Student market choice under the enrolment-driven formula would then produce the necessary financial incentive. The government soon became impatient with the “student choice as proxy for performance funding” approach and added a small “performance fund.” It was, however, so small – one per cent of the total annual operating grant – that it had no significant effect on institutional strategic behaviour.

Ontario governments rarely do anything truly new in post-secondary education policy. Several American states have deployed performance funding on and off since the late 1980s. Why “on and off”? The effectiveness and cost of performance funding depend on context: it works better in some settings and worse in others. A lot is already known about performance funding.

It is not possible to discuss performance funding as if it were a single-cell public policy organism. There are several variants, the most common of which are performance set-asides or earmarks that reserve a proportion of public subsidies for higher education to be paid out on the basis of pre-determined metric targets – hence performance indicators. Funding thus reserved is potentially, but not always, open-ended. The public policy objective is to influence institutional behaviour by means of financial incentives. The incentives are exactly that: fiscal inducements that may only coincidentally correspond to institutional costs. In certain cases, primarily in Europe, this form of performance funding is called “payment for results.” The World Bank promotes a competitive version of performance funding, in which funding is not open-ended, for countries with limited discretionary resources to direct to the development of universities. As expressions of fiscal policy these two versions of performance funding serve different purposes. The first offers benefit advantages: the state promotes and, hopefully, secures institutional performances that are desirable as public policy. The second, because the funding is a fixed sum, offers cost advantages: as performances improve in response to the incentives within the fixed sum, unit costs are either contained or, as in a reverse auction, reduced.
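
To see why a fixed sum offers cost advantages, it helps to run the arithmetic. The sketch below is purely illustrative – the pool size and graduate counts are invented, not drawn from any actual program:

```python
# Illustrative arithmetic for a fixed-sum performance pool. The pool size
# and graduate counts are hypothetical, not actual figures from any system.

pool = 10_000_000  # fixed sum committed in advance

for graduates in (2_000, 2_500, 3_000):  # performances improve over time
    print(f"{graduates:>5,} graduates -> ${pool / graduates:,.0f} per graduate")

# 2,000 graduates -> $5,000 per graduate
# 2,500 graduates -> $4,000 per graduate
# 3,333 would continue the pattern: the divisor grows, the pool does not
```

However much performances rise, the funder’s total spend never grows; the unit cost simply falls, which is the containment the fixed-sum design buys.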

Performance funding is never the sole source of government funding. It always works in partnership with an enrolment-based or other form of incremental funding. This is especially the case in undifferentiated higher education systems – like Ontario’s – in which institutions are not genuine peers in terms of either their de jure or their de facto mandates.

It is most effective when it offers truly additive funding as an incentive. We should not lose sight of the fact that performance funding is often called “incentive” funding for a reason: it is designed to fund change, not to underwrite structural costs.

It is least effective – almost never effective – when accompanied by an overall reduction in funding. Performance funding is not inherently inefficient, but neither is it designed to promote efficiency or to “do more with less,” which seems, implicitly, to be the expectation of the current Ontario government’s plans for performance funding.

Its track record is mixed when funding is zero-sum: new funding can replace old funding, but with no overall gain or loss. Individual institutions may gain or lose, but the overall performance of the system is unchanged. This seems to have been the thinking of the previous Liberal Ontario government that began the planning for performance funding, which the following Conservative government subsequently took up. This is also the “pooling” effect of performance funding within institutions. Students choose programs, not institutions. The Ontario operating grant formula funds programs, not institutions. No more than a brief glance at the results of the Ontario Graduate Survey is needed to see that, for example, graduation rates vary far more by program than by institution. Within an institution, graduation rates may rise in some programs and fall in others, leaving the overall or “pooled” performance unchanged and, in turn, blunting the fiscal effect of the incentive.
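
To make the pooling effect concrete, here is a minimal sketch; the programs, enrolments, and graduation rates are hypothetical, not figures from the Ontario Graduate Survey:

```python
# Minimal sketch of the "pooling" effect: offsetting program-level changes
# leave the institution-wide rate untouched. All figures are hypothetical.

programs = [
    # (program, enrolment, graduation rate before, graduation rate after)
    ("Nursing",    500, 0.85, 0.90),  # rises five points
    ("Humanities", 500, 0.75, 0.70),  # falls five points
]

total = sum(enrol for _, enrol, _, _ in programs)
before = sum(enrol * rb for _, enrol, rb, _ in programs) / total
after  = sum(enrol * ra for _, enrol, _, ra in programs) / total

print(f"Pooled rate before: {before:.0%}")  # 80%
print(f"Pooled rate after:  {after:.0%}")   # 80% -- unchanged
```

An indicator that sees only the pooled 80 per cent pays out nothing for the genuine improvement in one program, which is exactly how the incentive gets blunted.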

Performance funding has been deployed more often in [community] college systems than in university systems. This is because there is more institutional homogeneity in college systems than in university systems.

In undifferentiated systems, like most Canadian higher education systems, the balance between performance funding and, for example, “weighted” program-based formula funding varies among institutional categories: research-intensive universities receive less performance funding and more formula funding, and the reverse holds for primarily undergraduate universities.

Performance funding becomes volatile when the amount of funding assigned to it varies unpredictably. This is a serious problem when “performance” cycles span several years; rates of graduation, for example, cannot be meaningfully calculated over periods of fewer than five or six years. Nor can actions taken to meet performance targets be expected to show results sooner than the end of their respective cycles. Another, perhaps less obvious but even more problematic, example is “economic impact” indicators. It is true that colleges and universities can boost regional economic growth, but there is a difference between meeting local demand for skilled labour and creating such demand, particularly in terms of “performance” cycles and the time taken to generate evidence of economic growth. Turning the funding tap on and off before cycles are complete disables performance funding.

Performance funding is sometimes called “outcomes” funding, and it fails when it demands outcomes prematurely. Whether universities like performance funding or not, they should accept it and demand stable commitments that are protected from annual provincial budget politics. Governments should neither expect nor promise quick outcomes; otherwise they risk political embarrassment.

Performance funding is unstable and soon unravels when the metrics of its indicators poorly match the actual costs of the “performances” that it promotes to advance policy. Its effectiveness in modifying institutional behaviour depends on the match between the amount of funding set aside and the “performance” that a given incentive is meant to promote. If the match is imperfect, performance funding will fail. For example, to improve rates of graduation a university or college might take several steps that involve additional expense: more academic counselling, writing labs, math labs, teaching assistants, and financial aid. The list could be longer, but its length is not the point; its cost is. If the amount of funding set aside does not reflect, at least approximately, the marginal cost of the institutional performance being sought, the incentive will be ignored – and it often is, not for political or even policy reasons, but for economic ones. Some Ontario universities, for example, have graduation rates on a par with the top two per cent of American public universities. The point here is not to applaud performance; it is to demonstrate the low utility value of spending to increase a graduation rate that is already as high as can reasonably be hoped for. The cost of raising, say, Queen’s graduation rate by one point may far outweigh the performance revenue gained by doing so. In other words, the utility value is low. Universities can and should “do the math.” Whether the government will is an important but still open question.
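
A back-of-the-envelope version of that math might look like the sketch below. Every dollar figure is invented for illustration; none is an actual Ontario grant amount or institutional cost:

```python
# Hypothetical "do the math" check on a graduation-rate incentive.
# All dollar figures are invented for illustration only.

current_rate = 0.90                         # already near the practical ceiling
marginal_cost_per_point = 4_000_000         # counselling, labs, TAs, financial aid
performance_revenue_per_point = 1_500_000   # incremental grant the point would earn

net = performance_revenue_per_point - marginal_cost_per_point
if net < 0:
    print(f"At a {current_rate:.0%} graduation rate, chasing one more point "
          f"loses ${-net:,} -- rational to ignore the incentive.")
else:
    print(f"Chasing one more point nets ${net:,} -- the incentive can work.")
```

When an institution is already near its ceiling, the cost term balloons while the revenue term stays fixed, and ignoring the incentive becomes the economically rational response.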
