Is OBF the future for education funding?
Outcomes-based funding in education.
Introduction
Funding for education development in the global south is in crisis, and the situation has worsened significantly since the collapse of USAID. Analysis from Charles Kenny and Justin Sandefur at the Center for Global Development suggests that USAID cuts have hit education hardest, and new data from Save the Children International suggest that UK aid cuts will lead to an estimated 2.2 million fewer children in school and learning.
All this intensifies pressures to demonstrate measurable impact for the monies that are available.
Education outcomes-based funding (OBF) claims to have a model with a cast-iron, “no-win-no-fee” guarantee: if outcomes are not achieved, payments are not made. Furthermore, supporters believe the model can raise money and attract new, non-traditional funders.
Almost all donors have some version of payment-by-results (of which OBF is one type), whether key performance indicators, disbursement-linked indicators or outcomes-based payments. But although these approaches are related, in that they all link some payments to the achievement of pre-agreed outcomes, the structure and incentives of OBF programmes have particular characteristics worth examining.
The most prominent advocate of OBF is the aptly named Education Outcomes Fund (EOF), but Educate Girls ran the first Development Impact Bond (DIB) in education, in India, and the British Asian Trust is also a major player in this area.
This analysis looks at OBF models in Sierra Leone, through the EOF-supported Sierra Leone Education Impact Challenge (SLEIC), and in India, where Educate Girls is supporting the Maitri Project and a consortium of organisations is supporting the LiftEd Project.
The theory and practice of OBF models are examined as follows:
Incentives, Risk and Trust – what’s the theory of change?
Innovation and Flexibility – how far are the boundaries pushed?
Efficiency – is OBF better than traditional models?
Accountability – feet to the fire or fudge?
New funders – are they impressed?
But first, the model.
The OBF Model
There are a number of variations, including Social and Development Impact Bonds (SIBs/DIBs), but I use OBF broadly to cover a model which, in its simplest form, has four parties:
a) a funder who is willing to put capital at risk up front, for a return
b) an implementer who delivers the programme – and the agreed outcomes
c) an outcomes payer who only pays out if the pre-agreed outcomes are achieved
d) a “programme manager” who selects the implementers, manages the evaluation and facilitates the relationship between implementers and outcomes payers.
Most programme managers have a learning function – EOF, for example, describe themselves as a centre of excellence, building up a knowledge base about how to improve this kind of programming. Some make a distinction between Performance Managers, who support implementers technically, and Transaction Managers, who manage governance, fund management, coordination and so on. I have wrapped both these roles under Programme Manager.
The achievement of the outcomes is determined by an independent evaluation, usually a Randomised Controlled Trial (RCT), conducted by an independent third party. Payments are linked to specific improvements in the pre-agreed learning outcomes (in Sierra Leone, for example, to fractions of a standard deviation of improvement in learning outcomes). In the Maitri Project, payment is made per “retained” girl, i.e. a girl who has come, or come back, to school and transitioned to the next grade.
The aspect of this model that is unique to OBF is the separation of investors and outcomes payers. Investors raise or advance the capital needed for implementers to undertake the interventions. Outcomes payers only hand over their money when achievements have been independently verified. In India, for example, in the Quality Education India (QEI) project, UBS Optimus provided the up-front capital and received an 8% return on the funds it invested, paid from a pool of outcomes payers.
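The cash flows of this structure can be sketched in a few lines of code. This is a purely illustrative model under simplified assumptions (a single verification point and a fixed return rate loosely modelled on QEI’s 8%); real contracts, as discussed later, typically stage payments against annual outcome targets.

```python
# Illustrative sketch (not any real contract) of the simplest OBF cash flow:
# an investor advances working capital to the implementer, and the outcomes
# payer repays the investor, with an agreed return, only if the pre-agreed
# outcomes are independently verified.

def obf_cash_flows(upfront_capital: float,
                   investor_return_rate: float,
                   outcomes_verified: bool) -> dict:
    """Return who pays what under a stylised four-party OBF contract."""
    if outcomes_verified:
        # Outcomes payer repays the principal plus the agreed return.
        payout = upfront_capital * (1 + investor_return_rate)
        return {
            "implementer_receives": upfront_capital,  # already advanced
            "outcomes_payer_pays": payout,
            "investor_net": payout - upfront_capital,
        }
    # Outcomes missed: the outcomes payer keeps its money and the
    # investor absorbs the loss of the capital already advanced.
    return {
        "implementer_receives": upfront_capital,
        "outcomes_payer_pays": 0.0,
        "investor_net": -upfront_capital,
    }

# With hypothetical figures: $1m advanced at an 8% return.
flows = obf_cash_flows(1_000_000, 0.08, outcomes_verified=True)
print(round(flows["investor_net"]))  # the investor's 8% return on $1m
```

The point the sketch makes is structural: whether outcomes are hit or missed, the implementer has already been funded – the financial risk sits entirely between investor and outcomes payer.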
What are the claims for OBF?
The main argument advanced by supporters of OBF is that traditional forms of funding education interventions pay too much attention to inputs and accept moderate, poor or even negative outcomes. By linking payments to measurable improvements in learning outcomes, OBF aims to incentivise a focus on what matters: results. In the words of EOF:
“No longer do funders pay for a pre-agreed list of activities and set programming from education providers. Instead, they define what outcomes they want to see, give providers the flexibility to respond to the needs of the beneficiaries, and only pay for the measurable impact these interventions deliver.” (EOF)
Educate Girls, in their assessment of Project Maitri, add an interesting twist:
“Outcomes based approaches have emerged as an innovative tool to drive impact across a range of sectors, and include instruments such as impact bonds and outcomes funds. While these models have not always been explicitly linked to localisation, there is growing recognition of their potential to promote local ownership and accountability in achieving development outcomes”. (Maitri Report; emphasis added)
Not only does OBF concentrate minds on what is important, the argument continues; it also creates greater efficiency by incentivising providers to find the most cost-effective way of delivering the agreed results, which in turn promotes innovation and flexibility as providers search for better ways to reach those outcomes. Overall, accountability is enhanced and funders pay only for results, not for effort.
So, let’s unpack these claims.
Incentives, Risk and Trust
Incentives
If OBF had a theory of change, it would be that only results matter. That seems to be true only within narrowly defined limits (see Innovation below). In all the examples I’ve looked at, programme managers take a lot of interest in the processes, in different ways.
Most programme managers contract implementers on the basis of agreed plans to achieve agreed outcomes. They then, in varying degrees, provide support and monitoring to the implementers. Project Maitri, for example, provides considerable technical support via Educate Girls India, as does LiftEd, which engaged Central Square Foundation as FLN advisers during the design phase and Dalberg as performance managers. Given the newness of the experimentation in this area, this may be understandable, but it is certainly not a hands-off model, and this support usually comes at significant cost.
More importantly, behind this focus on incentives seems to be a theory of change that assumes commercial incentives are what drives organisations in this space. Several implementers reacted quite strongly to this – one even describing it as “insulting”. In several cases the implementing organisation described the commercial “incentive” element as essentially irrelevant.
That may be because the risk is effectively outsourced to specialist finance companies who provide the seed capital to implementers. The two best known of these are UBS Optimus and Bridges Fund Management. In effect these organisations are taking the risk for the implementers – and most, if not all, the rewards.
For the implementers, then, there is little or no commercial incentive, and little downside either – they will be paid either way – and this is probably even more true of the individuals at the grass roots. Implementers seem to be more motivated by their internal professionalism – a fact that, presumably, Bridges Fund Management or UBS Optimus carefully assesses before partnering.
In a guide published by the British Asian Trust called “Outcomes-based financing: What do nonprofits need to know?”, the relationship is described this way:
“…the theory and practice of OBF essentially focuses on de-risking nonprofits by bringing in a class of risk investors who provide upfront working capital to nonprofits. These investors are the ones who get paid only if pre-agreed outcomes are met, thereby moving the risk away from the nonprofits.”
The guide continues:
“At the heart of the OBF theory lies the fundamental belief that nonprofits are generally experts in their domain and know their solutions and implementation. As such, OBF instruments try to shift the power dynamics inherent in a grantor–grantee relationship towards a more equity-based partner relationship where funders contribute financial resources but do not micromanage or tell nonprofits what to do. Instead, they trust the nonprofits to bring non-financial resources such as their expertise and understanding of the community and ground realities to the project.”
This is not entirely convincing. As outlined earlier, the conception of OBF seems to be that commercial risk is a motivator – one that ensures all eyes are fixed on the achievement of outcomes. But if that commercial risk is outsourced to a non-technical party, hasn’t that motivator effectively been removed from the equation? Not only can it be removed; should it be? Perhaps it would be better, in a model like this, to increase implementers’ commercial understanding and motivation rather than decrease it.
In almost all the models examined the relationship between implementer and investor seems strong, and mutually respectful, with the latter giving valuable operational advice and guidance. The implementer doesn’t want to let the investor down, aware that failure will result in financial penalty. Self-interestedly, neither do they want the reputational damage that would come with missing outcomes. Losses, if they occur, can involve not just nominal overhead, but real costs (salaries/expenses) already spent trying to achieve results. Despite commercial risk being placed with an investor, to some important extent it remains with implementers, but it’s not the motivator that implementers feel drives them. And, if commercial risk is not the real motivator of implementers, then why is this a good financing model ?
Risk
A large part of the incentive to deliver outcomes lies in assessing the risks inherent in doing so, as well as the rewards of success. In OBF, as in all payment-by-results models, the transfer of risk – from funder to implementer, or from outcomes payer to investor – is a key element.
In a 2014 DFID publication called “Sharpening Incentives to Reform”, the intention of payment-by-results schemes to share or transfer risk was clearly spelled out:
“A key question to consider is the extent to which the implementing organisation is able to manage the additional risk of Payment by Results. A large organisation with strong systems may be able to hold more risk, but a small organisation that is less able to absorb risk may require a significant upfront payment, with only a small proportion of payment on delivery”.
The paper also distinguishes between Results Based Aid (RBA), where government takes the risk; Results Based Financing (RBF), where implementing partners take the risk; and Development Impact Bonds, where investors take the risk.
The role of these investors is to have the tools and experience to assess the management capacity and organisational strength of the implementers before agreeing to shoulder the risk. But, it’s not clear why transferring risk in this way is a good model for investment in the social sectors, let alone a replicable or scalable model.
In fact, more or less explicitly, this has been recognised by all the programmes cited here. Not one programme has paid out only on final completion of programme outcomes (e.g. after three years), which would entail the highest form of risk bearing. Instead, payments are made for the achievement of agreed annual outcomes, as stages towards the overall programme outcomes. This is practical and realistic, reducing capital and cash-flow risk for investor and implementer alike, but it is not really very different from how most other education projects work.
Setting up outcomes as the only metric also creates something of a “straw man”, implying that “traditional” loan or grant projects focus only on inputs. That might have been true 15 years ago, but look at any FCDO, EU or World Bank contract payment schedule and it is apparent that it is certainly not the case now. Often there are significant commercial risks placed on implementers in the form of delayed payments, pre-financing and fixed prices.
Finally, the incentive to disburse must be recognised; all funding agencies are driven by disbursing funds. The performative nature of KPIs and penalties is usually just that: performative. It is in everyone’s interest – donors’ and implementers’ – to be successful, and being successful means spending funds. Thus a certain amount of fudging to ensure funds are disbursed, and failure limited, is possible, though, with independent evaluations, much more difficult.
Trust
There also seems to be a cognitive dissonance about trust. On the one hand, implementers are praised as having all the necessary skills and professionalism to make change and achieve results; on the other, they are not trusted enough to be paid as the work is done. Even if the investor absolves implementers of this cash-flow worry, the fact is that the model is predicated on only trusting, and paying, on verified results. In some cases there is also a small payment for overachieving results, within a capped limit. Contrast this approach with the increasing examples of unrestricted funding in the philanthropic sector, in which trust in professionalism is the determining factor.
Perhaps that is why, in most of my interviews with programme manager staff, the view was that implementers were benefitting from this more flexible and trusting relationship and demonstrably performing better. In my interviews with implementers, however, I found a more qualified response. Yes, there was acknowledgement of positive impacts: working with investors, the discipline of regular challenge, the single-minded focus on outcomes, and so on. But there was also some resentment at the hoops to be jumped through, and at the micromanagement that came, not with the day-to-day operations, but with the constant measurement. More than one implementer resignedly said: “if you want funding, this is the game you have to play”.
Innovation and Flexibility
All OBF champions claim they are encouraging innovation, but it appears to be within quite prescribed limits. For example, an implementer who wanted to pay teachers a supplement in Sierra Leone would not be allowed to. In India an implementer who wanted to extend the school day or pay a stipend to girls would not be able to do so.
In practice therefore, what innovation seems to mean is experimentation with support to teachers and schools, sometimes of a cost reducing nature, sometimes with more effective materials or training, such as structured lesson plans, or coaching and support.
However, in a number of cases it was clear that the reason implementers signed up to an OBF model was that they already had a tried and tested model that needed tweaking and adapting to specific circumstances, rather than real innovation. Without this confidence, would they have agreed to participate in such a risky enterprise, and would investors have backed them?
Does this mean OBF is best suited to areas where well-established implementers exist, with a certain level of capacity and performance-management expertise, rather than new situations where there may be few existing actors? Not always, perhaps. As a pilot project, Project Maitri demonstrated that smaller, hyper-local organisations, in remote areas of Bihar where few established NGOs work, were able to adapt and adopt the approach successfully.
Some respondents also argued that it is innovative to engage local organisations directly rather than have them subcontract to larger NGOs or INGOs. But given the relatively small sums of money involved (e.g. c. $750,000 per year per contract in SLEIC), this may simply indicate that such contracts were more suitable for smaller organisations. The idea that directly engaging local organisations, rather than INGOs or contractors, can lead to the development of greater local capacity is probably accurate, but hardly innovative.
In the Maitri Project, because the focus is on out-of-school girls, the innovation being sought is in new strategies to identify, recruit and retain girls in school. Here there is more leeway for real innovation at a local level – whether that is better mapping of where girls are not attending, better communication with reluctant parents, or raising awareness and understanding among teachers and schools.
It is also likely that, in both India and Sierra Leone, one of the main beneficiaries of innovation is the implementing organisations themselves. Several respondents said the discipline of focusing exclusively on outcomes meant that all extraneous aspects of projects that did not contribute to that goal were excluded. They commented on how they had to rethink what resources were really needed to achieve the ends. There is something to be said for that discipline – it is consistent with the single-minded focus on outcomes. But it is a trade-off with potentially significant downsides, on processes for example (see Efficiency below).
Sometimes, too, flexibility appears to be confused with innovation. The OBF model does offer genuine flexibility to implementers, within the frame of an agreed process and objectives. That is appreciated by all, and plenty of examples exist of changes of tack, mobilisation of additional resources, additional support to struggling schools and the like, in order to stay on track to achieve learning outcomes. But what is elsewhere called adaptive programming is a welcome, though not innovative, aspect of OBF.
Of course, in one sense the main innovation in OBF is the model itself. But while it might be innovative, is it efficient?
Efficiency
In a personal blog (Becoming a results focused organisation) reflecting on a 2023 visit to Sierra Leone, Adam Berthoud, Executive Director, Global Programmes, at Save the Children UK, said:
“The team explained how they have therefore had to really focus on stripping back the activities to a point that is affordable, and yet still enables us to make a meaningful difference. For example, we would normally implement 4 to 6 training days of Teacher Professional Development across an academic year with significant school-based support and coaching along the way. In SLEIC, however, this has been reduced to just one programme year, with limited additional days in subsequent years and less intensive school-based support overall. Similarly, our learning circles for remedial learning are also much shorter and less intense than we have previously implemented.”
There is nothing wrong with this per se; fewer, but better designed, training days may achieve better outcomes. But it also seems possible that this type of “efficiency” will simply be doing less with less, rather than more with less – and if the only metric that matters is the standard deviations achieved on learning outcomes, how will anyone know? There needs to be more public data on the cost effectiveness of these initiatives. Two good examples are in the Maitri Report, where the cost of running the Maitri programme is compared to a more standard model, and where the advantages of scaling to reduce unit costs are well set out. The QEI project also produced a useful and practical cost-effectiveness analysis.
Commendably, in Sierra Leone, SLEIC set a maximum figure of $36 per child for each innovation, on the grounds that if the government of Sierra Leone were to be seriously interested in picking up the tab for a scaled-up version, it needed to know that it could be affordable (on the assumption that this figure broadly represents the cost of educating one child for a year).
Is this efficiency or underfunding? If $36 per child is an underinvestment in education, is it “realistic” to set the target at this level, or is it letting the government off the hook? For comparison, Liberia budgets about $50–60 per child per year. Perhaps the proof is that in its year-two results SLEIC has shown demonstrable learning gains overall – if that is sustained through the whole programme, then it may be that this has been pitched at the right level.
Several respondents suggested that the discipline of keeping to this number had involved some changes in approach, either of the nature mentioned in Berthoud’s article above or in allocating resources to schools differentially to ensure that overall targets were going to be met. And in India, Educate Girls seems to have achieved cost savings and a reduction in outcomes costs over time, with the QEI DIB demonstrating cost efficiency in delivering target outcomes for less.
Efficiency and simplicity tend to be good bedfellows – and this is certainly a potential weakness of the OBF model. Compared to a traditional funding model of two parties, donor and implementer, the OBF model, with at least four parties, seems unduly complex. The cost of the programme manager, and of any associated technical advisers, is an important, but not fully transparent, element of OBF model efficiency (though QEI states an 80/20 ratio of implementer costs to management, evaluation and transaction costs). To date there is insufficient public information on the costs of this management – which is critical to determining whether such models are more efficient than traditional ones.
One implementer suggested there were a number of hidden and unfunded “transaction costs” to their participation, due to the need for frequent meetings and updates with the programme manager. In another context the programme manager had effectively taken on the role of both manager and technical adviser to implementers. This layer of management is necessary; the question is: in what optimal configuration?
A subset of this management cost is the cost of evaluation. The model depends on independent evaluation to give outcomes payers the confidence to trigger payments. But most implementers are themselves monitoring continuously (to give confidence to their investors that they are on track), and there seems a real danger of over-measurement. That cost is an opportunity cost too, not just a financial one – as the saying goes: “You don’t fatten a pig by weighing it”.
And of course the only measurement now accepted as the “gold standard” is an RCT, even though it is not cheap and its methodological shortcomings are well documented. This may work to give confidence to investors and outcomes payers in these early days, but it is another constraint on any significant expansion of the model.
Little room is given in all this discussion to process. Is one of the downsides of focusing so heavily on outcomes – and outcomes in an extremely narrow range – that other education outcomes, perhaps less easy to measure, are de-prioritised, if not lost completely? Almost certainly that is the trade-off of focusing on narrowly measured learning outcomes.
Is the same true of processes? The processes of achieving those results may be captured by implementers and programme managers in their learning briefs, but is there real experimentation and innovation taking place, or simply model adaptation and adjustment?
One interesting aspect of the LiftEd project that seems to address this was the development of a “learning year” at the start of the programme. Because LiftEd focuses on education systems strengthening, and because this was acknowledged as taking investors and implementers into uncharted territory, the programme used one year to trial methods and measurements, giving confidence to the investors as much as to the implementers and programme manager.
Accountability
Greater accountability is the final area commonly cited by supporters of OBF. The basic thesis behind the emphasis of OBF models on accountability seems to be that, in traditional models, implementers are being paid to make inputs without achieving outcomes.
Only EOF, on their website, provide any evidence for this, analysing a subset of 71 studies from a Global Education Evidence Advisory Panel (GEEAP) analysis of cost effectiveness. They highlight that 48% showed no impact on learning outcomes. The analysis goes deeper, showing how a small number of interventions account for a large proportion of positive results, and moves from this to emphasise the importance of using data – and how data is central to the methodology of OBF. See diagram below.
But that is a related yet separate argument from whether traditional models are having expected levels of impact. The paper ends with this significant caveat:
“In the debate about the advantages and limitations of impact bonds and outcomes funds, policymakers should recognize the central role of evidence in these instruments. Funders understandably often ask what evidence exists around the effectiveness of impact bonds, and if they are worth the additional costs involved vs. traditional grant financing modalities. However, the impact bond community have grappled with this question of the ‘SIB effect’ for almost a decade, and found it a nearly impossible question to answer robustly, given the difficulty in isolating the impact of the financing modality among all the other factors that affect the performance of a program, and the inability to define a clear counterfactual. Rather than trying to answer questions about the ‘SIB effect,’ funders and policymakers should start by recognizing that the generation and use of context-specific evidence is central to the design and implementation of outcomes funds and similar instruments and can play a key role in supporting the evidence agenda.” (emphasis added)
Aside from the evident frustration at “not being able to answer the question robustly”, is the suggestion here that traditional models of education funding don’t use evidence to determine their interventions, and that this is the principal difference with OBF models? It’s not clear – but credit to EOF for setting out the evidence on which their model is based.
New Funding Sources
One of the potential attractions of OBF is the possibility of bringing in more money, and new non-traditional funders, to education in the global south. EOF is explicit about this first aim, stating, as part of their vision:
“Our aim is to pool at least USD 1 billion in aid and philanthropic funds by 2030, to transform the lives of over 10 million children and youth.”
Yet most of the people I interviewed did not see bringing in new money as a major motivation for the expansion of OBF (though some implementers saw it as a way of attracting new investors who might then provide grant funds). Nonetheless, with the paucity of funds now available for education, minds may change.
Bringing in new money in OBF models is, of course, a bifurcated question: investors, outcomes payers, or both?
With investors, as noted earlier, there seem to be a limited number of organisations with the appetite for this risk and the ability to assess organisations in the education space. That could be a constraining factor in expansion. It is notable that, after their successful experiment with the Educate Girls DIB, Educate Girls chose to act as an outcomes payer in the Maitri Project through their US arm, while being programme manager through their India arm – removing the need for an investor by providing enough upfront capital to commence the project. For this pilot they chose to test the model first with their own funds, knowing it might be difficult to secure investors at that stage.
As for outcomes payers, on paper this should look attractive. One implementer in Sierra Leone was puzzled, noting that the outcomes payers were all traditional development partners like FCDO: “As a value proposition for philanthropists this is unbelievable; guaranteed outcomes or your money back. Why aren’t they picking it up?” Perhaps more funding from outcomes payers would incentivise a greater number of investors.
But the real missing outcomes payer at the party is government. As with many development projects – education being no exception – there are many fine words said and written about how it is “hoped” that government will take these “successful” initiatives, adopt them and scale them up. Rather than hoping, what better demonstration could there be than for government to commit to paying when results are demonstrated?
The practical answer to that question may well revolve around Treasury “rules” and practices, the difficulties of committing money in advance (even in principle) and uncertain economic circumstances. But, political economists would also wonder about how committed governments are, despite their rhetoric to the contrary, to improving education outcomes in this way – rather than, say, funding a salary increase for (voting) teachers or investing in politically advantageous projects.
Despite such scepticism, there are indications of potential. In India, in the Skill Impact Bond launched in 2024, a quasi-government agency has agreed to become one of the investors. In Sierra Leone the government pays 10% of outcomes funding, and in the EOF-supported programme in South Africa, 50%. Where governments are willing to commit at 50%-plus levels, there may be real possibilities for scale and sustainability.
Conclusion
Anything that can move the needle on literacy and numeracy deserves our attention. There are clearly some high-achieving OBF programmes that are making a real difference to children’s lives. The results of QEI, SLEIC’s second-year outcomes and Project Maitri are impressive, and the ambition of LiftEd to address system strengthening is commendable.
Despite that, the differences from more traditional education projects used to justify OBF seem overhyped – partly because the “traditional” model has itself been changing over time to focus more clearly on outcomes, and partly because the OBF model, as seen in these real-life examples, has made a number of practical compromises that weaken its claim to be a radically different model.
Nonetheless, the single-minded focus on outcomes is a strength, one recognised by most implementers, whatever their other criticisms. But the claims being made for efficiency and innovation are overstated, and there is a likelihood that programme managers are convincing themselves of a set of implementer motivations which are simply an outward sign of compliance, not a change in behaviours, attitudes or approach. Would implementers, given a choice of grant funding or an OBF model, choose the latter?
Even if that is a one-sided interpretation, the costs of OBF models, including the costs of investors and programme managers, need to be more transparently shared (assuming they are fully recorded), and there seems to be little research comparing the cost effectiveness of this model with more traditional approaches – which might be thought a priority for a model claiming greater cost effectiveness. Further, the real costs to government of actually picking up and scaling these programmes need greater thought. What sunk costs are being ignored? Would this level of independent evaluation still be needed in a scaled model? All this might be mitigated if the model were attracting more funds and new investors to the education space. That may still happen.
More interesting, from the perspective of whether OBF is a model that can expand and scale beyond its present confines, is whether it can resolve the commercial-motivation and trust tensions at its heart. As one respondent commented: “If OBF is the answer, what’s the problem it’s solving?” If OBF is solving a motivational problem, why is that outsourced to a non-technical third party? Surely one answer would be greater engagement by implementers in commercial understanding, not less? If implementers are trusted to achieve results, why is a commercial motivation needed at all? Why not play to those intrinsic motivations of really making a difference and enhancing reputation, through an incentive model rather than a penalty model?
The future of OBF, judging by the examples in India that seem further advanced, seems to lie in addressing those issues through greater collaboration, planning and learning. OBF is likely here to stay, likely even to grow and assume greater importance. Educators working in this space need to engage positively, but critically, with these models.
Andy Brock, April 2025
Links and Resources:
Education Outcomes Fund website
Educate Girls Development Impact Bond: https://www.educategirls.ngo/dib/
Educate Girls Maitri Report: https://educategirls.us/wp-content/uploads/2024/12/Maitri-Report-A4-_15-11-2024.pdf
GoLab website (Government Outcomes Lab)
https://golab.bsg.ox.ac.uk/community/events/educate-girls/
“Outcomes fund sets template for collaborative investment to tackle SDGs, says Legatum CEO”
QEI DIB's Cost Effectiveness Guidebook for Education Interventions - Quality Education India DIB