We measure performance to assess whether an intervention (or a person, an organization, in fact any agent) is making a difference, and, where necessary, to adjust course and approach to improve.
This article considers the high-level impact of recent political developments upon UNAIDS, the Joint UN Programme on HIV/AIDS, before focusing on broader and more nuanced questions of performance monitoring. I hope this may be useful to those grappling with the challenges of measuring impact. It is based on my twenty years of experience with UNAIDS and written in a personal capacity.
Image generated with DALL·E by OpenAI
10 March 2025
Between 20 January and 10 March 2025, 83% of USAID programmes were cut. In 2023, USAID provided 20% of total global development aid (more in some sectors, such as 40% for UNAIDS). Expertise and capacity built up over 60 years have been dismantled. Some see this as rapidly redirecting resources to where they are needed, others as an ill-advised reduction of a soft power 'tool', and others as global vandalism.
Perspectives around performance and progress have always been political as well as technical, but few expected the world's largest international aid agency to be 'fed through the wood chipper'.
For much of my time with UNAIDS, I worked on performance monitoring. Within the Programme, achievements and challenges have been carefully recorded, summarized and presented annually to the Programme Coordinating Board (PCB). In the last few weeks UNAIDS has been documenting the retrograde effects of the USAID-related cuts.
Funds for HIV across all sectors (UNAIDS represents less than 1% of the total) have been falling for several years, but this has been a drip-drip compared with the whirlpool of the US cuts, which entails enormous repercussions. The number of people on antiretroviral therapy (ART) has climbed steadily over the last 25 years, so that over three-quarters of people living with HIV now receive it. Progress in HIV prevention and in tackling HIV-related stigma and discrimination has been slower, as well as harder to measure. The massive budget cuts, falling particularly on countries more dependent on international aid, will lead to rationing and gaps in the provision of ART, and to the scaling back of other HIV-related programmes.
In South Africa alone, it is feared that there will be half a million extra AIDS deaths over the next ten years. One practitioner suggested, “Instead of a careful handover, we're being pushed off a cliff”.
Over the last four decades, the response to HIV has benefitted from some of the most concerted, coordinated and resource-intensive efforts mounted against any epidemic. Just as this work will be affected, responses to other diseases will also suffer, from tuberculosis to measles to mpox to a whole host of neglected tropical diseases. Dismantling the infrastructure and programmes of USAID (including purges of ‘gender’ and ‘diversity’ terminology) means stopping vital routine surveillance and data collection across the globe. With less measurement and less coordination, everyone’s health and wellbeing will be at greater risk.
Even in the best of times, measuring the HIV response has had its limitations. The cliff is made less of chalk and more of Emmental. Some of the challenges are elaborated below.
Finding the balance
The response to HIV is multi-sectoral, and UNAIDS consists of 12 UN organizations. The UNAIDS Unified Budget, Results and Accountability Framework (UBRAF) includes multiple activities and expected outputs, outcomes and impacts. Finding the right balance between simplicity and complexity to track progress meaningfully and usefully is not easy. Are the targets set at an appropriate, ‘stretching’ level? Do they allow budgets to be adjusted, based on performance and prioritization, in a responsive and effective way? Are enough resources assigned to measuring progress, and are they earmarked well? The answers are not always clear and may vary depending on who you ask.
Contribution or attribution
As noted, the UNAIDS budget is less than 1% of the total global HIV response and so, working with many countries and partners, it is nearly impossible to attribute results - showing a direct, causal link - to the Programme. In specific and rare cases, for example the provision of ART over several years in a particular place (funded by governments or donors), attribution can be attempted from input through to impact level, but even this is tenuous given that other factors are always in play. For other interventions, particularly in prevention and in addressing stigma and discrimination, measurement is more indirect and, in the least rigorous cases, as good as licking one's finger and sticking it in the air. As a consequence, all UNAIDS reporting expresses contribution: not causal, fuzzier, relying on good faith but also subject to exaggeration.
Continuity of results structures
UNAIDS strategies (for the global AIDS response) and UBRAFs (for UNAIDS' own work) have run on five- to six-year cycles. Because each has largely been constructed from scratch, it can target the priorities of the moment, but at the expense of continuity and therefore of comparison across periods. We ignore these cycles, and their bearing on the measurement of long-term progress, at our peril; as in politics, we are tempted by visions promising future miracles while downplaying and simplifying the messy reality of past development.
Questions of capacity and politics
Those working on performance monitoring tend to have limited capacity and so find themselves on a treadmill: measuring the last one- or two-year period, and/or preparing for the next, rather than being able to reflect on longer-term issues - or even to do a well-considered and sound analysis of trends and gaps in the short term. Diplomats on UN Boards, also stretched in multiple directions, tend to have tenures of three years or less, so typically do not have time to fully appreciate the cycles and history of the organization they follow. Indeed, the PCB has tended to ask for ‘more reporting’ one year and then ‘less’ the next. As my first supervisor at the UNAIDS Secretariat told me, the UBRAF is 'half technical and half political'.
In conclusion: performance measurement is complex and challenging, and it gets harder as the scale gets larger and the timeframes get longer. I wrote optimistically in my poem Dear Noemi that 'humankind has, throughout its history, waited expectantly for the end of the world but tended to wake up the next day, and the next, to find that life each time, on average, is a little but perhaps imperceptibly better' – a sentiment reflecting Hans Rosling’s book Factfulness. Regrettably, things seem to be getting perceptibly worse now.
Performance monitoring is essential and needs to be well resourced and managed. There are multiple and evolving tools that fit different contexts. Outcome mapping, for example, has stakeholders define at the outset of an intervention what progress would look like, allowing more nuanced, subjective and meaningful tracking. Artificial intelligence may offer extra capacity, perception and shortcuts to analyze progress more effectively, including over longer periods and taking wider narratives and more datasets into account.
But there are no magic bullets. We should not be complacent and must be aware of the limitations; through intent or necessity, messages will always need to be bounded, so we can read them quickly and move on to the next. Furthermore, years of meticulous effort channelled into building our Emmental castles on the beach may be washed away if the tide suddenly turns.