‘Productivity isn’t everything, but in the long run, it’s almost everything’—Paul Krugman, Nobel Laureate (Economics)
Why such clinical ambivalence towards a word?
Productivity is a dirty little word for many in healthcare. There is something corporate about it, industrial with a Soviet twist, and anathema to many clinicians' sense of why they entered their professions. It smacks of management-speak, with its implied anxieties about a presumptive need for 'more', and has been described as 'a subject guaranteed to kill the attention of clinicians and patients'.1 It also has a flavour of insensitivity in an era of a fatigued workforce, rising levels of reported burn-out, staff quitting healthcare, and industrial action over pay and conditions.
Yet in a world of limited resources and opportunity costs, it is clearly sensible and appropriate for healthcare systems to ask, and be asked, what they are 'doing' and to what end. The now ubiquitous 'Triple Aim' (and its successors, the 'Quadruple Aim' and 'Quintuple Aim') of quality improvement (QI) in healthcare, propagated by the Boston-based Institute for Healthcare Improvement and embedded in UK National Health Service (NHS) provider licensing since 2023, explicitly includes a need to 'reduce the per capita costs of healthcare' alongside the need to improve population outcomes, patient and staff experience, and tackle inequities.2
Perhaps clarity over definitions and phrasing is key. The Office for National Statistics (ONS) is often considered the most authoritative source for NHS productivity performance data, and defines productivity as how well healthcare creates outputs (eg, outpatient appointments) from inputs (such as investment in staff and medication).3 However, outputs in healthcare are more complex than in some other industries. Activities per person-hour may be a crude measure for national economic performance, but in healthcare doing more does not always mean more people getting better. For this reason, the ONS typically measures ‘quality adjusted’ outputs when reporting public service healthcare productivity.4 Similarly, the King’s Fund nuances productivity definitions to include the quality of the output, such as patient outcomes, as a defining factor.5
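At its simplest (an illustrative formulation rather than the ONS's published methodology), this definition can be written as

$$\text{Productivity} = \frac{\text{quality-adjusted output}}{\text{input}}, \qquad \Delta\text{Productivity} \approx \Delta\text{Output} - \Delta\text{Input}.$$

On this sketch, if quality-adjusted activity grows by 3% in a year while investment in staff and medication grows by 5%, measured productivity falls by roughly 2%, even though the service is visibly 'doing more'.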
Anecdotally, when, as a leadership team, we posed a question to a room of over 50 senior staff, we observed cynicism at the need for 'cost improvement programmes'. However, fewer than half agreed that their own service 'worked in the most efficient way', and a desire for efficiency improvements on the back of this was unanimous. Clinicians may find themselves more aligned with a related narrative that translates productivity into enhancing quality, safety and patient experience, in their native language of research, innovation and QI.
Postpandemic money and staff: the questions are not going away
The issue certainly has contemporary resonance. As the dust settles and a postpandemic world emerges with the return of most clinical services, a light is being shone on the growing costs of healthcare and the additional investments made during this period.
UK data show an increase in health spending of over £20 billion per annum since the pandemic.6 Analysis from the Institute for Fiscal Studies highlighted staffing increases of almost 16% in consultant numbers and just under 25% in junior doctors between 2019 and 2023. Yet, with this extra money and staff, there has been no evident related improvement in key markers such as waiting times or the number of outpatient appointments offered. In fact, many markers have deteriorated, with reported overall satisfaction in the NHS at its lowest level for 40 years.7
More specifically, in our NHS Integrated Care System, North West London, recorded referral activity in mental health services has not grown at the same rate as additional investment over the past 5 years (figure 1). Although there may be numerous factors to account for this, the divergence between investment and activity presents as an apparent 'productivity gap' that warrants careful examination.
Commissioners can point to real-terms increases in investment; however, crude temperature checks usually affirm that neither patients nor clinicians feel any uplift or positive change. Indeed, the popular narrative typically remains one of service decline.
In the USA, the profitability of health systems has been under pressure due to rising costs and labour shortages, and measures to tackle rising costs by improving productivity and implementing technological solutions, including the adoption of artificial intelligence (AI) within workflows, are being sought across the industry8—suggesting that the productivity narrative is not unique to the UK's system of taxpayer-funded healthcare.
What is happening? One can put forth various putative explanations: the nature of the work has changed since the pandemic, with rising clinical need and complexity; there is presenteeism among a low-morale workforce feeling under the cosh; cost inflation is an inevitable outcome of a crumbling infrastructure. Or are we looking at the wrong things?
Amanda Pritchard, the chief executive of NHS England, has argued that routine measurements fail to capture community care and diagnostics and—crucially, we believe—where there has been improvement to the quality rather than the quantity of care delivered.9 Contrary to what many expect, healthcare systems are typically very lean on managers; in the UK, for example, they account for about 2% of the public healthcare workforce against a national corporate average closer to 10%.10 Perhaps this is simply too thin to effectively manage such complex systems.
Recent think tank blogs and healthcare industry publications are abuzz with the topic: writing for the King's Fund, Siva Anandaciva noted the number of dedicated productivity reviews in recent years, how the issue remains a political priority, and, provocatively, that 'if we really knew what was going on with NHS productivity, we wouldn't be talking about it so much'.11 Thea Stein, the chief executive of the Nuffield Trust, wrote recently, 'Politicians are clear, as are the Treasury, that they want an answer to why productivity is falling.'12 Aligning with this, at a Nuffield Trust summit in March 2024, the British health secretary Victoria Atkins stated, 'By the summer NHS England will start reporting against new productivity metrics, not only at national level but also across integrated care boards and trusts…[with] new incentives to reward providers which hit productivity targets,' promising reinvestment for those who meet these.13 Like it or loathe it, productivity is here to stay in healthcare.
Mental health exceptionalism
When it comes to productivity, mental health has typically claimed exceptionalism. How much easier, goes the cry, to measure the number of completed hip replacements, complication rates from cardiac stent insertions or improvement in forced expiratory volume in 1 s in response to a bronchodilator medication. Far harder, it is posited, is weighing sunshine in mental health and capturing what is sometimes argued to be the realm of the subjective inner mind. This might have some truth to it, but those working in physical health services will likely push back that they remain held accountable to delivery targets on waiting times and financial envelopes more than care quality.
Further, mental health research journals are filled with clinical measurement and change. Indeed, much of this work in recent times has been a rebuttal to a perceived different form of mental health exceptionalism: stigma. There is a contemporary emphasis on data from mental health research showing that outcomes are as good as those in the management of comparable physical health fields. But this has seldom translated into service delivery in terms of wide-scale use of clinical data outside of specific, often non-real-world, research trials.
One result is that we end up tracking and reporting process key performance indicators, such as mean duration of inpatient admission. These tell us something, but perhaps have little value to clinicians or patients, and tell us less than we would like about care quality.14 For example, counting the number of times a patient has been 'seen' raises the question of what this actually means: in the context of mental illness, repeated clinical reviews might ultimately be a measure of ineffective care, whereas a successful intervention may arguably result in the need for fewer contacts.
This ambiguity and a sense that routinely collected data matter little are too often compounded by information technology (IT) systems that seem to hinder rather than help busy clinicians. In our organisation, we demonstrated that it takes an average of 27 clicks of a computer mouse to complete the mandated measurement of ‘recording and outcoming’ an appointment in our mature electronic record system. Frustratingly, only if this series of actions were taken could this activity be counted by our number-crunching colleagues who generate reports of our productivity. After all this work—if it is done, and done properly—out pops one of a handful of data points such as ‘attended’, ‘did not attend’ and so forth, with little nuance. The UK government commissioned Dr Geraldine Strathdee (former National Clinical Director for Mental Health) to undertake a rapid review to improve the way data and information are used in relation to patient safety in mental health inpatient settings and pathways—her observations were damning15:
When we first established the review, one of our assumptions was that the data burden on staff was too high and that we would need to make recommendations to reduce it. However, we were not prepared for the sheer scale of the issue… it was common for frontline nursing and clinical staff to spend as much as half their shifts in the office entering data. We were told by Trust leaders that roughly half of their analysts’ time was used to flow data to national and local data sets instead of providing support for quality improvement to frontline staff.
Harder still to measure, even with simplistic appointment models of attended/did not attend, is the impact of multiprofessional care. This is particularly pertinent in mental health services, which require biopsychosocial assessments and interventions with a team-based approach to treatment. This includes 'activity' that may be purchased by healthcare systems but is less amenable to capture with their technology or IT systems, due to delivery by partner agencies, voluntary sector organisations and, increasingly, by peers with lived experience. The indirect input of clinicians providing consultation, supervision and direction through other team members has been clearly described for psychiatrists for over 20 years, and has arguably been reborn in the UK's national community framework.16 'Advice and guidance' into primary care-led integrated neighbourhood teams is expected to support a wider and more flexible cohort of patients in the lowest acuity setting—but this 'activity', which for some professionals may amount to the majority of their working week, is rarely adequately recorded.
Part of this harks back, in the UK and many other countries, to a mental health 'block contract' funding model that has traditionally sidestepped the need to capture 'billable' work for each individual 'customer'. Rather, a large, approximated sum of money is allocated with which to deliver care to a given population. While perhaps simpler, this may disincentivise the kinds of granular activity recording that can generate 'profits', and arguably innovation and quality of care. Further, there are data suggesting that the model typically keeps mental health organisations 'within budget', as they work to this rather than trying to demonstrate performance, with any surpluses skimmed off into a system control total for inevitably overspending acute healthcare. This is changing, and perhaps that is to be welcomed. The question is what our key measurements will be. Worldwide, the majority of countries fall short of WHO-recommended levels of mental health investment; however, even where investments are made based on billable specialist activities and interventions, this may be at the expense of community and grassroots activities and primary care-based mental health, which may also impact on societal mental health.17
Over the years, there have been attempts to redress this, with varying degrees of success. In the UK, the phrases 'clustering', 'HoNOS' (Health of the Nation Outcome Scale, a mandated but poorly completed and used attempt at developing a universal clinician-reported outcome measure) and 'payment by results' risk triggering angry responses from mental health clinicians forced to fill out forms on their patients that were argued to map little onto clinical states and added nothing to care, yet formed the basis of putative payment 'tariffs'. The inevitable outcomes were poor completion and 'gaming' of data. In the coming years, a revised UK mental health 'currency' is planned that maps nationally captured coded mental health activity data against pricing for core mental health services delivered,18 and it remains to be seen whether the same problems will recur. More recent has been the slow and inconsistent rise of clinical outcome measurement.
One of the largest growth areas in mental health services in the UK over the last 15 years has been the development of Improving Access to Psychological Therapies (IAPT)—now known as 'Talking Therapies'—services under the last Labour government from 2008. The hypothesis of the economist Lord Layard, the father of the service, was that tackling common mental disorders with evidence-based interventions would positively impact the well-being and productivity of the workforce, improving employment rates and reducing welfare costs. Unsurprisingly perhaps, IAPT is one of the few areas where 'recovery', measured using validated tools such as the Patient Health Questionnaire-9 and the Generalized Anxiety Disorder-7, is robustly captured. In 2022–2023, 672 000 individuals completed treatment in IAPT services, with 49.9% achieving 'recovery'.19
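Taking those headline figures at face value, this equates to roughly 672 000 × 0.499 ≈ 335 000 people crossing the recovery threshold in a single year: a sense of scale that routine outcome measurement makes visible.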
For traditional mental health services, there is more work to do. We are currently seeing a move towards DIALOG, and a reasoned desire for consistency across services, for reasons of cross-comparison and longitudinal tracking. Nevertheless, with few exceptions and despite these slow shifts, and in what might sound a strange and uncomfortable statement to utter aloud, in 2024 we are not able to consistently measure and report clinical change in our patients.
Ways forward
So why should we care, and what should we do? We argue that productivity clearly matters, and that mental health exceptionalism needs to end. Any complexity in the clinical field needs to be appropriately addressed with the right markers. In our opinion, progress requires movement in three directions.
First, clinicians need to be engaged in discussions on productivity. While we are awash with QI teams, data suggest we are not really embedding them into how organisations think and learn, nor adequately empowering frontline staff to make changes they know will be effective.1 We believe this is a two-way responsibility: managers need to create this space, but frontline staff need to step into it and inform the discussion. Research is unambiguous that an uninvolved workforce with low morale performs less well.20 It is not just that engaging staff is the right thing to do—of course it is—but that this shift cannot succeed without it. Without their voices, we are too liable to default to easier-to-measure but less meaningful process markers, such as mean duration of stay.
In our own services, in West London NHS Trust, we are seeking to engage our workforce at all levels in the accurate capture of their work, both in terms of valuing their outputs (‘Your time counts—let’s make sure it’s counted’) (figure 2) as well as outcomes, engaging with services to incorporate clinician and patient-reported measurements (such as DIALOG) into all our work. By combining the efforts of our clinical, operational management, transformation and business intelligence professionals, we have implemented successive rounds of incremental improvement focusing on improved activity capture, targeted interventions and high-impact service redesign—with the key focus of protecting investment in our services by seeking to eliminate the perceived ‘productivity gap’.
Clinicians have an intuitive, inherent understanding of what makes a 'good' or 'bad' service, and insights into where things are or are not working, and where this is or is not captured by data. Appropriately supported, they can be effective champions of change, and instigators of more effective initiatives and innovations. We also advocate 'quick wins' to show (and showcase), in a manner akin to how QI typically operates, how together we can attain more productive services, and how this can be of immediate gain. Examples are too numerous to list fully, but might include switching to electronic communication instead of posted letters for patients who prefer this; intelligent rostering of community visits to minimise travel requirements; instigating job plans that more thoughtfully lay out clinical expectations in a way that provides better and more compassionate care; or harnessing tools such as predictive analytics to identify impending crises and improve caseload prioritisation.
Second, too often information systems are not supporting—and indeed are potentially hurting—clinician engagement. There is a perversity, therefore, that the very activities required to evidence improved productivity distract from the clinical jobs at hand. To our knowledge, no healthcare system (and no mental health system in particular) has mastered the art of passive data capture, where the labours of staff are captured invisibly. This needs to be recognised. Well-intentioned policy documents such as the NHS Workforce Plan21 speak aspirationally of how 'AI (artificial intelligence) can free up staff time and improve efficiency', and 'robotic process…available 24/7 and can undertake tasks 4–10 times faster with fewer errors'. Such statements are frankly liable to antagonise a workforce faced with multiple electronic patient records that do not adequately speak to each other. Perverse incentives reward (and require) staff spending more time on active data entry away from direct patient care: if that is what we measure, then we cannot complain when that is the outcome. A more realistic and honest approach is required, remedying or ameliorating where possible, aspiring to a passive recording approach, and in the meantime being honest with staff where IT remains part of the problem. Further, organisations have perhaps been slow to fully tap into other forms of rich information that often sit outside traditional hard-coded databases. Immediate examples include better use of quantitative and qualitative intelligence from patient, carer and staff feedback, experience data and explicit insight gathering.
Third, managers and leaders need to be cognisant of how such discussions can feel to a stretched, tired and sometimes disillusioned workforce. We agree with the Nuffield Trust that there is a lack of shared understanding on this topic,12 and leaders need to bridge this divide. Frontline staff roles might keep them blind to broader regional or national conversations and tariffs. Here is where managers and leaders have a key role in informing and guiding what might be within a team, service or organisation's gift to set up or agree with a commissioner, regional or national regulator or others, and what might not be, with a need to adhere to relevant agreed targets. One critical factor must be to ensure that all data captured are fed back in a digestible way to the staff who enter them, and in a way that allows them to benchmark themselves against their peers. Too often data captured are scrutinised in aggregate at boards and external oversight meetings, yet the staff members themselves never see or learn to understand them. Nothing is more demoralising than knowing you are working much harder than your immediate colleague, especially if they are paid more through an agency assignment—but perhaps shining a light on this with dashboards visible to all allows staff to compare their workload and self-correct? One US healthcare provider based in New York state (Summit Health) describes this approach as 'measured accountability'.22
Increased productivity should have us working smarter, not harder, and evidencing the great work that is already occurring. It must benefit clinicians, the care and services they provide and, ultimately, patient outcomes, and not just help populate a Board paper's integrated performance report. The lead author was recently struck by one of our organisation's staff posters that said 'Kindness counts': this is true, and part of our values, but to turn the phrase around, we do not count kindness in any form of measure. Norrish et al have argued that social capital is as important as financial capital in healthcare,23 and, writing in this journal, Klaber et al have recently emphasised that placing kindness at the centre of leadership is underused yet essential to advance care and build productive services.24 Bravery might be required in letting some erstwhile activities and measurements fall by the wayside, rather than continuing the current model that seems simply to acquire new things to add to our measurement burden.
We believe that there is a necessary yet healthy conversation that can be had. Its time has come, but—and perhaps here is mental health’s strength—it needs to be held in the right way with our staff and patients.
Ethics statements
Patient consent for publication
Ethics approval
Not applicable.
Footnotes
X @derektracy1, @drchrishilton
Contributors The authors both conceived of and wrote the editorial.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.