
What improvement looks like: An analyst’s perspective

Produced by:

Martin Chadderton, data manager, is a member of the Sandwell and West Birmingham Future Hospital development site team. Here, he outlines what improvement really looks like in a real-world, complex project.

What improvement doesn't look like

Below are two charts illustrating two common methods of presenting data and improvement over time. Neither is a good way of representing change, and both should be treated with a degree of caution.

Figure 1a is a graph of progress over time. Each unit of time between the start and end produces an equal level of improvement. The impressive trend starts near zero and progresses consistently over time to its goal.

Changes in life – weight loss, salary increase, depreciation of car prices, a child’s shoe size, and supposedly more controllable processes such as computer speed and cost of energy – do not increase or decrease in this fashion over time. The reality is that progress is a series of peaks and troughs (such as weight loss) or a series of uneven spikes (such as a child’s shoe size) along the way to the end position.

Figure 1b is an operational dashboard representing progress over time. It is the standard that many business intelligence reports and programme management offices use. The key is: red = bad, green = good, and amber = within an agreed tolerance between the two.

Figure 1b shows a starting position of red with a smooth transition into amber, then green over a 3-day period. This is another example of the linear trend. 
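The RAG key described above can be sketched as a simple classification rule. This is a minimal illustration only: the 90% target and the five-point amber band are assumed values, not the trust's actual thresholds.

```python
def rag_status(value: float, target: float, tolerance: float) -> str:
    """Classify a performance value against a target with an agreed tolerance.

    green : at or above the target
    amber : below the target but within the agreed tolerance
    red   : below the tolerance band
    """
    if value >= target:
        return "green"
    if value >= target - tolerance:
        return "amber"
    return "red"

# e.g. a 90% four-hour target with a 5-percentage-point amber band (assumed)
for pct in (92.0, 87.5, 70.0):
    print(pct, rag_status(pct, target=90.0, tolerance=5.0))
```

A dashboard built on a rule like this will happily show a smooth red-to-green transition even when the underlying measure is jumping around, which is exactly the concern raised above.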

What improvement actually looks like

In Joseph M. Juran’s Quality Trilogy [1] there is an initial phase of general operating conditions, which largely follows a familiar trend (a few spikes), during which quality control is applied. Next comes an improvement phase, where an intervention or two are made, followed by a further zone of quality control as the previous improvement is sustained. Figure 2a outlines this trilogy, with a progress line (in blue) showing a process over time.

Juran's trilogy: a real world example

Let’s imagine a man (perhaps an NHS analyst) is trying to reduce his weight from a hefty 14 stone down to a svelte 13 stone so that he can fit into his favourite pair of jeans. The initial ‘normal’ operating conditions will not be a consistent 14 stone across the week. For example, he will perhaps be lighter on days after exercising and eating healthily.

  • He plans to lose some weight (quality planning) and he weighs himself every day (quality control).

  • After deciding on an appropriate intervention, such as cutting down on sugar, his weight starts to reduce (quality improvement).

It may take a while for the intervention to start working, but eventually his weight starts to reduce. His weight does not change at regular intervals: some days he is a few pounds lighter, the next day there is no change. He learns his lessons about his diet; if he continues to follow the new process he should be able to maintain somewhere close to this weight until he decides on future interventions.
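The pattern of daily weigh-ins described above can be sketched in a few lines: the raw readings go up as well as down, but a simple trailing moving average still reveals the underlying improvement. The weights (in stone) are invented for illustration.

```python
# Hypothetical daily weigh-ins (stone) during the improvement phase
daily_weight = [14.0, 14.1, 13.9, 14.0, 13.8, 13.9, 13.7,
                13.8, 13.6, 13.6, 13.4, 13.5, 13.3, 13.2]

def moving_average(series, window=7):
    """Smooth a noisy series with a trailing moving average."""
    smoothed = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

smoothed = moving_average(daily_weight)

# The raw series rises on several days...
rises = sum(1 for a, b in zip(daily_weight, daily_weight[1:]) if b > a)
print(f"{rises} of {len(daily_weight) - 1} day-to-day changes were increases")

# ...but the smoothed trend still falls from start to finish
print(f"trend: {smoothed[0]:.2f} -> {smoothed[-1]:.2f} stone")
```

This is the same point Figure 1a misses: judged day to day the process looks erratic, but judged over the whole period the improvement is real.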

Other ways of measuring this success would be fitting into his favourite pair of jeans and saving a little extra cash from having fewer take-aways each week. It is the combination of these quantitative (weight loss, cash) and qualitative (wearing his favourite jeans) measures that shows that an improvement was made.

An NHS analyst's perspective

In 2002, our hospital was tasked with:

  • improving the A&E department wait times
  • meeting a target of treating 90% of patients within 4 hours.

This was one of the first projects I undertook as an NHS analyst. The performance measure was to be implemented in 2003.
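The headline measure above reduces to a simple calculation: the percentage of attendances completed within four hours. A minimal sketch, with illustrative wait times rather than real data:

```python
def pct_within_target(waits_minutes, target_minutes=240):
    """Percentage of attendances with a total wait at or under the target."""
    within = sum(1 for w in waits_minutes if w <= target_minutes)
    return 100.0 * within / len(waits_minutes)

# Hypothetical total waits (minutes) for ten attendances
waits = [35, 180, 250, 90, 300, 120, 230, 245, 60, 210]
print(f"{pct_within_target(waits):.1f}% seen within 4 hours")
```

The measure itself is trivial; the analytical work, as described below, lies in understanding which parts of the pathway drive the waits that breach it.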

I started to measure our performance in a way that we could understand the delays in our service. I also developed a real-time tracking tool which enabled A&E staff to see how long specific patients had waited.

This was one of the first occasions that such close monitoring had taken place within this area, and there were pockets of resistance (for example, rejection of the figures and resistance to having a monitoring tool).

Key learning

The original zone of quality control showed our performance moving in a range of 65% to 70% of patients seen within 4 hours, and our patient experience feedback scores were average for this area.

  • The metrics started to show that the major delays were around test times within the Minor Injury Unit.
  • By improving pathology and imaging turn-around times we could improve our performance.
  • Over time we realised that there was a group of low risk patients needing little treatment: these people could be identified and referred to the primary care stream.

Planning, refining and redesigning our capacity and operational teams through quality improvement resulted in our trust hitting 89.96% within the crucial measurement week. The following week, after our performance was submitted, we immediately dipped back down to below 80%, but this stabilised to closer to 85% shortly afterwards.

As an analyst, I was able to illustrate the change via the trend above. More importantly, I observed staff becoming more receptive to being measured and having their service analysed, and patient satisfaction scores increasing.


[1] Juran JM, DeFeo JA. Juran's Quality Handbook: The Complete Guide to Performance Excellence. 6th ed. London: McGraw-Hill; 2010.