National review of community risk methodology across UK Fire and Rescue Services

Review of Current Practice

Evaluation should be a permanent, ongoing practice within the UK FRS. Establishing an evidence base of which evaluation methodologies work in which contexts is a sector intelligence model that should be a national aspiration of the UK FRS. We are not advocating here that the UK FRS should hold a compendium of evaluated and approved prevention and protection initiatives: we recognise that, by the very nature of working with communities to reduce risk, what works in one context with one community might not work with another. For this reason, we do not advocate a compendium of prevention and protection work per se. What we are arguing for is a compendium of evaluation methodologies for the UK FRS, to enable evaluation of FRS prevention and protection work to a robust, valid and reliable standard. This would enable each UK FRS to evaluate its own delivery of prevention and protection work with its own communities in a timely manner, and to a standard that can withstand scrutiny.

The submissions presented a continuum of practice, from FRSs fully engaged in evaluation work through to FRSs that stated evaluation work was either in development or yet to be developed.

Figure 29. Academic Scrutiny: did published work undergo academic panel review before publication (top), and was the submission completed in partnership with an academic institution (bottom)?

Very few submissions had been completed alongside academic researchers (Figure 29). Some had been conducted in combination with a third-party research company (coded as 'partly'); however, we recognise that external research companies may not be impartial in their approach. Through collaboration with academic partners, FRSs can develop evidence-based practice that is informed by the academic literature and which in turn informs future work in the area.

We used external collaboration with an academic partner as a marker of quality not out of self-interest, but because all of the quality criteria we reviewed include partnering with such institutions, rather than with a commercial research company, as an indicator of quality. This is because academics are regarded as neutral with respect to politics and finance, and as maintaining a high level of integrity and subject knowledge.

Figure 30. The number of FRSs who carried out bespoke evaluations on prevention, protection, and response activities.

Once levels of risk are identified, FRSs develop strategies and interventions to direct attention to key service priorities. We explored how FRSs evaluated the effectiveness of these activities (Figure 30), and found that the majority of FRSs do not evaluate the effectiveness of interventions and strategies aimed at prevention, protection, and response. As a result, FRSs cannot be certain that their efforts and resources are being used effectively, and the potential to share good practice with other services is also limited.

Often the effectiveness of interventions was evaluated by examining changes in target indicators (e.g. the number of fires, fatalities, etc.). However, such general evaluations cannot effectively establish causality, and the aim of an intervention is often, for example, to educate or to reduce ‘near misses’, outcomes that these indicators do not capture.
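
As a purely illustrative sketch of this limitation (the area names and incident counts below are hypothetical and are not drawn from the submissions), a simple before/after comparison of a target indicator can appear to show an effect that disappears once a comparison area with the same underlying trend is taken into account:

```python
# Hypothetical illustration: why a pre/post drop in a target indicator alone
# does not demonstrate that an intervention caused the change.
# All figures are invented for the purpose of the example.

intervention_area = {"before": 120, "after": 96}   # area that received the initiative
comparison_area = {"before": 110, "after": 88}     # similar area that did not

def percent_change(counts):
    """Simple pre/post change in incident counts, i.e. the 'general evaluation' described above."""
    return (counts["after"] - counts["before"]) / counts["before"] * 100

print(f"Intervention area: {percent_change(intervention_area):+.1f}% change")  # -20.0%
print(f"Comparison area:   {percent_change(comparison_area):+.1f}% change")    # -20.0%

# Both areas fell by 20%, so the drop in the intervention area reflects a
# wider trend rather than an effect of the initiative. A difference-in-
# differences style comparison makes this explicit:
effect = percent_change(intervention_area) - percent_change(comparison_area)
print(f"Estimated intervention effect: {effect:+.1f} percentage points")        # +0.0
```

Bespoke evaluation designs, discussed below, are intended to close exactly this gap between a change in an indicator and evidence that an intervention caused it.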

Bespoke evaluations would enable FRSs to capture finer detail of which interventions are effective (and why), which is a key component of good practice. Only 26% of submissions reported conducting bespoke evaluations to assess the effectiveness of interventions (where conducted, these were often carried out with a university partner). Many of the services that do not actively conduct bespoke evaluations showed an awareness that doing so would be useful.