National review of community risk methodology across UK Fire and Rescue Services

Evaluation of Risk Management Practices

The evaluation aimed to explore how Fire and Rescue Services (FRSs) assess risk in their communities, including the methodologies used to assess risk, how decisions about which interventions to deliver are made, and how the effectiveness of those interventions is evaluated.

We first wanted to capture current practice across FRSs, and then to identify:

  1. areas of good practice and
  2. gaps in practice

We then used this analysis to suggest future research that should be conducted to capitalise on the good practice currently observed and to address the gaps in the data. This also allows for reflection on the differences in approach that might be observed between FRSs, given the socio-demographic and geographical variation across the UK.

Definitions of risk were used frequently in the submissions. These included references to places, individuals, demographic groups, populations, and specific activities. However, when considering what a risk assessment methodology is trying to achieve, Dorset and Wiltshire FRS offer an informative view: "Part of the process is to challenge the term risk and bring the concept back to hazards, frequencies/risk and consequences/outcomes. The (methodology) defines risk as the likelihood and consequence of fire and rescue related incidents". This raises the question of what risk methodologies are designed to achieve.
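To illustrate the quoted definition, the sketch below combines likelihood and consequence ratings into a single score. The 1-5 scales, the multiplicative combination, and the score_risk function are illustrative assumptions of a conventional risk-matrix approach, not a reconstruction of any FRS's methodology.

```python
# A minimal sketch of a likelihood x consequence risk score.
# The 1-5 scales and multiplicative combination are illustrative
# assumptions, not any specific FRS's published methodology.

def score_risk(likelihood: int, consequence: int) -> int:
    """Combine likelihood and consequence ratings into a single risk score."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("ratings must be on a 1-5 scale")
    return likelihood * consequence

# e.g. a frequent incident type with moderate consequences:
print(score_risk(likelihood=4, consequence=3))  # 12 out of a possible 25
```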

Focusing on the definition of risk, an anonymous FRS stated that they defined risk as a "vulnerability/risk greater than that of the underlying population". If this is accepted as our premise, the next step is to establish what assessing risk is intended to achieve. Devon and Somerset FRS suggest: "The objective of our analysis of risk is to provide support to decision making rather than decisions themselves". We share this position: the aim of the methodology is to create an evidence base that supports professional, informed judgements about resourcing to address or mitigate risk, and this report is written from that perspective.

To empirically assess the quality of the methodology designs, quality markers were adapted from Nutley, Powell and Davies' (2013) "What Counts as Good Evidence?", giving four sub-categories for further in-depth exploration. Further quality markers were adapted from the Social Care Institute for Excellence (SCIE; Rutter et al., 2010; Pawson et al., 2006); both sets are shown in Table 1 below.
Table 1. Quality Markers

Quality markers adapted from Nutley, Powell and Davies (2013):

  • Methodology quality
    • explicit data used
    • explicit design
    • validity of methodology
    • reliability of methodology
  • Methodology specificity
    • clear target population
    • intended outcomes identified and evidence-based
  • Application of methodology
    • explicit link to how the methodology supports decision making about resources
  • Evaluation of interventions
    • evaluation of interventions captured and assessed through triangulation of information

Quality markers adapted from SCIE (Rutter et al., 2010; Pawson et al., 2006):

  • Organisational knowledge: FRS knowledge/data used
  • Practitioner knowledge: FRS sector-wide knowledge/data used
  • User knowledge: organisation and community contribution/consultation captured
  • Research knowledge: academic/expert/evaluation sources used
  • Policy community knowledge: wider data sets from other partners/sources used

We generated 15 questions from the above quality markers, which were used to interrogate the data. Sometimes respondents referred us to their risk management plans (RMPs), in which case we reviewed the plan and treated the relevant content as part of their response. These questions are outlined in Box 1.

Box 1. Methodology Design Questions

  1. Has the submission been produced in partnership with an academic institute?
  2. Has any published work undergone academic panel review before publication?
  3. Does the submission take into account the current research or literature in the area to which it relates?
  4. Where current literature or research reports findings that differ from those of the submission, does the submission provide sufficient evidence to support its own findings?
  5. Does the submission cite the sources of information, research, data, and literature used within it?
  6. Does the submission detail the methodology used?
  7. Does the submission align with the NFCC Strategy?
  8. Is the submission based on risk or demand?
  9. Is the methodology used suitable for the outcome the submission aims to achieve?
  10. Is the methodology used repeatable, with the same results?
  11. Are the sources of data and information used in the submission credible (i.e. have they been scrutinised or produced by a reputable agency)?
  12. Is the analytical process used suitable?
  13. What size of data sets or samples have been used?
  14. Does the submission detail any shortfalls or limitations?
  15. Does the evidence base have an explicit link to decision making?


The responses to all but two questions could be categorised as having met the criterion (coded as "yes"), partly met the criterion (coded as "partly"), or not met the criterion at all (coded as "no"). Questions where the submission and (where included) the RMP did not include enough detail to make a conclusive decision were coded as "not enough information". The two questions that do not fit this coding were Question 8, which asks about the type of modelling used in the submission, and Question 13, which asks what data were used. Question 8 was therefore coded as being based on "risk", "demand", or "risk and demand", while Question 13 was coded as having used "local", "regional", "national", or "all" data within the submission.
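As an illustration, the coding scheme just described could be represented as follows. This is a hypothetical sketch of the scheme, not the coding instrument used in the review; question numbering follows Box 1.

```python
# Illustrative representation of the coding scheme described above
# (a hypothetical sketch, not the actual coding instrument).

DEFAULT_CODES = {"yes", "partly", "no", "not enough information"}

# The two questions with bespoke code sets:
SPECIAL_CODES = {
    8: {"risk", "demand", "risk and demand"},      # type of modelling used
    13: {"local", "regional", "national", "all"},  # scope of data used
}

def valid_codes(question: int) -> set[str]:
    """Return the set of codes permitted for a given Box 1 question."""
    return SPECIAL_CODES.get(question, DEFAULT_CODES)

assert "partly" in valid_codes(5)
assert valid_codes(13) == {"local", "regional", "national", "all"}
```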

In the interests of ensuring the anonymity of the services, individuating characteristics of each FRS are not considered. Rather, we explore the emerging themes of practice across the UK. This highlights consistencies and variabilities in practice, as well as demonstrating areas of particularly good practice.

To enable us to identify the methodological strengths of community risk management processes, we first calculated the percentage of criteria classed as having been met in each of the 43 submissions.
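A minimal sketch of that calculation is given below, assuming each submission's codes are held in a dictionary keyed by question number. The percent_met helper, the treatment of "partly" as unmet, and the example data are our own assumptions, not the analysis code used in the review.

```python
# Illustrative calculation of the percentage of criteria met per submission.
# Counting only "yes" as met, and excluding Questions 8 and 13 (whose codes
# describe the submission rather than whether a criterion was met), are
# assumptions for the sake of the example.

DESCRIPTIVE_QUESTIONS = {8, 13}

def percent_met(codes: dict[int, str]) -> float:
    """Percentage of applicable criteria coded 'yes' for one submission."""
    applicable = {q: c for q, c in codes.items() if q not in DESCRIPTIVE_QUESTIONS}
    met = sum(1 for code in applicable.values() if code == "yes")
    return 100 * met / len(applicable)

# Hypothetical submission coded against the 15 Box 1 questions:
example = {q: "yes" for q in range(1, 16)}
example.update({4: "partly", 11: "no", 8: "risk", 13: "local"})
print(f"{percent_met(example):.0f}% of criteria met")  # 11 of 13 -> 85%
```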