National review of community risk methodology across UK Fire and Rescue Services

Section: Risk Assessment

This part of the questionnaire asked 8 questions, 3 of which were further subdivided. We received 42 responses that allowed us to record and evaluate an answer to every question. The responses are summarised in the following diagrams. The first figure (Figure 1) shows the extent to which community risks are currently addressed in community risk management.

Figure 1. The extent to which community risks are currently addressed in community risk management.

The scope of how community risks were identified was defined as the submission presenting a broad, comprehensive, rigorous and coherent identification of risks aligned to its risk management plan. Submissions were categorised as having met these four aspects ‘Fully’, ‘Substantially’, ‘Adequately’, ‘Moderately’ or ‘Partially’.

We established four markers of quality for evaluating ‘scope’ (broad, comprehensive, rigorous and coherent). The results for this question are (marginally) the most polarised of all the questions, with 20 of the 42 responses (48%) falling in the ‘Adequately’ category. To fall in the ‘Fully’ category we were looking for some form of robust, independent external validation or quality assurance of the service’s assessment of risk.

Figure 2. The extent to which the nature and types of community risk are identified and defined in the submission.

We next established three markers of quality for evaluating ‘the nature and types of risks’ (explicit, multiple and comprehensive). This was conceptualised as risks to communities, groups, geographical regions, individuals, or firefighters. Submissions were categorised as having met these three aspects ‘Fully’, ‘Substantially’, ‘Adequately’, ‘Moderately’ or ‘Partially’. This question was intended to explore how services define risks and how their processes go about identifying them. Instead of asking an open question, the services were given some clear indications as to the types of areas we were looking for (communities or groups, geographical areas, individuals and firefighters). As with the previous question, to fall in the ‘Fully’ category we were looking for some form of robust, independent external validation or quality assurance of the service’s assessment of risk. Figure 2 shows that the majority of FRSs identified the type and nature of community risk adequately; however, there is a clear gap, with no FRS fully identifying the type and nature of the risk.

Collectively, across all the submissions, we have collated the risks identified into two broad categories:

  • Infrastructure – Fire: ADF, major dwelling fire, other building type fires, Industrial premises, Heritage premises, RTCs, Flooding, Water rescue, Firefighter safety, Adverse weather, Food prepared on premises, Transport incidents, Terrorism/MTA, Local geography, Local infrastructure, Housing stock, Climate change, Community expansion (housing and industry), Disease and pandemic flu, Areas in the IRS (as opposed to incident data).
  • Sociodemographic (community/group) risk – Age (Over 60s), Lifestyle choice (Number of dependents x cohabitation, Drug and alcohol use, Smokers, Hoarding, Registered bariatric), Health status (Oxygen dependent, Registered mental health challenges), Gender, Ethnicity, Deprivation/poverty, Level of household occupancy (single person occupancy), Multiple sleeping risk/sleeping accommodation above, Forecast population changes, Forecast demographic changes.

Figure 3. The extent to which high-risk areas, groups, and individuals are identified in the submission.

We finally established three markers of quality for identifying ‘high risk or vulnerable areas or groups’ (explicit, multiple and rigorous). Submissions were categorised as having met these three aspects ‘Fully’, ‘Substantially’, ‘Adequately’, ‘Moderately’ or ‘Partially’. Of the first four questions within the survey, this question related to an area where services have clearly been undertaking considerable work, almost universally with their strategic partners. Results from this analysis are shown in Figure 3.

From our initial analysis, larger services tended to score more consistently well, although there appears to be more variation in performance within the four categories (large, metropolitan, combined and county services) than between the categories. Metropolitan services tended to score relatively highly, with just a single outlier. County authorities scored slightly lower on average than combined authorities. Responses from the three Welsh services were surprisingly variable.