
6 Desirable Features Of Operational Risk Indicators

Operational risk indicators are measurable metrics that provide a proxy for operational risk exposure. A change in the value of a metric signals that a particular risk exposure may be changing, whether increasing or decreasing in probability or impact, or that a risk event may be imminent. In this regard, an indicator may signal:

  • A potential change in inherent/gross exposure (the underlying probability and/or impact of risk events) to one or more categories of operational risk
  • Control weaknesses and hence a change in residual exposure
  • A decline in the performance of the organisation due to the impact of operational risk exposures
  • A decline in the performance of the organisation’s operational risk management framework

Risk Indicators

A risk indicator acts as a proxy for risk exposure. A change in the value of a risk indicator signals a change in probability and/or impact. In this regard, risk indicators may relate to the causes or effects of operational risk events. Usually, risk indicators signal a change in an organisation’s underlying exposure to risk, also known as inherent or gross risk. This means that action may be required to enhance the control environment to maintain the current level of residual or net risk (exposure net of controls). If action is not taken, then an increase in inherent or gross risk may translate into an increase in residual risk.

Control Indicators

Control effectiveness indicators, usually referred to as Key Control Indicators (KCIs), are metrics that provide information on the extent to which a given operational risk control is meeting its intended objectives. KCIs indicate the effectiveness of particular controls at a particular point in time. Examples of KCIs include the results of formal control testing, along with loss and near-miss information which relates to the success or failure of controls in relation to specific operational risk events.

Performance Indicators

Performance indicators, usually referred to as Key Performance Indicators (KPIs), measure performance or the progress made towards the achievement of targets. This might include financial performance, or the performance of processes and systems, along with progress towards the achievement of financial targets, business plans and project risk management plans.

Key Indicators

An indicator is not automatically a key indicator in every context. An indicator becomes ‘key’ when it tracks an especially important risk exposure (a key risk), or when it does so especially well (that is, it is a good proxy for the risk in question). The Institute of Operational Risk’s (part of the IRM Group) perspective is that the term Key Risk Indicator should typically be reserved for indicators that are applied to key operational risks. A key operational risk may be one to which an organisation has a high exposure, which threatens the achievement of organisational objectives, or which is outside of appetite.

The selection and use of too many operational risk indicators can be as detrimental as too few. This is because decision-makers will struggle to take in all of the information (failing to see the wood for the trees), and because of the costs involved in collecting and reporting the information. Organisations are advised to establish specific characteristics for the data they use as indicators, separating broad data from the specific metrics used to indicate changes in exposure levels (which may include metrics on inherent/gross risk exposure, control effectiveness, or risk management plans and performance).

The 6 characteristics that organisations should consider when selecting effective operational risk indicators are outlined below:

1. Relevant: Operational risk indicators must provide reliable and accurate information on an organisation’s operational risk exposures. This should include providing management with information on both current and future exposures. Relevance can change over time, as new operational risk exposures emerge and existing exposures are mitigated or modified. Linking periodic reviews of the selected suite of operational risk indicators to the completion of risk and control self-assessments is an effective way to maintain relevance, as is drawing on the experience, knowledge and understanding of risk and control owners to help select the initial set of indicators and to suggest changes (both removals and additions) as necessary.

The following questions are useful to consider when assessing the relevance of existing operational risk indicators, or when considering adopting new ones:
  • Does the metric help quantify or measure the risk?
  • Does the metric help monitor the exposure?
  • Does the metric help manage the exposure and its consequences?

2. Measurable: Indicators should be measurable in a consistent manner. This allows the construction of trends to facilitate comparisons over time. It also enables the use of targets, limits and thresholds. This feature requires that indicators take one of the following forms:

  • Numbers or counts (number of days, employees, etc)
  • Monetary values
  • Percentages and ratios
  • Time durations
  • A value from some pre-defined rating set (such as that used by a credit rating agency)

Indicators that are described by text are prone to subjectivity, can easily be misinterpreted and are open to manipulation through the way the text is structured. Although they are possible in theory, they are therefore not recommended.
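
To illustrate how measurable indicators enable targets, limits and thresholds, the following minimal sketch (with hypothetical indicator names and threshold values) assigns a simple Red/Amber/Green status to each metric:

```python
# Minimal sketch: evaluating measurable indicators against pre-set
# amber and red thresholds. Indicators and values are hypothetical.

INDICATORS = {
    # name: (latest value, amber threshold, red threshold)
    "staff_turnover_pct": (9.5, 8.0, 12.0),
    "failed_trades_count": (14, 10, 25),
    "system_downtime_hours": (1.2, 4.0, 8.0),
}

def rag_status(value, amber, red):
    """Classify an indicator value as Green, Amber or Red."""
    if value >= red:
        return "Red"
    if value >= amber:
        return "Amber"
    return "Green"

for name, (value, amber, red) in INDICATORS.items():
    print(f"{name}: {value} -> {rag_status(value, amber, red)}")
```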

3. Forward-Looking (Leading): The use of so-called ‘lagging’ indicators, which provide information on past events or issues, is not recommended unless past trends can be relied upon to indicate the future, which is rarely the case in the modern world. It is much better to use ‘leading’ or ‘preventive’ indicators that provide management with sufficient lead time to correct the situation before operational risk events happen. Critical to this preventive feature of operational risk indicators is the capacity to capture the causes of the risks, rather than counting the number of realised events. This is where loss data collection (both internal and external) can be invaluable, provided data is collected on the causes of events. Forward-looking scenario analysis can also help identify causes not currently reflected in the loss data. For more guidance, please refer to the IOR’s papers on Loss Data and Scenario Analysis.
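
As a hedged illustration of this preventive use, the sketch below builds a simple leading-indicator alert from a hypothetical causal metric (monthly overtime hours, as a possible cause of processing errors), flagging a sustained upward trend before losses materialise:

```python
# Minimal sketch of a leading-indicator trend alert. It flags when a
# causal metric has risen for several consecutive periods, giving
# management lead time before related risk events occur.

def rising_for(values, periods=3):
    """True if the last `periods` observations are strictly increasing."""
    tail = values[-(periods + 1):]
    return len(tail) == periods + 1 and all(a < b for a, b in zip(tail, tail[1:]))

overtime_hours = [110, 108, 115, 121, 130]  # illustrative monthly values
if rising_for(overtime_hours, periods=3):
    print("Alert: overtime trending up; review processing-error exposure")
```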

4. Easy To Collect And Monitor: Ease of monitoring means that the value of the data collected for a metric must exceed the cost of collecting it. This cost-benefit rule is as applicable to investments in risk reporting and monitoring (except for mandatory reporting for regulatory purposes) as it is to any other type of business investment.

An indicator’s cost of collection and ease of monitoring should be at the centre of the selection decision. One way to achieve this is through recycling: reusing what already exists. As explained above, organisations make use of a wide range of metrics to support their operations, and these metrics offer a good starting point for the selection of operational risk indicators. The use of automated risk indicator systems, often provided as part of IT packages for operational risk assessment and reporting, can further reduce the costs of collection and facilitate easy monitoring. However, they are only recommended for organisations with mature risk indicator frameworks.

The IOR recommends starting small: selecting a limited set of indicators and collecting the data manually. This facilitates a good understanding of where the data is coming from, what it indicates, and how it can be used. Once an indicator or set of indicators has proven useful, consider technology solutions to reduce the manual workload, but in a manner which allows the easy replacement and addition of new indicators.

An important aspect of the collection process is quality assurance. The collection cycle needs to incorporate specific deadlines for submission and should be auditable in terms of data sources and collection channels. There should also be an independent quality control process to ensure that erroneous or misleading data is not sent to management.
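
A minimal sketch of such an independent quality control gate is given below; the record fields, indicator name and deadline are assumptions for illustration:

```python
# Minimal sketch: a quality-control check on a submitted indicator
# record before it passes to management reporting. Field names are
# hypothetical.
from datetime import date

def validate_submission(record, deadline):
    """Return a list of data-quality issues; empty means the record
    can pass to management reporting."""
    issues = []
    if record.get("value") is None:
        issues.append("missing value")
    if not record.get("source"):
        issues.append("undocumented data source")
    if record.get("submitted_on") is None or record["submitted_on"] > deadline:
        issues.append("submitted late or not at all")
    return issues

record = {"indicator": "failed_trades_count", "value": 14,
          "source": "settlements_system_export",
          "submitted_on": date(2021, 3, 5)}
print(validate_submission(record, deadline=date(2021, 3, 3)))
# -> ['submitted late or not at all']
```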

5. Comparable: Indicators must provide data that is comparable with some form of benchmark. Comparability allows management to understand the relative ‘scale’ of the indicator, helping them to determine when action is required to address the value of the indicator or the risks or controls that it relates to. Relevant benchmarks may be over time, across comparable internal departments or business units, or against external organisations. An organisation can track its evolution through time, provided that the type of indicator and information collected is stable over a long period. Cross-department/unit or external organisational comparisons are also very useful: they provide a wider context and are not prone to inconsistent historical trends.

Some industries share data in less sensitive areas, such as staff sickness absence or health and safety incidents. Where data is shared in this way it should be used as a benchmark. For example, along with the ‘raw’ metric, an organisation’s position relative to the industry distribution could be provided (4th to 1st quartile), as in the sketch below. Comparisons between internal departments and units could be made in the same way, and used to help facilitate friendly competition to improve the value of indicators and, by extension, the related operational risk exposures.
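
The following minimal sketch shows one way such a quartile position could be computed; the peer values are illustrative only:

```python
# Minimal sketch: positioning an organisation's 'raw' metric within an
# industry distribution, reported as a quartile (1st to 4th). Assumes
# a 'lower is better' metric, e.g. staff sickness days per employee.

def quartile_position(own_value, peer_values):
    """Return the quartile (1 = best/lowest) of own_value within the
    combined distribution of peers plus the organisation itself."""
    ranked = sorted(peer_values + [own_value])
    rank = ranked.index(own_value) / len(ranked)  # fraction ranked below
    return min(4, int(rank * 4) + 1)

peer_sickness_days = [3.1, 4.5, 5.0, 5.8, 6.2, 7.4, 8.0, 9.3]
print(quartile_position(5.5, peer_sickness_days))  # -> 2 (second quartile)
```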

6. Auditable: Auditable means that the data used to produce a metric:

  • Is comprehensive, accurate and consistent over time
  • Comes from a documented source
  • Is constructed using a clear and consistent formula
  • Is reported in a clear and timely manner
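
A minimal sketch of what such a documented, auditable indicator definition might look like is given below; the field names and the example indicator are assumptions for illustration:

```python
# Minimal sketch: a documented indicator definition, making the data
# source and calculation formula explicit and auditable. The fields
# and the example indicator are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class IndicatorDefinition:
    name: str
    source: str       # documented data source
    formula: str      # clear, consistent calculation formula
    owner: str        # accountable for data quality
    frequency: str    # reporting cycle

STAFF_TURNOVER = IndicatorDefinition(
    name="staff_turnover_pct",
    source="HR system monthly leavers extract",
    formula="leavers_in_month / average_headcount * 100",
    owner="Head of HR Operations",
    frequency="monthly",
)
```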

For good governance, independent validation of the indicator selection process (including how data is sourced, aggregated and delivered to management) should be undertaken reasonably early in the lifecycle of the organisation’s risk indicator programme. The organisation’s internal audit function should normally perform such a validation.

Periodically, further quality assurance checks should be made to ensure that indicators remain relevant and that the data used is timely, accurate and complete. This may also be conducted by internal audit or the (operational) risk function. The results of any risk indicator audits should be reported to the operational risk committee or equivalent, as well as the statutory audit committee.
