
Donald Rumsfeld, a former US Secretary of Defense, added to the concepts of risk when briefing the press about the state of play in Iraq in 2002. He can be succinctly paraphrased as follows:

We know what we know, we know what we don’t know, and we don’t know what we don’t know.

Any risk manager or internal auditor who’s been surprised by a failure or business loss understands Secretary Rumsfeld’s observation. Even if you’ve gone through the steps of risk identification and control testing, a key process retains the potential to surprise (and sometimes surprise very badly!). In most cases, auditors carry out limited testing of transactions and controls, so the detection of issues is far from certain. Even when problems are discovered in testing, the detection usually occurs well after the event, and effective remediation is often impractical (or even impossible).

In large corporations, the sheer volume of data creates challenges for risk managers and auditors. Fortunately for auditors, this “tyranny of volume” can be overcome by adopting “process control” methods.  In fact, large volumes of data can prove to be a friend.


A range of statistical process control techniques was developed in the 20th century and has since been incorporated into modern Lean Manufacturing disciplines. A real-life example of their application by internal auditors follows.

The company was a wholesale grocery distributor and one of its business units operated upwards of 50 sites that sold to retail businesses.  These outlets carried large volumes of highly varied stock types ranging from food to television sets. The business model was well past its prime and the company was suffering sales declines as well as control breakdowns as staff numbers were reduced to maintain profitability.  Stock takes were an endemic problem with large unexplained variances being regularly reported. The business processed large volumes of stock adjustments driven, in part, by inexperienced staff making errors in “put aways” or, in some cases, significant pilferage. Control over stock adjustments – such as approval and review – was often ineffective.

To assist the business, the internal auditors developed a Continuous Monitoring Routine (CMR) using process control techniques. Each day, all stock adjustments were captured, and the mean and standard deviation were calculated over a rolling six-month period for each outlet. The model was designed around the Nelson Rules: a set of eight criteria that can be used to spotlight outliers in any set of data, summarised on Wikipedia*.
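
As a rough sketch of the daily statistics step, the Python below (using pandas, with a hypothetical stock_adjustments.csv extract and illustrative column names rather than details from the engagement) aggregates adjustments per outlet per day and computes the rolling six-month mean and standard deviation:

    import pandas as pd

    # Hypothetical extract: one row per stock adjustment posting
    adj = pd.read_csv("stock_adjustments.csv", parse_dates=["date"])

    # Total adjustment value per outlet per day
    daily = (adj.groupby(["outlet", "date"])["adjustment_value"]
                .sum()
                .rename("total")
                .reset_index()
                .sort_values(["outlet", "date"]))

    # Rolling six-month (approximated as 182 days) mean and standard
    # deviation, computed separately for each outlet
    frames = []
    for outlet, grp in daily.groupby("outlet"):
        g = grp.set_index("date")
        g["mean_6m"] = g["total"].rolling("182D").mean()
        g["sd_6m"] = g["total"].rolling("182D").std()
        frames.append(g.reset_index())
    daily = pd.concat(frames, ignore_index=True)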

Three Nelson Rule examples follow:

  • “One (data) point is more than three standard deviations from the mean.”
  • “Nine (or more) points in a row are on the same side of the mean.”
  • “Six (or more) points in a row are continually increasing (or decreasing).”
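
The rule checks themselves are simple to express in code. The following is a minimal Python sketch of just these three rules, applied to one outlet's series of daily adjustment totals; the function and sample data are illustrative, and a routine like the one described here would run the checks per outlet against the rolling-window statistics:

    import statistics

    def nelson_outliers(values):
        """Return (index, rule) pairs for points tripping Nelson Rules 1, 2 or 3."""
        mean = statistics.mean(values)
        sd = statistics.stdev(values)  # needs at least two observations
        flagged = []

        # Rule 1: a point more than three standard deviations from the mean
        for i, v in enumerate(values):
            if abs(v - mean) > 3 * sd:
                flagged.append((i, "rule 1"))

        # Rule 2: nine (or more) points in a row on the same side of the mean
        side, run = 0, 0
        for i, v in enumerate(values):
            s = 1 if v > mean else (-1 if v < mean else 0)
            run = run + 1 if s != 0 and s == side else (1 if s != 0 else 0)
            side = s
            if run >= 9:
                flagged.append((i, "rule 2"))

        # Rule 3: six (or more) points in a row continually increasing (or decreasing)
        direction, run = 0, 1
        for i in range(1, len(values)):
            d = 1 if values[i] > values[i - 1] else (-1 if values[i] < values[i - 1] else 0)
            run = run + 1 if d != 0 and d == direction else (2 if d != 0 else 1)
            direction = d
            if run >= 6:
                flagged.append((i, "rule 3"))

        return flagged

    # Example: a steady upward drift in adjustments trips rule 3 at the
    # sixth and seventh points
    daily_totals = [100.0, 101.0, 103.0, 106.0, 110.0, 115.0, 121.0]
    for day, rule in nelson_outliers(daily_totals):
        print(f"day index {day}: tripped {rule}")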

Stock adjustments that “tripped” any of the eight Nelson Rules were reported on an exception basis with automated emails alerting managers.  This Continuous Monitoring approach produced almost immediate results.
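
For the automated alerting step, something as lightweight as Python's standard smtplib would suffice; the mail host and addresses below are placeholders, not details of the actual implementation:

    import smtplib
    from email.message import EmailMessage

    def alert_manager(to_addr, outlet, exceptions):
        """Email a plain-text outlier alert (host and addresses are placeholders)."""
        msg = EmailMessage()
        msg["Subject"] = f"Stock adjustment outliers: outlet {outlet}"
        msg["From"] = "cmr-alerts@example.com"
        msg["To"] = to_addr
        msg.set_content("\n".join(f"{day}: tripped {rule}" for day, rule in exceptions))
        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)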

In many ways, this CMR is a very good example of the concept of Continuous Controls Monitoring (CCM). CCM usually doesn’t monitor control operation directly; instead, it monitors transactions for evidence of control breakdowns.

The large number of outliers detected using Nelson Rule criteria confirmed that the key stock adjustment controls had largely broken down. Once these key controls were re-established, stock take variances immediately began to decline.


The benefit of this type of CMR is that it can be applied to a wide range of transaction types. Once the model is built, it can often be adapted to other transaction flows with few changes to the algorithms beyond fine-tuning.

However, as with most powerful CMRs, there are challenges to be overcome.

  • Data should be captured and processed on a regular basis (daily is preferable).
  • The algorithms implementing the Nelson Rules should be designed by people with statistical expertise.
  • There should be an automated means in place to deliver outlier exception details to management on a regular basis (as well as auditing and risk management staff).
  • Ideally, there should be an escalation path to senior management if significant outliers are detected by the CMR.
  • Exceptions should be tracked and analysed to understand their implications and root causes.
  • Most large businesses operate a portfolio of modern and legacy systems, and the data can vary in quality and availability. Ideally, the tools being used should be capable of loading, cleansing and transforming diverse data types into a common format (a minimal sketch of this follows the list).
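
As a sketch of that last point, a thin normalisation layer can map each source system's extract onto the common schema before the statistics are computed; the file and column names here are hypothetical:

    import pandas as pd

    def normalise(extract, column_map):
        """Map one source system's extract onto the common CMR schema."""
        df = extract.rename(columns=column_map)
        df["date"] = pd.to_datetime(df["date"], errors="coerce")
        df["adjustment_value"] = pd.to_numeric(df["adjustment_value"], errors="coerce")
        # Discard rows the source delivered incomplete or unparseable
        return df.dropna(subset=["outlet", "date", "adjustment_value"])

    # e.g. a legacy warehouse system with its own column headers
    legacy = pd.read_csv("legacy_extract.csv")
    common = normalise(legacy, {"STORE_NO": "outlet",
                                "POST_DT": "date",
                                "ADJ_VAL": "adjustment_value"})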

The development of a CMR of this type is relatively complex and requires an adequate toolset to manage not only the capture and processing of the data but also the notification of exceptions and the monitoring and analysis of resolutions. However, once developed, the CMR can usually be redeployed across multiple high-risk business processes at minimal marginal cost.

You can learn more about Continuous Controls Monitoring with Satori CCM here.

* https://en.wikipedia.org/wiki/Nelson_rules