Data, Transformation, Sep 12, 2019

Market Abuse and Conduct Surveillance: Are you drowning in a sea of alerts?

Matt Pockson

A multi-faceted approach to transformation is needed in financial institutions to combat market abuse.


Financial institutions, in keeping with the expectations of risk functions and regulators, are working to strengthen their control environments by extending surveillance coverage of risks, populations, business lines and regions.

But in doing so via rollouts of existing technical solutions, surveillance departments risk the existing waves of false positive alerts developing into an overwhelming swell that chokes off existing successes. As the sea of alerts rises, it must be matched with significant additional investment in headcount to cope – a business model that is simply unsustainable in the long run.

In fact, many new products on the market still take a channel-centric approach, so even system-replacement projects commissioned to tackle this problem will fail to deliver the required benefit.

Where does the problem stem from?

Technology and detection algorithms are typically deployed in silos on single data sources, with thresholds set conservatively and left in need of tuning. They lack the broader context that enables better decisions to be made, whether from internal or external data sources that could support integrated analytics or the deployment of more sophisticated behavioural models. These technologies also focus on detection, but leave the surveillance officer ill-equipped to perform analysis or support investigatory work.

The output of these systems is ‘alerts’: notifications of non-compliance against coded rulesets, emitted in huge volumes. Most do not lead to a root-cause issue; in fact, the term ‘alert’ is a problem in itself, as these outputs rarely signal the emergency you would naturally expect of an alert. Because they are deemed alerts, they are expected to be processed, driving demand for large numbers of analysts to process them. They are then used in MI and KPI measures and in conversations with stakeholders, becoming a de facto measure of good or bad performance. But which is better – a higher number of alerts, or a lower one? Alerts are simply a means to an end, not the end itself.

Alerts can therefore be many and varied, requiring significant numbers of staff to clear. If policies are too few and thresholds too tight, much of the risk will go undiscovered; conversely, it may become very difficult to see the wood for the trees as more alerts are generated than can be handled effectively. Employing large numbers of analysts to process ‘low value’ alerts is an expensive way to do business, and it drives a false sense of security for control owners and risk stewards – something I’ll save for a future article describing the “Risk Iceberg”. It is worth noting, too, that detection systems focused on single sources of data are themselves a flawed approach, adding to the problem of undiscovered risk. A further problem is the lack of attributed value: without it, there is no mechanism other than chronology for sequencing the processing of these alerts.

Today’s problems, then, are numerous and varied. A multi-faceted approach to transformation is needed, one that moves away from must-review detection alerts and toward indicators. Indicators identify signals amongst the noise: by joining data sets together, taking a more integrated approach and exploiting behavioural models and AI, we can increase the strength of indicated signals – and attribute a value to them – both to differentiate signals from noise and to prioritise amongst those identified. This provides a more sustainable model for scaling coverage and for feeding the analysis, investigations and reporting elements of the surveillance lifecycle.
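To make the idea of attributed value concrete, here is a minimal sketch of scoring indicators by combining signals across data sets and sequencing the queue by value rather than chronology. All names, sources and weights are hypothetical illustrations, not a reference to any particular surveillance product or scoring scheme:

```python
# Illustrative sketch only: entities, data sources, weights and the
# corroboration bonus are hypothetical, chosen to show the principle.
from dataclasses import dataclass, field


@dataclass
class Indicator:
    entity: str  # trader, account or desk being assessed
    signals: dict = field(default_factory=dict)  # source -> normalised strength (0..1)

    def score(self, weights):
        # Attribute a single value by combining signals from multiple data sets.
        base = sum(weights.get(src, 0.0) * s for src, s in self.signals.items())
        # Corroboration across independent sources strengthens the signal
        # more than any single feed can on its own.
        corroboration = 1.0 + 0.25 * (len(self.signals) - 1)
        return base * corroboration


# Hypothetical per-source weights
WEIGHTS = {"orders": 0.5, "comms": 0.3, "hr_watchlist": 0.2}

indicators = [
    Indicator("trader_a", {"orders": 0.9}),
    Indicator("trader_b", {"orders": 0.6, "comms": 0.7, "hr_watchlist": 1.0}),
]

# Prioritise the review queue by attributed value, not by chronology.
queue = sorted(indicators, key=lambda i: i.score(WEIGHTS), reverse=True)
```

Here a moderate order-flow signal corroborated by communications and watchlist data outranks a stronger but isolated one; the multiplicative bonus is just one simple way to express that integrated signals deserve earlier attention.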

And if you still have doubts, consider this: your existing alerts may not even be uncovering the full risk picture anyway. If your existing detection is not 100% effective, then failing to find other ways to explore and analyse your data in conjunction with existing alerting mechanisms will allow some inappropriate activities to continue undetected. We’ll explore this in our next articles.

We've created a hub of information on our intelligence-led approach to risk management and threat mitigation that responds to the exponential growth our clients are seeing in data volumes and regulatory expectations. To talk to us please use the contact us form.

