Fintechs: Running your risk monitoring system as pure SQL rules is a bad idea - here’s why
Every company has to manage risk, but some have to manage more of it than others. Financial services companies are probably among the most aware of this, and among the best equipped to deal with it. Or are they?
Sure enough, big banks are used to managing credit risk, but anti-money laundering (AML) and transaction fraud are not quite solved problems - notice how much friction your bank adds when you want to make a large-ish transfer to a new beneficiary, and how even big banks get bashed time and again for failing to enforce AML regulations or for facilitating tax evasion.
More importantly, with the fintech boom, lots of new actors have entered the market. It’s hardly a secret that fraud risk is real even at day 1 - especially at day 1 - yet this is exactly when the company is poorest in domain expertise and tech team resources.
And so, surprisingly regularly, we see young fintech companies that kickstart their anti-fraud and AML tooling by running a set of scheduled - or even manual - SQL queries against their data warehouse, or their main database, to alert them on fraud schemes they’ve previously observed. In fact, for full transparency, I was in exactly that position some 6 years ago at Shine (a French neobank for freelancers & SMBs).
Here’s a recap of why that approach breaks down.
Let’s start with the issues from the compliance or operations team’s point of view:
The first one is kind of obvious: by construction, running queries that analyze patterns of transactions after they’ve happened and been synchronized to the data warehouse means you’ll always be reacting after the fact.
And to be fair, that was probably OK until not so long ago, as instant transactions were not yet the industry standard. Sure, your customer could have their account taken over, but you could always review their outgoing transfers overnight, and any new credit card ordered would only arrive by post days later.
Now, with instant transfers becoming a standard and virtual credit cards added to a Google/Apple Pay wallet, the risk is real time by nature. And as that risk became clearer, legislators have also made it clear that the bank is usually on the hook if it can’t provide rock-solid proof that the customer was negligent with their account’s security.
AML is slightly less exposed to a system’s "real-timeness" - or lack thereof. It matters less that money is stopped immediately, and more that networks are detected and dismantled. But AML has issues of its own with pure SQL rules.
When it comes to AML, a financial institution doesn’t perform those controls for its own good - as it does with transaction fraud detection - but for the good of society at large. This means it is accountable to the local regulator for how well it performs those verifications. And being accountable means being able to prove that, at every point in the past, controls were performed correctly and in due time. Fintechs that fail to prove they have done so typically run into trouble.
So not only should a financial institution use state-of-the-art tooling to prove enough effort is put into fighting money laundering, it also needs to keep a robust trace of how this was done in the past. This means keeping a read-only copy of every version of the rules used in the past, of the decisions and their outcomes, and of the ensuing investigations where they were necessary. Running SQL rules in your data warehouse will provide none of this (and no, your code’s version control is not a viable tool for proving what rules were running at a point in time months or years in the past).
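To make the requirement concrete, here is a minimal sketch of what such an immutable decision trail could look like. All class names and record fields are hypothetical, not taken from any particular system - and a production setup would persist this in write-once storage rather than in memory:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical sketch of an append-only decision log: each record stores the
# exact rule version used, the decision taken, and a hash chaining it to the
# previous record so past entries cannot be silently rewritten.
@dataclass(frozen=True)
class DecisionRecord:
    rule_id: str
    rule_version: str       # immutable snapshot identifier of the rule text
    transaction_id: str
    outcome: str            # e.g. "approve", "review", "decline"
    decided_at: str
    prev_hash: str

class AuditLog:
    def __init__(self):
        self.records: list[DecisionRecord] = []
        self._last_hash = "0" * 64  # genesis hash for the first record

    def append(self, rule_id: str, rule_version: str,
               transaction_id: str, outcome: str) -> DecisionRecord:
        rec = DecisionRecord(
            rule_id=rule_id,
            rule_version=rule_version,
            transaction_id=transaction_id,
            outcome=outcome,
            decided_at=datetime.now(timezone.utc).isoformat(),
            prev_hash=self._last_hash,
        )
        # Chain the next record to this one via a content hash.
        self._last_hash = hashlib.sha256(
            json.dumps(asdict(rec), sort_keys=True).encode()
        ).hexdigest()
        self.records.append(rec)
        return rec
```

The key property is that each record references its predecessor, so tampering with any past decision breaks the chain - which is the kind of evidence a regulator can actually accept.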
While there exist some clear-cut signals of definite fraud ("this customer is newly connecting from a device linked to 10 other account takeovers"), most of the time it’s not so black and white. So, while rule-by-rule alerts will do for the first months of a fintech as long as volumes are low, it quickly becomes necessary to run several rules in parallel (e.g. "transfer to a risky country", "beneficiary name inconsistent", "connection from a new device", "amount large compared to trend", "client device is high security"...) and produce a final score by combining the results.
Now, from a technical point of view this means one of the following:
- Combine independent SQL queries into one mega-query that "calculates all the things" - and quickly becomes unmanageable
- Add an orchestration layer that combines intermediate outputs - and observe how the nice, simple tool from the beginning is turning into an increasingly complex product in the product
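The combination step that orchestration layer is meant to provide can be sketched in a few lines of application code. The rule names, weights and thresholds below are entirely hypothetical:

```python
# Hypothetical sketch: each rule emits a boolean signal, and a weighted sum
# turns them into a single risk score instead of one alert per rule.
RULE_WEIGHTS = {
    "transfer_to_risky_country": 40,
    "beneficiary_name_inconsistent": 25,
    "connection_from_new_device": 20,
    "amount_large_vs_trend": 30,
    "client_device_high_security": -15,  # a reassuring signal lowers the score
}

def risk_score(signals: dict[str, bool]) -> int:
    """Combine individual rule outcomes into one score."""
    return sum(w for name, w in RULE_WEIGHTS.items() if signals.get(name, False))

def decide(score: int, review_at: int = 40, block_at: int = 70) -> str:
    """Map the combined score to a final decision."""
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "review"
    return "approve"
```

For example, `risk_score({"transfer_to_risky_country": True, "connection_from_new_device": True})` yields 60, which lands in the "review" band rather than triggering two independent alerts.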
SQL is a great language for querying data, and will go a long way for many applications. But some things are just best done "in the code" after pulling data from the database. To convince yourself, just try to express "at any point in time, the customer’s total transactions over a rolling 48-hour window exceeded X" in SQL (it’s possible, but you’ll find it’s really awkward and under-performant [footnote 1]).
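For comparison, here is a minimal sketch of that same rolling-window check written in application code - the function name and data shape are assumptions for illustration, not taken from any particular system:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: "at any point in time, the customer's total transactions
# over a rolling 48-hour window exceeded X", expressed as a simple sliding
# window over the transaction list instead of bucketed SQL window functions.
def breaches_rolling_limit(transactions, limit, window=timedelta(hours=48)):
    """transactions: list of (timestamp: datetime, amount: float), any order."""
    txs = sorted(transactions)  # sort by timestamp
    total, start = 0.0, 0
    for ts, amount in txs:
        total += amount
        # Shrink the window until it only spans the last 48 hours.
        while ts - txs[start][0] > window:
            total -= txs[start][1]
            start += 1
        if total > limit:
            return True
    return False
```

A sort plus one sliding-window pass - easy to read, easy to unit test, and linear after sorting, which is exactly the point the paragraph above is making.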
Not to mention doing things like string manipulation in SQL. (Yes, you can sometimes inject Python or JS code into your data warehouse engine - but is that really how you want to scale your codebase?)
Your compliance, ops or finance team will want visibility into the performance of your rules, observability of their timely calculation, and probably options to trial or A/B test modifications to them. Yet these sorts of "accessory" features are often sacrificed or half-baked in home-made systems.
While SQL is not the hardest language to learn, it takes practice to use it precisely and efficiently. And, fraud and money laundering mitigation being critical systems, you’ll want a high degree of confidence in the code that is written - typically higher than for non-techies exploring data in their BI tool.
This starts out OK, but before long you may end up with a data engineer attending lots of business meetings just to give an opinion on what is and isn’t possible in the system, which is rarely the best use of their time.
Your compliance officer will never be satisfied with the rules as they are - nor should they be. Fraud and money laundering patterns evolve and grow more sophisticated as your user base grows. And so your fraud scenarios will have to evolve regularly: expanded, fine-tuned, occasionally rolled back. What you’ll never have in this scenario is the peace of mind of "OK, this project is done. I can now focus on other things".
Which brings me to why running AML and fraud monitoring rules as SQL queries in your database is a bad idea, from a tech manager’s point of view:
Ultimately, it’s the same old story, told a thousand times, about build versus buy, and why you want to focus on your core business and outsource what requires specialized external expertise.
Your initial POC on fraud detection/AML with SQL rules is likely to be simple enough, work well enough, and require little enough work and monitoring at first. Now the "at first" may last for months, but eventually you will be asked to add features and increase the complexity. This happens incrementally, and every new request is hard to refuse and easy enough to implement.
But accumulate such requests for months or years, and you accidentally end up with a custom-built, internal-use-only product in the product that requires monitoring, debugging, regular development, and documentation. Worse, the maintenance effort of a tech product will usually not grow linearly - and by the time you start hitting hard limits, you may be tightly wedded to what was initially a temporary solution.
Meanwhile, your teams will have built their work processes around the tool you built, and are likely to push back against the removal of features - let alone a switch to third-party tooling - out of force of habit.
So, in a nutshell: as a developer, data engineer, or similar role, you have a long-term interest in investing in a specialized tool that you can connect to, and in avoiding the slippery slope of in-house development. Your product value comes from the detection (the rules and models), not from the platform that runs them.
[1]: You could group the rows into small enough buckets, fill in empty buckets for every customer, and then use SQL window functions to compute running sums filtered to date ranges. But it’s neither elegant, nor maintainable, nor performant.
Marble is an open-source decision engine for fraud detection and anti-money laundering.
We make it easy for risk managers to update their detection rules and block dubious transactions before they happen, and we free the tech roadmap from performance issues, scenario updates and maintenance.
We use a fully flexible data model, so you can send your data to Marble and use it as it is in your DB, enabling applications not only in a wide range of fintechs, but also in marketplaces and telcos.
Marble is a great place to start if you are hesitating between building your own solution and buying off the shelf.