Many times in life, the discovery of a false positive can be a relief, even a joy. Maybe a frightening medical test came back positive, but was later found to have been a 'false positive' - a test that returned a positive result because some factors looked relevant when they actually weren't. You can relax: you don't have bubonic plague after all.
But in fraud prevention, that relief would be misplaced. A false positive in fraud prevention represents an insulted customer, a potentially loyal buyer who was rejected by an over-cautious fraud prevention system. Chances are, given all the alternatives open to them, this annoyed and frustrated consumer won't be coming back.
Unfortunately, false positives can be hard to prevent, because there are times when good customers look like fraudulent ones. Often it feels like you're looking at the picture at the top of this post: only one of those orange windows represents a peek into the life of a fraudster - but do you know which one?
How big is this problem?
It is estimated that $40 billion is lost every year to unnecessary red flags and transaction blocks.
$40 billion is a lot of money. How much of that is yours?
The worrying truth is that you probably don't really know. By their nature, false positives are tough to measure. Your system flagged a transaction as fraudulent and rejected it - and that's usually the end of the story. Do you conduct research into the rejected transactions so that you can try to approve more and more over time? Forter does, actually, but most companies don't.
A QUARTER OF DECLINES ARE FALSE POSITIVES.
That's an enormous amount. Think of the effort that goes into attracting new customers to a website and enticing them through the buying process - and then think of what it means that roughly a quarter of declined orders were rejected for the wrong reasons. Those are customers who wanted to buy from you. They went through checkout. And you rejected their business.
Merchants sometimes only realize how poor their performance with false positives is when they improve it. An independent Forrester analysis of the impact of Forter on a large Top 500 Internet Retailer showed that switching to Forter raised that company's transaction approval rate from 90% to 97% - an extremely significant increase. False positives have a very real, very measurable effect on a business's bottom line.
Looking beyond the transaction
The trouble with false positives is that you're not only rejecting a transaction. You're rejecting all the orders which that customer might have placed with you in the future, too.
[bctt tweet="False positives: you're not only rejecting a transaction. You're rejecting a customer."]
A MasterCard study showed that nearly 20% of consumers who experienced a fraud-related decline had no future spend on that card 6 months after the decline event.
Now, that's talking about spend on a card - cards which people rely on day-to-day. Just imagine how they'd react to the website that rejected them as a fraudster. Let's be realistic about this - they're not likely to come back.
What causes false positives?
The main answer to the question of what causes false positives is that people are complicated, and rules are too simple.
[bctt tweet="What causes websites to reject customers? People are complex, rules are simplistic."]
That's why you end up with the 'orange window' situation (to go back to our image above). Rules pick up features of a transaction, but don't see the whole story, so sometimes a good customer looks the same to them as a fraudster.
Most fraud prevention systems rely on rules engines: collections of hard-coded rules that determine whether to accept or reject a transaction.
The rules-based approach
So, for example, an order for an expensive briefcase, placed with fast shipping, to be delivered to a hotel far from the billing address of the card, would be classed as fraud, and rejected.
Why? Because fraudsters like fast shipping, which gets them their stolen goods before someone realizes what's going on. And the shipping address doesn't match the billing address - a mismatch like that is a bad sign. Hotels are suspect, because it's easy for fraudsters to use them as pick-up zones. And expensive items are high risk because of the amount involved, but popular with criminals because they stand to make a large profit.
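The logic above can be sketched in a few lines of code. This is a deliberately simplified illustration, not any real vendor's engine - the rule names, thresholds, and fields are all hypothetical - but it shows why such systems misfire: each rule sees one feature of the order, and none of them sees the customer's story.

```python
# Minimal sketch of a rules engine. All rules, thresholds, and field
# names here are hypothetical, chosen only to mirror the example above.

def is_high_value(order):
    return order["amount"] > 500           # expensive items are "high risk"

def is_fast_shipping(order):
    return order["shipping"] == "express"  # fraudsters want goods quickly

def ships_to_hotel(order):
    return order["ship_to_type"] == "hotel"  # easy pick-up zone

def address_mismatch(order):
    return order["ship_addr"] != order["bill_addr"]

RULES = [is_high_value, is_fast_shipping, ships_to_hotel, address_mismatch]

def decide(order, max_tripped=2):
    """Decline any order that trips more than max_tripped rules."""
    tripped = [rule.__name__ for rule in RULES if rule(order)]
    verdict = "decline" if len(tripped) > max_tripped else "approve"
    return verdict, tripped

# The travelling businessman's briefcase order trips every single rule:
order = {
    "amount": 750,
    "shipping": "express",
    "ship_to_type": "hotel",
    "ship_addr": "Grand Hotel, Chicago",
    "bill_addr": "12 Elm St, Boston",
}
verdict, tripped = decide(order)
print(verdict)  # prints "decline" - even though the customer is legitimate
```

Notice that the engine never asks *why* the features co-occur; four individually reasonable rules add up to one wrong answer.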
The human story
But consider this story from the human perspective, not a rules-based one.
What we have here is a businessman, travelling for work, whose briefcase gives out. He needs a new one to arrive before he moves on to the next place on his itinerary. It's really no more complicated than that.
Sadly, he won't be getting his briefcase, because he's been marked as fraudulent by a rules engine. He has become a false positive.
Is it worth it?
If rules-based systems result in false positives, but are also great at stopping fraud, maybe it's worth the cost. Right? Well, no. The same problem that causes false positives also causes weaknesses in the system's ability to weed out fraud.
The rules are inflexible, slow to adapt. New trends must be noticed and hard-coded in as rules. By the time that's happened, they're probably not new anymore. Fraudsters are fast, and they develop new techniques all the time. That gives them an automatic advantage - which is not good for retailers relying on rules.
In short, rules-based systems are too inflexible, too slow to adapt to new trends and circumstances, and unable to treat transactions on an individual basis. That's why they create false positives.
It used to be that there was no alternative, but that's no longer the case. Thanks to machine learning and its ability to leverage the power of big data, combined with essential human expertise, modern technology can provide a faster, more flexible solution which both blocks fraud more effectively and can minimize false positives.
You've probably heard of predictive analytics. It's used for all sorts of things in the age of big data. What it really means is that data from the past is used to predict what will happen in the future.
The machine does this by analyzing the patterns that connect the data and seeing which ones apply to the new situations it's presented with. Obviously, the more data the machine has, the better its predictions will be. And if it's guided by human experts, it will learn fast and become accurate quickly.
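To make "predicting from past data" concrete, here is a toy sketch using one of the simplest possible approaches - classifying a new transaction by majority vote of its most similar past cases (nearest neighbors). The features, labels, and history are invented for illustration; real systems use far richer data and far more sophisticated models. The point is the principle: with enough labeled history, a legitimate-but-unusual order can land closer to other legitimate travellers than to fraudsters.

```python
from math import dist

# Invented training data, for illustration only. Each past transaction is a
# feature vector plus the label learned from its actual outcome.
# Features: (amount in $100s, shipping-to-billing distance in 100s of km,
#            1 if express shipping else 0)
history = [
    ((1.0, 0.1, 0), "legit"),
    ((7.5, 9.0, 1), "legit"),  # a travelling customer: far from home, in a hurry
    ((8.0, 8.5, 1), "fraud"),
    ((6.0, 0.2, 0), "legit"),
    ((9.0, 9.5, 1), "fraud"),
    ((7.0, 8.0, 1), "legit"),  # more real travellers than fraudsters here
]

def predict(tx, k=3):
    """Label a new transaction by majority vote of its k nearest past cases."""
    nearest = sorted(history, key=lambda h: dist(h[0], tx))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# The "businessman's briefcase" order: expensive, far from home, express.
print(predict((7.2, 8.7, 1)))  # prints "legit"
```

A rules engine would have declined that order outright; here, because the history contains other genuine travellers with the same profile, the model approves it - while a case sitting squarely among known fraud patterns would still be flagged.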
[bctt tweet="Websites have a lot of data, and experts have a lot of knowledge. The combination can end false positives."]
When it comes to fraud prevention, the machine has a lot of data. Just think of all the actions consumers take on a website as they browse and move through to checkout, and everything involved in the checkout itself. That's a lot of information. And highly-trained fraud analysts know a lot about fraud, fraudsters and false positives. That's a lot of information, too.
All of this means that machine learning, in conjunction with human expertise, is extremely useful when it comes to predicting whether a transaction will turn out to be fraudulent or not. It's highly accurate, and great at avoiding false positives - because it can differentiate between a customer with a complex but true story, and a fraudster who's just pretending, by comparing the case to all the others it knows about from the past. It can treat each transaction as an individual case, not just the output of a jumble of rules.
False positives should no longer be thought of as a cost of doing business online.
[bctt tweet="False positives should no longer be thought of as a cost of doing business online."]
Companies need to start moving towards machine learning and what it can do for their business. In the words of a recent Forrester report on machine learning and fraud prevention, you can 'stop billions in fraud losses with machine learning.' Start stopping those losses today.