Many times in life, the discovery of a false positive can be a relief, even a joy. Maybe a frightening medical test came back positive, but was later found to have been a 'false positive' - a test that falsely returned a positive response because some relevant factors appeared to be present, but weren't, or weren't relevant in the right way. You can relax: you don't have bubonic plague after all.

But in fraud prevention, that relief would be misplaced. A false positive in fraud prevention represents an insulted customer, a potentially loyal buyer who was rejected by an over-cautious fraud prevention system. Chances are, given all the alternatives open to them, this annoyed and frustrated consumer won't be coming back.

Unfortunately, false positives can be hard to prevent, because there are times when good customers look like fraudulent ones. Often it feels like you're looking at the picture at the top of this post: only one of those orange windows represents a peek into the life of a fraudster - but do you know which one?

How big is this problem?

Naturally it depends on the merchant, but it is clear that this is a significant problem. A Javelin study, 'The Financial Impact of Fraud', found that the cost of false positives outweighs the cost of chargebacks by 5:1. Research from Ethoca found that merchants experienced a false positive rate of 52%. 

If Ethoca's study is accurate, more than half of declines are false positives. 

That's an enormous amount. Think of the effort that goes into attracting new customers to a website and enticing them through the buying process - and then think of what it means to reject 52% of flagged orders for the wrong reasons. Those are customers who wanted to buy from you. They went through checkout. And you rejected their business.

The worrying truth is that you probably don't really know your company's false positive rate. By their nature, false positives are tough to measure. Your system flagged a transaction as fraudulent and rejected it - and that's usually the end of the story. Do you research the rejected transactions so that you can approve more of them over time? Forter does, actually, but most companies don't.

Merchants sometimes only realize how poor their performance with false positives is when they improve it.

An independent Forrester analysis of Forter's impact on a large Internet Retailer Top 500 company showed that switching to Forter took that company's transaction approvals from 90% to 97% - an extremely significant increase. False positives have a very real, very measurable effect on a business's bottom line.

Looking beyond the transaction

The trouble with false positives is that you're not only rejecting a transaction. You're rejecting a customer. You're rejecting all the orders which that customer might have placed with you in the future, too.

The Javelin report 'Future Proofing Card Authorization' found that 39% of cardholders will abandon a card after a decline, and 25% will move a declined card to the back of the wallet.

Now, that's talking about spend on a card - cards which people rely on day-to-day. Just imagine the kind of reaction they'd have regarding the website that rejected them as a fraudulent consumer. Let's be realistic about this - they're not likely to come back.

What causes false positives?

The main answer to the question of what causes false positives is that people are complicated, and rules are too simple.

That's why you end up with the 'orange window' situation (to go back to our image above). Rules pick up features of a transaction, but don't see the whole story, so sometimes a good customer looks the same to them as a fraudster.

Most fraud prevention relies on rules engines: systems built around a collection of rules that determine whether to accept or reject a transaction.

The rules-based approach

So, for example, an order for an expensive briefcase, placed with fast shipping, to be delivered to a hotel far from the billing address of the card, would be classed as fraud, and rejected.

Why? Because fraudsters like fast shipping, which gets them their stolen goods before someone realizes what's going on. The shipping address doesn't match the billing address - a classic red flag, much like an AVS mismatch. Hotels are suspect, because it's easy for fraudsters to use them as pick-up zones. And expensive items are high risk because of the amount involved, but popular with criminals because they stand to make a large profit.
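The logic above can be sketched as a tiny rules engine. This is a minimal illustration only - the field names, thresholds, and the 'count the flags' decision policy are all invented for this sketch, and production engines are far more elaborate:

```python
# Minimal sketch of a rules engine: each rule flags one suspicious
# feature of an order; too many flags and the order is rejected.
# All field names and thresholds are invented for illustration.

RULES = [
    ("expensive item",            lambda o: o["amount"] > 500),
    ("expedited shipping",        lambda o: o["shipping"] == "overnight"),
    ("shipping/billing mismatch", lambda o: o["ship_addr"] != o["bill_addr"]),
    ("hotel delivery",            lambda o: o["ship_addr_type"] == "hotel"),
]

def decide(order, max_flags=2):
    """Return ('accept'|'reject', list of triggered rule names)."""
    flags = [name for name, rule in RULES if rule(order)]
    return ("reject" if len(flags) > max_flags else "accept"), flags

# The travelling businessman's order trips every rule at once,
# even though he is a perfectly good customer.
order = {
    "amount": 800,
    "shipping": "overnight",
    "ship_addr": "Grand Hotel, Chicago",
    "bill_addr": "12 Elm St, Boston",
    "ship_addr_type": "hotel",
}
decision, flags = decide(order)
print(decision, flags)  # -> 'reject', with all four rules triggered
```

The point of the sketch: each rule sees one feature in isolation, so an order that trips several of them is rejected regardless of the human story behind it.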

The human story

But consider this story from the human perspective, not a rules-based one.

What we have here is a businessman, travelling for work, whose briefcase gives out. He needs a new one to arrive before he moves on to the next place on his itinerary. It's really no more complicated than that.

Sadly, he won't be getting his briefcase, because he's been marked as fraudulent by a rules engine. He has become a false positive.

Is it worth it?

If rules-based systems result in false positives, but are also great at stopping fraud, maybe it's worth the cost. Right? Well, no. The same problem that causes false positives also causes weaknesses in the system's ability to weed out fraud.

Rules are inflexible, slow to adapt. New trends must be noticed and hard-coded in as rules. By the time that's happened, they're probably not new anymore. Fraudsters are fast, and they develop new techniques all the time. That gives them an automatic advantage - which is not good for retailers relying on rules.

The solution

The problem with rules-based systems is that they're too inflexible, too slow to adapt to new trends and circumstances, and can't treat transactions on an individual basis. That's why they create false positive situations.

It used to be that there was no alternative, but that's no longer the case. Thanks to machine learning and its ability to leverage the power of big data, combined with essential human expertise, modern technology can provide a faster, more flexible solution which both blocks fraud more effectively and can minimize false positives.

The Right Combination of AI And IQ

You've probably heard of predictive analytics. It's used for all sorts of things in the age of big data. What it really means is that data from the past is used to predict what will happen in the future.

The machine does this by analyzing the patterns that connect past data points and seeing which of them are relevant to the new situations it's presented with. Obviously, the more data the machine has, the better it will be at predicting. And if it's guided by human experts, it will learn very fast, and become very accurate very quickly.
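As a toy illustration of the idea - everything here (the features, the data, and the nearest-neighbour method) is invented for the sketch, and real systems use far richer signals and models - past transactions with known outcomes can be used to classify a new one by similarity:

```python
# Toy sketch of predictive analytics: label a new order by comparing it
# to past orders with known outcomes (a simple nearest-neighbour vote
# over hand-made feature vectors). All data here is invented.

from math import dist

# (normalised amount, rush shipping, address mismatch, hotel delivery) -> outcome
PAST = [
    ((0.90, 1, 1, 1), "fraud"),
    ((0.10, 0, 0, 0), "legit"),
    ((0.30, 0, 1, 0), "legit"),
    ((0.85, 1, 1, 1), "legit"),  # a travelling businessman, confirmed good
    ((0.80, 1, 1, 1), "legit"),  # another similar confirmed-good order
]

def predict(features, k=3):
    """Vote among the k most similar past transactions."""
    nearest = sorted(PAST, key=lambda p: dist(features, p[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# A new order that looks just like the businessman's. A rules engine
# would reject it, but the history contains matching good customers.
print(predict((0.82, 1, 1, 1)))  # -> 'legit'
```

The design point: because the decision is grounded in outcomes of similar past cases rather than in fixed thresholds, an unusual-but-legitimate pattern that has been seen before can be approved instead of rejected.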

Websites have a lot of data, and experts have a lot of knowledge. The combination can end false positives.

When it comes to fraud prevention, the machine has a lot of data. Just think of all the actions consumers take on a website as they browse and move through to checkout. Remember all the things involved in checking out. Bear in mind the holistic picture a system can build up by incorporating account data and behaviors into its calculations. That's a lot of information. And highly-trained fraud analysts know a lot about fraud, fraudsters and false positives. That's a lot of information, too.

All of this means that artificial intelligence, in conjunction with human expertise, is extremely useful when it comes to predicting whether a transaction will turn out to be fraudulent or not. It's highly accurate, and great at avoiding false positives - because it can differentiate between a customer with a complex but true story, and a fraudster who's just pretending, by comparing the case to all the others it knows about from the past. It can treat each transaction as individual, not merely a jumble of rules.

False positives should no longer be thought of as a cost of doing business online.

Companies need to start moving towards AI in fraud prevention and the value it can bring to their business. In the words of a Forrester report on machine learning and fraud prevention, you can 'stop billions in fraud losses with machine learning.' Start stopping those losses today.

Wondering how using AI could impact your false positive numbers? See Forrester's assessment of the value Forter brought to merchants in this TEI report summary.


This article was first published on September 09, 2015 and has been revised to reflect additional or updated information.