The good, the bad, and the just plain stupid as experienced by building a queue from the ground up.

What is a Manual Review Queue?

All this means is that there is a separate queue where orders sit in an interim phase: orders that have neither been processed nor rejected yet, waiting for a person to manually review them and decide their fate.

This comes in a variety of shapes and sizes depending on your company and your strategy. My focus is on a card-not-present (CNP) e-commerce platform shipping physical goods to a younger demographic. Below you’ll find some basic tips on how to approach building this out.

Find a good tool to create an automatic Manual Review Queue

We built ours before there were mainstream tools offering it as a service. Due to this, we still have our own internal Manual Review Queue. This means that our orders are reviewed and processed from our own dashboard, rather than from our outsourced platform dashboard.

We use Sift Science and their automated system to flag orders based on a list of criteria. While I obviously can’t share real rules, an example would be: hold all orders where a user’s IP address is more than 75 miles from their billing zip code. This sends all orders from customers who meet this criterion into a special manual review queue.
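To make the example concrete, here is a rough sketch of what a distance rule like that computes. The 75-mile threshold is from the example above; the function names, coordinates, and the idea of comparing a geo-IP fix against a zip-code centroid are illustrative assumptions, not how Sift implements it:

```python
import math

def miles_between(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in miles.
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_hold(ip_coords, billing_zip_coords, threshold_miles=75):
    # Hold the order for review if the IP geolocation is farther than the
    # threshold from the centroid of the billing zip code.
    return miles_between(*ip_coords, *billing_zip_coords) > threshold_miles

# Example: IP resolves near Los Angeles, billing zip centroid in San Francisco.
print(should_hold((34.05, -118.24), (37.77, -122.42)))  # True (well over 75 miles)
```

In practice a geo-IP service would supply the IP coordinates and a zip-code database the billing centroid; the point is only that the rule reduces to a single threshold comparison.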

I’ve developed this over the years and can passionately say that an external tool to flag orders, combined with a manual review queue, is the way to go. We still have our own internal manual review queue built into our platform, as opposed to using Sift’s new manual review queue in their dashboard, but we rely on Sift to isolate the users that match our rules and trigger holds on their orders.

If you can, avoid hard-coded rules

I passionately say to use an external tool because it means that you can avoid hard-coded rules. What are hard-coded rules? This is when the dev team writes a little bit of code that says ‘hold an order if it meets X criteria.’ Now this sounds great, but trust me, it will not be when you have to make minor tweaks to your rules on a frequent basis.

I will say that this totally depends on your team and the back-end devs available, but I just cannot write this page without firmly advising against this.

When you start building your Manual Review Queue, you will need to isolate trends to target. These are the ‘when a user is X’ rules that you derive from analyzing your fraudulent chargebacks. These should be under constant testing until you figure out the best version of your rule. Constantly running back to the dev team to change these hard-coded rules is not an option for smaller teams.
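One way to see why hard-coded rules bottleneck you is to contrast them with rules kept as data the fraud team can edit directly. The field names, operators, and thresholds below are illustrative assumptions, not our real rules:

```python
# Rules as data: each rule names an order field, a comparison, and a threshold.
# Tweaking a threshold means editing this list, not shipping new code.
RULES = [
    {"field": "ip_billing_distance_miles", "op": "gt", "value": 75},
    {"field": "account_age_days", "op": "lt", "value": 1},
]

OPS = {"gt": lambda a, b: a > b, "lt": lambda a, b: a < b}

def needs_review(order):
    # Hold the order if any rule matches.
    return any(OPS[r["op"]](order[r["field"]], r["value"]) for r in RULES)

order = {"ip_billing_distance_miles": 120, "account_age_days": 30}
print(needs_review(order))  # True: the distance rule matches
```

A hard-coded version bakes each `if` into application code, so every minor tweak becomes a dev ticket; the data-driven version is essentially what an external tool gives you through its dashboard.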

We started with hard-coded rules and I have to admit, this was just plain stupid. As I wanted to create new rules or adjust pre-existing ones, I was always severely bottlenecked.

Ultimately, our fraud team is significantly more nimble since removing hard-coded rules and switching to Sift Science.

Send customers a notification if an order goes to your Manual Review Queue

You might be thinking: Wait what? Why would I want the fraudster to know I’m onto them?

Ultimately, manual review is about finding a way specific to your platform to decide whether a real person who really wants that order is behind the screen. One simple rule we follow is that fraudsters want the quickest gain for the most minimal effort. Any secondary steps typically result in a fraudster abandoning ship, creating a new user account, and trying again. It may sound simple, but a huge aspect of your manual review is email verification. If an order goes into our Manual Review Queue, we send the customer an automatic email stating they must respond with their unique order info to put their order into processing. If a user responds, it almost always means it is not malicious fraud.
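The verification step boils down to a challenge-and-echo check. This is a minimal sketch under assumed names (`hold_order`, `verify_reply`, the token format); the real flow would live in your order system and email pipeline:

```python
import secrets

# Held orders awaiting verification, keyed by order id.
# In a real system this would be persistent state, not a dict.
held_orders = {}

def hold_order(order_id):
    # Generate the unique order reference included in the auto-email.
    ref = secrets.token_hex(4)
    held_orders[order_id] = ref
    return ref

def verify_reply(order_id, supplied_ref):
    # A reply echoing the correct reference releases the order to processing.
    return held_orders.get(order_id) == supplied_ref

ref = hold_order("A1001")
print(verify_reply("A1001", ref))      # True: customer echoed the reference
print(verify_reply("A1001", "bogus"))  # False: no match, order stays held
```

The friction is the point: a legitimate customer will reply, while a fraudster chasing the quickest gain usually will not.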

Picking your rules

So I’ve mentioned creating these rules to flag orders. You’re probably looking for more specificity on that front. Ultimately, the rules themselves are highly dependent upon your unique platform and consumer base. The best way we’ve isolated these rules is by reviewing users who file a chargeback under the reason that the charge was not authorized.

Contrary to popular belief, I do not think this requires heavy data analysis or a dedicated data scientist working with your team. All you need is to look at a sample of chargebacks and pick out a few key trends to get started, though this does require that you collect and have access to enough data about the user and the order within your platform. When you see chargebacks sharing a trend, which can be as simple and obvious as noticing several purchasers with chargebacks from Caracas, VE, you create a rule to flag users from Caracas, VE for review.
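The trend-spotting described above can be as light as tallying fraud-coded chargebacks by a field you care about. The sample records and field names here are made up for illustration:

```python
from collections import Counter

# Toy chargeback sample; in practice these fields come from your own
# order records joined against chargeback data.
chargebacks = [
    {"city": "Caracas, VE", "reason": "not_authorized"},
    {"city": "Caracas, VE", "reason": "not_authorized"},
    {"city": "Austin, US", "reason": "not_authorized"},
    {"city": "Caracas, VE", "reason": "not_authorized"},
]

# Tally "charge not authorized" chargebacks by city to surface candidate rules.
by_city = Counter(cb["city"] for cb in chargebacks if cb["reason"] == "not_authorized")
print(by_city.most_common(1))  # [('Caracas, VE', 3)]
```

Any field that clusters this way (city, email domain, card BIN, and so on) is a candidate for a flagging rule, which you then test and refine against new chargebacks.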


To wrap it all up…

Building a way to manually review risky orders is a fundamental aspect of creating your fraud strategy. This manual review process will be unique to each company, but there are several platforms that host this for you. The aforementioned tips are some guidelines to frame your approach.