As people grow more comfortable storing their data in the cloud, the stakes of protecting your customers and your company from fraud keep rising. At the same time, the schemes and behaviors fraudsters deploy within financial services have become ever more complex. This can lead to some tough internal conversations between the fraud and growth teams.
Machine learning often comes up as an answer in this struggle, but what are the right questions to ask when considering such a framework? Below are some to get you started, with context I’ve gleaned from watching the financial risk space for teams at Square, Facebook, and Google.
- How accurate is the model?
This is a biggie. Of course, the effectiveness of a machine learning fraud solution boils down to how well it works at doing what it promises to do: predict fraud. But when talking about accuracy, don’t forget about the flip side of stopping bad users – namely, not stopping good users. How well does the tool do at recognizing your good users?
So, how do you actually measure the accuracy of a new tool you’re not already using? Some platforms may give you the option of trying them out for free, without the commitment of a long-term contract. If you have existing tools you’re considering moving away from, you may even be able to run a test with the new system in parallel, then compare the results.
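If you do run a new tool in parallel with your existing one, the comparison comes down to a few standard numbers: how much fraud each tool catches (recall), how often its blocks are correct (precision), and how many good users it wrongly stops (false positive rate). Here’s a minimal sketch of that comparison in Python; the decision lists and outcomes are invented sample data, not real results.

```python
# Illustrative sketch: comparing two fraud tools run in parallel on the
# same transactions. All data below is made up for the example.

def score_tool(decisions, actual_fraud):
    """Return (precision, recall, false positive rate) for a tool's
    block decisions against known outcomes (e.g., later chargebacks)."""
    tp = sum(1 for d, a in zip(decisions, actual_fraud) if d and a)
    fp = sum(1 for d, a in zip(decisions, actual_fraud) if d and not a)
    fn = sum(1 for d, a in zip(decisions, actual_fraud) if not d and a)
    tn = sum(1 for d, a in zip(decisions, actual_fraud) if not d and not a)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0  # good users wrongly blocked
    return precision, recall, fpr

actual = [True, False, False, True, False, False, False, True]
incumbent = [True, True, False, False, False, True, False, True]   # blocked?
candidate = [True, False, False, True, False, False, True, True]

for name, decisions in [("incumbent", incumbent), ("candidate", candidate)]:
    p, r, f = score_tool(decisions, actual)
    print(f"{name}: precision={p:.2f} recall={r:.2f} fpr={f:.2f}")
```

Note that the false positive rate captures the “flip side” above: it’s the share of good users the tool stopped.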
- What are current customers saying about their results?
In a perfect world, you’d get a trustworthy recommendation from a similar business already in your network, with solid results to back it up. But those aren’t always easy to come by. That’s when testimonials and case studies can really come in handy to give substance to marketing claims. What logos are on the solution’s website? What companies are quoted, and are they providing specific results they achieved with the platform? Anyone can say anything. But when you have a brand name willing to attach themselves to a success story, it lends a lot more credibility.
- How robust is the global customer network?
With a machine learning solution, having large volumes of high-quality data is a key part of the “secret sauce” to getting the best results. Why? Simple: more data means more to learn from. So, it’s in your best interest to look for machine learning solutions that have large and varied networks of customers.
But don’t forget the second part of this equation: how the tool actually leverages the data it collects and turns it into intelligence you can use immediately. In other words, how does it incorporate learnings from the global network to serve you, the customer?
- What industries does it cover?
When asking this question, keep in mind that a niche focus isn’t necessarily best. A solution that serves a single industry may have deeper knowledge of that vertical, but that knowledge may not translate beyond the sales and marketing pitch into meaningful results for your company.
It really goes back to the question about the global customer network: more data is better, and diverse data is better. Intelligence from different industries can benefit everyone who uses a machine learning fraud detection solution. After all, most fraudsters don’t tend to focus on scamming only food-delivery services, or only airlines. They’re constantly adapting, and you’ll want to benefit from the breadth of knowledge that comes from multi-industry data.
- Can you tailor the model to your unique business needs?
A breadth of high-quality data is amazingly powerful, but equally powerful is the ability to layer on learnings that are specific to your business. That way, you can benefit from customizability as well as breadth. For example, a travel firm may derive great value from incorporating information like destination and time of travel relative to booking into their machine learning model.
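To make the travel example concrete, here’s a small sketch of what “layering on” a business-specific signal could look like: turning a raw booking record into features such as booking lead time. The field names and threshold are invented for illustration, not any particular vendor’s schema.

```python
# Hypothetical sketch of business-specific features a travel firm might
# feed into a model. Field names and the 24-hour cutoff are invented.
from datetime import datetime

def booking_features(order):
    """Turn a raw booking record into model-ready features."""
    booked = datetime.fromisoformat(order["booked_at"])
    departs = datetime.fromisoformat(order["departs_at"])
    lead_hours = (departs - booked).total_seconds() / 3600
    return {
        # Very short lead times can correlate with stolen-card fraud.
        "lead_hours": lead_hours,
        "same_day_booking": lead_hours < 24,
        "destination": order["destination"],
    }

features = booking_features({
    "booked_at": "2024-03-01T22:00:00",
    "departs_at": "2024-03-02T06:30:00",
    "destination": "LHR",
})
print(features)  # lead_hours=8.5, same_day_booking=True
```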
- How does the model adapt to changing fraud patterns?
One important differentiating factor for machine learning systems is how users give feedback to the system, so it can learn and improve. For example, customers who use Sift Science can apply labels indicating whether specific users are “bad” or “not bad.” If a user is marked as “bad,” the system considers all of the behavior and signals of that user to be associated with bad behavior in the future. This type of feedback is crucial for accuracy, and for adapting extremely quickly to changing fraud patterns.
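The core idea can be sketched in a few lines: labeling a user “bad” taints the signals tied to that user (device, IP, email, and so on), so future activity sharing those signals scores as riskier. This is a toy illustration of the concept, not Sift Science’s actual API or model.

```python
# Toy sketch of label feedback: a "bad" label on one user propagates to
# the signals that user touched. Purely illustrative, not a real API.

class FeedbackModel:
    def __init__(self):
        self.bad_signals = set()
        self.users = {}  # user_id -> set of observed signals

    def observe(self, user_id, signals):
        self.users.setdefault(user_id, set()).update(signals)

    def label_bad(self, user_id):
        # Everything this user touched is now associated with fraud.
        self.bad_signals |= self.users.get(user_id, set())

    def risk_score(self, signals):
        # Fraction of a new event's signals already linked to bad users.
        signals = set(signals)
        return len(signals & self.bad_signals) / len(signals)

model = FeedbackModel()
model.observe("u1", {"device:abc", "ip:1.2.3.4", "email:x@example.com"})
model.label_bad("u1")  # analyst applies a "bad" label
print(model.risk_score({"device:abc", "ip:9.9.9.9"}))  # 0.5, shares a tainted device
```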
- How quickly can the customer benefit from new learnings?
As we mentioned, “real time” has become a bit of a buzzword, peppered into sales pitches, blog posts, and marketing collateral. But what does it really mean to update risk scores in real time? It’s worth digging into the nitty gritty of promises like these to ask, for example: if a fraudster is flagged by a user in the global network, will that information be reflected immediately (like in milliseconds)? Or will you need to wait for days, weeks, or even months for a model to update?
- Is the pricing model transparent?
Fees and pricing models vary, but can include per-transaction or flat-rate monthly costs, contracts, setup fees, percentage of revenue, maintenance/support costs, and implementation costs. Can you find the information you need on the company’s website, or do you need to talk to a salesperson?
Also: does the solution require contract negotiations, or is it pay-as-you-go? Are the terms of service transparent and easy to understand?
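When comparing a per-transaction fee against a flat monthly rate, a quick break-even calculation tells you which model is cheaper at your volume. The fees below are invented numbers for illustration only.

```python
# Back-of-the-envelope pricing comparison (all figures hypothetical).

def monthly_cost_per_txn(volume, fee_per_txn):
    """Total monthly cost under per-transaction pricing."""
    return volume * fee_per_txn

def break_even_volume(flat_monthly, fee_per_txn):
    """Monthly volume above which flat-rate pricing becomes cheaper."""
    return flat_monthly / fee_per_txn

flat = 2000.00    # hypothetical flat monthly fee
per_txn = 0.05    # hypothetical per-transaction fee
volume = break_even_volume(flat, per_txn)
print(f"Break-even at {volume:,.0f} transactions/month")
```

Below the break-even volume, pay-as-you-go pricing wins; above it, the flat rate does.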
- How easy is it to integrate?
How available is the documentation? Is it simple, transparent, and easy to use? When it comes to integration, it’s a great idea to get your developers involved. But wouldn’t it be even better to be able to skip the initial discussion altogether, just send your developer a link to the documentation, and let them review it directly?