However, Blue notes that while the AI system was effective at stopping fraud, it was also opaque and difficult for bankers to understand. The team learned that customers needed options for managing the tradeoff between effective fraud prevention and end-user friction. As a result, they carefully tuned the algorithm to balance false positives against false negatives and built a user interface for releasing transactions that had been flagged as anomalous but weren't fraudulent. The takeaway: AI products can't be built in isolation; they must account for the end user's needs and problems.
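The tradeoff described above can be made concrete with a small sketch. The function names, risk scores, and threshold values below are all illustrative, not Q2's actual implementation: the point is only that moving a single decision threshold trades false positives (good transactions blocked) against false negatives (fraud let through).

```python
# Hypothetical sketch of a risk-score threshold tradeoff.
# All data and names are illustrative, not Q2's RFA system.

def error_rates(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold.

    A transaction is flagged as fraud when its risk score exceeds
    the threshold; labels hold the ground truth (True = fraud).
    """
    flags = [s > threshold for s in scores]
    fp = sum(1 for f, y in zip(flags, labels) if f and not y)
    fn = sum(1 for f, y in zip(flags, labels) if not f and y)
    return fp, fn

# Toy risk scores with ground-truth fraud labels.
scores = [0.1, 0.4, 0.35, 0.8, 0.95, 0.6, 0.2, 0.7]
labels = [False, False, True, True, True, False, False, True]

# Raising the threshold cuts false positives but misses more fraud.
for t in (0.3, 0.5, 0.7):
    fp, fn = error_rates(scores, labels, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

A release interface like the one Blue describes sits on top of exactly this kind of tuning: whatever threshold is chosen, some legitimate transactions will still be flagged, so bankers need a way to inspect and release them.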
Key takeaways:
- Adam Blue, CTO of Q2, shares the company's unique approach to tackling account takeover (ATO) fraud, which involves building a behavior model of how an account holder normally acts, rather than relying on traditional identification methods.
- The company's first AI product, RFA (Risk and Fraud Analytics), uses machine learning to identify and stop fraud, but also ensures that customers understand what the system is doing.
- One of the challenges was balancing the complexity of AI and machine learning against customers' need for transparency and control, since many customers wanted to apply their own domain experience to shape the solution.
- Blue emphasizes the importance of understanding the end user and their problems deeply when deploying machine learning, cautioning against deploying AI for the sake of AI.
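The behavior-model idea in the first takeaway can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not Q2's system: it simply flags a transaction when its amount deviates sharply from that account's own history, which is the core intuition behind modeling how an account holder normally acts rather than relying on static identity checks.

```python
# Illustrative behavioral anomaly check: compare a new transaction
# against a per-account baseline. A toy stand-in for the behavior-model
# approach described above, not Q2's actual RFA product.

from statistics import mean, stdev

def is_anomalous(history, amount, z_cutoff=3.0):
    """Flag an amount that deviates strongly from the account's history.

    Uses a simple z-score against the account's past transaction
    amounts; z_cutoff controls how tolerant the check is.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_cutoff

# Typical spend for one account.
history = [42.0, 55.0, 38.0, 61.0, 47.0, 50.0]

print(is_anomalous(history, 52.0))    # in line with normal behavior
print(is_anomalous(history, 5000.0))  # wildly out of pattern
```

A real system would model far richer behavior (devices, timing, payees, session patterns), but the shape is the same: the account's own history defines "normal," and deviations from it raise risk.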