Decoupling Decision-Making in Fraud Prevention through Classifier Calibration for Business Logic Action

Machine learning models typically focus on specific targets, such as creating classifiers, often based on known population feature distributions in a business context. However, models calculating individual features adapt over time to improve precision, introducing the concept of decoupling: shifting from point evaluation to data distribution. We use calibration as a strategy for decoupling machine learning (ML) classifiers from score-based actions within business logic frameworks. To evaluate these strategies, we perform a comparative analysis using a real-world business scenario and multiple ML models. Our findings highlight the trade-offs and performance implications of the approach, offering valuable insights for practitioners seeking to optimize their decoupling efforts. In particular, the Isotonic and Beta calibration methods stand out in scenarios where there is a shift between training and testing data.
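As an illustration of the kind of calibration step the abstract refers to, the following is a minimal sketch of isotonic calibration using scikit-learn. It is not the paper's own implementation: the dataset, model choice, and split are placeholders, and the Beta calibration method mentioned above (available in the third-party `betacal` package) is not shown.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a fraud-scoring dataset (placeholder, not the
# real-world business data used in the paper).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Wrap a base classifier with isotonic calibration: cross-validation fits
# a monotone mapping from raw scores to calibrated probabilities, so that
# downstream business-logic thresholds act on probabilities rather than on
# the raw score scale of any particular model.
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(random_state=0), method="isotonic", cv=5
)
calibrated.fit(X_train, y_train)

probs = calibrated.predict_proba(X_test)[:, 1]
print(probs.min(), probs.max())
```

Because the calibrated output is a probability, a business rule such as "block transactions with fraud probability above 0.9" can stay fixed while the underlying classifier is retrained or replaced.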