Breast Cancer Diagnosis in Two-View Mammography Using End-to-End Trained EfficientNet-Based Convolutional Network

Some recent studies have described deep convolutional neural networks that diagnose breast cancer in mammograms with performance similar or even superior to that of human experts. One of the best techniques performs two transfer learnings: the first uses a model trained on natural images to create a "patch classifier" that categorizes small subimages; the second uses the patch classifier to scan the whole mammogram and create the "single-view whole-image classifier". We propose a third transfer learning to obtain a "two-view classifier" that uses the two mammographic views: bilateral craniocaudal and mediolateral oblique. We use EfficientNet as the basis of our model, and we train the entire system end-to-end on the CBIS-DDSM dataset. To ensure statistical robustness, we test our system twice, using: (a) 5-fold cross-validation; and (b) the original training/test split of the dataset. Our technique reached an AUC of 0.9344 under 5-fold cross-validation (accuracy, sensitivity, and specificity are 85.13% at the equal-error-rate point of the ROC curve). Using the original dataset split, our technique achieved an AUC of 0.8483, to our knowledge the highest AUC reported for this problem, although subtle differences in the testing conditions of each work preclude an exact comparison. The inference code and model are available at https://github.com/dpetrini/two-views-classifier
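
For readers who want a concrete picture of the two-view fusion idea, the following PyTorch sketch shows one way a classifier over the craniocaudal (CC) and mediolateral oblique (MLO) views could be assembled from EfficientNet backbones. It is an illustrative assumption, not the authors' released implementation: the backbone variant (torchvision's efficientnet_b0), the fusion head, the input size, and all names here are hypothetical; the model published at the GitHub link above is the authoritative reference.

```python
# Hypothetical sketch (not the released code): a two-view mammogram classifier
# that fuses pooled features from the CC and MLO views.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class TwoViewClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One backbone per view; in the paper's pipeline, weights from the earlier
        # single-view transfer-learning stage would be loaded here instead of
        # random initialization.
        self.cc_backbone = efficientnet_b0()
        self.mlo_backbone = efficientnet_b0()
        # Strip the ImageNet classification head to expose 1280-d pooled features.
        self.cc_backbone.classifier = nn.Identity()
        self.mlo_backbone.classifier = nn.Identity()
        # Fusion head over the concatenated per-view features (sizes are assumptions).
        self.head = nn.Sequential(
            nn.Linear(1280 * 2, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, cc_image: torch.Tensor, mlo_image: torch.Tensor) -> torch.Tensor:
        cc_feat = self.cc_backbone(cc_image)      # (N, 1280)
        mlo_feat = self.mlo_backbone(mlo_image)   # (N, 1280)
        return self.head(torch.cat([cc_feat, mlo_feat], dim=1))


if __name__ == "__main__":
    model = TwoViewClassifier()
    cc = torch.randn(2, 3, 224, 224)   # batch of CC views (toy resolution)
    mlo = torch.randn(2, 3, 224, 224)  # batch of MLO views
    print(model(cc, mlo).shape)        # torch.Size([2, 2])
```

Because the two backbones and the fusion head sit in a single `nn.Module`, the whole system can be trained end-to-end with a standard cross-entropy loss, which is the spirit of the third transfer-learning step described in the abstract.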