Learning Memory-guided Normality for Anomaly Detection

We address the problem of anomaly detection, that is, detecting anomalous events in a video sequence. Anomaly detection methods based on convolutional neural networks (CNNs) typically leverage proxy tasks, such as reconstructing input video frames, to learn models describing normality without seeing anomalous samples at training time, and quantify the extent of abnormalities using the reconstruction error at test time. The main drawbacks of these approaches are that they do not consider the diversity of normal patterns explicitly, and the powerful representation capacity of CNNs allows them to reconstruct even abnormal video frames. To address this problem, we present an unsupervised learning approach to anomaly detection that considers the diversity of normal patterns explicitly, while lessening the representation capacity of CNNs. To this end, we propose to use a memory module with a new update scheme, where items in the memory record prototypical patterns of normal data. We also present novel feature compactness and separateness losses to train the memory, boosting the discriminative power of both the memory items and the deeply learned features from normal data. Experimental results on standard benchmarks demonstrate the effectiveness and efficiency of our approach, which outperforms the state of the art.
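To make the memory-training objective concrete, the sketch below shows one plausible formulation of feature compactness and separateness losses over a set of query features and memory items. The function name `memory_losses`, the use of NumPy, and the triplet-style margin for separateness are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def memory_losses(queries, memory, margin=1.0):
    """Hypothetical sketch of compactness/separateness losses.

    queries: (N, D) array of query features from an encoder.
    memory:  (M, D) array of memory items (prototypes of normal patterns).
    """
    # Pairwise L2 distances between every query and every memory item: (N, M).
    dists = np.linalg.norm(queries[:, None, :] - memory[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    idx = np.arange(len(queries))
    nearest = dists[idx, order[:, 0]]   # distance to the nearest memory item
    second = dists[idx, order[:, 1]]    # distance to the second-nearest item

    # Compactness: pull each query toward its nearest memory item.
    compactness = nearest.mean()
    # Separateness: push the second-nearest item away by a margin,
    # keeping memory items discriminative (triplet-style hinge).
    separateness = np.maximum(nearest - second + margin, 0.0).mean()
    return compactness, separateness
```

At training time, a weighted sum of the two terms would be added to the reconstruction loss; minimizing compactness alone could collapse all memory items together, which is what the separateness term counteracts.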