
A Large-Scale TV Dataset for Partial Video Copy Detection

Van-Hao Le, Mathieu Delalandre, and Donatello Conte
Abstract

This paper addresses the performance evaluation of partial video copy detection. Several public datasets exist that are designed from web videos, yet the detection problem is inherent to continuous video broadcasting. The alternative is to work with TV datasets, which offer deeper scalability and control over degradations for a fine-grained performance evaluation. We propose in this paper a TV dataset called STVD. It is designed with a protocol ensuring scalable capture and robust ground-truthing. STVD is the largest public dataset for this task, with nearly 83k videos totaling 10,660 hours. Performance evaluation results of representative methods on the dataset are reported in the paper for a baseline comparison.
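Partial video copy detection is commonly evaluated at the segment level, by matching detected copy segments against ground-truth annotations. The sketch below illustrates one such evaluation using temporal intersection-over-union; the annotation layout, the greedy matching, and the 0.5 IoU threshold are illustrative assumptions and are not taken from the paper's protocol.

```python
# Minimal sketch of segment-level evaluation for partial video copy detection,
# assuming annotations of the form (video_id, start_s, end_s).
# The matching rule (temporal IoU >= 0.5, greedy one-to-one) is an assumption,
# not the STVD evaluation protocol.

from typing import List, Tuple

Segment = Tuple[str, float, float]  # (video_id, start_s, end_s)

def temporal_iou(a: Segment, b: Segment) -> float:
    """Temporal intersection-over-union of two segments in the same video."""
    if a[0] != b[0]:
        return 0.0
    inter = max(0.0, min(a[2], b[2]) - max(a[1], b[1]))
    union = (a[2] - a[1]) + (b[2] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(detections: List[Segment],
                     ground_truth: List[Segment],
                     iou_thr: float = 0.5) -> Tuple[float, float]:
    """Greedily match each detection to an unmatched ground-truth segment."""
    matched = set()
    tp = 0
    for det in detections:
        best_j, best_iou = -1, 0.0
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            iou = temporal_iou(det, gt)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_iou >= iou_thr:
            matched.add(best_j)
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

if __name__ == "__main__":
    # Toy example with hypothetical video IDs and timestamps.
    gt = [("tv_001", 120.0, 180.0), ("tv_002", 30.0, 45.0)]
    det = [("tv_001", 118.0, 175.0), ("tv_003", 10.0, 20.0)]
    print(precision_recall(det, gt))  # -> (0.5, 0.5)
```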
