HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks

Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, in this study, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly. Our comprehensive experimental evaluations across various benchmark datasets reveal that HyperE2VID not only surpasses current state-of-the-art methods in terms of reconstruction quality but also achieves this with fewer parameters, reduced computational requirements, and accelerated inference times.
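
To illustrate the core idea of a hypernetwork producing per-pixel adaptive filters from fused event and image context, here is a minimal PyTorch sketch. This is not the authors' implementation: the module names (ContextFusion, PerPixelFilterHypernet), the channel sizes, the number of voxel-grid temporal bins, and the softmax-normalized local filtering are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextFusion(nn.Module):
    """Fuses the event voxel grid with the previously reconstructed frame (sketch)."""
    def __init__(self, event_bins=5, out_ch=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(event_bins + 1, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, voxel_grid, prev_frame):
        # voxel_grid: (B, event_bins, H, W); prev_frame: (B, 1, H, W)
        return self.conv(torch.cat([voxel_grid, prev_frame], dim=1))

class PerPixelFilterHypernet(nn.Module):
    """Predicts a k x k filter per spatial location from the fused context
    and applies it to a feature map (assumed dynamic-filtering formulation)."""
    def __init__(self, ctx_ch=32, feat_ch=16, k=3):
        super().__init__()
        self.k = k
        self.head = nn.Conv2d(ctx_ch, feat_ch * k * k, kernel_size=1)

    def forward(self, context, features):
        # context: (B, ctx_ch, H, W); features: (B, feat_ch, H, W)
        B, C, H, W = features.shape
        filters = self.head(context).view(B, C, self.k * self.k, H, W)
        filters = F.softmax(filters, dim=2)  # normalize each local kernel
        # Extract k x k neighborhoods around every pixel and apply the filters
        patches = F.unfold(features, self.k, padding=self.k // 2)
        patches = patches.view(B, C, self.k * self.k, H, W)
        return (filters * patches).sum(dim=2)  # (B, C, H, W)

# Usage example (shapes only; data is random)
voxels = torch.randn(1, 5, 64, 64)   # event voxel grid with 5 temporal bins
prev = torch.zeros(1, 1, 64, 64)     # previously reconstructed intensity frame
feats = torch.randn(1, 16, 64, 64)   # decoder feature map to be filtered
ctx = ContextFusion()(voxels, prev)
out = PerPixelFilterHypernet()(ctx, feats)
print(out.shape)  # torch.Size([1, 16, 64, 64])
```

The sketch conveys the dynamic-network aspect described in the abstract: the filter weights are not fixed after training but are regenerated at every pixel and every time step from the current events and the previous reconstruction.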