“Seeing” ENF from Neuromorphic Events: Modeling and Robust Estimation

Lexuan Xu, Guang Hua, Haijian Zhang, Lei Yu, Ning Qiao

¹Wuhan University, Wuhan, China
²Infocomm Technology Cluster, Singapore Institute of Technology, Singapore



We estimate Electric Network Frequency (ENF) fluctuations, implicit in light flicker, from the event streams recorded by event cameras.

Our method, Event-based ENF (E-ENF) estimation, yields more accurate ENF estimates than video-based methods (V-ENF) across diverse scenarios.

Abstract

Most artificial lights exhibit subtle fluctuations in intensity and frequency driven by the grid's alternating current, making it possible to estimate the Electric Network Frequency (ENF) from conventional frame-based videos. Nevertheless, the performance of Video-based ENF (V-ENF) estimation relies largely on imaging quality and may therefore suffer significant interference from non-ideal sampling, scene diversity, motion, and extreme lighting conditions.

In this paper, we show that the ENF can be extracted free of the above limitations from a new modality provided by the so-called event camera, a neuromorphic sensor that encodes light intensity variations and asynchronously emits events with extremely high temporal resolution and high dynamic range. Specifically, we formulate and validate the physical mechanism by which the ENF is captured in events, and then propose a simple yet robust Event-based ENF (E-ENF) estimation method based on mode filtering and harmonic enhancement.

To validate its effectiveness, we build the first Event-Video ENF Dataset (EV-ENFD) and its extension EV-ENFD+, covering diverse scenarios that include static, dynamic, and extreme-lighting scenes. Comprehensive experiments on these datasets show that E-ENF significantly outperforms V-ENF in extracting accurate ENF traces, especially in challenging environments.
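For intuition about why events carry the ENF at all: an event camera fires an event whenever a pixel's log intensity changes by a contrast threshold, and a mains-powered light flickers at roughly twice the grid frequency (about 100 Hz on a 50 Hz grid), so the event timestamps themselves trace the flicker. The following is a minimal simulation sketch of this mechanism, not the paper's exact formulation; the threshold C, the modulation depth m, and the step size are illustrative assumptions.

```python
import numpy as np

# Minimal simulation of the event generation model under grid-driven flicker.
# Assumptions (illustrative, not the paper's formulation): an ideal pixel
# fires an event whenever its log intensity moves by a contrast threshold C,
# and lamp intensity is modulated at twice the grid frequency.

C = 0.05                        # contrast threshold (log-intensity units)
f_grid = 50.0                   # nominal grid frequency in Hz
m = 0.1                         # flicker modulation depth
t = np.arange(0.0, 1.0, 1e-5)   # 10-microsecond simulation steps

log_I = np.log1p(m * np.cos(2 * np.pi * 2 * f_grid * t))  # log intensity

events = []      # (timestamp, polarity) tuples, as an event camera emits
ref = log_I[0]   # the pixel's reference log-intensity level
for ti, li in zip(t, log_I):
    while li - ref >= C:   # brightness rose by one threshold: ON event
        ref += C
        events.append((ti, +1))
    while ref - li >= C:   # brightness fell by one threshold: OFF event
        ref -= C
        events.append((ti, -1))

print(f"{len(events)} events in 1 s; ON/OFF bursts alternate at ~{2 * f_grid:.0f} Hz")
```

The key observation is that the event rate and polarity pattern repeat at the flicker frequency, so the ENF can in principle be read off the timestamps alone, without reconstructing intensity frames.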

Event-based ENF estimation (E-ENF)

E-ENF estimates ENF traces from raw event streams by sequentially applying three stages:

uniform-interval temporal sampling, majority-voting spatial sampling, and harmonic-based fine-tuning. A rough sketch of this pipeline follows.
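The sketch below illustrates one plausible reading of these three stages in NumPy/SciPy; the function name, default parameters, and the sign-of-summed-polarities majority vote are our illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.signal import stft

def e_enf_sketch(timestamps, polarities, fs=1000.0, f_nominal=50.0, k=2):
    """Illustrative sketch of the three E-ENF stages (names/params are ours).

    timestamps : event times in seconds, all pixels merged into one stream
    polarities : +1 (ON) or -1 (OFF) per event
    fs         : uniform resampling rate for temporal sampling, in Hz
    f_nominal  : nominal grid frequency (50 or 60 Hz)
    k          : harmonic of the 2*f_nominal flicker used for fine-tuning
    """
    # Stage 1: uniform-interval temporal sampling -- place the asynchronous
    # events into uniform time bins of width 1/fs.
    n_bins = int(np.ceil(timestamps.max() * fs)) + 1
    bins = (timestamps * fs).astype(int)

    # Stage 2: majority-voting spatial sampling -- each bin takes the sign
    # of the summed polarities over all pixels (a simple mode filter).
    votes = np.zeros(n_bins)
    np.add.at(votes, bins, polarities)
    signal = np.sign(votes)

    # Stage 3: harmonic-based fine-tuning -- track the spectral peak around
    # the k-th harmonic of the ~100 Hz flicker, then map it back to the ENF.
    f, _, Z = stft(signal, fs=fs, nperseg=int(2 * fs))   # 2 s windows
    target = 2 * k * f_nominal
    band = (f >= target - 2.0) & (f <= target + 2.0)     # +/- 2 Hz search band
    peak_freqs = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
    return peak_freqs / (2 * k)   # ENF trace, one estimate per window

# Hypothetical usage with the simulated events from the sketch above:
# ts = np.array([e[0] for e in events]); ps = np.array([e[1] for e in events])
# enf_trace = e_enf_sketch(ts, ps)
```

Working on a high harmonic before dividing back down is what makes the estimate fine-grained: a frequency error of δ at the k-th harmonic shrinks to δ/(2k) in the recovered ENF trace.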

Comparison Results

Side-by-side video demos are provided for three scenarios: Static Scene, Dynamic Scene, and Extreme Lighting. Each scenario compares the conventional video recordings (Video-LI and Video-HI) with the corresponding event stream and event-based result.


BibTeX

@inproceedings{xu2023seeing,
  title={"Seeing" Electric Network Frequency From Events},
  author={Xu, Lexuan and Hua, Guang and Zhang, Haijian and Yu, Lei and Qiao, Ning},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={18022--18031},
  year={2023}
}