Rainfall data with high spatial and temporal resolution are essential for urban hydrological modeling. Ubiquitous surveillance cameras continuously record rainfall events through video and audio, so they have been recognized as potential rain gauges that can supplement professional rainfall observation networks. Because video-based rainfall estimation methods are affected by variable backgrounds and lighting conditions, audio-based approaches can serve as a complement that does not suffer from these limitations. However, most audio-based approaches focus on rainfall-level classification rather than rainfall intensity estimation. Here, we introduce the Surveillance Audio Rainfall Intensity Dataset (SARID) and a deep learning model for estimating rainfall intensity. First, we built the dataset from audio recordings of six real-world rainfall events. The recordings are segmented into 12,066 pieces and annotated with rainfall intensity and environmental information, such as the underlying surface, temperature, humidity, and wind. Then, we developed a deep learning baseline that uses Mel-Frequency Cepstral Coefficients (MFCC) and a Transformer architecture to estimate rainfall intensity from surveillance audio. Validated against ground-truth data, the baseline achieves a root mean absolute error of 0.88 mm h⁻¹ and a correlation coefficient of 0.765. Our findings demonstrate the potential of surveillance-audio-based models as practical and effective tools for rainfall observation systems, opening a new chapter in rainfall intensity estimation. This work offers a novel data source for high-resolution hydrological sensing and contributes to the broader landscape of urban sensing, emergency response, and resilience.
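The segmentation and annotation step lends itself to a short illustration. The sketch below shows one plausible way to cut a surveillance recording into fixed-length clips and attach the kind of metadata the abstract lists; the 5 s segment length, the torchaudio-based loading, and every column name are assumptions made for illustration, not SARID's released format.

```python
# Hypothetical sketch of producing SARID-style segments from one raw
# surveillance recording; segment length, file layout, and the metadata
# schema are assumptions, not the dataset's actual specification.
import torchaudio
import pandas as pd

def segment_recording(wav_path, seg_seconds=5.0):
    """Split one recording into fixed-length, non-overlapping clips."""
    waveform, sr = torchaudio.load(wav_path)   # (channels, samples)
    seg_len = int(seg_seconds * sr)
    n_full = waveform.shape[1] // seg_len      # drop the trailing remainder
    clips = [waveform[:, i * seg_len:(i + 1) * seg_len] for i in range(n_full)]
    return clips, sr

# Each clip would then be paired with a gauge-derived intensity (mm/h) and
# the environmental attributes the abstract lists (column names hypothetical).
annotations = pd.DataFrame([
    {"clip_id": "event01_0001", "intensity_mm_h": 3.2, "surface": "asphalt",
     "temp_c": 21.5, "humidity_pct": 93, "wind_m_s": 1.8},
])
```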
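The baseline couples an MFCC front end with a Transformer encoder. Since the abstract gives no implementation details, the following PyTorch sketch is only a minimal illustration of that pairing; the number of coefficients, model width, layer count, and mean-pooling readout are all assumed here, not the authors' configuration.

```python
# Minimal sketch of an MFCC + Transformer-encoder regressor for rainfall
# intensity; all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torchaudio

class MfccTransformerRegressor(nn.Module):
    def __init__(self, sample_rate=16000, n_mfcc=40, d_model=128,
                 n_heads=4, n_layers=4):
        super().__init__()
        # MFCC front end: (batch, 1, samples) -> (batch, 1, n_mfcc, frames)
        self.mfcc = torchaudio.transforms.MFCC(
            sample_rate=sample_rate, n_mfcc=n_mfcc,
            melkwargs={"n_fft": 400, "hop_length": 160, "n_mels": 64})
        self.proj = nn.Linear(n_mfcc, d_model)   # per-frame embedding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)        # scalar intensity (mm/h)

    def forward(self, audio):                    # audio: (batch, 1, samples)
        feats = self.mfcc(audio).squeeze(1)      # (batch, n_mfcc, frames)
        feats = feats.transpose(1, 2)            # (batch, frames, n_mfcc)
        enc = self.encoder(self.proj(feats))     # (batch, frames, d_model)
        return self.head(enc.mean(dim=1)).squeeze(-1)  # mean-pool over time

model = MfccTransformerRegressor()
clips = torch.randn(8, 1, 16000 * 5)             # batch of 5 s mono clips
pred = model(clips)                              # (8,) estimated mm/h
```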
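For the reported validation, a matching evaluation sketch is given below. The abstract does not define "root mean absolute error", so it is interpreted here, purely as an assumption, as the square root of the mean absolute error, computed alongside the Pearson correlation coefficient; the authors' exact definition may differ.

```python
# Hedged sketch of the two validation metrics named in the abstract.
import numpy as np

def root_mean_absolute_error(y_true, y_pred):
    # Assumed reading: sqrt of the mean absolute error (mm/h).
    return float(np.sqrt(np.mean(np.abs(y_true - y_pred))))

def pearson_r(y_true, y_pred):
    return float(np.corrcoef(y_true, y_pred)[0, 1])

y_true = np.array([0.4, 1.2, 3.5, 7.9])   # gauge intensities, mm/h
y_pred = np.array([0.6, 1.0, 3.1, 8.4])   # model estimates, mm/h
print(root_mean_absolute_error(y_true, y_pred), pearson_r(y_true, y_pred))
```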
Funding: National Key R&D Program of China (2021YFE0112300); State Scholarship Fund from the China Scholarship Council (CSC) (No. 201906865016); Special Fund for Public Welfare Scientific Institutions of Fujian Province (No. 2020R1002002).