Abstract
Atmospheric particulate matter (PM) with a diameter of 2.5 μm or less (PM2.5) is one of the key indicators of air pollution. Accurate prediction of PM2.5 concentration is very important for air pollution monitoring and public health management. However, the noise present in PM2.5 data series is a major challenge to accurate prediction. This study proposes a novel hybrid PM2.5 concentration prediction model that combines the complete ensemble empirical mode decomposition (CEEMD) method, Pearson's correlation analysis, and a deep long short-term memory (LSTM) network. CEEMD was employed to decompose historical PM2.5 concentration data into components of different frequencies in order to enhance the timing characteristics of the data. Pearson's correlation was then used to screen the intrinsic mode functions of the decomposed data at these different frequencies. Finally, the filtered, enhanced data were fed into a deep LSTM network with multiple hidden layers for training and prediction. The results demonstrated the potential of the CEEMD-LSTM hybrid model, with a prediction accuracy of approximately 80% and model convergence after 700 training epochs. The secondary screening by the Pearson's correlation test (CEEMD-Pearson) improved the accuracy to 87% but delayed model convergence to 800 epochs. The hybrid model combining CEEMD-Pearson with the deep LSTM neural network achieved a prediction accuracy of nearly 90% and converged after 650 iterations. These results clearly indicate that hybridizing CEEMD-Pearson with a deep LSTM model yields higher PM2.5 prediction accuracy with less computation time, and that the approach has potential for use in air pollution monitoring.
Cite this article
Fu, Le, C., Fan, et al. Integration of complete ensemble empirical mode decomposition with deep long short-term memory model for particulate matter concentration prediction. Environ Sci Pollut Res (2021). https://doi.org/10.1007/s11356-021-15574-y
Keywords: PM2.5; Air pollution; Prediction model; Environmental hazard; Deep learning.
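To make the pipeline described in the abstract concrete, the following is a minimal, illustrative sketch of the CEEMD → Pearson screening → deep LSTM workflow in Python. It is not the authors' code: the PyEMD package's CEEMDAN routine stands in for CEEMD, and the correlation threshold, window length, network depth, and training settings are hypothetical choices for demonstration only.

```python
# Illustrative sketch only: CEEMDAN (PyEMD) stands in for CEEMD, and all
# hyperparameters below are hypothetical, not the values used in the paper.
import numpy as np
import torch
import torch.nn as nn
from PyEMD import CEEMDAN


def decompose_and_screen(series, corr_threshold=0.2):
    """Decompose a PM2.5 series into IMFs, then keep the IMFs whose Pearson
    correlation with the original series exceeds the threshold."""
    imfs = CEEMDAN().ceemdan(series)                      # shape: (n_imfs, n_samples)
    keep = [imf for imf in imfs
            if abs(np.corrcoef(imf, series)[0, 1]) >= corr_threshold]
    return np.stack(keep)                                 # screened IMFs as features


def make_windows(features, target, window=24):
    """Slide a fixed-length window over the screened IMFs to build
    (sample, time step, feature) inputs and next-step PM2.5 targets."""
    X = np.stack([features[:, i:i + window].T
                  for i in range(features.shape[1] - window)])
    y = target[window:]
    return (torch.tensor(X, dtype=torch.float32),
            torch.tensor(y, dtype=torch.float32))


class DeepLSTM(nn.Module):
    """LSTM with several stacked hidden layers and a linear output head."""
    def __init__(self, n_features, hidden_size=64, num_layers=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)       # predicted next PM2.5 value


# Synthetic stand-in for the historical PM2.5 record used in the paper.
pm25 = 60 + 30 * np.sin(np.linspace(0, 12 * np.pi, 600)) + 5 * np.random.randn(600)
imfs = decompose_and_screen(pm25)
X, y = make_windows(imfs, pm25)

model = DeepLSTM(n_features=X.shape[-1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(10):            # the paper reports convergence after ~650 epochs
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```

In this sketch, the Pearson screening step discards IMFs that are weakly correlated with the original series, which is the role the secondary screening plays in the CEEMD-Pearson results reported above; the retained IMFs are then stacked as multivariate inputs to the stacked LSTM.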