Ocean-fog is fog that forms over the ocean and reduces visibility to less than 1 km. Because it frequently causes incidents over oceanic and coastal regions, ocean-fog detection is required regardless of the time of day. Ocean-fog has distinct thermo-optical properties, so spatially and temporally extensive detection methods based on geostationary satellites are typically employed. Because visible channels are valid only during the daytime, infrared channels of Himawari-8 were used to construct three machine learning models for the continuous detection of ocean-fog. As control models, we used fog products from the National Meteorological Satellite Center (NMSC) and machine learning models trained with an additional visible channel. The extreme gradient boosting model using infrared channels detected ocean-fog effectively both day and night, with the highest F1 score of 97.93% and a proportion correct of 98.59% throughout the day. In contrast, the NMSC product had a probability of detection of 87.14%, an F1 score of 93.13%, and a proportion correct of 71.9%. The qualitative evaluation demonstrated that the NMSC product overestimates clouds over small and coarsely textured ocean-fog regions, whereas the proposed model distinguishes between ocean-fog, clear skies, and clouds at the pixel scale. Shapley additive explanation analysis showed that the difference between channels 14 and 7 was very useful for ocean-fog detection at night, and that its extremely low values contributed significantly to identifying non-fog pixels during the daytime. Channel 15, affected by water vapor absorption, contributed most to ocean-fog detection among the atmospheric window channels. These findings can be used to improve operational ocean-fog detection and forecasting.
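The general workflow described above, training a gradient-boosted tree classifier on infrared brightness temperatures and channel differences, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the band values, the toy labeling rule, and the use of scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost are all assumptions for the sake of a self-contained example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic brightness temperatures (K) for three hypothetical infrared bands,
# loosely inspired by Himawari-8 channels 7, 14, and 15 (values are made up).
bt07 = rng.normal(285.0, 5.0, n)
bt14 = rng.normal(283.0, 5.0, n)
bt15 = rng.normal(282.0, 5.0, n)

# The channel 14 minus channel 7 difference is the kind of feature the abstract
# highlights; the synthetic label is a toy rule driven by that difference.
btd = bt14 - bt07
label = (btd + rng.normal(0.0, 1.0, n) > 0.0).astype(int)  # 1 = ocean fog (toy)

X = np.column_stack([bt07, bt14, bt15, btd])
X_tr, X_te, y_tr, y_te = train_test_split(X, label, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
clf.fit(X_tr, y_tr)
print(f"F1 on synthetic data: {f1_score(y_te, clf.predict(X_te)):.2f}")
```

On real data the features would be per-pixel brightness temperatures and differences from the satellite imagery, with labels from a reference fog product.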
For a long time, researchers have sought a way to analyze tropical cyclone (TC) intensity in real time. Because there is no standardized method for estimating TC intensity, and the most widely used method is a manual algorithm based on satellite cloud images, estimates carry a bias that varies with the TC center and shape. In this study, we adopted convolutional neural networks (CNNs), a state-of-the-art approach that analyzes image patterns, to estimate TC intensity by mimicking human cloud-pattern recognition. Both a two-dimensional CNN (2D-CNN) and a three-dimensional CNN (3D-CNN) were used to analyze the relationship between multi-spectral geostationary satellite images and TC intensity. Our best-optimized model produced a root mean squared error (RMSE) of 8.32 kts, about 35% better than the existing CNN-based model using a single-channel image. Moreover, we analyzed the characteristics of multi-spectral satellite-based TC images as a function of intensity using heat maps, a common visualization technique for CNNs. The analysis shows that the stronger the TC, the greater the influence of the TC center in the lower atmosphere. This is consistent with results from the existing TC initialization method with numerical simulations based on dynamical TC models. Our study suggests that a deep learning approach can be used to interpret the behavioral characteristics of TCs.
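The core idea of a CNN regressing intensity from satellite imagery can be illustrated with a toy forward pass: convolve the image with learned filters, apply a nonlinearity, pool, and map the pooled features to a scalar intensity. The sketch below is purely illustrative, with random filters and a random image standing in for trained weights and real multi-spectral imagery; the network shape and all values are assumptions, not the paper's architecture.

```python
import numpy as np
from scipy.signal import correlate2d

def cnn_intensity_forward(image, kernels, w, b):
    """Toy 2D-CNN regressor forward pass:
    conv -> ReLU -> global average pooling -> linear output (intensity in kt)."""
    feats = []
    for k in kernels:
        fmap = correlate2d(image, k, mode="valid")  # one convolutional filter
        fmap = np.maximum(fmap, 0.0)                # ReLU activation
        feats.append(fmap.mean())                   # global average pooling
    return float(np.dot(w, feats) + b)              # linear regression head

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32))      # one hypothetical satellite channel
kernels = rng.normal(size=(4, 3, 3))   # four random 3x3 filters (untrained)
w = rng.normal(size=4)                 # regression-head weights (untrained)
b = 50.0                               # arbitrary bias

print(f"estimated intensity: {cnn_intensity_forward(image, kernels, w, b):.1f} kt")
```

A real 2D- or 3D-CNN would stack many such layers, take multi-spectral (and, for 3D, multi-channel volumetric) input, and learn the filters and head by minimizing RMSE against best-track intensities.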
During recent decades, various methods of downscaling satellite soil moisture (SM) products that incorporate geophysical variables such as land surface temperature and vegetation have been studied to improve their spatial resolution. Most of these studies have used least-squares regression models built from those variables and have demonstrated partial improvement in the downscaled SM. This study introduces a new downscaling method based on support vector regression (SVR) that includes the geophysical variables with locational weighting. Compared with the conventional method, the SVR downscaling method exhibited a smaller root mean square error against in situ SM, reduced from 0.09 to 0.07 m³·m⁻³, and a larger average correlation coefficient, increased from 0.62 to 0.68. In addition, the SM downscaled by the SVR method bore a greater statistical resemblance to the original advanced scatterometer SM. A residual-magnitude analysis of each model with two independent variables indicated that only the residuals of the SVR model were uncorrelated with those variables, suggesting more effective performance than the regression models, in which the independent variables contributed significantly to the residual magnitude. The spatial variation of the downscaled SM products was affected by seasonal patterns in the temperature-vegetation relationship, and the SVR downscaling method showed more consistent performance across seasons. Based on these results, the proposed SVR downscaling method is an effective approach to improving the spatial resolution of satellite SM measurements.
Identifying important factors (e.g., features and prediction models) for forest aboveground biomass (AGB) estimation can provide a vital reference for accurate AGB estimation. This study proposed a novel feature, the canopy height distribution (CHD), a function of canopy height useful for describing canopy structure, for AGB estimation of natural secondary forests (NSFs), obtained by fitting a bimodal Gaussian function. Three machine learning models (Support Vector Regression (SVR), Random Forest (RF), and eXtreme Gradient Boosting (Xgboost)) and three deep learning models (a one-dimensional Convolutional Neural Network (1D-CNN4), a 1D Visual Geometry Group network (1D-VGG16), and a 1D Residual Network (1D-Resnet34)) were applied. A completely randomized design was used to investigate the effects of four feature sets (original CHD features, original LiDAR features, the proposed CHD features fitted by the bimodal Gaussian function, and LiDAR features selected by the recursive feature elimination algorithm) and of the models on estimating the AGB of NSFs. Results revealed that the model was the most important factor for AGB estimation, followed by the features. The fitted CHD features significantly outperformed the other three feature sets in most cases. With the fitted CHD features, the 1D-Resnet34 model achieved the best performance (R² = 0.80, RMSE = 9.58 Mg/ha, rRMSE = 0.09), surpassing not only the other deep learning models (e.g., 1D-VGG16: R² = 0.65, RMSE = 18.55 Mg/ha, rRMSE = 0.17) but also the best machine learning model (RF: R² = 0.50, RMSE = 19.42 Mg/ha, rRMSE = 0.16). This study highlights the significant role of the new CHD features obtained by fitting a bimodal Gaussian function and the interaction between models and CHD features, providing a sound foundation for effective estimation of AGB in NSFs.
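Fitting a bimodal Gaussian to a canopy height distribution, the step that produces the proposed CHD features, can be sketched with `scipy.optimize.curve_fit`. The synthetic distribution below (understorey mode near 5 m, overstorey mode near 18 m) and the initial guesses are hypothetical; a real CHD would be derived from the LiDAR point cloud.

```python
import numpy as np
from scipy.optimize import curve_fit

def bimodal_gaussian(h, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians over canopy height h: the fitted parameters
    (amplitudes, mode heights, widths) serve as compact CHD features."""
    return (a1 * np.exp(-((h - mu1) ** 2) / (2.0 * s1 ** 2))
            + a2 * np.exp(-((h - mu2) ** 2) / (2.0 * s2 ** 2)))

# Synthetic canopy height distribution with two modes (hypothetical values).
h = np.linspace(0.0, 30.0, 120)
true = bimodal_gaussian(h, 0.6, 5.0, 1.5, 1.0, 18.0, 3.0)
chd = true + np.random.default_rng(2).normal(0.0, 0.02, h.size)

p0 = [0.5, 4.0, 2.0, 1.0, 17.0, 3.0]  # rough initial guesses for the optimizer
popt, _ = curve_fit(bimodal_gaussian, h, chd, p0=p0)
print("fitted modes at {:.1f} m and {:.1f} m".format(popt[1], popt[4]))
```

The six fitted parameters could then be fed to any of the regression models above as a low-dimensional description of canopy structure.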
Much effort has been spent on the automatic detection and delineation of individual trees from high-spatial-resolution images. However, delineation errors may lead to inaccurate crown sizes when compared with ground measurements, making it problematic to use delineated crowns to derive tree variables such as crown diameter, tree height, diameter at breast height (DBH), stand volume, stem volume or stand competition index. In this study, we investigated two indicators – the mean digital number (MDN) within each delineated crown and the difference between MDNs (DMDN) for 0.6 m buffer zones outside and inside the boundary of each delineated crown – to separate poorly delineated crowns from well-delineated ones. We modelled the relationships between delineated crowns and field-based crown size, tree height and DBH observations in a Norway spruce (Picea abies) stand, separately considering models based on all delineated results and on crowns identified as well delineated. Our results showed that the capability of the two indicators to separate poorly delineated from well-delineated crowns varied with the threshold used. The results also indicated that models considering only well-delineated crowns were more robust and effective in estimating and predicting tree crown diameter, DBH and tree height than models that considered all delineated results.
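The two indicators can be computed directly from an image and a crown mask: MDN is the mean digital number inside the crown, and DMDN compares thin buffer zones just outside and just inside the crown boundary (obtainable by morphological dilation and erosion). The sketch below is a toy version on a synthetic scene; the pixel buffer width stands in for the paper's 0.6 m buffer, and all digital numbers are invented.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def mdn_and_dmdn(image, crown_mask, buffer_px=2):
    """Mean digital number (MDN) inside a delineated crown, and the difference
    (DMDN) between MDNs of buffer zones outside and inside its boundary."""
    mdn = image[crown_mask].mean()
    outer = binary_dilation(crown_mask, iterations=buffer_px) & ~crown_mask
    inner = crown_mask & ~binary_erosion(crown_mask, iterations=buffer_px)
    dmdn = image[outer].mean() - image[inner].mean()
    return mdn, dmdn

# Toy scene: a bright circular crown on a darker background (hypothetical DNs).
yy, xx = np.mgrid[0:40, 0:40]
crown = (yy - 20) ** 2 + (xx - 20) ** 2 <= 8 ** 2
image = np.where(crown, 180.0, 80.0)

mdn, dmdn = mdn_and_dmdn(image, crown)
print(f"MDN = {mdn:.1f}, DMDN = {dmdn:.1f}")
```

For a well-delineated crown the outer buffer falls on darker background, so DMDN is strongly negative; a delineation that spills onto neighbouring crowns or background shifts both indicators, which is what makes them usable for screening.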