Climate change poses a significant global challenge, with its effects manifesting prominently through melting and retreating glaciers in the Arctic and Antarctic. Understanding the dynamics of glacier flow is imperative for predicting the future evolution of the polar ice sheets. Crevasses play an important role in regulating ice flow: they act as conduits for surface meltwater to reach the bed and speed up ice flow, and they provide the lines of weakness along which icebergs detach from tidewater glacier termini. This study explores the potential of deep-learning-based computer vision techniques, leveraging foundation models trained with self-supervised learning such as the Segment Anything Model (SAM) and DINOv2 from Meta AI, to automate crevasse mapping on glacier surfaces. Manually mapping crevasses on a glacier is labour-intensive and time-consuming; automating the process would allow scientists to map crevasses repeatedly at the same locations over time and across larger areas. Notably, this research addresses the scarcity of image segmentation datasets tailored to crevasse mapping in polar regions and explores alternative deep learning methodologies, such as domain adaptation and few-shot learning, to overcome data limitations. The evaluation of foundation models uses high-resolution satellite imagery from openly available remote sensing missions such as Sentinel-1 and Sentinel-2, provided by the European Space Agency (ESA). Using multiple image data modalities (e.g. Synthetic Aperture Radar (SAR) and optical satellite images) will provide insight into how different image data types help deep learning models generalise to crevasse segmentation tasks. The study seeks to develop advanced technological solutions for automatically mapping crevasses tens of metres in width, in order to address the knowledge gap around the role crevasses play in modulating ice flow, particularly in response to climate warming.
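As a concrete illustration of how such a foundation model might be applied, the minimal sketch below generates zero-shot mask proposals with Meta AI's publicly released segment-anything package on a Sentinel-2 scene. The file name, band selection, and normalisation are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: zero-shot crevasse mask proposals with SAM (segment-anything).
# The GeoTIFF path and band ordering below are hypothetical examples.
import numpy as np
import rasterio
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Read three Sentinel-2 bands (4, 3, 2 = red, green, blue in the standard
# band order) and stack them into the H x W x 3 uint8 array SAM expects.
with rasterio.open("sentinel2_glacier_scene.tif") as src:  # hypothetical file
    rgb = src.read([4, 3, 2]).transpose(1, 2, 0).astype(np.float32)
rgb = (255 * (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-6)).astype(np.uint8)

# Load a pretrained SAM checkpoint (ViT-B backbone) and generate
# class-agnostic mask proposals over the whole scene.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(rgb)  # list of dicts: 'segmentation', 'area', 'bbox', ...
print(f"{len(masks)} mask proposals generated")
```

Because SAM's proposals are class-agnostic, crevasse candidates would still need to be isolated afterwards, for instance by filtering masks on elongation or orientation, or by prompt-based refinement; this is precisely where the few-shot learning and domain adaptation methodologies investigated in the study would come in.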
This paper uses machine learning for noise diagnostics, detecting predefined anomalies in nuclear reactor cores solely from neutron detector measurements. The proposed approach leverages advanced diffusion-based core simulation tools to generate large amounts of simulated data, with different types of driving perturbations originating at all theoretically possible locations in the core. Specifically, the CORE SIM+ modelling framework is employed, which generates these data in the frequency domain. Using these vast quantities of simulated data, we train state-of-the-art machine learning and deep learning models to perform semantic segmentation, classification, and localisation of multiple simultaneously occurring in-core perturbations. Actual plant data, provided by two different reactors and carrying no labels about perturbation existence, are then considered. A domain adaptation methodology is subsequently developed to extend the simulated setting to real plant measurements: it uses self-supervised or unsupervised learning to align the simulated data with the actual plant data and to detect perturbations, classifying their type and estimating their location. Experimental studies demonstrate the successful performance of the developed approach, and extensions are described that indicate great potential for further research.
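As one concrete way the alignment step could be realised, the sketch below uses Deep CORAL, a standard unsupervised domain adaptation penalty that matches the second-order statistics of simulated and plant feature batches; it stands in for whichever alignment objective the paper actually employs. The encoder architecture, the 256-dimensional input (e.g. flattened frequency-domain detector amplitudes), and the four perturbation classes are all assumptions for illustration.

```python
# Sketch: unsupervised alignment of simulated and plant features via a
# Deep CORAL penalty added to the supervised loss on simulated data.
# Network sizes, input shape, and loss weight are illustrative assumptions.
import torch
import torch.nn as nn

def coral_loss(source_feats, target_feats):
    """Squared Frobenius distance between the feature covariances of the
    source (simulated) and target (plant) batches, scaled by 1 / (4 d^2)."""
    d = source_feats.size(1)
    def cov(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)
    return ((cov(source_feats) - cov(target_feats)) ** 2).sum() / (4 * d * d)

# Shared encoder maps detector signatures to features; a classifier head
# predicts the perturbation type on labelled simulated data only.
encoder = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
classifier = nn.Linear(64, 4)  # 4 assumed perturbation classes
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
)

# Placeholder batches: labelled simulation data and unlabelled plant data.
sim_x, sim_y = torch.randn(32, 256), torch.randint(0, 4, (32,))
plant_x = torch.randn(32, 256)

for step in range(100):
    f_sim, f_plant = encoder(sim_x), encoder(plant_x)
    loss = (nn.functional.cross_entropy(classifier(f_sim), sim_y)
            + 1.0 * coral_loss(f_sim, f_plant))  # alignment weight assumed
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design point this sketch captures is that the plant measurements contribute only through the alignment term, so no perturbation labels are ever required on the real data; the classifier trained on simulations can then be applied to the aligned plant features.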