Publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
2025
- TPEL: Uncertainty-Aware Stability Analysis of IBR-dominated Power System with Neural Networks
  Galadrielle Humblot-Renaux, Yang Wu, Sergio Escalera, Thomas B. Moeslund, Xiongfei Wang, and Heng Wu
  IEEE Transactions on Power Electronics, 2025
Machine learning (ML) technologies have significant potential in accelerating stability screening of modern power systems that are dominated by inverter-based resources (IBRs). Nonetheless, neural network (NN)-based analysis methods cannot guarantee accurate and reliable stability predictions for unseen operating scenarios (OSs), posing safety risks. To address this limitation, this letter proposes an approach combining neural network ensembles with a dual-thresholding framework, which enables the reliable identification of OSs where ML predictions may fail. These uncertain OSs are then flagged for further analysis using physics-based methods, ensuring safety and robustness. The effectiveness of the proposed method is verified by simulations and experimental tests.
@article{Humblot-Renaux_2025_TPEL,
  author   = {Humblot-Renaux, Galadrielle and Wu, Yang and Escalera, Sergio and Moeslund, Thomas B. and Wang, Xiongfei and Wu, Heng},
  journal  = {IEEE Transactions on Power Electronics},
  title    = {Uncertainty-Aware Stability Analysis of IBR-dominated Power System with Neural Networks},
  year     = {2025},
  pages    = {1-6},
  keywords = {Stability analysis;Artificial neural networks;Power system stability;Uncertainty;Reliability;Training;Power system reliability;Stability criteria;Phase locked loops;Estimation;Stability;inverter-based resources;machine learning;uncertainty estimation},
  doi      = {10.1109/TPEL.2025.3560236},
}
- ECCVW’24: Underwater Uncertainty: A Multi-annotator Image Dataset for Benthic Habitat Classification
  Galadrielle Humblot-Renaux, Anders Skaarup Johansen, Jonathan Eichild Schmidt, Amanda Frederikke Irlind, Niels Madsen, Thomas B. Moeslund, and Malte Pedersen
  In Computer Vision – ECCV 2024 Workshops, 2025
Continuous inspection and mapping of the seabed allows for monitoring the impact of anthropogenic activities on benthic ecosystems. Compared to traditional manual assessment methods, which are impractical at scale, computer vision holds great potential for widespread and long-term monitoring. We deploy an underwater remotely operated vehicle (ROV) in Jammer Bay, a heavily fished area in the Greater North Sea, and capture videos of the seabed for habitat classification. The collected JAMBO dataset is inherently ambiguous: water in the bay is typically turbid, which degrades visibility and makes habitats more difficult to identify. To capture the uncertainties involved in manual visual inspection, we employ multiple annotators to classify the same set of images and analyze time spent per annotation, the extent to which annotators agree, and more. We then evaluate the potential of vision foundation models (DINO, OpenCLIP, BioCLIP) for automating image-based benthic habitat classification. We find that despite ambiguity in the dataset, a well-chosen pre-trained feature extractor with linear probing can match the performance of manual annotators when evaluated in known locations. However, generalization across time and place is an important challenge.
@inproceedings{Humblot-Renaux_2024_ECCVW,
  author    = {Humblot-Renaux, Galadrielle and Johansen, Anders Skaarup and Schmidt, Jonathan Eichild and Irlind, Amanda Frederikke and Madsen, Niels and Moeslund, Thomas B. and Pedersen, Malte},
  editor    = {Del Bue, Alessio and Canton, Cristian and Pont-Tuset, Jordi and Tommasi, Tatiana},
  title     = {Underwater Uncertainty: A Multi-annotator Image Dataset for Benthic Habitat Classification},
  booktitle = {Computer Vision -- ECCV 2024 Workshops},
  year      = {2025},
  publisher = {Springer Nature Switzerland},
  address   = {Cham},
  pages     = {87--104},
  isbn      = {978-3-031-92387-6},
}
2024
- CVPR’24: A noisy elephant in the room: Is your out-of-distribution detector robust to label noise?
  Galadrielle Humblot-Renaux, Sergio Escalera, and Thomas B. Moeslund
  In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
The ability to detect unfamiliar or unexpected images is essential for safe deployment of computer vision systems. In the context of classification, the task of detecting images outside of a model’s training domain is known as out-of-distribution (OOD) detection. While there has been a growing research interest in developing post-hoc OOD detection methods, there has been comparably little discussion around how these methods perform when the underlying classifier is not trained on a clean, carefully curated dataset. In this work, we take a closer look at 20 state-of-the-art OOD detection methods in the (more realistic) scenario where the labels used to train the underlying classifier are unreliable (e.g. crowd-sourced or web-scraped labels). Extensive experiments across different datasets, noise types & levels, architectures and checkpointing strategies provide insights into the effect of class label noise on OOD detection, and show that poor separation between incorrectly classified ID samples vs. OOD samples is an overlooked yet important limitation of existing methods. Code: https://github.com/glhr/ood-labelnoise
@inproceedings{Humblot-Renaux_2024_CVPR,
  title     = {A noisy elephant in the room: Is your out-of-distribution detector robust to label noise?},
  author    = {Humblot-Renaux, Galadrielle and Escalera, Sergio and Moeslund, Thomas B.},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024},
  eprint    = {https://openaccess.thecvf.com/content/CVPR2024/html/Humblot-Renaux_A_Noisy_Elephant_in_the_Room_Is_Your_Out-of-Distribution_Detector_CVPR_2024_paper.html},
}
2023
- CVPRW’23: Beyond AUROC & Co. for Evaluating Out-of-Distribution Detection Performance
  Galadrielle Humblot-Renaux, Sergio Escalera, and Thomas B. Moeslund
  In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Jun 2023
While there has been a growing research interest in developing out-of-distribution (OOD) detection methods, there has been comparably little discussion around how these methods should be evaluated. Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs. In this work, we take a closer look at the go-to metrics for evaluating OOD detection, and question the approach of exclusively reducing OOD detection to a binary classification task with little consideration for the detection threshold. We illustrate the limitations of current metrics (AUROC & its friends) and propose a new metric, the Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples. Scripts and data are available at https://github.com/glhr/beyond-auroc
@inproceedings{Humblot-Renaux_2023_CVPRW,
  author    = {Humblot-Renaux, Galadrielle and Escalera, Sergio and Moeslund, Thomas B.},
  title     = {Beyond AUROC \& Co. for Evaluating Out-of-Distribution Detection Performance},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  doi       = {10.1109/CVPRW59228.2023.00402},
  month     = jun,
  year      = {2023},
  pages     = {3881-3890},
}
2022
- ICRA’22: Navigation-Oriented Scene Understanding for Robotic Autonomy: Learning to Segment Driveability in Egocentric Images
  Galadrielle Humblot-Renaux, Letizia Marchegiani, Thomas B. Moeslund, and Rikke Gade
  IEEE Robotics and Automation Letters, 2022
@article{Humblot-Renaux_2021_RAL,
  author  = {Humblot-Renaux, Galadrielle and Marchegiani, Letizia and Moeslund, Thomas B. and Gade, Rikke},
  journal = {IEEE Robotics and Automation Letters},
  title   = {Navigation-Oriented Scene Understanding for Robotic Autonomy: Learning to Segment Driveability in Egocentric Images},
  year    = {2022},
  volume  = {7},
  number  = {2},
  pages   = {2913-2920},
  doi     = {10.1109/LRA.2022.3144491},
  abs     = {This work tackles scene understanding for outdoor robotic navigation, solely relying on images captured by an on-board camera. Conventional visual scene understanding interprets the environment based on specific descriptive categories. However, such a representation is not directly interpretable for decision-making and constrains robot operation to a specific domain. Thus, we propose to segment egocentric images directly in terms of how a robot can navigate in them, and tailor the learning problem to an autonomous navigation task. Building around an image segmentation network, we present a generic affordance consisting of 3 driveability levels which can broadly apply to both urban and off-road scenes. By encoding these levels with soft ordinal labels, we incorporate inter-class distances during learning which improves segmentation compared to standard “hard” one-hot labelling. In addition, we propose a navigation-oriented pixel-wise loss weighting method which assigns higher importance to safety-critical areas. We evaluate our approach on large-scale public image segmentation datasets ranging from sunny city streets to snowy forest trails. In a cross-dataset generalization experiment, we show that our affordance learning scheme can be applied across a diverse mix of datasets and improves driveability estimation in unseen environments compared to general-purpose, single-dataset segmentation.},
}