Reflective error: a metric for assessing predictive performance at extreme events

Bibliographic Details
Main Authors: Robert Edwin Rouse, Henry Moss, Scott Hosking, Allan McRobie, Emily Shuckburgh
Format: Article
Language: English
Published: Cambridge University Press, 2025-01-01
Series: Environmental Data Science
Online Access: https://www.cambridge.org/core/product/identifier/S2634460225000160/type/journal_article
Description
Summary: When using machine learning to model environmental systems, it is often a model's ability to predict extreme behaviors that yields the highest practical value to policy makers. However, most existing error metrics used to evaluate the performance of environmental machine learning models weigh error equally across test data, so routine performance is prioritized over a model's ability to robustly quantify extreme behaviors. In this work, we present a new error metric, termed Reflective Error, which quantifies the degree to which model error is distributed around the extremes, in contrast to existing evaluation methods that aggregate error over all events. The suitability of the proposed metric is demonstrated on a real-world hydrological modeling problem, where extreme values are of particular concern.
ISSN: 2634-4602
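
The summary contrasts metrics that weigh error equally across all test points with one that focuses on extremes. Below is a minimal Python sketch of that general contrast, assuming synthetic streamflow data and a simple quantile-threshold restriction; the function names, threshold choice, and data are illustrative assumptions, not the paper's actual Reflective Error definition.

import numpy as np

def mse(y_true, y_pred):
    # Standard mean squared error: every test point weighs equally.
    return np.mean((y_true - y_pred) ** 2)

def extreme_weighted_error(y_true, y_pred, quantile=0.95):
    # Hypothetical extreme-focused error: evaluates only observations
    # above a high quantile of the observed series. An illustration of
    # the general idea, not the metric proposed in the article.
    threshold = np.quantile(y_true, quantile)
    mask = y_true >= threshold
    return np.mean((y_true[mask] - y_pred[mask]) ** 2)

# Toy hydrological example: two models with similar overall MSE can
# differ sharply in how well they capture peak flows.
rng = np.random.default_rng(0)
flow = rng.gamma(shape=2.0, scale=10.0, size=1000)      # synthetic streamflow
model_a = flow + rng.normal(0, 2.0, size=1000)          # uniform noise everywhere
model_b = (flow - 0.2 * np.maximum(flow - 40, 0)
           + rng.normal(0, 1.0, size=1000))             # systematically underpredicts peaks

print("Model A:", mse(flow, model_a), extreme_weighted_error(flow, model_a))
print("Model B:", mse(flow, model_b), extreme_weighted_error(flow, model_b))

Under an aggregate metric like MSE the two models can look comparable, while the extreme-restricted score exposes Model B's failure on peak flows, which is the kind of distinction the abstract argues matters most to policy makers.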