Benchmarking Sentence Encoders in Associating Indicators With Sustainable Development Goals and Targets


Bibliographic Details
Main Authors: Ana Gjorgjevikj, Kostadin Mishev, Dimitar Trajanov, Ljupco Kocarev
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11113321/
Description
Summary: The United Nations’ 2030 Agenda for Sustainable Development balances the economic, environmental, and social dimensions of sustainable development in 17 Sustainable Development Goals (SDGs), monitored through a well-defined set of targets and global indicators. Although essential for humanity’s future well-being, this monitoring remains challenging due to the variable quality of the statistical data of global indicators compiled at the national level and the diversity of indicators used to monitor sustainable development at the subnational level. Associating indicators other than the global ones with the SDGs/targets may help not only to expand the statistical data, but also to better align the efforts toward sustainable development taken at the (sub)national level. This article presents a model-agnostic framework for associating such indicators with the SDGs and targets by comparing their textual descriptions in a common representation space. While removing the dependence on the quantity and quality of the indicators’ statistical data, the framework provides human experts with data-driven suggestions about the complex and not always obvious associations between the indicators and the SDGs/targets. A comprehensive domain-specific benchmarking of a diverse portfolio of sentence encoders was performed first, followed by fine-tuning of the best-performing models on a newly created dataset. Five sets of indicators used at the (sub)national level of governance (around 800 indicators in total) were used for the evaluation. Finally, the influence of 40 factors on the results was analyzed using explainable artificial intelligence (xAI) methods.
The results show that 1) certain sentence encoders are better suited to the task than others (potentially due to their diverse pre-training datasets), 2) fine-tuning not only improves predictive performance over the baselines but also reduces sensitivity to changes in indicator description length (performance drops by up to 17% for baseline models as length increases, but remains comparable for fine-tuned models), and 3) better-selected training instances have the potential to improve performance even further (given the limited fine-tuning dataset currently used and the insights from the xAI analysis). Most importantly, this article contributes to filling the existing gap in comprehensive benchmarking of AI models for this problem.
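The core matching step the abstract describes — embedding indicator and SDG/target descriptions in a common representation space and ranking targets by similarity — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the vectors and target IDs below are toy placeholders standing in for the output of a sentence encoder applied to the actual textual descriptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_targets(indicator_vec, target_vecs):
    """Rank SDG targets by similarity to an indicator embedding, best first."""
    scores = {tid: cosine(indicator_vec, vec) for tid, vec in target_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy embeddings stand in for sentence-encoder output; in the framework,
# these vectors would come from encoding the textual descriptions of an
# indicator and of each SDG target.
indicator = [0.9, 0.1, 0.3]
targets = {
    "Target A": [0.8, 0.2, 0.4],  # hypothetical target labels
    "Target B": [0.1, 0.9, 0.2],
}
ranking = rank_targets(indicator, targets)
print(ranking[0][0])  # most similar target, suggested to the human expert
```

The ranked list is a suggestion for human experts rather than a final assignment, which matches the framework's role as a data-driven decision aid.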
ISSN: 2169-3536