Overview of the CLEF 2024 SimpleText Task 2: Identify and Explain Difficult Concepts
Di Nunzio G. M.; Vezzani F.; Bonato V.
2024
Abstract
In this paper, we present an overview of “Task 2: Complexity Spotting, Identifying and Explaining Difficult Concepts” within the context of the Automatic Simplification of Scientific Texts (SimpleText) lab, run as part of CLEF 2024. The primary objective of the SimpleText lab is to advance the accessibility of scientific information by facilitating automatic text simplification, thereby promoting a more inclusive approach to scientific knowledge dissemination. Task 2 focuses on complexity spotting within scientific text passages: the goal is to detect the terms and concepts that require specific background knowledge to understand a passage, assess their complexity for non-experts, and provide explanations for these difficult concepts. A total of 39 submissions were received for this task, originating from 12 distinct teams. In this paper, we describe the data collection process, task configuration, and evaluation methodology employed. Additionally, we provide a brief summary of the various approaches adopted by the participating teams.
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.