
Overview of the CLEF 2024 SimpleText Task 2: Identify and Explain Difficult Concepts

Di Nunzio G. M.; Vezzani F.; Bonato V.
2024

Abstract

In this paper, we present an overview of "Task 2: Complexity Spotting, Identifying and explaining difficult concepts" within the context of the Automatic Simplification of Scientific Texts (SimpleText) lab, run as part of CLEF 2024. The primary objective of the SimpleText lab is to advance the accessibility of scientific information by facilitating automatic text simplification, thereby promoting a more inclusive approach to scientific knowledge dissemination. Task 2 focuses on complexity spotting within scientific text passages: the goal is to detect the terms and concepts that require specific background knowledge to understand a passage, assess their difficulty for non-experts, and provide explanations for the detected difficult concepts. A total of 39 submissions were received for this task, originating from 12 distinct teams. In this paper, we describe the data collection process, task configuration, and evaluation methodology employed. Additionally, we provide a brief summary of the various approaches adopted by the participating teams.
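As an illustration of the three subtasks described above (spotting difficult terms, rating their difficulty, and explaining them), the following minimal Python sketch shows one plausible output record for a single passage. The field names, the 0-2 difficulty scale, and all example values are assumptions made purely for illustration; the official run format is the one defined in the lab guidelines.

    # Minimal sketch of a Task 2 output record for one passage.
    # All field names, the 0-2 difficulty scale, and the example values are
    # illustrative assumptions, not the official SimpleText run format.
    record = {
        "passage_id": "doc_001",  # hypothetical passage identifier
        "terms": [
            {
                "term": "convolutional neural network",  # spotted difficult term
                "difficulty": 2,  # rated complexity for non-experts (0 = easy, 2 = hard)
                "explanation": "A neural network architecture commonly used to analyze images.",
            }
        ],
    }

    for t in record["terms"]:
        print(f'{t["term"]} (difficulty {t["difficulty"]}): {t["explanation"]}')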
CEUR Workshop Proceedings
25th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF 2024


Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3542148
Citations
  • PMC: ND
  • Scopus: 6
  • Web of Science: ND
  • OpenAlex: ND