
SEUPD@CLEF: Team Axolotl on Rumor Verification using Evidence from Authorities

Pasin A.; Ferro N.
2024

Abstract

Nowadays, Search Engines (SEs) are technologies that most people use daily to satisfy their information needs. Even though SEs and their underlying algorithms have improved over the years, many challenges remain to be solved. In this paper, we propose a possible approach to address Task 5 of the CheckThat! Lab at CLEF 2024. The task involves identifying relevant tweets from a set of authorities that can be used to verify a given rumor expressed in another tweet (i.e., to determine whether the rumor can be trusted), and reporting whether each retrieved tweet supports or opposes the considered rumor. We also present the results achieved by our system under several of its possible configurations, analyzing them and discussing which parameters impacted performance the most, both in terms of efficiency and effectiveness. We observe that the use of Large Language Models (LLMs) can boost effectiveness but comes at a severe cost in efficiency compared to less complex models. Finally, we show that our proposed system achieves better effectiveness than the baseline provided by the Lab organizers on the English dataset available for this task.
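
Purely as an illustration of the task setting described above, and not as the authors' actual method, the following Python sketch assumes a simple TF-IDF/cosine retriever over the authority tweets and a placeholder stance labeller; the function names retrieve and label_stance, the heuristic negation cues, and the example tweets are all hypothetical. The placeholder labeller is the component one would typically replace with a prompted LLM or a lighter fine-tuned classifier, which is the effectiveness/efficiency trade-off mentioned in the abstract.

    # Illustrative sketch only: minimal evidence retrieval + stance labelling
    # for the "verify a rumor with authority tweets" setting. Not taken from
    # the paper; every name and example below is an assumption.
    import math
    import re
    from collections import Counter

    def tokenize(text: str) -> list[str]:
        return re.findall(r"[a-z0-9]+", text.lower())

    def tfidf_vectors(docs: list[list[str]]) -> list[dict[str, float]]:
        # TF-IDF weights computed over the whole small collection
        # (authority tweets plus the rumor itself).
        n = len(docs)
        df = Counter(term for doc in docs for term in set(doc))
        vectors = []
        for doc in docs:
            tf = Counter(doc)
            vectors.append({t: (1 + math.log(c)) * math.log(n / df[t])
                            for t, c in tf.items()})
        return vectors

    def cosine(u: dict[str, float], v: dict[str, float]) -> float:
        num = sum(u[t] * v[t] for t in u.keys() & v.keys())
        den = (math.sqrt(sum(x * x for x in u.values()))
               * math.sqrt(sum(x * x for x in v.values())))
        return num / den if den else 0.0

    def retrieve(rumor: str, authority_tweets: list[str], k: int = 5):
        # Rank authority tweets by cosine similarity to the rumor tweet.
        docs = [tokenize(t) for t in authority_tweets] + [tokenize(rumor)]
        vecs = tfidf_vectors(docs)
        rumor_vec, tweet_vecs = vecs[-1], vecs[:-1]
        scored = sorted(((cosine(rumor_vec, v), t)
                         for v, t in zip(tweet_vecs, authority_tweets)),
                        key=lambda pair: pair[0], reverse=True)
        return scored[:k]

    def label_stance(rumor: str, evidence: str) -> str:
        # Placeholder heuristic standing in for the stance component; a real
        # system might prompt an LLM or use a fine-tuned classifier here.
        negation_cues = {"no", "not", "false", "fake", "denied", "denies", "hoax"}
        return "OPPOSES" if negation_cues & set(tokenize(evidence)) else "SUPPORTS"

    if __name__ == "__main__":
        rumor = "The bridge collapsed because of last night's earthquake."
        authority_tweets = [
            "Official statement: the collapse was caused by structural failure, not by the earthquake.",
            "Road closures are in place around the city centre this morning.",
            "A magnitude 3.1 earthquake was recorded last night; no structural damage has been reported.",
        ]
        for score, tweet in retrieve(rumor, authority_tweets, k=2):
            print(f"{score:.3f}  {label_stance(rumor, tweet)}  {tweet}")

Running the example prints the top-k authority tweets with their cosine scores and an illustrative SUPPORTS/OPPOSES label; it is meant only to make the two-step pipeline concrete, not to reproduce the system evaluated in the paper.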
2024
CEUR Workshop Proceedings
25th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF 2024


Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3524163