SEUPD@CLEF: RAFJAM on Longitudinal Evaluation of Model Performance

Ferro N.
2023

Abstract

This paper reports the work we have done for the CLEF 2023 LongEval Lab, whose main goal is to evaluate and improve the performance of IR models over time. We implemented a basic retrieval system and then modified and extended it, focusing on different query expansion techniques involving the use of synonyms and pseudo-relevance feedback. We describe our ideas, code, and other development details, along with a statistical analysis of the runs of our systems on different test collections.
CEUR Workshop Proceedings
24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023
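
As a rough illustration of the pseudo-relevance feedback idea mentioned in the abstract (a minimal Python sketch under assumed data structures, not the system described in the paper), the expansion step can be reduced to: take the top-k documents returned for the original query, treat them as relevant, and add their most frequent new terms to the query before a second retrieval pass.

    from collections import Counter

    def expand_query(query_terms, top_k_docs, num_expansion_terms=5):
        # query_terms: list of query tokens.
        # top_k_docs: list of token lists from the first retrieval pass
        # (the pseudo-relevant set).
        term_counts = Counter()
        for doc_tokens in top_k_docs:
            term_counts.update(doc_tokens)
        # Keep the most frequent terms that are not already in the query.
        candidates = [(t, c) for t, c in term_counts.items() if t not in query_terms]
        candidates.sort(key=lambda tc: tc[1], reverse=True)
        return query_terms + [t for t, _ in candidates[:num_expansion_terms]]

    # Example: the expanded query is re-submitted to the retrieval system.
    print(expand_query(["climate", "policy"],
                       [["climate", "change", "policy", "emissions", "carbon"],
                        ["policy", "carbon", "tax", "emissions"]],
                       num_expansion_terms=2))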
Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3506584