A Dataset for Joint Conversational Search and Recommendation
Alessio M., Merlo S., Faggioli G., Ferrante M., Ferro N.
2025
Abstract
Conversational Information Access systems have seen widespread adoption thanks to the natural, seamless interactions they enable with users. In particular, they provide an effective interaction interface for both Conversational Search (CS) and Conversational Recommendation (CR) scenarios. Despite their inherent similarities, current research frequently addresses CS and CR systems as distinct and isolated entities. Integrating these two capabilities would make it possible to address complex information access scenarios, such as exploring unfamiliar features of recommended products, leading to richer dialogues and greater user satisfaction. At present, the evaluation of CS and CR systems that are integrated by design is severely hindered by the limited availability of comprehensive datasets that jointly address both tasks. To bridge this gap, we introduce CoSRec, the first dataset for joint Conversational Search and Recommendation (CSR) evaluation. The CoSRec test set includes 20 high-quality conversations, with human annotations of conversation quality and manually crafted relevance judgments for products and documents. In addition, we provide auxiliary training resources, including partially annotated dialogues and raw conversations, to support diverse learning paradigms. CoSRec is the first resource to model CS and CR tasks within a unified framework, facilitating the design, development, and evaluation of systems capable of dynamically alternating between answering user queries and offering personalized recommendations.
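
The sketch below illustrates how the resources described in the abstract (test conversations plus separate relevance judgments for documents and products) might be loaded for a joint CSR evaluation. It is illustrative only: the file names (test_conversations.jsonl, qrels_documents.txt, qrels_products.txt), the JSON Lines conversation format, and the TREC-style qrels layout are assumptions for the sake of the example, not the actual CoSRec release format.

```python
import json
from pathlib import Path
from collections import defaultdict

# Hypothetical file layout; the actual CoSRec release may be organized differently.
DATA_DIR = Path("cosrec")


def load_conversations(path):
    """Load test conversations, assumed here to be JSON Lines:
    one conversation (a list of utterances) per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]


def load_qrels(path):
    """Load relevance judgments, assumed here to follow a TREC-style
    layout: <turn_id> <unused> <item_or_doc_id> <grade>."""
    qrels = defaultdict(dict)
    with open(path, encoding="utf-8") as f:
        for line in f:
            turn_id, _, item_id, grade = line.split()
            qrels[turn_id][item_id] = int(grade)
    return qrels


if __name__ == "__main__":
    conversations = load_conversations(DATA_DIR / "test_conversations.jsonl")
    doc_qrels = load_qrels(DATA_DIR / "qrels_documents.txt")
    prod_qrels = load_qrels(DATA_DIR / "qrels_products.txt")

    # Each turn can then be scored against the document qrels (search turns)
    # or the product qrels (recommendation turns), depending on its type.
    print(f"{len(conversations)} conversations, "
          f"{len(doc_qrels)} judged search turns, "
          f"{len(prod_qrels)} judged recommendation turns")
```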




