Application of the Preference Learning Model to a Human Resources Selection Task
Aiolli, Fabio; Sperduti, Alessandro
2009
Abstract
In many application settings there is interest in ranking a list of items arriving from a data stream. In a human resources application, for example, to help select people for a given job role, the person in charge of the selection may want a list of candidates sorted according to their profiles and how well they are suited to the target role. Historical data about past decisions can be analyzed to discover rules that help define such a ranking. Moreover, the samples exhibit temporal dynamics. To exploit this potentially useful information, we propose a method that incrementally builds a committee of classifiers (experts), each one trained on a recent chunk of samples. The prediction of the committee is obtained by combining the rankings proposed by the experts that are "closest" to the data to be ranked. The experts of the committee are generated using the Preference Learning Model, a recent method that can directly exploit supervision in the form of preferences (partial orders between instances) and is thus particularly suitable for ranking. We test our approach on a large dataset covering many years of human resource selections at a bank.
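To make the committee idea in the abstract concrete, the following is a minimal sketch of a committee of experts trained on successive stream chunks, where a query is ranked by combining the rankings of the experts "closest" to it. All names (Expert, Committee, the centroid-distance notion of closeness, the least-squares scorer, rank averaging) are illustrative assumptions, not the authors' Preference Learning Model implementation.

```python
# Hypothetical sketch of the committee-of-experts idea from the abstract.
# Assumptions: a linear least-squares scorer stands in for a PLM expert,
# "closeness" is distance between chunk centroid and query centroid, and
# rankings are combined by averaging rank positions.

import numpy as np


class Expert:
    """One committee member: a scorer fit on a single chunk of samples."""

    def __init__(self, chunk_X, chunk_scores):
        self.centroid = chunk_X.mean(axis=0)
        # Assumed stand-in for PLM training: least-squares linear scoring.
        self.w, *_ = np.linalg.lstsq(chunk_X, chunk_scores, rcond=None)

    def rank(self, X):
        """Return item indices ordered from best to worst score."""
        return np.argsort(-(X @ self.w))


class Committee:
    """Incrementally grown set of experts, one per stream chunk."""

    def __init__(self, k_closest=3):
        self.experts = []
        self.k_closest = k_closest

    def add_chunk(self, chunk_X, chunk_scores):
        self.experts.append(Expert(chunk_X, chunk_scores))

    def rank(self, X):
        # Select the experts whose training-chunk centroid is closest to
        # the mean of the items to rank (an assumed notion of "closer").
        query_center = X.mean(axis=0)
        dists = [np.linalg.norm(e.centroid - query_center) for e in self.experts]
        chosen = np.argsort(dists)[: self.k_closest]
        # Combine the chosen experts' rankings by averaging rank positions.
        n = X.shape[0]
        avg_pos = np.zeros(n)
        for i in chosen:
            order = self.experts[i].rank(X)
            pos = np.empty(n)
            pos[order] = np.arange(n)
            avg_pos += pos / len(chosen)
        return np.argsort(avg_pos)  # final ranking, best item first


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    committee = Committee(k_closest=2)
    true_w = rng.normal(size=5)
    for _ in range(4):  # four chunks arriving from the stream
        Xc = rng.normal(size=(30, 5))
        committee.add_chunk(Xc, Xc @ true_w + 0.1 * rng.normal(size=30))
    X_new = rng.normal(size=(10, 5))
    print("Ranking of new candidates:", committee.rank(X_new))
```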