Survey Data Collection and Integration
FABBRIS, LUIGI
2013
Abstract
This volume singles out methodologies for measuring statistical errors and for designing complex questionnaires. Data quality assessment is an important issue in any survey, but it comes to the fore precisely when sampling error vanishes. This, in turn, requires researchers to learn:
- How to measure the quality of, and possibly adjust, the data collected in opinion surveys that, in general, are carried out with high-performance technological tools and by specialised personnel who interview samples of respondents;
- How to guarantee sufficient methodological standards in data collection from key witnesses, whom applied researchers turn to more and more often so as to corroborate the results of analyses of inaccessible phenomena, to elicit people's preferences or hidden behaviours, and to forecast social or economic events in the medium or long run;
- How to prune the redundant information contained in concurrent databases and repetitive records, and how to screen the statistically valid from the coarse information in overloaded databases created for purposes alien to statistics.