Journal of Official Statistics Feed
Sciendo RSS feed for the Journal of Official Statistics
https://sciendo.com/journal/JOS
https://www.sciendo.com

A Rejoinder to Garfinkel (2023) – Legacy Statistical Disclosure Limitation Techniques for Protecting 2020 Decennial US Census: Still a Viable Option
https://sciendo.com/article/10.2478/jos-2023-0019 (published 2023-09-07)
Abstract: In our article “Database Reconstruction Is Not So Easy and Is Different from Reidentification”, we show that reconstruction can be averted by properly using traditional statistical disclosure control (SDC) techniques, also sometimes called legacy statistical disclosure limitation (SDL) techniques. Furthermore, we point out that, even if reconstruction can be performed, it does not imply reidentification. Hence, the risk of reconstruction does not seem to warrant replacing traditional SDC techniques with protection based on differential privacy (DP). In “Legacy Statistical Disclosure Limitation Techniques Were Not an Option for the 2020 US Census of Population and Housing”, Simson Garfinkel insists that the 2020 Census move to DP was justified. In our view, that article contains some misconceptions, which we identify and discuss in some detail below. Consequently, we stand by the arguments given in “Database Reconstruction Is Not So Easy…”.

Letter to the Editor: Quality of 2017 Population Census of Pakistan by Age and Sex
https://sciendo.com/article/10.2478/jos-2023-0013 (published 2023-09-07)
Abstract: This Letter to the Editor is a supplement to an article previously published in the Journal of Official Statistics (Wazir and Goujon 2021).
In 2021, a reconstruction method using demographic analysis was applied to assess the quality and validity of the 2017 census data, critically investigating the demographic changes of the intercensal period at the national and provincial levels. However, at the time the article was written, the age and sex structure of the population from the 2017 census had not yet been published, making it hard to fully appraise the reconstruction of the national and subnational populations.
In the meantime, detailed data have become available and make it possible to assess the reconstruction's outcome in more detail. This letter therefore has a two-fold aim: (1) to analyze the quality of the age and sex distribution in the 2017 Population Census of Pakistan, and (2) to compare the reconstruction by age and sex to the results of the 2017 census. Our results reveal that the age and sex structure of the population as estimated by the 2017 census suffers from some irregularities.
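The quality checks on the age and sex distribution that the letter describes belong to classical demographic analysis. As a toy illustration of one such check (not necessarily one the authors used), Whipple's index measures "age heaping", the tendency of reported ages to pile up on digits 0 and 5; the counts below are invented.

```python
# Illustrative only: Whipple's index, a standard measure of age heaping.
# 100 means no preference for ages ending in 0 or 5; 500 means all reported
# ages end in 0 or 5. The counts used here are made up.

def whipples_index(counts_by_age):
    """counts_by_age: dict mapping single-year age -> population count."""
    heaped = sum(counts_by_age.get(a, 0) for a in range(25, 61, 5))
    total = sum(counts_by_age.get(a, 0) for a in range(23, 63))
    return 500.0 * heaped / total

# Hypothetical counts with a declining age profile and mild heaping.
counts = {a: 1000 - 8 * (a - 23) + (150 if a % 5 == 0 else 0)
          for a in range(23, 63)}
print(round(whipples_index(counts), 1))  # values well above 100 flag heaping
```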
Our analysis by age and sex reinforces the main conclusion of the previous article: that the next census in Pakistan would gain in quality from an in-built post-enumeration survey along with post-census demographic analysis.

Comment to Muralidhar and Domingo-Ferrer (2023) – Legacy Statistical Disclosure Limitation Techniques Were Not an Option for the 2020 US Census of Population and Housing
https://sciendo.com/article/10.2478/jos-2023-0018 (published 2023-09-07)
Abstract: The article “Database Reconstruction Is Not So Easy and Is Different from Reidentification”, by Krish Muralidhar and Josep Domingo-Ferrer, is an extended attack on the decision of the U.S. Census Bureau to turn its back on legacy statistical disclosure limitation techniques and instead use a bespoke algorithm based on differential privacy to protect the published data products of the Census Bureau’s 2020 Census of Population and Housing (henceforth the 2020 Census). This response explains why differential privacy was the only realistic choice for protecting the sensitive data collected for the 2020 Census. Differential privacy, however, has a social cost: it requires practitioners to admit that there is an inherent trade-off between the utility of published official statistics and the privacy loss of those whose data are collected under a pledge of confidentiality.

Towards Demand-Driven On-The-Fly Statistics
https://sciendo.com/article/10.2478/jos-2023-0016 (published 2023-09-07)
Abstract: A prototype of a question answering (QA) system, called Farseer, for the real-time calculation and dissemination of aggregate statistics is introduced. Using techniques from natural language processing (NLP), machine learning (ML), artificial intelligence (AI) and formal semantics, the framework is capable of correctly interpreting a written request for (aggregate) statistics and generating appropriate results. It is shown that the framework operates independently of the specific statistical domain under consideration, by capturing domain-specific information in a knowledge graph that is input to the framework. The prototype still has its limitations, however: it lacks statistical disclosure control, and searching the knowledge graph remains time-consuming.

Database Reconstruction Is Not So Easy and Is Different from Reidentification
https://sciendo.com/article/10.2478/jos-2023-0017 (published 2023-09-07)
Abstract: In recent years, it has been claimed that releasing accurate statistical information on a database is likely to allow its complete reconstruction, and differential privacy has been suggested as the appropriate methodology to prevent such attacks. These claims were recently taken very seriously by the U.S. Census Bureau and led it to adopt differential privacy for releasing U.S. Census data. This in turn has caused consternation among users of the Census data due to the reduced accuracy of the protected outputs, and it has brought legal action against the U.S. Department of Commerce. In this article, we trace the origins of the claim that releasing information on a database automatically makes it vulnerable to exposure by reconstruction attacks, and we show that this claim is, in fact, incorrect. We also show that reconstruction can be averted by properly using traditional statistical disclosure control (SDC) techniques. We further show that the geographic level at which exact counts are released matters even more for protection than the actual SDC method employed. Finally, we caution against confusing reconstruction and reidentification: using the quality of reconstruction as a metric of reidentification yields exaggerated reidentification risk figures.
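To make concrete what a "reconstruction attack" is, here is a deliberately tiny sketch with hypothetical data; real attacks pose the problem as a large integer program over many published tables at once. Given a block's published aggregates, one searches for the record sets consistent with them.

```python
# Toy database reconstruction from published aggregates. Suppose a block of
# 3 people publishes: mean age 30, minimum age 18, maximum age 45. Enumerate
# every sorted age triple consistent with those statistics.
from itertools import combinations_with_replacement

n, mean_age, min_age, max_age = 3, 30, 18, 45

candidates = [
    c for c in combinations_with_replacement(range(min_age, max_age + 1), n)
    if min(c) == min_age and max(c) == max_age and sum(c) == mean_age * n
]
print(candidates)  # [(18, 27, 45)]: here the aggregates pin the ages down
# When several candidates survive, the aggregates alone do not determine the
# records -- the ambiguity this article argues is understated by attack claims.
```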
Looking for a New Approach to Measuring the Spatial Concentration of the Human Population
https://sciendo.com/article/10.2478/jos-2023-0014 (published 2023-09-07)
Abstract: In this article, a new approach for measuring the spatial concentration of the human population is presented and tested. The new procedure is based on the concept of concentration introduced by Gini and, at the same time, on its spatial extension (i.e., taking into account spatial autocorrelation and polarization). The proposed indicator, the Spatial Gini Index, is computed using two different kinds of territorial partitioning methods: MaxMin (MM) and Constant Step (CS) distance. In this framework, an ad hoc extension of the Rey and Smith decomposition method is introduced. We apply the new approach to the Italian and foreign population resident in almost 7,900 statistical units (Italian municipalities) in 2002, 2010 and 2018. All computations are based on a new ad hoc library developed and implemented in Python.

A Note on the Optimum Allocation of Resources to Follow up Unit Nonrespondents in Probability Surveys
https://sciendo.com/article/10.2478/jos-2023-0020 (published 2023-09-07)
Abstract: Common practice for addressing nonresponse in probability surveys at National Statistical Offices is to follow up every nonrespondent with a view to lifting response rates. As the response rate is an insufficient indicator of data quality, it is argued that one should instead follow up nonrespondents with a view to reducing the mean squared error (MSE) of the estimator of the variable of interest. In this article, we propose a method for allocating nonresponse follow-up resources in such a way as to minimise the MSE under a quasi-randomisation framework. The method is illustrated using the 2018/19 Rural Environment and Agricultural Commodities Survey from the Australian Bureau of Statistics.

Predicting Days to Respondent Contact in Cross-Sectional Surveys Using a Bayesian Approach
https://sciendo.com/article/10.2478/jos-2023-0015 (published 2023-09-07)
Abstract: Surveys estimate and monitor a variety of data collection parameters, including response propensity, number of contacts, and data collection costs. These parameters can be used as inputs to a responsive/adaptive design or to monitor the progression of a data collection period against predefined expectations. Recently, Bayesian methods have emerged as a way of combining historical information or external data with data from the in-progress data collection period to improve prediction. We develop a Bayesian method for predicting a measure of case-level progress or productivity: the estimated time lag, in days, between the first contact attempt and the first respondent contact. We compare the quality of predictions from the Bayesian method with predictions generated by more commonly used methods that leverage data from only the historical data collection periods or only the in-progress round of data collection. Using prediction error and misclassification into short or long day lags, we demonstrate that the Bayesian method yields improved predictions close to the day of the first contact attempt, when such predictions may be most informative for interventions or interviewer feedback. This application adds to the evidence that combining historical and current information about data collection in a Bayesian framework can improve predictions of data collection parameters.
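The flavor of such a Bayesian combination can be conveyed by a toy model, which is not the authors' model: treat "first respondent contact within seven days of the first attempt" as a Bernoulli outcome, give it a Beta prior fitted from historical rounds, and update it with cases from the in-progress round.

```python
# Toy Beta-Binomial update combining historical rounds (prior) with the
# in-progress round (likelihood). All figures are hypothetical.

# History: 60% of cases were reached within 7 days; we downweight history
# to an effective prior sample size of 50 cases.
prior_strength = 50
alpha = prior_strength * 0.6  # prior pseudo-successes
beta = prior_strength * 0.4   # prior pseudo-failures

# In-progress round so far: 20 cases resolved, 9 reached within 7 days.
successes, failures = 9, 11

posterior_mean = (alpha + successes) / (alpha + beta + successes + failures)
print(f"P(contact within 7 days) = {posterior_mean:.3f}")  # 0.557
# Early in the field period the prior dominates; as current-round cases
# accumulate, the estimate shifts toward the in-progress rate (here 0.45).
```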
Constructing Building Price Index Using Administrative Data
https://sciendo.com/article/10.2478/jos-2023-0011 (published 2023-06-09)
Abstract: Improving the accuracy of deflators is crucial for measuring real GDP and growth rates. However, construction prices are often difficult to measure. This study uses stratification and hedonic methods to estimate price indices. The estimated indices are based on the actual transaction prices of buildings (contract prices) obtained from the Statistics on Building Starts, survey information from the administrative sector in Japan. Compared with the construction cost deflator (CCD), which is calculated by compounding input costs, the estimated output price indices show higher rates of increase during the economic expansion phase after 2013. This suggests that the profit surge observed in the construction sector in that period is not fully reflected in the CCD. Furthermore, the difference between the two “output-type” indices obtained by the stratification and hedonic methods shrinks when the estimation methods are precisely configured.

Design and Sample Size Determination for Experiments on Nonresponse Followup Using a Sequential Regression Model
https://sciendo.com/article/10.2478/jos-2023-0009 (published 2023-06-09)
Abstract: Statistical agencies depend on responses to inquiries made to the public, and occasionally conduct experiments to improve contact procedures. Agencies may wish to assess whether an operational refinement produces a significant change in response rates. This work considers the assessment of response rates when up to L attempts are made to contact each subject, and subjects receive one of J possible variations of the operation under experimentation. In particular, the continuation-ratio logit (CRL) model facilitates inference on the probability of success at each step of the sequence, given that failures occurred at previous attempts. The CRL model is investigated as a basis for sample size determination, one of the major decisions faced by an experimenter, to attain a desired power under a Wald test of a general linear hypothesis. An experiment conducted for nonresponse followup in the United States 2020 decennial census provides a motivating illustration.
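For orientation, the continuation-ratio logit model in one common parameterization (the notation is ours and need not match the article's): with arms j = 1, …, J and attempts l = 1, …, L, let π_{jl} be the probability of a response at attempt l, given no response at attempts 1, …, l − 1, under arm j. Then

\[
\operatorname{logit}(\pi_{jl}) \;=\; \log\frac{\pi_{jl}}{1-\pi_{jl}} \;=\; \alpha_l + \mathbf{x}_j^{\top}\boldsymbol{\beta},
\qquad l = 1,\dots,L,\quad j = 1,\dots,J,
\]

so each attempt has its own intercept \(\alpha_l\) while the covariate vector \(\mathbf{x}_j\) encodes the operational variation. Sample size is then chosen so that a Wald test of a linear hypothesis \(H_0: \mathbf{C}\boldsymbol{\beta} = \mathbf{0}\) attains the desired power.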
Effects of Changing Modes on Item Nonresponse in Panel Surveys
https://sciendo.com/article/10.2478/jos-2023-0007 (published 2023-06-09)
Abstract: To investigate the effect of a change from the telephone mode to the web mode on item nonresponse in panel surveys, we use experimental data from a two-wave panel survey. The treatment group changed from telephone to web after the first wave, while the control group continued in the telephone mode. We find that, on changing to the web, “don’t know” answers increase moderately from a low level, while item refusals increase substantially from a very low level. This holds for all person groups, although socio-demographic characteristics have some additional effects on giving a “don’t know” or a refusal when changing mode.

Estimating Intra-Regional Inequality with an Application to German Spatial Planning Regions
https://sciendo.com/article/10.2478/jos-2023-0010 (published 2023-06-09)
Abstract: Income inequality is a persistent topic of public and political debate, and the focus often shifts from the national level to a more detailed geographical level, where inequality between or within local communities can be assessed. In this article, inequality within regions, that is, between households, is estimated at a regionally disaggregated level. Methodologically, a small area estimation of the Gini coefficient is carried out using an area-level model linking survey data with related administrative data. Specifically, the Fay-Herriot model is applied using a logit transformation followed by a bias-corrected back-transformation. The uncertainty of the point estimate is assessed using a parametric bootstrap procedure to estimate the mean squared error. The validity of the methodology is shown in a model-based simulation for both the point estimator and the uncertainty measure. The proposed methodology is illustrated by estimating model-based Gini coefficients for spatial planning regions in Germany, using survey data from the Socio-Economic Panel and aggregate data from the 2011 Census. The results show that intra-regional inequality is more diverse than a comparison between East and West alone suggests.

From Quarterly to Monthly Turnover Figures Using Nowcasting Methods
https://sciendo.com/article/10.2478/jos-2023-0012 (published 2023-06-09)
Abstract: Short-term business statistics at Statistics Netherlands are largely based on Value Added Tax (VAT) administrations. Companies may file their tax returns on a monthly, quarterly, or annual basis; most file quarterly. So far, these VAT-based short-term business statistics have been published with a quarterly frequency as well. In this article, we compare different methods of compiling monthly figures, even though a major part of the data is observed quarterly. The methods considered must address two issues. The first is to combine a high- and a low-frequency series into a single high-frequency series, where both series measure the same phenomenon in the target population; the appropriate method for this purpose is usually referred to as “benchmarking”. The second is a missing-data problem, because the first and second months of a quarter are published before the corresponding quarterly data are available; a “nowcast” method can be used to estimate these months. The literature on mixed-frequency models provides solutions for both problems, sometimes dealing with them simultaneously. In this article, we combine different benchmarking and nowcasting models and evaluate the combinations. Our evaluation distinguishes between relatively stable periods and periods during and after a crisis, because different approaches may be optimal under the two conditions. We find that during stable periods the so-called Bridge models perform slightly better than the alternatives considered. Until about fifteen months after a crisis, models that rely more heavily on historical patterns, such as the Bridge, MIDAS and structural time series models, are outperformed by more straightforward (S)ARIMA approaches.
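The benchmarking step can be shown in its simplest pro-rata form; the article evaluates more sophisticated variants (Denton-type, Bridge, MIDAS, structural time series). The idea: distribute each observed quarterly total over its three months in proportion to a monthly indicator. The figures below are hypothetical.

```python
# Minimal pro-rata benchmarking sketch: split a quarterly benchmark total
# over three months in proportion to a monthly indicator series (for VAT,
# e.g., the turnover of the subset of companies that file monthly).

def prorate(quarter_total, monthly_indicator):
    """Monthly figures proportional to the indicator, summing to the total."""
    s = sum(monthly_indicator)
    return [quarter_total * m / s for m in monthly_indicator]

quarter_total = 300.0           # benchmark from the quarterly filers
indicator = [45.0, 50.0, 55.0]  # monthly pattern from the monthly filers

print(prorate(quarter_total, indicator))  # [90.0, 100.0, 110.0]
# For an unfinished quarter the benchmark itself is missing, which is where
# a nowcast of the quarterly figure enters before pro-rating.
```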
Adjusting for Selection Bias in Nonprobability Samples by Empirical Likelihood Approach
https://sciendo.com/article/10.2478/jos-2023-0008 (published 2023-06-09)
Abstract: Large amounts of data that are easier and faster to collect than survey data are available today, bringing new challenges. One of them is the nonprobability nature of these big data, which may not represent the target population properly and hence can result in highly biased estimators. In this article, two approaches for dealing with selection bias when the selection process is nonignorable are discussed. The first, based on the empirical likelihood, does not require a parametric specification of the population model, but the probability of being in the nonprobability sample needs to be modeled. Auxiliary information known for the population, or estimable from a probability sample, can be incorporated as calibration constraints, thus enhancing the precision of the estimators. The second is a mixed approach based on mass imputation and propensity score adjustment, which requires that membership of the big data sample be known throughout a probability sample. Finally, two simulation experiments and an application to income data are performed to evaluate the performance of the proposed estimators in terms of robustness and efficiency.

Characteristics of Respondents to Web-Based or Traditional Interviews in Mixed-Mode Surveys. Evidence from the Italian Permanent Population Census
https://sciendo.com/article/10.2478/jos-2023-0001 (published 2023-03-16)
Abstract: In order to provide useful tools for researchers designing actions to promote participation in web surveys, it is key to study the characteristics that define the profile of a “web respondent”, so that specific interventions can be planned. In this contribution, which draws on data collected during the 2019 population and housing census in Italy, we define the set of familial and geographical characteristics that correspond to a greater probability that the interviewed household will choose to respond online, by estimating a multilevel model. The profile of a “computer-assisted web interview household” (CAWI-H) is then defined on the basis of the structural characteristics of this population, and the geographical distribution of households is studied according to their distance from the CAWI-H profile. The results show that households more distant from the CAWI-H profile have characteristics associated with segments of the population generally affected by economic and social fragility: they are mainly elderly people, foreigners, residents of small towns, and people with a low level of education. It is to these households in particular that survey designers can address specific actions to enhance their willingness to participate in web surveys.
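A full multilevel model is beyond a short sketch, but the underlying idea of the census application above, regressing a household's probability of responding online on its characteristics, can be illustrated with a plain single-level logistic regression on simulated data; all variable names and effect sizes here are invented.

```python
# Single-level stand-in for the article's multilevel model: simulate
# households and fit P(responds online) as a function of two traits.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
age_head = rng.integers(20, 90, n)   # age of the household head
small_town = rng.integers(0, 2, n)   # 1 if resident in a small town
# Invented data-generating process: older heads and small towns -> less CAWI.
p = 1.0 / (1.0 + np.exp(-(2.0 - 0.04 * age_head - 0.5 * small_town)))
cawi = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([age_head, small_town]))
fit = sm.Logit(cawi, X).fit(disp=0)
print(fit.params)  # roughly recovers [2.0, -0.04, -0.5]
```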
Investigating an Alternative for Estimation from a Nonprobability Sample: Matching plus Calibration
https://sciendo.com/article/10.2478/jos-2023-0003 (published 2023-03-16)
Abstract: Matching a nonprobability sample to a probability sample is a strategy both for selecting the nonprobability units and for weighting them. This approach has been employed in the past to select subsamples of persons from a large panel of volunteers. One method of weighting, introduced here, is to assign each unit in the nonprobability sample the weight of its matched case in the probability sample. The properties of the resulting estimators depend on whether the probability sample weights are inverses of selection probabilities or are calibrated. In addition, imperfect matching can bias estimates from the matched sample, so that its weights need to be adjusted, especially when the volunteer panel is small. Calibration weighting combined with matching is one approach to correcting bias and reducing variances. We explore the theoretical properties of the matched and the matched-and-calibrated estimators with respect to a quasirandomization distribution that is assumed to describe how units in the nonprobability sample are observed, a superpopulation model for the analysis variables collected in the nonprobability sample, and the randomization distribution of the probability sample. Numerical studies using simulated data and real data from the 2015 US Behavioral Risk Factor Surveillance Survey examine the performance of the alternative estimators.

Using Eye-Tracking Methodology to Study Grid Question Designs in Web Surveys
https://sciendo.com/article/10.2478/jos-2023-0004 (published 2023-03-16)
Abstract: Grid questions are frequently employed in web surveys because of their assumed response efficiency, and many previous studies have found shorter response times for grid questions than for item-by-item formats. Our contribution to this literature is to investigate how altering the question format affects response behavior and the depth of cognitive processing when answering both grid and item-by-item formats. To answer these questions, we implemented an experiment with three questions in an eye-tracking study. Each question consisted of a set of ten items that respondents answered either on a single page (large grid), on two pages with five items each (small grid), or on ten separate pages (item-by-item). We did not find substantial differences in cognitive processing overall, although the processing of the question stem and the response scale labels was significantly higher for the item-by-item design than for the large grid in all three questions. We found, however, that when answering an item in a grid question, respondents often refer to surrounding items when making a judgement. We discuss the findings and limitations of our study and provide suggestions for practical design decisions.
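Returning to the matching entry above ("Investigating an Alternative for Estimation from a Nonprobability Sample"): the weight-transfer idea can be made concrete with a tiny sketch in which each nonprobability unit inherits the weight of its nearest probability-sample match on a single covariate. The data are simulated and the matching rule is deliberately naive.

```python
# Toy weight transfer by nearest-neighbor matching on one covariate x.
import numpy as np

rng = np.random.default_rng(1)
x_prob = rng.uniform(0, 1, 10)      # probability sample: covariate ...
w_prob = rng.uniform(50, 150, 10)   # ... and survey weight per unit
x_nonprob = rng.uniform(0, 1, 6)    # volunteer sample: covariates only

# Each volunteer unit takes the weight of its closest probability-sample unit.
nearest = np.abs(x_nonprob[:, None] - x_prob[None, :]).argmin(axis=1)
w_matched = w_prob[nearest]
print(w_matched)
# The article then calibrates such transferred weights to known population
# totals to counter the bias that imperfect matching introduces.
```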
A Statistical Comparison of Call Volume Uniformity Due to Mailing Strategy
https://sciendo.com/article/10.2478/jos-2023-0005 (published 2023-03-16)
Abstract: For operations such as a decennial census, the U.S. Census Bureau sends mail to potential respondents inviting a self-response. It is suspected that the mailing strategy affects the distribution of call volumes to the U.S. Census Bureau's telephone helplines, and for staffing purposes more uniform call volumes throughout the week are desirable. In this work, we formulate tests and confidence intervals to compare the uniformity of call volumes resulting from competing mailing strategies. Regarding the data as multinomial observations, we compare pairs of call volume observations to determine whether one mailing strategy has multinomial cell probabilities closer to the uniform probability vector than another strategy. A motivating illustration is provided by call volume data recorded in three studies carried out in advance of the 2020 Decennial Census.

A Multivariate Regression Estimator of Levels and Change for Surveys Over Time
https://sciendo.com/article/10.2478/jos-2023-0002 (published 2023-03-16)
Abstract: Rotations are often used for panel surveys, where observations remain in the sample for a predefined number of periods and then rotate out. Information from previous waves can be exploited to improve current estimates. We propose a multivariate regression estimator that captures all the information available from both waves. By adding auxiliary variables describing the rotational design, the proposed estimator captures the sample correlation between waves. It can be used for the estimation of both levels and changes.

A Two-Stage Bennet Decomposition of the Change in the Weighted Arithmetic Mean
https://sciendo.com/article/10.2478/jos-2023-0006 (published 2023-03-16)
Abstract: The weighted arithmetic mean is used in a wide variety of applications. An infinite number of possible decompositions of the change in the weighted mean is available, so it is an open question which of them should be applied. In this article, we derive a decomposition of the change in the weighted mean based on a two-stage Bennet decomposition.
Our proposed decomposition is easy to employ and interpret, and we show that it satisfies the difference counterpart of the index number time reversal test. We illustrate the framework by decomposing aggregate earnings growth from 2020Q4 to 2021Q4 in Norway and compare it with some of the main decompositions proposed in the literature. We find that the wedge between the compositional effects identified by the proposed two-stage Bennet decomposition and by the one-stage Bennet decomposition is substantial, and for some industries the compositional effects have opposite signs.
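For reference, the one-stage Bennet decomposition that the two-stage version builds on (notation ours): writing the weighted mean in period \(t\) as \(\bar{y}^{t} = \sum_i w_i^{t} x_i^{t}\), the change splits exactly into a compositional (weight) term and a within (value) term,

\[
\bar{y}^{1}-\bar{y}^{0}
= \sum_i \frac{x_i^{1}+x_i^{0}}{2}\left(w_i^{1}-w_i^{0}\right)
+ \sum_i \frac{w_i^{1}+w_i^{0}}{2}\left(x_i^{1}-x_i^{0}\right),
\]

an identity symmetric in the two periods, which is what the time reversal property requires. The article's two-stage variant applies a decomposition of this kind first between groups (e.g., industries) and then within them.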