rss_2.0Business and Economics FeedSciendo RSS Feed for Business and Economicshttps://www.sciendo.com/subject/EChttps://www.sciendo.comBusiness and Economics Feedhttps://www.sciendo.com/subjectImages/Bussiness_&_Economics.jpg700700Estimation of Small Area Characteristics Using Multivariate Rao-Yu Modelhttps://sciendo.com/article/10.21307/stattrans-2017-009<abstract><title style='display:none'>Abstract</title><p>The growing demand for high-quality statistical data for small areas, coming from both the public and the private sector, makes it necessary to develop appropriate estimation methods. Techniques based on small area models that combine time series and cross-sectional data allow for efficient "borrowing strength" from the entire population and can also take into account changes over time. In this context, EBLUP estimation based on the multivariate Rao-Yu model, involving both autocorrelated random effects between areas and sampling errors, can be useful. The efficiency of this approach depends on the degree of correlation between the dependent variables considered in the model. In the paper we take up the estimation of incomes and expenditure in Poland by means of the multivariate Rao-Yu model, based on sample data coming from the Polish Household Budget Survey and administrative registers. In particular, the advantages and limitations of bivariate models are discussed. The calculations were performed using the <italic>sae</italic> and <italic>sae2</italic> packages for the R environment. 
Direct estimates were obtained using the WesVar software, and their precision was determined using a balanced repeated replication (BRR) method.</p></abstract>ARTICLE2018-01-22T00:00:00.000+00:00The Inter-Country Comparison of the Cost of Children Maintenance Using Housing Expenditurehttps://sciendo.com/article/10.21307/stattrans-2017-007<abstract><title style='display:none'>Abstract</title><p>It is worthwhile to compare the maintenance costs of children between countries with similar yet distinct family policy regimes, because this could yield valuable lessons for researchers and policy-makers, as well as contribute to methodological development.</p><p>In this study, we aim to conduct a comparative analysis of the equivalence scales in Austria, Italy, Poland and France, taking into account the age of children. To this end, we use data from the European Survey on Income and Living Conditions (EU-SILC) to calculate equivalence scales for mono- and duo-parental households for the first and second child. The four countries share a common European cultural context, yet differ with respect to social environment, in particular family policy. We apply the Engel estimation method, taking the share of housing spending in total expenditure as a tool to obtain commodity-specific equivalence scales.</p><p>Our results are consistent with other studies showing that the cost of a first child is higher than that of a later child. 
The scale values are not the same across all the countries concerned, with the highest cost observed in Italy and the lowest in Poland.</p></abstract>ARTICLE2018-01-22T00:00:00.000+00:00Mapping Poverty at the Level of Subregions in Poland Using Indirect Estimationhttps://sciendo.com/article/10.21307/stattrans-2017-003<abstract><title style='display:none'>Abstract</title><p>The European Survey on Income and Living Conditions (EU-SILC) is the basic source of information published by the CSO (the Central Statistical Office of Poland) about the relative poverty indicator, both for the country as a whole and at the regional level (NUTS 1). Estimates at levels of the territorial division lower than regions (NUTS 1) or provinces (NUTS 2, also called ’voivodships’) have not been published so far. Such estimates can be calculated by means of indirect estimation methods, which rely on information from outside the subpopulation of interest and thus usually increase estimation precision. The main aim of this paper is to present estimates of the poverty indicator at a lower level of spatial aggregation than used so far, namely at the level of subregions in Poland (NUTS 3), using small area estimation (SAE) methodology, i.e. a model–based technique – the EBLUP estimator based on the Fay–Herriot model. By optimally choosing covariates derived from sources unaffected by random errors we can obtain results with adequate precision. A territorial analysis of the scope of poverty in Poland at the NUTS 3 level will also be presented in detail. 
The article extends the approach presented by Wawrowski (2014).</p></abstract>ARTICLE2018-01-22T00:00:00.000+00:00Relations for Moments of Progressively Type-II Right Censored Order Statistics From Erlang-Truncated Exponential Distributionhttps://sciendo.com/article/10.21307/stattrans-2017-005<abstract><title style='display:none'>Abstract</title><p>In this paper, we establish some new recurrence relations for the single and product moments of progressively Type-II right censored order statistics from the Erlang-truncated exponential distribution. These relations generalize those established by Aggarwala and Balakrishnan (1996) for the standard exponential distribution. They enable the computation of the means, variances and covariances of all progressively Type-II right censored order statistics for all sample sizes in a simple and efficient manner. Further, an algorithm is discussed which enables us to compute all the means, variances and covariances of Erlang-truncated exponential progressively Type-II right censored order statistics for all sample sizes <italic>n</italic> and all censoring schemes (<italic>R</italic><sub>1</sub>, <italic>R</italic><sub>2</sub>,…, <italic>R</italic><sub>m</sub>), <italic>m</italic> &lt; <italic>n</italic>. Using these relations, we tabulate the means and variances of progressively Type-II right censored order statistics of the Erlang-truncated exponential distribution.</p></abstract>ARTICLE2018-01-22T00:00:00.000+00:00New Approaches Using Exponential Type Estimator With Cost Modelling for Population Mean on Successive Waveshttps://sciendo.com/article/10.21307/stattrans-2017-001<abstract><title style='display:none'>Abstract</title><p>The key purpose of sampling over successive waves lies in the varying nature of the study character; the same may apply to the ancillary information if the time lag between two successive waves is sufficiently large. 
Keeping the varying nature of the auxiliary information in consideration, modern approaches have been proposed to estimate the population mean over two successive waves. Four exponential ratio type estimators have been designed. The properties of the proposed estimators have been elaborated theoretically, including the optimum rotation rate. Cost models have also been worked out to minimize the total cost of the survey design over two successive waves. The dominance of the proposed estimators over well-known existing estimators has been shown. Simulation algorithms have been designed and applied to corroborate the theoretical results.</p></abstract>ARTICLE2018-01-22T00:00:00.000+00:00A New Estimator of Mean Using Double Samplinghttps://sciendo.com/article/10.21307/stattrans-2017-004<abstract><title style='display:none'>Abstract</title><p>In this paper, we consider the problem of estimating the population mean of a study variable by making use of the first-phase sample mean and the first-phase sample median of the auxiliary variable at the estimation stage. The proposed new estimator of the population mean is compared with the sample mean estimator, the ratio estimator and the difference type estimator for a fixed cost of the survey, using the concept of two-phase sampling. The magnitude of the relative efficiency of the proposed new estimator has been investigated through a simulation study.</p></abstract>ARTICLE2018-01-22T00:00:00.000+00:00Improvement of Fuzzy Mortality Models by Means of Algebraic Methodshttps://sciendo.com/article/10.21307/stattrans-2017-008<abstract><title style='display:none'>Abstract</title><p>The forecasting of mortality is of fundamental importance in many areas, such as the funding of public and private pensions, the care of the elderly, and the provision of health services. The first studies on mortality models date back to the 19th century, but it was only in the last 30 years that the methodology started to develop at a fast rate. 
Mortality models presented in the literature fall into two categories (see, e.g. Tabeau <italic>et al</italic>., 2001, Booth, 2006): the so-called static or stationary models and the dynamic models. Models in the first, larger group use a function of a real or fuzzy variable with some estimated parameters to represent death probabilities or specific mortality rates. The dynamic models in the second group express death probabilities or mortality rates by means of the solutions of stochastic differential equations, etc.</p><p>The well-known Lee-Carter model (1992), which is widely used today, is considered to belong to the first group, as does its fuzzy version published by Koissi and Shapiro (2006). In the paper we propose a new class of fuzzy mortality models based on a fuzzy version of the Lee-Carter model. The theoretical background is based on the algebraic approach to fuzzy numbers (Ishikawa, 1997a, Kosiński, Prokopowicz and Ślęzak, 2003, Rossa, Socha and Szymański, 2015, Szymański and Rossa, 2014). The essential idea in our approach is to represent the membership function of a fuzzy number as an element of a C*-Banach algebra. If the membership function <italic>μ(z)</italic> of a fuzzy number is strictly monotonic on two disjoint intervals, then it can be decomposed into strictly decreasing and strictly increasing functions Φ(<italic>z</italic>), Ψ(<italic>z</italic>), and the inverse functions <italic>f</italic>(<italic>u</italic>)=Φ<sup>−1</sup>(<italic>u</italic>) and <italic>g</italic>(<italic>u</italic>)=Ψ<sup>−1</sup>(<italic>u</italic>), <italic>u</italic> ∈ [0, 1], can be found.</p><p>Ishikawa (1997a) proposed the foundations of fuzzy measurement theory, a general measurement theory for classical and quantum systems. We have applied this approach, termed C<sup>*</sup>-measurement, as the theoretical foundation of the mortality model. 
Ishikawa (1997b) also introduced the notions of objective and subjective C<sup>*</sup>-measurement, called real and imaginary C<sup>*</sup>-measurements, respectively. In our proposed mortality model the function <italic>f</italic> is treated as an objective C*-measurement and the function <italic>g</italic> as a subjective C*-measurement, and the membership function <italic>μ</italic>(<italic>z</italic>) is represented by means of a complex-valued function <italic>f</italic>(<italic>u</italic>)+<italic>ig</italic>(<italic>u</italic>), where <italic>i</italic> is the imaginary unit. We use the Hilbert space of quaternion algebra as an introduction to the mortality models.</p></abstract>ARTICLE2018-01-22T00:00:00.000+00:00Bayesian Estimation of Measles Vaccination Coverage Under Ranked Set Samplinghttps://sciendo.com/article/10.21307/stattrans-2017-002<abstract><title style='display:none'>Abstract</title><p>The present article is concerned with the problem of estimating an unknown population proportion <italic>p</italic>, say, of a certain population characteristic in a dichotomous population using data collected through the ranked set sampling (RSS) strategy. Here, it is assumed that the proportion <italic>p</italic> is not fixed but a random quantity. A Bayes estimator of <italic>p</italic> is proposed under the squared error loss function, assuming that the prior density of <italic>p</italic> belongs to the family of Beta distributions. The performance of the proposed RSS-based Bayes estimator is compared with that of the corresponding classical estimator based on the maximum likelihood principle. 
The proposed procedure is used to estimate the measles vaccination coverage probability among children aged 12-23 months in India, using real-life epidemiological data from the National Family Health Survey-III.</p></abstract>ARTICLE2018-01-22T00:00:00.000+00:00A Generalized Randomized Response Modelhttps://sciendo.com/article/10.21307/stattrans-2017-006<abstract><title style='display:none'>Abstract</title><p>In this paper we have suggested a generalized version of the Gjestvang and Singh (2006) model and have studied its properties. We have shown that the randomized response models due to Warner (1965), Mangat and Singh (1990), Mangat (1994) and Gjestvang and Singh (2006) are members of the proposed RR model. The conditions are obtained under which the suggested RR model is more efficient than the Warner (1965), Mangat and Singh (1990), Mangat (1994) and Gjestvang and Singh (2006) models. A numerical illustration is given in support of the present study.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00Sample Allocation in Estimation of Proportion in a Finite Population Divided Among Two Stratahttps://sciendo.com/article/10.21307/stattrans-2016-085<abstract><title style='display:none'>Abstract</title><p>The problem of estimating a proportion of objects with a particular attribute in a finite population is considered. The classical estimator is compared with an estimator which uses the information that the population is divided among two strata. Theoretical results are illustrated with a numerical example.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00Stacked Regression With a Generalization of the Moore-Penrose Pseudoinversehttps://sciendo.com/article/10.21307/stattrans-2016-080<abstract><title style='display:none'>Abstract</title><p>In practice, it often happens that a number of classification methods are available and we are not able to clearly determine which method is optimal. 
We propose a combined method that allows us to consolidate information from multiple sources into a better classifier. Stacked regression (SR) is a method for forming linear combinations of different classifiers to give improved classification accuracy. The Moore-Penrose (MP) pseudoinverse is a general way to find the solution to a system of linear equations.</p><p>This paper presents the use of a generalization of the MP pseudoinverse of a matrix in SR. However, for data sets with a larger number of features our exact method is computationally too slow to achieve good results, so we propose a genetic approach to solve the problem. Experimental results on various real data sets demonstrate that the improvements are efficient and that this approach outperforms the classical SR method, providing a significant reduction in the mean classification error rate.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00Population Variance Estimation Using Factor Type Imputation Methodhttps://sciendo.com/article/10.21307/stattrans-2016-076<abstract><title style='display:none'>Abstract</title><p>We propose a variance estimator based on factor type imputation in the presence of non-response. Properties of the proposed classes of estimators are studied and their optimality conditions are derived. The proposed classes of factor type ratio estimators are shown to be more efficient than some of the existing estimators, namely, the usual unbiased estimator of variance, the ratio-type, dual to ratio type and ratio cum dual to ratio estimators. Their performances are assessed on the basis of relative efficiencies. 
Findings are illustrated using simulated and real data sets.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00On Asymmetry of Prediction Errors in Small Area Estimationhttps://sciendo.com/article/10.21307/stattrans-2016-078<abstract><title style='display:none'>Abstract</title><p>The mean squared error reflects only the average prediction accuracy, while the distribution of the squared prediction error is positively skewed. Hence, assessing or comparing accuracy based on the MSE (which is the mean of squared errors) is insufficient and even inadequate, because we should be interested not only in the average but in the whole distribution of prediction errors. This is why we propose to use measures of prediction accuracy other than the MSE in small area estimation. In the prediction accuracy comparisons we take into account our proposal for the empirical best predictor, which is a generalization of the predictor presented by Molina and Rao (2010). The generalization results from the assumption of a longitudinal model and possible changes of the population and subpopulations over time.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00Bayesian Model Averaging and Jointness Measures: Theoretical Framework and Application to the Gravity Model of Tradehttps://sciendo.com/article/10.21307/stattrans-2016-077<abstract><title style='display:none'>Abstract</title><p>The following study presents the idea of Bayesian model averaging (BMA), as well as the benefits of combining the knowledge obtained from the analysis of different models. The BMA structure is described together with its most important statistics, g prior parameter proposals, prior model size distributions, and the jointness measures proposed by Ley and Steel (2007) and by Doppelhofer and Weeks (2009). The application of BMA is illustrated with the gravity model of trade, where the determinants of trade are chosen from a list of nine different variables. 
The employment of BMA enabled the identification of four robust determinants: geographical distance, real GDP product, population product and real GDP <italic>per capita</italic> distance. At the same time, applications of jointness measures reveal some rather surprising relationships between the variables, as well as demonstrate the superiority of Ley and Steel’s measure over the one introduced by Doppelhofer and Weeks.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00An Additive Risks Regression Model for Middle-Censored Lifetime Datahttps://sciendo.com/article/10.21307/stattrans-2016-081<abstract><title style='display:none'>Abstract</title><p>Middle-censoring refers to data arising in situations where the exact lifetime of study subjects becomes unobservable if it happens to fall in a random censoring interval. In the present paper we propose a semiparametric additive risks regression model for analysing middle-censored lifetime data arising from an unknown population. We estimate the regression parameters and the unknown baseline survival function by two different methods. The first method uses martingale-based theory and the second is an iterative method. We report simulation studies to assess the finite sample behaviour of the estimators. Then, we illustrate the utility of the model with a real life data set. The paper ends with a conclusion.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00From the Editorhttps://sciendo.com/article/10.21307/stattrans-2016-075ARTICLE2017-11-20T00:00:00.000+00:00Option for Predicting the Czech Republic’s Foreign Trade Time Series as Components in Gross Domestic Producthttps://sciendo.com/article/10.21307/stattrans-2016-082<abstract><title style='display:none'>Abstract</title><p>This paper analyses the time series observed for the foreign trade of the Czech Republic (CR) and predictions in such series with the aid of the SARIMA and transfer-function models. 
Our goal is to find models suitable for describing the time series of the exports and imports of goods and services from/to the CR and to subsequently use these models for predictions in quarterly estimates of the gross domestic product (GDP) component resources and utilization. As a result, we obtain suitable models with a time lag, and predictions of the time series of CR exports and imports several months ahead.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00Selecting the Optimal Multidimensional Scaling Procedure for Metric Data With R Environmenthttps://sciendo.com/article/10.21307/stattrans-2016-084<abstract><title style='display:none'>Abstract</title><p>In multidimensional scaling (MDS) carried out on the basis of a metric data matrix (interval, ratio), the main decision problems relate to the selection of the method of normalization of the values of the variables, the selection of the distance measure and the selection of the MDS model. The article proposes a solution that allows choosing the optimal multidimensional scaling procedure according to the normalization methods, distance measures and MDS model applied. The study includes 18 normalization methods, 5 distance measures and 3 types of MDS models (ratio, interval and spline). It uses two criteria for selecting the optimal multidimensional scaling procedure: Kruskal’s <italic>Stress</italic>-1 fit measure and the Hirschman-Herfindahl <italic>HHI</italic> index calculated based on Stress per point values. The results are illustrated by an empirical example.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00An Application of Functional Multivariate Regression Model to Multiclass Classificationhttps://sciendo.com/article/10.21307/stattrans-2016-079<abstract><title style='display:none'>Abstract</title><p>In this paper, the scalar response functional multivariate regression model is considered. 
By using the basis function representation of the functional predictors and regression coefficients, this model is rewritten as a multivariate regression model. This representation of the functional multivariate regression model is used for the multiclass classification of multivariate functional data. Computational experiments performed on real labelled data sets demonstrate the effectiveness of the proposed method for the classification of functional data.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00Remarks on the Estimation of Position Parametershttps://sciendo.com/article/10.21307/stattrans-2016-086<abstract><title style='display:none'>Abstract</title><p>The article contains some theoretical remarks about selected models of position parameter estimation, as well as numerical examples of the problem. We ask a question concerning the existence of possible measures of the quality of interval estimation and mention some popular measures applied to the task. Point estimation is insufficient in practical problems and it is rather interval estimation that is in wide use. Too wide an interval suggests that the information available is not sufficient to make a decision and that we should look for more information, perhaps by increasing the sample size.</p></abstract>ARTICLE2017-11-20T00:00:00.000+00:00en-us-1