Abstract: This paper aims to generate multivariate random vectors with a prescribed correlation matrix via the Johnson system. The probability weighted moment (PWM) is employed to estimate the parameters of the Johnson system. By equating the first four PWMs of the Johnson system with those of the target distribution, a system of equations for the parameters is established. With suitable initial values, solutions to the equations are obtained by a Newton iteration procedure. To allow for the generation of random vectors with a prescribed correlation matrix, approaches to accommodate the dependency are put forward. For the four transformation models of the Johnson system, nine cases are addressed. Analytical formulae are derived to determine the equivalent correlation coefficient in the standard normal space for six cases; the remaining three are handled by an interpolation method. Finally, several numerical examples are given to verify the proposed method.
Abstract: The chi-squared test for independence in two-way categorical tables depends on the assumption that the data follow the multinomial distribution. We therefore suggest alternatives for when the multinomial assumption does not hold. First, we consider the Bayes factor, which is used for hypothesis testing in Bayesian statistics. Unfortunately, it is sensitive to the choice of prior distributions. We note here that the intrinsic Bayes factor is not appropriate because the prior distributions under consideration are all proper. Thus, we propose using Bayesian estimation, which is generally not as sensitive to prior specifications as the Bayes factor. Our approach is to construct a 95% simultaneous credible region (i.e., a hyper-rectangle) for the interactions. A test that all interactions are zero is equivalent to a test of independence in two-way categorical tables. Thus, a 95% simultaneous credible region for the interactions provides a test of independence by inversion.
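The inversion idea above can be sketched concretely. The code below is an illustrative stand-in, not the paper's exact construction: it assumes a flat Dirichlet posterior on the cell probabilities, defines log-linear interaction terms, builds a simultaneous hyper-rectangle by calibrating a pointwise tail probability until the box covers 95% of the posterior draws, and declares independence when every interval contains zero.

```python
import numpy as np

def independence_test_credible(counts, level=0.95, ndraw=4000, seed=0):
    """Sketch: test independence by inverting a simultaneous credible box
    for the log-linear interaction terms (hypothetical helper, Dirichlet(1 + n_ij) posterior)."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, float)
    r, c = counts.shape
    draws = rng.dirichlet(counts.ravel() + 1.0, size=ndraw).reshape(ndraw, r, c)
    lp = np.log(draws)
    # interaction = log p_ij minus row, column, and grand means (log-linear parametrization)
    lam = (lp - lp.mean(axis=1, keepdims=True)
              - lp.mean(axis=2, keepdims=True)
              + lp.mean(axis=(1, 2), keepdims=True))
    flat = lam.reshape(ndraw, -1)
    # shrink the pointwise tail prob a until the hyper-rectangle holds `level` of draws jointly
    for a in np.linspace(0.05, 0.0005, 100):
        lo = np.quantile(flat, a / 2, axis=0)
        hi = np.quantile(flat, 1 - a / 2, axis=0)
        if np.mean(np.all((flat >= lo) & (flat <= hi), axis=1)) >= level:
            break
    independent = bool(np.all((lo <= 0) & (0 <= hi)))
    return independent, lo.reshape(r, c), hi.reshape(r, c)
```

On a balanced 2x2 table the box contains zero in every coordinate, while a strongly diagonal table excludes it, reproducing the accept/reject behavior of the inverted test.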
In this paper, we propose a new generalization of the exponentiated modified Weibull distribution, called the McDonald exponentiated modified Weibull distribution. The new distribution has a large number of well-known lifetime distributions as special sub-models, such as the McDonald exponentiated Weibull, beta exponentiated Weibull, exponentiated Weibull, exponentiated exponential, linear exponential, and generalized Rayleigh distributions, among others. Some structural properties of the new distribution are studied. Moreover, we discuss the method of maximum likelihood for estimating the model parameters.
Abstract: The classical coupon collector's problem concerns the number of purchases needed to obtain a complete collection, assuming that on each purchase a consumer obtains one randomly chosen coupon. In most real situations, however, a consumer may obtain more than one coupon per purchase. Motivated by the classical coupon collector's problem, in this work we study the so-called suprenewal process. Let {Xi, i ≥ 1} be a sequence of independent and identically distributed random variables, Sn = X1 + · · · + Xn for n ≥ 1, and S0 = 0. For every t ≥ 0, define Qt = inf{n | n ≥ 0, Sn ≥ t}. For the classical coupon collector's problem, Qt denotes the minimal number of purchases such that the total number of coupons the consumer owns is greater than or equal to t, t ≥ 0. First, the process {Qt, t ≥ 0} is compared with the renewal process {Nt, t ≥ 0}, where Nt = sup{n | n ≥ 0, Sn ≤ t}, generated by the same sequence {Xi, i ≥ 1}. Next, some fundamental and interesting properties of {Qt, t ≥ 0} are provided. Finally, limiting and other related results are obtained for the process {Qt, t ≥ 0}.
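The two processes defined in this abstract are straightforward to simulate side by side. The sketch below (a hypothetical purchase-size distribution, not from the paper) computes Qt and Nt from the same partial-sum path; for strictly positive increments one can check numerically that Qt always equals Nt or Nt + 1.

```python
import random

def suprenewal_Q(partial_sums, t):
    """Q_t = inf{n >= 0 : S_n >= t}; partial_sums[0] must be S_0 = 0."""
    for n, s in enumerate(partial_sums):
        if s >= t:
            return n
    return None  # path too short to reach level t

def renewal_N(partial_sums, t):
    """N_t = sup{n >= 0 : S_n <= t}."""
    best = None
    for n, s in enumerate(partial_sums):
        if s <= t:
            best = n
    return best

random.seed(1)
X = [random.randint(1, 5) for _ in range(200)]  # coupons obtained per purchase (illustrative)
S = [0]
for x in X:
    S.append(S[-1] + x)

for t in [1, 7.5, 50, 123]:
    Q, N = suprenewal_Q(S, t), renewal_N(S, t)
    assert Q in (N, N + 1)  # Q_t = N_t when some S_n hits t exactly, else N_t + 1
```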
Abstract: Auditors are often faced with reviewing a sample drawn from special populations. In one such population, invoices are divided into two categories according to whether or not they are qualified. In other words, the qualified amount follows a nonstandard mixture distribution: it is either zero with a certain probability or equal to the known invoice amount with a certain probability. In the other population, some invoices are partially qualified; that is, they have a qualified amount between zero and the full invoice amount. For these settings, the typical sample design is stratified random sampling, with estimation by a ratio-type method. This paper focuses on efficient sample design for this setting and provides guidelines for setting stratum boundaries, calculating the sample size, and allocating it optimally across strata.
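The allocation step mentioned above is classically done by Neyman (optimal) allocation, which assigns sample size in proportion to stratum size times stratum standard deviation. The sketch below illustrates that standard rule only; the paper's own guidelines may differ.

```python
def neyman_allocation(N, S, n):
    """Neyman allocation of a total sample size n across strata:
    n_h proportional to N_h * S_h (stratum size times stratum std. dev.)."""
    weights = [Nh * Sh for Nh, Sh in zip(N, S)]
    total = sum(weights)
    raw = [n * w / total for w in weights]
    # round to integers, keeping at least 1 unit and at most the stratum size
    return [min(Nh, max(1, round(x))) for Nh, x in zip(N, raw)]

# Illustrative strata: many small low-variance invoices, few large high-variance ones
alloc = neyman_allocation(N=[1000, 500, 100], S=[10, 20, 50], n=100)
# the small high-variance stratum is sampled far more heavily than its share of the population
```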
Influential observations pose a major threat to the performance of a regression model. Various influence statistics, including Cook's Distance and DFFITS, have been introduced in the literature using Ordinary Least Squares (OLS). The efficiency of these measures is affected by the presence of multicollinearity in linear regression; moreover, both problems can exist jointly in a regression model. New diagnostic measures based on the Two-Parameter Liu-Ridge Estimator (TPE) defined by Ozkale and Kaciranlar (2007) are proposed as alternatives to the existing ones. Approximate deletion formulas for the detection of influential cases under TPE are also derived. Finally, the diagnostic measures are illustrated with two real-life datasets.
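For reference, the OLS baselines named in this abstract follow standard textbook formulas; the sketch below computes Cook's Distance and DFFITS from the hat matrix (it does not implement the TPE-based measures the paper proposes).

```python
import numpy as np

def ols_influence(X, y):
    """Cook's Distance and DFFITS under OLS (standard formulas)."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
    h = np.diag(H)                                  # leverages
    e = y - H @ y                                   # residuals
    s2 = e @ e / (n - p)                            # residual variance
    # leave-one-out variance estimates (for externally studentized residuals)
    s2_i = ((n - p) * s2 - e**2 / (1 - h)) / (n - p - 1)
    cooks = e**2 * h / (p * s2 * (1 - h) ** 2)
    dffits = e / np.sqrt(s2_i * (1 - h)) * np.sqrt(h / (1 - h))
    return cooks, dffits
```

On data with one planted outlier, both statistics flag that observation as most influential.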
Abstract: The additive model is widely recognized as an effective tool for dimension reduction. Existing methods for estimating the additive regression function, including backfitting, marginal integration, projection and spline methods, do not provide any level of uniform confidence. In this paper a simple construction of a confidence band is proposed for the additive regression function, based on polynomial spline estimation and the wild bootstrap. Monte Carlo results show three desirable properties of the proposed band: excellent coverage of the true function, width that shrinks rapidly to zero with increasing sample size, and minimal computing time. These properties make the procedure highly recommendable for nonparametric regression with confidence when additive modelling is appropriate.
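The wild-bootstrap band idea can be sketched in a few lines. The code below is a simplified stand-in, not the paper's estimator: it uses a global polynomial fit in place of the polynomial spline, resamples residuals with Rademacher weights, and widens the band by the 95% quantile of the sup-norm deviation of the bootstrap fits.

```python
import numpy as np

def wild_bootstrap_band(x, y, degree=3, B=500, level=0.95, seed=0):
    """Sketch of a uniform confidence band via the wild bootstrap,
    with a global polynomial fit standing in for the spline estimator."""
    rng = np.random.default_rng(seed)
    coef = np.polyfit(x, y, degree)
    fit = np.polyval(coef, x)
    resid = y - fit
    sup_dev = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=len(y))    # Rademacher wild-bootstrap weights
        yb = fit + w * resid                        # resampled responses
        fb = np.polyval(np.polyfit(x, yb, degree), x)
        sup_dev[b] = np.max(np.abs(fb - fit))       # sup-norm deviation of refit
    half_width = np.quantile(sup_dev, level)        # calibrates uniform coverage
    return fit - half_width, fit + half_width
```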
Abstract: Frequentist Null Hypothesis Significance Testing (NHST) is such an integral part of scientists' behavior that its use cannot be discontinued simply by flinging it out of the window. Faced with this situation, the strategy suggested here for training students and researchers in statistical inference methods for experimental data analysis involves a smooth transition towards the Bayesian paradigm. Its general outlines are as follows: (1) present natural Bayesian interpretations of NHST outcomes to draw attention to their shortcomings; (2) thereby create the need for a change of emphasis in the presentation and interpretation of results; (3) finally, equip users with a real possibility of thinking sensibly about statistical inference problems and behaving in a more reasonable manner. The conclusion is that teaching the Bayesian approach in the context of experimental data analysis appears both desirable and feasible. This feasibility is illustrated for analysis of variance methods.