Abstract: The classical coupon collector’s problem concerns the number of purchases needed to obtain a complete collection, assuming that each purchase yields one randomly chosen coupon. In most real situations, however, a consumer may obtain more than one coupon per purchase. Motivated by the classical coupon collector’s problem, in this work we study the so-called suprenewal process. Let {Xi, i ≥ 1} be a sequence of independent and identically distributed random variables, Sn = ∑_{i=1}^{n} Xi, n ≥ 1, S0 = 0. For every t ≥ 0, define Qt = inf{n | n ≥ 0, Sn ≥ t}. For the classical coupon collector’s problem, Qt denotes the minimal number of purchases such that the total number of coupons the consumer owns is greater than or equal to t, t ≥ 0. First the process {Qt, t ≥ 0} is compared with the renewal process {Nt, t ≥ 0}, where Nt = sup{n | n ≥ 0, Sn ≤ t}, generated by the same sequence {Xi, i ≥ 1}. Next some fundamental and interesting properties of {Qt, t ≥ 0} are provided. Finally, limiting and other related results are obtained for the process {Qt, t ≥ 0}.
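The definitions of Qt and Nt above are easy to simulate. The sketch below (not part of the paper; the coupons-per-purchase distribution is an arbitrary illustrative choice) computes both from the same sequence of partial sums; for strictly positive Xi one has Nt ≤ Qt ≤ Nt + 1, with equality Qt = Nt exactly when some Sn hits t.

```python
import random

def q_t(xs, t):
    """Q_t = inf{n >= 0 : S_n >= t}: first index where the partial sum reaches t."""
    s, n = 0.0, 0
    while s < t:
        s += xs[n]
        n += 1
    return n

def n_t(xs, t):
    """N_t = sup{n >= 0 : S_n <= t}: last index where the partial sum is still <= t."""
    s, n = 0.0, 0
    for x in xs:
        if s + x > t:
            break
        s += x
        n += 1
    return n

random.seed(0)
# Each purchase yields 1-3 coupons (an arbitrary illustrative distribution).
xs = [random.randint(1, 3) for _ in range(1000)]
t = 50
print(q_t(xs, t), n_t(xs, t))
```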
Abstract: Auditors are often faced with reviewing a sample drawn from special populations. One is the population where invoices are divided into two categories according to whether or not they are qualified; in other words, the qualified amount follows a nonstandard mixture distribution in which it is either zero, with a certain probability, or equal to the known invoice amount, with a certain probability. The other is the population where some invoices are partially qualified, i.e. have a qualified amount between zero and the full invoice amount. For these settings the typical sample design is stratified random sampling, with estimation by a ratio-type method. This paper focuses on efficient sample design for this setting and provides guidelines for setting up stratum boundaries, calculating sample size and allocating the sample optimally across strata.
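The paper's specific allocation rules are not reproduced in the abstract; a standard starting point for optimal allocation across strata is Neyman allocation, sketched below with hypothetical invoice strata (stratum sizes and standard deviations are made-up illustrative numbers).

```python
def neyman_allocation(N_h, S_h, n_total):
    """Neyman allocation: split n_total across strata proportionally to N_h * S_h,
    where N_h is the stratum size and S_h the stratum standard deviation."""
    weights = [N * S for N, S in zip(N_h, S_h)]
    total = sum(weights)
    raw = [n_total * w / total for w in weights]
    # Round down, then hand the remainder to the largest fractional parts.
    alloc = [int(r) for r in raw]
    remainder = n_total - sum(alloc)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[:remainder]:
        alloc[i] += 1
    return alloc

# Hypothetical strata: many small invoices, few large and variable ones.
N_h = [500, 300, 120, 40]
S_h = [20.0, 60.0, 150.0, 400.0]
print(neyman_allocation(N_h, S_h, 100))  # → [16, 29, 29, 26]
```

Note how the smallest stratum receives a disproportionately large share of the sample because its within-stratum variability dominates.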
Influential observations pose a major threat to the performance of a regression model. Several influence statistics, including Cook’s Distance and DFFITS, have been introduced in the literature using Ordinary Least Squares (OLS). The efficiency of these measures is affected by the presence of multicollinearity in linear regression, yet both problems can jointly exist in a regression model. New diagnostic measures based on the Two-Parameter Liu-Ridge Estimator (TPE) defined by Ozkale and Kaciranlar (2007) are proposed as alternatives to the existing ones. Approximate deletion formulas for the detection of influential cases under the TPE are derived. Finally, the diagnostic measures are illustrated with two real-life datasets.
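The TPE-based diagnostics are not given in the abstract, but the classical OLS baseline they extend, Cook's Distance, is standard. A minimal sketch for simple linear regression, using the closed-form leverage h_ii = 1/n + (x_i − x̄)²/Sxx (the toy data below are illustrative):

```python
def cooks_distance(x, y):
    """Cook's distance for each point in a simple (one-predictor) OLS regression:
    D_i = (e_i^2 / (p * s^2)) * h_ii / (1 - h_ii)^2, with p = 2 parameters."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    p = 2
    s2 = sum(e * e for e in resid) / (n - p)          # residual variance
    h = [1 / n + (xi - mx) ** 2 / sxx for xi in x]    # leverages
    return [(e * e / (p * s2)) * (hi / (1 - hi) ** 2) for e, hi in zip(resid, h)]

# A clean line y = 2x with one gross outlier at the last point.
x = list(range(10))
y = [2 * i for i in range(10)]
y[9] = 60
print(cooks_distance(x, y))
```

The outlying last point combines a large residual with high leverage, so its Cook's distance dominates the rest.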
Abstract: The additive model is widely recognized as an effective tool for dimension reduction. Existing methods for estimating an additive regression function, including backfitting, marginal integration, projection and spline methods, do not provide any level of uniform confidence. In this paper a simple construction of a confidence band for the additive regression function is proposed, based on polynomial spline estimation and the wild bootstrap. Monte Carlo results show three desirable properties of the proposed band: excellent coverage of the true function, width shrinking rapidly to zero with increasing sample size, and minimal computing time. These properties make the procedure highly recommended for nonparametric regression with confidence when additive modelling is appropriate.
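The paper's spline construction is not reproduced here, but the wild-bootstrap ingredient can be sketched generically: residuals are perturbed by mean-zero, unit-variance multipliers (Rademacher weights below) and the model is refit to each perturbed response. The sketch uses a plain least-squares line in place of the paper's polynomial spline, and the band is pointwise, not uniform; both are simplifying assumptions.

```python
import random

def ls_line(x, y):
    """Fitted values from an ordinary least-squares straight line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    return [a + b * xi for xi in x]

def wild_bootstrap_band(x, y, fit, B=500, alpha=0.05):
    """Pointwise (1 - alpha) confidence band for a regression fit via wild bootstrap."""
    yhat = fit(x, y)
    resid = [yi - fi for yi, fi in zip(y, yhat)]
    boots = []
    for _ in range(B):
        # Rademacher multipliers: +1 or -1, each with probability 1/2.
        ystar = [fi + ri * random.choice((-1.0, 1.0)) for fi, ri in zip(yhat, resid)]
        boots.append(fit(x, ystar))
    k_lo = int((alpha / 2) * B)
    k_hi = int((1 - alpha / 2) * B) - 1
    lower, upper = [], []
    for j in range(len(x)):
        vals = sorted(b[j] for b in boots)
        lower.append(vals[k_lo])
        upper.append(vals[k_hi])
    return lower, yhat, upper

random.seed(1)
x = [i / 20 for i in range(21)]
y = [2 + 3 * xi + random.gauss(0, 0.3) for xi in x]
lo, fit_vals, hi = wild_bootstrap_band(x, y, ls_line)
print(lo[0], fit_vals[0], hi[0])
```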
Abstract: Frequentist Null Hypothesis Significance Testing (NHST) is such an integral part of scientists’ behavior that its use cannot simply be discontinued by flinging it out of the window. Faced with this situation, the suggested strategy for training students and researchers in statistical inference methods for experimental data analysis involves a smooth transition towards the Bayesian paradigm. Its general outlines are as follows: (1) present natural Bayesian interpretations of NHST outcomes to draw attention to their shortcomings; (2) thereby create the need for a change of emphasis in the presentation and interpretation of results; (3) finally, equip users with a real possibility of thinking sensibly about statistical inference problems and behaving in a more reasonable manner. The conclusion is that teaching the Bayesian approach in the context of experimental data analysis appears both desirable and feasible. This feasibility is illustrated for analysis of variance methods.
In this paper, we propose another extension of the inverse Lindley distribution, called the extended inverse Lindley, and study its fundamental properties such as moments, inverse moments, mean deviation, stochastic ordering and entropy. The flexibility of the proposed distribution is shown by studying the monotonicity properties of its density and hazard functions. It is shown that the distribution belongs to the family of upside-down bathtub-shaped distributions. Maximum likelihood estimators along with asymptotic confidence intervals are constructed for estimating the unknown parameters. An algorithm is presented for random number generation from the distribution. The consistency of the MLEs is verified on the basis of simulated samples. The applicability of the extended inverse Lindley distribution is illustrated by means of a real data analysis.
Abstract: The actions of the anonymous banker in the high-stakes television gambling programme Deal or No Deal are examined. If a model can successfully predict his behaviour, it might suggest that an automatic process is employed to reach his decisions. Potential strategies associated with a number of games are investigated and a model is developed for the offers the anonymous banker makes to buy out the player. This approach is developed into a strategy for selecting the optimum stage at which a player should accept the money offered. This is reduced to a simple table; by knowing their current position, players can rapidly arrive at an appropriate decision strategy with associated probabilities. These probabilities give a guide as to the confidence to be placed in the choice adopted.
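The paper's fitted offer model is not given in the abstract. The simplest hypothetical rule of this kind, an offer set at a fixed fraction of the expected value of the remaining unopened boxes, can be sketched as follows; `generosity` is an illustrative parameter, not an estimate from the paper.

```python
def mean_remaining(amounts):
    """Expected value of the remaining unopened boxes (uniform over boxes)."""
    return sum(amounts) / len(amounts)

def banker_offer(amounts, generosity=0.7):
    """Hypothetical offer rule: a fixed fraction of the expected remaining value."""
    return generosity * mean_remaining(amounts)

# Five boxes left in a hypothetical game position.
remaining = [1, 100, 1000, 50000, 250000]
print(banker_offer(remaining))  # → 42154.14 (0.7 * 60220.2)
```

A player comparing this offer with the expected remaining value, and with the spread of outcomes, faces exactly the accept/continue decision the paper tabulates.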