Abstract: The probability that an estimator exactly equals the value of the parameter it estimates is zero; hence in practical applications point estimates are reported together with their estimated standard errors. When a random variable has heavier or thinner tails than the normal distribution, the confidence intervals common in the literature are not applicable. In this study we obtain confidence procedures for the parameters of the generalized normal distribution, which remain valid whether the tails are heavier or thinner than those of the normal distribution, using a pivotal-quantities approach on the basis of a random sample of fixed size n. Simulation studies and applications are also examined.
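As a rough illustration of the pivotal-quantity idea (not the paper's exact procedure), the sketch below calibrates the studentized pivot by Monte Carlo under a generalized normal model with an assumed known shape parameter beta; all parameter values are illustrative.

```python
# Sketch: Monte Carlo calibration of the pivot T = xbar / (s / sqrt(n)) under a
# generalized normal (exponential power) model with known shape beta. This
# illustrates the pivotal-quantity idea, not the authors' exact procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta, n, alpha = 1.5, 30, 0.05            # assumed shape, sample size, level

# Simulate the null distribution of the pivot under gennorm(beta, loc=0, scale=1).
reps = 20000
samples = stats.gennorm.rvs(beta, size=(reps, n), random_state=rng)
pivots = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))
lo_q, hi_q = np.quantile(pivots, [alpha / 2, 1 - alpha / 2])

# Interval for the location mu from one observed sample x.
x = stats.gennorm.rvs(beta, loc=10.0, scale=2.0, size=n, random_state=rng)
se = x.std(ddof=1) / np.sqrt(n)
ci = (x.mean() - hi_q * se, x.mean() - lo_q * se)
print(ci)
```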
Abstract: In public health, demography and sociology, large-scale surveys often follow a hierarchical data structure, as the surveys are based on multistage stratified cluster sampling. The appropriate approach to analyzing such survey data must therefore account for the nested sources of variability arising at the different levels of the hierarchy. When the residual errors are correlated between individual observations as a result of these nested structures, traditional logistic regression is inappropriate. We use the binary contraceptive-use data from the 2004 Bangladesh Demographic and Health Survey (BDHS), a multistage stratified cluster dataset, to exemplify all aspects of working with multilevel logistic regression models: model conceptualization, model description, understanding the structure of the required multilevel data, estimation of the model via the statistical package MLwiN, comparison between different estimation methods, and investigation of selected determinants of contraceptive use.
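The sketch below fits a two-level random-intercept logistic model, the simplest instance of the class discussed, on simulated data; it uses statsmodels rather than MLwiN, and the variable names (age, cluster, use) are illustrative stand-ins for the BDHS variables.

```python
# Sketch: a two-level random-intercept logistic model on simulated data,
# analogous in spirit to fitting contraceptive-use data in MLwiN.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n_clusters, m = 50, 40                      # sampling clusters and cluster size
u = rng.normal(0, 0.8, n_clusters)          # cluster-level random intercepts
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(n_clusters), m),
    "age": rng.normal(30, 6, n_clusters * m),
})
logit = -1.0 + 0.05 * (df["age"] - 30) + u[df["cluster"]]
df["use"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The level-2 variance enters through the random-intercept term in vc_formulas.
model = BinomialBayesMixedGLM.from_formula(
    "use ~ age", {"cluster": "0 + C(cluster)"}, df)
result = model.fit_vb()                     # variational Bayes fit
print(result.summary())
```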
Abstract: Value at Risk (VaR) plays a central role in risk management. There are several approaches to the estimation of VaR, such as historical simulation, the variance-covariance (also known as analytical) approach, and the Monte Carlo approach. Whereas the first approach does not assume any distribution, the last two require the joint distribution to be known, which in the analytical approach is frequently the normal distribution. The copula theory is a fundamental tool in modeling multivariate distributions: it allows the joint distribution to be defined through the marginal distributions and the dependence between the variables. Recently the copula theory has been extended to the conditional case, allowing copulae to be used to model dynamic structures. Time variation in the first and second conditional moments is widely discussed in the literature, so allowing time variation in the conditional dependence seems natural. This work presents some concepts and properties of copula functions and an application of the copula theory to the estimation of the VaR of a portfolio composed of the Nasdaq and S&P500 stock indices.
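A minimal sketch of the unconditional version of this idea: fit a static Gaussian copula with empirical marginals to two return series and read off the portfolio VaR from simulation. The time-varying conditional copula of the paper is not reproduced, and the return series here are simulated placeholders.

```python
# Sketch: one-day portfolio VaR for two indices via a static Gaussian copula
# with empirical marginals; r_nasdaq / r_sp500 stand in for the return series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder returns; in practice load Nasdaq and S&P500 log-returns.
r_nasdaq = rng.standard_t(5, 1000) * 0.015
r_sp500 = 0.7 * r_nasdaq + rng.standard_t(5, 1000) * 0.010

def to_normal_scores(x):
    # Empirical CDF -> uniform -> standard normal (the copula transform).
    u = stats.rankdata(x) / (len(x) + 1)
    return stats.norm.ppf(u)

z = np.column_stack([to_normal_scores(r_nasdaq), to_normal_scores(r_sp500)])
rho = np.corrcoef(z.T)                        # copula correlation estimate

# Simulate from the fitted copula, then map back through empirical quantiles.
sims = rng.multivariate_normal([0, 0], rho, size=50000)
u_sim = stats.norm.cdf(sims)
port = 0.5 * np.quantile(r_nasdaq, u_sim[:, 0]) + \
       0.5 * np.quantile(r_sp500, u_sim[:, 1])
var_99 = -np.quantile(port, 0.01)             # 99% VaR, equally weighted portfolio
print(var_99)
```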
Abstract: Given processes that assign binary vectors to data, one wishes to test models that simulate those processes and to uncover groupings among the processes. It is shown that a suitable test can be derived from a kappa-type agreement measure. This is applied to the analysis of stress placement in spoken phrases, based on experimental data previously obtained. The processes were Portuguese speakers, and the groupings correspond to the Brazilian and European varieties of that language. Optimality Theory gave rise to the different models. The agreement measure was successful in indicating the relative fit of the models to the language varieties.
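A minimal sketch of a kappa-type agreement computation for binary vectors, assuming Cohen's kappa as the agreement measure; the stress patterns shown are invented for illustration.

```python
# Sketch: Cohen's kappa between a model's predicted stress pattern and a
# speaker's observed pattern, both as binary vectors (1 = stressed syllable).
# A kappa near 1 indicates the model fits that speaker/variety well.
import numpy as np

def cohen_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                          # observed agreement
    p1a, p1b = a.mean(), b.mean()
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)        # chance agreement
    return (po - pe) / (1 - pe)

model_pred = [1, 0, 0, 1, 0, 1, 0, 0]
speaker_obs = [1, 0, 0, 1, 0, 0, 1, 0]
print(cohen_kappa(model_pred, speaker_obs))
```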
Abstract: Current trends in Northern Hemisphere and Central England temperatures are estimated using a variety of statistical signal extraction and filtering techniques, and their extrapolations are compared with the predictions from coupled atmospheric-ocean general circulation models. Earlier warming trend epochs are also analysed and compared with the current warming trend, suggesting that the long-run patterns of temperature trends should also be considered alongside the current emphasis on global warming.
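As one concrete example of such a signal extraction technique, the sketch below applies the Hodrick-Prescott filter to a simulated annual temperature series; the filter choice and smoothing parameter are illustrative assumptions, not the paper's specification.

```python
# Sketch: extracting a smooth temperature trend with the Hodrick-Prescott
# filter, one of many signal-extraction choices; the series is simulated
# rather than the Central England temperature record.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(3)
years = np.arange(1900, 2010)
temp = 9.0 + 0.005 * (years - 1900) + rng.normal(0, 0.4, len(years))

cycle, trend = hpfilter(temp, lamb=100)   # annual data -> smaller lambda than 1600
print(trend[-5:])                         # recent trend values
```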
Abstract: Functional magnetic resonance imaging (fMRI) has, since its description fifteen years ago, become the most common in-vivo neuroimaging technique. fMRI allows the identification of brain areas related to specific tasks through statistical analysis of the BOLD (blood oxygenation level dependent) signal. Classically, the observed BOLD signal is compared to an expected haemodynamic response function (HRF) using a general linear model (GLM). However, the results of the GLM rely on the HRF specification, which is usually determined in an ad hoc fashion. For periodic experimental designs, we propose a multisubject frequency domain brain mapping which requires only the stimulation frequency and consequently avoids subjective choices of HRF. We present computational simulations that demonstrate the good performance of the proposed approach in short time series. In addition, an application to real fMRI datasets is also presented.
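A minimal sketch of the frequency domain idea: compare periodogram power at the stimulation frequency with the power at the remaining frequencies. The simple power ratio used below is a stand-in for the paper's actual test statistic.

```python
# Sketch: detecting a periodic BOLD response by comparing periodogram power
# at the stimulation frequency with power at the other frequencies.
import numpy as np

rng = np.random.default_rng(4)
n, f_stim = 128, 8                 # series length and stimulation frequency index
t = np.arange(n)
bold = 0.5 * np.sin(2 * np.pi * f_stim * t / n) + rng.normal(0, 1, n)

pgram = np.abs(np.fft.rfft(bold)) ** 2 / n
others = np.delete(pgram[1:], f_stim - 1)       # exclude DC and target frequency
ratio = pgram[f_stim] / others.mean()           # large ratio -> activated voxel
print(ratio)
```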
Abstract: This paper introduces a visualization technique, SEER, developed for policy makers and researchers to graphically analyze and explore massive amounts of categorical data collected in longitudinal surveys. This technique (a) produces panels of graphs for multiple group analysis, where the groups do not have to be mutually exclusive, (b) profiles change patterns observed in longitudinal data, and (c) clusters data into groups to enable policy makers or researchers to observe the factors associated with the changing patterns. This paper also includes the hash function of the SEER method, expressed in matrix notation so that it can be implemented across computer packages. The SEER technique is illustrated using a national survey, the Survey of Doctorate Recipients (SDR), administered by the National Science Foundation (NSF). Occupational changes and career paths for a panel sample of 14,901 doctorate recipients are profiled and discussed. Results indicate that doctorate recipients in some science and engineering fields are roughly twice as likely to work in an occupation when it matches the discipline in which they received their doctorates.
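The paper gives SEER's hash function explicitly; the sketch below is only a hypothetical illustration of the general device, a mixed-radix positional encoding written as an inner product, which maps a categorical response pattern across survey waves to a single cell index.

```python
# Hypothetical illustration only: SEER's actual hash function is defined in
# the paper. A mixed-radix positional encoding shows how a categorical
# longitudinal pattern can be mapped to one integer via an inner product.
import numpy as np

k = 4                                   # assumed number of categories per wave
waves = 5                               # assumed number of survey waves
weights = k ** np.arange(waves)         # positional weights, as a vector

pattern = np.array([2, 0, 3, 1, 2])     # one respondent's category per wave
cell = int(pattern @ weights)           # inner product -> unique cell index
print(cell)                             # invertible: decode with divmod by k
```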
Abstract: An empirical study is employed to investigate the performance of implied GARCH models in option pricing. The implied GARCH models are established by either the Esscher transform or the extended Girsanov principle. The empirical P-martingale simulation is adopted to compute the options efficiently. The empirical results show that: (i) the implied GARCH models obtain accurate standard option prices even when the innovations are conveniently assumed to be normally distributed; (ii) the Esscher transform describes the data better than the extended Girsanov principle; (iii) significant model risk arises when an implied GARCH model with improperly specified innovations is used in exotic option pricing.
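As a hedged sketch of implied GARCH option pricing: with normal innovations the conditional Esscher transform reduces to shifting the conditional mean to r - h/2 (Duan's risk-neutral GARCH dynamics), which the simulation below uses directly. The paper's empirical P-martingale simulation is not reproduced, and all parameter values are assumptions.

```python
# Sketch: pricing a European call under a GARCH(1,1) model with normal
# innovations, simulated directly under the risk-neutral dynamics.
import numpy as np

rng = np.random.default_rng(5)
omega, alpha, beta = 1e-6, 0.05, 0.90         # assumed GARCH(1,1) parameters
r, s0, strike, T = 0.0001, 100.0, 100.0, 60   # daily rate, spot, strike, days
paths = 50000

h = np.full(paths, omega / (1 - alpha - beta))   # start at stationary variance
logs = np.full(paths, np.log(s0))
for _ in range(T):
    z = rng.standard_normal(paths)
    logs += r - 0.5 * h + np.sqrt(h) * z          # risk-neutral log-return
    h = omega + alpha * h * z ** 2 + beta * h     # GARCH variance recursion

payoff = np.maximum(np.exp(logs) - strike, 0.0)
price = np.exp(-r * T) * payoff.mean()            # discounted expected payoff
print(price)
```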
Abstract: Chen, Bunce and Jiang [In: Proceedings of the International Conference on Computational Intelligence and Software Engineering, pp. 1-4] claim to have proposed a new extreme value distribution. But the formulas given for the distribution do not form a valid probability distribution. Here, we correct their formulas to form a valid probability distribution. For this valid distribution, we provide a comprehensive treatment of mathematical properties, estimate parameters by the method of maximum likelihood and provide the observed information matrix. The flexibility of the distribution is illustrated using a real data set.
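The corrected distribution's formulas are given in the paper; as a generic stand-in, the sketch below illustrates maximum-likelihood estimation for an extreme value model using scipy's generalized extreme value family. The observed information matrix would be the Hessian of the negative log-likelihood evaluated at the MLE.

```python
# Sketch: maximum-likelihood fitting of an extreme value model. The GEV
# family is a stand-in; the corrected Chen-Bunce-Jiang distribution itself
# would need its density coded explicitly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = stats.genextreme.rvs(-0.1, loc=10, scale=2, size=200, random_state=rng)

shape, loc, scale = stats.genextreme.fit(data)   # MLE of all three parameters
loglik = np.sum(stats.genextreme.logpdf(data, shape, loc, scale))
print(shape, loc, scale, loglik)
```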
Abstract: A seasonal additive nonlinear vector autoregression (SANVAR) model is proposed for multivariate seasonal time series to explore possible interactions among the various univariate series. Significant lagged variables are selected and the additive autoregression functions are estimated from the selected variables using a spline smoothing method. Conservative confidence bands are constructed for the additive autoregression functions. The model is fitted to two sets of bivariate quarterly unemployment rate data, with comparisons made to the linear periodic vector autoregression model. It is found that when the data do not deviate significantly from linearity, the periodic model is preferred. In cases of strong nonlinearity, however, the additive model is more parsimonious and has much higher out-of-sample prediction power. In addition, interactions among the various univariate series are automatically detected.
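A much-simplified sketch of the additive autoregression step: estimate y_t = f1(y_{t-1}) + f2(x_{t-1}) + e_t by least squares on truncated-power cubic spline bases. The variable selection, seasonality, and confidence bands of SANVAR are omitted, and the data are simulated.

```python
# Sketch: additive autoregression via spline basis expansion and least squares,
# in the spirit of (though much simpler than) the SANVAR smoothing step.
import numpy as np

def spline_basis(x, n_knots=5):
    # Cubic truncated-power basis (no intercept): x, x^2, x^3, (x - k)_+^3.
    knots = np.quantile(x, np.linspace(0.1, 0.9, n_knots))
    cols = [x, x**2, x**3] + [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(7)
n = 400
x = rng.normal(size=n)                       # second series of the bivariate pair
y = np.zeros(n)
for t in range(1, n):                        # nonlinear dependence on own lag
    y[t] = np.tanh(1.5 * y[t - 1]) + 0.3 * x[t - 1] ** 2 + 0.3 * rng.normal()

B = np.column_stack([np.ones(n - 1), spline_basis(y[:-1]), spline_basis(x[:-1])])
coef, *_ = np.linalg.lstsq(B, y[1:], rcond=None)
fitted = B @ coef
print(np.mean((y[1:] - fitted) ** 2))        # in-sample residual variance
```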