Abstract: An empirical study is employed to investigate the performance of implied GARCH models in option pricing. The implied GARCH models are established by either the Esscher transform or the extended Girsanov principle. The empirical P-martingale simulation is adopted to compute the option prices efficiently. The empirical results show that: (i) the implied GARCH models obtain accurate standard option prices even when the innovations are conveniently assumed to be normally distributed; (ii) the Esscher transform describes the data better than the extended Girsanov principle; (iii) significant model risk arises when an implied GARCH model with improperly specified innovations is used for exotic option pricing.
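As a point of reference, the minimal sketch below (with illustrative GARCH(1,1) parameters, normal innovations and a plain European call, all assumptions of ours) shows a basic empirical martingale correction in the spirit of Duan and Simonato; the paper's empirical P-martingale simulation is a refinement of this idea, not reproduced here.

```python
import numpy as np

# Sketch: simulate risk-neutral GARCH(1,1) log-returns under normal
# innovations, then apply an empirical martingale correction so the
# discounted sample mean of the terminal price matches the forward.
# All parameter values below are assumptions for demonstration only.
rng = np.random.default_rng(0)
S0, r, T_steps = 100.0, 0.0002, 30           # spot, daily rate, days to expiry
omega, alpha, beta = 1e-6, 0.05, 0.90        # assumed GARCH(1,1) parameters
n_paths, K = 10_000, 100.0                   # simulated paths, strike

h = np.full(n_paths, omega / (1 - alpha - beta))   # start at stationary variance
logS = np.full(n_paths, np.log(S0))
for t in range(T_steps):
    z = rng.standard_normal(n_paths)
    logS += r - 0.5 * h + np.sqrt(h) * z           # risk-neutral drift
    h = omega + alpha * h * z**2 + beta * h        # GARCH variance update

ST = np.exp(logS)
# Single terminal adjustment for brevity; the full scheme adjusts every step.
ST *= S0 * np.exp(r * T_steps) / ST.mean()
call = np.exp(-r * T_steps) * np.maximum(ST - K, 0).mean()
print(f"EMS call price: {call:.4f}")
```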
Abstract: Chen, Bunce and Jiang [In: Proceedings of the International Conference on Computational Intelligence and Software Engineering, pp. 1-4] claim to have proposed a new extreme value distribution. But the formulas given for the distribution do not form a valid probability distribution. Here, we correct their formulas to form a valid probability distribution. For this valid distribution, we provide a comprehensive treatment of mathematical properties, estimate parameters by the method of maximum likelihood and provide the observed information matrix. The flexibility of the distribution is illustrated using a real data set.
Abstract: A seasonal additive nonlinear vector autoregression (SANVAR) model is proposed for multivariate seasonal time series to explore the possible interactions among the various univariate series. Significant lagged variables are selected and additive autoregression functions are estimated based on the selected variables using the spline smoothing method. Conservative confidence bands are constructed for the additive autoregression functions. The model is fitted to two sets of bivariate quarterly unemployment rate data, with comparisons made to the linear periodic vector autoregression model. It is found that when the data do not deviate significantly from linearity, the periodic model is preferred. In cases of strong nonlinearity, however, the additive model is more parsimonious and has much higher out-of-sample prediction power. In addition, interactions among the various univariate series are automatically detected.
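As a rough illustration of the additive-spline idea (not the authors' implementation; the data and single-lag design below are synthetic stand-ins), each selected lag can be given its own spline basis and the bases combined linearly, which is exactly an additive model:

```python
import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 400
x1 = rng.standard_normal(n).cumsum() * 0.1                        # placeholder series 1
x2 = np.sin(np.arange(n) * np.pi / 2) + 0.2 * rng.standard_normal(n)  # seasonal series 2

# Regress y_t additively on the selected lags (x1_{t-1}, x2_{t-1}).
y = x2[2:]
X = np.column_stack([x1[1:-1], x2[1:-1]])

# SplineTransformer expands each lag into its own spline basis; a linear
# fit on those bases gives the additive autoregression functions.
model = make_pipeline(SplineTransformer(n_knots=6, degree=3), LinearRegression())
model.fit(X, y)
print("in-sample R^2:", model.score(X, y))
```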
Abstract: The current study examines the performance of cluster analysis with dichotomous data using distance measures based on response pattern similarity. In many contexts, such as educational and psychological testing, cluster analysis is a useful means for exploring datasets and identifying underlying groups among individuals. However, standard approaches to cluster analysis assume that the variables used to group observations are continuous in nature. This paper focuses on four methods for calculating distance between individuals using dichotomous data, and the subsequent introduction of these distances to a clustering algorithm such as Ward’s. The four methods in question are potentially useful for practitioners because they are relatively easy to carry out using standard statistical software such as SAS and SPSS, and have been shown to have potential for correctly grouping observations based on dichotomous data. Results of both a simulation study and application to a set of binary survey responses show that three of the four measures behave similarly, and can yield correct cluster recovery rates of between 60% and 90%. Furthermore, these methods were found to work better, in nearly all cases, than using the raw data with Ward’s clustering algorithm.
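The abstract does not name the four distance measures, so the sketch below uses four binary measures available in SciPy as stand-ins; the point is the workflow of feeding a precomputed binary distance matrix into Ward's algorithm:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative binary (dis)similarity measures fed into Ward's algorithm.
# The measures studied in the paper may differ; these are SciPy stand-ins.
rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(50, 20))      # 50 respondents, 20 binary items

for metric in ("jaccard", "hamming", "dice", "rogerstanimoto"):
    d = pdist(items, metric=metric)            # condensed distance matrix
    Z = linkage(d, method="ward")              # Ward's algorithm on d
    labels = fcluster(Z, t=3, criterion="maxclust")
    print(metric, np.bincount(labels)[1:])     # cluster sizes
```

Note that Ward's method is formally defined for Euclidean distances; applying it to precomputed binary distances, as the paper describes, is a pragmatic choice whose recovery rates the simulation study evaluates.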
Abstract: For many years actuaries and demographers have been fitting curves to age-specific mortality data. We use the eight-parameter Heligman-Pollard (HP) empirical law to fit the mortality curve. It consists of three nonlinear curves: child mortality, mid-life mortality and adult mortality. It is now well known that the eight unknown parameters in the HP law are difficult to estimate because numerical algorithms generally do not converge when the model is fitted. We consider a novel idea: fit the three curves (nonlinear splines) separately, and then connect them smoothly at the two knots. To connect the curves smoothly, we express uncertainty about the knots because these curves do not have turning points. We have important prior information about the location of the knots, and this helps with the estimation convergence problem. Thus, the Bayesian paradigm is particularly attractive. We present the theory, method and application of our approach. We discuss estimation of the curve for English and Welsh mortality data. We also make comparisons with a recent Bayesian method.
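For readers unfamiliar with it, the eight-parameter HP law (the first Heligman-Pollard formula, standard in the literature and not reproduced in the abstract) writes the odds of death at age x as the sum of the three component curves:

\[
\frac{q_x}{1-q_x} \;=\; A^{(x+B)^{C}} \;+\; D\exp\!\bigl\{-E\,(\ln x-\ln F)^{2}\bigr\} \;+\; G\,H^{x},
\]

where \(q_x\) is the probability of death at age \(x\); the first term captures child mortality, the second the mid-life (accident) hump and the third adult mortality, giving the eight parameters \(A,\dots,H\).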
Abstract: This study explored the association between the use of Internet services and quality of life in Taiwan. The use of broadband, wireless and mobile Internet is found to be positively correlated with people’s overall quality of life. The more the e-Government Internet services are used, the higher the satisfaction with socio-economic status and social competence. People who use more Internet services in their daily activities also have higher self-esteem and less psychological pressure. However, people who rely heavily on Internet services for e-Business, such as online shopping or ticket booking, have lower satisfaction with community support.
Abstract: Although credit scoring models have been widely applied, one important variable, the Merchant Category Code (MCC), is sometimes misused. MCC misuse may cause errors in credit scoring systems. The present study develops and deploys an MCC misuse detection system built on ensemble models, gives insights into the development process, and compares different machine learning methods. XGBoost exhibited the best performance, with overall error, sensitivity, specificity, F1 score, AUC and PR-AUC of 0.1095, 0.7777, 0.9672, 0.8518, 0.9095 and 0.9090, respectively. MCC misuse by merchants can be predicted with satisfactory accuracy using our ensemble-based detection system. The paper thus not only shows that MCC misuse cannot be overlooked but also helps researchers and practitioners apply ensemble machine learning based detection systems to similar problems.
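A minimal sketch of such an XGBoost-based detector together with the reported metrics, using synthetic stand-in features and labels (the study's transaction data and feature set are not described in the abstract):

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (f1_score, roc_auc_score,
                             average_precision_score, confusion_matrix)

# Hypothetical stand-in for the proprietary transaction features and
# misuse labels used in the study.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(5000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("F1:", f1_score(y_te, pred))
print("AUC:", roc_auc_score(y_te, prob))
print("PR-AUC:", average_precision_score(y_te, prob))
```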
Abstract: Partial Least Squares Discriminant Analysis (PLSDA) is a statistical method for classification; it consists of a classical partial least squares regression in which the dependent variable is a categorical one expressing the class membership of each observation. The aim of this study is twofold: to analyze the performance of the PLSDA method in correctly classifying the 28 European Union (EU) member countries and 7 candidate countries (Albania, Montenegro, Serbia, Macedonia FYR and Turkey, together with the potential candidates Bosnia and Herzegovina and Kosova) into their pre-defined classes (candidate or member), and to determine the economic and/or demographic indicators that are effective in this classification, using a data set obtained from the World Bank database.
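A minimal PLSDA sketch under our own assumptions (a synthetic indicator matrix, a single dummy-coded membership variable regressed with sklearn's PLSRegression, and a 0.5 cut-off for class assignment):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLSDA as PLS regression on a dummy-coded class indicator, with class
# assignment by thresholding the predicted score.  Data are synthetic
# stand-ins for the World Bank indicators used in the study.
rng = np.random.default_rng(0)
X = rng.standard_normal((35, 10))        # 35 countries, 10 indicators
y = np.array([1] * 28 + [0] * 7)         # 1 = member, 0 = candidate

pls = PLSRegression(n_components=2)
pls.fit(X, y)                            # regress the class indicator on X
y_hat = (pls.predict(X).ravel() >= 0.5).astype(int)
print("apparent classification rate:", (y_hat == y).mean())
```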
Abstract: The shape parameter of a symmetric probability distribution is often more difficult to estimate accurately than the location and scale parameters. In this paper, we suggest an intuitive but innovative matching quantile estimation method for this parameter. The proposed shape parameter estimate is obtained by setting its value to a level such that the central 1-1/n portion of the distribution just covers all n observations, while the location and scale parameters are estimated using existing methods such as maximum likelihood (ML). This hybrid estimator is proved to be consistent and is illustrated with two distributions, namely the Student-t and the Exponential Power. Simulation studies show that the hybrid method provides reasonably accurate estimates. In the presence of extreme observations, this method provides thicker tails than the full ML method and protects inference on the location and scale parameters. This feature of the hybrid method is also demonstrated in an empirical study using two real data sets.
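Our reading of the matching step, sketched for the Student-t case (the simulated data and the root-search bracket are our assumptions): choose the degrees of freedom so that the central 1-1/n probability band of the fitted distribution exactly reaches the most extreme standardized observation.

```python
import numpy as np
from scipy import stats, optimize

# Hybrid sketch: location and scale from full ML are retained, then the
# shape (degrees of freedom) is re-estimated by quantile matching.
x = stats.t.rvs(df=5, loc=2.0, scale=1.5, size=200, random_state=0)
n = len(x)

df_ml, loc, scale = stats.t.fit(x)            # ML estimates
z_max = np.max(np.abs(x - loc)) / scale       # furthest standardized observation

# Match: the (1 - 1/(2n)) quantile of the standardized t equals z_max,
# so the central 1 - 1/n band just covers all n observations.
g = lambda df: stats.t.ppf(1 - 1 / (2 * n), df) - z_max
lo_df, hi_df = 0.5, 200.0
if g(hi_df) > 0:      # extreme obs lighter-tailed than the near-normal quantile
    df_mq = hi_df
else:
    df_mq = optimize.brentq(g, lo_df, hi_df)
print("ML df:", round(df_ml, 2), " matching-quantile df:", round(df_mq, 2))
```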