Abstract: We introduce and study a new four-parameter lifetime model named the exponentiated generalized extended exponential distribution. The proposed model has the advantage of including as special cases the exponential and exponentiated exponential distributions, among others, and its hazard function can take the classic shapes: bathtub, inverted bathtub, increasing, decreasing and constant, among others. We derive some mathematical properties of the new model such as a representation for the density function as a double mixture of Erlang densities, explicit expressions for the quantile function, ordinary and incomplete moments, mean deviations, Bonferroni and Lorenz curves, generating function, Rényi entropy, density of order statistics and reliability. We use the maximum likelihood method to estimate the model parameters. Two applications to real data illustrate the flexibility of the proposed model.
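As a brief illustration of the quantile function and maximum likelihood machinery mentioned above, the following sketch simulates from and fits the exponentiated exponential distribution, one of the special cases noted in the abstract, with CDF F(x) = (1 - e^{-λx})^α. The sample size, true parameters, starting values, and optimizer choice are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from scipy.optimize import minimize

# Negative log-likelihood of the exponentiated exponential, a special case
# noted in the abstract: f(x) = a*lam*exp(-lam*x)*(1 - exp(-lam*x))**(a-1)
def negloglik(theta, x):
    a, lam = theta
    if a <= 0 or lam <= 0:
        return np.inf
    return -np.sum(np.log(a) + np.log(lam) - lam * x
                   + (a - 1) * np.log1p(-np.exp(-lam * x)))

rng = np.random.default_rng(0)
u = rng.uniform(size=200)
# Explicit quantile function Q(u) = -log(1 - u**(1/a)) / lam, used to simulate
a_true, lam_true = 2.0, 1.5
x = -np.log1p(-u**(1.0 / a_true)) / lam_true

res = minimize(negloglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print("MLE (alpha, lambda):", res.x)
```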
Abstract: This paper introduces a new four-parameter model called the Weibull Generalized Flexible Weibull extension (WGFWE) distribution, which exhibits a bathtub-shaped hazard rate. Some of its statistical properties are obtained, including ordinary and incomplete moments, quantile and generating functions, reliability and order statistics. The method of maximum likelihood is used for estimating the model parameters, and the observed Fisher information matrix is derived. We illustrate the usefulness of the proposed model by applications to real data.
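A minimal sketch of how an observed Fisher information matrix can be approximated numerically at the MLE, as the Hessian of the negative log-likelihood. Since the WGFWE density is not reproduced here, a standard two-parameter Weibull stands in, and the data, step size, and starting values are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Negative log-likelihood of a standard two-parameter Weibull, used here as
# a stand-in for the WGFWE likelihood (whose density is not given above).
def nll(theta, x):
    k, lam = theta
    if k <= 0 or lam <= 0:
        return np.inf
    z = x / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z**k)

def observed_information(f, theta, x, h=1e-5):
    """Observed information = Hessian of the negative log-likelihood,
    approximated with central finite differences."""
    p = len(theta)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            e_i = np.zeros(p); e_i[i] = h
            e_j = np.zeros(p); e_j[j] = h
            H[i, j] = (f(theta + e_i + e_j, x) - f(theta + e_i - e_j, x)
                       - f(theta - e_i + e_j, x) + f(theta - e_i - e_j, x)) / (4 * h**2)
    return H

rng = np.random.default_rng(1)
x = rng.weibull(1.8, size=300) * 2.0          # shape k=1.8, scale lam=2.0
mle = minimize(nll, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead").x
I_obs = observed_information(nll, mle, x)
print("standard errors:", np.sqrt(np.diag(np.linalg.inv(I_obs))))
```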
Abstract: This paper uses a structural time series methodology to test the notion of interconnectedness between the UK and the US credit markets. The empirical tests use data on premia for banking-sector credit default swaps (CDS) and cover the recent period of financial turmoil. The Kalman filter-based methodology is robust in the presence of limited convergence. The results clearly show long-term steady-state convergence in CDS premia between these two markets. This observation lends support to coordinated regulatory policy initiatives to deal with the crisis and offers suggestions for the sound operation of the international financial system.
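A minimal local level Kalman filter sketch, assuming the convergence question is posed on the spread between the two markets' CDS premia: under long-run convergence the filtered level should drift toward zero. The state equation, noise variances, and simulated series below are illustrative, not the paper's structural specification.

```python
import numpy as np

def kalman_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e6):
    """Kalman filter for the local level model
       y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t."""
    a, p = a0, p0
    filtered = np.empty(len(y))
    for t, yt in enumerate(y):
        p = p + sigma_eta2            # prediction step
        f = p + sigma_eps2            # prediction error variance
        k = p / f                     # Kalman gain
        a = a + k * (yt - a)          # update step
        p = (1 - k) * p
        filtered[t] = a
    return filtered

# Hypothetical spread between two CDS premium series that converge over time
rng = np.random.default_rng(2)
t = np.arange(500)
spread = 50 * np.exp(-t / 150) + rng.normal(0, 2, size=500)
level = kalman_local_level(spread, sigma_eps2=4.0, sigma_eta2=0.1)
print("filtered level at start/end:", level[0], level[-1])
```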
Abstract: t-tests are the standard method for detecting differentially expressed genes in situations with one control and one treatment, but they do not perform well when the control and treatment variances differ. In situations with a control and more than one treatment, it is common to apply analysis of variance followed by a Tukey and/or Duncan test to identify which treatment caused the difference. We propose a Bayesian approach for multiple comparison analysis which is very useful in the context of DNA microarray experiments. It uses a Dirichlet process prior and the Pólya urn scheme. It is a unified procedure (for cases with one or more treatments) which detects differentially expressed genes and identifies the treatments causing the difference. We use simulations to verify the performance of the proposed method and compare it with the usual methods. In cases with a control and one treatment, and with a control and more than one treatment followed by Tukey and Duncan tests, the method performs better when the variances are different. The method is applied to two real data sets, in which genes not detected by the usual methods are identified by the proposed method.
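The baseline problem the abstract criticizes can be illustrated directly: with unequal variances, the pooled t-test degrades while Welch's t-test (which does not pool variances) holds up better. This sketches only that baseline comparison, not the proposed Dirichlet process / Pólya urn procedure; the simulated gene expression values are assumptions.

```python
import numpy as np
from scipy import stats

# Simulated expression values for one gene: control vs. treatment with
# markedly different variances, the setting where the pooled t-test degrades
rng = np.random.default_rng(3)
control   = rng.normal(10.0, 0.5, size=8)
treatment = rng.normal(11.0, 3.0, size=8)

t_pooled, p_pooled = stats.ttest_ind(control, treatment, equal_var=True)
t_welch,  p_welch  = stats.ttest_ind(control, treatment, equal_var=False)
print(f"pooled t-test  p = {p_pooled:.4f}")
print(f"Welch's t-test p = {p_welch:.4f}")
```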
Abstract: This paper evaluates and compares the heterogeneous balance-variation order pair of any two decision-making trial and evaluation laboratory (DEMATEL) theories, in which one has a larger balance and a smaller variation, while the other has a smaller balance and a larger variation. To this end, the first author proposed a useful integrated validity index for evaluating any DEMATEL theory by combining Liu's balanced coefficient and Liu's variation coefficient. Applying this new validity index, three kinds of DEMATEL with the same direct relation matrix are compared: the traditional, the shrinkage, and the balance DEMATEL. Furthermore, a simple validity experiment is conducted. The results show that the balance DEMATEL has the best performance, and that the shrinkage DEMATEL performs better than the traditional DEMATEL.
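For context, the core computation shared by the traditional, shrinkage, and balance DEMATEL variants starts from a direct relation matrix D, normalizes it, and derives the total relation matrix T = N(I - N)^{-1}. The matrix below is hypothetical, and Liu's balanced and variation coefficients from the paper are not reproduced here.

```python
import numpy as np

# Hypothetical 4-factor direct relation matrix (expert influence ratings)
D = np.array([[0, 3, 2, 1],
              [1, 0, 2, 3],
              [2, 1, 0, 2],
              [3, 2, 1, 0]], dtype=float)

# Normalize by the largest row/column sum, then total relation T = N(I-N)^-1
s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
N = D / s
T = N @ np.linalg.inv(np.eye(len(D)) - N)

r = T.sum(axis=1)   # influence dispatched by each factor
c = T.sum(axis=0)   # influence received by each factor
print("prominence r+c:", r + c)
print("relation   r-c:", r - c)
```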
Abstract: Objectives: Exploratory Factor Analysis (EFA) is a very popular statistical technique for identifying potential latent structure underlying a set of observed indicator variables. EFA is used widely in the social sciences, business and finance, machine learning, and the health sciences, among others. Research has found that standard methods of estimating EFA model parameters do not work well when the sample size is relatively small (e.g., less than 50) and/or when the number of observed variables approaches the sample size in value. The purpose of the current study was to investigate and compare some alternative approaches to fitting EFA in the case of small samples and high-dimensional data. Results of both a small simulation study and an application of the methods to an intelligence test revealed that several alternative approaches designed to reduce the dimensionality of the observed variable covariance matrix worked very well in terms of recovering the population factor structure with EFA. Implications of these results for practice are discussed.
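A minimal EFA sketch in the small-sample regime described above (here n = 40 observations on 12 indicators driven by 2 latent factors), using standard maximum likelihood factor analysis as a baseline. The abstract does not specify the paper's alternative estimators, so all names and sizes below are illustrative.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate a small-sample, relatively high-dimensional factor structure
rng = np.random.default_rng(4)
n, p, k = 40, 12, 2
loadings = rng.normal(0, 1, size=(p, k))
factors = rng.normal(0, 1, size=(n, k))
X = factors @ loadings.T + rng.normal(0, 0.5, size=(n, p))

# Standard ML exploratory factor analysis with varimax rotation
fa = FactorAnalysis(n_components=k, rotation="varimax")
fa.fit(X)
print("estimated loading matrix shape:", fa.components_.T.shape)
```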
Abstract: This paper aims to propose a suitable statistical model for the age distribution of prostate cancer detection. Descriptive studies suggest the onset of prostate cancer after 37 years of age, with the maximum diagnosis age at around 70 years. The major deficiency of descriptive studies is that their results cannot be generalized to all types of populations, which usually face non-identical environmental conditions. The suitability of the proposed model is checked through different statistical tools such as the Akaike Information Criterion, the Bayesian Information Criterion, the Kolmogorov–Smirnov distance and the χ2 statistic. The maximum likelihood estimates of the parameters of the proposed model, along with their asymptotic confidence intervals, have been obtained for the considered real data set.
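A sketch of the kind of model comparison described: a handful of candidate lifetime distributions are fitted by maximum likelihood and scored by AIC, BIC, and the Kolmogorov-Smirnov distance. The simulated ages and the candidate set are assumptions, not the paper's data or proposed model.

```python
import numpy as np
from scipy import stats

# Hypothetical ages at prostate cancer detection (illustrative data only)
rng = np.random.default_rng(5)
ages = rng.gamma(shape=30, scale=2.3, size=250)
n = len(ages)

# Fit each candidate by ML, then compare AIC, BIC, and the KS statistic
for dist in (stats.gamma, stats.lognorm, stats.weibull_min):
    params = dist.fit(ages)
    ll = np.sum(dist.logpdf(ages, *params))
    k = len(params)
    aic = 2 * k - 2 * ll
    bic = k * np.log(n) - 2 * ll
    ks = stats.kstest(ages, dist.name, args=params).statistic
    print(f"{dist.name:12s} AIC={aic:8.1f} BIC={bic:8.1f} KS={ks:.4f}")
```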
Abstract: Objective: Financial fraud has been a big concern for many organizations across industries; billions of dollars are lost yearly because of this fraud, so businesses employ data mining techniques to address this continued and growing problem. This paper aims to review research studies conducted to detect financial fraud using data mining tools within one decade and to communicate the current trends to academic scholars and industry practitioners. Method: Various combinations of keywords were used to identify the pertinent articles. The majority of the articles were retrieved from ScienceDirect, but the search spanned other online databases (e.g., Emerald, Elsevier, World Scientific, IEEE, and Routledge - Taylor and Francis Group). Our search yielded a sample of 65 relevant articles (58 peer-reviewed journal articles and 7 conference papers). One fifth of the articles were found in Expert Systems with Applications (ESA), while about one tenth were found in Decision Support Systems (DSS). Results: 41 data mining techniques were used to detect fraud across different financial applications such as health insurance and credit cards. Logistic regression appeared to be the leading data mining tool for detecting financial fraud, with 13% usage. In general, supervised learning tools have been used more frequently than unsupervised ones. Financial statement fraud and bank fraud are the two largest financial applications investigated in this area, accounting for about 63%, which corresponds to 41 articles out of the 65 reviewed. Also, the two primary journal outlets for this topic are ESA and DSS. Conclusion: This review provides a fast and easy-to-use source for both researchers and professionals, classifies financial fraud applications into high-level and detailed-level frameworks, shows the most significant data mining techniques in this domain, and reveals the countries most exposed to financial fraud.
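Since logistic regression emerges as the most-used technique in the reviewed studies, a minimal fraud-classifier sketch follows. The synthetic features, labels, and class weighting are illustrative assumptions, not drawn from any reviewed study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic transaction features (e.g., amount, frequency, account age);
# the fraud labels below are generated from a known logistic model
rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=(n, 3))
logit = 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.5 * X[:, 2] - 2.0
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# class_weight="balanced" compensates for the rarity of fraud cases
clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```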