Search published articles |
Showing 123 results for Type of Study: Applied
Afsaneh Shokrani, Mohammad Khorashadizadeh, Volume 12, Issue 2 (3-2019)
Abstract
This paper first introduces the Kerridge inaccuracy measure as an extension of the Shannon entropy, and then rewrites the measure of past inaccuracy in terms of the quantile function. Some characterization results are obtained for lifetimes satisfying the proportional reversed hazard model, based on the quantile past inaccuracy measure. The class of lifetimes with increasing (decreasing) quantile past inaccuracy and some of its properties are also studied. Finally, the application of the quantile inaccuracy measure is illustrated with a real data example.
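For reference, the Kerridge inaccuracy of a density g relative to the true density f is usually written as

$$ K(f,g) = -\int f(x)\,\log g(x)\,dx, $$

which reduces to the Shannon entropy when g = f; the quantile past inaccuracy studied in the paper is a reversed-time, quantile-function analogue of this quantity.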
Freshteh Osmani, Ali Akbar Rasekhi, Volume 12, Issue 2 (3-2019)
Abstract
Missing values are a common problem in data analysis, so it is important to complete the data by estimating the missing values and to keep the analysis on track. Two approaches commonly used to deal with missing data are multiple imputation (MI) and inverse-probability weighting (IPW). In this study, a third approach, which combines MI and IPW, is introduced. The simulation results suggest that IPW/MI can have advantages over either alternative. Since ignoring missing values leads to incorrect analyses in most studies, especially in the medical field, using robust methods to analyse missing data properly is essential.
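As a rough illustration of how the two ideas can be combined, the sketch below (simulated data, an assumed missing-at-random mechanism, simple stochastic regression imputation, and averaging of point estimates; not the authors' exact IPW/MI procedure) weights complete cases by the inverse of an estimated observation probability and imputes the missing covariate several times.

```python
# Hypothetical IPW/MI sketch: all variable names and the data are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # covariate, partly missing
y = 1.0 + 2.0 * x + rng.normal(size=n)      # outcome
p_miss = 1 / (1 + np.exp(-(0.5 * y - 1)))   # missingness depends on y (MAR)
miss = rng.random(n) < p_miss
obs = ~miss

# IPW part: model P(observed | y) and weight complete cases by its inverse.
w_model = LogisticRegression().fit(y.reshape(-1, 1), obs.astype(int))
w = 1.0 / w_model.predict_proba(y[obs].reshape(-1, 1))[:, 1]

# MI part: m stochastic regression imputations of x given y.
imp_model = LinearRegression().fit(y[obs].reshape(-1, 1), x[obs])
sigma = np.std(x[obs] - imp_model.predict(y[obs].reshape(-1, 1)))
m, slopes = 20, []
for _ in range(m):
    x_imp = x.copy()
    x_imp[miss] = imp_model.predict(y[miss].reshape(-1, 1)) + rng.normal(0, sigma, miss.sum())
    weights = np.ones(n)
    weights[obs] = w                          # complete cases carry the IPW weights
    fit = LinearRegression().fit(x_imp.reshape(-1, 1), y, sample_weight=weights)
    slopes.append(fit.coef_[0])

print("IPW/MI pooled slope estimate:", np.mean(slopes))   # simple pooling of point estimates
```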
Zahra Ranginian, Maede Behfrouz, Abouzar Bazyari, Volume 12, Issue 2 (3-2019)
Abstract
In this paper, it is shown that using claims with a Pareto distribution to compute ruin probabilities can be detrimental to the managers of an insurance company. By computing the relative error of these claims, it is shown that estimating the mean claim size is not reliable in such insurance models. We show that the presence of Pareto-distributed claims in the excess of loss reinsurance model may be detrimental to the company's policyholders. Also, by computing the conditional expectation of the claims in this portfolio, it is shown that using Pareto-distributed claims is not suitable for estimating the claims. The conditional expectation of the claim random variable is computed by simulation for several statistical distributions, and the results are examined with real examples.
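The following short simulation (assumed Pareto parameters, not the paper's data) illustrates the underlying point: for heavy-tailed Pareto claims the relative error of the sample mean of the claims decays slowly, so the estimated mean claim size remains unreliable even for large portfolios.

```python
# Relative error of the claim-mean estimate under heavy-tailed Pareto claims.
import numpy as np

rng = np.random.default_rng(1)
alpha, xm = 1.5, 1.0                     # Pareto shape and scale (mean exists only for alpha > 1)
true_mean = alpha * xm / (alpha - 1)

for n in (100, 1_000, 10_000):
    reps = 2_000
    claims = xm * (1 - rng.random((reps, n))) ** (-1 / alpha)   # inverse-CDF sampling
    rel_err = np.abs(claims.mean(axis=1) - true_mean) / true_mean
    print(f"n={n:6d}  mean relative error of the claim-mean estimate: {rel_err.mean():.3f}")
```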
Ghasem Rekabdar, Rahim Chinipardaz, Behzad Mansouri, Volume 13, Issue 1 (9-2019)
Abstract
In this study, the multi-parameter exponential family of distributions is used to approximate the distribution of indefinite quadratic forms in normal random vectors. Moments of quadratic forms can be obtained to any order from the representation of a quadratic form as a weighted sum of non-central chi-square random variables. Using Stein's identity in the exponential family, we estimate the parameters of the probability density function. The method is applied to several examples, indicating that it is suitable for approximating the distribution of quadratic forms.
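The weighted-sum representation referred to above is the standard one: if $x \sim N_p(\mu, \Sigma)$ and $Q = x^\top A x$, then with the spectral decomposition $\Sigma^{1/2} A \Sigma^{1/2} = P \Lambda P^\top$ and $\delta = P^\top \Sigma^{-1/2}\mu$,

$$ Q \overset{d}{=} \sum_{i=1}^{p} \lambda_i\,\chi^2_1(\delta_i^2), \qquad \kappa_r(Q) = 2^{\,r-1}(r-1)!\sum_{i=1}^{p} \lambda_i^{\,r}\,\bigl(1 + r\,\delta_i^2\bigr), $$

where the $\lambda_i$ may have either sign for an indefinite form and $\kappa_r$ denotes the r-th cumulant, from which moments of any order follow.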
Mohammad Reza Yeganegi, Rahim Chinipardaz, Volume 13, Issue 1 (9-2019)
Abstract
This paper investigates the mixture autoregressive model with constant mixing weights in state space form and its generalization to the mixture ARMA model. Using a sequential Monte Carlo method, the forecasting, filtering and smoothing distributions are approximated, and the parameters of the model are estimated via the EM algorithm. The results show that the dimension of the parameter vector in the state space representation is reduced. The simulation study shows that the proposed filtering algorithm has a steady state close to the true values of the state vector; moreover, the mean vectors of the filtering and smoothing distributions converge to the state vector quickly.
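The sequential Monte Carlo idea behind the filtering step can be illustrated with a generic bootstrap particle filter for a scalar state-space model; the sketch below uses an assumed AR(1) state equation and Gaussian observation noise, not the paper's mixture AR state-space form or its EM step.

```python
# Generic bootstrap particle filter sketch (illustrative model and parameters).
import numpy as np

def bootstrap_filter(y, phi=0.8, sv=1.0, se=1.0, n_particles=1000, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.normal(0, sv, n_particles)
    means = np.empty(len(y))
    for t in range(len(y)):
        particles = phi * particles + rng.normal(0, sv, n_particles)  # propagate state
        logw = -0.5 * ((y[t] - particles) / se) ** 2                  # observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * particles)                              # filtering mean E[x_t | y_1:t]
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    return means

# Example: simulate data from the same model and run the filter.
rng = np.random.default_rng(1)
x = np.zeros(200); y = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.8 * x[t - 1] + rng.normal()
    y[t] = x[t] + rng.normal()
print(np.round(bootstrap_filter(y)[:5], 3))
```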
Abdolrahman Rasekh, Behzad Mansouri, Narges Hedayatpoor, Volume 13, Issue 1 (9-2019)
Abstract
The study of regression diagnostics, including the identification of influential observations and outliers, is of particular importance. The sensitivity of least squares estimators to outliers and influential observations has led to extensions of regression diagnostics that provide criteria for assessing anomalous observations. Detecting influential observations and outliers in the presence of collinearity is a complicated task, in the sense that collinearity may mask some of the unusual data. One notable method for identifying outliers is the mean shift outlier model. In this article, we extend the mean shift outlier method to ridge estimators under linear stochastic restrictions, which are used to reduce the effect of collinearity, and provide the test statistic for identifying outliers under these estimators. Finally, we show the ability of the proposed method using a practical example with real data.
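For context, the mean shift outlier model for a suspected case i augments the linear model with an indicator for that case,

$$ y = X\beta + \gamma\, e_i + \varepsilon, $$

where $e_i$ is the i-th standard basis vector; observation i is declared an outlier when the hypothesis $\gamma = 0$ is rejected. The paper carries this test over to ridge estimation under linear stochastic restrictions.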
Maryam Ahangari, Sedigheh Shams, Volume 13, Issue 1 (9-2019)
Abstract
One useful tool for developing economic policy is Iranians' participation in raising the level of public knowledge and humanizing economics. Economic indices, rates, prices, and percentages are not informative on their own. From this point of view, one scientific way to study economic data is statistical modeling through copula functions. In this paper, using copula functions and the concept of directional dependence, the dependence structure between variations in family income and the expenses allocated to cultural and miscellaneous goods is studied in detail. Simulation results show that as income decreases, Iranian families tend to cut their cultural expenses rather than their non-essential miscellaneous expenses.
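As a loosely related, heavily simplified sketch (simulated, hypothetical income and expense data; a Clayton copula chosen for illustration rather than the paper's directional-dependence analysis), the dependence between two variables can be summarized by estimating a copula parameter from Kendall's tau.

```python
# Estimate a Clayton copula parameter from Kendall's tau: theta = 2*tau / (1 - tau).
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
income = rng.normal(size=300)                                      # hypothetical income changes
cultural_cost = 0.6 * income + rng.normal(scale=0.8, size=300)     # hypothetical cultural expenses

tau, _ = kendalltau(income, cultural_cost)
theta = 2 * tau / (1 - tau)                                        # Clayton parameter implied by tau
print(f"Kendall's tau = {tau:.3f}, implied Clayton theta = {theta:.3f}")
```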
Meysam Moghimbeygi, Mousa Golalizadeh, Volume 13, Issue 1 (9-2019)
Abstract
Recalling Kendall's definition of shape as a point on a hypersphere, a regression model for shapes is studied in this paper. To simplify the modeling, a triangulation based on two landmarks is proposed. The triangulation not only simplifies the regression modeling of the shapes but also provides a straightforward computational procedure for reconstructing the geometrical structure of the objects. The novelty of the proposed method lies in using a shape-based predictor variable that suitably describes the geometrical variability of the response. The proposed methods are compared with and evaluated against full Procrustes matching using the mean squared error criterion, and the application of the two models to configurations of rat skulls is investigated.
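Full Procrustes matching, used above as the benchmark, aligns one landmark configuration to another by centering, scaling, and rotating it. The compact sketch below works on illustrative k x 2 configuration matrices, not the rat-skull data.

```python
# Full Procrustes matching of configuration X onto Y via SVD.
import numpy as np

def full_procrustes(X, Y):
    """Least-squares fit of b * X @ R + t to Y; returns the fitted X and the MSE."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                                   # optimal rotation (reflections not excluded here)
    b = S.sum() / np.sum(Xc ** 2)                # optimal scale
    X_fit = b * Xc @ R + Y.mean(axis=0)
    return X_fit, np.mean((X_fit - Y) ** 2)

# Example: a triangle and a rotated, scaled, noisy copy of it.
rng = np.random.default_rng(0)
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
a = np.pi / 6
Rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
Y = 2.0 * X @ Rot + rng.normal(scale=0.01, size=X.shape)
_, mse = full_procrustes(X, Y)
print("Procrustes mean squared error:", mse)
```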
Vahid Tadayon, Abdolrahman Rasekh, Volume 13, Issue 1 (9-2019)
Abstract
Uncertainty is an inherent characteristic of biological and geospatial data, and it often arises from measurement error in the observed values of the quantity of interest. Ignoring measurement error can lead to biased estimates and inflated variances, and hence to inappropriate inference. In this paper, a Gaussian spatial model is fitted in the presence of covariate measurement error. For this purpose, we adopt a Bayesian approach and use Markov chain Monte Carlo algorithms and data augmentation to carry out the calculations. The methodology is illustrated using simulated data.
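The covariate measurement error setting referred to above is typically the classical additive model, in which the true covariate $x(s)$ at location $s$ is observed only through

$$ w(s) = x(s) + u(s), \qquad u(s) \sim N(0, \sigma_u^2) \text{ independent of } x(s), $$

so that the data augmentation step treats the unobserved $x(s)$ as additional latent variables to be sampled within the MCMC.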
Mozhgan Dehghani, Mohammad Reza Zadkarami, Mohammad Reza Akhoond, Volume 13, Issue 1 (9-2019)
Abstract
In the last decade, Poisson regression has been widely used for modeling count response variables, but it is not a suitable choice when the count data contain an excess of zeros. In this article, two models, zero-inflated Poisson regression and bivariate zero-inflated Poisson regression with random effects, are used to model count responses with excess zeros. The distribution of the random effect is usually taken to be normal, but we employ the more flexible skew-normal distribution instead. Finally, the proposed model is applied to data obtained from Shahid Chamran University of Ahvaz on the number of failed courses and the number of semesters with a failing grade point average, and a simulation study is used to verify the parameter estimates.
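A minimal univariate zero-inflated Poisson (ZIP) regression, fitted by direct maximum likelihood on simulated data, illustrates the basic model; the paper's bivariate and skew-normal random-effect extensions are not reproduced here.

```python
# From-scratch ZIP regression: P(Y=0) = pi + (1-pi)e^{-lam}, P(Y=y) = (1-pi)Pois(y; lam).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
lam = np.exp(0.5 + 0.8 * x)                    # Poisson mean
pi = expit(-1.0 + 0.5 * x)                     # zero-inflation probability
y = np.where(rng.random(n) < pi, 0, rng.poisson(lam))

def neg_loglik(params):
    b0, b1, g0, g1 = params
    lam = np.exp(b0 + b1 * x)
    pi = expit(g0 + g1 * x)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print("estimated (b0, b1, g0, g1):", np.round(fit.x, 3))
```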
Mohammmad Arast, Mohammmad Arashi, Mohammmad Reza Rabie, Volume 13, Issue 1 (9-2019)
Abstract
In high-dimensional problems, where the number of variables is larger than the number of observations, penalized estimators based on shrinkage methods often have better prediction performance than the OLS estimator. In these estimators, the tuning or shrinkage parameter plays a decisive role in variable selection. The bridge estimator is an estimator that reduces to the ridge or LASSO estimator as the tuning parameter varies. In this paper, the shrinkage bridge estimator is derived under a linear constraint on the regression coefficients and its consistency is proved. Furthermore, its efficiency is evaluated in a simulation study and a real example.
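The bridge penalty mentioned above can be written as

$$ \hat\beta_{\text{bridge}} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} |\beta_j|^{q}, \qquad \lambda \ge 0,\; q > 0, $$

which gives the LASSO for q = 1 and ridge regression for q = 2; the paper derives this estimator under an additional linear restriction on the coefficients.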
Dariush Najarzadeh, Volume 13, Issue 1 (9-2019)
Abstract
Testing the hypothesis of independence among subvectors of a p-variate vector, as a pretest for many other related tests, is always of interest. When the sample size n is much larger than the dimension p, the likelihood ratio test (LRT) with a chi-square approximation has acceptable performance. However, for moderately high-dimensional data, where n is not much larger than p, the chi-square approximation to the null distribution of the LRT statistic is no longer usable. As a general case, a procedure for simultaneously testing the independence of subvectors in all of k p-variate normal distributions is considered here. To test this hypothesis, a normal approximation to the null distribution of the LRT statistic is proposed. A simulation study shows that the proposed normal approximation outperforms the chi-square approximation. Finally, the proposed testing procedure is applied to prostate cancer data.
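For reference, the classical LRT for independence of k subvectors of dimensions $p_1,\dots,p_k$ within a single p-variate normal, with the sample covariance matrix S partitioned accordingly, uses

$$ \Lambda = \left(\frac{|S|}{\prod_{i=1}^{k}|S_{ii}|}\right)^{n/2}, \qquad -2\log\Lambda \;\xrightarrow{d}\; \chi^2_{f}, \quad f = \tfrac{1}{2}\Bigl(p^2 - \sum_{i=1}^{k} p_i^2\Bigr), $$

an approximation that is accurate only when n is much larger than p, which is exactly the limitation the proposed normal approximation addresses.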
Ali Sakhaei, Parviz Nasiri, Volume 13, Issue 2 (2-2020)
Abstract
The non-homogeneous bivariate compound Poisson process with a short-term periodic intensity function is used for modeling events with seasonal patterns or periodic trends. In this paper, this process is introduced in detail. To characterize the dependence structure between jumps, the Lévy copula function is used. The parameters of the model are estimated with the inference-for-margins method. As an application, the model is fitted to an automobile insurance dataset using inference for margins, and its accuracy is compared with the full maximum likelihood method. A goodness-of-fit test confirms that the model describes the data adequately.
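One building block of such a model, simulating arrival times from a non-homogeneous Poisson process with a periodic intensity, can be sketched with Lewis-Shedler thinning; the intensity below is an assumption for illustration, and the Lévy-copula dependence between the two claim components is not shown.

```python
# Lewis-Shedler thinning for a non-homogeneous Poisson process with periodic intensity.
import numpy as np

def simulate_nhpp(intensity, t_max, lam_max, seed=0):
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)          # candidate from a homogeneous process
        if t > t_max:
            return np.array(times)
        if rng.random() < intensity(t) / lam_max:    # accept with probability lambda(t)/lam_max
            times.append(t)

intensity = lambda t: 5.0 + 3.0 * np.sin(2 * np.pi * t)   # period 1, e.g. one year
events = simulate_nhpp(intensity, t_max=10.0, lam_max=8.0)
print(f"{len(events)} simulated claim arrival times over 10 periods")
```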
Masoumeh Esmailizadeh, Ehsan Bahrami Samani, Volume 13, Issue 2 (2-2020)
Abstract
This paper analyzes inflated bivariate mixed count data; the model parameters are estimated by maximum likelihood. For the bivariate case with inflation at one or two points, new bivariate inflated power series distributions are presented and used for the joint modeling of bivariate count responses. To illustrate the utility of the proposed models, some simulation studies are performed and, finally, a real dataset is analyzed.
Ehsan Bahrami Samani, Nafeseh Khojasteh Bakht, Volume 14, Issue 1 (8-2020)
Abstract
In this paper, the analysis of count responses with many zeros, known as zero-inflated data, is considered, assuming that the responses follow a zero-inflated power series distribution. Because the covariate is missing at random in some applications, several methods for estimating the parameters of the proposed regression model via the score function, with and without missing data, are presented. Moreover, depending on whether the selection probability for the missing covariate is known or unknown, a semi-parametric method for estimating the parameters of the zero-inflated power series regression model is presented. Simulation studies and a real example illustrate the proposed method. Finally, the performance of the semi-parametric method is compared with the maximum likelihood, complete-case, and inverse probability weighting methods.
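For reference, the zero-inflated power series family referred to above modifies a power series distribution with probabilities $a_y\theta^{y}/A(\theta)$ by mixing in extra mass at zero:

$$ P(Y=0) = \pi + (1-\pi)\,\frac{a_0}{A(\theta)}, \qquad P(Y=y) = (1-\pi)\,\frac{a_y\,\theta^{y}}{A(\theta)}, \quad y = 1,2,\dots, $$

with the regression structure entering through θ (and possibly π) as functions of the covariates.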
Reza Ahmadi, Volume 14, Issue 1 (8-2020)
Abstract
We propose an integrated approach to decision making about the repair and maintenance of deteriorating systems whose failures are detected only by inspection. Periodic inspections reveal the true state of the system's components, and preventive and corrective maintenance actions are carried out in response to the observed system state. Assuming a threshold-type policy, the paper aims to minimize the long-run average maintenance cost per unit time by determining appropriate inspection intervals and a maintenance threshold. Using the renewal reward theorem, the expected cost per cycle and the expected cycle length emerge as solutions of equations, and a recursive scheme is devised to solve them. We demonstrate the procedure, and its advantage over specific special cases, when the components' lifetimes follow a Weibull distribution. Further, a sensitivity analysis is performed to determine the impact of the model's parameters. Attention is restricted to perfect repair and inspection, but the structure allows different scenarios to be explored.
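The renewal reward logic, long-run average cost equals expected cost per cycle divided by expected cycle length, can be illustrated by simulation under a much simpler policy than the paper's (assumed costs, Weibull lifetimes, failure detected only at the next periodic inspection, replacement on detection).

```python
# Simplified renewal-reward simulation: long-run cost = E[cycle cost] / E[cycle length].
import numpy as np

def long_run_cost(tau, shape=2.0, scale=10.0, c_insp=1.0, c_fail=50.0,
                  c_down=2.0, n_cycles=100_000, seed=0):
    rng = np.random.default_rng(seed)
    lifetimes = scale * rng.weibull(shape, n_cycles)     # Weibull component lifetimes
    n_inspections = np.ceil(lifetimes / tau)             # failure found at the next inspection
    cycle_length = n_inspections * tau                   # cycle ends at detection/replacement
    downtime = cycle_length - lifetimes                  # time spent failed but undetected
    cycle_cost = c_insp * n_inspections + c_fail + c_down * downtime
    return cycle_cost.mean() / cycle_length.mean()

for tau in (1.0, 2.0, 4.0, 8.0):
    print(f"inspection interval {tau:4.1f}: long-run average cost {long_run_cost(tau):.3f}")
```

Sweeping the inspection interval in this way mimics, in miniature, the trade-off the paper optimizes between inspection effort and downtime.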
Mohammad Hossein Poursaeed, Nader Asadian, Volume 14, Issue 1 (8-2020)
Abstract
A system operating in discrete time periods is exposed to a sequence of shocks that occur randomly and independently in each period with probability p. Taking k (≥ 1) as a critical level, we assume that the system does not fail as long as the number of consecutive shocks is less than k, fails with probability θ when the number of consecutive shocks equals k, and fails with certainty as soon as the number of consecutive shocks reaches k + 1. This model can therefore be regarded as a version of the run shock model in which shocks occur in discrete time periods and the behavior of the system is not fixed when it encounters k consecutive shocks. In this paper, we examine the characteristics of the system under this model, in particular the first and second moments of the system's lifetime, and estimate its unknown parameters. Finally, a method is proposed for computing the mean of the generalized geometric distribution.
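A direct simulation of the model as described above (reading the abstract literally; parameter values are arbitrary) gives Monte Carlo estimates of the first and second moments of the lifetime.

```python
# Run shock model: shocks each period w.p. p; k consecutive shocks kill the system
# with probability theta, and k+1 consecutive shocks kill it for sure.
import numpy as np

def simulate_lifetime(p, k, theta, rng):
    run, t = 0, 0
    while True:
        t += 1
        if rng.random() < p:                      # a shock occurs this period
            run += 1
            if run == k and rng.random() < theta:
                return t
            if run == k + 1:
                return t
        else:
            run = 0                               # the run of consecutive shocks is broken

rng = np.random.default_rng(0)
p, k, theta = 0.3, 2, 0.5
lifetimes = np.array([simulate_lifetime(p, k, theta, rng) for _ in range(50_000)], dtype=float)
print("estimated mean lifetime:", lifetimes.mean())
print("estimated second moment:", np.mean(lifetimes ** 2))
```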
Dariush Najarzadeh, Volume 14, Issue 1 (8-2020)
Abstract
The hypothesis of complete independence underlies many statistical inferences, but classical testing procedures cannot be applied to test it in high-dimensional data. In this paper, a simple test statistic is presented for testing complete independence in high-dimensional multivariate normal data. Using martingale theory, the asymptotic normality of the test statistic is established. A simulation study was conducted to evaluate the performance of the proposed test and compare it with existing procedures; the results indicate that the proposed test attains an empirical type-I error rate with a smaller average relative error than the available tests. An application of the proposed method to clinical prostate cancer gene expression data is presented.
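The classical procedure alluded to above is the likelihood ratio test of complete independence, which in its Bartlett-corrected form rejects for large values of

$$ -\Bigl(n - 1 - \tfrac{2p+5}{6}\Bigr)\,\log |R| \;\approx\; \chi^2_{p(p-1)/2}, $$

where R is the sample correlation matrix; since |R| becomes degenerate as p approaches n, this approximation breaks down in high dimensions, which motivates the proposed martingale-based statistic.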
Marjan Rajabi, Volume 14, Issue 1 (8-2020)
Abstract
The advent of new technology in recent years has facilitated the production of high-dimensional data, in which more than one hypothesis must be evaluated. Multiple testing addresses a collection of hypotheses that are tested simultaneously, and controlling the family-wise error rate is the most critical issue in such tests. In this report, the authors apply Sidak and stepwise strategies for controlling the family-wise error rate in detecting outlier profiles and compare them with each other. The performance of these methods is compared in a simulation study using the parametric bootstrap, and an application to a real dataset illustrates the implementation of the proposed methods.
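As a small sketch of the two families of corrections mentioned above, the code below implements the single-step Sidak adjustment and a stepwise (Holm) adjustment from scratch on illustrative p-values; it is not tied to the outlier-profile application in the report.

```python
# Single-step Sidak and step-down Holm adjusted p-values for FWER control.
import numpy as np

def sidak(pvals):
    m = len(pvals)
    return 1 - (1 - np.asarray(pvals, dtype=float)) ** m

def holm(pvals):
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.maximum.accumulate((m - np.arange(m)) * p[order])   # enforce monotone step-down
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out

pvals = [0.001, 0.013, 0.04, 0.2, 0.5]
print("Sidak:", np.round(sidak(pvals), 3))
print("Holm: ", np.round(holm(pvals), 3))
```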
Mahdi Roozbeh, Monireh Maanavi, Volume 14, Issue 2 (2-2021)
Abstract
The most popular method for estimating the parameters of a linear regression model is ordinary least squares which, despite the simplicity of its computation and the fact that it provides the BLUE of the parameters, leads to misleading results in some situations, for example in the presence of multicollinearity or outliers in the data set. The least trimmed squares method, one of the most popular robust regression methods, reduces the influence of outliers as much as possible. The main goal of this paper is to provide a robust ridge estimator for modeling dental age data. Among the methods used to determine age, the most widely used worldwide is the modified Demirjian method, which is based on the calcification of the permanent teeth in panoramic radiographs. It is shown that using the robust ridge estimator reduces the mean squared error compared with the OLS method. The proposed estimators are also evaluated on simulated data sets.
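The flavour of a robust ridge estimator can be sketched by combining ridge regression with a least-trimmed-squares style subset search; the code below is an illustration on simulated data with injected outliers, not the paper's exact estimator or its dental age data.

```python
# A trimmed ridge sketch: iterate ridge fits on the h observations with the smallest residuals.
import numpy as np
from sklearn.linear_model import Ridge

def trimmed_ridge(X, y, alpha=1.0, trim=0.8, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    h = int(trim * n)
    subset = rng.choice(n, h, replace=False)           # random starting subset
    for _ in range(n_iter):
        model = Ridge(alpha=alpha).fit(X[subset], y[subset])
        resid = np.abs(y - model.predict(X))
        subset = np.argsort(resid)[:h]                 # keep the h best-fitted points
    return Ridge(alpha=alpha).fit(X[subset], y[subset])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=200)
y[:10] += 20                                            # gross outliers in the response
print("trimmed-ridge coefficients:", np.round(trimmed_ridge(X, y).coef_, 2))
```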