Search published articles
Showing 123 results for Type of Study: Applied
Mahnaz Nabil, Mousa Golalizadeh, Volume 8, Issue 2 (3-2015)
Abstract
Recently, the use of multivariate statistical techniques for geometrically random data has attracted increasing attention from researchers in applied disciplines. Shape statistics, a new branch of stochastic geometry, deal with one class of such data. However, owing to the non-Euclidean nature of these data, it is not straightforward to adopt the usual tools of multivariate statistics for their proper statistical analysis. This paper studies how to cluster shape data and then compares the performance of this approach with the traditional multivariate view of the problem by applying both methods to the analysis of the distal femur.
Shahram Mansoury, Volume 9, Issue 1 (9-2015)
Abstract
Jaynes' principle of maximum entropy states that, among all probability distributions satisfying given constraints, the one with maximum uncertainty should be selected. In this paper, we consider methods of obtaining maximum entropy bivariate density functions via Taneja's and Burg's measures of entropy under the constraints that the marginal distributions and the correlation coefficient are prescribed. Next, a numerical method is considered. Finally, each method is illustrated via a numerical example.
Roshanak Aliakbari Saba, Alireza Zahedian, Marzieh Arbabi, Volume 9, Issue 1 (9-2015)
Abstract
Annual estimation of average household income is one of the main goals of the household income and expenditure survey in Iran. Given the importance of the accuracy of the gathered data and the reasons that lead to error in measuring household income, this paper uses model-based methods to estimate income measurement error and to adjust the declared income of sample households in the 2011 household income and expenditure survey.
Farnoosh Ashoori, Malihe Ebrahimpour, Abolghasem Bozorgnia, Volume 9, Issue 2 (2-2016)
Abstract
The distribution of the extreme values of a data set is used in natural phenomena such as flow discharge, wind speed and precipitation, as well as in many other applied sciences such as reliability studies and the analysis of extreme environmental events. If the extremal behaviour can be modelled, future behaviour can be predicted. This article studies extreme wind speeds in the city of Zahedan using the generalized extreme value distribution of maxima. We apply four methods to estimate the distribution parameters, namely maximum likelihood estimation, probability weighted moments, elemental percentile and quantile least squares, and compare the estimates by the average scaled absolute error criterion. We also obtain quantile estimates and confidence intervals. Finally, return periods of maximum wind speeds are computed.
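As a rough illustration of the modelling step described above, the following sketch fits a GEV distribution to annual maxima by maximum likelihood and computes a return level. The data are synthetic stand-ins, not the Zahedan wind-speed record.

```python
# Illustrative sketch only: synthetic annual maxima, not the Zahedan data.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic annual-maximum wind speeds (m/s) drawn from a known GEV.
maxima = genextreme.rvs(c=-0.1, loc=20.0, scale=3.0, size=50, random_state=rng)

# Maximum likelihood estimates of the shape, location and scale parameters.
c_hat, loc_hat, scale_hat = genextreme.fit(maxima)

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
T = 50
return_level = genextreme.ppf(1 - 1 / T, c_hat, loc=loc_hat, scale=scale_hat)
```

The other three estimation methods mentioned in the abstract (probability weighted moments, elemental percentile, quantile least squares) would replace the `fit` call.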
Sana Eftekhar, Ehsan Kharati-Koopaei, Soltan Mohammad Sadooghi-Alvandi, Volume 9, Issue 2 (2-2016)
Abstract
Process capability indices are widely used in various industries as a statistical measure to assess how well a process meets a predetermined level of production tolerance. In this paper, we propose new confidence intervals for the ratio and the difference of two Cpmk indices, based on asymptotic and parametric bootstrap approaches. We compare the performance of the proposed methods with generalized confidence intervals in terms of coverage probability and average length via a simulation study. The simulation results show the merits of the proposed methods.
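A minimal sketch of the parametric-bootstrap idea for the ratio of two Cpmk indices follows; the specification limits, target and normal samples are assumptions for illustration, not the paper's simulation design.

```python
import numpy as np

def cpmk(x, lsl, usl, target):
    # Cpmk: minimum distance of the mean to the spec limits, scaled by a
    # Taguchi-type variation term that penalises deviation from the target.
    mu, s = x.mean(), x.std(ddof=1)
    tau = np.sqrt(s**2 + (mu - target) ** 2)
    return min(usl - mu, mu - lsl) / (3 * tau)

rng = np.random.default_rng(1)
lsl, usl, target = 8.0, 12.0, 10.0      # assumed specification limits
x1 = rng.normal(10.0, 0.5, 80)          # sample from process 1
x2 = rng.normal(10.2, 0.6, 80)          # sample from process 2

ratios = []
for _ in range(2000):
    # Parametric bootstrap: resample from normals with estimated parameters.
    b1 = rng.normal(x1.mean(), x1.std(ddof=1), x1.size)
    b2 = rng.normal(x2.mean(), x2.std(ddof=1), x2.size)
    ratios.append(cpmk(b1, lsl, usl, target) / cpmk(b2, lsl, usl, target))

lo, hi = np.percentile(ratios, [2.5, 97.5])   # 95% percentile interval
```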
S. Morteza Najibi, Mousa Golalizadeh, Mohammad Reza Faghihi, Volume 9, Issue 2 (2-2016)
Abstract
In this paper, we study the applicability of probabilistic solutions to the alignment of the tertiary structure of proteins and discuss how they differ from deterministic algorithms. For this purpose, we introduce two Bayesian models and propose a way to incorporate the amino acid sequence and type (primary structure) into protein alignment. Furthermore, we study parameter estimation with Markov chain Monte Carlo sampling from the posterior distribution. Finally, to assess the effectiveness of these methods for protein alignment, we compare the parameter estimates on a real data set.
Ali Doostmoradi, Mohammadreza Zadkarami, Aref Khanjari Idenak, Zahara Fereidooni, Volume 10, Issue 1 (8-2016)
Abstract
In this paper, we propose a new distribution based on the Weibull distribution. This distribution has three parameters and displays increasing, decreasing, bathtub-shaped, unimodal and increasing-decreasing-increasing failure rates. We then examine the characteristics of this distribution, and a real data set is used to compare the proposed distribution with some generalizations of the Weibull distribution.
Mina Godazi, Mohammadreza Akhoond, Abdolrahman Rasekh Rasekh, Volume 10, Issue 1 (8-2016)
Abstract
One of the methods that has attracted the attention of many researchers in recent years for modelling multivariate mixed-outcome data is the copula function. In this paper, a regression model based on a copula function is proposed for mixed survival and discrete outcome data, where the continuous variable is time and may contain censored observations. For this task, it is assumed that the marginal distributions are known, and a latent variable is used to transform the discrete variable into a continuous one. A copula function is then used to construct the joint distribution of the two variables, and finally the resulting model is applied to birth interval data from the city of Ahwaz in south-west Iran.
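The coupling step can be sketched with a Gaussian copula. The exponential "time" margin, Poisson count margin and correlation below are illustrative assumptions, not the birth-interval model itself (which also handles censoring).

```python
import numpy as np
from scipy.stats import norm, expon, poisson

rng = np.random.default_rng(2)
rho = 0.5                                  # assumed copula correlation
cov = [[1.0, rho], [rho, 1.0]]

# Correlated standard normals -> uniforms -> the desired margins.
z = rng.multivariate_normal([0.0, 0.0], cov, size=1000)
u = norm.cdf(z)
t = expon.ppf(u[:, 0], scale=30.0)         # continuous "time" margin
y = poisson.ppf(u[:, 1], mu=2.0)           # discrete count margin

r = np.corrcoef(t, y)[0, 1]                # dependence induced by the copula
```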
Fateme Delshad Chermahini, Saeid Pooladsaz, Volume 10, Issue 2 (2-2017)
Abstract
Neighbour effects arise when the response on a given plot is affected not only by the treatment applied to that plot but also by the treatments on neighbouring plots. As a result, estimates of treatment differences may be biased by this interference from neighbouring plots. Neighbour-balanced designs ensure that treatment comparisons are affected as little as possible by neighbour effects. Circular neighbour-balanced designs are divided into two groups. In previous research, the method of cyclic shifts was used to construct CNB1 designs; here, the authors use this method to construct CNB2 designs. Some series of CNB2 designs are found by the method of cyclic shifts implemented in MATLAB. Then, some of these designs that are universally optimal under models with one-sided neighbour effects (M1) are identified.
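As a generic illustration of the method of cyclic shifts (not the CNB2 construction itself), a shift vector is developed cyclically modulo the number of treatments to produce the blocks of a circular design; the shift vector here is a made-up example.

```python
def develop(shifts, t):
    """Develop a block design from a shift vector by cyclic shifts mod t."""
    blocks = []
    for start in range(t):
        block, s = [start], start
        for d in shifts:
            s = (s + d) % t      # each shift gives the next plot's treatment
            block.append(s)
        blocks.append(block)
    return blocks

blocks = develop([1, 2], 5)      # hypothetical shift vector, t = 5 treatments
```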
Fatemeh Hosseini, Elham Homayonfal, Volume 10, Issue 2 (2-2017)
Abstract
Hierarchical spatio-temporal models are used for modelling space-time responses, and the temporal and spatial correlation of the data is accounted for via a Gaussian latent random field with a Matérn covariance function. The main interests in these models are the estimation of the model parameters and the latent variables, and the prediction of the response variable at new locations and times. In this paper, a Bayesian approach to analysing these models is presented. Because of the complexity of the posterior distributions and the full conditional distributions of these models, a Bayesian analysis based on Monte Carlo samples is very time-consuming. To solve this problem, the Gaussian latent random field with Matérn covariance function is represented as a Gaussian Markov random field (GMRF) through the stochastic partial differential equation (SPDE) approach. Approximate Bayesian methods and the integrated nested Laplace approximation (INLA) are used to approximate the posterior distributions and to carry out inference about the model. Finally, the presented methods are applied to a case study on rainfall data observed at the weather stations of Semnan in 2013.
Habib Jafari, Samira Amibigi, Parisa Parsamaram, Volume 11, Issue 1 (9-2017)
Abstract
Most research on design optimality is conducted on linear and generalized linear models. In applied studies, in agriculture, the social sciences and elsewhere, there is usually at least one random effect in the model in addition to the fixed effects; such models are known as mixed models. In this article, a beta regression model with a random intercept is considered as a mixed model, locally D-optimal designs are calculated for the simple and quadratic forms of the model, and the behaviour of the optimal design points for different parameter values is studied. For the simple model, a two-point locally D-optimal design is obtained for different parameter values, and for the quadratic model a three-point locally D-optimal design is obtained. These locally D-optimal designs are also compared using the efficiency criterion. It is observed that the efficiency of the optimal design when the random intercept is not considered in the model is lower than when the random effect is included.
Mojtaba Moradi, Volume 11, Issue 2 (3-2018)
Abstract
The basic reproduction number is the average number of secondary infections generated by a single primary case in a susceptible population. Estimating the basic reproduction number is important in medical studies. In this paper, we describe a new method for estimating the basic reproduction number via branching processes. Finally, we apply this estimator to real data reported by the National Center for Biotechnology Information in the USA.
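A branching-process estimator of the offspring mean (here, the basic reproduction number) can be sketched with the classical Harris-type ratio; whether this matches the paper's new estimator is not claimed, and the generation sizes below are invented.

```python
# Hypothetical epidemic generation sizes Z_0, ..., Z_5.
z = [1, 3, 7, 15, 32, 70]

# Harris-type estimator of the offspring mean:
# total offspring across generations over total parents.
r0_hat = sum(z[1:]) / sum(z[:-1])
```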
Meysam Tasallizadeh Khemes, Zahra Rezaei Ghahroodi, Volume 11, Issue 2 (3-2018)
Abstract
There are several methods for clustering time-course gene expression data, but they have limitations such as ignoring the correlation over time and a high computational burden. In this paper, by introducing non-parametric and semiparametric mixed effects models, this correlation over time is taken into account, and by using penalized splines the computational burden is dramatically reduced. Using a simulation study, the performance of the presented method is compared with previous methods, and the most appropriate model is selected using the BIC criterion. The proposed approach is also illustrated on a real time-course gene expression data set.
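The penalized-spline device that cuts the computational burden can be sketched as ridge regression on a B-spline basis with a difference penalty; the data and smoothing parameter below are synthetic assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)   # noisy curve

k, n_basis = 3, 12
# Open knot vector on [0, 1] giving n_basis cubic B-spline functions.
knots = np.concatenate([[0.0] * k,
                        np.linspace(0.0, 1.0, n_basis - k + 1),
                        [1.0] * k])
B = BSpline.design_matrix(x, knots, k).toarray()

# Second-order difference penalty on the coefficients (the "P" in P-spline).
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 0.1
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
```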
Hadi Emami, Parvaneh Mansoori, Volume 11, Issue 2 (3-2018)
Abstract
Semiparametric linear mixed measurement error models extend linear mixed measurement error models to include a nonparametric function of some covariate. They have been found useful in both cross-sectional and longitudinal studies. In this paper, we first propose a penalized corrected likelihood approach to estimate the parametric component of the semiparametric linear mixed measurement error model, and then, using case deletion and subject deletion analysis, we study influence diagnostics in such models. Finally, the performance of our influence diagnostic methods is illustrated through a simulated example and a real data set.
Afshin Fallah, Ramin Kazemi, Hasan Khosravi, Volume 11, Issue 2 (3-2018)
Abstract
Traditionally, regression analysis is carried out under homogeneity and normality assumptions for the distribution of the response variable, whereas in many applications the observations exhibit a heterogeneous structure containing several sub-populations with skew-symmetric structure, owing to heterogeneity, multimodality or skewness of the population, or a combination of these. In such situations, one can model the population with a mixture of skew-symmetric distributions. In this paper, we consider a Bayesian approach to regression analysis under the assumptions of a heterogeneous population and a skew-symmetric distribution for the sub-populations, using a mixture of skew-normal distributions. We use a simulation study and a real-world example to assess the proposed Bayesian methodology and to compare it with the frequentist approach.
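Drawing from a two-component skew-normal mixture, the building block of the model above, can be sketched as follows; the weights and component parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(4)
weights = [0.6, 0.4]
params = [(4.0, 0.0, 1.0),     # (shape a, location, scale): right-skewed
          (-4.0, 5.0, 1.5)]    # left-skewed second sub-population

n = 1000
comp = rng.choice(2, size=n, p=weights)    # latent component labels
x = np.array([skewnorm.rvs(*params[c], random_state=rng) for c in comp])
```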
Mohamad Bayat, Hamzeh Torabi, Volume 12, Issue 1 (9-2018)
Abstract
Nowadays, various censoring schemes are widely used in industrial and clinical tests; Type I and Type II progressive censoring are two of them, and they have some disadvantages. This article tries to reduce the defects of Type I progressive censoring by modifying the censoring scheme, treating the number and the times of the withdrawals as random variables. First, Type I and Type II progressive censoring and two of their generalizations are introduced. Then, we introduce the new censoring scheme, based on Type I progressive censoring, together with its probability density function; some of its special cases are explained and a few related theorems are given. Finally, a simulation algorithm is presented, and a simulation study is carried out to compare the introduced scheme with the traditional censoring schemes.
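Type II progressive censoring, one of the schemes discussed above, can be simulated directly: at each observed failure a prescribed number of surviving units is withdrawn at random. The exponential lifetimes and the removal scheme R below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
R = [2, 0, 1, 0, 2]          # units withdrawn at each of the m failures
m = len(R)
n = m + sum(R)               # units placed on test

alive = list(rng.exponential(scale=10.0, size=n))
failures = []
for r in R:
    t = min(alive)                        # next observed failure time
    failures.append(t)
    alive.remove(t)
    for _ in range(r):                    # progressive withdrawal of r survivors
        alive.remove(rng.choice(alive))
```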
Fatemeh Iranmanesh, Mohsen Rezapour, Reza Pourmousa, Volume 12, Issue 1 (9-2018)
Abstract
In this paper, we study a maintenance policy for a system that begins operating at time zero at full efficiency. After the first failure it is repaired, but we assume that its lifetime after repair is stochastically smaller than its lifetime at time zero. It is repaired again after the second failure, and after the third failure it is decided whether the system should be dismantled or completely overhauled. While the system is operating, preventive maintenance can be used to increase its lifetime. Because these actions are costly, we discuss a method for optimizing the cost of preventive maintenance. Finally, we provide some illustrative examples.
Ebrahim Amini-Seresht, Majid Sadeghifar, Mona Shiri, Volume 12, Issue 1 (9-2018)
Abstract
In this paper, we further investigate stochastic comparisons of the lifetimes of parallel systems with heterogeneous independent Pareto components in terms of the star order and the convex order. It is proved that the lifetime of a parallel system with heterogeneous independent Pareto components is always smaller, in the sense of the convex order, than the lifetime of a parallel system with homogeneous independent Pareto components. Also, under a general condition on the scale parameters, a result involving the star order is proved.
Kourosh Dadkhah, Edris Samadi Tudar, Volume 12, Issue 1 (9-2018)
Abstract
The presence of outliers in a data set may affect the analysis of variance test in such a way that the results lead to wrong acceptance or rejection of the null hypothesis. In this paper, a robust permutation distribution of the F statistic based on the trimmed mean is proposed. By using the permutation distribution of a function of the trimmed mean, this method reduces sensitivity to classical assumptions such as normality and the absence of outliers, and it guarantees the reliability of the results. The proposed method is compared with robust analysis of variance based on the forward search approach. Unlike the forward search approach, the proposed method is free of restrictive parametric assumptions and is computationally cheaper. Numerical assessments of the type I error and the power of the test demonstrate the good performance of this robust method in comparison with its competitor.
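The permutation scheme can be sketched as follows: an F-type statistic built from trimmed means is recomputed over random relabellings of the pooled data. The groups, trimming proportion and shift are illustrative assumptions, not the paper's exact statistic.

```python
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(6)
groups = [rng.normal(0.0, 1.0, 20),
          rng.normal(0.0, 1.0, 20),
          rng.normal(1.5, 1.0, 20)]     # third group shifted

def stat(gs, prop=0.1):
    # Between-group spread of trimmed means (an F-type numerator).
    grand = trim_mean(np.concatenate(gs), prop)
    return sum(len(g) * (trim_mean(g, prop) - grand) ** 2 for g in gs)

obs = stat(groups)
pooled = np.concatenate(groups)
cuts = np.cumsum([len(g) for g in groups])[:-1]

# Permutation distribution: relabel the pooled data and recompute.
perm = [stat(np.split(rng.permutation(pooled), cuts)) for _ in range(999)]
p_value = (1 + sum(s >= obs for s in perm)) / (1 + len(perm))
```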
Naghi Hemmati, Mousa Golalizadeh, Volume 12, Issue 1 (9-2018)
Abstract
Owing to multiple sources of error, shape data are often prone to measurement error. Ignoring such error, if it exists, causes many problems, including bias in the estimators. Estimators computed from the observed variables without accounting for measurement error are called naive estimators; for the rotation and scale parameters they are biased when Procrustes matching is used for two-dimensional shape data. To correct this and to improve the naive estimators, this paper proposes regression calibration methods, obtained through complex regression models and the complex normal distribution, as well as a conditional score approach. Their performance is studied in simulation studies. The statistical shape analysis of the sand hills of Ardestan in Iran is also undertaken in the presence of measurement error.