Search published articles
Showing 33 results for Subject:
Hamidreza Fotouhi, Mousa Golalizadeh, Volume 6, Issue 2 (2-2013)
Abstract
One of the typical aims of statistical shape analysis, in addition to deriving an estimate of the mean shape, is to obtain an estimate of shape variability. This aim is usually achieved by employing principal component analysis. Because principal component analysis is limited to data in Euclidean space, it cannot be applied to shape data, which are inherently non-Euclidean. In this situation, principal geodesic analysis, or its linear approximation, can be used as a generalization of principal component analysis to non-Euclidean spaces. Because the core of this method is the gradient descent algorithm, we point out some of its main defects and propose a new algorithm that leads to a robust estimate of the mean shape while preserving the geometrical structure of the shapes. We then provide some theoretical aspects of principal geodesic analysis and evaluate its application in a simulation study and on real data.
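The gradient descent estimation of a mean shape that this abstract refers to can be illustrated on the unit sphere. The following is a minimal sketch (an illustration, not the authors' algorithm) of the intrinsic (Fréchet) mean computed by gradient descent using the sphere's exponential and log maps:

```python
import numpy as np

def log_map(p, q):
    # Log map on the unit sphere: tangent vector at p pointing toward q
    d = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(d)
    if theta < 1e-12:
        return np.zeros_like(p)
    v = q - d * p
    return theta * v / np.linalg.norm(v)

def exp_map(p, v):
    # Exponential map: move from p along the tangent vector v
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p
    return np.cos(t) * p + np.sin(t) * v / t

def frechet_mean(points, n_iter=100, step=1.0):
    # Gradient descent: repeatedly average the log maps and step forward
    mu = points[0] / np.linalg.norm(points[0])
    for _ in range(n_iter):
        grad = np.mean([log_map(mu, q) for q in points], axis=0)
        mu = exp_map(mu, step * grad)
    return mu

rng = np.random.default_rng(0)
# Illustrative data: points scattered around the north pole of S^2
pts = np.array([0.0, 0.0, 1.0]) + 0.1 * rng.standard_normal((50, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
mu = frechet_mean(pts)
```

The same two maps drive principal geodesic analysis: data are lifted to the tangent space at the estimated mean before a Euclidean PCA is applied.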
Kobra Gholizadeh, Mohsen Mohammadzadeh, Zahra Ghayyomi, Volume 7, Issue 1 (9-2013)
Abstract
In Bayesian analysis of structured additive regression models, a flexible class of statistical models, the posterior distributions are not available in closed form, so the Markov chain Monte Carlo algorithm, owing to the complexity of the models and the large number of hyperparameters, takes a long time. The integrated nested Laplace approximation method avoids these demanding simulations by using Gaussian and Laplace approximations. In this paper, incorporating the spatial correlation of the data into a structured additive regression model and estimating the model by the integrated nested Laplace approximation are studied. A crime data set from Tehran is then modeled and evaluated. Finally, a simulation study is performed to compare the computational time and precision of the models fitted by the integrated nested Laplace approximation and by the Markov chain Monte Carlo algorithm.
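The Laplace approximation at the heart of INLA can be illustrated on a conjugate toy model where the exact posterior is known. This sketch is a simplification (INLA itself nests several such approximations over a latent Gaussian field): it approximates the posterior of a log Poisson rate by a Gaussian centered at the mode, with variance from the curvature there.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
y = rng.poisson(4.0, size=200)          # illustrative counts, true rate 4
a, b = 1.0, 1.0                         # Gamma(a, b) prior on the rate

# Exact conjugate posterior: rate | y ~ Gamma(a + sum(y), b + n)
shape, rate = a + y.sum(), b + len(y)

# Laplace approximation on theta = log(rate parameter):
# the log posterior is shape*theta - rate*exp(theta) (up to a constant),
# maximized at theta_hat, with negative curvature equal to `shape` there
theta_hat = np.log(shape / rate)        # posterior mode of theta
sd_hat = 1.0 / np.sqrt(shape)           # Gaussian sd from the curvature

# Exact posterior mean of theta, for comparison
exact_mean = digamma(shape) - np.log(rate)
```

With several hundred counts the Gaussian approximation is essentially indistinguishable from the exact posterior, which is why Laplace-based methods can replace long MCMC runs in models of this type.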
Mohammad Gholami Fesharaki, Anoshirvan Kazemnejad, Farid Zayeri, Volume 7, Issue 2 (3-2014)
Abstract
In two-level modeling, normality of the random effects and errors is one of the basic assumptions. Violating this assumption leads to incorrect inference about the model coefficients. In this paper, to resolve this problem, we use the skew-normal distribution instead of the normal distribution for the random and error components. We also show, via a simulation study, that ignoring positive (negative) skewness in the model causes overestimation (underestimation) of the intercept and underestimation (overestimation) of the slope. Finally, we use this model to study the relationship between shift work and blood cholesterol.
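The intercept bias described above can be reproduced in a simple single-level regression sketch: a skew-normal error has nonzero mean, which the naive intercept absorbs. In the authors' two-level setting the slope is affected as well; this flat regression illustrates only the intercept effect, and all numbers are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(2)
n = 20000
x = rng.standard_normal(n)
alpha = 5.0                               # positive skewness in the errors
eps = skewnorm.rvs(alpha, size=n, random_state=rng)
y = 1.0 + 2.0 * x + eps                   # true intercept 1, slope 2

# Naive least squares assuming symmetric zero-mean errors
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# The skew-normal error has mean delta*sqrt(2/pi), absorbed by the
# intercept: positive skew -> overestimated intercept
delta = alpha / np.sqrt(1 + alpha**2)
err_mean = delta * np.sqrt(2 / np.pi)
```

Here `beta_hat[0]` lands near `1 + err_mean` rather than the true intercept of 1, exactly the overestimation pattern the abstract reports for positive skewness.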
Hashem Mahmoudnejad, Mousa Golalizadeh, Volume 7, Issue 2 (3-2014)
Abstract
Although measurement error is present in most scientific experiments, it is usually ignored in statistical modeling for the sake of simplicity. In this paper, various approaches to estimating the parameters of multilevel models in the presence of measurement error are studied. In addition, to improve the parameter estimates in this case, a new method is proposed which has high precision and a reasonable convergence rate compared with the common approaches. The performance of the proposed method, as well as of the usual approaches, is evaluated and compared in a simulation study and by analyzing real income-expenditure data of households in Tehran in 2008.
Anahita Nodehi, Mousa Golalizadeh, Volume 8, Issue 1 (9-2014)
Abstract
The bivariate von Mises distribution, which behaves relatively similarly to the bivariate normal distribution, has been proposed for representing the joint probabilistic variability of a pair of angles. One of its remarkable properties is that the conditional densities are univariate von Mises. However, the marginal density takes various forms depending on the parameters involved and, in general, has no closed form, which poses particular problems for statistical inference. In this paper, this distribution and its properties are studied, and a procedure for sampling from it via the acceptance-rejection algorithm is described. The problems encountered in choosing a proper candidate distribution, arising from the cyclic nature of both angles, are investigated, and the properties of the conditional density are utilized to overcome this obstacle.
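Acceptance-rejection sampling for a univariate von Mises conditional, the building block the paper exploits, can be sketched with a uniform candidate density (an illustration, not necessarily the paper's candidate choice). The unnormalized density exp(kappa*cos(x)) is bounded by its value at zero, which gives the envelope constant.

```python
import numpy as np

def vonmises_ar(kappa, size, rng):
    # Acceptance-rejection sampling of von Mises on [-pi, pi) with a
    # uniform candidate; the unnormalized density exp(kappa*cos(x)) is
    # enveloped by exp(kappa), its maximum at x = 0.
    out = []
    while len(out) < size:
        x = rng.uniform(-np.pi, np.pi, size)
        u = rng.uniform(0.0, 1.0, size)
        # accept when u < exp(kappa*cos(x)) / exp(kappa)
        keep = u < np.exp(kappa * (np.cos(x) - 1.0))
        out.extend(x[keep])
    return np.array(out[:size])

rng = np.random.default_rng(3)
samples = vonmises_ar(kappa=2.0, size=5000, rng=rng)

# The circular mean should sit near 0, the mean direction of the target
circ_mean = np.arctan2(np.sin(samples).mean(), np.cos(samples).mean())
```

The acceptance rate of this naive envelope drops quickly as kappa grows, which is one reason a better-matched candidate density, such as the wrapped Cauchy used in standard von Mises samplers, matters in practice.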
Mahnaz Nabil, Mousa Golalizadeh, Volume 8, Issue 2 (3-2015)
Abstract
Recently, employing multivariate statistical techniques for geometrically random data has attracted more attention from researchers in applied disciplines. Shape data, as a new branch of stochastic geometry, constitute one batch of such data. However, due to the non-Euclidean nature of these data, how to adapt the usual tools of multivariate statistics for their proper statistical analysis is not entirely clear. This paper studies how to cluster shape data and compares the performance of the proposed approach with the traditional multivariate-statistics view of the problem by applying both methods to the analysis of the distal femur.
S. Morteza Najibi, Mousa Golalizadeh, Mohammad Reza Faghihi, Volume 9, Issue 2 (2-2016)
Abstract
In this paper, we study the applicability of probabilistic solutions to the alignment of the tertiary structure of proteins and discuss their difference from deterministic algorithms. For this purpose, we introduce two Bayesian models and propose a way to add the amino acid sequence and type (primary structure) to protein alignment. Furthermore, we study parameter estimation with Markov chain Monte Carlo sampling from the posterior distribution. Finally, in order to see the effectiveness of these methods in protein alignment, we compare the parameter estimates on a real data set.
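Markov chain Monte Carlo sampling of the kind mentioned here can be sketched with a random-walk Metropolis sampler on a toy posterior (the protein-alignment posteriors in the paper are far more complex; this only shows the sampling mechanism):

```python
import numpy as np

def metropolis(logpost, x0, n_iter, step, rng):
    # Random-walk Metropolis: propose x' ~ N(x, step^2) and accept with
    # probability min(1, post(x') / post(x)), done on the log scale
    x, lp = x0, logpost(x0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

rng = np.random.default_rng(11)
data = rng.normal(3.0, 1.0, size=100)     # illustrative observations
# Posterior of a normal mean under a flat prior and known unit variance
logpost = lambda m: -0.5 * np.sum((data - m) ** 2)
chain = metropolis(logpost, x0=0.0, n_iter=5000, step=0.3, rng=rng)
post_mean = chain[1000:].mean()           # discard burn-in
```

For this conjugate toy problem the posterior mean equals the sample mean, so the chain's post-burn-in average should recover it closely.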
Meysam Moghimbeigi, Volume 10, Issue 2 (2-2017)
Abstract
Statistical analysis of the fractional Brownian motion process is one of the most important issues in the field of stochastic processes. The central question in the study of this process is statistical inference about the Hurst parameter of fractional Brownian motion. One method for estimating this parameter is the maximum likelihood approach. Because the computational complexity of this approach rules out a closed-form estimate, the parameter estimate is derived through numerical methods. The theoretical results of the paper are evaluated in a simulation study under different scenarios.
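The numerical maximum likelihood approach can be sketched for the increments of fractional Brownian motion (fractional Gaussian noise), whose autocovariance has a closed form in the Hurst parameter H. This is an illustrative implementation with the scale profiled out of the Gaussian likelihood, not the paper's code:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fgn_cov(n, H):
    # Autocovariance of fractional Gaussian noise (fBm increments):
    # gamma(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))
    k = np.arange(n)
    g = 0.5 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H)
               + np.abs(k - 1)**(2*H))
    return g[np.abs(np.subtract.outer(k, k))]

def neg_profile_loglik(H, y):
    # Gaussian log-likelihood with the variance profiled out
    n = len(y)
    L = np.linalg.cholesky(fgn_cov(n, H))
    z = np.linalg.solve(L, y)
    sigma2 = (z @ z) / n
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (n * np.log(sigma2) + logdet)

rng = np.random.default_rng(4)
n, H_true = 500, 0.7
L = np.linalg.cholesky(fgn_cov(n, H_true))
y = L @ rng.standard_normal(n)            # simulated fGn with Hurst 0.7

res = minimize_scalar(neg_profile_loglik, args=(y,),
                      bounds=(0.05, 0.95), method="bounded")
H_hat = res.x
```

Each likelihood evaluation costs a Cholesky factorization of an n-by-n matrix, which is the computational burden that motivates numerical (rather than closed-form) maximization in the first place.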
Omid Akhgari, Mousa Golalizadeh, Volume 10, Issue 2 (2-2017)
Abstract
The presence of endogenous variables in statistical models leads to inconsistent and biased estimators of the parameters. Several approaches have been proposed that are able to tackle the bias and inconsistency problems, but only in large-sample situations. One of these methods is based on instrumental variables, which remove the effect of the endogenous variables. The method of two-stage least squares is another approach, and it is more accurate than ordinary least squares. This paper aims to enhance the accuracy of three estimation methods based upon the least squares methodology: two-stage iterative least squares, two-stage jackknife least squares, and two-stage calibration least squares. A simulation study is conducted to evaluate the performance of each method. Also, using data on costs and revenues in Iran collected in 1390 (Iranian calendar), the methods are compared in estimating the parameters.
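Plain two-stage least squares, the baseline the paper builds on, can be sketched in a few lines: the first stage regresses the endogenous regressor on the instrument, and the second stage regresses the response on the fitted values. The data-generating values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
z = rng.standard_normal(n)                # instrument
u = rng.standard_normal(n)                # unobserved confounder
x = z + u + 0.5 * rng.standard_normal(n)  # endogenous regressor
y = 2.0 * x + u + rng.standard_normal(n)  # true slope 2; error shares u with x

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS is inconsistent here because cov(x, error) != 0
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stage 1: regress x on z; Stage 2: regress y on the fitted values
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
X2 = np.column_stack([np.ones(n), x_hat])
beta_2sls, *_ = np.linalg.lstsq(X2, y, rcond=None)
```

The OLS slope is pulled above 2 by the confounder, while the 2SLS slope is close to the true value; the iterative, jackknife, and calibration variants in the paper refine this second-stage estimate.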
Meysam Tasallizadeh Khemes, Zahra Rezaei Ghahroodi, Volume 11, Issue 2 (3-2018)
Abstract
There are several methods for clustering time-course gene expression data, but they have limitations, such as ignoring the correlation over time and suffering a high computational burden. In this paper, by introducing nonparametric and semiparametric mixed effects models, this correlation over time is taken into account, and by using penalized splines, the computational burden is dramatically reduced. Using a simulation study, the performance of the presented method is compared with previous methods, and the most appropriate model is selected using the BIC criterion. The proposed approach is also illustrated on a real time-course gene expression data set.
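A penalized spline of the kind used to cut the computational burden can be sketched with a truncated-line basis and a ridge penalty on the knot coefficients; the mixed-model formulation treats those coefficients as random effects. This is an illustration with made-up data, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

# Truncated-line basis: intercept, x, and (x - knot)_+ terms
knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones(n), x]
                    + [np.maximum(x - k, 0.0) for k in knots])

# Penalized (ridge) fit, shrinking only the knot coefficients -- the
# mixed-model view treats these as random effects with a common variance
lam = 1.0
D = np.eye(B.shape[1])
D[0, 0] = D[1, 1] = 0.0                   # intercept and slope unpenalized
coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
fitted = B @ coef

rmse = np.sqrt(np.mean((fitted - np.sin(2 * np.pi * x)) ** 2))
```

Because the basis has only a couple of dozen columns regardless of the number of time points, the fit reduces to a small ridge regression, which is where the computational savings over unstructured covariance modeling come from.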
Naghi Hemmati, Mousa Golalizadeh, Volume 12, Issue 1 (9-2018)
Abstract
Owing to multiple sources of error, shape data are often prone to measurement error. Ignoring such error, if it does exist, causes many problems, including bias in the estimators. Estimators computed from the observed variables without accounting for the measurement error are called naive estimators; for the rotation and scale parameters in Procrustes matching of two-dimensional shape data, they are biased. To correct this and to improve on the naive estimators, regression calibration methods, obtained through complex regression models and the complex normal distribution, as well as the conditional score method, are proposed in this paper. Their performance is studied in simulation studies. The statistical shape analysis of the sand hills of Ardestan, Iran, is also undertaken in the presence of measurement error.
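The bias of a naive estimator and its correction by regression calibration can be illustrated in the scalar classical measurement-error model (the paper works with rotation and scale parameters of shapes, a harder complex-valued setting). The error variance below is treated as known, an assumption made only for this illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000
x_true = rng.standard_normal(n)
w = x_true + 0.5 * rng.standard_normal(n)   # observed with measurement error
y = 1.0 + 2.0 * x_true + rng.standard_normal(n)

# The naive slope is attenuated by the reliability ratio var(x)/var(w)
W = np.column_stack([np.ones(n), w])
beta_naive, *_ = np.linalg.lstsq(W, y, rcond=None)

# Regression calibration: replace w with E[x | w], a linear shrinkage
# toward the mean, assuming the error variance (0.25) is known
sigma_u2 = 0.25
lam = (w.var() - sigma_u2) / w.var()        # estimated reliability ratio
x_cal = w.mean() + lam * (w - w.mean())
Xc = np.column_stack([np.ones(n), x_cal])
beta_cal, *_ = np.linalg.lstsq(Xc, y, rcond=None)
```

The naive slope settles near 2 × 0.8 = 1.6, while the calibrated fit recovers the true slope of 2; the paper's contribution is carrying this logic over to Procrustes rotation and scale.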
Freshteh Osmani, Ali Akbar Rasekhi, Volume 12, Issue 2 (3-2019)
Abstract
Missing values are a common problem in data analysis; it is therefore important to impute them so that the completed data can be analyzed properly. Two approaches commonly used to deal with missing data are multiple imputation (MI) and inverse probability weighting (IPW). In this study, a third approach, a combination of MI and IPW, is introduced. The results of the simulation study show that IPW/MI can have advantages over the alternatives. Given the prevalence of missing values in most studies, especially in the medical field, ignoring them leads to incorrect analyses, so using robust methods to handle missing values properly is essential.
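The IPW half of the combined approach can be sketched for a missing-at-random outcome. For clarity the true response propensity is used below; in practice it would be estimated, e.g. by logistic regression of the missingness indicator on the covariate. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20000
x = rng.standard_normal(n)                  # always-observed covariate
y = 1.0 + x + rng.standard_normal(n)        # outcome, true mean 1

# Missing at random: y is more likely to be observed when x is large
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
observed = rng.uniform(size=n) < p_obs

# The complete-case mean is biased upward (observed cases have larger x)
cc_mean = y[observed].mean()

# IPW: weight each observed case by 1 / P(observed | x), so the observed
# sample stands in for the full one
w = 1.0 / p_obs[observed]
ipw_mean = np.sum(w * y[observed]) / np.sum(w)
```

The weighted mean recovers the target value of 1 that the complete-case analysis misses; the IPW/MI combination studied in the paper pairs such weights with multiply imputed values.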
Meysam Moghimbeygi, Mousa Golalizadeh, Volume 13, Issue 1 (9-2019)
Abstract
Recalling Kendall's definition of shape as a point on a hypersphere, a regression model for shapes is studied in this paper. In order to simplify the modeling, triangulation via two landmarks is proposed. The triangulation not only simplifies the regression modeling of the shapes but also provides a straightforward computational procedure to reconstruct the geometrical structure of the objects. The novelty of the proposed method lies in using a shape-based predictor variable that suitably describes the geometrical variability of the response. The proposed methods are compared with and evaluated against full Procrustes matching through the mean squared error criterion. The application of the two models to configurations of rat skulls is investigated.
Kiomars Motarjem, Volume 15, Issue 2 (3-2022)
Abstract
The prevalence of Covid-19 is greatly affected by the location of the patients. From the beginning of the pandemic, many models have been used to analyze the survival times of Covid-19 patients. These models often use a Gaussian random field to include the spatial effect in the survival model, but the assumption of Gaussian random effects is not realistic. In this paper, by considering a spatial skew-Gaussian random field for the random effects, a new spatial survival model is introduced. The performance of the proposed model is then evaluated in a simulation study. Finally, an application of the model to the analysis of survival time data of Covid-19 patients in Tehran is presented.
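A spatial skew-Gaussian random field of the general kind considered here can be simulated by combining a half-normal and a normal Gaussian field sharing a common spatial correlation; this is the standard skew-normal representation, offered only as an illustration of the random-effects structure, not the paper's exact model:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(12)
m = 100
# Random locations and an exponential correlation for the latent fields
s = rng.uniform(0, 10, size=(m, 2))
dists = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)
C = np.exp(-dists / 2.0)                    # exponential correlogram
L = np.linalg.cholesky(C + 1e-8 * np.eye(m))

# Skew-Gaussian field: delta*|Z0| + sqrt(1 - delta^2)*Z1 at each site,
# with Z0, Z1 independent Gaussian fields having the same correlation
delta = 0.9
z0 = L @ rng.standard_normal(m)
z1 = L @ rng.standard_normal(m)
field = delta * np.abs(z0) + np.sqrt(1 - delta**2) * z1

# Check the marginal construction: iid draws from the same recipe are
# positively skewed, unlike a Gaussian random effect
marginal = (delta * np.abs(rng.standard_normal(10000))
            + np.sqrt(1 - delta**2) * rng.standard_normal(10000))
```

Plugging such a field into the log of the hazard, in place of a Gaussian one, is the kind of relaxation of the normality assumption the abstract describes.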
Mousa Golalizadeh, Sedigheh Noorani, Volume 16, Issue 1 (9-2022)
Abstract
Nowadays, the observations in many scientific fields, including the biological sciences, are often high dimensional, meaning the number of variables exceeds the number of samples. One of the problems in model-based clustering of such data is the estimation of too many parameters. To overcome this problem, the dimension of the data must first be reduced before clustering, which can be done through dimension reduction methods. In this context, an approach that has recently been receiving more attention is the random projections method. This method is studied from theoretical and practical perspectives in this paper, and its superiority over some conventional approaches, such as principal component analysis and variable selection, is shown in the analysis of three real data sets.
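A Gaussian random projection followed by clustering can be sketched in a few lines; the projection dimension and the synthetic two-group data below are illustrative choices:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(9)
n, p, d = 200, 1000, 20                     # n samples, p >> n features

# Two well-separated groups in a high-dimensional space
X = rng.standard_normal((n, p))
X[:100, :50] += 3.0                         # shift the first group

# Gaussian random projection to d dimensions (Johnson-Lindenstrauss style):
# pairwise distances are approximately preserved with high probability
R = rng.standard_normal((p, d)) / np.sqrt(d)
Xp = X @ R

# Cluster in the low-dimensional space
centroids, labels = kmeans2(Xp, 2, seed=1, minit="++")

# Agreement with the true grouping, up to label switching
truth = np.array([0] * 100 + [1] * 100)
acc = max(np.mean(labels == truth), np.mean(labels != truth))
```

Unlike principal component analysis, the projection matrix is data-independent, so it costs a single matrix multiplication; the model-based clustering studied in the paper then only has to estimate parameters in d dimensions instead of p.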
Miss Forouzan Jafari, Dr. Mousa Golalizadeh, Volume 17, Issue 2 (2-2024)
Abstract
The mixed effects model is one of the powerful statistical approaches for modeling the relationship between a response variable and predictors when analyzing data with a hierarchical structure. The parameters of these models are often estimated by either least squares or maximum likelihood; however, the resulting estimates are inefficient when the error distributions are non-normal. In such cases, mixed effects quantile regression can be used. Moreover, when the number of variables under study increases, penalized mixed effects quantile regression is one of the best methods for gaining prediction accuracy and model interpretability. In this paper, under the assumption of an asymmetric Laplace distribution for the random effects, we propose a double-penalized model in which both the random and fixed effects are independently penalized. The performance of this new method is evaluated in simulation studies, and the results are discussed and compared with some competing models. In addition, its application is demonstrated by analyzing a real example.
Miss Nilia Mosavi, Dr. Mousa Golalizadeh, Volume 17, Issue 2 (2-2024)
Abstract
Cancer progression among patients can be assessed by creating a set of gene markers using statistical data analysis methods, but one of the main problems in the statistical study of this type of data is the large number of genes relative to the small number of samples. It is therefore essential to use dimensionality reduction techniques to find the optimal set of genes for accurately predicting the desired classes. Choosing an appropriate method can also help extract valuable information and improve the efficiency of the machine learning model. This article uses an ensemble learning approach, a random support vector machine cluster, to find the optimal feature set. On real data, it is shown that by randomly projecting the original high-dimensional feature space onto multiple lower-dimensional feature subspaces and combining support vector machine classifiers, not only are the essential genes involved in causing prostate cancer found, but the classification precision is also increased.
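A random-subspace ensemble of support vector machines, in the spirit of the random SVM cluster described here, can be sketched as follows. The synthetic data stand in for gene expression, and all sizes are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(10)
# Few samples, many features -- the typical gene-expression shape
n, p = 200, 500
X = rng.standard_normal((n, p))
y = np.array([0] * 100 + [1] * 100)
X[y == 1, :100] += 1.0                      # class signal in the first 100 "genes"

train = rng.permutation(n)[:120]
test = np.setdiff1d(np.arange(n), train)

# Random-subspace ensemble: each SVM is trained on a random subset of
# features, and predictions are combined by majority vote
votes = np.zeros(len(test))
n_members, subspace = 25, 50
for _ in range(n_members):
    idx = rng.choice(p, size=subspace, replace=False)
    clf = SVC(kernel="linear").fit(X[train][:, idx], y[train])
    votes += clf.predict(X[test][:, idx])
y_pred = (votes > n_members / 2).astype(int)
acc = np.mean(y_pred == y[test])
```

Tracking how often each feature index appears in the better-performing members is one simple way such an ensemble can also rank candidate marker genes, alongside improving classification accuracy.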
Mr Milad Pakdel, Dr Kiomars Motarjem, Volume 18, Issue 1 (8-2024)
Abstract
In some instances, the occurrence of an event can be influenced by its spatial location, giving rise to spatial survival data. Accurate and precise estimation of the parameters of a spatial survival model is challenging because of the complexity of the likelihood function, which highlights the value of a Bayesian approach in survival analysis. In a Bayesian spatial survival model, the spatial correlation between event times is described by a geostatistical model. This article presents a simulation study to estimate the parameters of classical and spatial survival models, evaluating the performance of each model in fitting simulated survival data. Ultimately, it is demonstrated that the spatial survival model is more effective than conventional models in analyzing blood cancer data.
Mehrnoosh Madadi, Kiomars Motarjem, Volume 18, Issue 2 (2-2025)
Abstract
Due to the volume and complexity of emerging data in survival analysis, it is necessary to use statistical learning methods in this field. These methods can estimate the survival probability and the effect of various factors on patients' survival. In this article, the performance of the Cox model, a common model in survival analysis, is compared with penalization-based methods such as Cox ridge and Cox lasso, as well as with statistical learning methods such as random survival forests and neural networks. The simulation results show that under linear conditions the above models perform similarly to the Cox model, while under nonlinear conditions methods such as Cox lasso, random survival forests, and neural networks perform better. These models were then evaluated on data from patients with atheroma, and the results showed that when faced with data containing a large number of explanatory variables, statistical learning approaches generally outperform the classical survival model.
Mohammad Mehdi Saber, Mohsen Mohammadzadeh, Volume 18, Issue 2 (2-2025)
Abstract
In this article, spatial autoregressive and second-order moving average regression models are presented for the outputs of a heavy-tailed, skewed spatial random field derived from the proposed multivariate generalized skew-Laplace distribution. The model parameters are estimated by the maximum likelihood method using the Kullback-Leibler divergence criterion. The best spatial predictor is also provided. A simulation study is then conducted to validate and evaluate the performance of the proposed model, and the method is applied to the analysis of a real data set.