Licenses
This journal is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
:: Search published articles ::
General users can only access the published articles.
Showing 10 results for Subject:

Ehsan Eshaghi, Hossein Baghishani, Davood Shahsavani,
Volume 7, Issue 1 (9-2013)
Abstract

In some semiparametric survival models with time-dependent coefficients, no closed-form solution exists for the coefficient estimates, so they must be obtained by approximate numerical methods. Because such estimators take complicated forms, extracting their sampling distributions is very difficult, and one typically relies on asymptotic theory to evaluate their properties. In this paper, we first introduce the model and propose an estimation method based on the Taylor expansion and kernel methods. We then establish the consistency and asymptotic normality of the estimators. The performance of the model and the estimation procedure is also evaluated in an extensive simulation study. Finally, the proposed model is applied to a real data set on heart disease patients from a hospital in Mashhad.

Mina Norouzirad, Mohammad Arashi,
Volume 11, Issue 1 (9-2017)
Abstract

Penalized estimators of regression parameters have been studied by many authors over several decades. Penalized regression with the L1 norm is among the most widely used approaches, since it performs variable selection and parameter estimation simultaneously. In this paper, we propose some new estimators that exploit uncertain prior information on the parameters. The superiority of the proposed shrinkage estimators over the least absolute shrinkage and selection operator (LASSO) estimator is demonstrated via a Monte Carlo study. The prediction performance of the proposed estimators relative to the LASSO estimator is also examined on the US State Facts and Figures dataset.
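As a rough illustration of how an L1 penalty performs selection and estimation at once (a sketch, not the authors' code), the soft-thresholding operator at the core of LASSO coordinate descent shrinks small coefficients exactly to zero:

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator used in LASSO coordinate descent.

    Shrinks z toward zero and returns exactly 0 when |z| <= lam,
    which is how the L1 penalty drops variables from the model.
    """
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```

Coefficients whose (partial) least-squares update falls below the tuning parameter `lam` are set to zero, while the rest are shrunk toward zero by `lam`.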


Sedighe Eshaghi, Hossein Baghishani, Negar Eghbal,
Volume 12, Issue 1 (9-2018)
Abstract

Introducing efficient model selection criteria for mixed models is a substantial challenge, rooted in the difficulty of fitting the model and computing the maximum likelihood estimates of the parameters. Data cloning is a recent method for fitting mixed models efficiently within a likelihood-based framework; it has become popular and avoids the main problems of other likelihood-based methods for mixed models. A disadvantage of data cloning is its inability to compute the maximum of the model's likelihood function, a key quantity in defining and calculating information criteria. It therefore seems that an appropriate information criterion cannot be defined directly through the data cloning approach. In this paper, this belief is refuted and a criterion based on data cloning is introduced. The performance of the proposed model selection criterion is also evaluated by a simulation study.
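A minimal sketch of the data-cloning idea, in a conjugate normal-mean model with known unit variance (the function and its parameters are illustrative, not the paper's method): cloning the data k times makes the posterior concentrate at the maximum likelihood estimate.

```python
import numpy as np

def cloned_posterior(y, k, prior_mean=0.0, prior_var=100.0):
    """Normal-mean posterior after cloning the data k times.

    The k-fold cloned likelihood acts like a sample of size k*n, so the
    posterior mean converges to the MLE (the sample mean) and the
    posterior variance shrinks like 1/(k*n). Sampling variance is 1.
    """
    n = k * len(y)
    post_var = 1.0 / (1.0 / prior_var + n)
    post_mean = post_var * (prior_mean / prior_var + k * np.sum(y))
    return post_mean, post_var
```

As k grows, the posterior mean approaches the sample mean and the posterior variance collapses, which is exactly why the method recovers likelihood-based estimates from Bayesian machinery.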


Maryam Borzoei Bidgoli, Mohammad Arashi,
Volume 12, Issue 2 (3-2019)
Abstract

One way of dealing with collinearity in linear models is to use the Liu estimator. In this paper, we propose a new estimator that generalizes the modified Liu estimator of Li and Yang (2012). This estimator is constructed from prior information on the parameter vector in linear regression and the generalized estimator of Akdeniz and Kachiranlar (1995). Using the mean squared error matrix criterion, we obtain conditions under which the newly defined estimator is superior to the generalized Liu estimator. For comparison, a numerical example as well as a Monte Carlo simulation study are presented.
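For orientation, a sketch of the baseline Liu (1993) estimator that the modified and generalized versions build on (the function name and arguments are illustrative):

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu (1993) estimator: beta_d = (X'X + I)^{-1} (X'y + d * beta_OLS).

    d = 1 recovers ordinary least squares, while d < 1 shrinks the
    coefficients, stabilizing the estimate when X'X is ill-conditioned
    due to collinearity.
    """
    XtX = X.T @ X
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + np.eye(X.shape[1]), X.T @ y + d * beta_ols)
```

The biasing parameter d trades a small bias for a reduction in variance, which is how the mean squared error comparisons in the paper arise.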


Mohammad Kazemi, Davood Shahsavani, Mohammad Arashi,
Volume 12, Issue 2 (3-2019)
Abstract

In this paper, we introduce a two-step procedure, in the context of high-dimensional additive models, to identify nonzero linear and nonlinear components. We first develop a sure independence screening procedure, based on the distance correlation between the predictors and the marginal distribution function of the response variable, to reduce the dimensionality of the feature space to a moderate scale. A double-penalization-based procedure is then applied to identify the nonzero and linear components simultaneously. We conduct extensive simulation experiments and a real data analysis to evaluate the numerical performance of the proposed method.
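A sketch of the dependence measure that the screening step ranks predictors by: the empirical distance correlation of Székely et al. (2007). For simplicity this version compares two univariate samples directly, rather than against the marginal distribution function of the response as the paper does.

```python
import numpy as np

def dist_corr(x, y):
    """Empirical distance correlation between two univariate samples.

    Unlike Pearson correlation, it is zero (in population) only under
    independence, so it can pick up purely nonlinear dependence.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    def doubly_centered(a):
        # Pairwise distance matrix with row, column, and grand means removed.
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    A, B = doubly_centered(x), doubly_centered(y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(dcov2 / denom)) if denom > 0 else 0.0
```

Screening then keeps the predictors with the largest distance correlation to the response, shrinking the feature space before the penalized fit.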

Mohammad Arast, Mohammad Arashi, Mohammad Reza Rabie,
Volume 13, Issue 1 (9-2019)
Abstract

Often, in high-dimensional problems, where the number of variables is larger than the number of observations, penalized estimators based on shrinkage methods have better efficiency than the OLS estimator from the prediction error viewpoint. In these estimators, the tuning (shrinkage) parameter plays a decisive role in variable selection. The bridge estimator reduces to the ridge or LASSO estimator as the tuning parameter varies. In this paper, the shrinkage bridge estimator is derived under a linear constraint on the regression coefficients and its consistency is proved. Furthermore, its efficiency is evaluated in a simulation study and a real example.
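The penalty family named here can be written down directly (a sketch; the function name is illustrative): the bridge penalty interpolates between the LASSO and ridge special cases as the exponent q varies.

```python
import numpy as np

def bridge_penalty(beta, lam, q):
    """Bridge penalty: lam * sum_j |beta_j|^q.

    q = 1 gives the LASSO (L1) penalty and q = 2 the ridge (L2) penalty,
    so a single family covers both special cases mentioned in the abstract.
    """
    return lam * np.sum(np.abs(beta) ** q)
```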


Mozhgan Taavoni, Mohammad Arashi,
Volume 14, Issue 2 (2-2021)
Abstract

This paper considers the problem of simultaneous variable selection and estimation in a semiparametric mixed-effects model for longitudinal data with normal errors. We approximate the nonparametric function by a regression spline and simultaneously estimate and select the variables by optimizing a penalized objective function. Under some regularity conditions, the asymptotic behaviour of the resulting estimators is established in a high-dimensional framework where the number of parametric covariates increases with the sample size. For practical implementation, we use an EM algorithm to select the significant variables and estimate the nonzero coefficient functions. Simulation studies are carried out to assess the performance of the proposed method, and a real data set is analyzed to illustrate the procedure.

Negar Eghbal, Hossein Baghishani,
Volume 14, Issue 2 (2-2021)
Abstract

Geostatistical count data over finite populations arise in many applications, such as urban management and medicine. The traditional model for analyzing these data is the spatial logit-binomial model. In most applied situations, however, the data exhibit overdispersion alongside spatial variability, and the binomial model cannot account for this overdispersion. A proper alternative is the beta-binomial model, which is flexible enough to capture the extra variability due to possible overdispersion of the counts. In this paper, we describe a Bayesian spatial beta-binomial model for geostatistical count data, using a combination of the integrated nested Laplace approximation and stochastic partial differential equation methods. We apply the methodology to the number of people injured or killed in car crashes in Mashhad, Iran, and further evaluate the model's performance in a simulation study.
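The overdispersion argument can be made concrete with the two variance formulas (a sketch; function names are illustrative): the beta-binomial inflates the binomial variance by a factor driven by an intra-cluster correlation parameter.

```python
def binomial_var(n, p):
    """Variance of a Binomial(n, p) count: n * p * (1 - p)."""
    return n * p * (1 - p)

def beta_binomial_var(n, p, rho):
    """Beta-binomial variance: n * p * (1 - p) * (1 + (n - 1) * rho).

    The overdispersion parameter rho >= 0 inflates the binomial variance;
    rho = 0 recovers the plain binomial model, which is why the
    beta-binomial can absorb extra variability that the binomial cannot.
    """
    return n * p * (1 - p) * (1 + (n - 1) * rho)
```

For counts with n > 1 trials and any rho > 0, the beta-binomial variance strictly exceeds the binomial one, matching the motivation in the abstract.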


Mahsa Nadifar, Hossein Baghishani, Afshin Fallah,
Volume 15, Issue 1 (9-2021)
Abstract

Many spatio-temporal data, particularly in medicine and disease mapping, are counts. Typically, such count data exhibit extra variability that undermines the performance of the classical Poisson model. Incorporating this variability into the modeling process therefore plays an essential role in improving the efficiency of spatio-temporal data analysis. For this purpose, this paper introduces a new Bayesian spatio-temporal model, called the gamma count model, with enough flexibility to model dispersion. Statistical inference in the proposed model is carried out with the integrated nested Laplace approximation method. A simulation study was performed to compare the proposed model with traditional models, and its application is demonstrated by analyzing leukemia data from Khorasan Razavi province, Iran.

Alireza Beheshty, Hosein Baghishani, Mohammadhasan Behzadi, Gholamhosein Yari, Daniel Turek,
Volume 19, Issue 1 (9-2025)
Abstract

Financial and economic indicators, such as housing prices, often show spatial correlation and heterogeneity. While spatial econometric models effectively address spatial dependency, they face challenges in capturing heterogeneity. Geographically weighted regression is naturally used to model this heterogeneity, but it can become too complex when data show homogeneity across subregions. In this paper, spatially homogeneous subareas are identified through spatial clustering, and Bayesian spatial econometric models are then fitted to each subregion. The integrated nested Laplace approximation method is applied to overcome the computational complexity of posterior inference and the difficulties of MCMC algorithms. The proposed methodology is assessed through a simulation study and applied to analyze housing prices in Mashhad City.




Journal of Statistical Sciences – a scientific-research journal of the Iranian Statistical Society
