:: Search published articles ::
Showing 3 results for Variable Selection

Mohammad Kazemi, Davood Shahsavani, Mohammad Arashi,
Volume 12, Issue 2 (3-2019)
Abstract

In this paper, we introduce a two-step procedure, in the context of high-dimensional additive models, to identify nonzero linear and nonlinear components. We first develop a sure independence screening procedure, based on the distance correlation between the predictors and the marginal distribution function of the response variable, to reduce the dimensionality of the feature space to a moderate scale. Then a double-penalization-based procedure is applied to simultaneously identify the nonzero and linear components. We conduct extensive simulation experiments and a real data analysis to evaluate the numerical performance of the proposed method.
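For orientation, the screening step alone can be sketched with plain NumPy: rank each predictor by its distance correlation with the empirical distribution function of the response and keep the highest-ranked columns. This is a minimal illustration of the screening idea, not the authors' full two-step procedure; the function names and the cutoff n_keep are illustrative.

import numpy as np

def _double_centered(a):
    # pairwise absolute-distance matrix, double-centered (used by distance covariance)
    d = np.abs(a[:, None] - a[None, :])
    return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, y):
    # sample distance correlation between two 1-D arrays
    A, B = _double_centered(x), _double_centered(y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0 else np.sqrt(max(dcov2, 0.0) / denom)

def screen_predictors(X, y, n_keep):
    # keep the n_keep columns of X most associated with the empirical CDF of y
    n = len(y)
    F_y = (np.argsort(np.argsort(y)) + 1) / n      # empirical distribution function F_n(Y_i)
    scores = np.array([distance_correlation(X[:, j], F_y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:n_keep]       # indices of the retained predictors

In the sure independence screening literature the retained dimension is commonly taken on the order of n / log(n), e.g. screen_predictors(X, y, n_keep=int(len(y) / np.log(len(y)))); the surviving columns would then be passed to the penalized second stage.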

Mozhgan Taavoni, Mohammad Arashi,
Volume 14, Issue 2 (2-2021)
Abstract

This paper considers the problem of simultaneous variable selection and estimation in a semiparametric mixed-effects model for longitudinal data with normal errors. We approximate the nonparametric function by a regression spline and simultaneously estimate and select the variables by optimizing a penalized objective function. Under some regularity conditions, the asymptotic behaviour of the resulting estimators is established in a high-dimensional framework where the number of parametric covariates increases with the sample size. For practical implementation, we use an EM algorithm to select the significant variables and estimate the nonzero coefficient functions. Simulation studies are carried out to assess the performance of the proposed method, and a real data set is analyzed to illustrate the proposed procedure.
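As a rough sketch of the spline-approximation idea, assuming scikit-learn is available: the nonparametric function is replaced by a B-spline basis and an L1 (lasso) penalty stands in for the paper's penalty. Unlike the authors' method, the penalty here also shrinks the spline coefficients, and the random effects, within-subject correlation, and EM step are omitted; all names below are illustrative.

import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Lasso

def fit_semiparametric_sketch(X, t, y, n_knots=8, alpha=0.1):
    # penalized fit of y ~ X*beta + f(t), with f approximated by a B-spline basis
    spline = SplineTransformer(n_knots=n_knots, degree=3, include_bias=False)
    B = spline.fit_transform(np.asarray(t).reshape(-1, 1))   # spline basis for f(t)
    Z = np.hstack([X, B])                                    # parametric + spline columns
    model = Lasso(alpha=alpha, max_iter=10_000).fit(Z, y)    # L1 penalty on all columns (simplification)
    beta = model.coef_[:X.shape[1]]                          # parametric effects
    selected = np.flatnonzero(np.abs(beta) > 1e-8)           # variables kept by the penalty
    return beta, selected

The spline basis plays the role of the regression-spline approximation described above; the choice of penalty and tuning parameter alpha would normally be driven by a criterion such as cross-validation.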

Om-Aulbanin Bashiri Goudarzi, Abdolreza Sayyareh, Sedigheh Zamani Mehreyan,
Volume 19, Issue 1 (9-2025)
Abstract

Boosting is a family of supervised machine learning algorithms designed to reduce variance; it turns weak learners into a strong one by combining their individual results. In this paper, mixture models with random effects are considered for small areas, where the errors follow an AR-GARCH model. For variable selection, machine learning algorithms such as boosting are proposed. Using simulated data and tax liability data, the performance of the boosting algorithm is studied and compared with classical variable selection methods such as stepwise selection.
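A minimal sketch of boosting-based variable selection, assuming scikit-learn: fit a gradient-boosting model and keep the predictors whose impurity-based importances exceed a threshold. This illustrates the general selection mechanism only, not the paper's small-area mixture model with AR-GARCH errors; the hyperparameters and threshold are illustrative.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def boosting_variable_selection(X, y, threshold=0.01):
    # fit a boosted ensemble of shallow trees and rank predictors by importance
    booster = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                        max_depth=2, random_state=0).fit(X, y)
    importances = booster.feature_importances_        # nonnegative, sums to 1 across predictors
    selected = np.flatnonzero(importances > threshold) # indices of retained predictors
    return selected, importances

By contrast, the classical stepwise baseline mentioned above adds or drops one covariate at a time according to a criterion such as AIC, which is the comparison reported in the paper.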


Journal of Statistical Sciences – Scientific Research Publication of the Iranian Statistical Society
