:: Search published articles ::
Showing 4 results for Model Selection

Abdolreza Sayyareh, Raouf Obeidi,
Volume 4, Issue 1 (9-2010)
Abstract

AIC is commonly used for model selection, but the value of AIC has no direct interpretation. Cox's test is a generalization of the likelihood ratio test. When the true model is unknown, we select a model based on AIC, but we cannot speak about the closeness of the selected model to the true model, because it is not clear whether the selected model is well-specified or mis-specified. This paper extends Akaike's AIC-type model selection alongside Cox's test for model selection. Based on simulations, we study the results of AIC and Cox's test and the ability of this criterion and this test to discriminate between models: if we select a model based on AIC, whether or not Cox's test is able to select a better model once the foundations of the rival models are taken into account. On the other hand, the model selection literature has generally been poor at reflecting the foundations of a set of reasonable models when the true model is unknown. As part of the results, we propose an approach to selecting a reasonable set of models.
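For reference, a minimal sketch of the standard AIC definition the abstract builds on (the notation here is assumed, not taken from the paper):

\[
\mathrm{AIC} = -2\,\log L(\hat{\theta}\,;\,y) + 2p,
\]

where \(\hat{\theta}\) is the maximum likelihood estimate and \(p\) is the number of free parameters. Differences in AIC between candidates estimate, up to an additive constant common to all of them, the expected Kullback-Leibler divergence from the true density to each fitted model, which is why AIC can rank models even though its absolute value has no direct interpretation.
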
Abdolreza Sayyareh,
Volume 4, Issue 2 (3-2011)
Abstract

In this paper we establish that, for the Kullback-Leibler divergence, the relative error is superadditive. This shows that a mixture of k rival models gives a better upper bound on the Kullback-Leibler divergence for model selection. In fact, it is shown that the mixture yields a model that is either better than all of the rival models in the mixture or better than the worst rival model in the mixture.
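A minimal sketch of the quantities involved, with notation assumed here rather than taken from the paper: for the true density h and rival densities f_1, ..., f_k,

\[
\mathrm{KL}(h \,\|\, f) = \int h(y)\,\log\frac{h(y)}{f(y)}\,dy,
\qquad
f_{\mathrm{mix}}(y) = \sum_{j=1}^{k} w_j f_j(y), \quad w_j \ge 0,\ \sum_{j=1}^{k} w_j = 1.
\]

Because the Kullback-Leibler divergence is convex in its second argument, \(\mathrm{KL}(h \,\|\, f_{\mathrm{mix}}) \le \sum_{j} w_j\, \mathrm{KL}(h \,\|\, f_j)\); this standard convexity bound already guarantees the mixture is no worse than the weighted average of the rivals, and in particular no worse than the worst rival. It is related to, though not the same as, the superadditivity result stated above.
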
Ghobad Barmalzan, Abdolreza Sayyareh,
Volume 4, Issue 2 (3-2011)
Abstract

Suppose we have a random sample of size n from a population with true density h(.). In general, h(.) is unknown, so we use a model f as an approximation of this density function and base our inference on f. Clearly, f must be close to the true density h to reach valid inference about the population. Proposing a single model based on a few observations, as an approximation or estimate of the true density h, carries a great risk in model selection. For this reason, we choose k non-nested models and investigate which of them is closer to the true density. In this paper, we investigate the main question in model selection of how to obtain a collection of appropriate models for estimating the true density function h, based on the Kullback-Leibler risk.
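As an illustration only (the code and the candidate families below are assumed for the sketch, not taken from the paper), comparing k non-nested parametric models by a bias-corrected estimate of E_h[log f_j] orders them by Kullback-Leibler risk, since the term E_h[log h] is common to all candidates:

# Sketch: compare non-nested candidate families by AIC, a bias-corrected
# estimate of the expected log-density under each fitted model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=1.5, size=200)   # sample from a density h unknown to the analyst

candidates = {"normal": stats.norm, "gamma": stats.gamma, "lognormal": stats.lognorm}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(y)                         # maximum likelihood fit
    loglik = np.sum(dist.logpdf(y, *params))     # maximized log-likelihood
    aic[name] = 2 * len(params) - 2 * loglik     # AIC = 2p - 2 log L

best = min(aic, key=aic.get)
print(aic, "-> selected:", best)

Picking the candidate with the smallest AIC is then a finite-sample analogue of picking the candidate with the smallest Kullback-Leibler risk.
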
Sedighe Eshaghi, Hossein Baghishani, Negar Eghbal,
Volume 12, Issue 1 (9-2018)
Abstract

Introducing efficient model selection criteria for mixed models is a substantial challenge; its source is fitting the model and computing the maximum likelihood estimates of the parameters. Data cloning is a recent method for fitting mixed models efficiently within a likelihood-based approach. It has become popular and avoids the main problems of other likelihood-based methods for mixed models. A disadvantage of data cloning is that it cannot compute the maximum of the likelihood function of the model, which is a key quantity in proposing and calculating information criteria. Therefore, it seems that an appropriate information criterion cannot be defined directly through the data cloning approach. In this paper, this belief is refuted and a criterion based on data cloning is introduced. The performance of the proposed model selection criterion is also evaluated by a simulation study.
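A minimal sketch of the data cloning idea referred to above (standard formulation, with notation assumed here): for a prior \(\pi\) and K copies ("clones") of the data y, MCMC targets

\[
\pi_K(\theta \mid y) \;\propto\; \{L(\theta\,;\,y)\}^{K}\, \pi(\theta),
\]

and as K grows this posterior concentrates at the maximum likelihood estimate \(\hat{\theta}\), with K times its covariance approximating the inverse Fisher information. The procedure thus delivers \(\hat{\theta}\) and its standard errors without ever evaluating the maximized likelihood \(L(\hat{\theta}\,;\,y)\), which is why AIC-type criteria are not directly available and a data-cloning-based criterion is needed.
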




Journal of Statistical Sciences – Scientific Research Publication of the Iranian Statistical Association
