Hossein Baghishani, Mohammad Mahdi Tabatabaei,
Volume 1, Issue 1 (9-2007)
Abstract
In parameter driven models, the main problems are likelihood approximation and parameter estimation. One approach to these problems is to apply a simpler surrogate likelihood such as the composite likelihood. In this paper, we first introduce parameter driven models and the composite likelihood, and then define a new model selection criterion based on the composite likelihood. Finally, we demonstrate the capabilities of the composite likelihood for inference and accurate model selection in parameter driven models through a simulation study.
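The composite likelihood idea the abstract refers to can be illustrated with a minimal sketch, not drawn from the paper itself: instead of the full joint likelihood, one maximizes a sum of low-dimensional (here pairwise) log-densities. All names and the exchangeable-correlation Gaussian setting below are illustrative assumptions.

```python
import numpy as np

def bivariate_normal_logpdf(x, y, rho):
    # standard bivariate normal log-density with correlation rho
    q = (x**2 - 2*rho*x*y + y**2) / (1 - rho**2)
    return -np.log(2*np.pi) - 0.5*np.log(1 - rho**2) - 0.5*q

def pairwise_log_likelihood(data, rho):
    # composite (pairwise) log-likelihood: sum bivariate terms over all pairs
    n, d = data.shape
    total = 0.0
    for i in range(d):
        for j in range(i + 1, d):
            total += bivariate_normal_logpdf(data[:, i], data[:, j], rho).sum()
    return total

rng = np.random.default_rng(0)
d, n, true_rho = 5, 2000, 0.6
cov = (1 - true_rho) * np.eye(d) + true_rho * np.ones((d, d))
data = rng.multivariate_normal(np.zeros(d), cov, size=n)

# maximize the composite likelihood over a grid of correlation values
grid = np.linspace(0.05, 0.95, 91)
est = grid[np.argmax([pairwise_log_likelihood(data, r) for r in grid])]
```

The point of the sketch is that a d-dimensional likelihood is replaced by sums of 2-dimensional ones, which is the kind of simplification that makes inference tractable in models whose full likelihood is intractable.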
Maliheh Abbasnejad, Marzeiyeh Shakouri,
Volume 2, Issue 2 (2-2009)
Abstract
In this paper, we establish a goodness of fit test for exponentiality based on the estimated Renyi information. We estimate the Renyi distance in the manner of Correa's entropy estimate. Critical values of the test are computed by Monte Carlo simulation. We also compute the power of the test under different alternatives and show that it compares favorably with its leading competitor.
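The general recipe described here (an entropy-based distance statistic with Monte Carlo critical values) can be sketched as follows. This is not the paper's procedure: as an assumption, Vasicek's simpler spacing estimator stands in for Correa's, and a Kullback-Leibler-type statistic stands in for the Renyi distance.

```python
import numpy as np

def vasicek_entropy(x, m):
    # Vasicek (1976) spacing-based entropy estimate, used here as a
    # simpler stand-in for Correa's estimator
    x = np.sort(x)
    n = len(x)
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    return np.mean(np.log(n * (x[hi] - x[lo]) / (2 * m)))

def kl_exp_stat(x, m=3):
    # estimated KL distance from the sample to the fitted exponential:
    # small under exponentiality, large under alternatives
    return np.log(np.mean(x)) + 1.0 - vasicek_entropy(x, m)

rng = np.random.default_rng(1)
n, reps = 50, 2000

# Monte Carlo 5% critical value under the exponential null
null_stats = [kl_exp_stat(rng.exponential(size=n)) for _ in range(reps)]
crit = np.quantile(null_stats, 0.95)

# estimated power against a uniform alternative
power = np.mean([kl_exp_stat(rng.uniform(size=n)) > crit
                 for _ in range(500)])
```

The same two-step structure (simulate the null to calibrate, then simulate alternatives to estimate power) is what the abstract's simulation study describes.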
Abdolreza Sayareh, Parisa Torkman,
Volume 3, Issue 1 (9-2009)
Abstract
Model selection aims to find the best model. Selection in the presence of censored data arises in a variety of problems. In this paper we emphasize that the Kullback-Leibler divergence under complete data has an advantage. Procedures are provided to construct a tracking interval for the expected difference of Kullback-Leibler risks based on Type II right censored data. A simulation study shows that this procedure works properly for optimum model selection.
Abdolreza Sayyareh, Raouf Obeidi,
Volume 4, Issue 1 (9-2010)
Abstract
AIC is commonly used for model selection, but the value of AIC itself has no direct interpretation. Cox's test is a generalization of the likelihood ratio test. When the true model is unknown, we select a model based on AIC, but we cannot speak about the closeness of the selected model to the true model, because it is not clear whether the selected model is well-specified or mis-specified. This paper extends Akaike's AIC-type model selection alongside Cox's test for model selection. Based on simulations, we study the results of AIC and Cox's test and the ability of this criterion and this test to discriminate between models: if we select a model based on AIC, whether or not Cox's test is able to select a better model, and which of the two better reflects the foundations of the rival models. On the other hand, the model selection literature has generally been poor at reflecting the foundations of a set of reasonable models when the true model is unknown. As part of the results, we propose an approach to selecting a reasonable set of models.
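The AIC side of the comparison can be made concrete with a minimal sketch, assuming two hypothetical non-nested rivals (normal versus Laplace, each with two fitted parameters); this is an illustration of the criterion, not the paper's experiment.

```python
import numpy as np

def aic_normal(x):
    # Gaussian MLE log-likelihood; AIC = 2k - 2*loglik with k = 2
    mu, sigma = np.mean(x), np.std(x)
    loglik = np.sum(-0.5*np.log(2*np.pi*sigma**2) - (x - mu)**2/(2*sigma**2))
    return 2*2 - 2*loglik

def aic_laplace(x):
    # Laplace MLE: median location, mean absolute deviation scale; k = 2
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    loglik = np.sum(-np.log(2*b) - np.abs(x - mu)/b)
    return 2*2 - 2*loglik

rng = np.random.default_rng(2)
x = rng.normal(size=500)   # data actually generated from the normal model

# the rival with the smaller AIC is selected; with this sample the
# well-specified normal fit wins
selected = "normal" if aic_normal(x) < aic_laplace(x) else "laplace"
```

The abstract's point survives the sketch: AIC orders the rivals, but the AIC values themselves say nothing about how close the winner is to the true density, which is why a complementary test such as Cox's is of interest.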
Abdolreza Sayyareh,
Volume 4, Issue 2 (3-2011)
Abstract
In this paper we establish that, for the Kullback-Leibler divergence, the relative error is superadditive. This shows that a mixture of k rival models gives a better upper bound on the Kullback-Leibler divergence for model selection. In fact, it is shown that the mixture yields a model which is better than all of the rival models in the mixture, or at least a model which is better than the worst rival model in the mixture.
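The weaker half of the claim, that the mixture is never worse than the worst rival, follows from convexity of the Kullback-Leibler divergence in its second argument and can be checked numerically. The three-point distributions below are illustrative assumptions, not from the paper.

```python
import numpy as np

def kl(p, q):
    # discrete Kullback-Leibler divergence KL(p || q)
    return np.sum(p * np.log(p / q))

h  = np.array([0.5, 0.3, 0.2])   # true pmf (toy example)
f1 = np.array([0.6, 0.2, 0.2])   # rival model 1
f2 = np.array([0.3, 0.4, 0.3])   # rival model 2

mix = 0.5 * f1 + 0.5 * f2        # equally weighted mixture of the rivals

# convexity of KL in the second argument gives
#   KL(h, mix) <= 0.5*KL(h, f1) + 0.5*KL(h, f2) <= max(KL(h, f1), KL(h, f2))
bound_holds = kl(h, mix) <= max(kl(h, f1), kl(h, f2))
```

So the mixture's divergence is bounded by the weighted average of the rivals' divergences, and hence by the worst rival's divergence.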
Ghobad Barmalzan, Abdolreza Sayyareh,
Volume 4, Issue 2 (3-2011)
Abstract
Suppose we have a random sample of size n from a population with true density h(.). In general, h(.) is unknown, and we use a model f as an approximation of this density function and base our inference on f. Clearly, f must be close to the true density h for the inference about the population to be valid. Adopting a single model based on a few observations, as an approximation or estimate of the true density h, carries a great risk in model selection. For this reason, we choose k non-nested models and investigate which model is closer to the true density. In this paper, we investigate the main question in model selection of how to obtain a collection of appropriate models for the estimation of the true density function h, based on the Kullback-Leibler risk.
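The mechanics of comparing non-nested rivals by Kullback-Leibler risk can be sketched as follows. Since KL(h, f) = E_h[log h] - E_h[log f] and the first term is the same for every rival, ranking fitted models by their mean log-likelihood ranks them by estimated KL risk. The three candidate families below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(size=1000)   # sample from the (here known) true density h

def mean_loglik_exponential(x):
    # exponential fit with MLE mean
    m = np.mean(x)
    return np.mean(-np.log(m) - x / m)

def mean_loglik_normal(x):
    # Gaussian fit with MLE mean and standard deviation
    mu, s = np.mean(x), np.std(x)
    return np.mean(-0.5*np.log(2*np.pi*s**2) - (x - mu)**2/(2*s**2))

def mean_loglik_uniform(x):
    # U(0, max) plug-in fit: constant density 1/max(x)
    return -np.log(np.max(x))

# largest mean log-likelihood <=> smallest estimated KL risk
scores = {
    "exponential": mean_loglik_exponential(x),
    "normal": mean_loglik_normal(x),
    "uniform": mean_loglik_uniform(x),
}
best = max(scores, key=scores.get)
```

With exponential data the well-specified exponential fit comes out closest; in practice h is unknown and the scores only order the rivals relative to one another, which is exactly why the abstract argues for retaining a collection of appropriate models rather than a single winner.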
Ebrahim Konani, Saeid Bagrezaei,
Volume 5, Issue 1 (9-2011)
Abstract
In this article, the characterization of distributions is considered using Kullback-Leibler information and record values. Some characterizations are then obtained based on the Kullback-Leibler information and Shannon entropy of order statistics and record values.