Showing 6 results for Censored Data
Abdolreza Sayareh, Parisa Torkman, Volume 3, Issue 1 (9-2009)
Abstract
Model selection aims to find the model that best describes the data, and selection in the presence of censored data arises in a variety of problems. In this paper, we emphasize that the Kullback-Leibler divergence computed under complete data offers an advantage. Procedures are provided to construct a tracking interval for the expected difference of Kullback-Leibler risks based on Type-II right-censored data. A simulation study shows that this procedure works properly for selecting the optimal model.
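The general idea of a tracking interval can be sketched as follows: fit each candidate model to the Type-II censored sample, collect per-unit log-likelihood contributions (densities for failures, survival probabilities for censored units), and form a normal-approximation interval for their mean difference, which estimates the difference of Kullback-Leibler risks. The exact construction in the paper may differ; this is a minimal sketch assuming exponential and Rayleigh candidates, chosen because both have closed-form censored-data MLEs.

```python
import math
import random

def typeII_censor(sample, r):
    """Sort the sample and censor at the r-th failure (Type-II right censoring)."""
    xs = sorted(sample)
    return xs[:r], xs[r - 1], len(xs) - r   # failures, censoring time, # censored

def exp_contribs(obs, t, nc):
    """Per-unit log-likelihood contributions under a fitted exponential model:
    log f for failures, log S for the censored units."""
    theta = (sum(obs) + nc * t) / len(obs)   # censored-data MLE of the mean
    out = [-math.log(theta) - x / theta for x in obs]
    return out + [-t / theta] * nc

def rayleigh_contribs(obs, t, nc):
    """The same contributions under a fitted Rayleigh model."""
    s2 = (sum(x * x for x in obs) + nc * t * t) / (2 * len(obs))  # MLE of sigma^2
    out = [math.log(x / s2) - x * x / (2 * s2) for x in obs]
    return out + [-t * t / (2 * s2)] * nc

def tracking_interval(c1, c2, z=1.96):
    """Normal-approximation interval for the expected per-unit difference of
    log-likelihoods, a proxy for the difference of Kullback-Leibler risks."""
    d = [a - b for a, b in zip(c1, c2)]
    n, m = len(d), sum(d) / len(d)
    sd = math.sqrt(sum((x - m) ** 2 for x in d) / (n - 1))
    half = z * sd / math.sqrt(n)
    return m - half, m + half
```

If the interval lies entirely above zero, the first model is tracked as the better one in Kullback-Leibler risk; an interval containing zero leaves the comparison inconclusive.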
Azadeh Kiapour, Volume 11, Issue 1 (9-2017)
Abstract
Usually, we estimate an unknown parameter from a random sample using standard methods of estimation such as maximum likelihood. In some situations, we also have information about the true parameter in the form of a guess. In these cases, one may shrink the maximum likelihood (or another) estimator toward the guess value and construct a shrinkage estimator. In this paper, we study the behavior of a Bayes shrinkage estimator for the scale parameter of the exponential distribution based on censored samples under an asymmetric, scale-invariant loss function. To do this, we propose a Bayes shrinkage estimator and compute the relative efficiency between this estimator and the best linear estimator within a subclass, with respect to the sample size, the hyperparameters of the prior distribution, and the closeness of the guess to the true parameter. The results are also extended to the Weibull and Rayleigh lifetime distributions.
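The mechanics of a linear shrinkage estimator and of relative efficiency can be illustrated with a short Monte Carlo sketch. This is not the paper's Bayes shrinkage estimator or its loss function; it is a simplified sketch assuming a plain linear shrink of the Type-II censored MLE toward the guess, with efficiency measured by a mean-squared-error ratio (the function name `rel_efficiency` and all parameter values are illustrative).

```python
import random

def rel_efficiency(theta, theta0, n, r, w, reps=2000, seed=1):
    """MSE(MLE) / MSE(shrinkage) for the exponential mean theta under
    Type-II right censoring, with shrinkage w*theta0 + (1-w)*MLE toward
    the guess theta0.  A ratio > 1 means shrinkage wins."""
    rng = random.Random(seed)
    se_mle = se_shr = 0.0
    for _ in range(reps):
        xs = sorted(rng.expovariate(1 / theta) for _ in range(n))
        obs, t = xs[:r], xs[r - 1]              # failures and censoring time
        ttt = sum(obs) + (n - r) * t            # total time on test
        mle = ttt / r                           # censored-data MLE
        shr = w * theta0 + (1 - w) * mle        # linear shrinkage estimator
        se_mle += (mle - theta) ** 2
        se_shr += (shr - theta) ** 2
    return se_mle / se_shr
```

As the abstract's comparison suggests, the shrinkage estimator dominates near the guess (ratio above one when `theta0` is close to `theta`) and loses badly when the guess is far off.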
Mehran Naghizadeh Qomi, Maryam Vahidian, Volume 11, Issue 2 (3-2018)
Abstract
The problem of finding tolerance intervals has received much attention in research and is widely applied in industry. A tolerance interval is a random interval that covers a specified proportion of the population of interest with a specified confidence level. In this paper, statistical tolerance limits are derived for the lifetime of k-out-of-n systems with exponentially distributed component lifetimes. We then compute the accuracy of the proposed tolerance limits and the number of failures needed to attain a desired accuracy level based on Type-II right-censored data. Finally, we extend the results to the Weibull distribution.
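For a single exponential lifetime, the standard (p, γ) lower tolerance limit from Type-II censored data has a closed form: since twice the total time on test divided by the mean follows a chi-square law with 2r degrees of freedom, a γ-confidence lower bound on the mean times ln(1/p) covers a proportion p of the population. The sketch below assumes this textbook construction (the paper's k-out-of-n system limits are more involved) and uses the Wilson-Hilferty approximation for the chi-square quantile to stay dependency-free.

```python
import math
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    a = 2.0 / (9.0 * df)
    return df * (1 - a + z * math.sqrt(a)) ** 3

def exp_lower_tolerance_limit(obs, t_cens, n, p, gamma):
    """(p, gamma) lower tolerance limit for an exponential lifetime from
    Type-II right-censored data: a limit L with P(X > L) >= p held with
    confidence gamma.  obs are the r observed failures out of n units."""
    r = len(obs)
    ttt = sum(obs) + (n - r) * t_cens                    # total time on test
    theta_low = 2 * ttt / chi2_quantile(gamma, 2 * r)    # lower bound on the mean
    return theta_low * math.log(1 / p)
```

Note that demanding a larger covered proportion p shrinks the limit, since ln(1/p) decreases toward zero as p approaches one.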
Mehran Naghizadeh Qomi, Zohre Mahdizadeh, Hamid Zareefard, Volume 12, Issue 1 (9-2018)
Abstract
Suppose that we have a random sample from the one-parameter Rayleigh distribution. In classical methods, we estimate the parameter of interest based on the sample information, using the usual estimators. Sometimes in practice, the researcher has information about the unknown parameter in the form of a guess value, known as nonsample information. In this case, linear shrinkage estimators are constructed by combining the nonsample and sample information; these have smaller risk than the usual estimators when the guess is close to the true value. In this paper, several shrinkage testimators are introduced using different methods based on the closeness of the guess value to the true parameter, and their risks are computed under the entropy loss function. The performance of the shrinkage testimators relative to the best linear estimator is then evaluated via their relative efficiency. Finally, the results are applied to Type-II censored data.
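A "testimator" couples a preliminary test with estimation: shrink toward the guess only when a test fails to reject it. The sketch below illustrates that idea for the Rayleigh scale parameter σ² with a complete sample (the paper works with Type-II censored data and several construction methods); the weight `w`, the two-sided test, and the Wilson-Hilferty quantile are all simplifying assumptions.

```python
import math
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    a = 2.0 / (9.0 * df)
    return df * (1 - a + z * math.sqrt(a)) ** 3

def rayleigh_shrinkage_testimator(xs, sigma0_sq, w=0.5, alpha=0.05):
    """Preliminary-test shrinkage estimator for the Rayleigh scale sigma^2.
    If H0: sigma^2 = sigma0_sq is not rejected, shrink the MLE toward the
    guess; otherwise keep the MLE."""
    n = len(xs)
    s = sum(x * x for x in xs)
    mle = s / (2 * n)                  # MLE of sigma^2
    stat = s / sigma0_sq               # ~ chi-square(2n) under H0
    lo = chi2_quantile(alpha / 2, 2 * n)
    hi = chi2_quantile(1 - alpha / 2, 2 * n)
    if lo <= stat <= hi:               # H0 accepted: shrink toward the guess
        return w * sigma0_sq + (1 - w) * mle
    return mle                         # H0 rejected: trust the sample alone
```

This captures the risk trade-off described in the abstract: near the guess the testimator behaves like a shrinkage estimator, while far from it the preliminary test protects against the guess's bias.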
Mehran Naghizadeh Qomi, Volume 14, Issue 2 (2-2021)
Abstract
In classical statistics, the parameter of interest is estimated based on sample information using natural estimators such as the maximum likelihood estimator. In Bayesian statistics, estimators are constructed by combining prior knowledge with the sample information. In some situations, however, the researcher also has information about the unknown parameter in the form of a guess. Bayesian shrinkage estimators can be constructed by combining this nonsample information with the sample information and the prior knowledge, which falls in the area of semi-classical statistics. In this paper, we introduce a class of Bayesian shrinkage estimators for the Weibull scale parameter as a generalization of existing estimators, and we study their bias and risk under the LINEX loss function. The proposed estimators are then compared using a real data set.
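The LINEX ingredients can be made concrete. LINEX loss is L(Δ) = exp(aΔ) − aΔ − 1, penalizing over- and under-estimation asymmetrically, and the Bayes rule under LINEX is −(1/a)·ln E[exp(−aθ) | data]. The sketch below assumes a Weibull with known shape c, reparameterized by the rate λ = scale^(−c) so that a conjugate gamma prior gives the LINEX Bayes rule in closed form; it shows the mechanics only, not the paper's shrinkage class.

```python
import math

def linex_loss(delta, a):
    """LINEX loss exp(a*delta) - a*delta - 1; asymmetric whenever a != 0."""
    return math.exp(a * delta) - a * delta - 1

def weibull_rate_bayes_linex(xs, shape_c, a0, b0, q):
    """Bayes estimator of the Weibull rate lambda under LINEX loss with
    shape q, using a conjugate Gamma(a0, rate=b0) prior.  The transform
    y = x**shape_c makes the data exponential with rate lambda, so the
    posterior is Gamma(a0 + n, rate=b0 + sum(y)) and the LINEX rule
    -(1/q) * log E[exp(-q*lambda) | data] follows from the gamma MGF."""
    n = len(xs)
    s = sum(x ** shape_c for x in xs)
    alpha, beta = a0 + n, b0 + s
    return (alpha / q) * math.log(1 + q / beta)
```

As q tends to zero the LINEX rule recovers the posterior mean α/β, which is the squared-error Bayes estimator; the sign of q controls which direction of error is penalized more heavily.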
Bahram Haji Joudaki, Soliman Khazaei, Reza Hashemi, Volume 19, Issue 1 (9-2025)
Abstract
Accelerated failure time models are used in survival analysis when the data are censored, especially when combined with auxiliary variables. When such models depend on an unknown parameter, one applicable approach is the Bayesian method, which here treats the parameter space as infinite-dimensional. In this framework, the Dirichlet process mixture model plays an important role. In this paper, a Dirichlet process mixture model with the Burr XII distribution as the kernel is considered for modeling the survival distribution in the accelerated failure time model. MCMC methods are then employed to generate samples from the posterior distribution. The performance of the proposed model is compared with Polya tree mixture models on simulated and real data, and the results show that the proposed model performs better.
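Two building blocks of such a model are easy to exhibit: the Burr XII kernel, whose distribution function F(x) = 1 − (1 + x^c)^(−k) inverts in closed form, and the stick-breaking construction that generates Dirichlet-process mixture weights. The sketch below shows only these ingredients under a fixed truncation level, not the paper's full MCMC sampler; the truncation and parameter values are illustrative.

```python
import random

def burr12_pdf(x, c, k):
    """Burr XII density c*k*x**(c-1) * (1 + x**c)**(-k-1) for x > 0."""
    return c * k * x ** (c - 1) * (1 + x ** c) ** (-k - 1)

def burr12_sample(c, k, rng):
    """Draw from Burr XII by inverting F(x) = 1 - (1 + x**c)**(-k)."""
    u = rng.random()
    return ((1 - u) ** (-1 / k) - 1) ** (1 / c)

def stick_breaking_weights(concentration, trunc, rng):
    """Truncated stick-breaking construction of Dirichlet-process weights:
    v_j ~ Beta(1, concentration); weight_j = v_j * prod(1 - v_i, i < j)."""
    weights, remaining = [], 1.0
    for _ in range(trunc):
        v = rng.betavariate(1, concentration)
        weights.append(remaining * v)
        remaining *= 1 - v
    weights.append(remaining)          # lump the leftover mass into a final atom
    return weights
```

A draw from the truncated mixture then picks a component index with probability given by the weights and samples a Burr XII variate with that component's (c, k) parameters.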