:: Search published articles ::
Showing 237 results for Type of Study: Research

Abdolreza Sayareh, Parisa Torkman,
Volume 3, Issue 1 (9-2009)
Abstract

Model selection aims to find the best model, and selection in the presence of censored data arises in a variety of problems. In this paper we emphasize the advantage of the Kullback-Leibler divergence under complete data. Procedures are provided to construct a tracking interval for the expected difference of Kullback-Leibler risks based on Type II right-censored data. A simulation study shows that this procedure works properly for selecting the optimum model.
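For reference (a standard definition added editorially, not taken from the paper), the Kullback-Leibler divergence of a candidate density f from the true density h, whose expected difference between rival models the tracking interval targets, is
$$D_{KL}(h \,\|\, f) = \int h(x)\,\log\frac{h(x)}{f(x)}\,dx.$$
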
Sakineh Sadeghi, Iraj Kazemi,
Volume 3, Issue 1 (9-2009)
Abstract

Recently, dynamic panel data models have been used extensively in social and economic studies. In fitting these models, a lagged response is treated as if it were an ordinary explanatory variable, and this ad hoc assumption produces unreliable results under conventional estimation approaches. A principal issue in the analysis of panel data is accounting for the variability of individual effects. These effects are assumed fixed in many studies because of computational complexity. In this paper, we assume random individual effects to handle such variability and then compare the results with fixed effects. Furthermore, we obtain the model parameter estimates by implementing maximum likelihood and Gibbs sampling methods. We also fit these models to a data set containing the assets and liabilities of banks in Iran.
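A generic form of the model described above (illustrative notation, not taken from the paper) is
$$y_{it} = \gamma\, y_{i,t-1} + x_{it}'\beta + \alpha_i + \varepsilon_{it}, \qquad i=1,\dots,N,\ t=1,\dots,T,$$
where the individual effect $\alpha_i$ is treated as either fixed or random.
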
Behzad Mahmoudian, Mousa Golalizadeh,
Volume 3, Issue 1 (9-2009)
Abstract

Modeling extreme responses in the presence of nonlinear, temporal, spatial and interaction effects can be accomplished with mixed models. In addition, the smoothing-spline representation of a mixed model, together with a Bayesian approach, provides a convenient framework for inference on extreme values. In this article, by representing it as a mixed model, a smoothing spline is used to assess the nonlinear covariate effect on extreme values. To this end, we assume that the extreme responses, given covariates and random effects, are independent with a generalized extreme value distribution. Then, using MCMC techniques in a Bayesian framework, the location parameter of the distribution is estimated as a smooth function of the covariates. Finally, the proposed model is employed to model the extreme values of ozone data.
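For reference (standard form, added editorially), the generalized extreme value distribution mentioned above, with the location parameter the paper models as a smooth function of covariates, has distribution function
$$F(y;\mu,\sigma,\xi) = \exp\Big\{-\big[1+\xi\,(y-\mu)/\sigma\big]^{-1/\xi}\Big\}, \qquad 1+\xi\,(y-\mu)/\sigma > 0.$$
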
Abbas Mahdavi, Mina Towhidi,
Volume 3, Issue 2 (3-2010)
Abstract

One of the most important issues in inferential statistics is the existence of outlier observations. Since these observations have a great influence on the fitted model and its related inferences, it is necessary to find a method for quantifying their effect. The aim of this article is to investigate the effect of outlier observations on kernel density estimation. We present a method, based on the forward search, for identifying outlier observations and assessing their effect on the kernel density estimate.
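As a reminder of the estimator involved (standard notation, not from the paper), the kernel density estimate based on observations $X_1,\dots,X_n$ with kernel $K$ and bandwidth $h$ is
$$\hat f_h(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x-X_i}{h}\right).$$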

Maliheh Abbasnejad Mashhadi, Davood Mohammadi,
Volume 4, Issue 1 (9-2010)
Abstract

In this paper, we characterize symmetric distributions based on the Renyi entropy of order statistics in subsamples. A test of symmetry is proposed based on the estimated Renyi entropy. Critical values of the test are computed by Monte Carlo simulation. We also compute the power of the test under different alternatives and show that it behaves better than the test of Habibi and Arghami (1386).
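For reference (standard definition, added editorially), the Renyi entropy of order $\alpha$ of a density $f$ is
$$H_\alpha(f) = \frac{1}{1-\alpha}\,\log \int f^{\alpha}(x)\,dx, \qquad \alpha>0,\ \alpha\neq 1,$$
which tends to the Shannon entropy as $\alpha \to 1$.
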
Abdolreza Sayyareh, Raouf Obeidi,
Volume 4, Issue 1 (9-2010)
Abstract

AIC is commonly used for model selection, but the value of AIC has no direct interpretation, while Cox's test is a generalization of the likelihood ratio test. When the true model is unknown, we select a model based on AIC, but we cannot speak about the closeness of the selected model to the true model, because it is not clear whether the selected model is well-specified or mis-specified. This paper extends Akaike's AIC-type model selection alongside the Cox test. Based on simulations, we study the results of AIC and Cox's test and their ability to discriminate between models: if we select a model based on AIC, whether or not Cox's test is able to select a better model, considering the foundations of the rival models. On the other hand, the model selection literature has generally been poor at reflecting the foundations of a set of reasonable models when the true model is unknown. As part of the results, we propose an approach to selecting the reasonable set of models.
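For reference (standard definition, not from the paper), for a model with maximized likelihood $\hat L$ and $k$ free parameters,
$$\mathrm{AIC} = -2\log\hat L + 2k,$$
and the model with the smaller AIC is preferred; the value itself carries no absolute meaning, which is the lack of direct interpretation noted above.
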
Haleh Nekoee, Hooshang Talebi,
Volume 4, Issue 2 (3-2011)
Abstract

Two designs with N runs and k factors, all at two levels, are said to be isomorphic or equivalent if one is obtained from the other by permuting rows, permuting columns, and/or switching the levels of one or more factors. As N and k increase, recognizing whether two designs are isomorphic becomes complicated. It is therefore essential to have necessary conditions that can recognize and separate non-isomorphic designs in the least possible time. Most of the conditions available in the literature cannot meet these two objectives, maximum separation in minimum time, simultaneously. In this paper, a new method is used to detect non-equivalence, designed around the choice and comparison of one or more rows of the design matrix. The new method has a higher ability to recognize non-equivalence and, in addition, requires less computation to determine the non-equivalence of two designs.
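In matrix terms (an editorial restatement of the definition above, with levels coded as plus and minus one), two N x k two-level designs $D_1$ and $D_2$ are isomorphic if
$$D_2 = R\,D_1\,C\,S,$$
where $R$ and $C$ are permutation matrices acting on the rows and columns and $S$ is a diagonal matrix with entries $\pm 1$ that switches factor levels.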

Hamid Reza Chareh, Afshin Fallah,
Volume 4, Issue 2 (3-2011)
Abstract

This paper considers weighted distributions in order to incorporate topics related to the construction of skew-symmetric (skew-normal) and bimodal distributions. It discusses how many of the skew-normal distributions studied in recent years can be treated in a more general form, along with other interesting aspects, in the context of weighted distributions. Two notable cases from recent research are discussed, and it is shown that the distributions introduced there, along with all of their interesting properties, can be obtained from the weighted-distribution perspective as special cases.
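As a worked illustration (editorial, in standard notation rather than the paper's), a weighted version of a density $f$ with weight function $w$ is
$$f_w(x) = \frac{w(x)\,f(x)}{E[w(X)]},$$
and taking $f=\phi$ (the standard normal density) with weight $w(x)=\Phi(\lambda x)$ gives $E[w(X)]=1/2$, hence the familiar skew-normal density $2\phi(x)\Phi(\lambda x)$ as a special case of this construction.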

Ahad Malekzadeh, Mina Tohidi,
Volume 4, Issue 2 (3-2011)
Abstract

The coefficient of determination is an important criterion in many applications, and the point estimation of this parameter has been considered by many researchers. In this paper, the class of linear estimators of R^2 is considered, and two new estimators are proposed that have lower risk than the usual estimators, such as the sample coefficient of determination and its adjusted form. On the basis of simulations, we also show that the jackknife estimator is an efficient estimator with lower risk when the number of observations is small.
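For reference (standard definitions, not from the paper), the sample coefficient of determination and its adjusted form for a regression with $p$ predictors and $n$ observations are
$$R^2 = 1 - \frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}}, \qquad \bar R^2 = 1 - (1-R^2)\,\frac{n-1}{n-p-1}.$$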

Abdolreza Sayyareh,
Volume 4, Issue 2 (3-2011)
Abstract

In this paper we establish that, for the Kullback-Leibler divergence, the relative error is superadditive. This shows that a mixture of k rival models gives a better upper bound on the Kullback-Leibler divergence for model selection. In fact, it is shown that the mixture yields a model that is better than all of the rival models in the mixture, or at least better than the worst rival model in the mixture.
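A display consistent with the weaker part of this claim (an editorial addition, using only the convexity of the Kullback-Leibler divergence in its second argument) is
$$D_{KL}\!\Big(h \,\Big\|\, \textstyle\sum_{j=1}^{k}\pi_j f_j\Big) \;\le\; \sum_{j=1}^{k}\pi_j\, D_{KL}(h \,\|\, f_j) \;\le\; \max_{j} D_{KL}(h \,\|\, f_j),$$
for mixture weights $\pi_j \ge 0$ summing to one.
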
Ghobad Barmalzan, Abdolreza Sayyareh,
Volume 4, Issue 2 (3-2011)
Abstract

Suppose we have a random sample of size n from a population with true density h(.). In general, h(.) is unknown, and we use a model f as an approximation of this density and base our inference on f. Clearly, f must be close to the true density h in order to reach valid inferences about the population. Committing to a single model, suggested by a few observations, as an approximation or estimate of the true density h carries a great risk in model selection. For this reason, we choose k non-nested models and investigate which model is closer to the true density. In this paper, we investigate the central question of model selection: how can one obtain a collection of appropriate models for estimating the true density function h, based on the Kullback-Leibler risk?
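In the notation above (an editorial restatement), the rival judged closest to the truth is the one minimizing the Kullback-Leibler risk, equivalently maximizing the expected log-likelihood under h:
$$\arg\min_{1\le j\le k} D_{KL}(h \,\|\, f_j) = \arg\max_{1\le j\le k} E_h[\log f_j(X)],$$
since the term $E_h[\log h(X)]$ is common to all rivals.
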
Shokofeh Zeinodini, Ahmad Parsian,
Volume 4, Issue 2 (3-2011)
Abstract

In this paper, a class of generalized Bayes minimax estimators of the mean vector of a normal distribution with unknown positive definite covariance matrix is obtained under the sum of squared errors loss function. It is shown that this class extends the class obtained by Lin and Tsai (1973).
Hamid Esmaili, Mina Towhidi, Seyd Rooalla Roozgar, Mehdi Amiri,
Volume 5, Issue 1 (9-2011)
Abstract

Usually, a p-value is used for making decisions in hypothesis testing. Is the p-value the best measure for accepting or rejecting the null hypothesis, or is a better measure than the ordinary p-value possible? In this paper, hypothesis testing is regarded not as a decision problem but as the problem of estimating the plausibility of a given set, labeled Θ_0, with the p-value used as an estimator of the plausibility of Θ_0. Researchers usually take the whole real line as the parameter space, although in many applications the parameter space is bounded. We introduce a measure, called the modified p-value, which performs better than the usual p-value on a bounded parameter space, for one-sided and two-sided tests in the normal distribution.
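For reference (standard notation, not the paper's modified version), for the one-sided test $H_0:\mu\le\mu_0$ about a normal mean with known variance $\sigma^2$, the usual p-value based on the sample mean $\bar x$ is
$$p = 1 - \Phi\!\left(\frac{\sqrt{n}\,(\bar x - \mu_0)}{\sigma}\right).$$
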
Eisa Mahmoudi, Reyhaneh Lalehzari,
Volume 5, Issue 1 (9-2011)
Abstract

In this paper, a new version of the skew uniform distribution is introduced, which differs completely from previous versions. Some important properties of the new distribution are investigated, including expressions for the density and distribution functions, the kth moments, the moment generating and characteristic functions, the variance, skewness and kurtosis, the mean deviations from the mean and the median, the mode, and parameter estimation. A simulation study is also carried out to show the consistency of the maximum likelihood and moment estimators. Finally, the new skew uniform distribution is compared with the uniform distribution.
Mohammad Hossein Aalamatsaz, Foroogh Mahpishanian,
Volume 5, Issue 1 (9-2011)
Abstract

There is a family of generalized Farlie-Gumbel-Morgenstern copulas, known as the semiparametric family, which is generated by a function called a distribution-based generator. In the literature, these generators have typically been studied for symmetric distributions. In this article, a method is proposed for the asymmetric case which increases the flexibility of distribution-based generators and, thus, of the model. In addition, a method for generalizing arbitrary generators is provided, which can also be used to obtain more flexible distribution-based generators. Clearly, with more flexible generators, more suitable models can be found to fit real data.
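For reference (the classical member of the family named above, added editorially), the Farlie-Gumbel-Morgenstern copula is
$$C_\theta(u,v) = uv\,\big[1+\theta(1-u)(1-v)\big], \qquad -1\le\theta\le 1;$$
broadly speaking, the semiparametric generalizations replace the factor $(1-u)(1-v)$ by a product of more general generator functions.
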
Elham Zamanzadeh, Jafar Ahmadi,
Volume 5, Issue 1 (9-2011)
Abstract

In this paper, a brief introduction to ranked set sampling is first presented. Then, confidence intervals for a quantile of the parent distribution are constructed based on the ordered ranked set sample. Because the corresponding confidence coefficient is a step function, one may not be able to attain exactly the prescribed level. With this in mind, we introduce a new method and show that an optimal confidence interval can be obtained with the proposed approach. We also compare the proposed scheme with the existing methods.
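As background for why the confidence coefficient is a step function (a standard distribution-free result, stated in simple-random-sample notation rather than for the paper's ordered ranked set sample), the interval $(X_{(r)}, X_{(s)})$ for the p-th quantile $\xi_p$ of a continuous parent distribution has coverage
$$P\big(X_{(r)} < \xi_p < X_{(s)}\big) = \sum_{i=r}^{s-1}\binom{n}{i} p^{\,i}(1-p)^{\,n-i},$$
which can take only finitely many values as $r$ and $s$ vary.
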
Ebrahim Konani, Saeid Bagrezaei,
Volume 5, Issue 1 (9-2011)
Abstract

In this article, the characterization of distributions is considered using Kullback-Leibler information and record values. Some characterizations are then obtained based on the Kullback-Leibler information and the Shannon entropy of order statistics and record values.
Aref Khanjari Idenak, Mohammadreza Zadkarami, Alireza Daneshkhah,
Volume 5, Issue 2 (2-2012)
Abstract

In this paper, a new compound distribution with increasing, decreasing, bathtub-shaped and unimodal-bathtub-shaped hazard rate functions is introduced. The new three-parameter distribution is proposed as a generalization of the exponential power distribution. Maximum likelihood estimation of the parameters, the raw moments, the density function of the order statistics, the survival function, the hazard rate function, the mean residual lifetime, the reliability function and the median are presented. The properties of this distribution are then illustrated on a real data set.
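For reference (standard reliability quantities, not from the paper), for a lifetime T with density f and survival function S(t) = 1 - F(t), the hazard rate and mean residual lifetime mentioned above are
$$h(t) = \frac{f(t)}{S(t)}, \qquad m(t) = E[\,T-t \mid T>t\,] = \frac{1}{S(t)}\int_t^{\infty} S(u)\,du.$$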

Mehdi Shams, Mehdi Emadi, Naser Reza Arghami,
Volume 5, Issue 2 (2-2012)
Abstract

In this paper, the class of all equivariant functions is characterized. Two conditions for proving the existence of equivariant estimators are then introduced. Next, Lehmann's method for characterizing the class of equivariant location and scale functions, in terms of a given equivariant function and an invariant function, is generalized to an arbitrary group family. This generalized method has applications in mathematics, but to make it useful in statistics it is combined with a suitable function to produce an equivariant estimator. This is of course usable only for uniquely transitive groups, but fortunately most statistical examples are of this sort; for other groups, equivariant estimators are obtained directly.
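For reference (standard definitions, not from the paper), for the location group an estimator $\delta$ is equivariant if
$$\delta(x_1+c,\dots,x_n+c) = \delta(x_1,\dots,x_n)+c \quad \text{for all } c,$$
and for the scale group if $\delta(cx_1,\dots,cx_n) = c\,\delta(x_1,\dots,x_n)$ for all $c>0$.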

Khadijeh Mehri, Rahim Chinipardaz,
Volume 5, Issue 2 (2-2012)
Abstract

This article compares the posterior probability with the p-value in the two-parameter exponential distribution when the location parameter is treated as an extra (nuisance) parameter. It is shown that, for a fixed p-value, the posterior probability increases as the number of observations grows, which means that the classical and Bayesian points of view may lead to different conclusions. This irreconcilability between the classical and Bayesian evidence remains if we compare the lower bound of the posterior probability over a class of reasonable prior distributions.
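For reference (a standard parameterization, which may differ from the paper's), the two-parameter exponential density with location $\mu$ and scale $\sigma$ is
$$f(x;\mu,\sigma) = \frac{1}{\sigma}\exp\!\left(-\frac{x-\mu}{\sigma}\right), \qquad x\ge\mu,\ \sigma>0.$$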


Page 2 of 12

Journal of Statistical Sciences – Scientific Research Publication of the Iranian Statistical Society
