:: Search published articles ::

Hossein Baghishani, Mohammad Mahdi Tabatabaei,
Volume 1, Issue 1 (9-2007)
Abstract

In parameter driven models, the main problems are likelihood approximation and parameter estimation. One approach to these problems is to apply simpler likelihoods such as the composite likelihood. In this paper, we first introduce parameter driven models and composite likelihood, and then define a new model selection criterion based on the composite likelihood. Finally, we demonstrate the capabilities of composite likelihood for inference and accurate model selection in parameter driven models through a simulation study.
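For orientation, a composite likelihood multiplies low-dimensional component likelihoods in place of the full joint likelihood, and model selection can then penalize the maximized composite log-likelihood. A minimal sketch of the standard AIC-type form of such a criterion (the specific criterion defined in the paper may differ):

$$ c\ell(\theta) = \sum_{k=1}^{K} \log f_k(y_k;\theta), \qquad \mathrm{CLIC} = -2\, c\ell(\hat{\theta}) + 2\, \mathrm{tr}\big(\hat{J}\hat{H}^{-1}\big), $$

where $\hat{H}$ estimates the sensitivity matrix $-\mathrm{E}[\nabla^2 c\ell(\theta)]$ and $\hat{J}$ the variability matrix $\mathrm{Var}[\nabla c\ell(\theta)]$; the trace term replaces the parameter count of AIC because the composite score is not a true score.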
Mostafa Razmkhah, Jafar Ahmadi, Bahareh Khatib Astaneh,
Volume 1, Issue 1 (9-2007)
Abstract

A sequence of observations in which only successive minimum (maximum) values are observed is called a sequence of record values. One sampling scheme for generating record values is inverse sampling, where items are presented sequentially and sampling is terminated when the n-th record is observed. In this plan, the expectations of inter-record times are infinite, so in practice the number of observed records is small. Under the assumption that the process of observing record values can be replicated, one may repeat the inverse sampling plan to achieve a specified number of records. In the latter scheme, we assume that $m$ independent samples are obtained sequentially from the parent distribution and only record data are observed. The two sampling plans (consecutive and repetition) are compared with respect to the Fisher information contained in the extracted record data, and general results are obtained. The proposed procedure is illustrated for several lifetime distributions, such as the exponential, Burr XII and Weibull.
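As a minimal sketch of the inverse sampling scheme described above (the parent distribution here is exponential purely for illustration):

    import numpy as np

    def inverse_sampling_records(n_records, rng):
        # Draw sequentially from the parent distribution until the n-th
        # upper record is observed; the first draw is trivially a record.
        records, current_max, n_draws = [], -np.inf, 0
        while len(records) < n_records:
            x = rng.exponential(1.0)
            n_draws += 1
            if x > current_max:
                current_max = x
                records.append(x)
        return np.array(records), n_draws

    rng = np.random.default_rng(1)
    recs, draws = inverse_sampling_records(5, rng)
    # draws can be very large even for moderate n, reflecting the infinite
    # expected inter-record times; the repetition plan instead runs this
    # function m times and pools the record data.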
Mohammad Reza Alavi, Rahim Chinipardaz,
Volume 1, Issue 1 (9-2007)
Abstract

Classical analysis is based on random samples. However, in many situations observations are recorded according to a nonnegative function of the observations; the sampling mechanism is then called weighted sampling. The usual statistical methods based on a weighted sample may not be valid and have to be adjusted. In this paper, adjusted methods under some particular weight functions for the normal distribution are studied, and a new distribution, called the double normal distribution, is introduced as a weighted normal distribution.
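The general construction behind such weighted distributions: if $X$ has density $f$ and $w(\cdot) \ge 0$ is the recording weight with $\mathrm{E}[w(X)] < \infty$, the observed (weighted) density is

$$ f_w(x) = \frac{w(x)\, f(x)}{\mathrm{E}[w(X)]}. $$

As a purely illustrative choice (the paper's specific weight functions may differ), taking $f = \phi(x)$ standard normal and $w(x) = |x|$ gives $f_w(x) = \sqrt{\pi/2}\,|x|\,\phi(x)$, a bimodal density obtained by downweighting observations near zero.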
Mohammad Arashi, Mohammad Mahdi Tabatabaei,
Volume 1, Issue 2 (2-2008)
Abstract

In this paper, we obtain the generalized least squares, restricted generalized least squares and shrinkage estimators of the regression parameter vector, assuming that the errors have a multivariate t distribution. We also calculate their quadratic risks and propose a dominance ordering of the underlying estimators.
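A minimal numerical sketch of the three estimators, assuming a linear restriction $R\beta = r$ and a known error scale matrix $V$; the shrinkage constant below is a conventional Stein-type choice, not necessarily the paper's:

    import numpy as np

    def gls(X, y, V):
        # Generalized least squares: (X'V^{-1}X)^{-1} X'V^{-1} y
        Vi = np.linalg.inv(V)
        return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

    def restricted_gls(X, y, V, R, r):
        # GLS projected onto the restriction R beta = r
        b = gls(X, y, V)
        A = np.linalg.inv(X.T @ np.linalg.inv(V) @ X)
        lam = np.linalg.solve(R @ A @ R.T, R @ b - r)
        return b - A @ R.T @ lam

    def shrinkage(b, b_r, wald, q):
        # Stein-type estimator: shrink the unrestricted estimator toward the
        # restricted one by an amount inversely related to the Wald statistic.
        return b_r + (1.0 - (q - 2.0) / wald) * (b - b_r)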
Arezoo Habibi Rad, Naser Reza Arghami,
Volume 1, Issue 2 (2-2008)
Abstract

The estimate of entropy (sample entropy) was first introduced by Vasicek (1976). In this paper, we provide an estimate of the entropy of order statistics, which is an extension of the entropy estimate. We then present an application of the entropy estimate of order statistics as a test statistic for symmetry of a distribution against skewness. The proposed test is compared with some other existing tests. A Monte Carlo simulation study shows that the proposed test has more power than Park's (1999) test.
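For reference, a direct implementation of Vasicek's (1976) spacing-based entropy estimate, which the paper extends to order statistics:

    import numpy as np

    def vasicek_entropy(x, m):
        # H = (1/n) * sum log( n/(2m) * (X_(i+m) - X_(i-m)) ), with the
        # convention X_(j) = X_(1) for j < 1 and X_(j) = X_(n) for j > n.
        x = np.sort(np.asarray(x, float))
        n = len(x)
        hi = x[np.minimum(np.arange(n) + m, n - 1)]
        lo = x[np.maximum(np.arange(n) - m, 0)]
        return np.mean(np.log(n * (hi - lo) / (2.0 * m)))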
Mohammad Reza Farid Rohani, Khalil Shafiei Holighi,
Volume 1, Issue 2 (2-2008)
Abstract

In recent years, some statisticians have studied the signal detection problem using random field theory. In this paper we consider point estimation of the parameters of the Gaussian scale space random field in the Bayesian framework. Since the posterior distribution of the parameters of interest does not have a closed form, we introduce a Markov chain Monte Carlo (MCMC) algorithm to approximate the Bayesian estimates. We also apply the proposed procedure to real fMRI data collected by the Montreal Neurological Institute.
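A generic random-walk Metropolis sampler of the kind such an MCMC step requires; the actual log posterior of the scale space field parameters is problem-specific and is passed in as a function (a stand-in here):

    import numpy as np

    def metropolis(log_post, theta0, n_iter, step, rng):
        # Random-walk Metropolis: propose theta + step * N(0, I), accept
        # with probability min(1, exp(lp_prop - lp_current)).
        theta = np.asarray(theta0, float)
        lp = log_post(theta)
        chain = np.empty((n_iter, theta.size))
        for i in range(n_iter):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain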
Mahmodreza Gohari, Mahmoud Mahmoudi, Kazem Mohammad, Ein Allah Pasha,
Volume 1, Issue 2 (2-2008)
Abstract

Recurrent events are one type of multivariate survival data, whose most important feature is the correlation between observations on each subject. This feature rules out the ordinary survival models. Frailty models are one of the main approaches to the analysis of recurrent events. Ordinary frailty models assume that the frailty is constant over time, which is not realistic in many applications. In this paper we introduce a time-dependent frailty model based on a piecewise semiparametric proportional hazards structure in which the frailty variable follows a gamma distribution: the frailty is a gamma process that is constant within each interval and has independent increments at the beginning of each interval. We find a closed-form expression for the integrated likelihood function and estimate the model parameters. The efficiency of the introduced model is compared with an ordinary constant gamma frailty model in a simulation study.
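A sketch of the hazard structure such a model induces (notation assumed here, not taken from the paper): for subject $i$ and event $j$,

$$ \lambda_{ij}(t \mid Z_i(t)) = Z_i(t)\, \lambda_0(t)\, \exp(x_{ij}'\beta), $$

where $\lambda_0$ is a piecewise baseline hazard and $Z_i(t)$ is a gamma frailty process held constant within each interval, with independent increments added at the interval boundaries.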


Ahmad Parsian, Shahram Azizi Sazi,
Volume 2, Issue 1 (8-2008)
Abstract

In this paper, a new class of estimators, namely constrained Bayes estimators, is obtained under the balanced loss function (BLF) and the weighted balanced loss function (WBLF) using a ``Bayesian solution''. The constrained Bayes estimators are calculated for the natural parameter of one-parameter exponential families of distributions. A common approach to prior uncertainty in Bayesian analysis is to choose a class $\Gamma$ of prior distributions and look for an optimal decision within that class; this is known as robust Bayesian methodology. Among several methods of choosing optimal rules in the robust Bayes context, we discuss obtaining the posterior regret constrained Gamma-minimax (PRCGM) rule under squared error loss and then, employing the ``Bayesian solution'', we obtain the optimal rules under BLF and WBLF.
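For context, Zellner's balanced squared-error loss with target estimator $\delta_0$ and weight $\omega$, together with its Bayes rule (the constrained versions studied in the paper add moment-matching constraints on top of this):

$$ L_\omega(\theta,\delta) = \omega\,(\delta-\delta_0(x))^2 + (1-\omega)\,(\delta-\theta)^2, \qquad \delta_\omega(x) = \omega\,\delta_0(x) + (1-\omega)\,\mathrm{E}[\theta \mid x]. $$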


Hadi Alizadeh Noughabi, Reza Alizadeh Noughabi,
Volume 2, Issue 1 (8-2008)
Abstract

In this paper we evaluate the power of sample-entropy goodness-of-fit tests for the normal, exponential and uniform distributions and compare them with other statistical tests. We show, by simulation, that they have less power than the other tests considered. We then introduce a new test for symmetry based on sample entropy and show, by simulation, that it has higher power than the test of Cabilio and Masaro (1996).

Ehsan Zamanzadeh, Naser Arghami,
Volume 2, Issue 2 (2-2009)
Abstract

In this paper, we first introduce two new entropy estimators, obtained by correcting Correa's (1995) estimator at the extreme points and by assigning different weights to the end points. We then compare our proposed entropy estimators with those proposed by Vasicek (1976), Ebrahimi et al. (1994) and Correa (1995). We also introduce goodness-of-fit tests for exponentiality and normality based on the proposed entropy estimators. Results of a simulation study show that the proposed estimators and goodness-of-fit tests perform well in comparison with the leading competitors.

Maliheh Abbasnejad, Marzeiyeh Shakouri,
Volume 2, Issue 2 (2-2009)
Abstract

In this paper, we establish a goodness-of-fit test for exponentiality based on estimated Rényi information. We use an estimator of the Rényi distance in the manner of Correa's entropy estimate. Critical values of the test are computed by Monte Carlo simulation. We also compute the power of the test under different alternatives and show that it compares favorably with the leading competitor.
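For reference, the Rényi distance (divergence) of order $\alpha$ that the test statistic estimates via a Correa-type spacing estimator:

$$ D_\alpha(f \| g) = \frac{1}{\alpha-1}\,\log \int f^{\alpha}(x)\, g^{1-\alpha}(x)\, dx, \qquad \alpha > 0,\ \alpha \ne 1, $$

which tends to the Kullback-Leibler divergence as $\alpha \to 1$.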

Rahman Farnoosh, Afshin Fallah, Arezoo Hajrajabi,
Volume 2, Issue 2 (2-2009)
Abstract

The modified likelihood ratio test, which is based on a penalized likelihood function, is usually used for testing homogeneity of mixture models. The efficiency of this test is seriously affected by the shape of the penalty function used in the penalized likelihood. The penalty function is usually selected to avoid complexity and increase tractability, so the results may be far from optimal. In this paper, we consider a more general form of penalty function that depends on a shape parameter, and we estimate this shape parameter and the parameters of the mixture model in a Bayesian framework. It is shown that the proposed Bayesian approach is more efficient than the modified likelihood ratio test, especially in nonidentifiable situations, where frequentist approaches almost always fail.
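A common fixed-shape penalty in modified likelihood ratio tests for homogeneity, shown here for context (the paper's more general family, indexed by a shape parameter, is not specified in the abstract; notation assumed: $p$ is the mixing proportion, $C > 0$ a constant):

$$ p\ell(\theta) = \ell(\theta) + C \log\{4\,p\,(1-p)\}, $$

where the penalty pushes $p$ away from the boundary values $0$ and $1$, at which the mixture becomes nonidentifiable.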

Masoumeh Izanloo, Arezou Habibirad,
Volume 3, Issue 1 (9-2009)
Abstract

The unified hybrid censoring scheme is a mixture of the generalized Type-I and Type-II hybrid censoring schemes. In this paper, we consider the analysis of unified hybrid censored data when the lifetime distribution of the individual items is a two-parameter generalized exponential distribution. Since the maximum likelihood estimators cannot be obtained in closed form, we compute the maximum likelihood estimates of the parameters using the Newton-Raphson algorithm. The Fisher information matrix is obtained and can be used for constructing asymptotic confidence intervals. We also obtain the Bayes estimates of the unknown parameters under the assumption of independent gamma priors using the importance sampling procedure. Simulations are performed to compare the performances of the different schemes, and one data set is analyzed for illustrative purposes.
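A minimal sketch of the likelihood maximization for the two-parameter generalized exponential distribution, simplified to a complete (uncensored) sample; the paper maximizes the corresponding unified hybrid censored likelihood by Newton-Raphson:

    import numpy as np
    from scipy.optimize import minimize

    def ge_negloglik(par, x):
        # GE(alpha, lam): f(x) = alpha*lam*exp(-lam*x)*(1 - exp(-lam*x))**(alpha-1)
        a, lam = np.exp(par)          # log-parametrization keeps both positive
        u = -np.expm1(-lam * x)       # 1 - exp(-lam*x), computed stably
        return -np.sum(np.log(a * lam) - lam * x + (a - 1.0) * np.log(u))

    rng = np.random.default_rng(0)
    x = rng.gamma(2.0, 1.0, size=200)   # placeholder lifetimes for illustration
    fit = minimize(ge_negloglik, x0=np.zeros(2), args=(x,))
    alpha_hat, lambda_hat = np.exp(fit.x)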
Ameneh Kheradmandi, Nahid Sanjari Fasipour,
Volume 3, Issue 1 (9-2009)
Abstract

Gomez et al. (2007) introduced the skew t-normal distribution and showed that it is a good alternative for modeling heavy-tailed data with a strongly symmetric nature, especially because it has a larger range of skewness than the skew-normal distribution. Gomez et al. (2007) and Lin et al. (2009) described some properties of this distribution. In this paper, we consider some further properties of the skew t-normal distribution and present four theorems for constructing it. We then illustrate the distribution with a numerical example, modeling the vanadium pollution data from the Shadegan Wetland.
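In its standardized form the skew t-normal density combines a Student-$t$ kernel with a normal skewing factor:

$$ f(x;\lambda,\nu) = 2\, t_\nu(x)\, \Phi(\lambda x), \qquad x \in \mathbb{R}, $$

where $t_\nu$ is the Student-$t$ density with $\nu$ degrees of freedom, $\Phi$ the standard normal cdf, and $\lambda$ the skewness parameter; the heavier $t$ tails widen the attainable skewness range relative to the skew-normal.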
Abdolreza Sayareh, Parisa Torkman,
Volume 3, Issue 1 (9-2009)
Abstract

Model selection aims to find the best model, and selection in the presence of censored data arises in a variety of problems. In this paper we emphasize that the Kullback-Leibler divergence under complete data has an advantage. Procedures are provided to construct a tracking interval for the expected difference of Kullback-Leibler risks based on Type II right censored data. A simulation study shows that this procedure works properly for optimal model selection.
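For reference, the Kullback-Leibler risk underlying such a tracking interval: with $f$ the true density and $g_{\hat\theta}$ a fitted candidate,

$$ D(f \| g_{\hat\theta}) = \mathrm{E}_f\big[\log f(X) - \log g_{\hat\theta}(X)\big], $$

and two candidates are compared through the difference of their expected risks, which reduces to $\mathrm{E}_f[\log g_{1,\hat\theta_1}(X) - \log g_{2,\hat\theta_2}(X)]$ because the $\log f$ term cancels.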
Sakineh Sadeghi, Iraj Kazemi,
Volume 3, Issue 1 (9-2009)
Abstract

Recently, dynamic panel data models have been widely used in social and economic studies. In fitting these models, a lagged response is often incorrectly treated as an ordinary explanatory variable; this ad hoc assumption produces unreliable results under conventional estimation approaches. A principal issue in the analysis of panel data is accounting for the variability of individual effects, which are assumed fixed in many studies because of computational complexity. In this paper, we assume random individual effects to handle this variability and compare the results with fixed effects. Furthermore, we obtain the model parameter estimates by implementing maximum likelihood and Gibbs sampling methods. We also fit these models to a data set containing the assets and liabilities of banks in Iran.
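A generic first-order dynamic panel model of the kind described (notation assumed here):

$$ y_{it} = \gamma\, y_{i,t-1} + x_{it}'\beta + \alpha_i + \varepsilon_{it}, \qquad i=1,\dots,N,\ t=2,\dots,T, $$

where $\alpha_i$ is the individual effect, treated as fixed or random; the correlation between $y_{i,t-1}$ and $\alpha_i$ is what makes the naive treatment of the lag as an ordinary regressor unreliable.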
Maryam Torkzadeh, Soroush Alimoradi,
Volume 3, Issue 1 (9-2009)
Abstract

One tool for determining nonlinear effects and interactions between the explanatory variables in a logistic regression model is the evolutionary product unit neural network. To estimate the parameters of a model constructed by this method, a combination of evolutionary algorithms and classical optimization tools is used. In this paper, we change the structure of the neural network so that all model parameters can be estimated by an evolutionary algorithm alone; the resulting model has a better Akaike information criterion than the conventional logistic model, but the combined method still gives the best model.
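A minimal sketch of how product units feed a logistic link (names and shapes assumed purely for illustration; inputs must be positive so that real-valued exponents are defined):

    import numpy as np

    def punn_prob(x, beta0, beta, W):
        # Product units: u_j = prod_k x_k ** W[j, k]; then a logistic link
        # logit(p) = beta0 + sum_j beta[j] * u_j.
        units = np.prod(x[None, :] ** W, axis=1)
        eta = beta0 + beta @ units
        return 1.0 / (1.0 + np.exp(-eta))

    x = np.array([1.5, 0.7, 2.0])
    W = np.array([[1.0, -0.5, 0.3], [0.2, 1.1, -0.8]])   # two product units
    p = punn_prob(x, beta0=-0.4, beta=np.array([0.9, -1.2]), W=W)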

Abbas Mahdavi, Mina Towhidi,
Volume 3, Issue 2 (3-2010)
Abstract

One of the most important issues in inferential statistics is the existence of outlier observations. Since these observations have a great influence on the fitted model and its related inferences, it is necessary to find a method for quantifying their effect. The aim of this article is to investigate the effect of outlier observations on kernel density estimation. We present a method, based on the forward search, for identifying outlier observations and assessing their effect on the kernel density estimate.
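A one-pass simplification of the forward search idea for kernel density estimation (a real forward search re-orders observations at every step; the outlyingness measure and subset sizes here are illustrative):

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)
    x = np.concatenate([rng.standard_normal(100), [8.0, 9.0]])  # two planted outliers

    # Order observations from least to most outlying (distance from the median);
    # the forward search grows the fitting subset in this order.
    order = np.argsort(np.abs(x - np.median(x)))

    kde_clean = gaussian_kde(x[order[:-2]])  # subset before suspected outliers enter
    kde_full = gaussian_kde(x)               # full-sample estimate
    # Monitoring how the estimate changes as the last points enter reveals
    # their effect, e.g. comparing kde_full(0.0) with kde_clean(0.0).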

Reza Hashemi, Ghobad Barmalzan, Abedin Haidari,
Volume 3, Issue 2 (3-2010)
Abstract

For the bivariate normal distribution, uncorrelatedness of two random variables is equivalent to their independence. It is therefore interesting to ask whether this property holds in other distributions; in other words, is the multivariate normal the only distribution in which uncorrelatedness is equivalent to independence? This paper aims to answer this question by presenting some relevant concepts and introducing another family in which uncorrelatedness is equivalent to independence.
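The classical counterexample that motivates the question: if $X \sim N(0,1)$ and $Y = X^2$, then $\mathrm{Cov}(X,Y) = \mathrm{E}[X^3] = 0$, yet $Y$ is a deterministic function of $X$. Uncorrelatedness therefore fails to imply independence in general, which is what makes families where the two notions coincide worth characterizing.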

Shahram Mansoury, Eynollah Pasha,
Volume 3, Issue 2 (3-2010)
Abstract

Stochastically ordered random variables with given marginal distributions are combined into a joint distribution that preserves the ordering and the marginals, using a maximum entropy principle. A closed form of the maximum entropy density function is obtained. We then compare the entropies of the maximum entropy distributions under two sets of constraints: prescription of the marginal distributions alone, and of the marginals together with the covariance matrix.

Journal of Statistical Sciences – Scientific Research Journal of the Iranian Statistical Society
