:: Search published articles ::

Mehrdad Naderi, Alireza Arabpour, Ahad Jamalizadeh,
Volume 11, Issue 2 (3-2018)
Abstract

This paper presents a new extension of the Birnbaum-Saunders distribution based on the skew-Laplace distribution. Some properties of the new distribution are studied, and EM-type estimators of the parameters, together with their standard errors, are obtained. Finally, we conduct a simulation study and illustrate the distribution with two real data examples.
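
As context for the baseline model only, the following is a minimal sketch of the classical two-parameter Birnbaum-Saunders distribution (not the skew-Laplace extension or the EM estimators of the paper), using the well-known modified moment estimates of the shape and scale; the simulated data and parameter values are hypothetical.

```python
import numpy as np

def bs_moment_estimates(t):
    """Modified moment estimates for classical Birnbaum-Saunders data."""
    t = np.asarray(t, dtype=float)
    s = t.mean()                    # arithmetic mean
    r = 1.0 / np.mean(1.0 / t)      # harmonic mean
    beta = np.sqrt(s * r)                            # scale estimate
    alpha = np.sqrt(2.0 * (np.sqrt(s / r) - 1.0))    # shape estimate
    return alpha, beta

# Simulate BS(alpha, beta): T = beta * (a*Z/2 + sqrt((a*Z/2)^2 + 1))^2, Z ~ N(0, 1)
rng = np.random.default_rng(1)
z = rng.normal(size=500)
a_true, b_true = 0.5, 2.0
t = b_true * (a_true * z / 2 + np.sqrt((a_true * z / 2) ** 2 + 1)) ** 2
print(bs_moment_estimates(t))       # should be near (0.5, 2.0)
```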


Sayed Mohammad Reza Alavi, Safura Alibabaie, Rahim Chinipardaz,
Volume 11, Issue 2 (3-2018)
Abstract

The standard beta distribution is suitable for modeling data that consist of proportions. In many situations in which proportion data include a considerable number of zeros and ones, inflated beta distributions are more appropriate. When the probability of recording such observations is proportional to a nonnegative weight function, the recorded observations are distributed as a weighted inflated beta. This article focuses on the size-biased inflated beta distribution, a special case of the weighted inflated beta distribution with a power weight function. Some properties of this distribution are studied, and its parameters are estimated using maximum likelihood and method of moments approaches. The estimators are compared via a simulation study. Finally, the model is fitted to a real mortality data set.


Afshin Fallah, Ramin Kazemi, Hasan Khosravi,
Volume 11, Issue 2 (3-2018)
Abstract

Regression analysis is traditionally carried out under homogeneity and normality assumptions for the distribution of the response variable. In many applications, however, the observations point to a heterogeneous structure containing several sub-populations with skew-symmetric structure, whether due to heterogeneity, multimodality, or skewness of the population, or a combination of these. In such situations, one can model the population with a mixture of skew-symmetric distributions. In this paper we consider a Bayesian approach to regression analysis under the assumption of a heterogeneous population with skew-symmetric sub-populations, using a mixture of skew-normal distributions. We use a simulation study and a real-world example to assess the proposed Bayesian methodology and to compare it with the frequentist approach.

Shahram Yaghoobzadeh Shahrastani,
Volume 12, Issue 1 (9-2018)
Abstract

In this paper, the Bayesian and maximum likelihood estimators of the parameters, the reliability function, and the hazard function of the Gompertz distribution are investigated based on generalized order statistics. Specializations of the Bayesian and maximum likelihood estimators to progressive type-II censoring and record values are obtained. Using two real data sets and simulated data, the accuracies of the different parameter estimates are compared. Finally, the Bayesian and maximum likelihood fits of the Gompertz distribution are compared with the Weibull and Lomax distributions.
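
A minimal sketch, not the paper's estimators (which are built on generalized order statistics), of a plain maximum likelihood fit of a Gompertz model f(x) = b*eta*exp(b*x)*exp(-eta*(exp(b*x) - 1)) to complete data; the parameterization and simulated values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def gompertz_negloglik(log_params, x):
    b, eta = np.exp(log_params)                      # positivity via log-parameters
    return -np.sum(np.log(b * eta) + b * x - eta * (np.exp(b * x) - 1))

rng = np.random.default_rng(4)
u = rng.uniform(size=200)
b_true, eta_true = 0.5, 0.2
x = np.log(1 - np.log(1 - u) / eta_true) / b_true    # inverse-CDF sampling
fit = minimize(gompertz_negloglik, x0=np.log([1.0, 1.0]), args=(x,))
print(np.exp(fit.x))                                 # estimates of (b, eta)
```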


Ali Shadrokh, Shahram Yaghoobzadeh, Masoud Yarmohammadi,
Volume 12, Issue 1 (9-2018)
Abstract

In this article, with the help of the exponentiated-G distribution, we obtain expressions for the probability density function, cumulative distribution function, moments and moment generating function, mean deviation, Rényi and Shannon entropies, and order statistics of this family of distributions. We use the maximum likelihood method to estimate the parameters and, with the help of a real data set, show that the Ristić-Balakrishnan-G family of distributions is a proper model for lifetime data.
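
A minimal sketch of the exponentiated-G construction mentioned in the abstract (not the Ristić-Balakrishnan-G family itself): for a baseline CDF G, the exponentiated-G CDF is F(x) = G(x)^a with density a*g(x)*G(x)^(a-1). The exponential baseline below is an arbitrary choice for illustration.

```python
import numpy as np
from scipy import stats

def exp_g_cdf(x, a, baseline=stats.expon()):
    # exponentiated-G CDF: F(x) = G(x)^a
    return baseline.cdf(x) ** a

def exp_g_pdf(x, a, baseline=stats.expon()):
    # exponentiated-G density: a * g(x) * G(x)^(a-1)
    return a * baseline.pdf(x) * baseline.cdf(x) ** (a - 1)

x = np.linspace(0.1, 5, 5)
print(exp_g_cdf(x, a=2.0))
print(exp_g_pdf(x, a=2.0))
```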


Mahdieh Mozafari, Mehrdad Naderi, Alireza Arabpour,
Volume 12, Issue 1 (9-2018)
Abstract

This paper introduces a new distribution based on the extreme value distribution. Some properties and characteristics of the new distribution, such as the distribution function, moment generating function, skewness, and kurtosis, are studied. Finally, by computing the maximum likelihood estimators of the new distribution's parameters, the performance of the model is illustrated via two real examples.


Sedighe Eshaghi, Hossein Baghishani, Negar Eghbal,
Volume 12, Issue 1 (9-2018)
Abstract

Introducing efficient model selection criteria for mixed models is a substantial challenge, whose source is fitting the model and computing the maximum likelihood estimates of the parameters. Data cloning is a recent method for fitting mixed models efficiently within a likelihood-based approach; it has become popular and avoids the main problems of other likelihood-based methods for mixed models. A disadvantage of data cloning is its inability to compute the maximum of the likelihood function of the model, which is a key quantity in proposing and calculating information criteria. It therefore seems that an appropriate information criterion cannot be defined directly within the data cloning approach. In this paper, this belief is refuted and a criterion based on data cloning is introduced. The performance of the proposed model selection criterion is evaluated by a simulation study.


Mehran Naghizadeh Qomi, Zohre Mahdizadeh, Hamid Zareefard,
Volume 12, Issue 1 (9-2018)
Abstract

Suppose that we have a random sample from the one-parameter Rayleigh distribution. In classical methods, we estimate the parameter of interest from the sample information using the usual estimators. Sometimes in practice, the researcher has information about the unknown parameter in the form of a guess value, known as nonsample information. In this case, linear shrinkage estimators are introduced by combining nonsample and sample information; they have smaller risk than the usual estimators in the vicinity of the guess and the true value. In this paper, shrinkage testimators are introduced using different methods based on the vicinity of the guess value and the true parameter, and their risks are computed under the entropy loss function. The performance of the shrinkage testimators and the best linear estimator is then compared via their relative efficiency. Finally, the results are applied to type-II censored data.
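
A minimal sketch, not the paper's testimators, of the basic idea of linear shrinkage for the Rayleigh scale parameter: a guess value theta0 is combined with the usual MLE; the weight k and the simulated data are hypothetical choices.

```python
import numpy as np

def rayleigh_mle(x):
    # MLE of sigma for f(x) = (x / sigma^2) * exp(-x^2 / (2 sigma^2))
    return np.sqrt(np.sum(x ** 2) / (2 * len(x)))

def shrinkage_estimate(x, theta0, k=0.3):
    # linear combination of the nonsample guess and the sample MLE
    return k * theta0 + (1 - k) * rayleigh_mle(x)

rng = np.random.default_rng(3)
x = rng.rayleigh(scale=2.0, size=30)
print(rayleigh_mle(x), shrinkage_estimate(x, theta0=2.2))
```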


Maryam Borzoei Bidgoli, Mohammad Arashi,
Volume 12, Issue 2 (3-2019)
Abstract

One way of dealing with the problem of collinearity in linear models is to use the Liu estimator. In this paper, a new estimator is proposed by generalizing the modified Liu estimator of Li and Yang (2012). This estimator is constructed based on prior information about the parameter vector of the linear regression and on the generalized estimator of Akdeniz and Kachiranlar (1995). Using the mean square error matrix criterion, we obtain the conditions under which the newly defined estimator is superior to the generalized Liu estimator. For the sake of comparison, a numerical example as well as a Monte Carlo simulation study are considered.
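
For background, a minimal sketch of the ordinary Liu estimator that this line of work generalizes (not the paper's modified estimator): beta_d = (X'X + I)^(-1) (X'y + d * beta_OLS) for a chosen d; the toy collinear design below is hypothetical.

```python
import numpy as np

def liu_estimator(X, y, d=0.5):
    """Ordinary Liu estimator for the linear model y = X beta + e."""
    XtX = X.T @ X
    beta_ols = np.linalg.solve(XtX, X.T @ y)            # OLS estimate
    return np.linalg.solve(XtX + np.eye(X.shape[1]),    # (X'X + I)^{-1}
                           X.T @ y + d * beta_ols)

rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=50)])   # nearly collinear columns
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
print(liu_estimator(X, y, d=0.7))
```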


Peyman Amiri Domari, Mehrdad Naderi, Ahad Jamalizadeh,
Volume 12, Issue 2 (3-2019)
Abstract

A useful approach for constructing asymmetric models and analyzing data sets with asymmetric properties is the weighted model. In this paper, a new class of skew-Laplace distributions is introduced by considering a two-parameter weight function that is appropriate for asymmetric and multimodal data sets. Some properties of the new distribution, namely the skewness and kurtosis coefficients, the moment generating function, etc., are also studied. Finally, the practical utility of the methodology is illustrated through a real data collection.


Abouzar Bazyari, Narges Mousavi,
Volume 12, Issue 2 (3-2019)
Abstract

In this article, we wish to find and select appropriate estimators of the population density using line transect sampling in the presence of detection functions with light- and heavy-tailed distributions. It is also shown how the type of detection function affects the selection of the best estimator, and we then propose unbiased estimators that have lower variance than the existing ones. The simulation results show that when the detection functions have heavy-tailed distributions, the new estimators have the smallest mean square error.
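
For orientation, a minimal sketch of the classical line-transect density estimator D = n * f(0) / (2L) under a half-normal detection function (not the authors' proposed estimators); the perpendicular distances and transect length L are hypothetical.

```python
import numpy as np

def density_halfnormal(perp_distances, L):
    """Line-transect density estimate with a half-normal detection function."""
    x = np.asarray(perp_distances, dtype=float)
    n = len(x)
    sigma2 = np.sum(x ** 2) / n              # MLE of the half-normal scale
    f0 = np.sqrt(2.0 / (np.pi * sigma2))     # pdf of perpendicular distance at zero
    return n * f0 / (2.0 * L)

rng = np.random.default_rng(5)
distances = np.abs(rng.normal(scale=10.0, size=60))   # detected perpendicular distances (m)
print(density_halfnormal(distances, L=2000.0))        # objects per square metre
```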


Meysam Moghimbeygi, Mousa Golalizadeh,
Volume 13, Issue 1 (9-2019)
Abstract

Recalling Kendall's definition of shape as a point on a hyper-sphere, a regression model for shapes is studied in this paper. To simplify the modeling, triangulation via two landmarks is proposed. The triangulation not only simplifies the regression modeling of the shapes but also provides a straightforward computational procedure for reconstructing the geometrical structure of the objects. The novelty of the proposed method lies in using a shape-based predictor variable that suitably describes the geometrical variability of the response. The proposed methods are compared with and evaluated against full Procrustes matching through the mean square error criterion. An application of the two models to configurations of rat skulls is investigated.


Mozhgan Dehghani, Mohammad Reza Zadkarami, Mohammad Reza Akhoond,
Volume 13, Issue 1 (9-2019)
Abstract

In the last decade, Poisson regression has been used for modeling count response variables. Poisson regression is not a suitable choice when the count data contain an excess of zeros. In this article, two models, zero-inflated Poisson regression and bivariate zero-inflated Poisson regression with a random effect, are used to model count responses with an excess of zeros. The distribution of the random effect is usually taken to be normal, but we employ the more flexible skew-normal distribution instead. Finally, the proposed model is applied to data obtained from Shahid Chamran University of Ahvaz on the number of failed courses and the number of semesters with a failing grade point average. A simulation study is used to verify the parameter estimates.
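
A minimal sketch of a plain zero-inflated Poisson regression fitted by numerical optimization (not the authors' bivariate model with a skew-normal random effect); the covariates, coefficients, and link choices below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def zip_negloglik(params, X, y):
    k = X.shape[1]
    beta, gamma = params[:k], params[k:]
    lam = np.exp(X @ beta)            # Poisson mean (log link)
    pi = expit(X @ gamma)             # zero-inflation probability (logit link)
    loglik_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),   # structural or sampling zero
                  np.log(1 - pi) + loglik_pois)           # positive count
    return -ll.sum()

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
lam = np.exp(X @ [0.5, 0.8])
pi = expit(X @ [-1.0, 0.3])
y = np.where(rng.uniform(size=n) < pi, 0, rng.poisson(lam))
fit = minimize(zip_negloglik, x0=np.zeros(4), args=(X, y), method="BFGS")
print(fit.x)    # [beta0, beta1, gamma0, gamma1]
```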


Hossein Nadeb, Hamzeh Torabi,
Volume 13, Issue 1 (9-2019)
Abstract

In this paper, a general goodness-of-fit test for the location-scale family of distributions under type-II progressive censoring is presented and its properties are investigated. Then, using Monte Carlo simulation studies, the power of this test is compared with the powers of some existing tests for testing the Gumbel distribution. Finally, the proposed test is used to fit a distribution to a real data set.


Dariush Najarzadeh,
Volume 13, Issue 1 (9-2019)
Abstract

Testing the hypothesis of independence among the subvectors of a p-variate vector, as a pretest for many other related tests, is always of interest. When the sample size n is much larger than the dimension p, the likelihood ratio test (LRT) with its chi-square approximation has acceptable performance. However, for moderately high-dimensional data, where n is not much larger than p, the chi-square approximation to the null distribution of the LRT statistic is no longer usable. As a general case, a procedure for simultaneously testing the independence of subvectors in all k p-variate normal distributions is considered here. To test this hypothesis, a normal approximation to the null distribution of the LRT statistic is proposed. A simulation study shows that the proposed normal approximation outperforms the chi-square approximation. Finally, the proposed testing procedure is applied to prostate cancer data.
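
For background, a minimal sketch of the classical LRT for independence of two subvectors of a normal vector with its chi-square approximation, the baseline that the abstract says breaks down when p is close to n (not the paper's normal approximation); the data and split point p1 are hypothetical.

```python
import numpy as np
from scipy import stats

def lrt_independence(X, p1):
    """-2 log LR = -n * log(|S| / (|S11| * |S22|)), df = p1 * p2."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    S11, S22 = S[:p1, :p1], S[p1:, p1:]
    stat = -n * (np.linalg.slogdet(S)[1]
                 - np.linalg.slogdet(S11)[1]
                 - np.linalg.slogdet(S22)[1])
    df = p1 * (p - p1)
    return stat, stats.chi2.sf(stat, df)

X = np.random.default_rng(7).normal(size=(200, 6))
print(lrt_independence(X, p1=3))    # statistic and chi-square p-value
```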


Atefe Pourkazemi, Hadi Alizadeh Noughabi, Sara Jomhoori,
Volume 13, Issue 2 (2-2020)
Abstract

In this paper, the bootstrap and jackknife methods are described and used to estimate entropy. The resulting estimators are investigated in terms of bias and RMSE using simulation. The proposed estimators are compared with other entropy estimators by Monte Carlo simulation. The results show that the entropy estimators based on the bootstrap and jackknife perform well compared with the other estimators. Next, some tests of normality based on the proposed estimators are introduced, and the power of these tests is compared with that of other tests.
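
A minimal sketch, not the authors' code, of one natural way to combine these ideas: Vasicek's spacing estimator of entropy with simple bootstrap and jackknife bias corrections; the window m, the number of bootstrap replicates, and the normal test data are hypothetical choices.

```python
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek's spacing estimator of differential entropy."""
    x = np.sort(x)
    n = len(x)
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2 * m) * (upper - lower)))

def bootstrap_entropy(x, m, B=500, rng=None):
    rng = rng or np.random.default_rng()
    h = vasicek_entropy(x, m)
    boot = [vasicek_entropy(rng.choice(x, size=len(x), replace=True), m) for _ in range(B)]
    return 2 * h - np.mean(boot)             # bootstrap bias-corrected estimate

def jackknife_entropy(x, m):
    n = len(x)
    h = vasicek_entropy(x, m)
    loo = [vasicek_entropy(np.delete(x, i), m) for i in range(n)]
    return n * h - (n - 1) * np.mean(loo)    # jackknife bias-corrected estimate

x = np.random.default_rng(0).normal(size=100)
# true entropy of N(0,1) is 0.5*log(2*pi*e) ~ 1.419
print(vasicek_entropy(x, 5), bootstrap_entropy(x, 5), jackknife_entropy(x, 5))
```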

Marjan Zare, Akbar Asgharzadeh, Seyed Fazel Bagheri,
Volume 14, Issue 1 (8-2020)
Abstract

In this paper, the smallest confidence region is obtained for the location and scale parameters of the two-parameter exponential distribution. For this purpose, we use constrained optimization. We first provide suitable pivotal quantities to obtain a balanced confidence region. We then obtain the smallest confidence region by minimizing its area using the Lagrangian method. Two numerical examples are presented to illustrate the proposed methods. Finally, some applications of the proposed joint confidence regions in hypothesis testing and in the construction of confidence bands are discussed.

Shadi Saeidi Jeyberi, Mohammadreza Zadkarami, Gholamali Parham,
Volume 14, Issue 1 (8-2020)
Abstract

In this paper, a Bayesian fuzzy estimator is obtained for fuzzy data, first based on a probability prior distribution and then based on a possibilistic model and a possibility prior distribution. Considering the effect of the membership function on the fuzzy and possibility Bayesian estimators, a membership function that yields optimal fuzzy and possibility Bayesian estimators is introduced for the data. The optimality of the new triangular-Gaussian membership function is demonstrated using normal and exponential data sets.

Elham Basiri, Seyed Mahdi Salehi,
Volume 14, Issue 1 (8-2020)
Abstract

Nowadays, inference based on censored samples is studied by many researchers. One of the most common censoring schemes is progressive type-II censoring. In this scheme, n items are put on test, and at each failure time some of the remaining items are randomly withdrawn from the test. This process continues until the failure times of m items, for a pre-fixed value m, have been observed. Different criteria can be considered for determining the best number of items to place on test; one of the most important is cost. In this paper, by considering a cost function and a Weibull distribution for the lifetimes of the items, we find the optimal value of the sample size n. To evaluate the obtained results, an example based on real data is given.

Dariush Najarzadeh,
Volume 14, Issue 1 (8-2020)
Abstract

The hypothesis of complete independence is necessary for many statistical inferences. Classical testing procedures cannot be applied to test this hypothesis in high-dimensional data. In this paper, a simple test statistic is presented for testing complete independence in high-dimensional multivariate normal data. Using the theory of martingales, the asymptotic normality of the test statistic is established. A simulation study was conducted to evaluate the performance of the proposed test and compare it with existing procedures. The simulation results indicate that the proposed test has an empirical type-I error rate whose average relative error is smaller than that of the available tests. An application of the proposed method to clinical prostate cancer gene expression data is presented.



Journal of Statistical Sciences – Scientific-Research Journal of the Iranian Statistical Society
