Masoud Ghasemi Behjani, Milad Asadzadeh, Volume 22, Issue 1 (12-2017)
Abstract
In this paper we propose a utility function and obtain the Bayes estimate and the optimum sample size under it. This utility function is designed specifically to obtain the Bayes estimate when the posterior follows a gamma distribution. We consider a Normal distribution with known mean, a Pareto, an Exponential and a Poisson distribution, and find the optimum sample size under the proposed utility function so as to minimize the cost of sampling. In this process, we use the Lindley cost function in order to minimize the cost. Because the resulting computation is too complicated to solve analytically, we use numerical methods to obtain the optimum sample size.
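As a rough illustration of the numerical step, here is a minimal Python sketch of minimizing a Lindley-style total cost over the sample size by grid search. The cost constants and the risk term a/(b + n), which stands in for the decay of a gamma posterior's risk as n grows, are made up for illustration and are not the paper's actual utility function.

```python
# Hedged sketch: optimum sample size by grid search over a Lindley-style
# total cost C(n) = c0 + c*n + E[posterior risk | n].  The a/(b + n) risk
# term is an illustrative stand-in for a gamma posterior's risk decay.

def total_cost(n, c0=1.0, c=0.05, a=40.0, b=2.0):
    """Fixed cost + per-observation cost + illustrative posterior-risk term."""
    return c0 + c * n + a / (b + n)

def optimum_sample_size(n_max=500):
    """Numerical minimization over integer n, mirroring the paper's resort
    to numerical methods when no closed form is available."""
    return min(range(1, n_max + 1), key=total_cost)

n_opt = optimum_sample_size()
```

With these made-up constants the cost is convex in n, so the grid search lands on the unique integer minimizer.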
, , Volume 22, Issue 1 (12-2017)
Abstract
Analysis of large geostatistical data sets usually entails expensive matrix computations, which creates challenges in implementing statistical inference for traditional Bayesian models. In addition, researchers are often faced with multiple spatial data sets with complex spatial dependence structures that are difficult to analyze. This is a problem for the MCMC sampling algorithms commonly used in Bayesian analysis of spatial models, causing serious issues such as slow convergence and poor mixing of the chains. To escape such computational problems, we use low-rank models to analyze Gaussian geostatistical data. These models improve the MCMC sampler's convergence rate and decrease its run time by reducing the parameter space. The idea is to assume, quite reasonably, that the spatial information available from the entire set of observed locations can be summarized in terms of a smaller, but representative, set of locations, or 'knots'. That is, we still use all of the data but represent the spatial structure through a dimension reduction. In implementing the reduction, we therefore need to design the knots; consideration of this issue forms the balance of the article. To evaluate the performance of this class of models, we conduct a simulation study as well as an analysis of a real data set on the quality of underground mineral water over a large area of Golestan province, Iran.
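A minimal sketch of the knot-based low-rank idea, assuming a 1-D spatial domain, an exponential covariance, and knots placed on a plain regular grid (the paper's actual knot-design strategy may differ): the full n x n covariance is approximated through a small set of m knots, so the sampler works with rank-m objects instead of the full matrix.

```python
import numpy as np

# Hedged sketch of a predictive-process-style low-rank covariance:
# C ~= C_nk @ C_kk^{-1} @ C_kn, built from m "knot" locations.

def exp_cov(a, b, phi=1.0, sigma2=1.0):
    d = np.abs(a[:, None] - b[None, :])        # pairwise distances (1-D sites)
    return sigma2 * np.exp(-d / phi)

n, m = 200, 15
sites = np.sort(np.random.default_rng(1).uniform(0, 10, n))
knots = np.linspace(0, 10, m)                  # representative set of locations

C_nk = exp_cov(sites, knots)                   # cross-covariance sites-knots
C_kk = exp_cov(knots, knots)                   # knot covariance (m x m)
C_low = C_nk @ np.linalg.solve(C_kk, C_nk.T)   # rank-m approximation of C

C_full = exp_cov(sites, sites)
err = np.linalg.norm(C_full - C_low) / np.linalg.norm(C_full)
```

All n observations still enter through C_nk; only the representation of the spatial structure is reduced to rank m.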
Mohammad Bahrami, , Volume 22, Issue 2 (3-2018)
Abstract
One of the main goals in mixture distributions is to determine the number of components. There are different methods for determining the number of components: for example, the Greedy-EM algorithm, which adds a new component to the model until the best number of components is reached; a second method based on maximum entropy; and a third based on nonparametric techniques. In this manuscript, mixture distributions with Skew-t-Normal components are considered.
Ali Hedayati, Esmaile Khorram, Saeid Rezakhah, Volume 22, Issue 2 (3-2018)
Abstract
Maximum likelihood estimation of multivariate distributions requires solving an optimization problem of large dimension (equal to the number of unknown parameters), but two-stage estimation divides this problem into several simple optimizations, saving a significant amount of computational time. Two methods are investigated for checking the consistency of the estimation. We revisit Sankaran and Nair's bivariate Pareto distribution as an example. Two data sets (simulated and real) are analyzed for illustrative purposes.
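A generic sketch of the two-stage idea on a bivariate normal sample (not the Sankaran-Nair bivariate Pareto of the paper): stage 1 fits each marginal separately, each a low-dimensional problem, and stage 2 estimates the single dependence parameter with the marginal fits held fixed.

```python
import numpy as np

# Hedged sketch of two-stage estimation: marginals first, dependence second.

rng = np.random.default_rng(0)
rho_true = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_true], [rho_true, 1.0]], 5000)

# Stage 1: marginal MLEs (mean and variance of each coordinate separately).
mu_hat = z.mean(axis=0)
sd_hat = z.std(axis=0)

# Stage 2: plug in the marginal fits, estimate only the dependence parameter.
u = (z - mu_hat) / sd_hat
rho_hat = np.mean(u[:, 0] * u[:, 1])
```

Instead of one 5-parameter optimization, the problem splits into two 2-parameter fits and one 1-parameter fit, which is where the computational saving comes from.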
Anita Abdollahi Nanvapisheh, Volume 22, Issue 2 (3-2018)
Abstract
In this paper, we first investigate the probability density function and the failure rate function of some families of exponential distributions. We then present their features, such as expectation, variance, moments and maximum likelihood estimation, identify the most flexible distributions according to the shape of the probability density function and the failure rate function, and finally offer practical examples.
S. Mahmoud Taheri, Volume 22, Issue 2 (3-2018)
Abstract
There are two main approaches to fuzzy regression (more precisely: regression in a fuzzy environment): minimizing a sum of distances (including the methods of least squared errors and least absolute errors) and the possibilistic method (least total vagueness under some restrictions). Besides these, some heuristic methods have been proposed for fuzzy regression. Some are based on a combination of the two approaches above, some on computational algorithms, and a few use fuzzy inference systems. There are also methods based on clustering, artificial neural networks, evolutionary algorithms, and nonparametric procedures.
In this paper, the history and basic ideas of the two main approaches to fuzzy regression are reviewed, and some heuristic methods in this area are investigated. Moreover, ten criteria are proposed by which one can evaluate and compare fuzzy regression models.
, , , Volume 22, Issue 2 (3-2018)
Abstract
Robust regression is an appropriate alternative to ordinary regression when outliers exist in a data set. If the observations are fuzzy, ordinary regression methods cannot model them, and fuzzy regression is suitable; when the observations are fuzzy and outliers are also present, robust fuzzy regression methods are the appropriate choice. In this paper, we propose a fuzzy least squares regression analysis for the case where the independent variables are crisp, the dependent variable is a fuzzy number, and outliers are present in the data set. In the proposed method, the residuals are ranked by comparison of fuzzy sets, and the weight matrix is defined by the membership function of the residuals. Weighted fuzzy least squares estimators (WFLSE) are obtained using this weight matrix. Two examples are discussed and their results presented. Finally, we compare the proposed method with the ordinary least squares method using goodness-of-fit indices.
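A crisp-data sketch of the reweighting step (the fuzzy arithmetic on the responses is omitted; only the WFLSE-style idea of weighting by a membership function of the residuals is shown, with a triangular membership function as an illustrative assumption):

```python
import numpy as np

# Hedged sketch: weighted least squares with weights from a triangular
# membership function of the residuals, so outliers are down-weighted.

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
y = 2.0 + 1.5 * x + rng.normal(0, 0.5, 50)
y[45] += 25.0                                   # one gross outlier

X = np.column_stack([np.ones_like(x), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Membership-style weights: 1 at residual 0, falling to 0 at 3 robust scales.
r = y - X @ beta_ols
scale = np.median(np.abs(r)) / 0.6745           # MAD-based scale estimate
w = np.clip(1.0 - np.abs(r) / (3.0 * scale), 0.0, 1.0)

W = np.diag(w)
beta_w = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted LS estimate
```

The outlying observation receives weight zero, so the weighted fit recovers the underlying line while plain least squares is pulled toward the outlier.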
, , Volume 22, Issue 2 (3-2018)
Abstract
Today's manufacturers face increasingly intense competition, and to remain profitable they need to design, develop and produce highly reliable products. One way manufacturers attract consumers to their products is by providing warranties, and consumers are more willing to purchase a product with a longer warranty period. Maintaining such a policy, however, is very costly for manufacturers, so determining the appropriate warranty length becomes an important decision problem. In this article, using a Bayesian approach and an appropriate utility function, we determine the optimal warranty length for a product with an exponential lifetime distribution.
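A minimal sketch of the trade-off, under made-up numbers: the utility below (revenue increasing in the warranty length minus expected repair cost for failures within the warranty, with an Exp(lam) lifetime) is illustrative and is not the paper's utility function; in the Bayesian approach lam would come from the posterior rather than being fixed.

```python
import math

# Hedged sketch: optimal warranty length w for an exponential lifetime,
# maximizing an illustrative expected utility by grid search.

def expected_utility(w, lam=0.5, price0=10.0, gain=6.0, repair=4.0):
    revenue = price0 + gain * (1.0 - math.exp(-w))      # longer warranty, higher demand
    exp_repairs = repair * (1.0 - math.exp(-lam * w))   # P(failure <= w) for Exp(lam)
    return revenue - exp_repairs

ws = [i / 100 for i in range(1, 501)]
w_opt = max(ws, key=expected_utility)
```

For these constants the first-order condition gives w = 2 ln 3, which the grid search recovers to two decimals.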
Ms Sara Jazan, Dr Seyyed Morteza Amini, Volume 22, Issue 2 (3-2018)
Abstract
One of the factors affecting the statistical analysis of data is the presence of outliers. Methods that are not affected by outliers are called robust methods; robust regression methods estimate the regression model parameters robustly in the presence of outliers. Besides outliers, linear dependency among regressor variables (multicollinearity) and a large number of regressor variables relative to the sample size, especially in high-dimensional sparse models, are problems that reduce the efficiency of inference in classical regression methods. In this paper, we first study the disadvantages of the classical least squares regression method when facing outliers, multicollinearity and sparse models. We then introduce and study robust and penalized regression methods as a solution to these problems. Furthermore, considering outliers together with multicollinearity or sparsity, we study penalized-robust regression methods. We examine the performance of the different estimators introduced in this paper through three simulation studies, and a real data set is also analyzed using the proposed methods.
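The two ingredients that penalized-robust methods combine can be sketched in isolation: a Huber-type weight function that bounds the influence of large residuals, and the soft-thresholding operator behind L1 (lasso-type) penalties that produces exact zeros in the coefficients. A full estimator would alternate iteratively reweighted least squares with these pieces; only the building blocks are shown here.

```python
import numpy as np

# Hedged sketch of the building blocks of penalized-robust regression.

def huber_weight(r, c=1.345):
    """IRLS weight for the Huber loss: 1 inside [-c, c], c/|r| outside,
    so outlying residuals get bounded influence."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= c, 1.0, c / np.abs(r))

def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty: shrinks toward zero and sets
    small coefficients exactly to zero (sparsity)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```

For example, a residual twice the tuning constant gets weight 1/2, and a coefficient smaller than the penalty level is thresholded to exactly zero.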
Ali Shadrokh, Shahrastani Shahram Yaghoobzadeh, Volume 22, Issue 2 (3-2018)
Abstract
In this paper, a new five-parameter distribution, the Beta-Gompertz Geometric (BGG) distribution, is introduced; it can have a decreasing, increasing, or bathtub-shaped failure rate function depending on its parameters. Some mathematical properties of this distribution, such as the density and hazard rate functions, moments, moment generating function, Rényi and Shannon entropies, Bonferroni and Lorenz curves and mean deviations, are provided. We discuss maximum likelihood estimation of the BGG parameters from one observed sample. Finally, to show the flexibility of the BGG distribution, an application to a real data set is presented.
, , Volume 22, Issue 2 (3-2018)
Abstract
In this paper, a new probability distribution based on the family of hyperbolic cosine distributions is proposed and its various statistical and reliability characteristics are investigated. The new category of HCF distributions is obtained by combining a baseline distribution F with the hyperbolic cosine function. Based on the log-logistic baseline distribution, we introduce a new distribution called HCLL and derive various properties of the proposed distribution, including the moments, quantiles, moment generating function, failure rate function, mean residual lifetime, order statistics and stress-strength parameter. Estimation of the parameters of HCLL for a real data set is investigated using three methods: maximum likelihood, Bayesian, and bootstrap (parametric and nonparametric). We evaluate the efficiency of the maximum likelihood estimation method by Monte Carlo simulation.
In addition, in the application section, the superiority of the HCLL model over the generalized exponential, Weibull, hyperbolic cosine exponential, gamma, and weighted exponential distributions is shown on a real data set using different model selection criteria.
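A sketch of one common way such a hyperbolic cosine family is built, assumed here rather than taken from the paper: g(x) = f(x) cosh(F(x)) / sinh(1), which integrates to one because substituting u = F(x) reduces the integral to that of cosh(u) over [0, 1]. With the log-logistic baseline this gives an HCLL-type density; the paper's exact parameterization may differ.

```python
import math

# Hedged sketch of an HCF-style density with a log-logistic baseline.

def loglogistic_pdf(x, alpha=1.0, beta=2.0):
    t = (x / alpha) ** beta
    return (beta / alpha) * (x / alpha) ** (beta - 1) / (1.0 + t) ** 2

def loglogistic_cdf(x, alpha=1.0, beta=2.0):
    return 1.0 / (1.0 + (x / alpha) ** (-beta))

def hcll_pdf(x, alpha=1.0, beta=2.0):
    # f(x) * cosh(F(x)) / sinh(1) -- a valid density for any baseline F.
    return (loglogistic_pdf(x, alpha, beta)
            * math.cosh(loglogistic_cdf(x, alpha, beta)) / math.sinh(1.0))

# Crude midpoint-rule check that the density integrates to ~1 on (0, 40).
step = 0.001
total = sum(hcll_pdf(0.0005 + i * step) * step for i in range(40000))
```

The cosh factor reweights the baseline toward its upper tail while the sinh(1) constant restores normalization.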
, Volume 22, Issue 2 (3-2018)
Abstract
Dr. Mehdi Shams, Volume 23, Issue 1 (9-2018)
Abstract
In this paper we give a necessary and sufficient condition for independence between an arbitrary statistic and a sufficient statistic that is also the maximum likelihood estimator in a general exponential family with location and scale parameters, namely the generalized normal distribution. Finally, it is shown that the converse is true except in asymptotic cases.
Dr Fatemeh Hosseini, Dr Omid Karimi, Ms Ahdiyeh Azizi, Volume 23, Issue 1 (9-2018)
Abstract
In practice, data on the mortality of living units are often correlated due to the locations of the observations in the study. One of the most important issues in the analysis of survival data with spatial dependence is estimating the parameters and predicting unknown values at known sites based on the observation vector. In this paper, to analyze this type of survival data, a Cox regression model with a piecewise exponential hazard function is used, and spatial dependence is added to the model as a latent Gaussian random field. Because the posterior and full conditional distributions have no closed form, and Markov chain Monte Carlo algorithms require long computation times, approximate Bayesian methods are used to analyze the model.
A practical example of how to implement the approximate Bayesian approach is presented.
Mohsen Khosravi, Maryam Khajehhassani, Volume 23, Issue 1 (9-2018)
Abstract
The Borel-Cantelli Lemma is very important in probability theory. In this paper, we first describe the general form of the Borel-Cantelli Lemma: the first part assumes convergence, while the second part involves divergence and independence assumptions. We then present generalizations of both parts. In most generalizations of part II, pairwise independence, or the weakening or elimination of the independence condition, has been investigated.
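For reference, the two parts of the lemma in their classical form can be stated as:

```latex
% Part I (convergence; no independence needed):
\sum_{n=1}^{\infty} P(A_n) < \infty
  \;\Longrightarrow\;
  P\!\left(\limsup_{n\to\infty} A_n\right) = 0.

% Part II (divergence, with independence of the events A_n):
\sum_{n=1}^{\infty} P(A_n) = \infty
  \;\Longrightarrow\;
  P\!\left(\limsup_{n\to\infty} A_n\right) = 1.
```

The generalizations surveyed in the paper relax the independence hypothesis in Part II, for instance to pairwise independence.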
Dr Mahdi Yousefi, Maryam Masoumi, Volume 23, Issue 1 (9-2018)
Abstract
Due to economic restrictions, improving efficiency in the collection and processing of blood products at blood centers is important. This study uses data envelopment analysis (DEA) to evaluate the efficiency of 31 provincial units of the Iranian Blood Transfusion Organization (IBTO) and to determine to what extent efficiency can be improved. Efficiency scores were computed with DEA linear programming techniques, and the characteristics of provincial IBTO units that importantly affect efficiency were determined. For two consecutive years, 22 provincial units of IBTO were efficient and 9 were inefficient. Moreover, based on PCA results, efficiency was mainly affected by the numbers of BCPCs, BCCs and MTs; efficiency was not directly related to the population density of the province, the number of donor beds, or the area of the BTC. The major reason for inefficiency was excess allocation resulting from a suboptimal combination of the numbers of BCPCs, BCCs and MTs.
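A sketch of the linear program behind DEA efficiency scores, in the standard input-oriented CCR envelopment form: minimize theta subject to a convex combination of the other units using no more than theta times the evaluated unit's inputs while producing at least its outputs. The three single-input, single-output DMUs below are toy data, not the IBTO units analyzed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch: input-oriented CCR DEA as a linear program,
# min theta  s.t.  sum_j lam_j x_j <= theta * x_0,
#                  sum_j lam_j y_j >= y_0,  lam >= 0.

x = np.array([2.0, 4.0, 8.0])     # inputs of 3 toy DMUs
y = np.array([2.0, 3.0, 4.0])     # outputs of the same DMUs

def ccr_efficiency(j):
    n = len(x)
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    # input row:  -theta*x_j + sum lam_i x_i <= 0 ; output row: -sum lam_i y_i <= -y_j
    A_ub = np.vstack([np.r_[-x[j], x], np.r_[0.0, -y]])
    b_ub = np.array([0.0, -y[j]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun

effs = [round(ccr_efficiency(j), 4) for j in range(3)]
```

The first DMU has the best output/input ratio and scores 1.0; the others score below 1, indicating the proportional input reduction that would make them efficient.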
, , Volume 23, Issue 1 (9-2018)
Abstract
In this paper some properties of the beta-X family are discussed, and one member of the family, the beta-normal distribution, is studied in detail. One real data set is used to illustrate applications of the beta-normal distribution and to compare it with the gamma-normal and Birnbaum-Saunders distributions.
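A sketch of the beta-X construction for X = normal, using the standard definition g(x) = phi(x) Phi(x)^(a-1) (1 - Phi(x))^(b-1) / B(a, b); the shape values a, b below are illustrative, not fitted to any data set.

```python
import math

# Hedged sketch of the beta-normal density built from the standard normal
# pdf (phi) and cdf (Phi) composed with a Beta(a, b) generator.

def beta_normal_pdf(x, a=2.0, b=3.0):
    phi = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    beta_ab = math.gamma(a) * math.gamma(b) / math.gamma(a + b)   # B(a, b)
    return phi * Phi ** (a - 1) * (1.0 - Phi) ** (b - 1) / beta_ab

# Crude midpoint-rule check that the density integrates to ~1 on [-8, 8].
step = 0.001
total = sum(beta_normal_pdf(-8.0 + (i + 0.5) * step) * step for i in range(16000))
```

Choosing a = b = 1 recovers the normal itself; other shape values skew and reweight it, which is what gives the family its extra flexibility.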
Dr Ehsan Bahrami Samani, Volume 23, Issue 1 (9-2018)
Abstract
In this paper, we propose hurdle regression models for analyzing count responses with extra zeros. The maximum likelihood method is used to estimate the model parameters. The application of the proposed model is presented for an insurance data set in which the large number of claims equal to zero illustrates the use of a model with a zero-inflated count response. Different count regression models for such data sets, including the hurdle Poisson and hurdle negative binomial regression models, are introduced in this paper.
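A sketch of the hurdle-Poisson probability mass function: a binary "hurdle" part governs zero versus positive counts, and positive counts follow a zero-truncated Poisson. The parameters pi0 and lam below are illustrative, not estimates from the paper's insurance data.

```python
import math

# Hedged sketch of a hurdle-Poisson pmf.

def hurdle_poisson_pmf(y, pi0=0.6, lam=1.8):
    if y == 0:
        return pi0                                    # mass at zero from the hurdle
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    return (1.0 - pi0) * pois / (1.0 - math.exp(-lam))   # zero-truncated Poisson

# The pmf sums to 1: pi0 at zero plus (1 - pi0) spread over positive counts.
total = sum(hurdle_poisson_pmf(y) for y in range(100))
```

In a regression version, pi0 would be modeled through a logit link and lam through a log link in the covariates, with the two parts fit separately in the likelihood.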
Miss Atefeh Karami, Volume 23, Issue 1 (9-2018)
Abstract
The normal distribution plays an important role in statistical analysis, but a researcher may wish to use another symmetric distribution that fits the data better. For this purpose, more flexible distributions have been introduced, some of which we present in this thesis. We first introduce the slash distribution as a family of scale mixtures of the normal distribution; it can be used in place of the normal distribution in many situations. We also introduce the skew-slash distribution. Then a new modified slash distribution is discussed, along with a modified skew-slash distribution. Some properties of these distributions are given; in particular, we present their stochastic representations, density functions and moments.
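The slash distribution's stochastic representation makes it easy to simulate: X = Z / U^(1/q) with Z standard normal independent of U uniform on (0, 1), where q is the tail parameter. A quick sketch comparing tail mass with the normal:

```python
import numpy as np

# Hedged sketch: simulate the slash distribution via its stochastic
# representation X = Z / U**(1/q) and compare tail mass with the normal.

rng = np.random.default_rng(3)
q = 2.0
z = rng.standard_normal(100_000)
u = rng.uniform(size=100_000)
x = z / u ** (1.0 / q)

tail_slash = np.mean(np.abs(x) > 3)     # slash mass beyond +-3
tail_normal = np.mean(np.abs(z) > 3)    # normal mass beyond +-3 (~0.0027)
```

Dividing by a fractional power of a uniform inflates the scale randomly, which is exactly the scale-mixture mechanism that produces the heavier tails.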
Maryam Parsaeian, Sima Naghizadeh, Habib Naderi, Volume 23, Issue 1 (9-2018)
Abstract
Explaining the problem. The equating process is used to compare the scores of two different tests on the same subject. The goal of this research is to find the best method of equating data under a logistic model.
Method. We use the data of the Ph.D. entrance examination in Statistics for two consecutive years, 92 and 93, focusing on the Statistics subtest, which includes 45 questions. Test parameters and examinee abilities are estimated under the three-parameter model using the MULTILOG software. We apply the Mean-Mean, Mean-Sigma, Haebara, and Stocking-Lord methods under the nonequivalent-groups anchor-test design, and use the root mean square error to choose the optimal method.
Conclusion. The results of this study show that the characteristic-curve methods are more accurate.
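The moment methods among those compared can be sketched concisely; for instance, the Mean-Sigma linking step places form-X item difficulties on the form-Y scale via b* = A b + B with A = sd(b_Y)/sd(b_X) and B = mean(b_Y) - A mean(b_X), computed from the common (anchor) items. The difficulty values below are made up for illustration.

```python
import statistics

# Hedged sketch of Mean-Sigma linking on hypothetical anchor-item difficulties.

b_anchor_x = [-1.2, -0.4, 0.1, 0.8, 1.5]   # anchor difficulties, form X scale
b_anchor_y = [-1.0, -0.1, 0.5, 1.3, 2.1]   # same items calibrated on form Y

A = statistics.pstdev(b_anchor_y) / statistics.pstdev(b_anchor_x)
B = statistics.mean(b_anchor_y) - A * statistics.mean(b_anchor_x)

def to_form_y_scale(b):
    """Linear transformation placing a form-X difficulty on the form-Y scale."""
    return A * b + B

linked = [to_form_y_scale(b) for b in b_anchor_x]
```

By construction, the linked form-X difficulties match the mean and standard deviation of the form-Y anchor difficulties exactly; the characteristic-curve methods (Haebara, Stocking-Lord) instead choose A and B by minimizing a discrepancy between test characteristic curves.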