Showing 201 results for Type of Study: Research
Dr Vahid Rezaeitabar, Selva Salimi, Volume 21, Issue 1 (9-2016)
Abstract
A Bayesian network is a graphical model that represents a set of random variables and their causal relationships via a Directed Acyclic Graph (DAG). There are basically two tasks in learning a Bayesian network: parameter learning and structure learning. One of the most effective structure-learning methods is the K2 algorithm. Because the performance of the K2 algorithm depends on the node ordering, more effective methods for inferring a node ordering are needed. In this paper, based on the fact that the parent and child variables of a node are identified by its estimated Markov Blanket (MB), we first estimate the MB of each variable using the Grow-Shrink algorithm, then determine the candidate parents of a variable by evaluating the conditional frequencies using the Dirichlet probability density function. The candidate parents are then used as input to the K2 algorithm. Experimental results on most of the datasets indicate that our proposed method significantly outperforms the previous method.
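The K2 search mentioned above greedily maximizes the Cooper-Herskovits score of each node given its candidate parents. The following is a minimal sketch of that scoring function; the toy data and variable names are illustrative assumptions, not the paper's experiments.

```python
# Minimal sketch of the K2 (Cooper-Herskovits) scoring function that the
# K2 algorithm maximizes greedily for a fixed node ordering.
from itertools import product
from math import lgamma

def log_k2_score(data, child, parents, arity):
    """Log K2 score of `child` given `parents`.

    data  : list of dicts mapping variable name -> observed state (ints)
    arity : dict mapping variable name -> number of states
    """
    r = arity[child]
    parent_states = list(product(*(range(arity[p]) for p in parents)))
    total = 0.0
    for ps in parent_states:
        # counts N_ijk of each child state under this parent configuration
        counts = [0] * r
        for row in data:
            if all(row[p] == s for p, s in zip(parents, ps)):
                counts[row[child]] += 1
        n_ij = sum(counts)
        total += lgamma(r) - lgamma(n_ij + r)        # (r-1)! / (N_ij + r - 1)!
        total += sum(lgamma(c + 1) for c in counts)  # prod_k N_ijk!
    return total

# Toy data in which B copies A: A should score higher as a parent of B
# than the empty parent set.
data = [{"A": a, "B": a} for a in (0, 1)] * 10
arity = {"A": 2, "B": 2}
assert log_k2_score(data, "B", ["A"], arity) > log_k2_score(data, "B", [], arity)
```

In a full K2 run, each node would greedily add the candidate parent that most increases this score until no addition helps.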
Miss Fahimeh Boroomandi, Dr Mahmood Kharrati, Dr Javad Behboodian, Volume 21, Issue 1 (9-2016)
Abstract
The classic F-test is usually used for testing the effects of factors in homoscedastic two-way ANOVA models. However, the assumption of equal cell variances is often violated in practice. In recent years, several test procedures have been proposed for testing the effects of factors under heteroscedasticity. In this paper, two of these methods, the approximate degrees of freedom (ADF) and parametric bootstrap (PB) approaches, are evaluated in terms of type I error and power. The simulation results show that both methods have satisfactory type I error rates and that their powers are very close to each other. However, the ADF method is much easier to implement than the PB approach, which is simulation-based and consequently time-consuming.
Mr Alireza Shirvani, Volume 21, Issue 1 (9-2016)
Abstract
The Poisson distribution is widely used as a standard model for analyzing count data, so estimation of the Poisson distribution parameter is common in practice. Providing accurate confidence intervals for discrete distribution parameters is difficult. So far, many asymptotic confidence intervals for the mean of the Poisson distribution have been proposed. It is known that the coverage probability of a confidence interval (L(X), U(X)) is a function of the distribution parameter. Since the Poisson distribution is discrete, the coverage probability of confidence intervals for the Poisson mean has no closed form, and the exact calculation of the confidence coefficient, average coverage probability and maximum coverage probability for these intervals is very difficult. Wang (2009) proposed methodologies for computing the exact average coverage probabilities and exact confidence coefficients of confidence intervals with increasing bounds for one-parameter discrete distributions. In this paper, we consider the situation in which both the lower and upper bounds of the confidence interval are increasing, and we explore the problem of finding the exact maximum coverage probability of confidence intervals for the Poisson mean. Decisions about the optimality of confidence intervals, based on the simultaneous evaluation of the confidence coefficient, average coverage probability and maximum coverage probability, are thereby more reliable.
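The discreteness issue described above is easy to see numerically: for a fixed true mean, the coverage probability is a sum of Poisson probabilities over the sample values whose interval covers that mean. The sketch below computes this for a nominal 95% Wald interval; the Wald form and the truncation bound are illustrative assumptions, not the intervals studied in the paper.

```python
# Exact coverage probability of a nominal 95% Wald interval for the Poisson
# mean, obtained by summing the Poisson pmf over all sample values whose
# interval covers the true mean.
from math import exp, sqrt, lgamma, log

def poisson_pmf(k, mu):
    return exp(k * log(mu) - mu - lgamma(k + 1))

def wald_coverage(lam, n=1, z=1.96, kmax=200):
    """P(L(X) <= lam <= U(X)) for X ~ Poisson(n*lam), lambda-hat = X/n."""
    mu = n * lam
    cov = 0.0
    for x in range(kmax + 1):          # truncate the infinite sum
        lam_hat = x / n
        half = z * sqrt(lam_hat / n)
        if lam_hat - half <= lam <= lam_hat + half:
            cov += poisson_pmf(x, mu)
    return cov

# The coverage oscillates in lambda and can fall well below the nominal 95%.
print(wald_coverage(2.0))
```

Scanning `wald_coverage` over a grid of means is the standard way to visualize the oscillating coverage that motivates exact coverage calculations.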
Ali Aghmohammadi, Sakine Mohammadi, Volume 21, Issue 2 (3-2017)
Abstract
Dynamic panel data models form an important part of medical, social and economic studies. The presence of the lagged dependent variable as an explanatory variable is a distinguishing feature of these models, and their estimation problem arises from the correlation between this lagged dependent variable and the current disturbance. Recently, quantile regression has received attention as a tool for analyzing dynamic panel data. In this paper, a quantile regression model for dynamic panel data with an adaptive Lasso penalty term added to the random effects is introduced, assuming correlation between the random effects and the initial observations. The model is also illustrated under the assumption that the random effects and initial values are independent. These two models are analyzed from a Bayesian point of view. Since the posterior distributions of the parameters are not available in explicit form, the full conditional posterior distributions are calculated and the Gibbs sampling algorithm is used for inference. To compare the performance of the proposed method with conventional methods, a simulation study was conducted, and applications to a real data set are illustrated.
, Volume 21, Issue 2 (3-2017)
Abstract
In this paper, the concept of joint reliability importance (JRI) of two components, or of two groups of components, in a coherent system with independent components is studied. The JRI is defined as the rate at which the system reliability improves as the reliabilities of the two components or groups of components improve.
Generally, the sign and the value of the JRI represent the type and the degree of interaction between components with respect to system reliability.
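For two individual components, the JRI is the second mixed partial derivative of the system reliability function with respect to their reliabilities, and its sign distinguishes reinforcing from substituting components. The sketch below illustrates this (it is not from the paper) with a finite-difference estimate on three-component series and parallel systems.

```python
# Joint reliability importance JRI(i, j) = d^2 h / dp_i dp_j, estimated by a
# central finite difference of the system reliability function h(p).
def jri(h, p, i, j, eps=1e-4):
    """Second mixed partial of h at p with respect to components i and j."""
    def at(di, dj):
        q = list(p)
        q[i] += di
        q[j] += dj
        return h(q)
    return (at(eps, eps) - at(eps, -eps) - at(-eps, eps) + at(-eps, -eps)) / (4 * eps**2)

series = lambda p: p[0] * p[1] * p[2]                          # all must work
parallel = lambda p: 1 - (1 - p[0]) * (1 - p[1]) * (1 - p[2])  # one suffices

p = [0.9, 0.8, 0.7]
print(jri(series, p, 0, 1))    # positive: the components reinforce each other
print(jri(parallel, p, 0, 1))  # negative: the components substitute for each other
```

For the series system the JRI of components 1 and 2 is exactly p3 > 0, and for the parallel system it is -(1 - p3) < 0, matching the sign interpretation above.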
Abbas Parchami, Volume 21, Issue 2 (3-2017)
Abstract
This paper discusses and reviews two R packages, FuzzyNumbers and Calculator.LR.FNs. These packages can be installed in the R software environment, and they provide useful tools and functions for drawing LR fuzzy numbers and easily applying arithmetic operators to them. For the convenience of readers, the proposed methods and functions are presented with several numerical examples to aid understanding.
, , , Volume 21, Issue 2 (3-2017)
Abstract
In this paper, a new weighted fuzzy ridge regression method for a given set of crisp inputs and triangular fuzzy outputs is proposed. In this regard, the ridge estimator of the fuzzy parameters is obtained for the regression model, and its prediction error is calculated using the weighted fuzzy norm of the crisp ridge coefficients. To evaluate the proposed regression model, we introduce the fuzzy coefficient of determination (FCD). Fuzzy regression is compared numerically with its ridge version using the mean prediction error and the FCD. It is evident from the comparison results that the proposed fuzzy ridge regression is superior to its non-ridge counterpart.
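The crisp core that a fuzzy ridge model builds on is the ordinary ridge estimator, which shrinks coefficients by penalizing their size. The following sketch shows that crisp version only, with made-up data; the fuzzy method in the paper additionally handles the spreads of the triangular outputs.

```python
# Crisp ridge estimator beta = (X'X + k I)^(-1) X'y, with a shrinkage check.
import numpy as np

def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.1, size=50)

b_ols = ridge(X, y, 0.0)     # k = 0 reduces to least squares
b_ridge = ridge(X, y, 5.0)
# Ridge shrinks the coefficient vector toward zero relative to least squares.
print(np.linalg.norm(b_ridge) < np.linalg.norm(b_ols))
```

The norm of the ridge solution is non-increasing in the penalty k, which is the property the comparison above demonstrates.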
Shahram Yaghoobzadeh Shahrastani, Volume 21, Issue 2 (3-2017)
Abstract
In this study, E-Bayesian estimation of the parameters of the two-parameter exponential distribution under the squared error loss function is obtained. The efficiency of the proposed method is compared with that of the Bayesian estimator using Monte Carlo simulation.
Masoud Ghasemi Behjani, , Volume 21, Issue 2 (3-2017)
Abstract
In this article, a method of determining the optimal sample size based on the LINEX asymmetric loss function is presented within a Bayesian framework for the normal, Poisson and exponential distributions. The desired sample size is calculated numerically: the average posterior risk is computed and added to the Lindley linear cost function to obtain the average total cost. Then, the sample size is plotted against the average total cost, and the optimal sample size that minimizes the cost is obtained.
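The numerical recipe described above can be sketched for the simplest case, a normal mean with known variance and a normal prior, where the posterior is normal with variance v_n and the posterior risk of the LINEX Bayes estimator is a^2 * v_n / 2. All numeric settings below are illustrative assumptions, not the paper's.

```python
# Choose n to minimize average total cost = Lindley linear sampling cost
# (c0 + c*n) + posterior LINEX risk, for a normal mean with known variance
# sigma2 and a normal prior with variance tau2.
def total_cost(n, sigma2=4.0, tau2=1.0, a=2.0, c0=1.0, c=0.01):
    v_n = 1.0 / (1.0 / tau2 + n / sigma2)   # posterior variance after n obs
    risk = a**2 * v_n / 2.0                 # posterior LINEX risk
    return c0 + c * n + risk

# Grid search over integer sample sizes, mirroring the paper's numerical method.
n_opt = min(range(0, 1001), key=total_cost)
print(n_opt)
```

Plotting `total_cost` against n reproduces the cost curve described in the abstract, with its minimum at the optimal sample size.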
Fattaneh Nezampoor, Alireza Soleimani, Volume 22, Issue 1 (12-2017)
Abstract
In this paper, some properties of the logistic-X family are discussed and one member of the family, the logistic-normal distribution, is studied in detail. The mean deviations, hazard function and mode of the logistic-normal distribution are obtained. The method of maximum likelihood is proposed for estimating the parameters of the logistic-normal distribution, and a data set is used to illustrate its applications.
Miss Elaheh Kadkhoda, Mr Morteza Mohammadi, Dr Gholam Reza Mohtashami Borzadaran, Volume 22, Issue 1 (12-2017)
Abstract
The Generalized Lambda Distribution is an extension of Tukey's lambda distribution that is very flexible for modeling statistical data. In this paper, we introduce two parameterizations of this distribution. We then estimate the parameters by the moment matching, percentile, starship and maximum likelihood methods, and compare the two parameterizations and the parameter estimation methods using the Kolmogorov-Smirnov test.
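One of the common parameterizations, the Ramberg-Schmeiser (RS) form, is defined directly through its quantile function, Q(u) = l1 + (u^l3 - (1-u)^l4) / l2, so sampling is a one-line inverse transform. The parameter values below are a well-known illustrative choice approximating the standard normal, used here only as an assumption for the demo.

```python
# Inverse-transform sampling from the RS-parameterized Generalized Lambda
# Distribution via its quantile function.
import random

def gld_quantile(u, l1, l2, l3, l4):
    return l1 + (u**l3 - (1 - u)**l4) / l2

def gld_sample(n, l1, l2, l3, l4, seed=1):
    rng = random.Random(seed)
    return [gld_quantile(rng.random(), l1, l2, l3, l4) for _ in range(n)]

# lambda = (0, 0.1975, 0.1349, 0.1349) approximately matches the standard normal
xs = gld_sample(10000, 0.0, 0.1975, 0.1349, 0.1349)
mean = sum(xs) / len(xs)
print(mean)   # close to 0 for this symmetric parameter choice
```

The percentile estimation method mentioned in the abstract works by matching sample quantiles to `gld_quantile` at a few fixed probabilities.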
, , Volume 22, Issue 1 (12-2017)
Abstract
In this article, first, the Kumaraswamy distribution is introduced. Then, the joint and marginal distributions of W = X1/X2 and T = X1/(X1+X2), where X1 and X2 are independent Kumaraswamy random variables, are obtained, and the moments of these random variables are computed.
The distributions of the random variables W and T can be used in reliability studies and in statistical models such as stress-strength.
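A quick Monte Carlo check of these ratio variables is easy because the Kumaraswamy distribution has a closed-form inverse CDF, x = (1 - (1-u)^(1/b))^(1/a). The shape parameters below are illustrative assumptions.

```python
# Monte Carlo sketch of W = X1/X2 and T = X1/(X1+X2) for independent
# Kumaraswamy random variables, sampled by inverse-transform.
import random

def kumaraswamy(a, b, rng):
    u = 1.0 - rng.random()   # u in (0, 1], avoiding a zero sample
    return (1 - (1 - u) ** (1 / b)) ** (1 / a)

rng = random.Random(42)
ws, ts = [], []
for _ in range(100000):
    x1 = kumaraswamy(2.0, 3.0, rng)
    x2 = kumaraswamy(2.0, 3.0, rng)
    ws.append(x1 / x2)
    ts.append(x1 / (x1 + x2))

# T lives on (0, 1); with identical parameters its mean is 1/2 by symmetry.
t_mean = sum(ts) / len(ts)
print(t_mean)
```

In a stress-strength setting, P(X1 < X2) is simply P(T < 1/2), which the simulated `ts` can estimate directly.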
Shahram Yaghoobzadeh Shahrastani, Volume 22, Issue 1 (12-2017)
Abstract
In this paper, a new three-parameter lifetime model, called the Marshall-Olkin Gompertz distribution, is proposed on the basis of the Gompertz distribution. It generalizes the Gompertz distribution: its failure rate can be decreasing, increasing or bathtub-shaped depending on its parameters. The probability density function, cumulative distribution function, hazard rate function and some mathematical properties of this model, such as the central moments, moments of order statistics, Renyi and Shannon entropies, and quantile function, are derived. In addition, the parameters are estimated by the maximum likelihood method, and the new distribution is compared with some generalizations of the Gompertz distribution by means of a set of real data.
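The Marshall-Olkin construction transforms any survival function S(x) into S_MO(x) = a*S(x) / (1 - (1-a)*S(x)). The sketch below applies it to a Gompertz survival function; the Gompertz form S(x) = exp(-(b/c)(e^{cx} - 1)) and all parameter values are assumptions for illustration and may not match the paper's notation.

```python
# Marshall-Olkin transform of the Gompertz survival function, with a
# numerical hazard rate -d/dx log S_MO(x).
from math import exp, log

def gompertz_sf(x, b, c):
    return exp(-(b / c) * (exp(c * x) - 1))

def mo_gompertz_sf(x, a, b, c):
    s = gompertz_sf(x, b, c)
    return a * s / (1 - (1 - a) * s)

def mo_gompertz_hazard(x, a, b, c, h=1e-6):
    # forward-difference approximation of the hazard rate
    return (log(mo_gompertz_sf(x, a, b, c)) - log(mo_gompertz_sf(x + h, a, b, c))) / h

# S_MO is a valid survival function: it equals 1 at x = 0 and decreases to 0.
print(mo_gompertz_sf(0.0, 0.5, 1.0, 0.5))  # prints 1.0
```

Varying the tilt parameter a while plotting `mo_gompertz_hazard` shows how the transform changes the shape of the failure rate.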
Dr. Mehdi Shams, Volume 22, Issue 1 (12-2017)
Abstract
Given the importance of Markov chains in information theory, conditional probability for these random processes can also be expressed in terms of mutual information. In this paper, the relationship between the concept of sufficiency and Markov chains from the perspective of information theory is determined, along with the relationship between probabilistic sufficiency and algorithmic sufficiency.
Masoud Ghasemi Behjani, Milad Asadzadeh, Volume 22, Issue 1 (12-2017)
Abstract
In this paper, we propose a utility function and obtain the Bayes estimate and the optimal sample size under this utility function. The utility function is designed especially for obtaining the Bayes estimate when the posterior follows a gamma distribution. We consider a normal distribution with known mean, and Pareto, exponential and Poisson distributions, and find the optimal sample size under the proposed utility function so as to minimize the cost of sampling. In this process, we use the Lindley cost function to minimize the cost. Because of the complicated form of the computations, we are unable to solve the problem analytically and use numerical methods to obtain the optimal sample size.
Mohammad Bahrami, , Volume 22, Issue 2 (3-2018)
Abstract
One of the main goals in working with mixture distributions is to determine the number of components. There are different methods for determining the number of components: for example, the Greedy-EM algorithm, which adds new components to the model until the best number of components is reached; a second method based on maximum entropy; and a third, nonparametric, method. In this manuscript, mixture distributions with skew-t-normal components are considered.
Ali Hedayati, Esmaile Khorram, Saeid Rezakhah, Volume 22, Issue 2 (3-2018)
Abstract
Maximum likelihood estimation for multivariate distributions requires solving an optimization problem of large dimension (equal to the number of unknown parameters), whereas two-stage estimation divides this problem into several simpler optimizations and saves a significant amount of computational time. Two methods are investigated for checking the consistency of the estimation. We revisit Sankaran and Nair's bivariate Pareto distribution as an example. Two data sets (simulated data and real data) are analyzed for illustrative purposes.
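The two-stage idea can be sketched on a toy model (not the bivariate Pareto of the paper): stage 1 fits each marginal separately, and stage 2 fixes those estimates and optimizes only over the dependence parameter. For a bivariate normal, stage 2 reduces to the sample correlation of the standardized margins. The data below are simulated under assumed parameter values.

```python
# Toy two-stage estimation: marginal parameters first, dependence second.
import random, math

rng = random.Random(7)
rho_true = 0.6
n = 5000
xs, ys = [], []
for _ in range(n):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    xs.append(1.0 + 2.0 * z1)                                    # mean 1, sd 2
    ys.append(-1.0 + 0.5 * (rho_true * z1 + math.sqrt(1 - rho_true**2) * z2))

def fit_margin(v):
    m = sum(v) / len(v)
    s = math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    return m, s

(m1, s1), (m2, s2) = fit_margin(xs), fit_margin(ys)              # stage 1
u = [(x - m1) / s1 for x in xs]
w = [(y - m2) / s2 for y in ys]
rho_hat = sum(a * b for a, b in zip(u, w)) / n                   # stage 2
print(rho_hat)   # close to the true dependence parameter 0.6
```

The full five-parameter likelihood never has to be optimized jointly, which is the computational saving the abstract describes.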
S. Mahmoud Taheri, Volume 22, Issue 2 (3-2018)
Abstract
There are two main approaches to fuzzy regression (more precisely, regression in a fuzzy environment): minimizing the sum of distances (including the methods of least squared errors and least absolute errors), and the possibilistic approach (minimizing the total vagueness under some restrictions). Besides these, some heuristic methods have been proposed for fuzzy regression. Some of them combine the two main approaches, some are based on computational algorithms, and a few use fuzzy inference systems. There are also methods based on clustering, artificial neural networks, evolutionary algorithms, and nonparametric procedures.
In this paper, the history and basic ideas of the two main approaches to fuzzy regression are reviewed, and some heuristic methods in this area are investigated. Moreover, ten criteria are proposed by which one can evaluate and compare fuzzy regression models.
, , , Volume 22, Issue 2 (3-2018)
Abstract
Robust regression is an appropriate alternative to ordinary regression when outliers exist in a given data set. If the observations are fuzzy, ordinary regression methods cannot model them and fuzzy regression is needed; when the observations are fuzzy and outliers are present, robust fuzzy regression methods are the appropriate alternative. In this paper, we propose a fuzzy least squares regression analysis for the case in which the independent variables are crisp, the dependent variable is a fuzzy number, and outliers are present in the data set. In the proposed method, the residuals are ranked by comparing fuzzy sets, and the weight matrix is defined by the membership functions of the residuals. Weighted fuzzy least squares estimators (WFLSE) are obtained using this weight matrix. Two examples are discussed and their results presented. Finally, we compare the proposed method with the ordinary least squares method using goodness-of-fit indices.
, , Volume 22, Issue 2 (3-2018)
Abstract
Today's manufacturers face increasingly intense competition, and to remain profitable they need to design, develop and produce highly reliable products. One way manufacturers attract consumers is by providing warranties on their products, and consumers are more willing to purchase a product with a longer warranty period; maintaining such a policy, however, is very costly for the manufacturer. Determining the appropriate warranty length is therefore an important decision problem for manufacturers. In this article, using a Bayesian approach and an appropriate utility function, we determine the optimal warranty length for a product with an exponential lifetime distribution.
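A stylized version of this trade-off (the utility function, cost constants and failure rate below are all assumptions for illustration, not the paper's model): for an exponential lifetime with rate lam under free replacement, the expected number of repairs in (0, w] is lam * w, so one can weigh a concave sales benefit against a linear expected repair cost, U(w) = k * sqrt(w) - c * lam * w, which is maximized at w* = (k / (2 * c * lam))^2.

```python
# Stylized optimal warranty length for an exponential lifetime:
# sales benefit k*sqrt(w) minus expected repair cost c*lam*w.
from math import sqrt

def utility(w, k=1.0, c=0.5, lam=0.8):
    return k * sqrt(w) - c * lam * w

def optimal_warranty(k=1.0, c=0.5, lam=0.8):
    # stationary point of U(w): 0.5*k/sqrt(w) = c*lam
    return (k / (2 * c * lam)) ** 2

w_star = optimal_warranty()
print(w_star)

# A grid search over candidate warranty lengths agrees with the closed form.
w_grid = max((i / 1000 for i in range(1, 5001)), key=utility)
print(w_grid)
```

In the Bayesian version described in the abstract, lam would be integrated out against its posterior rather than fixed, but the optimization over w proceeds the same way.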