Showing 81 results for Mohammad
Maryam Borzoei Bidgoli, Mohammad Arashi, Volume 12, Issue 2 (3-2019)
Abstract
One way of dealing with the problem of collinearity in linear models is to use the Liu estimator. In this paper, a new estimator is proposed by generalizing the modified Liu estimator of Li and Yang (2012). This estimator is constructed based on prior information about the parameter vector in linear regression and the generalized estimator of Akdeniz and Kachiranlar (1995). Using the mean square error matrix criterion, we obtain the conditions under which this newly defined estimator is superior to the generalized Liu estimator. For the sake of comparison, a numerical example and a Monte Carlo simulation study are considered.
Mohammad Kazemi, Davood Shahsavani, Mohammad Arashi, Volume 12, Issue 2 (3-2019)
Abstract
In this paper, we introduce a two-step procedure, in the context of high-dimensional additive models, to identify nonzero linear and nonlinear components. We first develop a sure independence screening procedure, based on the distance correlation between the predictors and the marginal distribution function of the response variable, to reduce the dimensionality of the feature space to a moderate scale. Then, a double-penalization-based procedure is applied to identify the nonzero and linear components simultaneously. We conduct extensive simulation experiments and a real data analysis to evaluate the numerical performance of the proposed method.
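As a rough illustration of the screening step, the sketch below ranks predictors by their empirical distance correlation with the response and keeps the top few. This is a simplification of the paper's procedure (which screens against the marginal distribution function of the response); all data, dimensions, and the cutoff d are synthetic assumptions.

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two 1-d samples."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    a = np.abs(x - x.T)
    b = np.abs(y - y.T)
    # double-center the pairwise distance matrices
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(0)
n, p = 300, 50
X = rng.normal(size=(n, p))
# response depends on predictor 0 (linearly) and predictor 1 (nonlinearly)
y = X[:, 0] + 3 * np.sin(np.pi * X[:, 1]) + rng.normal(scale=0.5, size=n)

scores = np.array([distance_correlation(X[:, j], y) for j in range(p)])
d = 10                                   # illustrative screening cutoff
active = np.argsort(scores)[::-1][:d]    # indices of retained predictors
```

With this setup the two truly active predictors should survive the screening, after which a penalized fit on the reduced set would separate the linear from the nonlinear components.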
Mohammad Reza Yeganegi, Rahim Chinipardaz, Volume 13, Issue 1 (9-2019)
Abstract
This paper investigates the mixture autoregressive model with constant mixing weights in state-space form and its generalization to the ARMA mixture model. Using a sequential Monte Carlo method, the forecasting, filtering, and smoothing distributions are approximated, and the parameters of the model are estimated via the EM algorithm. The results show that the state-space representation reduces the dimension of the parameter vector. The results of the simulation study show that the proposed filtering algorithm has a steady state close to the real values of the state vector. Moreover, according to the simulation results, the mean vectors of the filtering and smoothing distributions converge to the state vector quickly.
Mozhgan Dehghani, Mohammad Reza Zadkarami, Mohammad Reza Akhoond, Volume 13, Issue 1 (9-2019)
Abstract
In the last decade, Poisson regression has been widely used for modeling count response variables. Poisson regression is not a suitable choice when the count data exhibit an excess of zeros. In this article, two models, zero-inflated Poisson regression and bivariate zero-inflated Poisson regression with random effects, are used to model count responses with an excess of zeros. Usually, the distribution of the random effect is assumed to be normal, but we employ the more flexible skew-normal distribution for the random effect. Finally, the proposed model is applied to data obtained from the Shahid Chamran University of Ahvaz concerning the number of failed courses and semesters with a failing grade point average. We use a simulation study to verify the parameter estimates.
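The zero-inflation mechanism behind such models can be shown with a minimal simulation (deliberately omitting the paper's random effect and skew-normal extension): a count is a structural zero with some probability, and Poisson otherwise, so zeros occur more often than a plain Poisson model predicts. The parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# zero-inflated Poisson: with probability pi0 the count is a structural zero,
# otherwise it is Poisson(mu)
n, pi0, mu = 10_000, 0.3, 2.5
structural_zero = rng.random(n) < pi0
counts = np.where(structural_zero, 0, rng.poisson(mu, size=n))

prop_zero = np.mean(counts == 0)
expected_zero = pi0 + (1 - pi0) * np.exp(-mu)   # P(Y = 0) under the ZIP model
```

The simulated proportion of zeros matches the ZIP formula, while a Poisson(2.5) alone would give only exp(-2.5) ≈ 0.08.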
Sayed Mohammad Reza Alavi, Mohammad Joharzadeh, Rahim Chinipardaz, Volume 13, Issue 1 (9-2019)
Abstract
Usually, in survey sampling, when sensitive questions are asked directly, respondents do not provide true answers. Randomized response techniques have been introduced to protect the privacy of responses. In this article, we focus on Simmons' randomized response technique for qualitative variables. By combining two different Simmons models, a new combined randomized response technique is introduced to increase the protection of privacy. Using simulation in R, the efficiency of the proposed model is compared to the Simmons and Alavi and Tajodini (1394) models. Finally, the proposed model is employed to estimate the proportion of student cheating at Shahid Chamran University.
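For orientation, here is a minimal sketch of the classical Simmons unrelated-question estimator that such combined models build on: with probability p the device selects the sensitive question, otherwise an innocuous question of known prevalence pi_u. The values of p, pi_u, and the true sensitive proportion are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simmons_estimate(answers, p, pi_u):
    """Unbiased estimator of the sensitive proportion under Simmons' model.

    p    : probability the device selects the sensitive question
    pi_u : known prevalence of the innocuous (unrelated) question
    """
    lam_hat = np.mean(answers)            # observed proportion of 'yes'
    return (lam_hat - (1 - p) * pi_u) / p

# simulate 10,000 interviews with true sensitive proportion 0.20
n, pi_s, p, pi_u = 10_000, 0.20, 0.7, 0.5
ask_sensitive = rng.random(n) < p
answers = np.where(ask_sensitive, rng.random(n) < pi_s, rng.random(n) < pi_u)
pi_hat = simmons_estimate(answers, p, pi_u)
```

Since E[λ̂] = p·π_S + (1-p)·π_U, solving for π_S gives an unbiased estimate while no individual answer reveals which question was asked.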
Mohammad Nasirifar, Mohammadreza Akhoond, Mohammadreza Zadkarami, Volume 13, Issue 2 (2-2020)
Abstract
The reliability parameter for most families of marginal distributions is estimated under the assumption of independence between the stress and strength components; unfortunately, the case where these two components are correlated has received less attention. Recently, a method based on a copula function has been proposed for estimating the reliability parameter under the assumption of correlation between the stress and strength components. In this paper, this method is used to estimate the reliability parameter when the distribution of the components is generalized exponential (GE). For this purpose, the FGM, generalized FGM, and Frank copula functions are used. Simulation is then used to demonstrate the suitability of the estimates. Finally, the reliability parameter is estimated for data on the relative contribution of major age groups in the urban and rural populations of Iran in the year 1390.
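The copula-based reliability parameter R = P(X < Y) can be approximated by Monte Carlo. The sketch below pairs GE marginals through an FGM copula and, as a sanity check, recovers the known independence value α_y/(α_x + α_y) when the copula parameter θ is 0. Parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_fgm(theta, n, rng):
    """Draw (U, V) from the FGM copula C(u,v) = uv[1 + theta(1-u)(1-v)]."""
    u = rng.random(n)
    w = rng.random(n)
    a = theta * (1 - 2 * u)
    # invert the conditional cdf C(v|u) = v + a*v*(1-v); a = 0 gives v = w
    disc = np.sqrt((1 + a) ** 2 - 4 * a * w)
    small = np.abs(a) < 1e-12
    a_safe = np.where(small, 1.0, a)
    v = np.where(small, w, (1 + a - disc) / (2 * a_safe))
    return u, v

def ge_quantile(u, alpha, lam):
    """Quantile function of the generalized exponential GE(alpha, lam)."""
    return -np.log(1 - u ** (1 / alpha)) / lam

def reliability_mc(alpha_x, alpha_y, lam, theta, n=200_000):
    """Monte Carlo estimate of R = P(X < Y) under an FGM copula."""
    u, v = sample_fgm(theta, n, rng)
    x = ge_quantile(u, alpha_x, lam)   # stress
    y = ge_quantile(v, alpha_y, lam)   # strength
    return np.mean(x < y)

r_indep = reliability_mc(2.0, 3.0, 1.0, theta=0.0)   # independence: R = 3/5
r_dep = reliability_mc(2.0, 3.0, 1.0, theta=0.8)     # positive dependence
```

Replacing `sample_fgm` with a sampler for the generalized FGM or Frank copula changes only the dependence structure; the marginal quantile transforms stay the same.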
Sayed Mohammad Reza Alavi, Sara Nayyeri, Mohammad Reza Akhoond, Volume 13, Issue 2 (2-2020)
Abstract
In many sample surveys, the variables of interest, such as student cheating at a university, are sensitive in nature. In such situations, the interviewees respond to direct questions untruthfully or refuse to answer. Various indirect methods, such as the randomized response technique and the item count technique, have been introduced to collect sensitive information. In this paper, a new item count technique is proposed, and then its randomized version, called the randomized item count model, is introduced. Using this model, an unbiased estimator for the sensitive proportion of the population is obtained. The variance of the estimator and an estimate of this variance are derived. A criterion for simultaneously comparing efficiency and privacy is introduced. Using simulation, the proposed model is evaluated and its efficiency and privacy are compared with Simmons' technique. Based on this criterion, it is shown that the proposed method is better than the Simmons method. The proportion of student cheating at the Shahid Chamran University of Ahvaz is estimated using the proposed model.
Azam Rastin, Mohammadreza Faridrohani, Volume 13, Issue 2 (2-2020)
Abstract
The methodology of sufficient dimension reduction has offered an effective means to facilitate regression analysis of high-dimensional data. When the response is censored, most existing estimators cannot be applied or require some restrictive conditions. In this article, a modification of sliced inverse regression-II is proposed for dimension reduction in nonlinear censored regression data. The proposed method requires no model specification, retains full regression information, and provides a usually small set of composite variables upon which subsequent model formulation and prediction can be based. Finally, the performance of the method is assessed through simulation studies and some real data sets, including the primary biliary cirrhosis data. We also compare with the sliced inverse regression-I estimator.
Shadi Saeidi Jeyberi, Mohammadreza Zadkarami, Gholamali Parham, Volume 14, Issue 1 (8-2020)
Abstract
In this paper, a Bayesian fuzzy estimator is first obtained for fuzzy data based on a probability prior distribution, and then based on the possibility model and a possibility prior distribution. Considering the effect of the membership function on the fuzzy and possibility Bayesian estimators, a membership function that yields the optimal fuzzy and possibility Bayesian estimators is introduced for the data. The optimality of the new triangular-Gaussian membership function is demonstrated using normal and exponential data sets.
Mehrnaz Mohammadpour, Masoumeh Shirozhan, Volume 14, Issue 1 (8-2020)
Abstract
In this paper, we introduce a new first-order integer-valued autoregressive model based on the negative binomial thinning operator, where the noises are serially dependent. Some statistical properties of the model are discussed. The model parameters are estimated by the maximum likelihood and Yule-Walker methods. The performances of the two estimation methods are compared in a simulation study. The efficiency of the new model is examined by applying it to real data.
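A minimal sketch of the INAR(1) recursion under negative binomial thinning, where α∘X sums X iid geometric variables with mean α. For simplicity the innovations here are iid Poisson, whereas the paper's model allows serially dependent noise; a Yule-Walker style moment estimate of α follows from the lag-1 autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(3)

def nb_thin(x, alpha, rng):
    """Negative binomial thinning alpha∘x: sum of x iid geometric variables
    on {0,1,...} with mean alpha (success probability 1/(1+alpha))."""
    if x == 0:
        return 0
    return np.sum(rng.geometric(1.0 / (1.0 + alpha), size=x) - 1)

def simulate_inar1(alpha, lam, n, rng):
    """Path of X_t = alpha∘X_{t-1} + e_t with iid Poisson(lam) innovations."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam)
    for t in range(1, n):
        x[t] = nb_thin(x[t - 1], alpha, rng) + rng.poisson(lam)
    return x

path = simulate_inar1(alpha=0.4, lam=2.0, n=5000, rng=rng)

# Yule-Walker style estimate: the lag-1 autocorrelation equals alpha
xc = path - path.mean()
alpha_hat = np.sum(xc[1:] * xc[:-1]) / np.sum(xc ** 2)
```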
Mohammad Hossein Poursaeed, Nader Asadian, Volume 14, Issue 1 (8-2020)
Abstract
A system in discrete time periods is exposed to a sequence of shocks that occur randomly and independently in each period with probability p. Considering k(≥1) as a critical level, we assume that the system does not fail when the number of successive shocks is less than k, fails with probability θ if the number of successive shocks equals k, and fails completely as soon as the number of successive shocks reaches k+1. This model can therefore be considered a version of the run shock model in which shocks occur in discrete periods of time and the behavior of the system when encountering k successive shocks is not fixed. In this paper, we examine the characteristics of the system under this model, especially the first- and second-order moments of the system's lifetime, and also estimate its unknown parameters. Finally, a method is proposed to calculate the mean of the generalized geometric distribution.
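The failure rule is straightforward to simulate. The sketch below draws system lifetimes under hypothetical values p = 0.3, k = 2, θ = 0.5 and estimates the mean lifetime empirically; it is an illustration of the model's mechanics, not the paper's analytic treatment.

```python
import numpy as np

rng = np.random.default_rng(4)

def lifetime(p, k, theta, rng, max_periods=100_000):
    """One realisation of the system lifetime (in periods): a shock occurs
    each period w.p. p; a run of exactly k shocks kills the system w.p.
    theta, and a run of k+1 shocks kills it with certainty."""
    run = 0
    for t in range(1, max_periods + 1):
        if rng.random() < p:                    # a shock occurs this period
            run += 1
            if run == k and rng.random() < theta:
                return t                        # fails at the critical level
            if run == k + 1:
                return t                        # certain failure
        else:
            run = 0                             # the run of shocks is broken
    return max_periods

samples = np.array([lifetime(0.3, 2, 0.5, rng) for _ in range(20_000)])
mean_life = samples.mean()
```

Note that no lifetime can be shorter than k periods, since at least k consecutive shocks are needed before failure is possible.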
Akram Kohansal, Nafiseh Alemohammad, Fatemeh Azizzadeh, Volume 14, Issue 2 (2-2021)
Abstract
The Bayesian estimation of the stress-strength parameter in the Lomax distribution under progressive hybrid censored samples is considered in three cases. First, assuming the stress and strength are two random variables with a common scale and different shape parameters, the Bayesian estimates of these parameters are approximated by the Lindley method and the Gibbs algorithm. Second, assuming the scale parameter is known, the exact Bayes estimate of the stress-strength parameter is obtained. Third, assuming all parameters are unknown, the Bayesian estimate of the stress-strength parameter is derived via the Gibbs algorithm. The maximum likelihood estimates are also calculated, and in comparison with them, the usefulness of the Bayesian estimates is confirmed. Finally, the different methods are evaluated using Monte Carlo simulation, and one real data set is analyzed.
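For independent Lomax stress X and strength Y with a common known scale, the stress-strength parameter has the closed form R = P(X < Y) = α_x/(α_x + α_y), where α_x and α_y are the shape parameters. The sketch below checks this by inverse-cdf simulation with illustrative parameter values; the paper's censoring and Bayesian machinery are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def rlomax(alpha, lam, n, rng):
    """Inverse-cdf sampling from Lomax(shape=alpha, scale=lam):
    S(x) = (1 + x/lam)^(-alpha), so X = lam * (U^(-1/alpha) - 1)."""
    u = 1.0 - rng.random(n)            # uniform on (0, 1]
    return lam * (u ** (-1.0 / alpha) - 1.0)

a_x, a_y, lam, n = 2.0, 3.0, 1.5, 200_000
x = rlomax(a_x, lam, n, rng)           # stress
y = rlomax(a_y, lam, n, rng)           # strength
r_mc = np.mean(x < y)
r_exact = a_x / (a_x + a_y)            # closed form under a common scale
```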
Mohammad Reza Kazemi, Volume 14, Issue 2 (2-2021)
Abstract
In this paper, we investigate the confidence interval for the common correlation coefficient of several bivariate normal populations. To do this, we use the confidence distribution approach. Through simulation studies and using the concepts of coverage probability and expected length, we compare this method with the generalized variable approach. Results of the simulation studies show that the coverage probability of the proposed method is close to the nominal level in all situations and, in most cases, the expected length of this method is less than that of the generalized variable approach. Finally, we present two real examples to illustrate this approach.
Mozhgan Taavoni, Mohammad Arashi, Volume 14, Issue 2 (2-2021)
Abstract
This paper considers the problem of simultaneous variable selection and estimation in a semiparametric mixed-effects model for longitudinal data with normal errors. We approximate the nonparametric function by a regression spline and simultaneously estimate and select the variables by optimizing a penalized objective function. Under some regularity conditions, the asymptotic behaviour of the resulting estimators is established in a high-dimensional framework where the number of parametric covariates increases with the sample size. For practical implementation, we use an EM algorithm to select the significant variables and estimate the nonzero coefficient functions. Simulation studies are carried out to assess the performance of our proposed method, and a real data set is analyzed to illustrate the proposed procedure.
Meysam Mohammadpour, Hossein Bevrani, Reza Arabi Belaghi, Volume 15, Issue 1 (9-2021)
Abstract
Wind speed probability distributions are one of the main wind characteristics for evaluating wind energy potential in a specific region. In this paper, the 3-parameter Log-Logistic distribution is introduced and compared with six commonly used statistical models for modeling the actual wind speed data reported from the Tabriz and Orumiyeh stations in Iran. The maximum likelihood method, via the Nelder-Mead algorithm, is utilized for estimating the model parameters. The flexibility of the proposed distributions is measured according to the coefficient of determination, the Chi-square test, the Kolmogorov-Smirnov test, and the root mean square error criterion. Results of the analysis show that the 3-parameter Log-Logistic distribution provides the best fit to the annual and seasonal wind speed data at the Orumiyeh station, and at the Tabriz station except for the summer season. Also, the wind power density error is estimated for the proposed distributions.
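A sketch of the maximum likelihood fit via Nelder-Mead for the 3-parameter Log-Logistic distribution (location γ, scale α, shape β), run on synthetic data generated from the model itself rather than the Tabriz or Orumiyeh records. The starting values and sample size are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

def negloglik(params, x):
    """Negative log-likelihood of the 3-parameter Log-Logistic distribution."""
    gamma, alpha, beta = params
    if alpha <= 0 or beta <= 0 or np.any(x <= gamma):
        return np.inf                     # outside the parameter space
    z = (x - gamma) / alpha
    logf = (np.log(beta / alpha) + (beta - 1) * np.log(z)
            - 2 * np.log1p(z ** beta))
    return -np.sum(logf)

# synthetic sample via the inverse cdf: F(x) = z^beta / (1 + z^beta)
gamma0, alpha0, beta0 = 1.0, 4.0, 3.0
u = rng.random(2000)
x = gamma0 + alpha0 * (u / (1 - u)) ** (1 / beta0)

fit = minimize(negloglik, x0=[0.5, 3.0, 2.0], args=(x,),
               method="Nelder-Mead")
gamma_hat, alpha_hat, beta_hat = fit.x
```

Nelder-Mead needs no derivatives, which is convenient here because the likelihood is non-smooth in γ (the support boundary moves with the location parameter).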
Zahra Khadem Bashiri, Ali Shadrokh, Masoud Yarmohammadi, Volume 15, Issue 1 (9-2021)
Abstract
One of the most critical issues in regression modeling is the selection of the optimal model by identifying important explanatory variables and negligible ones, so as to express the relationship between the response variable and the explanatory variables more simply. Given the limitations of classical variable selection methods, such as stepwise selection, penalized regression methods can be used instead. One of the penalized regression models is the Lasso regression model, in which the errors are assumed to follow a normal distribution. In this paper, we introduce a Bayesian Lasso regression model with an asymmetric error distribution in the high-dimensional setting. Then, using simulation studies and a real data analysis, the performance of the proposed model is discussed.
Morteza Mohammadi, Mahdi Emadi, Mohammad Amini, Volume 15, Issue 1 (9-2021)
Abstract
Divergence measures can be considered as criteria for analyzing dependency and can be rewritten based on the copula density function. In this paper, the Jeffrey and Hellinger dependency criteria are estimated using the improved probit transformation method, and their asymptotic consistency is proved. In addition, a simulation study is performed to measure the accuracy of the estimators. The simulation results show that for small sample sizes or weak dependence, the Hellinger dependency criterion performs better than the Kullback-Leibler and Jeffrey dependency criteria. Finally, an application of the studied methods in hydrology is presented.
Mohammad Hossein Poursaeed, Volume 15, Issue 1 (9-2021)
Abstract
In this paper, based on an appropriate pivotal quantity, two methods are introduced to determine a confidence region for the mean and standard deviation of a two-parameter uniform distribution, in which the application of numerical methods is not mandatory. In the first method, the smallest region is obtained by minimizing the area of the confidence region, and in the second method, a simultaneous Bonferroni confidence interval is introduced using the smallest confidence intervals. By comparing the area and coverage probability of the introduced methods, as well as the width of the strip containing the standard deviation in both methods, it is shown that the first method is more efficient. Finally, an approximation for the quantile of the F distribution used in calculating the confidence regions in a special case is presented.
Mojtaba Esfahani, Mohammad Amini, Gholamreza Mohtashami Borzadaran, Volume 15, Issue 1 (9-2021)
Abstract
In this article, the total time on test (TTT) transformation and its major properties are investigated. Then, the relationship between the TTT transformation and some topics in reliability theory is described. The TTT diagram is also drawn for some well-known lifetime distributions, and a real data analysis is performed based on this diagram. A new distorted family of distributions is introduced using the distortion function. The statistical interpretation of the new life distribution from the perspective of reliability is provided, and its survival function is derived. Finally, a generalization of the Weibull distribution is introduced using a new distortion function. A real data analysis shows its superior fit compared to the traditional Weibull model.
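The scaled empirical TTT transform behind such diagrams is simple to compute from the order statistics x_(1) ≤ ... ≤ x_(n): T(i/n) = (Σ_{j≤i} x_(j) + (n-i) x_(i)) / Σ_j x_(j). The sketch below evaluates it on simulated data; a plot close to the diagonal indicates a constant failure rate, while a concave plot above the diagonal indicates an increasing failure rate.

```python
import numpy as np

rng = np.random.default_rng(7)

def scaled_ttt(sample):
    """Scaled empirical TTT transform evaluated at i/n, i = 0..n."""
    x = np.sort(np.asarray(sample, float))
    n = len(x)
    total = x.sum()
    cum = np.cumsum(x)
    i = np.arange(1, n + 1)
    t = (cum + (n - i) * x) / total
    return np.concatenate(([0.0], t))

# exponential data: constant failure rate, so the plot follows the diagonal
ttt_exp = scaled_ttt(rng.exponential(size=5000))
# Weibull shape 2: increasing failure rate, so the plot is concave
ttt_wei = scaled_ttt(rng.weibull(2.0, size=5000))
```

Plotting `ttt_exp` and `ttt_wei` against the grid i/n reproduces the familiar diagnostic TTT diagram.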
Majid Chahkandi, Jalal Etminan, Mohammad Khanjari Sadegh, Volume 15, Issue 1 (9-2021)
Abstract
Redundancy and reduction are two main methods for improving system reliability. In a redundancy method, system reliability is improved by adding extra components to some of the original components of the system. In a reduction method, system reliability increases by reducing the failure rate of all or some components of the system. Using the concept of reliability equivalence factors, this paper investigates the equivalence between the reduction and redundancy methods. A closed formula is obtained for computing the survival equivalence factor. This factor determines the amount of reduction in the failure rate of the system component(s) needed to reach the reliability of the same system when it is improved. The effect of the component importance measure is also studied in our derivations.
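A textbook special case illustrates the idea (this is not the paper's general formula): for a single exponential component with rate λ, equating the reduced-rate reliability e^{-ρλt} with the reliability of a hot-duplicated (parallel) pair at mission time t gives ρ = -ln(1 - (1 - e^{-λt})²)/(λt).

```python
import numpy as np

def hot_duplication_factor(lam, t):
    """Survival equivalence factor rho for one exponential component:
    reducing its failure rate from lam to rho*lam matches, at mission
    time t, the reliability gained by adding one hot (parallel) duplicate."""
    r_improved = 1.0 - (1.0 - np.exp(-lam * t)) ** 2   # parallel pair
    return -np.log(r_improved) / (lam * t)

# illustrative values: rate 0.5, mission time 2.0
rho = hot_duplication_factor(lam=0.5, t=2.0)
r_reduced = np.exp(-rho * 0.5 * 2.0)
r_improved = 1.0 - (1.0 - np.exp(-0.5 * 2.0)) ** 2
```

Since ρ < 1, the reduction method must cut the failure rate below its original value to match the redundancy improvement, which is exactly what the equivalence factor quantifies.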