Search published articles
Ahad Malekzadeh, Asghar Esmaeli-Ayan, Seyed Mahdi Mahmodi, Volume 15, Issue 1 (9-2021)
Abstract
The panel data model is used in many areas, such as economics, the social sciences, medicine, and epidemiology. In recent decades, inference on regression coefficients has been developed in panel data models. In this paper, methods are introduced for testing the equality of regression models across the groups in a panel data set. First, we present a random quantity whose distribution we estimate in two ways: by an approximation and by a parametric bootstrap. We also introduce a pivotal quantity for performing this hypothesis test. In a simulation study, we compare our proposed approaches with an available method in terms of type I error rate and test power. We also apply our method to a gasoline panel data set as a real-data example.
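A minimal sketch of the parametric-bootstrap idea described in this abstract, assuming a simple pooled linear model and a sum-of-squared-differences statistic; the model, statistic, and resampling scheme below are illustrative placeholders, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_ols(X, y):
    # OLS coefficients for one group
    return np.linalg.lstsq(X, y, rcond=None)[0]

def equality_stat(groups):
    # sum of squared deviations of group coefficients from their mean
    betas = np.array([group_ols(X, y) for X, y in groups])
    return np.sum((betas - betas.mean(axis=0)) ** 2)

def bootstrap_pvalue(groups, B=999):
    # fit the pooled (null) model, then generate parametric bootstrap samples
    X_all = np.vstack([X for X, _ in groups])
    y_all = np.concatenate([y for _, y in groups])
    beta0 = group_ols(X_all, y_all)
    sigma0 = np.std(y_all - X_all @ beta0, ddof=X_all.shape[1])
    t_obs = equality_stat(groups)
    t_boot = []
    for _ in range(B):
        boot = [(X, X @ beta0 + rng.normal(0, sigma0, len(X))) for X, _ in groups]
        t_boot.append(equality_stat(boot))
    return np.mean(np.array(t_boot) >= t_obs)

# toy example: three groups sharing the same coefficients (null is true)
groups = []
for _ in range(3):
    X = np.column_stack([np.ones(50), rng.normal(size=50)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
    groups.append((X, y))
print("bootstrap p-value:", bootstrap_pvalue(groups))
```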
Mohammad Hossein Poursaeed, Volume 15, Issue 1 (9-2021)
Abstract
In this paper, based on an appropriate pivotal quantity, two methods are introduced for determining a confidence region for the mean and standard deviation of a two-parameter uniform distribution, neither of which requires numerical methods. In the first method, the smallest region is obtained by minimizing the confidence region's area, and in the second, a simultaneous Bonferroni confidence interval is constructed from the smallest confidence intervals. Comparing the area and coverage probability of the two methods, as well as the width of the strip containing the standard deviation, shows that the first method is more efficient. Finally, an approximation is presented for the quantile of the F distribution used in calculating the confidence regions in a special case.
Firozeh Bastan, Seyed Mohamad Taghi Kamel Mirmostafaee, Volume 15, Issue 2 (3-2022)
Abstract
In this paper, estimation and prediction for the Poisson-exponential distribution are studied based on lower records and inter-record times. The estimation is performed by maximum likelihood and by Bayesian methods under two loss functions, one symmetric and one asymmetric. As the integrals defining the Bayes estimates do not appear to possess closed forms, the Metropolis-Hastings-within-Gibbs and importance sampling methods are applied to approximate them. Moreover, the Bayesian prediction of future records is investigated. A simulation study and an application example evaluate and illustrate the paper's results, and compare the numerical results of inference based on records and inter-record times with those of inference based on records alone.
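A generic random-walk Metropolis step of the kind used to approximate such posterior integrals; the log-posterior below is only a placeholder, not the record-based Poisson-exponential posterior of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    # placeholder log-posterior (standard normal); replace with the
    # record-based Poisson-exponential log-posterior of interest
    return -0.5 * np.sum(theta ** 2)

def rw_metropolis(theta0, n_iter=5000, step=0.5):
    theta = np.asarray(theta0, dtype=float)
    chain = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = rw_metropolis([0.0, 0.0])
print("posterior means:", chain[2500:].mean(axis=0))   # discard burn-in
```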
Mehdi Balui, Einolah Deiri, Farshin Hormozinejad, Ezzatallah Baloui Jamkhaneh, Volume 15, Issue 2 (3-2022)
Abstract
In most practical cases, an estimator with the least risk is needed to increase the accuracy of parameter estimation, and shrinkage estimators play a critical role here. Our main purpose is to evaluate the efficiency of estimators of the shape parameter of the Pareto-Rayleigh distribution belonging to two classes of shrinkage estimators. Their efficiency is compared with that of the unbiased estimator obtained under the quadratic loss function. The relationship between the two classes of shrinkage estimators is examined, and the relative efficiency of the proposed estimators is then assessed through a Monte Carlo simulation study.
Anis Iranmanesh, Farzaneh Oliazadeh, Vahid Fakoor, Volume 15, Issue 2 (3-2022)
Abstract
In this article, we propose two non-parametric estimators of the past entropy based on length-biased data and prove their strong consistency. In addition, simulations are conducted to evaluate the performance of the proposed estimators; the results show that they perform better in different regions of the probability distribution of length-biased random variables.
Sakineh Dehghan, Mohamadreza Faridrohani, Volume 15, Issue 2 (3-2022)
Abstract
The concept of data depth provides a helpful tool for nonparametric multivariate statistical inference by taking into account the geometry of the multivariate data and ordering them. Indeed, depth functions provide a natural centre-outward ordering of multivariate points relative to a multivariate distribution or a given sample. Since the outlyingness of observations is inevitably related to data ranks, this centre-outward ordering can provide an algorithm for outlier detection. In this paper, an affine-invariant method based on the data depth concept is defined to identify outlying observations. The affine-invariance property ensures that the identification of outliers does not depend on the underlying coordinate system or measurement scales, and the method is easier to implement than most other multivariate methods. Simulation studies examine the performance of the proposed method under different depth functions. Finally, the method is applied to the financial values of residential houses in some cities of Iran in 1397 (Iranian calendar).
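A small illustration of depth-based outlier flagging using the Mahalanobis depth, one affine-invariant depth function; the paper compares several depth functions, and the cut-off rule below is only an example.

```python
import numpy as np

rng = np.random.default_rng(2)

def mahalanobis_depth(X):
    # D(x) = 1 / (1 + (x - mean)' S^{-1} (x - mean)); affine invariant
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', X - mu, S_inv, X - mu)
    return 1.0 / (1.0 + d2)

X = rng.normal(size=(200, 3))
X[:5] += 6.0                       # plant five outliers
depth = mahalanobis_depth(X)
cutoff = np.quantile(depth, 0.05)  # flag the 5% least deep points
print("flagged indices:", np.where(depth <= cutoff)[0])
```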
Parviz Nasiri, Raouf Obeidi, Volume 16, Issue 1 (9-2022)
Abstract
This paper presents the inverse Weibull-Poisson distribution for fitting censored lifetime data. Estimation and hypothesis testing are considered for the scale, shape, and failure-rate parameters, which are estimated under type-II censoring using maximum likelihood and Bayesian methods. In the Bayesian analysis, the parameters are estimated under different loss functions. The simulation section presents symmetric and HPD confidence intervals, and the estimators are compared using statistical criteria. Finally, the model's goodness of fit is evaluated on a real data set.
Alla Alhamidah, Mehran Naghizadeh, Volume 16, Issue 2 (3-2023)
Abstract
This paper discusses Bayesian and E-Bayesian estimators in the Burr type-XII model. The estimators are obtained based on type-II censored data under the bounded reflected gamma loss function. The relationship between the E-Bayesian estimators and their asymptotic properties is presented. The performance of the proposed estimators is evaluated using Monte Carlo simulation.
Mr Arta Roohi, Ms Fatemeh Jahadi, Dr Mahdi Roozbeh, Dr Saeed Zalzadeh, Volume 17, Issue 1 (9-2023)
Abstract
Classical regression approaches are not applicable to high-dimensional data, and their results may not be accurate. This study analyzes such data using powerful approaches such as support vector regression, functional regression, LASSO, and ridge regression. By investigating two high-dimensional data sets (the riboflavin data set and a simulated one) with the suggested approaches, the most efficient model is derived for each type of data based on three criteria: squared correlation, mean squared error, and mean absolute percentage error.
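A compact sketch of the kind of comparison described above, using scikit-learn's Lasso, Ridge, and SVR on simulated high-dimensional data; the riboflavin data and the tuning used in the paper are not reproduced here, and the penalty values are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score

rng = np.random.default_rng(3)
n, p = 70, 500                            # n << p, as in riboflavin-like data
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:10] = 2.0       # sparse true signal
y = X @ beta + rng.normal(size=n) + 10.0  # offset keeps MAPE well defined

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"lasso": Lasso(alpha=0.1), "ridge": Ridge(alpha=1.0),
          "svr": SVR(kernel="linear", C=1.0)}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "R2=%.3f" % r2_score(y_te, pred),
          "MSE=%.2f" % mean_squared_error(y_te, pred),
          "MAPE=%.2f" % mean_absolute_percentage_error(y_te, pred))
```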
Shahrastani Shahram Yaghoobzadeh, Volume 17, Issue 1 (9-2023)
Abstract
In this article, it is assumed that customers arrive at an M/M/c queuing system with exponential inter-arrival times with rate $\lambda$ and that service times are exponentially distributed with rate $\mu$, independently of the arrival process. It is also assumed that the system is observed until time T. Under this stopping time, maximum likelihood and Bayesian estimators, the latter under the general entropy and weighted squared error loss functions and under informative and non-informative prior distributions, are obtained for the traffic intensity parameter of the M/M/c system and for the probability that the system is stationary. The estimators are then compared by Monte Carlo simulation and a numerical example to determine the most suitable one.
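A simplified Monte Carlo comparison in the spirit of this abstract, estimating the traffic intensity $\rho = \lambda/(c\mu)$ from exponential inter-arrival and service times; the stopping rule, loss functions, and priors of the paper are not reproduced, and the conjugate gamma priors below are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, mu, c = 3.0, 2.0, 2          # assumed true rates and number of servers
rho_true = lam / (c * mu)

def one_replication(n=100):
    inter = rng.exponential(1 / lam, n)       # inter-arrival times
    serv = rng.exponential(1 / mu, n)         # service times
    lam_mle, mu_mle = n / inter.sum(), n / serv.sum()
    rho_mle = lam_mle / (c * mu_mle)
    # Bayes with conjugate Gamma(a, b) priors on lam and mu (illustrative);
    # rho_bayes is a plug-in of posterior means, not the posterior mean of rho
    a, b = 1.0, 1.0
    lam_bayes = (a + n) / (b + inter.sum())
    mu_bayes = (a + n) / (b + serv.sum())
    rho_bayes = lam_bayes / (c * mu_bayes)
    return rho_mle, rho_bayes

est = np.array([one_replication() for _ in range(2000)])
mse = ((est - rho_true) ** 2).mean(axis=0)
print("MSE  MLE=%.5f  Bayes=%.5f" % tuple(mse))
```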
Dariush Najarzadeh, Volume 17, Issue 1 (9-2023)
Abstract
In multiple regression analysis, the population multiple correlation coefficient (PMCC) is widely used to measure the correlation between a variable and a set of variables. Testing the hypothesis of zero PMCC is very useful for assessing whether such a correlation exists. In high-dimensional data, traditional procedures for testing this hypothesis lose their applicability because the sample covariance matrix is singular. A simple test statistic for zero PMCC is proposed based on a plug-in estimator of the inverse of the covariance matrix, and a permutation test based on this statistic is constructed for the null hypothesis. A simulation study evaluates the performance of the proposed test on both high-dimensional and low-dimensional normal data sets. The study concludes by applying the proposed approach to a mice tumour volume data set.
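A rough sketch of a permutation test for zero multiple correlation; the diagonally loaded inverse of the sample covariance used below stands in for the paper's plug-in estimator, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def pmcc_stat(X, y, eps=0.1):
    # squared multiple correlation with a diagonally loaded inverse of S_xx
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    S_xx = np.cov(Xc, rowvar=False)
    s_xy = Xc.T @ yc / (len(y) - 1)
    S_inv = np.linalg.inv(S_xx + eps * np.eye(X.shape[1]))  # plug-in inverse
    return float(s_xy @ S_inv @ s_xy / yc.var(ddof=1))

def permutation_test(X, y, B=999):
    t_obs = pmcc_stat(X, y)
    t_perm = [pmcc_stat(X, rng.permutation(y)) for _ in range(B)]
    return (1 + np.sum(np.array(t_perm) >= t_obs)) / (B + 1)

# high-dimensional example: p > n, y unrelated to X (null is true)
X = rng.normal(size=(40, 100))
y = rng.normal(size=40)
print("permutation p-value:", permutation_test(X, y))
```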
Bahram Haji Joudaki, Reza Hashemi, Soliman Khazaei, Volume 17, Issue 2 (2-2024)
Abstract
In this paper, a new Dirichlet process mixture model with the generalized inverse Weibull distribution as the kernel is proposed. After determining the prior distribution of the parameters in the proposed model, Markov chain Monte Carlo methods are applied to generate a sample from the posterior distribution of the parameters. The performance of the presented model is illustrated by analyzing real and simulated data sets in which some observations are right-censored. Another potential use of the proposed model, data clustering, is also demonstrated. The obtained results indicate the acceptable performance of the introduced model.
Nasrin Noori, Hossein Bevrani, Volume 17, Issue 2 (2-2024)
Abstract
The prevalence of high-dimensional data sets has driven increased use of penalized likelihood methods. However, when the number of observations is small relative to the number of covariates, each observation can tremendously influence model selection and inference, so identifying and assessing influential observations is vital in penalized methods. This article reviews recently introduced influence measures for detecting influential observations in high-dimensional lasso regression. These measures are then investigated under the elastic net method, which combines the variable removal of the lasso with the coefficient shrinkage of ridge regression to improve model predictions. Simulated and real data sets illustrate that the introduced influence measures effectively identify influential observations and can help reveal otherwise hidden relationships in the data.
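An illustrative case-deletion influence measure for the lasso, measuring how much the coefficient vector moves when each observation is left out; this is one simple member of the family of measures such articles consider, not the exact definitions reviewed in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p = 50, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 3.0
y = X @ beta + rng.normal(size=n)
y[0] += 15.0                      # contaminate one observation

full = Lasso(alpha=0.2).fit(X, y).coef_
influence = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i      # leave observation i out
    coef_i = Lasso(alpha=0.2).fit(X[keep], y[keep]).coef_
    influence[i] = np.linalg.norm(full - coef_i)   # shift in coefficients

print("most influential observations:", np.argsort(influence)[-3:][::-1])
```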
Mrs. Elaheh Kadkhoda, Mr. Gholam Reza Mohtashami Borzadaran, Mr. Mohammad Amini, Volume 18, Issue 1 (8-2024)
Abstract
Maximum entropy copula theory is a combination of copula and entropy theory. This method obtains the maximum entropy distribution of random variables by considering the dependence structure. In this paper, the most entropic copula based on Blest's measure is introduced, and its parameter estimation method is investigated. The simulation results show that if the data has low tail dependence, the proposed distribution performs better compared to the most entropic copula distribution based on Spearman's coefficient. Finally, using the monthly rainfall series data of Zahedan station, the application of this method in the analysis of hydrological data is investigated.
Roshanak Zaman, Volume 18, Issue 2 (2-2025)
Abstract
In this paper, the prediction of the lifetime of k-component coherent systems is studied using classical and Bayesian approaches with type-II censored system lifetime data. The system structure and signature are assumed known, and the component lifetime distribution follows a half-logistic model. Various point predictors, including the maximum likelihood predictor, the best unbiased predictor, the conditional median predictor, and the Bayesian point predictor under a squared error loss function, are calculated for the coherent system lifetime. Since the integrals required for the Bayes prediction do not have closed forms, the Metropolis-Hastings algorithm and importance sampling methods are applied to approximate them. For type-II censored lifetime data, prediction intervals based on the pivotal quantity, HCD prediction intervals, and Bayesian prediction intervals are considered. A Monte Carlo simulation study and a numerical example evaluate and compare the performances of the different prediction methods.
Farzane Hashemi, Volume 18, Issue 2 (2-2025)
Abstract
Regression models are among the most widely used statistical tools in applied research. Their basic assumption is that the model errors are normal, which may fail because of asymmetry or breakpoints in the data. Piecewise regression models have been widely used in various fields, and detecting the breakpoints is essential for knowing when and how the pattern of the data structure changes. A further difficulty is heavy tails in the data, which can be handled by distributions that generalize the normal distribution. In this paper, the piecewise regression model is investigated based on scale mixtures of normal distributions and compared with the standard piecewise regression model with normal errors.
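A basic grid search for a single breakpoint in a two-segment linear model, minimizing the residual sum of squares; the scale-mixture-of-normal errors discussed in the paper would replace the normal-error least-squares fit used here.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 120
x = np.sort(rng.uniform(0, 10, n))
tau_true = 6.0
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - tau_true, 0) + rng.normal(0, 0.5, n)

def rss_at(tau):
    # continuous two-segment linear fit with a hinge at tau
    Z = np.column_stack([np.ones(n), x, np.maximum(x - tau, 0)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return np.sum((y - Z @ beta) ** 2)

grid = np.linspace(x[5], x[-6], 200)        # candidate breakpoints
tau_hat = grid[np.argmin([rss_at(t) for t in grid])]
print("estimated breakpoint:", round(tau_hat, 2))
```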
Om-Aulbanin Bashiri Goudarzi, Abdolreza Sayyareh, Sedigheh Zamani Mehreyan, Volume 19, Issue 1 (9-2025)
Abstract
The boosting algorithm is a variance-reducing hybrid from the family of supervised machine learning algorithms; it turns weak learners into a strong one by combining their results. In this paper, mixture models with random effects are considered for small areas, where the errors follow an AR-GARCH model. For variable selection, machine learning algorithms such as boosting are proposed. Using simulated data and tax liability data, the boosting algorithm's performance is studied and compared with classical variable selection methods such as stepwise selection.
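A small example of boosting-based variable screening with scikit-learn's gradient boosting; the AR-GARCH error structure and small-area mixture model of the paper are beyond this sketch, which only shows how boosting ranks candidate covariates by importance.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(8)
n, p = 300, 20
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # only x0, x3 matter

gbr = GradientBoostingRegressor(n_estimators=300, max_depth=2,
                                learning_rate=0.05, random_state=0).fit(X, y)
ranking = np.argsort(gbr.feature_importances_)[::-1]
print("top covariates by importance:", ranking[:5])
```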
Zahra Nicknam, Rahim Chinipardaz, Volume 19, Issue 1 (9-2025)
Abstract
Classical hypothesis tests for parameters provide suitable tests when the hypotheses are unrestricted; the best of these are the uniformly most powerful test and the uniformly most powerful unbiased test, designed for specific hypotheses such as one-sided and two-sided hypotheses about the parameter. In practice, however, we may encounter hypotheses in which the parameters under test are subject to restrictions in the null or alternative hypothesis. Such hypotheses do not fit the framework of classical hypothesis testing, so statisticians look for tests that are more powerful than the classical ones. In this article, the union-intersection test for the sign testing of variances in several normal distributions is proposed and compared with the likelihood ratio test. Although the union-intersection test is more powerful, neither test is unbiased. Rectangular and smoothed versions of the test are also examined in search of a more powerful test.
Bahram Haji Joudaki, Soliman Khazaei, Reza Hashemi, Volume 19, Issue 1 (9-2025)
Abstract
Accelerated failure time models are used in survival analysis when the data are censored, especially in the presence of auxiliary variables. When the models in question depend on an unknown parameter, one applicable approach is Bayesian methods that treat the parameter space as infinite-dimensional. In this framework, the Dirichlet process mixture model plays an important role. In this paper, a Dirichlet process mixture model with the Burr XII distribution as the kernel is considered for modeling the survival distribution in the accelerated failure time model. MCMC methods are then employed to generate samples from the posterior distribution. The performance of the proposed model is compared with Polya tree mixture models on simulated and real data, and the results show that the proposed model performs better.
Dr Adeleh Fallah, Volume 19, Issue 1 (9-2025)
Abstract
In this paper, estimation of the modified Lindley distribution parameter is studied based on progressive type-II censored data. Maximum likelihood, pivotal, and Bayesian estimates are calculated, the latter using the Lindley approximation and Markov chain Monte Carlo methods. Asymptotic, pivotal, bootstrap, and Bayesian confidence intervals are provided. A Monte Carlo simulation study evaluates and compares the performance of the different estimation methods, and two real examples further illustrate them.