Dariush Najarzadeh, Volume 17, Issue 1 (9-2023)
Abstract
In multiple regression analysis, the population multiple correlation coefficient (PMCC) is widely used to measure the correlation between a variable and a set of variables. To assess whether such a correlation exists, testing the hypothesis that the PMCC is zero can be very useful. In high-dimensional data, the singularity of the sample covariance matrix makes traditional procedures for testing this hypothesis inapplicable. A simple test statistic for a zero PMCC is proposed based on a plug-in estimator of the inverse of the sample covariance matrix, and a permutation test based on this statistic is constructed to test the null hypothesis. A simulation study evaluates the performance of the proposed test on both high-dimensional and low-dimensional normal data sets. Finally, the proposed approach is applied to a data set of mice tumour volumes.
Bahram Haji Joudaki, Reza Hashemi, Soliman Khazaei, Volume 17, Issue 2 (2-2024)
Abstract
In this paper, a new Dirichlet process mixture model with the generalized inverse Weibull distribution as the kernel is proposed. After specifying the prior distribution of the parameters of the proposed model, Markov chain Monte Carlo methods are applied to generate samples from the posterior distribution of the parameters. The performance of the model is illustrated by analyzing real and simulated data sets, some of which contain right-censored observations. The model's potential for data clustering is also demonstrated. The results indicate the acceptable performance of the introduced model.
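The Dirichlet process prior underlying such mixture models can be sketched via Sethuraman's stick-breaking construction. The kernel below is a generic exponential lifetime placeholder, not the generalized inverse Weibull of the paper, and the atom distribution and concentration value are illustrative assumptions.

```python
import random

def stick_breaking_weights(alpha, n_atoms, rng):
    # Sethuraman's construction: w_k = b_k * prod_{j<k} (1 - b_j),
    # with b_k ~ Beta(1, alpha).  Larger alpha spreads mass over
    # more mixture components.
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        b = rng.betavariate(1.0, alpha)
        weights.append(remaining * b)
        remaining *= 1.0 - b
    return weights

def sample_dp_mixture(n, alpha, rng):
    # Draw n lifetimes from a (truncated) DP mixture prior; each atom
    # carries an illustrative scale parameter.  The exponential kernel
    # is only a placeholder for the paper's kernel distribution.
    w = stick_breaking_weights(alpha, 50, rng)
    atoms = [rng.lognormvariate(0.0, 1.0) for _ in w]   # atom scales
    total = sum(w)
    draws = []
    for _ in range(n):
        u, acc = rng.random() * total, 0.0
        for k, wk in enumerate(w):
            acc += wk
            if u <= acc:
                break
        draws.append(rng.expovariate(1.0 / atoms[k]))
    return draws
```

Because the weights decay geometrically, a moderate truncation (50 atoms here) captures essentially all the prior mass.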
Nasrin Noori, Hossein Bevrani, Volume 17, Issue 2 (2-2024)
Abstract
The prevalence of high-dimensional data sets has driven increased use of penalized likelihood methods. However, when the number of observations is small relative to the number of covariates, a single observation can strongly influence model selection and inference. Identifying and assessing influential observations is therefore vital in penalized methods. This article reviews recently introduced influence measures for detecting influential observations in high-dimensional lasso regression. These measures are then investigated under the elastic net method, which combines the variable removal of the lasso with the coefficient shrinkage of the ridge to improve model predictions. Simulated and real data sets illustrate that the introduced influence measures effectively identify influential observations and can help reveal otherwise hidden relationships in the data.
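The case-deletion idea behind such influence measures can be shown in its simplest form: refit without observation i and record how much the fit moves. The sketch below uses a one-predictor least-squares slope (a DFBETA-style measure); the paper's measures target penalized high-dimensional fits, so this illustrates only the underlying principle.

```python
def slope(x, y):
    # Least-squares slope of a simple linear regression of y on x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def case_deletion_influence(x, y):
    # Influence of observation i = |change in fitted slope when
    # observation i is deleted| (a DFBETA-style diagnostic).
    full = slope(x, y)
    infl = []
    for i in range(len(x)):
        xi = x[:i] + x[i + 1:]
        yi = y[:i] + y[i + 1:]
        infl.append(abs(slope(xi, yi) - full))
    return infl
```

An observation that is both a leverage point and an outlier in the response dominates this measure, which is exactly the kind of point the reviewed diagnostics aim to flag.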
Mrs. Elaheh Kadkhoda, Mr. Gholam Reza Mohtashami Borzadaran, Mr. Mohammad Amini, Volume 18, Issue 1 (8-2024)
Abstract
Maximum entropy copula theory combines copula theory and entropy theory. It obtains the maximum entropy distribution of random variables while accounting for their dependence structure. In this paper, the most entropic copula based on Blest's measure is introduced, and a method for estimating its parameter is investigated. Simulation results show that when the data exhibit low tail dependence, the proposed distribution outperforms the most entropic copula distribution based on Spearman's coefficient. Finally, the application of this method to the analysis of hydrological data is illustrated using the monthly rainfall series of Zahedan station.
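For orientation, a commonly used copula form of Blest's measure, and the exponential-family shape that a maximum-entropy argument yields when that functional is constrained, can be sketched as follows; the multipliers \(\lambda\) are Lagrange multipliers determined numerically, and this is a generic maximum-entropy sketch rather than the paper's exact formulation.

```latex
\nu(C) \;=\; 2 \;-\; 12\int_0^1\!\!\int_0^1 (1-u)^2\, v\; c(u,v)\,\mathrm{d}u\,\mathrm{d}v ,
\qquad
c^{*}(u,v) \;=\; \exp\!\Big(\lambda_0(u) \;+\; \lambda_1(v) \;+\; \lambda_2\,(1-u)^2 v\Big),
```

where \(c^{*}\) maximizes the entropy \(-\iint c \log c\) subject to uniform margins and a fixed value of \(\nu(C)\). At independence \(\nu = 0\), and \(\nu = \pm 1\) at the comonotone and countermonotone bounds.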
Roshanak Zaman, Volume 18, Issue 2 (2-2025)
Abstract
In this paper, the prediction of the lifetimes of k-component coherent systems is studied using classical and Bayesian approaches with type-II censored system lifetime data. The system structure and signature are assumed known, and the component lifetimes follow a half-logistic distribution. Various point predictors, including the maximum likelihood predictor, the best unbiased predictor, the conditional median predictor, and the Bayesian point predictor under a squared error loss function, are calculated for the coherent system lifetime. Since the integrals required for Bayesian prediction have no closed form, the Metropolis-Hastings algorithm and importance sampling are applied to approximate them. For type-II censored lifetime data, prediction intervals based on the pivotal quantity, highest conditional density (HCD) prediction intervals, and Bayesian prediction intervals are considered. A Monte Carlo simulation study and a numerical example evaluate and compare the performance of the different prediction methods.
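A minimal random-walk Metropolis-Hastings sampler for the half-logistic scale parameter shows the kind of posterior approximation the abstract refers to. The log-normal prior, the proposal step size, and the function names are illustrative assumptions, not the paper's choices, and the sketch uses complete rather than censored data.

```python
import math
import random

def half_logistic_logpdf(x, theta):
    # f(x; theta) = (2/theta) * exp(-x/theta) / (1 + exp(-x/theta))^2, x > 0
    z = x / theta
    return math.log(2.0 / theta) - z - 2.0 * math.log1p(math.exp(-z))

def mh_posterior(data, n_iter=2000, seed=0):
    # Random-walk Metropolis on log(theta) with an illustrative vague
    # prior on log(theta) ~ roughly N(0, 1).
    rng = random.Random(seed)

    def log_post(theta):
        log_prior = -0.5 * math.log(theta) ** 2
        return log_prior + sum(half_logistic_logpdf(x, theta) for x in data)

    theta = 1.0
    lp = log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta * math.exp(0.2 * rng.gauss(0, 1))   # log-scale walk
        lp_prop = log_post(prop)
        # log acceptance ratio includes the Jacobian of the log-scale move
        if math.log(rng.random()) < lp_prop - lp + math.log(prop / theta):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

Point and interval predictors for a future system lifetime then follow by pushing the retained posterior draws of theta through the system's signature-based lifetime distribution.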
Farzane Hashemi, Volume 18, Issue 2 (2-2025)
Abstract
Regression problems are among the most widely used statistical topics across research fields. A basic assumption in these models is the normality of the errors, which can fail due to asymmetry or break points in the data. Piecewise regression models have been widely used in various fields, and detecting their break points, which indicate when and how the pattern of the data structure changes, is essential. A further difficulty is heavy tails in the data, which can be handled with distributions that generalize the normal distribution. In this paper, the piecewise regression model is investigated based on scale mixtures of normal distributions, and this model is compared with the standard piecewise regression model derived from normal errors.
Om-Aulbanin Bashiri Goudarzi, Abdolreza Sayyareh, Sedigheh Zamani Mehreyan, Volume 19, Issue 1 (9-2025)
Abstract
Boosting is a family of supervised machine learning algorithms that reduces variance by combining the results of weak learners into a strong learner. In this paper, mixture models with random effects are considered for small areas, where the errors follow an AR-GARCH model. For variable selection, machine learning algorithms such as boosting are proposed. Using simulated data and tax liability data, the performance of the boosting algorithm is studied and compared with classical variable selection methods such as stepwise selection.
Zahra Nicknam, Rahim Chinipardaz, Volume 19, Issue 1 (9-2025)
Abstract
Classical hypothesis tests provide suitable procedures when the hypotheses about the parameters are unrestricted; the best known are the uniformly most powerful test and the uniformly most powerful unbiased test, which are designed for specific hypotheses such as one-sided and two-sided hypotheses about a parameter. In practice, however, we may encounter hypotheses in which the parameters are restricted in the null or alternative hypothesis. Such hypotheses fall outside the framework of classical hypothesis testing, so statisticians look for tests more powerful than the classical ones. In this article, the union-intersection test for sign-restricted hypotheses about the variances of several normal distributions is proposed and compared with the likelihood ratio test. Although the union-intersection test is more powerful, neither test is unbiased; a rectangular test and a smoothed test are therefore also examined in search of a more powerful test.
Bahram Haji Joudaki, Soliman Khazaei, Reza Hashemi, Volume 19, Issue 1 (9-2025)
Abstract
Accelerated failure time models are used in survival analysis with censored data, especially when auxiliary variables are available. When such models depend on an unknown parameter, nonparametric Bayesian methods, which treat the parameter space as infinite-dimensional, can be applied; in this framework, the Dirichlet process mixture model plays an important role. In this paper, a Dirichlet process mixture model with the Burr XII distribution as the kernel is considered for modeling the survival distribution in the accelerated failure time model, and MCMC methods are employed to generate samples from the posterior distribution. The performance of the proposed model is compared with Polya tree mixture models on simulated and real data, and the results show that the proposed model performs better.
Dr Adeleh Fallah, Volume 19, Issue 1 (9-2025)
Abstract
In this paper, estimation of the parameter of the modified Lindley distribution is studied based on progressive type-II censored data. Maximum likelihood, pivotal, and Bayesian estimates are calculated, the latter using the Lindley approximation and Markov chain Monte Carlo methods. Asymptotic, pivotal, bootstrap, and Bayesian confidence intervals are provided. A Monte Carlo simulation study evaluates and compares the performance of the different estimation methods, and two real examples further illustrate them.
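The bootstrap interval mentioned above can be sketched in its generic percentile form: resample with replacement, re-estimate, and take empirical quantiles of the bootstrap estimates. The paper's bootstrap targets the modified Lindley parameter under progressive censoring; this sketch uses complete data and a plain mean as a stand-in estimator.

```python
import random

def percentile_bootstrap_ci(data, estimator, n_boot=500, level=0.95, seed=0):
    # Nonparametric percentile bootstrap confidence interval.
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(
        estimator([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = boots[int((1 - level) / 2 * n_boot)]
    hi = boots[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi
```

Swapping in the censored-data maximum likelihood estimator for `estimator`, and resampling censored samples accordingly, gives the interval the abstract describes.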