:: Search published articles ::

Dariush Najarzadeh,
Volume 17, Issue 1 (9-2023)
Abstract

In multiple regression analysis, the population multiple correlation coefficient (PMCC) is widely used to measure the correlation between a variable and a set of variables. To assess whether such a correlation exists, testing the hypothesis of a zero PMCC can be very useful. In high-dimensional data, the sample covariance matrix is singular, so traditional procedures for testing this hypothesis lose their applicability. A simple test statistic for a zero PMCC is proposed, based on a plug-in estimator of the inverse of the sample covariance matrix, and a permutation test is then constructed from this statistic to test the null hypothesis. A simulation study evaluates the performance of the proposed test on both high-dimensional and low-dimensional normal data sets. The study concludes by applying the proposed approach to mice tumour volume data.
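As a rough illustration only (the paper's exact plug-in estimator is not reproduced here; this sketch substitutes a ridge-regularized inverse, and all names are ours), a minimal permutation test for a zero PMCC might look like:

```python
import numpy as np

def pmcc_stat(y, X, eps=0.1):
    """Squared sample multiple correlation between y and the columns of X,
    with a regularized plug-in inverse so the statistic exists when p > n."""
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    n, p = X.shape
    S_xx = Xc.T @ Xc / (n - 1)
    s_xy = Xc.T @ yc / (n - 1)
    S_inv = np.linalg.inv(S_xx + eps * np.eye(p))  # assumed plug-in form
    return float(s_xy @ S_inv @ s_xy / yc.var(ddof=1))

def permutation_pvalue(y, X, n_perm=999, seed=0):
    """Permuting y breaks any y-X association, giving the null distribution."""
    rng = np.random.default_rng(seed)
    t_obs = pmcc_stat(y, X)
    t_null = [pmcc_stat(rng.permutation(y), X) for _ in range(n_perm)]
    return (1 + sum(t >= t_obs for t in t_null)) / (n_perm + 1)
```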
Bahram Haji Joudaki, Reza Hashemi, Soliman Khazaei,
Volume 17, Issue 2 (2-2024)
Abstract

In this paper, a new Dirichlet process mixture model with the generalized inverse Weibull distribution as the kernel is proposed. After specifying the prior distributions of the parameters in the proposed model, Markov chain Monte Carlo methods are applied to generate a sample from the posterior distribution of the parameters. The performance of the presented model is illustrated by analyzing real and simulated data sets, some of which are right-censored. The model's potential for data clustering is also demonstrated. The results indicate the acceptable performance of the introduced model.
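For readers unfamiliar with the Dirichlet process, a truncated stick-breaking draw is the standard constructive picture; the sketch below is generic, not the paper's sampler, and the base measure is a placeholder for the prior on the kernel parameters:

```python
import numpy as np

def stick_breaking(alpha, n_atoms, base_sampler, seed=0):
    """Truncated stick-breaking draw from DP(alpha, G0): weights
    w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha),
    atoms drawn i.i.d. from the base measure G0."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return w, base_sampler(n_atoms, rng)

# e.g. a normal base measure standing in for the prior on the
# generalized inverse Weibull kernel parameters:
w, atoms = stick_breaking(1.0, 50, lambda m, r: r.normal(size=m))
```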
Nasrin Noori, Hossein Bevrani,
Volume 17, Issue 2 (2-2024)
Abstract

The prevalence of high-dimensional datasets has driven increased use of penalized likelihood methods. However, when the number of observations is small relative to the number of covariates, each observation can strongly influence model selection and inference. Identifying and assessing influential observations is therefore vital in penalized methods. This article reviews recently introduced influence measures for detecting influential observations in high-dimensional lasso regression. These measures are then investigated under the elastic net method, which combines the variable removal of the lasso with the coefficient shrinkage of the ridge to improve model predictions. Simulations and real datasets illustrate that the introduced influence measures effectively identify influential observations and can help reveal otherwise hidden relationships in the data.
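As one deliberately simple stand-in for such measures (the article's own measures may differ), a case-deletion, df-beta-style diagnostic for the lasso can be sketched as:

```python
import numpy as np
from sklearn.linear_model import Lasso

def case_deletion_influence(X, y, alpha=0.1):
    """Refit the lasso without each observation and record the shift in the
    coefficient vector; large shifts flag potentially influential points."""
    full = Lasso(alpha=alpha).fit(X, y)
    n = len(y)
    influence = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        loo = Lasso(alpha=alpha).fit(X[mask], y[mask])
        influence[i] = np.linalg.norm(full.coef_ - loo.coef_)
    return influence
```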

Elaheh Kadkhoda, Gholam Reza Mohtashami Borzadaran, Mohammad Amini,
Volume 18, Issue 1 (8-2024)
Abstract

Maximum entropy copula theory combines copula theory and entropy theory: it obtains the maximum entropy distribution of random variables while accounting for their dependence structure. In this paper, the most entropic copula based on Blest's measure is introduced, and a method for estimating its parameters is investigated. Simulation results show that when the data exhibit low tail dependence, the proposed distribution outperforms the most entropic copula based on Spearman's coefficient. Finally, the method's application to hydrological data analysis is illustrated using the monthly rainfall series of Zahedan station.
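For orientation only: under linear constraints, the maximum entropy density takes an exponential-family form. With uniform-marginal constraints and a single dependence constraint E[f(U,V)] = θ (in the paper, f would come from Blest's measure, whose kernel is not reproduced here), the most entropic copula density is

\[
c(u,v) \;=\; \exp\{\, a(u) + b(v) + \lambda\, f(u,v) \,\}, \qquad (u,v) \in [0,1]^2 ,
\]

where a(·) and b(·) are Lagrange functions enforcing the uniform marginals and λ is chosen so that the dependence constraint holds.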
Roshanak Zaman,
Volume 18, Issue 2 (2-2025)
Abstract

In this paper, the prediction of the lifetimes of k-component coherent systems is studied using classical and Bayesian approaches with type-II censored system lifetime data. The system structure and signature are assumed known, and the component lifetimes follow a half-logistic distribution. Various point predictors, including the maximum likelihood predictor, the best unbiased predictor, the conditional median predictor, and the Bayesian point predictor under a squared error loss function, are calculated for the coherent system lifetime. Since the integrals required for Bayesian prediction have no closed forms, the Metropolis-Hastings algorithm and importance sampling are applied to approximate them. For type-II censored lifetime data, a prediction interval based on a pivotal quantity, a highest conditional density (HCD) prediction interval, and a Bayesian prediction interval are considered. A Monte Carlo simulation study and a numerical example are conducted to evaluate and compare the performance of the different prediction methods.
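Since the abstract leans on Metropolis-Hastings for the Bayes predictors, a generic random-walk sampler is sketched below; the actual posterior for the half-logistic system model is not shown, so `log_post` is an assumed callable:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings: samples from a density known only
    up to a normalizing constant, as needed when the Bayes predictive
    integrals have no closed form."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, log_post(theta0)
    draws = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept w.p. min(1, ratio)
            theta, lp = prop, lp_prop
        draws[t] = theta
    return draws
```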
Farzane Hashemi,
Volume 18, Issue 2 (2-2025)
Abstract

Regression models are among the most widely used statistical tools in applied research. Their basic assumption is normality of the model errors, which can fail in practice because of asymmetry or break points in the data. Piecewise regression models are widely used in various fields, and detecting the breakpoints is essential: they indicate when and how the pattern of the data structure changes. A further difficulty is heavy tails in the data, which can be handled with distributions that generalize the normal distribution. In this paper, the piecewise regression model is investigated with errors following a scale mixture of normal distributions, and this model is compared with the standard piecewise regression model with normal errors.
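As a baseline sketch of breakpoint detection (Gaussian errors; the paper replaces these with scale mixtures of normals), a two-segment continuous fit can be found by a grid search:

```python
import numpy as np

def fit_piecewise(x, y, grid):
    """For each candidate breakpoint c, regress y on [1, x, (x - c)_+]
    and keep the c minimizing the residual sum of squares."""
    best = (np.inf, None, None)
    for c in grid:
        Z = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        sse = float(np.sum((y - Z @ beta) ** 2))
        if sse < best[0]:
            best = (sse, c, beta)
    return best  # (sse, breakpoint, coefficients)
```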
Om-Aulbanin Bashiri Goudarzi, Abdolreza Sayyareh, Sedigheh Zamani Mehreyan,
Volume 19, Issue 1 (9-2025)
Abstract

Boosting is a family of supervised machine learning algorithms that reduces variance by combining weak learners into a strong learner through the aggregation of their results. In this paper, mixture models with random effects are considered for small areas, where the errors follow an AR-GARCH model. For variable selection, machine learning algorithms such as boosting are proposed. Using simulated data and tax liability data, the boosting algorithm's performance is studied and compared with classical variable selection methods such as stepwise selection.
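One standard boosting variant used for variable selection is componentwise L2-boosting, sketched below under our own simplifying assumptions (the paper's variant, and its AR-GARCH error structure, are not reproduced):

```python
import numpy as np

def l2_boost(X, y, n_steps=200, nu=0.1):
    """Componentwise L2-boosting: repeatedly fit the current residual on the
    single best covariate and take a small step nu; covariates never picked
    keep a zero coefficient, which acts as variable selection."""
    beta = np.zeros(X.shape[1])
    resid = y - y.mean()
    for _ in range(n_steps):
        scores = (X.T @ resid) ** 2 / (X ** 2).sum(axis=0)  # per-covariate fit
        j = int(np.argmax(scores))
        step = nu * (X[:, j] @ resid) / (X[:, j] @ X[:, j])
        beta[j] += step
        resid -= step * X[:, j]
    return beta  # nonzero entries = selected covariates
```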
Zahra Nicknam, Rahim Chinipardaz,
Volume 19, Issue 1 (9-2025)
Abstract

Classical hypothesis tests for parameters are suitable when the hypotheses are unrestricted; the best among them are the uniformly most powerful test and the uniformly most powerful unbiased test, designed for specific hypotheses such as one-sided and two-sided hypotheses about a parameter. In practice, however, we may encounter hypotheses in which the parameters under test are subject to restrictions in the null or alternative hypothesis. Such hypotheses do not fit the framework of classical hypothesis testing, so statisticians look for tests more powerful than the classical optimal ones. In this article, the union-intersection test for testing sign restrictions on the variances of several normal distributions is proposed and compared with the likelihood ratio test. Although the union-intersection test is more powerful, neither test is unbiased. Rectangular and smoothed versions of the test are also examined in the search for a more powerful test.
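To fix ideas, a union-intersection construction rejects an intersection null as soon as any component test rejects; a minimal sketch for one-sided variance tests follows, as an illustration of the principle only, not the paper's test:

```python
import numpy as np
from scipy import stats

def union_intersection_variances(samples, sigma0_sq, level=0.05):
    """H0: sigma_i^2 <= sigma0^2 for every group i. Each group gets a
    one-sided chi-square variance test; H0 is rejected if ANY group
    rejects at a Bonferroni-adjusted level."""
    k = len(samples)
    pvals = []
    for x in samples:
        n = len(x)
        stat = (n - 1) * np.var(x, ddof=1) / sigma0_sq
        pvals.append(stats.chi2.sf(stat, df=n - 1))
    return min(pvals) < level / k, pvals
```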
Bahram Haji Joudaki, Soliman Khazaei, Reza Hashemi,
Volume 19, Issue 1 (9-2025)
Abstract

Accelerated failure time models are used in survival analysis with censored data, especially when auxiliary variables are included. When such models depend on an unknown parameter, one applicable approach is nonparametric Bayesian methods, which treat the parameter space as infinite-dimensional. In this framework, the Dirichlet process mixture model plays an important role. In this paper, a Dirichlet process mixture model with the Burr XII distribution as the kernel is considered for modeling the survival distribution in the accelerated failure time model. MCMC methods are then employed to generate samples from the posterior distribution. The performance of the proposed model is compared with Polya tree mixture models on simulated and real data. The results show that the proposed model performs better.
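The AFT structure itself is easy to state; below is a sketch with the Burr XII survival function as a fixed baseline (parameter names are ours, and the paper's DP mixture over kernels is omitted):

```python
import numpy as np

def burr_xii_sf(t, c, k):
    """Burr XII survival function S0(t) = (1 + t^c)^(-k), t > 0."""
    return (1.0 + np.asarray(t) ** c) ** (-k)

def aft_survival(t, z, beta, c, k):
    """Accelerated failure time link: covariates z rescale time,
    S(t | z) = S0(t * exp(-z @ beta))."""
    return burr_xii_sf(np.asarray(t) * np.exp(-(z @ beta)), c, k)
```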
Adeleh Fallah,
Volume 19, Issue 1 (9-2025)
Abstract

In this paper, estimation of the parameter of the modified Lindley distribution is studied based on progressive Type II censored data. Maximum likelihood, pivotal, and Bayesian estimates are calculated, the Bayesian ones using the Lindley approximation and Markov chain Monte Carlo methods. Asymptotic, pivotal, bootstrap, and Bayesian confidence intervals are provided. A Monte Carlo simulation study evaluates and compares the performance of the different estimation methods, and two real examples further illustrate them.
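For reference, the progressive Type II censored likelihood from which all of these estimators start can be sketched generically; the modified Lindley pdf and survival function are passed in as callables rather than reproduced:

```python
import numpy as np

def progressive_typeII_loglik(x, R, logpdf, logsf, theta):
    """Up to a constant: l(theta) = sum_i [log f(x_i; theta)
    + R_i * log S(x_i; theta)], where R_i units are withdrawn
    at the i-th observed failure time x_i."""
    x = np.asarray(x, dtype=float)
    R = np.asarray(R, dtype=float)
    return float(np.sum(logpdf(x, theta) + R * logsf(x, theta)))
```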
Bahram Tarami, Nahid Sanjari Farsipour, Hassan Khosravi,
Volume 19, Issue 2 (4-2025)
Abstract

In many applications, observations exhibit skewness, elongation, heavy tails, multimodality, or a mixture distribution. Models based on the normal distribution cannot provide correct inferences under such conditions and can lead to biased estimators or inflated variance. The Laplace distribution and its generalizations, with their elongation, heavy tails, and skewness, are suitable alternatives in such situations. In models based on mixture distributions, moreover, one or more components may be represented by few observations. Given the Bayesian approach's advantage with small samples, this research therefore develops a Bayesian model for fitting a finite mixture regression model with skew-Laplace errors and assesses its performance in a simulation study. The skew-Laplace model is compared under the frequentist and Bayesian approaches, and the results show that the Bayesian version of the model is more effective than the alternatives.
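One common skew-Laplace parameterization is the asymmetric Laplace familiar from quantile regression; the paper's parameterization may differ, but its log-density illustrates how a single parameter controls skewness:

```python
import numpy as np

def ald_logpdf(y, mu, sigma, p):
    """Asymmetric Laplace: f(y) = p(1-p)/sigma * exp(-rho_p(u)), with
    u = (y - mu)/sigma and rho_p(u) = u * (p - 1{u < 0}); p in (0, 1)
    controls the skewness (p = 1/2 gives the symmetric Laplace)."""
    u = (np.asarray(y) - mu) / sigma
    rho = u * (p - (u < 0))
    return np.log(p * (1.0 - p) / sigma) - rho
```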
Shahram Yaghoobzadeh,
Volume 19, Issue 2 (4-2025)
Abstract

Studying various models in queueing theory is essential for improving the efficiency of queueing systems. In this paper, from the family of models {E_r/M/c; r, c ∈ ℕ}, the E_r/M/3 model is introduced, and quantities such as the distribution of the number of customers in the system, the average number of customers in the queue and in the system, and the average waiting time in the queue and in the system for a single customer are obtained. Given the crucial role of the traffic intensity parameter in performance evaluation of queueing systems, this parameter is estimated using Bayesian, E-Bayesian, and hierarchical Bayesian methods under the general entropy loss function, based on the system's stopping time. Furthermore, based on the E-Bayesian estimator, a new estimator for the traffic intensity parameter is proposed, referred to here as the E²-Bayesian estimator. Among the Bayesian, E-Bayesian, hierarchical Bayesian, and new estimators, the one that minimizes the average customer waiting time in the queue is taken as the optimal estimator of the traffic intensity parameter. Finally, Monte Carlo simulation and a real dataset demonstrate the superiority of the proposed estimator over the other estimators mentioned.
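Queue-level quantities like those above can be checked by simulation; below is a minimal FCFS discrete-event sketch of an E_r/M/c queue (our own construction, with c = 3 matching the paper's model):

```python
import numpy as np

def avg_queue_wait(lam, mu, r=2, c=3, n_customers=100_000, seed=0):
    """E_r/M/c under FCFS: Erlang-r interarrivals (r stages at rate r*lam,
    so mean 1/lam) and exponential services; returns the mean wait in queue."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.gamma(r, 1.0 / (r * lam), n_customers))
    free_at = np.zeros(c)              # when each server next becomes free
    total_wait = 0.0
    for t in arrivals:
        j = int(np.argmin(free_at))    # earliest-available server serves next
        start = max(t, free_at[j])
        total_wait += start - t
        free_at[j] = start + rng.exponential(1.0 / mu)
    return total_wait / n_customers
```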


Zohreh Nakhaeezadeh, Sarah Jomhoori, Fatemeh Yousefzadeh,
Volume 19, Issue 2 (4-2025)
Abstract

Integer-valued time series models play an essential role in the analysis of dependent count data. One of the main challenges in these models is detecting structural changes over time, which may be caused by sudden interventions such as policy changes, pandemics, or system failures. In this paper, the empirical likelihood method is used to detect structural changes in a class of INAR(1) processes, providing a tool for early warning of such changes. Using simulation, the empirical sizes and powers of the test are calculated for different sample sizes, and the test's performance is investigated. Finally, the practical efficiency of the test is demonstrated by identifying the change point in two real datasets: the number of robberies and the number of COVID-19 deaths.
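For experimentation with such tests, an INAR(1) path is easy to simulate; Poisson innovations are assumed here, while the class of processes in the paper may be broader:

```python
import numpy as np

def simulate_inar1(alpha, lam, n, seed=0):
    """INAR(1): X_t = alpha o X_{t-1} + eps_t, where 'o' is binomial
    thinning and eps_t ~ Poisson(lam); the stationary marginal is
    Poisson(lam / (1 - alpha))."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1.0 - alpha))   # start from stationarity
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x
```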


