:: Search published articles ::
Showing 246 results for Type of Study: Research

Jalal Chachi, Mohammadreza Akhond, Shokoufeh Ahmadi,
Volume 18, Issue 2 (2-2025)
Abstract

The Lee-Carter model is a useful dynamic stochastic model representing the evolution of central mortality rates over time. This model only considers the uncertainty about the coefficient related to the mortality trend over time, but not the age-dependent coefficients. This paper proposes a fuzzy extension of the Lee-Carter model that allows quantifying the uncertainty of both kinds of parameters. The variability of the time-dependent index is modeled as a stochastic fuzzy time series. Likewise, the uncertainty of the age-dependent coefficients is quantified using triangular fuzzy numbers. This last hypothesis requires developing and solving a fuzzy regression model. Once the generalization of the desired fuzzy model is introduced, we show how to fit the logarithm of the central mortality rate in Khuzestan province using fuzzy number arithmetic over the years 1383-1401 (Iranian calendar), and how to produce random fuzzy forecasts for the years 1402-1406.
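As a point of reference, the classical (crisp) Lee-Carter fit that this work fuzzifies can be computed by SVD. A minimal sketch in Python on synthetic data (the fuzzy extension itself is not reproduced here):

```python
import numpy as np

# Classical Lee-Carter fit via SVD: log m(x,t) = a_x + b_x * k_t,
# with the usual constraints sum(b_x) = 1 and sum(k_t) = 0.
rng = np.random.default_rng(0)
ages, years = 20, 30
log_m = (-8 + 0.08 * np.arange(ages)[:, None]
         + 0.02 * np.arange(years)[None, :]
         + rng.normal(0, 0.05, (ages, years)))        # synthetic log central rates

a_x = log_m.mean(axis=1)                              # age pattern
U, s, Vt = np.linalg.svd(log_m - a_x[:, None])        # rank-1 fit of residuals
b_x = U[:, 0] / U[:, 0].sum()                         # age loadings, sum(b_x) = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()                    # mortality index over time
a_x, k_t = a_x + b_x * k_t.mean(), k_t - k_t.mean()   # enforce sum(k_t) = 0

fitted = a_x[:, None] + np.outer(b_x, k_t)
print("max abs fit error:", np.abs(fitted - log_m).max())
```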
Elham Ranjbar, Mohamad Ghasem Akbari, Reza Zarei,
Volume 19, Issue 1 (9-2025)
Abstract

In time series analysis, we may encounter situations where some elements of the model are imprecise quantities. One of the most common situations is imprecision in the underlying observations, usually due to measurement or human error. In this paper, a new fuzzy autoregressive time series model based on the support vector machine approach is proposed. For this purpose, a kernel function is used to give the model stability and flexibility, and the constraints included in the model control the support points. The performance and effectiveness of the proposed fuzzy autoregressive time series model are examined using several goodness-of-fit criteria. Results on one simulated fuzzy time series example and two real examples show that the proposed method performs better than existing methods.
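A crisp analogue of the idea, assuming scikit-learn is available: support vector regression on lagged values, with the kernel supplying the flexibility the abstract refers to. The fuzzy version in the paper replaces these crisp observations:

```python
import numpy as np
from sklearn.svm import SVR

# Crisp analogue of an SVM-based AR(p) model: regress x_t on (x_{t-1},...,x_{t-p})
# with an RBF kernel; the epsilon-insensitive constraints control the support points.
rng = np.random.default_rng(1)
n, p = 300, 2
x = np.zeros(n)
for t in range(1, n):                                  # simulate an AR(1)-like series
    x[t] = 0.7 * x[t - 1] + rng.normal(0, 0.3)

X = np.column_stack([x[p - i - 1:n - i - 1] for i in range(p)])  # lagged design matrix
y = x[p:]
model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
print("one-step forecast:", model.predict(X[-1:]))
```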
Mohammad Shafaei Noughabi, Mohammad Khorashadizade,
Volume 19, Issue 1 (9-2025)
Abstract

This article introduces a new extension of the log-logistic distribution, and its properties and parameter estimation are studied and analyzed. It is shown that adding a parameter to this distribution makes its shape more symmetric and less skewed as the parameter increases. Unlike the original distribution, the moments of the new distribution and its quantile function always exist. Furthermore, it is demonstrated that the reliability measures, such as the hazard rate function, the mean residual life function, and stochastic orderings, are more flexible in the new distribution. Additionally, the parameters of the distribution are estimated using the LLP and ML methods, and the efficiency and consistency of the estimators are evaluated through simulation studies. Finally, the practical applicability of the model is demonstrated by applying the new model to real-world data from airborne equipment and lung cancer patients.
Om-Aulbanin Bashiri Goudarzi, Abdolreza Sayyareh, Sedigheh Zamani Mehreyan,
Volume 19, Issue 1 (9-2025)
Abstract

Boosting is a family of supervised machine learning algorithms that reduces variance by combining the results of different weak learners into a strong one. In this paper, mixture models with random effects are considered for small areas, where the errors follow an AR-GARCH model. For variable selection, machine learning algorithms such as boosting are proposed. Using simulated data and tax liability data, the boosting algorithm's performance is studied and compared with classical variable selection methods such as stepwise selection.
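A minimal sketch of boosting-based variable selection, assuming scikit-learn and synthetic data in place of the paper's tax liability data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Rank predictors by boosting impurity importances; only x0 and x3 are informative.
rng = np.random.default_rng(2)
n, p = 500, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 1, n)

gb = GradientBoostingRegressor(n_estimators=300, max_depth=2,
                               learning_rate=0.05).fit(X, y)
ranked = np.argsort(gb.feature_importances_)[::-1]
print("variables ranked by importance:", ranked[:5])
```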
Arezu Rahmanpour, Yadollah Waghei, Gholam Reza Mohtashami Borzadaran,
Volume 19, Issue 1 (9-2025)
Abstract

Change point detection is one of the most challenging statistical problems because the number and position of the change points are unknown. In this article, we first introduce the concept of a change point and then obtain parameter estimates for the first-order autoregressive model AR(1); to investigate the precision of the estimated parameters, we conduct a simulation study. The precision and consistency of the estimators were evaluated using the MSE. The simulation study shows that the parameter estimation is consistent, in the sense that as the sample size increases, the MSE of each parameter converges to zero. Next, the AR(1) model with a change point was fitted to Iran's annual inflation rate data (from 1944 to 2022), and the inflation rates in 2023 and 2024 were predicted from it.
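A minimal sketch of single change point detection in an AR(1) series by profile least squares, a simplification of the estimation studied here:

```python
import numpy as np

# Fit AR(1) by OLS on each side of every candidate split; the estimated change
# point minimizes the total sum of squared errors.
rng = np.random.default_rng(3)
n, tau = 200, 120
x = np.zeros(n)
for t in range(1, n):
    phi = 0.3 if t < tau else 0.8                # AR coefficient changes at tau
    x[t] = phi * x[t - 1] + rng.normal()

def sse(seg):
    y, z = seg[1:], seg[:-1]
    phi_hat = (z @ y) / (z @ z)                  # OLS estimate of the AR coefficient
    return np.sum((y - phi_hat * z) ** 2)

cands = range(20, n - 20)
tau_hat = min(cands, key=lambda k: sse(x[:k]) + sse(x[k:]))
print("estimated change point:", tau_hat, "(true:", tau, ")")
```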
Tara Mohammadi, Hadi Jabbari, Sohrab Effati,
Volume 19, Issue 1 (9-2025)
Abstract

Support vector machine (SVM), a supervised algorithm, was initially invented for the binary case; due to its applications, multi-class algorithms were later designed and remain an active research topic. Recently, models have been presented to improve multi-class methods. Most of them examine cases in which the inputs are non-random, while in the real world we are faced with uncertain and imprecise data. Therefore, this paper examines a model in which the inputs are uncertain and the problem's constraints are probabilistic. Using statistical theorems and mathematical expectation, the problem's constraints are freed from their random form, and the method of moments is used to estimate the mathematical expectation. Synthetic data are generated by Monte Carlo simulation, bootstrap resampling provides samples as input to the model, and the accuracy of the model is examined. Finally, the proposed model was trained on real data and its accuracy evaluated with statistical indicators. The results from the simulation and the real examples show the superiority of the proposed model over a model based on deterministic inputs.
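A minimal sketch of the "replace random constraints by expectations" idea, assuming scikit-learn: the moment estimate of E[x] is the mean of replicate noisy measurements, and accuracy is assessed on bootstrap resamples:

```python
import numpy as np
from sklearn.svm import SVC

# Inputs observed with measurement noise; the method-of-moments estimate of E[x]
# (the mean over replicates) feeds a standard SVC, then bootstrap assesses accuracy.
rng = np.random.default_rng(4)
n, reps = 200, 5
mu = np.vstack([rng.normal(0, 1, (n // 2, 2)), rng.normal(2, 1, (n // 2, 2))])
y = np.repeat([0, 1], n // 2)
noisy = mu[None] + rng.normal(0, 0.8, (reps, n, 2))   # uncertain measurements
X_hat = noisy.mean(axis=0)                            # moment estimate of E[x]

clf = SVC(kernel="rbf").fit(X_hat, y)
accs = []
for _ in range(200):                                  # bootstrap accuracy
    idx = rng.integers(0, n, n)
    accs.append(clf.score(X_hat[idx], y[idx]))
print("bootstrap mean accuracy:", np.mean(accs))
```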


Alireza Beheshty, Hosein Baghishani, Mohammadhasan Behzadi, Gholamhosein Yari, Daniel Turek,
Volume 19, Issue 1 (9-2025)
Abstract

Financial and economic indicators, such as housing prices, often show spatial correlation and heterogeneity. While spatial econometric models effectively address spatial dependency, they face challenges in capturing heterogeneity. Geographically weighted regression is naturally used to model this heterogeneity, but it can become too complex when data show homogeneity across subregions. In this paper, spatially homogeneous subareas are identified through spatial clustering, and Bayesian spatial econometric models are then fitted to each subregion. The integrated nested Laplace approximation method is applied to overcome the computational complexity of posterior inference and the difficulties of MCMC algorithms. The proposed methodology is assessed through a simulation study and applied to analyze housing prices in Mashhad City.
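INLA is available through the R-INLA package; the partitioning step itself can be sketched in Python, with a plain per-cluster regression standing in for the Bayesian spatial econometric fit:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Cluster locations into homogeneous subregions, then fit a separate model per
# cluster. An ordinary regression illustrates the partitioning logic only.
rng = np.random.default_rng(5)
n = 600
coords = rng.uniform(0, 10, (n, 2))
region = (coords[:, 0] > 5).astype(int)               # two latent subregions
X = rng.normal(size=(n, 1))
y = (np.where(region == 0, 1.0 + 0.5 * X[:, 0], 4.0 - 1.0 * X[:, 0])
     + rng.normal(0, 0.2, n))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
for c in np.unique(labels):
    m = LinearRegression().fit(X[labels == c], y[labels == c])
    print(f"cluster {c}: intercept={m.intercept_:.2f}, slope={m.coef_[0]:.2f}")
```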


Zahra Nicknam, Rahim Chinipardaz,
Volume 19, Issue 1 (9-2025)
Abstract

Classical hypothesis tests for parameters provide suitable tests when the hypotheses are unrestricted; the best of these are the uniformly most powerful test and the uniformly most powerful unbiased test. These tests are designed for specific hypotheses about the parameter, such as one-sided and two-sided hypotheses. In practice, however, we may encounter hypotheses in which the parameters under test are restricted in the null or the alternative hypothesis. Such hypotheses do not fit in the framework of classical hypothesis testing, so statisticians look for tests that are more powerful than the classical ones. In this article, the union-intersection test for sign testing of the variances of several normal distributions is proposed and compared with the likelihood ratio test. Although the union-intersection test is more powerful, neither test is unbiased. Rectangular and smoothed versions of the test are also examined in search of a more powerful test.
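A minimal sketch of the union-intersection principle (not the paper's exact restricted hypotheses): the null is an intersection of simple nulls and is rejected when the largest component statistic exceeds an adjusted cutoff:

```python
import numpy as np
from scipy import stats

# H0: sigma_i^2 <= sigma0^2 for every group i. The union-intersection test
# rejects if ANY per-group chi-square test rejects, i.e. if the maximum
# statistic exceeds a Sidak-adjusted critical value.
rng = np.random.default_rng(6)
k, n, sigma0 = 4, 30, 1.0
samples = [rng.normal(0, 1.0 if i < 3 else 1.6, n) for i in range(k)]

stats_i = np.array([(n - 1) * np.var(s, ddof=1) / sigma0**2 for s in samples])
alpha = 0.05
cut = stats.chi2.ppf((1 - alpha) ** (1 / k), df=n - 1)   # Sidak-adjusted cutoff
print("component statistics:", np.round(stats_i, 1))
print("reject H0:", stats_i.max() > cut)
```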
Dr Adeleh Fallah,
Volume 19, Issue 1 (9-2025)
Abstract

In this paper, estimation of the modified Lindley distribution parameter is studied based on progressive Type II censored data. Maximum likelihood, pivotal, and Bayesian estimates are calculated, the Bayesian ones using the Lindley approximation and Markov chain Monte Carlo methods. Asymptotic, pivotal, bootstrap, and Bayesian confidence intervals are provided. A Monte Carlo simulation study is conducted to evaluate and compare the performance of the different estimation methods. To further illustrate the introduced estimation methods, two real examples are provided.
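Under progressive Type II censoring with removal scheme R, the likelihood is proportional to the product over failures of f(x_i) S(x_i)^{R_i}. A minimal numerical-MLE sketch with an exponential stand-in for the modified Lindley density (a genuine progressively censored sample would be generated with the Balakrishnan-Sandhu algorithm; a plain sorted sample is used here only to exercise the likelihood):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Progressive Type II likelihood: L(theta) ~ prod_i f(x_i) * S(x_i)^{R_i}.
rng = np.random.default_rng(7)
theta_true, m = 2.0, 20
x = np.sort(rng.exponential(theta_true, m))    # stand-in observed failure times
R = np.array([2] * 5 + [0] * 15)               # units removed after each failure

def neg_loglik(theta):
    logf = -np.log(theta) - x / theta          # exponential log-density
    logS = -x / theta                          # exponential log-survival
    return -(np.sum(logf) + np.sum(R * logS))

res = minimize_scalar(neg_loglik, bounds=(0.1, 20), method="bounded")
print("MLE of theta:", round(res.x, 3))
```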
Bahram Tarami, Nahid Sanjari Farsipour, Hassan Khosravi,
Volume 19, Issue 2 (3-2026)
Abstract

In many applications, observations show skewness, an elongated shape, heavy tails, a multi-mode structure, or a mixed distribution. Models based on the normal distribution cannot provide correct inferences under such conditions and can lead to biased estimators or inflated variances. The Laplace distribution and its generalizations, with their elongation, heavy tails, and skewness, are suitable alternatives in such situations. On the other hand, in models based on mixture distributions there is always a possibility that few samples are available from one or more components. Therefore, given the Bayesian approach's advantage in handling small samples, this research develops a Bayesian model for fitting a finite mixture regression model with skew-Laplace distributions and conducts a simulation study to assess its performance. The skew-Laplace model is compared under two approaches, frequentist and Bayesian. The results show that the Bayesian version of the model is more effective than the other models.
Dr. Mahdi Alimohammadi, Mrs. Rezvan Gharebaghi,
Volume 19, Issue 2 (3-2026)
Abstract

It was proved about 60 years ago that if a continuous random variable X has an increasing failure rate, then its order statistics also have increasing failure rates. The problem remained unproved in the discrete case until recently, when a proof using an integral inequality was provided. In this article, we present a completely different method to solve this problem.
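For reference, the objects involved (a LaTeX note restating the standard definitions, not the article's new proof):

```latex
% X is IFR (increasing failure rate) when its failure rate is nondecreasing:
%   r(x) = f(x) / (1 - F(x))  nondecreasing in x.
% Density of the k-th order statistic of an i.i.d. sample of size n:
f_{(k)}(x) \;=\; \frac{n!}{(k-1)!\,(n-k)!}\; F(x)^{k-1}\,\bigl(1-F(x)\bigr)^{n-k}\, f(x).
% The classical result: X IFR  \Longrightarrow  X_{(k)} IFR for every k = 1,\dots,n.
```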
Hadi Alizadeh Noughabi, Majid Chahkandi,
Volume 19, Issue 2 (3-2026)
Abstract

In today’s industrial world, effective maintenance plays a key role in reducing costs and improving productivity. This paper introduces goodness-of-fit tests based on information measures, including entropy, extropy, and varentropy, to evaluate the type of repair in repairable systems. Using system age data after repair, the tests examine the adequacy of the arithmetic reduction of age model of order 1. The power of the proposed tests is compared with classical tests based on martingale residuals and the probability integral transform. Simulation results show that the proposed tests perform better in identifying imperfect repair models. Their application to real data on vehicle failures also indicates that this model provides a good fit.
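One ingredient such information-based tests share is a sample entropy estimate. A minimal sketch of the Vasicek spacing estimator of differential entropy, with null critical values left to Monte Carlo:

```python
import numpy as np

# Vasicek estimator: H = (1/n) * sum log( n * (x_(i+m) - x_(i-m)) / (2m) ),
# with order-statistic indices clamped to [1, n].
def vasicek_entropy(x, m):
    x = np.sort(np.asarray(x))
    n = len(x)
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    spacings = x[hi] - x[lo]
    return np.mean(np.log(n * spacings / (2 * m)))

rng = np.random.default_rng(8)
sample = rng.exponential(1.0, 100)
print("entropy estimate:", round(vasicek_entropy(sample, m=5), 3))
print("true exponential(1) entropy:", 1.0)
```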


Omid Kharazmi, Faezeh Shirazi-Niya,
Volume 19, Issue 2 (3-2026)
Abstract

In this paper, by considering the generalized chi-squared information and the relative generalized chi-squared information measures, discrete versions of these information measures are introduced. Then, generalizations of these information quantities based on their convexity property are presented. Some essential features of these new measures and their relationships are studied. Moreover, the performance of these new information measures is investigated for some well-known and widely used models in coding theory and thermodynamics, such as escort distributions and generalized escort distributions. Finally, two applications of the introduced discrete generalized chi-squared information measure are examined in the context of image quality assessment, and the results are compared with the widely used peak signal-to-noise ratio (PSNR) metric. It is shown that the generalized chi-squared divergence measure performs similarly to PSNR and can be used as an alternative metric.
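A minimal sketch of the comparison in the image setting: a histogram-based chi-squared divergence next to PSNR, on synthetic 8-bit data (bin count and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
img = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255)

def chi2_divergence(a, b, bins=32):
    # chi-squared divergence between normalized intensity histograms
    p, _ = np.histogram(a, bins=bins, range=(0, 255), density=True)
    q, _ = np.histogram(b, bins=bins, range=(0, 255), density=True)
    mask = q > 0
    return np.sum((p[mask] - q[mask]) ** 2 / q[mask])

def psnr(a, b, peak=255.0):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(peak**2 / mse)

print("chi2 divergence:", round(chi2_divergence(img, noisy), 4))
print("PSNR (dB):", round(psnr(img, noisy), 2))
```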
Ebrahim Amini-Seresht,
Volume 19, Issue 2 (3-2026)
Abstract

In this paper, a nonparametric test based on incomplete data is proposed for investigating the usual stochastic order, using an extension of the Banerjee statistic for Type I censored data. The extension is optimized with weight coefficients based on Simpson's rule, and the bootstrap method with 10000 iterations is used to estimate the empirical distribution of the proposed test statistic. The empirical distribution of the statistic under censoring is studied, and the power of the test is evaluated against the Lehmann alternative model using Monte Carlo simulations.
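A generic stand-in, not the Banerjee statistic itself: integrate the empirical survival gap on [0, tau] with Simpson weights and calibrate by bootstrap (1000 resamples here, versus 10000 in the paper):

```python
import numpy as np

# Two-sample stochastic-order discrepancy under Type I censoring at tau:
# Simpson-weighted integral of the empirical survival gap, bootstrap-calibrated.
rng = np.random.default_rng(10)
tau, n = 2.0, 100
x1 = np.minimum(rng.exponential(1.2, n), tau)       # censored at tau
x2 = np.minimum(rng.exponential(0.8, n), tau)

grid = np.linspace(0, tau, 51)                      # odd grid size for Simpson's rule
w = np.ones(51); w[1:-1:2] = 4; w[2:-1:2] = 2
w *= (grid[1] - grid[0]) / 3                        # composite Simpson weights

def stat(a, b):
    S1 = np.array([(a > t).mean() for t in grid])   # empirical survival curves
    S2 = np.array([(b > t).mean() for t in grid])
    return np.sum(w * (S1 - S2))

obs = stat(x1, x2)
pooled = np.concatenate([x1, x2])
null = [stat(rng.choice(pooled, n), rng.choice(pooled, n)) for _ in range(1000)]
print("p-value:", np.mean(np.array(null) >= obs))
```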


Shahram Yaghoobzadeh,
Volume 19, Issue 2 (3-2026)
Abstract

Studying various models in queueing theory is essential for improving the efficiency of queueing systems. In this paper, from the family of models $\{E_r/M/c;\ r, c \in \mathbb{N}\}$, the $E_r/M/3$ model is introduced, and quantities such as the distribution of the number of customers in the system, the average number of customers in the queue and in the system, and the average waiting time in the queue and in the system for a single customer are obtained. Given the crucial role of the traffic intensity parameter in the performance evaluation criteria of queueing systems, this parameter is estimated using Bayesian, E-Bayesian, and hierarchical Bayesian methods under the general entropy loss function and based on the system's stopping time. Furthermore, based on the E-Bayesian estimator, a new estimator for the traffic intensity parameter is proposed, referred to in this paper as the E^2-Bayesian estimator. Among the Bayesian, E-Bayesian, hierarchical Bayesian, and new estimators, the one that minimizes the average waiting time in the customer queue is considered the optimal estimator of the traffic intensity parameter. Finally, through Monte Carlo simulation and a real dataset, the superiority of the proposed estimator over the other estimators is demonstrated.
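A minimal simulation sketch of the $E_r/M/3$ system: Erlang-$r$ interarrivals, exponential services, three FCFS servers; the Bayesian estimation of the traffic intensity is not reproduced here:

```python
import numpy as np

# Simulate an E_r/M/c queue and estimate the mean waiting time in the queue.
# Traffic intensity: rho = lambda / (c * mu).
rng = np.random.default_rng(11)
r, lam, mu, c, n = 2, 2.0, 1.0, 3, 20_000
inter = rng.gamma(shape=r, scale=1 / (r * lam), size=n)  # Erlang-r, mean 1/lam
arrivals = np.cumsum(inter)
free = np.zeros(c)                                        # next-free time per server
wait = np.empty(n)
for i, t in enumerate(arrivals):
    j = np.argmin(free)                                   # earliest available server
    start = max(t, free[j])
    wait[i] = start - t
    free[j] = start + rng.exponential(1 / mu)

print("rho =", lam / (c * mu), " mean wait in queue ~", round(wait.mean(), 3))
```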


Dr Mahdi Rasekhi,
Volume 20, Issue 1 (9-2026)
Abstract

In this paper, a first-order integer-valued autoregressive process with non-negative integer values is introduced, based on the binomial thinning operator and driven by Poisson-Komal distributed noise. To estimate the parameters of the proposed model, two estimation methods are investigated: Conditional Maximum Likelihood Estimation and the Yule–Walker Method. Furthermore, the performance of these estimation techniques is evaluated through a simulation study. In addition, the practical applicability of the proposed model is demonstrated using two real-world datasets from the field of veterinary sciences.
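A minimal sketch of the process and its Yule-Walker estimate, with plain Poisson innovations standing in for the Poisson-Komal noise:

```python
import numpy as np

# INAR(1) with binomial thinning: X_t = alpha o X_{t-1} + eps_t, where
# alpha o X = Binomial(X, alpha). Yule-Walker: alpha_hat = lag-1 autocorrelation.
rng = np.random.default_rng(12)
alpha, lam, n = 0.6, 1.5, 2000
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))          # stationary Poisson mean
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

xc = x - x.mean()
alpha_hat = (xc[1:] @ xc[:-1]) / (xc @ xc)
print("Yule-Walker alpha_hat:", round(alpha_hat, 3), "(true:", alpha, ")")
```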


Dr Alireza Pakgohar, Dr Soheil Shokri,
Volume 20, Issue 1 (9-2026)
Abstract

This study investigates the wavelet energy distribution in high-frequency fractal systems and analyzes its characteristics using information-theoretic measures. The main innovation of this paper lies in modeling the wavelet energy distribution ($p_j$) using a truncated geometric distribution and incorporating the concept of extropy to quantify system complexity. It is demonstrated that this distribution is strongly influenced by the fractal parameter $\alpha$ and the number of decomposition levels $M$. By computing wavelet entropy and extropy as measures of disorder and information, respectively, the study provides a quantitative analysis of the complexity of these systems. The paper further examines key properties of this distribution, including its convergence to geometric, uniform, and degenerate distributions under limiting conditions (e.g., $M \to \infty$ or $\alpha \to 0$). Results indicate that entropy and extropy serve as complementary tools for a comprehensive description of system behavior: while entropy measures disorder, extropy reflects the degree of information and certainty. This approach establishes a novel framework for analyzing real-world signals with varying parameters and holds potential applications in the analysis of fractal signals and the modeling of complex systems in fields such as finance and biology.

To validate the theoretical findings, synthetic fractal signals (fractional Brownian motion) with varying fractal parameters ($\alpha$) and decomposition levels ($M$) were simulated. Numerical results show that wavelet entropy increases significantly with the number of decomposition levels ($M$), whereas extropy exhibits slower growth and saturates at higher decomposition levels. These findings underscore the importance of selecting an appropriate decomposition level. The proposed combined framework offers a powerful tool for analyzing and modeling complex, non-stationary systems in domains such as finance and biology.
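A minimal sketch of the quantities involved, assuming PyWavelets is available, with a plain Brownian motion path standing in for fractional Brownian motion:

```python
import numpy as np
import pywt   # PyWavelets, assumed available

# Wavelet energy distribution p_j over detail levels, with entropy
# H = -sum p log p and discrete extropy J = -sum (1 - p) log(1 - p).
rng = np.random.default_rng(13)
x = np.cumsum(rng.normal(size=4096))               # Brownian motion (H = 0.5)

M = 6
coeffs = pywt.wavedec(x, "db4", level=M)           # [cA_M, cD_M, ..., cD_1]
energies = np.array([np.sum(c**2) for c in coeffs[1:]])   # detail levels only
p = energies / energies.sum()

entropy = -np.sum(p * np.log(p))
extropy = -np.sum((1 - p) * np.log(1 - p))
print("p_j:", np.round(p, 3))
print("wavelet entropy:", round(entropy, 3), " extropy:", round(extropy, 3))
```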
Reza Alizadeh Noughabi, Zohreh Pakdaman, Hadi Alizadeh Noughabi,
Volume 20, Issue 1 (9-2026)
Abstract

In this paper, a novel index, called the Jensen cumulative residual extropy divergence, is investigated for analyzing and measuring the behavioral complexity of conditional mixed systems. First, using the vector of conditional coefficients obtained from the signature vector, the behavior of this measure is analytically examined for a class of coherent systems and their dual systems, in the case where the components follow gamma distributions. Simulations are then performed to evaluate the obtained results. The results of this paper show that the minimum complexity is achieved by coherent $k$-out-of-$n$ systems, for which the Jensen cumulative residual extropy divergence equals zero. Moreover, the results indicate that duality of systems does not necessarily lead to equality of the Jensen cumulative residual extropy divergence in conditional mixed systems; rather, the index is sensitive to component weighting, order statistics, and the structural interaction among the components of the system.
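A hedged numerical sketch: cumulative residual extropy is $\xi J(X) = -\tfrac{1}{2}\int \bar F(x)^2\,dx$, and a Jensen-type divergence can be built from it in the usual way, as the measure of the mixture minus the mixture of the measures. This construction is assumed for illustration and need not match the paper's exact index; the gamma components and weights are illustrative:

```python
import numpy as np
from scipy import stats

# Cumulative residual extropy xiJ(X) = -(1/2) * int S(x)^2 dx, and a Jensen-type
# divergence J(p) = xiJ(mixture) - sum_i p_i * xiJ(X_i)  (>= 0 by convexity).
grid = np.linspace(0, 60, 20001)
dx = grid[1] - grid[0]

def cre_extropy(sf):
    return -0.5 * np.sum(sf(grid) ** 2) * dx

comp = [stats.gamma(a) for a in (2.0, 5.0)]        # gamma components
p = np.array([0.3, 0.7])                           # mixture (signature) weights

mix_sf = lambda t: sum(pi * c.sf(t) for pi, c in zip(p, comp))
J = cre_extropy(mix_sf) - sum(pi * cre_extropy(c.sf) for pi, c in zip(p, comp))
print("Jensen cumulative residual extropy divergence:", round(J, 5))
```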
Fatemeh Ghasemi, Ali Mohammadian Mosammam, Mateu Jorge,
Volume 20, Issue 1 (9-2026)
Abstract

This paper presents a nonparametric Bayesian method for estimating nonstationary covariance structures in big spatial datasets. The approach extends the Vecchia approximation and assumes conditional independence among ordered data points, leading to a sparse precision matrix and sparse Cholesky decomposition. This enables modeling an $n$-dimensional Gaussian process as a sequence of Bayesian linear regressions. Data ordering via maximum minimum distance improves model performance. Applying the grouping algorithm to ordered data removes weak dependencies and defines a block-sparse covariance structure, significantly reducing computational burden and enhancing accuracy. Simulations and real data analysis show that posterior samples from the proposed method yield narrower uncertainty intervals than those from ungrouped approaches.
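A minimal sketch of the Vecchia conditioning step (coordinate sorting stands in for the max-min ordering, and the covariance hyperparameters are fixed rather than sampled):

```python
import numpy as np

# Vecchia idea: after ordering the points, condition each one on at most m
# previous nearest neighbors, turning the joint Gaussian likelihood into a
# product of small conditional (linear-regression) terms.
rng = np.random.default_rng(14)
n, m = 400, 10
locs = rng.uniform(0, 10, (n, 2))
locs = locs[np.argsort(locs[:, 0])]                # simple ordering stand-in
D = np.linalg.norm(locs[:, None] - locs[None], axis=2)
K = np.exp(-D / 2.0) + 1e-8 * np.eye(n)            # exponential covariance
y = np.linalg.cholesky(K) @ rng.normal(size=n)     # exact GP sample

loglik = -0.5 * np.log(2 * np.pi * K[0, 0]) - y[0] ** 2 / (2 * K[0, 0])
for i in range(1, n):
    nb = np.argsort(D[i, :i])[:m]                  # nearest previous neighbors
    Knn, kin = K[np.ix_(nb, nb)], K[nb, i]
    w = np.linalg.solve(Knn, kin)                  # kriging weights
    mu, var = w @ y[nb], K[i, i] - w @ kin         # conditional mean and variance
    loglik += -0.5 * np.log(2 * np.pi * var) - (y[i] - mu) ** 2 / (2 * var)
print("Vecchia log-likelihood:", round(loglik, 2))
```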
Dr Tahere Manouchehri, Dr Ali Reza Nematollahi,
Volume 20, Issue 1 (9-2026)
Abstract

In this paper, we present a comprehensive review and comparative analysis of estimation methods for periodic autoregressive (PAR) models driven by scale mixture of skew-normal (SMSN) innovations, a flexible class suitable for modeling both symmetric and asymmetric data. Expectation-conditional maximization algorithms are employed to develop maximum likelihood, maximum a posteriori, and Bayesian estimation procedures. A thorough evaluation of these methods is conducted using simulation studies, with particular attention to asymptotic properties and robustness against outliers, high peaks, and heavy tails. To demonstrate their practical utility, these methods are applied to monthly Google stock price data.
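A minimal sketch of a PAR(1) with one coefficient per season, fitted by within-season least squares; Gaussian innovations stand in for the SMSN class studied here:

```python
import numpy as np

# Periodic AR(1): X_t = phi_{s(t)} * X_{t-1} + e_t, one coefficient per season
# (s = 12 months), each estimated by OLS over the times falling in that season.
rng = np.random.default_rng(15)
S, years = 12, 50
phi_true = 0.3 + 0.4 * np.sin(2 * np.pi * np.arange(S) / S)
n = S * years
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true[t % S] * x[t - 1] + rng.normal()

phi_hat = np.empty(S)
for s in range(S):
    t = np.arange(s if s > 0 else S, n, S)         # times in season s, t >= 1
    phi_hat[s] = (x[t - 1] @ x[t]) / (x[t - 1] @ x[t - 1])
print("max |phi_hat - phi_true|:", round(np.abs(phi_hat - phi_true).max(), 3))
```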


