Search published articles
Showing 237 results for Type of Study: Research
Abdol Saeed Toomaj, Volume 18, Issue 1 (8-2024)
Abstract
In this paper, the entropy characteristics of the lifetime of coherent systems are investigated using the concept of the system signature. The results assume that the component lifetimes are independent and identically distributed. In particular, a formula for the Tsallis entropy of a coherent system's lifetime is derived and used to compare coherent systems built on components with a common lifetime distribution. Bounds for the Tsallis entropy of the system lifetime are also presented; these are especially useful when the system has many components or a complex structure. Finally, a criterion based on the relative Tsallis entropy is proposed for selecting a preferred system among coherent systems.
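As a concrete illustration of the signature-based formula, the sketch below numerically evaluates the Tsallis entropy of a small coherent system with iid exponential(1) components; the system, its signature, and the entropy order alpha are illustrative choices, not values from the paper.

```python
import math
import numpy as np
from scipy import stats
from scipy.integrate import quad

def system_pdf(t, signature, f, F):
    """Density of the system lifetime: f_T(t) = sum_i s_i * f_{i:n}(t)."""
    n = len(signature)
    dens = 0.0
    for i, s in enumerate(signature, start=1):
        if s == 0:
            continue
        c = i * math.comb(n, i)              # equals n!/((i-1)!(n-i)!)
        dens += s * c * F(t)**(i - 1) * (1 - F(t))**(n - i) * f(t)
    return dens

def tsallis_entropy(signature, f, F, alpha):
    """H_alpha(T) = (1 - int f_T(t)^alpha dt) / (alpha - 1), alpha != 1."""
    integral, _ = quad(lambda t: system_pdf(t, signature, f, F)**alpha, 0, np.inf)
    return (1.0 - integral) / (alpha - 1.0)

# Signature of the system max(X1, min(X2, X3)): one component in parallel
# with a series pair, with iid exponential(1) component lifetimes.
sig = (0.0, 2/3, 1/3)
print(tsallis_entropy(sig, stats.expon.pdf, stats.expon.cdf, alpha=2.0))
```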
Fatemeh Hosseini, Omid Karimi, Volume 18, Issue 1 (8-2024)
Abstract
Spatial generalized linear mixed models are widely used for categorical spatial data, with the latent variables that capture spatial correlation modeled through a Gaussian random field. Violation of the Gaussian assumption affects the accuracy of predictions and parameter estimates in these models. In this paper, spatial generalized linear mixed models are fitted and analyzed using a stationary skew-Gaussian random field and an approximate Bayesian approach. The performance of the model and of the approximate Bayesian approach is examined through a simulation example and an application to a real data set.
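A minimal sketch of the kind of latent field involved, assuming the familiar skew-normal representation Z(s) = delta*|W0(s)| + sqrt(1 - delta^2)*W1(s) built from two independent Gaussian random fields with an exponential covariance; the paper's skew-Gaussian construction and fitting procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
coords = np.column_stack([xs.ravel(), ys.ravel()])

# Exponential covariance C(h) = exp(-||h|| / range), with a small jitter
# on the diagonal for numerical stability of the Cholesky factor.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = np.exp(-d / 0.3) + 1e-8 * np.eye(n * n)
L = np.linalg.cholesky(cov)

delta = 0.8                                   # skewness parameter in (-1, 1)
w0 = L @ rng.standard_normal(n * n)           # first Gaussian field
w1 = L @ rng.standard_normal(n * n)           # second, independent field
z = delta * np.abs(w0) + np.sqrt(1 - delta**2) * w1   # skewed latent field

print("sample skewness of the field:",
      float(((z - z.mean())**3).mean() / z.std()**3))
```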
Omid Karimi, Fatemeh Hosseini, Volume 18, Issue 2 (2-2025)
Abstract
Spatial regression models are used to analyze quantitative spatial responses based on linear and non-linear relationships with explanatory variables. Usually, the spatial correlation of the responses is modeled with a Gaussian random field based on a multivariate normal distribution. In practice, however, we encounter skewed responses, which call for skew-normal distributions. The closed skew-normal distribution is an extension of the skew-normal family that retains many properties of the normal distribution. This article presents a hierarchical Bayesian analysis based on a flexible subclass of closed skew-normal distributions. Because Monte Carlo methods are time-consuming in hierarchical Bayesian analysis, the variational Bayes approach is used to approximate the posterior distribution, which speeds up the analysis without compromising accuracy. The proposed model is then applied to real earthquake data from Iran.
Mr. Majid Hashempour, Mr. Morteza Mohammadi, Volume 18, Issue 2 (2-2025)
Abstract
This paper introduces the dynamic weighted cumulative residual extropy criterion as a generalization of the weighted cumulative residual extropy. Its relationships with reliability measures such as the weighted mean residual lifetime, the hazard rate function, and the second-order conditional moment are studied. Characterization properties, upper and lower bounds, inequalities, and stochastic orders based on the dynamic weighted cumulative residual extropy, as well as the effect of linear transformations on it, are also presented. A non-parametric estimator of the introduced criterion based on the empirical method is then given, and its asymptotic properties are studied. Finally, an application of the dynamic weighted cumulative residual extropy to selecting an appropriate distribution for a real data set is discussed.
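A sketch of a plug-in empirical estimator, assuming the dynamic weighted cumulative residual extropy has the form DWCRJ(X; t) = -(1/(2 S(t)^2)) * int_t^inf x S(x)^2 dx with S the survival function; the paper's exact definition and normalization may differ.

```python
import numpy as np

def dwcre_empirical(sample, t):
    """Empirical estimate of the assumed DWCRJ(X; t) form above."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    surv = 1.0 - np.arange(1, n + 1) / n     # empirical survival at each x_(i)
    keep = x > t
    grid, s_grid = x[keep], surv[keep]
    s_t = keep.mean()                        # empirical S(t)
    if s_t == 0 or len(grid) < 2:
        return float("nan")
    y = grid * s_grid**2                     # integrand x * S(x)^2
    # trapezoidal rule over the observed grid beyond t
    integral = float(np.sum((grid[1:] - grid[:-1]) * (y[1:] + y[:-1]) / 2))
    return -integral / (2 * s_t**2)

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=500)
print(dwcre_empirical(data, t=1.0))
```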
Aqeel Lazam Razzaq, Isaac Almasi, Ghobad Saadat Kia, Volume 18, Issue 2 (2-2025)
Abstract
Adding parameters to a known distribution is a valuable way of constructing flexible families of distributions. In this paper, we introduce a new model, the modified additive hazard rate model, obtained by substituting the additive hazard rate distribution into the general proportional odds model. Next, when two sets of random variables follow the modified additive hazard rate model, we establish stochastic comparisons between the series and parallel systems composed of these components.
Ali Dastbaravarde, Volume 18, Issue 2 (2-2025)
Abstract
In statistical hypothesis testing, model misspecification error occurs when the true model of the data is neither the model under the null hypothesis nor the one under the alternative. This research studies the probability of model misspecification error in one-sided tests and compares these error rates between the Neyman-Pearson and evidential approaches to statistical inference. The results show that the evidential approach performs better than the Neyman-Pearson approach.
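A toy Monte Carlo experiment in the spirit of this comparison, assuming a simple one-sided setting: H0: N(0,1) versus H1: N(1,1) with data actually generated from a t(3) distribution, a Neyman-Pearson test at level 0.05, and the evidential rule "likelihood ratio at least k = 8"; all of these choices are illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps, k = 20, 20_000, 8.0
# NP test of H0: mu=0 vs H1: mu=1 (sigma=1 known): reject if xbar exceeds
# the 95% critical value of the sample mean under H0.
crit = stats.norm.ppf(0.95) / np.sqrt(n)

np_reject = evid_h1 = 0
for _ in range(reps):
    x = rng.standard_t(df=3, size=n)         # true model is neither H0 nor H1
    np_reject += x.mean() > crit
    loglr = stats.norm.logpdf(x, 1, 1).sum() - stats.norm.logpdf(x, 0, 1).sum()
    evid_h1 += loglr >= np.log(k)             # strong evidence for H1

print("NP rejection rate under misspecification:", np_reject / reps)
print("evidential support for H1 under misspecification:", evid_h1 / reps)
```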
Roshanak Zaman, Volume 18, Issue 2 (2-2025)
Abstract
In this paper, the prediction of the lifetime of k-component coherent systems is studied using classical and Bayesian approaches with type-II censored system lifetime data. The system structure and signature are assumed to be known, and the component lifetime distribution follows a half-logistic model. Various point predictors, including the maximum likelihood predictor, the best unbiased predictor, the conditional median predictor, and the Bayesian point predictor under a squared error loss function, are calculated for the coherent system lifetime. Since the integrals required for the Bayes prediction have no closed forms, the Metropolis-Hastings algorithm and importance sampling are applied to approximate them. For type-II censored lifetime data, the prediction interval based on a pivotal quantity, the highest conditional density (HCD) prediction interval, and the Bayesian prediction interval are considered. A Monte Carlo simulation study and a numerical example are conducted to evaluate and compare the performance of the different prediction methods.
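A minimal sketch of the Metropolis-Hastings step, applied to a simplified component-level problem: posterior sampling of the half-logistic scale parameter from a Type-II censored sample under a flat prior on the log-scale. The paper's signature-based system-lifetime prediction is more involved; everything below is illustrative.

```python
import numpy as np

def loglik(sigma, obs, n):
    """Type-II censored half-logistic log-likelihood (r observed of n)."""
    r = len(obs)
    z = obs / sigma
    logf = np.log(2) - np.log(sigma) - z - 2 * np.log1p(np.exp(-z))
    sf_last = 2 * np.exp(-z[-1]) / (1 + np.exp(-z[-1]))   # 1 - F(x_(r))
    return logf.sum() + (n - r) * np.log(sf_last)

rng = np.random.default_rng(3)
n, r, true_sigma = 30, 20, 2.0
u = rng.uniform(size=n)
x = -true_sigma * np.log((1 - u) / (1 + u))   # inverse-CDF sampling
obs = np.sort(x)[:r]                          # Type-II censored sample

sigma, chain = 1.0, []
for _ in range(20_000):
    prop = sigma * np.exp(0.1 * rng.standard_normal())   # log-scale walk
    # With a flat prior on log(sigma), the proposal asymmetry and prior
    # terms cancel, leaving the plain likelihood ratio.
    if np.log(rng.uniform()) < loglik(prop, obs, n) - loglik(sigma, obs, n):
        sigma = prop
    chain.append(sigma)
print("posterior mean of sigma:", np.mean(chain[5000:]))
```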
Mohammad Mehdi Saber, Mohsen Mohammadzadeh, Volume 18, Issue 2 (2-2025)
Abstract
In this article, a spatial autoregressive regression model with a second-order moving average is presented to model the outputs of a heavy-tailed skewed spatial random field arising from a multivariate generalized skew-Laplace distribution. The model parameters are estimated by the maximum likelihood method using the Kullback-Leibler divergence criterion, and the best spatial predictor is provided. A simulation study is conducted to validate and evaluate the performance of the proposed model, and the method is applied to a real data set.
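For orientation, the following sketch simulates a first-order spatial autoregressive (SAR) model with Gaussian errors; the paper's model adds a second-order moving average component and skew-Laplace margins, which are not reproduced here.

```python
# SAR model: y = rho*W*y + X*beta + eps  =>  y = (I - rho*W)^{-1}(X*beta + eps)
import numpy as np

rng = np.random.default_rng(4)
n, rho, beta = 100, 0.5, np.array([1.0, 2.0])

# Row-standardized contiguity weights on a ring: two neighbors per site.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

X = np.column_stack([np.ones(n), rng.standard_normal(n)])
eps = rng.standard_normal(n)
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)
print("simulated SAR response, first five values:", np.round(y[:5], 3))
```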
Farzane Hashemi, Volume 18, Issue 2 (2-2025)
Abstract
Regression models are among the most widely used statistical tools in applied research. Their basic assumption is the normality of the model errors, which can fail in the presence of asymmetry or breakpoints in the data. Piecewise regression models are widely used in various fields, and detecting the breakpoints is essential: they indicate when and how the pattern of the data structure changes. A further common problem is heavy tails in the data, which can be addressed by distributions that generalize the normal. In this paper, the piecewise regression model is investigated based on scale mixtures of normal distributions, and this model is compared with the standard piecewise regression model derived from normal errors.
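A sketch of breakpoint detection by profile likelihood, assuming Student-t errors (a scale mixture of normals) with fixed degrees of freedom and a single breakpoint; the grid, starting values, and settings are illustrative.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
n, tau_true = 200, 0.6
x = np.sort(rng.uniform(0, 1, n))
# Piecewise linear mean with heavy-tailed t(3) errors.
y = 1 + 2 * x + 3 * np.maximum(x - tau_true, 0) + 0.2 * rng.standard_t(3, n)

def negloglik(params, tau):
    b0, b1, b2, log_s = params
    mu = b0 + b1 * x + b2 * np.maximum(x - tau, 0)
    return -stats.t.logpdf(y - mu, df=3, scale=np.exp(log_s)).sum()

best = (np.inf, None)
for tau in np.linspace(0.1, 0.9, 81):        # grid of candidate breakpoints
    res = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0, 0.0], args=(tau,))
    if res.fun < best[0]:
        best = (res.fun, tau)
print("estimated breakpoint:", best[1])
```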
Jalal Chachi, Mohammadreza Akhond, Shokoufeh Ahmadi, Volume 18, Issue 2 (2-2025)
Abstract
The Lee-Carter model is a useful dynamic stochastic model representing the evolution of central mortality rates over time. It only accounts for uncertainty in the coefficient governing the mortality trend over time, not in the age-dependent coefficients. This paper proposes a fuzzy extension of the Lee-Carter model that quantifies the uncertainty of both kinds of parameters. The variability of the time-dependent index is modeled as a stochastic fuzzy time series, while the uncertainty of the age-dependent coefficients is quantified using triangular fuzzy numbers. The latter hypothesis requires developing and solving a fuzzy regression model. After introducing the generalized fuzzy model, we show how to fit the logarithm of the central mortality rate in Khuzestan province using fuzzy number arithmetic for the years 1383-1401 and to produce random fuzzy forecasts for the years 1402-1406.
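For context, a sketch of the classical (non-fuzzy) Lee-Carter fit via singular value decomposition, which the fuzzy extension builds on; the mortality surface below is synthetic, not the Khuzestan data.

```python
# Lee-Carter model: log m_{x,t} = a_x + b_x * k_t
import numpy as np

rng = np.random.default_rng(6)
ages, years = 10, 25
log_m = (-8 + 0.08 * np.arange(ages))[:, None] \
        + 0.01 * np.arange(ages)[:, None] * (-np.arange(years))[None, :] \
        + 0.02 * rng.standard_normal((ages, years))   # toy mortality surface

a = log_m.mean(axis=1)                     # a_x: average age profile
resid = log_m - a[:, None]                 # centered surface, rank-1 target
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                # b_x, normalized to sum to 1
k = s[0] * Vt[0] * U[:, 0].sum()           # k_t; sums to ~0 because the rows
                                           # of resid are centered
print("fitted time index k_t (first five):", np.round(k[:5], 2))
```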
Elham Ranjbar, Mohamad Ghasem Akbari, Reza Zarei, Volume 19, Issue 1 (9-2025)
Abstract
In time series analysis, we may encounter situations where some elements of the model are imprecise quantities. One of the most common is inaccuracy in the underlying observations, usually due to measurement or human error. In this paper, a new fuzzy autoregressive time series model based on the support vector machine approach is proposed. A kernel function gives the model stability and flexibility, and the model's constraints control the influence of individual points. The performance and effectiveness of the proposed model are examined using several goodness-of-fit criteria. Results on one simulated fuzzy time series example and two real examples show that the proposed method performs better than existing methods.
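A crisp (non-fuzzy) analogue of the idea, using support vector regression on lagged values of a simulated AR(1) series; the fuzzy model would replace observations and outputs with fuzzy numbers and add the point-control constraints described above.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
T = 300
y = np.zeros(T)
for t in range(1, T):                      # simulate an AR(1) series
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()

X_lag, target = y[:-1].reshape(-1, 1), y[1:]   # one-step-ahead pairs
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_lag[:250], target[:250])
pred = model.predict(X_lag[250:])
rmse = float(np.sqrt(np.mean((pred - target[250:])**2)))
print("out-of-sample RMSE:", round(rmse, 3))
```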
Mohammad Shafaei Noughabi, Mohammad Khorashadizade, Volume 19, Issue 1 (9-2025)
Abstract
This article introduces a new extension of the log-logistic distribution and studies its properties and parameter estimation. It is shown that the added parameter makes the shape of the distribution more symmetric and less skewed as it increases. Unlike for the original distribution, the moments of the new distribution and its quantile function always exist. Furthermore, reliability measures such as the hazard rate function, the mean residual life function, and stochastic orderings are shown to be more flexible under the new distribution. The parameters are estimated using the LLP and maximum likelihood (ML) methods, and the efficiency and consistency of the estimators are evaluated through simulation studies. Finally, the practical applicability of the model is demonstrated on real-world data from airborne equipment and lung cancer patients.
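As a baseline illustration, a maximum likelihood fit of the ordinary log-logistic distribution (scipy's fisk parameterization), which the paper extends; the quantile function used for sampling is Q(p) = alpha*(p/(1-p))^(1/beta).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
alpha, beta = 2.0, 3.0                     # scale and shape
u = rng.uniform(size=400)
x = alpha * (u / (1 - u))**(1 / beta)      # inverse-CDF sampling

# scipy's `fisk` is the log-logistic; c plays the role of the shape beta.
c_hat, loc_hat, scale_hat = stats.fisk.fit(x, floc=0)
print("ML estimates: shape =", round(c_hat, 3), " scale =", round(scale_hat, 3))
```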
Om-Aulbanin Bashiri Goudarzi, Abdolreza Sayyareh, Sedigheh Zamani Mehreyan, Volume 19, Issue 1 (9-2025)
Abstract
Boosting is a family of supervised machine learning algorithms that transforms weak learners into a strong one by combining their results. In this paper, mixture models with random effects are considered for small areas, where the errors follow an AR-GARCH model. For variable selection, machine learning algorithms such as boosting are proposed. Using simulated data and tax liability data, the performance of the boosting algorithm is studied and compared with classical variable selection methods, such as stepwise selection.
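A minimal sketch of boosting-based variable selection via feature importances; the small-area mixture structure and AR-GARCH errors of the paper are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(9)
n, p = 500, 10
X = rng.standard_normal((n, p))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.standard_normal(n)  # only x0, x3 matter

gb = GradientBoostingRegressor(n_estimators=300, max_depth=2).fit(X, y)
ranking = np.argsort(gb.feature_importances_)[::-1]     # most important first
print("variables ranked by importance:", ranking)
print("importances:", np.round(gb.feature_importances_[ranking], 3))
```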
Arezu Rahmanpour, Yadollah Waghei, Gholam Reza Mohtashami Borzadaran, Volume 19, Issue 1 (9-2025)
Abstract
Change point detection is one of the most challenging statistical problems because the number and positions of the change points are unknown. In this article, we first introduce the concept of a change point and then estimate the parameters of a first-order autoregressive model, AR(1), with a change point. To investigate the precision of the estimated parameters, we conducted a simulation study, evaluating precision and consistency via the mean squared error (MSE). The simulation shows that the estimation is consistent: as the sample size increases, the MSE of each parameter converges to zero. Finally, the AR(1) model with a change point was fitted to Iran's annual inflation rate data (1944 to 2022), and the inflation rates for 2023 and 2024 were predicted.
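A sketch of the estimation idea: least-squares fitting of an AR(1) coefficient on each side of a candidate change point, with the change point chosen by grid search; the simulated series and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
T, tau_true = 300, 150
y = np.zeros(T)
for t in range(1, T):
    phi = 0.3 if t < tau_true else 0.8     # AR coefficient shifts at tau_true
    y[t] = phi * y[t - 1] + rng.standard_normal()

def sse(tau):
    """Total squared error with separate AR(1) fits before and after tau."""
    s = 0.0
    for lo, hi in [(1, tau), (tau, T)]:
        yl, yc = y[lo - 1:hi - 1], y[lo:hi]
        phi_hat = (yl @ yc) / (yl @ yl)    # OLS slope on each segment
        s += np.sum((yc - phi_hat * yl)**2)
    return s

tau_hat = min(range(20, T - 20), key=sse)  # keep a margin at both ends
print("estimated change point:", tau_hat)
```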
Tara Mohammadi, Hadi Jabbari, Sohrab Effati, Volume 19, Issue 1 (9-2025)
Abstract
The support vector machine (SVM) is a supervised algorithm initially developed for binary classification; owing to its applications, multi-class extensions were later designed and remain an active research topic. Recently, models have been presented to improve multi-class methods. Most of them assume non-random inputs, while in the real world we face uncertain and imprecise data. This paper therefore examines a model in which the inputs are uncertain and the problem's constraints are probabilistic. Using statistical theorems and mathematical expectation, the probabilistic constraints are converted into deterministic ones, and the method of moments is used to estimate the mathematical expectation. Synthetic data are generated by Monte Carlo simulation, bootstrap resampling provides input samples to the model, and the model's accuracy is examined. Finally, the proposed model is trained on real data, and its accuracy is evaluated with statistical indicators. The results of the simulated and real examples show the superiority of the proposed model over the model based on deterministic inputs.
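A sketch of the evaluation loop only, using a deterministic-input SVM with Monte Carlo data generation and out-of-bag bootstrap accuracy; the chance-constrained reformulation itself is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(11)
n = 400
X = rng.standard_normal((n, 2))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(n) > 0).astype(int)

accs = []
for _ in range(200):                        # bootstrap replications
    idx = rng.integers(0, n, n)             # resample training indices
    oob = np.setdiff1d(np.arange(n), idx)   # out-of-bag points for testing
    clf = SVC(kernel="rbf").fit(X[idx], y[idx])
    accs.append(clf.score(X[oob], y[oob]))
print("bootstrap accuracy: %.3f +/- %.3f" % (np.mean(accs), np.std(accs)))
```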
Alireza Beheshty, Hosein Baghishani, Mohammadhasan Behzadi, Gholamhosein Yari, Daniel Turek, Volume 19, Issue 1 (9-2025)
Abstract
Financial and economic indicators, such as housing prices, often show spatial correlation and heterogeneity. While spatial econometric models effectively address spatial dependency, they face challenges in capturing heterogeneity. Geographically weighted regression is naturally used to model this heterogeneity, but it can become too complex when data show homogeneity across subregions. In this paper, spatially homogeneous subareas are identified through spatial clustering, and Bayesian spatial econometric models are then fitted to each subregion. The integrated nested Laplace approximation method is applied to overcome the computational complexity of posterior inference and the difficulties of MCMC algorithms. The proposed methodology is assessed through a simulation study and applied to analyze housing prices in Mashhad City.
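A simplified stand-in for the pipeline: spatial clustering on coordinates followed by a separate regression per cluster; the paper instead fits Bayesian spatial econometric models with INLA within each subregion, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(12)
n = 600
coords = rng.uniform(0, 10, size=(n, 2))
x = rng.standard_normal(n)
slope = np.where(coords[:, 0] < 5, 1.0, 3.0)      # spatially varying effect
price = 2 + slope * x + 0.3 * rng.standard_normal(n)

# Step 1: identify spatially homogeneous subareas by clustering coordinates.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
# Step 2: fit a separate model within each subarea.
for c in np.unique(labels):
    m = LinearRegression().fit(x[labels == c, None], price[labels == c])
    print("cluster", c, "estimated slope:", round(float(m.coef_[0]), 3))
```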
Zahra Nicknam, Rahim Chinipardaz, Volume 19, Issue 1 (9-2025)
Abstract
Classical hypothesis tests for parameters are suitable when the hypotheses are unrestricted; the best of these are the uniformly most powerful test and the uniformly most powerful unbiased test, designed for specific hypotheses such as one-sided and two-sided hypotheses about a parameter. In practice, however, we may encounter hypotheses in which the parameters under test are subject to restrictions in the null or alternative hypothesis. Such hypotheses fall outside the framework of classical hypothesis testing, so statisticians look for tests more powerful than the classical ones. In this article, the union-intersection test for the sign test of variances in several normal distributions is proposed and compared with the likelihood ratio test. Although the union-intersection test is more powerful, neither test is unbiased. Rectangular and smoothed versions are also examined in search of a more powerful test.
Dr. Adeleh Fallah, Volume 19, Issue 1 (9-2025)
Abstract
In this paper, estimation of the parameter of the modified Lindley distribution is studied based on progressive Type-II censored data. Maximum likelihood, pivotal, and Bayesian estimates are obtained, the Bayesian ones via the Lindley approximation and Markov chain Monte Carlo methods. Asymptotic, pivotal, bootstrap, and Bayesian confidence intervals are provided. A Monte Carlo simulation study is conducted to evaluate and compare the performance of the different estimation methods, and two real examples further illustrate them.
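A sketch of the progressively Type-II censored likelihood, L(theta) proportional to prod_i f(x_i; theta) * S(x_i; theta)^{R_i}, maximized here with an exponential lifetime for concreteness; the modified Lindley density and survival function would be substituted in, and the data and censoring scheme below are illustrative.

```python
import numpy as np
from scipy import optimize

x = np.array([0.4, 0.9, 1.3, 2.1, 3.0])    # ordered observed failure times
R = np.array([2, 0, 1, 0, 2])              # units withdrawn at each failure

def negloglik(theta):
    logf = np.log(theta) - theta * x        # exponential log-density
    logS = -theta * x                       # exponential log-survival
    return -(logf + R * logS).sum()         # progressive Type-II likelihood

res = optimize.minimize_scalar(negloglik, bounds=(1e-6, 50), method="bounded")
print("MLE of theta:", round(res.x, 4))
```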
Dr. Mahdi Alimohammadi, Mrs. Rezvan Gharebaghi, Volume 19, Issue 2 (4-2025)
Abstract
It was proved about 60 years ago that if a continuous random variable X has an increasing failure rate (IFR), then its order statistics are also IFR. The problem remained open in the discrete case until recently, when a proof using an integral inequality was given. In this article, we present a completely different method to solve this problem.
Omid Kharazmi, Faezeh Shirazi-Niya, Volume 19, Issue 2 (4-2025)
Abstract
In this paper, discrete versions of the generalized chi-squared information and the relative generalized chi-squared information measures are introduced. Generalizations of these quantities based on their convexity property are then presented, and some essential features of the new measures and the relationships among them are studied. Moreover, the performance of the new measures is investigated for some well-known and widely used models in coding theory and thermodynamics, such as escort and generalized escort distributions. Finally, two applications of the introduced discrete generalized chi-squared information measure are examined in the context of image quality assessment, and the results are compared with the widely used peak signal-to-noise ratio (PSNR) metric. It is shown that the generalized chi-squared divergence measure performs similarly to PSNR and can serve as an alternative metric.
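A small illustration of the two kinds of scores being compared, using the plain Pearson chi-squared divergence between grayscale-image histograms alongside PSNR; the paper's generalized chi-squared measures would replace the plain divergence used here.

```python
import numpy as np

rng = np.random.default_rng(13)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = np.clip(img + 10 * rng.standard_normal(img.shape), 0, 255)

# PSNR: 10*log10(MAX^2 / MSE) with MAX = 255 for 8-bit images.
mse = np.mean((img - noisy)**2)
psnr = 10 * np.log10(255**2 / mse)

# Pearson chi-squared divergence between intensity histograms.
p, _ = np.histogram(img, bins=32, range=(0, 256), density=True)
q, _ = np.histogram(noisy, bins=32, range=(0, 256), density=True)
q = np.where(q == 0, 1e-12, q)              # guard against division by zero
chi2 = np.sum((p - q)**2 / q)

print("PSNR (dB):", round(psnr, 2), " chi-squared divergence:", round(chi2, 5))
```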