:: Search published articles ::
Showing 126 results for Type of Study: Applied

Miss Forouzan Jafari, Dr. Mousa Golalizadeh,
Volume 17, Issue 2 (2-2024)
Abstract

The mixed effects model is a powerful statistical approach for modeling the relationship between a response variable and predictors when the data have a hierarchical structure. Parameters in these models are usually estimated by least squares or maximum likelihood, but both estimators become inefficient when the error distribution is non-normal. In such cases, mixed effects quantile regression can be used. Moreover, when the number of variables under study grows, penalized mixed effects quantile regression is one of the best ways to retain both prediction accuracy and model interpretability. In this paper, under the assumption of an asymmetric Laplace distribution for the random effects, we propose a double penalized model in which the random and fixed effects are penalized independently. The performance of the new method is evaluated in simulation studies, the results are discussed alongside a comparison with some competing models, and its application is demonstrated on a real example.
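As a rough illustration of the double penalty idea (a minimal sketch, not the authors' estimator; the asymmetric Laplace machinery is omitted and all names below are hypothetical), one can minimize the quantile check loss with separate L1 penalties on the fixed and random effects:

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(r, tau):
    # Pinball (check) loss of quantile regression at level tau.
    return np.sum(r * (tau - (r < 0)))

def double_penalized_qr(X, Z, y, tau=0.5, lam_beta=0.1, lam_u=0.1):
    # Objective: check loss + separate L1 penalties on the fixed
    # effects beta (design X) and the random effects u (design Z).
    p, q = X.shape[1], Z.shape[1]

    def objective(theta):
        beta, u = theta[:p], theta[p:]
        r = y - X @ beta - Z @ u
        return (check_loss(r, tau)
                + lam_beta * np.sum(np.abs(beta))
                + lam_u * np.sum(np.abs(u)))

    res = minimize(objective, np.zeros(p + q), method="Powell")
    return res.x[:p], res.x[p:]
```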
Bahram Haji Joudaki, Reza Hashemi, Soliman Khazaei,
Volume 17, Issue 2 (2-2024)
Abstract

In this paper, a new Dirichlet process mixture model with the generalized inverse Weibull distribution as the kernel is proposed. After specifying the prior distributions of the parameters, Markov chain Monte Carlo methods are applied to generate samples from the posterior distribution. The performance of the model is illustrated on real and simulated data sets, some of which include right-censored observations, and its potential for data clustering is also demonstrated. The results indicate an acceptable performance of the introduced model.
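The Dirichlet process ingredient can be sketched via its truncated stick-breaking representation (illustrative only; the generalized inverse Weibull kernel and the posterior sampler are the substance of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, n_atoms):
    # w_k = v_k * prod_{j<k}(1 - v_j), with v_k ~ Beta(1, alpha):
    # the weights of a truncated Dirichlet process mixture.
    v = rng.beta(1.0, alpha, size=n_atoms)
    return v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))

w = stick_breaking(alpha=2.0, n_atoms=50)
print(w[:5], w.sum())  # weights decay; sum approaches 1 as truncation grows
```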
Behnam Amiri, Roya Nasirzadeh,
Volume 17, Issue 2 (2-2024)
Abstract

Spatial processes are widely used in data analysis, particularly in image processing, where detecting periodic structure in images is one of the most important challenges. Periodically correlated spatial processes can address this problem, but one must first determine whether an image is periodic and, if so, what its period is. In this study, we first introduce periodically correlated spatial processes and describe their properties. We then present a spatial periodogram for determining the period of such processes and explain how it can be used to detect periodicity in images.
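A two-dimensional periodogram, the basic object behind the proposed spatial periodogram, can be computed with a 2-D FFT (a minimal sketch; the periodically correlated theory is developed in the paper):

```python
import numpy as np

def periodogram2d(img):
    # Squared modulus of the 2-D DFT of the centered image,
    # normalized by the pixel count; off-origin peaks indicate
    # periodic structure at the corresponding spatial frequencies.
    F = np.fft.fft2(img - img.mean())
    return np.abs(F) ** 2 / img.size

# Toy image with period 8 pixels in the horizontal direction.
x = np.arange(64)
img = np.ones((64, 1)) * np.sin(2 * np.pi * x / 8)[None, :]
I = periodogram2d(img)
print(np.unravel_index(np.argmax(I), I.shape))  # (0, 8): 64/8 cycles
```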

Mrs. Elaheh Kadkhoda, Mr. Gholam Reza Mohtashami Borzadaran, Mr. Mohammad Amini,
Volume 18, Issue 1 (8-2024)
Abstract

Maximum entropy copula theory is a combination of copula and entropy theory. This method obtains the maximum entropy distribution of random variables by considering the dependence structure. In this paper, the most entropic copula based on Blest's measure is introduced, and its parameter estimation method is investigated. The simulation results show that if the data has low tail dependence, the proposed distribution performs better compared to the most entropic copula distribution based on Spearman's coefficient. Finally, using the monthly rainfall series data of Zahedan station, the application of this method in the analysis of hydrological data is investigated.
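Because the comparison above hinges on tail dependence, a quick empirical check is useful (an illustrative diagnostic based on the empirical copula, not the Blest-based estimator of the paper):

```python
import numpy as np
from scipy.stats import rankdata

def lower_tail_dependence(x, y, u=0.05):
    # lambda_L(u) ~ C_n(u, u) / u, with C_n the empirical copula.
    n = len(x)
    ux, uy = rankdata(x) / (n + 1), rankdata(y) / (n + 1)
    return np.mean((ux <= u) & (uy <= u)) / u

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = 0.6 * x + 0.8 * rng.normal(size=2000)  # Gaussian dependence: weak tails
print(lower_tail_dependence(x, y))
```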
Mr Milad Pakdel, Dr Kiomars Motarjem,
Volume 18, Issue 1 (8-2024)
Abstract

In some instances, the occurrence of an event can be influenced by its spatial location, giving rise to spatial survival data. The accurate and precise estimation of parameters in a spatial survival model poses a challenge due to the complexity of the likelihood function, highlighting the significance of employing a Bayesian approach in survival analysis. In a Bayesian spatial survival model, the spatial correlation between event times is elucidated using a geostatistical model. This article presents a simulation study to estimate the parameters of classical and spatial survival models, evaluating the performance of each model in fitting simulated survival data. Ultimately, it is demonstrated that the spatial survival model exhibits superior efficacy in analyzing blood cancer data compared to conventional models.
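The geostatistical component can be sketched by simulating survival times with a Gaussian spatial frailty (a minimal sketch under assumed parameter values, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Random locations and an exponential covariogram for the frailty.
n = 100
coords = rng.uniform(0, 10, size=(n, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
sigma2, phi = 0.5, 2.0                       # sill and range (assumed)
w = rng.multivariate_normal(np.zeros(n), sigma2 * np.exp(-d / phi))

# Weibull event times whose scale depends on x * beta + frailty w,
# so nearby subjects share correlated risks.
x = rng.normal(size=n)
beta, shape = 0.8, 1.5
T = rng.weibull(shape, size=n) * np.exp(-(x * beta + w) / shape)
print(T[:5])
```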


Hamed Salemian, Eisa Mahmoudi, Sayed Mohammad Reza Alavi,
Volume 18, Issue 1 (8-2024)
Abstract

In sample surveys, respondents often refuse to answer questions of a sensitive nature. Randomized response methods are designed to protect respondent confidentiality. In this article, a new quantitative randomized response method is introduced, and a series of simulation studies shows that the proposed method is preferable to the cumulative and multiplicative methods. Using unbiased predictors, we also estimate the covariance between two sensitive variables. In an experimental study based on the proposed method, the average number of cheating incidents and the average daily cigarette consumption of Shahid Chamran University of Ahvaz students are estimated along with their variances, and an estimate of the covariance between them is provided.
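For orientation, here is the classical additive scrambling scheme on which quantitative randomized response methods build (the authors' new method differs; the scrambling distribution below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# True sensitive values, never seen by the interviewer.
x = rng.poisson(5, size=1000).astype(float)

# Each respondent reports z = x + s, with s drawn from a scrambling
# distribution whose moments are known, here N(10, 2^2).
mu_s, sd_s = 10.0, 2.0
z = x + rng.normal(mu_s, sd_s, size=x.size)

# Unbiased estimator of the sensitive mean, with a variance estimate.
print(z.mean() - mu_s, z.var(ddof=1) / z.size)
```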
Ms. Samira Taheri, Dr Mohammad Ghasem Akbari, Dr Gholamreza Hesamian,
Volume 18, Issue 1 (8-2024)
Abstract

In this paper, based on the concept of $\alpha$-values of fuzzy random variables, the fuzzy moving average model of order $q$ is introduced. First, the definitions of the variance, covariance, and correlation coefficient between fuzzy random variables are presented and their properties are investigated. Then, after introducing the fuzzy moving average model of order $q$, the autocovariance and autocorrelation functions of this model are calculated. Finally, some examples illustrate the results.
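For reference, the crisp MA($q$) model and its autocovariance, the natural starting point for the fuzzy version developed in the paper:

```latex
X_t = \varepsilon_t + \theta_1\varepsilon_{t-1} + \cdots + \theta_q\varepsilon_{t-q},
\qquad
\gamma(h) =
\begin{cases}
\sigma^2 \sum_{j=0}^{q-|h|} \theta_j\,\theta_{j+|h|}, & |h| \le q,\\[4pt]
0, & |h| > q,
\end{cases}
\qquad \theta_0 = 1 .
```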

Mozhgan Moradi, Shaho Zarei,
Volume 18, Issue 1 (8-2024)
Abstract

Model-based clustering is the most widely used statistical clustering method, in which heterogeneous data are divided into homogeneous groups via inference based on mixture models. Measurement error in the data can reduce the quality of the clustering, for example by causing overfitting and producing spurious clusters. To address this, model-based clustering that assumes a normal distribution for the measurement errors has been introduced. However, very large or very small (outlying) measurement errors still cause existing clustering methods to perform poorly. To tackle this problem and obtain a model that is stable in the presence of outlying measurement errors, this article proposes a symmetric $\alpha$-stable distribution as a replacement for the normal distribution of the measurement errors; the model parameters are estimated using the EM algorithm and numerical methods. Through simulations and real data analysis, the new model is compared with the MCLUST-based model, with and without measurement errors, and the performance of the proposed model for clustering in the presence of various outlying measurement errors is demonstrated.
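The effect of heavy-tailed measurement error is easy to reproduce (a minimal sketch: a Gaussian mixture in the MCLUST spirit applied to data contaminated with symmetric $\alpha$-stable noise; the EM algorithm for the stable-error model itself is the paper's contribution):

```python
import numpy as np
from scipy.stats import levy_stable
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Two latent clusters observed with additive measurement error.
labels = rng.integers(0, 2, size=500)
signal = np.array([-3.0, 3.0])[labels]

# Symmetric alpha-stable error (beta = 0); alpha < 2 yields heavy
# tails, i.e., occasional outlying measurement errors.
err = levy_stable.rvs(alpha=1.6, beta=0.0, scale=0.5, size=500,
                      random_state=3)
x = (signal + err).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(x)
acc = np.mean(gm.predict(x) == labels)
print(max(acc, 1 - acc))  # agreement up to label switching
```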
Roghayeh Ghorbani Gholi Abad, Gholam Reza Mohtashami Borzadaran, Mohammad Amini, Zahra Behdani,
Volume 18, Issue 2 (2-2025)
Abstract

The use of tail risk measures has attracted attention in recent decades, especially in the financial and banking industry; the most common are Value at Risk and Expected Shortfall. The tail Gini risk measure, a composite risk measure, was introduced recently. The primary purpose of this article is to establish the relationship between economic risk concepts, especially Expected Shortfall and the tail Gini risk measure, and the concepts of inequality indices in economics and reliability. Examining these relationships allows a researcher to use the concepts of one field to investigate the other. The mathematical relationships between the tail risk measures and the mentioned indices are obtained and calculated explicitly for some distributions. Finally, real data from the Iranian Stock Exchange are used to illustrate the tail Gini risk measure.
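Empirical versions of the two common tail measures take a few lines of code; the tail Gini is shown only as a Gini mean difference restricted to the tail, which may differ from the paper's exact normalization:

```python
import numpy as np

def var_es(losses, p=0.95):
    # Empirical Value at Risk and Expected Shortfall at level p.
    var = np.quantile(losses, p)
    return var, losses[losses >= var].mean()

def tail_gini_md(losses, p=0.95):
    # Gini mean difference of the losses beyond VaR_p, via the
    # order-statistics formula 2/(n(n-1)) * sum (2i - n - 1) x_(i).
    tail = np.sort(losses[losses >= np.quantile(losses, p)])
    n = tail.size
    i = np.arange(1, n + 1)
    return 2.0 / (n * (n - 1)) * np.sum((2 * i - n - 1) * tail)

rng = np.random.default_rng(0)
L = rng.standard_t(df=4, size=10_000)   # heavy-tailed toy losses
print(var_es(L), tail_gini_md(L))
```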

Mehrnoosh Madadi, Kiomars Motarjem,
Volume 18, Issue 2 (2-2025)
Abstract

Due to the volume and complexity of emerging data in survival analysis, statistical learning methods are needed in this field. These methods can estimate the survival probability and the effect of various factors on patients' survival. In this article, the performance of the Cox model, as a standard model in survival analysis, is compared with penalized variants such as Cox ridge and Cox lasso, as well as with statistical learning methods such as random survival forests and neural networks. The simulation results show that under linear conditions these models perform similarly to the Cox model, whereas under nonlinear conditions Cox lasso, random survival forests, and neural networks perform better. The models were then evaluated on data from patients with atheromatous disease, and the results showed that, when faced with data containing a large number of explanatory variables, the statistical learning approaches generally outperform the classical survival model.
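A penalized Cox fit takes a few lines, assuming the lifelines package is available (illustrative synthetic data, not the patient data of the paper):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 5))
T = rng.exponential(scale=np.exp(-0.5 * X[:, 0]))  # only x0 matters
C = rng.exponential(scale=2.0, size=n)             # censoring times
df = pd.DataFrame(X, columns=[f"x{i}" for i in range(5)])
df["time"], df["event"] = np.minimum(T, C), (T <= C).astype(int)

# l1_ratio = 0 gives Cox ridge, l1_ratio = 1 gives Cox lasso.
for l1_ratio, name in [(0.0, "Cox ridge"), (1.0, "Cox lasso")]:
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=l1_ratio)
    cph.fit(df, duration_col="time", event_col="event")
    print(name, round(cph.concordance_index_, 3))
```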
Maryam Maleki, Hamid Reza Nili-Sani, M. G. Akbari,
Volume 18, Issue 2 (2-2025)
Abstract

In this paper, we consider data classification in which the response (dependent) variable is binary (or multi-valued) and the predictor (independent) variables are ordinary variables. The errors may be imprecise and random, in which case the response variable is also a fuzzy random variable. On this basis, and using logistic regression, we formulate a model and estimate its coefficients by the least squares method. We illustrate the results with an example involving one independent random variable. Finally, we provide recurrence relations for the parameter estimates; these relations can be used in machine learning and big data classification.
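The flavor of such recurrence relations is captured by ordinary recursive least squares, which updates the estimate one observation at a time (a generic sketch; the fuzzy ingredients of the paper are omitted):

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    # One recursive least squares step for a new pair (x, y);
    # lam < 1 adds exponential forgetting for streaming data.
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)              # gain vector
    theta = theta + (k * (y - x.T @ theta)).ravel()
    P = (P - k @ x.T @ P) / lam
    return theta, P

rng = np.random.default_rng(0)
true = np.array([1.0, -2.0, 0.5])
theta, P = np.zeros(3), 1e3 * np.eye(3)
for _ in range(1000):
    x = rng.normal(size=3)
    theta, P = rls_update(theta, P, x, x @ true + 0.1 * rng.normal())
print(theta)  # converges toward [1, -2, 0.5]
```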
Abdolreza Sayyareh, Saeide Abdollahzadeh,
Volume 18, Issue 2 (2-2025)
Abstract

The non-invasive prenatal test (NIPT) is used in trisomy 21 screening; however, the methods used to diagnose Down syndrome carry a risk of misdiagnosis. It is therefore essential to provide a procedure that can be used alongside these methods to improve efficiency. The main goal of this article is to design a model based on machine learning algorithms for the early diagnosis of Down syndrome. Machine learning algorithms frequently used to improve the diagnosis of disorders, namely support vector machines, naive Bayes, decision trees, random forests, and nearest neighbors, were applied to the dataset in question. The performance of each model on the Down syndrome dataset was investigated, and the most suitable model for this purpose was identified.
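The comparison loop itself is routine with scikit-learn (sketched on synthetic, imbalanced stand-in data, since the NIPT dataset is not public):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=10,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "SVM": SVC(),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "kNN": KNeighborsClassifier(),
}
for name, m in models.items():
    auc = cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:14s} mean CV AUC = {auc:.3f}")  # AUC suits imbalance
```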
Somayeh Mohebbi, Ali M. Mosammam,
Volume 19, Issue 1 (9-2025)
Abstract

Systemic risk, as one of the challenges of the financial system, has attracted special attention from policymakers, investors, and researchers. Identifying and assessing systemic risk is crucial for enhancing the financial stability of the banking system. In this regard, this article uses the conditional value at risk (CoVaR) method to evaluate the systemic risk of simulated data and of Iran's banking system. In this method, the conditional mean and conditional variance are modeled with autoregressive moving average and generalized autoregressive conditional heteroskedasticity models, respectively. The data studied comprise the daily stock prices of 17 Iranian banks from April 8, 2019, to May 1, 2023, with missing values in some periods; the Kalman filter approach is used to interpolate the missing values. Additionally, vine copulas with a hierarchical tree structure are employed to describe the nonlinear dependencies and hierarchical risk structure of the studied banks' returns. The results indicate that Bank Tejarat has the highest systemic risk, and that an increase in systemic risk, besides triggering financial crises, has adverse effects on macroeconomic performance. These results can significantly help in predicting, mitigating, and managing the effects of financial crises.
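Stripped of the ARMA-GARCH filtering and the vine-copula dependence model, the CoVaR idea reduces to a conditional quantile, which can be estimated empirically (a simplified sketch on simulated returns):

```python
import numpy as np

def empirical_covar(bank, system, p=0.05, band=0.01):
    # CoVaR_p: p-quantile of system returns given that the bank's
    # return lies near its own p-level VaR (window-based version).
    lo = np.quantile(bank, max(p - band, 0.0))
    hi = np.quantile(bank, p + band)
    covar = np.quantile(system[(bank >= lo) & (bank <= hi)], p)
    # Delta CoVaR: stressed state minus the bank's median state.
    m_lo, m_hi = np.quantile(bank, [0.49, 0.51])
    median_state = system[(bank >= m_lo) & (bank <= m_hi)]
    return covar, covar - np.quantile(median_state, p)

rng = np.random.default_rng(0)
bank = rng.standard_t(df=5, size=5000)
system = 0.4 * bank + rng.standard_t(df=5, size=5000)
print(empirical_covar(bank, system))
```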


Bahram Haji Joudaki, Soliman Khazaei, Reza Hashemi,
Volume 19, Issue 1 (9-2025)
Abstract

Accelerated failure time models are used in survival analysis for censored data, especially in the presence of covariates (auxiliary variables). When the model depends on an unknown distribution, one applicable approach is Bayesian nonparametrics, which treats the parameter space as infinite-dimensional; in this framework, the Dirichlet process mixture model plays an important role. In this paper, a Dirichlet process mixture model with the Burr XII distribution as the kernel is considered for modeling the survival distribution in the accelerated failure time model. MCMC methods are then employed to generate samples from the posterior distribution. The performance of the proposed model is compared with Polya tree mixture models on simulated and real data, and the results show that the proposed model performs better.
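The data-generating side of this setting is easy to sketch: Burr XII baseline lifetimes entering an accelerated failure time structure with right censoring (parameter values are illustrative):

```python
import numpy as np
from scipy.stats import burr12

rng = np.random.default_rng(5)

# AFT structure: T = exp(x * beta) * T0, baseline T0 ~ Burr XII(c, d).
n, beta = 500, -0.7
x = rng.normal(size=n)
t0 = burr12.rvs(c=2.0, d=1.5, size=n, random_state=5)
T = np.exp(x * beta) * t0        # covariate accelerates/decelerates time

# Independent right censoring, as in the paper's setting.
C = rng.exponential(scale=2 * np.median(T), size=n)
time, event = np.minimum(T, C), (T <= C).astype(int)
print(event.mean())              # fraction of uncensored observations
```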
Mehrdad Ghaderi, Zahra Rezaei Ghahroodi, Mina Gandomi,
Volume 19, Issue 1 (9-2025)
Abstract

Researchers often face the problem of how to handle missing data. Multiple imputation by chained equations (MICE) is one of the most common imputation methods. In principle, any predictive model can be used to impute the missing values, but if the predictive models are misspecified, the result can be biased estimates and invalid inferences. Among the latest solutions for dealing with missing data are machine learning methods and the SuperMICE method. In this paper, we present a set of simulations indicating that this approach produces final parameter estimates with lower bias and better coverage than other commonly used imputation methods. We also discuss the implementation of several machine learning methods, and of the SuperMICE ensemble algorithm, on data from the industrial establishment survey, in which several variables are imputed simultaneously; the various methods are evaluated, and the best-performing one is identified.
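Chained-equations imputation with a machine-learning conditional model is available off the shelf; the sketch below uses scikit-learn's IterativeImputer with a random forest as a stand-in for SuperMICE, which additionally selects the best learner per variable:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 3] = 2 * X[:, 0] + 0.1 * rng.normal(size=500)  # related columns
mask = rng.random(X.shape) < 0.15                    # 15% MCAR holes
X_miss = np.where(mask, np.nan, X)

imp = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=10, random_state=0)
X_imp = imp.fit_transform(X_miss)
print(np.sqrt(np.mean((X_imp[mask] - X[mask]) ** 2)))  # imputation RMSE
```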


Mehran Naghizadeh Qomi, Zohre Mahdizadeh,
Volume 19, Issue 1 (9-2025)
Abstract

This paper investigates repetitive acceptance sampling plans for lot inspection under type I censoring when the lifetime follows a Tsallis q-exponential distribution. A repetitive acceptance sampling plan is introduced, and its components, along with the optimal average sample number (ASN) and the operating characteristic value, are calculated for specified values of the distribution parameter and of the consumer's and producer's risks, using a nonlinear programming optimization problem. Comparing the proposed repetitive plan with the optimal single sampling plan demonstrates the greater efficiency of the repetitive plan. Moreover, repetitive sampling plans with a limited linear combination of risks are introduced and compared with the existing plan; the tabulated and plotted results show that the proposed plan has a lower ASN and is therefore more efficient than the existing design. A practical example from the textile industry illustrates the proposed schemes.
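The operating characteristic and ASN of a repetitive plan follow from the single-sample accept and reject probabilities (a generic sketch with a binomial lot model; in the paper, the Tsallis q-exponential life test determines the failure probability p):

```python
from scipy.stats import binom

def repetitive_plan(p, n, c1, c2):
    # Accept if d <= c1, reject if d > c2, otherwise draw a new sample.
    pa = binom.cdf(c1, n, p)      # accept on one sample
    pr = binom.sf(c2, n, p)       # reject on one sample
    oc = pa / (pa + pr)           # overall acceptance probability
    asn = n / (pa + pr)           # average sample number
    return oc, asn

for p in (0.01, 0.05, 0.10):      # lot fraction nonconforming
    print(p, repetitive_plan(p, n=50, c1=1, c2=4))
```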
Meisam Moghimbeygi,
Volume 19, Issue 2 (4-2025)
Abstract

The classification of shape data is a significant challenge in the statistical analysis of shapes and machine learning. In this paper, we introduce a multinomial logistic regression model based on shape descriptors for classifying labeled configurations. In this model, the explanatory variables include a set of geometric descriptors such as area, elongation, convexity, and circularity, while the response variable represents the category of each configuration. The inclusion of these descriptors preserves essential geometric information and enhances classification accuracy. We evaluate the proposed model using both simulated data and real datasets, and the results demonstrate its effective performance. Additionally, the proposed method was compared with one of the existing methods in the literature, and the results indicated its superiority in terms of both classification accuracy and computational simplicity.
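A minimal version of this pipeline computes a few descriptors per configuration and feeds them to a multinomial logistic regression (illustrative shapes and descriptors, not the paper's full descriptor set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def descriptors(poly):
    # Area (shoelace), perimeter, and circularity of a closed polygon
    # given as a (k, 2) array of vertices.
    x, y = poly[:, 0], poly[:, 1]
    area = 0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))
    per = np.sum(np.linalg.norm(poly - np.roll(poly, -1, axis=0), axis=1))
    return [area, per, 4 * np.pi * area / per**2]

rng = np.random.default_rng(2)
X, y = [], []
for label, k in [(0, 3), (1, 4), (2, 12)]:   # triangle, square, ~circle
    for _ in range(100):
        ang = np.sort(rng.uniform(0, 2 * np.pi, k))
        r = 1 + 0.05 * rng.normal(size=k)
        X.append(descriptors(np.c_[r * np.cos(ang), r * np.sin(ang)]))
        y.append(label)

clf = LogisticRegression(max_iter=2000).fit(X, y)
print(clf.score(X, y))
```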


Hossein Haghbin,
Volume 19, Issue 2 (4-2025)
Abstract

In this paper, a novel approach for forecasting a time sequence of probability density functions is introduced, which is based on Functional Singular Spectrum Analysis (FSSA). This approach is designed to analyze functional time series and address the constraints in predicting density functions, such as non-negativity and unit integral properties. First, appropriate transformations are introduced to convert the time series of density functions into a functional time series. Then, FSSA is applied to forecast the new functional time series, and finally, the predicted functions are transformed back into the space of density functions using the inverse transformation. The proposed method is evaluated using real-world data, including the density of satellite imagery.
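One standard choice for such transformations is the centered log-ratio, under which the forecast lives in an unconstrained function space and the back-transform restores non-negativity and unit integral (a sketch of the transform only; the FSSA forecasting step is the paper's subject):

```python
import numpy as np

def clr(f):
    # Centered log-ratio: map a positive density on a grid to an
    # unconstrained function with zero mean.
    logf = np.log(f)
    return logf - logf.mean()

def clr_inverse(g, dx):
    # Exponentiate and renormalize: non-negativity and unit integral
    # hold by construction.
    f = np.exp(g)
    return f / np.sum(f * dx)

grid = np.linspace(-4, 4, 200)
dx = grid[1] - grid[0]
f = np.exp(-grid**2 / 2)
f /= np.sum(f * dx)                 # a density on the grid
f_back = clr_inverse(clr(f), dx)    # forecast in clr space, map back
print(np.allclose(f, f_back), np.sum(f_back * dx))
```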
Dr Mojtaba Kashani, Dr Reza Ghasemi,
Volume 19, Issue 2 (4-2025)
Abstract

In statistical research, experimental designs are used to investigate the effect of control variables on output responses. Classical methods rest on the assumption of normally distributed data and face fundamental difficulties in dealing with outliers. The present study examines five approaches for handling this challenge in experimental design: the Huber, quadratic, substitution, ranking, and fuzzy regression robust methods. Empirical evidence from real data on seedling growth and weld quality shows that the fuzzy regression approach can serve as an efficient alternative to conventional methods in the presence of outliers: it not only outperforms the classical experimental design method but also outperforms the standard robust methods.
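The contrast between classical and robust fits is easy to reproduce (a minimal sketch with a Huber M-estimator; the fuzzy regression approach of the study is not shown):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, size=(60, 1))
y = 2.0 + 1.5 * X.ravel() + rng.normal(scale=0.5, size=60)
y[:5] += 25.0                            # a few gross outliers

ols = LinearRegression().fit(X, y)
hub = HuberRegressor().fit(X, y)         # bounds the outliers' influence
print("OLS slope  :", ols.coef_[0])      # dragged by the outliers
print("Huber slope:", hub.coef_[0])      # stays near the true 1.5
```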


Seyed Jamal Khorashadizadeh, Fatemeh Yousefzadeh, Sara Jomhoori,
Volume 19, Issue 2 (4-2025)
Abstract

Researchers develop generalized families of distributions to better model data in fields like risk management, economics, and insurance. In this paper, a new distribution, the Extended Exponential Log-Logistic Distribution, is introduced, which belongs to the class of heavy-tailed distributions. Some statistical properties of the model, including moments, moment-generating function, entropy, and economic inequality curves, are derived. Six estimation methods are proposed for estimating the model parameters, and the performance of these methods is evaluated using randomly generated datasets. Additionally, several insurance-related measures, including Value at Risk, Tail Value at Risk, Tail Variance, and Tail Variance Premium, are calculated. Finally, two real insurance datasets are employed, showing that the proposed model fits the data better than many existing related models.
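For the plain log-logistic (Fisk) baseline of this family, the risk measures reduce to quantile computations (a sketch; the extended distribution's quantile function would replace `fisk.ppf` in the paper's setting):

```python
import numpy as np
from scipy.stats import fisk
from scipy.integrate import quad

c, scale = 3.0, 1.0   # shape > 1 so the tail mean exists

def var_ll(p):
    # Value at Risk: the p-quantile of the loss distribution.
    return fisk.ppf(p, c, scale=scale)

def tvar_ll(p):
    # Tail Value at Risk: E[X | X > VaR_p], via the survival-function
    # identity TVaR = VaR + (1/(1-p)) * int_VaR^inf S(t) dt.
    v = var_ll(p)
    tail, _ = quad(lambda t: fisk.sf(t, c, scale=scale), v, np.inf)
    return v + tail / (1.0 - p)

for p in (0.90, 0.95, 0.99):
    print(p, round(var_ll(p), 3), round(tvar_ll(p), 3))
```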

Page 6 of 7

Journal of Statistical Sciences – Scientific Research Journal of the Iranian Statistical Society
