Dr. Shahram Yaghoobzadeh Shahrestani, Dr. Reza Zarei, Volume 25, Issue 1 (1-2021)
Abstract
Whenever approximate initial information about the unknown parameter of a distribution is available, the shrinkage estimation method can be used to estimate it. In this paper, the E-Bayesian estimate of the parameter of an inverse Rayleigh distribution under the general entropy loss function is first obtained. Then, the shrinkage estimate of the inverse Rayleigh parameter based on a guess value is investigated. Finally, using Monte Carlo simulations and a real data set, the proposed shrinkage estimator is compared with the UMVU and E-Bayesian estimators in terms of relative efficiency.
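The core idea of shrinkage toward a guess value can be sketched in a few lines. The snippet below is a minimal illustration only: it assumes the inverse Rayleigh parameterization F(x) = exp(-theta/x^2) and an arbitrarily chosen shrinkage weight w, neither of which is taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, theta_guess, n = 2.0, 1.8, 30

# inverse Rayleigh CDF: F(x) = exp(-theta / x**2),
# so if Y ~ Exp(rate=theta) then X = 1/sqrt(Y) is inverse Rayleigh
y = rng.exponential(scale=1.0 / theta_true, size=n)
x = 1.0 / np.sqrt(y)

theta_mle = n / np.sum(x**-2.0)        # maximum likelihood estimate of theta
w = 0.5                                # shrinkage weight (arbitrary here)
theta_shrink = w * theta_mle + (1 - w) * theta_guess
```

When the guess is close to the truth, the shrunken estimate trades a little bias for a smaller variance than the MLE alone.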
Hamid Reza Nili Sani, Mehdi Jafari, Volume 25, Issue 2 (3-2021)
Abstract
In this study, we first introduce Banach lattice random elements and some of their properties. Then, using the order defined on a Banach lattice space, we introduce and characterize order negatively dependent Banach lattice random elements. Finally, we obtain some limit theorems for sequences of order negatively dependent Banach lattice random elements.
Taban Baghfalaki, Volume 25, Issue 2 (3-2021)
Abstract
In analyzing longitudinal data with count responses, the normal distribution is usually used for the distribution of the random effects. However, in some applications the random effects may not be normally distributed, and misspecification of this distribution may reduce the efficiency of the estimators. In this paper, a generalized log-gamma distribution, which includes the normal distribution as a special case, is used for the random effects. As the frequentist analysis involves complex computation, the Bayesian analysis of this model is investigated and then utilized for analyzing two real data sets. Also, some simulation studies are conducted to evaluate the performance of the relevant models.
Mohammadreza Faridrohani, Behdad Mostafaiy, Seyyed Mohammad Ebrahim Hosseininasab, Volume 25, Issue 2 (3-2021)
Abstract
Recently, with the development of science and technology, data of a functional nature have become easy to collect, so the statistical analysis of such data is of great importance. As in multivariate analysis, linear combinations of random variables play a key role in functional data analysis, and the theory of reproducing kernel Hilbert spaces is central in this context. In this paper we study a general formulation of Fisher's linear discriminant analysis that extends the classical multivariate method to the case of functional data. A bijective map is used to link a second-order process to the reproducing kernel Hilbert space generated by its within-class covariance kernel. Finally, a real data set of Iranian weather records collected in 2008 is also analyzed.
Farzad Eskandari, Sima Naghizadeh Ardebili, Volume 25, Issue 2 (3-2021)
Abstract
The Internet of Things (IoT) is regarded as the upcoming revolution in information and communication technology because of its great capability to make various businesses and industries more productive and efficient. This productivity comes from the emergence of innovation and the introduction of new capabilities for businesses. Different industries have reacted to the IoT in different ways, but it is clear that the IoT has applications in all of them: these applications have made significant progress in industries such as health and transportation, while they are still under development in others, such as agriculture and animal husbandry. In fact, the generation of data based on the Internet of Things is one of the main pillars of big data and data science, so the statistical concepts and models used in data science can be beneficially applied to such data. Among valid statistical approaches, Bayesian statistics is widely utilized in these studies. In this research, the fundamentals of Bayesian statistics for big data, most notably the data produced by the IoT, are explained. They are examined pragmatically both for road traffic and for people's social behavior toward using vehicles, with practically and scientifically valid results.
Mehrdad Tamiji, Dr. S. Mahmoud Taheri, Volume 25, Issue 2 (3-2021)
Abstract
Methods for inferring population structure, together with their applications in identifying disease models and in forecasting the physical and mental condition of human beings, have been gaining ever-increasing importance. In this article, the motivation for and significance of studying the population structure problem are first explained. In the next section, the applications of population structure inference in biology and in the treatment of various diseases are described. Afterward, the methods for inferring population structure and detecting the disease model corresponding to each subpopulation are described separately for populations whose members are admixed and for those that are not. Throughout, methods based on the Bayesian approach are emphasized and the reasons for their superiority are illustrated.
Sirous Fathi Manesh, Muhyiddin Izadi, Baha-Eldin Khaledi, Volume 25, Issue 2 (3-2021)
Abstract
One of the challenges for decision-makers in insurance and finance is choosing the appropriate criteria for making decisions. Mathematical expectation, expected utility, and distorted expectation are the three most common measures in this area. In this article, we study these three criteria, and by providing some examples, we review and compare the decisions made by each measure.
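The difference between the three criteria is easy to see on a toy loss. The snippet below is a minimal illustration with an arbitrarily chosen concave utility u(x) = sqrt(x) and distortion g(s) = sqrt(s); both choices are assumptions of this sketch, not of the article.

```python
import math

# a simple nonnegative loss X: 100 with probability 0.1, otherwise 0
p, loss = 0.1, 100.0

expectation = p * loss                  # mathematical expectation: 10.0
exp_utility = p * math.sqrt(loss)       # expected utility with u(x) = sqrt(x)
# distorted expectation: integral over t of g(P(X > t)) with g(s) = sqrt(s),
# which for this two-point loss reduces to g(p) * loss
distorted = math.sqrt(p) * loss         # ~31.6: the distortion inflates tail risk
```

The distorted expectation exceeds the plain expectation here because the concave distortion g overweights the small probability of a large loss, which is exactly the risk-averse behavior such measures are designed to capture.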
Miss. Kimia Kazemi, Prof. Mohsen Mohammadzadeh, Volume 25, Issue 2 (3-2021)
Abstract
In conventional methods for modeling spatial survival data, it is often assumed that the coefficients of the explanatory variables have a constant effect on survival time across regions, and the spatial correlation of the data is usually included in the model through a random effect. In many practical problems, however, the factors affecting survival time do not have the same effect in different regions. In this paper, we allow the spatial effects of the factors affecting survival time to differ across areas.
For this purpose, spatial regression models and spatially varying coefficient models are introduced, and the Bayesian estimates of their parameters are presented. Three models, classical regression, spatial regression, and spatially varying coefficient regression, are then used to analyze esophageal cancer survival data, and the relative risks of the various factors are examined and evaluated.
Dr Seyed Kamran Ghoreishi, , Volume 25, Issue 2 (3-2021)
Abstract
In this paper, we first define longitudinal-dynamic heteroscedastic hierarchical normal models. These models can be used to fit longitudinal data in which the dependency structure is constructed through a dynamic model rather than through the observations themselves. We discuss different methods for estimating the hyper-parameters, and then present the corresponding estimates for the hyper-parameter that induces the association in the model. The various empirical estimators are compared through a simulation study. Finally, we apply our methods to a real dataset.
Ms Monireh Maanavi, Dr Mahdi Roozbeh, Volume 26, Issue 1 (12-2021)
Abstract
The method of least squares is a simple, practical, and useful approach for estimating the regression coefficients of a linear model, and it is used in many fields because it provides the best linear unbiased estimator. Unfortunately, this method does not give reliable output when outliers are present in the dataset, since its breakdown point (a measure of an estimator's resistance to contamination) is 0%. It is therefore important to identify such observations, and various methods have been proposed for doing so. In this article, the proposed methods are reviewed and discussed in detail. Finally, a simulation example is presented to examine each of the methods.
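The 0% breakdown point is easy to demonstrate numerically. The sketch below is a minimal illustration; Theil-Sen is used here as one representative robust alternative and is not necessarily among the methods reviewed in the article. A single contaminated observation drags the least squares slope far from the true value of 2, while the median of pairwise slopes barely moves.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
x = np.arange(20, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=20)
y[-1] += 100.0                          # contaminate a single observation

ols_slope = np.polyfit(x, y, 1)[0]      # least squares: dragged by one outlier
ts_slope = np.median([(y[j] - y[i]) / (x[j] - x[i])
                      for i, j in combinations(range(len(x)), 2)])  # Theil-Sen
```

With 20 points, one outlier shifts the least squares slope by roughly 1.4 here, while the Theil-Sen slope stays within a few hundredths of 2.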
Dr Abolfazl Rafiepour, Volume 26, Issue 1 (12-2021)
Abstract
Nowadays, the need to pay attention to teaching statistics at all levels, from elementary school to university, has become more apparent. One of the goals of various educational systems, reflected in their upstream documents, is to produce citizens equipped with statistical literacy. In this regard, many statistical organizations and institutions have named statistics education as one of their special goals and missions. School mathematics textbooks in Iran also have sections devoted to statistics and probability. An examination of the role of statistics in Iranian school mathematics textbooks shows good progress in including statistics and probability concepts, but it is still far from the ideal. In the present article, after a brief discussion of the necessity of attending to statistics education in the school mathematics curriculum, the historical course of statistics education is reviewed. Then, the challenges of introducing topics in the teaching of statistics and probability into the school mathematics curriculum are illustrated, and in the end two new approaches (attention to big data, and the use of new technologies in simulating and modeling real-world phenomena) are introduced in more detail and with examples.
Seyedeh Azadeh Fallah Mortezanejad, Gholamreza Mohtashami Borzadaran, Bahram Sadeghpour Gildeh, Mohammad Amini, Volume 26, Issue 1 (12-2021)
Abstract
A copula function is a useful tool for identifying the dependency structure of dependent data and thus for fitting a proper distribution to a data set. In this paper, stock market data comprising three variables (financial weakness, accumulated profit, and tangible assets) for 110 Iranian trading companies from 1385 to 1389 are analyzed with copula functions, and in particular an appropriate three-dimensional distribution for these data is obtained. We use a variety of tools to examine the type of dependency in the data set, including scatter, chi, and Kendall plots. We also analyze the directional and tail dependency of the data set and calculate Kendall's tau and Spearman's rho dependence coefficients. Finally, we perform a goodness-of-fit test for a few well-known copula functions in order to arrive at the right copula function for the stock market data.
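Rank correlation coefficients of this kind are straightforward to compute. The sketch below is a minimal illustration on simulated bivariate normal data, not the paper's stock market data, and estimates Kendall's tau and Spearman's rho with SciPy.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

rng = np.random.default_rng(0)
# simulated data with a Gaussian dependency structure (correlation 0.7)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=500)
u, v = z[:, 0], z[:, 1]

tau, _ = kendalltau(u, v)       # rank-based, invariant to monotone transforms
rho, _ = spearmanr(u, v)
# for a Gaussian copula with correlation r, tau = (2/pi) * arcsin(r) ~ 0.49
```

Because both coefficients depend only on the ranks, they characterize the copula itself and are unaffected by the marginal distributions, which is why they are natural summaries in a copula-based analysis.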
Ehsan Bahrami Samani, Samira Bahramian, Volume 26, Issue 1 (12-2021)
Abstract
The occurrence of lifetime data is commonly encountered in various types of research, including surveys, clinical trials, and epidemiological studies, and recently there has been extensive methodological research on analyzing such data. However, because the data usually provide little information for correct estimation, the inferences may be sensitive to untestable assumptions, which calls for a sensitivity analysis.
In this paper, we describe how to evaluate the effect of perturbations to the responses in log-beta Weibull regression. We also review and extend the application and interpretation of influence analysis methods for censored data. A full likelihood-based approach that yields maximum likelihood estimates of the model parameters is used. Some simulation studies are conducted to evaluate the performance of the proposed indices in detecting the sensitivity of key model parameters. We illustrate the methods by analyzing a cancer data set.
Prof. Anoshiravan Kazemnejad, Miss Parisa Riyahi, Dr Shayan Mostafaee, Volume 26, Issue 1 (12-2021)
Abstract
The multifactor dimensionality reduction (MDR) algorithm is a powerful algorithm for identifying high-order interactions in high-dimensional structures. In this study, information on 748 patients with Behcet's disease referred to the Rheumatology Research Center, Shariati Hospital, Tehran, and on 776 healthy controls was used to identify the interaction effects between ERAP1 gene polymorphisms involved in the occurrence of Behcet's disease, using the multifactor dimensionality reduction algorithm. Data analysis was performed with the MDR 3.0.2 software. The models obtained from the algorithm with balanced accuracy above 0.6 were determined to increase the risk of Behcet's disease. The MDR algorithm has high power and speed in calculating the interaction effects of polymorphisms or genetic mutations and in identifying important interactions.
Miss Tayebeh Karami, Dr Muhyiddin Izadi, Dr Mehrdad Niaparast, Volume 26, Issue 1 (12-2021)
Abstract
The subject of classification is one of the important issues in different sciences. Logistic regression is one of the statistical methods for classifying data, in which the underlying distribution of the data is assumed to be known. Today, in addition to statistical methods, researchers use machine learning methods, in which the distribution of the data does not need to be known. In this paper, in addition to logistic regression, some supervised machine learning methods are introduced, including CART decision trees, random forests, bagging, and boosting. Finally, using four real data sets, we compare the performance of these algorithms with respect to the accuracy measure.
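Such a comparison can be sketched with scikit-learn. The snippet below is a minimal illustration on the library's built-in breast cancer data, not the four data sets used in the paper, and AdaBoost stands in for the boosting method; each classifier is fit on a training split and scored by test set accuracy.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (RandomForestClassifier, BaggingClassifier,
                              AdaBoostClassifier)

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "tree":     DecisionTreeClassifier(random_state=0),       # CART-style tree
    "forest":   RandomForestClassifier(random_state=0),
    "bagging":  BaggingClassifier(random_state=0),
    "boosting": AdaBoostClassifier(random_state=0),
}
# accuracy on the held-out test set for each method
acc = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

A single train/test split is used here for brevity; cross-validation would give a more reliable comparison of the accuracy measure.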
Ramin Kazemi, Volume 26, Issue 1 (12-2021)
Abstract
The main goal of this paper is to investigate site and bond percolation on the lattice $\mathbb{Z}^2$. The main notions and concepts, including critical probabilities, are introduced. The Bethe lattice and $k$-branching trees are examined, and finally the lattice $\mathbb{Z}^2$ is considered. The fundamental theorem of Harris and Kesten, which gives the lower and upper bounds for the critical probability on the lattice $\mathbb{Z}^2$, is stated and proved.
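The phase transition behind these critical probabilities is easy to observe numerically. The sketch below is a minimal Monte Carlo illustration of site percolation on a finite 40 x 40 box, with the grid size and the two occupation probabilities chosen arbitrarily; it estimates the probability of an open top-to-bottom crossing below and above the site percolation threshold of $\mathbb{Z}^2$ (approximately 0.5927).

```python
import numpy as np
from collections import deque

def crosses(grid):
    """Breadth-first search: is there an open path from the top row to the bottom?"""
    n = grid.shape[0]
    seen = np.zeros_like(grid, dtype=bool)
    queue = deque((0, j) for j in range(n) if grid[0, j])
    for cell in queue:
        seen[cell] = True
    while queue:
        i, j = queue.popleft()
        if i == n - 1:                       # reached the bottom row
            return True
        for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= a < n and 0 <= b < n and grid[a, b] and not seen[a, b]:
                seen[a, b] = True
                queue.append((a, b))
    return False

rng = np.random.default_rng(0)
n, reps = 40, 50
# estimated crossing probability below and above the site threshold
prob = {p: np.mean([crosses(rng.random((n, n)) < p) for _ in range(reps)])
        for p in (0.4, 0.75)}
```

On a finite box the transition is smoothed out, but the crossing probability is still nearly 0 well below the threshold and nearly 1 well above it, mirroring the sharp threshold that Harris-Kesten establishes for bond percolation.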
Dr Fatemeh Hosseini, Dr Omid Karimi, Volume 26, Issue 1 (12-2021)
Abstract
Spatial generalized linear mixed models are commonly used for modeling discrete spatial responses. In these models the spatial correlation of the data is captured by spatial latent variables, which for simplicity are usually assumed to be normally distributed. An incorrect normality assumption, however, may lead to inaccurate results. In this paper we model the spatial latent variables with a more general random field, namely the closed skew Gaussian random field, which is more flexible and includes the Gaussian random field as a special case. We propose a new algorithm for maximum likelihood estimation of the parameters; a key ingredient is a Hamiltonian Monte Carlo version of the EM algorithm. The performance of the proposed model and algorithm is presented through a simulation study.
Dr. Abouzar Bazyari, Volume 26, Issue 1 (12-2021)
Abstract
In this paper, the generalized lambda distribution and its characteristics are first introduced. The concept of stress-strength reliability is explained in full, and the reliability of a system is examined from the stress-strength perspective. The mathematical form of the stress-strength parameter for the generalized lambda distribution is calculated. Estimation of the parameters by the method of moments is investigated; for different parameter values, the graph of the generalized lambda distribution is drawn and the stress-strength parameter is calculated. The application of the results is illustrated with a real example.
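The stress-strength parameter R = P(X > Y) can be approximated by simulation directly from the quantile function, since the generalized lambda distribution is defined through it. The sketch below is a minimal illustration using the Ramberg-Schmeiser form of the quantile function with arbitrarily chosen parameter values, not those of the paper.

```python
import numpy as np

def gld_quantile(u, l1, l2, l3, l4):
    # Ramberg-Schmeiser generalized lambda quantile function
    return l1 + (u**l3 - (1.0 - u)**l4) / l2

rng = np.random.default_rng(0)
n = 100_000
# inverse-transform sampling: plug uniform draws into the quantile function
strength = gld_quantile(rng.random(n), 2.0, 1.0, 0.5, 0.5)   # X, shifted up
stress = gld_quantile(rng.random(n), 1.0, 1.0, 0.5, 0.5)     # Y
R = np.mean(strength > stress)        # Monte Carlo estimate of P(X > Y)
```

Quantile-defined families like this one make inverse-transform sampling trivial, which is why Monte Carlo evaluation of R is a natural companion to the closed-form expression derived in the paper.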
Mahsa Markani, Manije Sanei Tabas, Habib Naderi, Hamed Ahmadzadeh, Javad Jamalzadeh, Volume 26, Issue 2 (3-2022)
Abstract
When working with regression data, situations arise in which the data constrain us; in other words, the data do not satisfy a set of requirements. The generalized maximum entropy method is able to estimate the parameters of a regression model without imposing any conditions on the probability distribution of the errors. This method works even when the problem is ill-posed (for example, when the sample size is very small or the data are highly collinear). The purpose of this study is therefore to estimate the parameters of the logistic regression model using generalized maximum entropy. A random sample of bank customers was collected, and the parameters of a binary logistic regression model were estimated by two methods, generalized maximum entropy (GME) and maximum likelihood (ML); finally, the two methods were compared. Based on the MSE criterion for predicting customer demand for opening a long-term account, the GME method was more accurate than the ML method.
Dr Mahdi Roozbeh, Ms Malihe Malekjafarian, Ms Monireh Maanavi, Volume 26, Issue 2 (3-2022)
Abstract
The most important goal of statistical science is to analyze real data from the world around us; if such data are analyzed accurately and correctly, the results help us with many important decisions. Water consumption data are among the real data whose analysis matters greatly: since Iran is located in a semi-arid region of the earth, predicting water consumption and selecting the best and most accurate models for it are necessary for macro-national decisions. Analyzing real data, however, is usually complicated; in real data sets we often encounter the problems of multicollinearity and outliers. Robust methods are used for analyzing data sets with outliers, and the ridge method is used for data sets with multicollinearity; in addition, restrictions on the model arise from using non-sample information in the estimation of the regression coefficients. In this paper, the water consumption data are modeled using a robust stochastic restricted ridge approach, and the performance of the proposed method is then examined through a Monte Carlo simulation study.
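The ridge ingredient of this approach can be sketched in a few lines. The snippet below is a minimal illustration of ordinary ridge regression on simulated collinear data; it omits the robust and stochastic restricted components of the paper's estimator, and the ridge parameter k is chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)       # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])
y = X @ np.array([1.0, 2.0, 2.0]) + rng.normal(scale=0.5, size=n)

k = 1.0                                        # ridge parameter (arbitrary)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(3), X.T @ y)
# the ridge term shrinks the collinearity-inflated coefficients toward
# stable values at the cost of a small bias
```

Because each singular direction of X is damped by the factor s^2/(s^2 + k), the ridge solution always has a smaller norm than the least squares solution, and the damping is strongest exactly in the near-degenerate directions that collinearity creates.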