:: Search published articles ::
Showing 201 results for Type of Study: Research

Vahid Rezaei Tabar,
Volume 26, Issue 2 (3-2022)
Abstract

At the end of December 2019, the spread of a new infectious disease, caused by a new coronavirus, was reported in Wuhan, China. After the number of victims of the virus exceeded 1,000, the World Health Organization chose the official name Covid-19 for the disease, which refers to "corona", "virus", "disease" and the year 2019.
Forecasting Covid-19 can help the government make better decisions. In this paper, an objective approach based on statistical methods is used to forecast Covid-19. The main goal is to forecast the prevalence of the coronavirus for confirmed, death, and recovered cases, and to estimate the duration of managing this virus, using the exponential smoothing method. The exponential smoothing family of models is suited to short time-series data; it is a kind of moving-average model that corrects itself. In other words, exponential smoothing is one of the most widely used statistical methods for time-series forecasting, and the idea is that recent observations will usually provide the best guidance for the future. Finally, based on the exponential smoothing forecasts, some suggestions are provided.
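As an illustration of the forecasting idea (a minimal sketch, not the authors' implementation; the daily counts below are invented placeholders), simple exponential smoothing can be coded in a few lines:

```python
import numpy as np

def ses_forecast(y, alpha, horizon=7):
    """Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.
    Returns the fitted levels and a flat forecast for `horizon` steps ahead."""
    level = y[0]
    fitted = [level]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
        fitted.append(level)
    return np.array(fitted), np.repeat(level, horizon)

# Hypothetical daily confirmed-case counts (placeholder data, not from the paper).
cases = np.array([120, 150, 180, 260, 310, 400, 520, 610, 700, 850], dtype=float)
_, forecast = ses_forecast(cases, alpha=0.6)
print("7-day-ahead flat forecast:", forecast)
```

The smoothing weight alpha controls how quickly older observations are discounted; values near 1 track the most recent counts closely.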
Mahsa Markani, Manije Sanei Tabas, Habib Naderi, Hamed Ahmadzadeh, Javad Jamalzadeh,
Volume 26, Issue 2 (3-2022)
Abstract

When working with regression data, situations arise in which the data constrain us; in other words, the data do not meet a set of requirements. The generalized maximum entropy method is able to estimate the parameters of a regression model without imposing any conditions on the error probability distribution. It works even when the problem is ill-posed (for example, when the sample size is very small, or the data exhibit high collinearity). Therefore, the purpose of this study is to estimate the parameters of the logistic regression model using the generalized maximum entropy method. A random sample of bank customers was collected, and the parameters of a binary logistic regression model were estimated by two methods: maximum generalized entropy (GME) and maximum likelihood (ML). Finally, the two methods were compared. Based on the MSE criterion for predicting customer demand for opening long-term accounts, the GME method proved more accurate than the ML method.
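The GME estimator itself (in the spirit of Golan, Judge and Miller, requiring support points for parameters and errors) is not sketched here; the following minimal Python sketch shows only the ML baseline and the MSE comparison criterion mentioned above, on simulated data with hypothetical coefficients:

```python
import numpy as np

def logistic_mle(X, y, iters=25):
    """Maximum-likelihood fit of binary logistic regression by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                      # diagonal of the IRLS weight matrix
        H = X.T @ (X * W[:, None])           # observed information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])  # intercept + 2 covariates
true_beta = np.array([-0.5, 1.0, -2.0])                          # hypothetical coefficients
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

beta_ml = logistic_mle(X, y)
p_hat = 1 / (1 + np.exp(-X @ beta_ml))
print("ML estimate:", beta_ml)
print("MSE criterion:", np.mean((y - p_hat) ** 2))  # the comparison metric used above
```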


Mr Mahmood Mirjalili, Mr Jaber Kazempoor, Mrs Behshid Yasavoli,
Volume 26, Issue 2 (3-2022)
Abstract

The cumulative distribution and density functions of a product of random variables following the power distribution with different parameters are derived.
The corresponding characteristic and moment-generating functions are also obtained.
We extend the results to exponential variables, and several useful identities are investigated in detail.
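As a worked two-variable special case (assuming independent components, with power densities $f_X(x)=ax^{a-1}$ and $f_Y(y)=by^{b-1}$ on $(0,1)$ and $a\neq b$), the density of the product $T=XY$ is

$$f_T(t)=\int_t^1 f_X(x)\,f_Y(t/x)\,\frac{dx}{x}=ab\,t^{b-1}\int_t^1 x^{a-b-1}\,dx=\frac{ab}{a-b}\left(t^{b-1}-t^{a-1}\right),\qquad 0<t<1,$$

and integrating gives the CDF $F_T(t)=\dfrac{a\,t^{b}-b\,t^{a}}{a-b}$, which indeed equals $1$ at $t=1$.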
Benita Doalt Zadeh, Ayyub Sheikhi, Mashallah Mashinchi, Alireza Arabpour,
Volume 26, Issue 2 (3-2022)
Abstract

In this paper, we study fuzzy random variables and their cumulative distribution functions, and describe joint fuzzy random variables and their joint cumulative distribution function. We then introduce the concept of a copula and its application in constructing a joint cumulative distribution function, in particular for two fuzzy random variables. Finally, for better understanding, an example is presented.
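As a small illustration of the Sklar-style construction for ordinary (non-fuzzy) random variables (a sketch under assumed exponential and uniform margins; the paper's fuzzy extension is not reproduced here):

```python
import numpy as np

def clayton_copula(u, v, theta):
    """Clayton copula C(u,v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return np.maximum(u ** -theta + v ** -theta - 1.0, 0.0) ** (-1.0 / theta)

def joint_cdf(x, y, theta=2.0):
    """Sklar's theorem: H(x,y) = C(F(x), G(y)) with exponential(1) and uniform(0,1) margins."""
    F = 1.0 - np.exp(-x)        # exponential(1) marginal CDF
    G = np.clip(y, 0.0, 1.0)    # uniform(0,1) marginal CDF
    return clayton_copula(F, G, theta)

print(joint_cdf(1.0, 0.5))      # joint probability P(X <= 1, Y <= 0.5)
```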


Dr Majid Jafari Khaledi, Mr Hassan Mirzavand,
Volume 26, Issue 2 (3-2022)
Abstract

To make statistical inferences about regression model parameters, it is necessary to assume a specific distribution for the random error term. A basic assumption in a linear regression model is that the random error term follows a normal distribution. However, in some statistical studies, data display skewness and bimodality simultaneously, and the normality assumption is violated. A common approach to avoiding this problem is to use a mixture of skew-normal distributions, but such models involve many parameters, which makes it difficult to fit them to the data; moreover, these models face a non-identifiability issue.
In this situation, a suitable solution is to use flexible distributions that can accommodate the skewness and bimodality observed in the data. In this direction, various methods based on extensions of the skew-normal distribution have been proposed in recent years. In this paper, these methods are used to introduce regression models that are more flexible than those based on the skew-normal distribution or a mixture of two skew-normal distributions. Their performance is compared in a simulation example, and the methodology is then illustrated in a practical example involving a dataset on horses.
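A minimal sketch of one ingredient, a linear regression with skew-normal errors fitted by maximum likelihood via scipy (simulated data, hypothetical coefficients; the more flexible bimodal extensions studied in the paper are not shown):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 2, n)
# Simulated response with skew-normal errors (shape a = 4 gives right skew).
y = 1.0 + 2.0 * x + stats.skewnorm.rvs(a=4, scale=1.0, size=n, random_state=rng)

def negloglik(par):
    """Negative log-likelihood of y = b0 + b1*x + e, e ~ skew-normal(alpha, scale)."""
    b0, b1, log_scale, alpha = par
    resid = y - b0 - b1 * x
    return -np.sum(stats.skewnorm.logpdf(resid, a=alpha, scale=np.exp(log_scale)))

fit = optimize.minimize(negloglik, x0=[0.0, 1.0, 0.0, 1.0], method="Nelder-Mead")
# Note: the skew-normal error has nonzero mean, so the intercept absorbs that shift.
print("b0, b1, scale, skewness:", fit.x[0], fit.x[1], np.exp(fit.x[2]), fit.x[3])
```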
 
Ramin Kazemi, Mohammad Qasem Vahidi-Asl,
Volume 26, Issue 2 (3-2022)
Abstract

Knowledge of statistics, ever since its inception, has served every aspect of human life and every individual and social class. It has shown its extraordinary potential in dealing with the numerous problems that have confronted human beings since the outbreak of Covid-19 in Wuhan, China. A vast literature has appeared showing the power of the science of statistics in answering different questions regarding this disease and its consequences. But it falls short, for instance, in modelling the geometry of disease spread within societies and in the world as a whole. Here the only way to deal with the matter is to resort to probability theory and its many ramifications, which provide realistic models describing this spread. A very powerful tool in this regard is percolation theory, which, besides its many applications in mathematical physics, is very handy in modelling epidemic diseases, among them Covid-19. A short description of this theory and its use in modelling the spread of epidemic diseases shows the importance of treating probability as a separate subject in the curricula, rather than as a subordinate of the science of statistics, as is now dominant in the statistics major curricula in Iranian schools.
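A minimal site-percolation sketch (illustrative only, not a calibrated epidemic model): each lattice site is "susceptible" with probability p, and the cluster reachable from a seed case mimics the reach of an outbreak.

```python
import numpy as np
from collections import deque

def percolation_cluster(n=100, p=0.6, seed=0):
    """Size of the open cluster containing the lattice centre in site percolation:
    each site is independently open (susceptible) with probability p."""
    rng = np.random.default_rng(seed)
    open_site = rng.random((n, n)) < p
    visited = np.zeros((n, n), dtype=bool)
    start = (n // 2, n // 2)
    if not open_site[start]:
        return 0
    queue, visited[start] = deque([start]), True
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and open_site[a, b] and not visited[a, b]:
                visited[a, b] = True
                queue.append((a, b))
    return int(visited.sum())

# Near the critical probability (~0.5927 for the square lattice) cluster sizes explode.
for p in (0.5, 0.59, 0.65):
    print(p, percolation_cluster(p=p))
```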


Dr. Mehdi Shams, Dr. Gholamreza Hesamian,
Volume 27, Issue 1 (3-2023)
Abstract

Information inequalities have many applications in estimation theory and statistical decision making. This paper describes the application of an information inequality to making minimax decisions in the framework of Bayesian theory. First, a fundamental inequality for the Bayes risk is introduced under the squared error loss function, and its applications in determining asymptotically and locally minimax estimators are then described in the univariate and multivariate cases. When the parameter components are orthogonal, asymptotically and locally minimax estimators are obtained for a function of the mean vector and the covariance matrix in the multivariate normal distribution. Finally, the bounds of the information inequality are calculated under a general loss function.
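The abstract does not display the inequality itself. A standard fundamental bound on Bayes risk under squared error loss, offered here as background (it is an assumption on our part that this is the bound the paper uses), is the van Trees inequality:

$$E\big[(\hat\theta-\theta)^2\big]\;\ge\;\frac{1}{E_\pi[I(\theta)]+I(\pi)},\qquad I(\pi)=\int\frac{\pi'(\theta)^2}{\pi(\theta)}\,d\theta,$$

where $I(\theta)$ is the Fisher information, $\pi$ is the prior density, and the expectation on the left is taken over both the data and the prior.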


Mr Arta Roohi, Ms Fatemeh Jahadi, Dr Mahdi Roozbeh,
Volume 27, Issue 1 (3-2023)
Abstract

The most popular technique for functional data analysis is the functional principal component approach, which is also an important tool for dimension reduction. Support vector regression is a branch of machine learning and a strong tool for data analysis. In this paper, the dependent variable is modeled on the predictor variables in spectroscopic data using functional principal component regression based on the second-derivative, ridge and lasso penalties, and using support vector regression with four kernels (linear, polynomial, sigmoid and radial). According to the results, based on the proposed goodness-of-fit criteria, support vector regression with a linear kernel and error parameter equal to $0.2$ provided the most appropriate fit to the data set.
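A minimal scikit-learn sketch of the winning configuration (ordinary PCA stands in for the functional principal components, and the data are synthetic placeholders; epsilon=0.2 is the error parameter reported above):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 100))            # stand-in for discretized spectral curves
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=60)

# Principal components as the dimension-reduction step, then linear-kernel SVR
# with the epsilon-insensitive tube set to 0.2.
model = make_pipeline(PCA(n_components=10), SVR(kernel="linear", epsilon=0.2))
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```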


Dr. Sedigheh Shams,
Volume 27, Issue 1 (3-2023)
Abstract

Copula functions are useful tools for modeling the dependence between random variables, but most existing copula functions are symmetric, while many applications require asymmetric joint distributions. One such application is reliability modeling, where asymmetric joint distributions can capture different tail dependencies and provide a better model. For this reason, the theory of constructing asymmetric copula functions, which can model a wider range of data, has been developed. In this research, after reviewing methods of creating asymmetric copula functions that can provide various tail dependencies, these functions are used to estimate the two-dimensional reliability of data on the age and usage of Rana and Dana cars.
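One standard construction of this kind, offered as background since the abstract does not name the specific families, is Khoudraji's device: starting from a symmetric copula $C$,

$$C_{a,b}(u,v)=u^{1-a}\,v^{1-b}\,C\!\left(u^{a},v^{b}\right),\qquad a,b\in[0,1],$$

is again a copula and is asymmetric, $C_{a,b}(u,v)\neq C_{a,b}(v,u)$, whenever $a\neq b$ and $C$ is not the independence copula, allowing different dependence behavior in the two arguments.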
Dr Fatemeh Hosseini, Dr Omid Karimi,
Volume 27, Issue 1 (3-2023)
Abstract

Spatial generalized linear mixed models are commonly used to model non-Gaussian data, with the spatial correlation of the data modeled through latent variables. In this paper, the latent variables are modeled using a stationary skew-Gaussian random field, and a new algorithm based on composite marginal likelihood is presented. The performance of this stationary random field within the model and of the proposed algorithm is evaluated in a simulation example.
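A minimal sketch of a pairwise (composite marginal) log-likelihood, using an ordinary stationary Gaussian field with exponential covariance as a stand-in for the paper's skew-Gaussian field; data and parameter values are placeholders:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.spatial.distance import cdist

def pairwise_loglik(params, coords, z, max_dist=1.0):
    """Composite marginal log-likelihood: sum of bivariate normal log-densities
    over all site pairs closer than max_dist, exponential covariance model."""
    sigma2, phi = np.exp(params)          # variance and range, kept positive
    d = cdist(coords, coords)
    total = 0.0
    n = len(z)
    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] < max_dist:
                rho = np.exp(-d[i, j] / phi)
                cov = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
                total += multivariate_normal.logpdf([z[i], z[j]],
                                                    mean=np.zeros(2), cov=cov)
    return total

rng = np.random.default_rng(3)
coords = rng.uniform(0, 2, size=(40, 2))
z = rng.normal(size=40)                   # placeholder observations
print(pairwise_loglik(np.log([1.0, 0.5]), coords, z))
```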


Dr. Behzad Mansouri, Dr. Rahim Chinipardaz, Sami Atiyah Sayyid Al-Farttosi, Dr. Habiballah Habiballah,
Volume 27, Issue 1 (3-2023)
Abstract

The empirical distribution function is used as an estimate of the cumulative probability distribution function of a random variable. It plays a fundamental role in many statistical inferences, some of which are little known. In this article, the empirical probability function is introduced as a derivative of the empirical distribution function, and it is shown that moment estimators such as the sample mean, sample median, sample variance, and sample correlation coefficient result from replacing the density function of the random variable with the empirical probability function in the theoretical definitions. In addition, the kernel probability density estimator is used to estimate population parameters, and a new method for bandwidth estimation in kernel density estimation is introduced.
Keywords: Empirical distribution function, moment estimate, kernel estimator, bandwidth.
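A minimal numpy sketch of the two estimators discussed (the bandwidth uses Silverman's standard rule of thumb; the paper's new bandwidth method is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(loc=2.0, scale=1.5, size=200)

def ecdf(data):
    """Empirical distribution function F_n(t) = (# observations <= t) / n."""
    xs = np.sort(data)
    return lambda t: np.searchsorted(xs, t, side="right") / len(xs)

def kde(data, t):
    """Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth."""
    n = len(data)
    iqr = np.subtract(*np.percentile(data, [75, 25]))
    h = 0.9 * min(data.std(ddof=1), iqr / 1.34) * n ** (-0.2)
    z = (t - data[:, None]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=0) / (n * h * np.sqrt(2 * np.pi))

F = ecdf(x)
print("F_n(2.0) ~", F(2.0))                     # should be near 0.5
grid = np.linspace(-3, 7, 5)
print("density estimate on grid:", kde(x, grid))
```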
Dr. Abouzar Bazyari,
Volume 27, Issue 1 (3-2023)
Abstract

In risk models, ruin probabilities and the Lundberg bound are calculated under the assumption that the statistical distribution of the underlying random variables is known. In the present paper, for the collective risk model and the discrete-time risk model of an insurance company, with independent and identically distributed claims having a light-tailed distribution, the infinite-time ruin probabilities are computed using the Lundberg bound; moreover, general forms of the density functions of the claim-size random variables are derived.
For some special cases in the discrete-time risk model, the claim-size density functions follow a shifted geometric distribution, while for the collective risk model they always follow an exponential distribution.
Numerical examples of infinite-time ruin probabilities, together with simulated values of these probabilities and the Lundberg bound, conclude the article.
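A minimal sketch for the classical compound Poisson model with exponential claims, where both the adjustment coefficient and the exact ruin probability are available to check the Lundberg bound $\psi(u)\le e^{-Ru}$ (parameter values are placeholders):

```python
import numpy as np
from scipy.optimize import brentq

mu, theta = 1.0, 0.2            # mean claim size and premium loading factor

# Adjustment coefficient R solves M_X(r) = 1 + (1 + theta) * mu * r,
# where M_X(r) = 1 / (1 - mu * r) for exponential(mean mu) claims, r < 1/mu.
def lundberg_eq(r):
    return 1.0 / (1.0 - mu * r) - 1.0 - (1.0 + theta) * mu * r

R = brentq(lundberg_eq, 1e-9, 1.0 / mu - 1e-9)
print("adjustment coefficient:", R, "closed form:", theta / (mu * (1 + theta)))

for u in (1.0, 5.0, 10.0):      # initial capital levels
    exact = np.exp(-R * u) / (1 + theta)   # exact psi(u) for exponential claims
    print(f"u={u}: Lundberg bound {np.exp(-R * u):.4f}, exact ruin prob {exact:.4f}")
```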

Ms. Zahra Jafarian Moorakani, Dr. Heydar Ali Mardani-Fard,
Volume 27, Issue 1 (3-2023)
Abstract

The ordinary linear regression model is $Y=X\beta+\varepsilon$, and the estimator of the parameter $\beta$ is $\hat\beta=(X'X)^{-1}X'Y$. However, when using this estimator in practice, problems such as variable selection, collinearity, high dimensionality, dimension reduction, and measurement error arise, making it difficult to use. In most of these cases, the main problem is the singularity of the matrix $X'X$, and many solutions have been proposed. In this article, while reviewing these problems, we present a set of common solutions as well as some special and advanced methods (which are less well known, but still have the potential to solve these problems intelligently).
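A minimal numpy sketch of the most common remedy for a singular or ill-conditioned $X'X$, the ridge estimator $\hat\beta_\lambda=(X'X+\lambda I)^{-1}X'Y$ (simulated data with deliberately collinear columns):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator: (X'X + lam*I)^(-1) X'y, well-defined even when X'X is singular."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 3))
X = np.column_stack([X, X[:, 0] + 1e-8 * rng.normal(size=50)])  # near-exact collinearity
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

print("condition number of X'X:", np.linalg.cond(X.T @ X))
print("ridge estimate (lam=0.1):", ridge(X, y, 0.1))
```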
Dr Nabaz Esmailzadeh, Dr Khosrow Fazli,
Volume 27, Issue 1 (3-2023)
Abstract

In this article, based on a random sample from a normal distribution with unknown parameters, we obtain the shortest confidence interval for the standard deviation using the sample standard deviation. We show that this confidence interval cannot be obtained by taking the square root of the endpoints of the shortest confidence interval for the variance given by Tate and Klett. A table is provided giving the confidence interval for several sample sizes and three common confidence coefficients. The power performance of tests constructed from the mentioned confidence intervals is also considered.
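A minimal scipy sketch of the underlying optimization (a numerical illustration, not the paper's table): with the pivot $(n-1)S^2/\sigma^2\sim\chi^2_{n-1}$, the interval for $\sigma$ is $[s\sqrt{(n-1)/b},\,s\sqrt{(n-1)/a}]$, so the shortest one minimizes $1/\sqrt{a}-1/\sqrt{b}$ subject to the coverage constraint:

```python
import numpy as np
from scipy.stats import chi2
from scipy.optimize import minimize_scalar

def shortest_ci_sigma(n, alpha=0.05):
    """Cut points (a, b) of the shortest CI for sigma: chi2 coverage is 1-alpha
    and the interval length, proportional to 1/sqrt(a) - 1/sqrt(b), is minimal."""
    df = n - 1
    def length(a):
        b = chi2.ppf(chi2.cdf(a, df) + 1 - alpha, df)
        return 1 / np.sqrt(a) - 1 / np.sqrt(b)
    res = minimize_scalar(length, bounds=(1e-6, chi2.ppf(alpha, df)), method="bounded")
    a = res.x
    b = chi2.ppf(chi2.cdf(a, df) + 1 - alpha, df)
    return a, b   # CI for sigma is [s*sqrt(df/b), s*sqrt(df/a)]

a, b = shortest_ci_sigma(20)
print("shortest-for-sigma cut points:", a, b)
print("equal-tailed cut points:      ", chi2.ppf(0.025, 19), chi2.ppf(0.975, 19))
```

The printed comparison shows why taking square roots of the variance interval's endpoints does not give the shortest interval for $\sigma$.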

Azam Karandish Marvasti, Dr Ehsan Ormoz, Dr Maryam Basirat,
Volume 27, Issue 1 (3-2023)
Abstract

In this paper, the unit generalized Gompertz (UGG) distribution is introduced as a new transformed model that contains the unit Gompertz distribution as a special case. We derive explicit expressions for the moments, the moment-generating, quantile, and hazard functions, and the Tsallis and Rényi entropies. Several methods for estimation and inference about the model parameters are also presented: maximum likelihood, maximum product spacings, and bootstrap sampling are discussed for estimating the unknown parameters, and approximate confidence intervals are given. Finally, a simulation study and an application to a real data set are presented.
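Since the UGG density is specific to the paper, the following sketch illustrates only the maximum product spacings idea, on a stand-in beta model with placeholder data:

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

rng = np.random.default_rng(6)
data = np.sort(beta.rvs(2.0, 5.0, size=100, random_state=rng))

def neg_log_spacings(params):
    """Maximum product spacings: maximize the mean log of spacings
    D_i = F(x_(i)) - F(x_(i-1)), with F(x_(0)) = 0 and F(x_(n+1)) = 1."""
    a, b = np.exp(params)                       # keep shape parameters positive
    F = np.concatenate(([0.0], beta.cdf(data, a, b), [1.0]))
    spacings = np.clip(np.diff(F), 1e-300, None)
    return -np.mean(np.log(spacings))

fit = minimize(neg_log_spacings, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("MPS estimates:", np.exp(fit.x))          # should be near (2, 5)
```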


Dr Mahdi Roozbeh,
Volume 27, Issue 2 (3-2023)
Abstract

Functional data analysis develops statistical approaches for data sets that are essentially functional and continuous. Because these functions belong to infinite-dimensional spaces, using the conventional methods of classical statistics to analyze such data is challenging.

The most popular technique for statistical analysis of functional data is the functional principal components approach, an important tool for dimension reduction. In this research, functional principal component regression based on the second-derivative, ridge and lasso penalties is used to analyze the Canadian climate and spectrometric data sets. To obtain the optimal values of the penalty parameter in the proposed methods, generalized cross-validation, a valid and efficient criterion, is applied.
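A minimal numpy sketch of generalized cross-validation for a ridge-type penalty (the functional principal component step is omitted; data are placeholders):

```python
import numpy as np

def gcv(X, y, lams):
    """GCV(lam) = (RSS/n) / (1 - tr(H)/n)^2 with hat matrix H = X(X'X + lam*I)^{-1}X'."""
    n, p = X.shape
    scores = []
    for lam in lams:
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
        resid = y - H @ y
        scores.append((resid @ resid / n) / (1 - np.trace(H) / n) ** 2)
    return np.array(scores)

rng = np.random.default_rng(7)
X = rng.normal(size=(80, 15))
y = X[:, :3].sum(axis=1) + rng.normal(size=80)
lams = np.logspace(-3, 3, 25)
print("GCV-optimal penalty:", lams[np.argmin(gcv(X, y, lams))])
```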


Mohamad Jarire,
Volume 27, Issue 2 (3-2023)
Abstract

In this article, the number of failures of a coherent system is studied under the assumption that the lifetimes of the system components are dependent, not necessarily identically distributed, discrete random variables. First, the probability that exactly $i$ failures, $i=0,\ldots,n-k$, occur in a $k$-out-of-$n$ system, given that the system is working at the monitoring time $t$, is computed. This result is then generalized to other coherent systems. In addition, it is shown that when the component lifetimes are independent and identically distributed, the obtained probability agrees with the corresponding probability for the continuous case available in the literature. Finally, practical examples are presented to investigate the behavior of this probability when the system components have exchangeable, and hence dependent, lifetimes.
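A numerical sketch of the independent, identically distributed special case mentioned above (the paper's dependent, non-identically distributed result is not reproduced): a $k$-out-of-$n$ system works iff at least $k$ components work, so we condition the binomial failure count on at most $n-k$ failures.

```python
import numpy as np
from math import comb

def cond_failures(n, k, p_t):
    """P(exactly i components failed | k-out-of-n system works at time t),
    for iid components alive at time t with probability p_t, i = 0..n-k."""
    probs = np.array([comb(n, i) * (1 - p_t) ** i * p_t ** (n - i)
                      for i in range(n - k + 1)])
    return probs / probs.sum()    # condition on at most n-k failures

# Example: 2-out-of-4 system, each component survives to t with probability 0.8.
for i, pr in enumerate(cond_failures(4, 2, 0.8)):
    print(f"P(exactly {i} failed | system works) = {pr:.4f}")
```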


Seyyed Roohollah Roozegar, Amir Reza Mahmoodi,
Volume 27, Issue 2 (3-2023)
Abstract


Many regression estimation techniques are strongly affected by outlying data, which can introduce substantial estimation errors. In recent years, robust methods have been developed to address this issue. The minimum density power divergence estimator is an estimation method based on minimizing a distance between two density functions, which provides a robust estimate in situations where the data contain outliers. In this research, we present the robust minimum density power divergence method for estimating the parameters of the Poisson regression model, which can produce robust estimators with minimal loss of efficiency. We also investigate the performance of the proposed estimators through a real example.
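A minimal scipy sketch of a density power divergence objective (in the style of Basu et al.) for Poisson regression, with the Poisson support truncated at a hypothetical y_max; the data and the tuning constant alpha are placeholders:

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 200
x = rng.uniform(0, 1, n)
y = rng.poisson(np.exp(0.5 + 1.0 * x))
y[:10] = 40                                  # inject outliers

def dpd_objective(beta, alpha=0.5, y_max=80):
    """Density power divergence objective for Poisson regression:
    (1/n) * sum_i [ sum_y f_i(y)^(1+alpha) - (1 + 1/alpha) * f_i(y_i)^alpha ]."""
    mu = np.exp(beta[0] + beta[1] * x)
    support = np.arange(y_max + 1)
    pmf = poisson.pmf(support[:, None], mu)              # shape (y_max+1, n)
    term1 = (pmf ** (1 + alpha)).sum(axis=0)
    term2 = (1 + 1 / alpha) * poisson.pmf(y, mu) ** alpha
    return np.mean(term1 - term2)

fit = minimize(dpd_objective, x0=[0.0, 0.0], method="Nelder-Mead")
print("robust MDPDE estimates:", fit.x)      # expected to stay near (0.5, 1.0)
```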
Sedigheh Zamani Mehreyan,
Volume 27, Issue 2 (3-2023)
Abstract

The boosted mixture learning method (BML) is an incremental method for learning mixture models in classification problems. In each step of the method, a new component is added to the mixture model so as to maximize an objective function; the likelihood function, or equivalently an information criterion, is often used as the objective, and the mixture model is updated whenever a new component is added.

Since information criteria cannot identify equivalent models, the new mixture model and the current mixture model may in fact be equivalent.

In this paper, the boosted mixture learning method is corrected using Vuong's model selection test, which can identify equivalent models. The performance of the two learning methods is evaluated on simulated data and on U.S. imports of goods on a customs basis.
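A minimal sketch of the Vuong statistic for comparing two non-nested models from their pointwise log-likelihoods (placeholder inputs; a near-zero Z flags the equivalent-model case that motivates the correction):

```python
import numpy as np
from scipy.stats import norm

def vuong_statistic(loglik1, loglik2):
    """Vuong's non-nested test: Z = sum(l1 - l2) / (sqrt(n) * sd(l1 - l2)).
    |Z| small -> the two models are statistically equivalent; the sign picks a winner."""
    d = np.asarray(loglik1) - np.asarray(loglik2)
    n = len(d)
    z = d.sum() / (np.sqrt(n) * d.std(ddof=1))
    return z, 2 * norm.sf(abs(z))            # two-sided p-value

# Placeholder pointwise log-likelihoods of a current and a candidate mixture model.
rng = np.random.default_rng(9)
l_current = rng.normal(-1.2, 0.3, size=500)
l_candidate = l_current + rng.normal(0.0, 0.05, size=500)
z, p = vuong_statistic(l_candidate, l_current)
print(f"Vuong Z = {z:.3f}, p = {p:.3f}  (large p: treat the models as equivalent)")
```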


Nafise Azadi, Ebrahim Reyhani, Anahita Komeyjani, Ehsan Bahrami Samani,
Volume 27, Issue 2 (3-2023)
Abstract

The purpose of this research is to investigate the statistical thinking of undergraduate student teachers in mathematics education on the topic of graph literacy, based on the framework of Wild and Pfannkuch. For this purpose, a questionnaire of 9 graph-literacy questions (box plots) was designed, with the questions classified according to the components of the Wild and Pfannkuch framework. The questionnaire was completed by 50 student teachers of mathematics education, male and female, at Farhangian University (Shahid Darjaei campus). The responses were leveled based on Watson's framework, a modification of the SOLO model. Gender differences among the student teachers were not statistically significant. The findings showed that most of the student teachers' answers in all statistical components are at the relational level, while overall the student teachers showed average performance on the components of statistical thinking in the topic of graph literacy.

Page 9 of 11
