Hamid Reza Nilisani, Mohammad Noori, Volume 17, Issue 2 (3-2013)
Abstract
Mehran Naghizadeh Qomi, Shokofa Kabiri, Volume 17, Issue 2 (3-2013)
Abstract
In this paper, we investigate the relation between two variables where one is measured on a ratio or interval scale and the other on a nominal scale. In such cases, we use the serial correlation coefficient. The computation of some serial coefficients, such as the biserial and point-biserial coefficients, is carried out with examples and R programs.
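As a minimal illustration (with simulated stand-in data, not the paper's examples), the point-biserial coefficient in R is simply the Pearson correlation between the interval-scale variable and a 0/1 coding of the nominal one:

set.seed(1)
group <- rbinom(50, 1, 0.5)                # nominal variable coded 0/1
score <- rnorm(50, mean = 10 + 2 * group)  # interval-scale variable
cor(score, group)                          # point-biserial correlation
cor.test(score, group)                     # the same, with a significance test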
Volume 17, Issue 2 (3-2013)
Abstract
In multivariate settings, the aim of canonical correlation analysis (CCA) for two sets of variables x and y is to obtain linear combinations of them that have the largest possible correlation. However, when x and y are by nature continuous functions of another variable (generally time), these two functions belong to function spaces of infinite dimension, and their CCA should be carried out using tools provided by functional data analysis. In this paper we first review the definitions and concepts of CCA for multivariate data, and then present those of CCA for functional data, considering the problems that occur when generalizing the concepts from the multivariate to the functional case. We also analyze a real functional data set and interpret the obtained results.
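For the multivariate case reviewed first in the paper, a minimal R sketch (with simulated data; the functional extension additionally requires smoothing tools such as those in the fda package) is:

set.seed(1)
X <- matrix(rnorm(100 * 3), 100, 3)
Y <- X %*% matrix(runif(9), 3, 3) + matrix(rnorm(100 * 3), 100, 3)
cc <- cancor(X, Y)  # canonical correlations and coefficient vectors
cc$cor              # the largest achievable correlations of linear combinations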
Volume 18, Issue 1 (9-2013)
Abstract
This paper is a brief introduction to the concepts, methods, and algorithms for data mining in the statistical software R using a package named Rattle. Rattle provides a good graphical environment for performing some of these procedures and algorithms without the need for programming. Some parts of the package are explained through a number of examples.
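For readers who want to try it, a minimal sketch of getting started (assuming the package is available on CRAN for your R version):

install.packages("rattle")  # once
library(rattle)
rattle()                    # opens the graphical data-mining interface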
Volume 18, Issue 1 (9-2013)
Abstract
Nowadays there has been an increasing interest in more flexible distributions, like skew distributions, that can represent observed behavior more closely. These distributions are often used in the medical and behavioral sciences for real-valued random variables whose distributions are not symmetric. Because of the wide application of skew distributions, in this paper, after a brief review of well-known skew distributions, the normal, t, skew-normal, and skew-t distributions are fitted to a real data set taken from Mobarakeh Steel Company medical data, and the best fit is selected using AIC.
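A minimal sketch of the model-comparison step, assuming the sn package's dsn() skew-normal density and simulated stand-in data (the paper's medical data are not public):

library(sn)
set.seed(1)
x <- rsn(200, xi = 0, omega = 1, alpha = 4)  # stand-in for the real data
nll_norm <- function(p) -sum(dnorm(x, p[1], exp(p[2]), log = TRUE))
nll_sn   <- function(p) -sum(dsn(x, p[1], exp(p[2]), p[3], log = TRUE))
f1 <- optim(c(0, 0), nll_norm)
f2 <- optim(c(0, 0, 1), nll_sn)
c(AIC_normal = 2 * f1$value + 2 * 2, AIC_skew_normal = 2 * f2$value + 2 * 3)
# the smaller AIC indicates the better-fitting model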
Narges Sohrabi, Hadi Movaghari, Kazem Fayyaz-Heydari, Volume 18, Issue 1 (9-2013)
Abstract
Numerical techniques are too often designed to yield specific answers to rigidly defined questions. Graphical techniques are less confining. They aid in understanding the numerous relationships reflected in the data, and they help reveal the existence of peculiar-looking observations or subsets of the data. It is difficult to obtain similar information from numerical procedures. In this article, using a real data set, some ways of displaying bivariate data are introduced.
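A small base-R sketch of such displays (simulated data, hypothetical rather than the article's real data):

set.seed(1)
x <- rnorm(100); y <- 0.6 * x + rnorm(100, sd = 0.8)
plot(x, y, main = "Scatter plot")  # the basic bivariate display
abline(lm(y ~ x), lty = 2)         # add a fitted line
smoothScatter(x, y)                # a density-shaded alternative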
Mohammad Amini, Volume 18, Issue 2 (3-2014)
Abstract
In this paper, we study the properties of power weighted means (arithmetic, geometric, and harmonic) of two copulas.
Mr Mousa Abdi, Dr Akbar Asgharzadeh, Volume 18, Issue 2 (3-2014)
Abstract
For computing different point estimates, such as method-of-moments and maximum likelihood estimates, and different interval estimates (classical confidence intervals, unbiased confidence intervals, HPD intervals), we may have to deal with equations that must be solved numerically. In this paper, some numerical methods for solving these types of equations are reviewed in the S-PLUS package. Various examples are presented to illustrate the methods described.
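As a minimal sketch of the idea in R (whose syntax is close to S-PLUS), consider solving the likelihood equation for the shape parameter of a gamma sample with known rate, a hypothetical example chosen because its score equation has no closed-form root:

set.seed(1)
x <- rgamma(50, shape = 3, rate = 1)
score <- function(a) sum(log(x)) - length(x) * digamma(a)  # d logL / d a
uniroot(score, interval = c(0.1, 20))$root                 # numerical MLE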
Dr Yadollah Mehrabi, Parvin Sarbakhsh, Dr Farid Zayeri, Dr Maryam Daneshpour, Volume 19, Issue 2 (2-2015)
Abstract
Logic regression is a generalized regression and classification method that is able to build Boolean combinations of the original binary variables as new predictive variables. Logic regression was introduced for case-control or cohort studies with independent observations. Although correlated observations occur in various studies for different reasons, logic regression has not been studied, in theory or application, for the analysis of correlated observations and longitudinal data. Given the importance of identifying and accounting for interactions between variables in longitudinal studies, in this paper we propose transition logic regression as an extension of logic regression to binary longitudinal data. The AIC of the models is used as the score function of the simulated annealing algorithm. To assess the performance of the method, a simulation study is conducted under various conditions of sample size, first-order dependency, and interaction effect. According to the results of the simulation study, as the sample size increases, the percentage of correctly identified interactions and the MSE of the estimates improve. As an application, we assess the interaction effect of some SNPs on HDL level over time in the TLGS study using our proposed model.
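As a simplified illustration of the transition idea only (not the authors' annealing search over logic trees), a first-order transition model with one fixed Boolean interaction can be fitted with glm() in R:

set.seed(1)
n <- 300
x1 <- rbinom(n, 1, 0.5); x2 <- rbinom(n, 1, 0.5)
ylag <- rbinom(n, 1, 0.5)                       # previous response
p <- plogis(-1 + 1.2 * ylag + 1.5 * (x1 & x2))  # first-order dependency
y <- rbinom(n, 1, p)
fit <- glm(y ~ ylag + I(x1 & x2), family = binomial)
AIC(fit)  # the score that the annealing algorithm would minimize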
Dr. A. Asgharzadeh, Mr. Hamed Yahyaee, Mr. M. Abdi, Volume 20, Issue 1 (4-2015)
Abstract
Confidence intervals are one of the most important topics in mathematical statistics and are closely related to statistical hypothesis tests. In a confidence interval, the aim is to find a random interval that covers the unknown parameter with high probability. Confidence intervals and their different forms have been extensively discussed in standard statistical books. Since most statistical distributions have more than one parameter, joint confidence regions are even more important than confidence intervals. In this paper, we discuss joint confidence regions. Some examples are given for illustration purposes.
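As one concrete example (the standard normal-sample construction, not taken from the paper), a joint confidence region for (mu, sigma^2) can be built in R from the two independent pivots for the mean and the variance, giving each pivot confidence level sqrt(1 - alpha) so the joint coverage is 1 - alpha:

set.seed(1)
x <- rnorm(30, mean = 5, sd = 2)
n <- length(x); xbar <- mean(x); s2 <- var(x)
alpha <- 0.05
g  <- sqrt(1 - alpha)             # level given to each pivot
z  <- qnorm((1 + g) / 2)
q1 <- qchisq((1 - g) / 2, n - 1)
q2 <- qchisq((1 + g) / 2, n - 1)
# region: (n-1)*s2/q2 <= sig2 <= (n-1)*s2/q1  and
#         (xbar - mu)^2 <= z^2 * sig2 / n
c(sig2_low = (n - 1) * s2 / q2, sig2_high = (n - 1) * s2 / q1)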
Volume 20, Issue 1 (4-2015)
Abstract
This paper is a Persian translation of R. T. Cox's (1946) famous work concerning subjective probability. It establishes an axiomatic foundation for subjective probability, akin to Kolmogorov's work.
Volume 20, Issue 2 (10-2015)
Abstract
Nadarajah and Haghighi (2011) introduced a new generalization of the exponential distribution as an alternative to the gamma, Weibull, and exponentiated exponential distributions. In this paper, a generalization of the Nadarajah-Haghighi (NH) distribution, namely the exponentiated generalized NH distribution, is introduced and discussed. The properties of the proposed model and its application to real data are discussed. A Monte Carlo simulation experiment is conducted to evaluate the maximum likelihood estimators of the unknown parameters.
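A minimal R sketch of the distribution, assuming the NH baseline cdf F(x) = 1 - exp(1 - (1 + lambda*x)^alpha) and the usual exponentiated generalized transform G = (1 - (1 - F)^a)^b; random values are generated by inverting G:

pnh   <- function(x, alpha, lambda) 1 - exp(1 - (1 + lambda * x)^alpha)
pegnh <- function(x, a, b, alpha, lambda)
  (1 - (1 - pnh(x, alpha, lambda))^a)^b
qegnh <- function(u, a, b, alpha, lambda) {
  f <- 1 - (1 - u^(1 / b))^(1 / a)             # baseline cdf value
  ((1 - log(1 - f))^(1 / alpha) - 1) / lambda  # invert the NH cdf
}
set.seed(1)
x <- qegnh(runif(1000), a = 2, b = 1.5, alpha = 0.8, lambda = 1)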
S. Mahmoud Taheri, Volume 23, Issue 1 (9-2018)
Abstract
This study uses a method of systematic review, called meta-analysis, to analyze the results of studies carried out in Iran over the past decade on the role of self-regulated learning in learners' academic performance. So far, studies investigating the relationship between self-regulated learning and academic achievement have been conducted mainly within classical statistical models, while the nature of these variables and the relationship between them is fuzzy; it is therefore suitable to employ a fuzzy method to analyze such data. Out of 50 studies on the role of self-regulated learning in learners' academic performance, 31 were chosen for the fuzzy meta-analysis. The results show that there is a meaningful relationship between self-regulated learning and academic achievement, and that self-regulated learning can explain 4-17 percent of the variance of academic achievement. These results can be used in educational program planning and effective learning.
Afshin Fallah, Khadiheh Rezaei, Volume 23, Issue 1 (9-2018)
Abstract
When the observations reflect a multimodal, asymmetric, or truncated structure, or a combination of these, using the usual unimodal and symmetric distributions leads to misleading results. Therefore, distributions able to model skewness, multimodality, and truncation have always been of central interest in the statistical literature. There are different methods to construct a distribution with these abilities, and using a weighted distribution is one of them. In this paper, it is shown that by using a suitable weight function one can create such desired abilities in the corresponding weighted distribution.
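A minimal R sketch of the mechanism, assuming the standard weighted-density form f_w(x) = w(x) f(x) / E[w(X)], with a hypothetical weight w(x) = x^2 applied to a normal base, which turns a unimodal density into a bimodal one:

w  <- function(x) x^2               # hypothetical weight function
f  <- function(x) dnorm(x)          # base density
Ew <- integrate(function(x) w(x) * f(x), -Inf, Inf)$value
fw <- function(x) w(x) * f(x) / Ew  # weighted density
curve(fw, -4, 4)                    # bimodal, despite a unimodal base
curve(f, -4, 4, add = TRUE, lty = 2)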
Dr. Mehdi Shams, Volume 23, Issue 2 (3-2019)
Abstract
In this paper, after introducing the exponential family and reviewing the history of work done by researchers in the field of statistics, some applications of this family in statistical inference, especially in estimation problems, statistical hypothesis testing, and concepts of statistical information theory, are discussed.
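For reference, the canonical one-parameter form usually assumed in such discussions (standard notation, not necessarily the paper's own) is

f(x \mid \eta) = h(x)\,\exp\{\eta\, T(x) - A(\eta)\},

with natural parameter \eta, sufficient statistic T(x), log-partition A(\eta), and carrier h(x); for example, the Bernoulli(p) distribution fits this form with \eta = \log\frac{p}{1-p}, T(x) = x, and A(\eta) = \log(1 + e^{\eta}).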
Mrs Azam Rastin, Dr Mohmmadreza Faridrohani, Dr Amirabbas Momenan, Dr Fatemeh Eskandari, Dr Davood Khalili, Volume 23, Issue 2 (3-2019)
Abstract
Cardiovascular diseases (CVDs) are the leading cause of death worldwide. To specify an appropriate model that determines the risk of CVD and predicts the survival rate, users are required to specify a functional form relating the outcome variables to the input ones. In this paper, we propose a dimension reduction method using a general model that includes many widely used survival models as special cases. Using an appropriate combination of dimension reduction and the Cox proportional hazards model, we obtain a method that is effective for survival prediction.
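As a simplified sketch of the combination (not the authors' dimension reduction method; ordinary PCA is used here as a stand-in, with the survival package's lung data as hypothetical input):

library(survival)
d  <- na.omit(lung[, c("time", "status", "age", "ph.ecog", "ph.karno", "wt.loss")])
pc <- prcomp(d[, -(1:2)], scale. = TRUE)  # dimension reduction step
fit <- coxph(Surv(time, status) ~ pc$x[, 1:2], data = d)
summary(fit)                              # Cox PH on the reduced inputs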
Atieh Shabaniyan Borujeni, Iraj Kazemi, Volume 24, Issue 1 (9-2019)
Abstract
A popular application of nonlinear mixed-effects models is in pharmacokinetic studies, in which the distribution of a drug in the body over time is studied. In fitting these models, normality of the random effects and errors is commonly assumed, but violations of this assumption can invalidate the estimation results. In longitudinal data analysis, one typically assumes that the random effects and random errors are normally distributed, yet empirical studies may violate this assumption. For this reason, the analysis of the pharmacokinetic data is carried out under the normal, slash, Student's t, and contaminated normal distributions. In this paper, parameter estimation of nonlinear mixed-effects models is performed by maximum likelihood and by a Bayesian approach, using the SAS and OpenBUGS software respectively, on a pharmacokinetic data set. Moreover, using model selection criteria based on these two approaches, we find the model that best fits the data.
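As a minimal sketch under the usual Gaussian assumption (in R's nlme package rather than the SAS/OpenBUGS used in the paper), a one-compartment model for the classic theophylline data:

library(nlme)
fm <- nlme(conc ~ SSfol(Dose, Time, lKe, lKa, lCl), data = Theoph)
AIC(fm)  # the kind of criterion used to compare candidate models
# heavy-tailed alternatives (t, slash, contaminated normal) need other tools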
Dr. Mousa Golalizadeh, Mr. Amir Razaghi, Volume 24, Issue 1 (9-2019)
Abstract
Principal component analysis (PCA) is one of the popular exploratory approaches for reducing dimension and describing the main sources of variation in data. Despite its many benefits, it encounters some problems in multivariate analysis. Outliers among the data strongly influence its results, and a robust version of PCA is beneficial in such cases. In addition, moderate loadings in the final results make the interpretation of the principal components rather difficult; one may then consider a sparse version of the components. We study a hybrid approach consisting of jointly robust and sparse components and conduct some simulations to evaluate it and compare it with traditional methods. The proposed technique is applied in a real-life example dealing with crime rates in the USA.
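A minimal R sketch of the robust ingredient alone (not the authors' joint robust-and-sparse procedure): principal components computed from an MCD covariance estimate resist the outliers that distort classical PCA:

library(MASS)
set.seed(1)
X <- matrix(rnorm(200 * 5), 200, 5)
X[1:10, ] <- X[1:10, ] + 8           # a few gross outliers
e_cls <- eigen(cov(X))$vectors[, 1]  # classical first loading
e_rob <- eigen(cov.rob(X, method = "mcd")$cov)$vectors[, 1]  # robust version
cbind(classical = e_cls, robust = e_rob)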
Ramin Kazemi, Volume 24, Issue 1 (9-2019)
Abstract
The goal of this paper is to introduce the contraction method for analyzing algorithms. By means of this method, several interesting classes of recursions can be analyzed as particular cases of a general framework. The main steps of this technique are based on contraction properties of the algorithm with respect to suitable probability metrics. Typically, the limiting distribution is characterized as a fixed point of a limiting operator on a class of probability distributions.
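The classic Quicksort illustration (the standard setup, stated here for orientation rather than taken from the paper): the number of comparisons X_n satisfies the distributional recursion

X_n \overset{d}{=} X_{I_n} + \bar{X}_{n-1-I_n} + n - 1, \qquad I_n \sim \mathrm{Unif}\{0, \dots, n-1\},

and the normalized limit Y of (X_n - \mathbb{E}[X_n])/n is characterized as the unique fixed point of

Y \overset{d}{=} U\,Y + (1-U)\,\bar{Y} + C(U), \qquad C(u) = 2u\ln u + 2(1-u)\ln(1-u) + 1,

with U uniform on [0, 1] and \bar{Y} an independent copy of Y.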
G. R. Mohtashami Borzadaran, Volume 25, Issue 2 (3-2021)
Abstract
Thomas Bayes, the founder of the Bayesian view, entered the University of Edinburgh in 1719 to study logic and theology. Returning home in 1722, he worked with his father in a small church. He was also a mathematician, and around 1740 he made a novel discovery that he never published; his friend Richard Price found it in his notes after his death in 1761, re-edited it, and published it. Until Laplace, however, no one paid attention, since in late 18th-century Europe data did not enjoy equal confidence. Pierre-Simon Laplace, a young mathematician, believed that probability theory was a key in his hand; he independently discovered the Bayesian mechanism and published it in 1774. Laplace expressed the principle not as an equation but in words. Today, Bayesian statistics, as a school of statistical philosophy and an interpretation of probability, is very important, and the theorem presented after Bayes's death has become known as Bayes' theorem. Alan Turing was a British computer scientist, mathematician, and philosopher who is now known as the father of computer science and artificial intelligence. His outstanding achievements during his short life are the fruit of a beautiful mind that was finally extinguished forever by a suspicious death. During World War II, Turing worked at Bletchley Park, the center of British code-breaking, and for a time was in charge of the cryptanalysis of the German Navy. He devised several methods, grounded specifically in a Bayesian point of view, for cracking German codes, as well as an electromechanical machine that could find the settings of the Enigma machine. Breaking Enigma can also be considered one of his great achievements. Alan Turing was a leading scientist who played an important role in the development of computer science and artificial intelligence and in the revival of Bayesian thinking; through the Turing test he made an effective and stimulating contribution to artificial intelligence. He then worked at the National Physical Laboratory in the United Kingdom, where he presented one of the first designs for a stored-program computer, although it was never actually built in that form. In 1948 he moved to the University of Manchester, whose "Manchester Mark 1" came to be recognized as one of the world's first real computers. Later on, the role of Bayes' rule in scientific developments became ever more important. Bayesian methods in the 21st century have made significant advances in the explanation and application of Bayesian statistics, in fields such as climate research, and have solved many of the world's problems.
New global technology has grown on Bayesian ideas, which will be reviewed in this article.