:: Search published articles ::

Dr. Mehdi Shams,
Volume 23, Issue 2 (3-2019)
Abstract

In this paper, after introducing the exponential family and a history of the work done by researchers in statistics, some applications of this family in statistical inference, especially in estimation, statistical hypothesis testing, and concepts of statistical information theory, are discussed.


Mrs Azam Rastin, Dr Mohammadreza Faridrohani, Dr Amirabbas Momenan, Dr Fatemeh Eskandari, Dr Davood Khalili,
Volume 23, Issue 2 (3-2019)
Abstract

Cardiovascular diseases (CVDs) are the leading cause of death worldwide. To specify an appropriate model for determining the risk of CVD and predicting survival, one is required to specify a functional form relating the outcome variables to the input variables. In this paper, we propose a dimension reduction method using a general model, which includes many widely used survival models as special cases.

Using an appropriate combination of dimension reduction and the Cox proportional hazards model, we obtain a method that is effective for survival prediction.


Atieh Shabaniyan Borujeni, Iraj Kazemi,
Volume 24, Issue 1 (9-2019)
Abstract

A popular application of nonlinear mixed-effects models is pharmacokinetic studies, which examine the distribution of an administered drug in the body over time. Fitting these models commonly assumes normality of the random effects and errors, but violations of this assumption can invalidate the estimation results. In longitudinal data analysis, the random effects and random errors are typically assumed to be normally distributed, yet empirical studies may violate this assumption. For this reason, the pharmacokinetic data are analyzed under the normal, slash, Student's t, and contaminated normal distributions. In this paper, parameter estimation for nonlinear mixed-effects models is carried out on a pharmacokinetic data set by maximum likelihood and by a Bayesian approach, using the SAS and OpenBUGS software, respectively. Using model selection criteria under both approaches, we find the model that best fits the data.
Dr. Mousa Golalizadeh, Mr. Amir Razaghi,
Volume 24, Issue 1 (9-2019)
Abstract

Principal components analysis (PCA) is one of the popular exploratory approaches for reducing dimension and describing the main sources of variation in data. Despite its many benefits, it encounters some problems in multivariate analysis. Outliers among the data significantly influence its results, so a robust version of PCA is beneficial in this case. In addition, moderate loadings in the final results make the interpretation of the principal components rather difficult; a sparse version of the components can be considered in this case. We study a hybrid approach consisting of jointly robust and sparse components, and conduct simulations to evaluate it and compare it with traditional methods. The proposed technique is illustrated on a real-life example dealing with crime rates in the USA.
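As a toy illustration of the sparsity idea only (not the hybrid robust-sparse method studied in the paper), the sketch below computes ordinary PCA loadings via the SVD and then soft-thresholds them so that small loadings become exactly zero; the function name and thresholding rule are our own assumptions, and real sparse-PCA methods solve a penalized optimization instead.

```python
import numpy as np

def sparse_pca_sketch(X, n_components=2, threshold=0.1):
    """Naive sparse-loading illustration: ordinary PCA via SVD, then
    soft-threshold the loadings (a crude sparsity device, for intuition only)."""
    Xc = X - X.mean(axis=0)                     # center each variable
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                     # p x n_components loadings
    # Soft-thresholding: shrink toward zero, kill entries below `threshold`.
    V_sparse = np.sign(V) * np.maximum(np.abs(V) - threshold, 0.0)
    # Re-normalize each non-zero loading vector to unit length.
    norms = np.linalg.norm(V_sparse, axis=0)
    norms[norms == 0] = 1.0
    return V_sparse / norms
```

Zeroed-out loadings make each component a function of only a few variables, which is exactly what eases interpretation.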
Ramin Kazemi,
Volume 24, Issue 1 (9-2019)
Abstract

The goal of this paper is to introduce the contraction method for analyzing algorithms.

By means of this method, several interesting classes of recursions can be analyzed as particular cases of a general framework. The main steps of the technique are based on contraction properties of the algorithm with respect to suitable probability metrics. Typically, the limiting distribution is characterized as the fixed point of a limiting operator on a class of probability distributions.
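As a classical example of this framework (standard in the contraction-method literature, not taken from this paper), the number of comparisons made by Quicksort satisfies a distributional recursion whose normalized limit is characterized as the fixed point of a contracting map:

```latex
% Quicksort comparison count C_n, with I_n uniform on {0, 1, ..., n-1}
% and \bar{C} an independent copy of C:
C_n \overset{d}{=} C_{I_n} + \bar{C}_{n-1-I_n} + n - 1 .
% The normalized quantity Y_n = (C_n - \mathbb{E} C_n)/n converges to the
% unique fixed point of the limiting operator
Y \overset{d}{=} U\,Y + (1-U)\,\bar{Y} + g(U), \qquad U \sim \mathrm{U}(0,1),
% where Y and \bar{Y} are independent copies of the limit and
g(u) = 2u\ln u + 2(1-u)\ln(1-u) + 1 ,
% the operator being a contraction in the minimal \ell_2 (Wasserstein) metric.
```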


G. R. Mohtashami Borzadaran,
Volume 25, Issue 2 (3-2021)
Abstract

Thomas Bayes, the founder of the Bayesian viewpoint, entered the University of Edinburgh in 1719 to study logic and theology. Returning in 1722, he worked with his father in a small church. He was also a mathematician, and in 1740 he made a novel discovery that he never published; his friend Richard Price found it in his notes after his death in 1761, re-edited it, and published it. But it attracted little attention until the late 18th century, at a time when data in Europe were not equally trusted. Pierre-Simon Laplace, a young mathematician, believed that probability theory held a key in his hand; he independently discovered the Bayesian mechanism and published it in 1774. Laplace expressed the principle not as an equation but in words. Today, Bayesian statistics is very important as a school of statistical philosophy and of the interpretation of probability, and has become known through the Bayes theorem presented after Bayes's death. Alan Turing was a British computer scientist, mathematician, and philosopher who is now known as the father of computer science and artificial intelligence. His outstanding achievements during his short life are the fruits of a beautiful mind that was finally extinguished forever by a suspicious death. During World War II, Turing worked at Bletchley Park, the center of British code-breaking, and for a time was in charge of the cryptanalysis of the German Navy's traffic. He devised several methods, distinctly Bayesian in spirit, for cracking German codes, as well as the electromechanical machine that could find the settings of the Enigma machine; breaking Enigma can be considered one of his great achievements. Alan Turing was a leading scientist who played an important role in the development of computer science and artificial intelligence and in the revival of Bayesian thought. He made an effective and stimulating contribution to artificial intelligence through the Turing test. He then worked at the National Physical Laboratory in the United Kingdom, where he designed one of the first stored-program computers, a design that was never actually built; in 1948 he went to the University of Manchester, where the "Manchester Mark 1", recognized as one of the world's first real computers, was developed. Since then, the role of Bayes's rule in scientific developments has only become more important. In the 21st century, Bayesian methods have made significant advances in the explanation and application of Bayesian statistics and have solved many of the world's problems.
New global technology has grown on Bayesian ideas, which will be reviewed in this article.
Dr Rahim Chinipardaz, Dr Behzad Mansouri,
Volume 25, Issue 2 (3-2021)
Abstract

There are two reasons that 2013 was designated the year of Statistics. First, it marked 300 years since the publication of Bernoulli's book Ars Conjectandi; second, 250 years since the presentation of Bayes's article. Hald (2007) believes that the development period of probability and statistics started with Bernoulli and ended with Fisher. This article explains the role of Bernoulli's book in statistics.


Dr Masoud Yarmohammadi, Dr Eynollah Pasha,
Volume 25, Issue 2 (3-2021)
Abstract

Statistics stems from induction. Induction is a long-standing notion in philosophy. Philosophical notions are such that they can neither be solved completely nor be abandoned forever. One of the most important problems concerning induction is “the problem of induction”.

In this paper we give a short history of induction and discuss some aspects of the problem of induction.

 
Dr. Mousa Golalizadeh,
Volume 25, Issue 2 (3-2021)
Abstract

The current article is a translation of a paper published in Significance, 2020, Vol. 17, No. 4, entitled “C. R. Rao's Century'', written as an appreciation with contributions by Bradley Efron, Shun-ichi Amari, Donald B. Rubin, Arni S. R. Srinivasa Rao, and David R. Cox. It is therefore not a scientific paper in the sense regularly accepted among researchers. The translation was prepared to appreciate Professor Rao's century of contributions to statistics, so that Persian speakers associated with statistics could become aware of his invaluable role in spreading statistics around the world. Anyone who has completed a bachelor's degree in statistics is familiar with at least two well-known results bearing his name: the “Cramér-Rao inequality'' and the “Rao-Blackwell theorem''. Mentioning his remarkable role in statistics, and learning more about his outstanding character from renowned statisticians who have themselves made remarkable impacts on the field, is a must. In the view of the author, Rao's fruitful activities, some of which are recounted in this paper, can serve as a model for those who enter the various fields of statistics and intend to follow his scientific and social life.
Ali Reza Taheriyoun, Gazelle Azadi,
Volume 26, Issue 1 (12-2021)
Abstract

Profile monitoring is usually handled with control charts, and in most such problems the response variable is observable. Here we confront a similar problem in which the values of a reward function are observed instead of the response-variable vector; we use a darts model to make it easier to understand. Supposing there exists at most one change-point, a sequence of independent points produced by dart throws is observed, and estimates of the parameters and of the change-point (if any) are presented using frequentist and Bayesian approaches. In both approaches, the cases of a scalar and of a matrix precision are studied separately. The results are examined through a simulation study, and the methods are applied to a real data set.
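To illustrate the general flavor of frequentist change-point estimation (a generic mean-shift scan, not the dart model or the precision-matrix treatment of the paper), one can profile out the segment means and scan all split points; the function below is a hypothetical minimal sketch.

```python
import numpy as np

def change_point_scan(x):
    """Return the index k (1 <= k < len(x)) that best splits x into two
    segments with different means, by minimizing the pooled residual sum of
    squares -- the profile maximum-likelihood estimate under a single
    mean-shift model with common variance."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_rss = 1, np.inf
    for k in range(1, n):
        left, right = x[:k], x[k:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k
```

The Bayesian analogue would place a prior on k and the segment parameters and report the posterior over k instead of a single minimizer.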

Dr Hossein Samimi Haghgozar,
Volume 26, Issue 1 (12-2021)
Abstract

In probability theory, a random variable (vector) is classified as discrete, absolutely continuous, singular continuous, or a mixture of these. Discrete and absolutely continuous random variables (vectors) have been extensively studied in various probability and statistics books. However, less attention has been paid to singular continuous distributions and to mixture distributions having a singular continuous part. In this article, an example of singular random vectors is given. Also, examples of mixture random vectors are presented whose distribution function is a convex linear combination of discrete, absolutely continuous, and singular continuous distribution functions.
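A standard example of a singular continuous distribution (our addition, not from the article) is the Cantor distribution: its distribution function is continuous, so there are no atoms, yet its derivative vanishes almost everywhere, so no density exists:

```latex
% A random variable supported on the Cantor set (Lebesgue measure zero):
X = \sum_{k=1}^{\infty} \frac{2\varepsilon_k}{3^k},
\qquad \varepsilon_k \overset{\text{iid}}{\sim} \mathrm{Bernoulli}(1/2).
% Its distribution function is the Cantor function F, which is continuous
% and satisfies F'(x) = 0 for almost every x, so F is singular continuous.
```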

 


Taban Baghfalaki, Parvaneh Mehdizadeh, Mahdy Esmailian,
Volume 26, Issue 1 (12-2021)
Abstract

Joint models are used in follow-up studies to investigate the relationship between longitudinal markers and survival outcomes, and they have been generalized to multiple markers and to competing-risks data. Much of the statistical work in the field of joint modeling focuses on shared random-effects models, which include characteristics of the longitudinal markers as explanatory variables in the survival model. A less-known approach is the joint latent class model, which assumes that a latent class structure fully captures the relationship between the longitudinal marker and the event risk. The latent class model may be appropriate because of its flexibility in modeling the relationship between the longitudinal marker and the time of the event, as well as its ability to include explanatory variables, especially for prediction problems. In this paper, we provide an overview of the joint latent class model and its generalizations. First a review of the models is given, and then the estimation of the model parameters is discussed. In the application section, two real data sets are analyzed.

Vahid Rezaei Tabar,
Volume 26, Issue 2 (3-2022)
Abstract

At the end of December 2019, the spread of a new infectious disease was reported in Wuhan, China, caused by a new coronavirus and officially named Covid-19 by the World Health Organization. As the number of victims of the virus exceeded 1,000, the World Health Organization chose the official name Covid-19 for the disease, which refers to "corona", "virus", "disease" and the year 2019.
Forecasting Covid-19 can help the government make better decisions. In this paper, an objective approach based on statistical methods is used for forecasting Covid-19. The most important goal is to forecast the prevalence of the coronavirus for confirmed, dead, and recovered cases, and to estimate the duration of the management of this virus, using the exponential smoothing method. Exponential smoothing models are suited to short time-series data; such a model is a kind of self-correcting weighted moving average. In other words, exponential smoothing is one of the most widely used statistical methods for time-series forecasting, and the idea is that recent observations will usually provide the best guidance for the future. Finally, based on the exponential smoothing results, we offer some suggestions.
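The idea that recent observations carry exponentially more weight can be seen in a minimal sketch of simple exponential smoothing (illustrative only; the paper's analysis presumably uses richer members of the exponential smoothing family, e.g. with trend terms, and the series below is made up):

```python
# Simple exponential smoothing: each smoothed value is a weighted average in
# which more recent observations receive exponentially larger weights.
def exponential_smoothing(series, alpha):
    """Return the smoothed level sequence for `series`.

    alpha in (0, 1] controls how quickly older observations are discounted;
    the last element serves as the one-step-ahead forecast.
    """
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    smoothed = [series[0]]  # initialize the level with the first observation
    for x in series[1:]:
        # new level = alpha * latest observation + (1 - alpha) * previous level
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

daily_cases = [100, 120, 130, 160, 210, 260]  # hypothetical counts
print(exponential_smoothing(daily_cases, 0.5))
```

With alpha near 1 the forecast tracks the latest observation closely; with alpha near 0 it averages over a long history.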
Khosrow Fazli, Korosh Arzideh,
Volume 26, Issue 2 (3-2022)
Abstract

Buffon's needle problem is a random experiment for estimating the number π by “randomly” throwing a needle onto a plane partitioned by parallel lines. In independent repetitions of the experiment, based on the number of times the needle crosses a line, one can construct an estimator of π. The aim of this note is to obtain a better estimator (in some sense) by considering a model in which the plane is partitioned by rectangles. We show that both estimators are asymptotically normal and unbiased, and we also obtain confidence intervals for π. We calculate the asymptotic relative efficiency of the estimators and show that the estimator based on rectangles is more efficient. Data from a real experiment are provided.
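The classical parallel-lines version of the experiment (the baseline the note improves upon) can be simulated as follows; the function name and defaults are our own, and the formula assumes the needle is no longer than the line gap.

```python
import math
import random

def buffon_pi_estimate(n, needle_len=1.0, line_gap=1.0, seed=0):
    """Estimate pi by simulating n needle throws onto a lined plane.

    With needle length l <= line gap d, the crossing probability is
    2*l / (pi*d), so pi is estimated by 2*l*n / (d * crossings).
    """
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n):
        # Distance from the needle's centre to the nearest line, and the
        # acute angle between needle and lines (both uniform by symmetry).
        y = rng.uniform(0, line_gap / 2)
        theta = rng.uniform(0, math.pi / 2)
        if y <= (needle_len / 2) * math.sin(theta):
            crossings += 1
    return 2 * needle_len * n / (line_gap * crossings)

print(buffon_pi_estimate(100_000))
```

Inverting the crossing frequency like this yields a consistent, asymptotically normal estimator, which is what makes the efficiency comparison with the rectangle design meaningful.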
Mohammad Khorasani, Dr Farzad Eskandari,
Volume 26, Issue 2 (3-2022)
Abstract

In today's world, using the statistical modeling process, natural phenomena can be analyzed and the events under study predicted. Many hydrological modeling methods do not make the best use of available information, because hydrological models represent a wide range of environmental processes, which makes the models complex. In particular, when predicting, the parameters affect the performance of statistical models, and in many risk-assessment problems uncertainty in the parameters leads to uncertainty in the model's predictions. Global sensitivity analysis is a tool used to characterize this uncertainty and is employed in decision making, risk assessment, model simplification, and so on. Minkowski-distance sensitivity analysis and regional sensitivity analysis are two broad methods that can work with a given sample of model input-output pairs. One significant difference between them is that Minkowski-distance sensitivity analysis analyzes output distributions conditional on input values (forward), while regional sensitivity analysis analyzes input distributions conditional on output values (reverse). In this paper, we study the relationship between these two approaches and show that regional sensitivity analysis (reverse), when focused on the probability density functions of the input, converges to Minkowski-distance sensitivity analysis (forward) as the number of classes used for conditioning the model outputs in the reverse method increases. Analogously to the existing general form of the forward sensitivity indices, we derive a general form of the reverse sensitivity indices and provide the corresponding reverse given-data method. Finally, a sensitivity analysis of a water-storage design with high-dimensional model outputs is performed.


Dr Nabaz Esmailzadeh, Dr Khosrow Fazli,
Volume 27, Issue 1 (3-2023)
Abstract

In this article, based on a random sample from a normal distribution with unknown parameters, we obtain the shortest confidence interval for the standard deviation using the sample standard deviation. We show that this confidence interval cannot be obtained by taking the square root of the endpoints of the shortest confidence interval for the variance given by Tate and Klett. A table is provided giving the interval for several sample sizes and three common confidence coefficients. The power of tests based on these confidence intervals is also considered.
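A numerical sketch of how such a shortest interval for σ can be computed directly (our own reconstruction from the standard chi-squared pivot, not the authors' method or code): writing the interval as (S√(ν/b), S√(ν/a)) with νS²/σ² ~ χ²(ν), minimizing its length subject to the coverage constraint leads, via a Lagrange-multiplier argument, to the balance condition a^{3/2} f(a) = b^{3/2} f(b), which differs from the Tate-Klett condition for the variance.

```python
import numpy as np
from scipy import stats, optimize

def shortest_ci_sigma(s, n, conf=0.95):
    """Shortest confidence interval for sigma (not sigma^2) of a normal
    sample, based on the sample standard deviation s from n observations.

    Uses nu*S^2/sigma^2 ~ chi2(nu); the interval (S*sqrt(nu/b), S*sqrt(nu/a))
    is shortest when a^{3/2} f(a) = b^{3/2} f(b) and F(b) - F(a) = conf.
    """
    nu = n - 1
    chi2 = stats.chi2(nu)
    alpha = 1 - conf

    def b_of_a(a):
        # Upper cutoff enforcing exact coverage for a given lower cutoff.
        return chi2.ppf(chi2.cdf(a) + conf)

    def balance(a):
        b = b_of_a(a)
        return a**1.5 * chi2.pdf(a) - b**1.5 * chi2.pdf(b)

    # The lower cutoff must lie below the alpha-quantile so that b exists.
    a = optimize.brentq(balance, 1e-6, chi2.ppf(alpha) - 1e-6)
    b = b_of_a(a)
    return s * np.sqrt(nu / b), s * np.sqrt(nu / a)
```

Taking square roots of the shortest-variance endpoints instead would satisfy the variance balance condition, not this one, which is the distinction the article makes.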

Dr Ali Safdari Vaighani,
Volume 27, Issue 2 (3-2023)
Abstract

In this article, we take a look at Henri Poincaré's view of the methodology of mathematics, drawn from his book Science and Method, and at the role of the choice of facts in mathematical discovery. The author deals with the foundations of the methodology of science and beautifully describes the future of mathematics and the direction of its development, which began in the past and is still continuing, impressing the reader with this deep thinking. The author's belief that mathematical rules are discovered from facts is well evident in the book. Poincaré's profound thinking in studying the laws of chance and their hidden realities in relation to the facts of existence is undeniable. This short article presents a selection from the first part of the book as an introduction to it.
Dr Ehsan Bahrami Samani, Ms Kiyana Javidi Anaraki,
Volume 28, Issue 1 (9-2023)
Abstract

Given the limited energy resources globally, energy optimization is crucial, and a significant portion of energy is consumed by buildings. The aim of this research is therefore to explore the factors that simultaneously affect the heating and cooling of buildings. We investigate 768 different residential buildings simulated with the Ecotect software. A joint regression model and exploratory data analysis were used to identify the factors influencing the heating and cooling of buildings. Based on variables such as relative compactness, overall height, surface area, and roof area, a new variable called “type” (building model), related to the shape of the building, was introduced and shown to be one of the strongest factors affecting heating and cooling. The joint regression model assumes that the responses follow a multivariate normal distribution. This model is compared with separate regression models (which ignore the correlation between responses) using Akaike's information criterion and the deviance information criterion, both pointing to the superiority of the joint model. The model parameters are estimated by maximum likelihood; the Akaike criterion of the joint model is 0.0072% lower than that of the separate models, the deviance information criterion differs by 0.001736%, and a comparison with the chi-squared distribution rejects the null hypothesis in favor of the joint model.
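The joint-versus-separate comparison can be sketched numerically. Below is a hypothetical minimal illustration (our own construction, for a generic bivariate response rather than the Ecotect data): with a common design matrix, the two fits differ only in whether the error covariance is unrestricted or diagonal, and AIC trades the extra covariance parameter against the likelihood gain.

```python
import numpy as np

def aic_joint_vs_separate(Y, X):
    """Compare AIC of a bivariate-normal regression with unrestricted error
    covariance ("joint") against independent univariate regressions
    ("separate"). With a common design X the OLS coefficients are the MLEs in
    both models, so they differ only in the off-diagonal covariance terms."""
    n, q = Y.shape
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # MLE of the coefficients
    R = Y - X @ B                               # residual matrix
    S = (R.T @ R) / n                           # MLE of the error covariance
    k_mean = B.size                             # mean parameters (both models)
    # Gaussian log-likelihoods at the MLE: full S vs. its diagonal.
    ll_joint = -0.5 * n * (q * np.log(2 * np.pi) + np.log(np.linalg.det(S)) + q)
    ll_sep = -0.5 * n * (q * np.log(2 * np.pi) + np.log(np.diag(S)).sum() + q)
    aic_joint = -2 * ll_joint + 2 * (k_mean + q * (q + 1) / 2)
    aic_sep = -2 * ll_sep + 2 * (k_mean + q)
    return aic_joint, aic_sep
```

When the responses are strongly correlated, the determinant term rewards the joint fit by far more than the two extra-parameter penalty costs, which is the qualitative pattern the abstract reports.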
Maryam Maleki, Hamid Reza Nili-Sani, Dr. M.gh. Akari,
Volume 28, Issue 2 (3-2024)
Abstract

In this article, logistic regression models are studied in which the response variables are binary (or multi-valued) and the explanatory (predictor or independent) variables are ordinary variables, but the errors are vague in nature in addition to being random. Based on this, we formulate the proposed model and derive least-squares estimates of the coefficients for the case of a single explanatory variable. Finally, we illustrate the results with an example.
Mohammad Q. Vahidi-Asl,
Volume 28, Issue 2 (3-2024)
Abstract

In the realm of statistical research, two primary methodologies can be identified: the first involves addressing self-motivated problems, where researchers select topics based on personal interest, often as a continuation of their doctoral studies. The second methodology focuses on collaborative problem-solving with researchers from various scientific disciplines, including both experimental sciences and other fields. While publishing original articles in both approaches is valuable, the collaborative method is particularly significant as it aims to address real-world problems, thereby enhancing the scientific discourse. Currently, most statistical research in Iran predominantly follows the first approach. In contrast, the second approach not only addresses pressing issues faced by the country—provided these problems are genuinely relevant—but also fosters a deeper understanding of statistics among researchers from other disciplines. This collaboration can lead to increased engagement between statisticians and professionals in various fields, ultimately promoting a more comprehensive understanding of statistical science across diverse areas of knowledge. The lack of emphasis on interdisciplinary collaboration is particularly concerning given the existence of critical real-world problems that can only be effectively addressed through joint efforts between statisticians and experts from other domains. This article will briefly examine several instances of research that have either been overlooked or received minimal attention, highlighting the need for greater interdisciplinary engagement in statistical research within Iran.
 

Andishe-ye Amari (Journal of Statistical Thinking)