Showing 6 results for Tree
Samira Jalayeri, Volume 17, Issue 1 (9-2012)
Abstract
Implementing unequal probability sampling without replacement is complex, and several methods have been suggested for carrying it out, including the Midzuno design and the systematic design. One of the methods introduced by Deville and Tillé (1998) is the splitting design, which leads to simple random sampling. In this paper, after fully explaining the design, we show with an example how to calculate the probability of each possible sample using the R software. Note that, once the desired inclusion probabilities are defined, this design can be implemented with the program in different populations.
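The systematic design mentioned in this abstract can be checked by simulation. The following is a minimal sketch in Python (the paper itself uses R); it draws repeated systematic unequal-probability samples and verifies that the empirical inclusion probabilities match the target ones. The selection rule and the example probabilities are illustrative, not taken from the paper.

```python
import math
import random
from collections import Counter

def systematic_sample(p, u=None):
    """One draw of systematic unequal-probability sampling without
    replacement: with a single uniform start u, unit i is selected
    when some point u + k (k = 0, 1, ...) falls in its cumulative
    interval [V_{i-1}, V_i), where V_i = p_1 + ... + p_i."""
    if u is None:
        u = random.random()
    sample, cum = [], 0.0
    for i, pi in enumerate(p):
        lo, hi = cum, cum + pi
        if math.ceil(lo - u) < hi - u:  # some u + k lies in [lo, hi)
            sample.append(i)
        cum = hi
    return sample

# Target inclusion probabilities summing to the sample size n = 2.
p = [0.2, 0.3, 0.6, 0.9]
random.seed(1)
reps = 200_000
counts = Counter()
for _ in range(reps):
    counts.update(systematic_sample(p))

# Empirical inclusion probabilities; each should approximate p[i].
est = [counts[i] / reps for i in range(len(p))]
print(est)
```

Every draw has exactly n = sum(p) units, and the long-run selection frequency of each unit converges to its target inclusion probability.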
Volume 18, Issue 1 (9-2013)
Abstract
This paper is a brief introduction to the concepts, methods, and algorithms of data mining in the statistical software R using a package named Rattle. Rattle provides a good graphical environment for performing some of the procedures and algorithms without the need for programming. Some parts of the package are explained through a number of examples.
Dr. Mehri Javanian, Volume 19, Issue 1 (6-2014)
Abstract
This article derives the limiting distribution of node degrees for a kind of random tree called the k-minimal label random recursive tree, as the size of the tree goes to infinity. The outdegree of a node in such a tree equals the number of customers immediately attracted by an agent in a pyramid marketing agency.
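The flavor of this result can be seen by simulation. The sketch below grows an ordinary random recursive tree (each new node attaches to a uniformly chosen earlier node) rather than the k-minimal label variant studied in the paper, and tallies outdegrees; for the ordinary tree the limiting proportion of nodes with outdegree k is known to be 2^{-(k+1)}.

```python
import random
from collections import Counter

def recursive_tree_outdegrees(n, rng):
    """Grow an ordinary random recursive tree on n nodes: node j
    (j = 1, ..., n-1) attaches to a uniformly chosen earlier node.
    Returns the outdegree (number of children) of every node."""
    outdeg = [0] * n
    for j in range(1, n):
        parent = rng.randrange(j)  # uniform over nodes 0..j-1
        outdeg[parent] += 1
    return outdeg

rng = random.Random(0)
n = 200_000
counts = Counter(recursive_tree_outdegrees(n, rng))

# Compare empirical proportions with the limit 2^{-(k+1)}.
for k in range(5):
    print(k, counts[k] / n, 2 ** -(k + 1))
```

In the marketing interpretation of the abstract, the outdegree histogram is the distribution of the number of customers each agent recruits directly.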
Dr. Fatemeh Hosseini, Dr. Omid Karimi, Miss Fatemeh Hamedi, Volume 24, Issue 1 (9-2019)
Abstract
Tree models offer a new way of analyzing large data sets by dividing the predictor space into simpler regions. The Bayesian Additive Regression Trees (BART) model, which we explain in this article, uses an ensemble of trees in its structure, since a combination of several trees generally attains higher accuracy than a single tree. BART is therefore a tree-based, nonparametric model that uses general aggregation methods, boosting algorithms in particular, and is in fact an extension of the Classification and Regression Tree (CART) methods, in which a decision tree sits at the core of the structure. In this method, regularizing priors are placed on the parameters of the sum-of-trees model, and boosting algorithms are then used for the analysis. In this paper, the Bayesian Additive Regression Trees model is first introduced and then applied to the survival analysis of lung cancer patients.
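The sum-of-trees idea behind BART can be illustrated without its Bayesian machinery. The following minimal Python sketch (not the paper's method) fits a sum of depth-1 regression trees by boosting: each stump is trained on the residuals of the current ensemble, so the prediction is a sum of many small trees.

```python
def fit_stump(x, y):
    """Depth-1 regression tree: pick the split threshold that
    minimizes the total squared error of the two leaf means."""
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for s in range(1, len(x)):
        thr = (x[order[s - 1]] + x[order[s]]) / 2
        left = [y[i] for i in range(len(x)) if x[i] <= thr]
        right = [y[i] for i in range(len(x)) if x[i] > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda t: ml if t <= thr else mr

def boost(x, y, m=50, lr=0.1):
    """Sum-of-trees fit: each stump is trained on the residuals
    left by the trees fitted so far (gradient boosting for squared
    error), and predictions are the scaled sum of all stumps."""
    trees, resid = [], list(y)
    for _ in range(m):
        tree = fit_stump(x, resid)
        trees.append(tree)
        resid = [r - lr * tree(xi) for r, xi in zip(resid, x)]
    return lambda t: sum(lr * tree(t) for tree in trees)

# Toy data: a step function, which a single stump already matches
# roughly but the boosted sum of stumps matches closely.
x = [i / 20 for i in range(40)]
y = [0.0 if xi < 1.0 else 1.0 for xi in x]
f = boost(x, y)
print(f(0.5), f(1.5))  # near 0 and near 1
```

BART replaces this deterministic residual fitting with regularizing priors on the trees and posterior sampling, but the additive sum-of-trees structure is the same.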
Alireza Rezaee, Mojtaba Ganjali, Ehsan Bahrami, Volume 25, Issue 1 (1-2021)
Abstract
Nonresponse is a source of error in survey results, and national statistical organizations are always looking for ways to control and reduce it. Predicting which sampling units will not respond, before the survey is conducted, is one of the solutions that can help considerably in reducing and treating survey nonresponse. Recent advances in technology and the facilitation of complex calculations have made it possible to apply machine learning methods, such as regression and classification trees or support vector machines, to many problems, including predicting the nonresponse of sampling units in official statistics. In this article, while reviewing the above methods, we use them to predict the nonresponding sampling units in an establishment survey, and we show that a combination of these methods predicts nonresponse more accurately than any single one of them.
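Why a combination of predictors can beat each individual one is easy to demonstrate. The sketch below is a generic illustration with simulated binary predictions, not the paper's survey data: three independent classifiers, each correct 70% of the time, are combined by majority vote, which is correct with probability p³ + 3p²(1 − p) ≈ 0.784.

```python
import random

def majority(votes):
    """Majority vote over binary (0/1) predictions."""
    return 1 if sum(votes) > len(votes) / 2 else 0

rng = random.Random(42)
n = 100_000
p = 0.7  # each individual predictor is right 70% of the time
correct_single = 0
correct_vote = 0
for _ in range(n):
    truth = rng.randint(0, 1)
    # three independent predictors, each wrong with probability 1 - p
    preds = [truth if rng.random() < p else 1 - truth for _ in range(3)]
    correct_single += preds[0] == truth
    correct_vote += majority(preds) == truth

print(correct_single / n)  # close to 0.70
print(correct_vote / n)    # close to 0.784
```

Real classifiers trained on the same survey data are correlated, so the gain is smaller than in this independent-errors toy model, but the direction of the effect is the same.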
Miss Tayebeh Karami, Dr. Muhyiddin Izadi, Dr. Mehrdad Niaparast, Volume 26, Issue 1 (12-2021)
Abstract
Classification is one of the important problems in different sciences. Logistic regression is one of the statistical methods for classifying data, in which the underlying distribution of the data is assumed to be known. Today, in addition to statistical methods, researchers use other methods, such as machine learning, in which the distribution of the data need not be known. In this paper, in addition to logistic regression, some supervised machine learning methods, including CART decision trees, random forests, bagging, and boosting, are introduced. Finally, using four real data sets, we compare the performance of these algorithms with respect to the accuracy measure.
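The accuracy measure used for the comparison is simply the proportion of correctly classified cases. A minimal Python sketch with made-up labels (the paper's four data sets are not reproduced here):

```python
def accuracy(y_true, y_pred):
    """Proportion of cases where the predicted class matches the
    true class; the measure used to compare classifiers."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative labels: 4 of 6 predictions match the truth.
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]
print(accuracy(y_true, y_pred))  # 4/6
```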