Showing 3 results for Decision Tree
Volume 18, Issue 1 (9-2013)
Abstract
This paper is a brief introduction to the concepts, methods, and algorithms of data mining in the statistical software R using a package named Rattle. Rattle provides a convenient graphical environment for performing several of these procedures and algorithms without the need for programming. Parts of the package are explained through a number of examples.
Dr Fatemeh Hosseini, Dr Omid Karimi, Miss Fatemeh Hamedi, Volume 24, Issue 1 (9-2019)
Abstract
Tree models offer a new and innovative way of analyzing large data sets by partitioning the predictor space into simpler regions. The Bayesian Additive Regression Trees (BART) model, which we explain in this article, uses an ensemble of trees in its structure, since a combination of several trees achieves higher accuracy than a single tree.
BART is thus a tree-based, nonparametric model that draws on general ensemble methods, boosting algorithms in particular, and is in fact an extension of the Classification and Regression Tree (CART) methods, in which a decision tree underlies the structure.
In this method, regularizing priors are placed on the parameters of the sum-of-trees model, and boosting-style algorithms are then used for the analysis. In this paper, the Bayesian Additive Regression Trees model is first introduced and then applied to the survival analysis of lung cancer patients.
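The sum-of-trees idea behind BART can be illustrated, in a much-simplified and non-Bayesian form, by an additive ensemble in which each small tree fits the residual left by the current sum. The sketch below uses depth-1 regression trees (stumps) on hypothetical toy data; it is an illustration of the sum-of-trees structure only, not the BART sampler itself, which draws trees via MCMC under regularizing priors.

```python
# Simplified sum-of-trees fit by greedy residual fitting with shrinkage.
# Illustration only: real BART samples trees via MCMC under regularizing priors.

def fit_stump(x, y):
    """Find the 1-D split minimizing squared error; return a predictor."""
    best = None
    for s in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= s]
        right = [yi for xi, yi in zip(x, y) if xi > s]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((yi - lm) ** 2 for yi in left)
               + sum((yi - rm) ** 2 for yi in right))
        if best is None or sse < best[0]:
            best = (sse, s, lm, rm)
    _, s, lm, rm = best
    return lambda xi: lm if xi <= s else rm

def sum_of_trees(x, y, n_trees=20, shrink=0.3):
    """Additive ensemble: each stump fits the residual of the current sum."""
    trees, resid = [], list(y)
    for _ in range(n_trees):
        stump = fit_stump(x, resid)
        trees.append(stump)
        resid = [r - shrink * stump(xi) for xi, r in zip(x, resid)]
    return lambda xi: sum(shrink * t(xi) for t in trees)

# Hypothetical toy data: a two-step piecewise-constant signal.
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0, 0, 0, 1, 1, 3, 3, 3]
model = sum_of_trees(x, y)
```

Because each new stump corrects what the current sum still gets wrong, the ensemble recovers the piecewise-constant signal even though no single stump can represent it.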
Miss Tayebeh Karami, Dr Muhyiddin Izadi, Dr Mehrdad Niaparast, Volume 26, Issue 1 (12-2021)
Abstract
The subject of classification is one of the important issues in different sciences. Logistic regression is one of the statistical methods for classifying data, in which the underlying distribution of the data is assumed to be known. Today, in addition to statistical methods, researchers use other methods such as machine learning, in which the distribution of the data need not be known. In this paper, in addition to logistic regression, some supervised machine learning methods are introduced, including the CART decision tree, random forest, bagging, and boosting. Finally, using four real data sets, we compare the performance of these algorithms with respect to the accuracy measure.
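The bagging procedure compared in the paper can be sketched as follows: train several base classifiers on bootstrap resamples of the training set, predict by majority vote, and score with the accuracy measure. This is a minimal pure-Python sketch on hypothetical toy data, using decision stumps as base classifiers; an actual study would use full CART trees and held-out test sets rather than training accuracy.

```python
import random

def fit_stump(X, y):
    """Best single-feature threshold classifier by training accuracy."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            for pos in (0, 1):
                pred = [pos if row[j] <= t else 1 - pos for row in X]
                acc = sum(p == yi for p, yi in zip(pred, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, j, t, pos)
    _, j, t, pos = best
    return lambda row: pos if row[j] <= t else 1 - pos

def bagging(X, y, n_estimators=25, seed=0):
    """Majority vote over stumps trained on bootstrap resamples."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(X)) for _ in X]
        models.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: round(sum(m(row) for m in models) / len(models))

def accuracy(model, X, y):
    """The accuracy measure: fraction of correctly classified rows."""
    return sum(model(row) == yi for row, yi in zip(X, y)) / len(y)

# Hypothetical toy data: class 1 when both features are large.
X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [2, 3], [3, 2], [3, 3]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
model = bagging(X, y)
```

The vote averages out the variance of individual stumps fit on different resamples, which is the motivation for bagging unstable learners such as decision trees.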