In addition to kNN, decision tree and random forest algorithms are widely used for classification problems. A decision tree algorithm builds a tree of if/then statements from the features and labeled points of a training set. A random forest algorithm builds many decision trees and combines their results (a type of "ensemble" model). In these examples, we will explore individual decision trees.
"Single decision trees are highly interpretable. The entire model can be completely represented by a simple two-dimensional graphic (binary tree)" - from The Elements of Statistical Learning
Each level of a decision tree splits the data according to a different attribute.
Decision trees perform best when a small number of attributes provide most of the information needed to classify observations.
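As a minimal sketch of the idea, here is a decision tree fit with scikit-learn's `DecisionTreeClassifier` on the iris dataset (an assumption for illustration, not this tutorial's own data). Printing the tree makes the if/then structure visible:

```python
# A minimal decision-tree sketch using scikit-learn on the iris
# dataset (an assumption -- substitute your own training set).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)

# Each node of the printed tree is one if/then split on a feature.
print(export_text(tree, feature_names=iris.feature_names))
print("test accuracy:", tree.score(X_test, y_test))
```

Because a handful of petal measurements carry most of the class information in iris, the printed tree stays shallow and readable, which matches the point above about when trees work well.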
Visualize the accuracies for each of the pruning parameter values used in example2. This should help you quickly identify which value would work best for our model.
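One way this visualization could look, sketched under the assumption that the pruning parameter swept in example2 was `max_depth` (substitute the parameter grid and scores actually used there):

```python
# Sketch: plot cross-validated accuracy against a pruning parameter.
# Assumption: the pruning parameter is max_depth over 1..10; swap in
# the grid and scoring used in example2.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
depths = list(range(1, 11))
accuracies = [
    cross_val_score(
        DecisionTreeClassifier(max_depth=d, random_state=0), X, y, cv=5
    ).mean()
    for d in depths
]

plt.plot(depths, accuracies, marker="o")
plt.xlabel("max_depth (pruning parameter)")
plt.ylabel("cross-validated accuracy")
plt.savefig("pruning_accuracies.png")
```

Reading the plot, the smallest parameter value on the accuracy plateau is usually the best pick: it keeps the tree simple (and interpretable) without giving up accuracy.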