Decision Trees: An Introduction


Outline
- Top-Down Decision Tree Construction
- Choosing the Splitting Attribute
- Information Gain and Gain Ratio

Decision Tree
- An internal node is a test on an attribute
- A branch represents an outcome of the test, e.g., house = Rent
- A leaf node represents a class label or a class label distribution
- At each node, one attribute is chosen so that it separates the training examples into distinct classes as much as possible
- A new case is classified by following the matching path down to a leaf node (see the sketch below)
(Example tree: the root tests the house attribute with branches Rent, Buy and Other; the Rent branch splits on Age < 35 vs. Age >= 35, the Buy branch on Price < 200K vs. Price >= 200K, and the leaves carry the labels Yes/No, with the Other branch leading directly to No.)
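As an illustration of how a matching path is followed, a tree like the one above could be stored as nested tuples and dictionaries. This is a minimal sketch, not code from the slides; the attribute names and the classify helper are hypothetical.

# Hypothetical sketch: an internal node is (attribute, {branch value -> subtree}),
# a leaf is just a class label string.
tree = ("house", {
    "Rent":  ("age_lt_35", {True: "Yes", False: "No"}),
    "Buy":   ("price_lt_200k", {True: "Yes", False: "No"}),
    "Other": "No",
})

def classify(node, case):
    """Follow the matching path from the root to a leaf and return its label."""
    while not isinstance(node, str):          # a string means we have reached a leaf
        attribute, branches = node
        node = branches[case[attribute]]      # take the branch matching the test outcome
    return node

print(classify(tree, {"house": "Rent", "age_lt_35": True}))   # -> Yes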

Weather Data: Play tennis or not

Outlook   Temperature  Humidity  Windy  Play?
sunny     hot          high      false  No
sunny     hot          high      true   No
overcast  hot          high      false  Yes
rain      mild         high      false  Yes
rain      cool         normal    false  Yes
rain      cool         normal    true   No
overcast  cool         normal    true   Yes
sunny     mild         high      false  No
sunny     cool         normal    false  Yes
rain      mild         normal    false  Yes
sunny     mild         normal    true   Yes
overcast  mild         high      true   Yes
overcast  hot          normal    false  Yes
rain      mild         high      true   No
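For the sketches that follow, the same table could be written down in Python, for instance as a list of dictionaries; the variable name weather_data is just for illustration.

# The 14 training examples from the table, one dictionary per row.
weather_data = [
    {"Outlook": "sunny",    "Temperature": "hot",  "Humidity": "high",   "Windy": False, "Play": "No"},
    {"Outlook": "sunny",    "Temperature": "hot",  "Humidity": "high",   "Windy": True,  "Play": "No"},
    {"Outlook": "overcast", "Temperature": "hot",  "Humidity": "high",   "Windy": False, "Play": "Yes"},
    {"Outlook": "rain",     "Temperature": "mild", "Humidity": "high",   "Windy": False, "Play": "Yes"},
    {"Outlook": "rain",     "Temperature": "cool", "Humidity": "normal", "Windy": False, "Play": "Yes"},
    {"Outlook": "rain",     "Temperature": "cool", "Humidity": "normal", "Windy": True,  "Play": "No"},
    {"Outlook": "overcast", "Temperature": "cool", "Humidity": "normal", "Windy": True,  "Play": "Yes"},
    {"Outlook": "sunny",    "Temperature": "mild", "Humidity": "high",   "Windy": False, "Play": "No"},
    {"Outlook": "sunny",    "Temperature": "cool", "Humidity": "normal", "Windy": False, "Play": "Yes"},
    {"Outlook": "rain",     "Temperature": "mild", "Humidity": "normal", "Windy": False, "Play": "Yes"},
    {"Outlook": "sunny",    "Temperature": "mild", "Humidity": "normal", "Windy": True,  "Play": "Yes"},
    {"Outlook": "overcast", "Temperature": "mild", "Humidity": "high",   "Windy": True,  "Play": "Yes"},
    {"Outlook": "overcast", "Temperature": "hot",  "Humidity": "normal", "Windy": False, "Play": "Yes"},
    {"Outlook": "rain",     "Temperature": "mild", "Humidity": "high",   "Windy": True,  "Play": "No"},
]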

Example Tree for Play tennis

Building a Decision Tree [Q93]
- Top-down tree construction: at the start, all training examples are at the root; partition the examples recursively by choosing one attribute at a time (a sketch follows below)
- Bottom-up tree pruning: remove subtrees or branches, in a bottom-up manner, to improve the estimated accuracy on new cases. Discussed next week
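The top-down recursion could be sketched as follows. This is my own sketch, not code from the slides: the splitting criterion is passed in as a function, the tree uses the same (attribute, branches) / label-string representation as the classification sketch above, and the class column is assumed to be "Play" as in the weather data.

from collections import Counter

def build_tree(examples, attributes, choose_attribute):
    """Recursively partition the examples; choose_attribute picks the split (e.g. by information gain)."""
    labels = [ex["Play"] for ex in examples]
    if len(set(labels)) == 1 or not attributes:        # pure node, or nothing left to split on
        return Counter(labels).most_common(1)[0][0]    # leaf: the (majority) class label
    best = choose_attribute(examples, attributes)      # evaluate the candidate splits at this node
    branches = {}
    for value in set(ex[best] for ex in examples):     # one branch per observed outcome of the test
        subset = [ex for ex in examples if ex[best] == value]
        rest = [a for a in attributes if a != best]
        branches[value] = build_tree(subset, rest, choose_attribute)
    return (best, branches)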

Choosing the Splitting Attribute
- At each node, the available attributes are evaluated on the basis of how well they separate the classes of the training examples
- A goodness function is used for this purpose
- Typical goodness functions: information gain (ID3/C4.5), information gain ratio, Gini index

A criterion for attribute selection
- Which is the best attribute? The one that results in the smallest tree
- Heuristic: choose the attribute that produces the purest nodes
- Popular impurity criterion: information gain, which increases with the average purity of the subsets that an attribute produces
- Information gain uses entropy H(p)
- Strategy: choose the attribute that results in the greatest information gain

Which attribute to select?

Consider entropy H(p): a pure node (100% yes or 100% no) has entropy 0, while a node that is not pure at all (e.g., 40% yes) has entropy close to 1, i.e., almost 1 bit of information is required to distinguish yes from no.

Entropy
log(p) denotes the base-2 logarithm of p
H(p) = - p log(p) - (1 - p) log(1 - p)
H(0) = 0: pure node, the distribution is skewed
H(1) = 0: pure node, the distribution is skewed
H(0.5) = 1: mixed node, equal distribution
In general: entropy(p1, p2, ..., pn) = - p1 log(p1) - p2 log(p2) - ... - pn log(pn)
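A small sketch of this entropy function in Python, using the convention that 0 log(0) = 0 (as the next slide notes):

from math import log2

def entropy(*probs):
    """entropy(p1,...,pn) = -sum(pi * log2(pi)), treating 0 * log2(0) as 0."""
    return sum(-p * log2(p) for p in probs if p > 0)

print(entropy(0.0, 1.0))     # 0.0    -> pure node
print(entropy(0.5, 0.5))     # 1.0    -> mixed node, equal distribution
print(entropy(2/5, 3/5))     # 0.971  -> the Outlook = Sunny subset on the next slide
print(entropy(9/14, 5/14))   # 0.940  -> info([9,5]) of the full weather data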

Example: attribute Outlook
Outlook = Sunny: info([2,3]) = entropy(2/5, 3/5) = - 2/5 log(2/5) - 3/5 log(3/5) = 0.971 bits
Outlook = Overcast: info([4,0]) = entropy(1, 0) = - 1 log(1) - 0 log(0) = 0 bits
(Note: log(0) is not defined, but we evaluate 0 * log(0) as zero)
Outlook = Rainy: info([3,2]) = entropy(3/5, 2/5) = - 3/5 log(3/5) - 2/5 log(2/5) = 0.971 bits
Expected information for Outlook: info([2,3],[4,0],[3,2]) = (5/14) × 0.971 + (4/14) × 0 + (5/14) × 0.971 = 0.693 bits

Computing the information gain
Information gain = (information before the split) - (information after the split)
gain("Outlook") = info([9,5]) - info([2,3],[4,0],[3,2]) = 0.940 - 0.693 = 0.247 bits

Information gain for the attributes of the weather data:
gain("Outlook") = 0.247 bits
gain("Temperature") = 0.029 bits
gain("Humidity") = 0.152 bits
gain("Windy") = 0.048 bits
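Putting the pieces together, these gains could be computed as in the sketch below. It reuses the entropy helper and the weather_data list from the earlier sketches, and class_entropy and gain are helper names of my own.

from collections import Counter

def class_entropy(examples):
    """Entropy of the Play labels in a set of examples."""
    counts = Counter(ex["Play"] for ex in examples)
    total = sum(counts.values())
    return entropy(*(c / total for c in counts.values()))

def gain(examples, attribute):
    """Information gain = info before the split minus the expected info after it."""
    before = class_entropy(examples)
    after = 0.0
    for value in set(ex[attribute] for ex in examples):
        subset = [ex for ex in examples if ex[attribute] == value]
        after += len(subset) / len(examples) * class_entropy(subset)
    return before - after

for a in ["Outlook", "Temperature", "Humidity", "Windy"]:
    print(a, round(gain(weather_data, a), 3))
# Outlook 0.247, Temperature 0.029, Humidity 0.152, Windy 0.048

# Plugged into the build_tree sketch from the earlier slide, e.g.:
# choose = lambda exs, attrs: max(attrs, key=lambda a: gain(exs, a))
# tree = build_tree(weather_data, ["Outlook", "Temperature", "Humidity", "Windy"], choose)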

Continuing to split (within the Outlook = Sunny subset):
gain("Temperature") = 0.571 bits
gain("Humidity") = 0.971 bits
gain("Windy") = 0.020 bits

The final decision tree
- Note: not all leaves need to be pure; sometimes identical instances have different classes
- Splitting stops when the data can't be split any further

Highly-branching attributes
- Attributes with a large number of values are problematic (extreme case: customer ID)
- Subsets are more likely to be pure if there is a large number of values
- Information gain is therefore biased towards choosing attributes with a large number of values
- This may result in overfitting (selection of an attribute that is non-optimal for prediction)

Weather Data with ID

ID  Outlook   Temperature  Humidity  Windy  Play?
A   sunny     hot          high      false  No
B   sunny     hot          high      true   No
C   overcast  hot          high      false  Yes
D   rain      mild         high      false  Yes
E   rain      cool         normal    false  Yes
F   rain      cool         normal    true   No
G   overcast  cool         normal    true   Yes
H   sunny     mild         high      false  No
I   sunny     cool         normal    false  Yes
J   rain      mild         normal    false  Yes
K   sunny     mild         normal    true   Yes
L   overcast  mild         high      true   Yes
M   overcast  hot          normal    false  Yes
N   rain      mild         high      true   No

Split on the ID attribute
The entropy of the split is 0, since each leaf node is pure, containing only one case.
Information gain is therefore maximal for ID (0.940 bits, the full entropy of the class distribution).

Gain ratio
- Gain ratio: a modification of the information gain that reduces its bias towards highly-branching attributes
- Gain ratio takes the number and size of branches into account when choosing an attribute
- It corrects the information gain by taking the intrinsic information of a split into account (i.e., how much information we need to tell which branch an instance belongs to)
- The intrinsic information is large when the data is spread evenly over many branches and small when all data belong to one branch, so dividing by it penalises highly-branching attributes

Gain Ratio and Intrinsic Info
Intrinsic information: the entropy of the distribution of instances over the branches:
IntrinsicInfo(S, A) = - Σ_i (|S_i| / |S|) log2(|S_i| / |S|)
Gain ratio (Quinlan 86) normalizes the information gain by the intrinsic information:
GainRatio(S, A) = Gain(S, A) / IntrinsicInfo(S, A)
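The two formulas could be written as the following sketch (the function names are mine); each function takes the sizes of the subsets a split produces:

from math import log2

def intrinsic_info(subset_sizes):
    """Entropy of the distribution of instances over the branches of a split."""
    total = sum(subset_sizes)
    return sum(-(s / total) * log2(s / total) for s in subset_sizes if s > 0)

def gain_ratio(info_gain, subset_sizes):
    """Gain ratio = information gain normalised by the intrinsic information of the split."""
    return info_gain / intrinsic_info(subset_sizes)

print(intrinsic_info([5, 4, 5]))      # 1.577 bits, the split info for Outlook
# The ID attribute splits the 14 examples into 14 singleton branches:
print(intrinsic_info([1] * 14))       # 3.807 bits
print(gain_ratio(0.940, [1] * 14))    # 0.2469..., roughly the 0.246 on the next slide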

Computing the gain ratio
Example: intrinsic information for ID
info([1,1,...,1]) = 14 × (- 1/14 × log(1/14)) = 3.807 bits
The importance of an attribute decreases as its intrinsic information gets larger
Example:
gain_ratio("ID_code") = 0.940 bits / 3.807 bits = 0.246

Gain ratios for weather data

Attribute    Info   Gain                  Split info             Gain ratio
Outlook      0.693  0.940 - 0.693 = 0.247  info([5,4,5]) = 1.577  0.247 / 1.577 = 0.156
Temperature  0.911  0.940 - 0.911 = 0.029  info([4,6,4]) = 1.362  0.029 / 1.362 = 0.021
Humidity     0.788  0.940 - 0.788 = 0.152  info([7,7]) = 1.000    0.152 / 1.000 = 0.152
Windy        0.892  0.940 - 0.892 = 0.048  info([8,6]) = 0.985    0.048 / 0.985 = 0.049

More on the gain ratio
- Outlook still comes out on top
- However, ID has an even greater gain ratio; the standard fix is an ad hoc test that prevents splitting on that type of attribute
- Problem with the gain ratio: it may overcompensate and choose an attribute just because its intrinsic information is very low
- Standard fix: first, only consider attributes with greater than average information gain; then compare them on gain ratio (sketched below)
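A small sketch of that standard fix; select_attribute is a hypothetical helper of mine, and the numbers are the gains and gain ratios from the slides above, with ID assumed to be already ruled out by the ad hoc test.

def select_attribute(gains, gain_ratios):
    """Only consider attributes with above-average gain, then pick the best gain ratio."""
    average = sum(gains.values()) / len(gains)
    candidates = [a for a in gains if gains[a] >= average]
    return max(candidates, key=lambda a: gain_ratios[a])

gains = {"Outlook": 0.247, "Temperature": 0.029, "Humidity": 0.152, "Windy": 0.048}
gain_ratios = {"Outlook": 0.156, "Temperature": 0.021, "Humidity": 0.152, "Windy": 0.049}
print(select_attribute(gains, gain_ratios))   # -> Outlook (beats Humidity, the other above-average candidate)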