A Novel Approach to Predicting the Results of NBA Matches


Omid Aryan, Stanford University, aryano@stanford.edu
Ali Reza Sharafat, Stanford University, sharafat@stanford.edu

Abstract -- This paper presents a novel approach to predicting the results of basketball matches played in the NBA. As opposed to previous work, which takes into account only a team's own performance and statistics when predicting its outcome in a match, our approach also uses the data known about the opposing team. Our findings show that utilizing information about the opponent improves the error rate observed in these predictions.

I. INTRODUCTION

Statistics and data have become an ever more prominent component of professional sports in recent years, particularly in the National Basketball Association (NBA). Massive amounts of data are collected on each NBA team, from each team's wins and losses to each player's field goals, three-point shots, and average minutes played. Since this data is usually interpreted mostly by people in the sports profession, it is a largely untapped field in which machine learning (and other data-mining) techniques can be applied to draw out interesting results. We plan to do just that and utilize the algorithms taught in CS229 to predict the outcomes of NBA matches.

The paper provides a novel approach to predicting these matches. As opposed to previous work in this field, where the available data is fed directly to the algorithms, we find a relationship between the data sets of the two teams in a match and modify the data accordingly before feeding it to our algorithms.

Section II describes the source and format of the dataset used to train our models.
Section III presents an overview of the model we implement along with a description of each of its components. Section IV presents the results of our implementation, and Section V concludes the paper.

II. DATA SET

We collect our data from basketball-reference.com. For each game we get a set of 30 features for each team (e.g., http://www.basketball-reference.com/boxscores/201411160lal.html). The features are listed in Table I.

TABLE I: Features given for each team in each game of the season

Minutes Played, Field Goals, Field Goal Attempts, Field Goal Percentage, Turnovers, Personal Fouls, Points, Free Throws, Free Throw Attempts, Free Throw Percentage, Offensive Rebounds, Defensive Rebounds, Total Rebounds, Assists, Steals, Offensive Rating, Blocks, 3-Point Field Goals, 3-Point Field Goal Attempts, 3-Point Field Goal Percentage, True Shooting Percentage, Effective Field Goal Percentage, Offensive Rebound Percentage, Defensive Rebound Percentage, Total Rebound Percentage, Assist Percentage, Steal Percentage, Block Percentage, Turnover Percentage, Usage Percentage, Defensive Rating

We created a crawler that scrapes the website and collects the entire data set for all games in the past five seasons (2008-2013), storing one csv file per season. We call this data set the game data. Each team plays 82 games per season, so we have about 2600 data points in each season's file. We primarily use this data set to predict the features of teams playing an upcoming game.

We also collect seasonal data for each team (e.g.,
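As an illustration of how the scraped game data can be consumed, the sketch below parses a tiny in-memory stand-in for one per-season csv file. The column names (team, opponent, pts, trb, ast, win) are hypothetical placeholders for a few of the Table I features; the files produced by our crawler may use different headers.

```python
import csv
import io

# Hypothetical excerpt of one per-season game-data file; the real
# headers written by the crawler may differ.
GAME_CSV = """team,opponent,pts,trb,ast,win
LAL,BOS,107,45,22,1
BOS,LAL,98,41,25,0
"""

def load_game_data(fh):
    """Parse one season file into per-team, per-game training records."""
    rows = []
    for row in csv.DictReader(fh):
        rows.append({
            "team": row["team"],
            "opponent": row["opponent"],
            "x": [float(row[k]) for k in ("pts", "trb", "ast")],
            "y": int(row["win"]),  # 1 = win, 0 = loss
        })
    return rows

games = load_game_data(io.StringIO(GAME_CSV))
```

Each record pairs a feature vector x with the win/loss label y, matching the (x, y) training examples described in Section III.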

http://www.basketball-reference.com/leagues/NBA_2013.html). Here, there are five tables of features for each team in the league. We combine four of these tables (Team Stats, Opponent Stats, Team Shooting, Opponent Shooting) into one table, resulting in 98 features per team. We call this data set the team data. We primarily use it to cluster teams in order to make more accurate feature predictions for upcoming games.

Fig. 1: Model Overview

III. MODEL OVERVIEW

The previous section described the data set with which we train our models. Each training example is of the form (x, y), corresponding to the statistics and outcome of a team in a particular match: x is an N-dimensional vector of input variables, and y indicates whether the team won (y = 1) or lost (y = 0).

Given such a data set, a naive approach would be to feed the training examples directly to existing machine learning algorithms (e.g., logistic regression, support vector machines, neural networks) to predict the outcome of a match. This is the approach undertaken in the referenced work. However, it merely predicts whether a team wins or loses regardless of which team it plays against.

In this paper we tackle this intrinsic shortcoming of the aforementioned method and take a different approach. Instead of directly using the training examples of Section II to train our parameters, we modify them based on the matches they were obtained from as well as the relationships that the different features have with one another. For example, the statistics a team is expected to have in a match depend on the team it is playing against.
We take this into account by clustering the teams into groups and predicting each team's expected statistics based on (a) the relationship between the two clusters to which the two teams belong and (b) the teams' average statistics from previous matches. Moreover, once we have a prediction of how each team will perform on each feature, we congregate the two teams' feature sets into a single feature set based on how those features relate to one another. Ultimately, this single feature set is fed to our algorithms to make a prediction.

An overview of our model is depicted in Figure 1. It is composed of the following three components:

1) Statistic Prediction: The two teams to play a match (e.g., the Los Angeles Lakers and the Boston Celtics) are given as input, and the expected statistics of each team in that match (x_A and x_B) are predicted.

2) Feature Formation: A single set of input features x is formed from x_A and x_B based on how the individual features (the x_i's) relate to one another.

3) Result Prediction: The machine learning algorithms we utilize predict the final outcome from the feature set x produced by the previous component.

A. Statistic Prediction

We use several different predictive models to predict the features of an upcoming game, using all the data points from previous games. We describe each model below.

1) Running average: The feature values of a team in an upcoming game are predicted as the running average of that team's features over the previous games it has played in the season. This is the simplest prediction method one can use. Since it relies on the team having played at least one game already, we do not make a prediction for each team's first game of the season.
2) Exponentially decaying average: This is similar to the previous model, except that previous games are weighted by a factor 0 < α < 1. That is, if a given team has played n games already, with g_i representing its features in the i-th game, our prediction for the (n+1)-st game is

    g_(n+1) = [(1 − α) / (1 − α^n)] · Σ_{i=1}^{n} α^(n−i) g_i

The point of using a decaying average is to test
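For concreteness, the two averaging predictors above can be sketched in plain Python. Each element of games is one g_i (a list of feature values), ordered oldest first:

```python
def running_average(games):
    """Method 1: mean of each feature over all previous games."""
    n = len(games)
    return [sum(g[j] for g in games) / n for j in range(len(games[0]))]

def decaying_average(games, alpha):
    """Method 2: exponentially decaying average. Game i of n gets
    weight alpha**(n - i), so the most recent game has weight 1;
    (1 - alpha) / (1 - alpha**n) normalizes the weights to sum to 1.
    Requires 0 < alpha < 1."""
    n = len(games)
    norm = (1 - alpha) / (1 - alpha ** n)
    return [norm * sum(alpha ** (n - i) * g[j] for i, g in enumerate(games, 1))
            for j in range(len(games[0]))]
```

As α approaches 1 the weights flatten and the decaying average recovers the running average.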

the hypothesis that results have temporal dependence; that is, that more recent results are more relevant than older ones. The lower the value of α, the higher the importance of recent results.

3) Home and away average: The hypothesis we aim to test with this predictor is whether teams perform differently at home and away. We keep two running averages for each team, over home and away games respectively. If the upcoming game is a home game for a given team, the running home-game average is our prediction; similarly, we use the away average if the game is away from home.

4) Cluster-based prediction: The aim of this method is to see whether we can predict the behavior of a team based on the type of team it plays. To cluster the teams, we use our team data set, in which each team has 98 features. We first perform PCA to reduce the number of features that go into clustering to n. We then perform k-means clustering to obtain k clusters and keep a running average of each team's features against teams of each cluster; that is, we keep k running averages per team. If the upcoming game of a given team X is against a team belonging to cluster i, our prediction of X's features is the running average of X's features against teams in cluster i.

B. Feature Formation

A distinctive aspect of our approach relative to previous work is that we take into account the statistics of the opposing team before making a prediction. In other words, we create a training example (to be fed to our algorithms) for each match rather than for each team. This training example is the output of this component; the input is the two expected feature vectors of the two teams (each derived with knowledge of the other team by the previous component). Hence, the task of the Feature Formation component is to form an input variable vector for the match from the expected input variables of each team.
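The clustering step of method 4 can be sketched with a plain Lloyd's-algorithm k-means. This is a simplified stand-in: it omits the PCA projection and assumes a caller-supplied deterministic initialization.

```python
def assign(p, centroids):
    """Index of the nearest centroid by squared Euclidean distance."""
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))

def kmeans(points, centroids, iters=20):
    """Lloyd's algorithm with an explicit initial centroid list;
    an empty cluster keeps its previous centroid."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            clusters[assign(p, centroids)].append(p)
        centroids = [[sum(col) / len(col) for col in zip(*cl)] if cl
                     else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids
```

A full implementation would first project the 98 team features onto the top n principal components and then keep, for each team, one running average of its game features per opponent cluster.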
For our implementation, we carried out the simplest method of comparison, which is to take the difference of the two teams' predicted feature values to form the final feature set:

    x = x_A − x_B

An output of y = 1 then indicates team A's victory, while y = 0 indicates team B's victory (basketball matches have no draws; one team must win).

C. Result Prediction

Once we have the predicted feature set of a match, we can predict the outcome of that match with any machine learning algorithm. The algorithms we utilized are linear regression, logistic regression, and support vector machines (SVM). For linear regression, the score difference of the two teams was used as the training and testing target.

IV. IMPLEMENTATION AND RESULTS

The data was trained and tested on each season individually, for the five seasons from 2008 to 2013; the results below are averages over these seasons. The hold-out cross-validation technique was used for training and testing, with 70% of the matches in a season used to train the algorithms and the remaining 30% used for testing. The reported error rates are averages over 100 iterations of this technique with different training and testing splits. Furthermore, the forward-search feature selection algorithm was used to choose the most effective features for each algorithm. Here we break down our results by the statistic prediction method used.

1) Running average: The running average is the simplest predictor of the features of an upcoming game. We simply train our classifiers on a 70/30 training/testing split. The results are shown in Table II. We see that linear regression turns out to be the best classifier in this case. We note that training and test errors were generally very similar; one possible reason is that we used predicted statistics for both.
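A minimal sketch of the Feature Formation and Result Prediction steps: form x = x_A − x_B and fit a logistic regression by batch gradient ascent. This is an illustrative plain-Python implementation, not our actual experimental code; the helper names are ours.

```python
import math

def match_features(x_a, x_b):
    """Feature Formation: element-wise difference of the two teams'
    predicted feature vectors. y = 1 means team A wins."""
    return [a - b for a, b in zip(x_a, x_b)]

def fit_logistic(data, lr=0.1, epochs=500):
    """Batch gradient ascent on the logistic log-likelihood.
    `data` is a list of (x, y) pairs; returns weights with the
    intercept stored in the last slot."""
    dim = len(data[0][0])
    w = [0.0] * (dim + 1)
    for _ in range(epochs):
        grad = [0.0] * (dim + 1)
        for x, y in data:
            z = w[-1] + sum(wi * xi for wi, xi in zip(w[:-1], x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(y = 1)
            for j, xj in enumerate(x):
                grad[j] += (y - p) * xj
            grad[-1] += y - p
        w = [wi + lr * gi / len(data) for wi, gi in zip(w, grad)]
    return w

def predict(w, x):
    """Predicted winner: 1 (team A) if the decision value is positive."""
    z = w[-1] + sum(wi * xi for wi, xi in zip(w[:-1], x))
    return 1 if z > 0 else 0
```

Trained on match-difference vectors, the sign of the learned decision value directly encodes which team is favored.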
The fact that we were using a prediction to train a classifier means that the noise from our predicted statistics cannot be subsumed during training.

TABLE II: Training and test errors when using running averages as predictors

                        Training error   Test error
  Linear regression         0.3231         0.3198
  Logistic regression       0.3281         0.3251
  SVM                       0.3317         0.3304

2) Exponentially decaying average: We feed the decaying averages into our three classifiers to predict the outcomes of the matches in the training and test sets, experimenting with various values of the decay parameter α. The results are shown in Figure 2. As in the previous part, linear regression performs best among the classifiers. Both training and test errors decrease as α increases, which signifies that there is no temporal relationship between the results (that is, results from early in the season are as important as the latest results). The training and test errors for all values of α remain higher than those from running averages.

Fig. 2: Training and test errors for linear regression, logistic regression, and SVM as a function of α (x-axis)

3) Home and away average: The classification part is very similar to the two previous sections. The training and test errors are shown in Table III. We see that linear regression performs better in training and is marginally better in testing; the other two classifiers perform worse, however. This refutes our hypothesis that teams perform differently at home and away from home. We see that the running average, surprisingly, is still our best predictor.

TABLE III: Training and test errors when using separate home and away running averages as predictors

                        Training error   Test error
  Linear regression         0.3140         0.3174
  Logistic regression       0.3371         0.3327
  SVM                       0.3412         0.3397

4) Cluster-based prediction: The classification part is similar to the previous parts. We experimented with different numbers of clusters and different numbers of PCA components during the prediction phase. The results are shown in Figure 3. We find that, in general, the training and test errors increase with the number of clusters and the number of PCA components. Logistic regression performs best among our classifiers here, just slightly better than linear regression performed on the running averages.

Fig. 3: Training and test error rates for various numbers of PCA components (4, 10, 20, 30 from top to bottom) and numbers of clusters (2, 3, 5, 10, x-axis)
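The forward-search feature selection used in our experiments can be sketched as a greedy loop. Here `evaluate` stands for any callable that returns the held-out error of a classifier trained on a candidate feature subset; its implementation is left abstract.

```python
def forward_search(all_features, evaluate):
    """Greedy forward selection: repeatedly add the single feature
    whose inclusion most lowers held-out error; stop when no
    remaining feature improves on the best error so far."""
    selected, best_err = [], float("inf")
    remaining = list(all_features)
    while remaining:
        trial = {f: evaluate(selected + [f]) for f in remaining}
        f, err = min(trial.items(), key=lambda kv: kv[1])
        if err >= best_err:
            break
        selected.append(f)
        remaining.remove(f)
        best_err = err
    return selected, best_err
```

Because each step retrains on every candidate subset, the loop costs O(d^2) evaluations in the worst case for d features, which is affordable at our feature counts.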

V. CONCLUSION

We used a variety of predictors to predict the features of teams in an upcoming basketball game, used those predictions to form a single feature set corresponding to a single match, and then used that feature set to predict the outcome of the match. We found that the running average of the features was consistently the best predictor. We also found that linear regression and logistic regression performed best at classification, with SVM a distant third; the reason for SVM's poor performance is that our data was not cleanly separable. Our training and test errors are in line with the literature referenced below.

REFERENCES

[1] Beckler, Matthew, Hongfei Wang, and Michael Papamichael. "NBA Oracle." 2008-2009.
[2] Bhandari, Inderpal, et al. "Advanced Scout: Data mining and knowledge discovery in NBA data." Data Mining and Knowledge Discovery 1.1 (1997): 121-125.
[3] Cao, Chenjie. "Sports data mining technology used in basketball outcome prediction." 2012.
[4] Miljkovic, Dejan, et al. "The use of data mining for basketball matches outcomes prediction." 8th International Symposium on Intelligent Systems and Informatics (SISY), IEEE, 2010.