Neural Nets Using Backpropagation. Chris Marriott, Ryan Shirley, CJ Baker, Thomas Tannahill

Agenda
- Review of neural nets and backpropagation
- Backpropagation: the math
- Advantages and disadvantages of gradient descent and other algorithms
- Enhancements of gradient descent
- Other ways of minimizing error

Review
- An approach that developed from an analysis of the human brain
- Nodes created as an analog to neurons
- Mainly used for classification problems (e.g. character recognition, voice recognition, medical applications)

Review
- Neurons have weighted inputs, a threshold value, an activation function, and an output
- (Diagram: weighted inputs feeding a unit that produces an output)
- Output = f(Σ(input_i * weight_i)), where f is the activation function
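As a minimal sketch of this neuron model (the function name and step activation are illustrative assumptions, not taken from the slides), a unit computes the weighted sum of its inputs and fires when that sum clears its threshold:

```python
# Minimal sketch of the neuron described above; names and the step activation
# are illustrative assumptions, not taken from the slides.
def neuron_output(inputs, weights, threshold):
    """Step-activation unit: 1 if the weighted input sum exceeds the threshold, else 0."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# A 2-input unit with unit weights and threshold 1.5 behaves as an AND gate.
print(neuron_output([1, 1], [1, 1], 1.5))  # -> 1
print(neuron_output([1, 0], [1, 1], 1.5))  # -> 0
```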

Review
- (Diagram: a 4-input AND network built from two-input AND units)
- Each unit has threshold = 1.5, all weights = 1, and every output is 1 if the unit is active, 0 otherwise

Review
- (Plot: output space for the AND gate over the input corners (0,0), (0,1), (1,0), (1,1), axes Input 1 and Input 2)
- The decision boundary is the line w1*i1 + w2*i2 = 1.5

Review
- (Plot: output space for the XOR gate over the same four input corners)
- No single line separates the two classes, which demonstrates the need for a hidden layer
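The point of the two plots can be checked directly. Below is an illustrative sketch (the grid and helper names are assumptions, not from the slides): a brute-force search over weights and thresholds finds a single threshold unit that reproduces AND, but none that reproduces XOR.

```python
# Sketch reusing the step unit idea above: search a small grid of weights and
# thresholds for a single linear threshold unit matching each gate.
import itertools

corners = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = {c: int(c[0] and c[1]) for c in corners}
XOR = {c: int(c[0] ^ c[1]) for c in corners}

def unit_fits(gate, w1, w2, t):
    """True if a unit with weights (w1, w2) and threshold t matches the gate on all corners."""
    return all((w1 * i1 + w2 * i2 > t) == bool(out) for (i1, i2), out in gate.items())

grid = [x * 0.25 for x in range(-8, 9)]  # -2.0 .. 2.0 in steps of 0.25
print(any(unit_fits(AND, *c) for c in itertools.product(grid, repeat=3)))  # True
print(any(unit_fits(XOR, *c) for c in itertools.product(grid, repeat=3)))  # False
```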

Backpropagation: The Math
- (Diagram: a general multi-layered neural network with an input layer, a hidden layer of units 0 .. i, and an output layer of units 0 .. 9, with connection weights labelled X_j,k and W_j,k)

Backpropagation: The Math. Calculation of hidden layer activation values.

Backpropagation: The Math. Calculation of output layer activation values.
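The activation equations themselves are not reproduced in this transcription. As a hedged sketch of a common formulation (sigmoid units; not necessarily the exact equations on the original slides), each layer's activation values are the activation function applied to the weighted sum of the previous layer's outputs:

```python
# Sketch of the forward pass for one hidden layer (assumed sigmoid activations).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_hidden, W_output):
    """Return (hidden activations, output activations) for input vector x."""
    h = sigmoid(W_hidden @ x)   # hidden layer activation values
    o = sigmoid(W_output @ h)   # output layer activation values
    return h, o

rng = np.random.default_rng(0)
x = rng.random(3)                    # 3 input values
W_hidden = rng.normal(size=(4, 3))   # 4 hidden units
W_output = rng.normal(size=(2, 4))   # 2 output units
print(forward(x, W_hidden, W_output))
```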

Backpropagation: The Math. Calculation of error: δ_k = f(d_k) - f(o_k).

Backpropagation: The Math. Gradient descent objective function; gradient descent termination condition.
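A typical choice for both (an assumed form, since the slides' own equations are not reproduced in this transcription) is the summed squared error over the output units, with training stopped once the error or its improvement falls below a tolerance:

```python
# Sketch of an assumed gradient-descent objective and stopping rule:
# E = 1/2 * sum_k (d_k - o_k)^2, stop when E or its improvement is small.
import numpy as np

def sse_error(desired, outputs):
    desired, outputs = np.asarray(desired, float), np.asarray(outputs, float)
    return 0.5 * np.sum((desired - outputs) ** 2)

def should_stop(error, previous_error, tolerance=1e-4):
    return error < tolerance or abs(previous_error - error) < tolerance

print(sse_error([1, 0], [0.9, 0.2]))   # 0.025
print(should_stop(5e-5, 1.0))          # True: error already below the tolerance
```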

Backpropagation: The Math. Output layer weight recalculation, using the learning rate (e.g. 0.25) and the error at output unit k.

Backpropagation: The Math. Hidden layer weight recalculation.
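Putting the error term and the two weight-update rules together, a hedged sketch of one full backpropagation step (a standard sigmoid formulation with learning rate 0.25 as on the slide; the exact equations from the original slides are not reproduced in this transcription) might look like:

```python
# Sketch of one backpropagation update (assumed standard sigmoid formulation).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, d, W_hidden, W_output, eta=0.25):
    # Forward pass.
    h = sigmoid(W_hidden @ x)
    o = sigmoid(W_output @ h)
    # Error terms: delta at the output units, then propagated back to the hidden units.
    delta_out = (d - o) * o * (1 - o)                    # error at output unit k
    delta_hid = (W_output.T @ delta_out) * h * (1 - h)   # error at hidden unit j
    # Weight recalculation: w <- w + eta * delta * activation feeding that weight.
    W_output = W_output + eta * np.outer(delta_out, h)
    W_hidden = W_hidden + eta * np.outer(delta_hid, x)
    return W_hidden, W_output

rng = np.random.default_rng(0)
x, d = rng.random(3), np.array([1.0, 0.0])
W_h, W_o = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
W_h, W_o = backprop_step(x, d, W_h, W_o)
```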

Backpropagation Using Gradient Descent
Advantages:
- Relatively simple implementation
- Standard method that generally works well
Disadvantages:
- Slow and inefficient
- Can get stuck in local minima, resulting in suboptimal solutions

Local Minima
- (Diagram: an error surface showing a local minimum and the global minimum)

Alternatives To Gradient Descent: Simulated Annealing
Advantages:
- Can guarantee an optimal solution (global minimum)
Disadvantages:
- May be slower than gradient descent
- Much more complicated implementation

Alternatives To Gradient Descent: Genetic Algorithms / Evolutionary Strategies
Advantages:
- Faster than simulated annealing
- Less likely to get stuck in local minima
Disadvantages:
- Slower than gradient descent
- Memory intensive for large nets

Alternatives To Gradient Descent: Simplex Algorithm
Advantages:
- Similar to gradient descent but faster
- Easy to implement
Disadvantages:
- Does not guarantee a global minimum

Enhancements To Gradient Descent: Momentum. Adds a percentage of the last movement to the current movement.

Enhancements To Gradient Descent: Momentum
- Useful for getting over small bumps in the error function
- Often finds a minimum in fewer steps
- Δw(t) = -n*d*y + a*Δw(t-1), where:
  - Δw is the change in weight
  - n is the learning rate
  - d is the error
  - y differs depending on which layer we are calculating
  - a is the momentum parameter
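A minimal sketch of this rule, keeping the slide's symbols n, d, y, and a (the helper name and example numbers are illustrative):

```python
# Sketch of the momentum rule Δw(t) = -n*d*y + a*Δw(t-1), using the slide's symbols.
def momentum_update(weight, prev_delta_w, n, d, y, a):
    """Return (new weight, movement to remember for the next step)."""
    delta_w = -n * d * y + a * prev_delta_w   # current movement plus a fraction of the last one
    return weight + delta_w, delta_w

w, dw = 0.5, 0.0
for step in range(3):
    w, dw = momentum_update(w, dw, n=0.25, d=-0.1, y=0.8, a=0.9)
    print(step, round(w, 4), round(dw, 4))
```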

Enhancements To Gradient Descent: Adaptive Backpropagation Algorithm
- Assigns each weight its own learning rate
- The learning rate is adjusted using the sign of the gradient of the error function from the last iteration
- If the signs are equal, the slope is likely shallow, so the learning rate is increased
- If the signs differ, the slope is likely steep, so the learning rate is decreased
- This speeds up progress along gradual slopes

Enhancements To Gradient Descent: Adaptive Backpropagation
Possible problem:
- Since we minimize the error for each weight separately, the overall error may increase
Solution:
- Calculate the total output error after each adaptation; if it is greater than the previous error, reject that adaptation and calculate new learning rates
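A sketch of the per-weight adaptation (the grow/shrink factors 1.05 and 0.5 are illustrative choices, not values given on the slides):

```python
# Sketch of per-weight adaptive learning rates: grow a rate while the gradient keeps
# its sign, shrink it when the sign flips. Grow/shrink factors are assumptions.
import numpy as np

def adapt_rates(rates, grad, prev_grad, grow=1.05, shrink=0.5):
    same_sign = np.sign(grad) == np.sign(prev_grad)
    return np.where(same_sign, rates * grow, rates * shrink)

rates = np.full(3, 0.25)
prev_grad = np.array([0.2, -0.1, 0.3])
grad = np.array([0.1, 0.2, 0.25])            # the second gradient changed sign
print(adapt_rates(rates, grad, prev_grad))   # [0.2625 0.125  0.2625]
```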

Enhancements To Gradient Descent: SuperSAB (Super Self-Adapting Backpropagation)
- Combines the momentum and adaptive methods
- Uses the adaptive method plus momentum as long as the sign of the gradient does not change
- The two effects add up, giving faster traversal of gradual slopes
- When the sign of the gradient does change, the momentum cancels the drastic drop in learning rate
- This allows the search to roll up the other side of a minimum, possibly escaping local minima

Enhancements To Gradient Descent: SuperSAB
- Experiments show that SuperSAB converges faster than gradient descent
- Overall, the algorithm is less sensitive (and so is less likely to get caught in local minima)
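Combining the two previous sketches gives an illustrative (not authoritative) picture of a SuperSAB-style step, in which each weight carries its own rate and its own momentum term:

```python
# Illustrative SuperSAB-style step combining the adaptive-rate and momentum sketches
# above; constants and names are assumptions, not the published SuperSAB values.
import numpy as np

def supersab_step(w, grad, prev_grad, rates, prev_dw,
                  momentum=0.9, grow=1.05, shrink=0.5):
    same_sign = np.sign(grad) == np.sign(prev_grad)
    rates = np.where(same_sign, rates * grow, rates * shrink)   # per-weight rates
    dw = -rates * grad + momentum * prev_dw                     # momentum carries past sign flips
    return w + dw, rates, dw

w = np.array([0.5, -0.3])
rates = np.full(2, 0.25)
prev_dw = np.zeros(2)
prev_grad = np.array([0.1, 0.1])
grad = np.array([0.2, -0.05])                # second component flipped sign
print(supersab_step(w, grad, prev_grad, rates, prev_dw))
```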

Other Ways To Minimize Error: Varying the training data
- Cycle through the input classes
- Randomly select from the input classes
- Add noise to the training data: randomly change the value of an input node (with low probability)
- Retrain with the expected inputs after the initial training (e.g. speech recognition)
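As a hedged sketch of the first few ideas (the helper names, the sample data, and the 5% flip probability are assumptions for illustration):

```python
# Illustrative helpers for varying the training data: cycle through input classes
# and add low-probability noise to binary input nodes.
import random

def cycle_classes(examples_by_class):
    """Yield (class, example), taking one example from each input class in turn."""
    while True:
        for cls, examples in examples_by_class.items():
            yield cls, random.choice(examples)

def add_input_noise(pattern, flip_prob=0.05):
    """Randomly change the value of a binary input node with low probability."""
    return [1 - v if random.random() < flip_prob else v for v in pattern]

data = {"A": [[1, 0, 1]], "B": [[0, 1, 0]]}
batches = cycle_classes(data)
for _ in range(4):
    cls, x = next(batches)
    print(cls, add_input_noise(x))
```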

Other Ways To Minimize Error: Adding and removing neurons from layers
- Adding neurons speeds up learning but may cause a loss in generalization
- Removing neurons has the opposite effect

Resources
- Artificial Neural Networks, Backpropagation, J. Henseler
- Artificial Intelligence: A Modern Approach, S. Russell & P. Norvig
- 501 notes, J.R. Parker
- www.dontveter.com/bpr/bpr.html
- www.dse.doc.ic.ac.uk/~nd/surprise_96/journal/vl4/cs11/report.html