Visualizing and Understanding Stochastic Depth Networks

Russell Kaplan, Raphael Palefsky-Smith, Liu Jiang
Stanford University
450 Serra Mall, Stanford, CA
{rjkaplan, rpalefsk, ...}

Abstract

In this paper we analyze and visualize Stochastic Depth Networks, an architecture introduced in March of 2016. Stochastic Depth Networks have attracted interest because they significantly reduce training time while beating the then state of the art in accuracy. However, while Stochastic Depth Networks have delivered exceptional results, no academic paper has sought to understand the source of their performance or their limitations. To analyze the representations, error types, and strengths and weaknesses of Stochastic Depth Networks, we conduct seven experiments: t-SNE on layer activations, weight activations, maximally activated images, guided backpropagation, dead neuron counting, robustness to input noise, and linear classifier probes. By comparing and contrasting Stochastic Depth Networks with Fixed Depth Networks (standard residual networks), we find that Stochastic Depth Networks train faster, achieve lower test error, cluster the data similarly, and show more strongly differentiated weight activations.

1. Introduction

Stochastic Depth Networks have demonstrated an impressive ability to train extremely deep neural networks. Inspired by Dropout, Stochastic Depth Networks are essentially ResNets with one small tweak: at training time, some layers are randomly dropped and replaced with the identity function [6]. Stochastic Depth Networks have been shown to reduce training time and lower generalization error, and they can train extremely deep networks. Dropping layers also helps gradient flow and serves as a regularizer by effectively training a random ensemble of networks that are averaged at test time. Previous experiments support the regularization hypothesis, but many questions remain about why Stochastic Depth Networks perform so well. We explore the inner workings of Stochastic Depth Networks through a series of seven experiments.

Figure 1. Layer dropout: the third and fifth blocks are replaced with an identity function. (Huang et al. [6])
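To make the layer-dropping in Figure 1 concrete, the following is a minimal sketch (our own illustration, not the authors' released code) of how a single residual block behaves under stochastic depth, assuming a per-block survival probability:

    import torch
    import torch.nn as nn

    class StochasticDepthBlock(nn.Module):
        """Residual block that is randomly bypassed at training time,
        in the spirit of Huang et al. [6]."""

        def __init__(self, channels, survival_prob):
            super().__init__()
            self.survival_prob = survival_prob
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            if self.training:
                if torch.rand(1).item() < self.survival_prob:
                    return self.relu(x + self.body(x))
                return x  # block dropped: only the identity shortcut remains
            # At test time every block is present; its residual output is
            # scaled by the survival probability, averaging the ensemble.
            return self.relu(x + self.survival_prob * self.body(x))

The test-time scaling mirrors the ensemble-averaging interpretation: each block contributes in proportion to how often it was present during training.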

2. Related Work

The crux of our work involves analyzing deep networks with stochastic depth, an architecture introduced by Huang et al. [6]. To address vanishing gradients and diminished forward flow, both of which are problems associated with training deep convolutional networks with hundreds of layers, Huang et al. propose a training procedure called stochastic depth that realizes the seemingly contradictory goal of training short networks while using deep networks at test time [6]. Huang et al. begin with deep networks but, for each minibatch, randomly drop a subset of layers and bypass them with the identity function. The identity connections are preserved when a layer is dropped, so the inputs from the previous layer feed directly into the next layer in the stack. This approach is complementary to the recent success of residual networks; it reduces training time while improving test error.

There are many unexplored facets of Stochastic Depth Networks. Huang et al. experiment only with architectures that use residual connections, which makes benchmarking against prior work easy and isolates the benefit obtained from stochastic depth [5][6]. This is useful for demonstrating improved performance, but experiments that apply stochastic depth to networks without residual connections would be more informative. As a direct follow-up to Huang et al., our work analyzes why their procedure works so well. While Huang et al. evaluate their Stochastic Depth Network architecture with standard performance measures, they do not verify that their hypotheses about the drivers of its high performance are actually true. For example, although the regularizer hypothesis seems plausible, the closest they come to verifying it is showing that there is less over-fitting.

While Stochastic Depth Networks have not been analyzed in great depth, other types of networks, particularly recurrent neural networks (RNNs) and convolutional neural networks (CNNs), have been explored and visualized [7][9][15]. For example, in a spirit similar to ours, Karpathy et al. use character-level language models as an interpretable testbed to analyze RNNs' representations, predictions, and error types [7]. Their experiments reveal the existence of interpretable cells that keep track of long-range dependencies, and their comparative analysis with finite-horizon n-gram models shows that the source of LSTM improvements is long-range structural dependencies. Because CNNs have demonstrated impressive classification performance on the ImageNet benchmark, there has also been related work on visualizing and understanding CNNs and diagnosing possible further improvements to their performance. Zeiler and Fergus [15] introduce a visualization technique that gives insight into the function of a CNN's intermediate feature layers and the operation of the classifier. Similarly, Simonyan et al. [9] consider two visualization techniques: one generates an image that maximizes a class score, visualizing the network's notion of the class; the other computes a class saliency map specific to a given image and class.

Previous work has also proposed new frameworks for simplifying the training of deep neural networks [6] and new methods for regularizing networks such as RNNs [8]. He et al. reformulate layers as learning residual functions with reference to the layer inputs, as opposed to learning unreferenced functions [4]. Based on evaluations of residual nets of up to 152 layers on the ImageNet dataset, He et al. provide evidence that their residual networks are easier to optimize and gain accuracy as depth increases [4].
Another paper by He et al. analyzes the propagation formulations between residual building blocks and proposes a new residual unit that makes training easier and improves generalization [5]. A series of experiments supports the importance of identity mappings, which can be used as the skip connections and after-addition activations [5]. Interestingly, Huang et al.'s architecture is essentially an exact copy of He et al.'s residual architecture, just with stochastic layer dropping added [5][6]. In that sense, our work primarily builds on He et al.'s and Huang et al.'s papers. As another example, Krueger et al. propose a new method for regularizing RNNs known as zoneout [8]. Zoneout is a per-unit version of stochastic depth [6]: at each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout improves generalization by injecting random noise. However, by preserving rather than dropping hidden units, gradient and state information propagates more easily through time. Krueger et al.'s empirical investigation of various RNN regularizers shows that zoneout yields significant performance improvements across tasks [8]. Notably, Krueger et al.'s work extends the stochastic depth method to RNNs and networks with hidden state [8].

Some more recent work abstracts away from specific neural network types and attempts to avoid the overarching issue of training a new model for every individual problem. For example, Zamir et al. train a model to learn fundamental vision tasks [14]. They employ a method to learn a generic 3D representation that generalizes to unseen 3D tasks, with human-level performance on the supervised task and without any need for fine-tuning [14]. The learned representation shows traits of abstraction ability [14]. Zamir et al. developed independent semantic and 3D representations; integrating them is a future direction of research that we similarly hope to undertake.

3. Model Training

Rather than using a pre-trained model, we opted to train our own. We used the official code from the Huang et al. paper [6]. We train two separate 110-layer ResNets: one with a death rate of 0.5 on a linear decay schedule, and one with no stochasticity. The difference between the two networks lies in the layer death rate. The first network (Fixed Depth) is a conventional ResNet: every layer is trained, and there is no layer death. The second network, the Stochastic Depth Network [6], has a death rate that grows linearly across ResBlock layers to a final value of 0.5 (i.e., the later layers have a higher probability of being dropped for any particular minibatch; see Huang et al. for details).
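For reference, a small sketch (the helper name is ours) of the per-block survival probabilities under this linear decay schedule:

    def survival_probs(num_blocks=54, p_final=0.5):
        """Linear decay schedule from Huang et al. [6]: block l of L
        survives a given minibatch with probability
        p_l = 1 - (l / L) * (1 - p_final), so the death rate grows
        roughly linearly from 0 to 0.5 with depth. A 110-layer CIFAR
        ResNet has 54 two-layer residual blocks."""
        return [1.0 - (l / num_blocks) * (1.0 - p_final)
                for l in range(1, num_blocks + 1)]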

The networks are trained on CIFAR-10 with standard data augmentation. There are 45,000 training images, a 5,000-image validation set, and a 10,000-image test set. The mini-batch size is 128, with 18 residual modules per stage, and we train on an Amazon EC2 instance with an NVIDIA K80 GPU. Training runs for 500 epochs of Stochastic Gradient Descent with Nesterov momentum and a weight decay of 1e-4; the learning rate starts at 0.1 and decays to 0.01 and 0.001 after 250 and 375 epochs respectively. As expected, the Stochastic Depth epochs were quicker because the forward and backward computations for some of the ResBlocks are stochastically skipped: the Stochastic Depth Network had an average epoch time of 210 seconds, while the Fixed Depth Network had an average epoch time of 258 seconds.

Figure 2. Test Error vs. Number of Epochs Trained.

4. Methods and Technical Solution

4.1. t-SNE on Layer Activations

For our first experiment, we use t-SNE to visualize 64-dimensional CNN codes of 4096 different image inputs [12]. t-SNE is a visualization technique that embeds high-dimensional vectors into a low-dimensional space while trying to preserve, in the low-dimensional projection, the relative distances between points in the high-dimensional space. In our case, we embed the 64-dimensional codes into a 2-dimensional space and plot each input image according to the projection of its corresponding code. We use a perplexity of 30 to produce t-SNE plots showing embeddings of image codes from the Stochastic and Fixed Depth Networks respectively. We visualize the activations of the following layers: layer 36 (after the first third of the network), layer 72 (after the second third of the network), and layer 108 (after the final spatial average pooling layer, just before the final fully-connected layer). We take the outputs of all the neurons in each of those three layers and use them as feature vectors. For layer 108, the output is a 64-dimensional vector. For layers 36 and 72, which are much earlier in the net, the output is large (greater than 16,000 dimensions). Because t-SNE cannot run efficiently on such high-dimensional features, we use spatial average pooling, a 2D average-pooling operation over each feature map, to reduce the outputs of layers 36 and 72 down to 64 values.
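A minimal sketch of this pooling-and-embedding step, assuming the layer activations have already been extracted into a NumPy array (scikit-learn's t-SNE implementation is our choice here; the method itself is described in [12]):

    import numpy as np
    from sklearn.manifold import TSNE

    def embed_layer(feature_maps):
        """feature_maps: array of shape (N, C, H, W) holding one layer's
        activations for N images. Average-pool away the spatial positions
        to get a C-dimensional code per image, then embed the codes in
        2-D with t-SNE at perplexity 30."""
        codes = feature_maps.mean(axis=(2, 3))  # (N, C) pooled codes
        return TSNE(n_components=2, perplexity=30).fit_transform(codes)

Each of the 4096 test images is then drawn at its 2-D coordinates, colored by class label.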

4.2. Weight Activations

We plot the activations of the weights at different layers for a given input image. At a high level, visualizing the activations allows us to identify which neurons at different layers in the network get excited about, or respond to, which specific inputs.

4.3. Maximally Activated Images

We ran the entire test set through the network, examined a single neuron, and recorded its activation for each image. We then sorted the images by this activation and selected the top five. We completed this process for two different neurons across six different layers (layers 1, 21, 41, 61, 81, and 101) on both networks.

4.4. Guided Backpropagation

To visualize what input is maximally exciting to specific neurons, we employ guided backpropagation, a method described by Springenberg et al. [10]. Rather than masking out values corresponding to negative entries of the bottom data (plain backpropagation) or of the top gradient (the deconvolutional network approach), guided backpropagation masks out the values for which at least one of these is negative [10]. In contrast to traditional backpropagation, guided backpropagation adds an additional guidance signal, thereby preventing the backward flow of negative gradients [10]. Unlike the deconvolutional network approach, guided backpropagation works well without switches and allows visualization of intermediate as well as final layers of networks. First, we find which input images produce the highest activations for several specific neurons early and late in the network. Next, we change the backward pass of our network so that the gradient of the layer whose neuron we wish to visualize is set to all 0s, except for the specific neuron we visualize, which is set to all 1s. We then modify the gradients of the preceding ReLU layers according to the masking rule above.

4.5. Dead Neurons

We run the entire test set through the network and, at each layer, note the neurons that have zero activation. Because activations are recorded after the ReLU, "zero" really means a pre-activation less than or equal to 0. After running the entire test set through, we record which neurons are zero-activated for every single image, tally the number of such dead neurons per layer, and compare between the Stochastic and Fixed networks. Note that we chose a cumulative plot because the raw per-layer counts fluctuate between 0 and 5 at almost every layer and are unreadable; by cumulative, we mean the total number of dead neurons up to and including a given layer.
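The tally itself can be sketched as follows (our own illustration), assuming the post-ReLU activations have been collected with one value per neuron per image (treating each channel of a convolutional layer as one neuron is an assumption of this sketch):

    import numpy as np

    def cumulative_dead_neurons(activations_per_layer):
        """activations_per_layer: list of arrays, one per ReLU layer, each
        of shape (N, C): the post-ReLU response of each of C neurons to
        all N test images. A neuron is dead if it never activates, i.e.
        its maximum over the whole test set is 0."""
        dead_per_layer = [int((layer.max(axis=0) == 0).sum())
                          for layer in activations_per_layer]
        return np.cumsum(dead_per_layer)  # cumulative, as in Figure 7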
4.6. Robustness to Input Noise

Huang et al. hypothesize that stochastic depth acts as a regularizer. They cite the higher training loss but lower test error of the Stochastic Depth Network after convergence as evidence of a regularizing effect. The authors also draw a comparison to Dropout, whose regularizing benefits are well studied, for example by Wager et al. [11][13]. We test this hypothesis by adding different types and levels of noise to image inputs and comparing the networks' performance. For this experiment, we add noise to images and examine how much the noise affects error [3]. If the Stochastic Depth Network had a better test error than the Fixed Depth Network when the same amount of noise is added, the regularization hypothesis would be supported. The test error is calculated across the entire test set: we run every image through the network, applying independently sampled noise of equal strength to each image, and record the overall accuracy. Each noise function has a noise parameter that we vary (the x-axis of our plots), and as the parameter increases, the image becomes noisier. At this point in the pipeline the images have already been mean-subtracted and scaled to a standard deviation of 1, so most pixel values lie roughly between -1 and 1, and adding noise of magnitude 0.5 is quite significant. For the Gaussian (normally distributed) noise, we add to each pixel a random value with zero mean and a standard deviation given by the x-axis. For the uniformly distributed noise, we add to each pixel a random value drawn uniformly from the range 0 to the x-axis value. For the Gaussian blur, we blur the image with a Gaussian kernel using "same" convolution semantics, keeping the image at its initial size. The noise parameter for the Gaussian blur is σ, which controls the size of the filter; a filter with σ = 5 combines more pixels than a filter with a smaller σ.
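A sketch of the three perturbations (the library choices here are ours and not necessarily those used for the experiments):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def perturb(image, kind, amount, rng=np.random.default_rng(0)):
        """Apply one noise type at strength `amount` to a (C, H, W) image
        that is already mean-subtracted with unit standard deviation."""
        if kind == "normal":    # Gaussian noise: zero mean, std = amount
            return image + rng.normal(0.0, amount, size=image.shape)
        if kind == "uniform":   # uniform noise drawn from [0, amount]
            return image + rng.uniform(0.0, amount, size=image.shape)
        if kind == "blur":      # Gaussian blur with sigma = amount; the
            # output keeps the input size ("same" convolution semantics)
            return gaussian_filter(image, sigma=(0.0, amount, amount))
        raise ValueError(f"unknown noise kind: {kind}")

The test error is then recomputed on the perturbed test set at each parameter value.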

4.7. Linear Classifier Probes

Because neural network models have a reputation for being black boxes, we employ methods to better visualize and understand what is being done at each layer of a Stochastic Depth Network. One such method is the linear classifier probe [1], which essentially measures how linearly separable the activations of a particular layer are with respect to the final class labels. Each probe can use only the hidden units of one specific intermediate layer as discriminating features, and the probes do not affect the training phase of our models, as we add them after training. Intermediate layers are particularly interesting: the first layers of a convolutional network for image recognition contain relatively general filters, in that they would likely continue to perform well on a different image dataset, while the last layers are often specific to a dataset and have to be retrained for a new one. Intermediate layers are therefore highly relevant for pinpointing when this transition occurs and whether it is progressive or sudden.
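A minimal sketch of a probe in the spirit of Alain and Bengio [1], assuming the intermediate activations have been extracted and flattened into a feature matrix (the hyperparameter values here are illustrative, not the ones we used):

    import torch
    import torch.nn as nn

    def train_probe(features, labels, num_classes=10, steps=1000, lr=0.01):
        """Fit a linear softmax classifier on frozen intermediate-layer
        activations. features: (N, D) float tensor; labels: (N,) long
        tensor. The probe never backpropagates into the network itself."""
        probe = nn.Linear(features.shape[1], num_classes)
        opt = torch.optim.SGD(probe.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            opt.zero_grad()
            loss = loss_fn(probe(features), labels)
            loss.backward()
            opt.step()
        return probe  # the probe's test error measures linear separability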

5. Experimental Results

5.1. t-SNE on Layer Activations

As seen in Figure 3, both the Stochastic Depth Network and the Fixed Depth Network learned to cluster the data. Notice the clean separability of the final layer and the different color distributions of the plots for layer 72. The cluster patterns in the t-SNE of layer 72, two thirds of the way into the network, show that the fixed network's activations are nearby when low-level features like background color are nearby in the input space. (On a zoomed-in version of this plot, one can clearly observe birds and planes intermixed in the same regions when their backgrounds are both a blue sky, for example; similarly for deer and horses on grassy backgrounds.) This contrasts with the layer 72 t-SNE for the Stochastic Depth Network, where the background colors are relatively jumbled but there are more examples of images of the same class congregating together despite different background patterns and colors. One explanation for the differences at layer 72, supported by later experiments, is that the Stochastic Depth Network cares less about mastering low-level image feature extraction and devotes more representational capacity to learning and separating higher-level features. We can begin to observe this in the t-SNE plot: by layer 72, the Stochastic Depth Network no longer clusters by background color but has begun to cluster by higher-level semantic significance. (Note that this does not mean the data are more linearly separable into classes from the layer 72 activations in the stochastic variant; as our later linear probe experiments show, the opposite is actually the case. But we can see that high-level features are given more representational weight at this layer, even if those representations aren't yet class-separating.)

Figure 3. t-SNE plots for the activations at layers 36, 72, and 113 (the final spatial average pooling layer) of 4096 test set images.

5.2. Weight Activations

We observe that, across all the inputs we visualized, late-layer weight activations are more strongly differentiated between neurons in the Stochastic Depth Network than in the Fixed Depth Network. In other words, in a given late-stage layer of the Fixed Depth Network the activations are more diffuse across filters (no single filter activates as strongly, and more filters activate weakly) than in the corresponding layer of the Stochastic Depth Network. For a clarifying illustration of this result, see Figure 4. The distribution and strength of the weight activations may indicate that the Stochastic Depth Network discriminates between different classes of input image more confidently. Another observation is that immediately after each time the number of filters (neurons, displayed as tiles) doubles, nearly half of them are often all black. Our dead neuron experiment confirms this observation: there is a spike in neuron death each time the number of filters doubles.

Figure 4. Weight activations at various depths of the two different networks, for the same input image. Note that the actual input image was in color.

5.3. Maximally Activated Images

By the end of the network, the neurons have learned higher-order features, and our results validate the hierarchical assumption behind ConvNets. As shown in Figure 5, at layer 1 we see very basic responses, which makes sense because it is early in the network: Fixed Neuron 2 at layer 1 responds to red objects regardless of what they are, and Stochastic Neuron 6 at layer 1 responds to green objects regardless of background. By layer 101, the neurons are class-selective: Fixed Neuron 2 is an emu neuron, Fixed Neuron 6 is a car neuron, and both Stochastic Neurons are horse neurons.

Figure 5. This chart displays the top-5 maximally activating images for various ReLU neurons within the Fixed Depth and Stochastic Depth networks. At each layer, we plot the images that maximally activate Neurons 2 and 6 (randomly selected).
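The selection procedure behind Figure 5 is straightforward; a sketch, assuming the chosen neuron's activation has been recorded for every test image:

    import numpy as np

    def top_activating_images(images, activations, k=5):
        """images: array of N test images; activations: (N,) array of one
        neuron's (post-ReLU) response to each image. Returns the k images
        that excite the neuron most strongly."""
        order = np.argsort(activations)[::-1]  # descending by activation
        return images[order[:k]]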

5.4. Guided Backpropagation

In both the Stochastic and Fixed Depth Networks, neurons behave as expected. The early-layer neurons react strongly to color and texture, whereas the late-layer neurons react to more semantically meaningful units (e.g., the wheels and headlights of cars, the heads of birds, sticks that the birds often sit on, and so forth). These results were consistent across the various neurons we visualized at different layers of each network. Overall, we observed no strong difference between the Stochastic and Fixed Depth Networks through this method.

Figure 6. Guided backpropagation visualizations of the excitations of neurons in the first and last ReLU layers of both the Fixed and Stochastic Depth networks. Within each block, each row represents a different neuron in the layer. The 6 tiles on the right are the top 6 maximally activating images for that neuron, and the tiles on the left are the guided backpropagation visualizations of the neuron corresponding to each of those 6 image inputs. The all-black tiles in the last row of the first ReLU of the fixed network show a dead neuron: it has an activation of 0 (and thus no gradient signal) for all images in the dataset.
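For reference, the ReLU gradient masking that produces these visualizations can be sketched as a custom autograd function (our own rendering of the rule from Springenberg et al. [10]):

    import torch

    class GuidedReLU(torch.autograd.Function):
        """ReLU whose backward pass implements guided backpropagation:
        the gradient is zeroed wherever either the forward input or the
        incoming gradient is negative."""

        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return x.clamp(min=0)

        @staticmethod
        def backward(ctx, grad_out):
            (x,) = ctx.saved_tensors
            return grad_out * (x > 0).float() * (grad_out > 0).float()

To visualize one neuron, every ReLU in the network is swapped for this variant, the gradient at the target layer is set to zero except for a one at the chosen neuron, and the resulting gradient is propagated back to the input image.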

5.5. Dead Neurons

The plot in Figure 7 shows that stochasticity does not help with the dead neuron problem; in fact, the problem is more pronounced in the early layers. Nonetheless, the Stochastic Depth Network has relatively fewer dead neurons in the later layers. One intuition for this second point is that the later layers are dropped with higher probability under the linear decay schedule, in which the probability of survival decays linearly with depth. Because the later layers in the Stochastic Depth Network are dropped frequently, having more live neurons there matters more: any given layer is less likely to be present, so when it is, its neurons need to carry useful signal.

Figure 7. This plot shows the accumulation of dead neurons in each network: i.e., how many neurons up through the layer marked on the x-axis do not activate for any input image? We note that the Stochastic Depth network accumulates more dead neurons earlier, but the Fixed Depth network gains more later; they end up with roughly equal total numbers of dead neurons.

5.6. Robustness to Input Noise

As shown in Figure 8, the Stochastic Depth Network is less robust to image noise than the Fixed Depth Network for both Gaussian (normally distributed) noise and uniformly distributed noise. The Stochastic Depth Network performs slightly better under Gaussian blur perturbations, although it is questionable how meaningful those results are for σ > 3, given how much of the image is destroyed at larger σ; for examples, see the images below the graphs of Figure 8. The regularization hypothesis may therefore not be universally true, especially for low-level perturbations like image noise. Recall that the Stochastic Depth Network has nearly twice as many dead neurons as the Fixed Depth Network in the earliest layers, which are responsible for the pixel-level pattern matching that image noise most directly disrupts. This, in conjunction with the dead neuron experiment described earlier, suggests that the early layers of a Stochastic Depth Network are actually less robust than those of a Fixed Depth Network. The fact that test-time performance is still generally better for Stochastic Depth suggests that having the most robust early layers is perhaps not that important: the main sources of remaining error on datasets like CIFAR may lie not in early-layer feature activations but in later, higher-level ones. This is consistent with the general observation made by Deng et al. [2] that CNNs can often vastly outperform humans on fine-grained pattern recognition tasks in images (e.g., distinguishing between many close breeds of dogs) but be inferior in classification when high-level features of the image are heavily skewed (e.g., extreme occlusions).

Figure 8. These plots display the effect of noise on test error for both Fixed Depth and Stochastic Depth Networks. The x-axis is the amount of noise applied to images in the test set, and the y-axis is the corresponding error on the noise-corrupted test sets. Each plot shows a different type of noise (uniformly distributed, normally distributed, and Gaussian blur), and the images below provide an example of each noise type applied at various strengths to an image.

5.7. Linear Classifier Probes

In Figure 9, we plot the results of our linear probe experiments. Interestingly, the fixed network's intermediate-layer activations are generally more linearly separable into the class labels than those of the Stochastic Depth Network. The only exceptions are at the earliest layer we probed, layer 18, and at the last non-fully-connected layer of the network (the output of the 8x8 average spatial pooling layer). Clearly, the activations at the last layer will be more linearly separable for the Stochastic Depth Network, as this is the network that ultimately achieved lower test error. However, it is interesting that essentially all of its intermediate layers produce activations that are less separable into classes. Recall that it is not really the job of intermediate layers to produce linearly separable class activations. That is only the job of the last layer of the network; the remaining layers are simply supposed to produce the most useful possible feature activations for further processing by the next layer. Here we see that, in the process of doing a better job overall, Stochastic Depth Networks produce less class-separating intermediate activations. Why does that happen, and what does it suggest? One interpretation comes from recalling what stochastic depth actually does: by randomly dropping the activations of some layers, and letting activations flow only through the skip connection when that happens, stochasticity essentially asks more of each intermediate layer: be useful to the next layer, but also be useful to the layer after that when the next layer is not present. We suspect this results in a kind of "representational hedging": because the task demanded of each intermediate layer changes from epoch to epoch, depending on which layers are dropped, each layer does worse on any one individual task, such as linear separability. These can be thought of as blurry representations that need to work well in multiple different contexts.

Figure 9. Test errors of linear probes trained independently at different layers. Probes are trained with no pooling (which means early-layer probes have many tens of thousands of parameters) and a fixed learning rate until convergence.

6. Conclusion

We conducted seven experiments: t-SNE on layer activations, weight activations, maximally activated images, guided backpropagation, dead neurons, robustness to input noise, and linear classifier probes. One of our overarching conclusions, supported by the overall test error and by our dead neuron, t-SNE, and linear probe experiments, is that Stochastic Depth Networks are less tuned for low-level feature extraction but more tuned for higher-level feature differentiation. This is supported by their higher susceptibility to error after low-level noise is introduced, and by the intermediate t-SNE plots, which show higher-level features being attended to earlier in the network, whereas background color is the primary clustering factor for the Fixed Depth Network. The difference in robustness to noise also adds nuance to Huang et al.'s suggestion that stochasticity acts as a regularizer: increased regularization would normally be expected to provide greater invariance to input noise. Our interpretation is that while stochasticity likely still has a regularizing effect (test error is lower but training loss is higher after convergence), the effect regularizes across higher-level features of the image rather than low-level perturbations. Overall, the representations learned by these networks remain rather similar. The performance differs, but not drastically, and the maximally activating images and guided backpropagation visualizations do not reveal major contrasts. But there is nonetheless a hint that the distribution of representational power is slightly different for each network. Stochastic Depth Networks are a fascinating architectural idea, and we look forward to continued research on their utility.

7. Future Work

We see many promising avenues for future work and plan to conduct the following additional experiments, among others:

1. Performing analyses on datasets beyond CIFAR-10, including MNIST and (a subset of) ImageNet. This way, we can collect quantitative results independent of the specific dataset, ensuring that our findings do not depend on the properties of CIFAR-10 in particular.

2. Evaluating more architectures, including fully-connected nets and nets without any residual connections.

3. Determining how well the representations learned by Stochastic Depth Networks can be used for transfer learning on new tasks.

4. Finally, as more techniques for neural network visualization and understanding are developed, we would like to apply these generalized techniques to Stochastic Depth Networks in particular, perhaps uncovering relationships that our analyses missed.

References

[1] G. Alain and Y. Bengio. Understanding Intermediate Layers Using Linear Classifier Probes. arXiv preprint, 2016.

[2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.

[3] S. Dodge and L. Karam. Understanding How Image Quality Affects Deep Neural Networks. arXiv preprint, 2016.

[4] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.

[5] K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. In ECCV, 2016.

[6] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep Networks with Stochastic Depth. In ECCV, 2016.

[7] A. Karpathy, J. Johnson, and L. Fei-Fei. Visualizing and Understanding Recurrent Networks. In ICLR Workshop, 2016.

[8] D. Krueger, T. Maharaj, J. Kramár, M. Pezeshki, N. Ballas, N. R. Ke, A. Goyal, Y. Bengio, H. Larochelle, A. Courville, and C. Pal.
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations. arXiv preprint, 2016.

[9] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep Inside Convolutional Networks: Visualizing Image Classification Models and Saliency Maps. In ICLR Workshop, 2014.

[10] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for Simplicity: The All Convolutional Net. In ICLR Workshop, 2015.

[11] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.

[12] L. J. P. van der Maaten and G. E. Hinton. Visualizing Data Using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.

[13] S. Wager, S. Wang, and P. Liang. Dropout Training as Adaptive Regularization. In NIPS, 2013.

[14] A. Zamir, P. Agrawal, T. Wekel, J. Malik, and S. Savarese. Generic 3D Representations via Pose Estimation and Matching. In ECCV, 2016.

[15] M. Zeiler and R. Fergus. Visualizing and Understanding Convolutional Networks. In ECCV, 2014.


Improving Context Modeling for Video Object Detection and Tracking

Improving Context Modeling for Video Object Detection and Tracking Learning and Vision Group (NUS), ILSVRC 2017 - VID tasks Improving Context Modeling for Video Object Detection and Tracking National University of Singapore: Yunchao Wei, Mengdan Zhang, Jianan Li, Yunpeng

More information

Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories

Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories arxiv:1609.04849v4 [stat.ml] 28 Dec 2017 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories Mark Harmon, 1 Patrick Lucey, 2 Diego Klabjan 1 1 Northwestern University 2

More information

Rogue Valley Metropolitan Planning Organization. Transportation Safety Planning Project. Final Report

Rogue Valley Metropolitan Planning Organization. Transportation Safety Planning Project. Final Report Rogue Valley Metropolitan Planning Organization Transportation Safety Planning Project Final Report April 23, 2004 Table of Contents Introduction 2 Scope of Work Activities... 2 Activity #1...2 Activity

More information

Golf s Modern Ball Impact and Flight Model

Golf s Modern Ball Impact and Flight Model Todd M. Kos 2/28/201 /2018 This document reviews the current ball flight laws of golf and discusses an updated model made possible by advances in science and technology. It also looks into the nature of

More information

Section I: Multiple Choice Select the best answer for each problem.

Section I: Multiple Choice Select the best answer for each problem. Inference for Linear Regression Review Section I: Multiple Choice Select the best answer for each problem. 1. Which of the following is NOT one of the conditions that must be satisfied in order to perform

More information

Journal of Chemical and Pharmaceutical Research, 2014, 6(3): Research Article

Journal of Chemical and Pharmaceutical Research, 2014, 6(3): Research Article Available online www.jocpr.com Journal of Chemical and Pharmaceutical Research 2014 6(3):304-309 Research Article ISSN : 0975-7384 CODEN(USA) : JCPRC5 World men sprint event development status research

More information

Neural Network in Computer Vision for RoboCup Middle Size League

Neural Network in Computer Vision for RoboCup Middle Size League Journal of Software Engineering and Applications, 2016, *,** Neural Network in Computer Vision for RoboCup Middle Size League Paulo Rogério de Almeida Ribeiro 1, Gil Lopes 1, Fernando Ribeiro 1 1 Department

More information

DEEP LEARNING FOR LONG-TERM VALUE INVESTING

DEEP LEARNING FOR LONG-TERM VALUE INVESTING DEEP LEARNING FOR LONG-TERM VALUE INVESTING Jonathan Masci Co-Founder of NNAISENSE General Manager at Quantenstein COMPANY STRUCTURE Joint Venture between Large-scale NN solutions for Asset manager since

More information

A Network-Assisted Approach to Predicting Passing Distributions

A Network-Assisted Approach to Predicting Passing Distributions A Network-Assisted Approach to Predicting Passing Distributions Angelica Perez Stanford University pereza77@stanford.edu Jade Huang Stanford University jayebird@stanford.edu Abstract We introduce an approach

More information

Recognition of Tennis Strokes using Key Postures

Recognition of Tennis Strokes using Key Postures ISSC 2010, UCC, Cork, June 23 24 Recognition of Tennis Strokes using Key Postures Damien Connaghan, Ciarán Ó Conaire, Philip Kelly, Noel E. O Connor CLARITY: Centre for Sensor Web Technologies Dublin City

More information

Queue analysis for the toll station of the Öresund fixed link. Pontus Matstoms *

Queue analysis for the toll station of the Öresund fixed link. Pontus Matstoms * Queue analysis for the toll station of the Öresund fixed link Pontus Matstoms * Abstract A new simulation model for queue and capacity analysis of a toll station is presented. The model and its software

More information

Contingent Valuation Methods

Contingent Valuation Methods ECNS 432 Ch 15 Contingent Valuation Methods General approach to all CV methods 1 st : Identify sample of respondents from the population w/ standing 2 nd : Respondents are asked questions about their valuations

More information

Predicting Tennis Match Outcomes Through Classification Shuyang Fang CS074 - Dartmouth College

Predicting Tennis Match Outcomes Through Classification Shuyang Fang CS074 - Dartmouth College Predicting Tennis Match Outcomes Through Classification Shuyang Fang CS074 - Dartmouth College Introduction The governing body of men s professional tennis is the Association of Tennis Professionals or

More information

This file is part of the following reference:

This file is part of the following reference: This file is part of the following reference: Hancock, Timothy Peter (2006) Multivariate consensus trees: tree-based clustering and profiling for mixed data types. PhD thesis, James Cook University. Access

More information

Atmospheric Rossby Waves Fall 2012: Analysis of Northern and Southern 500hPa Height Fields and Zonal Wind Speed

Atmospheric Rossby Waves Fall 2012: Analysis of Northern and Southern 500hPa Height Fields and Zonal Wind Speed Atmospheric Rossby Waves Fall 12: Analysis of Northern and Southern hpa Height Fields and Zonal Wind Speed Samuel Schreier, Sarah Stewart, Ashley Christensen, and Tristan Morath Department of Atmospheric

More information

A Hare-Lynx Simulation Model

A Hare-Lynx Simulation Model 1 A Hare- Simulation Model What happens to the numbers of hares and lynx when the core of the system is like this? Hares O Balance? S H_Births Hares H_Fertility Area KillsPerHead Fertility Births Figure

More information

Tutorial for the. Total Vertical Uncertainty Analysis Tool in NaviModel3

Tutorial for the. Total Vertical Uncertainty Analysis Tool in NaviModel3 Tutorial for the Total Vertical Uncertainty Analysis Tool in NaviModel3 May, 2011 1. Introduction The Total Vertical Uncertainty Analysis Tool in NaviModel3 has been designed to facilitate a determination

More information

Advanced Hydraulics Prof. Dr. Suresh A. Kartha Department of Civil Engineering Indian Institute of Technology, Guwahati

Advanced Hydraulics Prof. Dr. Suresh A. Kartha Department of Civil Engineering Indian Institute of Technology, Guwahati Advanced Hydraulics Prof. Dr. Suresh A. Kartha Department of Civil Engineering Indian Institute of Technology, Guwahati Module - 4 Hydraulic Jumps Lecture - 1 Rapidly Varied Flow- Introduction Welcome

More information