View-invariant Estimation of Height and Stride for Gait Recognition

Chiraz BenAbdelkader (1), Ross Cutler (2), and Larry Davis (1)
(1) University of Maryland, College Park, {chiraz,lsd}@umiacs.umd.edu
(2) Microsoft Research, rcutler@microsoft.com

Abstract. We present a parametric method to automatically identify people in monocular low-resolution video by estimating the height and stride parameters of their walking gait. Stride parameters (stride length and cadence) are functions of body height, weight, and gender. Previous work has demonstrated the effective use of these biometrics for identification and verification of people. In this paper, we show that performance is significantly improved by using height as an additional discriminant feature. Height is estimated by robustly segmenting the person from the background and fitting their apparent height to a time-dependent model. The method is correspondence-free and works with low-resolution images of people. It is also view-invariant, although performance is optimal in near fronto-parallel configurations. Identification accuracy is estimated at 47% for fronto-parallel sequences of 41 people, and 65% for non-fronto-parallel sequences of 17 people, compared with 18% and 51%, respectively, when only stride and cadence are used.

1 Introduction

Gait is an emergent behavioral biometric that involves the use of an individual's walking style to determine or validate identity [1]. Because it can be measured at a distance, i.e. without requiring interaction with the subject, there has been increased interest in using gait features for human identification in surveillance applications [2]. The motivation for this line of research comes from psychophysical experiments [3-5] as well as biomechanics studies [6-8], both of which provide evidence that gait dynamics (and the motion patterns they generate) contain a signature that is characteristic of, and possibly unique to, each individual. This is what makes each person seem to have a distinctive, idiosyncratic way of walking. However, complete and accurate characterization of gait dynamics requires knowledge of the kinematics of tens, if not hundreds, of body landmarks (such as joints and extremities) [7]. Existing video-based gait analysis methods rely on markers or wearable instruments [8]. Achieving this via automatic feature extraction and tracking in low-resolution surveillance video remains difficult and error-prone due to self-occlusion, insufficient texture, etc.

In this paper, we propose a correspondence-free, view-invariant method to robustly compute four gait variables from low-resolution video. The first two variables characterize the time-variation of the person's apparent height (the mean and amplitude of its vertical oscillation), and the other two characterize the stride dimensions (the cadence and stride length). We will show that, while these four gait features obviously do not uniquely characterize gait dynamics, they are quite effective in filtering identity.

We use the term apparent height to refer to the person's height while walking, which is a time-dependent quantity and is different from (though related to) their stature, or standing height, which is constant. Because apparent height is a function of the person's cadence (as we shall discuss later), it can also be regarded as a gait parameter. Accurate estimation of these gait features is achieved by exploiting the periodic nature of human walking and computing the features over many steps. Cadence is estimated via periodicity analysis of the binary silhouette. Using a calibrated camera, the stride length is estimated by first tracking the person and estimating the distance they travelled over a period of time, and then counting the number of steps (again using periodicity). Height parameters are estimated by robustly segmenting the person from the background and fitting their apparent height to a time-dependent model.

We evaluate the discrimination power of these four gait variables using K-nearest-neighbor classification in the 4-D feature space they span. Identification accuracy is estimated at 47% for a set of fronto-parallel sequences of 41 people (4 of each), and 65% for non-fronto-parallel sequences of 17 people (7 of each on average). This is a significant increase over 18% and 51%, respectively, obtained when using stride parameters only for classification. The method works with low-resolution images of people, and is robust to changes in lighting, clothing, and random (non-systematic) tracking errors. It is also in principle view-invariant, since it uses 3D quantities for classification. However, performance is optimal in near fronto-parallel configurations, in which estimation of the stride and height parameters is most accurate.

1.1 Assumptions

Our method makes the following assumptions:

- People walk upright on a known plane with constant velocity (i.e. in both speed and direction) for at least 3-4 seconds.
- The camera is static and is calibrated with respect to the ground plane.
- The frame rate is greater than twice the frequency of the walking.

2 Background and Related Work

Several approaches already exist in the computer vision literature for automatic person identification from gait (termed gait recognition) in video [9-13, 2, 14-17]. Closely related to these are methods for human detection in video, which essentially classify moving objects as human or non-human [18-20], and those for human motion classification, which recognize different types of human locomotion, such as walking, running, limping, etc. [21, 22]. These approaches are typically either holistic [9-12, 2, 14] or model-based [21, 18, 22, 15, 16, 23, 17]. In the former, gait is characterized by the statistics of the spatiotemporal patterns generated by the silhouette of the walking person in the image; that is, a set of features (the gait signature) is computed from these patterns and used for classification. Model-based approaches, on the other hand, use a model of either the person's shape (structure) or motion in order to recover features of gait mechanics, such as stride dimensions [18, 23, 17] and kinematics of joint angles [22, 15, 16].

In [18], Yasutomi and Mori estimate cadence and stride length in much the same way as we do (i.e. based on periodicity analysis of image features and a calibrated camera), and use them for pedestrian detection.
Specifically, they assume two independent Gaussian distributions for the stride length and cadence of typical human walking, and classify a moving object as human if its computed cadence and stride length values are at most 3 standard deviations away from

the mean values. Obviously this does not exploit the strong correlation between stride length and cadence, unlike our method. Cutler and Davis [19] use the periodicity of image similarity plots to estimate the stride of a walking or running person, assuming a calibrated camera. They contend that stride could be used as a biometric, though they did not conduct any study showing how useful it is as one. In [23], Davis demonstrates the effectiveness of stride length and cadence in discriminating the walking gaits of children and adults, though he relies on motion-capture data to extract these features.

Perhaps the method most akin to ours is that of Johnson and Bobick [17], in which they extract four static parameters, namely the body height, torso length, leg length and step length, and use them for person identification. These features are estimated as the distances between certain body parts when the feet are maximally apart (i.e. at the double-support phase of walking). Hence, they too use stride parameters (step length only) and height-related parameters (stature, leg length and torso length) for identification. However, they consider stride length to be a static gait parameter, while in fact it varies considerably for any one individual over the range of their free-walking speeds. The typical range of variation for adults is about 30 cm [6], which is hardly negligible. This is why we use both cadence and stride length. Furthermore, their approach for estimating the step length does not exploit the periodicity of walking, and is hence not robust to systematic tracking and calibration errors.

Fig. 1. Overview of the method: model the background; segment and track the person to obtain a sequence of binary silhouettes (width, height, 2D position); compute the gait period; estimate the 3D trajectory and the apparent height using the camera calibration and ground plane; compute cadence and stride length; fit the height model; identify the person.

3 Method

The algorithm consists of three main modules, as shown in Figure 1.
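To make the three-module structure of Figure 1 concrete before detailing each step, here is a schematic Python sketch. The data structure and function names (SilhouetteObs, segment_and_track, estimate_gait_features, classify_gait) are our own illustrative placeholders, not code from the paper; only the dataflow follows the description below.

```python
from dataclasses import dataclass
from typing import List, Sequence, Tuple

@dataclass
class SilhouetteObs:
    """Per-frame output of the tracking module (module 1)."""
    frame: int
    bbox: Tuple[int, int, int, int]   # (x, y, width, height) of the silhouette bounding box
    mask: object                      # binary silhouette (e.g. a 2-D numpy array)

def segment_and_track(video_frames) -> List[SilhouetteObs]:
    """Module 1: background subtraction plus bounding-box tracking (placeholder)."""
    raise NotImplementedError

def estimate_gait_features(track: Sequence[SilhouetteObs], calibration) -> Tuple[float, float, float, float]:
    """Module 2: return (mu, alpha, cadence, stride_length) for a tracked sequence (placeholder)."""
    raise NotImplementedError

def classify_gait(features: Tuple[float, float, float, float], gallery) -> str:
    """Module 3: nearest-neighbour identification in the 4-D feature space (placeholder)."""
    raise NotImplementedError

def identify_person(video_frames, calibration, gallery) -> str:
    """End-to-end pipeline corresponding to Figure 1."""
    track = segment_and_track(video_frames)
    features = estimate_gait_features(track, calibration)
    return classify_gait(features, gallery)
```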

The first module tracks the walking person in each frame and extracts their binary silhouette along with its 2D position in the image. Since the camera is static, we use a non-parametric background modeling technique for segmentation that is quite robust to lighting changes, camera jitter, and the presence of shadows [24]. Foreground blobs are tracked from frame to frame via simple spatial and temporal coherence, based on the overlap of their bounding boxes in consecutive frames [25]. Once a person has been tracked for a sufficient number of frames, the second module uses the obtained sequence of binary silhouettes to estimate the two height and two stride parameters. Finally, the third module determines the person's identity via standard pattern classification in the 4-D feature space of these four parameters.

3.1 Estimating Ground Position

Estimating the person's current 3D position on the ground is instrumental in computing both their distance walked (and hence the stride length) and their apparent height, as discussed in the next two sections. Assuming the person walks on a plane and that the camera is calibrated with respect to this plane, we compute their 3D position as the inverse projection of the location of their feet in the image [26]. Furthermore, since the feet are mostly apart during walking, we need to locate the point half-way between the two feet. In the fronto-parallel case, we simply approximate this point in the image as the midpoint of the lower edge of the silhouette's bounding box (Figure 2(a)), while in the non-fronto-parallel case we estimate it by finding local minima of a signature, $S(x)$, of the binary silhouette (Figure 2(b)). Specifically, for each $x$ along the width of the bounding box, $S(x)$ is defined as the vertical distance between the bottom edge of the bounding box and the lowest silhouette point at that $x$, as illustrated in Figures 2(c)-2(d).

3.2 Estimating Stride Parameters

The basic descriptors of human locomotion are speed $V$ (in meters/sec), cadence $C$ (in steps/min) and stride length $L$ (in meters/stride, where 1 stride = 2 steps). They are related by a simple equation [7, 8]:

$V = \frac{1}{120} C L$,

and so only two of these variables need to be specified at any time. Therefore, assuming nearly constant-velocity walking with period $T$ (seconds/cycle), over a distance $W$ (in meters) and $N$ steps, $C$ and $L$ are estimated by:

$C = \frac{120}{T}$  (1)

$L = \frac{2W}{N}$  (2)

Furthermore, if $F$ is the frame sampling rate and $n$ is the video sequence length (in frames), then $N = \frac{2n}{FT}$ and consequently $L = \frac{WFT}{n}$.

As described in our previous work [26], we compute the gait period $T$ via periodicity analysis of the width of the silhouette bounding box (see Figure 3(a)). Assuming the person walks in a straight line, $W$ is estimated as the distance between the first and last points of the person's 3D trajectory, i.e. their positions in the first and $n$th frames (see Figure 3(b)).
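As a concrete illustration of the cadence and stride-length computation of Section 3.2 (Equations 1-2 as reconstructed here), the sketch below turns the gait period T, the walked distance W, the frame rate F and the sequence length n into C and L. The function names, the NumPy usage, and the example numbers are ours; the units assume T is measured in seconds per stride.

```python
import numpy as np

def stride_parameters(T: float, W: float, n_frames: int, fps: float):
    """Cadence C (steps/min) and stride length L (m/stride) per Section 3.2.

    T        : gait period in seconds per stride (1 stride = 2 steps)
    W        : distance walked in meters over the sequence
    n_frames : sequence length n, in frames
    fps      : frame sampling rate F
    """
    C = 120.0 / T                            # Eq. (1): steps per minute
    n_steps = 2.0 * n_frames / (fps * T)     # N = 2n / (F T)
    L = 2.0 * W / n_steps                    # Eq. (2): meters per stride (= W F T / n)
    return C, L

def distance_walked(ground_xy: np.ndarray) -> float:
    """W: straight-line distance between the first and last ground-plane positions."""
    return float(np.linalg.norm(ground_xy[-1] - ground_xy[0]))

# Example: a 4 s clip at 30 fps, gait period 1.1 s, 5.5 m walked.
if __name__ == "__main__":
    C, L = stride_parameters(T=1.1, W=5.5, n_frames=120, fps=30.0)
    print(f"cadence = {C:.1f} steps/min, stride length = {L:.2f} m")
```

Note that the result is consistent with the speed relation above: C * L / 120 equals W divided by the clip duration.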

Fig. 2. Computing the person's height (red segment) and the mid-feet location (green dot) in the image: (a) The fronto-parallel case. (b) The non-fronto-parallel case. (c) Geometry of the non-fronto-parallel case. (d) Locating the feet by finding local minima of the silhouette signature $S(x)$.

Fig. 3. (a) Computing the gait period $T$ via periodicity analysis of the bounding box width of the binary silhouettes. (b) Smoothed 2D trajectory (blue dots) traced by the person's head and feet, estimated here as the midpoints of the upper and lower edges of the bounding box. The red dots repeat with period $T/2$ and correspond to frames when the feet are maximally apart.
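To illustrate the signature-based foot localization of Section 3.1 and Figure 2(d), the following sketch computes S(x) from a binary mask (row 0 at the top) and picks a mid-feet column. The adjacency threshold used to separate the two feet is an arbitrary simplification of ours, not a detail taken from the paper.

```python
import numpy as np

def silhouette_signature(mask: np.ndarray) -> np.ndarray:
    """S(x): vertical distance from the bottom edge of the bounding box to the lowest
    silhouette pixel in column x (columns with no pixels get the full box height)."""
    h, w = mask.shape
    S = np.full(w, float(h))
    for x in range(w):
        rows = np.nonzero(mask[:, x])[0]
        if rows.size:
            S[x] = (h - 1) - rows.max()      # distance up from the bottom edge
    return S

def midfeet_column(mask: np.ndarray) -> int:
    """Locate the two feet as the two lowest values of S(x) (its local minima) and
    return the column half-way between them."""
    S = silhouette_signature(mask)
    order = np.argsort(S)
    x1 = int(order[0])
    # Second foot: the next-lowest column that is not adjacent to the first one
    # (the "> 2" pixel gap is our own crude way of skipping the same foot).
    x2 = next((int(x) for x in order[1:] if abs(int(x) - x1) > 2), x1)
    return (x1 + x2) // 2
```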

3.3 Estimating Height Parameters

Human walking involves a rhythmic up-and-down displacement of the upper body (from pelvis to head), hence the apparent bobbing of the head [6, 27]. It is in fact the nature of bipedal locomotion that demands these vertical oscillations of the body, mainly because the distance between the trunk and the ground when the legs are spread apart must be less than when the body passes over a relatively extended (erect) leg. Furthermore, these vertical movements must occur in a smooth sinusoidal manner for the conservation of energy [28, 6]. This is particularly evident in the image trajectory of the head shown in Figure 3(b). Thus, the apparent height of a walking person can be modelled as a sinusoidal curve:

$H(t) = \mu + \alpha \sin(\omega t + \phi)$  (3)

The maximum apparent height, $\mu + \alpha$, occurs at the mid-stance phase of walking (when the legs are closest together), and is slightly smaller than the person's stature, though typically by less than 1 cm; stature can hence be estimated quite accurately from the apparent height as $\mu + \alpha$. Since $\mu$ and $\alpha$ together are more informative than stature alone, we use them instead of stature for identification. The minimum height, $\mu - \alpha$, occurs at the mid-swing phase of walking (when the legs are furthest apart).

We estimate the person's apparent height $H$ from their height in the image, $H_I$, for the three configurations illustrated in Figure 4 as follows:

$H = \frac{Z H_I}{f \cos\theta_v - y_u \sin\theta_v}$  (4)

$H = \frac{Z H_I}{f \cos\theta_v + y_l \sin\theta_v}$  (5)

$H = \frac{Z H_I f \cos\theta_v}{(f \cos\theta_v - y_u \sin\theta_v)(f \cos\theta_v + y_l \sin\theta_v)}$  (6)

where $y_l$ and $y_u$ are respectively the person's lower (feet) and upper (head) vertical coordinates in the image, $H_I = y_u - y_l$ is the person's height in the image, $\theta_v$ is the camera tilt angle, $f$ is the camera focal length (in pixels), and $Z$ is the distance from the camera center to the person (i.e. depth). These three cases respectively correspond to the person lying entirely above the image center, entirely below the image center, or neither. Note that when the person is sufficiently far from the camera, using orthographic projection, each of these reduces to $H = Z H_I / f$.

The person's height in the image, $H_I$, is estimated as the vertical distance between the head and the feet. In a fronto-parallel sequence, this corresponds to the bounding box height of the binary silhouette, while in a non-fronto-parallel configuration it is approximated by the vertical segment that extends from the top of the bounding box (the head) to the point halfway between the two feet, as illustrated in Figure 2.

Then, given the time series $H(t)$ of apparent heights of a walking person measured over a video sequence of length $n$, and assuming a known gait frequency ($\omega = 2\pi/T$), we estimate the three parameters of the model in Equation 3 via least-squares fitting, as described in [29]. Specifically, assuming a data model

$H(t) = \mu + \alpha \cos(\omega t + \phi), \quad t = 1, \ldots, n$  (7)

the parameter values that minimize the sum of squared residuals are given by $\mu = \frac{1}{n}\sum_{t=1}^{n} H(t)$ and $\alpha = \sqrt{a^2 + b^2}$, where $a = \frac{2}{n}\sum_{t=1}^{n} (H(t) - \mu)\cos\omega t$ and $b = \frac{2}{n}\sum_{t=1}^{n} (H(t) - \mu)\sin\omega t$. Figure 5 shows an example of a height series (blue dashed line) fitted to this model (red solid line) via the above method.
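For the simplified case noted above (zero tilt, or a person far enough from the camera), the per-frame apparent height measurement reduces to H = Z * H_I / f. The minimal sketch below assumes the depth Z comes from the ground-plane position of Section 3.1 and that the image height H_I is taken from the silhouette bounding box; the helper names are ours.

```python
import numpy as np

def apparent_height(H_I: float, Z: float, f: float) -> float:
    """Apparent height (meters) from the image height H_I (pixels), the depth Z
    (meters, from the calibrated ground-plane position) and the focal length f
    (pixels), using the simplified zero-tilt relation H = Z * H_I / f."""
    return Z * H_I / f

def height_series(bbox_heights, depths, f: float) -> np.ndarray:
    """Per-frame apparent height measurements H(t) for a fronto-parallel track,
    taking the image height as the silhouette bounding-box height."""
    return np.asarray(depths, dtype=float) * np.asarray(bbox_heights, dtype=float) / f
```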

Fig. 4. Estimating the person's apparent height from their height in the image, $H_I$, for the three different configurations: the person entirely above the image center, entirely below the image center, and at the center of the image.

Fig. 5. Apparent height data (blue) and fitted model (red); height in meters versus time in frames.
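The closed-form fit of Equation 7 translates directly into code. In the sketch below, omega is expressed in radians per sample, e.g. 2*pi/(T*fps) for a period T in seconds at frame rate fps; that unit bookkeeping and the synthetic example are our own assumptions.

```python
import numpy as np

def fit_height_model(H: np.ndarray, omega: float):
    """Least-squares fit of H(t) = mu + alpha*cos(omega*t + phi), t = 1..n (Eq. 7).

    H     : per-frame apparent height measurements
    omega : gait frequency in radians per sample
    Returns (mu, alpha, phi).
    """
    H = np.asarray(H, dtype=float)
    n = H.size
    t = np.arange(1, n + 1)
    mu = H.mean()                                    # mu = (1/n) sum H(t)
    a = (2.0 / n) * np.sum((H - mu) * np.cos(omega * t))
    b = (2.0 / n) * np.sum((H - mu) * np.sin(omega * t))
    alpha = np.hypot(a, b)                           # alpha = sqrt(a^2 + b^2)
    phi = np.arctan2(-b, a)                          # phase: a*cos(wt) + b*sin(wt) = alpha*cos(wt + phi)
    return mu, alpha, phi

# Example on synthetic data: 1.75 m mean height with a 2 cm oscillation.
if __name__ == "__main__":
    fps, T = 30.0, 1.1
    omega = 2 * np.pi / (T * fps)
    t = np.arange(1, 121)
    H = 1.75 + 0.02 * np.cos(omega * t + 0.4) + np.random.normal(0, 0.005, t.size)
    print(fit_height_model(H, omega))
```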

3.4 Error Analysis

As shown in [26], the uncertainty in both $C$ and $L$ is a decreasing function of the number of steps walked, $N$, and hence can be effectively reduced by using a large $N$, regardless of whether the uncertainty is caused by random or systematic error [30]. Systematic error can, for example, be caused by strong shadows that are segmented with the silhouette. However, the uncertainty in height does not possess this nice property; it does not necessarily decrease when estimated over many frames. Intuitively, this is because height is estimated independently in each frame (hence obtaining several height measurements from the sequence), while a single estimate of cadence and stride length is computed over the entire sequence. Assuming for simplicity that $\theta_v = 0$ in Equations 4-6, the uncertainty in each height measurement is given by:

$\sigma_H = \frac{1}{f}\sqrt{H_I^2\,\sigma_Z^2 + Z^2\,\sigma_{H_I}^2}$

Thus the only way to reduce this uncertainty is by reducing the uncertainties in $Z$ and $H_I$, denoted by $\sigma_Z$ and $\sigma_{H_I}$, respectively. Obviously, the values of $Z$ and $H_I$ themselves cannot be controlled, since they depend on the actual position and height of the person. Furthermore, if the source of this uncertainty is from random (i.e. non-systematic) errors only, then by estimating $H$ as the average of $n$ measurements (from $n$ or more frames) we reduce its uncertainty by a factor of $1/\sqrt{n}$. However, repeated measurements of $H$ cannot reduce the uncertainty caused by systematic error (such as persistent shadows in the silhouette or calibration error) [30].

3.5 Gait Classifier

Stride length and cadence are both known to be (approximately) increasing linear functions of $\sqrt{V}$ [31, 6, 8, 32]. Obviously this implies that $C$ and $L$ vary linearly as the person changes their walking speed. Furthermore, stride length is determined mainly by the person's stature, weight, age and gender [7] (though age has no significant effect unless the person is less than 7 or past 60 years old). However, the coefficient of correlation between stride length and stature is typically found to be less than or around 0.6. Hence stature accounts for at most 36% of the variation in stride length, and so it cannot be predicted solely from cadence and stride length. Apparent height, i.e. the pair $\mu$ and $\alpha$, also varies with $V$. Specifically, $\alpha$ increases with speed, while both $\mu$ and the minimum height $\mu - \alpha$ decrease as speed increases [6, 28]; for example, the mean value of $\alpha$ measured over normal adult men is larger for fast walking than for free-speed walking [28]. Thus, our four gait features $\mu$, $\alpha$, $C$ and $L$ are all correlated, and we expect that together they form tight modes in 4-D space. While we cannot claim that these features uniquely characterize a person, our goal is to assess their discriminability, and hence how much they can filter identity in a large population. To this end, we build a supervised pattern classifier that uses these four features as the input feature vector. Specifically, we use a K-nearest-neighbor non-parametric pattern classifier.
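A minimal version of the classifier of Section 3.5 with K = 1 might look as follows. The per-feature standardization is our own addition, used to put features with different units (meters, steps/min) on a comparable scale, and the example gallery values are made up for illustration.

```python
import numpy as np

def knn_identify(gallery_X: np.ndarray, gallery_ids, probe_x: np.ndarray, k: int = 1):
    """Identify a probe from its 4-D gait feature vector (mu, alpha, C, L) by
    K-nearest-neighbor voting against a labelled gallery."""
    X = np.asarray(gallery_X, dtype=float)
    x = np.asarray(probe_x, dtype=float)
    # Standardize each feature so that no single feature dominates the distance.
    mean, std = X.mean(axis=0), X.std(axis=0) + 1e-12
    Xs, xs = (X - mean) / std, (x - mean) / std
    d = np.linalg.norm(Xs - xs, axis=1)
    nearest = np.argsort(d)[:k]
    labels = [gallery_ids[i] for i in nearest]
    return max(set(labels), key=labels.count)    # majority vote (nearest neighbour when k = 1)

# Example with three gallery subjects (feature order: mu, alpha, C, L).
if __name__ == "__main__":
    gallery = np.array([[1.75, 0.020, 110.0, 1.45],
                        [1.62, 0.015, 118.0, 1.30],
                        [1.81, 0.025, 105.0, 1.55]])
    ids = ["A", "B", "C"]
    print(knn_identify(gallery, ids, np.array([1.76, 0.019, 112.0, 1.47])))  # expected: "A"
```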

4 Experiments and Results

The method is tested both on a set of fronto-parallel sequences, taken in an outdoor environment with 41 people (7 females and 34 males) on 2 different days (Figure 6(a)), and on a set of 131 non-fronto-parallel sequences of 17 different people (Figure 6(b)). The distributions of the corresponding height and stride features are illustrated in Figure 7 for both sets of sequences. Height estimates ($\mu + \alpha$) for this configuration are accurate to within a few centimeters.

Fig. 6. Examples of (a) fronto-parallel and (b) non-fronto-parallel outdoor sequences used to test the method.

We used 1-NN (i.e. KNN with $K = 1$) and the leave-one-out cross-validation technique [33] to estimate the classification rate. Results are shown in Figure 8(a) and Figure 8(b) in terms of the rank order statistic for three different feature subsets. This is defined as the (cumulative) probability that the actual class of a test measurement is among its top $k$ matches; $k$ is called the rank [34]. Hence it effectively characterizes the filtering capability of the classification features, i.e. how much of the database is eliminated as possible matches of the given person with some confidence. For example, Figure 8(a) indicates that, based on all four gait features, a person is identified as one of 12 people with 90% confidence, which eliminates more than two thirds of the database. Obviously, the feature set $(\mu, \alpha, C, L)$ is a much better filter than $(C, L)$, but is only slightly better than $(\mu, C, L)$, i.e. using $\alpha$ does not seem to buy us much.

The best identification accuracy for the fronto-parallel sequences (Figure 8(a)) is obtained when all four features are used. Furthermore, the addition of $\mu$ to the classification feature set induced a significantly larger performance improvement than $\alpha$, which suggests that $\mu$ has better discriminability than $\alpha$. Perhaps this can be explained by the fact that $\alpha$ varies over a much smaller range than $\mu$, and hence is more easily swamped by noise in the height measurements. Indeed, the amount of noise in a gait feature affects its class separability. The identification performance for non-fronto-parallel sequences (Figure 8(b)) also improves significantly when $\mu$ is added to the feature set (from 51% to 65%). However, when $\alpha$ is added to the feature set, performance degrades significantly (to 47%). Again, this can be explained by the presence of large noise in the height measurements (which is even larger than in fronto-parallel sequences because of larger calibration errors), which affects $\alpha$ more than $\mu$ because of its smaller range of variation.
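The rank order statistic (cumulative match score) used in Figure 8 can be computed from leave-one-out nearest-neighbor ranks roughly as follows. This is our own sketch of the standard protocol [33, 34], not the authors' evaluation code, and it again uses a simple standardized Euclidean distance over the 4-D feature vectors.

```python
import numpy as np

def cumulative_match_scores(X: np.ndarray, ids) -> np.ndarray:
    """CMS[k-1] = fraction of samples whose true identity appears among the top-k
    classes when each sample is matched, leave-one-out, against all the others."""
    X = np.asarray(X, dtype=float)
    ids = np.asarray(ids)
    mean, std = X.mean(axis=0), X.std(axis=0) + 1e-12
    Xs = (X - mean) / std
    n_classes = len(set(ids.tolist()))
    ranks = []
    for i in range(len(ids)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                            # leave the probe itself out
        order = np.argsort(d)
        seen, rank = [], n_classes               # worst case: true class never matched
        for j in order:
            c = ids[j]
            if c not in seen:
                seen.append(c)
                if c == ids[i]:
                    rank = len(seen)             # position of the true class in the match list
                    break
        ranks.append(rank)
    ranks = np.asarray(ranks)
    return np.array([(ranks <= k).mean() for k in range(1, n_classes + 1)])
```

The value at rank 1 is the classification rate reported above; plotting the full vector against the rank reproduces curves of the kind shown in Figure 8.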

Fig. 7. Distribution of the data in feature space for the fronto-parallel (top row) and non-fronto-parallel (bottom row) sequences: (a,c) scatter plots of mean height, cadence and stride length; (b,d) histograms of the mean height $\mu$, the height variation amplitude $\alpha$, the cadence $C$ and the stride length $L$ (heights and stride lengths in cm, cadence in steps/min).

Fig. 8. Identification performance in terms of rank order statistics (the classification rate corresponds to rank 1), plotted as cumulative match score versus rank, for (a) fronto-parallel sequences and (b) non-fronto-parallel sequences, comparing three feature subsets: cadence and stride only; mean height, cadence and stride; and all four features.

5 Conclusions and Future Work

We presented a parametric approach for human identification from low-resolution video using the height and stride parameters of the walking gait. It achieves its accuracy by exploiting the periodic nature of human walking and by computing the gait features over many steps. We found a significant improvement in identification performance when both height and stride parameters are used (47% and 65%) over when stride parameters alone are used (18% and 51%). Perhaps the best approach to achieving still better identification results is to combine these gait features with other biometrics, such as face recognition and hair color. The method is view-invariant, works with low-resolution video, and is robust to changes in lighting, clothing, and tracking errors. However, estimation of the proposed gait features is more accurate in near-fronto-parallel viewpoints, which inevitably affects their class separability (i.e. discriminability) across viewpoints. Finally, while stride estimation is robust to both random and systematic tracking errors, height estimation is only robust to random errors. We plan to investigate ways to alleviate this problem.

Acknowledgment

The support of DARPA (Human ID project, grant No. 5-28944) is gratefully acknowledged.

References

1. Cattin, P.C., Zlatnik, D., Borer, R.: Biometric system using human gait. In: Mechatronics and Machine Vision in Practice. (2001)
2. Hayfron-Acquah, J.B., Nixon, M.S., Carter, J.N.: Recognising human and animal movement by symmetry. In: AVBPA. (2001)
3. Johansson, G.: Visual perception of biological motion and a model for its analysis. Perception and Psychophysics 14 (1973)
4. Cutting, J., Kozlowski, L.: Recognizing friends by their walk: Gait perception without familiarity cues. Bulletin of the Psychonomic Society 9 (1977) 353-356
5. Barclay, C., Cutting, J., Kozlowski, L.: Temporal and spatial factors in gait perception that influence gender recognition. Perception and Psychophysics 23 (1978) 145-152
6. Inman, V., Ralston, H.J., Todd, F.: Human Walking. Williams and Wilkins (1981)
7. Winter, D.: The Biomechanics and Motor Control of Human Gait. University of Waterloo Press (1987)
8. Perry, J.: Gait Analysis: Normal and Pathological Function. SLACK Inc. (1992)
9. Niyogi, S., Adelson, E.: Analyzing and recognizing walking figures in XYT. In: CVPR. (1994)
10. Murase, H., Sakai, R.: Moving object recognition in eigenspace representation: gait analysis and lip reading. PRL 17 (1996)
11. Little, J., Boyd, J.: Recognizing people by their gait: the shape of motion. Videre 1 (1998)
12. Huang, P.S., Harris, C.J., Nixon, M.S.: Comparing different template features for recognizing people by their gait. In: BMVC. (1998)
13. He, Q., Debrunner, C.: Individual recognition from periodic activity using hidden Markov models. In: IEEE Workshop on Human Motion. (2000)
14. BenAbdelkader, C., Cutler, R., Davis, L.: Eigengait: Motion-based recognition of people using image self-similarity. In: AVBPA. (2001)
15. Cunado, D., Nixon, M., Carter, J.: Gait extraction and description by evidence gathering. In: AVBPA. (1999)
16. Yam, C., Nixon, M.S., Carter, J.N.: Extended model-based automatic gait recognition of walking and running. In: AVBPA. (2001)
17. Johnson, A., Bobick, A.: Gait recognition using static activity-specific parameters. In: CVPR. (2001)
18. Yasutomi, S., Mori, H.: A method for discriminating pedestrians based on rhythm. In: IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems. (1994)

19. Cutler, R., Davis, L.: Robust real-time periodic motion detection, analysis and applications. PAMI 22 (2000)
20. Song, Y., Feng, X., Perona, P.: Towards detection of human motion. In: CVPR. (2000)
21. Campbell, L.W., Bobick, A.: Recognition of human body motion using phase space constraints. In: ICCV. (1995)
22. Meyer, D., Psl, J., Niemann, H.: Gait classification with HMMs for trajectories of body parts extracted by mixture densities. In: BMVC. (1998)
23. Davis, J.W.: Visual categorization of children and adult walking styles. In: AVBPA. (2001)
24. Elgammal, A., Harwood, D., Davis, L.: Non-parametric model for background subtraction. In: ICCV. (2000)
25. Haritaoglu, I., Harwood, D., Davis, L.: W4S: A real-time system for detecting and tracking people in 2 1/2 D. In: ECCV. (1998)
26. BenAbdelkader, C., Cutler, R., Davis, L.: Stride and cadence as a biometric in automatic person identification and verification. In: FGR. (2002)
27. Rose, J., Gamble, J.G.: Human Walking. Williams and Wilkins (1994)
28. Murray, M., Kory, C., Clarkson, B.H., Sepic, S.B.: Comparison of free and fast speed walking patterns of normal men. American Journal of Physical Medicine 45 (1966)
29. Bloomfield, P.: Fourier Analysis of Time Series: An Introduction. John Wiley and Sons (1976)
30. Bevington, P.R., Robinson, D.K.: Data Reduction and Error Analysis for the Physical Sciences. McGraw-Hill (1992)
31. Grieve, D.W., Gear, R.: The relationship between length of stride, step frequency, time of swing and speed of walking for children and adults. Journal of Ergonomics 5 (1966)
32. Zatsiorky, V.M., Werner, S.L., Kaimin, M.A.: Basic kinematics of walking. Journal of Sports Medicine and Physical Fitness 34 (1994)
33. Fukunaga, K.: Introduction to Statistical Pattern Recognition. New York Academic Press (1990)
34. Phillips, P.J., Moon, H., Rizvi, S., Rauss, P.: The FERET evaluation methodology for face recognition algorithms. PAMI 22 (2000)