Biped locomotion on the HOAP-2 robot


Biped locomotion on the HOAP-2 robot

December 15, 2006

Christian Lathion
Computer Science Master Project
Biologically Inspired Robotics Group (BIRG)
Ecole Polytechnique Fédérale de Lausanne
Contents

1 Introduction
  1.1 Project objectives
  1.2 Biped Walking
  1.3 Different Approaches
    1.3.1 Trajectory based
    1.3.2 Using heuristics
    1.3.3 Central Pattern Generators
    1.3.4 Hybrid approaches
  1.4 Described controller
  1.5 The robot
  1.6 Webots
2 The coupled oscillator model
  2.1 General description
  2.2 Effects of coupling
3 Generation of the joint angle trajectories
  3.1 Stepping
    3.1.1 Balancing
    3.1.2 Stepping
  3.2 Biped walking
  3.3 Summarized controller architecture
4 Webots implementation
  4.1 Coupled oscillator
  4.2 Feet pressure sensors
    4.2.1 Hardware sensors modelization
    4.2.2 Detection of the center of pressure
    4.2.3 Estimation of the contact angle with the floor
  4.3 Initial and rest postures
  4.4 Actuator delay
5 Parameter space exploration
  5.1 Stepping
    5.1.1 Chosen stepping parameters
  5.2 Biped walking
    5.2.1 Gait performance
    5.2.2 Parameters study
6 Extensions to the existing controller
  6.1 Arm movement
  6.2 Feet placement stabilization in the sagittal plane
  6.3 Speed control
  6.4 Direction control
7 Discussion of the results obtained for a generated gait
  7.1 Chosen parameters
  7.2 Analysis of the chosen gait
  7.3 Resistance to perturbations
    7.3.1 Continuous perturbations (wind)
    7.3.2 Discrete perturbations (shock)
  7.4 Stability on different floors
    7.4.1 Floor physical properties
    7.4.2 Slope
8 Conclusion
  8.1 Achieved results
  8.2 Further work
  8.3 Personal knowledge
A Complete definition of HOAP-2 joints
B Implications of stable points in the phase coupling equation
1 Introduction

1.1 Project objectives

The objectives of this master project are an implementation and study of a biped locomotion controller, originally described by Morimoto et al. in Modulation of simple sinusoidal patterns by a coupled oscillator model for biped walking, on the Fujitsu HOAP-2 robot.

1.2 Biped Walking

Making a machine walk like a human has been a long-standing challenge, initiated in the mid-sixties and still far from being achieved. It represents a huge field of applications, given the obvious advantages of biped locomotion over other means of displacement in terms of adaptability to human environments. Biped locomotion is considered to be a very complex task, as it implies controlling a very large number of degrees of freedom (DOFs), the nonlinear dynamics of the humanoid body and a wide range of interactions with the environment (gravity field, landscape, perturbations, etc.). The main difficulty is to achieve dynamical stability, and particularly resistance to unexpected perturbations. Difficulty also arises from the actual complexity of biped locomotion mechanisms, which are still not completely understood by biologists. Many studies have focused on human locomotion, both on the mechanical and neural aspects; for instance, we can mention [16] and [9].

(a) HOAP (b) Qrio (c) Asimo
Figure 1: Currently most advanced commercial humanoid robots

Currently, the most impressive results have probably been achieved by Honda's Asimo robot. It can not only walk at up to 2.7 km/h, but also run (straight at 6 km/h or in circular patterns), climb steps, walk up slopes, etc. 1. Seeing this, one could think the biped locomotion problem is close to being solved. But despite these amazing demonstrations, we don't know how it would behave in a real environment, where any perturbation, even insignificant for a human, can have a dramatic influence.

1 Movies of Asimo's capabilities can be found online.
1.3 Different Approaches

Many different solutions have been experimented with to achieve stable biped locomotion, from hand tweaking based on biological observation to complex optimization algorithms. Here we present some of the most widely used techniques. A first general overview of the hardware and software requirements for biped locomotion is presented in [4].

1.3.1 Trajectory based

Trajectory-based methods use offline generation of trajectories. This can be done with various optimization and constraint satisfaction algorithms that produce joint trajectories satisfying a number of given constraints (e.g. stability, smoothness of the transitions, minimization of impacts with the ground, etc.). For stability, the most widely used constraint is the Zero Moment Point (ZMP) criterion, proposed by Vukobratovic in the early 1970s. The ZMP is defined as the point of contact with the ground where the total moment of all active forces equals zero. The resulting gait is said to be dynamically stable if the ZMP stays inside the support polygon defined by all the contact points with the ground. In this case, the ZMP and center of pressure (COP) coincide. By applying this criterion, one can give a formal proof of the gait stability before transferring it to the real robot. Obviously, a very precise model of the robot's dynamics and environment is needed for the offline optimization process. If the model is not precise enough, stability is absolutely not guaranteed once applied in the real-world situation. This is the main weakness of these methods, because the real environment perturbations can strongly influence the obtained behavior. Another limitation is that these methods don't provide any methodology to actually design the resulting walking pattern, but only give the resulting trajectories that match the constraints at best. This can make further modifications or integration of feedback to the controller very difficult.
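As a toy illustration of the ZMP criterion (our own sketch, not an implementation from the thesis or the cited works), dynamic stability at a given instant can be checked by testing whether the ZMP lies inside the support polygon; here the polygon is simplified to an axis-aligned rectangular foot print, and all names and dimensions are hypothetical:

```python
def zmp_is_stable(zmp, support_polygon):
    """Toy ZMP criterion check: the gait is dynamically stable while the ZMP
    stays inside the support polygon defined by the ground contact points.
    For simplicity the polygon is reduced to its axis-aligned bounding box,
    which is exact for a rectangular foot print."""
    x, y = zmp
    xs = [px for px, _ in support_polygon]
    ys = [py for _, py in support_polygon]
    return min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys)

# Single support on a hypothetical 10 cm x 6 cm foot (coordinates in meters).
foot = [(0.0, 0.0), (0.10, 0.0), (0.10, 0.06), (0.0, 0.06)]
```

In double support the polygon would instead be the convex hull of both feet's contact points, which is why walking gaits are easiest to stabilize during that phase.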
Despite these drawbacks, trajectory-based methods have been widely and successfully implemented. For instance, a detailed description of a ZMP-based locomotion controller with results on the HOAP and Qrio robots is presented in [1].

1.3.2 Using heuristics

Heuristic methods have strong similarities with the trajectory-based approach. Joint trajectories are also precomputed by an external optimization process; only the generation strategy differs. This time, heuristic or evolutionary techniques are used (e.g. genetic algorithms) to design the desired trajectories. Consequently, these methods have a weaker mathematical formalism. One cannot prove that the provided solution actually is a local optimum, which is not very satisfactory. It can also be more difficult to specify precise constraints on the resulting gait, for instance to get a realistic human-like gait. They globally suffer from the same limitations as trajectory-based methods, except that they seem to be slightly less sensitive to perturbations once applied to the real world, unless the environment is totally controlled.
1.3.3 Central Pattern Generators

The Central Pattern Generators (CPGs) approach takes its inspiration from the biological mechanism of vertebrate locomotion [7]. CPGs are neural networks, located in the spinal cord (grouped in different semi-autonomous centers), that control the muscles' activity. Their main characteristic is the ability to produce a periodic output from a non-periodic input signal. Neural inspiration represents a very interesting solution, as it mimics the natural mechanisms of locomotion. In practice, CPGs are modeled by artificial neural networks that generate appropriate periodic limb movements 3 from a simple input signal (e.g. a speed or direction parameter). They allow using sensory feedback through reflexes, providing a much better adaptability to the environment. They appear to be a very promising approach, and several CPG-based controllers have already been implemented on the HOAP robot, e.g. in [19], [11] and [15]. As with trajectory-based methods, although CPGs can learn any arbitrary periodic pattern, they don't provide a clear design methodology, but this starts to be addressed in [17] and [18]. Another drawback is that the controller can become very complex, making modifications or the addition of more feedback difficult.

1.3.4 Hybrid approaches

The distinction between the described approaches is certainly not so clear in state-of-the-art biped locomotion controllers. While their inner workings are usually kept secret, they probably use some mixture of several control methods, to get the best of each approach. For instance, one could design a global offline trajectory with the ZMP criterion, with additional CPG control to implement reflexes.

1.4 Described controller

The controller described in [1] presents some similarities with CPGs. The trajectory of each DOF is modeled by a specific oscillator, but these are globally synchronized, instead of being partially autonomous.
Unlike trajectory-based methods, one doesn't start with generated limb patterns or a formal proof of stability. Instead, the model has been designed (probably based on human locomotion observation and intuition of feasibility), and then tuned to obtain the desired effect. As a consequence of the model's simplicity, further modifications of the controller are made very easy. One can add more feedback in the control loop, or modify the generated trajectories, without having to restart a global optimization process. The parameters are simple and expressive (amplitudes and frequency), and the design methodology is relatively clear. However, it represents a much less generic approach, as only sinusoidal trajectories 4 are applied to the joints. It also lacks some formal optimization tool, as hand tweaking doesn't usually lead to the best possible results.
Figure 2: The HOAP-2 robot

1.5 The robot

In the past decades, there has been a fast development in humanoid robot technology. Instead of custom robots specifically developed for each application, complete autonomous robots that tend to match human capabilities are now commercially available for a reasonable cost. Figure 1 on page 3 shows the three currently most advanced commercial robots: the Honda Asimo, Sony Qrio and Fujitsu HOAP-2.

         HOAP-2   Qrio    Asimo
Height   50cm     58cm    130cm
Weight   7kg      6.5kg   54kg
DOFs     25       38      34

Table 1: Robots main characteristics comparison

For this project, the controller has been implemented on the Humanoid for Open Architecture Platform (HOAP-2) robot 5, developed by Fujitsu Automation Limited. Unlike most commercial humanoid platforms, its hardware specifications are open 6, which gives a better understanding of its inner mechanisms and capabilities. HOAP-2 is a 7 kg, 50 cm tall humanoid robot, providing 25 degrees of freedom (the complete joint definition is presented in appendix A). As originally described, the controller uses 10 of the 12 DOFs present in the legs 7. Later we will add control on all the leg and arm joints. The robot is fairly similar in height, weight and available DOFs to the Sony Qrio robot, which has been used for the initial implementation of the controller.

3 Usually one CPG controls one DOF of a joint.
4 Not exactly sinusoidal, as we will see later.
5 General HOAP-2 information can be found online.
6 See the HOAP-2 instruction manual [3] for detailed specifications.
7 LLEG_JOINT[..6] and RLEG_JOINT[..6].
1.6 Webots

For practical reasons, all experiments have been performed using the Webots software, a robot prototyping and simulation environment developed by Cyberbotics Ltd. [5]. It allows a complete 3D modelization of the robot, as well as realistic simulation based on the Open Dynamics Engine (ODE). The controller is implemented in C++, using the Webots API. The robot and its environment are modeled using the Virtual Reality Markup Language (VRML), allowing fast prototyping and easy further modifications of the robot architecture. A large number of VRML robot models and environments (called worlds) are included in the software distribution. One of these is an accurate model of the HOAP-2, designed at the BIRG laboratory in [2]. It has only been slightly modified to modelize the feet pressure sensors and increase actuator performance. Using Webots allows rapid prototyping of the controller, and total control over the simulation environment. Of course, it is also much safer in the early development steps, as the real robot could be damaged by falls, or by excessive force and torque applied to the joints. It also avoids the long calibration process that is needed before performing an experiment on the real hardware. A last interesting point is that the obtained simulation results are deterministic (the same parameters will always lead to the exact same gait). This is very useful when trying to quantify the effects of a modification of the controller. This behavior can also be changed to introduce some randomness in the simulation, which allows validating the results in a more realistic environment, where everything is not controlled.
2 The coupled oscillator model

2.1 General description

The developed controller is described by Morimoto et al. in [1]. It has originally been implemented with good results on Sony's Qrio robot, a custom human-sized humanoid robot, and a simplified 3D model 8. This suggests the controller is generic enough to work on the HOAP-2 as well.

Figure 3: Position of the center of pressure

The rhythmic component of the gait is described by a coupled oscillator system, modelling the controller and robot phases, respectively φ_c and φ_r. Their temporal behavior follows a differential equation system:

  dφ_c/dt = ω_c + K_c sin(φ_r − φ_c)   (1)
  dφ_r/dt = ω_r + K_r sin(φ_c − φ_r)   (2)

These two simple equations are sufficient to synchronize the controller and robot dynamics. However, this theoretical model cannot be directly applied to the real controller, as the robot's natural frequency ω_r and coupling constant K_r are usually unknown. They depend on the robot's dynamics (center of mass, posture, physics, etc.). To overcome this, the robot phase is detected through pressure sensors located under the robot's feet, which indicate the position of the COP. From its position x and velocity ẋ, the robot phase is obtained by the following transformation:

  φ_r(χ) = arctan(ẋ / x)   (3)

Basically, φ_r(χ) models the stance and swing leg transitions. Now that the robot dynamics are known, equation 1 can be solved to obtain the corresponding controller phase φ_c. A last modification is applied to the theoretical equation to obtain several phase differences, which will be used to generate synchronized and symmetrical limb trajectories. φ_c is finally expressed as:

  dφ^i_c/dt = ω_c + K_c sin(φ_r(χ) − φ^i_c + α_i)   (4)
  α_i = [0, π/2, π, 3π/2]   (5)

8 Which only has legs and torso.
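The phase detection of equation 3 can be sketched as follows (our own Python illustration; we use atan2 rather than a plain arctangent so that the phase covers the full cycle, a quadrant detail the equation leaves implicit):

```python
import math

def robot_phase(x, x_dot):
    """Robot phase phi_r from the COP position x and its velocity x_dot (eq 3).
    atan2 keeps track of the quadrant, so the phase spans (-pi, pi]."""
    return math.atan2(x_dot, x)

# For a sinusoidal COP motion x = cos(w t), the detected phase advances with t,
# wrapping around once per stepping cycle.
```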
Numerical integration of equation 4 thus gives us the four controller phases φ^i_c at each time step. Finally, the joint trajectories will be derived from the controller's dynamics by using simple sinusoidal patterns.

2.2 Effects of coupling

The goal of the coupled oscillator model is to achieve phase and frequency locking between the controller and robot dynamics from an arbitrary initial situation, so that the walking pattern takes place. Both oscillators start running at their natural frequencies ω_r and ω_c, and once locked converge to their coupled frequency ω, with phase difference Ψ:

  ω = (K_r ω_c + K_c ω_r) / (K_c + K_r)   (6)
  Ψ = φ_r − φ_c = arcsin((ω_r − ω_c) / (K_c + K_r))   (7)

Figure 4: Controller (ω_c) and robot phase (ω_r) convergence for different coupling values K_c

Figures 5 on the next page and 4 illustrate the phase locking property of the coupled oscillator model. To evaluate the controller adaptation, we arbitrarily set the robot natural frequency ω_r = 2π and the initial controller frequency ω_c = π. The robot's dynamics are modeled by a sinusoidal function. We examine the phase and frequency evolution by varying the coupling constant K_c. For this example, the robot phase is uncoupled (i.e. K_r = 0). This is not the case in the real robot simulation, where both phases are coupled, so that the robot's dynamics will also be influenced by the controller. If coupling is too weak, the controller frequency stays almost unchanged from its initial value of π. As K_c rises, the controller frequency starts to adapt to the target frequency, and finally reaches it when K_c = 5. But the phase difference Ψ is still too important. By increasing the coupling
constant to K_c = 9, the controller finally matches the robot's dynamics quite well, and almost immediately.

Figure 5: Controller (sin(ω_c)) and robot phase (sin(ω_r)) convergence for different coupling values

On the HOAP-2 we will use K_c = 9.4, which is the same value as in the Qrio implementation. This confirms that the two robots have very similar dynamics. These results indicate that a strong coupling value is necessary to get the desired effect. Unfortunately, too much coupling implies some issues. As seen in equation 6, a high K_c value gives less importance to the target controller frequency ω_c, so that control of the gait speed won't be possible by modifying ω_c. Another problem is the probability of reaching a stable point 9, where the controller phase will converge to a fixed value, and the robot will get stuck. The risk of convergence is due to a specificity of this controller, which uses K_c > ω_c. This is usually not the case in a standard coupling equation.

9 See appendix B.
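The locking behavior predicted by equations 6 and 7 can be reproduced numerically. The sketch below (our own, using this section's example values ω_c = π, ω_r = 2π and K_r = 0) Euler-integrates equation 1 against an imposed robot phase and recovers the predicted residual phase difference:

```python
import math

def locked_phase_difference(omega_c, omega_r, K_c, dt=1e-3, T=20.0):
    """Euler-integrate the controller phase (eq 1) against an uncoupled robot
    phase phi_r = omega_r * t (K_r = 0) and return Psi = phi_r - phi_c
    after the transient has died out."""
    phi_c, t = 0.0, 0.0
    while t < T:
        phi_c += (omega_c + K_c * math.sin(omega_r * t - phi_c)) * dt
        t += dt
    return omega_r * t - phi_c

psi = locked_phase_difference(math.pi, 2 * math.pi, 9.0)
predicted = math.asin((2 * math.pi - math.pi) / (9.0 + 0.0))  # eq 7 with K_r = 0
```

With K_r = 0, equation 6 reduces to ω = ω_r: the controller fully adopts the robot frequency, and only the offset Ψ remains; locking is possible at all only while |ω_r − ω_c| ≤ K_c + K_r, which is why the weakly coupled runs in figure 4 never converge.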
3 Generation of the joint angle trajectories

While the coupled oscillator model describes the rhythmic component of the gait, it gives no indication of the desired joint angle values. In this section we present the generation of the periodic limb trajectories from the controller phase. Each controlled DOF follows a specific trajectory, which is generated by a simple sinusoidal function of the form A sin(φ^i_c) + θ_rest. The rotational axes and considered joint angles in the three planes are defined in figure 6. The gait can be divided into two sub-movements, stepping and walking, that we will describe separately.

(a) Considered joints in the frontal plane (b) Considered joints in the sagittal plane (c) Direction of the rotational axis in each plane
Figure 6: Definition of considered joint angles and rotational axes

3.1 Stepping

For the stepping movement, we want the robot to periodically transfer the COP from one leg (stance leg) to the other (swing leg). The stepping pattern can be decomposed in two parts: balancing in the frontal plane (roll amplitudes A_hip_r and A_ankle_r), and leg lifting in the sagittal plane (pitch amplitude A_p).

3.1.1 Balancing

Figure 7: Simple balancing movement

Here we want to generate a simple balancing movement that will periodically transfer the center of mass (COM) from one side to the other, without lifting the legs. This models the inverted pendulum dynamics of the robot's body in the frontal plane.
We only need to control the hip and ankle roll angles, right and left limb trajectories being identical. The hip and ankle joint angles must be coordinated in a way that makes the robot's oscillations stable in the frontal plane. The desired behavior would be to oppose the ankle and hip joint angles, as follows:

  θ^d_hip_r(φ_c) = A_hip_r sin(φ^1_c)   (8)
  θ^d_ankle_r(φ_c) = −A_ankle_r sin(φ^1_c)   (9)

These are the equations as proposed in the article. However, while this should be the correct behavior, we couldn't obtain a stable balancing by using this setting 10. So we tried a different approach. We keep the hip oscillator as proposed, and search for the adequate ankle behavior by using the global robot roll angle θ_roll_r in the frontal plane. If we set the ankle orientation to the inverse of the robot roll angle (θ_ankle_r = −θ_roll_r), we ensure that both feet always stay parallel to the ground. All that is left is to build a new oscillator for the ankle that matches the desired trajectory. It appears that a phase difference of π/4 between the hip and ankle oscillators perfectly synchronizes the trajectories, as shown on figure 8.

Figure 8: Actual (θ_ankle_r) and desired (−θ_roll_r) ankle joint trajectories in the frontal plane

So our final joint trajectories for frontal plane balancing are defined as follows:

  θ^d_hip_r(φ_c) = A_hip_r sin(φ^1_c)   (10)
  θ^d_ankle_r(φ_c) = −A_ankle_r sin(φ^1_c − π/4)   (11)

Directly using the inverse of the body roll angle θ_roll_r to achieve ankle placement could seem to be a better solution, but it suffers from several drawbacks. First,

10 Due to excessive oscillations in the frontal plane, that we couldn't stabilize.
we would need to determine the body roll angle precisely. While this is easy in simulation, on the HOAP-2 it can only be measured through a gyroscope, which doesn't provide absolute angle values but angular speed 11. A second problem is that this solution doesn't achieve the desired oscillatory behavior. If the ankle joint simply follows −θ_roll, over-rolling cannot be compensated. If the robot encounters a perturbation in the frontal plane, it will simply fall to the side without resistance. This is why we decided to approximate this ideal behavior, but kept an ankle oscillator.

3.1.2 Stepping

Figure 9: Stepping movement

To generate the complete stepping movement, we don't only need balancing. The robot has to push on the floor with its stance leg, while bending the swing leg. This will allow the later forward movement without the feet rubbing on the ground. To obtain this, we now need to control the hip, knee and ankle joints in the sagittal plane (pitch amplitude A_p). We want the ankle inclination relative to the ground to remain zero in the sagittal plane as well, so that the robot doesn't move from its initial position. To achieve this with a single amplitude A_p, the knee joint angle is opposed to the hip and ankle and uses a doubled amplitude 2A_p. This ensures that the resulting foot orientation (θ_hip_p + θ_knee_p + θ_ankle_p) stays parallel to the floor during the whole stepping phase. A temporal modification was needed to achieve a stable stepping movement. A supplementary phase difference of −π/3 is introduced in equation 4, so that α'_i = α_i − π/3. This was already proposed in the article, and makes the controller react earlier to the actual robot phase transitions.
The final joint trajectories for stepping in the sagittal plane are defined as follows:

  θ^d_hip_p(φ_c) = A_p sin(φ^1_c) + θ^rest_hip_p   (12)
  θ^d_knee_p(φ_c) = −2 A_p sin(φ^1_c) + θ^rest_knee_p   (13)
  θ^d_ankle_p(φ_c) = A_p sin(φ^1_c) + θ^rest_ankle_p   (14)

To generate an opposed symmetrical movement between the right and left legs, two different phases are used. Equations 12 to 14 present the right leg trajectories (φ^1_c), while the left leg uses phase φ^3_c = φ^1_c + π. The complete joint trajectories for stepping are presented on figure 10 on the following page, with and without coupling (time axis scale differs for the

11 Even if the resulting angle could be obtained by integration of the angular speed values.
uncoupled oscillator, which runs at a lower speed). We can clearly see the effects of phase locking: while the uncoupled oscillator runs at its nominal frequency ω_c = π, the coupled version adapts to the robot's dynamics, so that ω_c ≈ 3π. The shape of the joint trajectories is also modified by coupling, and they don't remain pure sine waves.

Figure 10: Joint trajectories for the stepping movement (right limb, coupled K_c = 9.4 and uncoupled K_c = 0), with corresponding COP time course (time axis scale differs for the uncoupled oscillator)

The COP time course clearly shows that the experiment is successful. For the uncoupled oscillator, the leg transitions are perturbed, while they are immediate and clean with coupling. In this example, the controller frequency ω_c doesn't exactly match the robot frequency ω_r, to demonstrate that coupling really works. But we must mention that by setting the oscillator close enough to the robot's natural frequency (ω_c ≈ ω_r), the simple stepping pattern can be perfectly achieved without coupling 12. Figure 11 on the next page compares the obtained stepping trajectories with the ones obtained in the original article. We see that the results are similar, even if the obtained coupled frequency and the generated joint trajectories are different. Unfortunately, the only available data in the article is for the simplified robot model, which apparently doesn't closely match the HOAP-2 dynamics.

12 This wasn't the case in the original article implementation, where not even a single support phase could be produced without coupling.
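Equations 10 to 14 can be collected into a short sketch (our own Python illustration; the amplitudes and rest angles are placeholders, not the tuned values of the thesis). It generates the stepping targets for one leg and lets us check the foot-parallel property θ_hip_p + θ_knee_p + θ_ankle_p = const claimed above:

```python
import math

def stepping_targets(phi_1, A_hip_r=1.0, A_ankle_r=1.0, A_p=2.0,
                     rest=(0.0, 0.0, 0.0)):
    """Stepping joint targets for one leg from phase phi^1_c (eqs 10-14).
    Amplitudes and rest angles are placeholder values."""
    r_hip, r_knee, r_ankle = rest
    hip_roll = A_hip_r * math.sin(phi_1)
    ankle_roll = -A_ankle_r * math.sin(phi_1 - math.pi / 4)  # pi/4 lag (eq 11)
    hip_pitch = A_p * math.sin(phi_1) + r_hip
    knee_pitch = -2.0 * A_p * math.sin(phi_1) + r_knee       # doubled, opposed
    ankle_pitch = A_p * math.sin(phi_1) + r_ankle
    return hip_roll, ankle_roll, hip_pitch, knee_pitch, ankle_pitch

# The left leg uses the opposed phase phi^3_c = phi^1_c + pi.
```

The sagittal deviations cancel (A_p − 2A_p + A_p = 0), so the sole stays parallel to the floor for any phase, which is exactly what keeps the robot stepping in place.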
(a) ω_c = .5 rad/s on simulated robot (b) ω_c = 3.6 rad/s on simulated robot (c) ω_c = 3.6 rad/s on HOAP-2
Figure 11: Right hip joint trajectories with uncoupled (left) and coupled (right) oscillator in the original article [1] for the simulated model, and the same results on the HOAP-2 robot (identical desired and actual trajectories)

3.2 Biped walking

The last building block of our gait is to make the robot actually move forward. We want the stance leg to move backwards to push the robot to the front (propulsion movement), while the swing leg moves forward. Instead of remaining parallel to the ground, the foot should now push on the floor to help the forward displacement at the end of the stance phase (toe off), and be lifted at the end of the swing phase (heel swing) to follow the leg orientation. Two additional sinusoidal trajectories are defined, with amplitudes A_hip_s for the step length and A_ankle_s for the foot orientation. They use a different phase, φ^2_c = φ^1_c + π/2 for the right limb and φ^4_c = φ^1_c + 3π/2 for the left. The ankle joint pitch angle θ_ankle_p is opposed to the hip angle θ_hip_p, so that it compensates the ankle orientation when the leg is lifted. As a consequence, A_hip_s < A_ankle_s doesn't produce a valid gait pattern. In this case, the robot hits the floor with its toes at the end of the swing phase, which is not the desired behavior 13. The right leg joint trajectories in the sagittal plane are modified as follows (the knee oscillator remains unchanged):

13 But as we will see later, this seems to have positive effects on stability.
  θ^d_hip_p(φ_c) = A_p sin(φ^1_c) + A_hip_s sin(φ^2_c) + θ^rest_hip_p   (15)
  θ^d_ankle_p(φ_c) = A_p sin(φ^1_c) − A_ankle_s sin(φ^2_c) + θ^rest_ankle_p   (16)

Figure 12: Qrio (top) and HOAP-2 (bottom) complete walking pattern

The effects of coupling on the trajectories and COP time course, presented on figure 13 on the following page, are similar to those for the stepping phase. But this time, the uncoupled oscillator was not able to perform an effective gait 14, even when the robot and controller dynamics match. This shows that sinusoidal joint trajectories are not complex enough to produce a walking pattern.

3.3 Summarized controller architecture

Here we regroup the different components of the controller to get a more general view of its mechanisms. The synchronization of the robot phase and controller phase is achieved through numerical integration of the following equation:

  dφ^i_c/dt = ω_c + K_c sin(φ_r(χ) − φ^i_c + α_i),   α_i = [0, π/2, π, 3π/2]

After which the joint trajectories are simply obtained from the controller phases by simple sinusoidal functions describing the frontal plane trajectories:

  θ^d_hip_r(φ_c) = A_hip_r sin(φ^1_c)
  θ^d_ankle_r(φ_c) = −A_ankle_r sin(φ^1_c − π/4)

and the sagittal plane trajectories:

  θ^d_hip_p(φ_c) = A_p sin(φ^1_c) + A_hip_s sin(φ^2_c) + θ^rest_hip_p
  θ^d_knee_p(φ_c) = −2 A_p sin(φ^1_c) + θ^rest_knee_p
  θ^d_ankle_p(φ_c) = A_p sin(φ^1_c) − A_ankle_s sin(φ^2_c) + θ^rest_ankle_p

14 Even if COP transitions appear, the feet rub permanently on the ground, preventing an effective locomotion pattern.
Figure 13: Joint trajectories for biped walking (right limb, coupled K_c = 9.4 and uncoupled K_c = 0), with corresponding COP time course (time axis scale differs)

All the controller behavior can thus be expressed with six simple equations. From now on, our goal is to apply the model and tune the parameters accordingly to produce the desired gait.
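The six equations above fit in a compact controller sketch (our own Python illustration; amplitudes, rest angles and the COP-derived phase input are placeholders, and the π/3 shift of section 3.1.2 is omitted for clarity):

```python
import math

ALPHA = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]

class CoupledGaitController:
    """Four coupled controller phases (eq 4) plus the six sinusoidal
    trajectory equations of section 3.3, shown for the right leg."""

    def __init__(self, omega_c=math.pi, K_c=9.4,
                 A_hip_r=1.0, A_ankle_r=1.0, A_p=2.0,
                 A_hip_s=1.5, A_ankle_s=1.5, rest=(0.0, 0.0, 0.0)):
        self.omega_c, self.K_c = omega_c, K_c
        self.A_hip_r, self.A_ankle_r = A_hip_r, A_ankle_r
        self.A_p, self.A_hip_s, self.A_ankle_s = A_p, A_hip_s, A_ankle_s
        self.rest = rest
        self.phi = list(ALPHA)  # phi^1..phi^4, initialized at their offsets

    def step(self, phi_r, dt=1e-3):
        """One Euler update of the four phases from the measured robot phase."""
        for i in range(4):
            self.phi[i] += (self.omega_c
                            + self.K_c * math.sin(phi_r - self.phi[i] + ALPHA[i])) * dt

    def right_leg_targets(self):
        p1, p2 = self.phi[0], self.phi[1]
        r_hip, r_knee, r_ankle = self.rest
        return {
            "hip_roll": self.A_hip_r * math.sin(p1),
            "ankle_roll": -self.A_ankle_r * math.sin(p1 - math.pi / 4),
            "hip_pitch": self.A_p * math.sin(p1)
                         + self.A_hip_s * math.sin(p2) + r_hip,
            "knee_pitch": -2.0 * self.A_p * math.sin(p1) + r_knee,
            "ankle_pitch": self.A_p * math.sin(p1)
                           - self.A_ankle_s * math.sin(p2) + r_ankle,
        }
```

With A_hip_s = A_ankle_s the three sagittal angles still sum to the rest offsets, i.e. the foot stays parallel to the ground; the walking tilt discussed in section 3.2 appears as soon as the two amplitudes differ.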
4 Webots implementation

4.1 Coupled oscillator

As the coupling mechanism is mathematically simple, there were no particular implementation issues in this part. Numerical integration of the coupling differential equation is done using the Euler scheme, which is simple and doesn't require much computation:

  φ^i_c ← φ^i_c + (ω_c + K_c sin(φ_r − φ^i_c + α_i − π/3)) dt   (17)

The only constraint when using this method is to have a small integration step dt to get accurate results. So we set the simulation refresh time to one millisecond, as a very short time step is also needed to get good sensory performance. Otherwise, the delay between the current and measured pressure sensor values can be an issue. Now we need to detect the robot phase φ_r from the feet pressure sensors.

4.2 Feet pressure sensors

To synchronize the controller with the robot dynamics, only foot pressure information is used. To detect the robot dynamics, the knowledge of which foot actually touches the ground is sufficient, so a basic contact/no contact sensory input would be enough. But additional information, which we will use later for robot stabilization, can be gathered from the sensors, so we want an accurate model. As the considered Webots version 15 didn't allow us to directly modelize the touch sensors 16, a custom physics library had to be used 17. It handles the collisions between the feet and the floor, letting other undesired collisions, i.e. when the robot falls, be treated by the internal Webots libraries.

4.2.1 Hardware sensors modelization

Each foot of the HOAP-2 robot is equipped with four pressure sensors, which are disposed as shown on figure 14 on the next page. They are not directly in contact with the ground, but located under a plastic plate that covers the whole sole. To fit the hardware at best, each sole of our model is divided into four rectangular regions, representing the contact surface of each sensor.
Each pressure measurement is then obtained from the force that applies to the corresponding plate. At each simulation step, the pressure is measured and sent to the controller. From this information, we can compute the weight of the robot, the COP position, or approximate the foot contact angle w.r.t. the ground. Physical calculations in the simulated world are based on the Open Dynamics Engine (ODE) [6]. To model the interaction between the sensory plate and the ground, we use ODE's collision detection engine. It creates virtual joints between the colliding surfaces, on which we can compute the resulting contact

15 Version
16 The sensors' return values are not continuous, but defined in a lookup table, and so cannot be really accurate.
17 Inspired from an existing library developed at the Graz University.
forces and torques. Instead of defining fixed contact points (e.g. the four corners of each sensor), we let ODE determine the most relevant contacts. This allows a very generic sensor modelization; their position and shape are defined in the VRML world model, and can be changed without modifying the physics library.

Figure 14: Pressure sensors

Several parameters can be tuned to modelize the floor's physical properties: the Coulomb friction coefficient µ, force-dependent slip, bounciness of the surface, etc. 18. The default ODE parameters seemed to produce very hard contacts, leading to irregularities in the pressure repartition under the foot. The contact force was almost exclusively applied to one edge of the foot, even when it was almost flat relative to the ground. To get a better contact, we introduced more elasticity into the floor 19. As a consequence, the feet can now slightly penetrate into the floor, which makes the resulting contact less edgy. The obtained results seem to be more realistic, considering the pressure repartition under the sole 20. As far as we could test them, the results provided by the physics library are accurate. The measured weight of the robot in a static posture is approx. 7 kg, which matches the real weight. During the gait, illustrated on figure 15 on the following page, we can clearly discern the steps, and the measured mass also corresponds to the real robot 21. The only drawback of using a specific physics library is a reduced simulation speed, especially when many contact points are present. To reduce the amount of calculation, we limit the total number of contact points that can be created between the colliding objects. A maximum of four possible contacts for each sensor seems to be sufficient to obtain accurate measurements.

18 As these parameters are not well documented, we cannot guarantee a totally realistic modelization of the surfaces.
19 This is obtained by increasing the constraint force mixing (CFM) value, which controls the sponginess and springiness of a joint. In our case, we act on the temporary joints that are created by ODE between the colliding surfaces. This makes the joint constraints softer, and allows the colliding objects to interpenetrate. We graphically display the footfloor contact repartition in the Webots simulation. 1 With a difference due to the cinematic forces applied to the COM, which explains the observed peaks. 19
Figure 15: Measured weight of the robot during walking, observed on the right foot sensors

measures.

4.2.2 Detection of the center of pressure

The detection of the COP position COP_pos from the sensor data is immediate:

COP_pos = (sensor_left - sensor_right) / (sensor_left + sensor_right),  (18)

where sensor_right and sensor_left are the sums of the four sensor values of each foot. As the real position has no influence on the robot phase (only the transitions matter), COP_pos is scaled to the [-1..1] interval, instead of measuring the effective feet distance. The robot mass and COP transitions are correctly detected, but the obtained time course, presented on figure 16 on the next page, doesn't behave as expected. The observed transitions are too sharp to look natural, and we obtain an almost square signal. This is not caused by the simple position detection of equation 18, but is already present in the world simulation. Transitions between right and left foot contact are abrupt and occur in a very short time interval. This is probably caused by ODE's collision detection mechanism, and shouldn't be so pronounced on the real robot. The observed transitions are also smoother in the original article. This can be an issue for the generation of the limb joint trajectories. The initial sinusoidal trajectories are shaped when the robot phase or COP transitions occur, as shown on figure 17 on the next page. This should be different once applied on the real robot. A hardware ZMP/COP detection mechanism is present, which should produce more accurate measures, and certainly a more continuous COP time course. On the other hand, the measured values would probably be much more noisy than with our ideal simulated sensors on a perfectly flat floor.
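Equation 18 reduces to a few lines of code. The following sketch computes the scaled COP position from the two sets of four sensor values; all function and variable names are ours, chosen for illustration, not those of the actual controller.

```python
def cop_position(left_sensors, right_sensors):
    """Lateral COP position from the two feet's pressure sensors (eq. 18).

    left_sensors / right_sensors: the four pressure values under each sole.
    Returns a value in [-1, 1]: +1 = all weight on the left foot,
    -1 = all weight on the right foot, 0 = weight evenly shared.
    """
    s_left = sum(left_sensors)
    s_right = sum(right_sensors)
    total = s_left + s_right
    if total == 0:  # no ground contact at all: COP is undefined
        return 0.0
    return (s_left - s_right) / total
```

Only the sign transitions of this value matter for the phase detection, which is why the scaling to [-1, 1] is sufficient.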
(a) Obtained results (b) Original article [1] (simulated environment) (c) Original article [1] (real environment)
Figure 16: COP time course during stepping

Figure 17: Shaping of the sinusoidal joint trajectories on COP transitions

4.2.3 Estimation of the contact angle with the floor

Having four sensors under each sole, we can think of detecting the sole inclination relative to the ground. This is not really interesting in the simulated world, where we can have all desired information about the robot body or sole inclination. But in the real world, we only have a gyroscope located in the robot's chest, which can only provide the angular velocity of the robot torso. As described in figure 18 on the following page, we use two force measures, F_front and F_back (the sums of the two front and the two back sensors of the stance foot, respectively), to approximate the contact angle θ_contact by:

θ_contact = arctan((F_front - F_back) / K),  (19)

where K is a scaling factor. Approximated measures of θ_contact are shown on figure 19 on the next page, together with the actual foot inclination (which can be obtained in simulation by using a Webots GPS node). Of course, the measured angle is only defined
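Equation 19 can be sketched as follows. The helper name and the default scaling factor are illustrative only; in practice K has to be calibrated for the actual sensor geometry.

```python
import math

def contact_angle(front_force, back_force, k=1.0):
    """Approximate sagittal contact angle of the stance foot (eq. 19).

    front_force / back_force: summed values of the two front and two back
    pressure sensors of the stance foot. k is the scaling factor K
    (1.0 is a placeholder, not a calibrated value).
    Returns the angle in radians; 0 corresponds to a flat contact.
    """
    return math.atan((front_force - back_force) / k)
```

A positive result means the weight is shifted towards the toes, a negative one towards the heel.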
Figure 18: Sensor measures for flat and tilted contact with the floor

for the stance leg. In the swing phase, θ_contact = 0.

Figure 19: Right foot contact angle in the sagittal plane, sensors approximation and exact value

Obviously, the foot sensors can only provide a rough estimation of the contact angle. The measured values are noisy and not very accurate, and the approximated angle trajectory is not continuous. But in fact, we cannot reasonably expect to get exact measures by this means. For instance, they are necessarily biased if the foot rubs against the floor: F_front gets bigger for a constant θ_contact, which will then be overestimated. The method also doesn't work well for the ankle inclination in the frontal plane, probably because of the smaller distance between the sensors (the feet are twice as long as they are wide). But despite these limitations, these angle measures can be very useful for foot stabilization in the sagittal plane.

4.3 Initial and rest postures

We define the rest posture P_res as the average angle value of each controlled joint, around which the oscillators run. In this controller, it is modeled by simple offsets on the sinusoidal patterns (the θ_i^rest values). This allows easy adaptation to different rest postures. P_res differs from the simulation initial position P_init, which corresponds to the initial state of the system (controller phase φ_c = 0). At initialization, the
(a) Rest (b) Initial
Figure 20: HOAP-2 rest and initial simulation postures, in frontal and sagittal planes

robot's left leg is lifted to help the walking pattern take place. It is important to start from a stable position, as the simulation results can be biased by initial perturbations. This was not easy to achieve under Webots, as there is no means to precisely place the robot in this position (we can arbitrarily move and rotate the robot, but this caused more problems than it solved). Instead of a perfect initial position adapted to each possible parameter combination, we used several worlds with different P_init, depending on the hip amplitude.

Joint | Angle
Hip | 1
Knee | 4
Ankle | 19
Shoulder | 6
Body | 3 to 9
Table 2: Rest posture offset angles from the straight posture [degrees]

A human usually adopts a straight walking posture, without bending the legs. This is different for a humanoid robot: in most artificial gaits, the knee angle will oscillate around a positive value, so that the leg is always slightly bent. This helps stabilize the inverted pendulum dynamics of the robot, giving more inertia against undesired balancing by lowering the center of mass. To produce a realistic gait, we tried to keep these offsets as low as possible. The chosen rest posture has an important effect on the gait stability; a slight change in it can strongly influence the obtained result. On the HOAP-2 robot, the center of mass is slightly off-center towards the back, due to the battery compartment (this doesn't seem to be the case on the Qrio robot, or to a lesser extent, which could explain why the original implementation, shown in figure 1 on page 16, can walk with its knees stretched). To correct this, we tried to find a walking posture which doesn't apply too much weight to the back of the foot, by bending the robot's torso to the front. However, the COM must not be located too far to the front, as the inertia force would make the robot fall forward, which is much more likely to happen as the displacement speed increases. We will later need
to modify its position depending on the gait speed.

4.4 Actuator delay

The Webots simulation platform allows defining maximum joint force, velocity and torque values, to match the capabilities of the robot's actuators. The provided HOAP model seems to underestimate the actual motor performance. This leads to an excessive delay before the desired angle values are effectively reached, and limits the maximal joint values. The threshold values for actuator force and velocity have been increased (using the maxForce, maxVelocity and controlP parameters of the Webots joint node), so that the joints now closely follow the desired trajectories. Figure 21 illustrates this for the right hip joint (the other joints have a similar behavior).

(a) Default values (b) Increased thresholds
Figure 21: Right hip joint trajectory (frontal plane), desired and actual values

Both figures use the same parameters. With the default thresholds, the robot was not able to perform clean steps, which explains the differences in the target trajectories.
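The effect of an underestimated velocity limit can be reproduced with a toy rate-limited servo model. This is only an illustration of the tracking lag, not the Webots implementation; all names and constants are ours.

```python
import math

def track(target, max_step):
    """Follow `target` (desired angles, one per control step) with a joint
    whose angular change per step is capped at `max_step` degrees,
    a crude stand-in for an actuator velocity limit."""
    angle, out = 0.0, []
    for t in target:
        delta = max(-max_step, min(max_step, t - angle))  # clamp the slew rate
        angle += delta
        out.append(angle)
    return out

# A slew-limited actuator vs. a fast one on the same sinusoidal trajectory.
# The desired signal changes by up to ~4 degrees per step.
desired = [20 * math.sin(0.2 * i) for i in range(100)]
slow = track(desired, 1.0)    # lags behind and clips the peaks
fast = track(desired, 10.0)   # follows the target closely
lag_slow = max(abs(d - a) for d, a in zip(desired, slow))
lag_fast = max(abs(d - a) for d, a in zip(desired, fast))
```

With the tight limit, the output degenerates into a clipped triangle-like wave, which is qualitatively what the default HOAP model produced before the thresholds were raised.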
5 Parameter space exploration

Finding the best set of amplitudes for a stable and efficient gait is not a trivial problem. With five different amplitudes to control, it is hardly possible to obtain a really stable gait by hand tuning alone. This could be done using an optimization algorithm, but as the parameter space is relatively limited, we can do an exhaustive search on the whole set, which gives us an overview of all the working parameters. Apart from finding an optimal solution, this exploration is important to determine the adaptiveness and robustness of the controller. Ideally, one would expect the amplitude parameters to be quite permissive, not needing fine re-tuning at every change of conditions. This is particularly crucial for a real-world application, where actuator performance (e.g. in case of motor backlash or inaccuracy) and external perturbations cannot be totally controlled. Lastly, the exploration could exhibit relationships between the parameters, allowing us to reduce their number (for instance, replacing A_hip_s, A_ankle_s and ω_c by a global speed parameter). To evaluate all possible combinations of parameters, we launch several batch simulations with different joint amplitudes. This can easily be done in Webots by using a supervisor robot, which has full control over the simulation. We use it to stop or revert the process and store the results.

5.1 Stepping

For the stepping, we have three parameters to evaluate: A_hip_r, A_ankle_r and A_p. The efficiency criterion is the number of achieved steps. At this point, we need to fix some parameters, as three amplitudes already represent an important number of possible variations. In fact, the sagittal plane amplitude A_p can vary in a much smaller range than the two others, so we consider it a constant. If it is too big (A_p > 4), the length of the leg changes too much between the bent and straight positions, which leads to important oscillations in the frontal plane. If it is too small (A_p < 1.5), stepping doesn't correctly take place, as the feet are not lifted enough. Since we want a stable gait, a minimal leg lifting seems to be a good choice, so we consider A_p = 2.0. Now only the two balancing amplitudes A_hip_r and A_ankle_r remain. These two are strongly correlated, as their role is to keep the foot nearly parallel to the ground in the frontal plane (in the sagittal plane, the foot already stays parallel to the ground by the way leg lifting is defined). In order not to miss possible candidates, we first choose a large range for the hip amplitude, from 0 to 15 degrees; higher hip amplitudes cause too much balancing to be exploitable, and wouldn't produce a realistic movement. Then we search for the corresponding ankle amplitudes between 0 and 10 degrees. Again, higher ankle amplitudes cannot achieve stability, and lower amplitudes don't initiate the stepping pattern. This gives us a general overview of the stepping requirements, presented on figure 22 on the following page. Simulation time is limited to 10 seconds, and the solutions where the robot gets stuck (both feet staying in contact with the ground) or falls are discarded (the number of steps is set to zero). For the stepping we must be very selective about the accepted experiments, otherwise we
obtain a lot of unsuccessful movement patterns (for instance when an implausibly high number of steps is detected, as the observed frequency can only produce 15 to 20 steps in 10 seconds).

Figure 22: Variation of the A_hip_r and A_ankle_r parameters (stepping only) in the whole parameter space. Number of steps, 0.5 degree ticks

An interesting point to notice is that the ankle amplitude is more decisive than the hip amplitude for the stepping to take place. Below a certain threshold (A_ankle_r < 4.0), stepping cannot take place, and the robot mainly gets stuck. Similarly, beyond a maximal amplitude (A_ankle_r > 10.0), the robot roll angle becomes too large, and it always falls. Stepping is robust in the successful region, as a small variation of the parameters doesn't influence the number of achieved steps or the stability. This is not the case at the borders of the region, where a slight variation makes the movement unstable (this mainly comes from the way we discard the unstable runs; considering the number of steps before the fall would lead to a more continuous but less representative graph). Some regions are out of range on the plot (represented by white squares). They correspond to simulation runs with many small unsuccessful steps, and can be discarded. To restrict the parameter space further, we are not interested in the combinations where A_hip_r > A_ankle_r, which don't correspond to human-like stepping patterns.
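The batch exploration amounts to a plain grid sweep over the two amplitudes. In this sketch, `evaluate` stands in for a full supervisor-controlled Webots run returning the number of achieved steps; all names are hypothetical.

```python
import itertools

def explore_stepping(evaluate, hip_range, ankle_range, tick=0.5):
    """Exhaustive sweep over the two balancing amplitudes.

    evaluate(a_hip, a_ankle) is assumed to run one stepping trial and
    return the number of achieved steps (zero if the robot falls or
    gets stuck). Returns the full result grid and the best combination.
    """
    hips = [hip_range[0] + tick * i
            for i in range(int((hip_range[1] - hip_range[0]) / tick) + 1)]
    ankles = [ankle_range[0] + tick * i
              for i in range(int((ankle_range[1] - ankle_range[0]) / tick) + 1)]
    results = {(h, a): evaluate(h, a)
               for h, a in itertools.product(hips, ankles)}
    best = max(results, key=results.get)
    return results, best
```

A real run would plug in the supervisor-driven simulation instead of the toy scoring function, and persist `results` between batch launches.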
Even if the successful patterns achieve stability (at least in the considered 10-second time interval), a high number of them are not really good. The point is now to find a criterion, or more probably several combined, to determine whether a parameter set is satisfactory. For instance, we can use the time between two steps, the robot roll angle, the quality of the ground contacts, etc. to pick the best candidates among our initial set. Figure 23 shows the mean angle between the foot and the ground at the contact moment.

Figure 23: Variation of the A_hip_r and A_ankle_r parameters (stepping only) in the whole parameter space. Foot contact angle with the ground, 0.5 degree ticks

5.2 Chosen stepping parameters

From the previous measures and visual examination of the best solution candidates, we can now pick an optimal set of parameters. As far as we could evaluate them (the simulation doesn't run in real time, but at approx. 0.2x, so we would need a huge amount of time to exhaustively test all working parameter combinations), the successful stepping patterns remain stable for an arbitrarily large number of steps. The chosen parameters have been tested on a 10-minute simulation, resulting in several hundred left-to-right limb transitions without a single failure.
A_hip_r | 3.
A_ankle_r | 7.5
A_p | 2.0
Table 3: Chosen stepping amplitudes [degrees]

Even with the chosen parameters, the stepping movement is still not perfect. While we would expect the foot to stay absolutely parallel to the ground (in the frontal plane) during the whole stance phase, oscillations occur. As a consequence, the best stability is not achieved with the foot being flat at heel strike, but with a small inclination, so that the contact is made on the external edge of the foot. The foot will only be flat in the middle of the stance phase, before toe-off. Even if this works quite well, it causes some problems during the gait, which cannot always maintain a straight direction.

5.3 Biped walking

5.3.1 Gait performance

Now that we have a stable stepping movement, we examine the gait performance for different walking amplitudes A_hip_s and A_ankle_s. For the moment, we make the assumption that the optimal stepping movement is the best choice for all walking patterns, but this will have to be validated. To classify the gait performance, we measure the distance that the robot covers in a fixed time interval of 20 seconds. Again, the experiment is discarded (the distance is set to zero) if the robot falls or gets stuck. A fall is detected from the torso inclination, and we consider the robot stuck if too much time elapses between two steps. As for the stepping, we start with a large covering of the entire parameter set. Figure 24 on the following page shows the range of 0 to 15 degrees for both amplitudes, with steps of 0.5 degrees. We only consider the displacement along the z axis, so that experiments where the robot turns are penalized. The gait is visibly stable, at least for small hip amplitudes. Similarly to the stepping movement, the successful region is unique and continuous. But the settings are still not optimal, as the successful surface gets narrower and hashed as speed increases. As a result, the top achieved speed actually decreases when A_hip_s > 8 degrees. This performance degradation is caused by important oscillations of the center of mass in the sagittal plane, often leading to the fall of the robot in the displacement direction. This is a natural behavior of biped walking, caused by inertia forces that increase with the gait speed. To improve stability, we slightly displace the COM to the back of the robot, using the torso joint inclination θ_torso (joint BODY_JOINT[1], see appendix A). Gait performance using the adjusted posture is shown on figure 25. The gait is now more stable, and the achieved distance rises linearly over the whole hip amplitude range. This illustrates the importance of the COM position, and the relative fragility of the generated gait, even in the simulated environment. However, tuning the COM position is no longer sufficient to stabilize the robot at the highest hip amplitudes.
Figure 24: Variation of the A_hip_s and A_ankle_s parameters (biped walking) in the whole parameter space. Achieved distance, 0.5 degree ticks

To have a more precise view of the working parameter combinations, we make a finer exploration of the higher hip amplitude range (11. < A_hip_s < 14.). This should allow us to obtain a fast and stable gait. Figure 26 on page 31 illustrates this, with 0.1 degree amplitude intervals on the previously best region. We see that the transition between the successful and non-working regions is again really sharp (it changes in less than 0.5 degrees). As for the stepping performance graph, this is due to the way we evaluate performance, by totally discarding the simulation runs where the robot falls. It doesn't mean the robot falls immediately, so these runs also produce a valid gait for several seconds.

5.3.2 Parameters study

To reduce the number of parameters describing our gait, we want to find relationships between them. Figure 25 clearly exhibits a linear relation between the hip and ankle amplitudes. This is not surprising, if we recall how the sagittal plane joint trajectories are defined. The stepping pattern keeps the feet horizontal in both directions, so that the resulting foot inclination
Figure 25: Variation of the A_hip_s and A_ankle_s parameters (biped walking) in the whole parameter space. Achieved distance, 0.5 degree ticks (with adjusted COM position)

only depends on the walking amplitudes A_hip_s and A_ankle_s:

θ_contact_p = θ_hip_s - θ_ankle_s  (20)

For stability, the value of θ_contact_p at the end of the swing phase, when the foot enters into contact with the ground, is crucial. At this moment, the foot should be flat, or slightly lifted depending on the gait speed, to induce a continuous movement in the displacement direction. If θ_contact_p gets out of an acceptable range, the robot will topple, either to the front or to the back. On figure 27 on page 32, we see the working A_hip_s and A_ankle_s combinations. The relationship is still linear, and can be approximated by:

A_ankle_s = (4/5) A_hip_s  (21)

We see that the acceptable value range for θ_contact_p gets narrower as speed increases. A_hip_s = A_ankle_s should produce the most stable gait, with an overall flat contact angle. However, this is only true for really slow gaits, when the robot body doesn't get much movement inertia. More surprisingly, we get the
Figure 26: Variation of the A_hip_s and A_ankle_s parameters (biped walking) in the high amplitude range. Achieved distance, 0.1 degree ticks

opposite of what we expected for θ_contact_p. Logically, the foot orientation should help the forward displacement, with the heel touching the floor first (heel strike) at the end of the swing phase, and pushing the robot body forward at the end of the stance phase (toe-off). Instead of this, the best results are obtained when the foot hits the ground with the toes (more precisely, the normal heel-strike movement doesn't lead to stable gaits). By doing this, the body oscillations in the sagittal plane are reduced, but it doesn't produce a very realistic behavior.
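Equations 20 and 21 combine into a one-line bookkeeping function. This is a sketch with our own naming; evaluating the contact angle at the amplitude peaks is a simplification of the actual phase-dependent trajectory.

```python
def sagittal_settings(a_hip_s):
    """Sagittal-plane amplitude bookkeeping for the walking pattern.

    Applies the empirical linear fit of eq. (21),
        A_ankle_s = 4/5 * A_hip_s,
    and reports the resulting peak foot contact angle from eq. (20),
        theta_contact_p = theta_hip_s - theta_ankle_s,
    evaluated at the amplitude peaks. All values in degrees.
    """
    a_ankle_s = 4.0 / 5.0 * a_hip_s
    peak_contact_angle = a_hip_s - a_ankle_s  # grows linearly with speed
    return a_ankle_s, peak_contact_angle
```

The residual contact angle grows with the hip amplitude, which matches the observation that the acceptable range for θ_contact_p narrows as the gait speeds up.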
Figure 27: Relationship between hip and ankle amplitudes, with the resulting optimal foot contact angle in the sagittal plane

6 Extensions to the existing controller

6.1 Arm movement

A first and simple modification of the proposed controller is arm balancing. This plays an important role in all types of human locomotion [1], both for stability and efficiency. In biped locomotion, arm balancing partially compensates the center of mass displacement, in both the sagittal and frontal planes, and thus stabilizes the inverted pendulum dynamics of the robot. It also gives a much more realistic impression to the gait. In humanoid biped locomotion, arm balancing is simply opposed to the leg displacement (the left arm is synchronized with the right leg, the right arm with the left leg). So we need two new oscillators with a π phase difference with respect to the corresponding leg. Since we also want the arm amplitude to be proportional to the leg displacement, we can simply reuse the legs' sagittal plane oscillators (the amplitudes don't even need to be adapted, except that we use half of the knee amplitude for the elbow), so that arm balancing doesn't require any additional parameters. The equations for the right arm joint angles are as follows:

θ^d_shoulder_p(φ_c) = A_p sin(φ_c3) + A_hip_s sin(φ_c4) + θ^rest_shoulder_p  (22)

θ^d_elbow_p(φ_c) = A_p sin(φ_c3) + θ^rest_elbow_p  (23)

Unfortunately, the effects of arm balancing on the gait were not as important as expected. It reduces the overall robot balancing, but in terms of stability, a successful gait isn't perturbed if we remove the arm movement. On a longer time interval, however, it helps to keep the locomotion direction straight.

6.2 Feet placement stabilization in the sagittal plane

The second modification also concerns stabilization. The main encountered issue, and probably one of the crucial points to achieve stable biped locomotion, is the foot inclination relative to the ground.
For a stable support of the stance leg, the foot has to remain parallel to the ground in both planes. Otherwise, undesired balancing effects occur, which lead to an inefficient gait or to the robot falling (this is of course accentuated in case of perturbations or on an uneven floor). As proposed, the controller doesn't use any stabilization mechanism other than the adaptation to the robot dynamics. The article proposes a stabilization method for frontal plane balancing, using feedback based on the robot roll angle, but it wasn't necessary on the HOAP robot, and neither was it on the Qrio, because the large feet of the robot (proportionally much larger than a human's) already prevent over-rolling. Another drawback of this method is its use of the body roll angle at its extremes, which would have to be approximated with the gyroscope. Instead of this, we try to use only information obtained from the pressure sensors. What we want here is to stabilize the gait in the sagittal plane, as that is our main problem when the robot walks fast. The goal is not to force badly suited parameters to work, but to slightly stabilize existing combinations. To do this, we correct the oscillations of the ankle joint with a proportional-integral-derivative (PID) controller. It is a simple and widely used method to minimize a chosen error in a control loop. In our case, the considered error is the inclination of the stance foot w.r.t. the ground, θ_contact. It is obtained from the pressure sensors, as presented in section 4.2.3. To filter the high-frequency noise on the sensory measures, the correction angle θ_corr is expressed as a differential equation:

dθ_corr/dt = K_p θ_contact - θ_corr  (24)

We only use the proportional term of the PID, as it is already sufficient to stabilize the sole placement without side effects. But one needs to be careful when choosing the proportional constant K_p, in order not to cause overshoot (i.e. oscillations around the target value). We also want to limit the influence of the PID controller, otherwise it could suppress the ankle propulsion movement. In principle, though, the PID can produce a totally flat and stable foot contact. The applied correction is presented on figure 28.

(a) Correction value based on the approximated contact angle (b) Effects of the correction on the ankle joint trajectory
Figure 28: PID correction applied to the right ankle joint
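Equation 24 can be integrated with a simple Euler step, acting as a first-order low-pass filter on the noisy sensor-derived contact angle. This is a minimal sketch; the gain and time step are illustrative and would need tuning on the actual controller.

```python
def correction_series(theta_contact, kp=0.5, dt=0.01):
    """Integrate the correction equation (24),
        d(theta_corr)/dt = Kp * theta_contact - theta_corr,
    with an explicit Euler step over a sequence of contact-angle samples.
    kp and dt are placeholder values, not the thesis settings.
    """
    corr, out = 0.0, []
    for theta in theta_contact:
        corr += dt * (kp * theta - corr)  # one Euler integration step
        out.append(corr)
    return out
```

For a constant input the correction converges smoothly towards K_p times the contact angle, which is how the high-frequency sensor noise gets filtered out of the applied ankle correction.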
Figure 29: Variation of the A_hip_s and A_ankle_s parameters (biped walking). Top: using the PID controller. Bottom: previous results without PID

As expected, the successful walking region gets broader, as shown on figure 29. The controller is now more adaptive to a misplacement of the sole in the sagittal plane. Unfortunately, it doesn't improve the top obtained speed. In fact, it doesn't behave really well for large hip amplitudes (A_hip_s > 12.0). By sticking the foot to the ground, it breaks some part of the robot dynamics, which doesn't lead to the expected behavior. Unless mentioned otherwise, we won't use the PID controller except on low-speed gaits, where it has a beneficial effect.

6.3 Speed control

Controlling the robot's walking speed was one of the initial objectives of the project. Two different factors can influence the resulting locomotion speed: the step length and the gait frequency ω. Ideally, a combination of these two factors would lead to realistic gaits. Unfortunately, as discussed in section 2.2 on page 9, we don't have much control over the final gait frequency, as the coupling constrains it to the robot's natural frequency ω_r. On the HOAP, the controller converges to an observed frequency of ω ≈ 5 rad/s. This corresponds to approximately two steps per second, which could be too fast for the robot hardware (or at least too fast to produce a realistic gait). There is not much to do to change this behavior, as modifying the controller frequency ω_c has no impact on ω.
Without deeply modifying the controller, one solution would be to reduce the coupling constant. This couldn't give us full control over the gait speed (only K_c = 0 would lead to ω = ω_c), but it could help reduce the observed frequency. This also wasn't possible, as lowering K_c breaks the locomotion pattern before having an observable influence on the speed (K_c < 9 already affects the gait performance). So we can only influence the resulting gait speed by modifying the oscillator amplitudes. As seen in section 5.3, the displacement speed increases linearly with the hip amplitude, and we can derive the corresponding ankle amplitude with a simple function. Figure 30 shows the relationship between the gait speed and the hip amplitude.

Figure 30: Gait speed in relationship with hip amplitude

While this is not a perfect solution, it can provide continuous control of the displacement velocity. The maximal achieved speed is 0.15 m/s. This is still far from the human gait speed, which is approximately 1 to 1.5 m/s. This is mainly due to the short step length, which is smaller than the foot length. But of course, the robot couldn't perform as well as a human, even if it could exactly reproduce a given human gait, because of its much smaller size. On the real robot, the speed progression would probably not be as linear as in the simulated world. At the very least, a speed threshold will be present, as the motors cannot follow the desired joint trajectories indefinitely.

6.4 Direction control

Another interesting improvement would be to control the locomotion direction. A detailed study of human steering control is presented in [14]. It is a relatively complex task, which takes place over two different steps (unlike e.g. a step length modification, which can be initiated in a single step). The human steering phase is mainly composed of three parts:

- Displacing the COM in the new displacement direction, through foot placement and/or trunk roll motion
- Turning the head in the travel direction
- Orienting the trunk in the new direction
Basically, the body orientation is modified so that the COM follows the new direction, and the corresponding foot performs a supplementary displacement in the frontal plane. The HOAP robot cannot exactly reproduce this pattern, as it doesn't have a trunk yaw joint to perform the final body reorientation (the trunk only has one DOF, for rotation in the sagittal plane: BODY_JOINT[1], see appendix A). To overcome this, we use leg yaw rotation, by adding control on the joints LLEG_JOINT[1] and RLEG_JOINT[1]. We also wouldn't strictly need head reorientation, as we don't use any visual tracking, but it gives a more natural impression to the movement. The resulting movement is as follows, supposing we want the robot to turn to the left (steering is initiated during the next right stance phase). The left foot slightly moves to the left at the end of the swing phase, to change the overall locomotion direction. To maintain stability, we need to cancel a part of the balancing to the right during the following right stance. This is done by rotating the right foot opposite to the new locomotion direction. As time was lacking, the steering mechanism is far from complete and has stability problems, but it already gives interesting results. This shows we can easily add functionalities to the controller.
7 Discussion of the results obtained for a generated gait

7.1 Chosen parameters

Given all these performance measures, we can now choose an optimal gait. We decided not to take the fastest possible gait, to avoid instability effects. Our criteria are speed, dynamical stability and visual appearance. From now on, all presented results use these parameters (unless mentioned otherwise, without the PID ankle stabilization), presented in table 4.

A_hip_r | 3.
A_ankle_r | 7.5
A_p | 2.0
A_hip_s | 11.5
A_ankle_s | 9.1
Speed | 0.11 m/s
Table 4: Chosen optimal gait parameters

7.2 Analysis of the chosen gait

The results are still not perfect for the chosen gait, as undesired direction changes happen from time to time. This is mainly caused by imperfections in the stepping movement: foot support in the frontal plane is not always perfect, so the propulsion phase can be perturbed. The gait appearance could also be improved, as the robot has a tendency to walk on its toes. Apart from this, stability seems to be totally achieved. As for the stepping, the robot could walk for an arbitrarily long time interval in the absence of external perturbations (the main limitations being the size of the floor and the available simulation time). At the least, it achieved a five-minute simulation run without falling.

(a) Right leg (b) Left leg
Figure 31: Joint angle trajectories for the chosen gait
39 The time course of the COM is presented on figure 3, both for absolute position and rotation. We can see that the displacement and oscillations are quite regular once the movement has stabilized. The body oscillations are relatively important in the sagittal plane, with approx. 1 degrees amplitude. We would certainly gain in stability by lowering them. But as already said, the PID stabilization doesn t apply well at this speed, so we would need to find another stabilization mechanism frontal plane sagittal plane frontal plane sagittal plane position [m].6.4 angle [deg] (a) Position (b) Rotation Figure 3: Displacement and rotation of the COM during the gait, frontal and sagittal planes 7.3 Resistance to perturbations Different types of perturbations may happen during the gait: external forces (sudden or continuous), joint angle modifications (e.g. in case of injury or fatigue[1]), etc. The goal is that the controller returns smoothly to a stable state after the perturbation. As it is hard to estimate what level of perturbation should be tolerated to consider that the gait is robust, these experiments will only give a general overview of the adaptability of the controller. We also won t be able to study all possible perturbations in details in details. As originally described, the controller doesn t really provide a mechanism to recover from a perturbation 48. We hope that the additional PID stabilizating control will provide a better adaptability. In the simulated Webots world, external forces and torques can simply be applied to the robot. They can model a continuous wind force or, more interestingly, a discrete shock. Unfortunately, they can only apply on the COM of the robot, and not on a particular part of the body Continuous perturbations (wind) To have a first idea of the allowable perturbation range, we start by testing the robot in its rest and initial postures. We apply an increasing force in the front, back and side directions until the robot falls. 
The chosen experimental protocol is as follows: the robot walks unperturbed for 5 seconds to reach a stable gait cycle, after which the continuous force
48 Especially if it occurs in the sagittal plane.
is applied. If the robot manages to walk for 10 more seconds, we consider the experiment successful.

Table 5: Maximal resistance to a continuous force [N] applied to the COM (columns: Front, Back, Side; rows: Static, Walking)

In this case, the robot reacts rather poorly to the perturbations, with a maximal tolerable force of one newton. But this can be explained, as the controller was not designed to react to this kind of perturbation. Without adapting the global inclination of the robot, it is logical that even a slow wind can make it topple over.

7.3.2 Discrete perturbations (shock)

Recovering from a sudden and brutal perturbation (e.g. the foot hitting an obstacle) is more complex than recovering from a continuous force. It should trigger a set of reactions to maintain gait stability [13]: reflexes or voluntary movements involving not only the legs, but also the arms and trunk. The stability recovery can also spread over several consecutive steps. We proceed as previously: the robot walks for five seconds, and then the force is applied during 1 ms to model a brief shock. The robot succeeds if it can recover from the perturbation.

Table 6: Maximal resistance to a discrete force [N] applied to the COM for 1 ms (columns: Front, Back, Side; rows: Static, Walking)

The tolerance to a discrete perturbation is much higher, especially if it comes from the back. In this case, the robot stops its fall with the swing leg, and the rhythmic pattern recovers easily. But of course these results strongly depend on the precise moment at which the perturbation arises.

7.4 Stability on different floors

The floor properties can definitely play a major role in locomotion performance. While most artificially generated gaits perform well on a controlled surface, the obtained results can dramatically degrade as the floor gets slippery, uneven or inclined.

7.4.1 Floor physical properties

We cannot model a wide range of ground irregularities under Webots, as the colliding surfaces have to be flat.
But we can modify the physical properties of the contact surface, e.g. its slipperiness and bounciness. Here we focus on the slipperiness of the floor, as a robot evolving in a typical human environment is more likely to meet this kind of perturbation (e.g. when walking on a carpet, a wooden floor, etc.) than a bumpy floor. In the original controller implementation, the Qrio robot successfully walked across four different surfaces (carpet, plastic, rubber and metal plates) of different heights (from … to 3.5 mm) without any modification of the parameters. A detailed study of walking on a low-friction floor is presented in [8]. Here we only propose a very simplified overview of the problem, for several reasons. First, we don't know how accurate ODE's friction model is⁴⁹, or whether the available Coulomb friction parameter µ reflects real surfaces. And as we don't know what the robot's sole material is anyway, we couldn't precisely set the friction parameter.

Figure 33: Modified Webots world using different floor surfaces

Our experiment is as follows: we divide the walking surface into three regions (figure 33), the first and last modeling a default surface⁵⁰, while the middle one is more slippery. To be successful, the robot has to go through the whole floor without falling or getting stuck. In a first friction range (0.2 < µ < 1), the experiment is successful, and no noticeable effect appears, either visually or in the gait stability. Below this threshold, the floor quickly gets very slippery. Trajectory errors start to appear for µ = 0.18, and if we decrease this value further, the robot always falls. These results are relatively good, if we assume ODE's friction model is accurate. For comparison, a Coulomb coefficient of µ = 0.2 corresponds to the friction between two sheets of plastic, or between pieces of wet wood. Even if these results are certainly not very precise, the controller should be at least capable of locomotion on most common materials.
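The Coulomb model behind these numbers is simple to state: the stance foot starts to slip once the tangential contact force exceeds µ times the normal force. A minimal illustration of this threshold follows; the mass and propulsion force are made-up round numbers for the example, not measurements from the simulation.

```python
G = 9.81  # gravitational acceleration [m/s^2]

def slips(tangential_force, normal_force, mu):
    """Coulomb friction: sliding starts once |F_t| > mu * N."""
    return abs(tangential_force) > mu * normal_force

# Hypothetical single-support example: a small humanoid of about 7 kg
# pushing off with a 10 N horizontal propulsion force on one foot.
mass = 7.0                   # [kg], illustrative
normal = mass * G            # normal force on the stance foot [N]
propulsion = 10.0            # tangential push-off force [N], illustrative

for mu in (1.0, 0.2, 0.1):
    print(mu, slips(propulsion, normal, mu))
```

With these example numbers the foot holds at µ = 0.2 but slips at µ = 0.1, which is at least qualitatively consistent with the failures observed below µ ≈ 0.18.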
Another positive point is that the transitions between the different floors happen flawlessly.

7.4.2 Slope

Making the robot climb a slope is a totally different problem. As for a human, the gait needs to be adapted to make shorter steps and lift the swing leg higher []. With an appropriate gait, the problem is not so different from walking on a flat surface⁵¹. Unsurprisingly, the gait that we tuned for flat ground performs very poorly, failing already at an inclination of one degree.

49 As it performs several approximations for performance reasons.
50 Which uses a friction parameter µ = 1, corresponding to a non-slippery contact.
51 Assuming the floor doesn't slip.