
Universidade de Aveiro
Departamento de Electrónica, Telecomunicações e Informática, 2009

João Alexandre da Silva Costa e Cunha

Controlo holonómico e comportamentos na equipa CAMBADA

Holonomic control and behaviours for the CAMBADA Robotic Soccer Team


Dissertation presented to the Universidade de Aveiro in fulfilment of the requirements for the degree of Master in Computer and Telematics Engineering, carried out under the scientific supervision of Prof. Doutor Nuno Lau and Prof. Doutor João Rodrigues, assistant professors at the Departamento de Electrónica, Telecomunicações e Informática of the Universidade de Aveiro.

the jury

president: Doutor Tomás António Mendes Oliveira e Silva, associate professor, Universidade de Aveiro

Doutor Agostinho Gil Teixeira Lopes, assistant researcher, Universidade do Minho

Doutor José Nuno Panelas Nunes Lau, assistant professor, Universidade de Aveiro

Doutor João Manuel de Oliveira e Silva Rodrigues, assistant professor, Universidade de Aveiro

acknowledgements

To my family, especially my parents, Manuel and Estefânia, and my brother, I am grateful for their unconditional support throughout these years. I thank professors Nuno Lau and João Rodrigues for their guidance and their availability whenever doubts arose. I thank all the members of the CAMBADA team for their help and for the creative environment they provided. I especially thank Nuno Figueiredo for his help in completing the ball position prediction in time for Robótica'09, and Gustavo Corrente for his help in adapting the Tribots solution. Finally, I thank all my friends for their friendship and, especially, for their understanding of my disappearance over these last months. I hope to see you soon...

Abstract

CAMBADA is the RoboCup Middle Size League robotic soccer team created by researchers of the ATRI group of IEETA at the University of Aveiro. This thesis presents the contributions developed in holonomic motion control and in high-level behaviours.

At the motion control level, several restrictions affecting holonomic motion were addressed. This is of vital importance since, given the highly dynamic environment of a soccer game, it is crucial to move and position the robots efficiently on the field.

At the behaviours level, given the importance of the ball in the soccer game, and building on previous work on the estimation of the ball velocity, an active interception behaviour was implemented, allowing the robots to engage the ball by predicting its movement rather than moving directly towards it as if it were static.

Given the full autonomy of the robots, their perception of the game should be as close to reality as possible. In order to supply the robot with additional information regarding the state of the game, a method to detect whether it is stuck was developed.

The implemented work improved the team performance and contributed to the victory of a National Championship (Robótica'09) and a remarkable third place at RoboCup 2009 in Graz, Austria.

Resumo

CAMBADA é a equipa de futebol robótico da liga de tamanho médio do RoboCup criada por investigadores do grupo ATRI do IEETA da Universidade de Aveiro. Esta dissertação apresenta as contribuições desenvolvidas no controlo do movimento holonómico e nos comportamentos de alto nível.

Ao nível do controlo do movimento, várias restrições que afectam o movimento holonómico foram tratadas. Isto é de vital importância visto que, dado o ambiente altamente dinâmico de um jogo de futebol, é crucial mover e posicionar os robots eficazmente no campo.

Ao nível dos comportamentos, dada a importância da bola no jogo de futebol, e considerando o trabalho realizado no que diz respeito à estimativa da velocidade da bola, o comportamento de intercepção activa foi desenvolvido, permitindo aos robots apanhar a bola prevendo o seu movimento em vez de se moverem directamente para ela considerando-a estática.

Dada a autonomia completa dos robots, a sua percepção do mundo deve ser o mais próxima possível da realidade. De modo a fornecer ao robot informação adicional respeitante ao estado do jogo, foi desenvolvido um método para determinar se está preso.

O trabalho implementado melhorou o desempenho da equipa e contribuiu para a vitória de um campeonato nacional, o Robótica'09, e para um notável terceiro lugar no RoboCup'09 em Graz, Áustria.

Contents

1 Introduction
   1.1 Motivation
   1.2 Objectives
   1.3 RoboCup
       1.3.1 Middle Size League (MSL)
   1.4 Thesis Structure

2 Robot Locomotion
   2.1 Wheeled Robots
       2.1.1 Wheels
       2.1.2 Typical Wheel Configurations
       2.1.3 Inverse Kinematics
   2.2 Biological Inspiration
       2.2.1 Legged Robots
       2.2.2 Other Biologically Inspired Types of Locomotion
   2.3 Common Wheel Configurations in MSL
   2.4 Summary

3 The CAMBADA Team
   3.1 CAMBADA Robots
   3.2 Architecture
       3.2.1 Hardware
       3.2.2 Software
   3.3 Robot Motion Control Loop
   3.4 Summary

4 Holonomic Control
   4.1 Restrictions
       4.1.1 Discrete Time Control
       4.1.2 Acceleration
       4.1.3 Actuator Saturation
       4.1.4 Control Latency
   4.2 Predictive Control
       4.2.1 Robot Model
       4.2.2 Ball Model
   4.3 New Acceleration Filter
   4.4 Estimating Model Parameters
   4.5 Results
   4.6 Summary

5 Stuck Detection
   5.1 Known Methods for Stuck Detection
       5.1.1 Haptic Sensors
       5.1.2 Bumper Sensors
       5.1.3 Stasis Sensors
       5.1.4 Accelerometer Sensor
       5.1.5 Image Processing
   5.2 CAMBADA Implementation
       5.2.1 Results
   5.3 Summary

6 Behaviours
   6.1 Active Interception
       6.1.1 Corrections to the Interception Point
       6.1.2 Role Assignment
   6.2 Intercepting the ball near the border lines
   6.3 Summary

7 Conclusion and Future Work
   7.1 Conclusion
   7.2 Future Work

List of Figures

1.1 A Robótica'2009 game.
1.2 A Middle Size League field with official markings.
2.1 Example of orientable wheels.
2.2 Example of swedish wheels.
2.3 Three examples of commercially available humanoid robots.
2.4 Example of a crawling robot.
2.5 Example of a realist fish robot.
2.6 Example of an underwater biologically inspired robot, RoboLobster [12].
2.7 Example of a flying robot.
3.1 A CAMBADA robot.
3.2 Different robot shapes in MSL.
3.3 The CAMBADA top layer.
3.4 The middle layer.
3.5 The lower layer.
3.6 The micro-controller network.
3.7 The software architecture.
3.8 Flow-chart of the robot motion control.
4.1 Deviation in the robot movement.
4.2 Discrete time control effect.
4.3 Acceleration limiter.
4.4 Effect of control latency over cycle time.
4.5 Predicting the robot position at the end of delay considering previous commands.
4.6 Changes in the information flow in the robot motion control scope.
4.7 The considered robot dimensions assuming the ball as a particle.
4.8 The reflection of the ball velocity when it collides with the robot.
4.9 The method for faster ball velocity convergence after a collision.
4.10 Robot path with different values of maximum acceleration.
4.11 New implementation of the acceleration limiter.
4.12 The robot path using the new acceleration limiter and robot pose prediction after delay.
4.13 The resulting robot path with constant angular velocity.
4.14 Predicted ball positions after control delay.
5.1 Diagram of the discrepancy between the desired and estimated velocities.
5.2 Evolution of desired and estimated robot speeds when stuck.
6.1 Class diagram of all CAMBADA behaviours.
6.2 Described path when the robot moves to the estimated ball position.
6.3 Geometric representation of an interception.
6.4 The captured position during an interception.
6.5 Corrections to the interception point.
6.6 isBallInDangerZone verifications.
6.7 getApproachPoint implementation.

List of Tables

7.1 CAMBADA results in national competitions

Chapter 1

    Introduction

    1.1 Motivation

CAMBADA1 is the University of Aveiro's Middle Size League robotic soccer team. The project started in 2003 within the IEETA's2 ATRI3 research group. It involves researchers from different scientific areas such as image analysis and processing, control, artificial intelligence, multi-agent coordination and sensor fusion.

Since its creation, CAMBADA has participated in several competitions, both national and international, revealing impressive progress which culminated in the 5th place in the World Championship RoboCup 2007, the 1st place in Robótica 2007, Robótica 2008 and Robótica 2009, becoming three-time National Champion, and the 1st place in the World Championship RoboCup 2008, in Suzhou, China.

In order to face new challenges and ever-growing competition, the team aims for continuous improvement to maintain the achieved level of performance. With this in mind, and considering previous years' improvements in sensor information interpretation, the necessity for efficient robot control arises to enable the development of predictive behaviours, especially in robot motion control, since part of the team's success relies on placing the robot in the desired position.

    1.2 Objectives

It is the objective of this project to refine and revise the existing robot behaviours and implement new ones if necessary, as well as to implement sensor and information fusion techniques to build a better representation of the robot's worldstate. Holonomic control related issues should also be addressed to improve the robot's motion. The innovations developed in this project should improve the robot's individual performance and the team's overall gameplay.

1 CAMBADA is an acronym of Cooperative Autonomous Mobile roBots with Advanced Distributed Architecture
2 Instituto de Engenharia Electrónica e Telemática de Aveiro - Aveiro's Institute of Electronic and Telematics Engineering
3 Actividade Transversal em Robótica Inteligente - Transverse Activity in Intelligent Robotics

    1.3 RoboCup

RoboCup is an international project focused on fostering innovation in robotics, artificial intelligence and related areas. Its long-term goal is, by the year 2050, to develop a team of fully autonomous humanoid robots that can play and win against the human world champion soccer team [1]. To accomplish this goal, RoboCup comprises various competitions, which are divided into different leagues:

    • RoboCup Junior

    – Dance League

    – Rescue League

    – Soccer League

    • RoboCup Rescue

    – Robot League

    – Simulation League

    ∗ Agent Competition

    ∗ Infrastructure Competition

    • RoboCup Soccer

    – Humanoid Leagues

    ∗ Kid Size

    ∗ Teen Size

    – Middle Size League

    – Mixed Reality

    – Simulation Leagues

    ∗ 2D Simulation League

    ∗ 3D Simulation League

    ∗ 3D Development League

    – Small Size League

    – Standard Platform League

    • RoboCup@Home

Each competition focuses on potential future applications of robotics, such as search and rescue in catastrophic scenarios or daily activities at home. Given their singularities, the different leagues enable researchers to address problems in different areas of robotics, such as hardware, machine vision or software.

    1.3.1 Middle Size League (MSL)

CAMBADA robots were designed to compete in RoboCup's Middle Size League. In this league, teams of 5 robots with maximum dimensions of 50 cm × 50 cm × 80 cm and a maximum weight of 40 kg play on a 12 m × 18 m field, depicted in Fig 1.2, according to robot-adapted official FIFA rules [2].

    Figure 1.1: A Robótica’2009 game.

Robots must be fully autonomous and have all sensors on-board. For cooperation purposes, robots can communicate with team-mates and the coach through a wireless connection. The coach must be autonomous as well, and normally runs on an external computer with no sensors, deciding according to the information provided by the players. The sole human intervention is the referee's decisions, which enforce the game rules. The environment in which the robots play has well-defined colours, such as the green field, the white lines, the orange ball, and the mainly black robots. However, this will tend to disappear, as in the change already planned for the 2010 edition, where the robots will play with a generic FIFA ball instead of the current orange-coloured ball.

Figure 1.2: A Middle Size League field with official markings.

    1.4 Thesis Structure

This thesis is structured as follows.

In chapter 2, an overview of the state of the art in mobile robots is presented, with special focus on wheeled robots, among which CAMBADA robots are included, covering the different wheels used and different wheel configurations.

Chapter 3 introduces the CAMBADA robots' architecture and the work developed prior to the beginning of this thesis.

In chapter 4, the restrictions affecting holonomic motion control and the implementation of predictive control are discussed.

Chapter 5 presents methods for self-detection of stuck situations and the implementation used in the CAMBADA team.

Chapter 6 describes the contributions made at the behavioural level of the robot, such as refinement of existing behaviours and implementation of new ones.

Finally, chapter 7 discusses the results achieved and presents the conclusion and suggestions for future work.

Chapter 2

    Robot Locomotion

This chapter presents an overview of the state of the art on robot locomotion. Special focus is given to wheeled robots, of which CAMBADA robots are an example. Biologically inspired locomotion methods are also discussed, such as legged locomotion, underwater locomotion and flying robots.

    2.1 Wheeled Robots

Since the dawn of robotics, with Walter's tortoise, robots have moved on wheels. Although not biologically inspired, these robots offer some advantages relative to other methods of locomotion. Wheeled robots do not suffer from static or dynamic stability problems, since the robot's center of gravity does not change, whether stopped or in motion. From a control point of view, wheeled robots are also simpler than other robots, since the desired movement is achieved by supplying power to the wheels, which is accomplished with feedback and PID controllers. By contrast, to move legged robots, complex models must be used in order to obtain forward motion while maintaining a vertical posture.

    2.1.1 Wheels

In mobile robotics there are different types of wheels in use. Different combinations of these wheels allow for diverse types of motion. Depending on the wheeled robot's purpose, a designer should choose the adequate type of motion wisely, since in most cases a simpler wheel combination should suffice.

    The different types of wheels used in robotics are [3]:

Fixed wheel, as the name implies, is a wheel with its center fixed to the robot frame; the angle between the wheel plane and the robot frame is constant.

Centered orientable wheel differs from the fixed wheel in that the wheel plane rotates around a vertical axle which passes through the center of the wheel. Therefore the angle between the robot frame and the wheel plane is time-varying. A centered orientable wheel is shown in the center of Fig 2.1.

Off-centered orientable wheel, or "castor wheel", is also orientable with respect to the robot frame. However, the vertical axle of rotation does not pass through the center of the wheel. Several off-centered orientable wheels are shown in the corners of Fig 2.1.

Swedish wheels are quite different from conventional wheels. In a conventional wheel, the velocity of the point of contact between the wheel and the ground is equal to zero. This is not true for a swedish wheel: the velocity of the point of contact between a swedish wheel and the ground has only one component that is equal to zero. Examples of swedish wheels are depicted in Fig 2.2.

    Figure 2.1: Example of orientable wheels, adapted from [4].

    Figure 2.2: Example of swedish wheels, adapted from [5].

    2.1.2 Typical Wheel Configurations

As mentioned, the combinations of the different wheels presented result in a variety of motions. The types of motion are usually characterized by the controllable degrees of freedom enabled by a specific combination of wheels. The most common types are [3]:

Omni-Directional Motion means a wheeled robot can move in any direction while facing any orientation. Therefore the robot does not need to perform any reorientation of the wheels in order to move in any given direction. This is accomplished by combining three or more swedish wheels or three or more off-centered orientable wheels. Since these robots control all degrees of freedom, they are considered holonomic.

Differential Motion is characterized by robots with two fixed wheels placed on a common axle but driven by separate motors. This kind of wheel configuration allows the robot to turn without having to steer any of its wheels. However, robots using differential motion are non-holonomic since they cannot move sideways.

Ackerman steering [6] is widely used in automobiles. It combines one or more fixed wheels placed on a common axle and a separate axle with one or more centered orientable wheels for steering. The robot can move forwards and backwards and turn sideways. When turning, the robot describes an arc around an instantaneous center of rotation (ICR), which lies on the lines orthogonal to all the wheel planes. If more than one steering wheel is used, the orientation of the wheels must be coordinated to ensure a single ICR, otherwise the robot will suffer from slippage of the front wheels when turning. Robots using this type of motion are non-holonomic since the fixed wheels prevent the robot from moving sideways.

Synchro Drive is a configuration of at least two centered orientable wheels. This allows the robot to achieve holonomy; however, to change the direction of motion, the wheels must be reoriented. If there is no slippage, the robot can only describe a curve centered at the intersection of the lines orthogonal to the wheel planes. Therefore the orientation of the wheels must be changed in order to achieve the other degrees of freedom.
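The single-ICR requirement of Ackerman steering can be made concrete with a minimal bicycle-model sketch: the turning radius follows from the wheelbase and steering angle, and when two steered wheels are used their angles must be coordinated so that both wheel axles pass through the same ICR. The wheelbase and track values below are purely illustrative, not taken from any vehicle or robot described here.

```python
import math

def ackermann_icr_radius(wheelbase, steer_angle):
    """Distance from the rear axle to the instantaneous centre of
    rotation (ICR) in the idealised bicycle approximation."""
    return wheelbase / math.tan(steer_angle)

def ackermann_steer_angles(wheelbase, track, steer_angle):
    """Inner and outer front-wheel angles that share a single ICR,
    avoiding front-wheel slippage when two steered wheels are used."""
    R = ackermann_icr_radius(wheelbase, steer_angle)
    inner = math.atan(wheelbase / (R - track / 2.0))
    outer = math.atan(wheelbase / (R + track / 2.0))
    return inner, outer

# Coordinated angles for an illustrative 2.5 m wheelbase, 1.5 m track
# vehicle steering at 0.3 rad: the inner wheel must turn more sharply
# than the outer one, since it sits closer to the ICR.
inner, outer = ackermann_steer_angles(2.5, 1.5, 0.3)
```

Uncoordinated wheels would each imply a different ICR, which is precisely the front-wheel slippage described above.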

    2.1.3 Inverse Kinematics

To move a wheeled robot, velocities must be supplied to all the driven wheels. This is usually achieved by translating the desired velocities to be applied at the robot frame into the wheel setpoints, a process known as inverse kinematics.

Inverse kinematic models can be deduced based on the previously mentioned constraints [3]. For conventional wheels, the assumed constraints are that the velocities along and orthogonal to the wheel planes must be zero. As mentioned above, in swedish wheels only one of the velocity components must be zero.

The constraints of each wheel composing the robot's wheel configuration can be written in the general matrix form [3]:

    J1(βc, βoc) · R(θ) · ξ + J2 · ϕ = 0 (2.1)

    C1(βc, βoc) · R(θ) · ξ + C2 · βoc = 0 (2.2)

where J1(βc, βoc) is a matrix composed of the constraints orthogonal to the wheel planes and J2 is a diagonal matrix composed of the radii of the wheels. C1(βc, βoc) is a matrix composed of the constraints along the wheel planes and C2 is a matrix with the information of the distances between the centers of the off-centered orientable wheels and their axles of rotation. βc and βoc are the orientations of the centered orientable wheels and off-centered orientable wheels, respectively. ξ are the desired velocities in world coordinates and ϕ are the velocities to be applied at each wheel. R(θ) is a rotation matrix that translates the desired velocities from world coordinates to velocities with respect to the robot frame, depending on the orientation of the robot, θ.

Given a set of desired velocities in world coordinates, ξ, the desired velocities to be applied at each wheel and the orientations of the centered and off-centered orientable wheels can be calculated by solving the equations for ϕ, βc and βoc, respectively.

    2.2 Biological Inspiration

Although wheeled robots offer some advantages over other types of locomotion, they suffer from a major drawback shared by other wheeled vehicles: they need roads or even surfaces to move. However, most scenarios where future robots are expected to be applied do not provide these conditions, for example: search and rescue situations where robots might have to crawl over rubble, domestic robots that might have to climb stairs, planetary exploration where robots might have to move over different surfaces such as rocks and sand, aerial surveillance, or robots required to perform tasks underwater.

For these situations, robotic engineers take inspiration from animal locomotion that can be emulated in robotics. This led to the development of walking robots with different numbers of legs, as well as swimming, flying and crawling robots.

    2.2.1 Legged Robots

Although a seemingly easy task, walking on legs is a fairly complex process. Animals need a learning period to do it properly. Hence, it is no surprise that no robot can walk with the same dexterity as animals, though this subject has been heavily researched in the past decades.

To effectively make a robot walk, two issues must be considered: stability control, to guarantee a vertical posture, and motion control, to enable forward motion [6]. Legged robots must be able to stand vertically when stopped, known as static stability, and while moving, known as dynamic stability.

Full static stability can be achieved with three legs, as long as the vertical projection of the center-of-gravity vector lies between the points of contact of the legs with the ground.

Static stability is obviously more difficult with two legs, since it is only possible to guarantee lateral stability when the center-of-gravity vector lies between the feet. Backward-forward stability is ensured by active control systems or large feet, which increase the area in contact with the ground.
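The three-leg criterion above reduces to a point-in-triangle test on the ground plane: project the centre of gravity vertically and check whether it falls inside the support triangle formed by the three contact points. A minimal sketch, with purely illustrative contact coordinates:

```python
def statically_stable(cog, contacts):
    """Return True if the 2-D ground projection of the centre of
    gravity (cog) lies inside (or on the boundary of) the support
    triangle formed by the three foot contact points.

    Uses cross-product signs: the point is inside when it lies on the
    same side of all three edges of the triangle.
    """
    a, b, c = contacts

    def cross(o, u, p):
        # z-component of (u - o) x (p - o)
        return (u[0] - o[0]) * (p[1] - o[1]) - (u[1] - o[1]) * (p[0] - o[0])

    s = [cross(a, b, cog), cross(b, c, cog), cross(c, a, cog)]
    return not (min(s) < 0 < max(s))

feet = [(0.0, 0.0), (0.4, 0.0), (0.2, 0.35)]
stable = statically_stable((0.2, 0.1), feet)   # mass over the tripod
tipping = statically_stable((0.6, 0.3), feet)  # mass outside the support area
```

The dynamic case is harder precisely because this purely geometric test no longer suffices once momentum and inertial forces enter the picture.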

To obtain dynamic stability during motion, the support forces of the legs, momentum and inertial forces must be considered [6].

    Three examples of two-legged humanoid robots are shown in Fig 2.3.

Figure 2.3: Three examples of commercially available humanoid robots. a) Honda's Asimo, adapted from [7]. b) Bioloid adaptation of Nimbro's humanoid, adapted from [8]. c) Aldebaran's NAO robot, adapted from [9].

    2.2.2 Other Biologically Inspired Types of Locomotion

Animals developed other types of locomotion that robotic engineers have been trying to mimic with their robots.

One example is the snake-like robots that move their limbless bodies by crawling, using segments of the robot body, as shown in Fig 2.4. In fact, the segments that compose the robot body enable a large number of degrees of freedom, which characterizes these robots as hyper-redundant. Although the segments have limited rotation, their combined motion allows large movements of the body. Snake robots achieve forward motion by supplying internal torques to the segments, as snakes do with muscle contraction. Although the earlier snake robots were designed to move in two dimensions, robots have been developed that can move some body segments vertically, adapting to ground variations.

Figure 2.4: Example of a crawling robot, adapted from [10].

Another type of locomotion inspiring robotic engineers is underwater locomotion, shown in Fig 2.5. Unlike ground robots, underwater robots must move in 3D environments. Water also poses additional problems: water density and currents, for instance, introduce non-linearities which hinder robot control. Besides being waterproof, underwater robots must also be robust enough to sustain water pressure when diving to great depths. The robot dynamics may also be affected if the robot is equipped with manipulators, which may extend or contract and carry variable payloads.

Some tasks performed by underwater robots, such as ocean exploration or monitoring, may require a robot to stay underwater for long periods of time; thus battery and power consumption are a crucial point when designing such robots, especially the type of locomotion, which can be very power-consuming.

Many underwater robots are equipped with thrusters to move. However, these actuators are not very efficient in terms of power consumption, and therefore robotic engineers are turning to fish for inspiration when designing underwater robots. Fish are exquisitely adapted to the underwater environment, and motion is produced by the water displacement resulting from fin movement, especially the tail fin movement.

Other underwater animals, such as lobsters, crabs and water snakes, are also sources of inspiration for robot designers [6], as shown in Fig 2.6.

Finally, robotic engineers have also been creating flying robots. These robots present control issues similar to underwater robots: they also have to move in a 3D environment, and variations in atmospheric density at different altitudes render PIDs and linear control nearly impossible [6]. Flying robots must also overcome gravity and drag to remain airborne.

Figure 2.5: Example of a realist fish robot, adapted from [11].

Figure 2.6: Example of an underwater biologically inspired robot, RoboLobster [12].

Although most flying robots are inspired by airplanes and helicopters and use propellers to achieve motion, there have been some recent developments in bird-like robots. Like other flying animals, these robots move by flapping their wings. These robots must be light so that the ratio of wing area to body mass is large enough to allow flight [6]. However, robots with these specifications suffer from weather conditions such as wind gusts, which makes full autonomy quite difficult. An example of a biologically inspired flying robot is depicted in Fig 2.7.

Figure 2.7: Example of a flying robot, adapted from [13].

    2.3 Common Wheel Configurations in MSL

The RoboCup Middle Size League is composed entirely of wheeled robots. In this section, the locomotion types used by most teams are presented.

Given the nature of a soccer game, omni-directional motion offers several advantages, especially being able to move sideways while facing the ball. This can make a pass reception easier than with any other type of locomotion presented. Therefore, it is only natural that almost all teams choose to use this type of locomotion.

All teams achieve omni-directional motion through the use of swedish wheels. As mentioned in 2.1.1, at least three swedish wheels must be used. In fact, this is the configuration most teams use, including CAMBADA. However, in recent years a trend has arisen where teams like MRL [14], 1. RFC Stuttgart [15] and NuBot [16] use an additional swedish wheel. This configuration appears to take advantage of the fact that, although omni-directional, the robots move forward most of the time during a soccer game. In this situation, using three wheels, the wheel at the back of the robot is passive and adds nothing to the forward movement. However, using four swedish wheels, the number of active wheels doubles, as all wheels contribute to the forward movement, provided that the configuration has no swedish wheel orthogonal to the forward motion.
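The trade-off described above can be illustrated numerically with a rough sketch of tangential swedish-wheel kinematics; the wheel angles, radius and centre distance below are assumed values for illustration, not measurements from any of the teams mentioned.

```python
import math

def wheel_speeds(wheel_angles_deg, vx, vy, omega=0.0, L=0.2, r=0.05):
    """Angular velocity commanded to each tangentially mounted swedish
    wheel, for body-frame velocities (vx, vy, omega)."""
    speeds = []
    for deg in wheel_angles_deg:
        a = math.radians(deg)
        speeds.append((-math.sin(a) * vx + math.cos(a) * vy + L * omega) / r)
    return speeds

# Forward motion along +x. With a wheel mounted at 180 deg (directly
# behind the robot), its drive direction is orthogonal to the motion,
# so it receives a zero command: only two of three wheels are active.
three = wheel_speeds((60.0, 180.0, 300.0), vx=1.0, vy=0.0)

# Diagonal four-wheel layout: every wheel is driven during forward motion.
four = wheel_speeds((45.0, 135.0, 225.0, 315.0), vx=1.0, vy=0.0)
```
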

An exception to the use of omni-directional motion is the ISePorto team, which uses two differentially driven fixed wheels and two castor wheels. The team chooses this type of motion since it is easier to apply to other land robots, while being aware of its inefficiency on the soccer field [17].


2.4 Summary

This chapter presented the state of the art in robot locomotion. The types of wheels and the locomotion types used on wheeled robots were described. An overview of biologically inspired locomotion such as crawling, swimming and flying was presented. Finally, the wheel configurations used in the MSL and their advantages were discussed.


Chapter 3

    The CAMBADA Team

The CAMBADA team is composed of six robots designed specifically to play soccer. The first version of the robots was completed in 2004 and since then there has been a steady evolution in the robots' structure to face the challenges that arose. This chapter describes the state of the CAMBADA robots at the beginning of this thesis work.

    3.1 CAMBADA Robots

Competing in the RoboCup's Middle Size League, the CAMBADA robots must not exceed the maximum dimensions of 50 cm × 50 cm × 80 cm. The rules, however, don't impose a particular shape, leaving that decision to each team. CAMBADA robots have a conical shape with a base radius of 24 cm and are 71 cm high, as can be seen in Fig 3.1.

    Figure 3.1: A CAMBADA robot.


However, CAMBADA robots are the exception rather than the rule, as most of the MSL teams choose to build their robots on a quadrangular or triangular base [18][15][19][20][16][21][22][23][24], as shown in Fig 3.2.


    Figure 3.2: Different robots shapes in MSL. a) Eigen Robot. b) TKU Robot. c) TechU-nited Robot. d) Tribots Robots.

One of the CAMBADA robots, the goalkeeper, has metal appendices that emulate a goalkeeper's arms, covering more area while still complying with the maximum dimensions. Some teams in the MSL choose to build mobile frames to improve the goalkeeper's performance; the CAMBADA goalkeeper frame is fixed to the robot.

    3.2 Architecture

    3.2.1 Hardware

The robot's physical structure is built on a modular approach [25] with four main modules or layers [26].


The top layer has the robot's vision system. This system comprises a camera pointing upwards at a hyperbolic mirror, allowing the robot to have a 360-degree view, and a frontal camera to allow the detection of the ball at greater distances [27]. Also in the top layer, an electronic compass is used for localization. For more detail on the vision sub-system please refer to [28][29]. The top layer is highlighted in Fig 3.3.


Figure 3.3: The CAMBADA top layer. a) The vision system. b) The electronic compass.

The middle layer, shown in Fig 3.4, houses the processing unit, currently a 12” laptop, which collects the data from the sensors and computes the commands provided to the actuators. The laptop executes the vision software along with all high-level and decision software and can be seen as the brain of the robot. Given the positional advantage, a ball retention device is placed on this layer.

Finally, the lowest layer is composed of the robot's motion system and kicking device. The robots move with the aid of a set of three omni-wheels, placed at the periphery of the robot at angles 120 degrees apart, powered by three 24 V / 150 W Maxon motors. On this layer there is an electromagnetic kicking device. Also, for ball handling purposes, a barrier sensor is installed underneath the robot's base, which signals the higher level that the ball is under control. The different components are shown in Fig 3.5.

Beneath the middle layer, a network of micro-controllers is placed to control the low-level sensing/actuation system, the nervous system of the robot. In order to comply with real-time constraints, the FTT-CAN1 protocol is used [30]. The sensing and actuation system is highly distributed, meaning that the nodes in the network control different functions of the robot, namely [31]:

1 Flexible Time-Triggered communication over CAN



    Figure 3.4: The middle layer. a) The processing unit. b) The ball retention system.

    Motion which controls the desired speed to be applied at each motor;

    Odometry that calculates the displacement of the robot;

Kick that applies the desired power to the kicking device and the ball retention system and transmits the state of the barrier sensor;

    Compass that relays the electronic compass data;

System Monitor that forwards the robot's battery information as well as the state of all the nodes.

The different nodes in the low-level sensing/actuation system and its physical implementation can be seen in Fig 3.6.

    3.2.2 Software

Following the CAMBADA hardware approach, the software is also distributed. Therefore, five different processes are executed concurrently. All the processes run on the robot's processing unit in Linux.

All processes communicate by means of an RTDB2, which is physically implemented in shared memory. The RTDB is a data structure which contains the essential state variables to control the robot. It is divided into two regions, the local and shared regions. The local section holds the data local to the processes and is not broadcast to the

2 Real-Time DataBase



Figure 3.5: The lower layer. a) The motion system. b) Detail of an omni-wheel. c) The barrier sensor. d) The kicking system.

other robots. The shared section is further divided to contain the data of the world state as perceived by the team. One of the areas is written by the robot itself and broadcast to the rest of the team, while the other remaining areas store the information received from the other team-mates. The RTDB implementation guarantees the temporal validity of the data, with small tolerances [32].

The software architecture is depicted in Fig 3.7. The processes composing the CAMBADA software are:

Vision which is responsible for acquiring the visual data from the cameras in the vision system, processing it and transmitting the relevant information to the CAMBADA agent. The transmitted data comprises the position of the ball, the lines detected for localization purposes and the obstacle positions. Given the well-structured environment the robots play in, all this data is currently acquired by color segmentation [28][27].

Agent is the process that integrates the sensor information and constructs the robot's worldstate. The agent then decides the command to be applied, based on the perception of the worldstate, according to a pre-defined strategy [33].



Figure 3.6: The micro-controller network. a) The hardware architecture, adapted from [25]. b) The physical structure.

Comm that handles the inter-robot communication, receiving the information shared by the team-mates and transmitting the data from the shared section of the RTDB to the team-mates [34][35].

HWcomm, or hardware communication process, is responsible for transmitting the data to and from the low-level sensing and actuation system.

Monitor that checks the state of the remaining processes, relaunching them in case of abnormal termination.

Given the real-time constraints, all process scheduling is handled by a library specifically developed for the task, pman (process manager).


Figure 3.7: The software architecture, adapted from [25].

    3.3 Robot Motion Control Loop

Part of this project's work focuses on robot motion control. Since the CAMBADA robots are a discrete-time closed-loop controlled system, this section presents an overview of a control cycle.

Firstly, the robot control cycle is imposed by the camera's frame-rate, which means that when a frame is captured a new control cycle begins. The remainder of the robot control must be done before a new frame is available. The CAMBADA robots are therefore a good example of a real-time system, since careful task scheduling must be achieved.

After a frame is acquired, processed and its relevant data injected in the RTDB, the vision process signals the agent process to begin executing.

The agent starts by integrating the recently acquired sensorial data, such as the vision information, the odometry displacement, the electronic compass data and the barrier state data. After all data is integrated, the worldstate is updated.

Then a decision module assigns a given role to the robot so that it will execute the team's strategy. Consequently, the role chooses an adequate behaviour that performs a specific task.

In the behaviour, a command is calculated for each of the robot's actuators. In the robot motion control scope, setpoints for each of the robot's wheels must be provided at every control cycle.

This is firstly done by translating the desired pose from field coordinates to robot coordinates, which gives an offset relative to the robot's current pose. From this offset, error measurements are determined that are then fed to PID controllers to determine the linear and angular velocities to be applied at the robot's frame. The command data is then injected in the RTDB.
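The first step, translating a desired pose from field coordinates into the robot frame, can be sketched as follows (a minimal illustration with hypothetical names and tuple layouts, not the actual CAMBADA code):

```python
import math

def field_to_robot(pose, target):
    """Translate a target pose from field coordinates into the robot frame.
    pose and target are (x, y, theta) tuples; names are illustrative."""
    dx, dy = target[0] - pose[0], target[1] - pose[1]
    cos_t, sin_t = math.cos(-pose[2]), math.sin(-pose[2])
    # Rotate the field-frame offset by -theta to express it in robot coordinates.
    ex = dx * cos_t - dy * sin_t
    ey = dx * sin_t + dy * cos_t
    # Angular error wrapped to [-pi, pi].
    etheta = math.atan2(math.sin(target[2] - pose[2]),
                        math.cos(target[2] - pose[2]))
    return ex, ey, etheta
```

Each component of the returned offset would then feed one of the PID controllers mentioned above.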


Finally, the low-level communication process reads the command data from the RTDB and produces the respective messages to be sent to the low-level sensing and actuation system.

Historically, a micro-controller in the low-level sensing and actuation system was responsible for the specific task of mapping the velocities applied at the robot's frame to the wheels' setpoints. However, the computational cost was considered too heavy for a PIC, and this task is currently executed in a Holonomic Motion module implemented in the HWcomm process.

The HWcomm process also collects the recent low-level sensor readings from the low-level system and inserts them in the RTDB for the next control cycle.

    The overall motion control cycle is depicted in Fig 3.8.

    Figure 3.8: Flow-chart of the robot motion control.

    3.4 Summary

At the end of this chapter, one should be familiar with the CAMBADA robots' structure and architecture. The different hardware components were presented, namely the sensors, controllers and actuators. The software architecture was also discussed, along with an overview of a control cycle.


Chapter 4

    Holonomic Control

In this chapter, an overview of CAMBADA motion is presented and holonomic-control-related issues are discussed. The current solutions are presented and compared with the developed contributions.

    4.1 Restrictions

As mentioned before, the CAMBADA robots are mobile wheeled robots. They move with the aid of 3 omni-wheels placed at the periphery of the robot at angles 120 degrees apart, each being powered by a 24 V / 150 W Maxon motor. Such a configuration enables a robot to move in any direction while facing any orientation. Since all degrees of freedom are controllable, the robot's motion is holonomic.

Ideally, the robot's linear and angular velocities would be mutually independent. This means that a robot could move in a straight line while rotating about itself. In practice, such control is negatively affected by a diverse array of factors that, if ignored, result in undesired robot movement.

While rotating and moving at the same time, the robot deviates considerably from the ideal straight line. This deviation is consistently to the right when rotating clockwise and to the left when rotating counter-clockwise, as shown in Fig 4.1.

An initial solution to minimize the deviation consists in rotating the robot at the initial position and only then applying the linear velocity. This solution is not ideal since it doesn't take advantage of holonomic motion. Also, CAMBADA robots are among the slowest-moving robots in the MSL [36][37][38][39], so this solution only emphasizes the speed difference.

The physical restrictions affecting holonomic motion control presented in the following sections are: discrete time control, maximum acceleration, actuator saturation, and control latency. Given the difficulty of quantifying them, wheel slippage and slacks are not accounted for in the solution of the problem at this stage.

For clarification on the remainder of the chapter, a command at this level is a tuple (vx, vy, ω) which represents the desired velocities with respect to the robot frame.


  • −2 −1.5 −1 −0.5 0 0.5−1

    0

    1

    2

    3

    4

    5

    XX world coordinate(m)

    YY

    wor

    ld c

    oord

    inat

    e(m

    )

    Figure 4.1: Deviation in the robot movement when it rotates and moves simultaneously.The dashed line is the real robot path and the solid line would be the ideal straight path.The robot is rotating clockwise and moving towards (0,0).

    4.1.1 Discrete Time Control

Firstly, no digital closed-loop control can update its output on a continuous basis, since these systems are discrete in nature. Also, no real system can read its sensor measurements, process them and apply commands instantaneously. Therefore, discrete time control happens sequentially over a period of time defined as the control cycle. This means that a command applied at any given instant remains active until the command of the following cycle is applied.

This limitation prevents some robot movements, as is the case of moving in a straight line while rotating about its center, since this movement would require continuous variation of the wheels' speed setpoints. Using constant setpoints during the control cycle, the robot will describe an arc instead of moving along a straight line. The solution is to make the initial and final positions of the arc described in each control cycle lie on the intended straight line. This can be obtained by applying an angle offset to the linear velocity vector. Besides this angular offset, the speed needs to be increased, since the length of an arc is longer than the length of its corresponding chord.

This problem can be addressed mathematically using circle geometry and the chord-arc relationship, as represented in Fig 4.2. The direction deviation is given by the angle α between the chord and the tangent to the arc at the chord's initial position; this shall be half the integral of the angular velocity. The chord-arc relationship states that, given a chord and its central angle, the length of the arc connecting the initial and final positions of the chord is obtained by multiplying the chord's length by

ArcLength / ChordLength = (2αr) / (2 sin(α) r) = α / sin(α)

Since the angle used for correcting the direction, α, is an inscribed angle, the arc's central angle shall be its double. So, as mentioned before, α = (ω · Δt) / 2.

Figure 4.2: While rotating and moving simultaneously the robot will describe a series of arcs along the straight line.

In the robot, at every cycle, the length of the chord (the straight-line segment) is calculated by integrating the desired linear velocity over one cycle time. Then, if the desired rotational velocity is not zero, the length of the arc is calculated as mentioned above. The final velocity to be applied is obtained by

v⃗_arc = (α / sin(α)) × v⃗_line,   where |v⃗_line| = √(vx² + vy²).
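As a sketch, the correction applies the α offset and the α/sin(α) gain to the commanded velocity (illustrative names, assuming the angular offset is implemented as a rotation of the velocity vector by α):

```python
import math

def arc_correction(vx, vy, omega, dt):
    """Chord-arc compensation sketch: rotate the linear velocity by alpha and
    stretch it by the arc/chord ratio so the arc described during one control
    cycle ends on the intended straight line."""
    alpha = omega * dt / 2.0            # half the angle rotated during the cycle
    if abs(alpha) < 1e-9:               # no rotation: nothing to correct
        return vx, vy
    gain = alpha / math.sin(alpha)      # arc length / chord length
    cos_a, sin_a = math.cos(alpha), math.sin(alpha)
    return (gain * (vx * cos_a - vy * sin_a),
            gain * (vx * sin_a + vy * cos_a))
```

With ω = 0 the command passes through unchanged; the gain grows with the angle rotated per cycle.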

    4.1.2 Acceleration

Another restriction affecting omni-directional motion is the allowed values for the acceleration. In the RoboCup Small Size League some teams limit their robots' acceleration by wheel traction only [40]. Given the Middle Size League robots' dimensions, this would result in very dangerous behavior, because if a large velocity variation is allowed in a single cycle there is a possibility of the robot tipping over, severely damaging its components. Given the electric nature of the motors used, such unlimited acceleration can also cause large current peaks, reducing the hardware lifetime and battery charge.

Therefore, the CAMBADA robots have an acceleration limiter at the software level. This means that at each cycle the robot's velocity varies only up to a certain amount around the previous cycle's velocity. The velocity then changes smoothly over time. A geometric representation of the limiting algorithm is given in Fig 4.3.

The CAMBADA robots also have an angular acceleration filter, mainly to avoid current peaks, since a large variation of angular velocity would not make the robot tip over.


Figure 4.3: Acceleration limiter. The circle represents all the velocities that can be achieved given the maximum acceleration.
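The limiter described above clamps the per-cycle velocity change to a circle of radius a_max·Δt around the current velocity. A minimal sketch, with hypothetical names rather than the actual CAMBADA implementation:

```python
import math

def limit_acceleration(v_cur, v_des, a_max, dt):
    """Clamp the velocity change per cycle to the circle of radius a_max*dt
    around the current velocity (v_cur, v_des are (vx, vy) tuples)."""
    dvx, dvy = v_des[0] - v_cur[0], v_des[1] - v_cur[1]
    dv = math.hypot(dvx, dvy)
    max_dv = a_max * dt
    if dv <= max_dv:
        return v_des                      # reachable within this cycle
    scale = max_dv / dv                   # step toward v_des as far as allowed
    return (v_cur[0] + dvx * scale, v_cur[1] + dvy * scale)
```

Note that when the clamp is active, the output generally matches neither the desired direction nor the desired norm, which motivates the new filter proposed later in this chapter.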

    4.1.3 Actuator Saturation

Another factor adversely affecting an omni-directional robot's control is the saturation of its different actuators. Of all the actuators of the CAMBADA robots, this project's work focuses only on the motors controlling the omni-wheels. The saturation of the motors is an indirect indication of the maximum velocity a robot can achieve. In the CAMBADA case, simulation results point to a maximum speed of 2.2 m/s. This physical factor must not be overlooked. It affects not only the proportion between linear and angular velocities but also the proportion between the linear velocity components (vx, vy), diverting the robot from its path in case of actuator saturation.

Assuming a saturation value C and considering the 3 omni-wheels, the setpoints to apply at each wheel are calculated from the velocities vx, vy and va, the velocities along the robot's axes and the angular velocity, respectively. Assuming that all setpoints exceed C, if no measure is taken the robot will rotate at maximum speed instead of moving according to vx, vy and va. The solution involves determining a correction factor to be multiplied by all the velocities to be applied [41].

This correction factor is C / |S|, where S is the setpoint with the largest magnitude.

This ensures that the saturated setpoint is assigned the maximum possible input while the others keep the proportions they had before the saturation limitation. The resulting movement will be slower than desired, but the direction of movement is maintained while moving as fast as possible.
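The scaling step above can be sketched as follows (illustrative names; C is the saturation value):

```python
def limit_setpoints(setpoints, c_max):
    """Scale all wheel setpoints by C/|S| when the largest magnitude S exceeds
    the saturation value C, preserving the direction of motion."""
    s = max(abs(sp) for sp in setpoints)
    if s <= c_max:
        return list(setpoints)            # nothing saturates
    factor = c_max / s                    # C / |S|
    return [sp * factor for sp in setpoints]
```

All setpoints are multiplied by the same factor, so their mutual ratios, and hence the direction of movement, are unchanged.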


4.1.4 Control Latency

The final physical restriction mentioned in this thesis is the system delay. In the RoboCup Middle Size League some teams have already addressed this issue, as is the case of the Tribots team [42]. The system delay results from the sum of all the small delays in the control loop and greatly affects the robot control. A delay between the sensory input and the actuator output means a command calculated based on a worldstate A will be executed on a worldstate B that may or may not be the same as A. Considering the highly dynamic nature of a robotic soccer game, the worldstates will most likely not be the same. The problem is depicted in Fig 4.4.

Figure 4.4: Effect of control latency over cycle time. A command issued based on information of a given cycle will only take effect some cycles later.

The problem worsens as the delay increases, not only because the error between the worldstates increases but also because, as the delay exceeds the cycle time, more and more commands will affect the worldstate before the current command starts to produce effects. This fact explains the strong deviation while moving and rotating simultaneously shown in Fig 4.1. The robot is stopped, and will try to move and rotate. Since the command calculated in the initial cycle will only reach the actuators some cycles later, all the control cycles in between will calculate the same command, since the robot will not move until the first command arrives. Ideally, after the robot moves in response to the first command, a new one should be computed considering the robot's new pose.

    4.2 Predictive Control

Reflecting other teams' solutions [40], a prediction module was implemented to determine the pose of a robot at the moment when the current command will reach the actuators, given the sensed pose and a buffer storing the commands that have already been issued but have yet to be executed due to the system delay. Thus, all commands are calculated as if there were no delay.

The commands considered in the buffer of the prediction module do not reflect the desired velocities, but the actual input commands at the motors, after the saturation and acceleration limiters are taken into account.

Considering the CAMBADA software structure presented in Chapter 3, the desired commands are calculated in the agent process and transmitted, through the RTDB, to the low-level communication handler process, or HWcomm, where they are processed through


Figure 4.5: Predicting the robot position at the end of the delay considering previous commands.

the saturation and acceleration limiters before being mapped to setpoints. In order to implement the prediction module in the agent process, where the desired commands are calculated accounting for the predicted robot pose, the output commands of HWcomm must be sent back to the agent process, again through the RTDB, as shown in Fig 4.6.

    Figure 4.6: Changes in the information flow in the robot motion control scope.

With the information available in the RTDB, the Integrator module was altered to read the last processed command from the RTDB and store it in a sliding window of size ⌈t_delay / t_cycle⌉.

Predicting the robot pose at the end of the control latency is not sufficient. For coherency, all of the world state as perceived by the robot should be predicted, such as the ball, team-mate and opponent positions. However, such a goal is not feasible, since a CAMBADA robot doesn't visually distinguish whether an obstacle is friend or foe. While this would be possible for team-mates, since their positions are shared through wireless communication, that process is not synchronized with the robot control cycle, running in the best case every 0.100 seconds, which makes the prediction inaccurate. Also, at the moment no tracking is done to determine an opponent's path, so no velocity estimation is available. Therefore, predicting the opponents' positions at the end of the control latency is impossible,


since opponents are considered stationary for obstacle avoidance purposes. On the other hand, the ball's position and velocity are estimated and therefore a position prediction is possible.

These predictions are executed in the predict method at the end of the Integrator module execution, which updates the robot pose and ball position in the constructed worldstate that will then support all the high-level decisions taken by the agent.

    4.2.1 Robot Model

As each command is active for an entire cycle and given the electric motors used, purely uniform movement is assumed during each cycle. The predicted pose for the robot, p̂_pred, is then given by

p̂_pred = p_sensed + rem · c_{n−1} + Σ c_i · Δt,   i = sensed−n .. sensed−1    (4.1)

where

• p_sensed is the pose determined by self-localization,

• c_i is the command issued in the i-th control cycle,

• n = ⌊t_delay / t_cycle⌋,

• rem = t_delay − n · Δt.
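Eq. 4.1 can be sketched in code as follows (a simplified illustration with hypothetical names; here the last buffered command is assumed to run for the extra rem, and each robot-frame command is rotated into world coordinates as the predicted heading evolves):

```python
import math

def predict_pose(pose, commands, t_delay, t_cycle):
    """Advance the sensed pose through the commands still in flight,
    assuming uniform motion per cycle (sketch of Eq. 4.1).
    commands: last n = floor(t_delay/t_cycle) tuples (vx, vy, omega),
    in the robot frame, oldest first."""
    x, y, theta = pose
    n = int(t_delay // t_cycle)
    rem = t_delay - n * t_cycle
    for k, (vx, vy, omega) in enumerate(commands[-n:] if n else []):
        dt = t_cycle + (rem if k == n - 1 else 0.0)  # last command runs rem longer
        # Rotate the robot-frame velocity into world coordinates, then integrate.
        x += (vx * math.cos(theta) - vy * math.sin(theta)) * dt
        y += (vx * math.sin(theta) + vy * math.cos(theta)) * dt
        theta += omega * dt
    return x, y, theta
```

The command buffer plays the role of the sliding window described above.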

    4.2.2 Ball Model

Since at the moment the robots don't detect whether the ball is accelerating or slowing down, it is assumed that the ball moves with uniform motion. Thus, the predicted ball position after the delay, p_b_pred, is

p_b_pred = p_b_sensed + v_b · Δt    (4.2)

where p_b_sensed is the position of the ball determined by the robot sensors, v_b is the ball velocity and Δt is the control latency.

However, this could cause a situation where the robot would predict the ball inside its own body frame. Therefore, the ball prediction model must account for collisions with the robot.

Instead of calculating when two circular bodies, the ball and the robot, collide, for simplicity the ball is treated as a particle and the ball radius, rB, is added to the robot's radius, rR, since the robot has a circular base.

Then a collision point is calculated when a line, the path described by the ball center, intersects a moving circle, the robot, with a radius equal to the robot radius plus the ball


radius. The path described by the moving circle is the resulting predicted path from the commands that have not yet arrived at the motors.

The collision time, t, is calculated by

(x − xR)² + (y − yR)² = (rR + rB)²    (4.3)

x = xB + vBx · t    (4.4)

y = yB + vBy · t    (4.5)

where xR and yR are the world coordinates of the center of the robot frame, given by Eq. 4.1.

If the collision time is lower than the control delay, the ball will collide with the robot before all sent commands are executed, and the collision point, p_collision, is obtained by substituting t in Eqs. 4.4 and 4.5.
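Substituting Eqs. 4.4 and 4.5 into Eq. 4.3 yields a quadratic in t. A minimal sketch (illustrative names, with the robot reduced to uniform motion for simplicity, rather than the full predicted path used by the thesis):

```python
import math

def collision_time(ball_pos, ball_vel, robot_pos, robot_vel, radius):
    """Smallest non-negative root of the quadratic obtained by substituting
    the ball's linear path into the circle equation; None if no collision."""
    # Relative coordinates: the robot circle becomes fixed at the origin.
    dx = ball_pos[0] - robot_pos[0]
    dy = ball_pos[1] - robot_pos[1]
    vx = ball_vel[0] - robot_vel[0]
    vy = ball_vel[1] - robot_vel[1]
    a = vx * vx + vy * vy
    if a == 0.0:
        return None                      # no relative motion, no collision
    b = 2.0 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # the paths never meet
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # first crossing of the circle
    return t if t >= 0.0 else None
```

radius here is rR + rB, matching the particle simplification above.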

However, since the robot base is not a full circle, the robot's front, where the ball is engaged, must also be accounted for. The robot model is depicted in Fig 4.7. The real robot dimensions are represented by the black line and the dimensions considered for collisions with the ball are represented by the blue line. If the ball center is on the blue line, then an edge of the ball has collided with the robot's frame.

    Figure 4.7: The considered robot dimensions assuming the ball as a particle.

To calculate the direction of the ball velocity after the collision, it is assumed that the ball is reflected, like light on a mirror, after hitting the robot's body. Despite not being a realistic model, the obtained results show that its simplicity outweighs its inaccuracy.

Thus the ball position and velocity after the collision, p_B_pred and v_B_pred respectively, shall be

p_B_pred = p_collision + v_B_pred · Δt    (4.6)

v_B_pred = (v_B − v_R) − 2 · ((v_B − v_R) · N / ‖N‖²) · N + v_R    (4.7)

where Δt is the difference between the control delay and the collision time and N is the normal vector to the robot's surface at the collision point. Note that the reflected velocity is the relative velocity between the ball and the robot, which is then added back to the robot velocity. This ensures that when the robot is chasing the ball, the reflected velocity resulting from a collision will not point towards the robot body. A reflection scheme is depicted in Fig 4.8.

    Figure 4.8: The reflection of the ball velocity when it collides with the robot.
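Eq. 4.7 can be sketched as follows (illustrative names; velocities and the normal are (x, y) tuples):

```python
def reflect_ball_velocity(v_ball, v_robot, normal):
    """Mirror the ball-robot relative velocity about the surface normal at
    the collision point, then add the robot velocity back (sketch of Eq. 4.7)."""
    rvx, rvy = v_ball[0] - v_robot[0], v_ball[1] - v_robot[1]
    nx, ny = normal
    n2 = nx * nx + ny * ny
    dot = rvx * nx + rvy * ny
    # v' = v - 2 (v . N) N / ||N||^2, applied to the relative velocity.
    return (rvx - 2.0 * dot * nx / n2 + v_robot[0],
            rvy - 2.0 * dot * ny / n2 + v_robot[1])
```

For a ball rolling head-on into a stationary robot, the reflected velocity simply reverses sign along the normal.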

Despite the fact that a ball collision with the robot's front could result in the robot engaging the ball due to the ball retention system, the ball model makes no assumptions in this case, assuming there will always be a collision and therefore a reflection. On the other hand, if the ball is detected as engaged, no ball prediction is done. At the end of the robot position prediction, the ball is “placed” at the robot's front, maintaining its relative position to the robot's center.

However, a problem remains. The current estimation of the ball velocity is based on a linear regression with a dynamic number of samples. The algorithm starts with a minimum of three samples and grows up to a sliding window with a maximum of the ten most recent samples. This algorithm detects a collision by predicting the ball position considering the last known position and the last estimated velocity, and then comparing it with the detected position of the ball. If the prediction fails by a large margin three consecutive times, the number of samples is reset to the minimum of three [43].

This means the described method only detects a collision three cycles after the collision occurred. Ideally, in the cycle after the collision occurred a new velocity should be computed. However, with the current implementation a velocity with a direction similar to the velocity before the collision is returned.

This would mean that the ball model of the predictive control would detect a collision, since the estimated velocity would still point towards the robot's frame.


Thus, a method is proposed that allows a faster convergence of the ball velocity when the robot detects it will collide with the ball. In the predictive control algorithm, if a collision is detected then the reflected velocity is estimated as mentioned earlier. However, if the collision will happen in less than a cycle period, then the number of samples in the linear regression is reset to three. These three samples are then moved to form a line with the direction of the reflected velocity. This will appear as if the ball came from inside the robot frame. Therefore, the velocity returned in the following cycle will point away from the robot's frame. And since the linear regression has few samples, reflecting the low degree of confidence in the velocity, the estimate will rapidly converge to the real ball velocity, provided by the samples of the following cycles.

    This method is explained in Fig 4.9.

    Figure 4.9: The method for faster ball velocity convergence after a collision.
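The reseeding step can be sketched as follows (hypothetical names; samples are (x, y) ball positions, newest last, spaced one cycle apart along the reflected velocity so the regression immediately points away from the robot):

```python
def reseed_samples(collision_point, v_reflected, t_cycle, n=3):
    """Replace the regression window with n synthetic samples lying on a line
    along the reflected velocity, ending at the collision point, as if the
    ball had come from inside the robot."""
    return [(collision_point[0] - v_reflected[0] * k * t_cycle,
             collision_point[1] - v_reflected[1] * k * t_cycle)
            for k in range(n - 1, -1, -1)]
```

A regression fitted to these samples yields the reflected velocity until real detections replace them.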

    4.3 New Acceleration Filter

Surprisingly, the resulting path using predictive control was not the expected one, as the robot still doesn't move in a straight line. This is an undesired result of the current implementation of the acceleration limiter. As can be seen in Fig 4.3, if the desired velocity diverges too much from the current velocity, the output of the limiter will be a velocity that is neither in the desired orientation nor of the desired norm. A solution to this problem is to increase the maximum acceleration of the robot, increasing the allowed velocity variation every control cycle. However, simulation results showed that, to compensate the large divergences between the velocities, the acceleration would have to be increased to values where the robot's motion stability would be compromised. These conditions, when tested, revealed another issue. While such a high maximum acceleration allows for great velocity variations from cycle to cycle, the same high variations produce considerable wheel slippage. Since this factor is not accounted for in the model, the resulting prediction


will be very inaccurate. Therefore, the robot will not follow the anticipated path, as shown by Fig 4.10.

Figure 4.10: Robot path with different values of maximum acceleration. The dashed line represents the real robot path while the solid line represents the simulated robot path. a) Maximum acceleration 3 m/s². b) Maximum acceleration 5 m/s². c) Maximum acceleration 7 m/s². d) Maximum acceleration 10 m/s². Axes: XX and YY world coordinates (m).

In this section, a new algorithm to update the velocity while complying with a maximum acceleration is suggested, which prioritizes the velocity's direction over its norm. The solution involves reducing the robot speed in order to successfully rotate, similarly to what we humans do when driving. In order to reduce the speed by the minimum necessary, the following situations are considered:

Firstly, the closest point from the current velocity to the desired velocity vector is calculated. The segment joining this point to the current velocity is always perpendicular to the desired velocity. So the distance, d, between the current velocity and the closest point is

d = sin(Θ) × |v_current|

where Θ is the angle difference between the desired and current velocities. Then, if this distance is larger than the maximum velocity variation allowed, the robot doesn't have enough acceleration to reach the desired velocity direction. So to


minimize the direction error, the output velocity of the acceleration limiter will be in the direction of the closest point.

If the distance is smaller, the robot can achieve the desired direction of the velocity, even if not at the same norm. So the intersection point between a circumference centered at the current velocity, with a radius equal to the maximum velocity variation, and the desired velocity vector is calculated. This point represents the maximum velocity the robot can achieve, given the maximum acceleration limit, while moving in the desired direction.

Figure 4.11: New implementation of the acceleration limiter. a) Priority is given to the direction of the desired velocity over its norm. b) When the desired velocity can't be achieved given the maximum acceleration, the output lies on the line perpendicular to the desired velocity that passes through the current velocity point.

An exception must be made when the angle difference between the desired velocity and the current velocity is greater than 90 degrees: the resulting velocity using the perpendicular method would point in the direction opposite to the desired velocity. In this case, the original implementation of the acceleration limiter is used.
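The procedure above can be sketched as follows. This is an illustrative reconstruction, not the CAMBADA code: the function names, the 2-D vector representation and the fallback clipping step are assumptions.

```python
import math

def limit_acceleration(current, desired, max_dv):
    """Velocity update prioritizing the desired direction over its norm.
    current, desired: (x, y) velocities; max_dv: max change allowed this cycle."""
    cx, cy = current
    dx, dy = desired
    dnorm = math.hypot(dx, dy)
    if dnorm == 0:
        return _clip_step(current, desired, max_dv)
    ux, uy = dx / dnorm, dy / dnorm            # desired direction (unit vector)
    proj = cx * ux + cy * uy                   # projection of current on that direction
    if proj < 0:                               # angle > 90 deg: use the original limiter
        return _clip_step(current, desired, max_dv)
    px, py = proj * ux, proj * uy              # closest point on the desired vector
    dist = math.hypot(cx - px, cy - py)        # = sin(theta) * |current|
    if dist > max_dv:
        # Desired direction unreachable this cycle: step toward the closest point
        t = max_dv / dist
        return (cx + (px - cx) * t, cy + (py - cy) * t)
    # Direction reachable: farthest point on the desired vector within max_dv
    along = proj + math.sqrt(max_dv ** 2 - dist ** 2)
    along = min(along, dnorm)                  # never exceed the desired speed
    return (along * ux, along * uy)

def _clip_step(current, desired, max_dv):
    """Original limiter: simply clip the velocity change to max_dv."""
    ex, ey = desired[0] - current[0], desired[1] - current[1]
    n = math.hypot(ex, ey)
    if n <= max_dv:
        return desired
    return (current[0] + ex / n * max_dv, current[1] + ey / n * max_dv)
```

For example, with current velocity (1, 0), desired velocity (0, 2) and a maximum change of 0.5, the distance to the closest point (the origin) exceeds the allowed change, so the output moves toward it, shrinking the speed to better the direction.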

    4.4 Estimating Model Parameters

The next step is to determine the control delay. For this purpose, some captures were made while the robot moved. In these captures, the robot pose determined by self-localization and the commands sent to the actuators were recorded. Since the capture produces results every cycle, the delay is determined by the difference between the cycle


corresponding to the first command sent and the cycle corresponding to the robot movement; when the robot pose first changes, the first command has arrived at the motors. However, since the framerate of the camera used is 25 fps, the delay can only be determined to within an interval of 1/25 = 0.040 seconds. Using this method, the delay affecting the CAMBADA robots would be in [0.160; 0.200[ seconds. Experiments were conducted to determine the delay with more precision, by trying to minimize the error between the simulated robot path and the real robot path; however, at this stage, none have proven trustworthy. So, as a last resort, different delay values between 0.160 and 0.200 seconds were tested while the robot moved between the same two points. This method showed that as the delay tends to 0.200 the robot path deviates further from the straight line. The chosen value for the delay was 0.165 seconds, which provided the best resulting paths.
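The cycle-counting step of this measurement can be sketched as below. The log format (one commanded speed and one pose per cycle) is a hypothetical simplification of the actual capture files.

```python
import math

FRAME_PERIOD = 1.0 / 25  # camera at 25 fps -> one pose sample every 0.040 s

def delay_interval(commands, poses, eps=1e-3):
    """Bracket the control delay: count the cycles between the first non-zero
    command and the first detectable pose change, then convert to seconds.
    Returns a (lower, upper) bound, since the camera only samples every cycle."""
    first_cmd = next(i for i, c in enumerate(commands) if c != 0)
    first_move = next(i for i in range(1, len(poses))
                      if math.dist(poses[i], poses[i - 1]) > eps)
    cycles = first_move - first_cmd
    return (cycles * FRAME_PERIOD, (cycles + 1) * FRAME_PERIOD)
```

With a command first sent at cycle 2 and the pose first changing at cycle 7, the delay is bracketed in [0.200; 0.240[ seconds, mirroring how the [0.160; 0.200[ interval in the text was obtained.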

    4.5 Results

Using the prediction module combined with the new acceleration limiter implementation, the robot successfully moved in a straight line while rotating simultaneously, as shown in Fig 4.12.

Figure 4.12: The robot path using the new acceleration limiter and robot pose prediction after the delay. The delay value is 0.165 seconds. The robot is rotating counter-clockwise towards (0,0).

An experiment was then conducted to compare the developed solution with the original solution, in which the robot rotates first and moves afterwards. The experiment consisted of moving the robot between two points using the previously mentioned solutions. The process was repeated ten times and the time was recorded for every run.


With the original solution, the robot took an average of 3.62 seconds to move from one point to the other, with a standard deviation of 0.11 seconds. With the developed solution, the robot took an average of 3.28 seconds to move between the same two points, with a standard deviation of 0.16 seconds.

This results in a mean time difference of 0.34 seconds. Although the improvement is not considerably large, given the fast pace of a soccer game the increased speed might prove crucial in some game situations.

Finally, an experiment was conducted in which the robot moved with a constant angular velocity. Even in this adverse condition the robot performance was acceptable, despite not moving in a straight line and producing an overshoot.

Figure 4.13: The resulting robot path with constant angular velocity. The robot is rotating counter-clockwise towards (0,0).

The reflective collision ball model also proved effective, especially with the method for faster convergence of the ball velocity estimation. The robot can now predict considerably well the ball position after the control delay, as shown in Fig 4.14.

Following the implementation of predictive control, some ad-hoc corrections that had been added to the existing behaviours to compensate for the undesired robot movement caused by the control delay could be removed.

    4.6 Summary

Several aspects affecting holonomic motion control were presented, along with the developed contributions. Special focus was given to predictive control, to address control latency issues. Also, an innovative algorithm was implemented to update the robot velocity which


Figure 4.14: Predicted ball positions after the control delay. The red line represents the ball positions sensed by the vision sensor and the green line represents the predicted ball positions. The blue line is the ball velocity.

prioritizes the direction of movement over its speed. The work implemented in this field improved the robot motion and overall speed.


Chapter 5

    Stuck Detection

In robotics in general, and in robotic soccer in particular, autonomous robots may benefit if they are able to detect stuck situations. A stuck situation happens when a robot is blocked from progressing along the desired trajectory. Although obstacle avoidance, one of the robots' basic behaviours, exists to prevent these situations, robots still tend to get stuck.

In the robotic soccer scope, given the highly dynamic environment of a game, this situation happens often, especially when two robots dispute the ball. If the situation holds for some time, a drop ball is awarded, where the robots must stay one meter away from the ball and can only move after the referee gives the signal. However, if the teams have robots with similar speed, they will reach the ball at the same time and a new drop ball may be awarded.

If not handled properly, stuck situations can have severe outcomes. The robot can damage itself, the field, the opponents' robots or, worse, one of the referees. Therefore, the teams must make an effort to preserve the welfare and property of the different stakeholders.

In this chapter, different methods of stuck detection are presented and the implementation used in the CAMBADA team is discussed.

    5.1 Known Methods for Stuck Detection

    5.1.1 Haptic Sensors

Haptic, from the Greek haptikós, is the ability to grasp or perceive an object through touch [44]. Despite being considered one of the five human senses, touch is actually a somatosensory system [45], comprising pressure, texture and temperature sensors. However, haptic sensors don't fully replicate touch, as they are only capable of measuring pressure.

In robotics, haptic sensors have found great use in solving manipulation problems in robotic arms. In the stuck detection scope, the pressure measured by these sensors could easily determine the state of the robot.

Although these sensors reproduce the same information used by humans to get unstuck, it would be impractical to apply them to the same end in robotics, since all the


robot surface would have to be coated with haptic receptors, which would be very costly.

    5.1.2 Bumper Sensors

Bumper sensors are simpler versions of haptic sensors. They have only two possible states: they either detect contact or they don't.

Their simplicity is an advantage as well as a downfall. Although much cheaper than haptic sensors, the entire robot surface still needs to be covered in order to detect a stuck situation. This can be accomplished with a high number of sensors, which would allow detecting from which side the robot is being blocked, or with few sensors, at the cost of a low angular resolution. The addition of hardware is always undesired, and collisions with robots of faster teams could even damage the sensors in the process.

    5.1.3 Stasis Sensors

Stasis is a state of stability in which all forces are equal and opposing, therefore cancelling each other [46].

There is a large variety of sensors able to detect stasis situations. An example of a stasis sensor is a simple drag wheel with an optical encoder [47]. In a stasis situation, the encoders of the driven wheels indicate a large displacement from the previous position, as all the power supplied to the wheels is lost in slippage, while the encoder of the passive drag wheel shows only a small displacement, since the robot did not actually move.
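The encoder comparison just described can be sketched as a simple ratio test; the threshold value and tick-based interface are illustrative assumptions.

```python
def stasis_detected(driven_ticks, drag_ticks, ratio_threshold=0.2):
    """Flags stasis when the passive drag wheel travelled far less than the
    driven wheels report: power went into slippage, not motion.
    The ratio threshold is an illustrative assumption."""
    if driven_ticks == 0:
        return False  # no commanded motion, nothing to compare
    return drag_ticks / driven_ticks < ratio_threshold
```

A driven-wheel displacement of 100 ticks against only 5 drag-wheel ticks would be flagged as stasis, while roughly matching displacements would not.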

Although stasis detection can be simple and effective, it suffers from two major drawbacks. First, being stuck doesn't always result in a stasis situation, especially when disputing the ball with other robots, since the robot oscillates; hence, some stuck situations could pass undetected with this method. Second, this implementation involves adding hardware to the robot, which has associated costs and must be maintained. Moreover, to achieve faster velocities the robots must be as light as possible, so additional hardware should be avoided.

    5.1.4 Accelerometer Sensor

An alternative method to detect stuck situations is through accelerometers. Accelerometers measure the acceleration experienced, relative to free fall.

Besides sensing orientation, vibration and shocks, accelerometers can also detect slippage and have been used to generate kinematic models for robotic arms [48]. In [49], accelerometers are also used to determine environmental states, such as the type of surface the robots walk on, and stuck states, through a C4.5 classifier.

Besides involving additional hardware that is not currently integrated in the robot, this method might also require supervised learning in a training phase before it can detect stuck situations, since changing game fields may change the


pattern through which the classifier detects these situations. This is inconvenient, since the setup time should never be increased.

    5.1.5 Image Processing

Stuck detection is also possible through image processing. In conjunction with the information from the wheel encoders, frame comparison can accurately determine the robot state. [50] proposes a subdivision of the frames, where only the portions of the frames that correspond to closer distances are accounted for, since they vary less when the robot is stuck. Then, the colour of each sub-frame is compared to the sub-frame from the previous time step. If the colour difference of the sub-frames stays below a pre-determined threshold, the robot is considered stuck. This method optimizes stuck detection while filtering false positives.
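A minimal sketch of that sub-frame comparison follows; the sub-frame layout (lists of pixel intensities for the near-field region), the mean-colour metric and the threshold are all assumptions, not the exact method of [50].

```python
def looks_stuck(prev_subframes, curr_subframes, threshold=10.0):
    """Per-sub-frame mean-colour comparison over the near-field image region:
    when every sub-frame changed less than the threshold, the scene looks
    frozen, hinting the robot may be stuck (to be cross-checked against the
    wheel encoders reporting commanded motion)."""
    for prev, curr in zip(prev_subframes, curr_subframes):
        mean_prev = sum(prev) / len(prev)
        mean_curr = sum(curr) / len(curr)
        if abs(mean_curr - mean_prev) >= threshold:
            return False  # enough change somewhere: the robot is moving
    return True
```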

Although image processing doesn't require additional hardware, since the CAMBADA robots are already equipped with cameras, this method has associated computational costs, which can have a severe impact due to the real-time constraints affecting the robot. Another problem with this approach is that the robotic soccer field is mainly green, and the lack of features may render this method very error-prone.

    5.2 CAMBADA Implementation

Although there are diverse solutions, such as the ones previously presented, an adaptation of the solution used by the Brainstormers Tribots team was chosen [42].

This solution doesn't involve additional hardware and has very low computational cost, which is ideal given the real-time constraints affecting robot control.

The method itself relies on the assumption that, when stuck, the estimated robot velocity is very low, since the robot will be almost stopped, even though the decision layer orders the robot to move.

Therefore, using the difference between the desired velocity, vd, and the estimated velocity, ve, it is possible to determine if the robot is stuck. This is accomplished when the difference, e = ‖vd − ve‖, is larger than a pre-determined threshold. However, as this threshold might be exceeded when the robot is accelerating or decelerating, these situations must be filtered out by testing whether the derivative of the desired velocity is lower than a second threshold.

    Since stuck situations tend to persist for at least a few seconds, intervals of time areanalyzed instead of isolated time instants.

Despite the good results obtained by the Tribots team [42], the method presented in their paper was not directly applied; it was tailored to the CAMBADA robots' specifications.

Due to the fact that the CAMBADA robots determine their velocity through a linear regression, the estimated velocity is a two-dimensional variable {vx, vy}, while the desired velocity has three dimensions {vx, vy, ω}. This raises a problem, since it is not correct to identify stuck situations based on the comparison of the linear components of both velocities: it might result in a false positive in situations where the robot is rotating


around an external point while facing it, as happens when the robot is dribbling the ball while avoiding opponents. This situation is depicted in Fig 5.1.

In this case, the non-null components of the desired velocity will be vx and ω. However, as a result of the linear regression, the estimated velocity will also have a non-zero component in vy. Therefore, the difference between the velocities would be larger than the threshold and the robot would detect itself as stuck while rotating freely. To solve this problem, the situation is determined not by the norm of the difference of the two velocities, e = ‖vd − ve‖, but by the difference of the norms of the two velocities, e = ‖vd‖ − ‖ve‖. With this method, the robot no longer detects itself as stuck in the previous situation.

    Figure 5.1: Diagram of the discrepancy between the desired and estimated velocities.
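The effect of swapping the two formulas can be checked numerically. The velocity values below are illustrative, not measured data: they mimic the rotating-while-facing case where the regression picks up a lateral vy component.

```python
import math

# Linear part of the commanded velocity {vx, vy, w}: forward motion only.
v_desired = (1.0, 0.0)
# Regression-based estimate: same speed, but with a spurious lateral vy.
v_estimated = (0.8, 0.6)

# Norm of the difference: treats the lateral component as an error.
norm_of_difference = math.hypot(v_desired[0] - v_estimated[0],
                                v_desired[1] - v_estimated[1])
# Difference of the norms: only compares speeds, ignoring direction.
difference_of_norms = abs(math.hypot(*v_desired) - math.hypot(*v_estimated))

# norm_of_difference ~ 0.63 could trip a threshold (false positive),
# while difference_of_norms is 0: the robot is not flagged as stuck.
```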

As mentioned before, the most common situation where robots get stuck is while disputing the ball, when the robots have opposite velocities and try to avoid one another while keeping the ball under control.

The developed method assumes that, when stuck, the robot's estimated velocity is lower than the desired velocity. However, when disputing the ball, the opposite may happen. Given the different velocities imposed by the robots, the ball is compressed and pushes the robots back as it expands. This means the robot actually moves back and forth and sideways in a matter of a few seconds, which produces a high estimated velocity. Therefore, the absolute value is considered, e = |‖vd‖ − ‖ve‖|.

Another change was made in the determination of the threshold of the stuck detection. As M. Lauer mentions [42], this threshold must be manually configured. However, it shouldn't be a fixed value, since if the robot gets stuck with a desired velocity lower than the threshold, detection is impossible. Thus, a threshold as a function of the desired velocity was used.

For an accurate comparison, the desired velocity is the oldest command sent from the HWcomm process, which should be the command currently being executed. However, since the commands are processed through the acceleration filter and the maximum acceleration value is quite restrictive, the filter often affects the desired value. Thus, testing the derivative of the desired velocity, as proposed in [42], makes no sense, since the robot is always accelerating or decelerating as a result of the acceleration filter.


Therefore, the analysed interval of time and the difference threshold were tuned so that these situations don't result in a false positive. Consequently, the CAMBADA robots detect themselves as stuck if the condition |‖vd‖ − ‖ve‖| > ‖vd‖ · 65% holds for at least 0.5 seconds.
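The resulting rule can be sketched as a small stateful detector. Only the thresholds (65% of the desired speed, roughly 0.5 s of consecutive cycles, and the 0.5 m/s minimum speed discussed in the results) come from the text; the class, its API and the dimensionless cycle counter are assumptions.

```python
WINDOW = 13      # consecutive control cycles, ~0.5 s at 25 cycles per second
MIN_SPEED = 0.5  # m/s; below this the robot is always considered unstuck

class StuckDetector:
    """Sketch of the CAMBADA-style stuck test on speed norms."""
    def __init__(self):
        self.count = 0  # consecutive cycles with the condition violated

    def update(self, vd_norm, ve_norm):
        """Feed one cycle's desired/estimated speed norms; True when stuck."""
        exceeded = (vd_norm >= MIN_SPEED
                    and abs(vd_norm - ve_norm) > 0.65 * vd_norm)
        self.count = self.count + 1 if exceeded else 0
        return self.count >= WINDOW
```

Feeding a desired speed of 2.0 m/s against a near-zero estimate flags the robot as stuck only after the 13th consecutive cycle, and any cycle where the speeds roughly agree resets the counter.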

Finally, a boolean variable stuck was added to the shared section of the RTDB, which is updated every control cycle in the Integrator module as the robot worldstate is updated.

    5.2.1 Results

The developed method was first tested in the CAMBADA training field in teleoperated mode, to safeguard the robot hardware. Stuck situations were replicated by driving the robot against the field limits or the goal. The robot successfully identified all situations and never reported a false positive while moving unhindered. However, all tests were performed with desired velocities higher than 0.5 m/s. This is quite important since, when tested in game situations, the goalkeeper in particular, due to fine repositioning, reported a high number of false positives per game. Therefore, an additional condition was added to the method: if the desired speed is lower than 0.5 m/s, the robot considers itself unstuck.

The fact that the method doesn't work at small desired speeds might explain why the Tribots team doesn't mention a difference threshold depending on the desired speed. It is most likely that their threshold is above the accepted minimum speed; therefore, not being able to identify stuck situations when the robot is moving with a speed lower than the threshold is better than risking false positives.

Fig 5.2 represents the evolution of the desired and estimated speeds during a stuck test. The red line is only for visualization purposes and represents the quantity tested for stuck identification, e = ‖vd‖ · 65% − |‖vd‖ − ‖ve‖|. The robot determines that it is stuck if e is below zero for more than 13 cycles, which represents 0.5 s. At first, the robot is stopped and begins by accelerating, trying to achieve the desired speed. Although the condition is negative in an acceleration phase, the time threshold is not exceeded and therefore the robot assumes it is not being hindered on its way. Around time step 50, the robot collides with an obstacle which blocks it from moving, so the estimated velocity begins to decrease while the desired velocity remains constant. Since the situation remained unchanged for more than 0.5 s, a stuck situation was identified around time step 66.

    5.3 Summary

    This chapter introduces the importance of detecting stuck situations during a gameduration. Different methods to achieve this goal were presented and the CAMBADAimplementation was discussed. The output of this method is consequently used in theconstruction of the robot Worldstate.


    Figure 5.2: Evolution of desired and estimated robot speeds when stuck.


Chapter 6

    Behaviours

The scope of this project's work also involved development at the robot's behavioural level. In this chapter, the CAMBADA robotic behaviours are presented along with the developed contributions.

The different CAMBADA behaviours represent the basic tasks to be performed by the robot, such as moving to a position in the field, dribbling or kicking the ball. A behaviour can then be seen as the basic block of a CAMBADA robot attitude. A behaviour executes a specific task by computing the desired velocities to be applied at the robot frame, activating the ball handling device and computing the desired strength to be applied to the kicking system.

The choice of a given behaviour at each instant of the game is made by a role, which is basically a finite-state machine composed of various behaviours that allows the different robots to play distinct parts in the team's overall strategy.

    The various CAMBADA behaviours are depicted in Fig 6.1.

    Figure 6.1: Class diagram of all CAMBADA behaviours.


All the CAMBADA behaviours derive from a generic behaviour class, Behaviour. This class implements the method execute, which inserts in the RTDB the different values that will later be translated into the powers to be applied to the various robot actuators.

Since the control carried out on the ball handling system is on/off, the method grabberControl, implemented in the Behaviour class, activates the device based on the position of the ball and on whether the ball is engaged.

All the derived behaviours therefore have the responsibility of calculating the desired velocities to be applied at the robot frame; the behaviour Kick, in particular, also calculates the strength to be applied to the kicking system.
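The structure just described can be sketched as follows. The method names mirror the text (execute, grabberControl), but the signatures, the dict-based RTDB stand-in and the Move example are illustrative assumptions, not the actual C++ classes.

```python
class Behaviour:
    """Sketch of the generic behaviour interface described above."""

    def calculate(self, world):
        """Derived behaviours compute the (vx, vy, w) robot-frame velocities."""
        raise NotImplementedError

    def grabber_control(self, world):
        # On/off ball-handling control, driven by ball position/engagement.
        return bool(world.get("ball_engaged") or world.get("ball_close"))

    def execute(self, world, rtdb):
        # Insert the computed actuation values into the (stand-in) RTDB.
        vx, vy, w = self.calculate(world)
        rtdb.update({"vx": vx, "vy": vy, "w": w,
                     "grabber": self.grabber_control(world)})

class Move(Behaviour):
    """Minimal derived behaviour: drive toward a target field position."""

    def __init__(self, target):
        self.target = target

    def calculate(self, world):
        x, y = world["pose"]
        tx, ty = self.target
        return (tx - x, ty - y, 0.0)  # naive unit-gain proportional drive
```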

    6.1 Active Interception

A previously implemented behaviour that required refinement was the ActiveInterception.

The ability to intercept the ball along its path is of major importance in the robotic soccer context. The alternative, moving to the current ball position, is by no means optimal, since the robot takes more time to catch the ball and in some cases might not catch it at all. Fig 6.2 shows a possible described path when the robot moves to the estimated ball position.

Figure 6.2: Described path when the robot moves to the estimated ball position, adapted from [43].

When considering that the ball could be dribbled by an opponent robot in the goal's direction, not intercepting the ball could have severe negative consequences. The problem scales as the other teams strive to improve their robots' top speed.

The previous implementation [43] was based on an iterative process that considered the ball path over 20 meters, in steps of 0.2 meters, and calculated whether the ball or the


robot would reach a given point along the ball path first. The first point that the robot would reach before the ball would be the interception point.

However, in the calculation of the interception point, both the ball and the robot dynamic models were considered to have uniform movement. This means that a constant mean velocity is assumed as a parameter representing the mean velocity with which the robot moves during the interception. This raises a problem, since it is assumed that the robot can achieve the assumed mean velocity instantly, even if it is opposite to the direction of the current robot movement.

The new implementation assumes a uniformly accelerated kinematic model instead of a uniform movement kinematic model. Given the work previously described in Chapter 4, the value of the maximum acceleration imposed on the robot movement is known, a = 3 m/s². Therefore, the acceleration and the current speed are taken into account in the interception point calculation, as well as the robot's maximum speed, maxspeed. Since the maximum velocity depends on the direction of the movement, wheel slippage and slacks, its value was empirically obtained.
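Under these assumptions, the iterative search can be sketched as below: the robot accelerates at a = 3 m/s² up to a top speed and the ball moves uniformly along a straight path. The vmax value, the scalar initial speed along the path, and the function names are illustrative assumptions, not the CAMBADA implementation.

```python
import math

def travel_time(d, v0, a=3.0, vmax=2.5):
    """Time for the robot to cover distance d, accelerating at a from speed
    v0 up to vmax; a = 3 m/s2 comes from the text, vmax is an assumption."""
    t_acc = (vmax - v0) / a                    # time to reach top speed
    d_acc = v0 * t_acc + 0.5 * a * t_acc ** 2  # distance covered meanwhile
    if d <= d_acc:                             # target inside the accel phase
        return (-v0 + math.sqrt(v0 * v0 + 2 * a * d)) / a
    return t_acc + (d - d_acc) / vmax          # accel phase + cruise at vmax

def interception_point(robot, v0, ball, vball, step=0.2, length=20.0):
    """Walk the ball path in 0.2 m steps (as in the original algorithm) and
    return the first point the robot reaches no later than the ball."""
    speed = math.hypot(*vball)
    if speed == 0:
        return ball                            # stationary ball: go to it
    ux, uy = vball[0] / speed, vball[1] / speed
    s = 0.0
    while s <= length:
        point = (ball[0] + ux * s, ball[1] + uy * s)
        if travel_time(math.dist(robot, point), v0) <= s / speed:
            return point
        s += step
    return None                                # no interception within 20 m
```

For instance, a stationary robot one meter to the side of a ball rolling at 1 m/s intercepts it roughly one meter down the ball's path, rather than chasing the ball's current position.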

    Then the position of the robot, ~p(t, θ) is,

~p(t, θ) = { ~p0 + ~v0 · t + 1/2 · ~a ·