{{indexmenu_n>1}} ====== Lab01 - Introduction to V-REP and Robot Locomotion Control ====== ^ Motivations and Goals ^ | Become familiar with the V-REP robotic simulator | | Be able to control a hexapod walking robot | ^ Tasks ([[courses:b4m36uir:internal:instructions:lab01|teacher]]) ^ | Familiarize yourself with a simple "nature inspired" locomotion controller for a hexapod walking robot | | Create a set of "motion primitives" to abstract the robot motion control **(2 Points)** | ^ Lab resources ^ | Lab scripts: {{:courses:b4m36uir:labs:lab01.zip|lab01 resource files}} | | V-REP scenes: {{:courses:b4m36uir:labs:scenes.zip|simple_plain.ttt}}| | V-REP remoteAPI: {{:courses:b4m36uir:labs:hexapod_vrep.zip|hexapod_vrep}}| ===== Robotic Simulator V-REP ===== [[http://www.coppeliarobotics.com/|V-REP]] is a powerful cross-platform 3D simulator based on a distributed control architecture: control programs (or scripts) can be directly attached to scene objects and run simultaneously in a threaded or non-threaded fashion. It features advanced physics engines that allow simulating real-world physics and object interactions (collisions, object dynamics, etc.).
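Besides scripts embedded in the scene, the simulator can be controlled from an external program through the Python remote API used in this lab. A minimal sketch of commanding a single servo is shown below; note that the joint name ''hexa_joint1'' and the default port 19997 are assumptions that you have to adapt to your scene and remote API settings.

<code python>
# Sketch of driving one hexapod servo through the V-REP remote API.
# Requires vrep.py and the remoteApi library (see the hexapod_vrep archive).

def drive_joint(joint_name='hexa_joint1', target_angle=0.3):
    import vrep  # provided by the hexapod_vrep archive

    vrep.simxFinish(-1)  # close any stale connections
    client_id = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
    if client_id == -1:
        raise RuntimeError('Could not connect to V-REP; is the simulator running?')
    try:
        # Resolve the joint handle by its name in the scene hierarchy.
        res, handle = vrep.simxGetObjectHandle(client_id, joint_name,
                                               vrep.simx_opmode_blocking)
        if res != vrep.simx_return_ok:
            raise RuntimeError('Joint %s not found in the scene' % joint_name)
        # Command the joint to the target angle (in radians).
        vrep.simxSetJointTargetPosition(client_id, handle, target_angle,
                                        vrep.simx_opmode_oneshot)
    finally:
        vrep.simxFinish(client_id)

# drive_joint()  # uncomment and run with the simulator started
</code>

The connection attempt returns client id -1 when the remote API server is not reachable, so start the simulation before running the script.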
[[http://www.coppeliarobotics.com/helpFiles/en/externalControllerTutorial.htm|V-REP control methods]]\\ [[http://www.coppeliarobotics.com/helpFiles/en/remoteApiFunctionsPython.htm|V-REP Python remote API documentation]]\\ [[http://www.coppeliarobotics.com/helpFiles/en/remoteApiFunctions.htm|V-REP C++ remote API documentation]] [[http://www.edisondev.net/VREP/04PythonTutorial|V-REP Python remote API tutorial]] ===== Hexapod model for V-REP ===== {{:courses:b4m36uir:labs:phantomx.jpg?200|PhantomX MarkII hexapod robot}}\\ {{:courses:b4m36uir:labs:phantom_model.zip|Hexapod model}} Hexapod servos numbering:\\ {{:courses:b4m36uir:labs:hexapod.png?200|}} ===== Robot Locomotion Using Central Pattern Generator ===== A Central Pattern Generator (CPG) is a biologically inspired neural network that produces rhythmic patterned outputs (( [[https://www.cs.cmu.edu/~hgeyer/Teaching/R16-899B/Papers/Ijspeert08NeuralNEtworks.pdf|A. J. Ijspeert, "Central pattern generators for locomotion control in animals and robots: A review", In Neural Networks, Volume 21, Issue 4, 2008, Pages 642-653]] )). CPGs are composed of individual neurons connected by mutual inhibition. The most widely used CPG model and structure is the Matsuoka oscillator (( [[http://www.cs.cmu.edu/afs/cs/Web/People/hgeyer/Teaching/R16-899B/Papers/Matsuoka85BiolCybern.pdf |K. Matsuoka, "Sustained oscillations generated by mutually inhibiting neurons with adaptation." Biological cybernetics, Volume 52, Issue 6, 1985, Pages 367-376.]] )). The main difficulty is tuning the parameters of the individual connections so that the neural network exhibits limit cycles (( [[http://www.roboticsproceedings.org/rss02/p25.pdf| L. Righetti, A. J. Ijspeert, "Design methodologies for central pattern generators: an application to crawling humanoids."
Proceedings of robotics: Science and systems, 2006, Pages 191-198]] ))\\ We will use a Matsuoka oscillator formed by four neurons in mutual inhibition.\\ {{:courses:b4m36uir:labs:cpg_scheme.png?direct&250| CPG scheme}}\\ The CPGs are connected in a network where each leg is driven by one CPG (( [[http://ieeexplore.ieee.org/document/7909020/?reload=true | G. Zhong, L. Chen, Z. Jiao, J. Li and H. Deng, "Locomotion Control and Gait Planning of a Novel Hexapod Robot Using Biomimetic Neurons," in IEEE Transactions on Control Systems Technology, Volume PP, Number 99, Pages 1-13]] )). \\ {{:courses:b4m36uir:labs:cpg_net.png?direct&200|Network structure}}\\ Typical output of the CPG network and the transition between different gaits.\\ {{:courses:b4m36uir:labs:oscilator_output.png?400| CPG output}}\\ The translation of the CPG output to the actuators can be done either directly [[#fn__1|1)]], or by post-processing the signal and applying inverse kinematics [[#fn__4|4)]] (( [[ http://ceur-ws.org/Vol-1649/131.pdf | P. Milička, P. Čížek, and J. Faigl, "On chaotic oscillator-based central pattern generator for motion control of hexapod walking robot," in ITAT, CEUR Workshop Proceedings, Volume 1649, 2016, Pages 131-137.]] )) to calculate the foot-tip trajectories. In this lab, we use the direct approach and map the CPG output to the joint angles. **Task 1**\\ Inspect the provided CPG-based locomotion controller and play with the parameter settings of the CPG to obtain different gaits and gait transitions. ===== Controlling the Robot in V-REP (2 Points) ===== In intelligent robotics, navigation is a vital task for the robot. Hence, the robot has to be aware of its position with respect to the goal and then find a suitable way to reach it. In this course we are interested mostly in artificial intelligence and planning; hence, the localization is provided in 6 Degrees Of Freedom (DOF) in global coordinates by the simulator.
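The geometric core of such a go-to-goal behavior is repeatedly computing the distance and the heading error from the robot pose $(x, y, \theta)$ reported by the simulator to the goal. A minimal sketch follows; the robot size value is an illustrative assumption, not a parameter of the provided code.

<code python>
import math

ROBOT_SIZE = 0.25  # assumed robot footprint in meters; adjust to the hexapod model

def navigation_error(pose, goal):
    """Distance and heading error from pose (x, y, theta) to goal (x, y, theta)."""
    dx = goal[0] - pose[0]
    dy = goal[1] - pose[1]
    dist = math.hypot(dx, dy)
    # bearing toward the goal position, wrapped to (-pi, pi]
    bearing = math.atan2(dy, dx) - pose[2]
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    return dist, bearing

def goal_reached(pose, goal, robot_size=ROBOT_SIZE):
    """True when the robot is inside the goal neighborhood whose diameter
    is half of the robot size (radius = robot_size / 4)."""
    dist, _ = navigation_error(pose, goal)
    return dist <= robot_size / 4.0
</code>

A simple controller can then turn while the heading error is large and switch to the forward motion primitive otherwise, stopping once ''goal_reached'' holds.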
**Task 2**\\ Based on the provided locomotion controller and localization routines, implement a function that guides the robot directly to the goal given by its 2D position in the global reference frame and the desired orientation of the robot, i.e., the state of the robot is described as a triple $(x, y, \theta)$. **(2 Points)**\\ Note that the real robot can never reach the precise goal position; hence, reaching a small neighborhood of the goal position with a diameter of half the robot size is considered sufficient. ===== Provided materials ===== Lab exercise materials are available for {{:courses:b4m36uir:labs:lab01.zip|download}}\\ The directory structure of the archive is as follows:\\ * ''lab01'' : source files for lab01 * ''lab01.py'' : main file with the locomotion control demo * ''oscilator_constants.py'' : auxiliary file with constants for the central pattern generator * ''oscilator_network.py'' : implementation of the central pattern generator according to [[#fn__4|4)]]
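The Matsuoka dynamics used by the locomotion controller can be sketched as follows. For brevity this is a two-neuron oscillator rather than the four-neuron networks of the lab files, integrated with the Euler method; all parameter values are illustrative assumptions, not the values from ''oscilator_constants.py''.

<code python>
def matsuoka_step(state, dt=0.005, tau=0.1, T=0.2, a=2.5, b=2.5, u=1.0):
    """One Euler step of a two-neuron Matsuoka oscillator.
    state = (x1, x2, v1, v2): membrane potentials and adaptation variables,
    tau/T are the time constants, a the mutual inhibition weight,
    b the adaptation weight, and u the tonic input."""
    x1, x2, v1, v2 = state
    y1, y2 = max(0.0, x1), max(0.0, x2)  # rectified firing rates
    # each neuron is inhibited by the other's output and by its own adaptation
    dx1 = (-x1 - a * y2 - b * v1 + u) / tau
    dx2 = (-x2 - a * y1 - b * v2 + u) / tau
    dv1 = (-v1 + y1) / T
    dv2 = (-v2 + y2) / T
    return (x1 + dt * dx1, x2 + dt * dx2, v1 + dt * dv1, v2 + dt * dv2)

def run_oscillator(steps=5000):
    """Simulate the oscillator and return the output signal y1 - y2."""
    state = (0.1, 0.0, 0.0, 0.0)  # small asymmetry kicks off the oscillation
    outputs = []
    for _ in range(steps):
        state = matsuoka_step(state)
        y1, y2 = max(0.0, state[0]), max(0.0, state[1])
        outputs.append(y1 - y2)   # the rhythmic signal mapped to a joint angle
    return outputs
</code>

With these weights the neurons fire alternately and the output settles into a limit cycle, which is the rhythmic signal driving a joint; changing the mutual couplings between such oscillators is what produces the different gaits explored in Task 1.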