Tutorials

The following tutorials will be offered during parallel sessions, Sept 5th-8th.


Building Artificial Life Systems for the Web with Empirical

Tuesday 5 Sept 10:30 – 12:30 • Huma. 109

Organizers: Emily Dolson, Charles Ofria (MSU, US)

Description: Artificial Life has the potential to transform research and engage the public, but few projects manage to meet both of these goals. This challenge is due, in part, to the difficulty of simultaneously maintaining a fast research version of the software and a more accessible public version (ideally web-based). The recently released Emscripten compiler gives us the best of both worlds: it converts C++ to high-efficiency JavaScript, allowing for easy development of software that is both fast (~10x faster than native JavaScript) and highly accessible.

Empirical (https://github.com/devosoft/Empirical) is a lightweight C++ library designed to facilitate writing scientific software using Emscripten without needing to know the intricacies of JavaScript or web development. Empirical provides intuitive tools to produce high-performance Artificial Life software, including population managers, data trackers, configuration handlers, worlds, organisms, and a variety of user-interface and data visualization features.

In this tutorial, attendees will learn the basics of the Empirical library, using it to create and visualize simple Artificial Life systems (including NK simulations, cellular automata, and agent-based models). Note that this is a hands-on tutorial and laptops are strongly encouraged.
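
For orientation, the sketch below shows the flavour of the kind of model attendees will build: a minimal NK fitness landscape in plain C++ (it deliberately does not use Empirical's actual API, and the table-based lookup of random fitness contributions is just one common way to code the model).

    #include <random>
    #include <vector>

    // Minimal NK landscape: each of N bits contributes a fitness value that
    // depends on itself and its K right-hand neighbours (with wrap-around).
    struct NKLandscape {
        int N, K;
        std::vector<double> table;   // N * 2^(K+1) random fitness contributions

        NKLandscape(int n, int k, unsigned seed)
            : N(n), K(k), table(n * (1 << (k + 1))) {
            std::mt19937 rng(seed);
            std::uniform_real_distribution<double> uni(0.0, 1.0);
            for (double& v : table) v = uni(rng);
        }

        double Fitness(const std::vector<int>& bits) const {
            double total = 0.0;
            for (int i = 0; i < N; ++i) {
                int pattern = 0;   // bits i..i+K form an index into the table
                for (int j = 0; j <= K; ++j) pattern = (pattern << 1) | bits[(i + j) % N];
                total += table[i * (1 << (K + 1)) + pattern];
            }
            return total / N;      // mean contribution across loci
        }
    };

In the tutorial, a model along these lines would then be compiled with Emscripten and wired to Empirical's web interface and visualization tools.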

Intended audience: intermediate.
Duration: 2*2h


Evolution in the cloud

Tuesday 5 Sept 10:30 – 12:30 • Huma. 111

Organizer: JJ Merelo (Granada, Spain)

Description: The main objective of the tutorial is to familiarize the attendees with cloud computing concepts so that, by the end of the talk, they are able to use them in the design, development and eventual deployment of artificial life systems or complex systems applications in the cloud. Cloud computing encompasses a series of technologies that imply changes in the way applications are designed and built. In the case of artificial life, we are mainly talking about distributed implementations of agent-based systems, but the changes go deeper and will probably require changes in the shape and design of the algorithm itself. At the end of the day, this tutorial addresses scientific programming in the cloud in general, using examples drawn from our own experience in evolutionary computation, complex systems and artificial life. In this tutorial we will walk through the different technologies and methodologies involved in cloud computing, showing with working examples how they can be applied to artificial life. We will also talk about different cloud service providers and how they can be used, for free or at a low price, to carry out experiments in a reproducible and thus more efficient way.
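
To give a flavour of the kind of pattern discussed, the sketch below (plain C++, purely illustrative; the WORKER_ID and RUN_SEED environment variables are hypothetical names, not anything prescribed by the tutorial) shows the "embarrassingly parallel" fan-out typical of cloud experiments: every worker instance runs the same program, reads its identity and seed from the environment, evaluates its own slice of the experiment, and writes results to be aggregated later.

    #include <algorithm>
    #include <cstdlib>
    #include <iostream>
    #include <random>

    // Every cloud worker runs the same image; WORKER_ID and RUN_SEED select
    // its slice of the experiment, keeping runs reproducible and easy to relaunch.
    int main() {
        const char* id_env   = std::getenv("WORKER_ID");
        const char* seed_env = std::getenv("RUN_SEED");
        int worker_id = id_env ? std::atoi(id_env) : 0;
        unsigned seed = seed_env ? static_cast<unsigned>(std::atoi(seed_env)) : 42;

        std::mt19937 rng(seed + worker_id);   // independent random stream per worker
        std::uniform_real_distribution<double> uni(0.0, 1.0);

        // Stand-in for one independent evolutionary run or parameter setting.
        double best = 0.0;
        for (int i = 0; i < 100000; ++i) best = std::max(best, uni(rng));

        std::cout << "worker " << worker_id << " best " << best << std::endl;
        return 0;
    }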

Intended audience: beginner to intermediate, a working knowledge of some programming language is a plus.
Duration: 2h


Evolutionary robotics – a practical guide to experiments with real hardware

Tuesday 5 Sept 14:00 – 16:00 • René Char

Organizers: Jacqueline Heinerman, Dr. Julien Hubert, Evert Haasdijk, Gusz Eiben (VUA, NL)

Description: An important research topic in Artificial Life is Evolutionary Robotics (ER). ER aims to evolve the controllers, the morphologies, or both, for real and/or simulated autonomous robots. Most research in evolutionary robotics is partly or completely carried out in simulation. Although simulation has advantages, e.g., it is cheaper and can be faster, it suffers from the notorious reality gap. Recently, affordable and reliable robots have become commercially available. Hence, setting up a population of real robots is within reach for many research groups today. This tutorial focuses on the know-how required to utilize such a population for running evolutionary experiments. To this end we use Thymio II robots with Raspberry Pi extensions (including a camera). The tutorial explains and demonstrates the workflow from beginning to end, by going through a case study of a group of Thymio II robots evolving their neural network controllers on-the-fly.
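
The heart of such an on-the-fly setup can be written down compactly. The plain C++ sketch below shows a simple (1+1)-style on-line loop in which a robot repeatedly mutates its current controller weights, evaluates the challenger for a fixed time window, and keeps whichever controller scored better; evaluateOnRobot is a toy stand-in for driving the Thymio II and measuring task performance, not code from the tutorial.

    #include <random>
    #include <vector>

    // Hypothetical stand-in for running the controller on the physical robot for
    // a fixed time window and measuring task performance; a toy function is used
    // here so the sketch is self-contained.
    double evaluateOnRobot(const std::vector<double>& weights) {
        double score = 0.0;
        for (double w : weights) score -= (w - 1.0) * (w - 1.0);
        return score;
    }

    void onlineEvolution(std::vector<double>& weights, int generations, unsigned seed) {
        std::mt19937 rng(seed);
        std::normal_distribution<double> gauss(0.0, 0.1);    // mutation step size

        double current = evaluateOnRobot(weights);
        for (int g = 0; g < generations; ++g) {
            std::vector<double> challenger = weights;
            for (double& w : challenger) w += gauss(rng);    // Gaussian mutation
            double f = evaluateOnRobot(challenger);
            if (f >= current) {                              // keep the better controller
                weights = challenger;
                current = f;
            }
        }
    }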

Intended audience: beginner to intermediate.
Duration: 2h


Digital coevolution: a beginner's approach

Tuesday 5 Sept 14:00 – 16:00 • Huma. 111

Organizer: Miguel A. Fortuna

Description: Species are constantly evolving, bound together in complex networks of interdependencies. The architecture of these species interaction networks is shaped by evolutionary and ecological mechanisms (changes in traits and in population abundances, respectively). One of these evolutionary processes is coevolution, i.e., reciprocal evolutionary change among interacting species driven by natural selection. Yet, quantifying the role of coevolution in shaping the entangled web of life is a challenge facing researchers in the lab and in the wild. Could the coevolving web of life be disentangled by studying self-replicating computer programs that interact and coevolve within a user-defined computational environment? In this tutorial we will introduce the concept of digital coevolution as it has been recently implemented in Avida, the most widely-used artificial life software platform for the study of evolution. We will focus on a host-parasite framework that resembles the coevolutionary dynamics among bacteria and phages. On the one hand, bacteria must have receptors on their surface in order to import resources from the environment. On the other hand, phages must attach to those receptors in order to infect bacteria. Therefore, a trade-off exists between having receptors for obtaining nutrients and being susceptible to phages. Coevolutionary dynamics results from bacteria evolving phage resistance by changing their surface receptors, and phages countering resistance by altering their tail fibers to attach to the novel receptors. Analogously, digital hosts must compute logic operations to consume resources and thus replicate, but those traits leave them susceptible to infection by digital parasites.
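
A highly simplified illustration of that trade-off is sketched below in plain C++ (this is not Avida's actual representation): every logic task a host performs doubles as a "receptor", so performing more tasks yields more resources while also exposing more points of attachment for parasites.

    #include <set>
    #include <string>

    // Toy abstraction: the logic tasks a host performs double as its receptors;
    // a parasite specifies the single task (receptor) it can attach to.
    struct Host     { std::set<std::string> tasks_performed; };   // e.g. {"NOT", "NAND"}
    struct Parasite { std::string target_task; };

    // Performing more tasks means more resources for the host...
    double hostResourceRate(const Host& h) { return 1.0 + 0.5 * h.tasks_performed.size(); }

    // ...but each performed task is also a receptor a parasite can exploit.
    bool canInfect(const Parasite& p, const Host& h) {
        return h.tasks_performed.count(p.target_task) > 0;
    }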

Level: Beginner
Duration: 2h


Simulating Complex Systems with FLAME GPU

Wednesday 6 Sept 10:30 – 12:30 • Huma. 109

Organizers: Paul Richmond and Mozhgan Kabiri Chimeh (Sheffield, UK)

Description: Modelling and simulation of complex problems has become an established ‘third pillar’ of science, complementary to theory and experimentation. The multi-agent approach to modelling allows complex systems to be constructed in such a way that system-level complexity emerges from understanding at an individual level (i.e. a bottom-up approach). This approach is extremely powerful in domains as diverse as computational biology, economics and physics. Whilst multi-agent modelling provides a natural and intuitive method to model systems, the computational cost of performing large simulations is much greater than for top-down, system-level alternatives.

In order for multi-agent modelling and simulation to be used as a tool for delivering excellent science, it is vital that simulation performance can scale by targeting readily available computational resources effectively. Developed in the UK since 2008, FLAME GPU provides this computational capacity by targeting readily available Graphics Processing Units, and is capable of simulating many millions of interacting agents with performance that exceeds that of traditional CPU-based simulators. FLAME GPU is an extended version of the FLAME (Flexible Large-scale Agent-based Modelling Environment) framework and a mature, stable agent-based modelling and simulation platform that enables modellers from disciplines such as economics, biology and the social sciences to write agent-based models easily. Importantly, it abstracts the complexities of the GPU architecture away from modellers, so that they can concentrate on writing models without needing to acquire the specialist knowledge typically required to utilise GPU architectures.

This tutorial is aimed at the intermediate level. No knowledge of GPUs is required; however, basic knowledge of multi-agent modelling approaches (i.e. formulating a problem as a set of individuals within a system) is expected, as well as an understanding of XML document structure and basic programming ability.
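
As a reminder of what that formulation looks like in practice, the fragment below is a plain C++ sketch of the bottom-up pattern: an explicit per-agent state plus a local update rule applied across the whole population, with a double-buffered step so that all agents see the same previous state. It intentionally does not use FLAME GPU's XML model description or agent-function syntax, which the tutorial itself introduces.

    #include <vector>

    // Bottom-up formulation: the system is just a set of agents, each updated
    // from its own state plus locally available information.
    struct Agent { double x, y, vx, vy; };

    void updateAgent(Agent& a, const std::vector<Agent>& others, double dt) {
        // Toy rule: drift towards the average position of the other agents.
        if (!others.empty()) {
            double cx = 0.0, cy = 0.0;
            for (const Agent& n : others) { cx += n.x; cy += n.y; }
            cx /= others.size();
            cy /= others.size();
            a.vx += 0.01 * (cx - a.x);
            a.vy += 0.01 * (cy - a.y);
        }
        a.x += a.vx * dt;
        a.y += a.vy * dt;
    }

    void step(std::vector<Agent>& population, double dt) {
        // Double-buffered update so every agent sees the same previous state,
        // mirroring the synchronous, message-based updates of GPU simulators.
        const std::vector<Agent> previous = population;
        for (Agent& a : population) updateAgent(a, previous, dt);
    }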

The target audience for this tutorial is researchers and graduate students who are interested in the simulation of large multi-agent systems. In addition, members of the community who are constrained by current performance limitations in multi-agent software will find the tutorial particularly appealing. This tutorial will be of interest to a wide range of domain modellers within the ECAL community, many of whom use complex systems modelling as a tool within their research. For example, FLAME GPU has been successfully applied to (and provides example models for) bio-inspired modelling, ecology, crowd dynamics, cellular automata, swarms and economics, all of which are topics of interest within the ECAL call for papers.

By the end of the practical session, it is expected that the participants will understand how to write and execute a multi-agent model for FLAME GPU from scratch. Participants will leave with an appreciation of the key techniques, concepts, and algorithms which have been used.

Intended audience: intermediate.
Duration: 2h


Evolution of Neural Networks

Wednesday 6 Sept 10:30 – 12:30 • Huma. 111

Organizer: Risto Miikkulainen (The University of Texas at Austin and Sentient Technologies, Inc.)

Description: Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while the standard value-function based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network architecture and hyperparameters. While many such architectures are too complex to be optimized by hand, neuroevolution can be used to do so automatically. As a result, deep learning can be scaled to utilize more of the available computing power, and extended to domains combining multiple modalities. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) ways of combining gradient-based training with evolutionary methods, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
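
As a concrete reference point for item (1), the plain C++ sketch below shows the simplest case, evolving the weights of a fixed-topology network with Gaussian mutation and truncation selection; the fitness function is a toy stand-in, and nothing here is specific to the methods presented in the tutorial.

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    // Toy stand-in for evaluating a fixed-topology network: fitness is just the
    // negated distance of the weight vector from an arbitrary target.
    double fitness(const std::vector<double>& w) {
        double d = 0.0;
        for (double wi : w) d += (wi - 0.5) * (wi - 0.5);
        return -std::sqrt(d);
    }

    std::vector<double> evolveWeights(int n_weights, int pop_size, int generations, unsigned seed) {
        std::mt19937 rng(seed);
        std::normal_distribution<double> gauss(0.0, 0.1);
        auto better = [](const std::vector<double>& a, const std::vector<double>& b) {
            return fitness(a) > fitness(b);
        };
        std::vector<std::vector<double>> pop(pop_size, std::vector<double>(n_weights, 0.0));

        for (int g = 0; g < generations; ++g) {
            std::sort(pop.begin(), pop.end(), better);       // best individuals first
            for (int i = pop_size / 2; i < pop_size; ++i) {  // replace the worse half
                pop[i] = pop[i - pop_size / 2];              // copy a surviving parent
                for (double& w : pop[i]) w += gauss(rng);    // Gaussian weight mutation
            }
        }
        std::sort(pop.begin(), pop.end(), better);
        return pop.front();                                  // best weight vector found
    }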

Duration: 2h


Avida-ED: a web-based, GUI implementation of the Avida software platform, for educational use

Thursday 7 Sept 10:30 – 12:30 • Huma. 109

Organizers: Michael Wiser, Robert Pennock (MSU, US)

Description: Evolution is the fundamental principle for understanding the biological world, and it exemplifies the power of science to uncover deep explanatory patterns in nature. The National Academy of Sciences and other scientific organizations have frequently emphasized the importance of teaching evolution and the nature of science, but there are well-known impediments in this area. Chief among them, the gradual pace of evolution means that many people, largely non-scientists, do not immediately recognize that it is at work in the natural world. As such, many people deny that evolution can produce complex traits and believe that evolution cannot be observed or tested. While evolution can be observed in a laboratory, the skills and expense of doing so put it out of reach of most schools. Artificial Life, on the other hand, provides users with the ability to see evolution as it happens and easily manipulate the system to test hypotheses.

We have developed Avida-ED as a web-based educational version of the artificial life software platform Avida. Organisms in Avida-ED are self-replicating computer programs that experience a user-defined mutation rate, evolving in a world where they can complete bitwise logic tasks to gain additional CPU cycles and therefore replicate faster than their competitors. In this workshop, attendees will become familiar with the capabilities of this free software. We will work through several introductory exercises already field-tested in multiple classrooms, and distribute the lab manual we’ve developed for the software. We will further showcase this program’s potential for classroom research, highlighting independent research projects that teams of undergraduate students have undertaken.
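
To make the reward mechanism concrete, the plain C++ sketch below (illustrative only, not Avida-ED's internal code, and the doubling bonus is an assumed value) shows the kind of check involved: if an organism's output equals a bitwise logic function, here NAND, of inputs it has taken in, its merit is boosted, which translates into extra CPU cycles and faster replication.

    #include <cstdint>

    // Toy version of task checking: did the organism output the bitwise NAND
    // of two inputs it previously took in?
    bool performedNand(std::uint32_t in_a, std::uint32_t in_b, std::uint32_t output) {
        return output == static_cast<std::uint32_t>(~(in_a & in_b));
    }

    // Merit determines how many CPU cycles an organism receives per update;
    // completing a task multiplies it, so task-performers replicate faster.
    double updatedMerit(double merit, bool task_done, double bonus = 2.0) {
        return task_done ? merit * bonus : merit;
    }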

Level: Beginner to Intermediate.
Duration: 2h


Using MABE (Modular Agent Based Evolution)

Thursday 7 Sept 14:00 – 16:00 • Huma. 109

Organizer: Clifford Bohm (MSU, US)

Description: In this introduction to MABE, you will learn what MABE is, what it can be used for, and how to use it. MABE is a computational system for creating and studying evolving populations of digital organisms. It has been designed to be usable by researchers with limited computer experience and also to be extendable to suit the needs of advanced users. Organisms in MABE can contain genomes (which may be multi-chromosome and multi-ploidy) and/or brains (Markov, Genetic Programming, ANN, etc.), both of which can be extensively configured. Organisms are evaluated in worlds, either individually or as one or more populations. Two things that set MABE apart from other systems are that users can design their own genomes, brains and worlds, and that it is trivial to switch between different types of genomes, brains, and worlds. MABE has been used to study topics including: evolution of behavior, intelligence and learning; chemotaxis; genetic dynamics; game theory; and sexual selection/the origin of sexes.
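
The modularity described above can be pictured as a handful of small interfaces. The plain C++ sketch below is only an illustration of the idea (it is not MABE's actual class hierarchy): because worlds talk to brains, and brains are built from genomes, only through such interfaces, swapping one component for another requires no changes to the rest.

    #include <memory>
    #include <vector>

    // Illustration of module boundaries (not MABE's real API): a World scores
    // organisms, a Brain maps inputs to outputs, a Genome encodes a Brain.
    struct Brain {
        virtual std::vector<double> update(const std::vector<double>& inputs) = 0;
        virtual ~Brain() = default;
    };

    struct Genome {
        virtual void mutate() = 0;
        virtual std::unique_ptr<Brain> buildBrain() const = 0;
        virtual ~Genome() = default;
    };

    struct World {
        virtual double evaluate(Brain& brain) = 0;   // returns fitness
        virtual ~World() = default;
    };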

Intended audience: beginner to intermediate.
Duration: 2h


Evolutionary Game Theory: Models and Applications

Thursday 7 Sept 14:00 – 16:00 • Huma. 111

Organizer: Marco Alberto Javarone (University of Hertfordshire, United Kingdom).

Description: Evolutionary Game Theory (EGT) represents the attempt to describe the evolution of populations by combining the formal framework of Game Theory with the Darwinian principles of evolution. Nowadays, a long list of applications of EGT spans from biology to socio-economic systems, aiming to describe the behavior of complex phenomena. In particular, explaining the emergence of cooperation constitutes one of the most interesting challenges in this area. For instance, ‘slow random motion’, ‘conformity’, and ‘network reciprocity’ seem to be some of the possible underlying mechanisms that trigger cooperation in complex scenarios. These behaviors acquire special interest when considered in real contexts, such as social and ecological systems. In addition, since EGT can be framed in the area of complex systems, approaches based on statistical physics may provide the right key to further insights in this field.

As a result, the proposed tutorial is organized as follows:
• General Introduction to EGT
• Emergence of cooperation and related mechanisms
• General Overview of statistical physics models of complex systems
• Statistical Physics approach to EGT
• Modeling biological phenomena by EGT

Notably, I will use as reference two famous games: the Prisoner’s Dilemma and the Public Goods Game. In addition, both classical models and others based on complex network theory will be discussed. The proposed tutorial is devised for people without prior knowledge of evolutionary game theory or statistical physics. However, some topics will also be of interest to people already working in this area.
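
For concreteness, the plain C++ sketch below simulates the first of these games, the Prisoner's Dilemma, under replicator dynamics in a well-mixed population; the payoff values are one conventional choice rather than anything prescribed by the tutorial. As the classical result predicts, the fraction of cooperators decays towards zero, which is exactly the baseline that mechanisms such as network reciprocity are invoked to overcome.

    #include <iostream>

    // Replicator dynamics for the Prisoner's Dilemma with payoffs
    // R (mutual cooperation), S (sucker), T (temptation), P (mutual defection).
    int main() {
        const double R = 3.0, S = 0.0, T = 5.0, P = 1.0;
        double x = 0.5;                // fraction of cooperators
        const double dt = 0.01;

        for (int step = 0; step <= 2000; ++step) {
            double fC = R * x + S * (1.0 - x);       // expected payoff of a cooperator
            double fD = T * x + P * (1.0 - x);       // expected payoff of a defector
            double fbar = x * fC + (1.0 - x) * fD;   // population mean payoff
            x += dt * x * (fC - fbar);               // replicator equation: dx/dt = x (fC - fbar)
            if (step % 500 == 0) std::cout << "t=" << step * dt << "  cooperators=" << x << "\n";
        }
        return 0;
    }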

Intended audience: beginner to intermediate.
Duration: 2h


Optimal behaviour through criticality in agents and robot swarms (CANCELLED)

Thursday 7 Sept 10:30 – 12:30

Organizer: J. Michael Herrmann (Edinburgh, UK)

Description: The tutorial will present a dynamical systems approach to swarm intelligence with particular emphasis on applications in metaheuristic optimisation and swarm robotics. We will mainly emphasize recent convergent developments that call for a general theoretical framework.

Biological inspiration is much more than a way of popularising certain computational methods by linking them to well-known phenomena in nature. We follow the more general idea of Per Bak’s “How Nature Works” in order to provide a unique approach to swarm intelligence that is based on the principle of criticality. Similar approaches have been employed in various contexts and various optimisation algorithms, such as PSO, DE, and cuckoo search, over the last two decades.

Theoretical methods for ensuring convergence (or non-convergence) of an algorithm are often complex, although mathematically elementary. A slightly more advanced mathematical framework can reveal similarities between otherwise unrelated approaches and is thus able to tame the unbounded number of variants of existing algorithms.

As we can show clear advantages for concrete algorithms as well as practical applications, the advancement of the theoretical methods will appear worthwhile to potential participants.

Criticality has been studied in the context (of modelling) of neural dynamics (Beggs and Plenz, 2003) and in examples from a number of real systems (earthquakes, family trees, sand piles and rice piles, coupled physical systems, evolution, biological motor control). Having first been studied in natural systems, self-organised criticality (SOC) presents an enormous opportunity for fields like metaheuristic optimisation, organic computing, robot collectives, shared control, human-machine interaction, etc. We will proceed with the development of the theory specifically with respect to population-based optimisation algorithms, optimal exploration and experimental design, and move on to applications to the interaction among multiple adaptive systems, active learning, and the control of autonomous robots and prostheses. In the course of the presentation we will cover a number of techniques and experiences, such as optimisation algorithms for realistic problems, unbiased estimation of the parameters of power-law distributions, issues with large-scale simulations, data structures, discretisation and dimensionality, finite-size scaling, problems with self-averaging in critical systems, algorithms combining exploratory and goal-oriented learning, etc.
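
One of the techniques listed above, estimating the exponent of a power-law distribution, can be stated compactly. The plain C++ sketch below implements the standard continuous maximum-likelihood estimator, alpha = 1 + n / sum_i ln(x_i / x_min) (Clauset, Shalizi and Newman, 2009), which is generally preferred to least-squares fits of log-log histograms; it is included here as background rather than as material from the tutorial.

    #include <cmath>
    #include <stdexcept>
    #include <vector>

    // Continuous maximum-likelihood estimate of the power-law exponent alpha
    // for samples x >= x_min:  alpha = 1 + n / sum(ln(x_i / x_min)).
    double powerLawAlpha(const std::vector<double>& samples, double x_min) {
        double log_sum = 0.0;
        int n = 0;
        for (double x : samples) {
            if (x >= x_min) { log_sum += std::log(x / x_min); ++n; }
        }
        if (n == 0 || log_sum == 0.0) throw std::runtime_error("no usable samples above x_min");
        return 1.0 + n / log_sum;
    }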

We wish to point researchers and developers from various backgrounds to the advantages of working in marginally stable regimes, which provide optimal sensitivity and flexibility as well as predictability and performance at the same time. The general concepts will be explained and presented in various contexts. In this way we can facilitate transfer between areas, which is one of the benefits of a mathematical approach. We will provide the participants with working knowledge to advance the theory of metaheuristic optimisation that is directly useful for applications.

Intended audience: intermediate.
Duration: 2h
