Tutorials

A special feature of IEEE CEC is the Tutorial programme, which enables all delegates to increase their knowledge of evolutionary computation by learning from leading experts in the field. Topics are offered at introductory, intermediate and advanced levels, ranging from the fundamentals to specialist subjects. Tutorials last two hours and will take place on 18 May 2009, the day before the main conference begins. Attendance at Tutorial sessions is included in the conference registration fee, and all tutorial slides will be available on CD as part of the conference registration pack.

Tutorial Title - Presenter(s):

  • Cartesian Genetic Programming - Julian Miller, University of York, UK
  • Co-evolutionary Learning of Game-playing Strategies - Xin Yao, CERCIA, University of Birmingham, UK
  • Evolutionary Computation: A Unified Approach - Kenneth De Jong, George Mason University
  • Evolutionary Computation in Finance and Economics - Edward Tsang, University of Essex, UK
  • Evolvable Hardware - Jim Tørresen, University of Oslo; Lukas Sekanina, Brno University of Technology
  • Genetic Programming Practice and Theory - Riccardo Poli, University of Essex, UK
  • Hyper-heuristics: An Emerging Paradigm in Evolutionary Computation - Edmund Burke, University of Nottingham, UK
  • Introduction to the use of EAs in Bioinformatics - L. Gwenn Volkert, Kent State University; Clare Bates Congdon, Colby College; Daniel Ashlock, University of Guelph
  • Learning to Play Games - Simon Lucas, University of Essex, UK
  • Particle Swarm Optimization: A Universal Optimizer? - Andries P. Engelbrecht, University of Pretoria, South Africa
  • Principled Approaches to Tuning EA Parameters - A.E. Eiben, Vrije Universiteit Amsterdam
  • Recent Challenges to Evolutionary Multi-Criterion Optimization (EMO) - Kalyanmoy Deb, Indian Institute of Technology Kanpur

Cartesian Genetic Programming

  • Julian Miller, University of York, UK

Cartesian Genetic Programming (CGP) is a form of genetic programming. It was developed by Julian Miller with Peter Thomson in 1997. In its classic form it uses a very simple, integer-based genetic representation of a program in the form of a directed graph. It employs a representation in which some genes are non-coding (i.e. 'junk'). Simple evolutionary algorithms can be used to exploit this redundancy through genetic drift. In a number of studies, CGP has been shown to be highly efficient compared with other GP techniques on a range of benchmark problems.

Since its original inception CGP has been enhanced in various ways by Julian Miller and James Walker to include automatically defined functions. The tutorial will cover the classic technique, advanced developments and applications to a variety of problem domains.
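To give a concrete feel for the classic representation described above, here is a minimal illustrative sketch (my own simplification, not the tutorial's code): an integer genotype listing, for each node, a function gene and two connection genes, plus a single output gene. Nodes never reached from the output are the non-coding 'junk' genes mentioned above. The two-input node functions and the tiny example problem are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of a classic CGP genotype (assumed: 2-input
# nodes, a fixed function set, a single output gene). Not the tutorial's code.

FUNCTIONS = [
    lambda a, b: a + b,
    lambda a, b: a - b,
    lambda a, b: a * b,
]

def evaluate(genotype, inputs, n_nodes):
    """Decode and evaluate a CGP genotype.

    genotype: n_nodes triples (function, src1, src2) followed by one output gene.
    Node inputs may reference program inputs or any earlier node, so the
    phenotype is a feed-forward graph; nodes never referenced on the path to
    the output are 'non-coding' and simply ignored.
    """
    values = list(inputs)                      # values[0..n_inputs-1] hold the inputs
    for i in range(n_nodes):
        f, a, b = genotype[3 * i: 3 * i + 3]
        values.append(FUNCTIONS[f](values[a], values[b]))
    return values[genotype[-1]]                # output gene points at one value

# Example: 2 inputs, 3 nodes; the second node computes (x0 + x1) * x1,
# the third node is non-coding (never referenced by the output gene).
geno = [0, 0, 1,   2, 2, 1,   1, 0, 1,   3]
print(evaluate(geno, [2.0, 5.0], n_nodes=3))   # (2 + 5) * 5 = 35.0
```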

Julian F. Miller, BSc (Lond), PhD (City), PGCLTHE (Bham). Dr Miller obtained his first degree in Physics from the University of London in 1980 and his doctorate in Nonlinear Mathematics from the City University in 1988. He worked at Napier University from 1989 to 1999 and in the Natural Computation research group in the School of Computer Science at the University of Birmingham from 1999 to 2003. He joined the Department of Electronics at the University of York in October 2003. He has chaired twelve international workshops and conferences in Evolutionary Computation, Genetic Programming (GP) and Evolvable Hardware. He is an associate editor of the Journal of Genetic Programming and Evolvable Machines and the IEEE Transactions on Evolutionary Computation, and is on the editorial boards of the journals Evolutionary Computation and International Journal of Unconventional Computing. His research interests include, but are not limited to, the enhancement of Cartesian Genetic Programming (CGP), applications of CGP, and models and applications of computational development. Dr Miller is a highly cited author with over 2000 citations and over 140 publications.

Co-evolutionary Learning of Game-playing Strategies

  • Xin Yao, CERCIA, University of Birmingham, UK

Co-evolutionary learning has been used to learn strategies for playing various games, from iterated prisoner's dilemma to chess and from trading games to checkers. This tutorial uses iterated prisoner's dilemma games as an example to illustrate how co-evolution can be used to learn game-playing strategies without any teacher information. A number of important research issues, which are not specific to any particular games, will be discussed, including:

  1. Representation of strategies and the genotype-phenotype mapping
  2. The role of diversity (including genetic diversity and behavioural diversity) in co-evolutionary learning
  3. Generalisation ability of evolved strategies and an ensemble approach to improve generalisation ability
  4. A rigorous quantitative framework for measuring generalisation of co-evolutionary learning
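To make the first item above concrete, the following is a minimal sketch of one possible genotype-phenotype mapping for iterated prisoner's dilemma strategies and of co-evolutionary fitness evaluation without a teacher. The memory-one, five-bit encoding and the payoff values are assumptions chosen for illustration and are not necessarily those used in the referenced work.

```python
# Illustrative sketch only: a memory-one IPD strategy encoded as five bits
# (first move plus the response to each of the four previous-round outcomes).
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def decode(bits):
    """Genotype (5 bits) -> phenotype: first move plus a lookup table."""
    moves = ['C', 'D']
    first = moves[bits[0]]
    table = {hist: moves[b]
             for hist, b in zip([('C', 'C'), ('C', 'D'), ('D', 'C'), ('D', 'D')], bits[1:])}
    return first, table

def play(bits_a, bits_b, rounds=50):
    """Play two encoded strategies against each other; return both scores."""
    (move_a, tab_a), (move_b, tab_b) = decode(bits_a), decode(bits_b)
    score_a = score_b = 0
    for _ in range(rounds):
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        move_a, move_b = tab_a[(move_a, move_b)], tab_b[(move_b, move_a)]
    return score_a, score_b

# Co-evolutionary fitness: each strategy is scored only against the current
# population, i.e. there is no external teacher providing target behaviour.
pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(10)]
fitness = [sum(play(s, t)[0] for t in pop if t is not s) for s in pop]
print(fitness)
```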

Selected References (downloadable from my web site):

  • X. Yao and P. Darwen, An experimental study of N-person iterated prisoner's dilemma games, Informatica, 18(4):435--450, 1994.
  • P. J. Darwen and X. Yao, Speciation as automatic categorical modularization, IEEE Transactions on Evolutionary Computation, 1(2):101-108, 1997.
  • X. Yao and P. J. Darwen, How Important Is Your Reputation in a Multi-Agent Environment, Proc. of the 1999 IEEE Conference on Systems, Man, and Cybernetics, IEEE Press, Piscataway, NJ, USA, pp.II-575 - II-580, October 1999.
  • S. Y. Chong and X. Yao, Behavioral Diversity, Choices, and Noise in the Iterated Prisoner's Dilemma, IEEE Transactions on Evolutionary Computation, 9(6):540-551, December 2005.
  • S. Y. Chong and X. Yao, Multiple Choices and Reputation in Multi-Agent Interactions, IEEE Transactions on Evolutionary Computation, 11(6):689-711, December 2007.
  • S. Y. Chong, P. Tino and X. Yao, Measuring Generalization Performance in Co-evolutionary Learning, IEEE Transactions on Evolutionary Computation, 12(4):479-505, August 2008.

Evolutionary Computation: A Unified Approach

  • Kenneth De Jong, Department of Computer Science, George Mason University

The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. This poses an interesting dilemma for many practitioners: with such a wide variety of algorithms and approaches, it is often hard to see the relationships between them, assess their strengths and weaknesses, and make good choices for new application areas.

This tutorial is intended to give an overview of a general EC framework that can help compare and contrast approaches, encourages crossbreeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.

Finally, the framework is used to identify some important open issues that need further research.

Evolutionary Computation in Finance and Economics

  • Edward Tsang, University of Essex, UK

Computing has changed many aspects of our daily life. It has certainly shaken the foundation of finance and economics research and changed the research frontier. Advances in both hardware and software allow us to study finance and economics in ways that were previously impossible. Combined expertise in computing, finance and economics allows us to make impacts that neither computer scientists nor financial experts or economists can make on their own. For example, evolutionary computation has helped us to discover forecasting rules, bargaining strategies and economic models. The use of artificial markets enables us to understand better some of the fundamental concepts in economics, such as rationality and the efficient market hypothesis. Finance and economics are not the only disciplines that benefit from this interdisciplinary research. Computer science benefits too, as applications drive research. Search techniques have been developed to focus evolutionary search on solutions that are important to users, and the need to find scarce opportunities has motivated new computational methods in genetic programming. In this tutorial, I shall give an overview of research in these techniques as well as their applications.

Edward Tsang is Professor of Computer Science at the University of Essex. He specializes in business applications of artificial intelligence, and his research interests include artificial intelligence applications, computational finance, constraint satisfaction, evolutionary computation, and heuristic search. He was co-founder and Deputy Director (2003-2008) of the Centre for Computational Finance and Economic Agents (CCFEA) at the University of Essex, and is a founding member and Deputy Director (2008-) of the Centre for Computational Intelligence at the University of Essex, which specializes in applications of computational intelligence techniques. Edward is the author of Foundations of Constraint Satisfaction, the first book to define the scope of the field, and founded the Computational Finance and Economics Technical Committee in IEEE's Computational Intelligence Society in 2004. He has been a consultant to GEC Marconi, British Telecom, the Commonwealth Secretariat and other organizations.

Evolvable Hardware

  • Jim Tørresen, University of Oslo
  • Lukas Sekanina, Brno University of Technology

Traditionally, hardware has been static at run-time. However, with the introduction of reconfigurable technology and devices, it is now possible to realize dynamic hardware by using automatic design schemes. One method for automatic design is evolvable hardware. The recent years of development in this field can be characterized as a continuous search for promising problems from the point of view of evolutionary design.

In the first part of the tutorial, the fundamental concepts of evolutionary circuit design and evolvable hardware will be introduced. Examples of evolved innovative designs will be presented in the domains of small combinational circuits, middle-size circuits (such as image filters or arithmetic circuits) and large circuits (benchmarks for testability analysis methods), thus covering circuit complexity from a few gates to millions of gates. The second part will give an overview of the field targeting run-time evolvable systems. This includes an introduction to reconfigurable technology, followed by a description of a number of the adaptable architectures that have been proposed. Challenges related to evolving systems in operation will also be addressed. A special focus will be given to architectures designed for real-world applications.

Jim Torresen received his M.Sc. and Dr.ing. (Ph.D) degrees in computer architecture and design from the Norwegian University of Science and Technology, University of Trondheim in 1991 and 1996, respectively. He has been employed as a senior hardware designer at NERA Telecommunications (1996-1998) and at Navia Aviation (1998-1999). Since 1999, he has been a professor at the Department of Informatics at the University of Oslo (associate professor 1999-2005). Jim Torresen has been a visiting researcher at Kyoto University, Japan for one year (1993-1994) and four months at Electrotechnical laboratory, Tsukuba, Japan (1997 and 2000).

His research interests currently include reconfigurable hardware, evolvable hardware and system-on-chip design, and applying these to complex real-world applications. Several novel methods have been proposed. He has published a number of scientific papers in international journals, books and conference proceedings. He is on the program committees of more than ten international conferences and is a regular reviewer for a number of international journals (mainly published by IEEE and IET). He also acts as an evaluator for proposals in EU FP7.

For a list and collection of publications see http://www.ifi.uio.no/~jimtoer/papers.html.

Lukas Sekanina (MSc 1999, PhD 2002) received all his degrees from Brno University of Technology, Czech Republic. He was awarded a Fulbright scholarship and worked on evolutionary circuit design with the NASA Jet Propulsion Laboratory in Pasadena in 2004. He was a visiting lecturer at Pennsylvania State University and a visiting researcher with the Department of Informatics, University of Oslo in 2001. Lukas Sekanina is the author of the monograph Evolvable Components (Springer Verlag, 2004). He has co-authored more than 60 papers, mainly on evolvable hardware. He has served as a program committee member of several conferences and has received several awards for his work (including the Silver Medal at Humies 2008). Currently he is an associate professor and deputy head of the Department of Computer Systems at the Faculty of Information Technology, Brno University of Technology. He is a member of the IEEE. For more information, see http://www.fit.vutbr.cz/~sekanina.

Genetic Programming Practice and Theory

  • Riccardo Poli, Department of Computing and Electronic Systems, University of Essex, UK

Genetic programming (GP) is an evolutionary technique for getting computers to automatically solve problems without having to tell them explicitly how to do it (Koza, 1992). Since its inception genetic programming has been used to solve many practical problems including producing a number of human competitive results and even patentable new inventions (Poli et al, 2008).

This tutorial includes two parts. In the first part I will introduce the basics of GP practice, briefly explaining GP representations, operators and the search algorithm, and showing examples of real runs. This will mainly be based on (Koza, 1992) and (Poli et al, 2008). In the second part of the tutorial, I will concentrate on explaining how and why GP works. This will be done by first characterising GP's search space (the space of all possible programs) and then by explaining the way in which GP explores such a space. This will be based mainly on (Langdon and Poli, 2002) and (Poli et al, 2008).

Despite its technical contents, the tutorial will require limited mathematical background.
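For readers new to GP, the following is a minimal illustrative sketch (my own toy example, not code from the cited books) of the usual representation: programs as expression trees, randomly initialised and scored against a fitness function, here a tiny symbolic-regression target.

```python
# Illustrative sketch: a GP individual as a nested tuple (prefix expression tree).
import operator
import random

FUNCS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(tree, x):
    """Recursively evaluate an expression tree at input value x."""
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return FUNCS[op](evaluate(left, x), evaluate(right, x))

def random_tree(depth=2):
    """Grow a random tree: the kind of initialisation a GP run starts from."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.uniform(-1, 1)])
    return (random.choice(list(FUNCS)), random_tree(depth - 1), random_tree(depth - 1))

# Fitness for symbolic regression of f(x) = x*x + 1 on a few sample points
# (higher, i.e. closer to zero, is better).
target = lambda x: x * x + 1
def fitness(tree):
    return -sum(abs(evaluate(tree, x) - target(x)) for x in range(-5, 6))

pop = [random_tree(3) for _ in range(20)]
print(max(fitness(t) for t in pop))
```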

  • J.R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, 1992.
  • W.B. Langdon and R. Poli, Foundations of Genetic Programming, Springer, 2002.
  • R. Poli, W.B. Langdon and N.F. McPhee, A Field Guide to Genetic Programming, Lulu.com, 2008 (freely available from the Internet).

Hyper-heuristics: An Emerging Paradigm in Evolutionary Computation

  • Edmund Burke, School of Computer Science, University of Nottingham, UK

This tutorial will introduce and discuss an emerging methodology in search/optimisation and decision support systems: hyper-heuristics. The term can be thought of as describing "heuristics to choose heuristics". It is concerned with search methods which explore a search space of heuristics (rather than a search space of potential solutions to a problem). The approach is (partially) motivated by the aim of investigating the automation of the heuristic design process. The hope is that hyper-heuristics will lead to more general systems that are able to operate automatically over a wider range of problem domains than is possible today. The goal is to develop methodologies which can adapt to different environments without having to manually customise the search, or its parameters, for each particular problem domain. This can be seen as one of the drawbacks of many current metaheuristic and evolutionary implementations, which tend to have to be customised for a particular class of problems or even for specific problem instances. Of course, evolutionary methods and other metaheuristics can be developed and employed as hyper-heuristics. Indeed, recently published work has explored a range of metaheuristic and evolutionary algorithms within the context of hyper-heuristics, and this tutorial will present and discuss these (and other) approaches.
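As a small illustration of searching a space of heuristics rather than a space of solutions, here is a minimal sketch of a simple selection hyper-heuristic on a toy bit-string problem. The three low-level heuristics, the credit scheme and the acceptance rule are simplified assumptions for illustration, not the approaches surveyed in the tutorial.

```python
# Illustrative sketch of a selection hyper-heuristic on a toy onemax problem:
# the search operates over *which low-level heuristic to apply next*.
import random

def flip_one(s):
    """Low-level heuristic: flip a single random bit."""
    s = s[:]
    s[random.randrange(len(s))] ^= 1
    return s

def flip_two(s):
    """Low-level heuristic: flip two random bits."""
    return flip_one(flip_one(s))

def shuffle_block(s):
    """Low-level heuristic: shuffle a small block of bits."""
    s = s[:]
    i = random.randrange(len(s) - 4)
    block = s[i:i + 4]
    random.shuffle(block)
    s[i:i + 4] = block
    return s

HEURISTICS = [flip_one, flip_two, shuffle_block]
fitness = sum                                   # onemax: count the 1-bits

solution = [random.randint(0, 1) for _ in range(40)]
credit = [1.0] * len(HEURISTICS)                # accumulated credit per heuristic
for _ in range(2000):
    # Choose a low-level heuristic in proportion to the improvement it has produced.
    h = random.choices(range(len(HEURISTICS)), weights=credit)[0]
    candidate = HEURISTICS[h](solution)
    gain = fitness(candidate) - fitness(solution)
    if gain >= 0:                               # accept non-worsening moves
        credit[h] += gain
        solution = candidate

print(fitness(solution), [round(c, 1) for c in credit])
```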

Although the term hyper-heuristic is relatively new, the basic idea has actually been around for over 40 years and this tutorial will give a brief history of the area, before discussing work carried out to date and then focussing on some observations and promising research directions.

Introduction to the use of EAs in Bioinformatics

  • L. Gwenn Volkert, Kent State University
  • Clare Bates Congdon, Colby College
  • Daniel Ashlock, University of Guelph

This tutorial is aimed at researchers interested in learning about the field of bioinformatics and some of the biological background needed to undertake EA-based work in this area. It presumes no particular background in biology and is intended to equip newcomers with the background needed to participate in the bioinformatics-related sessions and presentations at CEC 2009, and to help new researchers develop research projects in this growing area. Topics covered will include an overview of biological basics, including the "central dogma" of biology, gene and protein structure, biotechnologies and bio-databases, followed by an overview of several problems in bioinformatics that are particularly well suited to evolutionary algorithm based approaches, including a look at several hybrid approaches.

L. Gwenn Volkert received a PhD in Computer Science from Wayne State University in 2001. She is an Associate Professor of Computer Science at Kent State University where she is also a faculty member in the School of Biomedical Science. Dr. Volkert is a senior member of the IEEE and is currently the chair of the IEEE CIS Bioinformatics and Bioengineering Task Force.

Clare Bates Congdon received a PhD in Computer Science and Engineering from The University of Michigan in 1995. She is a faculty member of the Computer Science Department at the University of Southern Maine where she also serves as Director of Bioinformatics and Intelligent Systems. She is funded by the NIH INBRE program for her project "Machine Learning for Phylogenetics and Genomics". Dr. Congdon is a senior member of the IEEE and a co-chair of the CEC 2009 Special Session on Bioinformatics.

Daniel Ashlock received a PhD in Mathematics from Caltech in 1990. He is now a Professor of Mathematics and Statistics at the University of Guelph where he holds a Chair in Bioinformatics. He is currently funded by the NSF to work on the sequencing of the maize genome and on the analysis and display of biological networks. He also is funded by the NIH to work on recombination in retroviruses. Dr. Ashlock is a senior member of the IEEE and an associate editor of the IEEE Transactions on Evolutionary Computation.

Learning to Play Games

  • Simon Lucas, University of Essex, UK

This tutorial provides a practical introduction to game learning with function approximation architectures. The tutorial will cover the two main approaches to learning in games: evolution (including co-evolution), and temporal difference learning (TDL).

It will be shown that the choice of function approximation architecture has a critical impact on what is learned. In addition to standard MLPs, attention is also given to N-Tuple systems and interpolated table-based approximators, as these have recently shown great potential to learn quickly and effectively. Examples will be used to illustrate how the function approximator must be appropriately matched with the learning algorithm in order to get best results.

Each method will be demonstrated with reference to some simple fragments of software, illustrating how the learning algorithm is connected with the game and with the function approximation architecture. Example games will include GridWorld, Othello, and Simulated Car Racing.

Recent results on the information rate limits attainable for these learning algorithms (in terms of bits of information gained per game played) will also be discussed.
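For orientation, the following is a minimal sketch of temporal difference learning with a one-hot linear function approximator on a tiny random-walk GridWorld. The problem, parameters and update rule shown are a standard textbook TD(0) example assumed here for illustration, not the tutorial's software.

```python
# Illustrative sketch: TD(0) value learning with a linear (one-hot) function
# approximator on a small random-walk GridWorld.
import random

N_STATES, ALPHA, GAMMA = 7, 0.1, 1.0    # states 0 and 6 are terminal
weights = [0.0] * N_STATES              # linear approximator over one-hot features

def value(state):
    return weights[state]               # with one-hot features this is a table lookup

for _ in range(5000):                   # episodes of an unbiased random walk
    s = N_STATES // 2
    while 0 < s < N_STATES - 1:
        s_next = s + random.choice([-1, 1])
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        terminal = not (0 < s_next < N_STATES - 1)
        target = reward + (0.0 if terminal else GAMMA * value(s_next))
        # TD(0): move the prediction for s toward the bootstrapped target
        weights[s] += ALPHA * (target - value(s))
        s = s_next

# True values for the non-terminal states are 1/6, 2/6, ..., 5/6
print([round(v, 2) for v in weights[1:-1]])
```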

Dr. Simon M. Lucas (SMIEEE) is a reader in computer science at the University of Essex (UK). His main research interests are evolutionary computation, games, and pattern recognition, and he has published widely in these fields with over 130 peer-reviewed papers, mostly in leading international conferences and journals. He was chair of IAPR Technical Committee 5 on Benchmarking and Software (2002-2006) and is the inventor of the scanning n-tuple classifier, a fast and accurate OCR method. He was appointed inaugural chair of the IEEE CIS Games Technical Committee in July 2006, has been competitions chair for many international conferences, and co-chaired the first IEEE Symposium on Computational Intelligence and Games in 2005. He was program chair for IEEE CEC 2006 and program co-chair for IEEE CIG 2007 and PPSN 2008. He is an associate editor of the IEEE Transactions on Evolutionary Computation and the Journal of Memetic Computing. He was an invited keynote speaker at IEEE CEC 2007, and an invited tutorial speaker at IEEE WCCI 2008, PPSN 2008, and IEEE CIG 2008. Dr. Lucas was recently appointed as the founding Editor-in-Chief of the IEEE Transactions on Computational Intelligence and AI in Games.

Particle Swarm Optimization: A Universal Optimizer?

  • Andries P. Engelbrecht, University of Pretoria, South Africa

The main objective of this tutorial will be to answer the question of whether particle swarm optimization (PSO) can be considered a universal optimizer. In the context of this tutorial, this means that PSO can be applied to a wide range of optimization problem types as well as search domain types. The tutorial will start with an overview of the original, basic PSO, its origins, and some of the first adaptations of the basic PSO to improve its performance. This will be followed by a short discussion of heuristics for selecting proper values for the control parameters. The remainder and bulk of the tutorial will cover a classification of different problem types and will show how PSO can be applied to solve problems of these types. This part of the tutorial will be organized in the following sections, one for each problem type:

  • Continuous-valued versus discrete-valued domains
  • Unimodal versus multi-modal landscapes
  • Multi-solution problems requiring niching capabilities
  • Constrained versus unconstrained problems, also covering boundary constraints
  • Multi-objective optimization
  • Dynamic environments

It will be shown that these problem types can be solved with minor adaptations of the basic PSO, without violating its foundational principles. For each of these problem types a small subset of the most successful algorithms will be discussed.
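For reference, here is a minimal sketch of the basic global-best PSO velocity and position update on a continuous-valued test function. The parameter values are common textbook choices and the sphere function is an assumption made for illustration only.

```python
# Illustrative sketch of the basic (global-best) PSO on the sphere function.
import random

DIM, SWARM, W, C1, C2 = 5, 20, 0.729, 1.494, 1.494   # common parameter choices

def sphere(x):
    """Continuous, unimodal test function: sum of squares."""
    return sum(v * v for v in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                 # personal best positions
gbest = min(pbest, key=sphere)[:]           # global best position

for _ in range(200):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # velocity update: inertia + cognitive + social components
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]

print(round(sphere(gbest), 6))
```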

Principled Approaches to Tuning EA Parameters

  • A.E. Eiben, Vrije Universiteit Amsterdam

Finding appropriate parameter values for evolutionary algorithms (EAs) is one of the persisting grand challenges of the evolutionary computing (EC) field. On the one hand, all EC researchers and practitioners acknowledge that good parameter values are essential for good EA performance. On the other hand, even after 30 years of EC research, very little is known about the effect of EA parameters on EA performance. Users mostly rely on conventions (mutation rate should be low), ad hoc choices (why not use uniform crossover?), and experimental comparisons on a limited scale (testing combinations of three different crossover rates and three different mutation rates). Hence, there is a striking gap between the widely acknowledged importance of good parameter values and the widely exhibited ignorance concerning principled approaches to tuning EA parameters.

The aims of this tutorial are threefold: creating awareness, providing guidelines, and presenting a vision for future development. As for awareness, we will discuss the matter of EA parameters, categorize different ways of setting them, and discuss the most important related issues, including performance measures and test functions. In the guidelines section we review existing algorithmic approaches to parameter tuning: we will discuss sweep methods, search methods, and combined methods, positioning them on a feature map, and present (comparative) results on their usefulness. In the last part, we take a future-oriented attitude and identify research areas with high relevance and potential impact.
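As a small illustration of the simplest 'sweep' style of tuner mentioned above, here is a sketch that averages repeated runs of a toy EA over a handful of candidate mutation rates. The EA, problem and settings are assumptions of my own, not one of the tuners reviewed in the tutorial.

```python
# Illustrative sketch: a simple parameter sweep, averaging repeated runs of a
# minimal (1+1)-style EA on onemax for each candidate mutation rate.
import random

def run_ea(mutation_rate, length=50, evals=2000):
    """One run of a minimal mutation-only EA; returns the best fitness found."""
    parent = [random.randint(0, 1) for _ in range(length)]
    for _ in range(evals):
        child = [bit ^ (random.random() < mutation_rate) for bit in parent]
        if sum(child) >= sum(parent):
            parent = child
    return sum(parent)

# Sweep a handful of candidate rates; repeat runs because single runs are noisy.
for rate in [0.002, 0.02, 0.2]:
    mean = sum(run_ea(rate) for _ in range(10)) / 10
    print(f"mutation rate {rate}: mean best fitness {mean:.1f}")
```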

Recent Challenges to Evolutionary Multi-Criterion Optimization (EMO)

  • Kalyanmoy Deb, Indian Institute of Technology Kanpur, India

Many real-world search and optimization problems involve multiple objectives which conflict with each other. These problems give rise to a set of Pareto-optimal solutions which must be found, after which a decision-making task must be used to choose a particular preferred solution. Since 1993, evolutionary algorithms (EAs) have amply demonstrated their ability to find a well-distributed set of near Pareto-optimal solutions in many problems. In this tutorial, we shall discuss a number of such evolutionary multi-objective optimization (EMO) techniques, assess the current state-of-the-art techniques, and highlight a number of recent challenges which demand more attention. Some of these challenges include:

  1. Inclusion of multi-criterion decision-making aids within the EMO framework, so as to achieve both the optimization and decision-making tasks in finding a single preferred solution.
  2. Handling a large number of objectives (such as 10 or more) which often arise in real-world scenarios.
  3. Handling practical complexities using EMO, such as uncertainties in decision variables and parameters, reliability issues, computationally demanding solution evaluation, non-linear constraints etc.
  4. Handling dynamic multi-objective optimization problems in which objectives and constraints change with the progress of an EMO simulation
  5. Use of EMO principles to other problem solving tasks
  6. Use of EMO methodology for knowledge extraction in real-world problems

This tutorial is aimed at both novices and current users of EMO. Those without any knowledge of EMO will gain an adequate understanding of the procedures and their importance in computing and problem-solving tasks. Those who have been practising EMO will gain ideas and materials for future research, learn about state-of-the-art results and techniques, and be able to make a comparative evaluation of their own research.
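As background to the notion of Pareto-optimality used above, here is a minimal sketch of a dominance check and a non-dominated filter, assuming minimisation of all objectives; the two-objective example is illustrative only.

```python
# Illustrative sketch: Pareto dominance and non-dominated filtering
# (minimisation of all objectives assumed).

def dominates(a, b):
    """True if objective vector a dominates b: no worse in all, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two conflicting objectives: f1 = x^2 and f2 = (x - 2)^2, sampled over a range.
xs = [i / 4 for i in range(-4, 13)]
front = non_dominated([(x * x, (x - 2) ** 2) for x in xs])
print(sorted(front))
```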
