Genetic algorithm

Genetic algorithms are algorithms used to find approximate solutions to difficult search and optimisation problems. They are a class of evolutionary algorithms that use techniques inspired by natural evolution, such as selection, crossover and mutation, to evolve solutions.

Introduction

A genetic algorithm is a search technique often used in computer science to find complex, non-obvious solutions to algorithmic optimisation and search problems. Genetic algorithms are categorised as global search heuristics [1] and have a wide variety of applications, particularly in generating useful artificial intelligence agents in computer games.

For decades, games and the field of game theory have provided competitive, dynamic and often unpredictable environments that make ideal test beds for computational intelligence theories, architectures, and algorithms. Natural evolution can be modelled as a game in which the rewards for an organism that plays a good game of life are the propagation of its genetic material to its successors and its continued survival [2]. In natural evolution, the performance of an individual is defined with respect to its competitors and collaborators, as well as to the environment. More simply, a genetic algorithm is a simulation in which a population of abstract representations (called chromosomes, or the genotype of the genome, after their biological counterparts) of candidate solutions (called individuals, creatures, or phenotypes) to an optimisation problem is evolved towards better solutions.

Candidates are evaluated and crossbred in an attempt to generate high-quality solutions which would be non-obvious and extremely time-consuming for a human programmer to produce. An evolutionary phase is initialised with a population of randomly generated entities (or human-specified instances of high quality). The process is subdivided into generations. In each generation, the fitness of every individual in the population is evaluated, and multiple individuals are stochastically selected from the current population (based on their fitness) and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. The algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population. If the algorithm has terminated because it reached the maximum number of generations, a satisfactory solution will not necessarily have been obtained.
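To make this generational loop concrete, here is a minimal sketch in Python (an illustration, not part of the original article). It evolves bit-string chromosomes towards an assumed toy fitness target, counting the number of ones, using tournament selection, single-point crossover and random bit-flip mutation; the parameter values are arbitrary.

import random

CHROMOSOME_LENGTH = 20
POPULATION_SIZE = 30
MUTATION_RATE = 0.01
MAX_GENERATIONS = 100

def random_chromosome():
    # Assumed toy encoding: a fixed-length list of bits.
    return [random.randint(0, 1) for _ in range(CHROMOSOME_LENGTH)]

def fitness(chromosome):
    # Assumed toy evaluation: the number of 1-bits in the chromosome.
    return sum(chromosome)

def select(population):
    # Tournament selection: the fitter of two randomly chosen individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent1, parent2):
    # Single-point crossover.
    point = random.randint(1, CHROMOSOME_LENGTH - 1)
    return parent1[:point] + parent2[point:]

def mutate(chromosome):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in chromosome]

population = [random_chromosome() for _ in range(POPULATION_SIZE)]
for generation in range(MAX_GENERATIONS):
    # Stop early once a satisfactory fitness level has been reached.
    if max(fitness(c) for c in population) == CHROMOSOME_LENGTH:
        break
    # Selection, recombination and mutation produce the next generation.
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POPULATION_SIZE)]

best = max(population, key=fitness)
print("Best fitness found:", fitness(best), "out of", CHROMOSOME_LENGTH)

In a real application, the random bit strings and the counting fitness function would be replaced by the problem's own genetic representation and evaluation function.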

Applications

Genetic algorithms have been successfully used in many fields of computer science, including but not limited to the optimisation of complex algorithms, the training of text classification systems, and the evolution of intelligent artificial agents in stochastic environments.

Board Games

Board games are a very relevant part of the area of genetic algorithms as applied to game theory problems. Much of the early work on computational intelligence and games was directed toward classic board games, such as tic-tac-toe [3], chess, and checkers [4]. Most board games can now be played by a computer at a higher level than the best humans, even with blind exhaustive search techniques. Go is a noted exception to this tendency and had, at the time of the surveys cited here, resisted machine attack: the best computer Go players played at the level of a good novice [5][6]. Go strategy is said to rely heavily on pattern recognition, not just the logical analysis used in chess and other more piece-independent games. Go's huge effective branching factor heavily restricts the look-ahead that can be used in a move-sequence search for high-quality moves.

Computer Games

A genetic algorithm can be used in computer games, for example to allow an enemy opponent to adapt and counter an effective but repetitive tactic used by a human player. This allows for a more realistic game experience: if a human player can find a sequence of steps which, repeated across different games, always leads to success, there is no challenge left. Conversely, if a learning technique such as a genetic algorithm allows a computer strategist to avoid repeating past mistakes, the game will have increased playability.

Genetic algorithms require the following components:

  • A method for representing candidate solutions to the challenge (e.g. a routing of soldiers in an attack in a strategy game)
  • A fitness or evaluation function to determine the quality of an instance (e.g. a measurement of the damage done to an opponent in such an attack).

The fitness function accepts a mutated instantiation of an entity and measures its quality; it is customised to the problem domain. In many cases, particularly those involving code optimisation, the fitness function may simply be a system timing function. Once a genetic representation and a fitness function are defined, a genetic algorithm instantiates initial candidates as described previously, and then improves them through repeated application of mutation, crossover, inversion and selection operators (defined according to the problem domain).
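As an illustration of how these components might look in practice, the sketch below (not from the article) encodes the earlier routing example as a permutation of waypoint indices. The waypoint coordinates and the route-length fitness function are assumed stand-ins for a real in-game damage measurement, and the operators shown are just one possible set.

import math
import random

# Illustrative encoding for the routing example: a chromosome is a permutation
# of waypoint indices. Coordinates and the route-length fitness are assumed.

WAYPOINTS = [(0, 0), (2, 5), (6, 1), (5, 7), (9, 3), (3, 9)]

def route_length(order):
    return sum(math.dist(WAYPOINTS[order[i]], WAYPOINTS[order[i + 1]])
               for i in range(len(order) - 1))

def fitness(order):
    # Shorter routes score higher; a real game might measure damage dealt instead.
    return -route_length(order)

def mutate(order):
    # Swap mutation: exchange two randomly chosen waypoints.
    a, b = random.sample(range(len(order)), 2)
    child = list(order)
    child[a], child[b] = child[b], child[a]
    return child

def invert(order):
    # Inversion: reverse a randomly chosen segment of the route.
    a, b = sorted(random.sample(range(len(order)), 2))
    return order[:a] + order[a:b + 1][::-1] + order[b + 1:]

def crossover(parent1, parent2):
    # Order crossover: keep a slice of one parent in place and fill the
    # remaining positions with the other parent's genes, in their order.
    a, b = sorted(random.sample(range(len(parent1)), 2))
    kept = parent1[a:b]
    rest = [gene for gene in parent2 if gene not in kept]
    return rest[:a] + kept + rest[a:]

def select(population):
    # Tournament selection between two random individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

# Example: compare a random route with a mutated and an inverted copy of it.
route = random.sample(range(len(WAYPOINTS)), len(WAYPOINTS))
print(round(route_length(route), 2),
      round(route_length(mutate(route)), 2),
      round(route_length(invert(route)), 2))

Because the chromosome is a permutation, the crossover and mutation operators are chosen so that every child is still a valid route visiting each waypoint exactly once.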

Whether an agent has a full or partial view of the game changes the required approach significantly. Although a computer obviously has access to the full game state, making all of it available to the decision-making process of an artificial player will make that player behave unrealistically. Human-centred games are limited by what can easily be manipulated given human mental capacity and dexterity. Video games, on the other hand, operate under no such constraints and typically have a vastly more complex (internal) state space than even the most complex human-centred games. This richer complexity encourages the development or evolution of more general-purpose AI agents than are necessary for playing board or card games at sufficiently simulated skill levels.

Currently, most computer-controlled players in games are implemented using manual scripting, which is tedious and time-consuming to develop and test. Computational intelligence techniques offer an interesting alternative to scripted approaches: the agent's behaviour can be controlled by, for example, an evolved neural network rather than being programmed directly. Since such an approach may produce unique, novel behaviour that is impossible to achieve with manual implementation, the resulting quality of the product can be far higher, and because the results can differ for every player, the replayability of the game is extended.

Furthermore, evolved AI players tend to be excellent at exploiting loopholes in a game, so genetic algorithms can help identify and eliminate these elements, which players and developers see as problems. Over the life of a game, human players will also discover such loopholes, giving them an unfair advantage; if an AI player can take advantage of them too, the playing field is levelled. Game characters that effectively exhibit such “human” characteristics improve the game, extend its lifetime, and increase total revenue. These techniques must be developed carefully, however, especially where agent difficulty is scaled to compete fairly with the player: it is an established tactic for a human player to intentionally play badly early in the game in order to ease the later difficulty, which in the end decreases the quality and longevity of the game, and players tend to blame the developer for allowing it to be possible.
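As a rough sketch of the evolved-neural-network idea mentioned above (again an illustration, not the article's own method), the chromosome can simply be the flat weight vector of a tiny feed-forward controller that maps game observations to an action. The network shape, the observations and the placeholder fitness function are all assumptions; a real game would score each agent by simulated play.

import math
import random

# Hypothetical neuroevolution sketch: the chromosome is the flat weight vector
# of a small feed-forward controller. Shapes, observations and fitness are
# assumptions made for this illustration only.

N_INPUTS, N_HIDDEN, N_OUTPUTS = 4, 6, 2
N_WEIGHTS = N_INPUTS * N_HIDDEN + N_HIDDEN * N_OUTPUTS

def random_weights():
    return [random.uniform(-1, 1) for _ in range(N_WEIGHTS)]

def act(weights, observation):
    # Forward pass of a one-hidden-layer network (no biases, for brevity).
    w1 = weights[:N_INPUTS * N_HIDDEN]
    w2 = weights[N_INPUTS * N_HIDDEN:]
    hidden = [math.tanh(sum(observation[i] * w1[i * N_HIDDEN + h]
                            for i in range(N_INPUTS)))
              for h in range(N_HIDDEN)]
    outputs = [sum(hidden[h] * w2[h * N_OUTPUTS + o] for h in range(N_HIDDEN))
               for o in range(N_OUTPUTS)]
    return outputs.index(max(outputs))  # index of the chosen action

def fitness(weights):
    # Placeholder only: a real game would run simulated matches and score the
    # agent's performance. Here the agent is simply rewarded for picking action 1.
    observations = [[0.1, 0.5, -0.3, 0.9], [0.7, -0.2, 0.4, 0.0]]
    return sum(act(weights, obs) == 1 for obs in observations)

def mutate(weights, rate=0.1, scale=0.3):
    # Gaussian perturbation of a random fraction of the weights.
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in weights]

# A crude mutation-only loop: keep the best controller and mutate copies of it.
best = random_weights()
for generation in range(50):
    candidates = [best] + [mutate(best) for _ in range(9)]
    best = max(candidates, key=fitness)

print("Best fitness found:", fitness(best))

Adding crossover between the best controllers would turn this simple mutation-only loop into a full genetic algorithm, and the placeholder fitness function would be replaced by the outcome of simulated play.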

References

  1. ^  Herrera, F.; Lozano, M.; and Verdegay, J. L. 1998. Tackling real-coded genetic algorithms: Operators and tools for behavioural analysis. Artif. Intell. Rev. 12(4):265–319.
  2. ^  Lucas, S. M., and Kendall, G. 2006. Evolutionary computation and games. IEEE Comput. Intell. Mag., February:10–18.
  3. ^  Yao, X. Recent new development in evolutionary programming.
  4. ^  Samuel, A. L. 1995. Some studies in machine learning using the game of checkers. 71–105.
  5. ^  Muller, M. 2002. Computer go. Artif. Intell. 134(1-2):145–179.
  6. ^  Bouzy, B., and Cazenave, T. 2001. Computer go: an AI oriented survey. Artif. Intell. 132:39–103.