Playing with Genetic Algorithms

Back when I was studying AI at university, Genetic Algorithms were always something I found quite exciting – a way to let computers figure out how to do things all by themselves.

I decided it was time to toy around with some of these ideas again. The first task was to settle on the “building blocks” of our artificial creatures. I had to construct them in a way that was flexible enough for them to evolve novel strategies, yet tolerant of mutation and capable of merging.

For example, most computer software is not tolerant of mutation. If you change a random character in a typical computer program, chances are it will stop working completely. Nor is software tolerant of merging: you can’t take two computer programs, stick them together, and expect the result to do anything sensible.

So I opted for an approach based on the way biological brains work. Essentially, you have a bunch of “neurons”, each of which is connected to every other neuron by two connections, one in each direction. Every neuron has an “activation”, represented by a number between 0 and 1. For some neurons this activation is set externally; these are the inputs to our little brain. Similarly, some neurons are treated as “outputs”, with their level of activation representing the output from the brain.
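
To make that a little more concrete, here is a rough Python sketch of one way such a brain could be represented (the names and sizes here are just illustrative):

```python
import numpy as np

N_NEURONS = 4        # total number of neurons, inputs and outputs included
INPUT_IDX = [0, 1]   # neurons whose activation is set externally
OUTPUT_IDX = [3]     # neurons whose activation we read off as the result

# Two directed connections between every pair of neurons:
# weights[i, j] is the connection from neuron i to neuron j
# (what the weights do is explained below).
weights = np.random.uniform(-1.0, 1.0, (N_NEURONS, N_NEURONS))

# Each neuron's current activation, a number between 0 and 1.
activation = np.zeros(N_NEURONS)
```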

The brain proceeds in a series of discrete time steps. At each step, every neuron “transmits” its level of activation along all of its outgoing connections as an “impulse”; the stronger the activation, the stronger the impulse. Each neuron then determines its level of activation for the next time step by adding up all of the incoming impulses from other neurons: the higher the total, the stronger the activation (the sum is squashed into the 0–1 range by a simple sigmoid function).

There is one last wrinkle. Not all connections are equal: each has a “weight” associated with it, represented by a positive or negative number. Any impulse passing through the connection is multiplied by this number. So, for example, an impulse from a neuron with an activation of 0.5, travelling through a connection with a weight of 0.5, would only be 0.25 by the time it reached the destination neuron.
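
In code, a single time step might look something like this (again, just a sketch of the idea):

```python
import numpy as np

def sigmoid(x):
    """Squash a sum of impulses into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def step(weights, activation, inputs, input_idx=(0, 1)):
    """Advance the brain by one discrete time step."""
    # The impulse arriving at neuron j is the sum over all neurons i
    # of activation[i] * weights[i, j].
    incoming = activation @ weights
    new_activation = sigmoid(incoming)
    # Input neurons stay clamped to their externally supplied values.
    new_activation[list(input_idx)] = inputs
    return new_activation
```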

It is by adjusting the weights of these connections that our genetic algorithm makes these creatures (hopefully) do something intelligent.
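One simple scheme for the genetic operators (purely illustrative; there are plenty of alternatives) is to nudge a few weights at random for mutation, and to build a merged child by inheriting each weight from a randomly chosen parent:

```python
import numpy as np

def mutate(weights, rate=0.1, scale=0.5):
    """Randomly nudge a small fraction of the connection weights."""
    mask = np.random.random(weights.shape) < rate
    return weights + mask * np.random.normal(0.0, scale, weights.shape)

def merge(mother, father):
    """Build a child by inheriting each weight from one parent or the other."""
    from_mother = np.random.random(mother.shape) < 0.5
    return np.where(from_mother, mother, father)
```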

Next I wanted to test it on a simple problem, so I decided to try to evolve simple brains that could solve the boolean “exclusive-or” (XOR) function. This needs just two input neurons and a single output neuron: if the inputs are different (i.e. one is ‘1’ and the other is ‘0’) then the output should be ‘1’; otherwise it should be ‘0’.

To illustrate the learning process, I set up a population of 10 creatures, each consisting of 4 neurons (including the input and output neurons). Each brain runs through 4 iterations before we read the output.
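
Putting the pieces together, the whole experiment might look roughly like this: score each brain by how close its outputs come to the XOR truth table, keep the better half of the population, and refill it with mutated merges of the survivors (a sketch using the merge and mutate functions above; the selection details are a matter of taste):

```python
import numpy as np

N_NEURONS, N_STEPS, POP_SIZE = 4, 4, 10
XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def run(weights, inputs):
    """Run a brain for N_STEPS time steps and read the output neuron."""
    activation = np.zeros(N_NEURONS)
    for _ in range(N_STEPS):
        activation = 1.0 / (1.0 + np.exp(-(activation @ weights)))
        activation[[0, 1]] = inputs            # keep the inputs clamped
    return activation[-1]                      # the last neuron is the output

def fitness(weights):
    """Negative squared error against the XOR truth table (higher is better)."""
    return -sum((run(weights, inp) - want) ** 2 for inp, want in XOR_CASES)

def evolve(generations=200):
    pop = [np.random.uniform(-1, 1, (N_NEURONS, N_NEURONS))
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]
        children = []
        while len(survivors) + len(children) < POP_SIZE:
            mother = survivors[np.random.randint(len(survivors))]
            father = survivors[np.random.randint(len(survivors))]
            children.append(mutate(merge(mother, father)))
        pop = survivors + children
    return max(pop, key=fitness)
```

After evolve() returns, run(best, (0, 1)) should come out close to 1 and run(best, (0, 0)) close to 0.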

I then drew a graph of the output produced for all input values between 0 and 1. Recall that we only care about the inputs and outputs at the values 0 and 1, but by looking at the intermediate values we get an idea of the creature’s internal thought process.

This output can be represented as a two-dimensional image, where black indicates an output of 0 and white indicates an output of 1. The goal is therefore for the top-left and bottom-right corners to be black, while the other two corners are white. You can see an animation representing one such “run” here:


You will note that it starts out grey, but quickly finds a way to achieve the desired goal.
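
The image itself is easy to produce: sample the two inputs on a fine grid between 0 and 1 and map each output to a grey level. A sketch using matplotlib (and the run and evolve functions from the sketch above):

```python
import numpy as np
import matplotlib.pyplot as plt

def output_image(weights, resolution=64):
    """Sample the brain's output over the whole input square."""
    img = np.zeros((resolution, resolution))
    for row, b in enumerate(np.linspace(0.0, 1.0, resolution)):
        for col, a in enumerate(np.linspace(0.0, 1.0, resolution)):
            img[row, col] = run(weights, (a, b))   # run() from the sketch above
    return img

best = evolve()  # evolve() from the sketch above
plt.imshow(output_image(best), cmap="gray", vmin=0.0, vmax=1.0)
plt.show()
```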

That was my progress for this weekend; next weekend I will try to make these things do something far more interesting (I hope).

Update: More on Genetic Algorithms
