Explorations with Coding
Introduction:
Recently, I’ve been exploring a lot of biological - and specifically ecological - concepts via coding. As much of ecology is based on the behavior of large systems, it’s a field that naturally lends itself to modeling by computers. Be it evolutionary development, neurology, or just basic ecology, it’s both relatively easy - and rather interesting - to model various systems via something as simple as vanilla JavaScript. Below, I’ll talk about a few of the projects I’ve been playing with recently.
Neural Network:
According to Wikipedia, a neural network is “a computational approach which is based on a large collection of neural units loosely modeling the way a biological brain solves problems with large clusters of biological neurons connected by axons” (Wikipedia: Artificial neural network).
What?
Well, put simply, it’s a virtual brain created in a computer. The neurons in the ‘brain’ are, just as in a biological brain, separate simple subunits, and they’re linked by connections of differing ‘strength’. These differing strengths then determine the path that a particular signal will take through the ‘brain’, much like a neural signal in a real brain.
I created a relatively simple neural network simulation in JavaScript and Angular. As with most neural networks, this one needed to have a goal (otherwise, it’ll just sit there and ponder virtual philosophy or something). In this case, the brain attempts to steer a predator organism to catch a prey organism.
The neural network also needs to be able to interact with its environment. In a biological sense, this would be your eyes and ears, and your muscles: the network needs eyes and ears to sense its current state, and muscles to adjust it. In the case of my simulation, I have a pair of distance sensors which simply read the distance to the target, and a set of output ‘states’ that move the organism up, down, left, or right.
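As a rough sketch of what that input/output wiring looks like, here’s a minimal version in plain JavaScript. This is hypothetical - the names and the one-unit step size are my own, not the simulation’s actual code:

```javascript
// Hypothetical sketch of the network's senses and outputs.
function senseDistances(predator, prey) {
  // Two distance sensors: horizontal and vertical offset to the target.
  return { dx: prey.x - predator.x, dy: prey.y - predator.y };
}

function applyOutputs(predator, outputs) {
  // Four output states move the organism up, down, left, or right.
  if (outputs.left)  predator.x -= 1;
  if (outputs.right) predator.x += 1;
  if (outputs.up)    predator.y -= 1;
  if (outputs.down)  predator.y += 1;
  return predator;
}
```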
One interesting facet of this experiment is that, besides the initial goal and input/output of the organism, the initial network is not ‘programmable’. Instead, the network attempts to program itself, obeying the following rules:
The network runs until one of five conditions is met:
- The network runs out of time. This is essentially starvation, in a biological sense.
- The network leaves the play area. This is a predator wandering out of its prey’s territory.
- The network has no active nodes. This constitutes random brain death.
- The network has an overwhelming majority of active nodes. This constitutes something like a seizure.
- The organism successfully catches the prey item.
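The five stopping conditions might be checked with something like the following sketch. The state fields and the 90% ‘seizure’ threshold are illustrative choices of mine, not the simulation’s actual values:

```javascript
// Returns the reason the run ended, or null to keep running.
function runOver(state) {
  const activeFraction = state.activeNodes / state.totalNodes;
  if (state.elapsed > state.timeLimit) return 'starvation';
  if (state.outOfBounds)               return 'left play area';
  if (state.activeNodes === 0)         return 'brain death';
  if (activeFraction > 0.9)            return 'seizure';   // illustrative threshold
  if (state.caughtPrey)                return 'success';
  return null;
}
```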
Each run, the network gets a score, based on two parameters (with lower being better):
- The final distance to the prey item
- The amount of time taken
During each run, the paths taken are recorded. At the end of a run, if the current score is better than a moving average, that path is strengthened. If the current score is worse, the path is weakened.
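A minimal sketch of that scoring and reinforcement step, assuming a simple additive score and an exponential moving average (both assumptions on my part, not the simulation’s exact formulas):

```javascript
// Lower scores are better: combine final distance and time taken.
function score(finalDistance, timeTaken) {
  return finalDistance + timeTaken;
}

// Strengthen or weaken every connection along the recorded path, then fold
// the run's score into a simple exponential moving average.
function reinforce(path, runScore, movingAvg, rate = 0.1) {
  const delta = runScore < movingAvg ? rate : -rate;
  for (const conn of path) conn.weight += delta;
  return movingAvg + rate * (runScore - movingAvg);
}
```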
Over time, these relatively simple rules produce a learned behavior. As you might expect, the more nodes the network has, the better it tends to perform.
Lunar Lander Evolution
Note: The title of the above page refers to this as a neural network. It’s not!
One interesting biological simulation that can be done with code is that of simulated evolution. By giving some sort of ‘survival’ conditions, and a pseudo-randomly mutating spaceship with its own genome, we can produce some basic evolutionary trends.
In this example, I’ve created something similar to those ubiquitous lunar lander games that you (used to?) see everywhere: you land on some object with your little spacecraft. This time, however, you have absolutely no control over the spacecraft. Instead, two spacecraft are generated, each with its own assortment of thrusters, each of which in turn has its own start time, end time, angle, and power settings.
Each generation, two spacecraft exist. Except for the initial condition, one of these is an older, ‘parent’ spacecraft, and the other is a mutation off of that parent. The spaceships again have the goal of reaching the target object (the ‘moon’), but this time control is given by an artificial genome that controls the number and stats of each engine.
After each generation is complete - i.e., after one spaceship has landed, one has left the play area, or one has sat with no engines firing for more than 20 seconds - the scores of both spaceships are determined as, again, a combination of distance and time taken. The spaceship with the better score is kept, and the worse-scoring spaceship is discarded. The better spaceship is then duplicated, with a very small chance of mutations. This repeats until the user gets sick of staring at a bunch of circles moving around.
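The duplicate-and-mutate step might look roughly like this. The genome field names and mutation size are illustrative, not the simulation’s real values:

```javascript
// Copy a genome, occasionally nudging one stat of an engine gene.
function mutateGenome(genome, mutationChance, rand = Math.random) {
  return genome.map(engine => {
    const copy = { ...engine };
    if (rand() < mutationChance) {
      const keys = ['start', 'end', 'angle', 'power']; // assumed gene fields
      const key = keys[Math.floor(rand() * keys.length)];
      copy[key] += rand() - 0.5; // small random nudge
    }
    return copy;
  });
}

// Selection: keep the better-scoring (lower) ship and breed a mutated
// child from it; the worse ship is discarded.
function nextGeneration(shipA, shipB, mutationChance) {
  const parent = shipA.score <= shipB.score ? shipA : shipB;
  const child = { genome: mutateGenome(parent.genome, mutationChance), score: Infinity };
  return [parent, child];
}
```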
Some interesting observations about the behavior of this system:
- Like the Neural Network simulation above, this simulation is not explicitly programmed. That is, while there is ostensibly a success condition (and multiple failure ones!), the spaceship itself doesn’t actually know about this condition. Instead, each generation is rated as an isolated event, and only the emergent behavior of multiple generations leads to any changes.
- This, in turn, means that you might observe very little (if any!) progress in the first 30-60 generations or so. This actually makes complete sense, biologically, and is an example of the late Stephen Jay Gould’s Punctuated Equilibrium, whereby evolution generally occurs very slowly except for the occasional sudden burst of progress. For example, while life is generally agreed to have appeared some 3600 million years ago, it took until about 700-800 million years ago (a wait of at least 2800 million years!) for life to figure out that bunching multiple cells together was a good idea. Pretty soon after that, however, we get an enormous number of diverse new evolutionary experiments as part of the Ediacaran fauna and the famous ‘Cambrian Explosion’.
- The simulation may reach a plateau, beyond which very little if any improvement occurs. This makes a lot of sense too, as while natural selection is goal-driven - that is, better organisms survive more - evolution itself really isn’t. We might refer to pre-avian dinosauria as trying to ‘figure out’ how to fly, for example, but this is really a combination of a colloquialism and hindsight: We can see that, over time, evolutionary pressures were driving certain organisms towards developing flight. However, neither the birds in this example nor the spaceships in this simulation are individually striving towards any particular goal.
Taxonomy Database
This example is perhaps the simplest of the bunch, and is still somewhat a work-in-progress. It’s basically just a taxonomy database. Current features:
- Add a taxon by entering its info in the box.
- Remove a taxon by going to davetaxonomy.herokuapp.com/remove/ plus the name of your taxon. So to remove a taxon called ‘myTaxa’, you’d go to davetaxonomy.herokuapp.com/remove/myTaxa.
- Draw a tree by picking a taxon and then clicking the ‘draw tree’ button.
- Get info on any particular taxon by going to davetaxonomy.herokuapp.com/info/ plus the name of your taxon (case sensitive). So to get info on ‘Bilateria’, you’d go to davetaxonomy.herokuapp.com/info/Bilateria.
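For convenience, those URLs can be built with a couple of trivial helpers. The endpoints come from the list above; the `https://` scheme and the helper names are my additions:

```javascript
const BASE = 'https://davetaxonomy.herokuapp.com';

// Taxon names are case sensitive, so they're passed through unchanged
// (encodeURIComponent only escapes URL-unsafe characters).
function infoUrl(taxon) {
  return `${BASE}/info/${encodeURIComponent(taxon)}`;
}

function removeUrl(taxon) {
  return `${BASE}/remove/${encodeURIComponent(taxon)}`;
}
```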
Upcoming features:
- Right now, you can only delete taxa via a specific url (davetaxonomy.herokuapp.com/remove/<nameOfTaxon>). Eventually I’ll make this part of the UI.
- While synapomorphies are entered as part of the taxon creation process, they’re never actually displayed after this. I’ll change that.
Ecology Simulation
This one is, so far, the most complex. It’s basically a food web/ecology simulation, in which there are a number of organisms with specific ‘diets’. The simulation follows these rules:
- Except for plants, which operate by their own rules (they’re hipsters like that), all organisms have a specific list of organisms that they like to eat.
- Organisms constantly lose health, and replenish it whenever they feed. This acts as a starvation mechanic.
Interactions between organisms fall into a number of categories:
- Predation: a predator organism ‘tags’ a prey organism, and attempts to feed on it. If successful, the predator organism gains health.
- Intraspecific competition: Usually does not result in death, but may result in one of the organisms being injured (i.e., losing health).
- Interspecific, non-predatory competition: this can result in the death of one of the two organisms. However, it does not ever result in one of the organisms gaining health.
- Mating: a male and female of the same species have a good time together.
- Nothing: Finally, if two organisms have no particular interaction set between them (i.e., a wolf and a tree), then nothing happens: the two just go about their business.
- Both mating and predation have a certain chance of success. Mating success simply determines whether an offspring is produced or not. Predation, if successful, kills the prey organism and heals the predator. If unsuccessful, the predator takes damage.
- To prevent ‘sticking’ (i.e., predator A continuously following prey item B until it makes a successful kill), organisms have a cooldown timer on things like predation and mating. While this actually isn’t as important for predation, as the predator might equally well end up getting itself killed, it’s quite important for mating, where two ‘stuck together’ organisms may continuously mate, leading to intense overpopulation.
- Newborn organisms have a slightly increased chance of winning a predation event if they are the prey item.
- Organisms have a few timers:
  - A starvation timer, as mentioned above. This prevents organisms from just sitting there and doing nothing.
  - A maturity timer. Basically, newborn organisms must wait a certain amount of time (dependent on species) before they themselves can mate.
  - A gestation timer. This doesn’t actually determine how long after mating a newborn is produced - that happens immediately. Instead, this misnamed timer acts as a time-between-matings mechanic, preventing organisms from mating too often. It’s also based on real-world, species-specific data.
  - A predation timer. This prevents an organism from eating too often.
- The decision on what an organism is doing (i.e., whether it’s just wandering, seeking prey, or seeking a mate) is determined by how recently each of the relevant events occurred.
- Finally, the organisms, if not actively interacting, attempt to stay within a certain distance from each other as part of an underlying flocking simulation. A simplified version of this flocking sim can be seen by clicking the [Old Version] button.
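Condensing the interaction rules and cooldown timers above into a sketch - the species data and field names here are placeholders, not the simulation’s actual structures:

```javascript
// Classify an encounter between two organisms per the rules above.
function interact(a, b) {
  if (a.species === b.species) {
    // Same species: opposite sexes mate, otherwise they compete.
    return a.sex !== b.sex ? 'mating' : 'intraspecific competition';
  }
  if (a.diet.includes(b.species)) return 'predation';
  if (a.competitors.includes(b.species)) return 'interspecific competition';
  return 'nothing'; // e.g., a wolf and a tree
}

// Cooldown timers gate how often predation and mating can occur.
function canAct(organism, action, now) {
  return now - organism.lastActed[action] >= organism.cooldowns[action];
}
```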
Setup basically just consists of setting the number of organisms. Click each organism to see what it eats. Note that the dragon basically acts as an apex predator, and its particular timers are based on estimates of Tyrannosaur data.
Finally, please note that the simulation is still buggy, and you may find that organisms tend to fall off the screen more often than not.