Deep Learning: Obstacle Course

Smart balls learning to complete an obstacle course.

Each ball is controlled by its own brain (a neural network), which receives the following information about its environment:

  • the ball's velocity (in the x direction)
  • the height of the next obstacle to the right
  • the distance to the next obstacle to the right

Based on this information, each ball continuously decides whether to move left or right or to jump, how fast to accelerate, and how high to jump.
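As a rough sketch, such a brain can be modeled as a tiny feedforward network mapping the three sensor values to movement decisions. The layer sizes, activation functions, and the exact output interpretation below are assumptions for illustration, not the project's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_brain(n_in=3, n_hidden=6, n_out=3):
    """Random weights, as in the first generation of balls."""
    return {
        "w1": rng.normal(size=(n_in, n_hidden)),
        "b1": rng.normal(size=n_hidden),
        "w2": rng.normal(size=(n_hidden, n_out)),
        "b2": rng.normal(size=n_out),
    }

def decide(brain, velocity_x, next_obstacle_height, next_obstacle_distance):
    # Sensor inputs -> hidden layer -> three outputs (all in [-1, 1]):
    # horizontal acceleration, a jump trigger, and a jump height.
    x = np.array([velocity_x, next_obstacle_height, next_obstacle_distance])
    h = np.tanh(x @ brain["w1"] + brain["b1"])
    accel, jump_signal, jump_height = np.tanh(h @ brain["w2"] + brain["b2"])
    return accel, jump_signal > 0.0, jump_height
```

A freshly made brain produces essentially random decisions, which is exactly the state of the first generation.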

The first generation of balls is spawned with random brains and therefore performs very badly. However, once time runs out, or all balls have died by colliding with an obstacle or idling for too long, a new generation is spawned. Since balls that performed better in the previous generation are more likely to pass their 'genes' (or in this case, 'brains') on to the next generation, overall performance increases over time.
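The breeding step described above can be sketched as fitness-proportional selection with mutation. The selection scheme, mutation model, and mutation rate here are illustrative assumptions; the project may use a different variant:

```python
import random

def next_generation(population, fitnesses, mutation_std=0.1):
    """Breed a new population of the same size.

    population: list of genomes (flat lists of network weights)
    fitnesses:  matching non-negative scores from the last run
    """
    total = sum(fitnesses)
    weights = [f / total for f in fitnesses] if total > 0 else None
    children = []
    for _ in range(len(population)):
        # Fitter balls are more likely to be picked as parents.
        parent = random.choices(population, weights=weights)[0]
        # The child inherits the parent's weights, slightly perturbed.
        child = [w + random.gauss(0.0, mutation_std) for w in parent]
        children.append(child)
    return children
```

Over many generations, this bias toward fitter parents is what drives the gradual improvement.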

Evaluation criteria for ball performance are traveled distance (to the right) and jump efficiency (accumulated height of a ball's jumps divided by accumulated height of the obstacles the ball has passed). Jump efficiency was added as a criterion to encourage varying jump heights; otherwise, balls would likely evolve to always jump as high as possible, which would not look very interesting.
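The jump-efficiency ratio follows directly from the description above; how the two criteria are combined into a single score is not stated, so the weighting below is an assumption purely for illustration:

```python
def jump_efficiency(jump_heights, passed_obstacle_heights):
    """Accumulated jump height divided by accumulated passed-obstacle height."""
    total_obstacle_height = sum(passed_obstacle_heights)
    if total_obstacle_height == 0:
        return 0.0
    return sum(jump_heights) / total_obstacle_height

def fitness(distance, jump_heights, passed_obstacle_heights, w=10.0):
    # Reward distance, penalize jumping far higher than the obstacles
    # require. The linear combination and weight w are assumptions.
    eff = jump_efficiency(jump_heights, passed_obstacle_heights)
    return distance - w * eff
```

For example, a ball that jumped 2.0 units over each of two 1.0-unit obstacles has an efficiency of 2.0, i.e. it jumped twice as high as necessary.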

By the way: each generation faces a different obstacle course. Up to level 10, obstacle height gradually increases and obstacle distance decreases. After level 10, obstacles are generated completely randomly. Pressing M starts test mode, in which the ball that set the high score can be watched tackling a random course.
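The course progression could be generated roughly like this; all numeric ranges and the obstacle count are made-up values for the sketch:

```python
import random

def generate_course(level, n_obstacles=20):
    """Return a list of (x_position, height) pairs for one course."""
    obstacles = []
    x = 0.0
    for _ in range(n_obstacles):
        if level <= 10:
            # Up to level 10: taller obstacles, smaller gaps.
            t = level / 10.0
            height = 0.5 + 1.5 * t
            distance = 6.0 - 4.0 * t
        else:
            # After level 10: fully random obstacles.
            height = random.uniform(0.5, 2.0)
            distance = random.uniform(2.0, 6.0)
        x += distance
        obstacles.append((x, height))
    return obstacles
```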


Controls:

  • P - pause
  • R - reset
  • N - restart current generation / restart test run
  • M - change mode (training / test run)
  • 1 - normal sim speed
  • 2 - sim speed x2
  • 3 - sim speed x100

Sim speed can also be controlled via the slider in the top right corner.

This playlist was my most important resource for creating the neural network.