The duck and the fox: genetic-style simulated annealing for a spatial problem

I got devastatingly nerd-sniped by the following problem:

Suppose there is a duck in the middle of a circular pond, and a fox on the bank that wants to eat that duck.  The duck (for whatever reason) needs a running start to fly, so cannot take off from the water.  If it reaches the bank and the fox is not there waiting, it escapes; otherwise it is eaten.  How fast does the duck need to be relative to the fox to escape?

—SPOILERS BEGIN HERE—

Implicit in the question is a harder one: what is the optimal strategy for the duck?  The naive strategy is that the duck just swims the radius of the circle, and if it can do so faster than the fox can run half the circumference, it wins.  But can the duck do better (i.e. escape while moving slower than that) via a more clever strategy?  And what is that strategy?

Ok, the RIGHT answer here is to think through the problem and apply a little calculus.  But, maybe your calculus + trig is a bit rusty, and you’re aching to write a light genetic algorithm.  That was my situation, and here’s how I proceeded.

First, let’s discuss the fox’s strategy.  A clever fox might anticipate where the duck is heading, and make a bee-line for where the duck is pointing.  However, the duck could then juke back and forth like a ship tacking into the wind, causing the fox to forever change directions on the opposite side of the pond.  To avoid this degenerate equilibrium, I proposed a fox that moved towards the point on shore closest to the duck’s current position.  This fox is less clever, but can’t be fooled.  Coding this is pretty simple, as you only need to find the angle off of center to the duck, then decide which way around the circle is fastest for the fox to get there.  Implicit in this is a discrete-time simulation, which I thought would be easier to implement than a continuous-time version.  Never fear: for step sizes approaching zero, this is an arbitrarily good approximation of the continuous-time variant.
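In R, a single fox update looks roughly like this (a minimal sketch with made-up names, working in degrees; not my actual code, just the shape of it):

```r
# One discrete-time step for the fox: run toward the shore point nearest the
# duck, going whichever way around the rim is shorter, capped by the fox's speed.
fox_step <- function(fox_angle, duck_xy, fox_speed, dt = 0.01) {
  target <- atan2(duck_xy[2], duck_xy[1]) * 180 / pi    # shore angle closest to the duck
  gap    <- ((target - fox_angle + 180) %% 360) - 180   # signed gap in [-180, 180)
  move   <- sign(gap) * min(abs(gap), fox_speed * dt)   # short way around, speed-limited
  (fox_angle + move) %% 360
}
```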

I represent a duck as a vector of angles.  At each step in the discrete-time simulation, the duck turns some number of degrees, then moves forward one step.  Note that this is a REALLY dumb duck; it makes no strategic calculation with respect to its position nor vis-a-vis the fox.  It just has a plan (a set of angles) and executes it.
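A sketch of that representation (illustrative names and parameter values, not the exact code from my repo):

```r
# A duck "genome" is just a vector of turn angles, executed blindly one per step.
duck_genome <- runif(200, min = -30, max = 30)   # e.g. 200 turns of up to 30 degrees each

duck_step <- function(pos, heading, turn, step_size = 0.01) {
  heading <- heading + turn                               # turn first...
  pos <- pos + step_size * c(cos(heading * pi / 180),
                             sin(heading * pi / 180))     # ...then paddle forward one step
  list(pos = pos, heading = heading)
}
```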

In my Darwinian slaughterhouse, I set up a gene pool of, say, 500 ducks, and pit each one against the fox.  Ducks only survive to pass on their meager genetics if two conditions hold: first, the duck makes shore in under some maximum number of steps.  This trims out ducks that circle around aimlessly for a potentially infinite amount of time.  Second, and more interestingly, the duck has to reach shore at least some small number of degrees away from the fox.
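Put together with the sketches above, the survival test looks something like this (the margin, step cap, and starting positions are placeholder values):

```r
# Run one duck's plan against the fox: it survives only if it reaches the shore
# within max_steps AND lands at least margin_deg degrees away from the fox.
survives <- function(genome, fox_speed, R = 1, max_steps = 2000,
                     margin_deg = 5, dt = 0.01) {
  pos <- c(0, 0); heading <- 0; fox_angle <- 180
  for (i in seq_along(genome)) {
    if (i > max_steps) return(FALSE)                  # circled around too long
    s <- duck_step(pos, heading, genome[i], dt)
    pos <- s$pos; heading <- s$heading
    fox_angle <- fox_step(fox_angle, pos, fox_speed, dt)
    if (sqrt(sum(pos^2)) >= R) {                      # made landfall
      duck_angle <- atan2(pos[2], pos[1]) * 180 / pi
      gap <- abs(((duck_angle - fox_angle + 180) %% 360) - 180)
      return(gap >= margin_deg)
    }
  }
  FALSE                                               # ran out of plan while still swimming
}
```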

Ok, so now we have a way to determine survivors.  How do these ducks “reproduce?”  I have them reproduce asexually.  Ducks that survive continue to the next generation, and they produce offspring that are some positive number of mutations different from them.  I allow for 1-N mutations (drawn uniformly) of up to x degrees in each direction at each “gene” (angle of a duck’s strategy).  This means I can vary both the average number and the extremity of mutations.  More on this later.  Furthermore, I weight each survivor’s probability of reproducing according to how much it beat the fox by.  This single factor forms the (admittedly somewhat casual) fitness function.  Finally, I cap each generation at the original number of ducks, so the population (and the computational burden) can’t grow without bound if I pick bad starting parameters.
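In sketch form (the mutation counts and sizes below are illustrative defaults, not the tuned values):

```r
# Asexual reproduction: each offspring differs from its parent by 1 to
# max_mutations tweaks, each nudging one angle by up to mut_size degrees.
mutate <- function(genome, max_mutations = 3, mut_size = 15) {
  n_mut <- sample(1:max_mutations, 1)
  idx   <- sample(seq_along(genome), n_mut)
  genome[idx] <- genome[idx] + runif(n_mut, -mut_size, mut_size)
  genome
}

# Survivors carry over unchanged; the rest of the pool is filled with mutated
# offspring, with parents drawn in proportion to how much they beat the fox by.
next_generation <- function(survivors, margins, pool_size = 500) {
  n_offspring <- pool_size - length(survivors)
  parents <- sample(seq_along(survivors), n_offspring, replace = TRUE,
                    prob = margins / sum(margins))
  c(survivors, lapply(survivors[parents], mutate))
}
```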

Lastly, to put increasing pressure on these ducks, I set up the fox’s speed to increase each generation.  After a little fiddling, I decided to make the fox’s speed increase a function of the proportion of ducks that survived their last encounter with the fox, and the count of generations.  Basically, the fox should get way faster if all the ducks are surviving, but not much faster if it catches them all.  Additionally, the fox’s maximum possible speed increase lessens with each generation, to track the intuition that the fox’s speed should eventually asymptote towards the answer: the break-even ratio of duck speed to fox speed (i.e. the minimum speed, relative to the fox, at which an optimal duck can still escape).
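Something with this shape does the job (the functional form and constants here are placeholders, just to illustrate the idea):

```r
# The fox's raise is big when most ducks escape, small when few do, and the
# maximum possible raise shrinks as the generations tick up.
update_fox_speed <- function(fox_speed, survival_rate, generation, base_raise = 0.5) {
  fox_speed + base_raise * survival_rate / sqrt(generation)
}
```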

I decided to code this up, and then add some images.  Specifically, every generation, I pick one surviving duck at random, and plot its course to the edge of the pond with a series of black dots.  I also plot the fox’s response (in red, of course) around the perimeter of the pond.  I move the fox’s dots inward from the water’s edge a little each step, so that if he makes more than one circuit or turns back on his path, that’s still visible.
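The plotting itself is just base R, along these lines (a sketch with placeholder constants):

```r
# Draw the pond rim, one surviving duck's path in black, and the fox's path in
# red, with the fox's dots creeping inward so doubling back stays visible.
# duck_path: two-column matrix of (x, y); fox_angles: degrees, one per step.
plot_generation <- function(duck_path, fox_angles, R = 1) {
  theta <- seq(0, 2 * pi, length.out = 360)
  plot(R * cos(theta), R * sin(theta), type = "l", asp = 1,
       xlab = "", ylab = "", axes = FALSE)
  points(duck_path[, 1], duck_path[, 2], pch = 20, cex = 0.4)        # duck in black
  r_fox <- R * (1 - 0.0005 * seq_along(fox_angles))                  # nudge inward each step
  points(r_fox * cos(fox_angles * pi / 180),
         r_fox * sin(fox_angles * pi / 180), pch = 20, cex = 0.4, col = "red")
}
```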

The first image shows a very random duck from the first generation.  You can see the pond in black, the duck’s path shown by black dots, and the fox’s path in red (getting closer to shore, so you can see when it doubles back on itself).  More on the blue, interior circle in a moment.

You can see that the duck takes a very meandering path, first heading east, then north, then trundling northwest until it makes landfall.  The fox slowly tries to keep track, but is out-paced.  You can get a sense of how slow this fox is by how close together the red dots are.  Internally, I track the fox’s speed as it increases in successive generations in degrees per second (hint!).

By the sixth generation, the duck is starting to spiral outward.  In this instance, the duck makes a few odd turns, but basically spirals out clockwise, staying ahead of the fox.

By generation 21, the duck is mastering this strategy.  It spins away from the fox, making the fox take the long way around towards the duck’s ultimate destination, at around 9 o’clock on the pond’s edge.

Finally, the duck has gotten about as good as it can get by the fifty-sixth generation (the actual number of generations depends modestly on initial conditions, how fast the fox speeds up each generation, and how much mutation you allow).  We see the duck spin outward, then take a line pretty much tangent to the blue inner circle.

It turns out that this is exactly the duck’s optimal strategy.  In the first portion of its strategy, it spins outward within the “circle of safety”: the radius within which its speed lets it match the fox’s revolutions per minute around the pond.  Then, it shoots straight towards the edge of the pond on a line tangent to the circle of safety.  You can show with a little calculus (as Henry does here) that this makes the duck travel a bit further than a line directly to the nearest bank, but makes the fox travel enough further that it is worth it.
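For the record (this is the closed-form result you get from that calculus, hedged by my memory of the standard derivation, and not something the ducks computed): with the fox k times faster than the duck and a pond of radius R, the duck’s tangent dash covers sqrt(R^2 - (R/k)^2) while the fox has to cover an arc of (pi + arccos(1/k))*R, so the break-even ratio solves sqrt(k^2 - 1) = pi + arccos(1/k).  A quick numeric check in R:

```r
# Numeric check of the break-even ratio: the duck's tangent dash ties the fox's
# long arc exactly when sqrt(k^2 - 1) = pi + acos(1/k).
f <- function(k) sqrt(k^2 - 1) - (pi + acos(1 / k))
uniroot(f, c(2, 10))$root   # roughly 4.6, versus pi (about 3.14) for the naive radial dash
```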

After coding this up, I realized that, although I borrowed some terminology from the genetic algorithms world, the approach I’ve taken is much more akin to simulated annealing, a 1980s-style optimization strategy for non-convex problems in which you start with a highly randomized search and slowly allow less and less divergence from any local optima you identify along the way.

If you’d like to check out the (very casual R) code, you can find it on my GitHub.