3-D prey detection location videos

Our lab staff have been hard at work analyzing the foraging behavior of the fish we filmed last summer using VidSync.  Below are some key results from the first several videos we’ve analyzed. Ultimately we will be using detailed statistics that summarize these data for testing our models, but the visuals below give a nice sense for the pattern of prey detection by each fish.

The yellow dots indicate the estimated position of a possible prey item, relative to the fish, when the fish first detected it. The ‘x’ axis points directly downstream and the ‘z’ axis points straight upward. The length scales differ greatly among the videos, but you can judge the distances to the detection positions in terms of the body length of the fish. The Chinook salmon were roughly 5 cm long, the Dolly Varden 15 cm, and the grayling 45 cm.

It’s easier to watch all the videos below at higher resolution in their YouTube playlist.

Juvenile Chinook salmon

Dolly Varden

Note: Most viewers can ignore the numbers (“Dolly Varden 4”, etc.). They’re just internal reference numbers we use to keep track of objects in the videos. The “4” doesn’t mean we had 4 fish in the video, just that the fish was the 4th object we digitized.

Arctic grayling



Prey reaction field of a Dolly Varden

Many of our model tests will depend on video footage of feeding fish, from which we digitize the 3-D coordinates of attempted prey capture maneuvers to compare against behaviors predicted by the model.

We use a stereo camera setup to get two views of everything the fish does, and digitize it in VidSync like this.

After 33 minutes of digitizing behavior for that particular fish (so far), we can calculate a cloud of points (in green) representing the positions of potential prey, relative to the fish, when the fish detected them:

We’ll be comparing various aspects of these reaction fields to predicted volumes that look something like this:

Possible shapes of the region in which a drift-feeding fish detects its prey (red arrows indicate the direction of flow). These are from a preliminary version of our model.

Measuring tiny drifting debris in the river

A model we’re developing examines the idea that the behavior of drift-feeding fish is largely determined by cognitive limits on their ability to process visual information. We’ve hypothesized that the fish focus their efforts on a particular region (for example, within a 10 cm radius of their snout), or on particular types of prey, to avoid sensory overload from trying to search too much space or sort through too many drifting items at once.  Even with this focus, 91% of the foraging maneuvers of juvenile Chinook salmon in the Chena River are directed at items they reject (by abandoning the chase or spitting things out), and it’s obvious on video that there’s far more debris present than what the fish actually pursue.

But how much is there, exactly?

It’s easy to collect the debris with a drift net and weigh it, but that doesn’t tell us how many pieces there were, nor how big they were — both crucial pieces of information for understanding the visual challenge facing a fish as it tries to locate prey amongst the debris. To get that information, we’re developing a new technique based on computer vision. We use a DSLR to take thousands of pictures of water flowing through a well-lit rectangular chamber against a dark background. Here’s the first version of that contraption in the Chena River a few days ago:

We took a picture per second to get around 2,500 pictures that look like this (click it to view full size):


Most of that junk against the black background is stuck to the chamber somehow, not actual drifting debris. We can get a picture that excludes the real drifting stuff (or anything that’s moving) by taking the average (median) of ten consecutive pictures, a technique borrowed from astronomy. Here’s the average (which was also converted from color to grayscale to speed up the calculations):
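The median-stacking step can be sketched in a few lines of NumPy. Everything below is a toy illustration of the idea, not our actual processing code — the frame sizes and pixel values are placeholders.

```python
import numpy as np

def median_background(frames):
    """Per-pixel median of a stack of consecutive grayscale frames.

    A drifting particle passes any given pixel in only a frame or two,
    so the median of ~10 frames keeps the static scene (chamber walls,
    stuck junk) and rejects anything that moves.
    """
    stack = np.stack(frames, axis=0)            # shape: (n_frames, height, width)
    return np.median(stack, axis=0).astype(stack.dtype)

# Toy example: ten dark frames with one bright particle that moves
# one column per frame. The median background shows no trace of it.
frames = []
for i in range(10):
    f = np.full((20, 20), 12, dtype=np.uint8)   # uniform dark scene
    f[5, i] = 250                               # moving bright particle
    frames.append(f)

bg = median_background(frames)
print(bg.max())  # 12 -- the moving particle is gone
```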

It hardly looks any different. But if we subtract this background from one of the individual images containing drifting items, we theoretically get an image with only the drifting items. The reality is a bit messier, because some of the non-drifting items change slightly in appearance between frames and can show up in the subtracted image, but we have a nice bag of tools for dealing with that. In the end, the process lets us pick out almost all the drifting debris, highlighted and numbered in green boxes in the image below:
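The subtract-and-detect step amounts to thresholding the residual image and grouping the surviving pixels into connected blobs. Here is a minimal sketch of that idea using SciPy’s connected-component labeling; the threshold value and the toy frame are made-up placeholders, not our calibrated settings.

```python
import numpy as np
from scipy import ndimage

def detect_drifting(frame, background, threshold=30):
    """Subtract the static background and label what's left.

    Pixels much brighter than the background are treated as drifting
    material; connected-component labeling groups them into discrete
    candidate particles. The threshold of 30 gray levels is arbitrary.
    """
    residual = frame.astype(np.int16) - background.astype(np.int16)
    mask = residual > threshold
    labels, n_particles = ndimage.label(mask)
    return labels, n_particles

# Toy example: a flat gray background with two bright "particles".
background = np.full((50, 50), 40, dtype=np.uint8)
frame = background.copy()
frame[10:13, 10:14] = 200   # particle 1
frame[30:32, 25:30] = 220   # particle 2

labels, n = detect_drifting(frame, background)
print(n)  # 2 distinct drifting particles detected
```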

The detected particles can then be counted and measured. Here are the close-ups, with the index # of each particle listed above its length in millimeters. The four images of each particle, from left to right, are from (1) the original image, (2) the median background, (3) the original minus the median background, and (4) a false-color overlay showing which pixels were actually counted as part of the particle.
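One simple way to turn labeled blobs into lengths in millimeters is to take the longer side of each particle’s bounding box and multiply by a pixel-to-millimeter calibration factor. The sketch below assumes a labeled mask like the one the detection step produces; `MM_PER_PIXEL` and the example particles are invented placeholders, not our real calibration.

```python
import numpy as np
from scipy import ndimage

MM_PER_PIXEL = 0.05  # hypothetical optical calibration factor

# A labeled mask of detected particles (integers 1..n), as produced
# by connected-component labeling of the background-subtracted image.
labels = np.zeros((50, 50), dtype=int)
labels[10:13, 10:30] = 1    # an elongated particle, 20 px long
labels[40:44, 5:8] = 2      # a smaller one, 4 px long

# find_objects returns one bounding-box slice per labeled particle;
# we take the longer side of the box as the particle's length.
lengths_mm = []
for sl in ndimage.find_objects(labels):
    longest_px = max(s.stop - s.start for s in sl)
    lengths_mm.append(longest_px * MM_PER_PIXEL)

print(lengths_mm)  # [1.0, 0.2]
```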

Here’s another one, just because they’re pretty:


It takes about 30 seconds for the computer to crunch one image, so an overnight program run will allow us to count and measure tens of thousands of debris particles from several cubic meters of water.

We won’t need to do this every time we go in the field to observe fish. The particle size composition and average weight per particle probably don’t change too much within each stream at normal water levels, although they may be very different in different streams. So we’re planning to run this system alongside a drift net, a few times per stream, to establish relationships between weight of debris in the net and debris counts/sizes from the images. Then we’ll be able to reasonably infer debris counts and sizes based on debris weights in the nets at all our other study sites.
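The net-weight-to-particle-count calibration described above boils down to an ordinary least-squares line fit per stream. The sketch below shows the bookkeeping with NumPy; every number in it is invented purely for illustration, not real field data.

```python
import numpy as np

# Hypothetical paired deployments: debris weight from the drift net
# (grams) and particle count from the imaging system over the same
# interval. These values are made up for illustration only.
net_weight_g   = np.array([0.8, 1.5, 2.1, 3.0, 4.2])
particle_count = np.array([1300, 2400, 3200, 4600, 6500])

# Fit count = a * weight + b by least squares.
a, b = np.polyfit(net_weight_g, particle_count, 1)

# At a site where only a net sample exists, infer the likely count
# from its measured debris weight (here, a hypothetical 2.5 g).
inferred = a * 2.5 + b
```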

This was the very first test of the whole idea, and as expected there were a few problems, like a roughly hourglass-shaped region in the middle of the image where not all the debris particles were detected. We have easy fixes for every problem identified so far, and we expect to get some really interesting data when we deploy this system in the field next summer.