Fall 2016 update

Our second main field season concluded this September, and we obtained excellent data again despite a historically rainy summer in interior Alaska. Our fieldwork is now complete. Blog updates have been slow to come because we now have our hands full with data to analyze and papers to write. Updates will become more frequent as those efforts come to fruition. In the meantime, here’s a particularly nice example of a grayling drift feeding, from our new 2016 data:


3-D prey detection location videos

Our lab staff have been hard at work analyzing the foraging behavior of the fish we filmed last summer using VidSync.  Below are some key results from the first several videos we’ve analyzed. Ultimately we will be using detailed statistics that summarize these data for testing our models, but the visuals below give a nice sense for the pattern of prey detection by each fish.

The yellow dots indicate the estimated position of a possible prey item, relative to the fish, when the fish first detected it. The ‘x’ axis points directly downstream and the ‘z’ axis points straight upward. The length scales differ among the videos, but you can judge the distances to the detection positions in terms of the body length of the fish. The Chinook salmon were roughly 5 cm long, the Dolly Varden 15 cm, and the grayling 45 cm.
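For readers curious how these plots are scaled, here is a minimal sketch of converting detection positions from centimeters to body lengths, so fish of different sizes can be compared on a common scale. The coordinate values and body length below are made up for illustration; they are not from our data, and this is not our actual analysis code.

```python
import numpy as np

# Hypothetical detection positions relative to the fish (x downstream,
# y lateral, z up), in cm; values invented for illustration only.
detections_cm = np.array([
    [12.0,  3.0,  5.0],
    [20.0, -4.0,  8.0],
    [ 7.0,  1.5, -2.0],
])

body_length_cm = 15.0  # e.g., a Dolly Varden of roughly this size

# Express each detection distance in body lengths instead of cm.
detections_bl = detections_cm / body_length_cm
print(detections_bl)
```

Dividing by body length makes a detection 15 cm downstream of a 15 cm fish read as 1.0 body length, which is the kind of normalization that lets the patterns for small Chinook and large grayling be viewed side by side.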

It’s easier to watch all the videos below at higher resolution in their YouTube playlist.

Juvenile Chinook salmon

Dolly Varden

Note: Most viewers can ignore the numbers (“Dolly Varden 4”, etc.). They’re just internal reference numbers we use to keep track of objects in the videos. The “4” doesn’t mean we had 4 fish in the video, just that this fish was the 4th object we digitized.

Arctic grayling



Prey reaction field of a Dolly Varden

Many of our model tests will depend on video footage of feeding fish, from which we digitize the 3-D coordinates of attempted prey capture maneuvers to compare against behaviors predicted by the model.

We use a stereo camera setup to get two views of everything the fish does, and digitize it in VidSync like this.
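To give a rough sense of how two camera views yield a 3-D position, here is a minimal triangulation sketch assuming an idealized pair of parallel, rectified cameras. VidSync’s actual calibration and measurement procedure is considerably more sophisticated (it corrects for refraction and camera geometry); the function name, focal length, and baseline below are invented for illustration.

```python
# Idealized stereo triangulation: two parallel cameras separated by a
# known baseline see the same point at different horizontal pixel
# positions; the disparity between them encodes depth.
def triangulate(x_left, x_right, y, focal_px, baseline_cm):
    """Recover an approximate 3-D position (cm) from matched pixels."""
    disparity = x_left - x_right            # pixels; larger = closer
    depth = focal_px * baseline_cm / disparity
    x = x_left * depth / focal_px           # back-project to world units
    y_world = y * depth / focal_px
    return (x, y_world, depth)

# Made-up example: 800 px focal length, 30 cm camera separation.
point = triangulate(120.0, 80.0, 40.0, 800.0, 30.0)
print(point)
```

The key idea is simply that a point matched in both views pins down one 3-D location; digitizing the same prey-capture maneuver in both camera views is what turns on-screen clicks into the 3-D coordinates used in the analyses below.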

After 33 minutes of digitizing behavior for that particular fish (so far), we can calculate a cloud of points (in green) representing the positions of potential prey, relative to the fish, when the fish detected them:

We’ll be comparing various aspects of these reaction fields to predicted volumes that look something like this:

Possible shapes of the region in which a drift-feeding fish detects its prey (red arrows indicate the direction of flow). These are from a preliminary version of our model.