“Gambling” wolves take more risks than dogs

Selected coverage: Science Magazine, Independent, Christian Science Monitor

Wolves pursue a high-risk, all-or-nothing strategy when gambling for food, while dogs are more cautious, shows a new study. This difference is likely innate and adaptive, reflecting the hunter versus scavenger lifestyle of wolves and dogs.

Would you rather get 100 euros for certain, or have a fifty-fifty chance of receiving either 200 euros or nothing? Most choose the first, as humans tend to be “risk-averse”, preferring a guaranteed pay-off over the possibility of a greater reward. It is thought that this human preference for “playing it safe” has evolved through natural selection: when you live precariously like our remote ancestors, losing all your food reserves might be catastrophic, while adding to them might make less difference to your chances of survival.
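For readers who like to see the arithmetic, here is a minimal sketch (not part of the study) of why the guaranteed option can feel better even though both choices have the same expected value of 100 euros. Weighting outcomes with a concave “utility” curve is a standard textbook model of risk aversion; the square-root utility and the numbers below are purely illustrative assumptions.

```python
# Illustrative only: a toy expected-utility calculation, not taken from the study.
# With a concave utility function such as u(x) = sqrt(x), the guaranteed 100 euros
# is "worth" more than a fifty-fifty gamble on 200 euros or nothing, even though
# both options have the same expected monetary value.
import math

def expected_value(outcomes):
    """Expected monetary value of a list of (probability, amount) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, u=math.sqrt):
    """Expected utility under a concave (risk-averse) utility function."""
    return sum(p * u(x) for p, x in outcomes)

safe   = [(1.0, 100)]
gamble = [(0.5, 200), (0.5, 0)]

print(expected_value(safe), expected_value(gamble))      # 100.0 100.0  -> identical
print(expected_utility(safe), expected_utility(gamble))  # 10.0  ~7.07  -> safe option wins
```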

Here, in one of the first studies on risk preferences in non-primates, scientists show through a series of controlled experiments that wolves are consistently more prone to take risks when gambling for food than dogs. When faced with the choice between an insipid food pellet and a fifty-fifty chance of either tasty meat or an inedible stone, wolves nearly always prefer the risky option, whereas dogs are more cautious.

“We compared the propensity to take risks in a foraging context between wolves and dogs that had been raised under the same conditions,” says Sarah Marshall-Pescini, a postdoctoral fellow at the University of Vienna and the Wolf Science Centre, Ernstbrunn, Austria, the study’s first author. “We found that wolves prefer the risky option significantly more often than dogs. This difference, which seems to be innate, is consistent with the hypothesis that risk preference evolves as a function of ecology.”

The study was done at the Wolf Science Centre, a research institute where scientists study cognitive and behavioral differences and similarities between wolves and dogs. Here, wolves and dogs live in packs, under near-natural conditions within large enclosures.

Marshall-Pescini let each of 7 wolves and 7 dogs choose 80 times between two upside-down bowls, placed side-by-side on a moveable table-top. The animals had been trained to indicate the bowl of their choice with their paw or muzzle, after which they would receive the item that was hidden beneath it.

The researchers had taught the wolves and dogs that beneath the first bowl, the “safe” option, was invariably an insipid dry food pellet, while beneath the second bowl, the “risky” option, was either an inedible item, a stone, in a random 50% of trials, or high-quality food, such as meat, sausage, or chicken, in the other 50%. The side for the “safe” and “risky” option changed between trials, but the animals were always shown which side corresponded to which option; whether they would get a stone or high-quality food if they chose the “risky” option was the only unknown. Rigorously designed control trials confirmed that the animals understood this rule, including the element of chance.
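To make the payoff structure of the task concrete, here is a small simulation sketch of the 80-trial choice, using entirely hypothetical subjective food values (the study did not assign numbers to the rewards). It illustrates that, over many trials, gambling on the risky bowl only pays off if the animal values the meat at more than twice the dry pellet.

```python
# A minimal sketch of the choice the animals faced, using made-up subjective
# food values; the values below are illustrative assumptions, not study data.
import random

PELLET_VALUE = 1.0   # hypothetical value of the bland dry pellet ("safe" bowl)
MEAT_VALUE   = 4.0   # hypothetical value of meat/sausage/chicken ("risky" bowl)
STONE_VALUE  = 0.0   # the inedible stone

def risky_trial():
    """One 'risky' choice: meat in a random 50% of trials, a stone otherwise."""
    return MEAT_VALUE if random.random() < 0.5 else STONE_VALUE

n_trials = 80
safe_total  = n_trials * PELLET_VALUE
risky_total = sum(risky_trial() for _ in range(n_trials))

# On average the risky bowl pays 0.5 * MEAT_VALUE per trial, so always gambling
# only "wins" if the animal values the meat at more than twice the pellet.
print(f"safe total: {safe_total:.1f}, risky total: {risky_total:.1f}")
```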

The results show that wolves are much more prone to take risks than dogs: wolves chose the risky option in 80% of trials, whereas dogs did so in only 58% of trials.

The researchers believe that dogs evolved a more cautious temperament after they underwent an evolutionary shift from their ancestral hunter lifestyle to their current scavenger lifestyle, which happened between 18,000 and 32,000 years ago when humans first domesticated dogs from wolves. Previous research has suggested that species that rely on patchily distributed, uncertain food sources are generally more risk-prone. For example, chimpanzees, which feed on fruit trees and hunt for monkeys, have been shown to be more risk-prone than bonobos, which rely more on terrestrial vegetation, a temporally and spatially reliable food source.

“Wild wolves hunt large ungulates – a risky strategy, not only because hunts often fail, but also because these prey animals can be dangerous – whereas free-ranging dogs, which make up 80% of the world’s dog population, feed mostly by scavenging on human refuse, a ubiquitous, unlimited resource. So dogs no longer need to take risks when searching for food, and this may have selected for a preference to play it safe,” concludes Marshall-Pescini.

Freya and Etu, a dog and wolf from the Wolf Centre. Credit: RoooBert Bayer

Etu and Ela, wolf pups at the Wolf Centre. Credit: RoooBert Bayer

Geronimo chose the “risky” option in 78% of trials. Credit: RoooBert Bayer

Antiphonal singing in indris

Selected coverage: Christian Science Monitor, Der Spiegel, National Geographic, Slate, Sciences et Avenir

“How to get noticed as a singer?” isn’t only a concern for young people aspiring to a career in the music industry. Young indris, critically endangered lemurs from Madagascar, sing in antiphony with their choirmates to increase their chances of getting noticed by rival groups, according to a new study in Frontiers in Neuroscience.

Indris (Indri indri) are one of the few species of primates that sing. They live only in the eastern rainforests of Madagascar, a habitat threatened by illegal logging. They live in small groups, which generally consist of a dominant female and male, their immature offspring, and one or more low-ranking young adults. Both females and males sing, and their songs play an important role in territorial defense and group formation.

In the new study, researchers from Italy, Germany, and Madagascar recorded 496 indri songs and analyzed their timing, rhythm, and pitch. The research is part of a long-term study on the ecology of indris in the vicinity of Andasibe-Mantadia National Park and the Maromizaha Forest, eastern Madagascar.

The researchers show that group members carefully coordinate their singing. As soon as one indri starts to sing, all group members over two years old typically join in. Indris also tend to copy each other’s rhythm, synchronizing their notes.

“The chorus songs of the indri start with roars that probably serve as attention-getters for the other singers and continue with modulated notes that are often grouped into phrases. In these phrases the indris give a high-frequency note at the beginning, and then the following ones descend gradually in frequency,” says Marco Gamba, a Senior Researcher at the Department of Life Sciences and Systems Biology of the University of Turin, Italy.

“Synchronized singing produces louder songs, and this may help to defend the group’s territory from rival groups. Singing is interpreted as a kind of investment, which may help to provide conspecifics with information on the strength of the pair bond and the presence of potential partners,” says doctoral student Giovanna Bonadonna, who is one of the co-authors.

There is one exception to this pattern, however: young, lower-ranking individuals have a strong preference for singing in antiphony rather than synchrony with the rest of the chorus, alternating their notes with those sung by the dominant pair. Gamba and colleagues propose that this is a tactic that lets low-ranking indris maximize their solitary singing and emphasize their individual contribution to the song.

“Synchronized singing doesn’t allow a singer to advertise his or her individuality, so it makes sense that young, low-ranking indris sing in antiphony. This lets them advertise their fighting ability to members of other groups and signal their individuality to potential sexual partners,” says Bonadonna.

“Indris are indeed good candidates for further investigations into the evolution of vocal communication. The next steps in our studies will be to understand whether the acoustic structure of the units allows individual recognition and whether genetics plays a role in the singing structure,” says Professor Cristina Giacoma from the Department of Life Sciences and Systems Biology, the study’s final author.


EurekAlert! PR: http://www.eurekalert.org/pub_releases/2016-06/f-asi060716.php

Study: http://journal.frontiersin.org/article/10.3389/fnins.2016.00249/full


Fifteen shades of photoreceptor in a butterfly’s eye

Selected coverage: ABC, Christian Science Monitor, Science Magazine, Süddeutsche Zeitung

When researchers studied the eyes of Common Bluebottles, a species of swallowtail butterfly from Australasia, they were in for a surprise. These butterflies have large eyes and use their blue-green iridescent wings for visual communication – evidence that their vision must be excellent. Even so, no-one expected to find that Common Bluebottles (Graphium sarpedon) have at least 15 different classes of “photoreceptors” — light-detecting cells comparable to the rods and cones in the human eye. Previously, no insect was known to have more than nine.

“We have studied color vision in many insects for many years, and we knew that the number of photoreceptors varies greatly from species to species. But this discovery of 15 classes in one eye was really stunning,” says Kentaro Arikawa, Professor of Biology at Sokendai (the Graduate University for Advanced Studies), Hayama, Japan and lead author of the study.

Having multiple classes of photoreceptors is indispensable for seeing color. Each class is stimulated most strongly by light of certain wavelengths, and only weakly, or not at all, by others. By comparing the information received from the different photoreceptor classes, the brain is able to distinguish colors.
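As a rough illustration of that comparison logic, the sketch below models a few photoreceptor classes as simple bell-shaped sensitivity curves. The peaks, widths, and class names are invented for illustration and are not the butterfly’s measured sensitivities.

```python
# A toy sketch (not from the study) of why several photoreceptor classes allow
# color discrimination: lights of different wavelength excite the classes in
# different ratios, even if their overall brightness is matched.
import math

def sensitivity(peak_nm, wavelength_nm, width_nm=40.0):
    """Hypothetical bell-shaped spectral sensitivity of one photoreceptor class."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

# Three made-up classes peaking in the blue, green and red parts of the spectrum.
peaks = {"blue": 450, "green": 530, "red": 620}

def responses(wavelength_nm):
    """Response of each class to a monochromatic light of the given wavelength."""
    return {name: round(sensitivity(peak, wavelength_nm), 3)
            for name, peak in peaks.items()}

# Two lights of equal intensity produce very different patterns across the
# classes; comparing the pattern is what identifies the color.
print(responses(500))   # mostly green, some blue, almost no red
print(responses(620))   # strong red, almost no blue or green
```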

Through physiological, anatomical and molecular experiments, Arikawa and colleagues were able to determine that Common Bluebottles have 15 photoreceptor classes, one stimulated by ultraviolet light, another by violet, three stimulated by slightly different blue lights, one by blue-green, four by green lights, and five by red lights.

Why do Common Bluebottles need so many classes of photoreceptor? After all, many other insects have only three classes of photoreceptor and yet have excellent color vision. Likewise, humans have only three classes of cones, enough to distinguish millions of colors.

Arikawa and his colleagues believe that Common Bluebottles use only four classes of photoreceptor for routine color vision, and use the other eleven to detect very specific stimuli in the environment, for example fast-moving objects against the sky or colorful objects hidden among vegetation. A similar system is found in another butterfly previously studied by the same research group, the Asian swallowtail Papilio xuthus, which has six photoreceptor classes.

“Butterflies may have a slightly lower visual acuity than ourselves, but in many respects they enjoy a clear advantage over us: they have a very large visual field, a superior ability to pursue fast-moving objects and can even distinguish ultraviolet and polarized light. Isn’t it fascinating to imagine how these butterflies see their world?” says Arikawa.

The results are published in the open-access journal Frontiers in Ecology and Evolution.

EurekAlert! PR: http://www.eurekalert.org/pub_releases/2016-03/f-fso030116.php

Study: http://journal.frontiersin.org/article/10.3389/fevo.2016.00018/full

Linguists discover the best word order for giving directions

Selected coverage: NY Times, Christian Science Monitor, The Telegraph, The Independent, Daily Mail

Good directions start — literally — with the most obvious

To give good directions, it is not enough to say the right things: saying them in the right order is also important, shows a study in Frontiers in Psychology. Sentences that start with a prominent landmark and end with the object of interest work better than sentences where this order is reversed. These results could have direct applications in the fields of artificial intelligence and human-computer interaction.

“Here we show for the first time that people are quicker to find a hard-to-see person in an image when the directions mention a prominent landmark first, as in ‘Next to the horse is the man in red’, rather than last, as in ‘The man in red is next to the horse’,” says Alasdair Clarke from the School of Psychology at the University of Aberdeen, the lead author of the study.

Clarke et al. asked volunteers to focus on a particular human figure within the visually cluttered cartoons of the ‘Where’s Wally?’ children’s books (called ‘Where’s Waldo?’ in the USA and Canada). The volunteers were then instructed to explain, in their own words, how to find that figure quickly — no trivial task, as each cartoon contained hundreds of items. As expected, the volunteers often opted to indicate the position of the human figure relative to a landmark object in the cartoon, such as a building.

Example of a “Where’s Wally?” image used in the experiment

What was surprising, however, was that they tended to use a different word order depending on the visual properties of the landmark. Landmarks that stood out strongly from the background — as measured with imaging software — were statistically likely to be mentioned at the beginning of the sentence, while landmarks that stood out little were typically mentioned at its end. But if the target figure itself stood out strongly, most participants mentioned that first.

In a separate experiment, the researchers show that the most frequently used word order, ‘landmark first, target second’, is also the most effective: people who heard descriptions with this order needed less time, on average, to find the human figure in the cartoon than people who heard descriptions with the reverse order.

These results suggest that people who give directions keep a mental record of which objects in an image are easy to see, prefer to use these as landmarks, and treat them differently than harder-to-see objects when planning the word order of descriptions. This strategy helps listeners to find the target quickly.

“Listeners start processing the directions before they’re finished, so it’s good to give them a head start by pointing them towards something they can find quickly, such as a landmark. But if the target your listener is looking for is itself easy to see, then you should just start your directions with that,” concludes co-author Micha Elsner, Assistant Professor at the Department of Linguistics, Ohio State University.

These results could help to develop computer algorithms for automatic direction-giving. “A long-term goal is to build a computer direction-giver that could automatically detect objects of interest in the scene and select the landmarks that would work best for human listeners,” says Clarke.
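As a rough illustration of that selection step, the sketch below encodes the landmark-first ordering rule suggested by the findings. The salience scores, threshold, and function are hypothetical, invented here for illustration; the study itself does not publish an algorithm.

```python
# A minimal sketch of the word-order rule suggested by the findings, using
# hypothetical salience scores (the study measured salience with imaging
# software; the scores and threshold below are invented for illustration).

def describe(target: str, target_salience: float,
             landmark: str, landmark_salience: float,
             salience_threshold: float = 0.7) -> str:
    """Return a direction phrase, putting the easiest-to-see object first."""
    if target_salience >= salience_threshold:
        # If the target itself stands out, lead with it.
        return f"{target} is next to {landmark}."
    # Otherwise lead with a prominent landmark, giving the listener a head start.
    return f"Next to {landmark} is {target}."

print(describe("the man in red", 0.2, "the horse", 0.9))
# -> "Next to the horse is the man in red."
print(describe("the man in red", 0.9, "the horse", 0.4))
# -> "the man in red is next to the horse."
```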

EurekAlert! PR: http://www.eurekalert.org/pub_releases/2015-12/f-ldt120315.php

Study: http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01793/full