It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication, which have focused on a single modality at a time. However, the fields currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, and social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Mental imagery is a highly common component of everyday cognitive functioning. While substantial progress is being made in clarifying this fundamental human function, much is still unclear or unknown. A more comprehensive account of mental imagery would be gained by examining individual differences in age, sex, and background experience in an activity and their association with imagery in different modalities and intentionality levels. The current online study combined multiple imagery self-report measures in a sample (n = 279) with a substantial age range (18-65 years), aiming to identify whether age, sex, or background experience in sports, music, or video games were associated with aspects of imagery in the visual, auditory, or motor stimulus modality and voluntary or involuntary intentionality level. The findings show weak positive associations between age and increased vividness of voluntary auditory imagery and decreased involuntary musical imagery frequency, weak associations between being female and more vivid visual imagery, and relations of greater music and video game experience with higher involuntary musical imagery frequency. Moreover, all imagery stimulus modalities were associated with each other, for both intentionality levels, except involuntary musical imagery frequency, which was only related to higher voluntary auditory imagery vividness. These results replicate previous research but also contribute new insights, showing that individual differences in age, sex, and background experience are associated with various aspects of imagery such as modality, intentionality, vividness, and frequency. The study's findings can inform the growing domain of applications of mental imagery to clinical and pedagogical settings.
Auditory cues are frequently used to support movement learning and rehabilitation, but the neural basis of this behavioural effect is not yet clear. We investigated the microstructural neuroplasticity effects of adding musical cues to a motor learning task. We hypothesised that music-cued, left-handed motor training would increase fractional anisotropy (FA) in the contralateral arcuate fasciculus, a fibre tract connecting auditory, pre-motor and motor regions. Thirty right-handed participants were assigned to a motor learning condition either with (Music group) or without (Control group) musical cues. Participants completed 20 minutes of training three times per week over four weeks. Diffusion tensor MRI and probabilistic neighbourhood tractography identified FA, axial diffusivity (AD) and radial diffusivity (RD) before and after training. Results revealed that FA increased significantly in the right arcuate fasciculus of the Music group only, as hypothesised, with trends for AD to increase and RD to decrease, a pattern of results consistent with activity-dependent increases in myelination. No significant changes were found in the ipsilateral left arcuate fasciculus of either group. This is the first evidence that adding musical cues to movement learning can induce rapid microstructural change in white matter pathways in adults, with potential implications for therapeutic clinical practice.
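For readers unfamiliar with the three diffusion scalars named above, they are standard functions of the diffusion tensor's eigenvalues. The sketch below computes FA, AD and RD from the three eigenvalues using their textbook definitions; the function name and example eigenvalues are illustrative, not taken from the study's pipeline.

```python
import math

def diffusion_scalars(l1, l2, l3):
    """Compute FA, AD and RD from the eigenvalues of a diffusion tensor.

    l1 >= l2 >= l3 are the tensor eigenvalues (e.g. in mm^2/s).
    FA ranges from 0 (isotropic diffusion) to 1 (fully anisotropic);
    AD is the principal eigenvalue; RD is the mean of the two minor ones.
    """
    # Standard fractional anisotropy formula
    fa = math.sqrt(
        0.5 * ((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
        / (l1 ** 2 + l2 ** 2 + l3 ** 2)
    )
    ad = l1                 # axial diffusivity
    rd = (l2 + l3) / 2.0    # radial diffusivity
    return fa, ad, rd
```

With eigenvalues typical of coherent white matter, e.g. `diffusion_scalars(1.7e-3, 0.3e-3, 0.3e-3)`, FA comes out around 0.8; the reported pattern (FA up, AD up, RD down) implies diffusion becoming more directional along the tract.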
Visualizing acoustic features of speech has proven helpful in speech therapy; however, it is as yet unclear how to create intuitive and fitting visualizations. To better understand the mappings from speech sound aspects to visual space, a large web-based experiment (n = 249) was performed to evaluate spatial parameters that may optimally represent pitch and loudness of speech. To this end, five novel animated visualizations were developed and presented in pairwise comparisons, together with a static visualization. Pitch and loudness of speech were each mapped onto either the vertical (y-axis) or the size (z-axis) dimension, or combined (with size indicating loudness and vertical position indicating pitch height) and visualized as an animation along the horizontal dimension (x-axis) over time. The results indicated that firstly, there is a general preference towards the use of the y-axis for both pitch and loudness, with pitch ranking higher than loudness in terms of fit. Secondly, the data suggest that representing both pitch and loudness combined in a single visualization is preferred over visualization in only one dimension. Finally, the z-axis, although not preferred, was evaluated as corresponding better to loudness than to pitch. This relation between sound and visual space has not been reported previously for speech sounds, and elaborates earlier findings on musical material. In addition to elucidating more general mappings between auditory and visual modalities, the findings provide us with a method of visualizing speech that may be helpful in clinical applications such as computerized speech therapy, or other feedback-based learning paradigms.
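The preferred combined mapping described above (pitch height → vertical position, loudness → marker size, time → horizontal position) can be sketched as a simple per-frame transformation. This is a hypothetical illustration, not the study's implementation: the function name, min-max normalisation scheme, and parameter ranges are assumptions.

```python
def map_speech_to_visual(pitches_hz, loudness_db,
                         y_range=(0.0, 1.0), size_range=(4.0, 40.0)):
    """Map per-frame pitch and loudness contours to visual parameters.

    Illustrative sketch of the combined mapping: pitch -> y-axis position,
    loudness -> marker size (z-axis), frame index -> x-axis (time).
    Both contours are min-max normalised within the utterance (an assumed
    choice; other scalings, e.g. semitones or sones, are equally plausible).
    """
    def normalise(values, lo, hi):
        vmin, vmax = min(values), max(values)
        span = (vmax - vmin) or 1.0  # guard against flat contours
        return [lo + (v - vmin) / span * (hi - lo) for v in values]

    xs = list(range(len(pitches_hz)))            # time on the x-axis
    ys = normalise(pitches_hz, *y_range)         # pitch height on the y-axis
    sizes = normalise(loudness_db, *size_range)  # loudness as marker size
    return xs, ys, sizes
```

Feeding the returned triples to any animated scatter plot (one point revealed per frame) reproduces the combined-dimension condition in spirit: a rising, loudening utterance drifts upward while its markers grow.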