Many digital reading applications have built-in features to control the presentation flow of texts by segmenting those texts into smaller linguistic units. Whether and how these segmentation techniques affect the readability of texts is largely unknown. Against this background, the current study examined a recent proposal that a sentence-by-sentence presentation mode improves reading comprehension in beginning readers because this presentation mode encourages them to engage in more effortful sentence wrap-up processing. In a series of self-paced reading and eye-tracking experiments with primary school pupils as participants (6–9 years old; n = 134), reading speed and text comprehension were assessed in a full-page control condition, in which texts were presented in their entirety, and in an experimental condition in which texts were presented in sentence-by-sentence segments. The results showed that text comprehension scores were higher for segmented texts than for full-page texts. Furthermore, in the final word regions of the sentences, the segmented layout induced longer reading times than the full-page layout did. However, mediation analyses revealed that these inflated reading times had no influence, or even a disruptive one, on text comprehension. This indicates that the observed comprehension advantage for segmented texts cannot be attributed to more effortful sentence wrap-up. A more general implication of these findings is that the segmentation features of reading applications should be used with caution (e.g., in educational or professional settings) because it is unclear how they affect the perceptual and cognitive mechanisms that underlie reading.
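To make the mediation logic concrete, the following is a minimal sketch of a regression-based mediation analysis in Python, assuming a hypothetical per-trial data frame with a layout condition, a sentence-final (wrap-up) reading time, and a comprehension score. The column names, the simple OLS setup, and the percentile bootstrap are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch of a regression-based mediation analysis, assuming a
# hypothetical per-trial data frame with columns `condition` (0 = full page,
# 1 = segmented), `wrapup_rt` (reading time on the sentence-final word region),
# and `comprehension` (comprehension score).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def mediation_effects(df: pd.DataFrame) -> dict:
    # Path a: effect of layout condition on wrap-up reading time.
    a = smf.ols("wrapup_rt ~ condition", data=df).fit().params["condition"]
    # Paths b and c': effect of wrap-up time on comprehension, controlling for condition.
    m = smf.ols("comprehension ~ condition + wrapup_rt", data=df).fit()
    b, c_prime = m.params["wrapup_rt"], m.params["condition"]
    return {"indirect": a * b, "direct": c_prime, "total": a * b + c_prime}

def bootstrap_indirect(df: pd.DataFrame, n_boot: int = 2000, seed: int = 0):
    # Percentile bootstrap confidence interval for the indirect (mediated) effect.
    rng = np.random.default_rng(seed)
    boots = [
        mediation_effects(
            df.sample(len(df), replace=True,
                      random_state=int(rng.integers(1_000_000_000)))
        )["indirect"]
        for _ in range(n_boot)
    ]
    return np.percentile(boots, [2.5, 97.5])
```

Under this setup, an indirect effect near zero (or with a sign opposite to the total effect) would correspond to the reported finding that longer wrap-up times do not explain the comprehension advantage.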
Objectives: According to the Social Information Processing model, the first two steps by which children come to understand the emotional behavior of others are emotion encoding and emotion interpretation. Access to daily social interactions is a prerequisite for acquiring these skills, and barriers to communication such as hearing loss impede this access. It could therefore be challenging for children with hearing loss to develop these two skills. The present study aimed to understand the effect of prelingual hearing loss on children's emotion understanding by examining how they encode and interpret nonverbal emotional cues in dynamic social situations. Design: Sixty deaf or hard-of-hearing (DHH) children and 71 typically hearing (TH) children (3-10 years old, mean age 6.2 years, 54% girls) watched videos of prototypical social interactions between a target person and an interaction partner. At the end of each video, the target person did not face the camera, so that their facial expression was out of view for participants. Afterward, participants were asked to interpret the emotion they thought the target person felt at the end of the video. As participants watched the videos, their encoding patterns were recorded with an eye tracker, which measured the amount of time participants spent looking at the target person's head and body and at the interaction partner's head and body. These regions were preselected for analysis because they had been found to provide cues for interpreting people's emotions and intentions. Results: When encoding emotional cues, both the DHH and TH children spent more time looking at the head of the target person and at the head of the interaction partner than at the body or actions of either person. Yet, compared with the TH children, the DHH children looked at the target person's head for a shorter time (b = -0.03, p = 0.030), and at the target person's body (b = 0.04, p = 0.006) and at the interaction partner's head (b = 0.03, p = 0.048) for a longer time. The DHH children were also less accurate in interpreting emotions than their TH peers (b = -0.13, p = 0.005), and their lower scores were associated with their distinctive encoding pattern. Conclusions: The findings suggest that children with limited auditory access to the social environment tend to collect visually observable information to compensate for ambiguous emotional cues in social situations. These children may have developed this strategy to support their daily communication. Yet, to fully benefit from such a strategy, they may need extra support in building social-emotional knowledge.
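As an illustration of the kind of encoding measure described above, the following is a minimal sketch of how dwell-time proportions on rectangular areas of interest (AOIs) could be computed from a fixation-level eye-tracking export. The column names and the pixel coordinates of the AOIs are illustrative assumptions; the study's actual AOIs (target head and body, partner head and body) were defined for each video.

```python
# Minimal sketch: per-trial dwell-time proportions on predefined AOIs,
# assuming a fixation table with columns `x`, `y`, and `duration_ms`.
import pandas as pd

# Hypothetical AOIs as (x_min, y_min, x_max, y_max) in screen pixels.
AOIS = {
    "target_head": (100, 50, 300, 250),
    "target_body": (100, 250, 300, 700),
    "partner_head": (700, 50, 900, 250),
    "partner_body": (700, 250, 900, 700),
}

def label_aoi(x: float, y: float) -> str:
    # Assign a fixation to the first AOI whose rectangle contains it.
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return "outside"

def dwell_proportions(fixations: pd.DataFrame) -> pd.Series:
    # Proportion of total fixation time spent in each AOI for one trial.
    labelled = fixations.assign(
        aoi=[label_aoi(x, y) for x, y in zip(fixations["x"], fixations["y"])]
    )
    dwell = labelled.groupby("aoi")["duration_ms"].sum()
    return dwell / dwell.sum()
```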
In a social environment composed mostly of people with typical hearing, deaf or hard-of-hearing (DHH) children experience social interactions differently from their typically hearing (TH) peers, which could guide them towards different patterns for processing other people's emotions. This thesis aimed to unravel whether hearing status affects how children encode, interpret, and react to others' emotions in a social context, and whether their responses are associated with psychosocial functioning, using a variety of measures that included eye tracking, pupillometry, behavioral tasks, parent reports, and longitudinal follow-up. DHH children's skills for perceiving others' basic emotions were on par with those of their TH peers. Better emotional functioning was associated with better psychosocial functioning to a similar degree in DHH and TH children. Yet, DHH children still faced difficulties when they had to process an emotion using adequate knowledge about social rules and the causes of emotions. Moreover, DHH children used a visual cue-based encoding strategy to compensate for ambiguous or unavailable information in social situations, and they recruited more cognitive resources to process unfamiliar emotional expressions. The findings underscore the need to examine possible qualitative differences between typical and atypical development. Such individual differences may reflect compensatory strategies that support daily living, or they may signal a need for support in a particular domain.
There is a lack of reliable, repeatable, and non-invasive clinical endpoints for investigating treatments for intellectual disability (ID). The aim of this study was to explore a novel approach to developing new endpoints for neurodevelopmental disorders, in this case for ARID1B-related ID. Twelve subjects with ARID1B-related ID and twelve age-matched controls were included in this observational case-control study. Subjects performed a battery of non-invasive neurobehavioral and neurophysiological assessments on two study days. Test domains included cognition, executive functioning, and eye tracking. Furthermore, several electrophysiological assessments were performed, and subjects wore a smartwatch (Withings® Steel HR) for 6 days. Tests were systematically evaluated with respect to tolerability, variability, repeatability, difference from the control group, and correlation with traditional endpoints. Animal fluency, adaptive tracking, body sway, and smooth pursuit eye movements were assessed as fit-for-purpose on all criteria, while physical activity, heart rate, and sleep parameters showed promise as well. The event-related potential waveform of the passive oddball and visual evoked potential tasks showed discriminatory ability, but the EEG assessments were perceived as extremely burdensome. This approach successfully identified fit-for-purpose candidate endpoints for ARID1B-related ID and possibly for other neurodevelopmental disorders. As a next step, the results could be replicated in different ID populations, or the assessments could be included as exploratory endpoints in interventional trials in ARID1B-related ID.
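As an illustration of how repeatability across the two study days could be quantified for a candidate endpoint, below is a minimal sketch of a two-way, consistency-based intraclass correlation, ICC(3,1). The data layout (one row per subject, one score per day) and the choice of ICC variant are assumptions for illustration, not necessarily the metric used in the study.

```python
# Minimal sketch: test-retest repeatability of a candidate endpoint as ICC(3,1)
# (two-way mixed effects, consistency, single measurement), assuming arrays of
# per-subject scores from study day 1 and study day 2.
import numpy as np

def icc_consistency(day1: np.ndarray, day2: np.ndarray) -> float:
    scores = np.column_stack([day1, day2])          # subjects x sessions
    n, k = scores.shape
    subj_means = scores.mean(axis=1)
    sess_means = scores.mean(axis=0)
    grand = scores.mean()
    # Two-way ANOVA decomposition: subjects, sessions, residual error.
    ss_subjects = k * np.sum((subj_means - grand) ** 2)
    ss_sessions = n * np.sum((sess_means - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ss_error = ss_total - ss_subjects - ss_sessions
    ms_subjects = ss_subjects / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    # Shrout & Fleiss ICC(3,1).
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)
```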
Both prosociality in group contexts and morality are important aspects of social life and of living together with others in society. In both domains, understanding the cognitive processes underlying decisions is argued to be a crucial step in designing evidence-based interventions that address not only choice outcomes but also the driving forces behind those choices. Throughout this dissertation, eye tracking is used as a fine-grained and unobtrusive measure of the cognitive processes at work during decision making. Chapter 2 investigated active ignorance of others' group membership. Chapter 3 presented two eye-tracking studies in which the cognitive processes of prosociality in intergroup contexts were investigated. Chapter 4 reported a study investigating the cognitive processes underlying moral decisions, speaking to the theoretical debate in moral decision making and advocating a choice-discriminability perspective over the dual-process theory of moral judgment. The work demonstrates the merit of further illuminating the inner workings of the "black box" of decision making, by using process-tracking techniques to gain insights about decision processes that would be difficult to obtain from choices alone. Moreover, the work makes a methodological contribution by developing a standardizable and incentivized moral dilemma task.
Although affect has been found to be an integral part of decision making, it is largely ignored in the consumer choice modeling literature. Rational choice assumptions continue to dominate discrete choice experiments (DCEs). One reason why affect has been ignored is that immediate affect during the choice process cannot be "seen" or measured easily. Consequently, most prior work on affect relies on self-reports, which may be unreliable and amount to mere self-justifications. Thus, we do not know whether immediate affect actually plays a key role in consumer choices. We addressed this gap by testing whether immediate affect can be observed in fairly trivial choices, and by trying to identify the drivers of, and contexts in which, affect occurs. We used a novel combination of eye tracking and facial electromyography (fEMG) to observe and measure integral affect for each choice option in a DCE. The results indicate the feasibility of combining eye tracking and fEMG during DCEs and the existence of affect in stated choice experiments for fairly trivial product categories, and they provide insights into the drivers and contexts of affective choice processes. Among other findings, best and worst task frames were shown to influence integral affect in DCEs. The findings stress the need for future joint investigations of cognitive and affective processes in consumer choice tasks. A better understanding of these processes should yield valuable insights into how real-time marketing actions influence decisions, ways to improve the predictive performance of choice models, and novel ways to help consumers and organizations make better decisions.
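To give a concrete, if simplified, picture of how fixation-contingent fEMG scoring could work, the sketch below averages a (preprocessed) fEMG amplitude in a short window locked to each fixation on a choice option and then aggregates per option. The window length, column names, and aggregation rule are illustrative assumptions rather than the authors' published pipeline.

```python
# Minimal sketch: per-option integral affect scores from fixation-locked fEMG,
# assuming a fixation table with columns `onset_ms` and `option`, and an fEMG
# table with columns `t_ms` and `amplitude` (already rectified and filtered).
import numpy as np
import pandas as pd

def femg_per_option(fixations: pd.DataFrame, femg: pd.DataFrame,
                    window_ms: float = 500.0) -> pd.Series:
    scores: dict[str, list[float]] = {}
    for _, fix in fixations.iterrows():
        # Average fEMG amplitude in a window starting at fixation onset.
        mask = (femg["t_ms"] >= fix["onset_ms"]) & \
               (femg["t_ms"] < fix["onset_ms"] + window_ms)
        if mask.any():
            scores.setdefault(fix["option"], []).append(
                femg.loc[mask, "amplitude"].mean()
            )
    # One aggregate affect score per choice option.
    return pd.Series({opt: float(np.mean(vals)) for opt, vals in scores.items()})
```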
By using innovative paradigms, this thesis provides convincing evidence that action-effect learning, and sensorimotor processes in general, play a crucial role in the development of action perception and action production in infancy. This finding was further generalized to sequential action. Furthermore, the thesis suggests that means-selection information, ends-selection information, and action-effect knowledge together feed into a unitary concept of a goal. Both of these findings have the potential to generate interesting new research questions.
In the digital age, multimedia play an increasingly important role, and Leiden University is investigating the added value of multimedia materials in preparatory reading education. This dissertation research focused on multimedia additions to picture books. Alongside or instead of printed text, these books offer spoken text, allowing kindergartners to "read" them independently. Furthermore, the static pictures were replaced by film footage and supplemented with well-chosen sounds and music. Traditional picture books are an important preparation for later text reading, and books with multimedia additions might fulfil this function to a lesser extent: film images that almost speak for themselves could reduce the motivation to listen carefully to the text, especially for children with large gaps in story comprehension, such as the Moroccan and Turkish kindergartners in this study. However, little empirical evidence was found for this assumption. On the contrary, picture books with multimedia additions provide an extra boost to both story comprehension and vocabulary. This research shows that the explanation lies in the match between images and language. Compared with traditional picture books, multimedia picture books increase the chance of a good match by using various techniques to draw attention to the details that are mentioned in the text at that moment.