This dissertation investigates the early recognition of persistent somatic symptoms (PSS) in primary care. A stepwise approach was used to map the optimal methods for re-using primary care records for predictive modeling of PSS. This is important since up to 10% of the general population experiences PSS. Moreover, general practitioners (GPs) often encounter difficulties in recognizing PSS, which may delay adequate intervention and subsequently result in an unnecessarily high burden on the patient and the health care system. The findings from this dissertation show that a complex interplay between factors from all biopsychosocial domains contributes to PSS onset. Survey results show that GPs differ in their methods of PSS registration. Many GPs indicate missing an unambiguous classification scheme and report needing more support, tools, and/or education for PSS-related consultations. Predictive modeling of different PSS syndromes shows both overlapping and syndrome-specific predictors. Early predictive modeling of the broad spectrum of PSS shows moderate predictive accuracy based on seven approaches for candidate-predictor selection, including theory-driven and temporal and non-temporal data-driven approaches. In conclusion, this dissertation provides comprehensive evidence of the complexity of identifying PSS. Furthermore, it indicates that simple data-driven approaches could support PSS classification in primary care, although this should be combined with a multidisciplinary care approach.
Stroke is one of the leading causes of disability and death worldwide. Prevention of stroke is therefore essential. Effective prevention should be tailored to, among other things, the clinical characteristics, lifestyle, and environment of the individual. This is also known as precision prevention. An important example illustrating the need for precision prevention is the existence of sex differences in stroke occurrence. In practice, only traditional risk factors (such as smoking and hypertension) are included when predicting stroke risk; women-specific risk factors are not yet routinely included. As a result, women with an increased risk of stroke may be missed, which also prevents timely initiation of preventive treatment. In this thesis, I tried to lay the foundation for precision prevention of stroke in women. Part I discussed the pathophysiology underlying women-specific risk factors for stroke, and gender differences in the clinical presentation of stroke. I found that the mechanisms underlying the relationship between women-specific risk factors and stroke, in particular the relationship between migraine and cerebral infarction, seem to be particularly significant in the childbearing phase of life. In Part II, I described how health data from the EHR can be used to develop prediction models for the risk of myocardial infarction or stroke specifically for women under 50 years of age, and found that women-specific risk factors can add value to the predictions. However, there is still a long way to go to actually implement these models in practice, such as testing them on new datasets and complying with current laws and regulations for safe application.
This dissertation studies the construction of Chinese nationalism by the Chinese government and media companies through mass communication of government-staged and abrupt events in the reform era between 2008 and 2012. It examines how Chinese audiences express online nationalist sentiments, representing whether the communication of media events meets the social demands established by “dream discourses.” Using mixed qualitative and quantitative methods, it focuses on two case studies: the 2008 Beijing Olympics and the 2012 Diaoyu (Senkaku) Islands incident. The dissertation finds that these mass media events play a significant role in shaping Chinese state nationalism and popular nationalism. The related mass communication helps the Chinese government increase or, at least, maintain its legitimacy through various strategies. The findings also show that as Chinese audiences have increasingly voiced themselves in the information age, the government will continue to treat the robust, uneasy entanglement of nationalism, globalization, and digital media with greater caution for the sake of its social development and stability.
Strong evidence in support of guidelines for traumatic brain injury (TBI) is lacking. Large-scale observational studies may offer a complementary source of evidence to clinical trials to improve the care and outcome of patients with TBI. They are, however, challenging to execute. In this review, we aim to characterize the opportunities and challenges of large-scale collaborative research in neurotrauma. We use the setup and conduct of Collaborative European Neurotrauma Effectiveness Research in TBI (CENTER-TBI) as an illustrative example. We highlight the importance of building a team and of developing a network for younger researchers, thus investing in the future. We involved investigators early in the design phase and recognized their efforts in a group contributor list on all publications. We found, however, that translation to academic credits often failed, and we suggest that the current system of academic credits be critically appraised. We found substantial variability in consent procedures for participant enrollment within and between countries. Overall, obtaining approvals typically required 4-6 months, with outliers up to 18 months. Research costs varied considerably across Europe and should be defined per center. We substantially underestimated the costs of data curation, and we suggest that 15-20% of the budget be reserved for this purpose. Streamlining analyses and accommodating external research proposals demanded a structured approach. We implemented a systematic inventory of study plans and found this effective in maintaining oversight and in promoting collaboration between research groups. Ensuring good use of the data was a prominent feature in the review of external proposals.
Multiple interactions occurred with industrial partners, mainly related to biomarkers and neuroimaging, and resulted in various formal collaborations, substantially extending the scope of CENTER-TBI. Overall, CENTER-TBI has been productive, with over 250 international peer-reviewed publications. We have ensured mechanisms to maintain the infrastructure and continued analyses. We see potential for individual patient data meta-analyses in connection to other large-scale projects. Our collaboration with Transforming Research and Clinical Knowledge in TBI (TRACK-TBI) has taught us that although standardized data collection and coding according to common data elements can facilitate such meta-analyses, further data harmonization is required for meaningful results. Both CENTER-TBI and TRACK-TBI have demonstrated the complexity of the conduct of large-scale collaborative studies that produce high-quality science and new insights.
Velden, J. van der; Asselbergs, F.W.; Bakkers, J.; Batkai, S.; Bertrand, L.; Bezzina, C.R.; ... ; Thum, T. 2022
Cardiovascular diseases represent a major cause of morbidity and mortality, necessitating research to improve diagnostics and to discover and test novel preventive and curative therapies, all of which warrant experimental models that recapitulate human disease. The translation of basic science results to clinical practice is a challenging task, in particular for complex conditions such as cardiovascular diseases, which often result from multiple risk factors and co-morbidities. This difficulty might lead some individuals to question the value of animal research, citing the translational 'valley of death', which largely reflects the fact that studies in rodents are difficult to translate to humans. This is also influenced by the fact that new, human-derived in vitro models can recapitulate aspects of disease processes. However, it would be a mistake to think that animal models cannot provide a vital step in the translational pathway, as they do provide important pathophysiological insights into disease mechanisms, particularly at an organ and systemic level. While stem cell-derived human models have the potential to become key in testing the toxicity and effectiveness of new drugs, we need to be realistic and carefully validate all new human-like disease models. In this position paper, we highlight recent advances in trying to reduce the number of animals for cardiovascular research, ranging from stem cell-derived models to in situ modelling of heart properties, bioinformatic models based on large datasets, and improved current animal models, which show clinically relevant characteristics observed in patients with a cardiovascular disease.
We aim to provide a guide to help researchers in their experimental design to translate bench findings to clinical routine, taking replacement, reduction and refinement (the 3Rs) as a guiding concept.
Background The Coronavirus disease 2019 (COVID-19) pandemic has underlined the urgent need for reliable, multicenter, and full-admission intensive care data to advance our understanding of the course of the disease and investigate potential treatment strategies. In this study, we present the Dutch Data Warehouse (DDW), the first multicenter electronic health record (EHR) database with full-admission data from critically ill COVID-19 patients. Methods A nation-wide data sharing collaboration was launched at the beginning of the pandemic in March 2020. All hospitals in the Netherlands were asked to participate and share pseudonymized EHR data from adult critically ill COVID-19 patients. Data included patient demographics, clinical observations, administered medication, laboratory determinations, and data from vital sign monitors and life support devices. Data sharing agreements were signed with participating hospitals before any data transfers took place. Data were extracted from the local EHRs with prespecified queries and combined into a staging dataset through an extract-transform-load (ETL) pipeline. In the consecutive processing pipeline, data were mapped to a common concept vocabulary and enriched with derived concepts. Data validation was a continuous process throughout the project. All participating hospitals have access to the DDW. Within legal and ethical boundaries, data are available to clinicians and researchers. Results Out of the 81 intensive care units in the Netherlands, 66 participated in the collaboration, 47 have signed the data sharing agreement, and 35 have shared their data. Data from 25 hospitals have passed through the ETL and processing pipeline.
Currently, 3464 patients are included in the DDW, both from wave 1 and wave 2 in the Netherlands. More than 200 million clinical data points are available. Overall ICU mortality was 24.4%. Respiratory and hemodynamic parameters were most frequently measured throughout a patient's stay. For each patient, all administered medication and their daily fluid balance were available. Missing data are reported for each descriptive statistic. Conclusions In this study, we show that EHR data from critically ill COVID-19 patients may be lawfully collected and can be combined into a data warehouse. These initiatives are indispensable to advance medical data science in the field of intensive care medicine.
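The extract-transform-load and concept-mapping steps described above can be sketched as follows. This is a minimal illustration, not the DDW's actual pipeline: the hospital names, local codes, and the mapping table are all hypothetical.

```python
# Hypothetical mapping from (hospital, local code) pairs to a common vocabulary,
# illustrating the "transform" step of an ETL pipeline for multicenter EHR data.
CONCEPT_MAP = {
    ("hospital_a", "HR"): "heart_rate",
    ("hospital_b", "pols"): "heart_rate",   # Dutch local label, same concept
    ("hospital_a", "CREAT"): "creatinine",
}

def transform(hospital, records):
    """Map one hospital's raw records onto the common concept vocabulary."""
    out = []
    for rec in records:
        concept = CONCEPT_MAP.get((hospital, rec["code"]))
        if concept is None:
            # In a real pipeline, unmapped local codes would be queued for review.
            continue
        out.append({"patient": rec["patient"], "concept": concept,
                    "value": rec["value"]})
    return out

def load(*hospital_batches):
    """Combine transformed per-hospital batches into one staging dataset."""
    staging = []
    for batch in hospital_batches:
        staging.extend(batch)
    return staging

raw_a = [{"patient": 1, "code": "HR", "value": 82}]
raw_b = [{"patient": 2, "code": "pols", "value": 75}]
staging = load(transform("hospital_a", raw_a), transform("hospital_b", raw_b))
```

The key design point sketched here is that each hospital only needs a local-to-common code mapping; once records share one vocabulary, they can be pooled and validated centrally.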
Maarseveen, T.D.; Maurits, M.P.; Niemantsverdriet, E.; Helm-van Mil, A.H.M. van der; Huizinga, T.W.J.; Knevel, R. 2021
Background Electronic health records (EHRs) offer a wealth of observational data. Machine-learning (ML) methods are efficient at data extraction, capable of processing the information-rich free-text physician notes in EHRs. The clinical diagnosis contained therein represents physician expert opinion and is more consistently recorded than classification criteria components. Objectives To investigate the overlap and differences between rheumatoid arthritis patients as identified either from EHR free-text through the extraction of the rheumatologist diagnosis using machine-learning (ML) or through manual chart review applying the 1987 and 2010 RA classification criteria. Methods Since EHR initiation, 17,662 patients have visited the Leiden rheumatology outpatient clinic. For ML, we used a support vector machine (SVM) model to identify those who were diagnosed with RA by their rheumatologist. We trained and validated the model on a random selection of 2000 patients, balancing PPV and sensitivity to define a cutoff, and assessed performance on a separate 1000 patients. We then deployed the model on our entire patient selection (including the 3000). Of those, 1127 patients had both a 1987 and 2010 EULAR/ACR criteria status at 1 year after inclusion into the local prospective arthritis cohort. In these 1127 patients, we compared the patient characteristics of RA cases identified with ML and those fulfilling the classification criteria. Results The ML model performed very well in the independent test set (sensitivity=0.85, specificity=0.99, PPV=0.86, NPV=0.99). In our selection of patients with both EHR and classification information, 373 were recognized as RA by ML and 357 and 426 fulfilled the 1987 or 2010 criteria, respectively.
Eighty percent of the ML-identified cases fulfilled at least one of the criteria sets. Neither demographic nor clinical parameters differed between the ML-extracted cases and those identified with the EULAR/ACR classification criteria. Conclusions With ML methods, we enable fast patient extraction from the vast EHR resource. Our ML algorithm accurately identifies patients diagnosed with RA by their rheumatologist. The resulting group of RA patients had a strong overlap with patients identified using the 1987 or 2010 classification criteria, and the baseline (disease) characteristics were comparable. ML-assisted case labeling enables high-throughput creation of inclusive patient selections for research purposes.
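As an illustration only (not the authors' actual pipeline), a free-text classifier of this kind can be sketched as a linear SVM over TF-IDF features, with the decision threshold tuned on validation data to balance PPV against sensitivity. The notes and labels below are synthetic.

```python
# Sketch of diagnosis extraction from free-text notes with a linear SVM.
# The training texts, labels, and threshold value are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

notes = [
    "rheumatoid arthritis, symmetric polyarthritis, started methotrexate",
    "diagnosis RA confirmed, anti-CCP positive",
    "osteoarthritis of the knee, no inflammatory signs",
    "gout flare in the first MTP joint",
] * 5  # repeated to give the toy model a few examples per class
labels = [1, 1, 0, 0] * 5  # 1 = rheumatologist diagnosis of RA

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(notes, labels)

# Rather than the default cutoff of 0, a threshold on the decision function
# can be chosen on a validation set to trade PPV against sensitivity.
threshold = 0.0  # hypothetical tuned value
score = model.decision_function(["rheumatoid arthritis suspected, anti-CCP positive"])[0]
predicted_ra = score > threshold
```

The point of exposing the threshold explicitly is that the same fitted model can be made stricter (higher PPV) or more inclusive (higher sensitivity) without retraining, which matches the cutoff-balancing step described in the abstract.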
Inflammatory Bowel Diseases (IBD) such as Crohn’s disease (CD) and ulcerative colitis (UC) are chronic immunological digestive diseases with a progressive character, associated with significant healthcare costs. Different solutions have been proposed, such as innovation in care monitoring or the implementation of electronic health (eHealth). IBD is one of many chronic diseases that could benefit from eHealth: adding smartphone applications to the toolbox for care management has the potential to improve disease understanding, enhance medication adherence, improve patient-physician communication, and enable earlier interventions by medical professionals when problems arise. Furthermore, the accessibility of Big Data and increased computational resources have paved the way for Artificial Intelligence (AI) to provide potential solutions for the management of prototypical complex diseases with advanced heterogeneity and alternating disease states, like IBD. In this thesis, we assessed the current economic and psychosocial impact of IBD by assessing its effect on indirect costs, productivity, and caregiving. Furthermore, we examined whether we can proactively identify IBD patients’ needs using eHealth and Artificial Intelligence. Lastly, we analyzed the impact of monitoring IBD patients using eHealth interventions in order to facilitate the delivery of high-value care.
Heuvel, L. van den; Dorsey, R.R.; Prainsack, B.; Post, B.; Stiggelbout, A.M.; Meinders, M.J.; Bloem, B.R. 2020
Clinical decision making for Parkinson's disease patients is supported by a combination of three distinct information resources: best available scientific evidence, professional expertise, and the personal needs and preferences of patients. All three sources have clear value but also share several important limitations, mainly regarding subjectivity, generalizability and variability. For example, current scientific evidence, especially from controlled clinical trials, is often based on selected study populations, making it difficult to translate the outcome to the care for individual patients in everyday clinical practice. Big data, including data from real-life unselected Parkinson populations, can help to bridge this information gap. Fine-grained patient profiles created from big data have the potential to aid in identifying therapeutic approaches that will be most effective given each patient's individual characteristics, which is particularly important for a disorder characterized by such tremendous interindividual variability as Parkinson's disease. In this viewpoint, we argue that big data approaches should be acknowledged and harnessed, not to replace existing information resources, but rather as a fourth and complementary source of information in clinical decision making, helping to represent the full complexity of individual patients. We introduce the 'quadruple decision making' model and illustrate its mode of action by showing how this can be used to pursue precision medicine for persons living with Parkinson's disease.
Through the exponential growth in digital devices and computational capabilities, big data technologies are putting pressure on the boundaries of what can or cannot be considered acceptable from an ethical perspective. Much of the literature on ethical issues related to big data and big data technologies focuses on separate values such as privacy, human dignity, justice or autonomy. More holistic approaches, allowing a more comprehensive view and better balancing of values, usually focus on either a design-based approach, which attempts to implement values into the design of new technologies, or an application-based approach, which addresses the ways in which new technologies are used. Some integrated approaches do exist, but these are typically more general in nature. This offers a broad scope of application, but may not always be tailored to the specific nature of big-data-related ethical issues. In this paper we distil a comprehensive set of ethical values from existing design-based and application-based ethical approaches for new technologies and further focus these values on the context of emerging big data technologies. A total of four value lists (from techno-moral values, value-sensitive design, anticipatory emerging technology ethics and biomedical ethics) were selected for this. The integrated list consists of ten values: human welfare, autonomy, non-maleficence, justice, accountability, trustworthiness, privacy, dignity, solidarity and environmental welfare. Together, this set of values provides a comprehensive and in-depth overview of the values that are to be taken into account for emerging big data technologies.
Part of a series of digital guest lectures from Leiden University scholars for use in secondary school education. For more information, see: https://www.universiteitleiden.nl/gastlessen/cursussen/digitale-gastlessen/artificial-intelligence
In the eighth installment of a series of combined reviews (digitaalandspeciaal), Jos Damen discusses a study of big data from library catalogues and a catalogue of the work of the Dutch genius Christiaan Huygens.