Objectives: To demonstrate how researchers can identify and translate reporting gaps from a systematic review into checklist items for reporting guidelines. Study Design and Setting: Good quality research reporting ensures transparency, reproducibility, and utility, facilitated by reporting guidelines. Conducting a systematic review is an essential step in the development of these guidelines. The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network’s toolkit (2010) assists researchers in this process and is due for an update to address current gaps and evolving research methods. One significant gap is the translation of systematic review findings into checklist items. Reflecting on our experience developing the ACcurate Consensus Reporting Document, we illustrate this translation process, aiming to empower researchers developing reporting guidelines to address potential biases and promote transparency. We highlight the challenges faced and how they were addressed. Results: The systematic review search process was iterative, involving multiple adjustments to balance precision and sensitivity. Excessively stringent exclusion criteria may lead to missed valuable insights, especially when studies offer relevant content. An information specialist was invaluable in developing the search strategy. Key lessons learned include the necessity of maintaining flexibility and openness during data extraction, continuous adaptation based on panelist feedback, and promoting clear communication through understandable language. These principles can guide the development of future reporting guidelines and the updating of the EQUATOR toolkit, promoting transparency and robustness in research reporting. Conclusion: Maintaining flexibility, capturing evolving insights, clear communication, and accommodating changes in research and technologies are key to translating systematic review findings into effective reporting checklists.
In this reflective chapter, we examine the structural biases and empirical challenges underlying human trafficking ‘indicators’ (especially problem, risk and performance indicators) that are routinely used to describe and measure human trafficking, assess risk, identify abuses, evaluate responses, and encourage accountability. While frequently used, such indicators can give an undue illusion of objectivity and reliability when they are neither neutral nor unskewed. In fact, numerous factors affect which elements are privileged as ‘indicators’ and which are obscured. We therefore examine here the selectivity, politics, racialized and gendered concerns that relate to the production and use of human trafficking indicators. Since human trafficking is a complex, highly-contested, and multi-faceted practice, it is not easily reduced to the crude generalizations upon which many indicators rest. We explore how the uncritical use of indicators can both contribute to stereotypical and unachievable ideals of victimhood and engender undue criminalization or withholding of victim support. In doing so, we disentangle some paradoxes around who is deemed ‘vulnerable’, ‘at risk’, ‘worthy of support’ and requiring ‘protection’. We highlight the – routinely overlooked – weak empirical basis and other limitations of many commonplace ‘indicators’ and challenges in building empirically-stronger and more robust indicators. The chapter concludes with overall implications of these critical reflections for policy, interventions, and research.
Langenhuijsen, L.F.S.; Janse, R.J.; Venema, E.; Kent, D.M.; Diepen, M. van; Dekker, F.W.; ... ; Jong, Y. de 2023
Objectives: To (1) explore trends of risk of bias (ROB) in prediction research over time following key methodological publications, using the Prediction model Risk Of Bias ASsessment Tool (PROBAST), and (2) assess the inter-rater agreement of the PROBAST. Study Design and Setting: PubMed and Web of Science were searched for reviews with extractable PROBAST scores on domain and signaling question (SQ) level. ROB trends were visually correlated with yearly citations of key publications. Inter-rater agreement was assessed. Results: One hundred and thirty-nine systematic reviews were included, of which 85 reviews (containing 2,477 single studies) reported on domain level and 54 reviews (containing 2,458 single studies) on SQ level. High ROB was prevalent, especially in the Analysis domain, and overall trends of ROB remained relatively stable over time. The inter-rater agreement was low, both on domain level (Kappa 0.04 to 0.26) and SQ level (Kappa -0.14 to 0.49). Conclusion: Prediction model studies are at high ROB, and time trends in ROB as assessed with the PROBAST remain relatively stable. These results might be explained by key publications having no influence on ROB or by the recency of key publications. Moreover, the trend may suffer from the low inter-rater agreement and ceiling effect of the PROBAST. The inter-rater agreement could potentially be improved by altering the PROBAST or by providing training on how to apply it. © 2023 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
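For readers unfamiliar with how such agreement figures are obtained, the sketch below (Python) shows the standard Cohen's kappa calculation for two raters scoring the same set of prediction models on a single PROBAST domain. The ratings are invented for illustration only; the review's actual data, rating procedure, and choice of kappa variant are not reproduced here.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, labels=("low", "high", "unclear")):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_observed = np.mean(a == b)
    # Expected agreement if the raters were independent, given their marginal label frequencies
    p_expected = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical PROBAST 'Analysis' domain judgments from two reviews rating the same 8 models
review_1 = ["high", "high", "low", "high", "unclear", "high", "low", "high"]
review_2 = ["high", "low", "low", "high", "high", "high", "high", "low"]
print(f"kappa = {cohens_kappa(review_1, review_2):.2f}")  # low agreement, comparable to the review's range
```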
The focus of this thesis is on the technical methods that help promote the movement towards Trustworthy AI, specifically within the Inspectorate of the Netherlands. The goal is to develop and assess the technical methods required to shift the actions of the Inspectorate to a data-driven paradigm, concretely under a supervised classification framework of machine learning. The aspect of reliability is addressed as a data quality concern, viz. missingness and noise. The aspect of fairness is addressed as a counter to bias in the selection process of inspections. The conclusion is that, whilst no complete solution has yet been suggested, it is possible to address the concerns related to data quality and data bias, culminating in well-performing classification models that are reliable and fair.
Objective: To assess whether the Prediction model Risk Of Bias ASsessment Tool (PROBAST) and a shorter version of this tool can identify clinical prediction models (CPMs) that perform poorly at external validation. Study Design and Setting: We evaluated risk of bias (ROB) on 102 CPMs from the Tufts CPM Registry, comparing PROBAST to a short form consisting of six PROBAST items anticipated to best identify high ROB. We then applied the short form to all CPMs in the Registry with at least one validation (n = 556) and assessed the change in discrimination (dAUC) in external validation cohorts (n = 1,147). Results: PROBAST classified 98/102 CPMs as high ROB. The short form identified 96 of these 98 as high ROB (98% sensitivity), with perfect specificity. In the full CPM Registry, 527 of 556 CPMs (95%) were classified as high ROB, 20 (3.6%) as low ROB, and 9 (1.6%) as unclear ROB. Only one model with unclear ROB was reclassified to high ROB after full PROBAST assessment of all low and unclear ROB models. The median change in discrimination was significantly smaller in low ROB models (dAUC -0.9%, IQR -6.2% to 4.2%) than in high ROB models (dAUC -11.7%, IQR -33.3% to 2.6%; P < 0.001). Conclusion: High ROB is pervasive among published CPMs. It is associated with poor discriminative performance at validation, supporting the application of PROBAST or a shorter version in CPM reviews. © 2021 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
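The dAUC figures above are reported as percentages. A common convention in this literature is to scale the change in AUC between development and external validation to the informative part of the development AUC (its distance above 0.5); whether the paper uses exactly this definition is an assumption, so the sketch below is only a minimal illustration with invented AUC values.

```python
def d_auc_percent(dev_auc: float, val_auc: float) -> float:
    """Percent change in discrimination, scaled to the informative part of the development AUC.
    This is one common convention; the paper's exact definition may differ."""
    return 100 * (val_auc - dev_auc) / (dev_auc - 0.5)

# Hypothetical development/validation AUC pairs for a high-ROB and a low-ROB model
print(f"high-ROB example: {d_auc_percent(0.78, 0.66):+.1f}%")  # large loss of discrimination
print(f"low-ROB example : {d_auc_percent(0.74, 0.73):+.1f}%")  # near-preserved discrimination
```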
Advice regarding the analysis of observational studies of exposure effects usually is against adjustment for factors that occur after the exposure, as they may be caused by the exposure (or mediate the effect of exposure on outcome), so potentially leading to collider stratification bias. However, such factors could also be caused by unmeasured confounding factors, in which case adjusting for them will also remove some of the bias due to confounding. We derive expressions for collider stratification bias when conditioning and confounding bias when not conditioning on the mediator, in the presence of unmeasured confounding (assuming that all associations are linear and there are no interactions). Using simulations, we show that generally neither the conditioned nor the unconditioned estimate is unbiased, and the trade-off between them depends on the magnitude of the effect of the exposure that is mediated relative to the effect of the unmeasured confounders and their relations with the mediator. We illustrate the use of the bias expressions via three examples: neuroticism and mortality (adjusting for the mediator appears the least biased option), glycated hemoglobin levels and systolic blood pressure (adjusting gives smaller bias), and literacy in primary school pupils (not adjusting gives smaller bias). Our formulae and simulations can inform quantitative bias analysis as well as analysis strategies for observational studies in which there is a potential for unmeasured confounding.
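The trade-off described above can be reproduced with a short simulation under a linear structural model: an unmeasured confounder U affects exposure X, mediator M, and outcome Y, while X affects Y both directly and through M. The coefficients below are illustrative assumptions, not values from the paper; the point is only that neither the M-adjusted nor the unadjusted regression recovers the true total effect of X.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative linear structural model (all coefficients are assumptions):
# U -> X, U -> M, U -> Y (unmeasured confounding); X -> M -> Y (mediation); X -> Y (direct effect)
b_ux, b_um, b_uy = 1.0, 1.0, 1.0
b_xm, b_my, b_xy = 0.5, 0.8, 0.3
true_total_effect = b_xy + b_xm * b_my  # total effect of X on Y

U = rng.normal(size=n)
X = b_ux * U + rng.normal(size=n)
M = b_xm * X + b_um * U + rng.normal(size=n)
Y = b_xy * X + b_my * M + b_uy * U + rng.normal(size=n)

def coef_on_first(y, *covs):
    """OLS via least squares; returns the coefficient on the first covariate."""
    Z = np.column_stack([np.ones(len(y)), *covs])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1]

unadjusted = coef_on_first(Y, X)    # leaves the confounding path X <- U -> Y open
adjusted = coef_on_first(Y, X, M)   # blocks X -> M -> Y, opens the collider X -> M <- U, partially proxies U

print(f"true total effect: {true_total_effect:.2f}")
print(f"unadjusted for M : {unadjusted:.2f}")
print(f"adjusted for M   : {adjusted:.2f}")
```

With these particular coefficients the unadjusted estimate is the more biased one; shrinking the mediated effect or strengthening the confounder reverses the ranking, which is exactly the dependence the abstract describes.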
Worldwide, conflicts arise daily as a result of differences of opinion about the outcome of a business valuation and the role of valuation experts. These often lead to lengthy and costly lawsuits, the reasons for which are usually sought in technical issues. This thesis, however, focuses on the role of human behavior. More specifically, it examines the extent to which cognitive biases play a role in the assessment of business valuations and valuation experts. Four empirical studies were conducted with entrepreneurs, lawyers and valuation experts. The first three show that stakeholders can be affected by a range of biases, including buyer-seller position effects, anchoring bias, similarity bias, outcome bias, gender bias, and so-called engagement bias. The fourth study concerns a survey among an international group of leading valuation experts with the aim of verifying insights gained in the previous studies. A statement of principles to mitigate cognitive biases in valuation practice is also introduced. The thesis provides empirical insights into the existence of cognitive biases in the context of business valuation and thereby contributes to both practice and theory in this field. It also adds to legal theory with respect to improving our understanding of conflicts.
We aimed to expand our knowledge about the level of neurocognitive functioning (NCF) and health-related quality of life (HRQoL) in patients with primary and secondary brain tumors during the disease course. We found that the tumor itself has the largest negative impact on NCF and HRQoL. At group level, treatment (surgery, radiotherapy and/or chemotherapy) did not seem to have a large additional detrimental effect in the short term. However, subgroups of patients, e.g. patients with tumors in the non-dominant hemisphere and long-term survivors, appeared to be vulnerable to cognitive decline after treatment. At the individual patient level, HRQoL varied to a large degree in the first months after treatment, confirming that it is a multidimensional concept and that the impact of treatment differs across its aspects. With the results from the studies described in this thesis, treatment and individual patient care can be optimized by minimizing the negative impact of treatment, e.g. by intraoperative monitoring of cognition during awake surgery, and by counseling and rehabilitation of patients. In addition, investigators should pay attention to methodological challenges in the reporting of neurocognitive outcomes in research, as reporting of these outcomes is currently insufficient, even though such evidence can be of value in clinical decision-making.
Jenniskens, K.; Naaktgeboren, C.A.; Reitsma, J.B.; Hooft, L.; Moons, K.G.M.; Smeden, M. van 2019
Objectives: The objective of this study was to assess the impact of ignoring uncertainty by forcing dichotomous classification (presence or absence) of the target disease on estimates of diagnostic accuracy of an index test. Study Design and Setting: We evaluated the bias in estimated index test accuracy when forcing an expert panel to make a dichotomous target disease classification for each individual. Data for various scenarios with expert panels were simulated by varying the number and accuracy of "component reference tests" available to the expert panel, index test sensitivity and specificity, and target disease prevalence. Results: Index test accuracy estimates are likely to be biased when there is uncertainty surrounding the presence or absence of the target disease. The direction and amount of bias depend on the number and accuracy of component reference tests, target disease prevalence, and the true values of index test sensitivity and specificity. Conclusion: In this simulation, forcing expert panels to make a dichotomous decision on target disease classification in the presence of uncertainty leads to biased estimates of index test accuracy. Empirical studies are needed to demonstrate whether this bias can be reduced by assigning a probability of target disease presence for each individual, or by using advanced statistical methods to account for uncertainty in target disease classification. © 2019 Elsevier Inc. All rights reserved.
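A minimal version of the kind of simulation described above can be sketched as follows: true disease status is generated at a chosen prevalence, the expert panel is forced into a dichotomous classification via a simple majority vote over imperfect component reference tests (the study's actual panel decision rules may differ), and index test accuracy is then estimated against the panel classification rather than against the truth. All parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Illustrative scenario (all values are assumptions, not the paper's):
prevalence = 0.3
n_ref_tests = 3
ref_sens, ref_spec = 0.80, 0.85      # accuracy of each component reference test
index_sens, index_spec = 0.90, 0.90  # true accuracy of the index test

disease = rng.random(n) < prevalence

# Component reference tests, conditionally independent given true disease status
ref = np.where(disease[:, None],
               rng.random((n, n_ref_tests)) < ref_sens,
               rng.random((n, n_ref_tests)) >= ref_spec)

# Forced dichotomous panel classification: simple majority vote over the reference tests
panel_positive = ref.sum(axis=1) > n_ref_tests / 2

# Index test results, generated from the *true* disease status
index_positive = np.where(disease,
                          rng.random(n) < index_sens,
                          rng.random(n) >= index_spec)

# Accuracy estimated against the panel's forced classification instead of the truth
est_sens = index_positive[panel_positive].mean()
est_spec = (~index_positive[~panel_positive]).mean()
print(f"true sensitivity/specificity     : {index_sens:.2f} / {index_spec:.2f}")
print(f"estimated against panel decision : {est_sens:.2f} / {est_spec:.2f}")
```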
Groenwold, R.H.H.; Shofty, I.; Miocevic, M.; Smeden, M. van; Klugkist, I. 2018
Fluorescence bias in signals from individual SNP arrays can be calibrated using linear models. Given the data, the system of equations is very large, so a specialized symbolic algorithm was developed. These models are also used to illustrate that genomic waves do not exist, but are merely an artifact of commonly used methods. Furthermore, a new semi-parametric, single-array approach to SNP genotyping is introduced and shown to be both effective and efficient. A refined algorithm for copy number estimation, using a zero-exponent norm, is proposed and performs well, as illustrated by thorough comparisons with other methods. Indications that the signal calibration can improve (genotyping) results from lower-quality samples are also discussed. A software suite that implements the above is described and illustrated.
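As a generic illustration of what linear signal calibration can look like (not the specific model or the symbolic algorithm developed in this thesis), the sketch below simulates per-array additive and multiplicative fluorescence bias and removes it by regressing each array on a probe-wise reference using ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
n_probes, n_arrays = 10_000, 5

# Hypothetical setup: the same probes measured on several arrays, each array with its own
# additive offset and multiplicative gain (a generic illustration, not the thesis's model).
true_signal = rng.gamma(shape=2.0, scale=1.0, size=n_probes)
offsets = rng.normal(0.5, 0.2, size=n_arrays)
gains = rng.normal(1.0, 0.15, size=n_arrays)
observed = offsets + gains * true_signal[:, None] + rng.normal(0, 0.1, (n_probes, n_arrays))

# Per-array linear calibration: regress each array on a common reference
# (the probe-wise median across arrays) and invert the fitted line.
reference = np.median(observed, axis=1)
design = np.column_stack([np.ones(n_probes), reference])
calibrated = np.empty_like(observed)
for j in range(n_arrays):
    (a_j, b_j), *_ = np.linalg.lstsq(design, observed[:, j], rcond=None)
    calibrated[:, j] = (observed[:, j] - a_j) / b_j

print(f"mean probe-wise SD across arrays, before: {observed.std(axis=1).mean():.3f}")
print(f"mean probe-wise SD across arrays, after : {calibrated.std(axis=1).mean():.3f}")
```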