Purpose: Outcome of endovascular treatment in acute ischemic stroke patients depends on the collateral circulation maintaining blood flow to the ischemic territory. We evaluated the inter-rater reliability and accuracy of raters and an automated algorithm for assessing the collateral score (CS, range: 0-3) in acute ischemic stroke patients. Methods: Baseline CTA scans with an intracranial anterior circulation occlusion from the MR CLEAN study (n=500) were used. For each core lab CS, ten CTA scans with sufficient quality were randomly selected. After a training session in collateral scoring, all selected CTA scans were individually evaluated for a visual CS by three groups: 7 radiologists, 13 junior and 9 senior radiology residents. Two additional radiologists scored CS to serve as the reference, with a third providing a CS to produce a 2-out-of-3 consensus CS in case of disagreement. An automated algorithm was also used to compute CS. Inter-rater agreement was reported with the intraclass correlation coefficient (ICC). Accuracy of visual and automated CS was calculated. Results: 39 CTA scans were assessed (1 corrupt CTA scan was excluded). All groups showed a moderate ICC (0.689-0.780) in comparison to the reference standard. Overall human accuracy was 65 +/- 7% and increased to 88 +/- 5% for dichotomized CS (0-1 vs. 2-3). Automated CS accuracy was 62%, and 90% for dichotomized CS. No significant difference in accuracy was found between groups with different levels of expertise. Conclusion: After training, inter-rater reliability in collateral scoring was not influenced by experience. The automated algorithm performs similarly to residents and radiologists in determining a collateral score.
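As a rough illustration of the agreement computations described above, the sketch below derives exact and dichotomized accuracy of one rater against a reference. The helper `cs_accuracy` and the score arrays are invented for illustration; this is not the study's code or data.

```python
import numpy as np

def cs_accuracy(rater: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    """Accuracy of collateral scores (0-3) against a reference, plus the
    accuracy after dichotomizing into poor (0-1) versus good (2-3)."""
    exact = float(np.mean(rater == reference))
    dichotomized = float(np.mean((rater >= 2) == (reference >= 2)))
    return exact, dichotomized

# Illustrative scores for 10 scans (not study data).
reference = np.array([0, 1, 1, 2, 2, 2, 3, 3, 0, 1])
rater     = np.array([0, 1, 2, 2, 3, 2, 3, 2, 0, 1])
print(cs_accuracy(rater, reference))  # (0.7, 0.9): dichotomizing forgives near-misses
```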
Objectives: Outcome of endovascular treatment in acute ischemic stroke patients depends on collateral circulation to provide blood supply to the ischemic territory. We evaluated the performance of a commercially available algorithm for assessing the collateral score (CS) in acute ischemic stroke patients. Methods: Retrospectively, baseline CTA scans (<= 3-mm slice thickness) with an intracranial carotid artery (ICA), middle cerebral artery segment M1 or M2 occlusion, from the MR CLEAN Registry (n = 1627) were evaluated. All CTA scans were evaluated for visual CS (0-3) by eight expert radiologists (reference standard). A Web-based AI algorithm quantified the collateral circulation (0-100%) for correctly detected occlusion sides. Agreement between visual CS and categorized automated CS (0: 0%; 1: > 0 to <= 50%; 2: > 50 to < 100%; 3: 100%) was assessed. Area under the curve (AUC) values for classifying patients as having good (CS: 2-3) versus poor (CS: 0-1) collaterals and for predicting functional independence (90-day modified Rankin Scale 0-2) were computed. The influence of CTA acquisition timing after contrast material administration was also examined. Results: In the analyzed scans (n = 1024), 59% agreement was found between visual CS and automated CS. An AUC of 0.87 (95% CI: 0.85-0.90) was found for discriminating good versus poor CS. Timing of CTA acquisition did not influence discriminatory performance. The AUC for predicting functional independence was 0.66 (95% CI: 0.62-0.69) for automated CS, similar to visual CS at 0.64 (95% CI: 0.61-0.68). Conclusions: The automated CS performs similarly to radiologists in determining a good versus poor collateral score and in predicting functional independence in acute ischemic stroke patients with a large vessel occlusion.
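The percentage-to-grade mapping and the AUC computation quoted above can be sketched directly. The cut-offs follow the categories in the abstract; the arrays are hypothetical, and `roc_auc_score` from scikit-learn merely stands in for whatever implementation the study used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def categorize_cs(percent: float) -> int:
    """Map the automated collateral percentage (0-100%) onto the visual
    0-3 grades, using the cut-offs quoted in the abstract."""
    if percent == 0:
        return 0
    if percent <= 50:
        return 1
    if percent < 100:
        return 2
    return 3

# Hypothetical values: automated percentages and the dichotomized visual
# reference (1 = good collaterals, visual CS 2-3).
auto_percent = np.array([0, 10, 35, 55, 70, 90, 100, 45, 80, 5])
visual_good  = np.array([0,  0,  1,  1,  1,  1,   1,  0,  1, 0])

print([categorize_cs(p) for p in auto_percent])   # categorized grades
print(roc_auc_score(visual_good, auto_percent))   # good vs. poor discrimination
```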
Purpose: This study aims to investigate the correlation between myocardial area at risk at coronary computed tomography angiography (CCTA) and the ischemic burden derived from myocardial computed tomography perfusion (CTP) by using the 17-segment model. Methods: Forty-two patients with chest pain complaints who underwent a combined CCTA and CTP protocol were identified. Patients with reversible ischemia at CTP and at least one stenosis of >= 50% at CCTA were selected. Myocardial area at risk was calculated using a Voronoi-based segmentation algorithm at CCTA and was defined as the sum of all territories related to a >= 50% stenosis as a percentage of the total left ventricular (LV) mass. The latter was calculated using LV contours that were automatically drawn using a machine learning algorithm. Subsequently, the ischemic burden was defined as the number of segments demonstrating relative hypoperfusion as a percentage of the total number of segments (17). Finally, correlations between the myocardial area at risk and the ischemic burden were tested using Pearson's correlation coefficient. Results: A total of 77 coronary lesions were assessed. Average myocardial area at risk and ischemic burden for all lesions were 59% and 23%, respectively. Correlations of >= 50% and >= 70% stenosis-based myocardial area at risk with ischemic burden were moderate (r = 0.564; p < 0.01) and good (r = 0.708; p < 0.01), respectively. Conclusion: The relation between myocardial area at risk as calculated using a Voronoi-based algorithm at CCTA and ischemic burden as assessed by CTP depends on stenosis severity.
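A minimal sketch of the final correlation step, assuming the per-lesion values have already been extracted; the numbers below are invented for illustration.

```python
from scipy.stats import pearsonr

# Illustrative per-lesion values (not patient data): area at risk from
# CCTA (% of LV mass) and ischemic burden from CTP (hypoperfused
# segments out of 17, expressed as a percentage).
area_at_risk    = [62, 45, 71, 30, 58, 66, 49, 80]
ischemic_burden = [24, 12, 35,  6, 18, 29, 12, 41]

r, p = pearsonr(area_at_risk, ischemic_burden)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```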
The main argument of this chapter is that digital ethnography is neither new, nor does it consist of a single approach. It is a set of methods that studies the use of digital technology both on- and offline, while at the same time using the affordances of these very same digital technologies to study the impact of the digital on cultural practice and social relations. The chapter addresses some of the definitional issues of an ethnography of the digital: How is it defined as a form of inquiry? And do we need a separate sub-discipline in order to study the digital ethnographically? Next, it turns to some of the foundational moments of digital ethnography, explaining how these have triggered new approaches and novel ways of understanding the digital. The fourth section focuses on the methodological consequences of such shifts, looking at some of the classical methods and techniques used in doing digital ethnography while also exploring new frontiers where the ‘fireworks’ are expected to happen. After a brief section delving into some of the emergent ethical issues in this field, I conclude this contribution with recommendations on how to teach (ourselves) digital ethnography.
Driest, F.Y. van; Geest, R.J. van der; Broersen, A.; Dijkstra, J.; Mahdiui, M. el; Jukema, J.W.; Scholte, A.J.H.A. 2021
The combination of coronary computed tomography angiography (CCTA) and adenosine stress CT myocardial perfusion (CTP) allows for assessment of coronary artery lesions as well as myocardial ischemia. However, myocardial ischemia on CTP is currently assessed semi-quantitatively by visual analysis. The aim of this study was to fully quantify myocardial ischemia and the subtended myocardial mass on CTP. We included 33 patients referred for a combined CCTA and adenosine stress CTP protocol, with good or excellent imaging quality on CTP. The coronary artery tree was automatically extracted from the CCTA and the relevant coronary artery lesions with a significant stenosis (>= 50%) were manually defined using dedicated software. Secondly, epicardial and endocardial contours along with CT perfusion deficits were semi-automatically defined in short-axis reformatted images using MASS software. A Voronoi-based segmentation algorithm was used to quantify the subtended myocardial mass distal from each relevant coronary artery lesion. Perfusion defect and subtended myocardial mass were spatially registered to the CTA. Finally, the subtended myocardial mass per lesion, total subtended myocardial mass and perfusion defect mass (per lesion) were measured. Voronoi-based segmentation was successful in all cases. We assessed a total of 64 relevant coronary artery lesions. Average values for left ventricular mass, total subtended mass and perfusion defect mass were 118, 69 and 7 g, respectively. In 19/33 patients (58%) the total perfusion defect mass could be distributed over the relevant coronary artery lesion(s). Quantification of myocardial ischemia and subtended myocardial mass seems feasible at adenosine stress CTP and allows coronary artery lesions to be quantitatively correlated with corresponding areas of myocardial hypoperfusion at CCTA and adenosine stress CTP.
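The Voronoi-based attribution of myocardium to coronary branches can be sketched as a nearest-centerline-point assignment. This is a deliberate simplification of the dedicated software described above: `subtended_mass`, the toy coordinates and the uniform per-voxel mass are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def subtended_mass(myo_voxels, centerline_points, branch_ids, voxel_mass_g):
    """Attribute each myocardial voxel to the coronary branch with the
    nearest centerline point (a Voronoi partition of the myocardium),
    then sum the voxel masses per branch."""
    tree = cKDTree(centerline_points)
    _, nearest = tree.query(myo_voxels)    # closest centerline point per voxel
    voxel_branch = branch_ids[nearest]     # branch that "owns" each voxel
    return {int(b): float(np.sum(voxel_branch == b) * voxel_mass_g)
            for b in np.unique(voxel_branch)}

# Hypothetical example: two branches, three voxels, 0.1 g per voxel.
pts  = np.array([[0.0, 0, 0], [1, 0, 0], [5, 0, 0], [6, 0, 0]])
bids = np.array([0, 0, 1, 1])              # branch id per centerline point
vox  = np.array([[0.5, 1, 0], [5.5, 1, 0], [5.0, 2, 0]])
print(subtended_mass(vox, pts, bids, voxel_mass_g=0.1))  # {0: 0.1, 1: 0.2}
```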
Algorithms have become increasingly common, and with this development, so have algorithms that approximate human speech. This has introduced new issues with which courts and legislators will have to grapple. Courts in the United States have found that search engine results are a form of speech that is protected by the Constitution, and cases in Europe concerning liability for autocomplete suggestions have led to varied results. Beyond these instances, insights into how courts handle algorithmic speech are few and far between. By focusing on three categories of algorithmic speech, defined as curated production, interactive/responsive production, and semiautonomous production, this Article analyzes these various forms of algorithmic speech within the international framework for freedom of expression. After a brief introduction of that framework and a look towards approaches to algorithmic speech in the United States, the Article examines whether the creators or controllers of different forms of algorithms should be considered content providers or mere intermediaries, a determination that ultimately has implications for liability, which is also explored. The Article then looks at possible interferences with algorithmic speech, and how such interferences may be examined under the three-part test, with particular attention paid to the balancing of rights and interests at play, in order to answer the question of the extent to which algorithmic speech is worthy of protection under international standards of freedom of expression. Finally, other relevant issues surrounding algorithmic speech are discussed that will have an impact going forward, many of which involve questions of policy and societal values that accompany granting algorithmic speech protection.
This chapter explores how algorithms produce aesthetic forms and dystopian configurations across Palestinian cyber and digital spaces. Through surveillance and erasure, algorithms operate as infrastructures of (in)visibility on social media, digital maps, navigation apps, and augmented reality video games. On the one hand, they serve the Israeli system of control by making Palestinian users and content hyper-visible to surveillance. On the other, by imposing (self-)censorship and erasure from digital representations, they ultimately purport to delete Palestine from cyber spaces. Acting at the threshold of the (in)visible, algorithms do not only enact control and surveillance; they also inform the creation of an aesthetics of disappearance. In this light, this chapter problematizes the normative assumption equating invisibility – in the form of masking or disconnection – with freedom and emancipation, by introducing the concept of aesthetics by algorithms as a new canon and form of ordering of the colonial space.
Grootjans, W.; Serem, S.J.; Gomes, M.I.; Heijmen, L.; Bulten, B.; Mijnheere, E.P.; ... ; Broek, W.J. van den 2018
The GDPR poses special requirements for the processing of sensitive data, but it is not clear whether these requirements are sufficient to prevent the risk associated with this processing because this risk is not clearly defined.
Furthermore, the GDPR’s clauses on the processing of—and profiling based on—sensitive data do not sufficiently account for the fact that individual data subjects are parts of complex systems, whose emergent properties betray sensitive traits from non-sensitive data.
The algorithms used to process big data are largely opaque to both controllers and data subjects: if the output of an algorithm has discriminatory effects coinciding with sensitive traits because the algorithm accidentally discerns an emergent property, this may remain unnoticed. At the moment, there are no remedies that can prevent the discovery of sensitive traits from non-sensitive data.
Managing the risks resulting from processing data that can reveal sensitive traits requires a strategy combining precautionary measures, public discourse, and enforcement until the risks are more completely understood. Insights from complex systems science are likely to be useful in better understanding these risks.
Unambiguous sequence variant descriptions are important in reporting the outcome of clinical diagnostic DNA tests. The standard nomenclature of the Human Genome Variation Society (HGVS) describes the observed variant sequence relative to a given reference sequence. We propose an efficient algorithm for the extraction of HGVS descriptions from two DNA sequences. Our algorithm is able to compute the HGVS descriptions of complete chromosomes or other large DNA strings in a reasonable amount of computation time, and its resulting descriptions are relatively small. Additional applications include updating of gene variant database contents and reference sequence liftovers. Next, we adapted our method for the extraction of descriptions for protein sequences, in particular for describing frame-shifted variants. We propose an addition to the HGVS nomenclature for accommodating the (complex) frame-shifted variants that can be described with our method. Finally, we applied our method to generate descriptions for Short Tandem Repeats (STRs), a form of self-similarity. We propose an alternative repeat variant notation that can be added to the existing HGVS nomenclature. The final chapter takes an explorative approach to classification in large cohort studies. We provide a "cross-sectional" investigation of these data to see the relative power of the different groups.
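The prefix/suffix-trimming idea at the heart of such extraction fits in a few lines. This toy `simple_hgvs` only distinguishes substitutions, deletions, insertions and delins on a 'g.' reference; the actual algorithm covers many more cases and guarantees compact descriptions for chromosome-sized inputs.

```python
def simple_hgvs(reference: str, observed: str) -> str:
    """Toy HGVS-style description of the difference between two DNA
    sequences, found by trimming the common prefix and suffix
    (1-based 'g.' coordinates)."""
    p = 0
    while p < min(len(reference), len(observed)) and reference[p] == observed[p]:
        p += 1
    s = 0
    while (s < min(len(reference), len(observed)) - p
           and reference[-1 - s] == observed[-1 - s]):
        s += 1
    ref_part = reference[p:len(reference) - s]
    obs_part = observed[p:len(observed) - s]
    if not ref_part and not obs_part:
        return "g.="                                   # identical sequences
    if len(ref_part) == 1 and len(obs_part) == 1:
        return f"g.{p + 1}{ref_part}>{obs_part}"       # substitution
    if not obs_part:
        return f"g.{p + 1}_{len(reference) - s}del"    # deletion
    if not ref_part:
        return f"g.{p}_{p + 1}ins{obs_part}"           # insertion
    return f"g.{p + 1}_{len(reference) - s}delins{obs_part}"

print(simple_hgvs("ATGCTT", "ATGATT"))   # g.4C>A
print(simple_hgvs("ATGCT", "ATT"))       # g.3_4del
```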
Many scientists are focussed on building models. We turn nearly all information we perceive into a model. There are many techniques that enable computers to build models as well. The field of research that develops such techniques is called Machine Learning. Much research is devoted to developing computer programs capable of building models (algorithms). Many such algorithms exist, and they often expose various options that subtly influence performance (parameters). Furthermore, there is mathematical proof that no single algorithm works well on every dataset. This complicates the task of selecting the right algorithm for a given task. The field of meta-learning aims to resolve these problems. The purpose is to determine what kind of algorithms work well on which datasets. In order to do so, we developed OpenML, an online database on which researchers can share experimental results with each other, potentially scaling up the size of meta-learning studies. With earlier experimental results freely accessible and reusable by others, it is no longer necessary to conduct time-consuming experiments. Rather, researchers can answer such experimental questions with a simple database look-up. This thesis addresses how OpenML can be used to answer fundamental meta-learning questions.
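The database look-up that replaces fresh experiments can be sketched with a plain results table. OpenML itself offers a far richer interface (including a Python package); the table and numbers below are entirely hypothetical.

```python
import pandas as pd

# Hypothetical shared-experiment table in the spirit of OpenML: one row
# per (dataset, algorithm) evaluation that someone has already run.
results = pd.DataFrame({
    "dataset":   ["iris", "iris", "credit-g", "credit-g", "letter", "letter"],
    "algorithm": ["svm", "forest", "svm", "forest", "svm", "forest"],
    "accuracy":  [0.96, 0.95, 0.71, 0.76, 0.82, 0.94],
})

# A basic meta-learning question answered without running anything new:
# which algorithm has performed best on each dataset so far?
best = results.loc[results.groupby("dataset")["accuracy"].idxmax()]
print(best)
```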
We describe a formal notation for DNA molecules that may contain nicks and gaps. The resulting DNA expressions denote formal DNA molecules. Different DNA expressions may denote the same molecule; such DNA expressions are called equivalent. We examine which DNA expressions are minimal, meaning that they have the shortest length among all equivalent DNA expressions. Among other things, we describe how to construct a minimal DNA expression for a given molecule. We also present an efficient, recursive algorithm to rewrite a given DNA expression into an equivalent, minimal DNA expression. For many formal DNA molecules, there exists more than one minimal DNA expression. We define a minimal normal form, i.e., a set of properties such that for each formal DNA molecule, there is exactly one (minimal) DNA expression with these properties. We finally describe an efficient, two-step algorithm to rewrite an arbitrary DNA expression into this normal form.
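One possible toy encoding of such molecules, purely to fix ideas (an assumption for illustration, not the thesis's operator-based notation): each position carries an upper and/or lower nucleotide, with None marking a gap, and nicks are tracked separately.

```python
from dataclasses import dataclass

@dataclass
class Position:
    """One vertical slice of a formal DNA molecule: a nucleotide on the
    upper strand, on the lower strand, or on both; None marks a gap."""
    upper: str | None
    lower: str | None

# A double-stranded molecule with a gap in the lower strand at the third
# position and a (hypothetical) nick in the lower strand after position 0.
molecule = [
    Position("A", "T"),
    Position("C", "G"),
    Position("T", None),   # upper strand only
    Position("G", "C"),
]
lower_nicks_after = {0}
print(sum(pos.lower is None for pos in molecule), "gap position(s)")
```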
This thesis is about algorithms for analyzing large real-world graphs (or networks). Examples include (online) social networks, webgraphs, information networks, biological networks, and scientific collaboration and citation networks. Although these graphs differ in terms of what kind of information the objects and relationships represent, it turns out that the structure of each of these networks is surprisingly similar. For computer scientists, there is an obvious challenge to design efficient algorithms that allow large graphs to be processed and analyzed in a practical setting, facing the challenges of processing millions of nodes and billions of edges. Specifically, there is an opportunity to exploit the non-random structure of real-world graphs to efficiently compute or approximate various properties and measures that would be too hard to compute using traditional graph algorithms. Examples include computation of node-to-node distances and extreme distance measures such as the exact diameter and radius of a graph.
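Concretely, the eccentricity of a node (its greatest shortest-path distance) takes one breadth-first search, and each BFS also bounds every other node's eccentricity, which is the kind of structural pruning alluded to above. The adjacency-dict graph here is a made-up example.

```python
from collections import deque

def eccentricity(adj: dict, source) -> int:
    """Largest BFS distance from `source` to any node, assuming a
    connected, unweighted graph given as an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

# One BFS from v bounds all other nodes w via
#   ecc(v) - d(v, w) <= ecc(w) <= ecc(v) + d(v, w),
# so a few well-chosen BFS runs can pin down the exact diameter and
# radius of a real-world graph.
adj = {1: [2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3, 5], 5: [4]}
print(eccentricity(adj, 1))  # 3 (node 1 to node 5)
```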
Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and potentially serves as a major source of treatment delivery error. In order to optimize this critical, yet observer-driven process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source for pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications.
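A typical quantitative metric for ROI comparison on such a platform is the Dice similarity coefficient; whether TaCTICS uses exactly this measure is an assumption here, but it is a standard choice for interobserver contour analysis.

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary ROI masks:
    2|A ∩ B| / (|A| + |B|), 1.0 for identical contours."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two observers' contours of the same target, rasterized on one grid.
obs1 = np.zeros((8, 8), dtype=bool); obs1[2:6, 2:6] = True
obs2 = np.zeros((8, 8), dtype=bool); obs2[3:7, 2:6] = True
print(dice(obs1, obs2))  # 0.75: 12 shared voxels out of 16 + 16
```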
Evolution and Interaction are two processes in Computer Science that are used in many algorithms to create, shape, find and optimize solutions to real-world problems. Evolution has been very successfully applied as a powerful tool to solve complex search problems in fields ranging from physics, chemistry and biology all the way to commercial applications such as aircraft fuselage design and civil engineering grading plans. Defining interaction is a big part of algorithm design: not only defining the inputs and outputs of an algorithm, but, for a complex algorithm, also the interactions inside the algorithm. This thesis concentrates on where Evolution overlaps Interaction. It shows how evolution can be used to evolve interaction, how the interaction inside an evolutionary algorithm impacts its performance, and how an evolutionary algorithm can interact with humans. By touching on these three forms of overlap, this thesis tries to give insight into the world of evolution and interaction.
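A minimal concrete instance of the evolutionary side is the (1+1) evolutionary algorithm below, where the fitness function is a pluggable callable and could just as well be a human judgment, which is the interactive case. All names and parameters are illustrative.

```python
import random

def one_plus_one_ea(n: int, fitness, generations: int = 500):
    """(1+1) EA: keep a single bit-string parent, flip each bit with
    probability 1/n, and accept the child if it is at least as fit.
    `fitness` may be any callable, including an interactive human rating."""
    parent = [random.randint(0, 1) for _ in range(n)]
    parent_fit = fitness(parent)
    for _ in range(generations):
        child = [bit ^ (random.random() < 1.0 / n) for bit in parent]
        child_fit = fitness(child)
        if child_fit >= parent_fit:
            parent, parent_fit = child, child_fit
    return parent

# OneMax: fitness is simply the number of ones in the bit string.
best = one_plus_one_ea(n=20, fitness=sum)
print(sum(best), best)
```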
This thesis is about arithmetic, analytic and algorithmic aspects of modular curves and modular forms. The arithmetic and analytic aspects are linked by the viewpoint that modular curves are examples of arithmetic surfaces. Therefore, Arakelov theory (intersection theory on arithmetic surfaces) occupies a prominent place in this thesis. Apart from this, a substantial part of it is devoted to studying modular curves over finite fields, and their Jacobian varieties, from an algorithmic viewpoint. The end product of this thesis is an algorithm for computing modular Galois representations. These are certain two-dimensional representations of the absolute Galois group of the rational numbers that are attached to Hecke eigenforms over finite fields. The running time of our algorithm is (under minor restrictions) polynomial in the length of the input. This main result generalises work of Jean-Marc Couveignes, Bas Edixhoven, et al. Several intermediate results are developed in sufficient generality to make them of interest to the study of modular curves and modular forms in a wider sense.
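For orientation, the standard shape of such a representation (a classical fact recorded here for context, not a statement of the thesis's results):

```latex
% To a mod-l Hecke eigenform f of level N and weight k, with Hecke
% eigenvalues a_p and character eps, one attaches a continuous
% semisimple representation
\[
  \rho_f \colon \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})
  \longrightarrow \operatorname{GL}_2(\overline{\mathbb{F}}_{\ell}),
\]
% unramified outside N*l, such that for every prime p not dividing N*l
% the characteristic polynomial of Frobenius at p is
\[
  X^2 - a_p X + \varepsilon(p)\, p^{k-1}.
\]
```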
The increase in capabilities of information technology over the last decade has led to a large increase in the creation of raw data. Data mining, a form of computer-guided, statistical data analysis, attempts to draw from these sources knowledge that is usable, human-understandable and previously unknown. One of the potential application domains is that of law enforcement. This thesis describes a number of efforts in this direction and reports on the results of applying the resulting algorithms to actual police data. The usage of specifically tailored data mining algorithms is shown to have great potential in this area, foreshadowing a future where algorithmic assistance in "combating" crime will be a valuable asset.
It is shown how to solve diagonal forms in many variables over finite fields by means of an efficient deterministic algorithm. Applications to norm equations, quadratic forms, and elliptic curves are given.
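To fix notation: a diagonal form asks for x_1, ..., x_n with a_1*x_1^d + ... + a_n*x_n^d = b over a finite field. The exhaustive search below merely illustrates the problem on a small prime field; it is emphatically not the paper's efficient deterministic algorithm.

```python
from itertools import product

def solve_diagonal_form(coeffs, d, b, p):
    """Naive search for a solution of a_1*x_1^d + ... + a_n*x_n^d = b
    over the prime field F_p (illustration only; exponential in n)."""
    for xs in product(range(p), repeat=len(coeffs)):
        if sum(a * pow(x, d, p) for a, x in zip(coeffs, xs)) % p == b % p:
            return xs
    return None

# Example: a point (x, y, z) with x^2 + 2y^2 + 3z^2 = 1 over F_7.
print(solve_diagonal_form([1, 2, 3], d=2, b=1, p=7))  # e.g. (0, 1, 3)
```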
Many databases do not consist of a single table of fixed dimensions, but of objects that are related to each other: the databases are relational, or structured. We study the discovery of patterns in such data. In our approach, a data analyst specifies constraints on patterns that she believes to be of interest, and the computer searches for patterns that satisfy these constraints. An important constraint on which we focus is the constraint that a pattern should have a significant number of occurrences in the data. Constraints like this allow the search to be performed reasonably efficiently. We develop algorithms for searching for patterns that are represented in first-order logic, tree data structures and graph data structures. We perform experiments in which these algorithms, and algorithms proposed by other researchers, are compared with each other, and study which properties determine the efficiency of the algorithms. As a result, we are able to develop more efficient algorithms. As an application, we study the discovery of fragments in molecular datasets. The aim is to discover fragments that relate the structure of molecules to their activity.
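The occurrence-frequency constraint is anti-monotone (no superset of an infrequent pattern can be frequent), which is what makes a level-wise search feasible. The sketch below shows this for itemsets, the simplest structured-pattern case; the transactions are made up.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise (Apriori-style) search for itemsets contained in at
    least `min_support` transactions; the support constraint prunes
    every superset of an infrequent candidate."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    current = [frozenset([i]) for i in items]
    while current:
        level = {c: n for c in current
                 if (n := sum(c <= t for t in transactions)) >= min_support}
        frequent.update(level)
        keys = list(level)
        # Candidates one item larger, built from frequent sets only.
        current = list({a | b for a, b in combinations(keys, 2)
                        if len(a | b) == len(a) + 1})
    return frequent

data = [frozenset("ABC"), frozenset("AB"), frozenset("AC"), frozenset("BC")]
print(frequent_itemsets(data, min_support=2))
```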