We are living in an information era in which the amount of image and video data grows exponentially. It is important to develop intelligent visual understanding systems to satisfy our need to search for information of interest. With the current growing concern for public security, an automated person Re-Identification (ReID) system is one urgently required example of such a system. This thesis focuses on exploring ReID systems via deep learning methods. To enable ReID systems to meet the so-called open-world challenges, we explore three themes that are challenging yet practical in real application scenarios: lifelong learning, unsupervised domain adaptation and the cross-modality challenge. Furthermore, this thesis provides numerous experiments and in-depth analysis, which can help motivate further research on these three themes.
Image registration is the process of aligning images by finding the spatial relation between them. Given two images, called the fixed and the moving image, taken at different times, at different spatial locations, or with different imaging techniques, the aim of image registration is to find an optimal transformation that aligns the fixed and the moving image. Performing automatic, fast image registration with less manual fine-tuning can speed up numerous medical image processing procedures, and an automatic quality assessment of registration can further speed up this time-consuming task. In this thesis, we developed a fast learning-based image registration technique called RegNet. Predicting registration error can be useful for the evaluation of registration procedures, which is important for the adoption of registration techniques in the clinic; quantitative error prediction can also help improve registration quality. We therefore proposed two quality assessment mechanisms, using random forests (RF) and convolutional long short-term memory (ConvLSTM), of which the latter is faster and more accurate.
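To make the optimization view of registration concrete, the following sketch aligns two 1-D signals by exhaustively searching for the integer shift that minimizes the sum of squared differences (SSD). This is only an illustration of "finding an optimal transformation"; the signals and the brute-force search are assumptions for the example, and RegNet itself, being a learning-based method, is not reproduced here.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: the dissimilarity being minimized."""
    return float(np.sum((a - b) ** 2))

def register_shift(fixed, moving, max_shift=10):
    """Exhaustively search integer shifts; return (shift, cost) with lowest SSD."""
    best = None
    for s in range(-max_shift, max_shift + 1):
        cost = ssd(fixed, np.roll(moving, s))
        if best is None or cost < best[1]:
            best = (s, cost)
    return best

x = np.linspace(0, 2 * np.pi, 100)
fixed = np.sin(x)
moving = np.roll(fixed, -3)   # "moving image": the fixed signal shifted by -3

shift, cost = register_shift(fixed, moving)
print(shift, round(cost, 6))  # recovers shift 3 with SSD ~ 0
```

Real registration replaces the 1-D shift with a richer transformation model (rigid, affine, deformable) and the brute-force search with a continuous optimizer or, as in this thesis, a learned predictor.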
In 2018, the number of mobile phone users was expected to reach about 4.9 billion. Assuming an average of 5 photos taken per day using the built-in cameras, this would result in about 9 trillion photos annually. It thus becomes challenging to mine semantic information from such a huge amount of visual data. To address this challenge, deep learning, an important sub-field of machine learning, has achieved impressive progress in recent years. Inspired by its success, this thesis aims to develop new deep learning approaches to explore and analyze image data along three research themes: classification, retrieval and synthesis. In summary, the research in this thesis contributes at three levels: models and algorithms, practical scenarios, and empirical analysis. First, this work presents new deep learning approaches to address eight research questions regarding the three themes. In addition, it aims to adapt these approaches to practical real-world scenarios. Furthermore, this thesis provides numerous experiments and in-depth analysis, which can help motivate further research on the three themes.
In conclusion, this thesis proposes a new approach for the reconstruction of the coronary artery and the implanted BRS by fusion of OCT and X-ray angiography to analyze intracoronary ESS in vivo. The studies conducted in this thesis demonstrate the feasibility of the proposed approach for analyzing detailed local coronary hemodynamics in patients, including the SS patterns after BRS implantation in coronary bifurcations. We observed that in vivo assessment of ESS was closely related to: (1) reconstruction of the side branches; (2) reconstruction of the BRS; (3) patient-specific flow; and (4) the size of the post-processing portions in which ESS was calculated. Based on these findings, we propose the following standard analysis procedure for the assessment of intracoronary ESS in vivo: (1) reconstruct both the main vessel and its side branches to create a more accurate geometric model; (2) reconstruct the BRS in its naturally bent shape and include it in the CFD analysis for the assessment of ESS after BRS implantation; (3) use patient-specific coronary flow in the CFD analysis to obtain more accurate boundary conditions; and (4) set the portion size according to the interrogated region of interest for the quantification of ESS per portion.
The aim of this thesis is to develop image processing solutions that enable fully automatic pre-operative planning of aorta-related procedures, such as trans-catheter aortic valve replacement and aorta dilatation diagnosis. Hence, the objectives of this thesis are as follows: 1. To fully automatically quantify the aorto-iliac vascular access route, including the aortic root, using image processing methods in CTA. 2. To broaden the scope of the automatic methods to the detection of aorta dilatation. 3. To integrate the automatic quantification methods into applications that allow manual interaction and the calculation of clinically relevant parameters. 4. To demonstrate the accuracy and feasibility of the fully automatic planning and quantification methods in different patient cohorts.
Carotid atherosclerosis, a disease in which plaque builds up inside the vessel wall, is a major cause of ischemic stroke. Traditionally, atherosclerosis risk stratification has been based heavily on the percentage of stenosis. However, a growing body of evidence suggests that luminal stenosis may not be the only cause of symptoms; plaque composition may have a greater impact on the disease outcome. High-resolution vessel wall magnetic resonance imaging (VWMRI) is one of the most promising modalities for visualizing and evaluating carotid atherosclerotic plaque. The quantitative assessment of carotid atherosclerotic disease requires vessel wall segmentation and plaque classification, which are generally performed by manual delineation. However, manual contour tracing is labor-intensive, time-consuming and subject to inter-observer and inter-scan variability, which makes manual image analysis impractical for studies in which large volumes of data need to be processed. Therefore, the main goals of this thesis are to: 1) develop approaches to automatically, robustly and reproducibly segment the carotid vessel wall and classify atherosclerotic plaque from multi-spectral VWMRI; 2) validate the developed methods against a reference standard; and 3) extract imaging biomarkers that can assist the evaluation of carotid artery disease.
In this dissertation we developed a number of automatic methods for multi-modal data registration, mainly between mass spectrometry imaging, imaging microscopy, and the Allen Brain Atlas. We have shown the importance of these methods for performing large-scale preclinical biomarker discovery investigations for neurological disorders. We have also proposed a data-driven approach to stratify patients' tumor tissues into molecularly distinct tumor subpopulations and to automatically identify those subpopulations that drive patient outcome.
This thesis describes a versatile tuple-based optimization framework. The framework is capable of optimizing traditional imperative code (such as sparse matrix computations) as well as declarative code (such as database queries). In the first part of this thesis, the vertical integration of database applications is discussed. Using the described framework, application code as well as declarative database queries can be represented within the same intermediate representation, unlocking many optimization opportunities. The second part of this thesis explores the optimization of irregular codes using this framework. It is shown that by expressing irregular codes within the presented framework, many variants of the code using different data structures can be generated automatically.
Heterogeneous computing platforms support the traditional types of parallelism, such as instruction-level, data, task, and pipeline parallelism, and provide the opportunity to exploit a combination of different types of parallelism at different platform levels. The architectural diversity of platform components makes tapping into the platform's potential a challenging programming task. This thesis makes an important step in this direction by introducing a novel methodology for the automatic generation of structured, multi-level parallel programs from sequential applications. We introduce a novel hierarchical intermediate program representation (HiPRDG) that captures the notions of structure and hierarchy in the polyhedral model used for compile-time program transformation and code generation. Using the HiPRDG as the starting point, we present a novel method for the generation of multi-level programs (MLPs) featuring different types of parallelism, such as task, data, and pipeline parallelism. Moreover, we introduce concepts and techniques for data parallelism identification, GPU code generation, and asynchronous data-driven execution on heterogeneous platforms with efficient overlapping of host-accelerator communication and computation. By enabling the modular, hybrid parallelization of program model components via HiPRDG, this thesis opens the door to highly efficient, tailor-made parallel program generation and auto-tuning for next generations of multi-level heterogeneous platforms with diverse accelerators.
This thesis proposes several new algorithms, including X-ray angiographic image enhancement, three-dimensional (3D) angiographic reconstruction, angiographic overlap prediction, and the co-registration of X-ray angiography with intracoronary imaging devices such as intravascular ultrasound (IVUS) and optical coherence tomography (OCT). The algorithms were integrated into prototype software packages that were validated at a number of clinical centers. The feasibility of using such software packages in a typical clinical population was verified, while the advantages and accuracy of the proposed algorithms were demonstrated in phantom and in-vivo clinical studies. In addition, based on the proposed approaches and the conducted studies, this thesis reports a number of findings, including the impact of acquisition angle difference on 3D quantitative coronary angiography (QCA), the clinical characteristics of bifurcation optimal viewing angles and bifurcation angles, and the discrepancy between lumen dimensions as assessed by 3D QCA and by IVUS or OCT.
In contemporary computer systems, data layout has a great influence on performance. Traditionally, automatic restructuring in type-unsafe languages has been hard, especially in the presence of pointers. In this thesis, the foundations are laid for the successful restructuring of pointer-linked data structures in type-unsafe languages such as C.
With the increasing prevalence and hospitalization rate of ischaemic heart disease, diagnostic imaging for ischaemia is growing explosively. Clinical decision making on revascularization procedures requires reliable viability assessment to assure long-term patient survival and to improve the cost effectiveness of therapy and treatment. As such, there is an increasing demand for a computer-assisted diagnosis (CAD) method for ischaemic heart disease that supports clinicians with an objective analysis of infarct severity, a viability assessment or a prediction of potential functional improvement before performing revascularization. The goal of this thesis was to explore novel mechanisms that can be used for CAD in ischaemic heart disease, particularly through wall motion analysis from cardiac MR images. Current diagnostic practice in wall motion analysis from cardiac MR relies on visual wall motion scoring, which suffers from inter- and intra-observer variability. To minimize this variability, an automated method must contain essential knowledge of how the heart contracts normally. This enables automatic quantification of regional abnormal wall motion, detection of segments with contractile reserve, and prediction of functional improvement under stress.
In this thesis we aim at automating the analysis of 3D echocardiography, mainly targeting the functional analysis of the left ventricle. Manual analysis of these data is cumbersome and time-consuming, and is associated with inter-observer and inter-institutional variability. Methods are presented for the reconstruction of 3D echocardiographic images from fast-rotating ultrasound transducers, and for the analysis of 3D echocardiography in general, using tracking, detection and model-based segmentation techniques to ultimately segment the left ventricle fully automatically for functional analysis. We show that reliable quantification of left ventricular volume and mitral valve displacement can be achieved using the presented techniques.
This thesis presents a parallel and distributed approach for processing network traffic at high speeds. The proposed architecture provides the processing power required to run one or more traffic processing applications at line rate, processing full packets at multi-gigabit speeds in a parallel and distributed processing environment. Moreover, the architecture is flexible and scalable to future needs, supporting heterogeneous processing nodes such as different hardware architectures or different generations of the same hardware architecture. In addition to these processing, flexibility, and scalability features, our architecture provides an easy-to-use environment through a new programming language, called FPL, for traffic processing in a distributed environment. The language and its compiler hide specific programming details of heterogeneous systems and distributed environments.
Workloads play an important role in experimental performance studies of computer systems. This thesis presents a comprehensive characterization of real workloads on production clusters and Grids. A variety of correlation structures and rich scaling behavior are identified in workload attributes such as job arrivals and run times, including pseudo-periodicity, long-range dependence, and strong temporal locality. Based on these analytic results, workload models are developed to fit the real data. For job arrivals, three different kinds of autocorrelation are investigated. For short- to middle-range dependent data, Markov modulated Poisson processes (MMPP) are good models because they can capture correlations between interarrival times while remaining analytically tractable. For long-range dependent and multifractal processes, the multifractal wavelet model (MWM) is able to reconstruct the scaling behavior, and it provides a coherent wavelet framework for analysis and synthesis. Pseudo-periodicity is a special kind of autocorrelation, and it can be modeled by a matching pursuit approach. For workload attributes such as run time, a new model is proposed that can fit not only the marginal distribution but also second-order statistics such as the autocorrelation function (ACF). The development of workload models enables simulation studies of Grid scheduling strategies. Using the synthetic traces, the performance impact of workload correlations on Grid scheduling is quantitatively evaluated. The results indicate that autocorrelations in workload attributes can cause performance degradation; in some situations the difference can be up to several orders of magnitude.
The larger the autocorrelation, the worse the performance; this is shown at both the cluster and the Grid level. This study demonstrates the importance of realistic workload models in performance evaluation studies. Regarding performance prediction, this thesis treats the targeted resources as a "black box" and takes a statistical approach. It is shown that statistical learning based methods, after a well-thought-out and fine-tuned design, are able to deliver good accuracy and performance.
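The autocorrelation function central to the workload characterization above can be illustrated with a short sketch. The following Python example estimates the ACF of two synthetic interarrival-time series: an uncorrelated (Poisson-like) one, and a crudely correlated one obtained by smoothing it. The exponential trace and the smoothing window are assumptions for illustration only, not data or models from the thesis.

```python
import numpy as np

def acf(x, max_lag):
    """Estimate the autocorrelation function of a 1-D series up to max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
# Uncorrelated exponential interarrival times: ACF is ~0 at positive lags.
iid = rng.exponential(scale=1.0, size=10000)
# A correlated series, obtained by applying a moving average of width 10.
corr = np.convolve(iid, np.ones(10) / 10, mode="valid")

print(acf(iid, 5))   # lag 0 is 1.0; higher lags stay near 0
print(acf(corr, 5))  # strong positive autocorrelation at small lags
```

An MMPP or wavelet model fitted to a real trace would be judged, among other criteria, by how well it reproduces such an ACF.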
Accurate, non-invasive visualization and quantification of atherosclerosis by means of MR is nowadays of high importance, regarding not only the morphology but also the composition of the atherosclerotic plaques. MR has proven capable of detecting early, subclinical vulnerable plaque. However, when the analysis of atherosclerosis is based on visual interpretation of the images or on manually delineated structures, the outcome is often not reliable. The main objective of this thesis was to investigate novel image processing techniques to automatically quantify markers of the severity of atherosclerosis in a reproducible manner, by automatically outlining the boundaries of the blood vessel wall, the lumen and the plaque burden. Different algorithms have been developed and applied to different vascular beds (the aorta and the carotid arteries), proving to be versatile and powerful tools that provide quantitative, reproducible parameters. These new techniques have been validated in limited populations, proving to be accurate and reproducible. They are therefore suitable to be further adapted and employed in clinical vascular research, assisting the physician in making a diagnosis and identifying high-risk patients who may benefit from treatment.
Modern compilers implement a number of optimization switches that must be configured carefully in order to obtain the best performance. However, few strategies exist to configure these compiler switches or flags. This is because the performance of a code depends on both the target architecture and the application. Additionally, the effect of a compiler optimization is highly dependent on which other compiler optimizations are employed, causing its actual effect to be masked and hard to predict. In this thesis, we propose to use statistical analysis to determine the effectiveness of compiler optimizations. This enables us to construct systematic methodologies for determining settings of compiler optimizations automatically. The proposed methodologies are independent of the implementation of both compilers and applications; therefore, it is easy to apply them to any combination of compiler and application. This versatility makes our results unique compared to other approaches. Additionally, with our methodologies, users can choose their optimization objective, for example execution time or code size. From the results shown in this thesis, we conclude that the statistical tuning of compiler optimizations is both possible and useful.
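The masking of one flag's effect by another, and the idea of estimating effectiveness statistically, can be sketched as follows. This Python example runs a full factorial design over three hypothetical flags and computes each flag's main effect; the flag names and the synthetic cost function (standing in for actually compiling and timing an application) are assumptions for illustration, not the methodology of the thesis.

```python
import itertools

# Hypothetical optimization flags; measure() is a synthetic stand-in for
# compiling the application with a flag setting and timing the result.
FLAGS = ["unroll", "inline", "vectorize"]

def measure(setting):
    """Synthetic 'execution time' in seconds: lower is better; flags interact."""
    t = 10.0
    if setting["inline"]:
        t -= 2.0
    if setting["vectorize"]:
        t -= 1.5
    # 'unroll' only helps when 'vectorize' is also on (an interaction effect).
    if setting["unroll"]:
        t += 0.5 if not setting["vectorize"] else -1.0
    return t

# Full factorial design: measure every on/off combination once.
runs = []
for bits in itertools.product([False, True], repeat=len(FLAGS)):
    setting = dict(zip(FLAGS, bits))
    runs.append((measure(setting), setting))

# Main effect of each flag: mean time with it on minus mean time with it off.
# A negative value means the flag helps on average across other settings.
for f in FLAGS:
    on = [t for t, s in runs if s[f]]
    off = [t for t, s in runs if not s[f]]
    print(f, sum(on) / len(on) - sum(off) / len(off))

best_time, best_setting = min(runs, key=lambda r: r[0])
print(best_time, best_setting)
```

Note how the main effect of `unroll` is small even though it matters strongly in combination with `vectorize`: exactly the masking phenomenon that motivates a statistical, rather than one-flag-at-a-time, analysis.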
The remainder of this thesis is organized as follows. Chapters 2 and 3 introduce the specification formalisms used in this thesis. In Chapter 2 we present the computation language. We show that it facilitates the description of specifications that are not partial to a particular mode of execution. Furthermore, we present a semantics and a logic for reasoning about the correctness of programs. In Chapter 3 we present the coordination language. We define its semantics and show how it connects to the computation language. In Chapters 4 and 5 we develop a theory of refinement. This theory provides a number of proof techniques that enable us to incrementally refine the behavioural aspects of a program. These chapters form the most theoretical part of this thesis; it should be possible to get an understanding of the methods derived in them without going through all the proofs. In Chapter 7 we illustrate the method of design by considering some case studies. Comparisons with related work and conclusions are given in Chapters 8 and 9.