Pulmonary function tests (PFTs) play an important role in screening for and following up pulmonary involvement in systemic sclerosis (SSc). However, some patients are unable to perform PFTs due to contraindications. In addition, it is unclear how lung function is affected by changes in lung structure in SSc. Therefore, this study aims to explore the potential of automatically estimating PFT results from chest CT scans of SSc patients and how different regions influence the estimation of PFTs. Deep regression networks were developed with transfer learning to estimate PFTs from 316 SSc patients. Segmented lungs and vessels were used to mask the CT images to train the network with different inputs: from the entire CT scan, to lungs-only, to vessels-only. The network trained on entire CT scans with transfer learning achieved an ICC of 0.71, 0.76, 0.80, and 0.81 for the estimation of DLCO, FEV1, FVC, and TLC, respectively. The performance of the networks gradually decreased when trained on data from lungs-only and vessels-only. Regression attention maps showed that regions close to large vessels were highlighted more than other regions, and occasionally regions outside the lungs were highlighted. These experiments show that, apart from the lungs and large vessels, other regions contribute to PFT estimation. In addition, adding manually designed biomarkers increased the correlation (R) from 0.75, 0.74, 0.82, and 0.83 to 0.81, 0.83, 0.88, and 0.90, respectively. This suggests that manually designed imaging biomarkers can still contribute to explaining the relation between lung function and structure.
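The three input variants (entire scan, lungs-only, vessels-only) amount to masking the CT volume before it enters the regression network. A minimal numpy sketch of the idea, with assumed array and function names (not the authors' code); voxels outside the mask are set to the scan minimum:

```python
import numpy as np

def masked_input(ct, lung_mask, vessel_mask, variant="entire"):
    """Build one of the three network input variants described above."""
    if variant == "entire":
        return ct                                    # full CT scan, no masking
    if variant == "lungs":
        return np.where(lung_mask, ct, ct.min())     # keep lung voxels only
    if variant == "vessels":
        return np.where(vessel_mask, ct, ct.min())   # keep vessel voxels only
    raise ValueError(f"unknown variant: {variant}")

# toy 3D volume with nested lung and vessel masks
rng = np.random.default_rng(0)
ct = rng.random((8, 8, 8))
lungs = np.zeros_like(ct, dtype=bool); lungs[2:6, 2:6, 2:6] = True
vessels = np.zeros_like(ct, dtype=bool); vessels[3:5, 3:5, 3:5] = True
x = masked_input(ct, lungs, vessels, variant="lungs")
```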
Helden, G. van; Werf, V. van der; Saunders-Smits, G.N.; Specht, M.M. 2023
Automated Machine Learning (AutoML) frameworks are designed to select the optimal combination of operators and hyperparameters. Classical AutoML approaches based on Bayesian Optimization (BO) often integrate all operator search spaces into a single search space. However, a disadvantage of this history-based strategy is that it can be less robust under random initialization than optimizing each operator-algorithm combination independently. To overcome this issue, a novel contesting procedure, Divide And Conquer Optimization (DACOpt), is proposed to make AutoML more robust. DACOpt partitions the AutoML search space into a reasonable number of sub-spaces based on algorithm similarity and budget constraints. Furthermore, throughout the optimization process, DACOpt allocates resources to each sub-space to ensure that (1) all areas of the search space are covered and (2) more resources are assigned to the most promising sub-space. Two extensive sets of experiments on 117 benchmark datasets demonstrate that DACOpt achieves significantly better results on 36% of the AutoML benchmark datasets: 5% when compared to TPOT, 8% compared to AutoSklearn, 15% compared to H2O, and 18% compared to ATM.
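The two allocation guarantees, covering every sub-space while favoring the most promising one, can be illustrated with a toy budget allocator in the spirit of DACOpt (a sketch with hypothetical names, not the published algorithm):

```python
import random

def probe_score(subspace):
    """Hypothetical cheap probe of one sub-space (stands in for a few BO steps)."""
    return random.Random(hash(subspace)).random()

def allocate_budget(subspaces, total_budget, min_per_space=2):
    """Give every sub-space a minimum budget so the whole search space stays
    covered, then assign the remaining budget to the sub-space whose probe
    looked most promising."""
    scores = {s: probe_score(s) for s in subspaces}
    alloc = {s: min_per_space for s in subspaces}
    best = max(scores, key=scores.get)
    alloc[best] += total_budget - min_per_space * len(subspaces)
    return alloc

alloc = allocate_budget(["trees", "linear", "svm", "boosting"], total_budget=100)
```

Real DACOpt interleaves this with the optimization itself, re-ranking sub-spaces as evidence accumulates rather than after a single probe.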
In the context of the current COVID-19 pandemic, various sophisticated epidemic and machine learning models have been used for forecasting. These models, however, rely on carefully selected architectures and detailed data that is often only available for specific regions. Automated machine learning (AutoML) addresses these challenges by creating forecasting pipelines automatically in a data-driven manner, resulting in high-quality predictions. In this paper, we study the role of open data along with AutoML systems in acquiring high-performance forecasting models for COVID-19. Here, we adapted the AutoML framework auto-sklearn to the time series forecasting task and introduced two variants for multi-step ahead COVID-19 forecasting, which we refer to as (a) multi-output and (b) repeated single-output forecasting. We studied the usefulness of anonymised open mobility datasets (place visits and the use of different transportation modes) in addition to open mortality data. We evaluated three drift adaptation strategies to deal with concept drifts in the data: (i) refitting our models on part of the data, (ii) refitting on the full data, or (iii) retraining the models completely. We compared the performance of our AutoML methods in terms of RMSE with five baselines on two testing periods (over 2020 and 2021). Our results show that combining mobility features and mortality data improves forecasting accuracy. Furthermore, we show that, when faced with concept drifts, our method refitted on recent data using place-visit mobility features outperforms all other approaches for 22 of the 26 countries considered in our study.
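The two multi-step strategies differ only in how the h-step-ahead model is built: one direct model per horizon step versus one recursive one-step model. Sketched below with a trivial autoregressive stand-in (an illustration of the two strategies, not an auto-sklearn pipeline):

```python
import numpy as np

def fit_ratio(x, y):
    """Least-squares coefficient a in y ≈ a * x."""
    return float(x @ y / (x @ x))

def repeated_single_output(series, horizon):
    """(b) Repeated single-output: fit a one-step model, then feed each
    prediction back in to roll the forecast forward."""
    a = fit_ratio(series[:-1], series[1:])
    preds, last = [], series[-1]
    for _ in range(horizon):
        last = a * last
        preds.append(last)
    return np.array(preds)

def multi_output(series, horizon):
    """(a) Multi-output: fit one direct model per horizon step h,
    mapping x[t] to x[t+h]."""
    return np.array([fit_ratio(series[:-h], series[h:]) * series[-1]
                     for h in range(1, horizon + 1)])

# toy series where both strategies agree exactly (geometric decay)
series = 0.5 ** np.arange(10)
p_single = repeated_single_output(series, horizon=3)
p_multi = multi_output(series, horizon=3)
```

On real data the two generally diverge: the recursive variant accumulates one-step errors, while the direct variant fits each horizon independently.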
Stein, B. van; Raponi, E.; Sadeghi, Z.; Bouman, N.; Ham, R.C.H.J. van; Bäck, T.H.W. 2022
Explainable Artificial Intelligence (XAI) is an increasingly important field of research required to bring AI to the next level in real-world applications. Global sensitivity analysis (GSA) methods play an important role in XAI, as they can provide an understanding of which (groups of) parameters have a high influence on the predictions of machine learning models and on the output of simulators and real-world processes. In this paper, we conduct a survey of global sensitivity methods in an XAI context and present both a qualitative and a quantitative analysis of these methods under different conditions. In addition to the overview and comparison, we propose an open-source application, GSAreport, that allows users to easily generate extensive reports using a carefully selected set of global sensitivity analysis methods, depending on the number of dimensions and samples, to gain a deep understanding of the role of each feature for a given model or dataset. Finally, we apply the discussed methods to a complex real-world application of genomic prediction and draw conclusions about when to use which GSA methods.
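One classic GSA method of the kind surveyed is the first-order Sobol index, S_i = Var(E[Y|X_i]) / Var(Y). A minimal pick-freeze (Saltelli-style) estimator in numpy, as a generic sketch of the technique rather than the GSAreport tool itself:

```python
import numpy as np

def sobol_first_order(f, dim, n=20000, seed=0):
    """Estimate first-order Sobol indices from paired sample matrices A and B:
    for each input i, re-evaluate f on A with column i 'frozen' from B."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, dim)), rng.random((n, dim))
    fA, fB = f(A), f(B)
    var_y = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # freeze column i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var_y
    return S

# linear model Y = X0 + 2*X1 on [0,1]^2: analytically S = (1/5, 4/5)
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], dim=2)
```

For models with interactions, total-order indices (which attribute interaction variance as well) are usually estimated alongside these.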
CNN design and deployment on embedded edge-processing systems is an error-prone and effort-hungry process, which creates the need for accurate and effective automated assisting tools. In such tools, pre-evaluating platform-aware CNN metrics such as latency, energy cost, and throughput is a key requirement for successfully reaching the implementation goals imposed by use-case constraints. Especially when more complex parallel and heterogeneous computing platforms are considered, currently used estimation methods are inaccurate or require a lot of characterization experiments and effort. In this paper, we propose an alternative method, designed to be flexible, easy to use, and accurate at the same time. Considering a modular platform and execution model that adequately describes the details of the platform and the scheduling of different CNN operators on different platform processing elements, our method precisely captures operations and data transfers and their deployment on computing and communication resources, significantly improving the evaluation accuracy. We have tested our method on more than 2000 CNN layers, targeting an FPGA-based accelerator and a GPU platform as reference example architectures. Results have shown that our evaluation method increases the estimation precision by up to 5× for execution time, and by 2× for energy, compared to other widely used analytical methods. Moreover, we assessed the impact of the improved platform-awareness on a set of neural architecture search experiments, targeting both hardware platforms and enforcing 2 sets of latency constraints, performing 5 trials on each search space, for a total of 20 experiments. The predictability is improved by 4×, reaching selection results clearly more similar to those obtained with on-hardware measurements than the alternatives do.
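For context, the widely used analytical baselines the paper improves on can be reduced to a roofline-style per-layer bound (a generic sketch with assumed parameter names, far simpler than the proposed modular platform and execution model):

```python
def layer_latency(macs, bytes_moved, peak_macs_per_s, bandwidth_bytes_per_s):
    """Roofline-style lower bound on one CNN layer's execution time: the layer
    is limited either by compute throughput or by memory traffic, whichever
    dominates."""
    compute_time = macs / peak_macs_per_s
    memory_time = bytes_moved / bandwidth_bytes_per_s
    return max(compute_time, memory_time)

# 1 GMAC layer on a 1 TMAC/s accelerator with 10 GB/s memory bandwidth
t = layer_latency(macs=1e9, bytes_moved=4e6, peak_macs_per_s=1e12,
                  bandwidth_bytes_per_s=1e10)
```

Such bounds ignore scheduling, operator-to-processing-element mapping, and communication contention, which is exactly the inaccuracy the proposed model addresses.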
Winderickx, J.; Bellier, P.; Duflot, P.; Mentens, N. 2021
Skeletal muscles generate force, enabling movement through a series of fast electro-mechanical activations coordinated by the central nervous system. Understanding the underlying mechanism of such fast muscle dynamics is essential in neuromuscular diagnostics, rehabilitation medicine, and sports biomechanics. The unique combination of electromyography (EMG) and ultrafast ultrasound imaging (UUI) provides valuable insights into both the electrical and mechanical activity of muscle fibers simultaneously, the excitation-contraction (E-C) coupling. In this feasibility study we propose a novel non-invasive method to simultaneously track the propagation of both electrical and mechanical waves in muscles using high-density electromyography and ultrafast ultrasound imaging (5000 fps). Mechanical waves were extracted from the data through an axial tissue velocity estimator based on one-lag autocorrelation. The E-C coupling in electrically evoked twitch contractions of the biceps brachii in healthy participants could successfully be tracked. The excitation wave (i.e. action potential) had a velocity of 3.9 ± 0.5 m/s and the subsequent mechanical (i.e. contraction) wave had a velocity of 3.5 ± 0.9 m/s. The experiment showed evidence that contracting sarcomeres already activated by the action potential (AP) pull on sarcomeres not yet reached by the AP, which was corroborated by simulated contractions of a newly developed multisegmental muscle fiber model consisting of 500 sarcomeres in series. In conclusion, our method can track electromechanical muscle dynamics with high spatio-temporal resolution. Ultimately, characterizing E-C coupling in patients with neuromuscular diseases (e.g. Duchenne or Becker muscular dystrophy) may assess contraction efficiency, monitor the progression of the disease, and determine the efficacy of new treatment options.
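The one-lag autocorrelation velocity estimator is the classic Kasai estimator: the mean phase shift between successive frames, divided by the Doppler scale factor. A minimal numpy sketch (argument names and the synthetic test signal are assumptions, not the authors' implementation):

```python
import numpy as np

def axial_velocity(iq, prf, f0, c=1540.0):
    """Kasai one-lag autocorrelation estimator of axial tissue velocity.
    iq: complex IQ ensemble, shape (frames, depth); prf: frame rate (Hz);
    f0: centre frequency (Hz); c: speed of sound (m/s)."""
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]), axis=0)   # lag-1 autocorrelation per depth
    phase = np.angle(r1)                             # mean inter-frame phase shift
    return c * prf * phase / (4.0 * np.pi * f0)      # Doppler relation

# synthetic check: uniform motion producing a constant phase shift per frame
prf, f0, v_true = 5000.0, 5e6, 0.01                  # 5000 fps, 5 MHz, 1 cm/s
dphi = 4.0 * np.pi * f0 * v_true / (1540.0 * prf)
frames = np.exp(1j * dphi * np.arange(16))[:, None] * np.ones((1, 4))
v_est = axial_velocity(frames, prf, f0)
```

The estimator is unambiguous only while the inter-frame phase shift stays below π, which is why the very high frame rate (5000 fps) matters for fast twitch dynamics.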
Manual or automatic delineation of the esophageal tumor in CT images is known to be very challenging. This is due to the low contrast between the tumor and adjacent tissues, the anatomical variation of the esophagus, and the occasional presence of foreign bodies (e.g. feeding tubes). Physicians therefore usually exploit additional knowledge such as endoscopic findings, clinical history, and additional imaging modalities like PET scans. Acquiring this additional information is time-consuming, while the results are error-prone and might lead to non-deterministic results. In this paper we aim to investigate if, and to what extent, a simplified clinical workflow based on CT alone allows one to automatically segment the esophageal tumor with sufficient quality. For this purpose, we present a fully automatic end-to-end esophageal tumor segmentation method based on convolutional neural networks (CNNs). The proposed network, called Dilated Dense Attention Unet (DDAUnet), leverages spatial and channel attention gates in each dense block to selectively concentrate on determinant feature maps and regions. Dilated convolutional layers are used to manage GPU memory and increase the network receptive field. We collected a dataset of 792 scans from 288 distinct patients, including varying anatomies with air pockets, feeding tubes, and proximal tumors. Repeatability and reproducibility studies were conducted for three distinct splits of training and validation sets. The proposed network achieved a DSC value of 0.79 ± 0.20, a mean surface distance of 5.4 ± 20.2 mm, and a 95% Hausdorff distance of 14.7 ± 25.0 mm on 287 test scans, demonstrating promising results with a simplified clinical workflow based on CT alone. Our code is publicly available via https://github.com/yousefis/DenseUnet_Esophagus_Segmentation.
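Channel attention gates of this kind are in the spirit of squeeze-and-excitation: globally pool each feature map, pass the result through a small bottleneck, and reweight channels with a sigmoid gate. A framework-free numpy sketch of the generic mechanism (not the DDAUnet code; weight shapes are assumptions):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Reweight channels of a (C, H, W) feature map using a squeeze-and-
    excitation style gate; w1, w2 are the weights of a small bottleneck MLP."""
    squeeze = feat.mean(axis=(1, 2))                    # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)              # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))         # per-channel gate in (0, 1)
    return feat * gate[:, None, None]                   # scale each channel

C, H, W = 4, 8, 8
feat = np.random.default_rng(0).random((C, H, W))
w1 = np.random.default_rng(1).random((2, C))            # bottleneck down to 2 units
w2 = np.random.default_rng(2).random((C, 2))
out = channel_attention(feat, w1, w2)
```

A spatial attention gate works analogously but pools over channels to produce an (H, W) map, letting the network emphasize regions rather than feature maps.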
In this paper we propose a supervised method to predict registration misalignment using convolutional neural networks (CNNs). This task is cast as a classification problem with multiple classes of misalignment: "correct" 0-3 mm, "poor" 3-6 mm, and "wrong" over 6 mm. Rather than a direct prediction, we propose a hierarchical approach, where the prediction is gradually refined from coarse to fine. Our solution is based on a convolutional Long Short-Term Memory (LSTM), using hierarchical misalignment predictions on three resolutions of the image pair and leveraging the intrinsic strengths of an LSTM for this problem. The convolutional LSTM is trained on a set of artificially generated image pairs obtained from artificial displacement vector fields (DVFs). Results on chest CT scans show that incorporating multi-resolution information, and its hierarchical use via an LSTM, leads to overall better F1 scores, with fewer misclassifications in a well-tuned registration setup. The final system yields an accuracy of 87.1% and an average F1 score of 66.4%, aggregated over two independent chest CT scan studies.
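The three-class target amounts to binning the residual registration error; a trivial sketch of how ground-truth labels for the artificially displaced pairs could be derived (an illustration of the class definition, not the training pipeline):

```python
import numpy as np

def misalignment_class(residual_mm):
    """Map a residual misalignment in mm onto the three classes above:
    0 = "correct" (0-3 mm), 1 = "poor" (3-6 mm), 2 = "wrong" (> 6 mm)."""
    return int(np.digitize(residual_mm, [3.0, 6.0]))

labels = [misalignment_class(m) for m in (1.2, 4.5, 9.0)]
```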
Elmahdy, M.S.; Beljaards, L.; Yousefi, S.; Sokooti, H.; Verbeek, F.; Heide, U.A. van der; Staring, M. 2021
Medical image registration and segmentation are two of the most frequent tasks in medical image analysis. As these tasks are complementary and correlated, it would be beneficial to apply them simultaneously in a joint manner. In this paper, we formulate registration and segmentation as a joint problem via a Multi-Task Learning (MTL) setting, allowing these tasks to leverage their strengths and mitigate their weaknesses through the sharing of beneficial information. We propose to merge these tasks not only on the loss level, but on the architectural level as well. We studied this approach in the context of adaptive image-guided radiotherapy for prostate cancer, where planning and follow-up CT images as well as their corresponding contours are available for training. At testing time the contours of the follow-up scans are not available, which is a common scenario in adaptive radiotherapy. The study involves two datasets from different manufacturers and institutes. The first dataset was divided into training (12 patients) and validation (6 patients) sets and was used to optimize and validate the methodology, while the second dataset (14 patients) was used as an independent test set. We carried out an extensive quantitative comparison between the quality of the automatically generated contours from different network architectures as well as loss weighting methods. Moreover, we evaluated the quality of the generated deformation vector fields (DVFs). We show that MTL algorithms outperform their Single-Task Learning (STL) counterparts and achieve better generalization on the independent test set.
The best algorithm achieved a mean surface distance of 1.06 ± 0.3 mm, 1.27 ± 0.4 mm, 0.91 ± 0.4 mm, and 1.76 ± 0.8 mm on the validation set for the prostate, seminal vesicles, bladder, and rectum, respectively. The high accuracy of the proposed method, combined with its fast inference speed, makes it a promising method for automatic re-contouring of follow-up scans for adaptive radiotherapy, potentially reducing treatment-related complications and therefore improving patients' quality of life after treatment. The source code is available at https://github.com/moelmahdy/JRS-MTL.
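Loss-level merging of the two tasks comes down to a weighted combination of the task losses. One widely used loss-weighting method of the kind compared in such studies is homoscedastic-uncertainty weighting (Kendall et al.), sketched generically here (not necessarily the exact variant used in the paper):

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine task losses as sum_i exp(-s_i) * L_i + s_i, where
    s_i = log(sigma_i^2) is a learnable per-task weight: tasks with higher
    estimated noise are automatically down-weighted, and the +s_i term
    prevents the trivial solution of inflating all sigmas."""
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# with every s_i = 0 this reduces to a plain unweighted sum
total = uncertainty_weighted_loss([0.8, 1.5], [0.0, 0.0])
```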
Elmahdy, M.S.; Beljaards, L.; Yousefi, S.; Sokooti, H.; Verbeek, F.J.; Heide, U.A. van der; Staring, M. 2021
Adaptive intelligence aims at empowering machine learning techniques with the additional use of domain knowledge. In this work, we present the application of adaptive intelligence to accelerate MR... Show moreAdaptive intelligence aims at empowering machine learning techniques with the additional use of domain knowledge. In this work, we present the application of adaptive intelligence to accelerate MR acquisition. Starting from undersampled k-space data, an iterative learning-based reconstruction scheme inspired by compressed sensing theory is used to reconstruct the images. We developed a novel deep neural network to refine and correct prior reconstruction assumptions given the training data. The network was trained and tested on a knee MRI dataset from the 2019 fastMRI challenge organized by Facebook AI Research and NYU Langone Health. All submissions to the challenge were initially ranked based on similarity with a known groundtruth, after which the top 4 submissions were evaluated radiologically. Our method was evaluated by the fastMRI organizers on an independent challenge dataset. It ranked #1, shared #1, and #3 on respectively the 8x accelerated multi-coil, the 4x multi-coil, and the 4x single-coil tracks. This demonstrates the superior performance and wide applicability of the method. Show less
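The compressed-sensing-inspired scheme alternates a data-consistency step with a prior (sparsity) correction; in the learned version, a trained network replaces the hand-crafted correction. The classical non-learned baseline is plain ISTA, shown here as a minimal sketch (not the submitted method):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 prior: shrink coefficients toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam=0.05, step=0.5, iters=200):
    """Iterative shrinkage-thresholding: a gradient step on the
    data-consistency term ||y - Ax||^2, followed by the sparsity correction."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

# sanity check: with A = I the lasso solution is soft_threshold(y, lam)
A = np.eye(4)
y = np.array([1.0, 0.0, 0.5, 0.0])
x = ista(A, y)
```

In the MRI setting, A would be the undersampled Fourier (plus coil-sensitivity) operator, and the learned network takes over the role of `soft_threshold`.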
With a steadily growing number of vehicles, our roads are getting more and more crowded and traffic jams are becoming common. Vehicle platoon systems form a possible solution in the short term. Such a system consists of a number of vehicles automatically following a leader vehicle, in line, one after another at a short but safe distance. Ideally, all vehicles maintain the same speed, so as to make better use of the road by minimizing the distance between two vehicles. In this paper we present a timed automata model of a vehicle platoon system with the goal of finding a minimal but guaranteed safe distance between two vehicles under variable speed. Contrary to other models based on cooperative adaptive cruise control, we assume no (Internet) communication among the vehicles or with the road system. Instead of such a global perspective we take a local point of view: each vehicle relies on its own sensors to dynamically calculate and maintain a safe distance to the preceding member of the platoon. We use the model checker UPPAAL to verify that the system does not deadlock and, most importantly, that it is safe, avoiding crashes at all times.
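The kind of bound such a model verifies can be illustrated with a textbook kinematic worst case: the follower must be able to stop, after its sensing and actuation delay, before reaching the point where a fully braking leader stops. A sketch with assumed parameter names (the actual UPPAAL model is a timed automaton, not this closed-form formula):

```python
def safe_distance(v, t_react, decel_self, decel_lead):
    """Minimal gap (m) so the follower cannot hit a fully braking leader.
    v: common platoon speed (m/s); t_react: sensing/actuation delay (s);
    decel_self, decel_lead: braking decelerations (m/s^2)."""
    follower_stop = v * t_react + v**2 / (2 * decel_self)  # reaction + braking distance
    leader_stop = v**2 / (2 * decel_lead)                  # leader's braking distance
    return max(0.0, follower_stop - leader_stop)

# 90 km/h platoon, 0.3 s delay, follower brakes slightly worse than leader
d = safe_distance(v=25.0, t_react=0.3, decel_self=6.0, decel_lead=8.0)
```

Model checking goes beyond this formula by exhaustively exploring all timings and speed changes, not just the single worst-case scenario assumed here.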
With the development of aerospace technologies, the mission planning of agile earth observation satellites has to consider several objectives simultaneously, such as profit, number of observation tasks, image quality, resource balance, and observation timeliness. In this paper, a five-objective mixed-integer optimization problem is formulated for agile satellite mission planning. Preference-based multi-objective evolutionary algorithms, i.e., T-MOEA/D-TCH, T-MOEA/D-PBI, and T-NSGA-III, are applied to solve the problem. Problem-specific coding and decoding approaches are proposed based on heuristic rules. Experiments have shown the advantage of integrating preferences in many-objective satellite mission planning. A comparative study is conducted with other state-of-the-art preference-based methods (T-NSGA-II, T-RVEA, and MOEA/D-c). Results demonstrate that the proposed T-MOEA/D-TCH has the best performance with regard to IGD and elapsed runtime. An interactive framework is also proposed that lets the decision maker adjust preferences during the search. We show by example that a more satisfactory solution can be obtained through the interactive approach.
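The TCH suffix refers to the weighted Tchebycheff scalarization used in MOEA/D-style decomposition, g(x | w, z*) = max_i w_i |f_i(x) - z*_i|, which turns the many-objective problem into a family of single-objective sub-problems. A generic numpy sketch of the scalarization (not the problem-specific satellite planner):

```python
import numpy as np

def tchebycheff(objectives, weights, ideal):
    """Weighted Tchebycheff scalarization: the worst weighted deviation of the
    objective vector from the ideal point z*. Lower is better (minimisation)."""
    w = np.asarray(weights, dtype=float)
    f = np.asarray(objectives, dtype=float)
    z = np.asarray(ideal, dtype=float)
    return float(np.max(w * np.abs(f - z)))

# two candidate solutions for a bi-objective minimisation problem
g1 = tchebycheff([1.0, 4.0], weights=[0.5, 0.5], ideal=[0.0, 0.0])  # unbalanced
g2 = tchebycheff([2.0, 2.0], weights=[0.5, 0.5], ideal=[0.0, 0.0])  # balanced
```

Decision-maker preferences enter through the weight vectors: concentrating the weights around a preferred region steers the sub-problems, and hence the population, toward it, which is what the interactive framework adjusts during the search.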