Bayesian optimization is often used to optimize expensive black-box optimization problems with long simulation times. Typically, Bayesian optimization algorithms propose one solution per iteration. The downside of this strategy is the sub-optimal use of available computing power. To use the available computing power (or a number of licenses, etc.) efficiently, we introduce a multi-point acquisition function for parallel efficient multi-objective optimization algorithms. The multi-point acquisition function is based on the hypervolume contribution of multiple solutions simultaneously, leading to well-spread solutions along the Pareto frontier. By combining this acquisition function with a constraint handling technique, multiple feasible solutions can be proposed and evaluated in parallel every iteration. The hypervolume and feasibility of the solutions can easily be estimated by using multiple cheap radial basis functions as surrogates with different configurations. The acquisition function can be used with different population sizes and even for one-shot optimization. The strength and generalizability of the new acquisition function is demonstrated by optimizing a set of black-box constrained multi-objective problem instances. The experiments show a large time saving from using our novel multi-point acquisition function, while only marginally worsening the hypervolume after the same number of function evaluations.
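The core idea above, scoring a whole batch of candidates by the hypervolume they dominate jointly, can be illustrated with a minimal bi-objective (minimization) sketch. The function names and the 2-D sweep-line hypervolume routine are illustrative, not the paper's actual implementation:

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of bi-objective (minimization)
    points with respect to a reference point ref, via a sweep over
    points sorted by the first objective."""
    pts = np.array(sorted({tuple(p) for p in points}))
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # Only non-dominated points (strictly improving f2) contribute.
        if f2 < best_f2 and f1 < ref[0]:
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

def multipoint_acquisition(batch, archive, ref):
    """Joint hypervolume improvement of a candidate batch over the
    current archive. Scoring the batch as a whole (rather than each
    point separately) rewards well-spread solutions along the front."""
    return hypervolume_2d(archive + batch, ref) - hypervolume_2d(archive, ref)
```

Because the batch is evaluated jointly, two candidates that dominate nearly the same region add little beyond one of them alone, which is what pushes the proposed points apart along the Pareto frontier.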
Kefalas, M.; Stein, B. van; Baratchi, M.; Apostolidis, A.; Bäck, T.H.W. 2022
This paper proposes SAMO-COBRA, a novel Self-Adaptive algorithm for Multi-Objective Constrained Optimization using Radial Basis Function Approximations. The algorithm automatically determines the best Radial Basis Function fit as a surrogate for the objectives as well as the constraints, to find new feasible Pareto-optimal solutions. The algorithm also tunes its hyper-parameters on the fly to improve its local search strategy. In every iteration one solution is added and evaluated, resulting in a strategy that requires only a small number of function evaluations to find a set of feasible solutions on the Pareto frontier. The proposed algorithm is compared to a wide set of other state-of-the-art algorithms (NSGA-II, NSGA-III, CEGO, SMES-RBF) on 18 constrained multi-objective problems. The experiments show that our algorithm outperforms the other algorithms in terms of achieved hypervolume after a fixed, small evaluation budget. These results suggest that SAMO-COBRA is a good choice for constrained multi-objective optimization problems with expensive function evaluations.
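The self-adaptive surrogate choice described above can be sketched as follows: fit several candidate RBF kernels and keep the one with the lowest leave-one-out error. This is a simplified stand-in for SAMO-COBRA's actual selection procedure; the kernel set and helper names are assumptions:

```python
import numpy as np

# Candidate RBF kernels; the self-adaptive step picks, per objective
# or constraint, the kernel with the lowest leave-one-out error.
KERNELS = {
    "cubic":    lambda r: r ** 3,
    "gaussian": lambda r: np.exp(-(r ** 2)),
    "linear":   lambda r: r,
}

def fit_rbf(X, y, kernel):
    """Interpolating RBF model: solve for weights on the kernel matrix
    of pairwise distances (tiny ridge term for numerical stability)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    weights = np.linalg.solve(kernel(r) + 1e-10 * np.eye(len(X)), y)
    return lambda x: kernel(np.linalg.norm(X - x, axis=-1)) @ weights

def best_rbf(X, y):
    """Select the kernel name with the lowest leave-one-out squared error."""
    def loo_error(name):
        k, err = KERNELS[name], 0.0
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            err += (fit_rbf(X[mask], y[mask], k)(X[i]) - y[i]) ** 2
        return err
    return min(KERNELS, key=loo_error)
```

The same selection can be run independently for each objective and each constraint, so every response gets the surrogate family that fits it best.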
Ponse, K.; Kononova, A.V.; Loleyt, M.; Stein, B. van 2021
Automated symmetry detection is still a difficult task in 2021. However, it has applications in computer vision, and it also plays an important part in understanding art. This paper focuses on aiding the latter by comparing different state-of-the-art automated symmetry detection algorithms. For one such algorithm, aimed at reflectional symmetries, we propose post-processing improvements to find localised symmetries in images, improve the selection of detected symmetries, and identify another symmetry type (rotational). To detect rotational symmetries, we contribute a machine learning model that detects rotational symmetries based on provided pairs of reflection symmetry axes. We demonstrate and analyse the performance of the extended algorithm in detecting localised symmetries and of the machine learning model in classifying rotational symmetries.
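The link between reflection-axis pairs and rotational symmetry rests on a classical geometric fact: composing two reflections whose axes meet at angle θ gives a rotation by 2θ. A minimal sketch of this relation, with a hypothetical feature vector of the kind such a classifier might consume (the paper's actual features are not reproduced here):

```python
import numpy as np

def rotation_from_reflection_pair(theta1, theta2):
    """Composing two reflections whose axes meet at angle
    (theta2 - theta1) yields a rotation by twice that angle --
    the geometric fact that lets reflection-axis pairs hint at
    rotational symmetries."""
    return (2.0 * (theta2 - theta1)) % (2.0 * np.pi)

def axis_pair_features(theta1, theta2, strength1, strength2):
    """Hypothetical feature vector for a rotational-symmetry classifier:
    the implied rotation angle plus the detection strengths of both axes."""
    return np.array([rotation_from_reflection_pair(theta1, theta2),
                     strength1, strength2])
```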
This contribution shows how, in the preliminary design stage, naval architects can make more informed decisions by using machine learning. In this ship design phase, little information is available and decisions need to be made in a limited amount of time. However, it is in the preliminary design phase that the most influential decisions are made regarding the global dimensions, the machinery, and therefore the performance and costs. In this paper it is shown that a machine learning algorithm trained with data from reference vessels is more accurate at estimating key performance indicators than existing empirical design formulas. Finally, the combination of the trained models with optimization algorithms proves to be a powerful tool for finding Pareto-optimal designs from which the naval architect can learn.
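The approach of fitting a model on reference-vessel data to estimate a key performance indicator can be illustrated with a minimal sketch. The feature names, data values, and the log-linear model form are all illustrative assumptions, standing in for the paper's actual vessel data and (more capable) machine learning models:

```python
import numpy as np

# Hypothetical reference-vessel data: [length (m), beam (m), speed (kn)]
# and the KPI to estimate (installed power, kW). Illustrative values only.
X = np.array([[90, 14, 12], [110, 16, 13], [130, 18, 14], [150, 20, 15]], float)
y = np.array([2100.0, 3000.0, 4100.0, 5400.0])

# Fit a simple log-linear model, P = c * L^a * B^b * V^d, by least
# squares on the logs -- a stand-in for a trained regressor.
A = np.hstack([np.log(X), np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)

def predict_power(length, beam, speed):
    """Estimate installed power (kW) from main dimensions and speed."""
    return float(np.exp(np.log([length, beam, speed]) @ coef[:3] + coef[3]))
```

A model of this shape can then serve as a cheap objective inside an optimizer, which is how the trained models combine with optimization algorithms to search for Pareto-optimal designs.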
Rios, T.; Wollstadt, P.; Stein, B. van; Bäck, T.H.W.; Xu, Z.; Sendhoff, B.; Menzel, S. 2019
Geometric Deep Learning (GDL) methods have recently gained interest as powerful, high-dimensional models for approaching various geometry processing tasks. However, training deep neural network models on geometric input requires considerable computational effort, even more so for the typical problem sizes found in application domains such as engineering, where geometric data are often orders of magnitude larger than the inputs currently considered in the GDL literature. Hence, an assessment of the scalability of the training task is necessary, in which model and data-set parameters can be mapped to the computational demand during training. The present paper therefore studies the effects of data-set size and the number of free model parameters on the computational effort of training a Point Cloud Autoencoder (PC-AE). We further review pre-processing techniques to obtain efficient representations of high-dimensional inputs to the PC-AE and investigate the effects of these techniques on the information abstracted by the trained model. We perform these experiments on synthetic geometric data inspired by engineering applications, using recent graphics processing units (GPUs) with high memory specifications. The study thus provides a comprehensive evaluation of how to scale geometric deep learning architectures to high-dimensional inputs, enabling the application of state-of-the-art deep learning methods in real-world tasks.
Rios, T.; Sendhoff, B.; Menzel, S.; Bäck, T.H.W.; Stein, B. van 2019
A crucial step in optimizing a system is to formulate the objective function, and part of this concerns the selection of the design parameters. One of the major goals is to achieve a fair trade-off between exploring feasible solutions in the design space and maintaining admissible computational effort. To achieve such balance in optimization problems with Computer Aided Engineering (CAE) models, the conventional constructive geometric representations are substituted by deformation methods, e.g. free-form deformation, where the positions of a few control points can handle large-scale shape modifications. In light of the recent developments in the field of geometric deep learning, autoencoders have risen as a promising alternative for efficiently condensing high-dimensional models into compact representations. In this paper, we present a novel perspective on geometric deep learning models by exploring the applicability of the latent space of a point cloud autoencoder in shape optimization problems with evolutionary algorithms. Focusing on engineering applications, a target shape matching optimization is used as a surrogate for the computationally expensive CAE simulations required in engineering optimization. Through quality assessment of the solutions achieved in the optimization and further aspects, such as shape feasibility, point cloud autoencoders proved to be consistent and suitable geometric representations for such problems, adding a new perspective on approaches for handling high-dimensional models in optimization tasks.
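The setup described above, an evolutionary algorithm searching the latent space of a point cloud autoencoder on a target shape matching objective, can be sketched minimally. Here a fixed random linear map stands in for the trained PC-AE decoder, and a simple (1+1)-ES stands in for the paper's evolutionary algorithm; the Chamfer distance is a common target-shape-matching objective, assumed here rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained point cloud autoencoder's decoder: maps a
# low-dimensional latent vector to a point cloud via a fixed random
# linear map. In practice this would be the learned PC-AE decoder.
LATENT_DIM, N_POINTS = 4, 32
W = rng.normal(size=(N_POINTS * 3, LATENT_DIM))

def decode(z):
    return (W @ z).reshape(N_POINTS, 3)

def chamfer(a, b):
    """Symmetric Chamfer distance between two point clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=0).mean() + d.min(axis=1).mean()

# (1+1)-ES in latent space: mutate the latent vector, keep improvements.
target = decode(rng.normal(size=LATENT_DIM))
z, step = np.zeros(LATENT_DIM), 0.5
for _ in range(500):
    cand = z + step * rng.normal(size=LATENT_DIM)
    if chamfer(decode(cand), target) < chamfer(decode(z), target):
        z = cand
```

Because the search happens in the compact latent space rather than over raw point coordinates, the evolutionary algorithm only has to optimize a handful of variables while the decoder keeps the resulting clouds shape-like.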