The problem of approximating the Pareto front of a multiobjective optimization problem can be reformulated as the problem of finding a set that maximizes the hypervolume indicator. This paper establishes the analytical expression of the Hessian matrix of the mapping from a (fixed-size) collection of n points in the d-dimensional decision space (or m-dimensional objective space) to the scalar hypervolume indicator value. To define the Hessian matrix, the input set is vectorized, and the matrix is derived by analytical differentiation of the mapping from a vectorized set to the hypervolume indicator. The Hessian matrix plays a crucial role in second-order methods, such as the Newton-Raphson optimization method, and it can be used for the verification of locally optimal sets. So far, the full analytical expression was only established and analyzed for the relatively simple bi-objective case. This paper derives the full expression for arbitrary dimensions (m ≥ 2 objective functions). For the practically important three-dimensional case, we also provide an asymptotically efficient algorithm with time complexity in O(n log n) for the exact computation of the Hessian matrix's non-zero entries, and we establish a sharp bound of 12m − 6 for the number of non-zero entries. For the general m-dimensional case, a compact recursive analytical expression is established and its algorithmic implementation is discussed; this recursive expression also implies some sparsity results for the general case. To validate and illustrate the analytically derived algorithms and results, we provide a few numerical examples using Python and Mathematica implementations.
Open-source implementations of the algorithms and testing data are made available as a supplement to this paper.
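For intuition, the bi-objective (m = 2) hypervolume indicator that the paper differentiates can be computed with a simple sweep over the points sorted by the first objective. The sketch below is not the paper's code; it assumes minimization and a reference point `ref` that bounds the region of interest:

```python
def hypervolume_2d(points, ref):
    """Dominated area of a 2-D point set under minimization,
    measured against the reference point `ref`."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, y_prev = 0.0, ref[1]
    for x, y in pts:
        if y < y_prev:                      # points with y >= y_prev are dominated
            hv += (ref[0] - x) * (y_prev - y)
            y_prev = y
    return hv
```

For example, the set {(1, 3), (2, 2)} with reference point (4, 4) dominates an area of 5.0; adding the dominated point (3, 3) leaves the value unchanged.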
A building spatial design (BSD) determines the external and internal walls and ceilings of a building. The design space has a hierarchical structure, in which decisions on the existence or non-existence of spatial components determine the existence of variables related to these spaces, such as sizing and angles. In the optimization of BSDs it is envisioned to optimize various performance indicators from multiple disciplines in concert, such as structural, functional, thermal, and daylight performance. Existing representations of design spaces suffer from severe limitations, such as only representing orthogonal designs or representing the structures in a parametric superstructure, allowing only for limited design variations. This paper proposes prism nets, a new way of representing the search space of BSDs based on triangulations that define space-filling collections of triangular prisms, which can be combined into spaces via coloring parameters. Prism nets can accommodate non-orthogonal designs and are flexible in terms of topological variations. We follow the guidelines for representation and operator design proposed in the framework of metric-based evolutionary algorithms. The main contribution of the paper is a detailed discussion of the search space representation and corresponding mutation operators. Moreover, a proof-of-concept example demonstrates the integration into multi-objective evolutionary algorithms and provides first results on a simple, but reproducible, benchmark problem.
Echtenbruck, P.; Emmerich, M.T.M.; Echtenbruck, M.; Naujoks, B. 2021
In drug discovery, classification is a well-established in silico method based on machine learning algorithms. However, since the activity value on a target protein is described as a continuous value, a regression approach is worth considering as well. The results of the regression can then be turned into classification results with a given threshold value. To further improve the results, a new method called optimally weighted ensembles, which combines more than one model into a better-performing ensemble, is applied here. This method is used in the context of drug discovery for the first time. Naturally, it is crucial to choose suitable models for the specific problem if any prior knowledge is available. Statistical significance of the approach is verified using a second dataset with a different target. In this work, we show how the obtained classification results compare to the highly optimized single-model results presented in previous work. Furthermore, the results are compared to ensembles in which none of the contributing models were optimized beforehand. All case studies are performed using the publicly available database ChEMBL.
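The thresholding step described in the abstract is straightforward; a minimal sketch, in which the function name and the cut-off value are illustrative rather than taken from the paper:

```python
def predictions_to_classes(pred_activities, threshold):
    """Turn continuous predicted activity values (e.g. on a pChEMBL-like
    scale) into binary active (1) / inactive (0) labels via a cut-off."""
    return [int(p >= threshold) for p in pred_activities]
```

With a threshold of 6.5, predictions [5.0, 7.2] map to labels [0, 1].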
Wang, Y.; Stein, N. van; Bäck, T.H.W.; Emmerich, M.T.M. 2020
A customized multi-objective evolutionary algorithm (MOEA) is proposed for the flexible job shop scheduling problem (FJSP) with three objectives: makespan, total workload, and critical workload. In general, the algorithm can be integrated with any standard MOEA. In this paper, it has been combined with NSGA-III to solve state-of-the-art benchmark FJSPs, which an off-the-shelf implementation of NSGA-III is not capable of solving. Most importantly, we use various algorithmic adaptations to enhance the performance of our algorithm. Specifically, it uses smart initialization approaches to enrich the first-generation population and proposes a new crossover operator to create better diversity on the Pareto front approximation. The MIP-EGO configurator is adopted to automatically tune the mutation probabilities, which are important hyper-parameters of the algorithm. Furthermore, different local search strategies are employed to explore the neighborhood for better solutions. The experimental results from the combination of these techniques show good performance compared to classical evolutionary scheduling algorithms while requiring a smaller computing budget. Even some previously unknown non-dominated solutions for the BRdata benchmark problems could be discovered.
This paper provides a short summary of a novel algorithm tailored towards multi-objective flexible job shop scheduling problems (FJSP). The results show that for challenging real-world problems in combinatorial optimization, off-the-shelf implementations of multi-objective evolutionary algorithms (MOEAs) might not work, but by using various adaptations, these methods can be tailored to provide excellent results. This is demonstrated for a state-of-the-art MOEA, namely NSGA-III, and the following adaptations: (1) initialization approaches to enrich the first-generation population, (2) various crossover operators to create a better diversity of offspring, (3) parameter tuning with the MIP-EGO configurator to determine the optimal mutation probabilities, and (4) local search strategies to explore the neighborhood for better solutions. Using these measures, NSGA-III has been enabled to solve benchmark multi-objective FJSPs, and experimental results show excellent performance.
Wang, H.; Emmerich, M.T.M.; Preuss, M.; Plaat, A. 2020
Recently, AlphaZero has achieved outstanding performance in playing Go, Chess, and Shogi. Players in AlphaZero consist of a combination of Monte Carlo Tree Search and a deep neural network that is trained using self-play. The unified deep neural network has a policy head and a value head, and during training, the optimizer minimizes the sum of the policy loss and the value loss. However, it is not clear if and under which circumstances other formulations of the loss function are better. Therefore, we perform experiments with different combinations of these two minimization targets. In contrast to many recent papers that adopt single-run experiments and use whole-history Elo ratings from self-play, we propose to use repeated runs. The results show that this method can describe the training performance quite well within each training run, but there is a high self-play bias, such that it is incomparable among different training runs. Therefore, inspired by the AlphaGo series of papers, a performance assessment that avoids self-play bias, the final best player Elo rating, is adopted to evaluate playing strength in a direct competition between the evolved players. For relatively small games, based on this new evaluation method, surprisingly, minimizing only the value loss achieves the strongest playing strength in the final best players' round-robin tournament. These results indicate that more research is needed into the relative importance of the value function and the policy function in small games.
After the recent groundbreaking results of AlphaGo and AlphaZero, we have seen strong interest in deep reinforcement learning and artificial general intelligence (AGI) in game playing. However, deep learning is resource-intensive and the theory is not yet well developed. For small games, simple classical table-based Q-learning might still be the algorithm of choice. General Game Playing (GGP) provides a good testbed for reinforcement learning research towards AGI. Q-learning is one of the canonical reinforcement learning methods, and has been used by (Banerjee & Stone, IJCAI 2007) in GGP. In this paper we implement Q-learning in GGP for three small-board games (Tic-Tac-Toe, Connect Four, Hex), to allow comparison to Banerjee et al. We find that Q-learning converges to a high win rate in GGP. For the ϵ-greedy strategy, we propose a first enhancement, the dynamic ϵ algorithm. In addition, inspired by (Gelly & Silver, ICML 2007), we combine online search (Monte Carlo Search) with offline learning to enhance it, and propose QM-learning for GGP. Both enhancements improve the performance of classical Q-learning.
In this work, GGP allows us to show that, if augmented by appropriate enhancements, classical table-based Q-learning can perform well in small games.
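The building blocks of table-based Q-learning with an ϵ-greedy strategy can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; in particular, the decay schedule in `dynamic_epsilon` stands in for the paper's dynamic ϵ algorithm, whose exact form is not given here:

```python
import random

def epsilon_greedy(Q, state, actions, epsilon):
    """With probability epsilon explore a random action, otherwise exploit
    the action with the highest tabled Q-value (unseen pairs default to 0)."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in actions), default=0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def dynamic_epsilon(episode, eps_start=1.0, eps_min=0.05, decay=0.99):
    """Illustrative decaying exploration rate: explore a lot early on,
    then settle to a small floor value."""
    return max(eps_min, eps_start * decay ** episode)
```

Starting from an empty table, a single update with reward 1.0, alpha = 0.5, and no future value moves Q(s, a) from 0.0 to 0.5.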
Ribeiro de Almeida, L.; Emmerich, M.T.M.; Da Silva Soares, A.; Woerle de Lima, T. 2019
Bayesian Global Optimization (BGO) is a very efficient technique for optimizing expensive evaluation problems. However, the application domain of a BGO algorithm is limited to continuous search spaces. To solve mixed integer problems with a BGO algorithm, this paper adapts the heterogeneous distance function to construct the Kriging models and applies these new Kriging models in Multi-objective Bayesian Global Optimization (MOBGO). The proposed mixed integer MOBGO algorithm and the traditional MOBGO algorithm are compared on three mixed integer multi-objective optimization problems (MOPs), with respect to the mean hypervolume (HV) value and the corresponding standard deviation.
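A heterogeneous distance of the kind mentioned above mixes a range-normalized difference on continuous dimensions with a 0/1 mismatch on discrete ones. The sketch below is an illustrative variant (not necessarily the exact function adapted in the paper):

```python
def heterogeneous_distance(u, v, is_discrete, ranges):
    """Distance between mixed vectors u and v: per-dimension contributions
    are |u_i - v_i| / range_i on continuous dims and a 0/1 mismatch
    indicator on discrete dims, combined Euclidean-style."""
    d2 = 0.0
    for ui, vi, disc, r in zip(u, v, is_discrete, ranges):
        if disc:
            di = 0.0 if ui == vi else 1.0   # overlap metric on discrete dims
        else:
            di = abs(ui - vi) / r           # range-normalized on continuous dims
        d2 += di * di
    return d2 ** 0.5
```

Plugging such a distance into a Kriging correlation function in place of the usual Euclidean distance is what lets the surrogate handle integer and categorical inputs.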
Blom, K. van der; Yang, K.; Bäck, T.; Emmerich, M.T.M. 2019
Many problems are of a mixed integer nature, rather than being restricted to a single variable type. Although mixed integer algorithms exist for the single-objective case, work on the multi-objective case remains limited. Evolution strategies are stochastic optimisation algorithms that feature step size adaptation mechanisms and are typically used in continuous domains. More recently they were generalised to mixed integer problems. In this work, first steps are taken towards extending the single-objective mixed integer evolution strategy to the multi-objective case. First results are promising, but step size adaptation for the multi-objective case can likely be improved.
It is a common technique in global optimization with expensive black-box functions to learn a regression model (or surrogate model) of the response function from past evaluations and to use this model to decide on the location of future evaluations. In surrogate-model-assisted optimization it can be difficult to select the right modeling technique. Without preliminary knowledge about the function, it might be beneficial if the algorithm trains as many different surrogate models as possible and selects the model with the smallest training error. This is known as model selection. Recently a generalization of this approach was proposed: instead of selecting a single model, we propose to use optimal convex combinations of model predictions. This approach, called model mixtures, is adopted and evaluated in the context of sequential parameter optimization. Besides discussing the general strategy, the optimal frequency of learning the convex weights is investigated. The feasibility of this approach is examined and its benefits are compared to those of simpler methods.
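For two models, the optimal convex weight has a closed least-squares form: minimizing ||w·a + (1 − w)·b − y||² over w gives w = Σ(aᵢ − bᵢ)(yᵢ − bᵢ) / Σ(aᵢ − bᵢ)², clipped to [0, 1]. A minimal sketch (illustrative names, not the paper's code):

```python
def optimal_convex_weight(pred_a, pred_b, y):
    """Best convex weight w for mixing two models' predictions as
    w * pred_a + (1 - w) * pred_b, by least squares on targets y."""
    num = sum((ai - bi) * (yi - bi) for ai, bi, yi in zip(pred_a, pred_b, y))
    den = sum((ai - bi) ** 2 for ai, bi in zip(pred_a, pred_b))
    w = num / den if den else 0.5           # identical models: weight is arbitrary
    return min(1.0, max(0.0, w))            # clip to keep the combination convex
```

If model A predicts the targets exactly and model B does not, the weight comes out as 1.0, i.e. the mixture degenerates to model selection; with more than two models the same idea becomes a small constrained quadratic program.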
The aim of evolutionary level set approximation is to find a finite representation of a level set of a given black-box function. The problem of level set approximation plays a vital role in solving problems arising, for instance, in fault detection in water distribution systems, engineering design, parameter identification in gene regulatory networks, and drug discovery. The goal is to create algorithms that quickly converge to feasible solutions and then achieve a good coverage of the level set. The population-based search scheme of evolutionary algorithms makes this type of algorithm well suited to such problems. In this paper, the focus is on continuous black-box functions; we propose a challenging benchmark for this problem domain and propose dual mutation strategies that balance between global exploration and local refinement. Moreover, the article investigates the role of different indicators for measuring the coverage of the level set approximation. The results are promising and show that even for difficult problems of moderate dimension the proposed evolutionary level set approximation algorithm (ELSA) can serve as a versatile and robust meta-heuristic.
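The two ingredients named above, dual mutation and a preference for spreading points over the level set, can be illustrated with a tiny one-dimensional sketch. All names, parameter values, and the replacement rule here are illustrative assumptions, not the actual ELSA algorithm:

```python
import random

def level_set_sketch(f, level, bounds, pop_size=20, iters=200, seed=0):
    """Toy 1-D evolutionary level set approximation: keep a population,
    mutate with a global restart (exploration) or a small Gaussian step
    (refinement), and accept children inside the sublevel set
    {x : f(x) <= level} while trying to maintain spread."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(iters):
        parent = rng.choice(pop)
        if rng.random() < 0.1:                       # dual mutation: global move
            child = rng.uniform(lo, hi)
        else:                                        # dual mutation: local step
            child = min(hi, max(lo, parent + rng.gauss(0.0, 0.05 * (hi - lo))))
        if f(child) <= level:
            infeasible = [i for i, x in enumerate(pop) if f(x) > level]
            if infeasible:                           # first fill the level set
                pop[infeasible[0]] = child
            else:                                    # then replace the nearest
                j = min(range(pop_size), key=lambda i: abs(pop[i] - child))
                pop[j] = child                       # point to preserve spread
    return pop
```

For f(x) = x² with level 1.0 on [-2, 2], the population drifts into the feasible interval [-1, 1] and spreads across it; a coverage indicator would then score how evenly the points fill that interval.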