A building spatial design (BSD) determines the external and internal walls and ceilings of a building. The design space has a hierarchical structure, in which decisions on the existence or non-existence of spatial components determine the existence of variables related to these spaces, such as sizing and angles. In the optimization of BSDs it is envisioned to optimize various performance indicators from multiple disciplines in concert, such as structural, functional, thermal, and daylight performance. Existing representations of design spaces suffer from severe limitations, such as only representing orthogonal designs or representing the structures in a parametric superstructure, allowing only for limited design variations. This paper proposes prism nets, a new way of representing the search space of BSDs based on triangulations defining space-filling collections of triangular prisms that can be combined into spaces via coloring parameters. Prism nets can accommodate non-orthogonal designs and are flexible in terms of topological variations. We follow the guidelines for representation and operator design proposed in the framework of metric-based evolutionary algorithms. The main contribution of the paper is a detailed discussion of the search space representation and corresponding mutation operators. Moreover, a proof-of-concept example demonstrates the integration into multi-objective evolutionary algorithms and provides first results on a simple but reproducible benchmark problem.
Optimization problems with multiple objectives and many input variables inherit challenges from both large-scale optimization and multi-objective optimization. To solve such problems, decomposition and transformation methods are frequently used. In this study, an improved control variable analysis is proposed based on dominance and diversity in Pareto optimization. Further, the decomposition method is used in a cooperative coevolution framework with orthogonal sampling mutation. The algorithm's performance is compared against the weighted optimization framework. The results show that the proposed decomposition method has much better accuracy than the traditional method. The results also show that the cooperative coevolution framework with a good grouping is very competitive. Additionally, the number of search directions in orthogonal sampling can be easily configured: a small number of search directions greatly reduces the search space while also restricting the area that can be explored, and vice versa.
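The Pareto dominance relation on which analyses like the one above rest is easy to state concretely. A minimal sketch follows; the function name `dominates` is an illustrative choice (not from the paper), and minimization of all objectives is assumed:

```python
def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b.

    Assumes all objectives are minimized: a must be no worse than b
    in every objective and strictly better in at least one.
    """
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```

A solution is Pareto-optimal when no other feasible solution dominates it; diversity criteria then spread the search over the set of such non-dominated solutions.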
Vehicle fleets support a diverse array of functions and are growing rapidly in today's world. For a vehicle fleet, maintenance plays a critical role. In this article, an evolutionary algorithm is proposed to optimize the vehicle fleet maintenance schedule based on the predicted remaining useful lifetime (RUL) of vehicle components, in order to reduce the cost of repairs, decrease maintenance downtime, and make vehicles safer for drivers. The multi-objective evolutionary algorithm (MOEA) is then enhanced to focus precisely on the preferred solutions. Moreover, stability is included as an additional objective in the dynamic MOEA for handling the problem under changes in the environment. To implement the complete maintenance process, a simulator is developed that can define vehicles, predict the RUL of components, and optimize the maintenance schedule in a rolling-horizon fashion. The results of the proposed MOEAs under different scenarios are reported and compared.
Wang, Y.; Stein, N. van; Bäck, T.H.W.; Emmerich, M.T.M. 2020
A customized multi-objective evolutionary algorithm (MOEA) is proposed for the flexible job shop scheduling problem (FJSP) with three objectives: makespan, total workload, and critical workload. In general, the approach can be integrated with any standard MOEA. In this paper, it has been combined with NSGA-III to solve state-of-the-art benchmark FJSPs, which an off-the-shelf implementation of NSGA-III is not capable of solving. Most importantly, we use several algorithm adaptations to enhance performance. Specifically, the algorithm uses smart initialization approaches to enrich the first-generation population, and a new crossover operator to create better diversity on the Pareto front approximation. The MIP-EGO configurator is adopted to automatically tune the mutation probabilities, which are important hyper-parameters of the algorithm. Furthermore, different local search strategies are employed to explore the neighborhood for better solutions. The experimental results show that the combination of these techniques performs well compared to classical evolutionary scheduling algorithms while requiring a smaller computing budget. Even some previously unknown non-dominated solutions for the BRdata benchmark problems could be discovered.
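Of the three objectives, makespan is simply the completion time of the last operation over all machines. A minimal sketch; the data layout (a list of per-machine schedules of `(start, duration)` pairs) is an assumption for illustration, not the encoding used in the paper:

```python
def makespan(machine_schedules):
    """Completion time of the last operation across all machines.

    machine_schedules: list of per-machine lists of (start, duration)
    tuples, each operation already placed on its machine.
    """
    return max(start + duration
               for schedule in machine_schedules
               for start, duration in schedule)
```

Total workload would sum the durations over all machines, while critical workload is the workload of the single most loaded machine; all three can be read off the same schedule structure.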
This paper provides a short summary of a novel algorithm tailored towards multi-objective flexible job shop scheduling problems (FJSP). The result shows that for challenging real-world problems in combinatorial optimization, off-the-shelf implementations of multi-objective evolutionary algorithms (MOEAs) might not work, but by using various adaptations, these methods can be tailored to provide excellent results. This is demonstrated for a state-of-the-art MOEA, namely NSGA-III, with the following adaptations: (1) initialization approaches to enrich the first-generation population, (2) various crossover operators to create better diversity of offspring, (3) parameter tuning, using the MIP-EGO configurator, to determine the optimal mutation probabilities, and (4) local search strategies to explore the neighborhood for better solutions. Using these measures, NSGA-III has been enabled to solve benchmark multi-objective FJSPs, and experimental results show excellent performance.
Kriging, or Gaussian process regression, is applied in many fields as a non-linear regression model, as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, which are cubic and quadratic in the number of data points respectively, become a major bottleneck as more and more data become available. In this paper, we propose a general methodology for complexity reduction, called cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a well-defined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.
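The partition-then-model idea behind cluster Kriging can be sketched in a few lines. This is a deliberately simplified 1-D stand-in: the contiguous sort-based partition and the Gaussian-kernel weighted average below are placeholders for the paper's actual clustering schemes and per-cluster Kriging predictors, chosen only so the example stays self-contained:

```python
import math

def partition(xs, ys, k):
    """Split the data into k contiguous clusters after sorting on x.

    Stand-in for the clustering step of cluster Kriging; each cluster
    would normally receive its own full Kriging model, so the cubic
    training cost applies to n/k points instead of n.
    """
    pts = sorted(zip(xs, ys))
    size = math.ceil(len(pts) / k)
    return [pts[i:i + size] for i in range(0, len(pts), size)]

def predict(clusters, x, h=1.0):
    """Predict at x using only the nearest cluster.

    The Gaussian-kernel weighted average is a placeholder for a
    per-cluster Kriging predictor.
    """
    best = min(clusters, key=lambda c: min(abs(x - px) for px, _ in c))
    w = [math.exp(-((x - px) / h) ** 2) for px, _ in best]
    return sum(wi * py for wi, (_, py) in zip(w, best)) / sum(w)
```

Routing each query to one (or a weighted mix of) cluster models is what keeps prediction cheap while each local model retains Kriging's behavior on its own region.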
It is a common technique in global optimization with expensive black-box functions to learn a regression model (or surrogate model) of the response function from past evaluations and to use this model to decide on the location of future evaluations. In surrogate-model-assisted optimization it can be difficult to select the right modeling technique. Without preliminary knowledge about the function, it might be beneficial if the algorithm trains as many different surrogate models as possible and selects the model with the smallest training error. This is known as model selection. Recently a generalization of this approach was proposed: instead of selecting a single model, we propose to use optimal convex combinations of model predictions. This approach, called model mixtures, is adopted and evaluated in the context of sequential parameter optimization. Besides discussing the general strategy, the optimal frequency of learning the convex weights is investigated. The feasibility of this approach is examined and its benefits are compared to simpler methods.
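For two models, finding the optimal convex combination reduces to a one-dimensional search over the mixing weight. The grid search below is an illustrative stand-in (the paper's weight-learning procedure is not specified here), minimizing squared error of the mixture against the training targets:

```python
def best_convex_weight(preds_a, preds_b, targets, steps=100):
    """Grid-search w in [0, 1] minimizing the squared error of the
    convex mixture w * preds_a + (1 - w) * preds_b on the targets.

    Stand-in for the weight-learning step of the model-mixture
    approach; with more than two models this becomes a constrained
    optimization over the probability simplex.
    """
    def sse(w):
        return sum((w * a + (1 - w) * b - t) ** 2
                   for a, b, t in zip(preds_a, preds_b, targets))
    return min((i / steps for i in range(steps + 1)), key=sse)
```

Because a convex combination includes the pure weights 0 and 1, the mixture can never fit the training data worse than plain model selection does.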
The Expected Hypervolume Improvement (EHVI) is a frequently used infill criterion in Multi-Objective Bayesian Global Optimization (MOBGO), due to its ability to guide exploration. Recently, the computational complexity of the EHVI calculation was reduced to O(n log n) for both the 2-D and 3-D cases. However, the optimizer in MOBGO still requires a significant amount of time, because the EHVI is calculated in each iteration and usually tens of thousands of EHVI calculations are required. This paper derives a formula for the Expected Hypervolume Improvement Gradient (EHVIG) and proposes an efficient algorithm to calculate it. The new criterion (EHVIG) is utilized by two different strategies to improve the efficiency of the optimizer discussed in this paper. Firstly, it enables gradient ascent methods to be used in MOBGO. Moreover, since the EHVIG of an optimal solution should be a zero vector, it can be regarded as a stopping criterion in global optimization, e.g., in Evolution Strategies. Empirical experiments are performed on seven benchmark problems. The experimental results show that the second proposed strategy, using EHVIG as a stopping criterion for local search, can outperform normal MOBGO on problems where the optimal solutions are located in the interior of the search space. For the ZDT series of test problems, EHVIG can still perform better when gradient projection is applied.
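The deterministic quantity underlying EHVI is the hypervolume improvement: how much a candidate point enlarges the region dominated by the current front, measured against a reference point; EHVI is its expectation under the Gaussian process model. A minimal 2-D sketch for minimization (function names are illustrative, not from the paper):

```python
def hv_2d(points, ref):
    """Hypervolume dominated by a 2-D minimization front w.r.t. ref.

    Sweeps the points in ascending f1 order and sums the rectangular
    strip each point adds below the best f2 value seen so far.
    """
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(set(points)):
        if y < prev_y:  # point contributes a new strip
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def hvi_2d(front, p, ref):
    """Hypervolume improvement of candidate p over the current front."""
    return hv_2d(front + [p], ref) - hv_2d(front, ref)
```

A dominated candidate yields an improvement of exactly zero, which is why gradient information (EHVIG in the paper) is valuable: it distinguishes directions along which the improvement starts to grow.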
The receiver operating characteristic (ROC) and detection error tradeoff (DET) curves are frequently used in the machine learning community to analyze the performance of binary classifiers. Recently, the convex-hull-based multiobjective genetic programming algorithm was proposed and successfully applied to maximize the convex hull area for binary classification problems by minimizing the false positive rate and maximizing the true positive rate at the same time using indicator-based evolutionary algorithms. The area under the ROC curve was used for the performance assessment and to guide the search. Here we extend this research and propose two major advancements: firstly, we formulate the algorithm in detection error tradeoff space, minimizing false positives and false negatives, with the advantage that the misclassification cost tradeoff can be assessed directly. Secondly, we add complexity as an objective function, which gives rise to a 3-D objective space (as opposed to the previous 2-D ROC space). A domain-specific performance indicator for 3-D Pareto front approximations, the volume above the DET surface, is introduced and used to guide the indicator-based evolutionary algorithm to find optimal approximation sets. We assess the performance of the new algorithm on designed theoretical problems with different geometries of Pareto fronts and DET surfaces, and on two application-oriented benchmarks: (1) designing spam filters with low numbers of false rejects, false accepts, and low computational cost using rule ensembles, and (2) finding sparse neural networks for binary classification of test data from the UCI machine learning benchmark. The results show a high performance of the new algorithm as compared to conventional methods for multicriteria optimization.
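The two error rates minimized in DET space follow directly from the confusion-matrix counts. A minimal sketch (the function name is an illustrative choice):

```python
def det_point(tp, fp, tn, fn):
    """Map confusion-matrix counts to a point in DET space.

    Returns (false positive rate, false negative rate), the two
    objectives minimized jointly; adding a model-complexity measure
    as a third coordinate gives the paper's 3-D objective space.
    """
    fpr = fp / (fp + tn)  # fraction of negatives wrongly accepted
    fnr = fn / (fn + tp)  # fraction of positives wrongly rejected
    return fpr, fnr
```

Unlike ROC space, where the true positive rate is maximized, both DET coordinates are minimized, so misclassification cost tradeoffs can be read off the front directly.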