This paper investigates how often popular configurations of Differential Evolution generate solutions outside the feasible domain. Following previous publications in the field, we argue that what the algorithm does with such solutions, and how often this occurs, is important for the overall performance of the algorithm and the interpretation of results. Significantly more solutions than is usually assumed by practitioners have to undergo some form of 'correction' to conform to the definition of the problem's search domain. A wide range of popular Differential Evolution configurations is considered in this study. Conclusions are drawn regarding the effect that Differential Evolution components and parameter settings have on the distribution of percentages of infeasible solutions generated in a series of independent runs. The results suggest strong dependencies between the percentages of generated infeasible solutions and every aspect mentioned above. Further investigation of the distribution of percentages of generated infeasible solutions is required.
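The measurement described in this abstract can be sketched as follows: a minimal DE/rand/1/bin run on the sphere function that counts how often trial vectors leave the box-constrained domain before any correction is applied. The test function, the clamping repair, and all parameter values here are illustrative choices, not the specific configurations studied in the paper.

```python
import random

def de_infeasible_fraction(dim=10, pop_size=20, generations=100,
                           F=0.5, CR=0.9, lo=-5.0, hi=5.0, seed=0):
    """Run a basic DE/rand/1/bin on the sphere function and count how
    often trial solutions fall outside the [lo, hi]^dim search domain
    before any correction is applied."""
    rng = random.Random(seed)
    sphere = lambda x: sum(v * v for v in x)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [sphere(x) for x in pop]
    infeasible = total = 0
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct donors different from the target index
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # force at least one mutated component
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            total += 1
            if any(v < lo or v > hi for v in trial):
                infeasible += 1
                # 'correction': clamp back into the domain (one common choice)
                trial = [min(max(v, lo), hi) for v in trial]
            f_trial = sphere(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    return infeasible / total
```

Running this with different F, CR, and population sizes makes it easy to see how strongly the infeasibility rate depends on the configuration.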
Early-lifecycle demand forecasting is critical for consumer technology products with a fast innovation cycle, as firms competing on these products focus on responding to market changes in a timely manner through new product development and efficient product diffusion, rather than on sustaining product sales. The challenge in obtaining an accurate long-range forecast is that sales volumes in the early lifecycle stages are small, which limits forecast accuracy. We propose a two-step lifecycle forecasting approach for consumer technology products with limited sales data. First, we segment products by market and via clustering. Second, we apply the Bass model to the aggregated products in a group, using the average periodic sales of all products in the group, and then use the resulting forecast for related new products. We validate our approach using a dataset collected from Philips Netherlands, which contains consumer healthcare products sold in the US and China over an 8-year timespan. The results suggest that, for forecasting the lifecycle of a new product, models based on aggregated products generally perform better than models based on an individual product. This highlights the value of data aggregation in product lifecycle forecasting. Clustering is also useful for improving forecast accuracy: when aggregation is done using sufficient product sales data, the aggregated model based on the products with which the new product shares the most sales-pattern similarities can provide a more accurate forecast than other aggregated models. Based on our results, we provide a practical guideline for firms to obtain an accurate early product lifecycle forecast.
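The Bass model step described above can be sketched in a few lines. The sketch below uses the standard Bass diffusion equations and a deliberately naive grid-search least-squares fit as a stand-in for a proper nonlinear fitting routine; all grid values and helper names are illustrative, not taken from the paper.

```python
import math

def bass_adoption(t, p, q):
    """Cumulative Bass adoption fraction F(t) with innovation coefficient
    p and imitation coefficient q."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

def bass_sales(m, p, q, periods):
    """Per-period sales m * (F(t) - F(t-1)) for t = 1..periods, where m
    is the market potential."""
    return [m * (bass_adoption(t, p, q) - bass_adoption(t - 1, p, q))
            for t in range(1, periods + 1)]

def fit_bass(sales):
    """Naive grid-search least-squares fit of (m, p, q) to an observed
    per-period sales curve; a stand-in for a proper nonlinear fit."""
    m0 = sum(sales)  # cumulative sales as a lower bound on market potential
    best, best_err = None, float("inf")
    for m in [m0 * s for s in (1.0, 1.25, 1.5, 2.0)]:
        for p in [0.001, 0.003, 0.01, 0.03, 0.1]:
            for q in [0.1, 0.2, 0.3, 0.4, 0.5]:
                pred = bass_sales(m, p, q, len(sales))
                err = sum((a - b) ** 2 for a, b in zip(sales, pred))
                if err < best_err:
                    best, best_err = (m, p, q), err
    return best
```

In the aggregated setting of the paper, `sales` would be the average periodic sales over all products in a cluster rather than a single product's history.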
Introducing new algorithmic ideas is a key part of the continuous improvement of existing optimization algorithms. However, when introducing a new component into an existing algorithm, assessing its potential benefits is a challenging task. Often, the component is added to a default implementation of the underlying algorithm and compared against a limited set of other variants. This assessment ignores any potential interplay with other algorithmic ideas that share the same base algorithm, which is critical in understanding the exact contributions being made. We explore a more extensive procedure, which uses hyperparameter tuning as a means of assessing the benefits of new algorithmic components. This allows for a more robust analysis by not only focusing on the impact on performance, but also by investigating how this performance is achieved. We implement our suggestion in the context of the Modular CMA-ES framework, which was redesigned and extended to include some new modules and several new options for existing modules, mostly focused on the step-size adaptation method. Our analysis highlights the differences between these new modules, and identifies the situations in which they have the largest contribution.
Structural Bias (SB) is an important type of algorithmic deficiency within iterative optimisation heuristics. However, methods for detecting structural bias have not yet fully matured, and recent studies have uncovered many interesting questions. One of these is the question of how structural bias can be related to anisotropy. Intuitively, an algorithm that is not isotropic would be considered structurally biased. However, there have been cases where algorithms appear to show SB only in some dimensions. As such, we investigate whether these algorithms actually exhibit anisotropy, and how this impacts the detection of SB. We find that anisotropy is very rare, and even in cases where it is present, there are clear tests for SB which do not rely on any assumptions of isotropy, so we can safely expand the suite of SB tests to encompass these kinds of deficiencies not found by the original tests. We propose several additional testing procedures for SB detection and aim to motivate further research into the creation of a robust portfolio of tests. This is crucial since no single test will be able to work effectively with all types of SB we identify.
A new acquisition function is proposed for solving robust optimization problems via Bayesian Optimization. The proposed acquisition function reflects the need for the robust instead of the nominal optimum, and is based on the intuition of utilizing the higher moments of the improvement. The efficacy of Bayesian Optimization based on this acquisition function is demonstrated on four test problems, each affected by three different levels of noise. Our findings suggest the promising nature of the proposed acquisition function as it yields a better robust optimal value of the function in 6/12 test scenarios when compared with the baseline.
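The idea of using higher moments of the improvement can be sketched with Monte-Carlo estimation. The specific combination below, E[I] minus a multiple of the standard deviation of I, is one illustrative risk-averse choice; the exact moment combination used in the paper may differ, and the function name and parameters are assumptions.

```python
import math
import random

def moment_based_acquisition(mu, sigma, best, kappa=1.0, n=4000, seed=0):
    """Monte-Carlo sketch of an acquisition value for minimization that
    penalizes the spread of the improvement I = max(best - y, 0) under a
    Gaussian posterior y ~ N(mu, sigma^2).  Higher values indicate more
    attractive candidate points."""
    rng = random.Random(seed)
    imps = [max(best - rng.gauss(mu, sigma), 0.0) for _ in range(n)]
    mean = sum(imps) / n          # first moment: expected improvement
    var = sum((v - mean) ** 2 for v in imps) / n  # second central moment
    return mean - kappa * math.sqrt(var)
```

With `kappa = 0` this reduces to a sampled estimate of plain expected improvement; larger `kappa` trades expected gain for lower variability, which is the kind of robustness trade-off the abstract refers to.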
Kefalas, M.; Baratchi, M.; Apostolidis, A.; Herik, D. van den; Bäck, T.H.W. 2021
Vehicle fleets support a diverse array of functions and are growing rapidly worldwide. For a vehicle fleet, maintenance plays a critical role. In this article, an evolutionary algorithm is proposed to optimize the vehicle fleet maintenance schedule based on the predicted remaining useful lifetime (RUL) of vehicle components, in order to reduce repair costs, decrease maintenance downtime, and make vehicles safer for drivers. The multi-objective evolutionary algorithm (MOEA) is then enhanced to focus specifically on the preferred solutions. Moreover, stability is included as an additional objective in the dynamic MOEA to handle the problem under changes in the environment. To implement the complete maintenance process, a simulator is developed that can define vehicles, predict the RUL of components, and optimize the maintenance schedule in a rolling-horizon fashion. The results of the proposed MOEAs under different scenarios are reported and compared.
This paper proposes a novel Self-Adaptive algorithm for Multi-Objective Constrained Optimization by using Radial Basis Function Approximations, SAMO-COBRA. The algorithm automatically determines the best Radial Basis Function fit as a surrogate for the objectives as well as the constraints, in order to find new feasible Pareto-optimal solutions. The algorithm also uses hyper-parameter tuning on the fly to improve its local search strategy. In every iteration one solution is added and evaluated, resulting in a strategy that requires only a small number of function evaluations to find a set of feasible solutions on the Pareto frontier. The proposed algorithm is compared to a wide set of other state-of-the-art algorithms (NSGA-II, NSGA-III, CEGO, SMES-RBF) on 18 constrained multi-objective problems. In the experiments we show that our algorithm outperforms the other algorithms in terms of achieved hypervolume given a fixed, small evaluation budget. These results suggest that SAMO-COBRA is a good choice for constrained multi-objective optimization problems with expensive function evaluations.
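The hypervolume indicator used for the comparison above is, for two minimization objectives, simply the area dominated by the front relative to a reference point. A minimal sketch of the bi-objective case (the sweep-line form; higher-dimensional hypervolume needs more involved algorithms):

```python
def hypervolume_2d(front, ref):
    """Hypervolume (to be maximized) of a 2-D front of minimization
    objectives with respect to a reference point `ref`.  Points that do
    not dominate `ref` contribute nothing; dominated points are skipped
    automatically by the sweep."""
    # sort by the first objective; deduplicate; drop points beyond ref
    pts = sorted({(f1, f2) for f1, f2 in front
                  if f1 < ref[0] and f2 < ref[1]})
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # only non-dominated points extend the area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

Comparing two algorithms then amounts to evaluating this indicator on the feasible fronts each has found after the same evaluation budget.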
Manuel Proença, H.; Grünwald, P.D.; Bäck, T.H.W.; Leeuwen, M. van 2021
The task of subgroup discovery (SD) is to find interpretable descriptions of subsets of a dataset that stand out with respect to a target attribute. To address the problem of mining large numbers of redundant subgroups, subgroup set discovery (SSD) has been proposed. State-of-the-art SSD methods have their limitations though, as they typically rely heavily on heuristics and/or user-chosen hyperparameters. We propose a dispersion-aware problem formulation for subgroup set discovery that is based on the minimum description length (MDL) principle and subgroup lists. We argue that the best subgroup list is the one that best summarizes the data given the overall distribution of the target. We restrict our focus to a single numeric target variable and show that our formalization coincides with an existing quality measure when finding a single subgroup, but that, in addition, it allows trading off subgroup quality against the complexity of the subgroup. We next propose SSD++, a heuristic algorithm for which we empirically demonstrate that it returns outstanding subgroup lists: non-redundant sets of compact subgroups that stand out by having strongly deviating means and small spread.
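The MDL intuition for a numeric target can be sketched as a compression gain: a subgroup is worth keeping if encoding its target values with their own Gaussian (plus the complement with its own) is cheaper than encoding everything with one global Gaussian. The code below is an illustrative simplification with a hypothetical flat parameter cost; the actual SSD++ formulation is considerably more refined.

```python
import math

def gaussian_code_length(values):
    """Code length (in nats) of encoding `values` with a Gaussian using
    their own maximum-likelihood mean and variance:
    n/2 * (log(2*pi*var) + 1)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    var = max(var, 1e-12)  # guard against zero variance
    return 0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def subgroup_compression_gain(target, in_subgroup, param_cost=2.0):
    """Illustrative MDL-style score: nats saved by encoding the subgroup
    and its complement with separate Gaussians instead of one global
    Gaussian, minus a (hypothetical) cost for the extra parameters."""
    inside = [t for t, m in zip(target, in_subgroup) if m]
    outside = [t for t, m in zip(target, in_subgroup) if not m]
    if not inside or not outside:
        return 0.0
    baseline = gaussian_code_length(target)
    split = gaussian_code_length(inside) + gaussian_code_length(outside)
    return baseline - split - param_cost
```

A subgroup with a strongly deviating mean and small spread yields a large positive gain, while an arbitrary split of homogeneous data is punished by the parameter cost, which is exactly the redundancy-avoiding behaviour the abstract describes.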