Optimization tasks in practice pose multifaceted challenges: they are often black-box, subject to multiple equality and inequality constraints, and expensive to evaluate. The efficiency of a constrained optimizer is of crucial importance when selecting a suitable method for solving real-world optimization problems from industry under strict resource limitations. The primary concern of this work is to develop new black-box optimization algorithms that are generic enough to handle a broad set of constrained optimization problems (COPs) efficiently and without requiring a-priori parameter tuning for different classes of problems. To achieve this goal we build on two main conceptual components in the development of new constrained solvers: (1) surrogate modeling techniques to save real function evaluations, and (2) automatic adjustment of sensitive problem-dependent parameters based on the information gained about the problem during the optimization procedure. This work resulted in two surrogate-assisted constrained solvers: SACOBRA and SOCU. It turns out that SACOBRA outperforms most other COP solvers on the well-known G-problem suite and on MOPTA08 (a COP from the automotive industry) when the number of function evaluations is strongly limited.
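The surrogate-assisted idea behind such solvers can be sketched generically: fit a cheap model to the points evaluated so far, search the model instead of the real function, and spend a real evaluation only at the model's suggested minimum. The sketch below is a minimal 1-D illustration under assumed choices (a Gaussian RBF surrogate, a grid search on the model, a random-exploration fallback); it is not SACOBRA's actual implementation, which uses cubic RBFs and handles constraints.

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    # Gaussian RBF surrogate: s(x) = sum_i w_i * exp(-((x - x_i)/eps)^2).
    # A small ridge term keeps the linear system well conditioned.
    Phi = np.exp(-((X[:, None] - X[None, :]) / eps) ** 2)
    w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)
    return lambda x: np.exp(-((np.atleast_1d(x)[:, None] - X[None, :]) / eps) ** 2) @ w

def surrogate_minimize(f, lo, hi, n_init=5, budget=20, rng=None):
    """Spend `budget` real evaluations; all model-based search is free."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = list(rng.uniform(lo, hi, n_init))
    y = [f(x) for x in X]                       # initial design: real evaluations
    for _ in range(budget - n_init):
        s = rbf_fit(np.array(X), np.array(y))   # cheap model of f
        cand = np.linspace(lo, hi, 2001)
        x_new = float(cand[np.argmin(s(cand))]) # minimize the model, not f
        if any(abs(x_new - xi) < 1e-9 for xi in X):
            x_new = float(rng.uniform(lo, hi))  # avoid duplicates: explore instead
        X.append(x_new)
        y.append(f(x_new))                      # one real evaluation per iteration
    i = int(np.argmin(y))
    return X[i], y[i]
```

The design choice worth noting is that the model search (`argmin` over 2001 candidates) costs no real evaluations at all, which is exactly how surrogate-assisted methods stay within a tight evaluation budget.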
Real-world optimization problems often have objective functions that are expensive in terms of cost and time, so it is desirable to find near-optimal solutions with very few function evaluations. Surrogate-assisted optimizers reduce the required number of function evaluations by replacing the real function with an efficient mathematical model built from a few evaluated points. Problems with a high condition number are a challenge for many surrogate-assisted optimizers, including SACOBRA. To address such problems we propose a new online whitening method operating in the black-box optimization paradigm. We show on a set of high-conditioning functions that online whitening tackles SACOBRA's early-stagnation issue and reduces the optimization error by a factor between 10 and 1e12 compared to plain SACOBRA, though it imposes many extra function evaluations. The covariance matrix adaptation evolution strategy (CMA-ES) achieves even lower errors for very high numbers of function evaluations, whereas SACOBRA performs better in the expensive setting (fewer than 1e03 function evaluations). If we count all parallelizable function evaluations (population evaluation in CMA-ES, online whitening in our approach) as one iteration, then both algorithms have comparable strength even in the long run. This holds for problems with dimension D <= 20.
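The whitening idea can be illustrated in a few lines: estimate the local Hessian H of the objective and optimize the transformed function g(z) = f(x0 + M z) with M = H^{-1/2}, whose condition number near x0 is close to 1 when f is locally quadratic. The finite-difference Hessian estimate is where the "many extra function evaluations" come from, and those evaluations are mutually independent, hence parallelizable. This is a generic sketch under assumed choices (central-point finite differences, step size h, eigenvalue clipping); it is not the paper's exact procedure.

```python
import numpy as np

def fd_hessian(f, x, h=1e-3):
    # Finite-difference Hessian estimate. Costs O(D^2) extra function
    # evaluations, all independent of each other (parallelizable).
    D = len(x)
    H = np.zeros((D, D))
    fx = f(x)
    for i in range(D):
        ei = np.eye(D)[i] * h
        for j in range(i, D):
            ej = np.eye(D)[j] * h
            H[i, j] = H[j, i] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + fx) / h**2
    return H

def whiten(f, x0, H):
    # M = H^{-1/2}: the transformed function g(z) = f(x0 + M*z) has a
    # condition number near 1 around x0 if f is locally quadratic.
    # Eigenvalues are clipped, assuming H is (near) positive definite.
    vals, vecs = np.linalg.eigh(H)
    M = vecs @ np.diag(1.0 / np.sqrt(np.clip(vals, 1e-12, None))) @ vecs.T
    return (lambda z: f(x0 + M @ z)), M
```

A plain surrogate-based optimizer can then be run on g instead of f; this is the sense in which whitening is "online": as x0 moves, H and M are re-estimated, each update costing another batch of parallelizable evaluations.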