An algorithm is proposed for multi-objective optimisation of Lipschitz objective functions, each of which satisfies a Lipschitz condition with an a priori known Lipschitz constant. The number of function evaluations is reduced by selecting a good next evaluation point using an Expected Hypervolume Improvement (EHVI) approach. The algorithm is closely related to Shubert's algorithm for single-objective optimisation on a one-dimensional decision space, although the sampling sequences can differ slightly.
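The key ingredient shared with Shubert's algorithm is the piecewise-linear bound that a known Lipschitz constant induces on the objective. A minimal sketch of that idea for minimisation in one dimension (the function names and the grid-based step are illustrative, not the paper's implementation):

```python
def lipschitz_lower_bound(x, samples, L):
    """Pointwise lower bound max_i (f_i - L*|x - x_i|), valid for any
    L-Lipschitz function passing through the sampled points."""
    return max(fi - L * abs(x - xi) for xi, fi in samples)

def next_shubert_point(samples, L, grid):
    """Shubert-style step for minimisation: evaluate next where the
    lower bound is smallest.  A grid search stands in here for the
    exact piecewise-linear minimisation."""
    return min(grid, key=lambda x: lipschitz_lower_bound(x, samples, L))
```

For example, with f(x) = |x - 0.3| (Lipschitz constant 1) sampled at x = 0 and x = 1, the lower bound dips to zero exactly at x = 0.3, so that is where the next evaluation is placed. The multi-objective algorithm replaces "smallest lower bound" with an EHVI-based criterion over all objectives.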
It is a common technique in global optimization with expensive black-box functions to learn a regression model (or surrogate model) of the response function from past evaluations and to use this model to decide on the location of future evaluations. In surrogate-model-assisted optimization it can be difficult to select the right modeling technique. Without preliminary knowledge about the function, it might be beneficial for the algorithm to train as many different surrogate models as possible and to select the model with the smallest training error. This is known as model selection. Recently, a generalization of this approach was proposed: instead of selecting a single model, optimal convex combinations of model predictions are used. This approach, called model mixtures, is adopted and evaluated in the context of sequential parameter optimization. Besides discussing the general strategy, the optimal frequency of learning the convex weights is investigated. The feasibility of this approach is examined and its benefits are compared to simpler methods.
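The core of the model-mixture idea is fitting convex weights so that the weighted sum of surrogate predictions minimises training error. A minimal sketch for two models, using a simple grid search over the single free weight (the paper's setting may involve more models and a proper quadratic programme over the simplex; the function name is hypothetical):

```python
def mixture_weight(preds, y, steps=100):
    """Find the convex weight w for two surrogate models minimising the
    squared training error of the mixture w*p1 + (1-w)*p2.

    preds: pair of prediction lists (p1, p2) on the training inputs
    y:     observed responses on the same inputs
    Grid search over w in [0, 1] keeps the sketch dependency-free."""
    p1, p2 = preds
    best_w, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        err = sum((w * a + (1 - w) * b - t) ** 2
                  for a, b, t in zip(p1, p2, y))
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

Plain model selection corresponds to restricting w to {0, 1}; the mixture can only do as well or better on the training data, which is the motivation for learning the weights, and the abstract's question of how often to re-learn them arises because this fit is repeated as evaluations accumulate.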