Satisfying the function evaluation constraints in Evolutionary Algorithms

In this post, "function evaluation" means both objective function evaluation and constraint function evaluation.

In recent work for the IEEE CEC 2010 competition on constrained real-parameter optimization, we (along with Prof. Deb and Ritu) coupled an idea from the gradient projection (GP) method for handling equality constraints with our Genetic Algorithm (GA) based algorithm for finding the global optima of the problems. The approach worked wonderfully well for us. There was a problem, however: the competition imposed limits on the maximum number of function evaluations, and under this constraint our algorithm did not perform well on some of the problems (as if the actual constraints were not enough! :D). So, instead of applying the GP method from the first generation onward, we started applying it only after a certain number of generations; once it kicked in, we applied it to every individual in every generation. With this change we achieved decent results within the specified maximum number of function evaluations. A form of evolution control [1] could be at play here, which we didn't explore during the course of our work because of time constraints.
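The delayed-repair idea can be sketched in a few lines. This is a toy illustration, not the actual competition code: the linear constraint, the `gradient_repair` helper, and the `REPAIR_EVERY` schedule are all my own stand-ins. The projection step moves a point toward the equality constraint h(x) = 0 along the constraint gradient.

```python
import numpy as np

def gradient_repair(x, h, grad_h, steps=5):
    """Hypothetical repair operator: move x toward the equality
    constraint h(x) = 0 with Newton-like gradient projection steps."""
    for _ in range(steps):
        g = grad_h(x)
        x = x - h(x) * g / (g @ g)   # step of size h(x)/||g|| along -g
    return x

# Toy equality constraint: h(x) = x0 + x1 - 1 = 0
h = lambda x: x[0] + x[1] - 1.0
grad_h = lambda x: np.array([1.0, 1.0])

rng = np.random.default_rng(0)
pop = rng.uniform(-1, 1, size=(10, 2))   # population of candidate solutions

REPAIR_EVERY = 5   # apply the projection only in every 5th generation
for gen in range(20):
    # ... selection / crossover / mutation would go here ...
    if gen % REPAIR_EVERY == 0:
        pop = np.array([gradient_repair(x, h, grad_h) for x in pop])
```

For a linear constraint a single projection step already lands on the constraint surface; for nonlinear h the inner loop iterates the step a few times.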

This is a common problem when working with EAs such as a GA. Since a population of solutions is processed at each generation, the function evaluation count goes up every generation. Besides the huge number of function evaluations, in some complex problems the time taken for each evaluation is so high that parallel methods have to be adopted in order to solve them. (In a very recent playing-around, the resident CUDA guru here parallelized just the objective function evaluation in the Innovization guru's code, and they achieved a huge speedup.)
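Since the individuals in a generation are evaluated independently, the evaluation step parallelizes naturally. The actual speedup mentioned above came from CUDA; the following is only a minimal CPU-side sketch of the same idea using Python's standard `multiprocessing` module, with a cheap stand-in objective:

```python
from multiprocessing import Pool

def objective(x):
    # stand-in for an expensive simulation; in practice the real
    # cost would dominate the process-communication overhead
    return sum(xi * xi for xi in x)

def evaluate_population(population, workers=4):
    """Evaluate all individuals in parallel instead of one by one."""
    with Pool(processes=workers) as pool:
        return pool.map(objective, population)

if __name__ == "__main__":
    population = [[i, i + 1] for i in range(8)]
    fitnesses = evaluate_population(population)
```

Note that this helps with wall-clock time only; the *count* of function evaluations stays exactly the same.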

Using approximation methods: Since parallelization is not always the best option (and it won't help with function evaluation limits, of course), one way to avoid expensive function evaluations is to use an approximation model (or meta-model). That is, using a set of true data points (samples), build up a model, and then use this model to approximate the objective at an unknown input sample, thus saving the actual function evaluation(s). The important consideration, however, is how accurate the approximation model is, also called the fidelity of the model. If the model is not a very good approximation, the EA might only be able to return a sub-optimal result, which is not what we want.
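A minimal sketch of the idea, assuming a simple polynomial meta-model (real surrogate choices — kriging, RBF networks, etc. — vary from problem to problem): fit the model on a handful of true evaluations, then answer further queries from the model alone.

```python
import numpy as np

def true_f(x):
    # the expensive objective (cheap here, but imagine hours per call)
    return (x - 2.0) ** 2 + 1.0

# 1. Build the meta-model from a small set of true evaluations
samples = np.linspace(-5, 5, 7)
values = true_f(samples)                      # 7 true function evaluations
coeffs = np.polyfit(samples, values, deg=2)   # quadratic surrogate

def surrogate(x):
    """Approximate the objective without calling true_f."""
    return np.polyval(coeffs, x)

# 2. Use the surrogate for the bulk of the search
approx = surrogate(1.3)   # no true evaluation spent here
```

Since `true_f` is itself quadratic, the fit is essentially exact here; with a rougher true function the gap between `surrogate` and `true_f` is precisely the (lack of) fidelity discussed above.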

Following are some papers I recently read that will be useful to anyone looking to investigate this field:

  1. Yaochu Jin, Markus Olhofer, Bernhard Sendhoff, "A Framework for Evolutionary Optimization With Approximate Fitness Functions"
  2. M.A. El-Beltagy and A.J. Keane (1999), "Metamodeling Techniques for Evolutionary Optimization of Computationally Expensive Problems: Promises and Limitations"
  3. Y. Jin, "A Comprehensive Survey of Fitness Approximation in Evolutionary Computation"
  4. Won, Ray, Tai, "A Framework for Optimization Using Approximate Functions"


Some notes on [1] above:

  • Individual-based and generation-based evolution control: evolution proceeds based on fitness evaluations using not only the approximate fitness model but also the original fitness function.
  • Interactive evolutionary computation and the usage of approximate models therein.
  • Use in multi-modal test function optimization: smoothing the rough fitness function can prevent stagnation in a local optimum.
  • Two approaches to evolution control: (1) in every generation, only certain individuals are evaluated using the original fitness function; (2) all individuals are evaluated using the original fitness function, but only in N out of every M generations. The choice of the number of individuals in (1) and of generations in (2) has been answered via simulations.
  • The frequency at which the original function is called and the approximate model is updated should be determined by the local fidelity of the approximate model: the lower the fidelity, the more frequently the original function should be called. The notion of a *local fidelity model* is stressed.
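The generation-based strategy from the notes above can be sketched as follows. This is a toy illustration only — the function names, the noisy stand-in surrogate, and the N/M values are my own, not from the paper:

```python
import random

def true_fitness(x):
    return (x - 3.0) ** 2          # expensive original fitness function

def approx_fitness(x):
    # noisy stand-in for a trained meta-model
    return (x - 3.0) ** 2 + random.uniform(-0.5, 0.5)

def evaluate(pop, gen, n_true=2, m_cycle=5):
    """Generation-based evolution control: the original function is
    used in n_true out of every m_cycle generations; the surrogate
    is used in the remaining generations."""
    use_true = (gen % m_cycle) < n_true
    f = true_fitness if use_true else approx_fitness
    return [f(x) for x in pop], use_true
```

The individual-based variant would instead pick, within each generation, a subset of the population to send to `true_fitness`; in both cases `n_true`/`m_cycle` are exactly the knobs that, per the paper, should track the local fidelity of the model.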

I hope to experiment with approximation models for the work I mentioned at the beginning. The challenge will be to find the right evolution control strategy.


2 thoughts on "Satisfying the function evaluation constraints in Evolutionary Algorithms"

  1. the constraint on the number of function evaluations rubs me the wrong way. as far as i know GAs have had most success as "offline" optimizers, so what is this strange fascination with limiting the number of function evaluations? and i don't see GA algorithms being embedded in real-time systems any time soon.
    as far as computational methods vs analytical methods go … i feel it's the guilt resident within us which makes us sigh when too many computations are needed to arrive at an answer.
    since we are talking multi-objective optimization anyway, i suspect there might be a trade-off between computational effort and nearness to the analytical solution. in such a scenario, frowning upon computational effort is bound to put artificial limits on what can be achieved with computation alone.
    in my work at least i go full throttle ahead with the function evaluations … no holds barred. nature has two weapons:
    0) a large number of experiments
    1) lots of time.
    while we adopt the first idea we are rejecting the second.

    1. the number of function evaluations is often used as a measure of how "good" an optimizer is performing, which is useful for judging algorithms in a competition. So, if you want to win that competition, you gotta devise ways to be economical.

      In another scenario, saving on function evaluations is indeed very important: when each function evaluation is expensive (in terms of computational resources and time consumption). I am aware of problems where a single function evaluation has taken up to 8 hours; in those cases, approximate models do help. Yes, it is possible that, since we are approximating, the quality of the solution decreases further. But then, the approximation model used is obviously not a generic one; it varies from problem to problem.

      What do you think?
