[.DAVID.OPTIMIZE]    (OS independent.  Code should be easily
                      transportable to other systems.)

                             OPTIMIZATION

Concept:

Given a scalar function F of several scalar variables
{X(1),X(2),...,X(n)}, find the values of {X(1),X(2),...,X(n)} which
MINIMIZE the value of F.

Example:  Find {X(1),X(2)} which minimizes the function:

      F(X) = 10*X(1)**4 - 20*X(1)**2*X(2) + 10*X(2)**2 + X(1)**2
             - 2*X(1) + 5

(the banana function; X here denotes the vector {X(1),X(2)}).

Technique:

If you're not familiar with optimization techniques, you may wish to
check out a book on the subject; the references below give some
examples.  The general technique is to start at an arbitrary point
X{0} (a vector) and search in a given direction S{0} (also a vector)
for a minimum.  The minimum found becomes our new point X{1}:

      X{1} = X{0} + ALPHA*S{0}

We then repeat the process, starting at X{1} and searching in a new
direction S{1} until we find a minimum in that direction, which
becomes our new point X{2}.

There are numerous techniques for choosing the next search direction
and for searching along that direction for a minimum.  The worst
technique is to start over each time and search in the direction of
steepest descent.  This is the technique most people start out with
because it is simple in concept, but it can quickly bog down and
become exceedingly slow.

Two better techniques are provided in OPTI2.FOR for your use.  One is
the Conjugate Direction Method of Fletcher and Reeves (CDM), and the
other is POWELL's method.  POWELL's method was used to find the
optimum coefficients for the spline in program NEXTPRIME.FOR in
directory [.PRIMES].

For an example of how much better these methods are: I originally
tried to find the optimum coefficients for that spline using the
steepest descent method, because it was quick and easy to write and I
didn't have this optimization code written yet.  I let the program run
for several hours trying to squeeze out the best possible results.
Later, when this optimization code was written, I reran the problem
using POWELL's method.  It took less than a minute and converged to a
better answer than I had obtained after hours of steepest descent
work.

The technique provided in OPTI2.FOR for a one-dimensional search along
the S{} direction is the Golden Section Method.  A brief discussion of
the Golden Section Method is provided in subroutine GOLDEN_SECTION.
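For readers who want the flavor of the method without digging into
OPTI2.FOR, here is a minimal, self-contained sketch of a golden
section search.  It is an illustration only, not the GOLDEN_SECTION
subroutine itself: the name GOLD1D, the one-dimensional function F1D,
and the arguments A, B, and TOL are all assumptions made up for this
example.

      REAL FUNCTION GOLD1D(F1D,A,B,TOL)
C     Minimal golden section sketch.  Shrink the bracket [A,B] by
C     the golden ratio each pass until it is narrower than TOL,
C     then return the midpoint as the minimizer.  Assumes F1D is
C     unimodal on [A,B].  A and B are modified in place.
      IMPLICIT NONE
      REAL F1D,A,B,TOL
      EXTERNAL F1D
      REAL R,X1,X2,F1,F2
      R  = 0.618034             !1/(golden ratio)
      X1 = B - R*(B-A)          !Two interior trial points
      X2 = A + R*(B-A)
      F1 = F1D(X1)
      F2 = F1D(X2)
10    CONTINUE
      IF ( (B-A) .LE. TOL ) GO TO 20
      IF ( F1 .LT. F2 ) THEN    !Minimum lies in [A,X2]
         B  = X2
         X2 = X1
         F2 = F1
         X1 = B - R*(B-A)
         F1 = F1D(X1)
      ELSE                      !Minimum lies in [X1,B]
         A  = X1
         X1 = X2
         F1 = F2
         X2 = A + R*(B-A)
         F2 = F1D(X2)
      END IF
      GO TO 10
20    CONTINUE
      GOLD1D = 0.5*(A+B)
      RETURN
      END

In OPTI2.FOR the same idea is applied along the current search
direction: the one-dimensional function being minimized is
F(X + ALPHA*S) as a function of ALPHA, and ARES (described below)
plays the role of TOL.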
To optimize a function, the top level routine must provide the
following:

1. The function F to be minimized.  The function is called EVAL.  Its
   purpose when called is to evaluate the function at the point X
   (a vector) which is passed to it as a parameter.

2. An initial point X (vector) at which to start the search.

3. NDV - The number of design variables, i.e. the dimension of
   vector X.

4. STEP - Step size to use when searching for bounds for the golden
   section method.  Too small a step size will take too long; too
   large a step size may be too coarse.

5. MAXSTEP - Maximum number of steps for the one-dimensional search.
   A safety catch in case the search for bounds doesn't find them in
   a reasonable amount of time.

6. ARES - Alpha resolution.  Determines how close to the minimum the
   golden section search must get before it can terminate and say
   it's there.  Too small may cause the golden section search to take
   too long; too large may cause it to terminate prematurely.

7. DS - Delta step size.  Used by subroutine GRADIENT, for the CDM
   method only.  This is the step size to take when determining the
   gradient of the function.  It must be large enough that a gradient
   can be detected, but not so large that the local gradient is
   missed.  (A sketch illustrating this appears after Example #2,
   below.)

8. MAXIT - Maximum number of iterations.  A safety catch in case the
   search doesn't converge on the minimum and terminate normally.

Example:

File OPTI2.FOR, PROGRAM MYMAIN, is currently set up to solve the
minimization problem presented in routine NEXTPRIME.FOR in directory
[-.PRIMES]; that problem is discussed further in file NEXTPRIME.FOR.
For this problem we have a real function EVAL which, given our design
vector X and the number of design variables NDV, returns a real
number.  This number we wish to make as small as possible.  We choose
an arbitrary starting point for vector X and appropriate values for
the other variables.  For technical reasons discussed below we choose
to use the POWELL method of optimization.  Then run it, and be
impressed by how quickly it zeros in on the optimum values for X.

To run this example:

      $ FORTRAN OPTI2,FCNPRIM
      $ LINK OPTI2,FCNPRIM
      $ RUN OPTI2

The output shows the current X vector and the current value of our
objective function EVAL, which we wish to make as small as possible.
The rest is hands-on experience.

We choose the POWELL method here because our objective function EVAL
looks like a staircase instead of a smooth function.  The CDM method
uses gradients whereas the POWELL method does not.  This makes the
CDM method more efficient in some situations, but evaluating a
gradient on a staircase function is not feasible.  Also note that
some scaling of variable X(3) is done in routine EVAL.  It turns out
that function EVAL is very sensitive to small changes in X(3), so we
scale that variable to make its sensitivity about the same as that of
the other two variables X(1) and X(2).

Example #2:

To solve the banana function given at the beginning, replace the main
routine and EVAL function (and FCNOBJ) in file OPTI2.FOR with the
following:

      PROGRAM MYMAIN
      IMPLICIT NONE
      INTEGER NDV
      PARAMETER (NDV = 2)       !Number of design variables
      INTEGER MAXSTEP,MAXIT
      REAL X(NDV),STEP,ARES,DS,OBJ
      EXTERNAL EVAL
      REAL EVAL

      STEP    = 0.1             !Step size when stepping to find bounds
      MAXSTEP = 700             !Maximum number of steps before abort
      ARES    = 0.001           !Alpha resolution
      DS      = 0.001           !Delta step size when determining slope
      MAXIT   = 20              !Maximum number of iterations (restarts)
      X(1)    = -4.0            !Initial X values
      X(2)    = -4.0

      OBJ = EVAL(X,NDV)
      WRITE(6,99) OBJ
      WRITE(6,101) X
99    FORMAT( ' INITIAL CONDITION. OBJ = ', F12.6 )
101   FORMAT( 2F12.6 )

      CALL CDM(X,NDV,STEP,MAXSTEP,ARES,DS,MAXIT,EVAL)   !Main call
C     CALL POWELL(X,NDV,STEP,MAXSTEP,ARES,MAXIT,EVAL)
      STOP
      END


      REAL FUNCTION EVAL(X,NDV)         !Function to be minimized
      IMPLICIT NONE
      INTEGER NDV
      REAL X(NDV)
      EVAL = 10*X(1)**4 - 20*X(1)**2*X(2) + 10*X(2)**2
     *       + X(1)**2 - 2*X(1) + 5
C     (The banana function)
      RETURN
      END

This time there are only two design variables, X(1) and X(2), and we
have chosen to use the CDM method instead of the POWELL method;
either one should work well here.  We have also increased MAXIT
(maximum number of iterations) to 20.  Try running this and you'll
see it very quickly comes close to the correct values of X(1) = 1 and
X(2) = 1, where the objective function is minimized at a value of 4.
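To make the role of DS concrete, here is a minimal forward-difference
gradient sketch.  This is only an illustration of the idea behind a
gradient routine: the subroutine name FDGRAD is made up, and the
actual GRADIENT subroutine in OPTI2.FOR may differ in detail.

      SUBROUTINE FDGRAD(EVAL,X,NDV,DS,G)
C     Forward-difference gradient sketch: perturb each design
C     variable by DS and difference the objective.  DS must be
C     large enough that the change in EVAL is detectable, but
C     small enough to stay in the neighborhood of X.
      IMPLICIT NONE
      INTEGER NDV,I
      REAL X(NDV),G(NDV),DS,F0,XSAVE
      REAL EVAL
      EXTERNAL EVAL
      F0 = EVAL(X,NDV)          !Objective at the base point
      DO 10 I = 1,NDV
         XSAVE = X(I)
         X(I)  = X(I) + DS      !Step one variable at a time
         G(I)  = ( EVAL(X,NDV) - F0 ) / DS
         X(I)  = XSAVE          !Restore before the next variable
10    CONTINUE
      RETURN
      END

A steepest descent step would then simply search along S(I) = -G(I).
This sketch also shows why gradients fail on a staircase objective
like the one in the prime-spline problem: inside a flat step of the
staircase, every perturbation of size DS can return a zero
difference, leaving no direction to follow, which is why POWELL's
method (which uses no gradients) is the right choice there.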
REFERENCES:

There are other methods besides POWELL's and CDM discussed in the
literature.  VMA ENGINEERING, listed below, sells software which uses
more sophisticated algorithms for optimization, for those really big
jobs where you can't wait all year for a less efficient algorithm to
do the job.

Vanderplaats, Garret N.  "Numerical Optimization Techniques for
   Engineering Design: With Applications."  McGraw-Hill Book Company,
   1984.

Fox, Richard L.  "Optimization Methods for Engineering Design."
   Addison-Wesley Publishing Company, Reading, Mass., 1971.  Series:
   Addison-Wesley Series in Mechanics and Thermodynamics.

Aoki, Masanao.  "Introduction to Optimization Techniques:
   Fundamentals and Applications of Nonlinear Programming."  The
   Macmillan Company, New York, 1971.

Luenberger, David G.  "Linear and Nonlinear Programming," second
   edition.  Addison-Wesley Publishing Company, 1984.

The following company specializes in providing advanced,
state-of-the-art optimization programs and consulting (the president
was a former college professor of mine):

      VMA ENGINEERING          (New address as of Sept. 1994)
      1767 South 8th Street
      Suite M-200
      Colorado Springs, CO 80906
      (719)473-4611
      (719)473-4638 FAX
      Internet: vanderplaats@vma.com