# Optimization in MATLAB – Part I | MATLAB Tutorial

**by** Gunjan Gupta


Optimization is the process of creating something that is as effective as possible. As engineers, we must continuously design devices and products that perform tasks in an efficient fashion for the least cost. In addition, scientists have interest in optimal phenomena ranging from the peak elevation of projectiles to the minimum free energy.

From a mathematical perspective, optimization deals with finding the maxima and minima of a function that depends on one or more variables. The goal is to determine the values of the variables that yield maxima or minima for the function. These can then be substituted back into the function to compute its optimal values.

From a numerical standpoint, optimization is similar in spirit to the root-location methods. That is, both involve guessing and searching for a point on a function. The fundamental difference between the two types of problems is illustrated in Fig. 1. Root location involves searching for the location where the function equals zero. In contrast, optimization involves searching for the function’s extreme points.

**Figure 1: A function of a single variable illustrating the difference between roots and optima.**

As shown in Fig. 1, the optima are the points where the curve is flat. In mathematical terms, this corresponds to the *x* value where the derivative *f*′(*x*) is equal to zero. Additionally, the second derivative, *f*″(*x*), indicates whether the optimum is a minimum or a maximum: if *f*″(*x*) < 0, the point is a maximum; if *f*″(*x*) > 0, the point is a minimum.
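These conditions can also be checked numerically. The following sketch (a minimal illustration; the test point and step size *h* are chosen for this example) approximates the first and second derivatives with centered finite differences:

```matlab
% Classify a critical point of f using centered finite differences.
f = @(x) x.^2/10 - 2*sin(x);   % example function (used later in this tutorial)
x = 1.4276;                    % approximate location of a critical point
h = 1e-5;                      % step size for the difference formulas
df  = (f(x+h) - f(x-h)) / (2*h);         % approximates f'(x); near zero at an optimum
d2f = (f(x+h) - 2*f(x) + f(x-h)) / h^2;  % approximates f''(x)
% d2f > 0 indicates a minimum; d2f < 0 indicates a maximum
```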

**One-Dimensional Optimization**

A useful image in this regard is the one-dimensional “roller coaster”-like function depicted in Fig. 2. A *global optimum* represents the very best solution. A *local optimum*, though not the very best, is better than its immediate neighbors. Cases that include local optima are called *multimodal*. In such cases, we will almost always be interested in finding the global optimum.

Just as in root location, optimization in one dimension can be divided into bracketing and open methods. As described in the next section, the golden-section search is an example of a bracketing method that is very similar in spirit to the bisection method for root location. This is followed by a somewhat more sophisticated approach, parabolic interpolation.

**Figure 2: A function that asymptotically approaches zero at plus and minus infinity and has two maximum and two minimum points in the vicinity of the origin.**


**Golden-Section Search**

In many cultures, certain numbers are ascribed magical qualities. For example, in the West we have “lucky 7” and “Friday the 13th.” Beyond such superstitious quantities, there are several well-known numbers that have such interesting and powerful mathematical properties that they could truly be called “magical.” The most common of these are the ratio of a circle’s circumference to its diameter, π, and the base of the natural logarithm, *e*.

Although not as widely known, the *golden ratio* should surely be included in the pantheon of remarkable numbers. This quantity, which is typically represented by the Greek letter φ (pronounced: fee or phi), was originally defined by Euclid (ca. 300 BCE) because of its role in the construction of the pentagram or five-pointed star. As depicted in Fig. 3, Euclid’s definition reads: “A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the lesser.”

**Figure 3: Dividing a line into two segments so that the ratio of the whole line to the larger segment is equal to the ratio of the larger segment to the smaller segment. This ratio is called the golden ratio.**

The actual value of the golden ratio can be derived by expressing Euclid’s definition as

(ℓ₁ + ℓ₂)/ℓ₁ = ℓ₁/ℓ₂

Multiplying by ℓ₁/ℓ₂ and collecting terms gives

φ² − φ − 1 = 0

where φ = ℓ₁/ℓ₂. The positive root of this equation is the golden ratio:

φ = (1 + √5)/2 = 1.61803398874989...
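As a quick numerical check (a one-line sketch, separate from the derivation), MATLAB confirms that φ² − φ − 1 evaluates to zero:

```matlab
phi = (1 + sqrt(5))/2;   % golden ratio, 1.6180...
phi^2 - phi - 1          % essentially zero (within floating-point roundoff)
```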

The golden ratio provides the basis for the golden-section search, a simple, general-purpose method for determining the optimum of a single-variable function. The golden-section search is similar in spirit to the bisection approach for locating roots. Recall that bisection hinged on defining an interval, specified by a lower guess (*xl*) and an upper guess (*xu*), that bracketed a single root. The presence of a root between these bounds was verified by determining that *f*(*xl*) and *f*(*xu*) had different signs. The root was then estimated as the midpoint of this interval:

*xr* = (*xl* + *xu*)/2

Now suppose that instead of a root, we were interested in determining the minimum of a one-dimensional function. As with bisection, we can start by defining an interval that contains a single answer. That is, the interval should contain a single minimum, and hence is called *unimodal*. We can adopt the same nomenclature as for bisection, where *xl* and *xu* defined the lower and upper bounds, respectively, of such an interval. However, in contrast to bisection, we need a new strategy for finding a minimum within the interval. Rather than using a single intermediate value (which is sufficient to detect a sign change, and hence a zero), we would need two intermediate function values to detect whether a minimum occurred.

The key to making this approach efficient is the wise choice of the intermediate points. As in bisection, the goal is to minimize function evaluations by replacing old values with new values. For bisection, this was accomplished by choosing the midpoint. For the golden-section search, the two intermediate points are chosen according to the golden ratio:

*x*1 = *xl* + *d*

*x*2 = *xu* − *d*

where

*d* = (φ − 1)(*xu* − *xl*)
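As a numerical illustration (using the interval [0, 4] that appears in Problem 1 below), the first pair of interior points works out to:

```matlab
phi = (1 + sqrt(5))/2;     % golden ratio
xl = 0;  xu = 4;           % bracketing interval
d  = (phi - 1)*(xu - xl);  % d = 2.4721
x1 = xl + d                % x1 = 2.4721
x2 = xu - d                % x2 = 1.5279
```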

**Figure 4: (a) The initial step of the golden-section search algorithm involves choosing two interior points according to the golden ratio. (b) The second step involves defining a new interval that encompasses the optimum.**

The function is evaluated at these two interior points. Two results can occur:

- If, as in Fig. 4*a*, *f*(*x*1) < *f*(*x*2), then *f*(*x*1) is the minimum, and the domain of *x* to the left of *x*2, from *xl* to *x*2, can be eliminated because it does not contain the minimum. For this case, *x*2 becomes the new *xl* for the next round.
- If *f*(*x*2) < *f*(*x*1), then *f*(*x*2) is the minimum, and the domain of *x* to the right of *x*1, from *x*1 to *xu*, would be eliminated. For this case, *x*1 becomes the new *xu* for the next round.

Because the original *x*1 and *x*2 were chosen using the golden ratio, we do not have to recalculate all the function values for the next iteration. As the iterations are repeated, the interval containing the extremum is reduced rapidly.
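Because each new interval has width *d* = (φ − 1)(*xu* − *xl*), the bracket shrinks by the constant factor φ − 1 ≈ 0.6180 on every iteration. A minimal sketch of this reduction, using the interval of width 4 from Problem 1 below:

```matlab
% After n golden-section iterations, the bracketing interval has width
% (phi-1)^n times the original width.
phi = (1 + sqrt(5))/2;
width0 = 4;                        % initial interval width, as in Problem 1
width8 = (phi - 1)^8 * width0      % about 0.0851 after 8 iterations
```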

**Problem 1:** Use the golden-section search to find the minimum of

*f*(*x*) = *x*²/10 − 2 sin *x*

within the interval from *xl* = 0 to *xu* = 4.

First, the golden ratio is used to create the two interior points:

*d* = (φ − 1)(4 − 0) = 2.4721

*x*1 = 0 + 2.4721 = 2.4721

*x*2 = 4 − 2.4721 = 1.5279

The function can be evaluated at the interior points:

*f*(*x*1) = *f*(2.4721) = (2.4721)²/10 − 2 sin(2.4721) = −0.6300

*f*(*x*2) = *f*(1.5279) = (1.5279)²/10 − 2 sin(1.5279) = −1.7647
Because *f*(*x*2) < *f*(*x*1), our best estimate of the minimum at this point is that it is located at *x* = 1.5279 with a value of *f*(*x*) = −1.7647. In addition, we also know that the minimum is in the interval defined by *xl*, *x*2, and *x*1. Thus, for the next iteration, the lower bound remains *xl* = 0, and *x*1 becomes the upper bound, that is, *xu* = 2.4721. In addition, the former *x*2 value becomes the new *x*1, that is, *x*1 = 1.5279. Finally, we do not have to recalculate *f*(*x*1); it was determined on the previous iteration as *f*(1.5279) = −1.7647.

Compute the new value of *d* and *x*2:

*d* = (φ − 1)(2.4721 − 0) = 1.5279

*x*2 = 2.4721 − 1.5279 = 0.9443

The function evaluation at *x*2 is *f*(0.9443) = −1.5310. Since this value is greater than the function value at *x*1, the minimum remains *f*(1.5279) = −1.7647, and it is in the interval prescribed by *x*2, *x*1, and *xu*. The process can be repeated, with the results tabulated below.

Note that the current minimum is highlighted for every iteration. After the eighth iteration, the minimum occurs at *x *= 1.4427 with a function value of −1.7755. Thus, the result is converging on the true value of −1.7757 at *x *= 1.4276.

| Iteration | *xl* | *f*(*xl*) | *x*2 | *f*(*x*2) | *x*1 | *f*(*x*1) | *xu* | *f*(*xu*) | *d* |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 0 | 1.5279 | **−1.7647** | 2.4721 | −0.6300 | 4.0000 | 3.1136 | 2.4721 |
| 2 | 0 | 0 | 0.9443 | −1.5310 | 1.5279 | **−1.7647** | 2.4721 | −0.6300 | 1.5279 |
| 3 | 0.9443 | −1.5310 | 1.5279 | **−1.7647** | 1.8885 | −1.5432 | 2.4721 | −0.6300 | 0.9443 |
| 4 | 0.9443 | −1.5310 | 1.3050 | −1.7595 | 1.5279 | **−1.7647** | 1.8885 | −1.5432 | 0.5836 |
| 5 | 1.3050 | −1.7595 | 1.5279 | **−1.7647** | 1.6656 | −1.7136 | 1.8885 | −1.5432 | 0.3607 |
| 6 | 1.3050 | −1.7595 | 1.4427 | **−1.7755** | 1.5279 | −1.7647 | 1.6656 | −1.7136 | 0.2229 |
| 7 | 1.3050 | −1.7595 | 1.3901 | −1.7742 | 1.4427 | **−1.7755** | 1.5279 | −1.7647 | 0.1378 |
| 8 | 1.3901 | −1.7742 | 1.4427 | **−1.7755** | 1.4752 | −1.7732 | 1.5279 | −1.7647 | 0.0851 |

MATLAB M-file:

```matlab
function [x,fx,ea,iter] = goldmin(f,xl,xu,es,maxit,varargin)
% input:
%   f = name of function
%   xl, xu = lower and upper guesses
%   es = desired relative error (default = 0.0001%)
%   maxit = maximum allowable iterations (default = 50)
%   p1,p2,... = additional parameters used by f
% output:
%   x = location of minimum
%   fx = minimum function value
%   ea = approximate relative error (%)
%   iter = number of iterations
if nargin < 3, error('at least 3 input arguments required'), end
if nargin < 4 || isempty(es), es = 0.0001; end
if nargin < 5 || isempty(maxit), maxit = 50; end
phi = (1 + sqrt(5))/2;
iter = 0;
while (1)
  d = (phi - 1)*(xu - xl);
  x1 = xl + d;
  x2 = xu - d;
  if f(x1,varargin{:}) < f(x2,varargin{:})
    xopt = x1;
    xl = x2;
  else
    xopt = x2;
    xu = x1;
  end
  iter = iter + 1;
  if xopt ~= 0
    ea = (2 - phi)*abs((xu - xl)/xopt)*100;
  end
  if (ea <= es || iter >= maxit), break, end
end
x = xopt;
fx = f(xopt,varargin{:});
```

The M-file can be used to solve Problem 1:

```matlab
>> y = @(x) ((x^2)/10) - 2*sin(x);
>> [xmin,fmin,ea,iter] = goldmin(y,0,4)
xmin = 1.4276
fmin = -1.7757
ea = 9.3079e-05
iter = 29
```
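For comparison, the same minimum can be located with MATLAB’s built-in `fminbnd`, which combines the golden-section search with parabolic interpolation:

```matlab
y = @(x) (x^2)/10 - 2*sin(x);
[xmin,fmin] = fminbnd(y,0,4)   % xmin is approximately 1.4276, fmin approximately -1.7757
```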

For parabolic interpolation in one-dimensional optimization, and for multidimensional optimization, check our next tutorial.