# Iterative Methods for Non-Linear Systems | MATLAB Tutorial

**by** Gunjan Gupta


**Successive Substitution**

The following is a set of two simultaneous nonlinear equations with two unknowns:

*f*_{1}(*x*_{1}, *x*_{2}) = *x*_{1}^{2} + *x*_{1}*x*_{2} − 10 = 0

*f*_{2}(*x*_{1}, *x*_{2}) = *x*_{2} + 3*x*_{1}*x*_{2}^{2} − 57 = 0    (Eq. 1)

In contrast to linear systems, which plot as straight lines, these equations plot as curves on an *x*_{2} versus *x*_{1} graph. As shown in Fig. 1, the solution is the intersection of the curves.

**Figure 1: Graphical depiction of the solution of two simultaneous nonlinear equations.**

Such systems of equations can be expressed generally as

*f*_{1}(*x*_{1}, *x*_{2}) = 0

*f*_{2}(*x*_{1}, *x*_{2}) = 0

Therefore, the solutions are the values of the *x*'s that make the equations equal to zero.

A simple approach for solving Eq. 1 is to use the same strategy that was employed for fixed-point iteration and the Gauss-Seidel method. That is, each one of the nonlinear equations can be solved for one of the unknowns. These equations can then be implemented iteratively to compute new values which (hopefully) will converge on the solutions. This approach, which is called *successive substitution*, is illustrated in the following example.

**Problem 1:** Use successive substitution to determine the roots of Eq. 1. Note that a correct pair of roots is *x*_{1} = 2 and *x*_{2} = 3. Initiate the computation with guesses of *x*_{1} = 1.5 and *x*_{2} = 3.5.

Solving the first equation of Eq. 1 for *x*_{1} and the second for *x*_{2} gives the iteration formulas

*x*_{1} = (10 − *x*_{1}^{2})/*x*_{2}

*x*_{2} = 57 − 3*x*_{1}*x*_{2}^{2}

On the basis of the initial guesses (using the newly computed *x*_{1} in the second formula),

*x*_{1} = (10 − (1.5)^{2})/3.5 = 2.21429

*x*_{2} = 57 − 3(2.21429)(3.5)^{2} = −24.37516

Thus, the approach seems to be diverging. This behavior is even more pronounced on the second iteration:

*x*_{1} = (10 − (2.21429)^{2})/(−24.37516) = −0.20910

*x*_{2} = 57 − 3(−0.20910)(−24.37516)^{2} = 429.709

Obviously, the approach is deteriorating.

Now we will repeat the computation but with the original equations set up in a different format. For example, an alternative way to solve the first equation for *x*_{1} and the second for *x*_{2} is

*x*_{1} = √(10 − *x*_{1}*x*_{2})

*x*_{2} = √((57 − *x*_{2})/(3*x*_{1}))

Now the results are more satisfactory:

*x*_{1} = √(10 − 1.5(3.5)) = 2.17945

*x*_{2} = √((57 − 3.5)/(3(2.17945))) = 2.86051

*x*_{1} = √(10 − 2.17945(2.86051)) = 1.94053

*x*_{2} = √((57 − 2.86051)/(3(1.94053))) = 3.04955

Thus, the approach is converging on the true values of x_{1} = 2 and x_{2} = 3.
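The converging formulation is straightforward to program. The tutorial's code is in MATLAB, but the iteration itself is language-agnostic; the following Python sketch (the helper name and iteration count are our choices, not from the tutorial) repeats the two square-root formulas starting from the guesses of Problem 1:

```python
import math

def successive_substitution(x1, x2, iterations=20):
    """Fixed-point iteration with the converging formulation:
    x1 = sqrt(10 - x1*x2) and x2 = sqrt((57 - x2) / (3*x1)),
    updating x2 with the newly computed x1 (Gauss-Seidel style)."""
    for _ in range(iterations):
        x1 = math.sqrt(10 - x1 * x2)
        x2 = math.sqrt((57 - x2) / (3 * x1))
    return x1, x2

x1, x2 = successive_substitution(1.5, 3.5)
# the iterates approach the true roots x1 = 2, x2 = 3
```

The first two passes of the loop reproduce the hand values above; the error then shrinks by roughly a factor of three per iteration.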

The previous example illustrates the most serious shortcoming of successive substitution: convergence often depends on the manner in which the equations are formulated. Additionally, even when convergence is possible, divergence can occur if the initial guesses are insufficiently close to the true solution. These shortcomings are so restrictive that fixed-point iteration has limited utility for solving nonlinear systems.

**Newton-Raphson**

Just as fixed-point iteration can be used to solve systems of nonlinear equations, other open root-location methods such as the Newton-Raphson method can be used for the same purpose. The Newton-Raphson method is predicated on employing the derivative (i.e., the slope) of a function to estimate its intercept with the axis of the independent variable, that is, the root. It is derived from a first-order Taylor series expansion:

*f*(*x*_{i+1}) = *f*(*x*_{i}) + (*x*_{i+1} − *x*_{i})*f*′(*x*_{i})    (Eq. 3)

where *x*_{i} is the initial guess at the root and *x*_{i+1} is the point at which the slope intercepts the *x* axis. At this intercept, *f*(*x*_{i+1}) by definition equals zero and Eq. 3 can be rearranged to yield

*x*_{i+1} = *x*_{i} − *f*(*x*_{i})/*f*′(*x*_{i})

which is the single-equation form of the Newton-Raphson method.
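In code, the single-equation update is a one-line loop. This Python sketch is generic; the example function, its derivative, and the starting point are illustrative choices of ours, not taken from the tutorial:

```python
def newton_raphson(f, dfdx, x, iterations=10):
    """Single-equation Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    for _ in range(iterations):
        x = x - f(x) / dfdx(x)
    return x

# illustrative example: the positive root of f(x) = x^2 - 10
root = newton_raphson(lambda x: x**2 - 10, lambda x: 2 * x, 3.0)
```

Because each step roughly squares the error, a handful of iterations already reaches machine precision for a well-behaved starting guess.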

The multi-equation form is derived in an identical fashion. However, a multivariable Taylor series must be used to account for the fact that more than one independent variable contributes to the determination of the root. For the two-variable case, a first-order Taylor series can be written for each nonlinear equation as

*f*_{1,i+1} = *f*_{1,i} + (*x*_{1,i+1} − *x*_{1,i}) ∂*f*_{1,i}/∂*x*_{1} + (*x*_{2,i+1} − *x*_{2,i}) ∂*f*_{1,i}/∂*x*_{2}

*f*_{2,i+1} = *f*_{2,i} + (*x*_{1,i+1} − *x*_{1,i}) ∂*f*_{2,i}/∂*x*_{1} + (*x*_{2,i+1} − *x*_{2,i}) ∂*f*_{2,i}/∂*x*_{2}

Just as for the single-equation version, the root estimate corresponds to the values of *x*_{1} and *x*_{2}, where *f*_{1,i+1} and *f*_{2,i+1} equal zero. For this situation, the expansions can be rearranged as

(∂*f*_{1,i}/∂*x*_{1})*x*_{1,i+1} + (∂*f*_{1,i}/∂*x*_{2})*x*_{2,i+1} = −*f*_{1,i} + *x*_{1,i} ∂*f*_{1,i}/∂*x*_{1} + *x*_{2,i} ∂*f*_{1,i}/∂*x*_{2}

(∂*f*_{2,i}/∂*x*_{1})*x*_{1,i+1} + (∂*f*_{2,i}/∂*x*_{2})*x*_{2,i+1} = −*f*_{2,i} + *x*_{1,i} ∂*f*_{2,i}/∂*x*_{1} + *x*_{2,i} ∂*f*_{2,i}/∂*x*_{2}

Because all values subscripted with *i*'s are known (they correspond to the latest guess or approximation), the only unknowns are *x*_{1,i+1} and *x*_{2,i+1}. Consequently, algebraic manipulations (e.g., Cramer's rule) can be employed to solve for

*x*_{1,i+1} = *x*_{1,i} − (*f*_{1,i} ∂*f*_{2,i}/∂*x*_{2} − *f*_{2,i} ∂*f*_{1,i}/∂*x*_{2}) / (∂*f*_{1,i}/∂*x*_{1} ∂*f*_{2,i}/∂*x*_{2} − ∂*f*_{1,i}/∂*x*_{2} ∂*f*_{2,i}/∂*x*_{1})

*x*_{2,i+1} = *x*_{2,i} − (*f*_{2,i} ∂*f*_{1,i}/∂*x*_{1} − *f*_{1,i} ∂*f*_{2,i}/∂*x*_{1}) / (∂*f*_{1,i}/∂*x*_{1} ∂*f*_{2,i}/∂*x*_{2} − ∂*f*_{1,i}/∂*x*_{2} ∂*f*_{2,i}/∂*x*_{1})

The denominator of each of these equations is formally referred to as the determinant of the *Jacobian* of the system. Together they constitute the two-equation version of the Newton-Raphson method. As in the following example, it can be employed iteratively to home in on the roots of two simultaneous equations.

**Problem 2:** Use the multiple-equation Newton-Raphson method to determine roots of Eq. 1. Initiate the computation with guesses of *x*_{1} = 1.5 and *x*_{2} = 3.5.

First compute the partial derivatives and evaluate them at the initial guesses of *x*_{1} and *x*_{2}:

∂*f*_{1}/∂*x*_{1} = 2*x*_{1} + *x*_{2} = 2(1.5) + 3.5 = 6.5

∂*f*_{1}/∂*x*_{2} = *x*_{1} = 1.5

∂*f*_{2}/∂*x*_{1} = 3*x*_{2}^{2} = 3(3.5)^{2} = 36.75

∂*f*_{2}/∂*x*_{2} = 1 + 6*x*_{1}*x*_{2} = 1 + 6(1.5)(3.5) = 32.5

Thus, the determinant of the Jacobian for the first iteration is

6.5(32.5) − 1.5(36.75) = 156.125

The values of the functions can be evaluated at the initial guesses as

*f*_{1} = (1.5)^{2} + 1.5(3.5) − 10 = −2.5

*f*_{2} = 3.5 + 3(1.5)(3.5)^{2} − 57 = 1.625

Therefore,

*x*_{1} = 1.5 − [−2.5(32.5) − 1.625(1.5)]/156.125 = 2.03603

*x*_{2} = 3.5 − [1.625(6.5) − (−2.5)(36.75)]/156.125 = 2.84388

Thus, the results are converging to the true values of *x*_{1} = 2 and *x*_{2} = 3. The computation can be repeated until an acceptable accuracy is obtained.
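The iteration of Problem 2 can be cross-checked numerically in any language. The following Python transcription of the two-equation update (the helper name is ours; the functions, partials, and guesses are those of the problem) reproduces the first step and then continues to convergence:

```python
def newton_step(x1, x2):
    """One Newton-Raphson step for Eq. 1 via the Cramer's-rule form.
    f1 = x1^2 + x1*x2 - 10 and f2 = x2 + 3*x1*x2^2 - 57."""
    f1 = x1**2 + x1 * x2 - 10
    f2 = x2 + 3 * x1 * x2**2 - 57
    df1dx1 = 2 * x1 + x2                        # partial derivatives
    df1dx2 = x1
    df2dx1 = 3 * x2**2
    df2dx2 = 1 + 6 * x1 * x2
    det_j = df1dx1 * df2dx2 - df1dx2 * df2dx1   # determinant of the Jacobian
    x1_new = x1 - (f1 * df2dx2 - f2 * df1dx2) / det_j
    x2_new = x2 - (f2 * df1dx1 - f1 * df2dx1) / det_j
    return x1_new, x2_new

x1, x2 = newton_step(1.5, 3.5)   # first iteration: x1 = 2.03603, x2 = 2.84388
for _ in range(5):               # subsequent iterations converge to (2, 3)
    x1, x2 = newton_step(x1, x2)
```

Because Newton-Raphson converges quadratically near the root, a handful of further steps reaches the true values to machine precision.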

**MATLAB M-file:**

```matlab
function x = NRnonlin(f,J,x0,es,maxit)
% NRnonlin: Newton-Raphson root location for systems of nonlinear equations
% input:
%   f = handle returning the vector of function values
%   J = handle returning the Jacobian matrix
%   x0 = initial guess (column vector)
%   es = desired tolerance on the residual norm (default = 0.00001)
%   maxit = maximum allowable iterations (default = 50)
% output:
%   x = vector of roots
if nargin < 3, error('at least 3 input arguments required'), end
if nargin < 5 || isempty(maxit), maxit = 50; end
if nargin < 4 || isempty(es), es = 0.00001; end
N = maxit;
maxval = 10000.0;                 % residual size taken to indicate divergence
xx = x0;                          % load initial guess
while N > 0
    JJ = feval(J,xx);
    if abs(det(JJ)) < eps         % Jacobian is (numerically) singular
        error('NRnonlin - Jacobian is singular - try new x0');
    end
    xn = xx - JJ\feval(f,xx);     % Newton-Raphson update
    if norm(feval(f,xn)) < es     % converged to within tolerance
        x = xn;
        return
    end
    if norm(feval(f,xn)) > maxval % residual growing without bound
        iter = maxit - N + 1;
        disp(['iterations = ',num2str(iter)]);
        error('Solution diverges');
    end
    N = N - 1;
    xx = xn;
end
error('Maximum iterations exceeded');
```

This M-file can be used to solve Problem 2:

```matlab
>> f = @(x) [x(1)^2 + x(1)*x(2) - 10; x(2) + 3*x(1)*x(2)^2 - 57];
>> J = @(x) [2*x(1)+x(2), x(1); 3*x(2)^2, 1+6*x(1)*x(2)];
>> x0 = [1.5; 3.5];
>> x = NRnonlin(f,J,x0)

x =

    2.0000
    3.0000
```
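When Jacobians are derived by hand, sign and index errors are easy to make. A quick finite-difference comparison (a generic sanity-check technique, not part of the tutorial; the Python helper names are ours) confirms the analytical partials at the initial guess:

```python
def f(x1, x2):
    """The two functions of Eq. 1."""
    return (x1**2 + x1 * x2 - 10, x2 + 3 * x1 * x2**2 - 57)

def jacobian_analytical(x1, x2):
    """Hand-derived Jacobian, as passed to NRnonlin."""
    return ((2 * x1 + x2, x1), (3 * x2**2, 1 + 6 * x1 * x2))

def jacobian_fd(x1, x2, h=1e-6):
    """Forward-difference approximation of the 2x2 Jacobian."""
    f1, f2 = f(x1, x2)
    f1a, f2a = f(x1 + h, x2)   # perturb x1
    f1b, f2b = f(x1, x2 + h)   # perturb x2
    return (((f1a - f1) / h, (f1b - f1) / h),
            ((f2a - f2) / h, (f2b - f2) / h))

ja = jacobian_analytical(1.5, 3.5)   # ((6.5, 1.5), (36.75, 32.5))
jf = jacobian_fd(1.5, 3.5)           # agrees to roughly 5 decimal places
```

At (1.5, 3.5) both versions give the entries 6.5, 1.5, 36.75, and 32.5 used in the first iteration of Problem 2.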