Gauss Elimination Method in MATLAB
by Gunjan Gupta
Simple Gauss Elimination using Cramer’s Rule
Cramer’s rule is a solution technique that is best suited to small numbers of equations. Before describing this method, we will briefly review the concept of the determinant, which is used to implement Cramer’s rule.
Determinants: The determinant can be illustrated for a set of three equations:
[A]{x} = {b}, where [A] is the coefficient matrix.
The determinant of this system is formed from the coefficients of [A] and is represented as

D = | a_{11} a_{12} a_{13} |
    | a_{21} a_{22} a_{23} |
    | a_{31} a_{32} a_{33} |
Although the determinant D and the coefficient matrix [A] are composed of the same elements, they are completely different mathematical concepts. That is why they are distinguished visually by using brackets to enclose the matrix and straight lines to enclose the determinant. In contrast to a matrix, the determinant is a single number.
For example, the value of the determinant for two simultaneous equations

D = | a_{11} a_{12} |
    | a_{21} a_{22} |

is calculated by

D = a_{11}a_{22} − a_{12}a_{21}
For the third-order case, the determinant can be computed as

D = a_{11}(a_{22}a_{33} − a_{23}a_{32}) − a_{12}(a_{21}a_{33} − a_{23}a_{31}) + a_{13}(a_{21}a_{32} − a_{22}a_{31})

where each parenthesized 2×2 determinant is called a minor.
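The third-order cofactor expansion can be checked numerically with a short Python sketch (the helper names det2 and det3 are illustrative, not part of the tutorial):

```python
# Cofactor expansion of a 3x3 determinant along the first row,
# using three 2x2 minors as described above.
def det2(a, b, c, d):
    # Determinant of the 2x2 matrix [[a, b], [c, d]].
    return a * d - b * c

def det3(A):
    # A is a 3x3 matrix given as a list of row lists.
    return (A[0][0] * det2(A[1][1], A[1][2], A[2][1], A[2][2])
            - A[0][1] * det2(A[1][0], A[1][2], A[2][0], A[2][2])
            + A[0][2] * det2(A[1][0], A[1][1], A[2][0], A[2][1]))

# Coefficient matrix used in Problem 1 below; its determinant
# works out to -0.0022.
A = [[0.3, 0.52, 1.0],
     [0.5, 1.0, 1.9],
     [0.1, 0.3, 0.5]]
print(det3(A))
```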
Cramer’s Rule
This rule states that each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants with denominator D and with the numerator obtained from D by replacing the column of coefficients of the unknown in question by the constants b1, b2, . . . , bn. For example, for three equations, x1 would be computed as

x_{1} = D_{1}/D

where D_{1} is the determinant of [A] with its first column replaced by the constants b_{1}, b_{2}, b_{3}.
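Cramer’s rule as just stated translates directly into a short Python/NumPy sketch (a hedged illustration, not the tutorial’s M-file; note the negative right-hand-side entries in the Problem 1 data, which are easy to drop when transcribing):

```python
import numpy as np

def cramers(A, b):
    # Illustrative helper: replace column i of A with b and divide
    # that determinant by D = det(A), per Cramer's rule.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # column i replaced by the constants b
        x[i] = np.linalg.det(Ai) / D
    return x

# Problem 1 data; the solution is approximately [-14.9, -29.5, 19.8].
x = cramers([[0.3, 0.52, 1.0], [0.5, 1.0, 1.9], [0.1, 0.3, 0.5]],
            [-0.01, 0.67, -0.44])
print(x)
```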
Problem 1: Use Cramer’s rule to solve

0.3x_{1} + 0.52x_{2} + x_{3} = −0.01
0.5x_{1} + x_{2} + 1.9x_{3} = 0.67
0.1x_{1} + 0.3x_{2} + 0.5x_{3} = −0.44
The determinant D can be evaluated as

D = 0.3(1(0.5) − 1.9(0.3)) − 0.52(0.5(0.5) − 1.9(0.1)) + 1(0.5(0.3) − 1(0.1)) = −0.0022

The solution can be calculated as

x_{1} = D_{1}/D = −14.9,  x_{2} = D_{2}/D = −29.5,  x_{3} = D_{3}/D = 19.8
For more than three equations, Cramer’s rule becomes impractical because, as the number of equations increases, the determinants are time consuming to evaluate by hand (or by computer).
MATLAB M-file:

function x = Cramers(A,b)
% input:
% A = coefficient matrix
% b = right hand side vector
% output:
% x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
D = det(A);
fprintf('\nDeterminant D = %f\n', D);
fprintf('\nCramer''s Rule\n')
Aa = A;
for i = 1:length(b)
  A(:,i) = b;
  x(i) = det(A)/D;
  A = Aa;
end
for i = 1:length(x)
  fprintf('\nx%d = %f\n', i, x(i));
end
The M-file can be used to solve Problem 1.
>> A=[0.3 0.52 1;0.5 1 1.9;0.1 0.3 0.5];
>> b = [-0.01;0.67;-0.44];
>> Cramers(A,b);
Determinant D = -0.002200
Cramer's Rule
x1 = -14.900000
x2 = -29.500000
x3 = 19.800000
Naive Gauss Elimination
The elimination-of-unknowns procedure consists of two steps (Fig. 1):
- The equations were manipulated to eliminate one of the unknowns from the equations. The result of this elimination step was that we had one equation with one unknown.
- Consequently, this equation could be solved directly and the result back-substituted into one of the original equations to solve for the remaining unknown.
This basic approach can be extended to large sets of equations by developing a systematic scheme or algorithm to eliminate unknowns and to back-substitute. Gauss elimination is the most basic of these schemes. This section includes the systematic techniques for forward elimination and back substitution that comprise Gauss elimination. Although these techniques are ideally suited for implementation on computers, some modifications will be required to obtain a reliable algorithm. In particular, the computer program must avoid division by zero. The following method is called “naive” Gauss elimination because it does not avoid this problem.
The approach is designed to solve a general set of n equations:

a_{11}x_{1} + a_{12}x_{2} + ... + a_{1n}x_{n} = b_{1}        (1a)
a_{21}x_{1} + a_{22}x_{2} + ... + a_{2n}x_{n} = b_{2}        (1b)
...
a_{n1}x_{1} + a_{n2}x_{2} + ... + a_{nn}x_{n} = b_{n}        (1c)
Figure 1: The two phases of Gauss elimination: (a) forward elimination and
(b) back substitution.
Forward Elimination of Unknowns: The first phase is designed to reduce the set of equations to an upper triangular system (Fig. 1a). The initial step will be to eliminate the first unknown x1 from the second through the nth equations. To do this, multiply Eq. (1a) by a21/a11 to give

(a_{21}/a_{11})a_{11}x_{1} + (a_{21}/a_{11})a_{12}x_{2} + ... + (a_{21}/a_{11})a_{1n}x_{n} = (a_{21}/a_{11})b_{1}

This equation can be subtracted from Eq. (1b) to give

(a_{22} − (a_{21}/a_{11})a_{12})x_{2} + ... + (a_{2n} − (a_{21}/a_{11})a_{1n})x_{n} = b_{2} − (a_{21}/a_{11})b_{1}

or

a'_{22}x_{2} + ... + a'_{2n}x_{n} = b'_{2}

where the prime indicates that the elements have been changed from their original values.
Repeating the procedure for the remaining equations results in the following modified system:

a_{11}x_{1} + a_{12}x_{2} + ... + a_{1n}x_{n} = b_{1}
a'_{22}x_{2} + ... + a'_{2n}x_{n} = b'_{2}
...
a'_{n2}x_{2} + ... + a'_{nn}x_{n} = b'_{n}

The procedure can be continued using the remaining pivot equations. The final manipulation in the sequence is to use the (n − 1)th equation to eliminate the xn−1 term from the nth equation. At this point, the system will have been transformed to an upper triangular system:

a_{11}x_{1} + a_{12}x_{2} + a_{13}x_{3} + ... + a_{1n}x_{n} = b_{1}        (4a)
a'_{22}x_{2} + a'_{23}x_{3} + ... + a'_{2n}x_{n} = b'_{2}        (4b)
a''_{33}x_{3} + ... + a''_{3n}x_{n} = b''_{3}        (4c)
...
a_{nn}^{(n−1)}x_{n} = b_{n}^{(n−1)}        (4d)
Back Substitution: Equation (4d) can now be solved for x_{n}:

x_{n} = b_{n}^{(n−1)} / a_{nn}^{(n−1)}

This result can be back-substituted into the (n − 1)th equation to solve for xn−1. The procedure, which is repeated to evaluate the remaining x’s, can be represented by the following formula:

x_{i} = (b_{i}^{(i−1)} − Σ_{j=i+1}^{n} a_{ij}^{(i−1)} x_{j}) / a_{ii}^{(i−1)}   for i = n − 1, n − 2, ..., 1
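The forward-elimination and back-substitution procedure just described can be sketched in Python/NumPy (a hedged translation for illustration; the tutorial’s own MATLAB implementation is the GaussNaive M-file below):

```python
import numpy as np

def gauss_naive(A, b):
    # Forward elimination to an upper triangular augmented matrix,
    # then back substitution. Sketch only: no pivoting, so a zero
    # pivot element causes a division by zero.
    Aug = np.hstack([np.asarray(A, float),
                     np.asarray(b, float).reshape(-1, 1)])
    n = len(b)
    for k in range(n - 1):            # for each pivot row
        for i in range(k + 1, n):     # eliminate column k below the pivot
            factor = Aug[i, k] / Aug[k, k]
            Aug[i, k:] -= factor * Aug[k, k:]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):    # back substitution, last row first
        x[i] = (Aug[i, -1] - Aug[i, i + 1:n] @ x[i + 1:n]) / Aug[i, i]
    return x

# Problem 2 data (note the negative coefficients, which are easy to
# lose when transcribing); the exact solution is [3, -2.5, 7].
x = gauss_naive([[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]],
                [7.85, -19.3, 71.4])
print(x)
```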
Problem 2: Use Gauss elimination to solve

3x_{1} − 0.1x_{2} − 0.2x_{3} = 7.85        (P2a)
0.1x_{1} + 7x_{2} − 0.3x_{3} = −19.3        (P2b)
0.3x_{1} − 0.2x_{2} + 10x_{3} = 71.4        (P2c)
The first part of the procedure is forward elimination. Multiply Eq. (P2a) by 0.1/3 and subtract the result from Eq. (P2b) to give

7.00333x_{2} − 0.293333x_{3} = −19.5617

Then multiply Eq. (P2a) by 0.3/3 and subtract it from Eq. (P2c). After these operations, the set of equations is

3x_{1} − 0.1x_{2} − 0.2x_{3} = 7.85        (P2d)
7.00333x_{2} − 0.293333x_{3} = −19.5617        (P2e)
−0.190000x_{2} + 10.0200x_{3} = 70.6150        (P2f)
To complete the forward elimination, x2 must be removed from Eq. (P2f). To accomplish this, multiply Eq. (P2e) by −0.190000/7.00333 and subtract the result from Eq. (P2f). This eliminates x2 from the third equation and reduces the system to an upper triangular form, as in

3x_{1} − 0.1x_{2} − 0.2x_{3} = 7.85        (P2g)
7.00333x_{2} − 0.293333x_{3} = −19.5617        (P2h)
10.0120x_{3} = 70.0843        (P2i)
We can now solve these equations by back substitution. First, Eq. (P2i) can be solved for

x_{3} = 70.0843/10.0120 = 7.00003

This result can be back-substituted into Eq. (P2h), which can then be solved for

x_{2} = (−19.5617 + 0.293333(7.00003))/7.00333 = −2.50000

Finally, x_{3} and x_{2} can be substituted back into Eq. (P2g), which can be solved for

x_{1} = (7.85 + 0.1(−2.50000) + 0.2(7.00003))/3 = 3.00000
Although there is a slight round-off error, the results are very close to the exact solution of x1 = 3, x2 = −2.5, and x3 = 7. This can be verified by substituting the results into the original equation set:
3(3) − 0.1(−2.5) − 0.2(7.00003) = 7.84999 ≈ 7.85
0.1(3) + 7(−2.5) − 0.3(7.00003) = −19.30000 ≈ −19.3
0.3(3) − 0.2(−2.5) + 10(7.00003) = 71.4003 ≈ 71.4
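The substitution check above is easy to reproduce; a small Python sketch:

```python
# Substitute the computed values back into the three equations of
# Problem 2; each left-hand side reproduces the right-hand-side
# constant to within round-off.
x1, x2, x3 = 3.0, -2.5, 7.00003
r1 = 3 * x1 - 0.1 * x2 - 0.2 * x3    # should be close to 7.85
r2 = 0.1 * x1 + 7 * x2 - 0.3 * x3    # should be close to -19.3
r3 = 0.3 * x1 - 0.2 * x2 + 10 * x3   # should be close to 71.4
print(r1, r2, r3)
```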
MATLAB M-file:

function x = GaussNaive(A,b)
% input:
% A = coefficient matrix
% b = right hand side vector
% output:
% x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
  for i = k+1:n
    factor = Aug(i,k)/Aug(k,k);
    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
  end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
  x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
for i = 1:length(x)
  fprintf('\nx%d = %f\n', i, x(i));
end
The M-file can be used to solve Problem 2.
>> A = [3 -0.1 -0.2;0.1 7 -0.3;0.3 -0.2 10];
>> b = [7.85;-19.3;71.4];
>> x = GaussNaive(A,b)
x1 = 3.0000
x2 = -2.5000
x3 = 7.0000
Pivoting
The primary reason that the foregoing technique is called “naive” is that during both the elimination and the back-substitution phases, it is possible that a division by zero can occur. For example, if we use naive Gauss elimination to solve a system whose first equation has a zero leading coefficient, the normalization of the first row would involve division by a11 = 0. Problems may also arise when the pivot element is close, rather than exactly equal, to zero, because if the magnitude of the pivot element is small compared to the other elements, then round-off errors can be introduced.
Therefore, before each row is normalized, it is advantageous to determine the coefficient with the largest absolute value in the column below the pivot element. The rows can then be switched so that the largest element is the pivot element. This is called partial pivoting. If columns as well as rows are searched for the largest element and then switched, the procedure is called complete pivoting.
The following example illustrates the advantages of partial pivoting. Aside from avoiding division by zero, pivoting also minimizes round-off error. As such, it also serves as a partial remedy for ill-conditioning.
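Partial pivoting adds one step to the elimination loop: search the pivot column at and below the current row for the largest absolute value, and swap that row up before eliminating. A hedged Python/NumPy sketch (the function name is illustrative, not the tutorial’s M-file):

```python
import numpy as np

def gauss_partial_pivot(A, b):
    # Gauss elimination with partial pivoting, then back substitution.
    Aug = np.hstack([np.asarray(A, float),
                     np.asarray(b, float).reshape(-1, 1)])
    n = len(b)
    for k in range(n - 1):
        # Row (at or below k) whose entry in column k is largest in
        # absolute value becomes the pivot row.
        p = k + int(np.argmax(np.abs(Aug[k:, k])))
        if p != k:
            Aug[[k, p]] = Aug[[p, k]]      # partial pivoting: swap rows
        for i in range(k + 1, n):
            factor = Aug[i, k] / Aug[k, k]
            Aug[i, k:] -= factor * Aug[k, k:]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (Aug[i, -1] - Aug[i, i + 1:n] @ x[i + 1:n]) / Aug[i, i]
    return x

# Problem 3 system below: the tiny leading pivot 0.0003 triggers a
# row swap; the exact solution is [1/3, 2/3].
x = gauss_partial_pivot([[0.0003, 3.0], [1.0, 1.0]], [2.0001, 1.0])
print(x)
```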
Problem 3: Use Gauss elimination to solve

0.0003x_{1} + 3.0000x_{2} = 2.0001
1.0000x_{1} + 1.0000x_{2} = 1.0000

Note that in this form the first pivot element, a_{11} = 0.0003, is very close to zero. Then repeat the computation, but with partial pivoting, by reversing the order of the equations. The exact solution is x_{1} = 1/3 and x_{2} = 2/3.
Multiplying the first equation by 1/(0.0003) yields

x_{1} + 10,000x_{2} = 6667

which can be used to eliminate x_{1} from the second equation:

−9999x_{2} = −6666

which can be solved for x_{2} = 2/3. This result can be substituted back into the first equation to evaluate x_{1}:

x_{1} = (2.0001 − 3x_{2})/0.0003
Due to subtractive cancellation, the result is very sensitive to the number of significant figures carried in the computation:
Significant Figures    x_{2}        x_{1}        Absolute Value of Percent Relative Error for x_{1}
3                      0.667        −3.33        1099
4                      0.6667       0.0000       100
5                      0.66667      0.30000      10
6                      0.666667     0.330000     1
7                      0.6666667    0.3330000    0.1
Note how the solution for x_{1} is highly dependent on the number of significant figures.
On the other hand, if the equations are solved in reverse order, the row with the larger pivot element is normalized. The equations are

1.0000x_{1} + 1.0000x_{2} = 1.0000
0.0003x_{1} + 3.0000x_{2} = 2.0001

Elimination and substitution again yields x_{2} = 2/3. For different numbers of significant figures, x_{1} can be computed from the first equation, as in

x_{1} = 1 − x_{2}
This case is much less sensitive to the number of significant figures in the computation:
Significant Figures    x_{2}        x_{1}        Absolute Value of Percent Relative Error for x_{1}
3                      0.667        0.333        0.1
4                      0.6667       0.3333       0.01
5                      0.66667      0.33333      0.001
6                      0.666667     0.333333     0.0001
7                      0.6666667    0.3333333    0.0000
Thus, a pivot strategy is much more satisfactory.
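The two error tables can be reproduced numerically. In the sketch below (helper names are illustrative), x2 = 2/3 is rounded to d significant figures and x1 is then recovered either from the small-pivot equation (naive ordering) or from the well-scaled equation (pivoted ordering); the lowest-precision row can differ slightly from the table depending on how intermediate products are rounded:

```python
def round_sig(v, d):
    # Round v to d significant figures (illustrative helper).
    return float(f"%.{d - 1}e" % v)

def recover_x1(d):
    # x2 rounded to d significant figures, then x1 recovered two ways.
    x2 = round_sig(2 / 3, d)
    x1_naive = (2.0001 - 3 * x2) / 0.0003  # from 0.0003*x1 + 3*x2 = 2.0001
    x1_pivot = 1 - x2                      # from x1 + x2 = 1
    return x2, x1_naive, x1_pivot

for d in range(3, 8):
    x2, x1_naive, x1_pivot = recover_x1(d)
    err_naive = abs((x1_naive - 1 / 3) / (1 / 3)) * 100
    err_pivot = abs((x1_pivot - 1 / 3) / (1 / 3)) * 100
    print(d, x2, x1_naive, err_naive, x1_pivot, err_pivot)
```

With four significant figures the naive ordering loses all accuracy in x1 (the 100% error row), while the pivoted ordering is already correct to four figures.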
MATLAB M-file:

function x = GaussPivot(A,b)
% input:
% A = coefficient matrix
% b = right hand side vector
% output:
% x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
  % partial pivoting
  [big,i] = max(abs(Aug(k:n,k)));
  ipr = i+k-1;
  if ipr~=k
    Aug([k,ipr],:) = Aug([ipr,k],:);
  end
  for i = k+1:n
    factor = Aug(i,k)/Aug(k,k);
    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
  end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
  x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
for i = 1:length(x)
  fprintf('\nx%d = %f\n', i, x(i));
end
The M-file can be used to solve Problem 3.
>> A = [0.0003 3.0000;1.0000 1.0000];
>> b = [2.0001;1.0000];
>> GaussPivot(A,b);
x1 = 0.333333
x2 = 0.666667