Image Rotation using MATLAB | MATLAB Tutorial
In imaging science, image processing, which includes image rotation, is the analysis and manipulation of a digitized image, especially to improve its quality. It enhances the pictorial information for human interpretation, makes the image more suitable for autonomous machine perception, and enables effective storage and transmission.
Figure 1 shows the actual image, whereas Figure 2 shows the processed image.
What is Image Rotation?
“Flipping and transposing the original image is image rotation.”
- Image rotation is performed by computing the inverse transformation for every destination pixel; output pixel values are computed using bilinear interpolation. RGB images are rotated one colour plane at a time.
- Flipping is a special case of image rotation, corresponding to rotations by 90 or 180 degrees.
- Images can also be rotated by an arbitrary angle.
Different Rotation Methods: –
Assume pixel coordinates are normalized to the range [0, 1].
1. To flip an image upside down: at pixel location (x, y), take the colour from location (x, 1 − y).
2. For a horizontal flip: at pixel location (x, y), take the colour from (1 − x, y).
3. To rotate an image 90 degrees counter-clockwise: at pixel location (x, y), take the colour from (y, 1 − x).
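These index remappings can be sketched in Python with NumPy (a hedged illustration of the same arithmetic, not the MATLAB internals; for an integer grid the mapping 1 − y becomes "reverse the row order", and so on):

```python
import numpy as np

img = np.arange(12).reshape(3, 4)  # tiny stand-in for a grayscale image

# Upside-down flip: pixel (x, y) takes the colour from (x, 1 - y),
# i.e. the row order is reversed.
upside_down = img[::-1, :]

# Horizontal flip: pixel (x, y) takes the colour from (1 - x, y),
# i.e. the column order is reversed.
mirrored = img[:, ::-1]

# 90-degree counter-clockwise rotation: pixel (x, y) takes the colour
# from (y, 1 - x), i.e. transpose, then reverse the row order.
ccw = img.T[::-1, :]

# These agree with NumPy's built-in flips and rotation:
print(np.array_equal(upside_down, np.flipud(img)))  # True
print(np.array_equal(mirrored, np.fliplr(img)))     # True
print(np.array_equal(ccw, np.rot90(img)))           # True
```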
Rotation by an angle α: –
A pixel at (x, y) maps to (x′, y′), where
x′ = x·cos α − y·sin α
y′ = x·sin α + y·cos α
Matrix Form: –
[x′; y′] = [cos α  −sin α; sin α  cos α] · [x; y]
Block Diagram: –
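The forward and inverse mappings implied by the matrix form can be sketched in Python with NumPy (an illustration of the mathematics, not MATLAB's implementation; the inverse of a rotation matrix is simply its transpose, which is why inverse mapping per destination pixel is cheap):

```python
import numpy as np

def rotation_matrix(alpha):
    """2x2 matrix for a counter-clockwise rotation by alpha radians."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s], [s, c]])

alpha = np.deg2rad(90)
R = rotation_matrix(alpha)

# Forward mapping: rotate the point (1, 0) by 90 degrees.
p = R @ np.array([1.0, 0.0])
print(np.round(p, 6))  # [0. 1.]

# Inverse mapping (computed for every destination pixel): the inverse
# of a rotation is its transpose, so no explicit inversion is needed.
q = R.T @ p
print(np.round(q, 6))  # [1. 0.]
```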
Commands required for Image Rotation: –
imresize(image, scale); imrotate(image, angle); imref2d(size(image)); imwarp(image, tform);
Steps for Image Rotation: –
Reading the Image
> img = imread('cameraman.tif');
- Resizing and rotating the image
scale = 0.7;
new_img = imresize(img, scale);
theta = 45;
modified_img = imrotate(new_img, theta);
figure, imshow(modified_img)
It may be noticed that the image is still rectangular; this is because the function imrotate makes the output array large enough to contain the entire rotated image by setting the values of pixels in modified_img that are outside the rotated image to zero.
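The size of that enlarged output canvas follows from simple trigonometry: a w-by-h image rotated by θ needs an axis-aligned box of width w·|cos θ| + h·|sin θ| and height w·|sin θ| + h·|cos θ|. A small Python sketch of this bounding-box arithmetic (an illustration only; MATLAB's exact output size may differ by a pixel due to rounding conventions):

```python
import math

def rotated_bbox(width, height, theta_deg):
    """Size of the axis-aligned box containing the rotated image."""
    t = math.radians(theta_deg)
    new_w = abs(width * math.cos(t)) + abs(height * math.sin(t))
    new_h = abs(width * math.sin(t)) + abs(height * math.cos(t))
    return math.ceil(new_w), math.ceil(new_h)

# A square 180x180 image rotated by 45 degrees needs a noticeably
# larger canvas; the corner regions are filled with zeros.
print(rotated_bbox(180, 180, 45))  # (255, 255)
```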
- Recovering the Original Image
Here tform is the geometric transformation estimated from matched features between the original and distorted images (the full script below obtains it with estimateGeometricTransform):
> img_out = imref2d(size(img));
> recovered = imwarp(modified_img, tform, 'OutputView', img_out);
> figure, imshowpair(img, recovered, 'montage')
The quality of the original and recovered images does not match exactly because of the distortion and recovery process. In particular, shrinking the image causes a loss of information, and the artifacts around the edges are due to the limited accuracy of the transformation.
Let's implement the complete distortion-and-recovery pipeline in MATLAB.
%% Read Image
original = imread('cameraman.tif');

%% Resize and Rotate
scale = 0.7;
distorted = imresize(original, scale);
theta = 45;
distorted = imrotate(distorted, theta);
figure, imshow(distorted)

%% Detect Features
ptsOriginal = detectSURFFeatures(original);
ptsDistorted = detectSURFFeatures(distorted);

%% Extract Features
[featuresOriginal, validPtsOriginal] = extractFeatures(original, ptsOriginal);
[featuresDistorted, validPtsDistorted] = extractFeatures(distorted, ptsDistorted);

%% Match Features
indexPairs = matchFeatures(featuresOriginal, featuresDistorted);

%% Retrieve Locations of Corresponding Points
matchedOriginal = validPtsOriginal(indexPairs(:,1));
matchedDistorted = validPtsDistorted(indexPairs(:,2));

%% Display Matches
figure;
showMatchedFeatures(original, distorted, matchedOriginal, matchedDistorted);
title('Putatively matched points (including outliers)');

%% Estimate Transformation
[tform, inlierDistorted, inlierOriginal] = estimateGeometricTransform(...
    matchedDistorted, matchedOriginal, 'similarity');
figure;
showMatchedFeatures(original, distorted, inlierOriginal, inlierDistorted);
title('Matching points (inliers only)');
legend('ptsOriginal', 'ptsDistorted');

%% Recover Scale and Angle
Tinv = tform.invert.T;
ss = Tinv(2,1);
sc = Tinv(1,1);
scaleRecovered = sqrt(ss*ss + sc*sc)
thetaRecovered = atan2(ss, sc)*180/pi

%% Recovery
outputView = imref2d(size(original));
recovered = imwarp(distorted, tform, 'OutputView', outputView);
figure, imshowpair(original, recovered, 'montage')
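The scale-and-angle recovery at the end of the script relies on the structure of a similarity transform: its linear part is scale times a rotation matrix, so the scale is the norm of the first column and the angle is its atan2. A small Python check of this arithmetic, using the tutorial's values scale = 0.7 and theta = 45 on a constructed matrix (an illustration only, not the feature-matching pipeline itself):

```python
import numpy as np

scale, theta_deg = 0.7, 45.0
t = np.deg2rad(theta_deg)

# The linear part of a similarity transform is scale * rotation.
T = scale * np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])

# Same recovery as the MATLAB script: ss = T(2,1), sc = T(1,1).
ss, sc = T[1, 0], T[0, 0]
scale_recovered = np.hypot(ss, sc)                  # sqrt(ss^2 + sc^2)
theta_recovered = np.degrees(np.arctan2(ss, sc))    # atan2(ss, sc) in degrees
print(scale_recovered, theta_recovered)  # ~0.7 and ~45.0
```

In the MATLAB script the same computation is applied to Tinv, the inverse transform estimated from noisy feature matches, so the recovered values are close to, but not exactly, 0.7 and 45.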
Figure. Matching points (inliers)  Figure. Matching points (outliers)