Saturday 30 April 2016

OpenCV C++ Code for Split and Merge

This tutorial gives a detailed look at the split and merge functions of OpenCV, which let us split a color image into its respective RGB channels and recombine them.

Here we want to split a color image into its three channels: "Red", "Green" and "Blue".

Splitting a color image into its respective RGB channels gives us an idea of how much of each color component is present in the original image.

OpenCV provides a built-in function called “split()” for this purpose.


Syntax:
C++: void split(const Mat& src, Mat* mvbegin)

Parameters:
src – input multi-channel array.
mv – output array or vector of arrays.

In the first variant of the function the number of arrays must match src.channels();
the arrays themselves are reallocated, if needed.



The function “merge()” does just the opposite of split: it creates one multi-channel array out of several single-channel ones.


Syntax: C++: void merge(const Mat* mv, size_t count, OutputArray dst)

Parameters:
mv – input array or vector of matrices to be merged; all the matrices in mv must have the same size and the same depth.
count – number of input matrices when mv is a plain C array; it must be greater than zero.
dst – output array of the same size and the same depth as mv[0]. The number of channels will be the total number of channels in the matrix array. The function merges several arrays to make a single multi-channel array.


Here is the code below:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
 
using namespace cv;
using namespace std;
 
int main()
{
 
    Mat image;
    image = imread("C:\\Users\\arjun\\Desktop\\rgbimage.png", CV_LOAD_IMAGE_COLOR);   // Read the file
 
    if(! image.data )                              // Check for invalid input
    {
        cout <<  "Could not open or find the image" << std::endl ;
        return -1;
    }
 
    
 namedWindow( "Original Image", CV_WINDOW_AUTOSIZE );
 imshow( "Original Image", image );
 
    Mat rgbchannel[3];
    // The actual splitting.
    split(image, rgbchannel);
 
 namedWindow("Blue",CV_WINDOW_AUTOSIZE);
 imshow("Blue", rgbchannel[0]);   // channel 0 is Blue (OpenCV uses BGR order)
 
 namedWindow("Green",CV_WINDOW_AUTOSIZE);
 imshow("Green", rgbchannel[1]);
 
 namedWindow("Red",CV_WINDOW_AUTOSIZE);
 imshow("Red", rgbchannel[2]);
 
    waitKey(0);//Wait for a keystroke in the window
    return 0;
}



Input:
Output:





Note:
You might have observed that we obtain grayscale images after splitting the color image into its Red, Green and Blue channels.
Reason:
The split function splits the multi-channel image into single-channel arrays, each containing the pixel values of one channel of the original image.
Since these are single-channel images, OpenCV's imshow function treats them as grayscale.
To display a channel in colour, we need to create a three-channel image.

The OpenCV C++ code is given below:-
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
 
using namespace std;
using namespace cv;
 
int main()
{
    Mat image=imread("C:\\Users\\arjun\\Desktop\\aaa.png",1);

    if(!image.data)                              // Check for invalid input
    {
        cout << "Could not open or find the image" << endl;
        return -1;
    }

    namedWindow("Original Image",1);
 imshow("Original Image",image);
 
    // Split the image into different channels
    vector<Mat> rgbChannels(3);
    split(image, rgbChannels);
 
    // Show individual channels
    Mat g, fin_img;
    g = Mat::zeros(Size(image.cols, image.rows), CV_8UC1);
      
    // Showing Red Channel
    // G and B channels are kept as zero matrix for visual perception
    {
    vector<Mat> channels;
    channels.push_back(g);
    channels.push_back(g);
    channels.push_back(rgbChannels[2]);
 
    /// Merge the three channels
    merge(channels, fin_img);
    namedWindow("Red",1);
 imshow("Red", fin_img);
    }
 
    // Showing Green Channel
    {
    vector<Mat> channels;
    channels.push_back(g);
    channels.push_back(rgbChannels[1]);
    channels.push_back(g);    
    merge(channels, fin_img);
    namedWindow("Green",1);
 imshow("Green", fin_img);
    }
 
    // Showing Blue Channel
    {
    vector<Mat> channels;
    channels.push_back(rgbChannels[0]);
    channels.push_back(g);
    channels.push_back(g);
    merge(channels, fin_img);
    namedWindow("Blue",1);
    imshow("Blue", fin_img);
    }
 
    waitKey(0);
    return 0;
 
}


Input:

Output:





Here, after splitting the image by split(image, rgbChannels),
we get three channels of which
  • rgbChannels[0] corresponds to the “Blue” channel.
  • rgbChannels[1] corresponds to the “Green” channel.
  • rgbChannels[2] corresponds to the “Red” channel.
Since the split function splits the multi-channel image into single channels, displaying these channels directly would give the grayscale images of the R, G and B channels.
Thus we need to create a matrix of zeros and push that into the other channels.

Mat::zeros(Size(image.cols, image.rows), CV_8UC1) :
Creates a single-channel matrix of zeros whose dimensions are the same as those of the original image.

Then we have declared channels as a vector; push_back always appends a new element at the end of the vector.
Here the new element is an 8-bit single-channel matrix of zeros.


The BGR color ordering is the default order of OpenCV.
Refer:
http://opencv-code.blogspot.in/2016/12/how-to-access-extract-pixel-value-particular-location-image.html

Thus, for displaying the red channel, we need to set the other two channels to zeros and create a 3-channel image with the merge function to get the colored image.
The same holds for the other channels of the image.

Monday 25 April 2016

OpenCV ImageBlender using Addweighted function


The task of blending or mixing two images linearly can be achieved by the addWeighted function provided by OpenCV.



The syntax of the OpenCV addWeighted function is:
C++: void addWeighted(InputArray src1, double alpha, InputArray src2, double beta, double gamma, OutputArray dst, int dtype=-1)

Parameters:

src1 – first input array.
alpha – weight of the first array elements.
src2 – second input array of the same size and channel number as src1.
beta – weight of the second array elements.
gamma – scalar added to each sum.
dst – output array that has the same size and number of channels as the input arrays.
dtype – optional depth of the output array; when both input arrays have the same depth, dtype can be set to -1, which will be equivalent to src1.depth().


Linear Blending means adding two images pixel by pixel.
Thus we can use the function
c(x)=(1-α)*a(x)+α*b(x)
where a(x) and b(x) are the two source images.
c(x) is the resultant blended image.

addWeighted( src1, alpha, src2, beta, 0.0, dst);
Thus the addWeighted function performs dst = α*src1 + β*src2 + γ.
Here γ=0 and β=1-α.



Why do we choose β=1-α?
Since we are dealing with 8-bit images, pixel values can range from 0 to 255, so when adding two images the pixel values of the resultant image should also lie between 0 and 255. Hence if we multiply a pixel of one image by α, the corresponding pixel of the other image should be weighted by 1-α; the weights then sum to α+(1-α)=1, and the blended pixel value stays within the range 0-255.

Example:-
Consider that pixel value of src1 at particular co-ordinate is 230.
 And that of src2 is 215.
Now, we need to blend these two images linearly, for that we need to blend the pixel values of the two images.
Thus if we choose α=0.5 and β=0.7 .
The pixel value at that particular co-ordinate would be
         c(x)=α*a(x)+β*b(x) =0.5*230+0.7*215 =265.5

Thus the β value needs to be less than or equal to 1-α. Here we have chosen it equal to 1-α to be on the safe side, but it can be less than 1-α too.


 The code for it goes as below:
// OpenCV Image Blending Tutorial using addWeighted function
#include <opencv2/core/core.hpp> 
#include <opencv2/highgui/highgui.hpp> 
#include <iostream>
 
using namespace cv;
using namespace std;
 
int main()
{
 double alpha = 0.5; 
 double beta; 
 double input;
 
 Mat src1, src2, dst,src3;
 /// Read image ( same size, same type )
 src1 = imread("C:\\Users\\arjun\\Desktop\\green.jpg");
 src2 = imread("C:\\Users\\arjun\\Desktop\\blue.jpg");
 
 if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
 if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
  
 ///Comparing whether the two images are of same size or not
 int width1 , width2 , height1 , height2;
 width1 =src1.cols; 
 height1=src1.rows; 
 width2 =src2.cols; 
 height2=src2.rows; 
  
 if (width1!=width2 || height1!=height2)   // sizes differ in either dimension
 {
  printf("Error:Images must be of the same size \n");
  return -1;
 }
 /// Ask the user enter alpha
 std::cout<<" Simple Linear Blender "<<std::endl;
 std::cout<<"-----------------------"<<std::endl;
 std::cout<<"* Enter alpha [0-1]: ";
 std::cin>>input;
 
 /// We use the alpha provided by the user if it is between 0 and 1
 if( input >= 0.0 && input <= 1.0 )
   { 
    alpha = input;
   }
 
 beta = ( 1.0 - alpha );
 addWeighted( src1, alpha, src2, beta, 0.0, dst);
 
 /// Create Windows
 namedWindow("Linear Blend", 1);
 imshow( "Linear Blend", dst );
 
 namedWindow("Original Image1", 1);
 imshow( "Original Image1", src1 );
 
 namedWindow("Original Image2", 1);
 imshow( "Original Image2", src2 );
 waitKey(0);
 return 0;
}



Input:
Blue


Green
Output:
Cyan


The code for blending two images linearly using a trackbar is shown below:
// OpenCV Image Blending Tutorial using addWeighted function and trackbar
#include <opencv2/core/core.hpp> 
#include <opencv2/highgui/highgui.hpp> 
#include <iostream>
 
using namespace cv;
using namespace std;
 
double alpha; 
double beta;
const int blend_slider_max = 100;
int alpha_slider;
Mat src1, src2, dst,src3;
 
void blend_trackbar( int , void* )
{
 alpha = (double) alpha_slider/blend_slider_max;
 beta = (double)( 1.0 - alpha );
    addWeighted( src1, alpha, src2, beta, 0.0, dst);
    imshow( "Linear Blend", dst );
}
 
int main()
{
 // Read image ( same size, same type )
 src1 = imread("C:\\Users\\arjun\\Desktop\\opencv_image1.jpg");
 src2 = imread("C:\\Users\\arjun\\Desktop\\opencv_image2.jpg");
 
 if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
 if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
  
 ///Comparing whether the two images are of same size or not
 int width1 , width2 , height1 , height2;
 width1 =src1.cols; 
 height1=src1.rows; 
 width2 =src2.cols; 
 height2=src2.rows; 
  
 if (width1!=width2 || height1!=height2)   // sizes differ in either dimension
 {
  printf("Error:Images must be of the same size \n");
  return -1;
 }
 
 // Create Windows 
 namedWindow("Linear Blend",CV_WINDOW_AUTOSIZE); 
 createTrackbar( "Blending", "Linear Blend", &alpha_slider, blend_slider_max, blend_trackbar );
 blend_trackbar( alpha_slider, 0 );
 
 namedWindow("Original Image1", 1);
 imshow( "Original Image1", src1 );
 
 namedWindow("Original Image2", 1);
 imshow( "Original Image2", src2 );
 
 waitKey(0);
 return 0;
}



Input Image1:-
Input Image2:-
Output:-

Wednesday 20 April 2016

OpenCV Image Blending Tutorial

Images are basically matrices of pixel values. Thus image blending, or image merging in layman's terms, simply means adding the pixel values at the corresponding co-ordinates of two images.


Note:-Images should be of the same size.


For e.g. if the pixel values of two grayscale images at a particular location are 120 and 35 respectively, then after blending the pixel value at that co-ordinate would become 155.


Note:
A grayscale image is one where each pixel is stored as a single byte (8 bits). Thus the pixel values can range from 0 to 255, where 0 denotes black and 255 denotes white.

So, what would happen if the pixel values of the two grayscale images exceed 255 when merged?
For e.g.
Let the pixel value at a particular co-ordinate be 250 (which would appear nearly white) and that of the other image at the same co-ordinate be 120 (which would appear dark). After merging, the pixel at that co-ordinate would appear white, because its value saturates at 255 (since 250+120 > 255).

Similarly, the lowest possible pixel value is 0, so even if we multiply a pixel value by -1, the pixel value of the modified image would be 0.
i.e. if the pixel value at a particular co-ordinate is 255 (white) and we multiply it by -1, then the pixel at that point of the image would become 0 (black).

We can check the above concept by accessing the pixel value of the merged image at a particular point.
Refer:
http://opencv-code.blogspot.in/2016/12/how-to-access-extract-pixel-value-particular-location-image.html


The code for merging/blending the two images are as shown below:
#include <opencv2/core/core.hpp> 
#include <opencv2/highgui/highgui.hpp> 
#include <iostream>
 
 
using namespace cv;
using namespace std;
 
int main()
{
  
 Mat src1, src2, src3;
 /// Read image ( same size, same type )
 src1 = imread("C:\\Users\\arjun\\Desktop\\red.jpg");
 src2 = imread("C:\\Users\\arjun\\Desktop\\green.jpg");
  
 ///Comparing whether the two images are of same size or not
 int width1 , width2 , height1 , height2;
 width1 =src1.cols; 
 height1=src1.rows; 
 width2 =src2.cols; 
 height2=src2.rows; 
  
 if (width1!=width2 || height1!=height2)   // sizes differ in either dimension
 {
  printf("Error:Images must be of the same size \n");
  return -1;
 }
  
 //Check that the images loaded before using them
 if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
 if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
  
 //Merging two images
 src3=src1 + src2;
   
 /// Create Windows
 namedWindow("First Image", 1);
 imshow( "First Image", src1 );
 
 namedWindow("Second Image", 1);
 imshow( "Second Image", src2 );
 
 namedWindow("Blend1 Image", 1);
 imshow( "Blend1 Image", src3 );
 
 waitKey(0);
 return 0;
}


Input Images:
Red

Green

Output:


Multiplying Pixel Value By -1:

What would happen if we multiply the pixels by -1?
Since the minimum pixel value is 0, the whole image would appear black.
Note: Here again we have assumed that the images are of the same size; hence we have not included the code for comparing the sizes of the images to be merged.
#include <opencv2/core/core.hpp> 
#include <opencv2/highgui/highgui.hpp> 
#include <iostream>
 
 
using namespace cv;
using namespace std;
 
int main()
{
  
 Mat src1, src2;
 /// Read image ( same size, same type )
 src1 = imread("C:\\Users\\arjun\\Desktop\\red.jpg");
  
 //Merging two images
 src2=src1 * (-1);
  
 if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
 if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
  
 //src2 = imwrite( "C:\\Users\\arjun\\Desktop\\new1.jpg",src2);
 //src2  = imread("C:\\Users\\arjun\\Desktop\\new1.jpg");
 
 /// Create Windows
 namedWindow("First Image", 1);
 imshow( "First Image", src1 );
 
 namedWindow("Second Image", 1);
 imshow( "Second Image", src2 );
 
 waitKey(0);
 return 0;
}

Output:


Don't you think that by including a for loop we can achieve a smooth transition effect between the two images?
#include <opencv2/core/core.hpp> 
#include <opencv2/highgui/highgui.hpp> 
#include <iostream>
 
 
using namespace cv;
using namespace std;
 
int main()
{
  
 Mat src1, src2,src3;
 /// Read image ( same size, same type )
 src1 = imread("C:\\Users\\arjun\\Desktop\\red.jpg");
 src2 = imread("C:\\Users\\arjun\\Desktop\\green.jpg");
 
 //Checking whether images are loaded or not
 if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
 if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
 
 //Merging two images
 for (double i=0; i<=255;i=i+5)
 {
 src3=(i*src1)/255 + ((255-i)*src2)/255;
 
 /// Create Windows
 namedWindow("Blend Image", 1);
 imshow( "Blend Image", src3 );
  
 waitKey(250);
 }
 return 0;
}

Friday 15 April 2016

Digital Negative of an Image in OpenCV

Digital negative, as the name suggests, means inverting the pixel values of an image so that bright pixels appear dark and dark pixels appear bright.

Thus the darkest pixel in the original image would be the brightest in its negative. A good example of this is an X-ray image.


Now, consider an 8-bit image.
The pixel values can range from 0 to 255.
Thus to obtain the negative we need to subtract each pixel value of the image from 255.

Hence, for a k-bit image,
the pixel values range from 0 to (2^k)-1.
Thus we would have to subtract each pixel of the image from (2^k)-1.



The below code is in opencv for digital negative of an 8-bit grayscale image:
// OpenCV Digital Negative Tutorial 
#include <opencv2/core/core.hpp> 
#include <opencv2/highgui/highgui.hpp> 
#include <iostream>
 
using namespace cv;
using namespace std;
 
int main()
{
 Mat src1,src2;
 src1 = imread("C:\\Users\\arjun\\Desktop\\image_opencv.jpg",CV_LOAD_IMAGE_GRAYSCALE);
 src2 = Mat::zeros(src1.rows,src1.cols, CV_8UC1);
 
 if( !src1.data ) { printf("Error loading src1 \n"); return -1;}
 
  
for (int i=0; i<src1.cols ; i++)
{
 for (int j=0 ; j<src1.rows ; j++)
 {
  Scalar color1 = src1.at<uchar>(Point(i, j));
  // Digital negative: invert the pixel value
  src2.at<uchar>(Point(i,j)) = 255 - color1.val[0];
 }
}
namedWindow("Digital Negative Image",CV_WINDOW_AUTOSIZE); 
imshow("Digital Negative Image", src2); 
//imwrite("C:\\Users\\arjun\\Desktop\\digitalnegative.jpg",src1);
 
namedWindow("Original Image",CV_WINDOW_AUTOSIZE); 
imshow("Original Image", src1);
 
 waitKey(0);
 return 0;
}


Input:

Output:


Applications:
It has immense applications in the field of medicine, for finding minute details of a tissue, and in the field of astronomy, for observing distant stars.


Input:

Output:

Sunday 10 April 2016

Accessing all the pixels of an Image

To access the full pixel value of an image,
we can use:
    Vec3b imagepixel = image.at<Vec3b>(x,y);
inside a for loop, changing the co-ordinates (x,y) to cover each row and column.


/*Displaying the Pixel value of the whole Image using Loops*/
 
#include <opencv2/core/core.hpp>  
#include <opencv2/highgui/highgui.hpp>  
#include <iostream> 
 
  using namespace std;  
  using namespace cv;  
 
  int main() 
  {  
    Mat image; 
    //Reading the color image 
    image = imread("C:\\Users\\arjun\\Desktop\\image003.png", CV_LOAD_IMAGE_COLOR);  
 
    //If image not found 
    if (!image.data)                                                                          
     {  
      cout << "No image data \n";  
      return -1;  
     } 
 
 
     //for loop for counting the number of rows and columns and displaying the pixel value at each point
     for (int i = 0; i < image.rows; i++) 
       { 
         for (int j = 0; j < image.cols; j++) 
           { 
            Vec3b imagepixel = image.at<Vec3b>(i, j);
            cout<<imagepixel ;   //Displaying the pixel value  of the whole image
            } 
       }

     //Display the original image
     namedWindow("Display Image");               
     imshow("Display Image", image);  

     waitKey(0);
     return 0;
  }


Input:


Output:



What will happen if we put
     cout<<image;
Does it print the whole pixel array of the image?


/*Displaying the Pixel value of the whole Image*/
#include <opencv2/core/core.hpp>  
#include <opencv2/highgui/highgui.hpp>  
#include <iostream> 
 
  using namespace std;  
  using namespace cv;  
 
  int main() 
  {  
    Mat image; 
    //Reading the color image 
    image = imread("C:\\Users\\arjun\\Desktop\\image003.png", CV_LOAD_IMAGE_COLOR);  
 
    //If image not found
    if (!image.data)                                                                          
     {  
      cout << "No image data \n";  
      return -1;  
     } 

    //Displaying the pixel value  of the whole image
    cout<<image ;

    //Display the original image              
    namedWindow("Display Image");               
    imshow("Display Image", image);  

    waitKey(0);
    return 0;
 }

Input:


Output:



Notice the difference in the output of the two pixel arrays.

Tuesday 5 April 2016

Modifying a particular pixel value of an Image

In the previous tutorials we learnt how to access a pixel value of a particular co-ordinate,
Refer :
http://opencv-code.blogspot.in/2016/12/how-to-access-extract-pixel-value-particular-location-image.html

This OpenCV C++ tutorial is about accessing and changing the pixel value at a particular co-ordinate of an image.
Here is the code below:
/*Modifying the pixel value at a particular co-ordinate of an Image*/
 
#include <opencv2/core/core.hpp>  
#include <opencv2/highgui/highgui.hpp>  
#include <iostream> 
 
  using namespace std;  
  using namespace cv;  
 
int main() 
  {  
    Mat image1,image2; 
    //Reading the color image 
    image1 = imread("C:\\Users\\arjun\\Desktop\\image003.png", CV_LOAD_IMAGE_COLOR);  
 
    //If image1 not found 
    if (!image1.data)                                                                          
    {  
     cout << "No image data \n";  
     return -1;  
    } 
 
    //Display the original image
    namedWindow("Original Image");               
    imshow("Original Image", image1);
 
    //Changing the pixel value at just a particular point(100,200)
     Vec3b color = image1.at<Vec3b>(Point(100,200));
      color.val[0] = 100;
      color.val[1] = 0;
      color.val[2] = 0;
    image1.at<Vec3b>(Point(100,200)) = color;
 
    //Save the modified image
    imwrite("C:\\Users\\arjun\\Desktop\\mod_image.png",image1);
    //Reading the modified image
    image2 = imread("C:\\Users\\arjun\\Desktop\\mod_image.png", CV_LOAD_IMAGE_COLOR);  
 
   //If image2 not found 
     if (!image2.data)                                                                          
       {  
        cout << "No image data \n";  
        return -1;  
       } 
 
    //Display the modified image
    namedWindow("Modified Image");               
    imshow("Modified Image", image2); 
    waitKey(0);
    return 0;
   }

Input:


Modified Image: