Saturday, 2 March 2013

Plot Multi-Channel Histogram in QWT - Part 1

Here I will show you how to plot a multi-channel/colour histogram using QWT. I generate the histograms for the different channels in an RGB image using OpenCV, then I plot the histogram of each channel overlaid on the others on the same axes. This method is suitable for visualization but not very suitable for peak/valley histogram analysis, as the overlaid colours merge, making it difficult to see which peaks/valleys belong to which channel.
The set-up for this tutorial is similar to the one in my previous tutorial, only this time I also include the OpenCV library. Again, we can just dive straight into the code.

Code

#include <QApplication>
#include <qwt_plot.h>
#include <qwt_plot_curve.h>
#include <qwt_plot_grid.h>
#include <qwt_symbol.h>
#include <qwt_legend.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

void getPoints(MatND& hist, int* histSize, QPolygonF& points)
{
    for( int h = 0; h < histSize[0]; ++h) {
      float bin_value = hist.at<float>(h);
      points << QPointF((float)h, bin_value);
    }
}


class Curve: public QwtPlotCurve
{
public:
    Curve( const QString &title ):
        QwtPlotCurve( title )
    {
        setRenderHint( QwtPlotItem::RenderAntialiased );
    }

    void setColour(const QColor &color, int penSize)
    {
        QColor c = color;
        c.setAlpha( 150 );
        setPen(c, penSize);
        setBrush( c );
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    if (argc < 2)
        return 1;

    //Read input image
    Mat img = cv::imread(argv[1]);

    //Check that the image was loaded and has 3 colour channels
    if (!img.data || img.channels() != 3)
       return 1;

    QwtPlot plot; //Create plot widget
    plot.setTitle( "Plot Demo" ); //Name the plot
    plot.setCanvasBackground(Qt::white ); //Set the Background colour
    plot.setAxisScale(QwtPlot::xBottom,0,255); //Scale the x-axis
    plot.insertLegend(new QwtLegend()); //Insert a legend

    int histSize[] = {256}; // number of bins
    float hranges[] = {0.0, 256.0}; // pixel value range (the upper bound is exclusive)
    const float* ranges[] = {hranges};
    int channels[] = {0}; // only 1 channel used

    std::vector<cv::Mat> rgbChannels(3);
    split(img, rgbChannels);

    MatND hist;
    QPolygonF points;

    calcHist(&rgbChannels[2], 1, channels, cv::Mat(), hist, 1, histSize,ranges);
    Curve *curve = new Curve("Red Channel");
    curve->setColour(Qt::red , 2);//Set colour and thickness for drawn curve.
    /*Insert the points that should be plotted on the graph in a
    Vector of QPoints or a QPolygonF */

    getPoints(hist, histSize, points);
    curve->setZ( curve->z() - 1 );
    curve->setSamples(points); //pass points to be drawn on the curve
    curve->attach( &plot ); // Attach curve to the plot


    calcHist(&rgbChannels[1], 1, channels, cv::Mat(), hist, 1, histSize,ranges);
    curve = new Curve("Green Channel");
    curve->setColour(Qt::green , 2);
    points.clear();
    getPoints(hist, histSize, points);
    curve->setZ( curve->z() - 2 );
    curve->setSamples( points );
    curve->attach( &plot );


    calcHist(&rgbChannels[0], 1, channels, cv::Mat(), hist, 1, histSize,ranges);
    curve = new Curve("Blue Channel");
    curve->setColour(Qt::blue, 2);
    points.clear();
    getPoints(hist, histSize, points);
    curve->setZ( curve->z() - 3 );
    curve->setSamples( points );
    curve->attach( &plot );

    plot.resize( 600, 400 ); //Resize the plot
    plot.show(); //Show plot
    return a.exec();

}

Result

Figure 1: A) The input image B) The resulting plot 
Note: It is possible to remove the coloured area under the curves and reduce the plot to a simple line plot. This makes each channel's peaks and valleys more visible. To do this, comment out the setBrush(c) call in Curve::setColour(). The result is shown below.
Figure 2: The result with the setBrush() property turned off.
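For reference, this is the only change needed in the Curve class (a minimal sketch of the modified setColour() method):

    void setColour(const QColor &color, int penSize)
    {
        QColor c = color;
        c.setAlpha( 150 );
        setPen(c, penSize);
        //setBrush( c ); // leave the brush unset to get a plain line plot
    }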

Conclusion

That concludes this part of the tutorial. It is also possible to plot each histogram on its own axes (in a matrix layout), similar to what the subplot() function does in Matlab. That will be covered in the next tutorial. Happy Coding!

Wednesday, 13 February 2013

Entropy-based histogram thresholding

I read about entropy thresholding [1] and wanted to give it a try. This technique was rather simple to implement in Matlab compared to other, more complex methods, and it performed reasonably well (see results).
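For reference, the criterion from [1], as I understand it (my own summary, not quoted from the paper), is to pick the threshold T that maximises the total entropy of the two classes created by the split:

H_W(T) + H_B(T)

where p_i is the normalized histogram count for grey level i, P_T = p_0 + p_1 + ... + p_T,
H_W(T) = -sum over i = 0..T of (p_i / P_T) * ln(p_i / P_T), and
H_B(T) = -sum over i = T+1..255 of (p_i / (1 - P_T)) * ln(p_i / (1 - P_T)).
The code below approximates this by adding a small constant (0.001) to avoid log of zero and division by zero, and by taking the maximum of the absolute value of the summed (negative) terms.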

Code

function [A, T] = EntropyThresholding(img)

[h, ~] = imhist(img);
h = h/sum(h); % Normalize the histogram so that it sums to 1.
entropies = zeros(256, 1); % Initialize array for storing entropies.
for t = 1:255
    White = h(1:t);
    Black = h(t+1:256);
    % Add 0.001 to prevent division by zero (NaN) and log of zero (-Inf).
    HB = sum((Black/(0.001+sum(Black))).*log((Black+0.001)/(0.001+sum(Black))));
    HW = sum((White/(0.001+sum(White))).*log((White+0.001)/(0.001+sum(White))));
    entropies(t) = HB + HW;
end
[~, T] = max(abs(entropies)); % The maximal entropy determines the threshold.
T = T - 1; % Convert the 1-based bin index to a grey level in [0, 255].
A = img > T;

Results

References

1. J.N. Kapur, P.K. Sahoo and A.K.C. Wong, "A New Method for Gray-Level Picture Thresholding Using the Entropy of the Histogram", Computer Vision, Graphics, and Image Processing (CVGIP), vol. 29, pp. 273-285, 1985.

Saturday, 2 February 2013

Draw an OpenCV histogram using QWT

The QWT library gives us the ability to create graphs, scale axes, insert legends and do a whole lot of other graphing tasks in a very easy manner. I wanted to show how easy it is to use, so in this tutorial I plot an OpenCV histogram using QWT.
The set-up for this tutorial is similar to the one in my previous tutorial, only this time I also include the OpenCV library (a sample project file is shown below), so we can dive straight into the code.
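For reference, the .pro file for this project simply combines the QWT lines from the previous tutorial with the OpenCV lines from my earlier OpenCV set-up post; the paths below are from my machine, so adjust them to your own installation:

CONFIG += qwt
INCLUDEPATH += "/usr/local/qwt-6.1.0-rc3/include"
LIBS += -L/usr/local/qwt-6.1.0-rc3/lib -lqwt
#Change this to your OpenCV include directory.
INCLUDEPATH += "/usr/local/include/opencv2"
LIBS += `pkg-config --cflags --libs opencv`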

Code

#include <QApplication>
#include <qwt_plot.h>
#include <qwt_plot_curve.h>
#include <qwt_plot_grid.h>
#include <qwt_symbol.h>
#include <qwt_legend.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    if (argc < 2)
        return 1;
    
    //Read input image
    cv::Mat img = cv::imread(argv[1]);
    
    //Convert to grayscale
    if (img.data && img.channels() == 3)
        cv::cvtColor(img, img, CV_BGR2GRAY);
    else
        return 1;

    int histSize[1] = {256}; // number of bins
    float hranges[2] = {0.0, 256.0}; // pixel value range (the upper bound is exclusive)
    const float* ranges[1] = {hranges};
    int channels[1] = {0}; // only 1 channel used

    cv::MatND hist;
    // Compute histogram
    cv::calcHist(&img, 1, channels, cv::Mat(), hist, 1, histSize,ranges);

    double minVal, maxVal;
    cv::minMaxLoc(hist, &minVal, &maxVal);//Locate max and min values
   
    QwtPlot plot; //Create plot widget
    plot.setTitle( "Plot Demo" ); //Name the plot
    plot.setCanvasBackground( Qt::black ); //Set the Background colour
    plot.setAxisScale( QwtPlot::yLeft, minVal, maxVal ); //Scale the y-axis
    plot.setAxisScale(QwtPlot::xBottom,0,255); //Scale the x-axis
    plot.insertLegend(new QwtLegend()); //Insert a legend

    QwtPlotCurve *curve = new QwtPlotCurve(); // Create a curve
    curve->setTitle("Count"); //Name the curve
    curve->setPen( Qt::white, 2);//Set colour and thickness for drawing the curve 
    //Use Antialiasing to improve plot render quality
    curve->setRenderHint( QwtPlotItem::RenderAntialiased, true );
    /*Insert the points that should be plotted on the graph in a 
    Vector of QPoints or a QPolygonF */
    QPolygonF points;
    for( int h = 0; h < histSize[0]; ++h) {
        float bin_value = hist.at<float>(h);
        points << QPointF((float)h, bin_value);
    }

    curve->setSamples( points ); //pass points to be drawn on the curve
    curve->attach( &plot ); // Attach curve to the plot 
    plot.resize( 600, 400 ); //Resize the plot
    plot.show(); //Show plot

    return a.exec();

}
The code is well commented and therefore fairly self-explanatory, so no extra explanation is needed. The result can be seen in the image below.

Wednesday, 30 January 2013

Getting started with QWT

I have been looking for a graphing API for QT and eventually found one that suits my needs called QWT (Qt Widgets for Technical Applications). I ran into some issues while setting it up but finally got it running, so here is the resulting tutorial on how to set up QWT in QT creator (on Ubuntu 12.04).
Now we can begin. The first step is to download the source files from here. The version used for this tutorial is QWT 6.1 (release candidate 3). In the terminal, navigate to the location of the downloaded .tar.bz2 file, then type the following commands:
$ tar -xjvf qwt-6.1-rc3.tar.bz2
$ cd qwt-6.1-rc3
$ qmake qwt.pro
$ make
$ sudo make install
Launch QT creator and then go to File > New File or Project. Select a QT Gui Application and give the project a name, e.g. “FirstQwtProject”. You could leave the other settings in the wizard as they are or change them to suit your own projects. I have kept 'FirstQwtProject.pro' (this will vary depending on the project name you have chosen) and 'main.cpp' but deleted the remaining files (the MainWindow class) as they will not be needed for this simple introduction. To run QWT programs in QT creator we need to let the IDE know where to find the QWT libraries.
So open the .pro file associated with the project and append the following lines at the end of the file. This should be included in every QWT project you create, but remember to change the include path "/usr/local/qwt-6.1.0-rc3/.." to the location of the QWT install directory on your PC.
CONFIG += qwt
INCLUDEPATH +="/usr/local/qwt-6.1.0-rc3/include"
LIBS += -L/usr/local/qwt-6.1.0-rc3/lib -lqwt
Now go to the 'main.cpp' file and type in the following lines of code and press Ctrl+R to run it.
#include <QApplication>
#include <qwt_plot.h>
#include <qwt_plot_curve.h>
#include <qwt_plot_grid.h>
#include <qwt_symbol.h>
#include <qwt_legend.h>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    QwtPlot plot;
    plot.setTitle( "Plot Demo" );
    plot.setCanvasBackground( Qt::white );
    plot.setAxisScale( QwtPlot::yLeft, 0.0, 10.0);
    plot.insertLegend( new QwtLegend() );

    QwtPlotGrid *grid = new QwtPlotGrid();
    grid->attach( &plot );

    QwtPlotCurve *curve = new QwtPlotCurve();
    curve->setTitle( "Pixel Count" );
    curve->setPen( Qt::blue, 4 );
    curve->setRenderHint( QwtPlotItem::RenderAntialiased, true );

    QwtSymbol *symbol = new QwtSymbol( QwtSymbol::Ellipse,
        QBrush( Qt::yellow ), QPen( Qt::red, 2 ), QSize( 8, 8 ) );
    curve->setSymbol( symbol );

    QPolygonF points;
    points << QPointF( 0.0, 4.4 ) << QPointF( 1.0, 3.0 )
        << QPointF( 2.0, 4.5 ) << QPointF( 3.0, 6.8 )
        << QPointF( 4.0, 7.9 ) << QPointF( 5.0, 7.1 );
    curve->setSamples( points );

    curve->attach( &plot );

    plot.resize( 600, 400 );
    plot.show();

    return a.exec();
}
This is what you should see.

Conclusion

The QWT library is one of many options for plotting graphs in QT. In the next tutorial, I will show you how to plot histograms calculated with OpenCV using QWT.

Monday, 28 January 2013

Find Holes in a binary image

Since OpenCV does not yet provide dedicated functions for blob analysis, I will show you a simple method of determining the number of holes in an image. This method exploits the ability of OpenCV's findContours() function to extract and distinguish between outer contours (shape boundaries) and inner contours (hole boundaries). For this tutorial, I assume the input image is a binary image. We will be using the 2 images below.


Figure 1: A) a complex shape with multiple levels of contours. B) a simple shape with 2 holes 

The results produced by the application are the following images:

Figure 2:  Resulting images showing location of holes in A)  Figure 1A B) Figure 1B.

Code

Here is the full code:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;
using namespace std;

int main(int argc, char *argv[])
{
   if( argc != 2)
     {
       cout << " No image file is specified \n ";
       return -1;
     }

    Mat src = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE); // findContours() needs a single-channel 8-bit image

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;

    findContours( src.clone(), contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE );

    Mat singleLevelHoles = Mat::zeros(src.size(), src.type());
    Mat multipleLevelHoles = Mat::zeros(src.size(), src.type());


    for(vector<Vec4i>::size_type idx=0; idx<hierarchy.size(); ++idx)
    {
        if(hierarchy[idx][3] != -1)
           drawContours(singleLevelHoles, contours, idx, Scalar::all(255), CV_FILLED, 8, hierarchy);
    }

    bitwise_not(src, src);
    bitwise_and(src, singleLevelHoles, multipleLevelHoles);

    //Inverse source image.
    imwrite("/home/stephen/Pictures/Result0.jpg", src);

    //Holes before the bitwise AND operation.
    imwrite("/home/stephen/Pictures/Result1.jpg", singleLevelHoles);

    //Holes after the bitwise AND Operation.
    imwrite("/home/stephen/Pictures/Result2.jpg", multipleLevelHoles);

    return 0;
}
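The listing above only visualises the holes, while the introduction also talks about determining their number. As a small addition of my own (not part of the original listing), the same hierarchy can be walked to count them, treating every contour that sits at an odd nesting depth as a hole boundary:

    // Count the holes: a contour is a hole boundary if it sits at an odd
    // nesting depth (outer boundary = depth 0, hole = depth 1, island
    // inside a hole = depth 2, and so on).
    int holeCount = 0;
    for(vector<Vec4i>::size_type idx = 0; idx < hierarchy.size(); ++idx)
    {
        int depth = 0;
        for(int parent = hierarchy[idx][3]; parent != -1; parent = hierarchy[parent][3])
            ++depth;
        if(depth % 2 == 1)
            ++holeCount;
    }
    cout << "Number of holes: " << holeCount << endl;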

Explanation

  1. The #include directives pull in the required OpenCV and standard C++ headers.
  2. The command-line input is checked; the image file path is passed in as a command-line argument and the image is read into memory as a single-channel (grayscale) image.
  3. We extract the contours, along with the hierarchy information about the contours, using the findContours() function in OpenCV. The contour retrieval mode is set to CV_RETR_TREE, which retrieves all the contours and reconstructs the full hierarchy of nested contours. In other words, this mode causes the function to fill a vector containing an entry for each contour; each entry is an array of 4 values. We are concerned with the last value of this array, which tells us the parent of the contour. This value is set to -1 for outer (top-level) contours; otherwise it is set to the index of the parent contour in the hierarchy vector. For more information on contour hierarchy see here. The contour approximation method is set to CV_CHAIN_APPROX_NONE, which basically tells the function to store all contour points.
  4. After extracting the contours, we use the hierarchy information to draw all the holes using the drawContours() function; the thickness is set to CV_FILLED so that the drawn contours are filled with white pixels (255). This is perfect when you do not have multiple levels of contours, as in Figure 1B, but for Figure 1A the result would be as shown below.
    Figure 3: Resulting image showing holes in Figure 1A before applying the logical operations.

  5. The result above could be undesirable for some operations, so the bitwise NOT and AND logical operations are used to keep only the pixels that belong to the inner contours (holes) of the image.

Conclusion

This is just a simple way to extract holes from an image without having to bother about extra libraries like cvBLobLib. Hope it was helpful. Happy Coding!

Monday, 10 December 2012

Working with Video Using OpenCV and QT

Video processing is a very important task in computer vision applications. OpenCV comes with its own GUI library (Highgui), but this library has no support for buttons and some other GUI components. It can therefore be preferable to use a QT GUI application, but displaying a video in a QT GUI is not as intuitive as it is with Highgui. This tutorial will show you how to display video in a QT GUI without the GUI becoming unresponsive. We will be creating a simple video player, as shown below, and will be programming in C++.

STEP 1: Create a new QT GUI Project

If you don't know how to do this, check out the guide here. The guide shows you how to create an OpenCV console project in Qt-creator, but this time, instead of using a Qt Console Application, create a Qt GUI Application. Once created, the following files are automatically added to the project:

main.cpp
This contains the main() function, which is the starting point of all C++ applications. It is the main() function that loads the main window of the GUI application.

mainwindow.cpp
This is the source file that contains the MainWindow class implementation.

mainwindow.h
This contains the class declaration for the MainWindow class.

mainwindow.ui
This is the UI designer file that can be used to tweak the GUI.

<projectName>.pro
This contains settings that are used for the project compilation.

Add widgets to GUI

Open the mainwindow.ui file. This file can be edited manually, but for this tutorial we will use the designer.
  • From the list of widgets on the left of the designer, drag in a label and two pushbutton widgets.
  • Change the text on the first button to “Load Video” and on the second button to “Play”, then clear the text on the label. To change the text on a widget, double click on it; the text will be highlighted, so you can edit it and press Enter when finished.
  • Change the background colour of the label to a darker colour. The best way to change the background colour of a QT widget is to use Cascading Style Sheets (CSS). Select the label, find the styleSheet property in the property window, click on the button with three dots (ellipsis), add the following line of CSS in the “Edit Style Sheet” window and save it (the same thing can also be done in code; see the sketch after this list).
    background-color: #000;
  • The GUI should now look something like this:
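If you prefer to set the colour from code rather than the designer, the equivalent call (a small sketch, assuming the label keeps its default objectName of label, placed after ui->setupUi(this) in the MainWindow constructor) would be:

    // Hedged alternative to editing the styleSheet property in the designer.
    ui->label->setStyleSheet("background-color: #000;");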

Player Class Definition

Now we add a new class to handle our video player control; we will call this the Player class. The following class definition should be added in player.h:
#ifndef PLAYER_H
#define PLAYER_H
#include <QMutex>
#include <QThread>
#include <QImage>
#include <QWaitCondition>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
class Player : public QThread
{
    Q_OBJECT
 private:
    bool stop;
    QMutex mutex;
    QWaitCondition condition;
    Mat frame;
    int frameRate;
    VideoCapture capture;
    Mat RGBframe;
    QImage img;
 signals:
 //Signal to output frame to be displayed
      void processedImage(const QImage &image);
 protected:
     void run();
     void msleep(int ms);
 public:
    //Constructor
    Player(QObject *parent = 0);
    //Destructor
    ~Player();
    //Load a video from memory
    bool loadVideo(string filename);
    //Play the video
    void Play();
    //Stop the video
    void Stop();
    //check if the player has been stopped
    bool isStopped() const;
};
#endif // PLAYER_H
The class definition is simple and straightforward. The first thing to note is that the Player class inherits from the QThread class, which allows it to run on its own thread. This is very important so that the main window remains responsive while the video is playing; without it, the video would cause the GUI to freeze until it had finished playing. The processedImage(...) signal will be used to output the video frames to the main window (we will see how this works later).

Player Class Implementation

Here is the constructor for the Player class:
Player::Player(QObject *parent)
 : QThread(parent)
{
    stop = true;
}
Here we simply initialise the value of the class member stop.
bool Player::loadVideo(string filename) {
    capture.open(filename);
    if (capture.isOpened())
    {
        frameRate = (int) capture.get(CV_CAP_PROP_FPS);
        return true;
    }
    else
        return false;
}
In the loadVideo() method, we use the VideoCapture instance to open the video and read its frame rate. As you should already know, the VideoCapture class is from the OpenCV library.
void Player::Play()
{
    if (!isRunning()) {
        if (isStopped()){
            stop = false;
        }
        start(LowPriority);
    }
}
The public method Play() simply starts the thread by calling QThread::start(), which in turn invokes run(), our override of the QThread run() method.
void Player::run()
{
    int delay = (1000/frameRate);
    while(!stop){
        if (!capture.read(frame))
        {
            stop = true;
            continue; // no frame was grabbed, so skip the conversion below
        }
        if (frame.channels()== 3){
            cv::cvtColor(frame, RGBframe, CV_BGR2RGB);
            img = QImage((const unsigned char*)(RGBframe.data),
                              RGBframe.cols,RGBframe.rows,QImage::Format_RGB888);
        }
        else
        {
            img = QImage((const unsigned char*)(frame.data),
                                 frame.cols,frame.rows,QImage::Format_Indexed8);
        }
        emit processedImage(img);
        this->msleep(delay);
    }
}
In the run() method, we use a while loop to play the video. After a frame is read, it is converted into a QImage, and the QImage is emitted to the MainWindow object using the processedImage(...) signal; at the end of each iteration we wait for a number of milliseconds (delay), calculated from the frame rate of the video. If the video frames were also being processed, it would be advisable to factor the processing time into the delay.
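For example, here is a minimal sketch of that idea (my own addition, not part of the original code), using Qt's QTime to measure how long each iteration takes and sleeping only for the remainder of the frame period; it assumes #include <QTime> has been added to player.cpp:

void Player::run()
{
    int delay = (1000/frameRate);
    QTime timer;
    while(!stop){
        timer.start(); // start timing this iteration
        if (!capture.read(frame))
        {
            stop = true;
            continue;
        }
        // ... convert the frame and emit processedImage(img) as before ...
        int elapsed = timer.elapsed(); // time spent reading/converting, in ms
        this->msleep(qMax(0, delay - elapsed)); // sleep only for what is left of the frame period
    }
}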
Player::~Player()
{
    mutex.lock();
    stop = true;
    capture.release();
    condition.wakeOne();
    mutex.unlock();
    wait();
}
void Player::Stop()
{
    stop = true;
}
void Player::msleep(int ms){
    struct timespec ts = { ms / 1000, (ms % 1000) * 1000 * 1000 };
    nanosleep(&ts, NULL);
}
bool Player::isStopped() const{
    return this->stop;
}
Here is the rest of the Player class. In the destructor, we set the stop flag, release the VideoCapture object and wait for the run() method to exit.

MainWindow Class Definition

#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
#include <QFileDialog>
#include <QMessageBox>
#include <player.h>
namespace Ui {
class MainWindow;
}
class MainWindow : public QMainWindow
{
    Q_OBJECT
    
public:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();
    
private slots:
    //Display video frame in player UI
    void updatePlayerUI(QImage img);
    //Slot for the load video push button.
    void on_pushButton_clicked();
    // Slot for the play push button.
    void on_pushButton_2_clicked();
private:
    Ui::MainWindow *ui;
    Player* myPlayer;
};
#endif // MAINWINDOW_H
Here is the class definition for the MainWindow class. We include the clicked-event slots for both buttons and an updatePlayerUI slot. We also include a myPlayer member, which is an instance of the Player class.

MainWindow class implementation

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    myPlayer = new Player();
    QObject::connect(myPlayer, SIGNAL(processedImage(QImage)),
                              this, SLOT(updatePlayerUI(QImage)));
    ui->setupUi(this);
}
Here we initialise myPlayer and connect the signal emitted from the Player class to the updatePlayerUI(...) slot, so that every time a frame is emitted it is passed to this slot.
void MainWindow::updatePlayerUI(QImage img)
{
    if (!img.isNull())
    {
        ui->label->setAlignment(Qt::AlignCenter);
        ui->label->setPixmap(QPixmap::fromImage(img).scaled(ui->label->size(),
                                           Qt::KeepAspectRatio, Qt::FastTransformation));
    }
}
The updatePlayerUI slot receives a QImage, scales it to fit the label (keeping the aspect ratio) and displays it by setting the label's pixmap.
MainWindow::~MainWindow()
{
    delete myPlayer;
    delete ui;
}

void MainWindow::on_pushButton_clicked()
{
    QString filename = QFileDialog::getOpenFileName(this,
                                          tr("Open Video"), ".",
                                          tr("Video Files (*.avi *.mpg *.mp4)"));
    if (!filename.isEmpty()){
        if (!myPlayer->loadVideo(filename.toAscii().data()))
        {    
            QMessageBox msgBox;
            msgBox.setText("The selected video could not be opened!");
            msgBox.exec();
        }
    }
}
void MainWindow::on_pushButton_2_clicked()
{
    if (myPlayer->isStopped())
    {
        myPlayer->Play();
        ui->pushButton_2->setText(tr("Stop"));
    }else
    {
        myPlayer->Stop();
        ui->pushButton_2->setText(tr("Play"));
    }
}
This is the remaining part of the MainWindow class: the destructor, the “Load Video” button (pushButton) slot and the “Play” button (pushButton_2) slot, which are all pretty straightforward.

Main() Function

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow* w = new MainWindow();
    w->setAttribute(Qt::WA_DeleteOnClose, true);

    w->show();
    
    return a.exec();
}
And finally, in the main() function we create an instance of the MainWindow class and set the delete-on-close attribute so that the window and the objects it owns are destroyed when it is closed.

Final words...

This is just a simple tutorial to help anyone get started with videos in OpenCV and QT. It should also be noted that there are other ways to handle videos in QT like the Phonon multimedia framework. Please let me know if this was helpful and ask questions (if any) in the comments. Happy Coding!
UPDATE: SEE PART 2 OF THIS TUTORIAL HERE. WE SHOW YOU HOW TO ADD A TRACK-BAR TO ALLOW THE USER TO CONTROL THE VIDEO.

Wednesday, 21 November 2012

Getting Started with QT and OpenCV 2.4.2

I recently started developing with OpenCV 2.4.2 and Qt on a Linux machine (Ubuntu 12.04), so I decided to write a few tutorials. Here, we are going to see how to set up a development environment, particularly Qt-creator, to work with OpenCV.
First, you need to install OpenCV; as this has been covered extensively elsewhere, I won't go over it here. If you have not yet installed it, have a look at this tutorial or just search Google for a tutorial that suits you.

The next step is to install Qt-creator. This can be done either via the Ubuntu Software Centre or by directly downloading the .bin installer from here.

Now we can begin. Launch QT creator and then go to File > New File or Project. Select a QT Console Application (use a QT GUI Application if your OpenCV installation is configured to use QT and not GTK) and give the project a name, e.g. “FirstQtProject”.

You could leave the other settings in the wizard as they are or change them to suit your own projects. With the default settings, two files would be created: 'FirstQtProject.pro' ( this will vary depending on the Project Name you have chosen) and 'main.cpp'. To run OpenCV programs in QT creator we need to let the IDE know where to find the OpenCV libraries.

Open the .pro file associated with the project and append the following lines at the end of the file. This should be included in every OpenCV Project you create but remember to change the include Path "/usr/local/include/opencv2" to the location of your OpenCV include directory on your PC.

#Change this to your include directory.
INCLUDEPATH += "/usr/local/include/opencv2" 
LIBS += `pkg-config --cflags --libs opencv`
If you don't have pkg-config installed, replace the last line with the following, and replace “/usr/local/lib” with the location of the OpenCV libraries on your PC.
# Confirm the location of you opencv libraries and change appropriately.
LIBS += -L/usr/local/lib \
-lopencv_core \
-lopencv_highgui \
-lopencv_imgproc \
-lopencv_flann \
-lopencv_legacy \
-lopencv_ml \
-lopencv_features2d \
-lopencv_calib3d
Now go to the 'main.cpp' file, type in the following lines of code and press Ctrl+R to run it.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
int main(void)
{
    // Load an image from the disk and store it in a cv::Mat variable.
    cv::Mat image1 = cv::imread("/home/stephen/Pictures/download.jpg");
    // Make sure the image was actually loaded before trying to display it.
    if (image1.empty())
        return -1;
    // Create an image display window called Figure1.
    cv::namedWindow("Figure1");
    // Display the image in Figure1.
    cv::imshow("Figure1", image1);
    // Wait until the user presses a key.
    cv::waitKey(0);
    
    return 0;
}
The result should be an empty console window called “qtcreator_process_stub” and a window called “Figure1” containing the loaded image.

Please let me know if this was helpful and if there is tutorial you would like me to write mention it in the comments. Happy Coding!