2 Manipulating the Pixels

In this chapter, we will cover:
  Accessing pixel values
  Scanning an image with pointers
  Scanning an image with iterators
  Writing efficient image scanning loops
  Scanning an image with neighbor access
  Performing simple image arithmetic
  Defining regions of interest




Accessing pixel values

Salt-and-pepper noise is a particular type of noise in which some pixels are replaced by a white or a black pixel. This type of noise can occur in faulty communication, when the value of some pixels is lost during transmission.
In our case, we will simply select a few pixels at random and assign them the white color.


To access a pixel element, we can use the cv::Mat at<> template method.

The cv::Mat_ template is derived from cv::Mat but does not define any extra data members, so a cv::Mat object can safely be converted to cv::Mat_ by specifying the element data type (uchar, int, unsigned int, and so on). For multi-channel elements you can use the cv::Vec<> template; a number of commonly used types are already defined:

typedef Vec<uchar, 2> Vec2b;
typedef Vec<uchar, 3> Vec3b;
typedef Vec<uchar, 4> Vec4b;

typedef Vec<short, 2> Vec2s;
typedef Vec<short, 3> Vec3s;
typedef Vec<short, 4> Vec4s;

typedef Vec<ushort, 2> Vec2w;
typedef Vec<ushort, 3> Vec3w;
typedef Vec<ushort, 4> Vec4w;

typedef Vec<int, 2> Vec2i;
typedef Vec<int, 3> Vec3i;
typedef Vec<int, 4> Vec4i;
typedef Vec<int, 6> Vec6i;
typedef Vec<int, 8> Vec8i;

typedef Vec<float, 2> Vec2f;
typedef Vec<float, 3> Vec3f;
typedef Vec<float, 4> Vec4f;
typedef Vec<float, 6> Vec6f;

typedef Vec<double, 2> Vec2d;
typedef Vec<double, 3> Vec3d;
typedef Vec<double, 4> Vec4d;
typedef Vec<double, 6> Vec6d;



-(void)simplesalt1:(cv::Mat *)image count:(int)n
{
    for (int k=0; k<n; k++)
    {
        int r=arc4random()%image->rows;
        int c=arc4random()%image->cols;

        if (image->channels()==1) // gray-level image
        {
            image->at<uchar>(r,c)=255;
        }
        else
        {
            // use the cv::Mat at<> template method to access each pixel
            image->at<cv::Vec3b>(r,c)[0]=255;
            image->at<cv::Vec3b>(r,c)[1]=255;
            image->at<cv::Vec3b>(r,c)[2]=255;
        }
    }
}
// Use the Mat_ template instead:
-(void)simpleSalt2:(cv::Mat *)image count:(int)n
{
    // note: you cannot use CV_8U as the template argument; it is just a macro, not an actual type

    int row=image->rows;
    int col=image->cols;

    if (image->channels()==1) {
        cv::Mat_<uchar> img=*image; // this only creates a new header; the data is shared
        for (int i=0; i<n; i++)
        {
            int r=arc4random()%row;
            int c=arc4random()%col;
            img(r,c)=255;
        }
    }
    else
    {
        // since I am loading a PNG, there are 4 channels, so Vec4b is needed

        NSLog(@"channel number=%d",image->channels());
        // cv::Mat_<cv::Vec3b> img=*image;
        // or use one of the predefined types:
        // cv::Mat_<Vec4b> img=(*image);
        Mat4b img=*image;

        for (int i=0; i<n; i++)
        {
            int r=arc4random()%row;
            int c=arc4random()%col;
            img(r,c)[0]=255;
            img(r,c)[1]=255;
            img(r,c)[2]=255;
        }
    }
}

The cv::Mat_<> template defines (overloads) the () operator, which lets you access every pixel element easily; for multi-channel images, use the [] operator to access each channel value.


Scanning an image with pointers

The goal is simply to reduce the number of color intensity levels.


-(void)reduceImage1:(cv::Mat *)image factor:(int)n
{
    int rows=image->rows;
    int totalElementsPerLine=image->cols * image->channels();
    // process the image one row at a time
    for (int i=0; i<rows; i++)
    {
        uchar * pLinePointer=image->ptr<uchar>(i); // the ptr template method returns a pointer to the start of row i
        for (int j=0; j<totalElementsPerLine; j++) {
            if ((j+1)%4!=0) { // do not reduce the alpha channel
                // pLinePointer[j]=pLinePointer[j]/n*n + n/2;
                // *pLinePointer++ = *pLinePointer/n*n + n/2; // steps through the row via pointer arithmetic
                pLinePointer[j]=pLinePointer[j]-pLinePointer[j]%n + n/2;
            }
        }
    }
}
// Using bitwise operations is more efficient, but requires the reduction factor to be a power of 2;
// here n is the exponent, so the factor is 2^n.
-(void)reduceImage2:(cv::Mat *)image factor:(int)n
{
    int rows=image->rows;
    int totalElementsPerLine=image->cols * image->channels();
    int mask=0xFF<<n;
    int div=0x01<<n;
    NSLog(@"div=%d",div);
    // process the image one row at a time
    for (int i=0; i<rows; i++)
    {
        uchar * pLinePointer=image->ptr<uchar>(i);
        for (int j=0; j<totalElementsPerLine; j++) {
            if ((j+1)%4!=0) { // do not reduce the alpha channel
                pLinePointer[j]=(pLinePointer[j]&mask) + div/2;
            }
        }
    }
}


[helper reduceImage2:&source factor:6];  // here n is the exponent: 2^6 = 64

[helper reduceImage1:&source factor:64]; // here n is the factor itself


To create a deep copy of a Mat object, call the clone method.

// If the parameter were just (cv::Mat *)outputImg, then even though a Mat object is created with new inside the function, the formal parameter and the actual argument are different variables, so the caller would still not receive the object. That is why a pointer to a pointer is used; see the discussion of parameter-passing mechanisms in C++ Primer.


-(void)reduceImage3:(const cv::Mat *)inputImg outputImage:(cv::Mat **)outputImg factor:(int)n
{
    // create a deep copy of the original image
    if (*outputImg==nullptr) {
        *outputImg=new Mat();
        (*outputImg)->create(inputImg->rows, inputImg->cols, inputImg->type());
    }
    Mat * tmp=*outputImg;

    *tmp=inputImg->clone();
    int rows=tmp->rows;
    int totalElementsPerRow=tmp->channels()*tmp->cols;
    int mask=0xFF<<n;
    int div=0x01<<n;
    int hdiv=div/2;
    for (int r=0; r<rows; r++)
    {
        uchar * pLineData=tmp->ptr<uchar>(r);
        for (int j=0; j<totalElementsPerRow; j++)
        {
            if ((j+1)%4!=0)
            {
                pLineData[j]=(pLineData[j]&mask) + hdiv;
            }
        }
    }
}


Alternatively, we can return a Mat pointer. This approach looks much cleaner than the one above, but it is really the same, and both share one problem: memory release. The object created with new is never released.

-(cv::Mat *)reduceImage4:(const cv::Mat *)inputImg factor:(int)n
{
    Mat * result=new Mat();
    result->create(inputImg->rows, inputImg->cols, inputImg->type());
    // create gives result the same size and type as inputImg, just as its parameters say.
    // Strictly speaking this call could be omitted, because the clone below does the same
    // thing first anyway; otherwise how would it know there is enough space for the copy?

    *result=inputImg->clone();
    int mask=0xFF<<n;
    int div=(0x01<<n) / 2;
    for (int i=0; i<result->rows; i++)
    {
        // although we still fetch a whole row at a time, here we treat each pixel as a
        // unit (Vec4b), so there is no need to test for the alpha channel.
        Vec4b * pLineData=result->ptr<Vec4b>(i);
        for (int j=0; j<result->cols; j++)
        {
            pLineData[j][0]= (pLineData[j][0] & mask) + div;
            pLineData[j][1]= (pLineData[j][1] & mask) + div;
            pLineData[j][2]= (pLineData[j][2] & mask) + div;
        }
    }

    return result;
}

cv::Mat * outPutImg=NULL;
    //[helper reduceImage3:&source outputImage:&outPutImg factor:6];
    outPutImg=[helper reduceImage4:&source factor:6];

    UIImage * outputImage=[helper UIImageFromCVMat:*outPutImg];
    [imageView setImage:outputImage];
    outPutImg=NULL; // note: this only clears the pointer; the Mat allocated with new is never deleted, so it still leaks


Note that since create allocates a continuous memory block with no padding, the block size is exactly total()*elemSize(), so we can rewrite the method above to perform even better.

-(cv::Mat *)reduceImage5:(const cv::Mat *)inputImg factor:(int)n
{
    Mat * result=new Mat();
    int mask=0xFF<<n;
    int div=(0x01<<n)/2;
    result->create(inputImg->rows, inputImg->cols, inputImg->type());
    for (int i=0; i<inputImg->rows; i++) {
        Vec4b * pDestLineData=result->ptr<Vec4b>(i);
        // because inputImg is a pointer to const, a const declaration is needed here too
        const Vec4b * pInputLineData=inputImg->ptr<Vec4b>(i);

        for (int j=0; j<inputImg->cols; j++)
        {
            pDestLineData[j][0]=(pInputLineData[j][0] & mask) + div;
            pDestLineData[j][1]=(pInputLineData[j][1] & mask) + div;
            pDestLineData[j][2]=(pInputLineData[j][2] & mask) + div;
            pDestLineData[j][3]=pInputLineData[j][3];
        }
    }
    // this method exploits create: it effectively performs the clone itself while doing
    // the reduction in the same pass, so it performs better
    return result;
}


Efficient scanning of continuous images

For efficiency reasons, an image can be padded with some extra pixels at the end of each row. However, when there is no padding, the image can be seen as one long one-dimensional array of W*H pixels. If Mat.isContinuous() returns true, the image has no padding, and we can exploit this property.

Nothing really changes in the loop; it is just that, because we know the memory is continuous, the pointer can keep moving forward. Pointers really are powerful.

// exploit the continuity property
-(cv::Mat *)reduceImage6:(const cv::Mat *)inputImg factor:(int)n
{
    Mat * result=NULL;
    if (inputImg->isContinuous())
    {
        // int oneRowTotalElements=inputImg->total()*inputImg->elemSize();
        // reshape only creates a new Mat header; the data pointer still points to inputImg's data
        // Mat oneDimensionData=inputImg->reshape(inputImg->channels(), 1, &oneRowTotalElements);

        int totalEle=inputImg->total();
        int mask=0xFF<<n;
        int div=(0x01<<n)/2;
        result=new Mat();
        result->create(inputImg->rows, inputImg->cols, inputImg->type());
        Vec4b * pOnlyOneRowData=result->ptr<Vec4b>(0);
        const Vec4b * pOnlyOneRowDataInput=inputImg->ptr<Vec4b>(0);
        for (int j=0; j<totalEle; j++)
        {
            pOnlyOneRowData[j][0]=(pOnlyOneRowDataInput[j][0] & mask) + div;
            pOnlyOneRowData[j][1]=(pOnlyOneRowDataInput[j][1] & mask) + div;
            pOnlyOneRowData[j][2]=(pOnlyOneRowDataInput[j][2] & mask) + div;
            pOnlyOneRowData[j][3]=pOnlyOneRowDataInput[j][3];
        }
    }
    else
    {
        result=[self reduceImage5:inputImg factor:n];
    }

    return result;
}


Low-level pointer arithmetic

As long as we can get the start address of the image data, it is just as easy to operate on. The address of the first element of this memory block is given by the Mat.data attribute, which returns an unsigned char pointer:

uchar * pData=image.data;

If you want to move to the next row:

pData+=image.step; // next line

The step method gives you the total number of bytes in a line (including any padding pixels).

So the pixel at image.at(j,i) can also be retrieved by:

uchar * pData=image.data + j*image.step + i*image.elemSize();


-(void)simplesalt3:(cv::Mat *)image count:(int)n
{
    for (int i=0; i<n; ++i)
    {
        int r=arc4random()%image->rows;
        int c=arc4random()%image->cols;
        
        if (image->channels()==1)
        {
            uchar * data=image->data + r*image->step + c*image->elemSize();
            *data=255;
        }
        else
        {
            uchar * data=image->data + r*image->step + c*image->elemSize();
            data[0]=255;
            data[1]=255;
            data[2]=255;
        }
        
        
    }
    
    
}


Scanning an image with iterators

In the STL, each collection has an associated iterator class; OpenCV likewise provides an iterator for the Mat class.


// use an iterator; this version only works with the BGRA format and reduces the original image in place
-(void)reduceImage7:(cv::Mat *)image factor:(int)n
{
    Mat_<Vec4b>::iterator it=image->begin<Vec4b>();
    Mat_<Vec4b>::iterator itEnd=image->end<Vec4b>();

    // the following declarations are equivalent; both are Vec4b iterators
    // MatIterator_<Vec4b> it;
    // MatIterator_<Vec4b> itEnd;

    int mask=0xFF<<n;
    int div=(0x01<<n)/2;

    for (; it!=itEnd; ++it)
    {
        /*
         (*it)[0]=((*it)[0]&mask) + div;
         (*it)[1]=((*it)[1]&mask) + div;
         (*it)[2]=((*it)[2]&mask) + div;
         */
        // Vec4b pixel=*it; // dereferencing like this copies the content into the pixel
        // variable, which is no longer the original pixel; so we use a C++ reference.
        // Note the reference is required; otherwise a copy is made into a new Vec4b variable.
        Vec4b & pixel=*it;
        pixel[0]=(pixel[0]&mask) + div;
        pixel[1]=(pixel[1]&mask) + div;
        pixel[2]=(pixel[2]&mask) + div;
    }
}




-(void)reduceImage8:(cv::Mat *)image factor:(int)n
{
    // MatIterator_<Vec4b> it=image->begin<Vec4b>()+image->cols;
    // MatIterator_<Vec4b> end=image->end<Vec4b>()-image->cols;
    // double duration=static_cast<double>(cv::getTickCount());

    Mat_<Vec4b> image2=*image;
    MatIterator_<Vec4b> it=image2.begin();  // with Mat_, the begin method needs no type parameter
    MatIterator_<Vec4b> end=image2.end();

    // MatIterator_<Vec4b> it2=image2.begin()+image2.cols;
    // MatIterator_<Vec4b> it3=image2.begin()+image2.rows;

    // Vec4b elmen=image2(1,0);
    // to start at the second row, the book uses begin()+image2.rows,
    // but in my view it should be begin()+image2.cols

    int mask=0xFF<<n;
    int div=(0x01<<n)/2;

    while (it!=end)
    {
        Vec4b & pixel=*it;
        pixel=Vec4b((pixel[0]&mask)+div,(pixel[1]&mask)+div,(pixel[2]&mask)+div,pixel[3]);
        // (*it)=Vec4b(0,255,0,255);
        ++it;
    }

    // duration=static_cast<double>(cv::getTickCount())-duration;
    // NSLog(@"how long %f  %f",duration,duration/cv::getTickFrequency());
}


Performance comparison:

Using bitwise operations is the fastest; using the at method is the slowest, and using iterators is the second slowest. Scanning the image with a pointer plus bitwise operations is the best choice. Looping over rows and columns is not as good as treating the whole image as a single row and scanning it with one pointer; that is better still, but it only works for continuous images.


Scanning an image with neighbor access is really just a few more pointer operations per pixel. One thing to note is that we must create another image to receive the result of the computation; we cannot write into the same one. Also, the first and last rows/columns cannot be processed this way, because their neighbor elements are incomplete, and applying the same formula to them would go wrong.



-(cv::Mat *)sharpenImage1:(cv::Mat *)image
{
    Mat * result=new Mat();
    result->create(image->size(), image->type());
    int rows=image->rows-1;
    int cols=image->cols-1;
    for (int r=1; r<rows; ++r)
    {
        const Vec4b * previousRow=image->ptr<Vec4b>(r-1);
        const Vec4b * currentRow=image->ptr<Vec4b>(r);
        const Vec4b * nextRow=image->ptr<Vec4b>(r+1);
        Vec4b * currentRowOfDestination=result->ptr<Vec4b>(r);
        for (int c=1; c<cols; ++c)
        {
            for (int i=0;i<3;i++)
            {
              currentRowOfDestination[c][i]=cv::saturate_cast<uchar>(
              5*currentRow[c][i]-currentRow[c-1][i]-currentRow[c+1][i]
                  -previousRow[c][i]-nextRow[c][i]
              );
             }
            
        }
        
        
    }
    // simply set the border pixels to 0
    result->row(0).setTo(Scalar::all(0));
    result->row(rows).setTo(Scalar(0));
    result->col(0).setTo(Scalar(0));
    result->col(cols).setTo(Scalar(0));
    return result;
    
}


cv::saturate_cast<uchar>() is used because the result of the computation may overflow the valid range; this method guarantees that values stay inside it. For uchar, for example, values < 0 are set to 0 and values > 255 are set to 255.


We can represent this neighbor operation with a kernel matrix.

Such a matrix acts like a filter: it is laid over each pixel element, and the image is processed with its coefficients. OpenCV abstracts this concept and defines a special function to perform the task: the cv::filter2D function.


-(cv::Mat *)sharpenImage2:(cv::Mat *)image
{
    
    Mat * result=new Mat();
    result->create(image->size(), image->type());
    Mat kernel(3,3,CV_32F,Scalar(0));
      kernel.at<float>(1,1)=5.0;
      kernel.at<float>(1,0)=-1.0;
      kernel.at<float>(1,2)=-1.0;
      kernel.at<float>(0,1)=-1.0;
      kernel.at<float>(2,1)=-1.0;
     cv::filter2D(*image, *result, image->depth(), kernel);
    
    
    return result;
}



Performing simple image arithmetic


cv::addWeighted(image1,0.7,image2,0.9,0.,result); // image1 and image2 must have the same size; otherwise how could the matrix arithmetic be performed?


//cv::addWeighted(image1,0.7,image2,0.9,0.,result);
/*

 // c[i]= a[i]+b[i];
 cv::add(imageA,imageB,resultC);
 // c[i]= a[i]+k;
 cv::add(imageA,cv::Scalar(k),resultC);
 // c[i]= k1*a[i]+k2*b[i]+k3;
 cv::addWeighted(imageA,k1,imageB,k2,k3,resultC);
 // c[i]= k*a[i]+b[i];
 cv::scaleAdd(imageA,k,imageB,resultC);

 For some functions, you can also specify a mask:
 // if (mask[i]) c[i]= a[i]+b[i];
 cv::add(imageA,imageB,resultC,mask);

 If you apply a mask, the operation is performed only on pixels for which the mask value is not null (the mask must be 1-channel).

 There are also the cv::subtract, cv::absdiff, cv::multiply, and cv::divide functions.
 Bit-wise operators are available as well:
 cv::bitwise_and, cv::bitwise_or, cv::bitwise_xor, and cv::bitwise_not.

 The operators cv::min and cv::max, which find the per-element maximum or minimum pixel value, are also very useful.
 */


The images must have the same size and type (the output image will be re-allocated if it does not match the input size). Also, since the operation is performed per-element, one of the input images can be used as the output.


Several operators that take a single image as input are also available: cv::sqrt, cv::pow, cv::abs, cv::cubeRoot, cv::exp, and cv::log. In fact, there exists an OpenCV function for almost any operation you need to apply to your images.


Overloaded image operators

Most arithmetic functions have their corresponding operator overloaded in OpenCV 2. Consequently, the call to cv::addWeighted can be written as:
   result= 0.7*image1+0.9*image2;  // cv::saturate_cast is called internally


Most C++ operators have been overloaded. Among them are the bitwise operators &, |, ^, ~, the min, max, and abs functions, and the comparison operators <, <=, ==, !=, >, >=, the latter returning an 8-bit binary image. You will also find matrix multiplication m1*m2 (where m1 and m2 are both cv::Mat instances), matrix inversion m1.inv(), transpose m1.t(), determinant m1.determinant(), vector norm v1.norm(), cross-product v1.cross(v2), dot product v1.dot(v2), and so on. When it makes sense, the corresponding op= operator (for example, +=) is also defined.



The color reduction can be rewritten as follows:

image=(image&cv::Scalar(mask,mask,mask))
                     +cv::Scalar(div/2,div/2,div/2);


But in my tests this does not work: the & has no effect, while the + is applied.

2013-12-08 09:28:47.462 HelloWorld[562:a0b] origin 165  214  228 255
2013-12-08 09:28:47.463 HelloWorld[562:a0b] after 160  224  224 255    (the correct result)

2013-12-08 09:29:34.608 HelloWorld[595:a0b] origin 165  214  228 255
2013-12-08 09:29:34.609 HelloWorld[595:a0b] after 197  246  255 255    (165+32=197; the & did not take effect)


I even tried:

 cv::bitwise_and(*image,Scalar(mask,mask,mask,0xFF) , *result);
    *result= (*result) + Scalar(div,div,div,0);


cv::bitwise_and does not take effect either.


I will not investigate the cause for now; after all, the performance is not as good as operating on each element ourselves, so I will just leave this issue aside.



Splitting the image channels

If you want to process one individual channel, you can do it with a scanning loop, but there is also a dedicated function:

cv::split copies the three channels of a color image into three distinct cv::Mat instances.


 Mat * result=new Mat(); // note: the original used nil here, but merge needs a valid Mat to write into
    std::vector<cv::Mat> planes;
    cv::split(*image, planes);
    planes[0]+=(*second);
    cv::merge(planes,*result);

    


In my tests, the version given in the book is problematic: at least, adding a one-channel image to an image with 3 channels does not work, unless the image2 in the book is a gray-level image. Still, it is worth knowing this method exists.

OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op


Defining regions of interest

A region of interest is abbreviated as ROI.

Once defined, an ROI can be manipulated as a regular cv::Mat instance. The key is that the ROI points to the same data buffer as its parent image.


-(void)addImageLogonUseROI:(cv::Mat *)image logon:(cv::Mat *)secod
{
    Mat_<Vec4b> souce=*image;
    Mat imageROI=souce(cv::Rect(200,60,secod->cols,secod->rows));
    // note: for the last parameter, you should not use image itself as the output;
    // otherwise the logo will be enlarged to the full image scale.
    // The key is that the ROI points to the same data buffer as its parent image,
    // even though we are processing the image's ROI rather than separate data.
    cv::addWeighted(imageROI, 1.0, *secod, .4, .0, imageROI);
}


The ROI can also be described using row and column ranges. A range is a continuous sequence from a start index to an end index (excluded). The cv::Range structure is used to represent this concept. Therefore, an ROI can be defined from two ranges; in our example, the ROI could have been equivalently defined as follows:
   cv::Mat imageROI= image(cv::Range(270,270+logo.rows),
                           cv::Range(385,385+logo.cols));
The operator() of cv::Mat returns another cv::Mat instance that can then be used in subsequent calls. Any transformation of the ROI will affect the original image in the corresponding area, because the image and the ROI share the same image data. Since the definition of an ROI does not copy data, it executes in constant time, no matter the size of the ROI.



If one wants to define an ROI made of some lines of an image, the following call could be used:
   cv::Mat imageROI= image.rowRange(start,end) ;
and similarly, for an ROI made of some image columns:
   cv::Mat imageROI= image.colRange(start,end) ;
The row and col methods that were used in the recipe Scanning an image with neighbor access are a special case of these latter methods, in which the start and end indexes differ by one in order to define a single-line or single-column ROI.


-(cv::Mat *)identifyImag:(cv::Mat *)image giveColor:(cv::Vec3b)color minDistance:(int)distance
{
    Mat * result=new Mat;
    result->create(image->size(),CV_8UC1); //we just use 1 channel to record our identification
    int rows=image->rows;
    int cols=image->cols;
    
    for (int i=0; i<rows; ++i) {
        Vec4b * pLineSourceData=image->ptr<Vec4b>(i);
        // there is no Vec1b; a single channel is just the plain uchar data type
        uchar * pLineDestdata=result->ptr(i);
        for (int j=0; j<cols; ++j)
        {
            //Vec4b pixElement=pLineData[j];
            Vec4b pixElement=*pLineSourceData++;
            
            int dis=[self getColorDistance1:pixElement targerColor:color];
            if (dis<=distance) {
                *pLineDestdata++=255;
            }
            else
            {
                *pLineDestdata++=0;
            }
            
            
            
        }
        
        
    }
    
   
    
    return result;
}
-(int)getColorDistance1:(Vec4b) sourceColor targerColor:(Vec3b)targetColor
{
    return abs(sourceColor[0]-targetColor[0])+abs(sourceColor[1]-targetColor[1])+abs(sourceColor[2]-targetColor[2]);
    
}

-(int)getColorDistance2:(Vec4b) sourceColor targerColor:(Vec3b)targetColor
{
    // note: in the original, a misplaced parenthesis passed the third difference
    // as a second argument to cv::norm instead of as part of the Vec3i
    return static_cast<int>(cv::norm(Vec3i(sourceColor[0]-targetColor[0],
                                           sourceColor[1]-targetColor[1],
                                           sourceColor[2]-targetColor[2])));
}


Alternatively, one could have proposed the following definition for the distance computation:
   return static_cast<int>(
      cv::norm<uchar,3>(color-target));
This definition may look right at first glance; however, it is wrong. This is because all of these operators always include a call to saturate_cast, so in cases where the target value is greater than the corresponding color value, the value 0 is assigned instead of the negative value one would have expected.



Converting color spaces

The RGB color space (or BGR depending on which order the colors are stored) is based on the use of the red, green, and blue additive primary colors.

In digital images, the red, green, and blue channels are adjusted such that, when combined in equal amounts, a gray-level intensity is obtained, from black (0,0,0) to white (255,255,255). How red is a red? There has to be a numeric quantity to measure it; in that sense, the RGB values we usually speak of are really a gray-level image with 3 channels.


Unfortunately, computing the distance between colors in the RGB color space is not the best way to measure the similarity of two given colors. Indeed, RGB is not a perceptually uniform color space. This means that two colors at a given distance might look very similar, while two other colors separated by the same distance will look very different.


To solve this problem, other color spaces having the property of being perceptually uniform have been introduced. In particular, the CIE L*a*b* is one such color space. By converting our images to this space, the Euclidean distance between an image pixel and the target color becomes a meaningful measure of the visual similarity between the two colors. We will show in this recipe how we can modify the previous application in order to work with the CIE L*a*b* space.


When an image is converted from one color space to another, a linear or non-linear transformation is applied on each input pixel to produce the output pixels. The pixel type of the output image will match the one of the input image. Even if most of the time you work with 8-bit pixels, you can also use color conversion with images of floats (in which case, pixel values are generally assumed to vary between 0 and 1.0) or with integer images (with pixel generally varying between 0 and 65535). But the exact domain of the pixel values depends on the specific color space. For example, with the CIE L*a*b* color space, the L channel varies between 0 and 100, while the a and b chromaticity components vary between -127 and 127.


Among them is YCrCb, which is the color space used in JPEG compression. To convert from BGR to YCrCb, the mask would be CV_BGR2YCrCb. Note that the representation with the three regular primary colors red, green, and blue is available in both RGB and BGR order.




The HSV and HLS color spaces are also interesting because they decompose colors into their hue and saturation components plus the value or luminance component, which is a more natural way for humans to describe colors.

You can also convert color images to gray-level. The output will be a 1-channel image:
            cv::cvtColor(color, gray, CV_BGR2GRAY);


It is also possible to do the conversion in the other direction, but the 3 channels of the resulting color image will then be identically filled with the corresponding values of the gray-level image.


Color space conversion needs more study to be understood well.











 


 
 
