
CN103632153B - Region-based image saliency map extracting method - Google Patents


Region-based image saliency map extracting method

Info

Publication number
CN103632153B
CN103632153B
Authority
CN
China
Prior art keywords
color
region
image
pixel
area
Prior art date
2013-12-05
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310651864.5A
Other languages
Chinese (zh)
Other versions
CN103632153A (en)
Inventor
邵枫
姜求平
蒋刚毅
郁梅
李福翠
彭宗举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Duyan Information Technology Co ltd
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2013-12-05
Filing date
2013-12-05
Publication date
2017-01-11
2013-12-05 Application filed by Ningbo University
2013-12-05 Priority to CN201310651864.5A
2014-03-12 Publication of CN103632153A
2017-01-11 Application granted
2017-01-11 Publication of CN103632153B
Status: Active
2033-12-05 Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a region-based image saliency map extraction method. First, the global color histogram of the image is computed to obtain a saliency map based on the global color histogram. The image is then segmented with superpixel segmentation, and the color contrast and spatial sparsity of each region are computed and weighted by the similarity between regions, yielding a saliency map based on regional color contrast and a saliency map based on regional spatial sparsity. Finally, the global-color-histogram-based, regional-color-contrast-based, and regional-spatial-sparsity-based saliency maps are fused into the final image saliency map. The advantage is that the resulting saliency map reflects salient variation in both global and local regions, consistent with the salient semantics of the image.

Description

A region-based image saliency map extraction method

Technical field

The invention relates to an image signal processing method, and in particular to a region-based image saliency map extraction method.

Background art

In human visual reception and information processing, because brain resources are limited and external environmental information varies in importance, the human brain does not treat all external information equally; instead, it exhibits selectivity. When people view images or video clips, attention is not distributed uniformly over every area of the image; certain salient areas receive more attention. How to detect and extract the salient regions that draw high visual attention is an important research topic in computer vision and content-based video retrieval.

Existing saliency map models are selective attention models that simulate the visual attention mechanism of living organisms: they compute, for each pixel, its contrast with the surrounding background in color, brightness, and orientation, and assemble the saliency values of all pixels into a saliency map. However, such methods do not extract image saliency information well, because pixel-based salient features cannot adequately reflect the salient semantic features perceived by the human eye, whereas region-based salient features can effectively improve the stability and accuracy of extraction. Therefore, how to segment the image into regions, how to extract the features of each region, how to describe each region's salient features, and how to measure the saliency of a region itself and the saliency between regions are all problems that need to be studied and solved in region-based saliency map extraction.

Summary of the invention

The technical problem to be solved by the present invention is to provide a region-based image saliency map extraction method that conforms to salient semantic features and offers high extraction stability and accuracy.

The technical solution adopted by the present invention to solve the above technical problem is a region-based image saliency map extraction method, characterized by comprising the following steps:

① Denote the source image to be processed as {I_i(x,y)}, where i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W is the width of {I_i(x,y)}, H is the height of {I_i(x,y)}, and I_i(x,y) is the color value of the i-th component of the pixel at coordinate (x,y) in {I_i(x,y)}; the 1st component is R, the 2nd component is G, and the 3rd component is B;

② First obtain the quantized image of {I_i(x,y)} and the global color histogram of that quantized image; then, from the quantized image, determine the color category of each pixel in {I_i(x,y)}; finally, from the global color histogram and the per-pixel color categories, obtain the global-color-histogram-based image saliency map of {I_i(x,y)}, denoted {HS(x,y)}, where HS(x,y) is the pixel value at coordinate (x,y) in {HS(x,y)} and also the global-color-histogram-based saliency value of the pixel at coordinate (x,y) in {I_i(x,y)};

③ Use superpixel segmentation to divide {I_i(x,y)} into M non-overlapping regions, and re-express {I_i(x,y)} as the set of M regions, denoted {SP_h}; then compute the similarity between the regions in {SP_h}, denoting the similarity between the p-th and q-th regions as Sim(SP_p, SP_q), where M ≥ 1, SP_h is the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, SP_p is the p-th region in {SP_h}, and SP_q is the q-th region in {SP_h};

④ From the similarities between the regions in {SP_h}, obtain the regional-color-contrast-based image saliency map of {I_i(x,y)}, denoted {NGC(x,y)}, where NGC(x,y) is the pixel value at coordinate (x,y) in {NGC(x,y)};

⑤ From the similarities between the regions in {SP_h}, obtain the regional-spatial-sparsity-based image saliency map of {I_i(x,y)}, denoted {NSS(x,y)}, where NSS(x,y) is the pixel value at coordinate (x,y) in {NSS(x,y)};

⑥ Fuse the global-color-histogram-based saliency map {HS(x,y)}, the regional-color-contrast-based saliency map {NGC(x,y)}, and the regional-spatial-sparsity-based saliency map {NSS(x,y)} of {I_i(x,y)} to obtain the final image saliency map of {I_i(x,y)}, denoted {Sal(x,y)}; the pixel value at coordinate (x,y) in {Sal(x,y)} is denoted Sal(x,y), with Sal(x,y) = HS(x,y) × NGC(x,y) × NSS(x,y).
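As a concrete illustration, the fusion in step ⑥ is an element-wise product of the three maps. Below is a minimal sketch assuming the maps have already been computed as same-shaped NumPy arrays; the final rescaling to [0, 1] is an added assumption for display, not part of the patent's formula.

```python
import numpy as np

def fuse_saliency(hs, ngc, nss):
    """Step 6: Sal(x,y) = HS(x,y) * NGC(x,y) * NSS(x,y).
    The rescaling to [0, 1] afterwards is an assumption for display."""
    sal = hs * ngc * nss
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)

# toy 2x2 saliency maps
hs  = np.array([[0.2, 0.8], [0.4, 1.0]])
ngc = np.array([[0.1, 0.9], [0.5, 1.0]])
nss = np.array([[0.3, 0.7], [0.6, 1.0]])
sal = fuse_saliency(hs, ngc, nss)
```

A pixel must score well on all three cues to remain salient after the product, which is what makes the multiplicative fusion stricter than an additive one.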

The specific process of step ② is:

②-1. Quantize the color value of each component of each pixel in {I_i(x,y)} to obtain the quantized image of {I_i(x,y)}, denoted {P_i(x,y)}; the color value of the i-th component of the pixel at coordinate (x,y) in {P_i(x,y)} is denoted P_i(x,y), with P_i(x,y) = ⌊I_i(x,y)/16⌋, where the symbol ⌊ ⌋ denotes rounding down (each 0–255 component is thus quantized to 16 levels);

②-2. Compute the global color histogram of {P_i(x,y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, where H(k) is the number of pixels in {P_i(x,y)} that belong to the k-th color;

②-3. From the component color values of each pixel in {P_i(x,y)}, compute the color category of the corresponding pixel in {I_i(x,y)}; the color category of the pixel at coordinate (x,y) in {I_i(x,y)} is denoted k_xy, with k_xy = P_3(x,y) × 256 + P_2(x,y) × 16 + P_1(x,y), where P_3(x,y), P_2(x,y), and P_1(x,y) are the color values of the 3rd, 2nd, and 1st components of the pixel at coordinate (x,y) in {P_i(x,y)};

②-4. Compute the global-color-histogram-based saliency value of each pixel in {I_i(x,y)}; for the pixel at coordinate (x,y) it is denoted HS(x,y):

HS(x,y) = Σ_{k=0}^{4095} ( H(k) × D(k_xy, k) )

D(k_xy, k) = √( (p_{k_xy,1} − p_{k,1})² + (p_{k_xy,2} − p_{k,2})² + (p_{k_xy,3} − p_{k,3})² )

where D(k_xy, k) is the Euclidean distance between the k_xy-th and k-th colors in {H(k) | 0 ≤ k ≤ 4095}, and p_{k,1}, p_{k,2}, p_{k,3} are the color values of the 1st, 2nd, and 3rd components corresponding to the k-th color, recovered as p_{k,1} = mod(k, 16), p_{k,2} = mod(⌊k/16⌋, 16), p_{k,3} = ⌊k/256⌋ (and likewise p_{k_xy,1}, p_{k_xy,2}, p_{k_xy,3} for the k_xy-th color); mod() is the remainder operation;

②-5. From the global-color-histogram-based saliency value of each pixel in {I_i(x,y)}, the global-color-histogram-based image saliency map of {I_i(x,y)} is obtained, denoted {HS(x,y)}.
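Steps ②-1 through ②-5 can be sketched as follows. This is a hedged illustration, not the patent's reference code: it assumes the image is an H×W×3 RGB array with components ordered R, G, B, and that the Euclidean distance of step ②-4 is taken between the quantized (0–15) components.

```python
import numpy as np

def global_histogram_saliency(img):
    """Quantize each channel to 16 levels (2-1), build the 4096-bin
    global color histogram (2-2), index colors as k = P3*256 + P2*16 + P1
    (2-3), and score each pixel by the histogram-weighted color distance
    to all colors (2-4)."""
    q = (img // 16).astype(np.int64)                  # 2-1: quantization
    k = q[..., 2] * 256 + q[..., 1] * 16 + q[..., 0]  # 2-3: color category
    hist = np.bincount(k.ravel(), minlength=4096)     # 2-2: H(k)
    ks = np.arange(4096)
    comp = np.stack([ks % 16, (ks // 16) % 16, ks // 256], axis=1).astype(float)
    hs_by_color = np.zeros(4096)
    for c in np.unique(k):                            # only colors that occur
        d = np.linalg.norm(comp[c] - comp, axis=1)    # D(k_xy, k)
        hs_by_color[c] = (hist * d).sum()             # 2-4: HS for this color
    return hs_by_color[k]                             # 2-5: per-pixel map

img = np.zeros((8, 8, 3), dtype=np.uint8)
img[2:4, 2:4] = 255      # rare bright patch on a dominant black background
hs = global_histogram_saliency(img)
```

On this toy image the rare bright patch receives a higher saliency value than the dominant background, because its distance to the background color is weighted by the large background pixel count.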

The similarity Sim(SP_p, SP_q) between the p-th and q-th regions in {SP_h} in step ③ is obtained as follows:

③-1. Quantize the color value of each component of each pixel in every region of {SP_h} to obtain the quantized region of each region in {SP_h}; the quantized region of the h-th region is denoted {P_{h,i}(x_h, y_h)}, and the color value of the i-th component of the pixel at coordinate (x_h, y_h) in {P_{h,i}(x_h, y_h)} is denoted P_{h,i}(x_h, y_h). Assuming the pixel at coordinate (x_h, y_h) in {P_{h,i}(x_h, y_h)} has coordinate (x,y) in {I_i(x,y)}, then P_{h,i}(x_h, y_h) = ⌊I_i(x,y)/16⌋, where 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h and H_h are the width and height of the h-th region in {SP_h}, and the symbol ⌊ ⌋ denotes rounding down;

③-2. Compute the color histogram of the quantized region of each region in {SP_h}; the color histogram of {P_{h,i}(x_h, y_h)} is denoted {H_{SP_h}(k) | 0 ≤ k ≤ 4095}, where H_{SP_h}(k) is the number of pixels in {P_{h,i}(x_h, y_h)} that belong to the k-th color;

③-3. Normalize the color histogram of the quantized region of each region in {SP_h} to obtain the corresponding normalized color histogram; the normalized histogram obtained from {H_{SP_h}(k) | 0 ≤ k ≤ 4095} is denoted {H'_{SP_h}(k) | 0 ≤ k ≤ 4095}, with

H'_{SP_h}(k) = H_{SP_h}(k) / Σ_{h'=1}^{M} H_{SP_{h'}}(k)

where H'_{SP_h}(k) represents the occurrence probability of pixels belonging to the k-th color within the quantized region {P_{h,i}(x_h, y_h)} of the h-th region, and H_{SP_{h'}}(k) is the number of pixels belonging to the k-th color in the quantized region {P_{h',i}(x_{h'}, y_{h'})} of the h'-th region, 1 ≤ x_{h'} ≤ W_{h'}, 1 ≤ y_{h'} ≤ H_{h'}, with W_{h'} and H_{h'} the width and height of the h'-th region in {SP_h}, and P_{h',i}(x_{h'}, y_{h'}) the color value of the i-th component of the pixel at coordinate (x_{h'}, y_{h'}) in {P_{h',i}(x_{h'}, y_{h'})};

③-4. Compute the similarity between the p-th and q-th regions in {SP_h}, denoted Sim(SP_p, SP_q):

Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) × Sim_d(SP_p, SP_q)

Sim_c(SP_p, SP_q) = Σ_{k=0}^{4095} min( H'_{SP_p}(k), H'_{SP_q}(k) )

where Sim_c(SP_p, SP_q) is the color similarity between the p-th and q-th regions in {SP_h}, and Sim_d(SP_p, SP_q) is their spatial similarity, computed from the Euclidean distance between the coordinate positions of the center pixels of the p-th and q-th regions (the symbol ‖ ‖ denotes the Euclidean distance). Here H'_{SP_p}(k) and H'_{SP_q}(k) are the occurrence probabilities of pixels belonging to the k-th color in the quantized regions {P_{p,i}(x_p, y_p)} and {P_{q,i}(x_q, y_q)} of the p-th and q-th regions, 1 ≤ x_p ≤ W_p, 1 ≤ y_p ≤ H_p, 1 ≤ x_q ≤ W_q, 1 ≤ y_q ≤ H_q, with W_p, H_p, W_q, H_q the widths and heights of the p-th and q-th regions in {SP_h}; P_{p,i}(x_p, y_p) and P_{q,i}(x_q, y_q) are the color values of the i-th components of the pixels at coordinates (x_p, y_p) and (x_q, y_q) in the corresponding quantized regions, and min() is the minimum function.
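A sketch of the step ③ similarity between two regions. The histogram-intersection color term follows ③-4; the text above describes Sim_d only as a spatial similarity based on the Euclidean distance between region centers, so the Gaussian form and the sigma value below are assumptions.

```python
import numpy as np

def region_similarity(hist_p, hist_q, center_p, center_q, sigma=0.4):
    """Sim(SP_p, SP_q) = Sim_c * Sim_d (step 3-4).
    Sim_c: histogram intersection of the normalized color histograms.
    Sim_d: assumed Gaussian of the Euclidean distance between the
    region centers (the exact form is not specified above)."""
    sim_c = np.minimum(hist_p, hist_q).sum()
    dist = np.linalg.norm(np.asarray(center_p, float) -
                          np.asarray(center_q, float))
    sim_d = np.exp(-dist / sigma)
    return sim_c * sim_d

h1 = np.array([0.5, 0.5, 0.0])   # toy 3-bin normalized histograms
h2 = np.array([0.4, 0.2, 0.4])
sim = region_similarity(h1, h2, (0.3, 0.3), (0.4, 0.5))
```

Two regions with identical histograms and coincident centers score 1; dissimilar colors or distant centers shrink either factor toward 0.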

The specific process of step ④ is:

④-1. Compute the color contrast of each region in {SP_h}; the color contrast of the h-th region is denoted NGC_{SP_h}:

NGC_{SP_h} = Σ_{q=1}^{M} ( W(SP_h, SP_q) × ‖m_{SP_h} − m_{SP_q}‖ )

where SP_h and SP_q are the h-th and q-th regions in {SP_h}, W(SP_h, SP_q) is a weight determined by the total number of pixels contained in the region and by the spatial similarity Sim_d(SP_h, SP_q) between the h-th and q-th regions (computed from the Euclidean distance between the coordinate positions of their center pixels; the symbol ‖ ‖ denotes the Euclidean distance), and m_{SP_h} and m_{SP_q} are the color mean vectors of the h-th and q-th regions in {SP_h};

④-2. Normalize the color contrast of each region in {SP_h}; the normalized color contrast of the h-th region is denoted NGC'_{SP_h}:

NGC'_{SP_h} = ( NGC_{SP_h} − NGC_min ) / ( NGC_max − NGC_min )

where NGC_min and NGC_max are the minimum and maximum color contrasts among the M regions in {SP_h};

④-3. Compute the color-contrast-based saliency value of each region in {SP_h}; for the h-th region it is denoted NGC''_{SP_h}:

NGC''_{SP_h} = Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × NGC'_{SP_q} ) / Σ_{q=1}^{M} Sim(SP_h, SP_q)

where Sim(SP_h, SP_q) is the similarity between the h-th and q-th regions in {SP_h};

④-4. Take the color-contrast-based saliency value of each region in {SP_h} as the saliency value of every pixel in the corresponding region, thereby obtaining the regional-color-contrast-based image saliency map of {I_i(x,y)}, denoted {NGC(x,y)}, where NGC(x,y) is the pixel value at coordinate (x,y) in {NGC(x,y)}.
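Steps ④-1 to ④-3 reduce to a weighted contrast followed by two normalizations, sketched below. The matrix W stands in for W(SP_h, SP_q), which is described above only in terms of region pixel counts and spatial similarity, so its exact form is left as an input; the similarity matrix with entries Sim(SP_h, SP_q) is likewise assumed given.

```python
import numpy as np

def region_contrast_saliency(mean_colors, W, sim):
    """4-1: NGC_h = sum_q W(h,q) * ||m_h - m_q||  (color-mean distances)
    4-2: min-max normalization of NGC over the M regions
    4-3: similarity-weighted average of the normalized contrasts."""
    m = np.asarray(mean_colors, float)                  # M x 3 color means
    diff = np.linalg.norm(m[:, None, :] - m[None, :, :], axis=2)
    ngc = (W * diff).sum(axis=1)                        # 4-1
    rng = ngc.max() - ngc.min()
    ngc_n = (ngc - ngc.min()) / rng if rng > 0 else np.zeros_like(ngc)  # 4-2
    return (sim @ ngc_n) / sim.sum(axis=1)              # 4-3

means = [[0, 0, 0], [200, 200, 200], [10, 10, 10]]      # region color means
W = np.ones((3, 3))                                     # placeholder weights
S = np.eye(3) * 0.8 + 0.1                               # toy similarity matrix
sal = region_contrast_saliency(means, W, S)
```

On this toy input, the bright region (index 1) ends up with the largest value, since its color mean is farthest from the two dark regions.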

The specific process of step ⑤ is:

⑤-1. Compute the spatial sparsity of each region in {SP_h}; for the h-th region it is denoted NSS_{SP_h}:

NSS_{SP_h} = Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × D_{SP_q} ) / Σ_{q=1}^{M} Sim(SP_h, SP_q)

where Sim(SP_h, SP_q) is the similarity between the h-th and q-th regions in {SP_h}, and D_{SP_q} is the Euclidean distance between the center pixel of the q-th region in {SP_h} and the center pixel of {I_i(x,y)};

⑤-2. Normalize the spatial sparsity of each region in {SP_h}; the normalized spatial sparsity of the h-th region is denoted NSS'_{SP_h}:

NSS'_{SP_h} = ( NSS_{SP_h} − NSS_min ) / ( NSS_max − NSS_min )

where NSS_min and NSS_max are the minimum and maximum spatial sparsities among the M regions in {SP_h};

⑤-3. Compute the spatial-sparsity-based saliency value of each region in {SP_h}; for the h-th region it is denoted NSS''_{SP_h}:

NSS''_{SP_h} = Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × NSS'_{SP_q} ) / Σ_{q=1}^{M} Sim(SP_h, SP_q);

⑤-4. Take the spatial-sparsity-based saliency value of each region in {SP_h} as the saliency value of every pixel in the corresponding region, thereby obtaining the regional-spatial-sparsity-based image saliency map of {I_i(x,y)}, denoted {NSS(x,y)}, where NSS(x,y) is the pixel value at coordinate (x,y) in {NSS(x,y)}.
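Steps ⑤-1 to ⑤-4 mirror step ④ with center-to-image-center distance in place of color contrast. In the sketch below, taking the distance D of the neighbor region SP_q inside the weighted sum of ⑤-1 is an assumption, as is the availability of the similarity matrix.

```python
import numpy as np

def spatial_sparsity_saliency(centers, image_center, sim):
    """5-1: NSS_h = sum_q Sim(h,q) * D_q / sum_q Sim(h,q), where D_q is
    the distance from region q's center to the image center (using the
    neighbor region's distance here is an assumption).
    5-2: min-max normalization over the M regions.
    5-3: a second similarity-weighted averaging."""
    c = np.asarray(centers, float)
    d = np.linalg.norm(c - np.asarray(image_center, float), axis=1)
    nss = (sim @ d) / sim.sum(axis=1)                       # 5-1
    rng = nss.max() - nss.min()
    nss_n = (nss - nss.min()) / rng if rng > 0 else np.zeros_like(nss)  # 5-2
    return (sim @ nss_n) / sim.sum(axis=1)                  # 5-3

centers = [(50, 50), (10, 10), (90, 95)]   # toy region centers
S = np.eye(3) * 0.8 + 0.1                  # toy similarity matrix
nss = spatial_sparsity_saliency(centers, (50, 50), S)
```

With plain min-max normalization, regions whose similar neighbors lie far from the image center receive larger sparsity values; the centered region gets the smallest.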

Compared with the prior art, the advantages of the present invention are:

1) The method of the present invention separately computes the global-color-histogram-based, regional-color-contrast-based, and regional-spatial-sparsity-based image saliency maps, and finally fuses them into the image saliency map; the resulting saliency map reflects salient variation in both the global and local regions of the image, with high stability and accuracy.

2) The method of the present invention segments the image with superpixel segmentation, uses histogram features to compute the color contrast and spatial sparsity of each region, and finally weights them by the similarity between regions to obtain the final region-based image saliency map; in this way, feature information conforming to salient semantics can be extracted.

Description of the drawings

Fig. 1 is the overall implementation block diagram of the method of the present invention;

Fig. 2a is the original "Image1" image;

Fig. 2b is the ground-truth saliency map of the "Image1" image;

Fig. 2c is the global-color-histogram-based image saliency map of the "Image1" image;

Fig. 2d is the regional-color-contrast-based image saliency map of the "Image1" image;

Fig. 2e is the regional-spatial-sparsity-based image saliency map of the "Image1" image;

Fig. 2f is the final image saliency map of the "Image1" image;

Fig. 3a is the original "Image2" image;

Fig. 3b is the ground-truth saliency map of the "Image2" image;

Fig. 3c is the global-color-histogram-based image saliency map of the "Image2" image;

Fig. 3d is the regional-color-contrast-based image saliency map of the "Image2" image;

Fig. 3e is the regional-spatial-sparsity-based image saliency map of the "Image2" image;

Fig. 3f is the final image saliency map of the "Image2" image;

Fig. 4a is the original "Image3" image;

Fig. 4b is the ground-truth saliency map of the "Image3" image;

Fig. 4c is the global-color-histogram-based image saliency map of the "Image3" image;

Fig. 4d is the regional-color-contrast-based image saliency map of the "Image3" image;

Fig. 4e is the regional-spatial-sparsity-based image saliency map of the "Image3" image;

Fig. 4f is the final image saliency map of the "Image3" image;

Fig. 5a is the original "Image4" image;

Fig. 5b is the ground-truth saliency map of the "Image4" image;

Fig. 5c is the global-color-histogram-based image saliency map of the "Image4" image;

Fig. 5d is the regional-color-contrast-based image saliency map of the "Image4" image;

Fig. 5e is the regional-spatial-sparsity-based image saliency map of the "Image4" image;

Fig. 5f is the final image saliency map of the "Image4" image;

Fig. 6a is the original "Image5" image;

Fig. 6b is the ground-truth saliency map of the "Image5" image;

Fig. 6c is the global-color-histogram-based image saliency map of the "Image5" image;

Fig. 6d is the regional-color-contrast-based image saliency map of the "Image5" image;

Fig. 6e is the regional-spatial-sparsity-based image saliency map of the "Image5" image;

Fig. 6f is the final image saliency map of the "Image5" image.

Detailed description

The present invention is described in further detail below with reference to the accompanying drawings and embodiments.

The overall implementation block diagram of the region-based image saliency map extraction method proposed by the present invention is shown in Fig. 1; the method comprises the following steps:

① Denote the source image to be processed as {I_i(x,y)}, where i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W is the width of {I_i(x,y)}, H is the height of {I_i(x,y)}, and I_i(x,y) is the color value of the i-th component of the pixel at coordinate (x,y) in {I_i(x,y)}; the 1st component is R, the 2nd component is G, and the 3rd component is B.

② If only local saliency is considered, sharply changing edges or complex background areas in the image receive high saliency while the interior of a smooth target region receives low saliency; global saliency must therefore also be considered. Global saliency refers to how salient each pixel is relative to the whole image. The present invention therefore first obtains the quantized image of {I_i(x,y)} and the global color histogram of that quantized image; then, from the quantized image, determines the color category of each pixel in {I_i(x,y)}; and finally, from the global color histogram and the per-pixel color categories, obtains the global-color-histogram-based image saliency map of {I_i(x,y)}, denoted {HS(x,y)}, where HS(x,y) is the pixel value at coordinate (x,y) in {HS(x,y)} and also the global-color-histogram-based saliency value of the pixel at coordinate (x,y) in {I_i(x,y)}.

In this specific embodiment, the process of step ② is:

②-1. Quantize the color value of each component of each pixel in {I_i(x,y)} to obtain the quantized image of {I_i(x,y)}, denoted {P_i(x,y)}; the color value of the i-th component of the pixel at coordinate (x,y) in {P_i(x,y)} is denoted P_i(x,y), with P_i(x,y) = ⌊I_i(x,y)/16⌋, where the symbol ⌊ ⌋ denotes rounding down (each 0–255 component is thus quantized to 16 levels).

②-2. Compute the global color histogram of {P_i(x,y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, where H(k) is the number of pixels in {P_i(x,y)} that belong to the k-th color.

②-3. From the component color values of each pixel in {P_i(x,y)}, compute the color category of the corresponding pixel in {I_i(x,y)}; the color category of the pixel at coordinate (x,y) in {I_i(x,y)} is denoted k_xy, with k_xy = P_3(x,y) × 256 + P_2(x,y) × 16 + P_1(x,y), where P_3(x,y), P_2(x,y), and P_1(x,y) are the color values of the 3rd, 2nd, and 1st components of the pixel at coordinate (x,y) in {P_i(x,y)}.

②-4. Compute the global-color-histogram-based saliency value of each pixel in {I_i(x,y)}. The saliency value of the pixel at coordinate position (x,y) in {I_i(x,y)} is denoted HS(x,y), with

HS(x,y) = Σ_{k=0}^{4095} ( H(k) × D(k_xy, k) ),

D(k_xy, k) = sqrt( (p_{k_xy,1} - p_{k,1})² + (p_{k_xy,2} - p_{k,2})² + (p_{k_xy,3} - p_{k,3})² ),

where D(k_xy, k) is the Euclidean distance between the k_xy-th color and the k-th color in {H(k) | 0 ≤ k ≤ 4095}. The three quantized components of the k-th color are recovered as p_{k,1} = mod(k, 16), p_{k,2} = mod(⌊k/16⌋, 16) and p_{k,3} = ⌊k/256⌋, and likewise p_{k_xy,1}, p_{k_xy,2} and p_{k_xy,3} for the k_xy-th color; mod() is the remainder operation function.

②-5. From the global-color-histogram-based saliency values of all pixels in {I_i(x,y)}, the global-color-histogram-based image saliency map of {I_i(x,y)} is obtained, denoted {HS(x,y)}.
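Steps ②-1 through ②-5 can be sketched in NumPy as follows. This is a minimal illustration, not the patent's reference implementation; the (R, G, B) channel order of the input array and the unnormalized output are assumptions.

```python
import numpy as np

def global_histogram_saliency(img):
    """Global-color-histogram saliency (steps 2-1..2-5).

    img: H x W x 3 uint8 array whose channels are components 1..3 (R, G, B).
    Returns an H x W float array of HS(x, y) values (unnormalized).
    """
    q = img.astype(np.int64) // 16                       # 2-1: 16 levels per channel
    k = q[..., 2] * 256 + q[..., 1] * 16 + q[..., 0]     # 2-3: color type k_xy
    hist = np.bincount(k.ravel(), minlength=4096)        # 2-2: H(k)

    # Quantized components p_{k,1..3} recovered from every bin index k.
    bins = np.arange(4096)
    comp = np.stack([bins % 16, (bins // 16) % 16, bins // 256], 1).astype(float)

    # 2-4: HS per color = sum_k H(k) * D(k_xy, k); only occurring colors matter.
    present = np.nonzero(hist)[0]
    hs_per_color = np.zeros(4096)
    for idx in present:
        d = np.sqrt(((comp[idx] - comp[present]) ** 2).sum(-1))
        hs_per_color[idx] = float((hist[present] * d).sum())
    return hs_per_color[k]                               # 2-5: per-pixel lookup
```

A lone red pixel on a black background, for example, receives a much larger HS value than the background pixels, since its quantized color is far from the dominant histogram bin.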

③ Divide {I_i(x,y)} into M non-overlapping regions using superpixel segmentation, and re-express {I_i(x,y)} as the set of those M regions, denoted {SP_h}. Considering local saliency, regions of an image that are similar to one another generally have low saliency, so the present invention computes the similarity between every pair of regions in {SP_h}; the similarity between the p-th and q-th regions is denoted Sim(SP_p, SP_q), where M ≥ 1, SP_h is the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, SP_p is the p-th region in {SP_h}, and SP_q is the q-th region in {SP_h}. In this embodiment, M = 200.
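The patent does not fix a particular superpixel algorithm; implementations such as SLIC are common choices. As a dependency-free stand-in that only illustrates the "M non-overlapping regions" interface (real superpixels would additionally follow image boundaries), a regular grid labeling can be sketched:

```python
import numpy as np

def grid_region_labels(height, width, side):
    """Placeholder for superpixel segmentation: assign every pixel a region
    id from a regular side x side grid, giving M = side * side non-
    overlapping regions. Real superpixels (e.g. SLIC) adapt to content."""
    ys = np.minimum(np.arange(height) * side // height, side - 1)
    xs = np.minimum(np.arange(width) * side // width, side - 1)
    return ys[:, None] * side + xs[None, :]
```

Every pixel receives exactly one region id, so the regions partition the image as step ③ requires.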

In this specific embodiment, the similarity Sim(SP_p, SP_q) between the p-th and q-th regions of {SP_h} in step ③ is obtained as follows:

③-1. Quantize the color value of each component of every pixel in each region of {SP_h} to obtain the quantized region of each region. The quantized region of the h-th region in {SP_h} is denoted {P_{h,i}(x_h,y_h)}, and the color value of the i-th component of the pixel at coordinate position (x_h,y_h) in {P_{h,i}(x_h,y_h)} is denoted P_{h,i}(x_h,y_h). Assuming the pixel at coordinate position (x_h,y_h) in {P_{h,i}(x_h,y_h)} has coordinate position (x,y) in {I_i(x,y)}, then P_{h,i}(x_h,y_h) = ⌊I_i(x,y)/16⌋, where 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h and H_h are the width and height of the h-th region in {SP_h}, and ⌊ ⌋ is the round-down (floor) operator.

③-2. Compute the color histogram of the quantized region of each region in {SP_h}. The color histogram of {P_{h,i}(x_h,y_h)} is denoted {H_{SP_h}(k) | 0 ≤ k ≤ 4095}, where H_{SP_h}(k) is the number of pixels in {P_{h,i}(x_h,y_h)} that belong to the k-th color.

③-3. Normalize the color histogram of the quantized region of each region in {SP_h} to obtain the corresponding normalized color histogram. The normalized color histogram obtained from {H_{SP_h}(k) | 0 ≤ k ≤ 4095} is denoted {H′_{SP_h}(k) | 0 ≤ k ≤ 4095}, with

H′_{SP_h}(k) = H_{SP_h}(k) / Σ_{h'=1}^{M} H_{SP_h'}(k),

where H′_{SP_h}(k) is the occurrence probability of pixels of the k-th color in the quantized region {P_{h,i}(x_h,y_h)} of the h-th region of {SP_h}, H_{SP_h'}(k) is the number of pixels of the k-th color in the quantized region {P_{h',i}(x_h',y_h')} of the h'-th region of {SP_h}, 1 ≤ x_h' ≤ W_h', 1 ≤ y_h' ≤ H_h', W_h' and H_h' are the width and height of the h'-th region in {SP_h}, and P_{h',i}(x_h',y_h') is the color value of the i-th component of the pixel at coordinate position (x_h',y_h') in {P_{h',i}(x_h',y_h')}.

③-4. Compute the similarity between the p-th and q-th regions in {SP_h}, denoted Sim(SP_p, SP_q), as Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) × Sim_d(SP_p, SP_q). Here Sim_c(SP_p, SP_q) is the color similarity between the p-th and q-th regions of {SP_h},

Sim_c(SP_p, SP_q) = Σ_{k=0}^{4095} min( H′_{SP_p}(k), H′_{SP_q}(k) ),

and Sim_d(SP_p, SP_q) is the spatial similarity between the p-th and q-th regions of {SP_h}, computed from the Euclidean distance ‖c_p - c_q‖ between the coordinate positions c_p and c_q of the center pixels of the p-th and q-th regions. H′_{SP_p}(k) and H′_{SP_q}(k) are the occurrence probabilities of pixels of the k-th color in the quantized regions {P_{p,i}(x_p,y_p)} and {P_{q,i}(x_q,y_q)} of the p-th and q-th regions, with 1 ≤ x_p ≤ W_p, 1 ≤ y_p ≤ H_p, 1 ≤ x_q ≤ W_q, 1 ≤ y_q ≤ H_q, where W_p, H_p, W_q and H_q are the widths and heights of the p-th and q-th regions; min() is the minimum-value function, and the symbol ‖·‖ denotes the Euclidean distance.
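Step ③-4 combines a histogram-intersection color term with a spatial term. A sketch follows; since the patent's exact expression for Sim_d survives here only as a description (a function of the center-pixel distance), the Gaussian decay form and the `sigma` parameter below are placeholder assumptions.

```python
import numpy as np

def color_similarity(hist_p, hist_q):
    """Sim_c: intersection of two normalized 4096-bin color histograms."""
    return float(np.minimum(hist_p, hist_q).sum())

def spatial_similarity(center_p, center_q, sigma=50.0):
    """Sim_d placeholder: decays with the Euclidean distance between the
    two regions' center pixels (exact form and sigma are assumptions)."""
    d = np.linalg.norm(np.asarray(center_p, float) - np.asarray(center_q, float))
    return float(np.exp(-d / sigma))

def region_similarity(hist_p, hist_q, center_p, center_q, sigma=50.0):
    """Step 3-4: Sim(SP_p, SP_q) = Sim_c x Sim_d."""
    return color_similarity(hist_p, hist_q) * spatial_similarity(center_p, center_q, sigma)
```

Two regions with identical normalized histograms and coincident centers score 1.0; similarity falls off as either the color distributions or the centers drift apart.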

④ From the similarities between the regions of {SP_h}, obtain the region-color-contrast-based image saliency map of {I_i(x,y)}, denoted {NGC(x,y)}, where NGC(x,y) is the pixel value at coordinate position (x,y) in {NGC(x,y)}.

In this specific embodiment, step ④ proceeds as follows:

④-1. Compute the color contrast of each region in {SP_h}. The color contrast of the h-th region is denoted NGC_{SP_h}, with

NGC_{SP_h} = Σ_{q=1}^{M} W(SP_h, SP_q) × ‖m_{SP_h} - m_{SP_q}‖,

where SP_h is the h-th region and SP_q the q-th region in {SP_h}; W(SP_h, SP_q) is a weight determined by the total number of pixels contained in the region and by the spatial similarity Sim_d(SP_h, SP_q) between the h-th and q-th regions, Sim_d itself being a function of the Euclidean distance between the coordinate positions of the two regions' center pixels; m_{SP_h} is the color mean vector of the h-th region, obtained by averaging the color vectors of all pixels in that region; m_{SP_q} is the color mean vector of the q-th region; and the symbol ‖·‖ denotes the Euclidean distance.

④-2. Normalize the color contrast of each region in {SP_h} to obtain the corresponding normalized color contrast. The normalized color contrast of the h-th region is denoted NGC′_{SP_h}, with

NGC′_{SP_h} = (NGC_{SP_h} - NGC_min) / (NGC_max - NGC_min),

where NGC_min is the smallest and NGC_max the largest color contrast among the M regions of {SP_h}.

④-3. Compute the color-contrast-based saliency value of each region in {SP_h}. The value for the h-th region is denoted NGC″_{SP_h}, with

NGC″_{SP_h} = Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × NGC′_{SP_q} ) / Σ_{q=1}^{M} Sim(SP_h, SP_q),

i.e. a similarity-weighted average of the normalized color contrasts over all regions, where Sim(SP_h, SP_q) is the similarity between the h-th and q-th regions in {SP_h}.

④-4. Assign the color-contrast-based saliency value of each region in {SP_h} to all pixels in the corresponding region; that is, for the h-th region, the color-contrast-based saliency value of that region becomes the saliency value of every pixel in it. This yields the region-color-contrast-based image saliency map of {I_i(x,y)}, denoted {NGC(x,y)}, where NGC(x,y) is the pixel value at coordinate position (x,y) in {NGC(x,y)}.
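Steps ④-2 and ④-3 (and their counterparts ⑤-2 and ⑤-3) amount to a min-max normalization followed by a similarity-weighted average over all regions. A small sketch, under the assumption that the per-region scores and the M×M similarity matrix are already available:

```python
import numpy as np

def refine_region_scores(values, sim):
    """Min-max normalize per-region scores (step 4-2), then replace each
    region's score by the similarity-weighted average over all regions
    (step 4-3). values: (M,) raw scores; sim: (M, M) similarity matrix."""
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min())   # min-max normalization
    w = np.asarray(sim, dtype=float)
    return (w @ v) / w.sum(axis=1)            # similarity-weighted average
```

With the identity as similarity matrix the scores are only normalized; with an all-ones matrix every region receives the global mean, showing how similar regions pull each other's scores together.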

⑤ From the similarities between the regions of {SP_h}, obtain the region-spatial-sparsity-based image saliency map of {I_i(x,y)}, denoted {NSS(x,y)}, where NSS(x,y) is the pixel value at coordinate position (x,y) in {NSS(x,y)}.

In this specific embodiment, step ⑤ proceeds as follows:

⑤-1. Compute the spatial sparsity of each region in {SP_h}. The spatial sparsity of the h-th region is denoted NSS_{SP_h}, with

NSS_{SP_h} = Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × D_{SP_q} ) / Σ_{q=1}^{M} Sim(SP_h, SP_q),

where Sim(SP_h, SP_q) is the similarity between the h-th and q-th regions in {SP_h}, and D_{SP_q} is the Euclidean distance between the center pixel of the q-th region and the center pixel of {I_i(x,y)}.

⑤-2. Normalize the spatial sparsity of each region in {SP_h} to obtain the corresponding normalized spatial sparsity. The normalized spatial sparsity of the h-th region is denoted NSS′_{SP_h}, with

NSS′_{SP_h} = (NSS_{SP_h} - NSS_min) / (NSS_max - NSS_min),

where NSS_min is the smallest and NSS_max the largest spatial sparsity among the M regions of {SP_h}.

⑤-3. Compute the spatial-sparsity-based saliency value of each region in {SP_h}. The value for the h-th region is denoted NSS″_{SP_h}, with

NSS″_{SP_h} = Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × NSS′_{SP_q} ) / Σ_{q=1}^{M} Sim(SP_h, SP_q).

⑤-4. Assign the spatial-sparsity-based saliency value of each region in {SP_h} to all pixels in the corresponding region; that is, for the h-th region, the spatial-sparsity-based saliency value of that region becomes the saliency value of every pixel in it. This yields the region-spatial-sparsity-based image saliency map of {I_i(x,y)}, denoted {NSS(x,y)}, where NSS(x,y) is the pixel value at coordinate position (x,y) in {NSS(x,y)}.
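Steps ⑤-1 through ⑤-3 can be sketched analogously. The 1-based image-center coordinate ((W+1)/2, (H+1)/2) used below is an assumption, and the per-region centers and similarity matrix are taken as given:

```python
import numpy as np

def spatial_sparsity_scores(centers, sim, width, height):
    """Steps 5-1..5-3: similarity-weighted average of each region's
    center distance to the image center, then min-max normalization,
    then a second similarity-weighted average.
    centers: (M, 2) array of (x, y) region centers; sim: (M, M)."""
    c = np.asarray(centers, dtype=float)
    img_center = np.array([(width + 1) / 2.0, (height + 1) / 2.0])
    d = np.linalg.norm(c - img_center, axis=1)           # D_{SP_q}
    w = np.asarray(sim, dtype=float)
    nss = (w @ d) / w.sum(axis=1)                        # 5-1
    nss = (nss - nss.min()) / (nss.max() - nss.min())    # 5-2
    return (w @ nss) / w.sum(axis=1)                     # 5-3
```

Regions sitting near the image center end up with low scores and peripheral regions with high scores, which matches the center-bias intuition behind spatial sparsity.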

⑥ Fuse the global-color-histogram-based image saliency map {HS(x,y)}, the region-color-contrast-based image saliency map {NGC(x,y)} and the region-spatial-sparsity-based image saliency map {NSS(x,y)} of {I_i(x,y)} to obtain the final image saliency map of {I_i(x,y)}, denoted {Sal(x,y)}. The pixel value at coordinate position (x,y) in {Sal(x,y)} is denoted Sal(x,y), with Sal(x,y) = HS(x,y) × NGC(x,y) × NSS(x,y).
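The fusion of step ⑥ is a pixel-wise product of the three maps; a one-line sketch (the optional rescaling to [0, 1] for display is an addition, not part of the patent's formula):

```python
import numpy as np

def fuse_saliency(hs, ngc, nss, rescale=False):
    """Step 6: Sal(x, y) = HS(x, y) * NGC(x, y) * NSS(x, y)."""
    sal = np.asarray(hs, float) * np.asarray(ngc, float) * np.asarray(nss, float)
    if rescale and sal.max() > 0:        # optional display normalization
        sal = sal / sal.max()
    return sal
```

Because the fusion is multiplicative, a pixel must score well on all three cues to remain salient in the final map.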

The method of the present invention was applied to extract saliency maps for five image groups, Image1 through Image5, from the MSRA salient-object image database provided by Microsoft Research Asia. For each image group, six figures are shown: (a) the original image; (b) the ground-truth saliency map; (c) the global-color-histogram-based image saliency map; (d) the region-color-contrast-based image saliency map; (e) the region-spatial-sparsity-based image saliency map; and (f) the final image saliency map. Figures 2a through 2f correspond to Image1, Figures 3a through 3f to Image2, Figures 4a through 4f to Image3, Figures 5a through 5f to Image4, and Figures 6a through 6f to Image5. As Figures 2a through 6f show, the image saliency maps obtained with the method of the present invention account for salient variation in both global and local regions and therefore conform well to the salient semantics of the images.

Claims (4)

1. A region-based image saliency map extraction method, characterized by comprising the following steps:

① denoting the source image to be processed as {I_i(x,y)}, wherein i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of {I_i(x,y)}, H denotes the height of {I_i(x,y)}, and I_i(x,y) denotes the color value of the i-th component of the pixel at coordinate position (x,y) in {I_i(x,y)}, the 1st component being the R component, the 2nd component the G component and the 3rd component the B component;

② first acquiring the quantized image of {I_i(x,y)} and the global color histogram of the quantized image; then obtaining, from the quantized image of {I_i(x,y)}, the color type of each pixel in {I_i(x,y)}; and then obtaining, from the global color histogram of the quantized image and the color type of each pixel in {I_i(x,y)}, the global-color-histogram-based image saliency map of {I_i(x,y)}, denoted {HS(x,y)}, wherein HS(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {HS(x,y)} and also denotes the global-color-histogram-based saliency value of the pixel at coordinate position (x,y) in {I_i(x,y)};

the specific process of step ② being:

②-1, quantizing the color value of each component of each pixel in {I_i(x,y)} to obtain the quantized image of {I_i(x,y)}, denoted {P_i(x,y)}, the color value of the i-th component of the pixel at coordinate position (x,y) in {P_i(x,y)} being denoted P_i(x,y), with P_i(x,y) = ⌊I_i(x,y)/16⌋, wherein ⌊ ⌋ is the round-down (floor) symbol;

②-2, calculating the global color histogram of {P_i(x,y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, wherein H(k) denotes the number of all pixels in {P_i(x,y)} belonging to the k-th color;

②-3, calculating, from the color values of the components of each pixel in {P_i(x,y)}, the color type of the corresponding pixel in {I_i(x,y)}, the color type of the pixel at coordinate position (x,y) in {I_i(x,y)} being denoted k_xy, with k_xy = P_3(x,y) × 256 + P_2(x,y) × 16 + P_1(x,y), wherein P_3(x,y), P_2(x,y) and P_1(x,y) denote the color values of the 3rd, 2nd and 1st components, respectively, of the pixel at coordinate position (x,y) in {P_i(x,y)};

②-4, calculating the global-color-histogram-based saliency value of each pixel in {I_i(x,y)}, the value for the pixel at coordinate position (x,y) being denoted HS(x,y), with HS(x,y) = Σ_{k=0}^{4095} ( H(k) × D(k_xy, k) ) and D(k_xy, k) = sqrt( (p_{k_xy,1} - p_{k,1})² + (p_{k_xy,2} - p_{k,2})² + (p_{k_xy,3} - p_{k,3})² ), wherein D(k_xy, k) denotes the Euclidean distance between the k_xy-th color and the k-th color in {H(k) | 0 ≤ k ≤ 4095}; p_{k,1} = mod(k, 16), p_{k,2} = mod(⌊k/16⌋, 16) and p_{k,3} = ⌊k/256⌋ denote the color values of the 1st, 2nd and 3rd components corresponding to the k-th color, and p_{k_xy,1}, p_{k_xy,2} and p_{k_xy,3} likewise for the k_xy-th color; and mod() is the remainder-taking operation function;

②-5, obtaining, from the global-color-histogram-based saliency value of each pixel in {I_i(x,y)}, the global-color-histogram-based image saliency map of {I_i(x,y)}, denoted {HS(x,y)};

③ dividing {I_i(x,y)} into M non-overlapping regions using superpixel segmentation, re-expressing {I_i(x,y)} as the set of M regions, denoted {SP_h}, and calculating the similarity between the regions of {SP_h}, the similarity between the p-th and q-th regions being denoted Sim(SP_p, SP_q), wherein M ≥ 1, SP_h denotes the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, SP_p denotes the p-th region in {SP_h}, and SP_q denotes the q-th region in {SP_h};

④ obtaining, from the similarities between the regions of {SP_h}, the region-color-contrast-based image saliency map of {I_i(x,y)}, denoted {NGC(x,y)}, wherein NGC(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {NGC(x,y)};

⑤ obtaining, from the similarities between the regions of {SP_h}, the region-spatial-sparsity-based image saliency map of {I_i(x,y)}, denoted {NSS(x,y)}, wherein NSS(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {NSS(x,y)};

⑥ fusing the global-color-histogram-based image saliency map {HS(x,y)}, the region-color-contrast-based image saliency map {NGC(x,y)} and the region-spatial-sparsity-based image saliency map {NSS(x,y)} of {I_i(x,y)} to obtain the final image saliency map of {I_i(x,y)}, denoted {Sal(x,y)}, the pixel value of the pixel at coordinate position (x,y) in {Sal(x,y)} being denoted Sal(x,y), with Sal(x,y) = HS(x,y) × NGC(x,y) × NSS(x,y).

2. The region-based image saliency map extraction method according to claim 1, wherein the similarity Sim(SP_p, SP_q) between the p-th and q-th regions of {SP_h} in step ③ is obtained by:

③-1, quantizing the color value of each component of each pixel in each region of {SP_h} to obtain the quantized region of each region, the quantized region of the h-th region in {SP_h} being denoted {P_{h,i}(x_h,y_h)} and the color value of the i-th component of the pixel at coordinate position (x_h,y_h) therein being denoted P_{h,i}(x_h,y_h); assuming the pixel at coordinate position (x_h,y_h) in {P_{h,i}(x_h,y_h)} has coordinate position (x,y) in {I_i(x,y)}, then P_{h,i}(x_h,y_h) = ⌊I_i(x,y)/16⌋, wherein 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h denotes the width and H_h the height of the h-th region in {SP_h}, and ⌊ ⌋ is the round-down (floor) symbol;

③-2, calculating the color histogram of the quantized region of each region in {SP_h}, the color histogram of {P_{h,i}(x_h,y_h)} being denoted {H_{SP_h}(k) | 0 ≤ k ≤ 4095}, wherein H_{SP_h}(k) denotes the number of all pixels in {P_{h,i}(x_h,y_h)} belonging to the k-th color;

③-3, normalizing the color histogram of the quantized region of each region in {SP_h} to obtain the corresponding normalized color histogram, the normalized color histogram obtained from {H_{SP_h}(k) | 0 ≤ k ≤ 4095} being denoted {H′_{SP_h}(k) | 0 ≤ k ≤ 4095}, with H′_{SP_h}(k) = H_{SP_h}(k) / Σ_{h'=1}^{M} H_{SP_h'}(k), wherein H′_{SP_h}(k) denotes the occurrence probability of pixels of the k-th color in the quantized region {P_{h,i}(x_h,y_h)} of the h-th region of {SP_h}, H_{SP_h'}(k) denotes the number of all pixels of the k-th color in the quantized region {P_{h',i}(x_h',y_h')} of the h'-th region of {SP_h}, 1 ≤ x_h' ≤ W_h', 1 ≤ y_h' ≤ H_h', W_h' denotes the width and H_h' the height of the h'-th region in {SP_h}, and P_{h',i}(x_h',y_h') denotes the color value of the i-th component of the pixel at coordinate position (x_h',y_h') in {P_{h',i}(x_h',y_h')};

③-4, calculating the similarity between the p-th and q-th regions in {SP_h}, denoted Sim(SP_p, SP_q), as Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) × Sim_d(SP_p, SP_q), wherein Sim_c(SP_p, SP_q) denotes the color similarity between the p-th and q-th regions of {SP_h}, with Sim_c(SP_p, SP_q) = Σ_{k=0}^{4095} min( H′_{SP_p}(k), H′_{SP_q}(k) ), and Sim_d(SP_p, SP_q) denotes the spatial similarity between the p-th and q-th regions of {SP_h}, computed from the Euclidean distance ‖c_p - c_q‖ between the coordinate positions c_p and c_q of the center pixels of the p-th and q-th regions; H′_{SP_p}(k) and H′_{SP_q}(k) denote the occurrence probabilities of pixels of the k-th color in the quantized regions {P_{p,i}(x_p,y_p)} and {P_{q,i}(x_q,y_q)} of the p-th and q-th regions, wherein 1 ≤ x_p ≤ W_p, 1 ≤ y_p ≤ H_p, 1 ≤ x_q ≤ W_q, 1 ≤ y_q ≤ H_q, W_p and H_p denote the width and height of the p-th region, W_q and H_q denote the width and height of the q-th region, P_{p,i}(x_p,y_p) and P_{q,i}(x_q,y_q) denote the color values of the i-th component of the pixels at coordinate positions (x_p,y_p) and (x_q,y_q) in the respective quantized regions, min() is the minimum-value function, and the symbol ‖·‖ denotes the Euclidean distance.

3. The region-based image saliency map extraction method according to claim 2, wherein the specific process of step ④ is:

④-1, calculating the color contrast of each region in {SP_h}, the color contrast of the h-th region being denoted NGC_{SP_h}, with NGC_{SP_h} = Σ_{q=1}^{M} W(SP_h, SP_q) × ‖m_{SP_h} - m_{SP_q}‖, wherein SP_h denotes the h-th region and SP_q the q-th region in {SP_h}; W(SP_h, SP_q) is a weight determined by the total number of pixels contained in the region and by the spatial similarity Sim_d(SP_h, SP_q) between the h-th and q-th regions, Sim_d itself being a function of the Euclidean distance between the coordinate positions of the center pixels of the two regions; m_{SP_h} denotes the color mean vector of the h-th region, obtained by averaging the color vectors of all pixels in that region; m_{SP_q} denotes the color mean vector of the q-th region; and the symbol ‖·‖ denotes the Euclidean distance;

④-2, normalizing the color contrast of each region in {SP_h} to obtain the corresponding normalized color contrast, the normalized color contrast of the h-th region being denoted NGC′_{SP_h}, with NGC′_{SP_h} = (NGC_{SP_h} - NGC_min) / (NGC_max - NGC_min), wherein NGC_min denotes the minimum and NGC_max the maximum color contrast among the M regions of {SP_h};

④-3, calculating the color-contrast-based saliency value of each region in {SP_h}, the value for the h-th region being denoted NGC″_{SP_h}, with NGC″_{SP_h} = Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × NGC′_{SP_q} ) / Σ_{q=1}^{M} Sim(SP_h, SP_q), wherein Sim(SP_h, SP_q) denotes the similarity between the h-th and q-th regions in {SP_h};

④-4, taking the color-contrast-based saliency value of each region in {SP_h} as the saliency value of all pixels in the corresponding region, thereby obtaining the region-color-contrast-based image saliency map of {I_i(x,y)}, denoted {NGC(x,y)}, wherein NGC(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {NGC(x,y)}.

4. The region-based image saliency map extraction method according to claim 3, wherein the specific process of step ⑤ is:

⑤-1, calculating the spatial sparsity of each region in {SP_h}, the spatial sparsity of the h-th region being denoted NSS_{SP_h}, with NSS_{SP_h} = Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × D_{SP_q} ) / Σ_{q=1}^{M} Sim(SP_h, SP_q), wherein Sim(SP_h, SP_q) denotes the similarity between the h-th and q-th regions in {SP_h}, and D_{SP_q} denotes the Euclidean distance between the center pixel of the q-th region and the center pixel of {I_i(x,y)};

⑤-2, normalizing the spatial sparsity of each region in {SP_h} to obtain the corresponding normalized spatial sparsity, the normalized spatial sparsity of the h-th region being denoted NSS′_{SP_h}, with NSS′_{SP_h} = (NSS_{SP_h} - NSS_min) / (NSS_max - NSS_min), wherein NSS_min denotes the minimum and NSS_max the maximum spatial sparsity among the M regions of {SP_h};

⑤-3. Calculate the spatial-sparsity-based saliency value of each region in {SP_h}. Denote the spatial-sparsity-based saliency value of the h-th region in {SP_h} as S_SS(SP_h) [formula given as an image in the source, not reproduced];

⑤-4. Take the spatial-sparsity-based saliency value of each region in {SP_h} as the saliency value of all pixel points in the corresponding region, thereby obtaining the image saliency map of {I_i(x, y)} based on region spatial sparsity, denoted as {NSS(x, y)}, where NSS(x, y) represents the pixel value of the pixel point whose coordinate position is (x, y) in {NSS(x, y)}.
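The spatial-sparsity steps ⑤-1 to ⑤-3 can be sketched as below. Because the patent's formula images are not reproduced, the similarity-weighted averaging of center-to-image-center distances and the assumption that a smaller spread maps to a higher saliency value are illustrative guesses consistent with the variable definitions, not the patent's exact formulas:

```python
import numpy as np

def spatial_sparsity_saliency(centers, sim, image_center=(0.5, 0.5)):
    """Sketch of steps ⑤-1 to ⑤-3: similarity-weighted spatial sparsity per
    region, min-max normalized, then mapped to a saliency value.

    centers : (M, 2) region-center coordinates, normalized to [0, 1]
    sim     : (M, M) region-similarity matrix Sim(SP_h, SP_q), assumed given
    """
    # d_h: Euclidean distance from each region center to the image center
    d_center = np.linalg.norm(centers - np.asarray(image_center), axis=1)
    # Similarity-weighted average spread of each region's similar regions
    ss = (sim * d_center[None, :]).sum(axis=1) / sim.sum(axis=1)
    # Min-max normalization as in step ⑤-2
    ss_min, ss_max = ss.min(), ss.max()
    nss = (ss - ss_min) / (ss_max - ss_min) if ss_max > ss_min else np.zeros_like(ss)
    # Assumed mapping: regions concentrated near the center are more salient
    return 1.0 - nss
```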

CN201310651864.5A 2013-12-05 2013-12-05 Region-based image saliency map extracting method Active CN103632153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310651864.5A CN103632153B (en) 2013-12-05 2013-12-05 Region-based image saliency map extracting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310651864.5A CN103632153B (en) 2013-12-05 2013-12-05 Region-based image saliency map extracting method

Publications (2)

Publication Number Publication Date
CN103632153A CN103632153A (en) 2014-03-12
CN103632153B true CN103632153B (en) 2017-01-11

Family

ID=50213181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310651864.5A Active CN103632153B (en) 2013-12-05 2013-12-05 Region-based image saliency map extracting method

Country Status (1)

Country Link
CN (1) CN103632153B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050674B (en) * 2014-06-27 2017-01-25 中国科学院自动化研究所 Salient region detection method and device
CN104133956B (en) * 2014-07-25 2017-09-12 小米科技有限责任公司 Handle the method and device of picture
CN104134217B (en) * 2014-07-29 2017-02-15 中国科学院自动化研究所 Video salient object segmentation method based on super voxel graph cut
CN104392233B (en) * 2014-11-21 2017-06-06 宁波大学 A kind of image saliency map extracting method based on region
CN106611427B (en) * 2015-10-21 2019-11-15 中国人民解放军理工大学 Video Saliency Detection Method Based on Candidate Region Fusion
CN105512663A (en) * 2015-12-02 2016-04-20 南京邮电大学 Significance detection method based on global and local contrast
CN106611178A (en) * 2016-03-10 2017-05-03 四川用联信息技术有限公司 Salient object identification method
CN106709512B (en) * 2016-12-09 2020-03-17 河海大学 Infrared target detection method based on local sparse representation and contrast

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102867313A (en) * 2012-08-29 2013-01-09 杭州电子科技大学 Visual saliency detection method with fusion of region color and HoG (histogram of oriented gradient) features
CN103218832A (en) * 2012-10-15 2013-07-24 上海大学 Visual saliency algorithm based on overall color contrast ratio and space distribution in image

Non-Patent Citations (2)

Title
Context-Aware Saliency Detection; Stas Goferman et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; Oct. 2012; vol. 34, no. 10, pp. 1915-1926 *
Stereoscopic image quality assessment method based on perceptual importance; Duan Fenfang et al.; Opto-Electronic Engineering; Oct. 2013; vol. 40, no. 10, pp. 70-76 *

Also Published As

Publication number Publication date
CN103632153A (en) 2014-03-12

Similar Documents

Publication Publication Date Title
CN103632153B (en) 2017-01-11 Region-based image saliency map extracting method
CN110188705B (en) 2022-05-06 Remote traffic sign detection and identification method suitable for vehicle-mounted system
US11830230B2 (en) 2023-11-28 Living body detection method based on facial recognition, and electronic device and storage medium
US10803554B2 (en) 2020-10-13 Image processing method and device
CN105701508B (en) 2017-12-15 Global local optimum model and conspicuousness detection algorithm based on multistage convolutional neural networks
CN106203430B (en) 2017-11-03 A kind of conspicuousness object detecting method based on foreground focused degree and background priori
CN104392233B (en) 2017-06-06 A kind of image saliency map extracting method based on region
CN103996198B (en) 2017-11-21 The detection method of area-of-interest under Complex Natural Environment
CN105354581B (en) 2018-11-16 The color image feature extracting method of Fusion of Color feature and convolutional neural networks
CN103020985B (en) 2015-12-09 A kind of video image conspicuousness detection method based on field-quantity analysis
CN101271525A (en) 2008-09-24 A Fast Method for Obtaining Feature Saliency Maps of Image Sequences
CN106960176B (en) 2020-03-10 Pedestrian gender identification method based on transfinite learning machine and color feature fusion
CN110390308B (en) 2022-09-30 Video behavior identification method based on space-time confrontation generation network
CN103309982B (en) 2016-02-10 A kind of Remote Sensing Image Retrieval method of view-based access control model significant point feature
CN109948593A (en) 2019-06-28 Crowd Counting Method Based on MCNN Combined with Global Density Features
CN104680546A (en) 2015-06-03 Image salient object detection method
CN106156777A (en) 2016-11-23 Textual image detection method and device
CN103093470A (en) 2013-05-08 Rapid multi-modal image synergy segmentation method with unrelated scale feature
CN105069774A (en) 2015-11-18 Object segmentation method based on multiple-instance learning and graph cuts optimization
CN103678552A (en) 2014-03-26 Remote-sensing image retrieving method and system based on salient regional features
CN110930384A (en) 2020-03-27 Crowd counting method, device, device and medium based on density information
CN109858349B (en) 2022-11-15 Traffic sign identification method and device based on improved YOLO model
CN115761484A (en) 2023-03-07 Cloud detection method and device based on remote sensing image
CN103324753B (en) 2016-03-23 Based on the image search method of symbiotic sparse histogram
CN102831621B (en) 2014-11-05 Video significance processing method based on spectral analysis

Legal Events

Date Code Title Description
2014-03-12 PB01 Publication
2014-04-09 C10 Entry into substantive examination
2014-04-09 SE01 Entry into force of request for substantive examination
2017-01-11 GR01 Patent grant
2020-01-07 TR01 Transfer of patent right

Effective date of registration: 20191219

Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee after: Huzhou You Yan Intellectual Property Service Co., Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

2020-07-21 TR01 Transfer of patent right

Effective date of registration: 20200702

Address after: 313000 Room 121,221, Building 3, 1366 Hongfeng Road, Wuxing District, Huzhou City, Zhejiang Province

Patentee after: ZHEJIANG DUYAN INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd.
