CN112200746A - Dehazing method and device for foggy traffic scene images - Google Patents
- Fri Jan 08 2021
Dehazing method and device for foggy traffic scene images
Publication number
- CN112200746A (application number CN202011109313.2A)
Authority
- CN (China)
Prior art keywords
- traffic scene
- scene image
- channel
- image
- foggy
Prior art date
- 2020-10-16
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30192—Weather; Meteorology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a dehazing method and device for foggy traffic scene images. The method is as follows. A: according to the different fog concentrations of the distant region, the near region and the transition region, calculate the atmospheric light value of each corresponding region of the foggy traffic scene image separately; then compute a preliminarily dehazed traffic scene image from the atmospheric scattering model, using the transmission map and atmospheric light value of each channel in the HSI color space. B: apply a global brightness boost to the preliminarily dehazed image based on a preset I-channel threshold; the threshold is set from the I-channel pixels of the sky region of the preliminarily dehazed image, and the sky region is obtained by segmenting the foggy traffic scene image using dark-channel and relative-energy features. C: apply contrast-limited adaptive histogram equalization and guided filtering to the image from step B to obtain the final dehazed traffic scene image. The invention dehazes traffic scene images quickly and effectively.
Description
Technical Field
The invention belongs to the field of image information processing, and particularly relates to a defogging method and device for foggy-day traffic scene images.
Background
Fog, a common natural phenomenon, forms when water vapour in the air condenses near the cooler ground surface into small water droplets suspended in the air. In video surveillance, fog reduces the visibility of objects, so the images captured by a sensor are severely degraded, which hampers subsequent processing and weakens the robustness of vision systems for target tracking, intelligent transportation, video surveillance, aerial photography and the like. In intelligent transportation and regional video surveillance in particular, fog seriously degrades monitoring: it causes intelligent transportation systems to misjudge vehicle information and blurs the images captured by regional surveillance. The former problem is especially severe, since fog frequently causes traffic accidents, congestion and even flight cancellations. Research on defogging is therefore necessary.
At present, many scholars worldwide have studied the image degradation caused by haze. Current defogging methods fall broadly into two categories: non-model-based image defogging algorithms and model-based image defogging algorithms.
Non-model-based image defogging algorithms improve brightness and contrast with image enhancement techniques, without considering the physical cause of image degradation in hazy weather, so as to improve the visual quality of the hazy image. They are relatively simple and widely applicable, but suffer from information loss, image distortion and similar problems. In essence they only reduce the influence of haze on the image rather than removing it at the root, so the defogging is incomplete. Common non-model-based algorithms include histogram equalization, Retinex and wavelet-transform methods.
Model-based image defogging algorithms analyse why images degrade in fog, build a model of the degradation process, and recover the fog-free image by solving the inverse problem. The most common and effective model at present is the atmospheric scattering model proposed by McCartney, on which many defogging algorithms have been built. Because these algorithms analyse the formation mechanism of the foggy image, and hence the cause of degradation, their results are more reliable. The earliest defogging algorithms used multiple frames of a scene together with the atmospheric scattering model to solve the model equations for a fog-free image. In recent years, single-image defogging based on the atmospheric scattering model has been widely studied: prior knowledge is added to the model, the transmittance and atmospheric light value of the image are estimated, and the fog-free image is then recovered from the atmospheric scattering formula.
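For orientation, the sketch below shows how a fog-free image is recovered once a transmission map and an atmospheric light value have been estimated under this model; the lower bound on the transmission is a common implementation safeguard, not part of the patent.

```python
import numpy as np

def recover_radiance(hazy, t, A, t_min=0.1):
    """hazy: HxWx3 float image in [0,1]; t: HxW transmission map; A: scalar
    (or length-3) atmospheric light. Inverts I = J*t + A*(1 - t)."""
    t = np.clip(t, t_min, 1.0)[..., None]   # lower bound avoids division blow-up
    J = (hazy - A) / t + A                  # scene radiance estimate
    return np.clip(J, 0.0, 1.0)
```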
Among image defogging patents, Hu et al. (publication No. CN107966412A) obtain the total light intensity and the polarization difference by correcting two light-intensity maps captured in orthogonal polarization states, and recover the defogged image with a differential-polarization restoration model; however, capturing two orthogonally polarized intensity maps places high demands on equipment and cannot be deployed at scale. Tang et al. (publication No. CN107085830A) estimate the atmospheric transmittance with a two-region filter and a propagation filter and then optimize the atmospheric light with an adaptive method to recover the fog-free image, but this method cannot handle low-brightness foggy images effectively. Zuo et al. (publication No. CN110992285A) split the foggy image into content features and detail features with a hierarchical neural network, defog each separately while exchanging intermediate results, and then restore the fog-free image, but this method is unsuitable when computing power is limited. All of these are general-purpose defogging algorithms: they are not optimized for traffic scenes and each has its own limitations.
Against this background, a defogging method for traffic scene images with strong robustness and low time complexity is particularly important.
Disclosure of Invention
The invention provides a defogging method and device for foggy traffic scene images based on sky region segmentation and color space conversion, solving the technical problems that existing image defogging methods are not optimized for traffic scenes and defog low-brightness foggy images poorly.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
A defogging method for foggy traffic scene images comprises the following steps:
Step A: calculate the atmospheric light values of the corresponding regions of the foggy traffic scene image separately, according to the different fog concentrations of the distant region, the near region and the transition region; then compute a preliminarily defogged traffic scene image from the atmospheric scattering model, using the transmission map and atmospheric light value of each channel in the HSI color space;
Step B: apply a global brightness boost to the preliminarily defogged traffic scene image based on a preset I-channel threshold;
the preset I-channel threshold is set from the I-channel pixels, in the HSI color space, of the sky region of the preliminarily defogged traffic scene image; the sky region is obtained by segmenting the foggy traffic scene image based on dark-channel features and relative-energy features;
Step C: apply contrast-limited adaptive histogram equalization and guided filtering to the image obtained in step B to obtain the final defogged traffic scene image.
In a more preferred technical solution, the dark-channel feature is computed by a formula (reproduced as an image in the original filing) over the following quantities:
i and j are the row and column indices of a pixel in the image, M is the total number of rows of the image, I is the image composed of the r, g and b channels, c denotes any one of the r, g and b color channels, I_c denotes the c-th color channel component of image I, I_c(i, j) denotes that component at pixel (i, j), and F_DC++(i, j) denotes the dark-channel feature of image I at pixel (i, j).
In a more preferred technical solution, the relative-energy feature is computed by a formula (reproduced as an image in the original filing) over the following quantities:
Z(i, j) is an intermediate variable obtained from the horizontal and vertical second derivatives of a Gaussian function, α is the maximum of Z(i, j), k is the contrast gain, τ is the noise threshold, ⊗ denotes convolution, g_h and g_v are the horizontal and vertical second derivatives of the Gaussian function, I(i, j) is the pixel value at pixel (i, j), R, G and B are the r, g and b channel components, and F_CE++(i, j) denotes the relative-energy feature of image I at pixel (i, j).
In a more preferred technical solution, the sky region is obtained by segmenting the foggy traffic scene image based on the dark-channel and relative-energy features as follows:
a1, extract the dark-channel feature and the relative-energy feature of the foggy traffic scene image;
a2, roughly segment the foggy traffic scene image into a sky region and a non-sky region with K-means clustering on the dark-channel and relative-energy features;
a3, select high-confidence pixels from the roughly segmented sky and non-sky regions as positive and negative samples respectively; train a machine-learning model on the dark-channel and relative-energy features of these samples to obtain a fine sky-region classifier;
a4, classify the pixels of the foggy traffic scene image with the fine sky-region classifier using the dark-channel and relative-energy features, and obtain the corresponding binary image of the fine sky-region segmentation from the classification result.
In a more preferred technical solution, the K-means rough segmentation of the foggy traffic scene image proceeds as follows:
b1, convert the dark-channel feature and the relative-energy feature of the foggy traffic scene image from two-dimensional matrices into one-dimensional column vectors and normalize them;
b2, set the number of K-means clusters to 2 and cluster the pixels of the foggy traffic scene image on the two column vectors of dark-channel and relative-energy features to obtain cluster labels and cluster centres;
b3, determine the class of every pixel in the foggy traffic scene image from the cluster centres, thereby determining the sky region and the non-sky region;
b4, convert the cluster labels from a one-dimensional column vector back into a two-dimensional matrix to obtain the rough segmentation image of the foggy traffic scene image.
In a more preferred technical solution, positive and negative samples are selected from the roughly segmented sky and non-sky regions as follows:
c1, traverse the binary rough-segmentation image (sky vs. non-sky) column by column; if the sky region lies in the upper part of a column and occupies at least a preset proportion of the column height, record the column index;
c2, from the columns recorded in c1, extract the longest run of columns belonging to the sky region; the pixels in the upper preset proportion of these columns are high-confidence sky pixels and are used as positive samples;
c3, traverse the binary rough-segmentation image column by column; if the non-sky region lies in the lower part of a column and occupies at least a preset proportion of the column height, record the column index;
c4, from the columns recorded in c3, extract the longest run of columns belonging to the non-sky region; the pixels in the lower preset proportion of these columns are high-confidence non-sky pixels and are used as negative samples.
In a more preferred technical solution, the preset I-channel threshold is set as follows:
count the number N_sky of sky-region pixels in the preliminarily defogged traffic scene image; then sort the I-channel pixel values of the preliminarily defogged traffic scene image in descending order and take the value of the N_sky-th pixel in this ordering as the preset I-channel threshold T;
the global brightness boost of the preliminarily defogged traffic scene image in step B is specifically: multiply every I-channel pixel value of the preliminarily defogged traffic scene image by 1/T and replace the original I-channel pixel value with the result.
In a more preferred technical solution, the distant region, the near region and the transition region in step A are obtained as follows: cluster all pixels of the foggy traffic scene image on the dark-channel feature, thereby dividing the image into a distant region, a near region and a transition region;
the atmospheric light values of the foggy traffic scene image are calculated as follows: compute the atmospheric light value of the distant region by quadtree decomposition, that of the near region by a statistical method and that of the transition region by linear interpolation, and then apply mean filtering to all the obtained atmospheric light values.
In a more preferred technical solution, the preliminarily defogged traffic scene image is computed from the atmospheric scattering model using the transmission map and atmospheric light value of each channel in the HSI color space, specifically:
d1, convert the foggy traffic scene image from the RGB color space to the HSI color space to obtain its luminance component I_J, saturation component S_J and hue component H_J;
d2, compute the transmission maps of the foggy traffic scene image in the I channel and the S channel,
where t is the transmission map of the foggy traffic scene image in the I channel, T is the transmission map in the S channel, and I_I is the luminance component of the fog-free traffic scene image;
d3, using the luminance component I_J of the foggy traffic scene image in the I channel, the transmission map t and the atmospheric light value I_A, compute the luminance component of the preliminarily defogged traffic scene image from the atmospheric scattering model:
I_J(i,j) = I_I(i,j)·t(i,j) + I_A·(1 − t(i,j)),
the luminance component I_I of the fog-free traffic scene image computed from this formula is the luminance component of the preliminarily defogged traffic scene image;
using the saturation component S_J of the foggy traffic scene image in the S channel, the transmission map T and the atmospheric light value S_A, compute the saturation component S_I of the preliminarily defogged traffic scene image from the atmospheric scattering model and the definition of saturation:
S_J(i,j) = S_I(i,j)·T(i,j) + S_A·(1 − T(i,j)), with S_A = 0;
keep the H channel of the preliminarily defogged traffic scene image equal to that of the foggy traffic scene image, i.e. H_I = H_J;
d4, convert the obtained channel components H_I, S_I and I_I of the preliminarily defogged traffic scene image from the HSI color space back to the RGB color space to obtain the preliminarily defogged traffic scene image.
The invention also provides a device comprising a processor and a memory, wherein the memory stores computer instructions and the processor executes the computer instructions stored in the memory, specifically performing the method of any of the above technical solutions.
Advantageous effects
The invention achieves end-to-end image defogging, brightness boosting and detail enhancement by combining the atmospheric scattering model with image enhancement techniques. The method is fast, effective, requires no manual intervention, is low-cost and highly general. Using the improved dark-channel and relative-energy features, a coarse-to-fine strategy yields an accurate segmentation of the sky region; color space conversion and single-channel dehazing realize physically based image defogging; and CLAHE together with guided filtering enhances detail and improves the visual effect. The method can be widely applied to defogging of videos and images in traffic scenes.
Drawings
FIG. 1 is an overall flow diagram of a process according to an embodiment of the invention;
FIG. 2 is a diagram illustrating the effect of the steps of sky region segmentation in example 1; wherein, the graph a is the improved dark channel characteristic, the graph b is the improved relative energy characteristic, the graph c is the result of the rough segmentation of the sky area, and the graph d is the result of the fine segmentation of the sky area;
FIG. 3 is a graph showing the effect of the defogging steps based on the atmospheric scattering model in example 1; wherein, the graph a is the atmospheric light value, the graph b is the defogging result of the I channel, the graph c is the defogging result of the S channel, and the graph d is the preliminary defogging result;
FIG. 4 is a graph showing the effects of the steps of the post-defogging treatment in example 1; wherein, the graph a is the result of global brightness improvement, the graph b is the result of CLAHE processing, the graph c is the result of guide filtering processing, and the graph d is the original fog graph;
FIG. 5 is a diagram illustrating the effect of the steps of sky region segmentation in example 2; wherein, the graph a is the improved dark channel characteristic, the graph b is the improved relative energy characteristic, the graph c is the result of the rough segmentation of the sky area, and the graph d is the result of the fine segmentation of the sky area;
FIG. 6 is a graph showing the effect of the defogging steps based on the atmospheric scattering model in example 2; wherein, the graph a is the atmospheric light value, the graph b is the defogging result of the I channel, the graph c is the defogging result of the S channel, and the graph d is the preliminary defogging result;
FIG. 7 is a graph showing the effects of the steps of the post-defogging treatment in example 2; wherein, the graph a is the result of global brightness lifting, the graph b is the result of CLAHE processing, the graph c is the result of guided filtering processing, and the graph d is the original fog graph.
Detailed Description
The following describes embodiments of the present invention in detail, which are developed based on the technical solutions of the present invention, and give detailed implementation manners and specific operation procedures to further explain the technical solutions of the present invention.
The present embodiment provides a defogging method for a foggy traffic scene image based on sky region segmentation and color space conversion, and the overall implementation flow is shown in fig. 1, and includes the following steps:
Step A: calculate the atmospheric light values of the corresponding regions of the foggy traffic scene image separately, according to the different fog concentrations of the distant region, the near region and the transition region; then compute a preliminarily defogged traffic scene image from the atmospheric scattering model, using the transmission map and atmospheric light value of each channel in the HSI color space. The specific processing of step A is as follows:
1) calculating the value of atmospheric light
The fog concentration is not uniformly distributed across a foggy traffic scene image: it is higher in the distant scene and lower in the near scene. Calculating a separate atmospheric light value for each region according to its fog concentration therefore alleviates over-enhancement and halo artifacts after defogging. First the foggy traffic scene image is divided into a distant region, a near region and a transition region; the atmospheric light value of the distant region is then computed by quadtree decomposition, that of the near region by a statistical method and that of the transition region by linear interpolation; finally, mean filtering smooths the resulting atmospheric light values. The specific steps are as follows:
(i) use the improved dark-channel feature as the fog concentration, set the number of clusters to 3, run the clustering to obtain cluster labels and cluster centres, and determine the distant, near and transition regions from the cluster centres;
Because the dark-channel value fails in white non-sky areas, such as the white lane lines common in traffic scenes, using it directly as a feature harms segmentation accuracy. This embodiment therefore proposes an improved dark-channel feature that incorporates the position feature of the traffic surveillance camera (perpendicular to the ground). In the formula (reproduced as an image in the original filing): i and j are the row and column indices of a pixel, M is the total number of rows of the image, I is the image composed of the r, g and b channels, c denotes any one of the r, g and b color channels, I_c(i, j) is the c-th color channel component of image I at pixel (i, j), and F_DC++(i, j) is the dark-channel feature of image I at pixel (i, j).
(ii) apply quadtree decomposition to the segmented distant region: set a stopping condition for the decomposition and repeatedly subdivide the sub-region with the highest average brightness until the condition is met; the mean pixel value of the sub-region at which decomposition stops is the atmospheric light value of the distant region;
(iii) sort the pixels of the near region by brightness in descending order and take the mean of the brightest 1% of pixels as the atmospheric light value of the near region;
(iv) traverse the atmospheric light map column by column, interpolate over the transition region between the distant and near regions, and then apply mean filtering to obtain a smooth atmospheric light map for the foggy traffic scene image.
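A simplified sketch of this region-wise estimate follows. For illustration the distant region is taken as the top rows and the near region as the bottom rows of the frame (in the method they come from clustering the dark-channel feature); the quadtree stopping size and the mean-filter window are assumptions, while the brightest-1% rule follows the description.

```python
import cv2
import numpy as np

def quadtree_light(gray, min_size=32):
    """Recursively keep the quadrant with the highest mean brightness."""
    h, w = gray.shape
    if h <= min_size or w <= min_size:
        return float(gray.mean())
    quads = [gray[:h // 2, :w // 2], gray[:h // 2, w // 2:],
             gray[h // 2:, :w // 2], gray[h // 2:, w // 2:]]
    return quadtree_light(max(quads, key=lambda q: q.mean()), min_size)

def atmospheric_light_map(I_chan, far_end, near_start):
    """I_chan: HxW I channel in [0,1]; rows [0, far_end) are distant,
    rows [near_start, H) are near, rows in between form the transition."""
    A_far = quadtree_light(I_chan[:far_end])
    near_vals = np.sort(I_chan[near_start:].ravel())[::-1]
    A_near = float(near_vals[:max(1, near_vals.size // 100)].mean())  # brightest 1 %

    A = np.empty_like(I_chan, dtype=np.float32)
    A[:far_end] = A_far
    A[near_start:] = A_near
    n_trans = near_start - far_end
    if n_trans > 0:                                    # linear interpolation over transition rows
        ramp = np.linspace(A_far, A_near, n_trans, dtype=np.float32)
        A[far_end:near_start] = ramp[:, None]
    return cv2.blur(A, (15, 15))                       # mean filtering to smooth the map
```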
2) RGB color space to HSI color space
To alleviate color distortion, halo artifacts and similar problems, the original foggy traffic scene image is converted from the RGB color space to the HSI color space. During defogging the H channel is kept unchanged and only the I and S channels are restored. The conversion from RGB to HSI follows the standard formulas, sketched below.
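A minimal sketch of the standard RGB-to-HSI conversion assumed here (H in degrees, S and I in [0, 1]):

```python
import numpy as np

def rgb_to_hsi(img):
    """img: HxWx3 float RGB in [0,1]; returns (H in degrees, S, I)."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-8
    I = (R + G + B) / 3.0
    S = 1.0 - np.minimum(np.minimum(R, G), B) / (I + eps)
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = np.where(B <= G, theta, 360.0 - theta)   # hue flips sector when B > G
    return H, S, I
```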
3) computing transmission maps for I and S channels
According to the atmospheric scattering model, the I channel satisfies the following equation:
I_J(i,j) = I_I(i,j)·t(i,j) + I_A·(1 − t(i,j))
where I_J is the luminance component of the foggy traffic scene image, I_I is the luminance component of the fog-free traffic scene image, I_A is the atmospheric light value of the foggy traffic scene image in the I channel (i.e. the luminance component of the atmospheric light), and t is the transmission map of the foggy traffic scene image in the I channel.
In this embodiment, the I-channel transmission map is estimated with reference to a defogging method for infrared (single-channel) images.
assuming that the S channel also satisfies the atmospheric scattering model, the following equation is obtained:
S_J(i,j) = S_I(i,j)·T(i,j) + S_A·(1 − T(i,j))
where S_J is the saturation component of the foggy traffic scene image, S_I is the saturation component of the fog-free traffic scene image, S_A is the atmospheric light value of the foggy traffic scene image in the S channel (i.e. the saturation component of the atmospheric light), and T is the transmission map of the foggy traffic scene image in the S channel.
Since the saturation of the atmospheric light is low, approximately equal to 0 (i.e. S_A ≈ 0), the above equation simplifies to:
S_J(i,j) = S_I(i,j)·T(i,j)
according to the definition formula of the saturation, the above formula can be expanded as follows:
min(R_J, G_J, B_J) = t·min(R_I, G_I, B_I) + (1 − t)·min(R_A, G_A, B_A)
since the RGB values of the atmospheric light values are approximately equal, the following approximate relationship can be obtained:
I_A ≈ min(R_A, G_A, B_A)
From these relations, the calculation formula of the S-channel transmission map is finally obtained, as sketched below.
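Combining the relations stated above with the saturation definition S = 1 − min(R, G, B)/I yields one consistent form of this formula; the derivation below is a reconstruction, not the patent's verbatim expression.

```latex
% Derivation sketch; assumes S = 1 - \min(R,G,B)/I and the relations above.
\begin{aligned}
S_J I_J &= I_J - \min(R_J, G_J, B_J)
         = \bigl[t\,I_I + (1-t) I_A\bigr] - \bigl[t \min(R_I, G_I, B_I) + (1-t) I_A\bigr]
         = t\, S_I I_I, \\
S_J &= S_I\, T
  \;\Longrightarrow\;
  T = \frac{t\, I_I}{I_J}
    = 1 - \frac{(1-t)\, I_A}{I_J}.
\end{aligned}
```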
4) Calculating the luminance component I_I, saturation component S_I and hue component H_I of the preliminarily defogged traffic scene image
(i) Using the luminance component I_J of the foggy traffic scene image in the I channel, the transmission map t and the atmospheric light value I_A, compute the luminance component of the preliminarily defogged traffic scene image from the atmospheric scattering model:
I_J(i,j) = I_I(i,j)·t(i,j) + I_A·(1 − t(i,j)),
the luminance component I_I of the fog-free traffic scene image computed from this formula is the luminance component of the preliminarily defogged traffic scene image;
(ii) using the saturation component S_J of the foggy traffic scene image in the S channel, the transmission map T and the atmospheric light value S_A, compute the saturation component S_I of the preliminarily defogged traffic scene image from the atmospheric scattering model and the definition of saturation:
S_J(i,j) = S_I(i,j)·T(i,j) + S_A·(1 − T(i,j)), with S_A = 0;
(iii) keep the H channel of the preliminarily defogged traffic scene image equal to that of the foggy traffic scene image, i.e. H_I = H_J.
5) Convert the obtained channel components I_I, S_I and H_I of the preliminarily defogged traffic scene image from the HSI color space back to the RGB color space to obtain the preliminarily defogged traffic scene image. The sector-wise HSI-to-RGB conversion formulas are as follows:
RG region (0° ≤ H < 120°):
R = I·[1 + S·cos H / cos(60° − H)],
G = 3I − (R + B),
B = I·(1 − S);
GB region (120° ≤ H < 240°):
H = H − 120°,
R = I·(1 − S),
G = I·[1 + S·cos H / cos(60° − H)],
B = 3I − (R + G);
BR region (240° ≤ H ≤ 360°):
H = H − 240°,
R = 3I − (G + B),
G = I·(1 − S),
B = I·[1 + S·cos H / cos(60° − H)].
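A sketch of this sector-wise HSI-to-RGB conversion; the vectorized masking is an implementation choice, and H is assumed to be in degrees with S and I in [0, 1].

```python
import numpy as np

def hsi_to_rgb(H, S, I):
    """H, S, I: HxW arrays; returns HxWx3 float RGB in [0,1]."""
    H = np.mod(H, 360.0)
    R = np.zeros_like(I); G = np.zeros_like(I); B = np.zeros_like(I)

    def chroma(h, s, i):          # i * [1 + s*cos(h)/cos(60° - h)]
        return i * (1.0 + s * np.cos(np.radians(h)) / np.cos(np.radians(60.0 - h)))

    rg = H < 120.0                                    # RG sector
    B[rg] = I[rg] * (1.0 - S[rg])
    R[rg] = chroma(H[rg], S[rg], I[rg])
    G[rg] = 3.0 * I[rg] - (R[rg] + B[rg])

    gb = (H >= 120.0) & (H < 240.0)                   # GB sector
    R[gb] = I[gb] * (1.0 - S[gb])
    G[gb] = chroma(H[gb] - 120.0, S[gb], I[gb])
    B[gb] = 3.0 * I[gb] - (R[gb] + G[gb])

    br = H >= 240.0                                   # BR sector
    G[br] = I[br] * (1.0 - S[br])
    B[br] = chroma(H[br] - 240.0, S[br], I[br])
    R[br] = 3.0 * I[br] - (G[br] + B[br])
    return np.clip(np.stack([R, G, B], axis=-1), 0.0, 1.0)
```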
Step B: apply a global brightness boost to the preliminarily defogged traffic scene image based on a preset I-channel threshold. The preset I-channel threshold is set from the I-channel pixels, in the HSI color space, of the sky region of the preliminarily defogged traffic scene image; the sky region is obtained by segmenting the foggy traffic scene image based on the dark-channel and relative-energy features.
The defogged image obtained from the atmospheric scattering model in step A still has dark regions, lost detail and similar problems, and converting the channel components I_I, S_I and H_I straight back to the RGB color space gives a poor visual result, so some post-processing is needed to enhance the visual effect. In this embodiment a suitable threshold is set from the I-channel pixels, in the HSI color space, of the sky region of the preliminarily defogged traffic scene image, and the image obtained in step A is given a global brightness boost based on this preset I-channel threshold. The specific processing is as follows:
1) Extracting the dark-channel and relative-energy features of the foggy traffic scene image
The dark-channel feature is computed with the improved formula introduced above, where: i and j are the row and column indices of a pixel in the image, M is the total number of rows of the image, I is the image composed of the r, g and b channels, c denotes any one of the r, g and b color channels, I_c(i, j) is the c-th color channel component of image I at pixel (i, j), and F_DC++(i, j) is the dark-channel feature of image I at pixel (i, j).
Because surveillance video is mostly of poor quality, the sky region contains considerable noise and the relative-energy features of the sky and non-sky regions do not differ clearly, which harms segmentation accuracy. This embodiment therefore proposes an improved relative-energy feature that combines the position feature of the traffic surveillance camera (perpendicular to the ground) with the brightness information of the image. In the formula: Z(i, j) is an intermediate variable obtained from the horizontal and vertical second derivatives of a Gaussian function, α is the maximum of Z(i, j), k is the contrast gain, τ is the noise threshold, ⊗ denotes convolution, g_h and g_v are the horizontal and vertical second derivatives of the Gaussian function, I(i, j) is the pixel value at pixel (i, j), R, G and B are the r, g and b channel components, and F_CE++(i, j) denotes the relative-energy feature of image I at pixel (i, j).
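The exact improved formulas are not reproduced above, so the sketch below should be read as an approximation: the dark channel uses the usual per-channel minimum with a local minimum filter, the row-based weight is an assumed stand-in for the camera-position term, and the contrast-energy form α·Z/(Z + α·k) − τ simply follows the variable definitions given above.

```python
import cv2
import numpy as np

def dark_channel_feature(img, patch=15):
    """img: HxWx3 float RGB in [0,1]; per-channel minimum + local minimum filter,
    down-weighted towards the bottom rows (assumed camera-position prior)."""
    dark = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(dark, kernel)                      # local minimum filter
    h = img.shape[0]
    row_weight = 1.0 - np.arange(h, dtype=np.float32)[:, None] / h
    return dark * row_weight

def contrast_energy_feature(gray, sigma=3.0, k=0.1, tau=0.22):
    """gray: HxW float in [0,1]; responses of second-derivative-of-Gaussian filters."""
    size = int(6 * sigma) | 1                           # odd kernel size
    x = np.arange(size, dtype=np.float32) - size // 2
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g2 = (x ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g   # 1-D second derivative of a Gaussian
    gh = np.outer(g, g2)                                # horizontal second-derivative kernel
    gv = gh.T                                           # vertical second-derivative kernel
    Z = np.hypot(cv2.filter2D(gray, -1, gh), cv2.filter2D(gray, -1, gv))
    alpha = Z.max() + 1e-8
    return alpha * Z / (Z + alpha * k) - tau            # alpha*Z/(Z + alpha*k) - tau
```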
2) Rough segmentation of the sky region: roughly segment the foggy traffic scene image into a sky region and a non-sky region with K-means clustering on the dark-channel and relative-energy features. The specific process is as follows:
(i) convert the dark-channel and relative-energy features of the foggy traffic scene image from two-dimensional matrices into one-dimensional column vectors and normalize them;
(ii) set the number of K-means clusters to 2 and cluster the pixels of the foggy traffic scene image on the two feature column vectors to obtain cluster labels and cluster centres;
(iii) determine the class of every pixel of the foggy traffic scene image from the cluster centres, thereby determining the sky region and the non-sky region;
(iv) convert the cluster labels from a one-dimensional column vector back into a two-dimensional matrix to obtain the rough segmentation image of the foggy traffic scene image.
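A compact sketch of this rough segmentation, assuming the sky cluster is the one whose centre has the larger dark-channel value:

```python
import numpy as np
from sklearn.cluster import KMeans

def rough_sky_mask(f_dc, f_ce):
    """f_dc, f_ce: HxW feature maps; returns a boolean HxW rough sky mask."""
    h, w = f_dc.shape
    X = np.stack([f_dc.ravel(), f_ce.ravel()], axis=1).astype(np.float64)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)      # normalize the two feature columns
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    sky_label = int(np.argmax(km.cluster_centers_[:, 0]))  # assumed: sky has larger dark-channel centre
    return (km.labels_ == sky_label).reshape(h, w)
```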
3) Fine segmentation of the sky region: select high-confidence pixels from the roughly segmented sky and non-sky regions as positive and negative samples respectively, and train a machine-learning model on the dark-channel and relative-energy features of these samples to obtain a fine sky-region classifier.
Errors in the rough sky segmentation are mostly concentrated at the boundary between the sky and non-sky regions. To obtain a fine segmentation, high-confidence sky and non-sky regions are extracted from the rough result as positive and negative samples, their dark-channel and relative-energy features are fed to a support vector machine (SVM), and the trained SVM classifier yields a more accurate fine segmentation. The specific steps are as follows:
(i) traverse the binary rough-segmentation image (sky vs. non-sky) column by column; if the sky region lies in the upper part of a column and occupies at least 10% of the column height, record the column index;
(ii) from the columns recorded in (i), extract the longest run of columns belonging to the sky region; the upper 10% of pixels of these columns are high-confidence sky pixels and are used as positive samples;
(iii) traverse the binary rough-segmentation image column by column; if the non-sky region lies in the lower part of a column and occupies at least 10% of the column height, record the column index;
(iv) from the columns recorded in (iii), extract the longest run of columns belonging to the non-sky region; the lower 10% of pixels of these columns are high-confidence non-sky pixels and are used as negative samples;
(v) train the SVM on the dark-channel and relative-energy features of the positive and negative samples and their labels to obtain a fine classifier that distinguishes sky from non-sky;
(vi) classify the pixels of the foggy traffic scene image with the fine sky-region classifier using the dark-channel and relative-energy features, and obtain the corresponding binary image of the fine sky-region segmentation from the classification result.
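A sketch of the sample selection and SVM stage is given below; the column bookkeeping is simplified (a column qualifies when its top or bottom 10% is uniformly sky or non-sky), the subsampling is added only to keep training tractable, and the RBF kernel is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

def fine_sky_mask(rough, f_dc, f_ce, ratio=0.10, max_per_class=5000):
    """rough: HxW boolean rough sky mask; f_dc, f_ce: HxW feature maps."""
    h, w = rough.shape
    feats = np.stack([f_dc, f_ce], axis=-1)              # HxWx2 feature image
    top = max(1, int(h * ratio))
    pos, neg = [], []
    for col in range(w):
        if rough[:top, col].all():                        # column top is uniformly sky
            pos.append(feats[:top, col])
        if not rough[h - top:, col].any():                # column bottom is uniformly non-sky
            neg.append(feats[h - top:, col])
    if not pos or not neg:                                # fall back to the rough result
        return rough
    Xp, Xn = np.concatenate(pos), np.concatenate(neg)
    rng = np.random.default_rng(0)                        # subsample to keep SVM training fast
    Xp = Xp[rng.choice(len(Xp), min(len(Xp), max_per_class), replace=False)]
    Xn = Xn[rng.choice(len(Xn), min(len(Xn), max_per_class), replace=False)]
    X = np.vstack([Xp, Xn])
    y = np.concatenate([np.ones(len(Xp)), np.zeros(len(Xn))])
    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)      # sky-vs-non-sky fine classifier
    pred = clf.predict(feats.reshape(-1, 2))
    return pred.reshape(h, w).astype(bool)
```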
4) Determining a preset I-channel threshold
Based on the binary image obtained by the fine segmentation in step 3), divide the preliminarily defogged traffic scene image into a sky region and a non-sky region and count the number N_sky of sky-region pixels; then sort the I-channel pixel values of the preliminarily defogged traffic scene image in descending order and take the value of the N_sky-th pixel in this ordering as the preset I-channel threshold T.
5) For the preliminarily defogged traffic scene image, multiply every I-channel pixel value I_I by 1/T to obtain I_I', and replace the original I-channel value I_I with I_I', which boosts the overall brightness of the preliminarily defogged traffic scene image.
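A short sketch of the threshold and the brightness boost, following the description above (T is the N_sky-th largest I-channel value; the clipping to [0, 1] is an added safeguard):

```python
import numpy as np

def boost_brightness(I_chan, sky_mask):
    """I_chan: HxW I channel of the preliminarily defogged image in [0,1];
    sky_mask: boolean HxW fine sky segmentation."""
    n_sky = int(sky_mask.sum())
    descending = np.sort(I_chan.ravel())[::-1]
    T = float(descending[max(n_sky - 1, 0)])          # N_sky-th largest I value
    return np.clip(I_chan / max(T, 1e-6), 0.0, 1.0)   # multiply every pixel by 1/T
```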
Step C: apply contrast-limited adaptive histogram equalization and guided filtering to the image obtained in step B to obtain the final defogged traffic scene image.
1) Contrast-limited adaptive histogram equalization
After the global brightness boost in step B, regions with larger luminance already look good, but regions with smaller luminance are still unsatisfactory. This embodiment therefore applies contrast-limited adaptive histogram equalization to the I-channel component I_I' of the image obtained in step B to improve the darker regions. In this embodiment the contrast-enhancement limit is set to 0.001 and the desired histogram shape is set to "rayleigh", which yields an I_I'' with a better overall visual effect. The components H_I, S_I and I_I'' are then converted back to the RGB color space, giving a defogged image with a good visual effect.
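A sketch of the CLAHE step; note that the 0.001 clip limit and the "rayleigh" histogram shape quoted above match MATLAB's adapthisteq parameters, whereas OpenCV's createCLAHE exposes a clip limit and tile grid, so the values used here are assumptions.

```python
import cv2
import numpy as np

def clahe_on_I(I_chan, clip_limit=2.0, tiles=(8, 8)):
    """I_chan: HxW float I channel in [0,1]; returns the equalised channel."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    I8 = np.clip(I_chan * 255.0, 0, 255).astype(np.uint8)
    return clahe.apply(I8).astype(np.float32) / 255.0
```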
2) Guided filtering
After the CLAHE operation on the preliminarily defogged traffic scene image from step B, the visual effect improves noticeably, but some detail is still lost. Guided filtering is therefore applied to the result of the previous step to highlight image detail. The local window radius is set to 15 and the regularization parameter to 0.004; processing directly in the RGB color space yields the detail-enhanced defogged image, i.e. the final defogged traffic scene image.
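A sketch of the guided-filtering step with the radius and regularization values quoted above; cv2.ximgproc.guidedFilter requires the opencv-contrib-python package, and using the image as its own guide in RGB space is an assumption.

```python
import cv2
import numpy as np

def detail_enhance(rgb):
    """rgb: HxWx3 float image in [0,1]; self-guided edge-preserving filtering."""
    guide = (rgb * 255).astype(np.uint8)
    out = cv2.ximgproc.guidedFilter(guide, rgb.astype(np.float32), 15, 0.004)
    return np.clip(out, 0.0, 1.0)
```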
The invention also provides a device comprising a processor and a memory, wherein the memory stores computer instructions and the processor executes the computer instructions stored in the memory, specifically performing the method of the above method embodiment.
Example 1:
The improved dark-channel feature (Fig. 2(a)) and the improved relative-energy feature (Fig. 2(b)) are extracted from the foggy image (Fig. 4(d)); a rough segmentation of the sky region (Fig. 2(c)) is then obtained from these features by K-means clustering; high-confidence sky and non-sky regions are further extracted from the rough result and the classification result is used to generate the fine segmentation of the sky region (Fig. 2(d)).
The atmospheric light values of the distant, near and transition regions are computed separately and smoothed (Fig. 3(a)); the transmission maps of the I and S channels are then computed in the HSI color space, and the restored I and S channels are obtained from the atmospheric light values and transmission maps (Fig. 3(b), Fig. 3(c)); finally the H_I, S_I and I_I components are converted back to the RGB color space to obtain the preliminary defogging result (Fig. 3(d)).
The threshold T is computed and I_I is multiplied by 1/T to boost the global brightness of the image (Fig. 4(a)); CLAHE is then applied to obtain a defogged image with a better visual effect (Fig. 4(b)); finally guided filtering is applied to obtain the detail-enhanced defogged image (Fig. 4(c)).
Example 2:
The improved dark-channel feature (Fig. 5(a)) and the improved relative-energy feature (Fig. 5(b)) are extracted from the foggy image (Fig. 7(d)); a rough segmentation of the sky region (Fig. 5(c)) is then obtained from these features by K-means clustering; high-confidence sky and non-sky regions are further extracted from the rough result and the classification result is used to generate the fine segmentation of the sky region (Fig. 5(d)).
The atmospheric light values of the distant, near and transition regions are computed separately and smoothed (Fig. 6(a)); the transmission maps of the I and S channels are then computed in the HSI color space, and the restored I and S channels are obtained from the atmospheric light values and transmission maps (Fig. 6(b), Fig. 6(c)); finally the H_I, S_I and I_I components are converted back to the RGB color space to obtain the preliminary defogging result (Fig. 6(d)).
The threshold T is computed and I_I is multiplied by 1/T to boost the global brightness of the image (Fig. 7(a)); CLAHE is then applied to obtain a defogged image with a better visual effect (Fig. 7(b)); finally guided filtering is applied to obtain the detail-enhanced defogged image (Fig. 7(c)).
It should be noted that the above disclosure describes only specific examples of the present invention; those skilled in the art can devise various modifications within the spirit and scope of the invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011109313.2A CN112200746B (en) | 2020-10-16 | 2020-10-16 | Defogging method and equipment for foggy-day traffic scene image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011109313.2A CN112200746B (en) | 2020-10-16 | 2020-10-16 | Defogging method and equipment for foggy-day traffic scene image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112200746A true CN112200746A (en) | 2021-01-08 |
CN112200746B CN112200746B (en) | 2024-03-08 |
Family
ID=74009188
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011109313.2A (granted as CN112200746B, active) | Defogging method and equipment for foggy-day traffic scene image | 2020-10-16 | 2020-10-16 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112200746B (en) |
Cited By (5)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN112816483A (en) * | 2021-03-05 | 2021-05-18 | 北京文安智能技术股份有限公司 | Group fog recognition early warning method and system based on fog value analysis and electronic equipment |
CN113205469A (en) * | 2021-06-04 | 2021-08-03 | 中国人民解放军国防科技大学 | Single image defogging method based on improved dark channel |
CN113822816A (en) * | 2021-09-25 | 2021-12-21 | 李蕊男 | Haze removing method for single remote sensing image optimized by aerial fog scattering model |
CN116309203A (en) * | 2023-05-19 | 2023-06-23 | 中国人民解放军国防科技大学 | A method and device for unmanned platform motion estimation with polarization vision adaptive enhancement |
CN118898551A (en) * | 2024-09-30 | 2024-11-05 | 三业电气有限公司 | A method, medium and device for defogging transmission line monitoring images |
Citations (10)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
US20160005152A1 (en) * | 2014-07-01 | 2016-01-07 | Adobe Systems Incorporated | Multi-Feature Image Haze Removal |
CN105631823A (en) * | 2015-12-28 | 2016-06-01 | 西安电子科技大学 | Dark channel sky area defogging method based on threshold segmentation optimization |
CN105631829A (en) * | 2016-01-15 | 2016-06-01 | 天津大学 | Night haze image defogging method based on dark channel prior and color correction |
CN106548463A (en) * | 2016-10-28 | 2017-03-29 | 大连理工大学 | Based on dark and the sea fog image automatic defogging method and system of Retinex |
CN107301623A (en) * | 2017-05-11 | 2017-10-27 | 北京理工大学珠海学院 | A kind of traffic image defogging method split based on dark and image and system |
CN109919859A (en) * | 2019-01-25 | 2019-06-21 | 暨南大学 | A kind of outdoor scene image defogging enhancement method, computing device and storage medium thereof |
US20190287219A1 (en) * | 2018-03-15 | 2019-09-19 | National Chiao Tung University | Video dehazing device and method |
CN110310241A (en) * | 2019-06-26 | 2019-10-08 | 长安大学 | A multi-atmospheric light value traffic image defogging method combined with depth region segmentation |
CN110570365A (en) * | 2019-08-06 | 2019-12-13 | 西安电子科技大学 | Image Dehazing Method Based on Prior Information |
CN111091501A (en) * | 2018-10-24 | 2020-05-01 | 天津工业大学 | A Parameter Estimation Method for Atmospheric Scattering and Dehazing Models |
- 2020-10-16: Application CN202011109313.2A filed in China; granted as CN112200746B (active)
Patent Citations (10)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
US20160005152A1 (en) * | 2014-07-01 | 2016-01-07 | Adobe Systems Incorporated | Multi-Feature Image Haze Removal |
CN105631823A (en) * | 2015-12-28 | 2016-06-01 | 西安电子科技大学 | Dark channel sky area defogging method based on threshold segmentation optimization |
CN105631829A (en) * | 2016-01-15 | 2016-06-01 | 天津大学 | Night haze image defogging method based on dark channel prior and color correction |
CN106548463A (en) * | 2016-10-28 | 2017-03-29 | 大连理工大学 | Based on dark and the sea fog image automatic defogging method and system of Retinex |
CN107301623A (en) * | 2017-05-11 | 2017-10-27 | 北京理工大学珠海学院 | A kind of traffic image defogging method split based on dark and image and system |
US20190287219A1 (en) * | 2018-03-15 | 2019-09-19 | National Chiao Tung University | Video dehazing device and method |
CN111091501A (en) * | 2018-10-24 | 2020-05-01 | 天津工业大学 | A Parameter Estimation Method for Atmospheric Scattering and Dehazing Models |
CN109919859A (en) * | 2019-01-25 | 2019-06-21 | 暨南大学 | A kind of outdoor scene image defogging enhancement method, computing device and storage medium thereof |
CN110310241A (en) * | 2019-06-26 | 2019-10-08 | 长安大学 | A multi-atmospheric light value traffic image defogging method combined with depth region segmentation |
CN110570365A (en) * | 2019-08-06 | 2019-12-13 | 西安电子科技大学 | Image Dehazing Method Based on Prior Information |
Non-Patent Citations (1)
* Cited by examiner, † Cited by third party
Title
---|
Gao Qiang: "Image defogging method based on dark channel compensation and improved atmospheric light value", Laser & Optoelectronics Progress *
Cited By (6)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN112816483A (en) * | 2021-03-05 | 2021-05-18 | 北京文安智能技术股份有限公司 | Group fog recognition early warning method and system based on fog value analysis and electronic equipment |
CN113205469A (en) * | 2021-06-04 | 2021-08-03 | 中国人民解放军国防科技大学 | Single image defogging method based on improved dark channel |
CN113822816A (en) * | 2021-09-25 | 2021-12-21 | 李蕊男 | Haze removing method for single remote sensing image optimized by aerial fog scattering model |
CN116309203A (en) * | 2023-05-19 | 2023-06-23 | 中国人民解放军国防科技大学 | A method and device for unmanned platform motion estimation with polarization vision adaptive enhancement |
CN118898551A (en) * | 2024-09-30 | 2024-11-05 | 三业电气有限公司 | A method, medium and device for defogging transmission line monitoring images |
CN118898551B (en) * | 2024-09-30 | 2024-12-03 | 三业电气有限公司 | Defogging method, medium and equipment for transmission line monitoring image |
Also Published As
Publication number | Publication date |
---|---|
CN112200746B (en) | 2024-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108596849B (en) | 2021-11-23 | Single image defogging method based on sky region segmentation |
CN111209952B (en) | 2023-05-30 | Underwater target detection method based on improved SSD and migration learning |
CN112200746B (en) | 2024-03-08 | Defogging method and equipment for foggy-day traffic scene image |
CN107103591B (en) | 2020-01-07 | A Single Image Dehazing Method Based on Image Haze Concentration Estimation |
CN108615226B (en) | 2022-02-11 | An Image Dehazing Method Based on Generative Adversarial Networks |
CN110310241B (en) | 2021-06-01 | Method for defogging traffic image with large air-light value by fusing depth region segmentation |
CN108537756B (en) | 2020-08-25 | Single image defogging method based on image fusion |
CN107527332A (en) | 2017-12-29 | Enhancement Method is kept based on the low-light (level) image color for improving Retinex |
CN108009518A (en) | 2018-05-08 | A kind of stratification traffic mark recognition methods based on quick two points of convolutional neural networks |
CN106169081A (en) | 2016-11-30 | A kind of image classification based on different illumination and processing method |
CN111709888B (en) | 2023-12-08 | Aerial image defogging method based on improved generation countermeasure network |
CN111489330B (en) | 2021-06-22 | Weak and small target detection method based on multi-source information fusion |
CN112419163B (en) | 2023-06-30 | Single image weak supervision defogging method based on priori knowledge and deep learning |
CN110060221B (en) | 2023-01-17 | A Bridge Vehicle Detection Method Based on UAV Aerial Images |
CN112766056A (en) | 2021-05-07 | Method and device for detecting lane line in low-light environment based on deep neural network |
CN108564538A (en) | 2018-09-21 | Image haze removing method and system based on ambient light difference |
Khan et al. | 2022 | Recent advancement in haze removal approaches |
CN116883868A (en) | 2023-10-13 | UAV intelligent cruise detection method based on adaptive image defogging |
CN111914749A (en) | 2020-11-10 | A method and system for lane line recognition based on neural network |
CN110047041B (en) | 2023-05-09 | Space-frequency domain combined traffic monitoring video rain removing method |
CN109345479B (en) | 2021-04-06 | Real-time preprocessing method and storage medium for video monitoring data |
Han et al. | 2022 | Single-image dehazing using scene radiance constraint and color gradient guided filter |
CN102592125A (en) | 2012-07-18 | Moving object detection method based on standard deviation characteristic |
Huang et al. | 2023 | Efficient image dehazing algorithm using multiple priors constraints |
Sun et al. | 2021 | Single-image dehazing based on dark channel prior and fast weighted guided filtering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2021-01-08 | PB01 | Publication | 
2021-01-26 | SE01 | Entry into force of request for substantive examination | 
2024-03-08 | GR01 | Patent grant | 