
CN112200746A - Dehazing method and device for foggy traffic scene images - Google Patents


Info

Publication number: CN112200746A
Authority: CN (China)
Prior art keywords: traffic scene, scene image, channel, image, foggy
Prior art date: 2020-10-16
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202011109313.2A
Other languages: Chinese (zh)
Other versions: CN112200746B (en)
Inventors: 郭璠, 邱俊峰, 唐琎
Current Assignee: Central South University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Central South University
Priority date: 2020-10-16 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2020-10-16
Publication date: 2021-01-08

2020-10-16: Application filed by Central South University
2020-10-16: Priority to CN202011109313.2A
2021-01-08: Publication of CN112200746A
2024-03-08: Application granted
2024-03-08: Publication of CN112200746B
Status: Active
2040-10-16: Anticipated expiration


Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 5/00 Image enhancement or restoration > G06T 5/73 Deblurring; Sharpening
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 18/00 Pattern recognition > G06F 18/20 Analysing > G06F 18/23 Clustering techniques
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 18/00 Pattern recognition > G06F 18/20 Analysing > G06F 18/24 Classification techniques > G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches > G06F 18/2411 based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 7/00 Image analysis > G06T 7/10 Segmentation; Edge detection > G06T 7/11 Region-based segmentation
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 7/00 Image analysis > G06T 7/90 Determination of colour characteristics
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/20 Special algorithmic details > G06T 2207/20024 Filtering details
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/20 Special algorithmic details > G06T 2207/20081 Training; Learning
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/30 Subject of image; Context of image processing > G06T 2207/30181 Earth observation > G06T 2207/30192 Weather; Meteorology
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/30 Subject of image; Context of image processing > G06T 2207/30236 Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract


The invention discloses a dehazing method and device for foggy traffic scene images. The method is as follows. A: for the different fog concentrations of the distant-view, near-view and transition regions, calculate the atmospheric light values of the corresponding regions in the foggy traffic scene image; then compute a preliminarily dehazed traffic scene image in the HSI color space from the transmission map and atmospheric light value of each channel according to the atmospheric scattering model. B: based on a preset I-channel threshold, raise the global brightness of the preliminarily dehazed traffic scene image; the preset I-channel threshold is set from the I-channel pixels of the sky region of the preliminarily dehazed image, and the sky region is obtained by segmenting the foggy traffic scene image based on dark-channel features and relative-energy features. C: apply contrast-limited adaptive histogram equalization and guided filtering to the image obtained in step B to obtain the final dehazed traffic scene image. The invention can dehaze traffic scene images quickly and effectively.


Description

Defogging method and device for foggy-day traffic scene images

Technical Field

The invention belongs to the field of image information processing, and particularly relates to a defogging method and device for foggy-day traffic scene images.

Background

Fog is a common natural phenomenon formed when water vapor in the air condenses on contact with the cooler ground surface; it consists of small water droplets suspended in the air. In video surveillance, fog reduces the visibility of objects, so that the images captured by a sensor are severely degraded, which hinders subsequent processing and degrades the robustness of vision systems such as target tracking, intelligent transportation, video surveillance and aerial photography. In intelligent transportation and regional video surveillance in particular, fog seriously impairs monitoring: it causes intelligent traffic systems to misjudge vehicle information and blurs the images obtained by regional video surveillance. The former is especially serious, since fog often leads to traffic accidents, congestion and even flight cancellations. Research on defogging is therefore necessary.

At present, many scholars at home and abroad have carried out extensive research on image degradation caused by haze. Current defogging methods fall into two main categories: non-model-based image defogging algorithms and model-based image defogging algorithms.

Non-model-based image defogging algorithms improve brightness and contrast through image enhancement, without considering the physical cause of degradation in hazy weather, thereby improving the visual quality of hazy images. They are relatively simple and widely applicable, but suffer from information loss and image distortion: in essence they only reduce the influence of haze on the image rather than removing it at its root, so the defogging is incomplete. Common non-model-based algorithms include histogram equalization, the Retinex algorithm and wavelet-transform methods.

Model-based image defogging algorithms analyze the cause of image degradation in foggy weather, build a model to simulate the degradation process, and recover the fog-free image by solving the inverse process. The most common and effective model at present is the atmospheric scattering model proposed by McCartney, on which many defogging algorithms have been built. Because such algorithms analyze how the foggy image is formed under the atmospheric scattering model, i.e. the actual cause of degradation, their defogging results are more reliable. The earliest defogging algorithms combined multiple frames with the atmospheric scattering model and solved the model equations to obtain a fog-free image. In recent years, single-image defogging based on the atmospheric scattering model has been studied extensively: prior knowledge is added to the model, the transmittance and atmospheric light value of the image are solved, and the fog-free image is then recovered from the atmospheric scattering model formula.

Among image defogging patents, Hu Haofeng et al. (patent publication No. CN107966412A) obtain the total light intensity and the polarization difference by correcting two light-intensity maps in orthogonal polarization states, and then compute the defogged image with a differential-polarization recovery model; however, acquiring two orthogonally polarized intensity maps places high demands on equipment, so the method cannot be deployed at scale. Tang et al. (patent publication No. CN107085830A) estimate the atmospheric transmittance with a two-region filtering method and a propagation filtering method, and then optimize the atmospheric light intensity adaptively to recover the fog-free image, but this method cannot effectively handle low-brightness foggy images. Zuo et al. (patent publication No. CN110992285A) split a foggy image into content features and detail features with a hierarchical neural network, defog them separately, and let the intermediate results interact to restore the fog-free image, but this method is unsuitable when computing power is limited. These are all general-purpose image defogging algorithms; they are not optimized for traffic scenes and each has its own limitations.

Against this background, it is particularly important to develop a defogging method for traffic scene images that is robust and has low time complexity.

Disclosure of Invention

The invention provides a defogging method and device for foggy-day traffic scene images based on sky-region segmentation and color-space conversion, which solves the technical problems that existing image defogging methods are not optimized for traffic scenes and perform poorly on low-brightness foggy images.

In order to achieve the technical purpose, the invention adopts the following technical scheme:

A defogging method for foggy-day traffic scene images comprises the following steps:

Step A: for the different fog concentrations of the distant-view region, the near-view region and the transition region, calculate the atmospheric light values of the corresponding regions in the foggy-day traffic scene image; then compute a preliminarily defogged traffic scene image in the HSI color space from the transmission map and atmospheric light value of each channel according to the atmospheric scattering model;

Step B: perform global brightness enhancement on the preliminarily defogged traffic scene image based on a preset I-channel threshold;

the preset I-channel threshold is set according to the I-channel pixels, in the HSI color space, of the sky region of the preliminarily defogged traffic scene image; and the sky region is obtained by segmenting the foggy-day traffic scene image based on dark-channel features and relative-energy features;

Step C: apply contrast-limited adaptive histogram equalization and guided filtering to the image obtained in step B to obtain the final defogged traffic scene image.

In a more preferred technical solution, the method for calculating the dark channel characteristics includes:

[Formula image: improved dark-channel feature F_DC++ — not reproduced in the text]

where i and j are the row and column indices of a pixel in the image, M is the sum of all the row numbers in the image, I is the image composed of the r, g and b channels, c denotes any of the r, g, b color channels, I_c denotes the c color-channel component of image I, I_c(i, j) denotes that component at pixel (i, j), and F_DC++ denotes the dark-channel feature of image I at pixel (i, j).
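The F_DC++ formula itself appears only as an image, so it is not reproduced here. For orientation, below is a minimal Python sketch of the classical pixel-wise dark channel that the improved feature builds on; the function name and array layout are illustrative assumptions, and the row-position term (i, M) of the improved feature is deliberately omitted.

```python
import numpy as np

def dark_channel(img_rgb):
    """Classical pixel-wise dark channel: the minimum over the r, g and b
    channels at each pixel. img_rgb is an H x W x 3 float array.
    The patent's F_DC++ additionally uses the pixel's row position
    (i and M above); that formula is only given as an image and is
    therefore not reproduced here."""
    return img_rgb.min(axis=2)
```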

In a more preferred technical solution, the method for calculating the relative energy characteristic includes:

[Formula images: improved relative-energy feature F_CE++ — not reproduced in the text]

where Z(i, j) is an intermediate variable built from the horizontal and vertical second derivatives of the Gaussian function, α is the maximum of Z(i, j), k is the contrast gain, τ is the noise threshold, ⊗ denotes the convolution operation, g_h and g_v are the horizontal and vertical second derivatives of the Gaussian function, I(i, j) is the pixel value at pixel (i, j), R, G and B are the r, g, b channel components, and F_CE++ denotes the relative-energy feature of image I at pixel (i, j).
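The F_CE++ formula also appears only as an image. As a hedged reference point, the sketch below implements the plain contrast-energy feature that matches the variable definitions above (Z built from second-derivative-of-Gaussian responses, α = max Z, contrast gain k, noise threshold τ); the patent's additional position and brightness weighting is not reproduced, and the kernel size, σ, k and τ values are placeholders.

```python
import cv2
import numpy as np

def gaussian_second_derivatives(sigma=1.5, size=11):
    """Horizontal and vertical second derivatives of a 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    gh = g * (xx ** 2 / sigma ** 4 - 1.0 / sigma ** 2)   # d^2/dx^2
    gv = g * (yy ** 2 / sigma ** 4 - 1.0 / sigma ** 2)   # d^2/dy^2
    return gh.astype(np.float32), gv.astype(np.float32)

def contrast_energy(gray, k=0.1, tau=0.05, sigma=1.5):
    """Plain contrast-energy feature (no position/brightness weighting).
    gray: single-channel float image in [0, 1]; k, tau, sigma are placeholders."""
    gray = gray.astype(np.float32)
    gh, gv = gaussian_second_derivatives(sigma)
    zh = cv2.filter2D(gray, -1, gh)
    zv = cv2.filter2D(gray, -1, gv)
    Z = np.sqrt(zh ** 2 + zv ** 2)                      # intermediate variable Z(i, j)
    alpha = float(Z.max())                              # alpha = maximum of Z
    return alpha * Z / (Z + alpha * k + 1e-8) - tau     # CE = aZ/(Z + a*k) - tau
```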

In a more preferred technical scheme, the method for segmenting the foggy day traffic scene image to obtain the sky area based on the dark channel characteristic and the relative energy characteristic comprises the following steps:

a1, extracting dark channel characteristics and relative energy characteristics of the foggy day traffic scene image;

a2, roughly dividing the foggy day traffic scene image into a sky area and a non-sky area by adopting a K-means clustering method based on the dark channel characteristics and the relative energy characteristics;

a3, selecting partial pixels with higher confidence level from the roughly divided sky area and non-sky area, and respectively using the partial pixels as positive and negative samples; training a machine model to obtain a sky area fine classifier based on the dark channel characteristics and the relative energy characteristics of the positive and negative samples;

a4, classifying pixels in the foggy day traffic scene image by using a fine sky region classifier based on the dark channel characteristics and the relative energy characteristics, and obtaining a corresponding binary image of fine sky region segmentation according to the classification result.

In a more preferred technical scheme, the method for roughly segmenting the foggy day traffic scene image by adopting the K-means clustering method comprises the following steps:

b1, converting the dark-channel feature and the relative-energy feature of the foggy-day traffic scene image from two-dimensional matrices into one-dimensional column vectors, and performing a regularization operation;

b2, setting the category number of the K-means clustering to be 2, and clustering pixels in the foggy day traffic scene image based on two column vectors of dark channel characteristics and relative energy characteristics to obtain a clustering label and a clustering center;

b3, determining the categories of all pixels in the foggy day traffic scene image according to the clustering center, thereby determining a sky area and a non-sky area;

b4, converting the clustering label from the one-dimensional column vector into a two-dimensional matrix to obtain a rough segmentation image of the foggy day traffic scene image.
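Steps b1–b4 map directly onto OpenCV's k-means. A minimal sketch under stated assumptions: the two features are already computed as 2-D float arrays, "regularization" is read as min–max normalization, and the cluster with the larger mean dark-channel value is taken to be the sky (the patent only says the classes follow from the cluster centers); all names are illustrative.

```python
import cv2
import numpy as np

def coarse_sky_segmentation(dark_channel, contrast_energy):
    """Coarsely split an image into sky / non-sky with 2-class k-means.

    dark_channel, contrast_energy: 2-D float arrays of the same shape
    (the improved dark-channel and relative-energy features).
    Returns a binary mask with 1 for the cluster assumed to be sky."""
    h, w = dark_channel.shape

    def normalize(f):                      # b1: flatten + min-max "regularization"
        f = f.reshape(-1, 1).astype(np.float32)
        return (f - f.min()) / (f.max() - f.min() + 1e-8)

    data = np.hstack([normalize(dark_channel), normalize(contrast_energy)])

    # b2: 2-class k-means on the two feature columns
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-3)
    _, labels, centers = cv2.kmeans(data, 2, None, criteria, 10,
                                    cv2.KMEANS_PP_CENTERS)

    # b3: decide which cluster is "sky" -- here, the one with the larger mean
    # dark-channel value (foggy sky is bright); this heuristic is an assumption.
    sky_label = int(np.argmax(centers[:, 0]))

    # b4: reshape the label vector back into an image-shaped mask
    return (labels.reshape(h, w) == sky_label).astype(np.uint8)
```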

In a more preferred technical solution, the method for selecting positive and negative samples from the roughly divided sky region and the non-sky region includes:

c1, traversing, column by column, the binary image obtained from the rough segmentation into sky and non-sky regions; if the sky region lies in the upper part of a column and occupies no less than a preset proportion of the column height, recording that column number;

c2, extracting the longest subsequence of sky-region columns from the column numbers recorded in c1; the pixels in the upper preset proportion of these columns are the higher-confidence sky pixels and are taken as positive samples;

c3, traversing the same binary image column by column; if the non-sky region lies in the lower part of a column and occupies no less than a preset proportion of the column height, recording that column number;

c4, extracting the longest subsequence of non-sky-region columns from the column numbers recorded in c3; the pixels in the lower preset proportion of these columns are the higher-confidence non-sky pixels and are taken as negative samples.
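A sketch of the column-scan sample selection in c1–c4, assuming the coarse mask uses 1 for sky and 0 for non-sky, a preset proportion of 10% (the value used in the embodiment below), and reading "longest subsequence" as the longest run of consecutive qualifying columns; these readings are assumptions.

```python
import numpy as np

def longest_run(cols):
    """Longest run of consecutive column indices (assumed reading of the
    'longest subsequence' in c2/c4)."""
    best, cur = [], []
    for c in cols:
        cur = cur + [c] if cur and c == cur[-1] + 1 else [c]
        if len(cur) > len(best):
            best = cur
    return best

def select_samples(mask, ratio=0.10):
    """mask: coarse binary segmentation, 1 = sky, 0 = non-sky.
    Returns boolean masks of high-confidence sky (positive) and
    non-sky (negative) pixels."""
    h, w = mask.shape
    top = int(h * ratio)

    # c1/c3: columns whose top (resp. bottom) `ratio` of pixels is entirely
    # sky (resp. entirely non-sky) -- a simplified reading of the condition
    sky_cols = [c for c in range(w) if mask[:top, c].all()]
    ground_cols = [c for c in range(w) if (mask[h - top:, c] == 0).all()]

    pos = np.zeros_like(mask, dtype=bool)
    neg = np.zeros_like(mask, dtype=bool)
    # c2/c4: keep only the longest run of qualifying columns and take the
    # top / bottom `ratio` of each such column as high-confidence samples
    for c in longest_run(sky_cols):
        pos[:top, c] = True
    for c in longest_run(ground_cols):
        neg[h - top:, c] = True
    return pos, neg
```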

In a more preferred technical scheme, the method for setting the preset I channel threshold value is as follows:

Calculate the number N_sky of sky-region pixels in the preliminarily defogged traffic scene image; then sort the I-channel pixel values of the preliminarily defogged traffic scene image from large to small, and take the value of the N_sky-th pixel in this ordering as the preset I-channel threshold T;

The global brightness enhancement of the preliminarily defogged traffic scene image in step B is specifically: multiply every pixel value of the I channel of the preliminarily defogged traffic scene image by 1/T, and replace the original I-channel pixel values with the result.
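A compact sketch of the threshold rule and the 1/T brightness lift, assuming the I channel is a float array in [0, 1] and sky_mask is the fine-segmentation binary mask; the final clipping to [0, 1] is an extra safeguard not stated in the text.

```python
import numpy as np

def brightness_lift(I_channel, sky_mask):
    """I_channel: float I channel of the preliminarily defogged image in [0, 1].
    sky_mask:  binary mask of the segmented sky region.
    Returns the brightened I channel."""
    n_sky = int(sky_mask.sum())                       # number of sky pixels
    sorted_vals = np.sort(I_channel.ravel())[::-1]    # descending order
    T = sorted_vals[n_sky - 1] if n_sky > 0 else 1.0  # N_sky-th largest value
    return np.clip(I_channel / T, 0.0, 1.0)           # multiply by 1/T
```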

In a more preferred technical solution, the dividing method of the distant view area, the close view area and the transition area in step a is as follows: clustering all pixels in the foggy day traffic scene image based on the dark channel characteristics, so as to divide the foggy day traffic scene image into a long-range area, a short-range area and a transition area;

the method for calculating the atmospheric light value of the traffic scene image in the foggy day comprises the following steps: calculating the atmospheric light value of a distant view region by adopting a quadtree decomposition method, calculating the atmospheric light value of a near view region by adopting a statistical method, calculating the atmospheric light value of a transition region by adopting linear interpolation, and then carrying out mean value filtering processing on all the obtained atmospheric light values.

In a more preferred technical solution, the traffic scene image of preliminary defogging is calculated according to the atmospheric scattering model by using the transmission map and the atmospheric light value of each channel in the HSI color space, specifically:

d1, converting the foggy-day traffic scene image from the RGB color space to the HSI color space to obtain its luminance component I_J, saturation component S_J and chrominance component H_J;

d2, calculating transmission diagrams of the foggy day traffic scene images in the I channel and the S channel:

[Formula images: transmission maps t and T for the I and S channels — not reproduced in the text]

where t is the transmission map of the foggy-day traffic scene image in the I channel, T is the transmission map of the foggy-day traffic scene image in the S channel, and I_I is the luminance component of the fog-free traffic scene image;

d3, using the luminance component I_J of the foggy-day traffic scene image in the I channel, the transmission map t and the atmospheric light value I_A, calculate the luminance component of the preliminarily defogged traffic scene image from the atmospheric scattering model:

I_J(i,j) = I_I(i,j)·t(i,j) + I_A·(1 - t(i,j)),

The luminance component I_I of the fog-free traffic scene image obtained from this equation is the luminance component of the preliminarily defogged traffic scene image;

Using the saturation component S_J of the foggy-day traffic scene image in the S channel, the transmission map T and the atmospheric light value S_A, calculate the saturation component S_I of the preliminarily defogged traffic scene image according to the atmospheric scattering model and the definition of saturation:

S_J(i,j) = S_I(i,j)·T(i,j) + S_A·(1 - T(i,j)), with S_A = 0;

The H-channel values of the preliminarily defogged traffic scene image and the foggy-day traffic scene image are kept the same, i.e. H_I = H_J;

d4, converting the obtained channel components H_I, S_I and I_I of the preliminarily defogged traffic scene image from the HSI color space back to the RGB color space gives the preliminarily defogged traffic scene image.
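Solving the two scattering equations in d3 for the fog-free components gives I_I = (I_J - I_A·(1 - t))/t and, with S_A = 0, S_I = S_J/T. A minimal sketch of this per-pixel restoration (the HSI↔RGB conversions of d1/d4 are omitted here; the lower bound on the transmission is an assumption to avoid division blow-up):

```python
import numpy as np

def restore_hsi_channels(I_J, S_J, H_J, t, T, I_A, t_min=0.1):
    """Invert the atmospheric scattering model per HSI channel.

    I_J, S_J, H_J : I, S, H components of the foggy image
    t, T          : transmission maps for the I and S channels
    I_A           : atmospheric light value map (I channel)
    t_min         : lower bound on transmission (an assumption; the patent
                    does not state a bound)"""
    t = np.maximum(t, t_min)
    T = np.maximum(T, t_min)
    I_I = (I_J - I_A * (1.0 - t)) / t        # from I_J = I_I*t + I_A*(1-t)
    S_I = S_J / T                            # from S_J = S_I*T with S_A = 0
    H_I = H_J                                # hue is kept unchanged
    return np.clip(I_I, 0, 1), np.clip(S_I, 0, 1), H_I
```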

The invention also provides a device comprising a processor and a memory, wherein the memory is configured to store computer instructions and the processor is configured to execute the computer instructions stored in the memory, specifically to perform the method of any of the above technical solutions.

Advantageous effects

The invention realizes end-to-end image defogging, brightness enhancement and detail enhancement by combining the atmospheric scattering model with image enhancement techniques. The method is fast, effective, requires no manual intervention, is low-cost and is highly general. Based on the improved dark-channel feature and the improved relative-energy feature, a coarse-to-fine strategy is applied to segment the sky region accurately; color-space conversion and single-channel defogging realize image defogging based on a physical model; and CLAHE and guided filtering enhance details and improve the visual effect. The method can be widely applied to defogging of videos and images in traffic scenes.

Drawings

FIG. 1 is an overall flow diagram of a process according to an embodiment of the invention;

FIG. 2 is a diagram illustrating the effect of the steps of sky region segmentation in example 1; wherein, the graph a is the improved dark channel characteristic, the graph b is the improved relative energy characteristic, the graph c is the result of the rough segmentation of the sky area, and the graph d is the result of the fine segmentation of the sky area;

FIG. 3 is a graph showing the effect of the defogging steps based on the atmospheric scattering model in example 1; wherein, the graph a is the atmospheric light value, the graph b is the defogging result of the I channel, the graph c is the defogging result of the S channel, and the graph d is the preliminary defogging result;

FIG. 4 is a graph showing the effects of the steps of the post-defogging treatment in example 1; wherein, the graph a is the result of global brightness improvement, the graph b is the result of CLAHE processing, the graph c is the result of guide filtering processing, and the graph d is the original fog graph;

FIG. 5 is a diagram illustrating the effect of the steps of sky region segmentation in example 2; wherein, the graph a is the improved dark channel characteristic, the graph b is the improved relative energy characteristic, the graph c is the result of the rough segmentation of the sky area, and the graph d is the result of the fine segmentation of the sky area;

FIG. 6 is a graph showing the effect of the defogging steps based on the atmospheric scattering model in example 2; wherein, the graph a is the atmospheric light value, the graph b is the defogging result of the I channel, the graph c is the defogging result of the S channel, and the graph d is the preliminary defogging result;

FIG. 7 is a graph showing the effects of the steps of the post-defogging treatment in example 2; wherein, the graph a is the result of global brightness lifting, the graph b is the result of CLAHE processing, the graph c is the result of guided filtering processing, and the graph d is the original fog graph.

Detailed Description

The following describes embodiments of the present invention in detail, which are developed based on the technical solutions of the present invention, and give detailed implementation manners and specific operation procedures to further explain the technical solutions of the present invention.

The present embodiment provides a defogging method for a foggy traffic scene image based on sky region segmentation and color space conversion, and the overall implementation flow is shown in fig. 1, and includes the following steps:

step A, respectively calculating atmospheric light values of corresponding areas in a foggy day traffic scene image according to different fog concentrations of a distant view area, a near view area and a transition area; and then, calculating a traffic scene image subjected to preliminary defogging according to an atmospheric scattering model by using the transmission map and the atmospheric light value of each channel in the HSI color space. The specific treatment process of the step A is as follows:

1) calculating the value of atmospheric light

The fog concentration is not uniformly distributed in the whole image of the traffic scene in foggy days, the fog concentration at a distant scene is higher, and the fog concentration at a near scene is lower. Therefore, the corresponding atmospheric light value is calculated according to the fog concentrations of different areas, so that the problem of image over-enhancement and the problem of halo artifacts after defogging can be solved. Firstly, dividing a foggy day traffic scene image into a long-range area, a short-range area and a transition area, then calculating an atmospheric light value of the long-range area by using a quadtree decomposition method, calculating an atmospheric light value of the short-range area by using a statistical method, calculating an atmospheric light value of the transition area by using linear interpolation, and finally performing mean filtering to smooth the obtained atmospheric light value. The method comprises the following specific steps:

(i) the improved dark channel characteristics are used as fog concentration, the number of the clustering categories is set to be 3, then clustering operation is carried out to obtain a clustering label and a clustering center, and a distant view area, a near view area and a transition area are determined according to the clustering center;

since the dark channel value is invalid in a white non-sky area, such as a white lane line, which is common in traffic scenes, directly using the dark channel value as a feature affects the accuracy of segmentation. Therefore, the embodiment proposes an improved dark channel feature calculation method in combination with the position features (perpendicular to the ground) of the traffic surveillance camera:

[Formula image: improved dark-channel feature F_DC++ — not reproduced in the text]

where i and j are the row and column indices of a pixel in the image, M is the sum of all the row numbers in the image, I is the image composed of the r, g and b channels, c denotes any of the r, g, b color channels, I_c denotes the c color-channel component of image I, I_c(i, j) denotes that component at pixel (i, j), and F_DC++ denotes the dark-channel feature of image I at pixel (i, j).

(ii) Carrying out quadtree decomposition on the divided distant view areas: setting a stopping condition of the quadtree decomposition, continuously decomposing the subarea with the highest average brightness until the stopping condition is met, wherein the pixel average value of the corresponding subarea when the decomposition is stopped is the atmospheric light value of the long-range view area;

(iii) sorting the pixels of the close-range area from large to small according to the brightness values, and calculating the average value of the pixels with the first 1% brightness, namely the atmospheric light value of the close-range area;

(iv) and traversing the atmospheric light value graph according to columns, performing interpolation processing on a transition region between a distant view region and a close view region, and then performing mean filtering to obtain the smooth atmospheric light value graph of the foggy day traffic scene image.
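A sketch of the region-wise atmospheric light estimation in (i)–(iv) above, under stated assumptions: the far/near region masks come from the 3-class clustering on the dark-channel feature, the quadtree stop condition is a minimum block size (the text does not specify the condition), the starting block is the bounding box of the distant-view region, and the interpolation over the transition region is done per column; all names and parameter values are illustrative.

```python
import cv2
import numpy as np

def quadtree_airlight(gray, mask, min_size=16):
    """(ii) Quadtree estimate on the distant-view region: keep splitting the
    quadrant with the highest mean brightness until it is small, then return
    its mean. The stop condition (min_size) is an assumption."""
    ys, xs = np.where(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    while (y1 - y0) > min_size and (x1 - x0) > min_size:
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
        quads = [(y0, ym, x0, xm), (y0, ym, xm, x1),
                 (ym, y1, x0, xm), (ym, y1, xm, x1)]
        y0, y1, x0, x1 = max(quads, key=lambda q: gray[q[0]:q[1], q[2]:q[3]].mean())
    return float(gray[y0:y1, x0:x1].mean())

def near_airlight(gray, mask):
    """(iii) Statistical estimate on the near-view region: mean of the
    brightest 1% of its pixels."""
    vals = np.sort(gray[mask])[::-1]
    k = max(1, int(0.01 * vals.size))
    return float(vals[:k].mean())

def airlight_map(gray, far_mask, near_mask, ksize=15):
    """Per-pixel atmospheric light map: constant values on the far and near
    regions, column-wise linear interpolation over the transition region,
    then mean filtering ((iv))."""
    A_far = quadtree_airlight(gray, far_mask)
    A_near = near_airlight(gray, near_mask)
    A = np.full(gray.shape, np.nan, dtype=np.float32)
    A[far_mask] = A_far
    A[near_mask] = A_near
    for c in range(A.shape[1]):                      # traverse columns
        col = A[:, c]
        known = ~np.isnan(col)
        if known.any():
            col[~known] = np.interp(np.flatnonzero(~known),
                                    np.flatnonzero(known), col[known])
        else:
            col[:] = (A_far + A_near) / 2            # fallback for empty columns
    return cv2.blur(A, (ksize, ksize))               # smooth with a mean filter
```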

2) RGB color space to HSI color space

In order to alleviate the problems of color distortion, halo artifacts and the like, the original foggy traffic scene graph is converted from the RGB color space to the HSI color space. And during defogging, keeping the H channel unchanged, and restoring the I channel and the S channel. The calculation formula for the conversion from the RGB color space to the HSI color space is as follows:

I = (R + G + B)/3,

S = 1 - 3·min(R, G, B)/(R + G + B),

H = θ if B ≤ G, and H = 360° - θ otherwise, where θ = arccos{ [(R - G) + (R - B)] / [2·sqrt((R - G)² + (R - B)(G - B))] }.
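A direct NumPy transcription of the RGB-to-HSI conversion above, assuming the RGB input is a float array in [0, 1] and H is returned in degrees; the small epsilons guard against division by zero and are not part of the formulas.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """rgb: H x W x 3 float array in [0, 1]. Returns H (degrees), S, I."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (R + G + B) / 3.0
    S = 1.0 - np.minimum(np.minimum(R, G), B) / (I + 1e-8)
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + 1e-8
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = np.where(B <= G, theta, 360.0 - theta)
    return H, S, I
```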

3) computing transmission maps for I and S channels

According to the atmospheric scattering model, the I channel satisfies the following equation:

I_J(i,j) = I_I(i,j)·t(i,j) + I_A·(1 - t(i,j))

where I_J is the luminance component of the foggy-day traffic scene image, I_I is the luminance component of the fog-free traffic scene image, I_A is the atmospheric light value of the foggy-day traffic scene image in the I channel (i.e. the luminance component of the atmospheric light), and t is the transmission map of the foggy-day traffic scene image in the I channel.

In this embodiment, the I-channel transmission map is computed by analogy with the defogging of infrared (single-channel) images:

[Formula image: I-channel transmission map t — not reproduced in the text]

assuming that the S channel also satisfies the atmospheric scattering model, the following equation is obtained:

S_J(i,j) = S_I(i,j)·T(i,j) + S_A·(1 - T(i,j))

where S_J is the saturation component of the foggy-day traffic scene image, S_I is the saturation component of the fog-free traffic scene image, S_A is the atmospheric light value of the foggy-day traffic scene image in the S channel (i.e. the saturation component of the atmospheric light), and T is the transmission map of the foggy-day traffic scene image in the S channel.

Since the saturation of the atmospheric light is low, approximately equal to 0 (i.e. S_A ≈ 0), the above equation simplifies to:

S_J(i,j) = S_I(i,j)·T(i,j)

according to the definition formula of the saturation, the above formula can be expanded as follows:

[Formula image: expansion of S_J = S_I·T using the HSI saturation definition S = 1 - 3·min(R, G, B)/(R + G + B) — not reproduced in the text]

min(R_J, G_J, B_J) = t·min(R_I, G_I, B_I) + (1 - t)·min(R_A, G_A, B_A)

since the RGB values of the atmospheric light values are approximately equal, the following approximate relationship can be obtained:

I_A ≈ min(R_A, G_A, B_A)

Working through the algebra above finally gives the calculation formula of the S-channel transmission map:

T(i,j) = t(i,j)·I_I(i,j)/I_J(i,j), or equivalently, using I_J = I_I·t + I_A·(1 - t), T(i,j) = 1 - (1 - t(i,j))·I_A/I_J(i,j).
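A quick numeric sanity check of the S-channel transmission just derived (T = t·I_I/I_J), under the assumptions made above that the atmospheric light is equal across the r, g and b channels and that S_A ≈ 0; the pixel values are made up for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
R, G, B = rng.uniform(0.2, 1.0, 3)     # hypothetical fog-free pixel (r, g, b)
A = 0.9                                # atmospheric light, equal in r, g, b
t = 0.6                                # I-channel transmission at this pixel

I_clear = (R + G + B) / 3
S_clear = 1 - min(R, G, B) / I_clear

# Foggy pixel: atmospheric scattering model applied to each color channel
Rj, Gj, Bj = t * np.array([R, G, B]) + (1 - t) * A
I_foggy = (Rj + Gj + Bj) / 3
S_foggy = 1 - min(Rj, Gj, Bj) / I_foggy

T = t * I_clear / I_foggy              # derived S-channel transmission
print(S_foggy, S_clear * T)            # the two values coincide
```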

4) Calculating the luminance component I_I, saturation component S_I and chrominance component H_I of the preliminarily defogged traffic scene image

(i) Using the luminance component I_J of the foggy-day traffic scene image in the I channel, the transmission map t and the atmospheric light value I_A, calculate the luminance component of the preliminarily defogged traffic scene image from the atmospheric scattering model:

I_J(i,j) = I_I(i,j)·t(i,j) + I_A·(1 - t(i,j)),

The luminance component I_I of the fog-free traffic scene image obtained from this equation is the luminance component of the preliminarily defogged traffic scene image;

(ii) Using the saturation component S_J of the foggy-day traffic scene image in the S channel, the transmission map T and the atmospheric light value S_A, calculate the saturation component S_I of the preliminarily defogged traffic scene image according to the atmospheric scattering model and the definition of saturation:

S_J(i,j) = S_I(i,j)·T(i,j) + S_A·(1 - T(i,j)), with S_A = 0;

(iii) The H-channel values of the preliminarily defogged traffic scene image and the foggy-day traffic scene image are kept the same, i.e. H_I = H_J.

5) Converting the channel components H_I, S_I and I_I of the preliminarily defogged traffic scene image from the HSI color space back to the RGB color space gives the preliminarily defogged traffic scene image. The conversion from the HSI color space to the RGB color space is computed as follows:

RG region (0° ≤ H < 120°):

R = I·[1 + S·cos(H)/cos(60° - H)],

G = 3I - (R + B),

B = I·(1 - S);

GB region (120° ≤ H < 240°):

H = H - 120°,

R = I·(1 - S),

G = I·[1 + S·cos(H)/cos(60° - H)],

B = 3I - (R + G);

BR region (240° ≤ H ≤ 360°):

H = H - 240°,

R = 3I - (G + B),

G = I·(1 - S),

B = I·[1 + S·cos(H)/cos(60° - H)].
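A NumPy sketch of the sector-wise HSI-to-RGB conversion listed above, assuming H is in degrees and S and I are in [0, 1]; the epsilon and the final clipping are safeguards, not part of the formulas.

```python
import numpy as np

def hsi_to_rgb(H, S, I):
    """Inverse of the conversion above; H in degrees, S and I in [0, 1]."""
    H = np.mod(H, 360.0)
    R = np.zeros_like(I)
    G = np.zeros_like(I)
    B = np.zeros_like(I)

    def sector(h):          # the I*(1 + S*cos(h)/cos(60 deg - h)) term
        h = np.radians(h)
        return I * (1.0 + S * np.cos(h) / (np.cos(np.radians(60.0) - h) + 1e-8))

    m = H < 120.0                                   # RG sector
    B[m] = (I * (1 - S))[m]
    R[m] = sector(H)[m]
    G[m] = (3 * I - (R + B))[m]

    m = (H >= 120.0) & (H < 240.0)                  # GB sector
    R[m] = (I * (1 - S))[m]
    G[m] = sector(H - 120.0)[m]
    B[m] = (3 * I - (R + G))[m]

    m = H >= 240.0                                  # BR sector
    G[m] = (I * (1 - S))[m]
    B[m] = sector(H - 240.0)[m]
    R[m] = (3 * I - (G + B))[m]

    return np.clip(np.stack([R, G, B], axis=-1), 0.0, 1.0)
```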

Step B: perform global brightness enhancement on the preliminarily defogged traffic scene image based on a preset I-channel threshold. The preset I-channel threshold is set according to the I-channel pixels, in the HSI color space, of the sky region of the preliminarily defogged traffic scene image, and the sky region is obtained by segmenting the foggy-day traffic scene image based on the dark-channel feature and the relative-energy feature.

The defogged image obtained from the atmospheric scattering model in step A still suffers from partially dark regions, loss of detail and similar problems, and directly converting the channel components I_I, S_I and H_I back to the RGB color space gives a poor visual effect, so some post-processing is needed to enhance it. In this embodiment, a suitable threshold is set from the I-channel pixels, in the HSI color space, of the sky region of the preliminarily defogged traffic scene image, and the image obtained in step A is given a global brightness enhancement based on this preset I-channel threshold. The specific processing is as follows:

1) Extracting the dark-channel feature and relative-energy feature of the foggy-day traffic scene image

The dark-channel feature is computed with the improved method given above:

[Formula image: improved dark-channel feature F_DC++ — not reproduced in the text]

where i and j are the row and column indices of a pixel in the image, M is the sum of all the row numbers in the image, I is the image composed of the r, g and b channels, c denotes any of the r, g, b color channels, I_c denotes the c color-channel component of image I, I_c(i, j) denotes that component at pixel (i, j), and F_DC++ denotes the dark-channel feature of image I at pixel (i, j).

Because surveillance video is mostly of poor quality, the sky region contains a lot of noise and the relative-energy features of the sky and non-sky regions are not clearly separated, which affects segmentation accuracy. This embodiment therefore proposes an improved calculation of the relative-energy feature that combines the positional characteristics of the traffic surveillance camera (mounted perpendicular to the ground) with the brightness information of the image:

[Formula images: improved relative-energy feature F_CE++ — not reproduced in the text]

where Z(i, j) is an intermediate variable built from the horizontal and vertical second derivatives of the Gaussian function, α is the maximum of Z(i, j), k is the contrast gain, τ is the noise threshold, ⊗ denotes the convolution operation, g_h and g_v are the horizontal and vertical second derivatives of the Gaussian function, I(i, j) is the pixel value at pixel (i, j), R, G and B are the r, g, b channel components, and F_CE++ denotes the relative-energy feature of image I at pixel (i, j).

2) Roughly dividing a sky area: and roughly dividing the foggy day traffic scene image into a sky area and a non-sky area by adopting a K-means clustering method based on the dark channel characteristics and the relative energy characteristics. The specific process is as follows:

(i) converting the dark-channel feature and the relative-energy feature of the foggy-day traffic scene image from two-dimensional matrices into one-dimensional column vectors, and performing a regularization operation;

(ii) setting the category number of the K-means clustering to be 2, and clustering pixels in the foggy day traffic scene image based on two column vectors of dark channel characteristics and relative energy characteristics to obtain a clustering label and a clustering center;

(iii) determining the categories of all pixels in the foggy day traffic scene image according to the clustering center, thereby determining a sky area and a non-sky area;

(iv) and converting the clustering label from the one-dimensional column vector into a two-dimensional matrix to obtain a roughly-segmented image of the foggy day traffic scene image.

3) Fine segmentation of sky regions: selecting partial pixels with higher confidence degrees from the roughly divided sky area and non-sky area, and respectively using the partial pixels as positive and negative samples; and training a machine model to obtain a sky area fine classifier based on the dark channel characteristics and the relative energy characteristics of the positive and negative samples.

The errors of the coarse sky segmentation are mostly concentrated at the boundary between the sky and non-sky regions. To obtain a fine segmentation of the sky region, higher-confidence sky and non-sky pixels are extracted from the coarse result as positive and negative samples, their dark-channel and relative-energy features are fed to a Support Vector Machine (SVM), an SVM classifier is obtained after training, and this classifier yields a more accurate fine segmentation. The specific steps are as follows:

(i) Traverse, column by column, the binary image obtained from the coarse segmentation into sky and non-sky regions; if the sky region lies in the upper part of a column and occupies no less than 10% of the column height, record that column number;

(ii) From the column numbers recorded in (i), extract the longest subsequence of sky-region columns; the upper 10% of pixels in these columns are the higher-confidence sky pixels and are taken as positive samples;

(iii) Traverse the same binary image column by column; if the non-sky region lies in the lower part of a column and occupies no less than 10% of the column height, record that column number;

(iv) From the column numbers recorded in (iii), extract the longest subsequence of non-sky-region columns; the lower 10% of pixels in these columns are the higher-confidence non-sky pixels and are taken as negative samples;

(v) training the SVM to obtain a fine classifier capable of distinguishing a sky area from a non-sky area according to the dark channel characteristics and the relative energy characteristics of the positive and negative samples and corresponding positive and negative sample labels;

(vi) and classifying pixels in the foggy day traffic scene image by using a fine sky area classifier based on the dark channel characteristics and the relative energy characteristics, and obtaining a corresponding binary image of fine sky area segmentation according to a classification result.
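A sketch of steps (v)–(vi) using scikit-learn's SVC as the support vector machine; the feature arrays and sample masks follow the naming of the earlier sketches, and the RBF kernel is an assumption (the text does not specify one).

```python
import numpy as np
from sklearn.svm import SVC

def fine_sky_segmentation(dark_channel, contrast_energy, pos_mask, neg_mask):
    """Train an SVM on high-confidence sky / non-sky pixels and classify
    every pixel of the image.

    dark_channel, contrast_energy : 2-D feature arrays
    pos_mask, neg_mask            : boolean masks of positive / negative samples
    Returns the fine sky-region binary mask."""
    feats = np.stack([dark_channel.ravel(), contrast_energy.ravel()], axis=1)

    X = np.vstack([feats[pos_mask.ravel()], feats[neg_mask.ravel()]])
    y = np.hstack([np.ones(int(pos_mask.sum())), np.zeros(int(neg_mask.sum()))])

    clf = SVC(kernel="rbf")          # kernel choice is an assumption
    clf.fit(X, y)

    labels = clf.predict(feats)      # classify every pixel
    return labels.reshape(dark_channel.shape).astype(np.uint8)
```

Note that predicting every pixel with an RBF SVM can be slow on full-resolution frames; subsampling or a linear kernel is a practical alternative.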

4) Determining a preset I-channel threshold

Based on the binary image obtained from the fine segmentation in step 3), divide the preliminarily defogged traffic scene image into a sky region and a non-sky region and count the number N_sky of sky-region pixels; then sort the I-channel pixel values of the preliminarily defogged traffic scene image from large to small and take the value of the N_sky-th pixel in this ordering as the preset I-channel threshold T.

5) For the preliminarily defogged traffic scene image, multiply every I-channel pixel value I_I by 1/T to obtain I_I′, and replace the original I-channel value I_I with I_I′, which raises the overall brightness of the preliminarily defogged traffic scene image.

Step C: apply contrast-limited adaptive histogram equalization and guided filtering to the image obtained in step B to obtain the final defogged traffic scene image.

1) Contrast-limited adaptive histogram equalization

After the overall brightness of the preliminarily defogged traffic scene image has been raised in step B, regions with larger brightness values look good, but regions with smaller brightness values are still unsatisfactory. This embodiment therefore applies contrast-limited adaptive histogram equalization to the I-channel component I_I′ of the image obtained in step B to improve the visual effect of the darker regions. In this embodiment the contrast-enhancement limit is set to 0.001 and the target histogram shape is set to "rayleigh", giving a component I_I″ with a better overall visual effect. The components H_I, S_I and I_I″ are then converted back to the RGB color space, yielding a defogged image with a good visual effect.
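The embodiment's parameters (contrast limit 0.001, "rayleigh" target histogram) correspond to MATLAB's adapthisteq; OpenCV's CLAHE exposes different knobs (clipLimit, tileGridSize), so the sketch below is only a rough analogue applied to the lifted I channel, with placeholder parameter values.

```python
import cv2
import numpy as np

def clahe_on_I(I_channel, clip_limit=2.0, tiles=(8, 8)):
    """I_channel: float I channel in [0, 1] after the brightness lift.
    clip_limit / tiles are OpenCV parameters, not the MATLAB ones
    quoted in the embodiment."""
    I_u8 = np.clip(I_channel * 255.0, 0, 255).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    return clahe.apply(I_u8).astype(np.float32) / 255.0
```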

2) Guided filtering

After the contrast-limited adaptive histogram equalization has been applied to the preliminarily defogged traffic scene image obtained in step B, the visual effect improves markedly, but some detail is still lost. Guided filtering is therefore applied to the result of the previous step to bring out the detail of the image. The local-window radius is set to 15 and the regularization parameter to 0.004; processing directly in the RGB color space yields the detail-enhanced defogged image, i.e. the final defogged traffic scene image.
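A sketch of the final guided-filtering step using the guided filter from opencv-contrib (cv2.ximgproc), with the image used as its own guide; radius 15 and eps 0.004 follow the embodiment, assuming the image is a float array scaled to [0, 1] (for 8-bit images eps would need rescaling).

```python
import cv2
import numpy as np

def guided_filter_details(rgb, radius=15, eps=0.004):
    """rgb: float32 RGB image in [0, 1] after CLAHE, processed directly in
    RGB space with the image as its own guide.
    Requires opencv-contrib-python (cv2.ximgproc)."""
    src = rgb.astype(np.float32)
    return cv2.ximgproc.guidedFilter(src, src, radius, eps)
```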

The invention also provides a device comprising a processor and a memory, wherein the memory is configured to store computer instructions and the processor is configured to execute the computer instructions stored in the memory, specifically to perform the method of the above method embodiment.

Example 1:

respectively extracting improved dark channel characteristics [ shown in figure 2(a) ] and improved relative energy characteristics [ shown in figure 2(b) ] of a foggy day image [ shown in figure 4(d) ]; then, according to the characteristics, obtaining a rough segmentation result [ shown in fig. 2(c) ] of the sky region by using K-means clustering; further, sky regions and non-sky regions with higher confidence are extracted from the rough segmentation result, and a fine segmentation result of the sky region is generated using the classification result [ as shown in fig. 2(d) ].

The atmospheric light values of the distant-view, near-view and transition regions are calculated separately and smoothed [as shown in Fig. 3(a)]; the transmission maps of the I and S channels are then computed in the HSI color space, and the restoration results of the I and S channels are obtained from the atmospheric light values and the transmission maps [as shown in Fig. 3(b) and Fig. 3(c)]; finally, the H_I, S_I and I_I components are converted back to the RGB color space to obtain the preliminary defogging result [as shown in Fig. 3(d)].

A threshold K is calculated and I_I is multiplied by 1/K to raise the global brightness of the image [as shown in Fig. 4(a)]; CLAHE is then performed to obtain a defogged image with a better visual effect [as shown in Fig. 4(b)]; finally, guided filtering is performed to obtain the detail-enhanced defogged image [as shown in Fig. 4(c)].

Example 2:

respectively extracting improved dark channel characteristics [ shown in figure 5(a) ] and improved relative energy characteristics [ shown in figure 5(b) ] of a foggy day image [ shown in figure 7(d) ]; then, according to the characteristics, obtaining a rough segmentation result [ shown in fig. 5(c) ] of the sky region by using K-means clustering; further, sky regions and non-sky regions with higher confidence are extracted from the rough segmentation result, and a fine segmentation result of the sky region is generated using the classification result [ as shown in fig. 5(d) ].

The atmospheric light values of the distant-view, near-view and transition regions are calculated separately and smoothed [as shown in Fig. 6(a)]; the transmission maps of the I and S channels are then computed in the HSI color space, and the restoration results of the I and S channels are obtained from the atmospheric light values and the transmission maps [as shown in Fig. 6(b) and Fig. 6(c)]; finally, the H_I, S_I and I_I components are converted back to the RGB color space to obtain the preliminary defogging result [as shown in Fig. 6(d)].

A threshold K is calculated and I_I is multiplied by 1/K to raise the global brightness of the image [as shown in Fig. 7(a)]; CLAHE is then performed to obtain a defogged image with a better visual effect [as shown in Fig. 7(b)]; finally, guided filtering is performed to obtain the detail-enhanced defogged image [as shown in Fig. 7(c)].

It should be noted that the above disclosure describes only specific examples of the present invention; those skilled in the art can devise various modifications within the spirit and scope of the invention.

Claims (10)

1.一种雾天交通场景图像的去雾方法,其特征在于,包括以下步骤:1. a dehazing method of foggy traffic scene image, is characterized in that, comprises the following steps: 步骤A,针对远景区域、近景区域以及过渡区域不同的雾气浓度,分别计算雾天交通场景图像中对应区域的大气光值;然后在HSI颜色空间利用各通道的传输图和大气光值,根据大气散射模型计算初步去雾的交通场景图像;In step A, the atmospheric light values of the corresponding areas in the foggy traffic scene image are calculated respectively for different fog concentrations in the long-range, close-range and transitional areas; then the transmission map and atmospheric light values of each channel are used in the HSI color space, The scattering model calculates the initial dehazed traffic scene image; 步骤B,基于预设的I通道阈值,对初步去雾的交通场景图像进行全局亮度提升;Step B, based on the preset I channel threshold, perform global brightness enhancement on the initially dehazed traffic scene image; 其中,预设的I通道阈值,是根据初步去雾的交通场景图像中天空区域在HSI颜色空间的I通道像素设置得到;且,天空区域,是基于暗通道特征和相对能量特征对雾天交通场景图像进行分割得到;Among them, the preset I channel threshold is obtained according to the I channel pixel setting of the sky area in the HSI color space in the preliminary dehazing traffic scene image; and, the sky area is based on the dark channel characteristics and relative energy characteristics. The scene image is segmented to obtain; 步骤C,对步骤B得到的图像,进行限制对比度自适应直方图均衡化和引导滤波处理,得到最后去雾的交通场景图像。In step C, the image obtained in step B is subjected to limited contrast adaptive histogram equalization and guided filtering to obtain a final dehazed traffic scene image. 2.根据权利要求1所述的方法,其特征在于,所述暗通道特征的计算方法为:2. method according to claim 1, is characterized in that, the calculation method of described dark channel characteristic is:

Figure FDA0002728062220000011

Figure FDA0002728062220000011

其中,i和j分别是图像中像素的行号和列号,M是图像中所有行号之和,I是由r、g、b通道组成的图像,c表示r、g、b的任一颜色通道,Ic表示图像I的c颜色通道分量,Ic(i,j)表示图像I在像素(i,j)的c颜色通道分量;FDC++表示图像I在像素(i,j)的暗通道特征。where i and j are the row and column numbers of the pixels in the image, respectively, M is the sum of all row numbers in the image, I is an image composed of r, g, and b channels, and c represents any one of r, g, and b. Color channel, I c represents the c color channel component of the image I, I c (i, j) represents the c color channel component of the image I at the pixel (i, j); F DC++ represents the image I at the pixel (i, j) color channel component; Dark channel features. 3.根据权利要求1所述的方法,其特征在于,所述相对能量特征的计算方法为:3. The method according to claim 1, wherein the calculation method of the relative energy characteristic is:

Figure FDA0002728062220000012

Figure FDA0002728062220000012

Figure FDA0002728062220000013

Figure FDA0002728062220000013

Figure FDA0002728062220000014

Figure FDA0002728062220000014

其中,Z(i,j)为有关于高斯函数水平和垂直二阶导数的中间变量,α是Z(i,j)的最大值,k是对比度增益,τ是噪声阈值,

Figure FDA0002728062220000015

表示卷积运算,gh和gv分别是高斯函数的水平和垂直二阶导数;I(i,j)表示像素(i,j)的像素值,R、G、B分别为r、g、b通道分量,FCE++表示图像I在像素(i,j)的暗通道特征。
where Z(i,j) is the intermediate variable related to the horizontal and vertical second derivative of the Gaussian function, α is the maximum value of Z(i,j), k is the contrast gain, τ is the noise threshold,

Figure FDA0002728062220000015

Represents the convolution operation, g h and g v are the horizontal and vertical second-order derivatives of the Gaussian function respectively; I(i, j) represents the pixel value of the pixel (i, j), R, G, B are r, g, The b channel component, F CE++ represents the dark channel feature of image I at pixel (i, j).
4.根据权利要求1所述的方法,其特征在于,基于暗通道特征和相对能量特征对雾天交通场景图像进行分割得到天空区域的方法为:4. The method according to claim 1, wherein the method for segmenting the foggy traffic scene image based on the dark channel feature and the relative energy feature to obtain the sky area is: a1,提取雾天交通场景图像的暗通道特征和相对能量特征;a1, extract the dark channel feature and relative energy feature of the foggy traffic scene image; a2,基于暗通道特征和相对能量特征,采用K均值聚类方法对雾天交通场景图像粗分割为天空区域和非天空区域;a2, based on the dark channel feature and relative energy feature, the K-means clustering method is used to roughly segment the foggy traffic scene image into sky area and non-sky area; a3,从粗分割的天空区域和非天空区域中选取置信度更高的部分像素,分别作为正负样本;并基于正负样本的暗通道特征和相对能量特征,训练机器模型得到天空区域细分类器;a3, select some pixels with higher confidence from the roughly segmented sky area and non-sky area, as positive and negative samples respectively; and based on the dark channel features and relative energy features of the positive and negative samples, train the machine model to obtain a sub-classification of the sky area device; a4,基于暗通道特征和相对能量特征,使用天空区域细分类器对雾天交通场景图像中的像素进行分类,并根据分类结果得到对应的天空区域细分割的二值图像。a4, based on the dark channel feature and relative energy feature, use the sky region sub-classifier to classify the pixels in the foggy traffic scene image, and obtain the corresponding sky region finely segmented binary image according to the classification result. 5.根据权利要求4所述的方法,其特征在于,采用K均值聚类方法对雾天交通场景图像粗分割的方法为:5. method according to claim 4, is characterized in that, the method that adopts K-means clustering method to rough segmentation of foggy traffic scene image is: b1,将雾天交通场景图像的暗通道特征和相对能量特征由二维矩阵转换为一维列向量,并执行正则化操作;b1, convert the dark channel feature and relative energy feature of the foggy traffic scene image from a two-dimensional matrix to a one-dimensional column vector, and perform a regularization operation; b2,设置K均值聚类的类别数为2,基于暗通道特征和相对能量特征这两个列向量,对雾天交通场景图像中的像素进行聚类,得到聚类标签和聚类中心;b2, set the number of categories of K-means clustering to 2, cluster the pixels in the foggy traffic scene image based on the two column vectors of the dark channel feature and the relative energy feature, and obtain the cluster label and cluster center; b3,根据聚类中心确定雾天交通场景图像中所有像素的类别,从而确定天空区域和非天空区域;b3, determine the category of all pixels in the foggy traffic scene image according to the cluster center, so as to determine the sky area and the non-sky area; b4,将聚类标签从一维列向量转换为二维矩阵,得到雾天交通场景图像的粗分割图像。b4, convert the cluster labels from a one-dimensional column vector to a two-dimensional matrix to obtain a coarsely segmented image of the foggy traffic scene image. 6.根据权利要求4所述的方法,其特征在于,从粗分割的天空区域和非天空区域中选取正负样本的方法为:6. method according to claim 4, is characterized in that, the method that selects positive and negative samples from the sky area of rough segmentation and non-sky area is: c1,将粗分割为天空区域和非天空区域得到的二值图像按列遍历,如果天空区域在该列的上部,且在该列高度的占比不少于预设比例,则记录该列号;c1, traverse the binary image obtained by roughly dividing the sky area and the non-sky area into columns. 
If the sky area is in the upper part of the column, and the proportion of the height of the column is not less than the preset ratio, record the column number ; c2,从c1记录的列号中提取天空区域最长的子序列,其上部预设比例的像素即为天空区域置信度更高的像素,将其作为正样本;c2, extract the longest subsequence in the sky area from the column number recorded by c1, and the pixels with the preset ratio in the upper part are the pixels with higher confidence in the sky area, which are regarded as positive samples; c3,将粗分割为天空区域和非天空区域得到的二值图像按列遍历,如果非天空区域在该列的下部,且在该列高度的占比不少于预设比例,则记录该列号;c3, traverse the binary image obtained by roughly dividing the sky area and the non-sky area into columns. If the non-sky area is in the lower part of the column, and the proportion of the height of the column is not less than the preset ratio, then record the column. No; c4,从c3记录的列号中提取非天空区域最长的子序列,其下部预设比例的像素即为非天空区域置信度更高的像素,将其作为负样本。c4, extract the longest subsequence in the non-sky area from the column number recorded in c3, and the lower preset ratio of pixels is the pixel with higher confidence in the non-sky area, which is regarded as a negative sample. 7.根据权利要求1所述的方法,其特征在于,预设的I通道阈值的设置方法为:7. method according to claim 1, is characterized in that, the setting method of preset I channel threshold value is: 计算初步去雾的交通场景图像中天空区域像素的数量Nsky;然后对初步去雾的交通场景图像,将其I通道像素值从大到小排序,取排序中第Nsky个像素的像素值作为预设的I通道阈值T;Calculate the number of sky area pixels N sky in the preliminary dehazed traffic scene image; then sort the I channel pixel values from large to small for the preliminary dehazed traffic scene image, and take the pixel value of the Nth sky pixel in the sorting as the preset I channel threshold T; 步骤B中所述的对初步去雾的交通场景图像进行全局亮度提升,具体为:对初步去雾的交通场景图像,将其I通道的每个像素值均与1/T相乘,得到的结果替换原来的I通道像素值。The global brightness enhancement of the preliminary dehazed traffic scene image described in step B is specifically: for the preliminary dehazed traffic scene image, each pixel value of the I channel is multiplied by 1/T, and the obtained The result replaces the original I channel pixel value. 8.根据权利要求1所述的方法,其特征在于,步骤A中所述的远景区域、近景区域以及过渡区域,划分方法为:基于暗通道特征对雾天交通场景图像中的所有像素进行聚类,从而将雾天交通场景图像划分为远景区域、近景区域以及过渡区域;8. The method according to claim 1, wherein the far-field area, the near-field area and the transition area described in step A are divided into: clustering all pixels in the foggy traffic scene image based on dark channel features. class, so that the foggy traffic scene image is divided into distant view area, close scene area and transition area; 雾天交通场景图像的大气光值计算方法为:采用四叉树分解法计算远景区域的大气光值,采用统计法计算近景区域的大气光值,采用线性插值计算过渡区域的大气光值,然后对所有得到的大气光值进行均值滤波处理。The calculation method of the atmospheric light value of the foggy traffic scene image is as follows: use the quadtree decomposition method to calculate the atmospheric light value of the distant view area, use the statistical method to calculate the atmospheric light value of the near view area, use the linear interpolation to calculate the atmospheric light value of the transition area, and then Mean filtering is performed on all obtained atmospheric light values. 9.根据权利要求1所述的方法,其特征在于,所述在HSI颜色空间利用各通道的传输图和大气光值,根据大气散射模型计算初步去雾的交通场景图像,具体为:9. 
9. The method according to claim 1, wherein computing the preliminarily dehazed traffic scene image in the HSI color space from the per-channel transmission maps and atmospheric light values according to the atmospheric scattering model specifically comprises:
d1, converting the foggy traffic scene image from the RGB color space to the HSI color space to obtain its luminance component I_J, saturation component S_J and hue component H_J;
d2, computing the transmission maps of the foggy traffic scene image in the I channel and the S channel:

[Transmission-map formulas available only as images in the source: Figure FDA0002728062220000031 and Figure FDA0002728062220000032, defining the I-channel transmission map t and the S-channel transmission map T.]

where α is a compensation coefficient, β is the atmospheric scattering coefficient, e is the natural constant, t is the transmission map of the foggy traffic scene image in the I channel, T is the transmission map of the foggy traffic scene image in the S channel, and I_I is the luminance component of the fog-free traffic scene image;
d3, computing the luminance component of the preliminarily dehazed traffic scene image according to the atmospheric scattering model, from the luminance component I_J of the foggy traffic scene image in the I channel, the transmission map t and the atmospheric light value I_A:
I_J(i,j) = I_I(i,j) t(i,j) + I_A (1 - t(i,j)),
the luminance component I_I obtained from this equation being the luminance component of the preliminarily dehazed traffic scene image;
computing the saturation component S_I of the preliminarily dehazed traffic scene image according to the atmospheric scattering model and the definition of saturation, from the saturation component S_J of the foggy traffic scene image in the S channel, the transmission map T and the atmospheric light value S_A = 0:
S_J(i,j) = S_I(i,j) T(i,j) + S_A (1 - T(i,j)), S_A = 0;
keeping the H-channel values of the preliminarily dehazed traffic scene image and the foggy traffic scene image unchanged, i.e. H_I = H_J;
d4, converting the channel components H_I, S_I and I_I of the preliminarily dehazed traffic scene image from the HSI color space back to the RGB color space to obtain the preliminarily dehazed traffic scene image.
10. A device, comprising a processor and a memory, wherein the memory is configured to store computer instructions and the processor is configured to execute the computer instructions stored in the memory, specifically performing the method according to any one of claims 1-9.
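As a non-authoritative sketch of claim 9, the code below inverts the two scattering-model equations for the luminance and saturation channels and keeps the hue unchanged. Because the d2 formulas for the transmission maps are only available as images, t, T and the atmospheric light value I_A are taken as inputs; OpenCV has no native HSI conversion, so the HLS space is used as an approximation, which is an assumption of this sketch.

import numpy as np
import cv2

def preliminary_dehaze_hsi(img_bgr, t, T, I_A):
    # d1 (approximated): convert to HLS, treating L as the luminance component I
    hls = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HLS).astype(np.float32)
    H_J, I_J, S_J = hls[..., 0], hls[..., 1] / 255.0, hls[..., 2] / 255.0
    eps = 1e-3
    # d3: invert I_J = I_I * t + I_A * (1 - t)  =>  I_I = (I_J - I_A * (1 - t)) / t
    I_I = (I_J - I_A * (1.0 - t)) / np.maximum(t, eps)
    # invert S_J = S_I * T + S_A * (1 - T) with S_A = 0  =>  S_I = S_J / T
    S_I = S_J / np.maximum(T, eps)
    # the hue channel is kept unchanged: H_I = H_J
    out = np.stack([H_J, np.clip(I_I, 0.0, 1.0) * 255.0, np.clip(S_I, 0.0, 1.0) * 255.0], axis=-1)
    # d4: convert back to the RGB (BGR in OpenCV) color space
    return cv2.cvtColor(out.astype(np.uint8), cv2.COLOR_HLS2BGR)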
CN202011109313.2A 2020-10-16 2020-10-16 Defogging method and equipment for foggy-day traffic scene image Active CN112200746B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011109313.2A CN112200746B (en) 2020-10-16 2020-10-16 Defogging method and equipment for foggy-day traffic scene image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011109313.2A CN112200746B (en) 2020-10-16 2020-10-16 Defogging method and equipment for foggy-day traffic scene image

Publications (2)

Publication Number Publication Date
CN112200746A true CN112200746A (en) 2021-01-08
CN112200746B CN112200746B (en) 2024-03-08

Family

ID=74009188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011109313.2A Active CN112200746B (en) 2020-10-16 2020-10-16 Defogging method and equipment for foggy-day traffic scene image

Country Status (1)

Country Link
CN (1) CN112200746B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160005152A1 (en) * 2014-07-01 2016-01-07 Adobe Systems Incorporated Multi-Feature Image Haze Removal
CN105631823A (en) * 2015-12-28 2016-06-01 西安电子科技大学 Dark channel sky area defogging method based on threshold segmentation optimization
CN105631829A (en) * 2016-01-15 2016-06-01 天津大学 Night haze image defogging method based on dark channel prior and color correction
CN106548463A (en) * 2016-10-28 2017-03-29 大连理工大学 Automatic sea fog image defogging method and system based on dark channel and Retinex
CN107301623A (en) * 2017-05-11 2017-10-27 北京理工大学珠海学院 Traffic image defogging method and system based on dark channel and image segmentation
US20190287219A1 (en) * 2018-03-15 2019-09-19 National Chiao Tung University Video dehazing device and method
CN111091501A (en) * 2018-10-24 2020-05-01 天津工业大学 A Parameter Estimation Method for Atmospheric Scattering and Dehazing Models
CN109919859A (en) * 2019-01-25 2019-06-21 暨南大学 Outdoor scene image defogging enhancement method, computing device and storage medium
CN110310241A (en) * 2019-06-26 2019-10-08 长安大学 A multi-atmospheric light value traffic image defogging method combined with depth region segmentation
CN110570365A (en) * 2019-08-06 2019-12-13 西安电子科技大学 Image Dehazing Method Based on Prior Information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GAO QIANG: "Image dehazing method based on dark channel compensation and atmospheric light value improvement", Laser & Optoelectronics Progress *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112816483A (en) * 2021-03-05 2021-05-18 北京文安智能技术股份有限公司 Cluster fog recognition and early-warning method and system based on fog value analysis, and electronic device
CN113205469A (en) * 2021-06-04 2021-08-03 中国人民解放军国防科技大学 Single image defogging method based on improved dark channel
CN113822816A (en) * 2021-09-25 2021-12-21 李蕊男 Haze removing method for single remote sensing image optimized by aerial fog scattering model
CN116309203A (en) * 2023-05-19 2023-06-23 中国人民解放军国防科技大学 A method and device for unmanned platform motion estimation with polarization vision adaptive enhancement
CN118898551A (en) * 2024-09-30 2024-11-05 三业电气有限公司 A method, medium and device for defogging transmission line monitoring images
CN118898551B (en) * 2024-09-30 2024-12-03 三业电气有限公司 Defogging method, medium and equipment for transmission line monitoring image

Also Published As

Publication number Publication date
CN112200746B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN108596849B (en) 2021-11-23 Single image defogging method based on sky region segmentation
CN111209952B (en) 2023-05-30 Underwater target detection method based on improved SSD and migration learning
CN112200746B (en) 2024-03-08 Defogging method and equipment for foggy-day traffic scene image
CN107103591B (en) 2020-01-07 A Single Image Dehazing Method Based on Image Haze Concentration Estimation
CN108615226B (en) 2022-02-11 An Image Dehazing Method Based on Generative Adversarial Networks
CN110310241B (en) 2021-06-01 Method for defogging traffic image with large air-light value by fusing depth region segmentation
CN108537756B (en) 2020-08-25 Single image defogging method based on image fusion
CN107527332A (en) 2017-12-29 Low-light image color-preserving enhancement method based on improved Retinex
CN108009518A (en) 2018-05-08 Hierarchical traffic sign recognition method based on fast binary convolutional neural networks
CN106169081A (en) 2016-11-30 Image classification and processing method based on different illumination
CN111709888B (en) 2023-12-08 Aerial image defogging method based on improved generative adversarial network
CN111489330B (en) 2021-06-22 Weak and small target detection method based on multi-source information fusion
CN112419163B (en) 2023-06-30 Single image weak supervision defogging method based on priori knowledge and deep learning
CN110060221B (en) 2023-01-17 A Bridge Vehicle Detection Method Based on UAV Aerial Images
CN112766056A (en) 2021-05-07 Method and device for detecting lane line in low-light environment based on deep neural network
CN108564538A (en) 2018-09-21 Image haze removing method and system based on ambient light difference
Khan et al. 2022 Recent advancement in haze removal approaches
CN116883868A (en) 2023-10-13 UAV intelligent cruise detection method based on adaptive image defogging
CN111914749A (en) 2020-11-10 A method and system for lane line recognition based on neural network
CN110047041B (en) 2023-05-09 Space-frequency domain combined traffic monitoring video rain removing method
CN109345479B (en) 2021-04-06 Real-time preprocessing method and storage medium for video monitoring data
Han et al. 2022 Single-image dehazing using scene radiance constraint and color gradient guided filter
CN102592125A (en) 2012-07-18 Moving object detection method based on standard deviation characteristic
Huang et al. 2023 Efficient image dehazing algorithm using multiple priors constraints
Sun et al. 2021 Single-image dehazing based on dark channel prior and fast weighted guided filtering

Legal Events

Date Code Title Description
2021-01-08 PB01 Publication
2021-01-26 SE01 Entry into force of request for substantive examination
2024-03-08 GR01 Patent grant