
CN118587219A - An occlusion analysis and object color intelligent recognition system - Google Patents


Info

Publication number
CN118587219A
Authority
CN
China
Prior art keywords
color
formula
shape
frames
model
Prior art date
2024-08-06
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411072973.6A
Other languages
Chinese (zh)
Other versions
CN118587219B (en)
Inventor
柯海滨
刘云明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xiwei Intelligent Technology Co ltd
Original Assignee
Shenzhen Xiwei Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2024-08-06
Filing date
2024-08-06
Publication date
2024-09-03
2024-08-06 Application filed by Shenzhen Xiwei Intelligent Technology Co ltd filed Critical Shenzhen Xiwei Intelligent Technology Co ltd
2024-08-06 Priority to CN202411072973.6A priority Critical patent/CN118587219B/en
2024-09-03 Publication of CN118587219A publication Critical patent/CN118587219A/en
2024-10-29 Application granted granted Critical
2024-10-29 Publication of CN118587219B publication Critical patent/CN118587219B/en
Status Active legal-status Critical Current
2044-08-06 Anticipated expiration legal-status Critical

Links

  • 238000004458 analytical method Methods 0.000 title claims abstract description 16
  • 239000000284 extract Substances 0.000 claims abstract description 17
  • 238000000034 method Methods 0.000 claims abstract description 15
  • 238000010801 machine learning Methods 0.000 claims abstract description 6
  • 230000008846 dynamic interplay Effects 0.000 claims abstract description 5
  • 230000008859 change Effects 0.000 claims description 50
  • 239000011159 matrix material Substances 0.000 claims description 17
  • 230000000877 morphologic effect Effects 0.000 claims description 14
  • 238000001228 spectrum Methods 0.000 claims description 9
  • 238000004364 calculation method Methods 0.000 claims description 8
  • 230000035945 sensitivity Effects 0.000 claims description 7
  • 239000003086 colorant Substances 0.000 claims description 4
  • 230000001186 cumulative effect Effects 0.000 claims description 4
  • 238000012300 Sequence Analysis Methods 0.000 claims description 3
  • 238000006243 chemical reaction Methods 0.000 claims description 3
  • 238000003708 edge detection Methods 0.000 claims description 3
  • 238000011156 evaluation Methods 0.000 claims description 3
  • 230000004927 fusion Effects 0.000 claims description 3
  • 238000005065 mining Methods 0.000 claims description 3
  • 238000005516 engineering process Methods 0.000 abstract description 10
  • 230000008569 process Effects 0.000 abstract description 9
  • 230000004438 eyesight Effects 0.000 abstract description 4
  • 238000012544 monitoring process Methods 0.000 abstract description 4
  • 230000008447 perception Effects 0.000 abstract description 3
  • 230000003993 interaction Effects 0.000 abstract description 2
  • 238000000518 rheometry Methods 0.000 abstract description 2
  • 238000004088 simulation Methods 0.000 abstract description 2
  • 238000005299 abrasion Methods 0.000 abstract 1
  • 238000001514 detection method Methods 0.000 abstract 1
  • 238000005286 illumination Methods 0.000 abstract 1
  • 230000036632 reaction speed Effects 0.000 abstract 1
  • 238000009795 derivation Methods 0.000 description 12
  • 230000007613 environmental effect Effects 0.000 description 2
  • 238000012986 modification Methods 0.000 description 2
  • 230000004048 modification Effects 0.000 description 2
  • 230000005856 abnormality Effects 0.000 description 1
  • 238000013473 artificial intelligence Methods 0.000 description 1
  • 230000004456 color vision Effects 0.000 description 1
  • 238000010219 correlation analysis Methods 0.000 description 1
  • 238000013135 deep learning Methods 0.000 description 1
  • 230000007812 deficiency Effects 0.000 description 1
  • 238000003745 diagnosis Methods 0.000 description 1
  • 230000001815 facial effect Effects 0.000 description 1
  • 238000011835 investigation Methods 0.000 description 1
  • 238000005457 optimization Methods 0.000 description 1
  • 230000008092 positive effect Effects 0.000 description 1
  • 238000012545 processing Methods 0.000 description 1
  • 230000004044 response Effects 0.000 description 1
  • 230000003595 spectral effect Effects 0.000 description 1
  • 238000012549 training Methods 0.000 description 1
  • 230000000007 visual effect Effects 0.000 description 1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of intelligent recognition, and in particular to a system for occlusion analysis and intelligent recognition of object color. The system comprises a continuous frame analysis module that extracts continuous frames from a video stream, performs pixel comparison, and extracts the shape and texture information of mechanical components within each frame. In the invention, deep analysis of the details of mechanical components in continuous video frames significantly improves the detection of fine wear or deformation. Dynamic interaction simulation and motion trajectory modeling accurately estimate and predict the interactions between components and their motion trajectories, greatly improving the response speed and decision-making efficiency of the automated system. In addition, monitoring color variation and analyzing light source changes in detail optimizes color recognition accuracy under different illumination conditions, which is particularly critical for real-time vision systems. Integrating machine learning for shape and size correlation analysis provides a higher level of environmental perception in complex environments, thereby optimizing industrial automation processes.

Description

An occlusion analysis and object color intelligent recognition system

Technical Field

The present invention relates to the field of intelligent recognition technology, and in particular to an occlusion analysis and object color intelligent recognition system.

Background Art

The field of intelligent recognition technology involves the use of artificial intelligence algorithms to analyze and identify specific patterns and objects in data such as images, videos, and sounds. The field relies mainly on machine learning, deep learning, and computer vision. By training models to recognize different data features, intelligent recognition systems can perform a variety of complex tasks such as facial recognition, object tracking, and scene understanding. The technology is widely used in industries such as security monitoring, autonomous driving, medical diagnosis, and personalized recommendation. Intelligent recognition not only improves processing speed and accuracy, but also greatly expands the application scenarios of machines, allowing them to respond in more complex environments.

Among these, an occlusion analysis and object color intelligent recognition system uses intelligent recognition technology to analyze occluded objects in an image and to identify the colors of those objects. When an object is partially occluded, such a system can infer the shape and color of the whole object by analyzing the features of its visible part. The system has a wide range of uses, including video surveillance, image editing software, and the vision systems of self-driving cars. In practice, the technology helps improve the accuracy and robustness of image recognition, especially in complex visual environments, and effectively supports decision-making and environmental perception.

Traditional techniques have difficulty handling recognition accurately when objects are partially occluded or only partially visible. Because they rely on a limited set of visible features, recognition accuracy drops significantly when the object cannot be captured completely. For example, in security monitoring, misrecognition of a partially occluded face or vehicle may lead to a security threat being misjudged or missed. In addition, traditional color recognition is easily affected by changing lighting conditions, which is particularly evident outdoors; differences in the color perception of autonomous vehicles under different weather and lighting conditions may affect driving safety. These deficiencies show that the adaptability and flexibility of traditional methods in dynamic, changing environments need to be improved.

Summary of the Invention

The purpose of the present invention is to address the shortcomings of the prior art by proposing an occlusion analysis and object color intelligent recognition system.

In order to achieve the above purpose, the present invention adopts the following technical solution. An occlusion analysis and object color intelligent recognition system comprises:

a continuous frame analysis module, which extracts continuous frames from the video stream, performs pixel comparison, extracts the shape and texture information of the mechanical components in each frame, identifies wear or deformation features on the component surfaces by analyzing the component boundaries, and generates a frame feature set;

a motion sequence analysis module, which, based on the frame feature set, evaluates the smoothness and accuracy of component motion by simulating the dynamic interaction between mechanical components, computes the continuous motion paths of the components, and obtains an object motion trajectory model;

a color change analysis module, which, based on the object motion trajectory model, monitors color variation during motion, analyzes the influence of light source changes on color recognition, extracts color change data, and generates a color change model;

a deep association mining module, which, based on the color change model, analyzes the shape, size, and position of objects in the image, extracts the correlation between shape and size from the data, optimizes the model to match the current industrial environment, and obtains key morphological features.

As a further solution of the present invention, the frame feature set is acquired as follows:

perform a pixel difference comparison on consecutive video frames using the formula

$D_t(x,y) = \left| I_t(x,y) - I_{t-1}(x,y) \right|$

to calculate the absolute difference of each pixel between two adjacent frames and generate the pixel difference matrix $D_t$;

where $D_t(x,y)$ denotes the pixel difference at position $(x,y)$ at time $t$, and $I_t(x,y)$ and $I_{t-1}(x,y)$ denote the pixel values of the current frame and the previous frame, respectively;

based on the pixel difference matrix $D_t$, apply the edge detection formula

$E_t(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$

to calculate the edge strength of each pixel and obtain the edge strength matrix $E_t$;

where $E_t(x,y)$ denotes the edge strength at position $(x,y)$ at time $t$, and $G_x$ and $G_y$ are the gradients of the pixel difference matrix $D_t$ in the x and y directions;

using the edge strength matrix $E_t$ and the shape information of the mechanical components, apply the nonlinear conversion formula

$F_t(x,y) = \dfrac{1}{1 + e^{-\alpha\,(E_t(x,y) - \beta)}}$

to obtain the frame feature set $F_t$;

where $F_t(x,y)$ indicates whether position $(x,y)$ on the mechanical component exhibits wear or deformation, $\alpha$ is the sensitivity adjustment coefficient, and $\beta$ is the threshold adjustment coefficient for the edge strength, used to enhance the wear or deformation features of the component.

As a further solution of the present invention, the object motion trajectory model is acquired as follows:

extract the position coordinates of the mechanical components in multiple frames from the frame feature set, and use the distance formula

$\Delta P_i(t) = \sqrt{\left(x_i(t) - x_i(t-1)\right)^2 + \left(y_i(t) - y_i(t-1)\right)^2}$

to calculate the position change of each component between consecutive frames, generating a position change value $\Delta P_i(t)$ for each component;

where $\Delta P_i(t)$ denotes the position change value of the i-th component at time $t$, and $(x_i(t), y_i(t))$ and $(x_i(t-1), y_i(t-1))$ are its x and y coordinates at times $t$ and $t-1$;

using the position change values $\Delta P_i(t)$, apply the smoothness evaluation formula

$S(t) = \dfrac{1}{N} \sum_{i=1}^{N} \mathbf{1}\left[\Delta P_i(t) \le \theta\right]$

to calculate the smoothness score $S(t)$ of the component motion, which reflects the continuity and accuracy of the motion;

where $S(t)$ denotes the smoothness score at time $t$, $\Delta P_i(t)$ is the position change value of the i-th component, $\theta$ is the smoothness threshold, and $N$ is the number of components;

analyze the smoothness scores $S(t)$ together with the motion state of the components, and use the accumulation formula

$M = \sum_{t=1}^{T} w_t\, S(t)$

to calculate and obtain the object motion trajectory model $M$;

where $M$ denotes the motion trajectory model, $w_t$ is a weight that depends on the smoothness of the components in each frame, and $T$ is the total number of frames.

As a further solution of the present invention, the color change model is acquired as follows:

extract key frames from the object motion trajectory model, record the color spectrum data of the object in multiple key frames, and calculate the color difference between adjacent key frames using the formula

$\Delta C_\lambda(t) = \left| C_\lambda(t) - C_\lambda(t-1) \right|$

to obtain the color flow variable $\Delta C_\lambda(t)$ at each wavelength;

where $C_\lambda(t)$ and $C_\lambda(t-1)$ are the color spectrum values at times $t$ and $t-1$;

using the color flow variable $\Delta C_\lambda(t)$, analyze the influence of light source changes on color recognition, and introduce a light source sensitivity adjustment formula to calculate the adjusted color recognition data $A_\lambda(t)$;

where $A_\lambda(t)$ denotes the adjusted color recognition data for wavelength $\lambda$ at time $t$, $\lambda_0$ is the reference wavelength, and $a$ and $b$ are coefficients that adjust the influence of the light source;

combining the adjusted color recognition data $A_\lambda(t)$, apply a nonlinear weighting formula that fuses all time points and wavelengths to generate the color change model;

where $w_\lambda$ is the weight coefficient of wavelength $\lambda$ and $T$ is the total time length.

As a further solution of the present invention, the key morphological features are acquired as follows:

obtain image data from the color change model, identify the shape, size, and position information of each object in the image, and use the area calculation formula

$A_i = \pi \left( \dfrac{d_i}{2} \right)^{2}$

to calculate the area of each object and obtain a shape descriptor;

where $A_i$ denotes the area of the i-th object and $d_i$ is the width (diameter) of the object;

apply machine learning techniques to analyze the shape descriptors, and use a goodness-of-fit scoring formula to evaluate the association between shape and size, generating a shape-size association score $R$;

where $A_i$ is the shape area, $\bar{A}$ and $\sigma$ are the mean and standard deviation of the areas, respectively, and $N$ is the number of objects;

based on the shape-size association scores, optimize the matching model using the weight adjustment formula

$K = \sum_{i=1}^{M} w_i\, F_i + \delta$

and integrate to obtain the optimized key morphological features $K$;

where $F_i$ is the shape-size association score, $w_i$ is the weight, $\delta$ is the adjustment factor, and $M$ is the total number of data points.

Compared with the prior art, the advantages and positive effects of the present invention are as follows:

In the present invention, in-depth analysis of the details of mechanical components in continuous video frames significantly improves the ability to detect slight wear or deformation. Dynamic interaction simulation and motion trajectory modeling accurately evaluate and predict the interactions between components and their motion trajectories, greatly improving the response speed and decision-making efficiency of the automation system. In addition, monitoring color variation and analyzing light source changes in detail optimizes the accuracy of color recognition under different lighting conditions, which is particularly critical for real-time vision systems. Integrating machine learning for shape and size correlation analysis provides a higher level of environmental perception in complex environments, thereby optimizing industrial automation processes.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a system flow chart of the present invention;

FIG. 2 is a flow chart of the steps for acquiring the frame feature set of the present invention;

FIG. 3 is a flow chart of the steps for acquiring the object motion trajectory model of the present invention;

FIG. 4 is a flow chart of the steps for acquiring the color change model of the present invention;

FIG. 5 is a flow chart of the steps for acquiring the key morphological features of the present invention.

DETAILED DESCRIPTION

In order to make the purpose, technical solution, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.

In the description of the present invention, it should be understood that terms such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore they cannot be understood as limiting the present invention. In addition, in the description of the present invention, "plurality" means two or more, unless otherwise clearly and specifically defined.

Embodiment 1

Referring to FIG. 1, an occlusion analysis and object color intelligent recognition system comprises:

a continuous frame analysis module, which extracts continuous frames from the video stream, performs pixel comparison, extracts the shape and texture information of the mechanical components in each frame, identifies wear or deformation features on the component surfaces by analyzing the component boundaries, and generates a frame feature set;

a motion sequence analysis module, which, based on the frame feature set, evaluates the smoothness and accuracy of component motion by simulating the dynamic interaction between mechanical components, computes the continuous motion paths of the components, and obtains an object motion trajectory model;

a color change analysis module, which, based on the object motion trajectory model, monitors color variation during motion, analyzes the influence of light source changes on color recognition, extracts color change data, and generates a color change model;

a deep association mining module, which, based on the color change model, analyzes the shape, size, and position of objects in the image, extracts the correlation between shape and size from the data, optimizes the model to match the current industrial environment, and obtains key morphological features.

The frame feature set includes shape parameters, texture parameters, and boundary markers; the object motion trajectory model specifically includes path coordinates, speed indicators, and motion anomaly markers; the color change model specifically includes color depth, change frequency, and color offset values; and the key morphological features include shape similarity, size range, and position coordinates.
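As a reading aid for the artifacts listed above, the following is a minimal Python sketch of the four intermediate data products and the order in which the modules produce them. All class and field names are illustrative choices, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FrameFeatureSet:
    """Output of the continuous frame analysis module."""
    shape_params: List[float]
    texture_params: List[float]
    boundary_marks: List[bool]          # wear / deformation markers per region

@dataclass
class TrajectoryModel:
    """Output of the motion sequence analysis module."""
    path_coords: List[Tuple[float, float]]
    speed: List[float]
    motion_anomalies: List[bool]

@dataclass
class ColorChangeModel:
    """Output of the color change analysis module."""
    color_depth: List[float]
    change_frequency: float
    color_shift: List[float]

@dataclass
class KeyMorphology:
    """Output of the deep association mining module."""
    shape_similarity: float
    size_range: Tuple[float, float]
    positions: List[Tuple[float, float]]

# The modules chain linearly: video frames -> FrameFeatureSet -> TrajectoryModel
# -> ColorChangeModel -> KeyMorphology, each module consuming the previous artifact.
example = FrameFeatureSet(shape_params=[0.8], texture_params=[0.4], boundary_marks=[True])
print(example)
```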

Referring to FIG. 2, the frame feature set is acquired as follows:

perform a pixel difference comparison on consecutive video frames using the formula

$D_t(x,y) = \left| I_t(x,y) - I_{t-1}(x,y) \right|$

to calculate the absolute difference of each pixel between two adjacent frames and generate the pixel difference matrix $D_t$;

where $D_t(x,y)$ denotes the pixel difference at position $(x,y)$ at time $t$, and $I_t(x,y)$ and $I_{t-1}(x,y)$ denote the pixel values of the current frame and the previous frame, respectively;

based on the pixel difference matrix $D_t$, apply the edge detection formula

$E_t(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$

to calculate the edge strength of each pixel and obtain the edge strength matrix $E_t$;

where $E_t(x,y)$ denotes the edge strength at position $(x,y)$ at time $t$, and $G_x$ and $G_y$ are the gradients of the pixel difference matrix $D_t$ in the x and y directions;

using the edge strength matrix $E_t$ and the shape information of the mechanical components, apply the nonlinear conversion formula

$F_t(x,y) = \dfrac{1}{1 + e^{-\alpha\,(E_t(x,y) - \beta)}}$

to obtain the frame feature set $F_t$;

where $F_t(x,y)$ indicates whether position $(x,y)$ on the mechanical component exhibits wear or deformation, $\alpha$ is the sensitivity adjustment coefficient, and $\beta$ is the threshold adjustment coefficient for the edge strength, used to enhance the wear or deformation features of the component.

Parameter interpretation and derivation for the pixel difference formula:

$I_t(x,y)$ and $I_{t-1}(x,y)$ respectively represent the pixel values of the current frame and the previous frame at position $(x,y)$.

$D_t(x,y)$ represents the pixel difference between the two frames at that position.

Example: assume that at position (10, 10) the pixel value of the current frame is 120 and that of the previous frame is 110; then:

$D_t(10,10) = \left| 120 - 110 \right| = 10$

Here, 10 means that the brightness of this pixel changed by 10 units between the two frames. Such a change is caused by the movement of an object or a change in lighting.

Edge detection formula:

$E_t(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$

Parameter interpretation and derivation:

$G_x$ and $G_y$ are the gradients of the pixel difference matrix $D_t$ in the x and y directions.

$E_t(x,y)$ represents the edge strength at position $(x,y)$ at time $t$.

Example: continuing with the matrix $D_t$ above, assume that at position (10, 10) the gradients in the x and y directions are 4 and 3, respectively; then:

$E_t(10,10) = \sqrt{4^2 + 3^2} = 5$

Here, 5 means that the edge strength at this pixel is 5; a higher value indicates a more pronounced boundary or edge.

Nonlinear conversion formula:

$F_t(x,y) = \dfrac{1}{1 + e^{-\alpha\,(E_t(x,y) - \beta)}}$

Parameter interpretation and derivation:

$\alpha$ is the sensitivity adjustment coefficient.

$\beta$ is the threshold adjustment coefficient for the edge strength.

$E_t(x,y)$ is the edge strength obtained above.

Example:

Setting, for example, $\alpha = 1$ and $\beta = 1$, and continuing with $E_t(10,10) = 5$:

$F_t(10,10) = \dfrac{1}{1 + e^{-1 \times (5 - 1)}} \approx 0.982$

The value 0.982 indicates that this pixel corresponds to a worn or deformed region on the mechanical component; because the value is close to 1, it indicates high confidence.
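The three steps above can be sketched in a few lines of Python. This is a minimal illustration under the reconstructed formulas: the use of a numerical gradient for $G_x$ and $G_y$ and the values $\alpha = 1$, $\beta = 1$ are assumptions chosen so that the worked numbers (10, 5, 0.982) are reproduced.

```python
import numpy as np

def pixel_difference(curr: np.ndarray, prev: np.ndarray) -> np.ndarray:
    """D_t(x, y) = |I_t(x, y) - I_{t-1}(x, y)| for two consecutive frames."""
    return np.abs(curr.astype(float) - prev.astype(float))

def edge_strength(diff: np.ndarray) -> np.ndarray:
    """E_t = sqrt(Gx^2 + Gy^2), with Gx, Gy the gradients of the difference matrix."""
    gy, gx = np.gradient(diff)          # numerical gradients along y and x
    return np.sqrt(gx ** 2 + gy ** 2)

def wear_feature(edge: np.ndarray, alpha: float = 1.0, beta: float = 1.0) -> np.ndarray:
    """Sigmoid mapping F_t = 1 / (1 + exp(-alpha * (E_t - beta)))."""
    return 1.0 / (1.0 + np.exp(-alpha * (edge - beta)))

# Reproduce the worked single-pixel numbers from the text.
d = abs(120 - 110)                      # pixel difference at (10, 10): 10
e = np.hypot(4, 3)                      # gradients 4 and 3 -> edge strength 5
f = 1.0 / (1.0 + np.exp(-1.0 * (e - 1.0)))
print(d, e, round(float(f), 3))         # 10 5.0 0.982
```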

Referring to FIG. 3, the object motion trajectory model is acquired as follows:

extract the position coordinates of the mechanical components in multiple frames from the frame feature set, and use the distance formula

$\Delta P_i(t) = \sqrt{\left(x_i(t) - x_i(t-1)\right)^2 + \left(y_i(t) - y_i(t-1)\right)^2}$

to calculate the position change of each component between consecutive frames, generating a position change value $\Delta P_i(t)$ for each component;

where $\Delta P_i(t)$ denotes the position change value of the i-th component at time $t$, and $(x_i(t), y_i(t))$ and $(x_i(t-1), y_i(t-1))$ are its x and y coordinates at times $t$ and $t-1$;

using the position change values $\Delta P_i(t)$, apply the smoothness evaluation formula

$S(t) = \dfrac{1}{N} \sum_{i=1}^{N} \mathbf{1}\left[\Delta P_i(t) \le \theta\right]$

to calculate the smoothness score $S(t)$ of the component motion, which reflects the continuity and accuracy of the motion;

where $S(t)$ denotes the smoothness score at time $t$, $\Delta P_i(t)$ is the position change value of each component, $\theta$ is the smoothness threshold, and $N$ is the number of components;

analyze the smoothness scores $S(t)$ together with the motion state of the components, and use the accumulation formula

$M = \sum_{t=1}^{T} w_t\, S(t)$

to calculate and obtain the object motion trajectory model $M$;

where $M$ denotes the motion trajectory model, $w_t$ is a weight that depends on the smoothness of the components in each frame, and $T$ is the total number of frames.

Distance formula:

$\Delta P_i(t) = \sqrt{\left(x_i(t) - x_i(t-1)\right)^2 + \left(y_i(t) - y_i(t-1)\right)^2}$

where:

$\Delta P_i(t)$: position change value of the i-th component at time $t$;

$x_i(t)$, $x_i(t-1)$: x coordinates of the i-th component at times $t$ and $t-1$;

$y_i(t)$, $y_i(t-1)$: y coordinates of the i-th component at times $t$ and $t-1$.

Derivation and example: assume a component is at position (3, 4) at time $t-1$ and at position (6, 8) at time $t$:

$\Delta P_i(t) = \sqrt{(6-3)^2 + (8-4)^2} = 5$

The result, 5, indicates that the component moved a distance of 5 units between two consecutive frames.

Smoothness evaluation formula:

$S(t) = \dfrac{1}{N} \sum_{i=1}^{N} \mathbf{1}\left[\Delta P_i(t) \le \theta\right]$

where:

$S(t)$: smoothness score at time $t$;

$\Delta P_i(t)$: computed position change of each component;

$\theta$: smoothness threshold, assumed to be 6;

$N$: number of components, assumed to be 2.

Derivation and example: continuing with the position change result above, and with the position change of the other component in the same time frame also within the threshold:

$S(t) = \dfrac{1}{2}\left(1 + 1\right) = 1$

The result, 1, indicates that the motion at time $t$ is very smooth: the movement of all components stays within the threshold.

Accumulation formula:

$M = \sum_{t=1}^{T} w_t\, S(t)$

where:

$M$: motion trajectory model;

$S(t)$: computed smoothness score;

$w_t$: weight of each frame, assumed to be 0.5 per frame;

$T$: total number of frames, assumed to be 2.

Derivation and example:

With two frames, each with a smoothness score of 1:

$M = 0.5 \times 1 + 0.5 \times 1 = 1$

The result, 1, represents the overall smoothness of the entire motion sequence under the given weights.
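A minimal Python sketch of the three trajectory steps, under the same reconstructions used above: the fraction-within-threshold form of the smoothness score, the 4.0 displacement of the second component, and the equal frame weights of 0.5 are assumptions consistent with the worked numbers rather than values fixed by the text.

```python
import math
from typing import List, Tuple

def position_change(p_now: Tuple[float, float], p_prev: Tuple[float, float]) -> float:
    """Euclidean displacement of one component between consecutive frames."""
    return math.hypot(p_now[0] - p_prev[0], p_now[1] - p_prev[1])

def smoothness_score(changes: List[float], theta: float) -> float:
    """Assumed form: fraction of components whose displacement stays within theta."""
    return sum(1 for d in changes if d <= theta) / len(changes)

def trajectory_model(scores: List[float], weights: List[float]) -> float:
    """M = sum_t w_t * S(t): weighted accumulation of per-frame smoothness."""
    return sum(w * s for w, s in zip(weights, scores))

# Worked numbers from the text: (3, 4) -> (6, 8) gives 5; threshold 6; two frames.
dp = position_change((6, 8), (3, 4))          # 5.0
s = smoothness_score([dp, 4.0], theta=6.0)    # both within threshold -> 1.0 (4.0 is assumed)
m = trajectory_model([1.0, 1.0], [0.5, 0.5])  # equal weights assumed -> 1.0
print(dp, s, m)
```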

Referring to FIG. 4, the color change model is acquired as follows:

extract key frames from the object motion trajectory model, record the color spectrum data of the object in multiple key frames, and calculate the color difference between adjacent key frames using the formula

$\Delta C_\lambda(t) = \left| C_\lambda(t) - C_\lambda(t-1) \right|$

to obtain the color flow variable $\Delta C_\lambda(t)$ at each wavelength;

where $C_\lambda(t)$ and $C_\lambda(t-1)$ are the color spectrum values at times $t$ and $t-1$;

using the color flow variable $\Delta C_\lambda(t)$, analyze the influence of light source changes on color recognition, and introduce a light source sensitivity adjustment formula to calculate the adjusted color recognition data $A_\lambda(t)$;

where $A_\lambda(t)$ denotes the adjusted color recognition data for wavelength $\lambda$ at time $t$, $\lambda_0$ is the reference wavelength, and $a$ and $b$ are coefficients that adjust the influence of the light source;

combining the adjusted color recognition data $A_\lambda(t)$, apply a nonlinear weighting formula that fuses all time points and wavelengths to generate the color change model;

where $w_\lambda$ is the weight coefficient of wavelength $\lambda$ and $T$ is the total time length.

Color difference formula:

$\Delta C_\lambda(t) = \left| C_\lambda(t) - C_\lambda(t-1) \right|$

where:

$\Delta C_\lambda(t)$: amount of color change at wavelength $\lambda$ at time $t$;

$C_\lambda(t)$ and $C_\lambda(t-1)$: color spectrum values at times $t$ and $t-1$.

Derivation and example:

Assume that at a wavelength of 500 nm (in the green range) the color spectrum value at time $t$ is 0.45 and the value at time $t-1$ is 0.40; the color change is then:

$\Delta C_{500}(t) = \left| 0.45 - 0.40 \right| = 0.05$

The result means that at a wavelength of 500 nm, the color intensity changed by 0.05 units from time $t-1$ to time $t$.

Light source sensitivity adjustment formula, where:

$A_\lambda(t)$: adjusted color recognition data for wavelength $\lambda$ at time $t$;

$\Delta C_\lambda(t)$: computed color change;

$\lambda_0$: reference wavelength, set to 550 nm (the spectral center);

$a$, $b$: coefficients that adjust the influence of the light source, with assumed values set after investigation of the actual parameters.

Derivation and example:

Using the result of the previous step and a wavelength of 500 nm, the adjusted color recognition data is calculated; after adjusting for the influence of the light source, the color recognition data at 500 nm is 0.0393 units.

Nonlinear weighting (fusion) formula, where:

the color change model is obtained by fusing the adjusted color data over all time points and wavelengths;

$A_\lambda(t)$: adjusted color data;

$w_\lambda$: weight of each wavelength, assumed to be the same for all wavelengths;

$T$: total time length, assumed to cover 3 time points.

Derivation and example: the model evaluates to 0.004641, representing the cumulative contribution of the color change at the corresponding wavelength over the entire motion, i.e., the total amount of color change.
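A minimal Python sketch of the first step, the per-wavelength color flow variable. The 550 nm entries are illustrative values only, and the subsequent light source adjustment and nonlinear fusion steps are not sketched because their exact functional forms and coefficient values are only described, not shown, above.

```python
from typing import Dict

Spectrum = Dict[int, float]   # wavelength (nm) -> color spectrum value

def color_flow(curr: Spectrum, prev: Spectrum) -> Spectrum:
    """Delta C_lambda(t) = |C_lambda(t) - C_lambda(t-1)| for each shared wavelength."""
    return {lam: abs(curr[lam] - prev[lam]) for lam in curr.keys() & prev.keys()}

# Worked numbers from the text: at 500 nm the spectral value moves from 0.40 to 0.45.
prev_kf = {500: 0.40, 550: 0.62}      # 550 nm values are illustrative only
curr_kf = {500: 0.45, 550: 0.61}
delta = color_flow(curr_kf, prev_kf)
print(round(delta[500], 2))           # 0.05, the color flow variable at 500 nm
```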

Referring to FIG. 5, the key morphological features are acquired as follows:

obtain image data from the color change model, identify the shape, size, and position information of each object in the image, and use the area calculation formula

$A_i = \pi \left( \dfrac{d_i}{2} \right)^{2}$

to calculate the area of each object and obtain a shape descriptor;

where $A_i$ denotes the area of the i-th object and $d_i$ is the width (diameter) of the object;

apply machine learning techniques to analyze the shape descriptors, and use a goodness-of-fit scoring formula to evaluate the association between shape and size, generating a shape-size association score $R$;

where $A_i$ is the shape area, $\bar{A}$ and $\sigma$ are the mean and standard deviation of the areas, respectively, and $N$ is the number of objects;

based on the shape-size association scores, optimize the matching model using the weight adjustment formula

$K = \sum_{i=1}^{M} w_i\, F_i + \delta$

and integrate to obtain the optimized key morphological features $K$;

where $F_i$ is the shape-size association score, $w_i$ is the weight, $\delta$ is the adjustment factor, and $M$ is the total number of data points.

Formula description (area calculation):

$A_i$: area of the i-th object;

$d_i$: diameter of the i-th object.

Derivation and example:

Assume a circular object with a diameter of $d_i = 4$ unit lengths.

First calculate the radius: $d_i / 2 = 2$ unit lengths.

Then calculate the area: $A_i = \pi \times 2^2 = 4\pi$ square units.

This means the area of the circular object is $4\pi$ square units, approximately 12.57 square units.

Formula description (goodness-of-fit score):

$R$: goodness-of-fit score between shape and size;

$A_i$: shape area;

$\bar{A}$: mean of the areas;

$\sigma$: standard deviation of the areas;

$N$: number of objects.

Derivation and example:

Assume three objects with areas of 10, 12, and 14 square units. Then:

the average area is $\bar{A} = 12$ square units;

calculate the variance and standard deviation of the three areas;

the goodness-of-fit score evaluates to 2.3, indicating that the shapes and sizes of the objects fit well.

Formula description (weight adjustment):

$K$: optimized key morphological feature model;

$F_i$: shape-size association score;

$w_i$: weight;

$\delta$: adjustment factor;

$M$: total number of data points.

Derivation and example:

Assume the scores $F_i$ are 0.9, 0.85, and 0.95, the corresponding weights $w_i$ are 0.5, 0.3, and 0.2, and the adjustment factor $\delta$ is 0.05.

$K = (0.5 \times 0.9 + 0.3 \times 0.85 + 0.2 \times 0.95) + 0.05 = 0.945$

The result, 0.945, represents the key morphological feature score of the optimized model and quantifies how well the morphological features match the current industrial environment.
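A minimal Python sketch of the morphological steps. The area formula and the integration $K = \sum_i w_i F_i + \delta$ follow the worked numbers above (12.57 and 0.945); the goodness-of-fit score itself is only summarized, so the sketch computes just the mean and standard deviation of the areas, and the use of the population standard deviation is an assumption.

```python
import math
import statistics

def circle_area(width: float) -> float:
    """A_i = pi * (d_i / 2)^2, treating the object's width as a diameter."""
    return math.pi * (width / 2.0) ** 2

def key_morphology(scores, weights, delta: float) -> float:
    """Assumed integration: K = sum_i w_i * F_i + delta (matches the worked 0.945)."""
    return sum(w * f for w, f in zip(weights, scores)) + delta

# Worked numbers from the text.
print(round(circle_area(4.0), 2))                                   # 12.57 square units
areas = [10.0, 12.0, 14.0]
print(statistics.mean(areas), round(statistics.pstdev(areas), 2))   # 12.0 and 1.63 (population std assumed)
print(round(key_morphology([0.9, 0.85, 0.95], [0.5, 0.3, 0.2], 0.05), 3))  # 0.945
```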

The above is only a preferred embodiment of the present invention and does not limit the present invention in any other form. Any person skilled in the art may use the technical content disclosed above to make changes or modifications into equivalent embodiments applied to other fields; however, any simple modification, equivalent change, or adaptation made to the above embodiment in accordance with the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.

Claims (5)

1. An occlusion analysis and object color intelligent recognition system, characterized in that the system comprises:

a continuous frame analysis module, which extracts continuous frames from the video stream, performs pixel comparison, extracts the shape and texture information of the mechanical components in each frame, identifies wear or deformation features on the component surfaces by analyzing the component boundaries, and generates a frame feature set;

a motion sequence analysis module, which, based on the frame feature set, evaluates the smoothness and accuracy of component motion by simulating the dynamic interaction between mechanical components, computes the continuous motion paths of the components, and obtains an object motion trajectory model;

a color change analysis module, which, based on the object motion trajectory model, monitors color variation during motion, analyzes the influence of light source changes on color recognition, extracts color change data, and generates a color change model;

a deep association mining module, which, based on the color change model, analyzes the shape, size, and position of objects in the image, extracts the correlation between shape and size from the data, optimizes the model to match the current industrial environment, and obtains key morphological features.

2. The occlusion analysis and object color intelligent recognition system according to claim 1, characterized in that the frame feature set is acquired as follows:

perform a pixel difference comparison on consecutive video frames using the formula $D_t(x,y) = \left| I_t(x,y) - I_{t-1}(x,y) \right|$ to calculate the absolute difference of each pixel between two adjacent frames and generate the pixel difference matrix $D_t$, where $D_t(x,y)$ denotes the pixel difference at position $(x,y)$ at time $t$, and $I_t(x,y)$ and $I_{t-1}(x,y)$ denote the pixel values of the current frame and the previous frame, respectively;

based on the pixel difference matrix $D_t$, apply the edge detection formula $E_t(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$ to calculate the edge strength of each pixel and obtain the edge strength matrix $E_t$, where $E_t(x,y)$ denotes the edge strength at position $(x,y)$ at time $t$, and $G_x$ and $G_y$ are the gradients of the pixel difference matrix $D_t$ in the x and y directions;

using the edge strength matrix $E_t$ and the shape information of the mechanical components, apply the nonlinear conversion formula $F_t(x,y) = 1 / \left(1 + e^{-\alpha(E_t(x,y) - \beta)}\right)$ to obtain the frame feature set, where $F_t(x,y)$ indicates whether position $(x,y)$ on the mechanical component exhibits wear or deformation, $\alpha$ is the sensitivity adjustment coefficient, and $\beta$ is the threshold adjustment coefficient for the edge strength, used to enhance the wear or deformation features of the component.

3. The occlusion analysis and object color intelligent recognition system according to claim 2, characterized in that the object motion trajectory model is acquired as follows:

extract the position coordinates of the mechanical components in multiple frames from the frame feature set, and use the distance formula $\Delta P_i(t) = \sqrt{(x_i(t) - x_i(t-1))^2 + (y_i(t) - y_i(t-1))^2}$ to calculate the position change of each component between consecutive frames, generating a position change value $\Delta P_i(t)$ for each component, where $\Delta P_i(t)$ denotes the position change value of the i-th component at time $t$, and $(x_i(t), y_i(t))$ and $(x_i(t-1), y_i(t-1))$ are its coordinates at times $t$ and $t-1$;

using the position change values, apply the smoothness evaluation formula $S(t) = \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}[\Delta P_i(t) \le \theta]$ to calculate the smoothness score $S(t)$ of the component motion, which reflects the continuity and accuracy of the motion, where $S(t)$ denotes the smoothness score at time $t$, $\Delta P_i(t)$ is the position change value of each component, $\theta$ is the smoothness threshold, and $N$ is the number of components;

analyze the smoothness scores $S(t)$ together with the motion state of the components, and use the accumulation formula $M = \sum_{t=1}^{T} w_t S(t)$ to calculate and obtain the object motion trajectory model $M$, where $M$ denotes the motion trajectory model, $w_t$ is a weight that depends on the smoothness of the components in each frame, and $T$ is the total number of frames.

4. The occlusion analysis and object color intelligent recognition system according to claim 3, characterized in that the color change model is acquired as follows:

extract key frames from the object motion trajectory model, record the color spectrum data of the object in multiple key frames, and calculate the color difference between adjacent key frames using the formula $\Delta C_\lambda(t) = \left| C_\lambda(t) - C_\lambda(t-1) \right|$ to obtain the color flow variable at each wavelength, where $C_\lambda(t)$ and $C_\lambda(t-1)$ are the color spectrum values at times $t$ and $t-1$;

using the color flow variable $\Delta C_\lambda(t)$, analyze the influence of light source changes on color recognition and introduce a light source sensitivity adjustment formula to calculate the adjusted color recognition data $A_\lambda(t)$, where $A_\lambda(t)$ denotes the adjusted color recognition data for wavelength $\lambda$ at time $t$, $\lambda_0$ is the reference wavelength, and $a$ and $b$ are coefficients that adjust the influence of the light source;

combining the adjusted color recognition data $A_\lambda(t)$, apply a nonlinear weighting formula that fuses all time points and wavelengths to generate the color change model, where $w_\lambda$ is the weight coefficient of wavelength $\lambda$ and $T$ is the total time length.

5. The occlusion analysis and object color intelligent recognition system according to claim 4, characterized in that the key morphological features are acquired as follows:

obtain image data from the color change model, identify the shape, size, and position information of each object in the image, and use the area calculation formula $A_i = \pi (d_i / 2)^2$ to calculate the area of each object and obtain a shape descriptor, where $A_i$ denotes the area of the i-th object and $d_i$ is the width of the object;

apply machine learning techniques to analyze the shape descriptors and use a goodness-of-fit scoring formula to evaluate the association between shape and size, generating a shape-size association score, where $A_i$ is the shape area, $\bar{A}$ and $\sigma$ are the mean and standard deviation of the areas, respectively, and $N$ is the number of objects;

based on the shape-size association scores, optimize the matching model using the weight adjustment formula $K = \sum_{i=1}^{M} w_i F_i + \delta$ and integrate to obtain the optimized key morphological features $K$, where $F_i$ is the shape-size association score, $w_i$ is the weight, $\delta$ is the adjustment factor, and $M$ is the total number of data points.

CN202411072973.6A 2024-08-06 2024-08-06 An occlusion analysis and object color intelligent recognition system Active CN118587219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411072973.6A CN118587219B (en) 2024-08-06 2024-08-06 An occlusion analysis and object color intelligent recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411072973.6A CN118587219B (en) 2024-08-06 2024-08-06 An occlusion analysis and object color intelligent recognition system

Publications (2)

Publication Number Publication Date
CN118587219A true CN118587219A (en) 2024-09-03
CN118587219B CN118587219B (en) 2024-10-29

Family

ID=92535847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411072973.6A Active CN118587219B (en) 2024-08-06 2024-08-06 An occlusion analysis and object color intelligent recognition system

Country Status (1)

Country Link
CN (1) CN118587219B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060284976A1 (en) * 2005-06-17 2006-12-21 Fuji Xerox Co., Ltd. Methods and interfaces for visualizing activity across video frames in an action keyframe
CN101068342A (en) * 2007-06-05 2007-11-07 西安理工大学 Video moving target close-up tracking and monitoring method based on dual camera linkage structure
CN102685437A (en) * 2012-02-03 2012-09-19 深圳市创维群欣安防科技有限公司 Method and monitor for compensating video image
RU2012129183A (en) * 2012-07-11 2014-01-20 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." METHOD FOR CALCULATING MOVEMENT WITH CORRECTION OF OCCLUSIONS
CN117474959A (en) * 2023-12-19 2024-01-30 北京智汇云舟科技有限公司 Target object motion trail processing method and system based on video data
CN117906608A (en) * 2024-01-07 2024-04-19 蚌埠学院 A path positioning system and method for a mobile robot
CN118338117A (en) * 2024-06-12 2024-07-12 广州市森锐科技股份有限公司 Camera based on deep learning algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
余弦: "Research on detection and tracking technology of video moving objects based on trajectory", China Master's Theses Full-text Database, Information Technology Series, 1 January 2010 (2010-01-01) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119011839A (en) * 2024-10-24 2024-11-22 北京中天路通智控科技有限公司 Security monitoring method and system based on artificial intelligence
CN119229350A (en) * 2024-11-28 2024-12-31 深圳市城市交通规划设计研究中心股份有限公司 Traffic abnormal event processing emergency degree ordering method based on unmanned aerial vehicle monitoring

Also Published As

Publication number Publication date
CN118587219B (en) 2024-10-29

Similar Documents

Publication Publication Date Title
CN118587219B (en) 2024-10-29 An occlusion analysis and object color intelligent recognition system
US20230005238A1 (en) 2023-01-05 Pixel-level based micro-feature extraction
CN111563446B (en) 2021-09-03 Human-machine interaction safety early warning and control method based on digital twin
EP2801078B1 (en) 2018-09-05 Context aware moving object detection
Masoud et al. 2001 A novel method for tracking and counting pedestrians in real-time using a single camera
CN106682603B (en) 2020-01-21 Real-time driver fatigue early warning system based on multi-source information fusion
KR101653278B1 (en) 2016-09-01 Face tracking system using colar-based face detection method
WO2020253475A1 (en) 2020-12-24 Intelligent vehicle motion control method and apparatus, device and storage medium
CN103440667B (en) 2016-08-10 The automaton that under a kind of occlusion state, moving target is stably followed the trail of
KR20050048062A (en) 2005-05-24 Method and apparatus of human detection and privacy protection system employing the same
JP5598751B2 (en) 2014-10-01 Motion recognition device
CN110781806A (en) 2020-02-11 Pedestrian detection tracking method based on YOLO
CN114997279A (en) 2022-09-02 Construction worker dangerous area intrusion detection method based on improved Yolov5 model
Alksasbeh et al. 2021 Smart hand gestures recognition using K-NN based algorithm for video annotation purposes
CN113312973A (en) 2021-08-27 Method and system for extracting features of gesture recognition key points
CN117557600A (en) 2024-02-13 Vehicle-mounted image processing method and system
KR20120089948A (en) 2012-08-16 Real-time gesture recognition using mhi shape information
Dahmane et al. 2005 Real-time video surveillance with self-organizing maps
KR100543706B1 (en) 2006-01-20 Vision-based Person Detection Method and Apparatus
CN109670391B (en) 2022-09-23 Intelligent lighting device based on machine vision and dynamic identification data processing method
CN110688969A (en) 2020-01-14 Video frame human behavior identification method
Kushwaha et al. 2012 Rule based human activity recognition for surveillance system
Thotapalli et al. 2021 Feature extraction of moving objects using background subtraction technique for robotic applications
WO2020175085A1 (en) 2020-09-03 Image processing apparatus and image processing method
Perdomo et al. 2013 Automatic scene calibration for detecting and tracking people using a single camera

Legal Events

Date Code Title Description
2024-09-03 PB01 Publication
2024-09-20 SE01 Entry into force of request for substantive examination
2024-10-29 GR01 Patent grant