
CN109460046B - A method for unmanned aerial vehicle natural landmark recognition and autonomous landing - Google Patents

  • Published 2021-08-06

A method for unmanned aerial vehicle natural landmark recognition and autonomous landing

Info

Publication number
CN109460046B
CN109460046B (application CN201811213147.3A)
Authority
CN
China
Prior art keywords
image
landing
coordinates
area
point
Prior art date
2018-10-17
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811213147.3A
Other languages
Chinese (zh)
Other versions
CN109460046A (en)
Inventor
朱航
裴思宇
李宏泽
黄钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2018-10-17
Filing date
2018-10-17
Publication date
2021-08-06
2018-10-17 Application filed by Jilin University
2018-10-17 Priority to CN201811213147.3A
2019-03-12 Publication of CN109460046A
2021-08-06 Application granted
2021-08-06 Publication of CN109460046B
Status: Active
2038-10-17 Anticipated expiration

Concepts (machine-extracted)

  • method (title, claims, abstract, description)
  • extraction (claims, abstract, description)
  • processing (claims, description)
  • conversion (claims, description)
  • matrix (claims, description)
  • imaging (claims, description)
  • retained effect (claims)
  • measurement (description)
  • filtering (description)
  • transformation (description)
  • defect (description)
  • approach (description)
  • development (description)
  • environmental (description)
  • mapping (description)
  • material (description)
  • pesticide (description)
  • process (description)
  • research (description)
  • response (description)
  • spraying (description)


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/04 Control of altitude or depth
    • G05D1/06 Rate of change of altitude or depth
    • G05D1/0607 Rate of change of altitude or depth specially adapted for aircraft
    • G05D1/0653 Rate of change of altitude or depth specially adapted for aircraft during a phase of take-off or landing
    • G05D1/0676 Rate of change of altitude or depth specially adapted for aircraft during a phase of take-off or landing specially adapted for landing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract


A method for natural landmark recognition and autonomous landing of an unmanned aerial vehicle belongs to the technical field of machine vision navigation. The method determines a landing area on a satellite digital map according to given pre-landing coordinates and uses the unmanned aerial vehicle to take aerial images at those coordinates. The satellite digital map and the aerial image are each filtered, grayscaled, binarized, and subjected to edge feature extraction and the Hough transform to extract continuous geometric curves, and the two are then matched with a weighted Hausdorff distance matching algorithm. The coordinates of the centroid of the matched area in the aerial image relative to the UAV are computed by Green's theorem, the spatial coordinates of the centroid are computed from the projection relationship, and the UAV is guided to land autonomously at those spatial coordinates. The invention ensures that the UAV autonomously identifies the best landing point within a specified range and lands accurately, compensates for the large autonomous-landing error of GPS navigation, and improves the safety and reliability of autonomous landing.


Description

Unmanned aerial vehicle natural landmark identification and autonomous landing method

Technical Field

The invention belongs to the technical field of machine vision navigation, and particularly relates to a natural landmark identification and autonomous landing method for an unmanned aerial vehicle.

Background

In recent years, with the development of micro inertial navigation systems, flight control systems, micro-electromechanical systems and novel materials, research on micro unmanned aerial vehicles has progressed greatly. Rotary-wing micro UAVs offer good flexibility, compact structure, low cost and fast data acquisition, and their applications cover fields including pesticide spraying, geological surveying, search and rescue, cargo transportation and mapping. Because human operators are limited in how quickly they can acquire information and respond, such tasks should be completed by the UAV autonomously as far as possible: automatic take-off and landing, path planning, obstacle avoidance and terrain-following flight are realized through preset programs or the UAV's own planning, ensuring the accuracy and reliability of the operation.

For autonomous landing, most current UAVs rely on GPS navigation: a GPS sensor on the UAV records the geographic coordinates of the airframe at take-off, or a geographic coordinate is specified manually, and during landing the GPS positioning system guides the UAV to hover over the recorded coordinates and descend. GPS navigation suffers from strong interference by non-air media and low positioning accuracy, so in remote areas or areas with many obstructions the UAV's autonomous landing error is large and the landing task cannot be completed accurately.

Machine-vision-based autonomous landing is one approach to the inaccurate positioning of GPS, and methods based on artificial landmarks are currently the most widely applied to rotary-wing UAVs. As UAVs are applied in ever more fields, the requirements on their environmental adaptability also rise. Some tasks require the UAV to land where artificial landmarks are impractical, or even to autonomously find a suitable landing site within a specified area, which requires the ability to recognize natural landmarks. Therefore, to provide accurate navigation information to the UAV and complete such autonomous landing tasks, a natural landmark recognition and autonomous landing method is urgently needed.

Disclosure of Invention

The invention aims to provide an unmanned aerial vehicle natural landmark recognition and autonomous landing method based on machine vision and a satellite digital map, in order to solve the above problems in the prior art.

The natural landmark identification and autonomous landing method for the unmanned aerial vehicle comprises the following steps:

1.1 According to the given pre-landing coordinates (X0, Y0, Z0), determine a landing area P with a convex-polygon outline on a satellite digital map. First filter, grayscale and binarize the image of area P, then perform edge feature extraction, removing stray points and retaining the main region-based edge features; finally extract continuous geometric curves by Hough transform to obtain contour curve I of area P on the satellite digital map and reference image A. The binarization uses the maximum between-class variance method;

1.2 Fly the unmanned aerial vehicle above the given pre-landing coordinates and apply the same filtering, grayscaling and binarization to the aerial image, followed by edge feature extraction, removal of stray points, retention of the main region-based edge features, and extraction of continuous geometric curves by Hough transform, to obtain contour curve II of area P and measured image B. The binarization likewise uses the maximum between-class variance method;

1.3 Match the reference image A obtained in step 1.1 with the measured image B obtained in step 1.2 and confirm the landing area P in the UAV aerial image; the image matching uses a weighted Hausdorff distance matching algorithm comprising the following steps:

1.3.1 In the reference image A and the measured image B, use the 3-4 DT algorithm to perform the distance transform of the feature point sets in two-dimensional space, obtaining image distance transform matrices J_A and J_B;

1.3.2 Extract the branch points in the reference image A and the measured image B, and store them in matrices A and B respectively;

1.3.3 From J_A, J_B, A and B, compute the weighted Hausdorff distance:

H(A, B) = max(h_WHD(A, B), h_WHD(B, A))

h_WHD(A, B) = (1/N_a) · Σ_{a∈A} d(a, B)

h_WHD(B, A) = (1/N_b) · Σ_{b∈B} d(b, A)

wherein: A. b is two point sets; n is a radical ofaIs the total number of feature points in point set a; a is a feature point belonging to A; d (a, B) is the distance from the characteristic point a on the point set A to the point set B; h isWHD(A, B) represents the directed distance from point set A to point set B; h isWHD(B, A) represents the directed distance from point set B to point set A;

the point with the minimum Hausdorff distance is the final matching point, so that the preliminary positioning information is obtained;

1.3.4 Use a least-squares algorithm to put all matching point pairs into one-to-one correspondence to obtain more accurate position information;

1.4 Establish a two-dimensional plane rectangular coordinate system with the UAV camera as origin of coordinates, and compute by Green's theorem the coordinates (x_c, y_c) of the centroid of area P in the UAV aerial image relative to the UAV;

1.5 Compute the spatial coordinates (X_c, Y_c, Z_c) of the centroid of area P from the projection relationship, specifically:

1.5.1 calculating ground resolution GSD:

GSD = (H · P) / f

wherein: GSD represents ground resolution (m); f is the focal length (mm) of the lens; p is the pixel size (mm) of the imaging sensor; h is the corresponding flight height (m) of the unmanned aerial vehicle;

1.5.2 calculating the actual ground distance of the image diagonal, and obtaining the ground distance L between the image diagonal according to the width w and the height h of the image:

L = GSD · √(w² + h²)

wherein: GSD represents ground resolution (m); w is the image width; h is the image height;

1.5.3 according to the longitude and latitude of the central point of the image, the distance and the direction angle of the area P centroid relative to the central point, the geographical coordinate of the area P centroid is obtained:

E_c = R_j + (R_i - R_j) · (90° - Lat_a) / 90°

E_d = E_c · cos(Lat_a)

Lon_c = Lon_a + (d · sin θ0 / E_d) · (180°/π)

Lat_c = Lat_a + (d · cos θ0 / E_c) · (180°/π)

wherein: θ0 ∈ (0, 2π) is the direction angle; d is the ground distance from the image center point to the centroid of area P; Lon_a is the longitude of the image center point; Lat_a is the latitude of the image center point; the equatorial radius R_i is taken as 6378137 m; the polar radius R_j is taken as 6356725 m;

1.5.4 Convert the geographic coordinates to spatial coordinates to obtain the spatial coordinates (X_c, Y_c, Z_c) of the centroid of area P:

X_c = (N + H) · cos(Lat) · cos(Lon)

Y_c = (N + H) · cos(Lat) · sin(Lon)

Z_c = [N · (1 - e²) + H] · sin(Lat)

Wherein: N is the radius of curvature in the prime vertical; e is the first eccentricity of the reference ellipsoid; Lon is longitude; Lat is latitude; H is elevation;

1.6 The unmanned aerial vehicle flies above the spatial coordinates (X_c, Y_c, Z_c) and lands in the vertical direction.

The method can ensure that the unmanned aerial vehicle autonomously identifies the optimal landing point within the designated range, accurately lands, can make up for the defect of large autonomous landing error under GPS navigation, and improves the safety and reliability of autonomous landing.

Drawings

FIG. 1 is a flowchart of the unmanned aerial vehicle natural landmark recognition and autonomous landing method.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below.

Step one, according to the given pre-landing coordinates (X0, Y0, Z0), determine a suitable landing area P (the outline of the area is required to be a convex polygon) on a satellite digital map. First apply filtering, graying and binarization to the image of area P, then perform edge feature extraction, removing some stray points and retaining the main region-based edge features, and finally extract continuous geometric curves through Hough transform to obtain the contour curve of area P on the satellite digital map and thereby the reference image A. The binarization uses the maximum between-class variance method: suppose T is the chosen global threshold, with the pixels of the image divided into background and foreground by T as the boundary, and let ω0 and ω1 denote the proportions of pixels belonging to the background and the foreground in the whole image respectively; then:

ω0 = Σ_{i=0}^{T} p(i)

ω1 = Σ_{i=T+1}^{255} p(i)

wherein: p (i) represents the probability of the pixel with the pixel value i appearing in the image.

μ0 and μ1 denote the mean pixel values of the background and foreground pixels respectively; if μ is the mean value of all pixels, then:

μ0 = (1/ω0) · Σ_{i=0}^{T} i · p(i)

μ1 = (1/ω1) · Σ_{i=T+1}^{255} i · p(i)

μ = ω0 · μ0 + ω1 · μ1

The between-class variance σ²(T) corresponding to threshold T is defined as:

σ²(T) = ω0(T)·[μ0(T) - μ(T)]² + ω1(T)·[μ1(T) - μ(T)]² = ω0(T)·ω1(T)·[μ0(T) - μ1(T)]²

Traverse every gray value and find the value T corresponding to the maximum between-class variance; this T is the binarization threshold.
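This threshold search can be sketched in Python (an illustration, not part of the patent; `otsu_threshold` is a hypothetical name, and the vectorized line uses the algebraically equivalent identity σ²(T) = (μ·ω0(T) - Σ_{i≤T} i·p(i))² / (ω0(T)·ω1(T))):

```python
import numpy as np

def otsu_threshold(gray):
    """Maximum between-class variance (Otsu) threshold for an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # p(i): probability of gray value i
    omega = np.cumsum(p)                     # omega0(T): background proportion
    mu_cum = np.cumsum(np.arange(256) * p)   # cumulative mean up to T
    mu_total = mu_cum[-1]                    # mean of all pixels
    # sigma^2(T) = omega0*omega1*(mu0 - mu1)^2 for every candidate T at once
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma2 = (mu_total * omega - mu_cum) ** 2 / (omega * (1.0 - omega))
    sigma2 = np.nan_to_num(sigma2)           # 0/0 at the extremes -> 0
    return int(np.argmax(sigma2))
```

For a bimodal image the returned T falls between the two gray-level clusters, which is exactly the property the binarization step relies on.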

Step two, fly the UAV to the airspace above the given pre-landing coordinates and apply filtering, graying and binarization to the aerial image, then perform edge feature extraction, removing some stray points and retaining the main region-based edge features; finally extract continuous geometric curves through Hough transform to obtain the contour curve of area P and thereby the measured image B. The binarization again uses the maximum between-class variance method.
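As an illustration of the Hough-transform step used on both images, a minimal line-transform accumulator over a binary edge mask might look as follows (a sketch under the standard (ρ, θ) parameterization; the patent does not specify an implementation, and `hough_lines` is a hypothetical name):

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Minimal Hough line transform: each edge pixel votes for every line
    rho = x*cos(theta) + y*sin(theta) passing through it; peaks in the
    accumulator correspond to straight contour segments."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * diag), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for t, theta in enumerate(thetas):
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).round().astype(int) + diag
        np.add.at(acc, (t, rhos), 1)               # accumulate votes
    return acc, thetas, diag
```

A horizontal edge row produces a single dominant peak at θ = π/2 with ρ equal to its row index.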

Step three, match the reference image A with the measured image B and confirm the landing area P in the UAV aerial image. The image matching uses a weighted Hausdorff distance matching algorithm, with the following specific steps:

(1) In the reference image A and the measured image B, use the 3-4 DT algorithm to perform the distance transform of the feature point sets in two-dimensional space, obtaining image distance transform matrices J_A and J_B;
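The 3-4 DT named here is the classic two-pass chamfer distance transform with local steps 3 (axial) and 4 (diagonal), so the result approximates three times the Euclidean distance to the nearest feature pixel. A sketch, assuming a boolean feature mask (the function name is illustrative):

```python
import numpy as np

def chamfer_34_dt(feature_mask):
    """Two-pass 3-4 chamfer distance transform.
    feature_mask: 2-D bool array, True at feature (edge) pixels.
    Returns integer distances approximating 3x the Euclidean distance."""
    INF = 10**6
    h, w = feature_mask.shape
    d = np.where(feature_mask, 0, INF).astype(np.int64)
    # forward pass: propagate from top-left (W, N, NW, NE neighbours)
    for y in range(h):
        for x in range(w):
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 3)
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 3)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x - 1] + 4)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y - 1, x + 1] + 4)
    # backward pass: propagate from bottom-right (E, S, SE, SW neighbours)
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 3)
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 3)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x + 1] + 4)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y + 1, x - 1] + 4)
    return d
```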

(2) Extract the branch points in the reference image A and the measured image B, and store them in matrices A and B respectively;

(3) From J_A, J_B, A and B, compute the weighted Hausdorff distance:

H(A, B) = max(h_WHD(A, B), h_WHD(B, A))

h_WHD(A, B) = (1/N_a) · Σ_{a∈A} d(a, B)

h_WHD(B, A) = (1/N_b) · Σ_{b∈B} d(b, A)

wherein: A. b is two sets of points, NaIs the total number of feature points in the point set A, a is a feature point belonging to A, d (a, B) is the distance from the feature point a to the point set B on the point set A, hWHD(A,B)、hWHD(B, A) represent the directional distances from point set A to point set B and from point set B to point set A, respectively.

The point with the minimum Hausdorff distance is the final matching point to obtain preliminary positioning information.
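With the branch-point sets and the distance transform matrices J_A and J_B in hand, each d(a, B) is a single lookup in J_B, so the directed distances reduce to averaged lookups. A minimal sketch (the function names and the (row, col) tuple representation of points are assumptions, not the patent's notation):

```python
import numpy as np

def directed_whd(points_a, dist_to_b):
    """h_WHD(A, B): mean over feature points a in A of d(a, B), where
    dist_to_b is the distance-transform matrix J_B of the other image."""
    return sum(dist_to_b[y, x] for y, x in points_a) / len(points_a)

def weighted_hausdorff(points_a, points_b, J_A, J_B):
    """H(A, B) = max(h_WHD(A, B), h_WHD(B, A))."""
    return max(directed_whd(points_a, J_B), directed_whd(points_b, J_A))
```

In the matching loop this value is evaluated for each candidate alignment, and the candidate with the minimum H(A, B) is kept as the final match.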

(4) Use a least-squares algorithm to put all matching point pairs into one-to-one correspondence to acquire more accurate position information.

Step four, establish a two-dimensional plane rectangular coordinate system with the UAV camera as coordinate origin, and compute the coordinates (x_c, y_c) of the centroid of area P in the UAV aerial image relative to the UAV.

According to Green's theorem, integrating along the closed contour of region P gives:

x_c = (1/(2A)) · ∮ x² dy

y_c = -(1/(2A)) · ∮ y² dx

where A = (1/2) · ∮ (x dy - y dx) is the area enclosed by the contour of region P.

after discretization, the above formula translates to:

A = (1/2) · Σ_{i=1}^{n} (x_i · y_{i+1} - x_{i+1} · y_i)

x_c = (1/(6A)) · Σ_{i=1}^{n} (x_i + x_{i+1}) · (x_i · y_{i+1} - x_{i+1} · y_i)

y_c = (1/(6A)) · Σ_{i=1}^{n} (y_i + y_{i+1}) · (x_i · y_{i+1} - x_{i+1} · y_i)

with (x_{n+1}, y_{n+1}) = (x_1, y_1).
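The discretized Green's-theorem formulas are the standard shoelace-form centroid of a polygonal contour; a sketch over an ordered vertex list (`polygon_centroid` is an illustrative name):

```python
def polygon_centroid(vertices):
    """Centroid of a closed polygon via the discretized Green's-theorem
    (shoelace) formulas; vertices are ordered (x, y) tuples."""
    n = len(vertices)
    a2 = cx = cy = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]        # wrap around to close the contour
        cross = x0 * y1 - x1 * y0             # twice the signed triangle area
        a2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area = a2 / 2.0
    return cx / (6.0 * area), cy / (6.0 * area)
```

Run on a 4 x 2 axis-aligned rectangle with a corner at the origin, this returns the expected centroid (2.0, 1.0).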

Step five, compute the spatial coordinates (X_c, Y_c, Z_c) of the centroid of area P from the projection relationship:

(1) Calculating the ground resolution:

GSD = (H · P) / f

wherein: GSD represents ground resolution (m), f is lens focal length (mm), P is imaging sensor's pixel size (mm), and H is the corresponding flight height (m) of unmanned aerial vehicle.

(2) Calculating the actual ground distance of the image diagonal, and obtaining the ground distance between the image diagonal according to the width w and the height h of the image:

L = GSD · √(w² + h²)
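Steps (1) and (2) are direct formulas; a sketch (function names are illustrative):

```python
def ground_sample_distance(flight_height_m, pixel_size_mm, focal_length_mm):
    """GSD = H * P / f: metres of ground covered by one pixel."""
    return flight_height_m * pixel_size_mm / focal_length_mm

def diagonal_ground_distance(gsd, width_px, height_px):
    """L = GSD * sqrt(w^2 + h^2): ground distance spanned by the image diagonal."""
    return gsd * (width_px ** 2 + height_px ** 2) ** 0.5
```

For example, a 5 mm lens with 5 µm pixels at 100 m altitude gives a GSD of 0.1 m per pixel, and a 3000 x 4000 image then spans 500 m along its diagonal.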

(3) according to the longitude and latitude of the central point of the image, the distance and the direction angle of the area P centroid relative to the central point, the geographic coordinate of the area P centroid is obtained:

E_c = R_j + (R_i - R_j) · (90° - Lat_a) / 90°

E_d = E_c · cos(Lat_a)

Lon_c = Lon_a + (d · sin θ0 / E_d) · (180°/π)

Lat_c = Lat_a + (d · cos θ0 / E_c) · (180°/π)

wherein: θ0 ∈ (0, 2π) is the direction angle, d is the ground distance from the image center point to the centroid of area P, Lon_a and Lat_a are the longitude and latitude of the image center point, the equatorial radius R_i is taken as 6378137 m, and the polar radius R_j as 6356725 m.
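A sketch of step (3) using a common small-offset approximation based on a latitude-interpolated Earth radius, consistent with the stated constants R_i = 6378137 m and R_j = 6356725 m; treat the exact form (and the function name) as an assumption rather than the patent's verbatim formula:

```python
import math

def offset_latlon(lon_a, lat_a, dist_m, theta):
    """Geographic coordinates (degrees) of a point at ground distance dist_m
    and azimuth theta (radians, clockwise from north) from (lon_a, lat_a)."""
    R_i, R_j = 6378137.0, 6356725.0               # equatorial / polar radius (m)
    ec = R_j + (R_i - R_j) * (90.0 - lat_a) / 90.0   # local meridian radius
    ed = ec * math.cos(math.radians(lat_a))          # local parallel radius
    lon_c = lon_a + math.degrees(dist_m * math.sin(theta) / ed)
    lat_c = lat_a + math.degrees(dist_m * math.cos(theta) / ec)
    return lon_c, lat_c
```

Moving one degree of arc due north from the equator (azimuth 0) returns a latitude of 1.0, a quick sanity check of the scaling.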

(4) Conversion from geographic coordinates to the spatial coordinate system:

X_c = (N + H) · cos(Lat) · cos(Lon)

Y_c = (N + H) · cos(Lat) · sin(Lon)

Z_c = [N · (1 - e²) + H] · sin(Lat)

Wherein: N is the radius of curvature in the prime vertical, e is the first eccentricity of the reference ellipsoid, and Lon, Lat and H are the longitude, latitude and elevation respectively; this yields the spatial coordinates (X_c, Y_c, Z_c) of the centroid of area P.

Step six, the UAV flies above the spatial coordinates (X_c, Y_c, Z_c) and lands in the vertical direction.

Claims (1)

1. A method for unmanned aerial vehicle natural landmark recognition and autonomous landing, characterized by comprising the following steps:

1.1 According to the given pre-landing coordinates (X0, Y0, Z0), determine a landing area P whose outline is a convex polygon on a satellite digital map; first filter, grayscale and binarize the image of area P, then perform edge feature extraction, removing stray points and retaining the main region-based edge features, and finally extract continuous geometric curves by Hough transform to obtain contour curve I of area P on the satellite digital map and reference image A, the binarization using the maximum between-class variance method;

1.2 Fly the unmanned aerial vehicle above the given pre-landing coordinates; filter, grayscale and binarize the aerial image, then perform edge feature extraction, removing stray points and retaining the main region-based edge features, and finally extract continuous geometric curves by Hough transform to obtain contour curve II of area P and measured image B, the binarization likewise using the maximum between-class variance method;

1.3 Match the reference image A obtained in step 1.1 with the measured image B obtained in step 1.2 and confirm the landing area P in the UAV aerial image; the image matching uses a weighted Hausdorff distance matching algorithm comprising the following steps:

1.3.1 In the reference image A and the measured image B, use the 3-4 DT algorithm to perform the distance transform of the feature point sets in two-dimensional space, obtaining image distance transform matrices J_A and J_B;

1.3.2 Extract the branch points in the reference image A and the measured image B and store them in matrices A and B respectively;

1.3.3 From J_A, J_B, A and B, compute the weighted Hausdorff distance:

H(A, B) = max(h_WHD(A, B), h_WHD(B, A))

h_WHD(A, B) = (1/N_a) · Σ_{a∈A} d(a, B)

h_WHD(B, A) = (1/N_b) · Σ_{b∈B} d(b, A)

wherein A and B are two point sets; N_a is the total number of feature points in point set A (and N_b that of point set B); a is a feature point belonging to A; d(a, B) is the distance from feature point a of point set A to point set B; h_WHD(A, B) and h_WHD(B, A) are the directed distances from point set A to point set B and from point set B to point set A; the point with the minimum Hausdorff distance is the final matching point, from which preliminary positioning information is obtained;

1.3.4 Use a least-squares algorithm to put all matching point pairs into one-to-one correspondence to obtain more accurate position information;

1.4 Establish a two-dimensional plane rectangular coordinate system with the UAV camera as coordinate origin, and compute by Green's theorem the coordinates (x_c, y_c) of the centroid of area P in the UAV aerial image relative to the UAV;

1.5 Compute the spatial coordinates (X_c, Y_c, Z_c) of the centroid of area P from the projection relationship, specifically:

1.5.1 Compute the ground resolution:

GSD = (H · P) / f

wherein GSD is the ground resolution (m), f is the lens focal length (mm), P is the pixel size of the imaging sensor (mm), and H is the corresponding flight height of the UAV (m);

1.5.2 Compute the actual ground distance of the image diagonal from the image width w and height h:

L = GSD · √(w² + h²)

1.5.3 From the longitude and latitude of the image center point and the distance and direction angle of the centroid of area P relative to the center point, obtain the geographic coordinates of the centroid:

E_c = R_j + (R_i - R_j) · (90° - Lat_a) / 90°

E_d = E_c · cos(Lat_a)

Lon_c = Lon_a + (d · sin θ0 / E_d) · (180°/π)

Lat_c = Lat_a + (d · cos θ0 / E_c) · (180°/π)

wherein θ0 ∈ (0, 2π) is the direction angle, d is the ground distance from the image center to the centroid, Lon_a and Lat_a are the longitude and latitude of the image center point, the equatorial radius R_i is taken as 6378137 m, and the polar radius R_j as 6356725 m;

1.5.4 Convert the geographic coordinates to spatial coordinates to obtain the spatial coordinates (X_c, Y_c, Z_c) of the centroid of area P:

X_c = (N + H) · cos(Lat) · cos(Lon)

Y_c = (N + H) · cos(Lat) · sin(Lon)

Z_c = [N · (1 - e²) + H] · sin(Lat)

wherein N is the radius of curvature, e is the first eccentricity, Lon is longitude, Lat is latitude and H is elevation;

1.6 Fly the UAV above the spatial coordinates (X_c, Y_c, Z_c) and land in the vertical direction.
CN201811213147.3A 2018-10-17 2018-10-17 A method for unmanned aerial vehicle natural landmark recognition and autonomous landing Active CN109460046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811213147.3A CN109460046B (en) 2018-10-17 2018-10-17 A method for unmanned aerial vehicle natural landmark recognition and autonomous landing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811213147.3A CN109460046B (en) 2018-10-17 2018-10-17 A method for unmanned aerial vehicle natural landmark recognition and autonomous landing

Publications (2)

Publication Number Publication Date
CN109460046A CN109460046A (en) 2019-03-12
CN109460046B true CN109460046B (en) 2021-08-06

Family

ID=65607782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811213147.3A Active CN109460046B (en) 2018-10-17 2018-10-17 A method for unmanned aerial vehicle natural landmark recognition and autonomous landing

Country Status (1)

Country Link
CN (1) CN109460046B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110968112B (en) * 2019-12-12 2023-08-01 哈尔滨工程大学 Unmanned aerial vehicle autonomous landing method based on monocular vision
CN111324145B (en) * 2020-02-28 2022-08-16 厦门理工学院 Unmanned aerial vehicle autonomous landing method, device, equipment and storage medium
CN111626260A (en) * 2020-06-05 2020-09-04 贵州省草业研究所 Aerial photo ground object feature point extraction method based on unmanned aerial vehicle remote sensing technology
CN113920439B (en) * 2020-07-10 2024-09-06 千寻位置网络有限公司 Extraction method and device for arrow point
CN112419374B (en) * 2020-11-11 2022-12-27 北京航空航天大学 Unmanned aerial vehicle positioning method based on image registration
CN115526896A (en) * 2021-07-19 2022-12-27 中核利华消防工程有限公司 Fire prevention and control method and device, electronic equipment and readable storage medium
CN114998773B (en) * 2022-08-08 2023-02-17 四川腾盾科技有限公司 Characteristic mismatching elimination method and system suitable for aerial image of unmanned aerial vehicle system
CN115237158B (en) * 2022-08-17 2024-11-19 吉林大学 Multi-rotor unmanned aerial vehicle autonomous tracking and landing control system and control method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100095665A (en) * 2009-02-12 2010-09-01 한양대학교 산학협력단 Automatic landing method, landing apparatus of scanning probe microscope and scanning probe microscope using the same
CN103424126A (en) * 2013-08-12 2013-12-04 西安电子科技大学 System and method for verifying visual autonomous landing simulation of unmanned aerial vehicle
CN105000194A (en) * 2015-08-13 2015-10-28 史彩成 UAV (unmanned aerial vehicle) assisted landing visual guiding method and airborne system based on ground cooperative mark
CN105550994A (en) * 2016-01-26 2016-05-04 河海大学 Satellite image based unmanned aerial vehicle image rapid and approximate splicing method
CN107063261A (en) * 2017-03-29 2017-08-18 东北大学 The multicharacteristic information terrestrial reference detection method precisely landed for unmanned plane

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vision-based landmark recognition method for autonomous UAV landing; Li Yu et al.; Application Research of Computers; 2012-07-31; Vol. 29, No. 7; pp. 2780-2783 *
Design and research of a novel landmark for autonomous UAV landing; Chen Yong et al.; Journal of University of Electronic Science and Technology of China; 2016-11-30; Vol. 45, No. 6; pp. 934-938 *

Similar Documents

Publication Publication Date Title
CN109460046B (en) 2021-08-06 A method for unmanned aerial vehicle natural landmark recognition and autonomous landing
CN110569838B (en) 2022-05-24 An autonomous landing method of quadrotor UAV based on visual positioning
CN106054929B (en) 2018-10-16 A kind of unmanned plane based on light stream lands bootstrap technique automatically
US20200344464A1 (en) 2020-10-29 Systems and Methods for Improving Performance of a Robotic Vehicle by Managing On-board Camera Defects
US20190068829A1 (en) 2019-02-28 Systems and Methods for Improving Performance of a Robotic Vehicle by Managing On-board Camera Obstructions
CN112740225B (en) 2022-05-13 A kind of pavement element determination method and device
CN103714541B (en) 2015-07-08 A Method for Identifying and Locating Buildings Using Mountain Contour Area Constraints
Huang et al. 2017 Structure from motion technique for scene detection using autonomous drone navigation
Hebel et al. 2011 Simultaneous calibration of ALS systems and alignment of multiview LiDAR scans of urban areas
CN109949361A (en) 2019-06-28 An Attitude Estimation Method for Rotor UAV Based on Monocular Vision Positioning
CN109885086B (en) 2022-09-23 A UAV vertical landing method based on compound polygonal sign guidance
CN111666855B (en) 2023-06-30 Method, system and electronic equipment for extracting three-dimensional parameters of animals based on drone
CN103822635A (en) 2014-05-28 Visual information based real-time calculation method of spatial position of flying unmanned aircraft
Yoneda et al. 2015 Urban road localization by using multiple layer map matching and line segment matching
CN108305264A (en) 2018-07-20 A kind of unmanned plane precision landing method based on image procossing
CN101109640A (en) 2008-01-23 Vision-based autonomous landing navigation system for unmanned aircraft
CN105405126B (en) 2017-11-07 A kind of multiple dimensioned vacant lot parameter automatic calibration method based on single camera vision system
CN109341686B (en) 2023-10-27 Aircraft landing pose estimation method based on visual-inertial tight coupling
KR102289752B1 (en) 2021-08-13 A drone for performring route flight in gps blocked area and methed therefor
CN113377118A (en) 2021-09-10 Multi-stage accurate landing method for unmanned aerial vehicle hangar based on vision
CN107424156B (en) 2019-12-06 Accurate measurement method for autonomous formation of unmanned aerial vehicles based on visual attention imitating barn owl eyes
CN110058604A (en) 2019-07-26 A kind of accurate landing system of unmanned plane based on computer vision
CN114815871A (en) 2022-07-29 A vision-based autonomous landing method for vertical take-off and landing UAV mobile platforms
CN117636284A (en) 2024-03-01 Unmanned aerial vehicle autonomous landing method and device based on visual image guidance
CN113378701A (en) 2021-09-10 Ground multi-AGV state monitoring method based on unmanned aerial vehicle

Legal Events

Date Code Title Description
2019-03-12 PB01 Publication
2019-04-05 SE01 Entry into force of request for substantive examination
2021-08-06 GR01 Patent grant