CN102567989A - Space positioning method based on binocular stereo vision - Google Patents
Space positioning method based on binocular stereo vision
Publication number: CN102567989A (application CN201110390863A)
Authority: CN (China)
Prior art keywords: calibration, coordinates, point, image, camera
Prior art date: 2011-11-30
Publication date: 2012-07-11
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classification: Image Processing (AREA)
Abstract
The invention provides a new space positioning method based on binocular stereo vision, aimed at the complicated camera-calibration process and low positioning accuracy of conventional space positioning methods. The method comprises the following steps: calibrating the two cameras to obtain their homography matrices and distortion coefficients; obtaining the actual image coordinates of the target point captured by the two cameras; obtaining the theoretical image coordinates of the target point captured by the two cameras; and obtaining the spatial coordinates of the target point from the theoretical coordinates captured by the two cameras and the homography matrices. The method is convenient to operate, accounts comprehensively for the distortion of the camera lens and the charge-coupled device (CCD) array, and extracts the sub-pixel image coordinates of the calibration points automatically, which improves the efficiency and accuracy of calibration, avoids errors caused by misused constraint conditions or inaccurate known parameters, and lowers the technical expertise required of users. The method can be applied in engineering surveying and mapping, monitoring, medical treatment, and other fields.
Description
Technical field
The present invention relates to the fields of image processing and machine vision, and proposes a binocular-vision-based space positioning technique that supports automatic extraction of calibration points and correction of image distortion.
Background technology
With the development of science and technology and the rising level of automation, space positioning has become a requirement in many applications. As machine vision has matured, using images collected by cameras to locate targets has become both feasible and advantageous. Positioning based on monocular vision is now widely used, but a single camera yields only two-dimensional data: a great deal of depth information is lost, positioning is slow, and accuracy is unreliable. By contrast, binocular stereo vision directly imitates the way human eyes process a scene; it can recover the depth of objects and obtain accurate spatial positions of target points, and therefore has greater research value. Binocular stereo vision is being applied in more and more fields, including engineering surveying and mapping, the military, aerospace, medicine, and fire fighting. In engineering surveying, binocular stereo vision has been used to measure the girth of three-dimensional objects accurately and to achieve fast, precise detection of a robot's magnetic field distribution. In the military, robots sent to the front line can use their cameras to locate hostile targets and fire on them, effectively reducing casualties. In medicine, robotic manipulators perform surgery by accurately locating spatial target points. In fire fighting, binocular space positioning enables fast, accurate location of the source of a fire, so that automated extinguishing can put out the fire at an early stage, eliminate the hidden danger, and minimize the losses a fire would cause.
Binocular space positioning based on machine vision photographs a space object from multiple viewpoints, obtains a series of images of it at different angles, and uses the differences between corresponding pixels in those images to compute the three-dimensional geometry and spatial position of the target object. Space positioning normally requires camera calibration beforehand. Camera calibration is an important component of the three-dimensional reconstruction of space objects: it directly affects reconstruction accuracy, is one of the main sources of systematic error, is the key link in obtaining three-dimensional information from two-dimensional images, and is one of the most important tasks in stereo vision research. Among calibration methods used for binocular stereo positioning, the classical method applies to arbitrary camera models, but its high-accuracy calibration process is complicated and requires precisely known structural information. Active-vision calibration methods need some known information about the camera's motion; although the solution process is simple and robustness is relatively high, they cannot be used when the camera motion is unknown or uncontrollable. Camera self-calibration methods only need correspondences to be established between images; they are flexible and potentially widely applicable, but their robustness is not high.
Summary of the invention
The purpose of the invention is to solve the problems of difficult and tedious camera calibration and of large positioning error that exist in current space positioning based on binocular stereo vision.
This purpose is achieved as follows. The space positioning method based on binocular stereo vision comprises the following steps:
1) calibrate the two cameras to obtain the homography matrix and distortion coefficients of each camera;
2) obtain the actual image coordinates of the target point captured by the two cameras;
3) obtain the theoretical image coordinates of the target point captured by the two cameras through the following formula:
\begin{cases} x_d = x_u s + x_u (k_1 r_u^2 + k_2 r_u^4) + p_1 (3 x_u^2 + y_u^2) + 2 p_2 x_u y_u \\ y_d = y_u + y_u (k_1 r_u^2 + k_2 r_u^4) + p_2 (3 y_u^2 + x_u^2) + 2 p_1 x_u y_u \end{cases}
where k_1 and k_2 are the radial distortion coefficients, p_1 and p_2 are the decentering distortion coefficients, s is the image aspect-ratio distortion coefficient, and r_u is the radial distance of the point (x_u, y_u) (a worked sketch of this model follows this list);
4) obtain the spatial coordinates of the target point from the theoretical image coordinates captured by the two cameras and the homography matrices.
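To make step 3) concrete, here is a minimal Python/NumPy sketch of the distortion model above; the function name apply_distortion and the sample coefficient values are hypothetical and serve only to show how the coordinates (x_u, y_u) map to (x_d, y_d).

```python
import numpy as np

def apply_distortion(xu, yu, k1, k2, p1, p2, s):
    """Map coordinates (xu, yu) to distorted coordinates (xd, yd) using the
    radial + decentering + aspect-ratio model above. Inputs may be scalars
    or NumPy arrays."""
    ru2 = xu**2 + yu**2              # squared radial distance of the point
    radial = k1 * ru2 + k2 * ru2**2  # k1*ru^2 + k2*ru^4
    xd = xu * s + xu * radial + p1 * (3 * xu**2 + yu**2) + 2 * p2 * xu * yu
    yd = yu + yu * radial + p2 * (3 * yu**2 + xu**2) + 2 * p1 * xu * yu
    return xd, yd

# Example with made-up coefficient values:
print(apply_distortion(0.12, -0.05, k1=-0.21, k2=0.03, p1=1e-4, p2=-2e-4, s=1.001))
```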
Further, step 1) comprises the following steps:
11) capture images of a calibration board. The calibration board is a flat plate on which calibration points are uniformly arranged; it has a border, one corner carries an orientation mark, and at least one board is used;
12) generate a calibration-board description file containing the overall dimensions of the board, the position of every calibration point, the calibration-point diameter, the calibration-point spacing, the direction of the orientation mark, and the numbers of rows and columns of calibration points;
13) separate the interior region of the calibration board from the background image;
14) obtain the center coordinates of the calibration-point circles, including the actual image coordinates and the spatial coordinates;
15) from the actual image coordinates and spatial coordinates of the calibration points, solve for the homography matrix and distortion coefficients of the camera according to the following two formulas (a numeric sketch of formula (1) follows these definitions):
z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \qquad (1)
where (x_w, y_w, z_w) are the world coordinates of the spatial point P and (u, v) are its pixel coordinates; a_y = f/d_y is the scale factor along the vertical axis of the imaging plane and a_x = f/d_x is the scale factor along the horizontal axis; (u_0, v_0) is the center point of the imaging plane; R is the 3×3 camera rotation matrix and T the 3×1 camera translation matrix; z_c is an unknown scale factor; M_1 contains the camera's intrinsic parameters, inherent properties determined by the camera's internal geometric and optical characteristics and defined by a_x, a_y, u_0 and v_0; M_2 contains the extrinsic parameters, determined entirely by the orientation of the camera with respect to the world coordinate system; M is the homography (mapping) matrix;
\begin{cases} x_d = x_u s + x_u (k_1 r_u^2 + k_2 r_u^4) + p_1 (3 x_u^2 + y_u^2) + 2 p_2 x_u y_u \\ y_d = y_u + y_u (k_1 r_u^2 + k_2 r_u^4) + p_2 (3 y_u^2 + x_u^2) + 2 p_1 x_u y_u \end{cases} \qquad (2)
where k_1 and k_2 are the radial distortion coefficients, p_1 and p_2 are the decentering distortion coefficients, and s is the image aspect-ratio distortion coefficient.
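Below is a minimal numeric sketch of formula (1), projecting a world point through hypothetical intrinsic and extrinsic parameters; all numbers are made-up illustrations, not values from the patent.

```python
import numpy as np

# Hypothetical intrinsic matrix M1 (ax, ay, u0, v0) and extrinsics [R | T]
ax, ay, u0, v0 = 800.0, 800.0, 320.0, 240.0
M1 = np.array([[ax, 0.0, u0],
               [0.0, ay, v0],
               [0.0, 0.0, 1.0]])
R = np.eye(3)                           # camera aligned with the world frame
T = np.array([[0.0], [0.0], [1000.0]])  # camera 1000 units along the world z axis
M2 = np.hstack([R, T])                  # 3x4 extrinsic matrix
M = M1 @ M2                             # mapping matrix M of formula (1)

Pw = np.array([50.0, -30.0, 0.0, 1.0])  # homogeneous world point (xw, yw, zw, 1)
uvw = M @ Pw                            # equals zc * (u, v, 1)
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2] # divide out the unknown scale factor zc
print(u, v)
```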
Further, step 13) specifically comprises the following steps (see the sketch below): 131) smooth the image with a Gaussian filter; 132) use threshold segmentation to extract the brighter regions that may be the calibration board; 133) from the candidate regions obtained, take the region whose calibration points best match the board's design specification as the correct calibration-board region, completing the separation of the board interior from the background image.
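A minimal OpenCV-based sketch of the smoothing and thresholding in steps 131)–132) might look as follows; the file name, kernel size, and threshold choice are hypothetical, and the selection of the "best" region in step 133) is represented only by a placeholder criterion (largest bright region) rather than the patent's design-specification check.

```python
import cv2
import numpy as np

img = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)     # hypothetical input image
smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)     # step 131: Gaussian smoothing

# Step 132: threshold segmentation of the brighter, board-like regions
_, bright = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Step 133 (placeholder criterion): keep the largest bright connected component
n, labels, stats, _ = cv2.connectedComponentsWithStats(bright)
best = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])        # skip label 0 (background)
board_mask = (labels == best).astype(np.uint8) * 255
cv2.imwrite("board_mask.png", board_mask)
```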
Further, step 14) specifically comprises the following steps (see the sketch below): 141) use edge detection to extract all candidate closed contours; 142) apply threshold segmentation to the edge amplitude and keep all high-amplitude closed contours: because the black calibration points contrast strongly with the white background of the calibration board, their edge amplitude is high, which separates the calibration-point contours from the other closed contours; 143) extract the pixel coordinates of the centers of all calibration points on the board: fit an ellipse to each contour with a linear method that minimizes the algebraic error, extract the minimum circumscribed quadrilateral of the ellipse, and take the center of that quadrilateral as the center of the calibration point.
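The following OpenCV sketch illustrates steps 141)–143) under simplifying assumptions: Canny stands in for the generic edge detector, a contour-length filter stands in for the edge-amplitude threshold, fitEllipse stands in for the linear algebraic ellipse fit, and the rotated bounding rectangle of the ellipse plays the role of the minimum circumscribed quadrilateral; thresholds and size limits are hypothetical.

```python
import cv2

board_mask = cv2.imread("board_mask.png", cv2.IMREAD_GRAYSCALE)  # region from step 13)

# Step 141: edge detection and candidate closed contours
edges = cv2.Canny(board_mask, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

centers = []
for c in contours:
    if len(c) < 20:                 # step 142 stand-in: discard weak/short contours
        continue
    ellipse = cv2.fitEllipse(c)     # step 143: least-squares (algebraic) ellipse fit
    box = cv2.boxPoints(ellipse)    # circumscribed (rotated) quadrilateral of the ellipse
    cx = box[:, 0].mean()           # quadrilateral center = calibration-point center
    cy = box[:, 1].mean()
    centers.append((cx, cy))

print(len(centers), "candidate calibration-point centers")
```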
Further, step 15) specifically comprises the following steps (a sketch of the iteration follows this list):
151) extract the actual image coordinates P(x_u, y_u) and spatial coordinates P(x_w, y_w, z_w) of all calibration points with the calibration-point extraction method described above;
152) from the spatial coordinates P(x_w, y_w, z_w) and the mapping matrix M, compute an initial estimate of the theoretical image coordinates P(x_d, y_d) according to formula (1);
153) from the actual image coordinates P(x_u, y_u) and the theoretical image coordinates P'(x_d, y_d), compute an initial least-squares solution K for the distortion coefficients according to formula (2);
154) from the actual image coordinates P(x_u, y_u) and the distortion coefficients K, compute the theoretical image coordinates P'(x_d, y_d) according to formula (2);
155) from the image coordinates P'(x_d, y_d) and the spatial coordinates P(x_w, y_w, z_w), compute a least-squares solution M' for the mapping matrix according to formula (1);
156) repeat steps 152) to 155), with the stopping condition |PP'| < d, where |PP'| is the distance between the theoretical image coordinates P(x_d, y_d) obtained in step 152) and the theoretical image coordinates P'(x_d, y_d) obtained in step 154); the mapping matrix M and distortion coefficients K obtained when the loop stops are the final result.
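A highly simplified Python sketch of the alternating iteration in steps 152)–156) is given below. It assumes a planar calibration board (z_w = 0) so that M reduces to a 3×3 plane-to-image mapping, estimates M by a direct linear transform and the distortion coefficients by linear least squares, and alternates until the two theoretical predictions agree to within d. The helper names are hypothetical, and this is an illustrative reconstruction, not the patent's exact solver.

```python
import numpy as np

def estimate_homography(world_xy, img_xy):
    """DLT estimate of the 3x3 mapping M from planar board coordinates (z_w = 0)
    to image coordinates, i.e. formula (1) restricted to a plane."""
    A = []
    for (X, Y), (x, y) in zip(world_xy, img_xy):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def project(H, world_xy):
    """Apply formula (1): image coordinates of planar board points under H."""
    pts = np.hstack([world_xy, np.ones((len(world_xy), 1))])
    uvw = (H @ pts.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def distort(pts, K):
    """Apply formula (2) with coefficients K = (k1, k2, p1, p2, s)."""
    k1, k2, p1, p2, s = K
    x, y = pts[:, 0], pts[:, 1]
    r2 = x**2 + y**2
    rad = k1 * r2 + k2 * r2**2
    xd = x * s + x * rad + p1 * (3 * x**2 + y**2) + 2 * p2 * x * y
    yd = y + y * rad + p2 * (3 * y**2 + x**2) + 2 * p1 * x * y
    return np.column_stack([xd, yd])

def estimate_distortion(xu_yu, target_xd_yd):
    """Linear least-squares fit of K = (k1, k2, p1, p2, s) so that
    distort(xu_yu, K) best matches target_xd_yd (formula (2) is linear in K)."""
    rows, rhs = [], []
    for (x, y), (xd, yd) in zip(xu_yu, target_xd_yd):
        r2 = x**2 + y**2
        rows.append([x * r2, x * r2**2, 3 * x**2 + y**2, 2 * x * y, x])
        rhs.append(xd)
        rows.append([y * r2, y * r2**2, 2 * x * y, 3 * y**2 + x**2, 0.0])
        rhs.append(yd - y)
    K, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return K

def calibrate(world_xy, actual_xy, d=1e-6, max_iter=50):
    """Alternating iteration of steps 152)-156): refine the mapping matrix M and
    the distortion coefficients K until the predictions agree to within d."""
    world_xy = np.asarray(world_xy, dtype=float)
    actual_xy = np.asarray(actual_xy, dtype=float)
    H = estimate_homography(world_xy, actual_xy)             # initial M from the raw data
    for _ in range(max_iter):
        P = project(H, world_xy)                             # step 152: theoretical P from M
        K = estimate_distortion(actual_xy, P)                # step 153: least-squares K
        P_prime = distort(actual_xy, K)                      # step 154: theoretical P' from K
        H = estimate_homography(world_xy, P_prime)           # step 155: refined M from P'
        if np.max(np.linalg.norm(P - P_prime, axis=1)) < d:  # step 156: stop when |PP'| < d
            break
    return H, K
```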
In solving for the camera homography matrix and the distortion-model coefficients, the present invention avoids step-by-step solution procedures and avoids solving for some parameters from other, already-known parameters: the complete set of camera imaging parameters and distortion-model coefficients is solved simultaneously as a least-squares solution by iteration. Although a step-by-step procedure can recover the camera's intrinsic and extrinsic parameters separately, space positioning does not need the individual parameters, only the final homography matrix, and a step-by-step procedure lets computational errors propagate. Solving from partially known parameters likewise amplifies the errors of later computations whenever the known parameters are inaccurate. Using iteration to solve for the imaging parameters and distortion-model coefficients simultaneously establishes the transformation between spatial points and actual image coordinates directly on a correct imaging model and distortion model, reducing the error to a minimum. The space positioning method based on binocular stereo vision proposed here is intended to improve calibration accuracy, reduce calibration difficulty, and increase the speed and accuracy of space positioning, so that binocular stereo vision can be applied more widely and effectively in fields such as engineering surveying and mapping, monitoring, and medicine.
Description of drawings:
Fig. 1 is a flow chart of the space positioning method based on binocular stereo vision;
Fig. 2 is the flow of automatic calibration-point extraction;
Fig. 3 is a structural schematic of the calibration board;
Fig. 4 is the flow for solving the homography matrix and distortion coefficients.
Specific embodiments:
Referring to Fig. 1, the space positioning method based on binocular stereo vision of this embodiment comprises the following steps:
1) calibrate the two cameras to obtain the homography matrix and distortion coefficients of each camera. Referring to Fig. 2, this specifically comprises the following steps:
11) capture images of a calibration board. Referring to Fig. 3, the calibration board is a flat plate on which calibration points are uniformly arranged; it has a border, one corner carries an orientation mark, and at least one board is used;
12) generate a calibration-board description file containing the overall dimensions of the board, the position of every calibration point, the calibration-point diameter, the calibration-point spacing, the direction of the orientation mark, and the numbers of rows and columns of calibration points;
13) separate the interior region of the calibration board from the background image. This specifically comprises: 131) smooth the image with a Gaussian filter; 132) use threshold segmentation to extract the brighter regions that may be the calibration board; 133) from the candidate regions obtained, take the region whose calibration points best match the board's design specification as the correct calibration-board region, completing the separation of the board interior from the background image.
14) obtain the center coordinates of all calibration points, including the actual image coordinates P(x_u, y_u) and the spatial coordinates P(x_w, y_w, z_w). This specifically comprises: 141) use edge detection to extract all candidate closed contours; 142) apply threshold segmentation to the edge amplitude and keep all high-amplitude closed contours: because the black calibration points contrast strongly with the white background of the calibration board, their edge amplitude is high, which separates the calibration-point contours from the other closed contours; 143) extract the pixel coordinates of the centers of all calibration points on the board: fit an ellipse with a linear method that minimizes the algebraic error, extract the minimum circumscribed quadrilateral of the ellipse, and take the center of that quadrilateral as the center of the calibration point.
15) from the actual image coordinates and spatial coordinates of the calibration points, solve for the homography matrix and distortion coefficients of the camera according to the following two formulas:
z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \qquad (1)
where (x_w, y_w, z_w) are the world coordinates of the spatial point P and (u, v) are its pixel coordinates; a_y = f/d_y is the scale factor along the vertical axis of the imaging plane and a_x = f/d_x is the scale factor along the horizontal axis; (u_0, v_0) is the center point of the imaging plane; R is the 3×3 camera rotation matrix and T the 3×1 camera translation matrix; z_c is an unknown scale factor; M_1 contains the camera's intrinsic parameters, inherent properties determined by the camera's internal geometric and optical characteristics and defined by a_x, a_y, u_0 and v_0; M_2 contains the extrinsic parameters, determined entirely by the orientation of the camera with respect to the world coordinate system; M is the homography (mapping) matrix;
\begin{cases} x_d = x_u s + x_u (k_1 r_u^2 + k_2 r_u^4) + p_1 (3 x_u^2 + y_u^2) + 2 p_2 x_u y_u \\ y_d = y_u + y_u (k_1 r_u^2 + k_2 r_u^4) + p_2 (3 y_u^2 + x_u^2) + 2 p_1 x_u y_u \end{cases} \qquad (2)
where k_1 and k_2 are the radial distortion coefficients, p_1 and p_2 are the decentering distortion coefficients, and s is the image aspect-ratio distortion coefficient.
Referring to Fig. 4, step 15) specifically comprises the following steps:
151) from the spatial coordinates P(x_w, y_w, z_w) and the mapping matrix M, compute an initial estimate of the theoretical image coordinates P(x_d, y_d) according to formula (1);
152) from the actual image coordinates P(x_u, y_u) and the theoretical image coordinates P'(x_d, y_d), compute an initial least-squares solution K for the distortion coefficients according to formula (2);
153) from the actual image coordinates P(x_u, y_u) and the distortion coefficients K, compute the theoretical image coordinates P'(x_d, y_d) according to formula (2);
154) from the image coordinates P'(x_d, y_d) and the spatial coordinates P(x_w, y_w, z_w), compute a least-squares solution M' for the mapping matrix according to formula (1);
155) repeat steps 151) to 154), with the stopping condition |PP'| < d, where |PP'| is the distance between the theoretical image coordinates P(x_d, y_d) obtained in step 151) and the theoretical image coordinates P'(x_d, y_d) obtained in step 153); the mapping matrix M and distortion coefficients K obtained when the loop stops are the final result.
2) obtain the actual image coordinates of the target point captured by the two cameras;
3) obtain the theoretical image coordinates of the target point captured by the two cameras through the following formula:
\begin{cases} x_d = x_u s + x_u (k_1 r_u^2 + k_2 r_u^4) + p_1 (3 x_u^2 + y_u^2) + 2 p_2 x_u y_u \\ y_d = y_u + y_u (k_1 r_u^2 + k_2 r_u^4) + p_2 (3 y_u^2 + x_u^2) + 2 p_1 x_u y_u \end{cases}
where k_1 and k_2 are the radial distortion coefficients, p_1 and p_2 are the decentering distortion coefficients, and s is the image aspect-ratio distortion coefficient;
4) obtain the spatial coordinates of the target point from the theoretical image coordinates captured by the two cameras and the homography matrices.
The camera calibration process of this example is as follows. Place the two cameras, adjust their positions and angles so that the target area appears in the field of view of both cameras simultaneously, and fix the cameras. Place the calibration board in the target area and make sure the whole board is visible to both cameras at the same time. Each camera captures an image, and a group of feature-point image coordinates is extracted with the calibration-board feature-point extraction method described above. The spatial coordinates of the corner of the board that carries the black triangular mark are entered, and the spatial coordinates of all calibration points can then be computed from the dimensions of the board. Reposition the board and capture more than ten calibration-board images in total, extracting the image coordinates and spatial coordinates of each. Combining formula (1) and formula (2), the least-squares solution for the camera imaging parameters and distortion-model coefficients is obtained by iteration according to the flow shown in Fig. 4.
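For comparison, a widely used off-the-shelf analogue of this multi-view calibration procedure is OpenCV's calibrateCamera, sketched below for one camera. It uses a different (Zhang-style) solver and a different distortion parameterization than the iterative method of the patent, so it is only a stand-in; the board geometry, circle spacing, and file names are hypothetical.

```python
import cv2
import numpy as np
import glob

# Hypothetical 9x6 grid of circles spaced 20 mm apart, lying in the z = 0 plane
rows, cols, spacing = 6, 9, 20.0
object_points = np.zeros((rows * cols, 3), np.float32)
object_points[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * spacing

obj_list, img_list = [], []
for path in glob.glob("left_*.png"):                 # more than ten poses of the board
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(gray, (cols, rows))
    if found:
        obj_list.append(object_points)
        img_list.append(centers)

rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_list, img_list, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```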
Binocular space positioning means photographing the same target point from different angles with two cameras and recovering its spatial coordinates from its positions in the two camera views. After camera calibration, the projection matrix and distortion coefficients of the left and right cameras are known. During binocular positioning, the actual image coordinates P_1 and P_2 of the target point in the left and right cameras are first obtained by image recognition, and the theoretical image coordinates P_1' and P_2' are then computed from the distortion coefficients and the actual image coordinates. The theoretical image coordinates P_1' and P_2' are related linearly to the spatial point coordinates P by the transformation of formula (1); eliminating the unknown scale factor z_c from formula (1) for each camera yields AP = b, and solving the combined system gives the least-squares solution P = A⁻b for the spatial coordinates, where A⁻ is the generalized inverse of A.
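A minimal NumPy sketch of this triangulation step is shown below: each camera's 3×4 projection matrix and the (already distortion-corrected) image coordinates give two linear equations per camera after eliminating z_c, and the stacked system AP = b is solved by least squares. The projection matrices and image points are made-up illustrations.

```python
import numpy as np

def triangulate(M_left, M_right, p_left, p_right):
    """Recover the spatial point P from two 3x4 projection matrices and the
    theoretical (distortion-corrected) pixel coordinates in the left and right
    cameras. Eliminating zc from formula (1) gives two linear equations per
    camera; the stacked system A P = b is solved by least squares."""
    A, b = [], []
    for M, (u, v) in ((M_left, p_left), (M_right, p_right)):
        A.append(u * M[2, :3] - M[0, :3])
        b.append(M[0, 3] - u * M[2, 3])
        A.append(v * M[2, :3] - M[1, :3])
        b.append(M[1, 3] - v * M[2, 3])
    P, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return P  # equivalent to P = pinv(A) @ b

# Made-up projection matrices for a rectified stereo pair with a 100-unit baseline
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
M_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
M_right = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

X_true = np.array([50.0, -20.0, 1000.0, 1.0])           # a made-up world point
uvw_l, uvw_r = M_left @ X_true, M_right @ X_true
p_l, p_r = uvw_l[:2] / uvw_l[2], uvw_r[:2] / uvw_r[2]   # its ideal image coordinates
print(triangulate(M_left, M_right, p_l, p_r))           # recovers approx. (50, -20, 1000)
```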
Claims (5)
1. A space positioning method based on binocular stereo vision, characterized in that it comprises the following steps:
1) calibrate the two cameras to obtain the homography matrix and distortion coefficients of each camera;
2) obtain the actual image coordinates of the target point captured by the two cameras;
3) obtain the theoretical image coordinates of the target point captured by the two cameras through the following formula:
\begin{cases} x_d = x_u s + x_u (k_1 r_u^2 + k_2 r_u^4) + p_1 (3 x_u^2 + y_u^2) + 2 p_2 x_u y_u \\ y_d = y_u + y_u (k_1 r_u^2 + k_2 r_u^4) + p_2 (3 y_u^2 + x_u^2) + 2 p_1 x_u y_u \end{cases}
where k_1 and k_2 are the radial distortion coefficients, p_1 and p_2 are the decentering distortion coefficients, and s is the image aspect-ratio distortion coefficient;
4) obtain the spatial coordinates of the target point from the theoretical image coordinates captured by the two cameras and the homography matrices.
2. The space positioning method based on binocular stereo vision according to claim 1, characterized in that step 1) comprises the following steps:
11) capture images of a calibration board, the calibration board being a flat plate on which calibration points are uniformly arranged, the board having a border, one corner of the board carrying an orientation mark, and at least one board being used;
12) generate a calibration-board description file containing the overall dimensions of the board, the position of every calibration point, the calibration-point diameter, the calibration-point spacing, the direction of the orientation mark, and the numbers of rows and columns of calibration points;
13) separate the interior region of the calibration board from the background image;
14) obtain the center coordinates of the calibration-point circles, including the actual image coordinates and the spatial coordinates;
15) from the actual image coordinates and spatial coordinates of the calibration points, solve for the homography matrix and distortion coefficients of the camera according to the following two formulas:
z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \qquad (1)
where (x_w, y_w, z_w) are the world coordinates of the spatial point P and (u, v) are its pixel coordinates; a_y = f/d_y is the scale factor along the vertical axis of the imaging plane and a_x = f/d_x is the scale factor along the horizontal axis; (u_0, v_0) is the center point of the imaging plane; R is the 3×3 camera rotation matrix and T the 3×1 camera translation matrix; z_c is an unknown scale factor; M_1 contains the camera's intrinsic parameters, inherent properties determined by the camera's internal geometric and optical characteristics and defined by a_x, a_y, u_0 and v_0; M_2 contains the extrinsic parameters, determined entirely by the orientation of the camera with respect to the world coordinate system; M is the homography matrix;
\begin{cases} x_d = x_u s + x_u (k_1 r_u^2 + k_2 r_u^4) + p_1 (3 x_u^2 + y_u^2) + 2 p_2 x_u y_u \\ y_d = y_u + y_u (k_1 r_u^2 + k_2 r_u^4) + p_2 (3 y_u^2 + x_u^2) + 2 p_1 x_u y_u \end{cases} \qquad (2)
where k_1 and k_2 are the radial distortion coefficients, p_1 and p_2 are the decentering distortion coefficients, and s is the image aspect-ratio distortion coefficient.
3. The space positioning method based on binocular stereo vision according to claim 2, characterized in that step 13) specifically comprises the following steps: 131) smooth the image with a Gaussian filter; 132) use threshold segmentation to extract the brighter regions that may be the calibration board; 133) from the candidate regions obtained, take the region whose calibration points best match the board's design specification as the correct calibration-board region, completing the separation of the board interior from the background image.
4. The space positioning method based on binocular stereo vision according to claim 3, characterized in that step 14) specifically comprises the following steps: 141) use edge detection to extract all candidate closed contours; 142) apply threshold segmentation to the edge amplitude and keep all high-amplitude closed contours, the black calibration points contrasting strongly with the white background of the calibration board so that their edge amplitude is high, which separates the calibration-point contours from the other closed contours; 143) extract the pixel coordinates of the centers of all calibration points on the board: fit an ellipse with a linear method that minimizes the algebraic error, extract the minimum circumscribed quadrilateral of the ellipse, and take the center of that quadrilateral as the center of the calibration point.
5. The space positioning method based on binocular stereo vision according to any one of claims 2 to 4, characterized in that step 15) specifically comprises the following steps: 151) extract the actual image coordinates P(x_u, y_u) and spatial coordinates P(x_w, y_w, z_w) of all calibration points with the automatic calibration-point extraction method described above; 152) from the spatial coordinates P(x_w, y_w, z_w) and the mapping matrix M, compute an initial estimate of the theoretical image coordinates P(x_d, y_d) according to formula (1); 153) from the actual image coordinates P(x_u, y_u) and the theoretical image coordinates P'(x_d, y_d), compute an initial least-squares solution K for the distortion coefficients; 154) from the actual image coordinates P(x_u, y_u) and the distortion coefficients K, compute the theoretical image coordinates P'(x_d, y_d); 155) from the image coordinates P'(x_d, y_d) and the spatial coordinates P(x_w, y_w, z_w), compute a least-squares solution M' for the mapping matrix according to formula (1); 156) repeat steps 152) to 155), with the stopping condition |PP'| < d, where |PP'| is the distance between the theoretical image coordinates P(x_d, y_d) obtained in step 152) and the theoretical image coordinates P'(x_d, y_d) obtained in step 154); the mapping matrix M and distortion coefficients K obtained when the loop stops are the final result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103908630A CN102567989A (en) | 2011-11-30 | 2011-11-30 | Space positioning method based on binocular stereo vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102567989A true CN102567989A (en) | 2012-07-11 |
Family ID: 46413336
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011103908630A (pending, application filing) | Space positioning method based on binocular stereo vision | 2011-11-30 | 2011-11-30 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102567989A (en) |
Patent Citations (1)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101673399A (en) * | 2009-09-29 | 2010-03-17 | 浙江工业大学 | Calibration method of coded structured light three-dimensional vision system |
Non-Patent Citations (5)
* Cited by examiner, † Cited by third party
Title |
---|
邱志强等 (QIU ZHIQIANG ET AL.): "用射影不变性纠正鱼眼镜头畸变" (Correcting fisheye lens distortion using projective invariance), 《应用光学》 (Journal of Applied Optics), vol. 24, no. 5, 10 September 2003, pages 36-38, relevant to claims 1-5 * |
HAI DU ET AL.: "The study for particle image velocimetry system based on binocular vision", 《MEASUREMENT》, vol. 42, no. 4, 31 May 2009 (2009-05-31), pages 619-627 * |
刘晶晶 (LIU JINGJING): "基于双目立体视觉的三维定位技术研究" (Research on three-dimensional positioning technology based on binocular stereo vision), 《中国优秀硕士学位论文全文数据库》 (China Master's Theses Full-text Database), 13 May 2009 (2009-05-13), pages 15-44 * |
罗珍茜等 (LUO ZHENXI ET AL.): "基于HALCON的摄像机标定" (Camera calibration based on HALCON), 《电视技术》 (Video Engineering), vol. 34, no. 4, 17 April 2010 (2010-04-17), pages 100-102 * |
邱志强等 (QIU ZHIQIANG ET AL.): "用射影不变性纠正鱼眼镜头畸变" (Correcting fisheye lens distortion using projective invariance), 《应用光学》 (Journal of Applied Optics), vol. 24, no. 5, 10 September 2003 (2003-09-10), pages 36-38 * |
Cited By (66)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103115613A (en) * | 2013-02-04 | 2013-05-22 | 安徽大学 | Three-dimensional space positioning method |
CN103115613B (en) * | 2013-02-04 | 2015-04-08 | 安徽大学 | Three-dimensional space positioning method |
CN103217100A (en) * | 2013-03-29 | 2013-07-24 | 南京工业大学 | Online binocular vision measuring device of large bus compartment |
CN104301661B (en) * | 2013-07-19 | 2019-08-27 | 南京中兴软件有限责任公司 | A kind of smart home monitoring method, client and related device |
CN104301661A (en) * | 2013-07-19 | 2015-01-21 | 中兴通讯股份有限公司 | A smart home monitoring method, client and corresponding device |
CN104700385A (en) * | 2013-12-06 | 2015-06-10 | 广西大学 | Binocular vision positioning device based on FPGA |
CN103780844A (en) * | 2013-12-30 | 2014-05-07 | 华中科技大学 | Time-sharing two-path image acquiring device and calibration method thereof |
CN103747186A (en) * | 2013-12-30 | 2014-04-23 | 华中科技大学 | Time-division three-path image acquisition device and calibration method for same |
CN106133648B (en) * | 2014-03-26 | 2019-05-31 | 微软技术许可有限责任公司 | Eye gaze tracking based on adaptive homography |
CN106133648A (en) * | 2014-03-26 | 2016-11-16 | 微软技术许可有限责任公司 | Eye gaze based on self adaptation homography is followed the tracks of |
CN105488779A (en) * | 2014-09-18 | 2016-04-13 | 宝山钢铁股份有限公司 | Camera distortion correction calibration board and calibration method |
CN104551411A (en) * | 2014-11-18 | 2015-04-29 | 南京大学 | Calibration method of laser galvanometer processing system under guidance of binocular stereoscopic vision |
CN104574415A (en) * | 2015-01-26 | 2015-04-29 | 南京邮电大学 | Target space positioning method based on single camera |
CN104574415B (en) * | 2015-01-26 | 2017-05-10 | 南京邮电大学 | Target space positioning method based on single camera |
CN104688351A (en) * | 2015-02-28 | 2015-06-10 | 华南理工大学 | Non-blocking positioning method for surgical instrument based on two binocular vision systems |
CN104777327B (en) * | 2015-03-17 | 2018-03-20 | 河海大学 | Time-space image velocity-measuring system and method based on laser assisted demarcation |
CN104777327A (en) * | 2015-03-17 | 2015-07-15 | 河海大学 | Time-space image speed measuring system and method based on auxiliary laser calibration |
CN104776832A (en) * | 2015-04-16 | 2015-07-15 | 浪潮软件集团有限公司 | Method, set top box and system for positioning objects in space |
CN104942401B (en) * | 2015-06-15 | 2017-03-15 | 中国地质大学(武汉) | The cold spotting device of pipe based on binocular stereo vision and the cold centring means of pipe |
CN104942401A (en) * | 2015-06-15 | 2015-09-30 | 中国地质大学(武汉) | Tube blank cold centering method based on binocular stereoscopic vision and tube blank cold centering device |
CN104933718A (en) * | 2015-06-23 | 2015-09-23 | 广东省自动化研究所 | Physical coordinate positioning method based on binocular vision |
CN104933718B (en) * | 2015-06-23 | 2019-02-15 | 广东省智能制造研究所 | A physical coordinate positioning method based on binocular vision |
CN104950893A (en) * | 2015-06-26 | 2015-09-30 | 浙江大学 | Homography matrix based visual servo control method for shortest path |
WO2017008516A1 (en) * | 2015-07-15 | 2017-01-19 | 华为技术有限公司 | Two-camera relative position calculation system, device and apparatus |
US10559090B2 (en) | 2015-07-15 | 2020-02-11 | Huawei Technologies Co., Ltd. | Method and apparatus for calculating dual-camera relative position, and device |
CN105080023A (en) * | 2015-09-02 | 2015-11-25 | 南京航空航天大学 | Automatic tracking and positioning jet flow extinguishing method |
CN105678787A (en) * | 2016-02-03 | 2016-06-15 | 西南交通大学 | Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera |
CN105955311A (en) * | 2016-05-11 | 2016-09-21 | 阔地教育科技有限公司 | Tracking control method, tracking control device and tracking control system |
CN106204656A (en) * | 2016-07-21 | 2016-12-07 | 中国科学院遥感与数字地球研究所 | Target based on video and three-dimensional spatial information location and tracking system and method |
CN106041937A (en) * | 2016-08-16 | 2016-10-26 | 河南埃尔森智能科技有限公司 | Control method of manipulator grabbing control system based on binocular stereoscopic vision |
CN108251878A (en) * | 2016-12-29 | 2018-07-06 | 中国空气动力研究与发展中心超高速空气动力研究所 | A kind of aluminium alloy model surface mark point production method of light-seeking before binocular |
CN106780633B (en) * | 2017-02-20 | 2019-09-06 | 北京创想智控科技有限公司 | A kind of method for correcting image, device and binocular vision system |
CN106780633A (en) * | 2017-02-20 | 2017-05-31 | 北京创想智控科技有限公司 | A kind of method for correcting image, device and binocular vision system |
CN107263468A (en) * | 2017-05-23 | 2017-10-20 | 陕西科技大学 | A kind of SCARA robotic asssembly methods of utilization digital image processing techniques |
CN107263468B (en) * | 2017-05-23 | 2020-08-11 | 陕西科技大学 | A SCARA Robot Assembly Method Using Digital Image Processing Technology |
CN106949836B (en) * | 2017-05-25 | 2024-02-23 | 中国科学技术大学 | Device and method for calibrating same-side target position of stereoscopic camera |
CN106949836A (en) * | 2017-05-25 | 2017-07-14 | 中国科学技术大学 | A kind of stereoscopic vision camera homonymy target location caliberating device and method |
CN107728617B (en) * | 2017-09-27 | 2021-07-06 | 速感科技(北京)有限公司 | Multi-view online calibration method, mobile robot and system |
CN107728617A (en) * | 2017-09-27 | 2018-02-23 | 速感科技(北京)有限公司 | More mesh online calibration method, mobile robot and systems |
CN107677223A (en) * | 2017-10-25 | 2018-02-09 | 烟台大学 | A wheel photographing measuring device and measuring method of a non-contact four-wheel aligner |
CN108648237A (en) * | 2018-03-16 | 2018-10-12 | 中国科学院信息工程研究所 | A kind of space-location method of view-based access control model |
CN108648237B (en) * | 2018-03-16 | 2022-05-03 | 中国科学院信息工程研究所 | A Vision-Based Spatial Localization Method |
CN109064512A (en) * | 2018-07-30 | 2018-12-21 | 安徽慧视金瞳科技有限公司 | A kind of more calibration point coordinate value detection methods of interactive mode Teaching System |
CN110807807B (en) * | 2018-08-01 | 2022-08-05 | 深圳市优必选科技有限公司 | Monocular vision target positioning pattern, method, device and equipment |
CN110807807A (en) * | 2018-08-01 | 2020-02-18 | 深圳市优必选科技有限公司 | A pattern, method, device and equipment for target positioning of monocular vision |
CN109062229A (en) * | 2018-08-03 | 2018-12-21 | 北京理工大学 | The navigator of underwater robot system based on binocular vision follows formation method |
CN109445455B (en) * | 2018-09-21 | 2022-09-30 | 深圳供电局有限公司 | Unmanned aerial vehicle autonomous landing method and control system thereof |
CN109445455A (en) * | 2018-09-21 | 2019-03-08 | 深圳供电局有限公司 | Unmanned aerial vehicle autonomous landing method and control system thereof |
CN109916304B (en) * | 2019-04-01 | 2021-02-02 | 易思维(杭州)科技有限公司 | Mirror surface/mirror surface-like object three-dimensional measurement system calibration method |
CN109916304A (en) * | 2019-04-01 | 2019-06-21 | 易思维(杭州)科技有限公司 | Mirror surface/class mirror surface three-dimensional measurement of objects system calibrating method |
CN110514114A (en) * | 2019-07-30 | 2019-11-29 | 江苏海事职业技术学院 | A method for calibrating the spatial position of tiny targets based on binocular vision |
CN110619293A (en) * | 2019-09-06 | 2019-12-27 | 沈阳天眼智云信息科技有限公司 | Flame detection method based on binocular vision |
CN110738600A (en) * | 2019-10-22 | 2020-01-31 | 安徽农业大学 | A camera super-resolution image acquisition method based on binocular vision |
US11433541B2 (en) | 2019-12-18 | 2022-09-06 | Industrial Technology Research Institute | Automated calibration system and method for a workpiece coordinate frame of a robot |
CN111360821A (en) * | 2020-02-21 | 2020-07-03 | 海南大学 | Picking control method, device and equipment and computer scale storage medium |
CN113409236A (en) * | 2020-06-29 | 2021-09-17 | 华中科技大学 | Steel arch frame hinge hole detection method based on binocular vision and application thereof |
CN111650968A (en) * | 2020-07-28 | 2020-09-11 | 南京天创电子技术有限公司 | Method for measuring positioning error of holder |
CN112365531A (en) * | 2020-10-13 | 2021-02-12 | 西安理工大学 | Reliability evaluation method for ellipse detection result of automatic scrolling system |
WO2022222379A1 (en) * | 2021-04-23 | 2022-10-27 | 深圳市商汤科技有限公司 | Position determination method and apparatus, electronic device and storage medium |
CN113108716A (en) * | 2021-04-30 | 2021-07-13 | 上海汇像信息技术有限公司 | Calibration template for calibrating rotating shaft of rotary table |
CN114581513A (en) * | 2022-03-07 | 2022-06-03 | 清华大学 | A spatial coordinate positioning method, device and electronic device |
CN114581513B (en) * | 2022-03-07 | 2024-04-19 | 清华大学 | A spatial coordinate positioning method, device and electronic equipment |
CN117532604A (en) * | 2023-11-08 | 2024-02-09 | 哈尔滨工业大学 | Object pose and high-order motion information observation method based on stereoscopic vision |
CN117532604B (en) * | 2023-11-08 | 2024-05-10 | 哈尔滨工业大学 | Object pose and high-order motion information observation method based on stereoscopic vision |
CN118365713A (en) * | 2024-04-26 | 2024-07-19 | 浙江安吉吾知科技有限公司 | Dynamic Calibration Method for Randomly Moving Multi-Cameras |
CN118365713B (en) * | 2024-04-26 | 2024-10-29 | 浙江安吉吾知科技有限公司 | Dynamic calibration method for random moving multi-camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2012-07-11 | C06 | Publication | |
2012-07-11 | PB01 | Publication | |
2012-09-12 | C10 | Entry into substantive examination | |
2012-09-12 | SE01 | Entry into force of request for substantive examination | |
2014-09-17 | C12 | Rejection of a patent application after its publication | |
2014-09-17 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20120711 |