CN104463949B - Fast three-dimensional reconstruction method and system based on light-field digital refocusing - Google Patents
- Tue Feb 06 2018
Info
- Publication number: CN104463949B
- Application number: CN201410581347.XA
- Authority: CN (China)
- Prior art date: 2014-10-24
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The present invention provides a fast three-dimensional reconstruction method based on light-field digital refocusing, comprising the following steps: first, a light-field data acquisition device captures four-dimensional spatial light-field data; second, a digital refocusing module performs digital refocusing on those data to obtain a sequence of focal-plane images; finally, a three-dimensional reconstruction module performs 3D reconstruction on the focal-plane image sequence. The invention also provides a fast 3D reconstruction system based on light-field digital refocusing, comprising the light-field data acquisition device, the digital refocusing module, and the 3D reconstruction module. Only a single shot is required, with no movement of camera or target, and the result can be viewed from any angle. Shooting difficulty and reconstruction-algorithm complexity are reduced, image-acquisition time is shortened, moving targets can be reconstructed, and the applicable depth range of the DFF algorithm is extended, making the method suitable for scenes with a large depth of field.
Description
Technical Field
The invention belongs to the technical fields of computational imaging, image processing, and computer vision, and in particular relates to a fast three-dimensional reconstruction method and system based on light-field digital refocusing.
Background Art
Three-dimensional reconstruction means building, for a 3D object, a mathematical model suitable for computer representation and processing. It is a key technology for creating virtual-reality representations of the objective world in a computer and is widely used in computer animation, virtual reality, industrial inspection, and other fields. Vision-based 3D reconstruction has produced a variety of theoretical approaches: active versus passive vision; monocular, binocular, trinocular, or multi-view vision; and single-view versus multi-view methods. Multi-view 3D reconstruction photographs a target with a camera and then recovers the target's 3D structural model from the captured image sequence; it can operate on image sequences or on video. Current multi-view methods include the silhouette (contour), texture, focus-variation (zoom), brightness, and shading methods, among others.
The focus-variation (zoom) method captures images of the target at different focal depths and performs 3D reconstruction by focus analysis or defocus analysis. In depth from focus (DFF), the target plane is conjugate to the camera's image-sensor plane; given the image distance, the lens imaging formula yields the distance of the measured surface from the camera, i.e. its depth. Target images can be acquired with a single camera or with a camera array. With a single camera, DFF requires photographing the target repeatedly, changing the focal length or other parameters between shots, or changing the relative position of camera and target each time. For 3D reconstruction this single-camera approach is slow and complicated, and unsuited to moving targets; because a single lens has a limited aperture, it is also unsuited to reconstructing distant targets. Camera-array DFF removes the limits on shot count and lens aperture: a sequence of images at different focal depths is obtained in a single shot, which simplifies acquisition.
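The lens-imaging step behind DFF can be sketched in a few lines (a minimal illustration; the function name and millimetre units are ours, not the patent's): once the image distance at which a surface point comes into sharp focus is known, the thin-lens equation gives that point's depth.

```python
def object_distance(f_mm: float, v_mm: float) -> float:
    """Depth of a focused point from the thin-lens equation 1/f = 1/u + 1/v.

    f_mm: lens focal length; v_mm: image distance at which the point is sharp.
    Returns the object distance u, i.e. the DFF depth estimate for that point.
    """
    # 1/u = 1/f - 1/v  =>  u = f*v / (v - f)
    return f_mm * v_mm / (v_mm - f_mm)
```

For example, with f = 50 mm and v = 100 mm the focused point lies 100 mm in front of the lens (unit magnification).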
The light field is the totality of the radiance function over every point and every direction in space. In 1996, Professor Levoy of Stanford University (LEVOY, M., AND HANRAHAN, P. Light field rendering. In SIGGRAPH 96, 31–42, 1996) and GORTLER et al. of Microsoft Research (The Lumigraph. In SIGGRAPH 96, 43–54, 1996) proposed the four-dimensional parameterization of the light field, and Levoy further developed the theory of light-field rendering. Light-field digital refocusing, an important application of light-field imaging, uses digital image processing to invert the blurred, defocused content of a photograph captured in a single exposure and thereby reconstruct a sharply focused image of the target. It was proposed and implemented by Ng at Stanford University (NG, R., LEVOY, M., BRÉDIF, M., DUVAL, G., HOROWITZ, M., AND HANRAHAN, P. 2005. Light field photography with a hand-held plenoptic camera. Tech. Rep. CSTR 2005-02, Stanford Computer Science; NG, R. Fourier slice photography. ACM Transactions on Graphics, 2005, No. 3, 735–744).
Halcon, developed by MVTec (Germany), is a comprehensive standard machine-vision algorithm package with a widely used integrated development environment, covering character recognition, quality inspection, calibration, positioning, and more, together with very powerful 3D-vision functionality. The basic principle of DFF (depth-from-focus) 3D reconstruction in Halcon is as follows. First, the camera position is adjusted step by step along the Z axis to acquire a sequence of images of the target, ensuring that the sequence covers the object's full extent along the camera's Z axis; each image contains both blurred and sharply focused regions. Then a search algorithm (Fibonacci search, pyramid search, Laplacian-operator search, dynamic-programming search, etc.) finds, for each pixel, the position in the sequence at which it is in sharpest focus, yielding an image that is sharp at every depth — the all-in-focus image. Focus analysis then recovers fairly accurate depth information, and combining the all-in-focus image with the depth information achieves the 3D reconstruction. The basic principle of the DFF algorithm is shown in Figure 3.
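The per-pixel focus search described above can be sketched as follows (a simplified stand-in for Halcon's depth-from-focus operator, not its actual implementation: only a discrete Laplacian focus measure is used, with periodic image borders, and the helper names are ours):

```python
import numpy as np

def depth_from_focus(stack: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Per-pixel depth index and all-in-focus composite from a focal stack.

    stack: (N, H, W) grayscale images focused at N different depths.
    Returns (depth_index, all_in_focus): the slice index where each pixel
    is sharpest, and the composite built from those sharpest pixels.
    """
    n, h, w = stack.shape
    sharp = np.empty_like(stack)
    for i in range(n):
        img = stack[i]
        # discrete Laplacian as the focus measure (periodic borders via roll)
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        sharp[i] = np.abs(lap)
    depth_index = sharp.argmax(axis=0)            # slice of sharpest focus
    all_in_focus = np.take_along_axis(
        stack, depth_index[None], axis=0)[0]      # gather sharpest values
    return depth_index, all_in_focus
```

The depth index plus the all-in-focus image are exactly the two products that, per the text, are combined to form the 3D reconstruction.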
At present, no method or system combines a light-field data acquisition device, digital refocusing, and DFF to perform fast three-dimensional reconstruction of a target.
Summary of the Invention
The technical problem addressed by the present invention is to provide a fast 3D reconstruction method and system based on light-field digital refocusing that captures quickly, requires no movement of camera or target, and reconstructs well, thereby solving the problems of the prior art.
To solve the above technical problems, the present invention adopts the following technical solutions:
A fast three-dimensional reconstruction method based on light-field digital refocusing, comprising the following steps:
First, a light-field data acquisition device acquires four-dimensional spatial light-field data;
Second, a digital refocusing module performs digital refocusing on the four-dimensional spatial light-field data to obtain a sequence of focal-plane images;
Finally, a three-dimensional reconstruction module performs 3D reconstruction on the focal-plane image sequence to obtain a 3D reconstructed image.
The light-field data acquisition device may be a Lytro light-field camera, a light-field microscope, an Adobe Light Field 2.0 structure, a Raytrix camera, a camera array, a Canon light-field camera module, the Spanish light-field lens (cafafis), the Pelican Imaging array camera, the Pixar super light-field lens, the NVIDIA near-eye light-field display, or a light-field sensor.
The four-dimensional spatial light-field data comprise two-dimensional position information and two-dimensional direction information.
The digital refocusing process includes: (1) Four-dimensional parameterization of the light-field data: define the exit-pupil plane of the optical system as the u-v plane and the plane of the image sensor as the x-y plane, both perpendicular to the optical axis of the system. After entering the camera, a ray passes through point (u, v) on the u-v plane and strikes point (x, y) on the x-y plane; the ray is denoted $L_F(u,v,x,y)$, where F is the distance of the x-y plane from the u-v plane, i.e. the working focal distance of the camera system. The image formed on the plane at image distance F is
$$E_F(x,y) = \frac{1}{F^2}\iint L_F(u,v,x,y)\,A(u,v)\cos^4\theta\,\mathrm{d}u\,\mathrm{d}v,$$
where $E_F(x,y)$ is the energy (incident luminous flux) received by the image sensor at point (x, y) on the plane at image distance F, $L_F(u,v,x,y)$ is the energy carried by the ray, A(u, v) is the area of the sensor pixel receiving the ray, and θ is the angle between ray $L_F(u,v,x,y)$ and the image-plane normal. The formula simplifies to $E_F(x,y)=\iint L_F(u,v,x,y)\,\mathrm{d}u\,\mathrm{d}v$. (2) Digital refocusing of the four-dimensional light-field data: suppose the image $E_F(x,y)$ obtained by the light-field camera system at image distance $l = F$ is blurred, while the image on the plane at image distance $l' = \alpha F$ is sharp, where α is a coefficient that adjusts the image distance. Using the formula
$$E_{\alpha F}(x,y) = \iint L_F\!\left(u,\,v,\,u+\frac{x-u}{\alpha},\,v+\frac{y-v}{\alpha}\right)\mathrm{d}u\,\mathrm{d}v$$
and varying the position parameter α, the images formed by the camera system on planes at different image distances αF are obtained.
The three-dimensional reconstruction is implemented with a depth-from-focus (DFF) algorithm.
A three-dimensional reconstruction system implementing the above method, comprising:
a light-field data acquisition device, for acquiring four-dimensional spatial light-field data;
a digital refocusing module, for digitally refocusing the four-dimensional spatial light-field data into a sequence of focal-plane images; and
a three-dimensional reconstruction module, which performs 3D reconstruction on the focal-plane image sequence produced by the digital refocusing module to obtain a 3D reconstructed image.
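The three modules above can be wired together as a thin pipeline skeleton (illustrative only — the callables stand in for the acquisition device, the refocusing software, and the DFF reconstruction, none of which the patent specifies as code):

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import numpy as np

@dataclass
class LFDRDFFPipeline:
    """Sketch of the three-module system; names are ours, not the patent's."""
    acquire: Callable[[], np.ndarray]                       # -> 4D light field L(u,v,x,y)
    refocus: Callable[[np.ndarray], Sequence[np.ndarray]]   # -> focal-plane sequence
    reconstruct: Callable[[Sequence[np.ndarray]], np.ndarray]  # -> depth map / 3D result

    def run(self) -> np.ndarray:
        lf = self.acquire()                    # one exposure, no camera/target motion
        focal_stack = self.refocus(lf)         # digital refocusing over several planes
        return self.reconstruct(focal_stack)   # DFF over the refocused stack
```

The point of the decomposition is that each stage can be swapped independently, e.g. a camera array for the Lytro camera, or another DFF implementation for Halcon's.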
Beneficial effects of the present invention: 1. Only a single shot is needed; 3D reconstruction is achieved without moving the camera or target, and the result can be viewed from any angle. 2. The capture device is compact and the capture process is fast. 3. Combining the light-field camera, digital refocusing, and DFF reduces shooting difficulty and reconstruction-algorithm complexity, shortens image-acquisition time, suits 3D reconstruction of moving targets, and extends the applicable depth range of the DFF algorithm to scenes with a large depth of field. 4. The invention can be ported to common languages such as VC and MATLAB, and supports chip-level program development, enabling miniaturized 3D measurement and reconstruction systems.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the four-dimensional parameterization of the light field.
Fig. 2 is a schematic diagram of the digital refocusing principle.
Fig. 3 is a schematic diagram of the basic principle of the DFF algorithm.
Fig. 4 is a flowchart of DFF 3D reconstruction.
Fig. 5 is a schematic diagram of the light-field camera structure.
Fig. 6 is the raw light-field image, acquired by the invention, of a bracket with two grooves.
Fig. 7 shows the digitally refocused image sequence of the bracket with two grooves.
Fig. 8 shows 3D reconstructions of the bracket with two grooves from different viewpoints.
Fig. 9 shows 3D reconstructions of a face with continuously varying depth and of a battery, USB drive, and flower at discrete depths.
Fig. 10 shows the raw image and the reconstruction of a target under strong specular reflection.
Fig. 11 shows reconstructions of the target against black, gray, and blue backgrounds.
Fig. 12 is a block diagram of the 3D reconstruction system of the invention.
Detailed Description
The present invention is described in further detail below with reference to the drawings and specific embodiments.
The invention integrates a light-field data acquisition device, digital refocusing, and DFF into a fast 3D reconstruction method based on light-field digital refocusing, termed Light-Field Digital-Refocus Depth-From-Focus (LFDRDFF). The method comprises the following steps.
First, a light-field data acquisition device acquires four-dimensional spatial light-field data.
The device may be a Lytro (USA) light-field camera, a light-field microscope (Light Field Microscopy), an Adobe Light Field 2.0 structure, a Raytrix camera, a camera array, a Canon light-field camera module (Toshiba Light Field Camera Module), the Spanish light-field lens (cafafis), the Pelican Imaging Array Camera, the Pixar super light-field lens (pixar-super-lightfield-lens), the NVIDIA near-eye light-field display (nvidia-near-eye-light-field-display), a light-field sensor, etc. The four-dimensional spatial light-field data comprise two-dimensional spatial position information and two-dimensional direction information.
Second, the digital refocusing module digitally refocuses the four-dimensional spatial light-field data into a sequence of focal-plane images.
The focal-plane image sequence can be produced with Lytro Desktop (Lytro, USA), the MATLAB Light Field Toolbox, LFDisplay, LFP File Viewer, LFP File Reader, python-lfp-reader, Lytro File Reader, lfptools, Lytro.Splitte, or lfpsplitter.
The digital refocusing method is as follows:
(1) Four-dimensional parametric representation of the light field
The light field is the totality of the radiance function over every point and every direction in space. Light-field imaging traces back to Ives's 1903 patent US 725,567 (Parallax stereogram and process of making same) and to the integral photography (IP) invented by Lippmann in 1908. Gershun introduced the concept of the light field in 1936. In 1996, Professor Levoy of Stanford University and GORTLER et al. of Microsoft Research proposed the four-dimensional parameterization of the light field, and Levoy further developed light-field rendering theory. The four-dimensional parameterization is shown in Figure 1, which defines two coordinate planes: the u-v plane, the exit-pupil plane of the optical system, and the x-y plane, the plane of the image sensor; both planes are perpendicular to the optical axis of the system.
After entering the camera system, a ray passes through point (u, v) on the u-v plane and strikes point (x, y) on the x-y plane. This ray is denoted $L_F(u,v,x,y)$: its value is the energy carried by the ray, the line joining (u, v) and (x, y) gives the propagation direction, and the subscript F is the distance of the x-y plane from the u-v plane, i.e. the lens focal distance. The total radiant energy received at point (x, y) on the x-y plane is:
$$E_F(x,y) = \frac{1}{F^2}\iint L_F(u,v,x,y)\,A(u,v)\cos^4\theta\,\mathrm{d}u\,\mathrm{d}v \qquad (1)$$
In Eq. (1), F is the distance from the image-side principal plane of the optical system to the image plane, $E_F(x,y)$ is the incident luminous flux at point (x, y) on the image plane at image distance F, and A(u, v) is the area of the sensor pixel receiving the ray. Assuming the x-y and u-v planes are unbounded, $L_F(u,v,x,y)=0$ for rays propagating outside the entrance pupil and the CCD sensor area, and θ, the angle between ray $L_F(u,v,x,y)$ and the image-plane normal, is independent of the other quantities. For convenience of analysis, ignoring scale factors and similar parameters, the formula simplifies to:
$$E_F(x,y) = \iint L_F(u,v,x,y)\,\mathrm{d}u\,\mathrm{d}v \qquad (2)$$
Equation (2) is the imaging expression of the light-field camera system on the image plane at image distance F.
(2) Light-field digital refocusing
Ng of Stanford University proposed and implemented the theory of light-field digital refocusing. Referring to Figure 2, suppose the image $E_F(x,y)$ obtained by the light-field camera system at image distance $l = F$ is blurred, while the image on the plane at image distance $l' = \alpha F$ is sharp, where α is a coefficient that adjusts the image distance. In Figure 2, a ray propagating in a given direction strikes the plane at image distance F at coordinates (x, y) and is denoted $L_F(u,v,x,y)$. The same ray strikes the plane at image distance αF at different projection coordinates, where it is denoted $L_{\alpha F}$.
For convenience of calculation, let (x, y) now denote the ray's intersection with the plane at image distance αF. From the geometry of Figure 2, this ray meets the plane at image distance F at $\left(u+\frac{x-u}{\alpha},\,v+\frac{y-v}{\alpha}\right)$. Since $L_{\alpha F}$ and $L_F$ are the same ray recorded on different image planes,
$$L_{\alpha F}(u,v,x,y) = L_F\!\left(u,\,v,\,u+\frac{x-u}{\alpha},\,v+\frac{y-v}{\alpha}\right) \qquad (3)$$
Substituting Eq. (3) into Eq. (2) gives the imaging expression of the camera on the plane at image distance $l' = \alpha F$:
$$E_{\alpha F}(x,y) = \iint L_F\!\left(u,\,v,\,u+\frac{x-u}{\alpha},\,v+\frac{y-v}{\alpha}\right)\mathrm{d}u\,\mathrm{d}v \qquad (4)$$
Equation (4) computes the image of the target scene on planes at different image distances $l' = \alpha F$. Varying the position parameter α yields the target's image at different image distances αF; obtaining the image plane at αF is the digital refocusing process of the light-field camera.
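Equation (4) can be approximated discretely by shift-and-add over the sub-aperture views of the sampled light field. This is a sketch under our own simplifying conventions (nearest-integer-pixel shifts, the 1/α magnification of the spatial term ignored as is common for α near 1, periodic borders via np.roll), not the patent's implementation:

```python
import numpy as np

def refocus(lf: np.ndarray, alpha: float, u_coords, v_coords) -> np.ndarray:
    """Digital refocusing in the spirit of Eq. (4), by shift-and-add.

    lf: sampled 4D light field L_F[u, v, x, y];
    u_coords / v_coords: aperture-sample positions in pixels.
    Each sub-aperture view (u, v) is shifted by (1 - 1/alpha) * (u, v)
    before summation, the sampled analogue of evaluating L_F at
    (u + (x - u)/alpha, v + (y - v)/alpha).
    """
    nu, nv, nx, ny = lf.shape
    out = np.zeros((nx, ny))
    for i, u in enumerate(u_coords):
        for j, v in enumerate(v_coords):
            du = int(round((1 - 1.0 / alpha) * u))
            dv = int(round((1 - 1.0 / alpha) * v))
            out += np.roll(lf[i, j], shift=(du, dv), axis=(0, 1))
    return out / (nu * nv)
```

With α = 1 the shifts vanish and the result is simply the average over all sub-aperture views, i.e. the image on the original plane at F.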
Finally, the 3D reconstruction module performs 3D reconstruction on the focal-plane image sequence to obtain the 3D reconstructed image. The reconstruction may use the DFF algorithm of the Halcon machine-vision package from MVTec (Germany), or the DFF algorithms of other machine-vision and image-processing software.
The DFF algorithm supports parallel computation, making it efficient and fast. The method can be ported to common languages such as VC and MATLAB, has strong portability, and supports chip-level program development, which lends itself to implementing miniaturized 3D measurement and reconstruction systems.
Referring to Figure 12, the invention also provides a 3D reconstruction system implementing the above method, the LFDRDFF 3D reconstruction system, comprising: a light-field data acquisition device for acquiring four-dimensional spatial light-field data; a digital refocusing module for digitally refocusing those data into a sequence of focal-plane images; and a 3D reconstruction module that performs 3D reconstruction on the focal-plane image sequence produced by the digital refocusing module to obtain a 3D reconstructed image.
This embodiment uses a Lytro light-field camera to acquire the four-dimensional spatial light-field data. The Lytro camera consists mainly of a main lens, a microlens array, and a digital image sensor; its structure is shown in Figure 5. Each microlens images the pupil of the main lens onto the image sensor, covering several sensor cells, which is equivalent to dividing the whole pupil into sub-apertures. The position of a microlens carries the two-dimensional spatial position information, and the position of each sensor pixel under that microlens carries the two-dimensional direction information; together with the sensor-cell output signals, these yield the four-dimensional spatial light-field data.
The system is simple in composition, markedly simplifying the 3D data-acquisition apparatus. Acquiring the target's 3D data is quick and simple, requiring no movement of camera or target and no repeated shots, which markedly increases data-acquisition speed.
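Idealizing the microlens layout just described as a rectangular grid (no hexagonal packing, vignetting, or demosaicing, unlike a real Lytro sensor), the raw sensor image can be rearranged into the 4D array L[u, v, x, y] as follows (function and parameter names are ours):

```python
import numpy as np

def raw_to_4d(raw: np.ndarray, pu: int, pv: int) -> np.ndarray:
    """Rearrange a raw plenoptic sensor image into L[u, v, x, y].

    raw: (H, W) sensor image; each microlens covers a pu x pv pixel block.
    Block position -> spatial coordinates (x, y) (microlens position);
    position inside a block -> angular coordinates (u, v) (sub-aperture).
    """
    h, w = raw.shape
    nx, ny = h // pu, w // pv
    # split into (x, u, y, v) blocks, then reorder axes to (u, v, x, y)
    return (raw[:nx * pu, :ny * pv]
            .reshape(nx, pu, ny, pv)
            .transpose(1, 3, 0, 2))
```

Slicing the result at fixed (u, v) gives one sub-aperture view, which is exactly what the shift-and-add refocusing consumes.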
本发明选用了带有两个凹槽的支架作为目标物,目标物背景为黑色,采用基于光场数字重聚焦的快速三维重建实现的具体实施例如下:The present invention selects a bracket with two grooves as the target object, and the background of the target object is black. The specific implementation examples of fast three-dimensional reconstruction based on light field digital refocusing are as follows:
首先利用Lytro相机获取目标物的光场数据,通过Lytro光场相机对目标物进行一次拍摄,获得其原始图像,如图6所示,图6(a)为目标(支架)原始图,图6(b)为目标的Lytro光场图像局部图(图6(a)白框所示),图6(c)为图6(b)光场图像局部放大图的微透镜图像(图6(b)白框所示)。First, use the Lytro camera to obtain the light field data of the target object, and take a shot of the target object through the Lytro light field camera to obtain its original image, as shown in Figure 6, Figure 6(a) is the original image of the target (bracket), and Figure 6 (b) is the local image of the Lytro light field image of the target (shown in the white box in Figure 6(a), Figure 6(c) is the microlens image of the local enlarged image of the light field image in Figure 6(b) (Figure 6(b) ) shown in the white box).
然后将Lytro相机所得光场原始图像导入数字重聚焦模块中,通过Lytro Desktop软件的数字重聚焦方法完成数字重聚焦处理,得到目标物不同焦平面(以参数λ表示)的序列清晰图像,结果如图7所示。重构出的序列图像共有9幅,其中λ取正值的5幅,λ取负值的4幅。当λ=-6.92时,聚焦平面最远,当λ=18.25时,聚焦平面最近。Then import the original image of the light field obtained by the Lytro camera into the digital refocusing module, and complete the digital refocusing process through the digital refocusing method of the Lytro Desktop software, and obtain a sequence of clear images of different focal planes (indicated by parameter λ) of the target object. The results are as follows Figure 7 shows. There are 9 reconstructed sequence images in total, among which λ takes 5 pictures with positive values, and λ takes 4 pictures with negative values. When λ=-6.92, the focus plane is the farthest, and when λ=18.25, the focus plane is the closest.
Finally, the three-dimensional reconstruction module applies the DFF (depth-from-focus) algorithm in the Halcon software to the reconstructed image sequence to obtain a three-dimensional reconstruction. The method takes as input multiple images with different values of λ; its workflow is shown in Figure 4. The target's three-dimensional image is reconstructed quickly and can be viewed from any angle. Figure 8 shows the three-dimensional reconstructions from different viewing angles obtained in the experiment.
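The depth-from-focus idea behind such a DFF stage can be illustrated with a minimal sketch (a hypothetical example, not Halcon's implementation): for each pixel, a sharpness measure is evaluated across the refocused stack, and the λ of the sharpest slice becomes that pixel's depth estimate.

```python
# Minimal depth-from-focus (DFF) sketch; illustrative, not the Halcon DFF code.
# stack[k] is the refocused image for lambdas[k]; a squared discrete Laplacian
# serves as the per-pixel sharpness (focus) measure.

def focus_measure(image, x):
    """Squared 1-D Laplacian: large where the image is locally sharp."""
    return (image[x - 1] - 2 * image[x] + image[x + 1]) ** 2

def depth_from_focus(stack, lambdas):
    """Assign each interior pixel the λ of its sharpest slice."""
    width = len(stack[0])
    depth_map = [None] * width          # border pixels stay undefined
    for x in range(1, width - 1):
        scores = [focus_measure(img, x) for img in stack]
        depth_map[x] = lambdas[scores.index(max(scores))]
    return depth_map

# A pixel with a crisp edge in the λ = -6.92 slice is assigned that depth:
stack = [[0.0, 0.0, 1.0, 0.0, 0.0],     # sharp point: in focus here
         [0.2, 0.2, 0.3, 0.2, 0.2]]     # blurred: out of focus
depth = depth_from_focus(stack, [-6.92, 18.25])
```

Real DFF implementations average the focus measure over a window and interpolate between slices, but the per-pixel argmax over λ is the core of the reconstruction.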
The invention was also applied to targets with two different depth structures: a human face with continuously varying depth, and a battery, USB drive, and flower with discrete depths. The reconstruction results are shown in Figure 9: Figure 9(a) shows three-dimensional reconstructions of the continuous-depth face from different viewpoints, and Figure 9(b) shows reconstructions of the discrete-depth battery, USB drive, and flower from different viewpoints. The experimental results show that the LFDRDFF method correctly reconstructs targets with different depth structures and that the reconstruction depth range can reach visual distances, extending the depth range of the DFF algorithm and making the method suitable for three-dimensional reconstruction of large-depth-of-field scenes and moving targets.
The factors affecting the quality of the three-dimensional reconstruction are analyzed below:
(1) Influence of the value of the parameter λ on the reconstruction
The light-field reconstruction-plane parameter λ encodes the depth of the reconstructed plane; the spacing, number, and positive/negative symmetry of the λ values determine the depth coverage of the reconstructed plane sequence, so λ is the key factor for correct three-dimensional reconstruction. Repeated experiments were carried out on the grooved target; the λ values used in some of the experiments are listed in Table 1, and statistics of the λ values over all experiments are given in Table 2.
The experiments show that the three-dimensional reconstruction is more reliable when more than eight λ values are used and the numbers of positive and negative λ values are roughly equal. When the focal plane of the Lytro exposure lies at the middle of the target's depth range, the reconstruction is accurate; when the focal plane deviates from the middle plane, the reconstruction becomes distorted.
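These empirical criteria can be captured in a small validity check. This is an illustrative helper, not part of the patented system; the threshold of eight and the balance tolerance are assumptions drawn from the observations above, and the intermediate λ values below are placeholders (only the endpoints −6.92 and 18.25 are given in the text).

```python
# Illustrative check of the empirical λ-selection criteria (hypothetical helper).

def lambda_set_ok(lambdas, min_count=8, max_imbalance=1):
    """True when more than min_count values are used and the positive and
    negative counts differ by at most max_imbalance."""
    pos = sum(1 for lam in lambdas if lam > 0)
    neg = sum(1 for lam in lambdas if lam < 0)
    return len(lambdas) > min_count and abs(pos - neg) <= max_imbalance

# A nine-slice set like the embodiment's (5 positive, 4 negative) passes;
# the specific intermediate values here are placeholders:
example = [-6.92, -4.1, -2.3, -1.1, 1.5, 3.0, 6.0, 10.0, 18.25]
```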
(2) Influence of the shooting conditions on the reconstruction
Strong specular reflections from the target during shooting, and the target's background, also affect the reconstruction in different ways. Figure 10 shows a target photographed under strong reflection together with its reconstruction. The non-reflective regions of the raw image (Figure 10(a)) correspond to the black regions of the reconstruction (Figure 10(b)), which carry correct depth information; the white reflective regions of the raw image correspond to the white regions of the reconstruction, for which no correct depth information was obtained, producing reconstruction errors.
Figure 11 shows the raw images (Figures 11I(a), 11II(a), 11III(a)) and reconstructions (Figures 11I(c), 11II(c), 11III(c)) of the target photographed against different backgrounds (black in Figure 11I, gray in Figure 11II, blue in Figure 11III). The DFF algorithm in the Halcon software converts the raw image of the target into a grayscale image (Figures 11I(b), 11II(b), 11III(b)), and different backgrounds yield different gray values. The blue background becomes almost black in grayscale, with the lowest gray value; it disturbs the reconstruction the most and gives the worst result. Both the black and the gray background convert to gray, with the black background giving the best reconstruction, followed by the gray background.
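The reason a blue background nearly vanishes in grayscale can be seen from a standard luma conversion. The ITU-R BT.601 weights are used here as an assumption for illustration; the exact formula Halcon applies may differ, but the qualitative point holds: blue carries the smallest weight, so a saturated blue background maps to a very dark gray.

```python
# Standard BT.601 luma conversion (assumed for illustration; Halcon's actual
# RGB-to-gray formula may differ). Blue has the smallest weight (0.114), so a
# saturated blue background converts to a near-black gray value.

def to_gray(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

blue_bg = to_gray(0, 0, 255)        # ~29 of 255: almost black
gray_bg = to_gray(128, 128, 128)    # 128: mid gray, well separated from black
black_bg = to_gray(0, 0, 0)         # 0: black background stays black
```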
Table 1:
Table 2:
Claims (1)
1. A three-dimensional reconstruction system implementing a fast three-dimensional reconstruction method based on light-field digital refocusing, comprising:

1) a light-field data acquisition device for acquiring four-dimensional spatial light-field data;

2) a digital refocusing module for performing digital refocusing on the four-dimensional spatial light-field data to obtain a sequence of focal-plane images;

3) a three-dimensional reconstruction module for performing three-dimensional reconstruction on the focal-plane image sequence produced by the digital refocusing module to obtain a three-dimensional reconstructed image;

characterized in that:

the light-field data acquisition device of 1) acquires the light-field data of the target with a Lytro camera: the target is photographed once with the Lytro light-field camera to obtain its raw image, which is taken against a black or gray background, avoiding a blue background;

the digital refocusing module of 2) imports the raw light-field image obtained by the Lytro camera and completes digital refocusing through the digital refocusing method of the Lytro Desktop software, yielding a sequence of sharp images of the target's focal planes denoted by the parameter λ, where a negative λ corresponds to a far focal plane and a positive λ to a near one; more than eight λ values are used, with roughly equal numbers of positive and negative values;

the three-dimensional reconstruction module of 3) performs three-dimensional reconstruction on the reconstructed image sequence with the DFF algorithm in the Halcon software, rapidly reconstructing a three-dimensional image of the target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410581347.XA CN104463949B (en) | 2014-10-24 | 2014-10-24 | A kind of quick three-dimensional reconstructing method and its system based on light field numeral refocusing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410581347.XA CN104463949B (en) | 2014-10-24 | 2014-10-24 | A kind of quick three-dimensional reconstructing method and its system based on light field numeral refocusing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104463949A CN104463949A (en) | 2015-03-25 |
CN104463949B true CN104463949B (en) | 2018-02-06 |
Family
ID=52909931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410581347.XA Active CN104463949B (en) | 2014-10-24 | 2014-10-24 | A kind of quick three-dimensional reconstructing method and its system based on light field numeral refocusing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104463949B (en) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102028088B1 (en) | 2015-04-30 | 2019-10-02 | 구글 엘엘씨 | Set of virtual glasses for viewing real scenes correcting the position of lenses different from the eye |
CN104899870B (en) * | 2015-05-15 | 2017-08-25 | 清华大学深圳研究生院 | The depth estimation method being distributed based on light field data |
EP3106912A1 (en) * | 2015-06-17 | 2016-12-21 | Thomson Licensing | An apparatus and a method for obtaining a registration error map representing a level of fuzziness of an image |
CN106447762B (en) * | 2015-08-07 | 2019-05-07 | 中国科学院深圳先进技术研究院 | 3D reconstruction method and system based on light field information |
CN105427302B (en) * | 2015-11-17 | 2018-01-16 | 中国科学院自动化研究所 | A kind of three-dimensional acquisition and reconstructing system based on the sparse camera collection array of movement |
CN106254857B (en) * | 2015-12-31 | 2018-05-04 | 北京智谷睿拓技术服务有限公司 | Light field display control method and device, light field display device |
CN105704476B (en) * | 2016-01-14 | 2018-03-20 | 东南大学 | A kind of virtual visual point image frequency domain fast acquiring method based on edge reparation |
CN105791881A (en) * | 2016-03-15 | 2016-07-20 | 深圳市望尘科技有限公司 | Optical-field-camera-based realization method for three-dimensional scene recording and broadcasting |
CA3018604C (en) * | 2016-04-12 | 2023-11-07 | Quidient, Llc | Quotidian scene reconstruction engine |
CN106296811A (en) * | 2016-08-17 | 2017-01-04 | 李思嘉 | A kind of object three-dimensional reconstruction method based on single light-field camera |
CN107788947A (en) * | 2016-09-07 | 2018-03-13 | 爱博诺德(北京)医疗科技有限公司 | Eye examination apparatus and method |
CN106846469B (en) * | 2016-12-14 | 2019-12-03 | 北京信息科技大学 | Method and device for reconstructing 3D scene from focus stack based on feature point tracking |
CN107084794B (en) * | 2017-04-10 | 2021-06-22 | 东南大学 | Flame three-dimensional temperature field measuring system and method based on light field layered imaging technology |
CN107219620A (en) * | 2017-05-27 | 2017-09-29 | 中国科学院光电技术研究所 | Single-tube light field microscope |
CN107525945B (en) * | 2017-08-23 | 2019-08-02 | 南京理工大学 | 3D-3C particle image speed-measuring system and method based on integration imaging technology |
CN107909578A (en) * | 2017-10-30 | 2018-04-13 | 上海理工大学 | Light field image refocusing method based on hexagon stitching algorithm |
CN108007435B (en) * | 2017-11-15 | 2020-11-20 | 长春理工大学 | A camera positioning device and method for positioning a target camera based on four points |
CN112055836B (en) * | 2018-01-14 | 2022-09-27 | 光场实验室公司 | Light field vision correction device |
CN108470149A (en) * | 2018-02-14 | 2018-08-31 | 天目爱视(北京)科技有限公司 | A kind of 3D 4 D datas acquisition method and device based on light-field camera |
CN109360212B (en) * | 2018-11-02 | 2023-05-09 | 太原科技大学 | Frequency domain light field digital refocusing Jiao Suanfa capable of inhibiting resampling error |
CN109712232B (en) * | 2018-12-25 | 2023-05-09 | 东南大学苏州医疗器械研究院 | Object surface contour three-dimensional imaging method based on light field |
CN109632092A (en) * | 2018-12-29 | 2019-04-16 | 东南大学 | A kind of luminance test system and method based on spatial light field |
CN109859186B (en) * | 2019-01-31 | 2020-12-29 | 江苏理工学院 | A method for detecting positive and negative electrodes of lithium battery modules based on halcon |
CN110012196A (en) * | 2019-02-22 | 2019-07-12 | 华中光电技术研究所(中国船舶重工集团有限公司第七一七研究所) | A kind of light-field camera refocusing method |
CN109916531B (en) * | 2019-03-04 | 2020-09-11 | 东南大学 | Semitransparent flame three-dimensional temperature field measuring method based on light field refocusing |
CN110009693B (en) * | 2019-04-01 | 2020-12-11 | 清华大学深圳研究生院 | Rapid blind calibration method of light field camera |
CN111800588A (en) * | 2019-04-08 | 2020-10-20 | 深圳市视觉动力科技有限公司 | Optical unmanned aerial vehicle monitoring system based on three-dimensional light field technology |
CN110441271B (en) * | 2019-07-15 | 2020-08-28 | 清华大学 | Light field high-resolution deconvolution method and system based on convolutional neural network |
CN110675451B (en) * | 2019-09-17 | 2023-03-17 | 浙江荷湖科技有限公司 | Digital self-adaptive correction method and system based on phase space optics |
CN111080774B (en) * | 2019-12-16 | 2020-09-15 | 首都师范大学 | Method and system for reconstructing light field by applying depth sampling |
CN111179354A (en) * | 2019-12-16 | 2020-05-19 | 中国辐射防护研究院 | method for experimentally calibrating refocusing distance and corresponding α value of light field camera |
CN111288925B (en) * | 2020-01-18 | 2022-05-06 | 武汉烽火凯卓科技有限公司 | Three-dimensional reconstruction method and device based on digital focusing structure illumination light field |
CN111881925B (en) * | 2020-08-07 | 2023-04-18 | 吉林大学 | Significance detection method based on camera array selective light field refocusing |
CN112132771B (en) * | 2020-11-02 | 2022-05-27 | 西北工业大学 | Multi-focus image fusion method based on light field imaging |
CN112967268B (en) * | 2021-03-24 | 2022-08-09 | 清华大学 | Digital optical tomography method and device based on optical field |
CN113902791B (en) * | 2021-11-22 | 2022-06-21 | 郑州大学 | A three-dimensional reconstruction method and device based on liquid lens depth focusing |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101865673A (en) * | 2010-06-08 | 2010-10-20 | 清华大学 | A method and device for collecting and three-dimensional reconstruction of microscopic observation field |
CN102520787A (en) * | 2011-11-09 | 2012-06-27 | 浙江大学 | Real-time spatial three-dimensional presentation system and real-time spatial three-dimensional presentation method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE112009005074T8 (en) * | 2009-05-21 | 2012-12-27 | Intel Corp. | TECHNIQUES FOR QUICK STEREO RECONSTRUCTION FROM PICTURES |
- 2014-10-24 CN CN201410581347.XA patent/CN104463949B/en active Active
Non-Patent Citations (3)
Title |
---|
A 3D reconstruction method based on multi-view stereo vision; Miao Lanfang; Journal of Zhejiang Normal University (Natural Sciences); Aug. 2013; Vol. 36, No. 3; pp. 241-246 *
Progress in light-field imaging technology; Nie Yunfeng et al.; Journal of the Graduate School of the Chinese Academy of Sciences; Sep. 2011; Vol. 28, No. 5; Sections 2-3 *
Rice seed variety recognition based on three-dimensional reconstruction from image sequences; Qian Yan et al.; Transactions of the Chinese Society of Agricultural Engineering; Apr. 2014; Vol. 30, No. 7; Section 1.1 *
Also Published As
Publication number | Publication date |
---|---|
CN104463949A (en) | 2015-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104463949B (en) | 2018-02-06 | A kind of quick three-dimensional reconstructing method and its system based on light field numeral refocusing |
Ihrke et al. | 2016 | Principles of light field imaging: Briefly revisiting 25 years of research |
US8290358B1 (en) | 2012-10-16 | Methods and apparatus for light-field imaging |
KR102583723B1 (en) | 2023-10-05 | A method and an apparatus for generating data representative of a light field |
CN103945210B (en) | 2015-08-05 | A kind of multi-cam image pickup method realizing shallow Deep Canvas |
JP6862569B2 (en) | 2021-04-21 | Virtual ray tracing method and dynamic refocus display system for light field |
Li et al. | 2012 | Simplified integral imaging pickup method for real objects using a depth camera |
JP2019532451A (en) | 2019-11-07 | Apparatus and method for obtaining distance information from viewpoint |
KR20170005009A (en) | 2017-01-11 | Generation and use of a 3d radon image |
KR102253320B1 (en) | 2021-05-17 | Method for displaying 3 dimension image in integral imaging microscope system, and integral imaging microscope system implementing the same |
CN104007556A (en) | 2014-08-27 | Low crosstalk integrated imaging three-dimensional display method based on microlens array group |
TWI752905B (en) | 2022-01-21 | Image processing device and image processing method |
JP7479729B2 (en) | 2024-05-09 | Three-dimensional representation method and device |
CN113763301B (en) | 2024-03-29 | A three-dimensional image synthesis method and device that reduces the probability of miscutting |
Kagawa et al. | 2012 | A three‐dimensional multifunctional compound‐eye endoscopic system with extended depth of field |
Liu et al. | 2016 | Stereo-based bokeh effects for photography |
Yang et al. | 2023 | Real-time light-field generation based on the visual hull for the 3D light-field display with free-viewpoint texture mapping |
Ikeya et al. | 2018 | Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras |
CN110553585A (en) | 2019-12-10 | 3D information acquisition device based on optical array |
CN110908133A (en) | 2020-03-24 | An integrated imaging 3D display device based on a dihedral corner mirror array |
TWI717387B (en) | 2021-02-01 | An apparatus and a method for computer implemented generating data, a light field imaging device, a device for rendering an image and a non-transitory computer program product for a programmable apparatus |
Bazeille et al. | 2019 | Light-field image acquisition from a conventional camera: design of a four minilens ring device |
CN117956133B (en) | 2024-10-11 | Integrated imaging micro-image array generation method based on optimal voxel spatial distribution |
CN113132715B (en) | 2023-08-04 | Image processing method and device, electronic equipment and storage medium thereof |
Martínez‐Corral et al. | 2014 | Three‐Dimensional Integral Imaging and Display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2015-03-25 | C06 | Publication | |
2015-03-25 | PB01 | Publication | |
2015-04-22 | SE01 | Entry into force of request for substantive examination | |
2018-02-06 | GR01 | Patent grant | |