US20110211040A1 - System and method for creating interactive panoramic walk-through applications - Google Patents

System and method for creating interactive panoramic walk-through applications

Info

Publication number
US20110211040A1
Authority
US
United States
Prior art keywords
image
panoramic
images
view
vertical
Prior art date
2008-11-05
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/127,474
Inventor
Pierre-Alain Lindemann
David Lindemann
Gérard Crittin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2008-11-05
Filing date
2009-11-05
Publication date
2011-09-01
2009-11-05 Application filed by Individual
2009-11-05 Priority to US13/127,474
2011-09-01 Publication of US20110211040A1
Status: Abandoned

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 - Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01 - Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/13 - Receivers
    • G01S19/14 - Receivers specially adapted for specific applications
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/12 - Panospheric to cylindrical image transformations
    • G - PHYSICS
    • G03 - PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B37/06 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe, involving anamorphosis

Definitions

  • the present invention relates generally to virtual tours. More specifically, the present invention relates to virtual walk-through applications using panoramic images, 3D images or a combination of both.
  • a virtual tour (or virtual reality tour) is a virtual reality simulation of an existing location, usually built using content consisting principally of 2D panoramic images, sequences of linked still images or video sequences, and/or image-based rendering (IBR) consisting of image-based models of existing physical locations, as well as other multimedia content such as sound effects, music, narration, and text.
  • a virtual tour is accessed on a personal computer (typically connected to the Internet) or a mobile terminal.
  • virtual tours aim at evoking an experience of moving through the represented space.
  • Virtual tours can be especially useful for universities and the real estate industry, which look to attract prospective students and tenants/buyers, respectively, eliminating for the consumer the cost of travelling to numerous individual locations.
  • panorama indicates an unbroken view, so essentially, a panorama in that respect could be either a series of photographic images or panning video footage.
  • the terms ‘panoramic tour’ and ‘virtual tour’ are generally associated with virtual tours created using stills cameras. Such virtual tours created with still cameras are made up of a number of images taken from a single view point. The camera and lens are rotated around what is referred to as a nodal point (the exact point at the back of the lens where the light converges). These images are stitched together using specialist software to create a panorama representing a near 360 degree viewing angle, as viewed from a single “view point”; the panoramas are each resized and configured for optimal on-line use. Some ‘panographers’ will then add navigation features, such as hotspots (allowing the user to “jump” from one viewpoint or panorama to the next) and integrate geographic information such as plans or maps.
  • a seamless panoramic image cannot be created from still images whenever such still images are captured from different nodal points or, for two consecutive images, from a single nodal point but with different focal lengths or focus distances. Images captured from a single camera rotating on its nodal point can be stitched seamlessly, but this solution cannot be used for applications involving axial translation, where, for example, images are captured from a vehicle in motion.
  • Catadioptric optical systems provide images having a 360° horizontal field of view and near 180° vertical field of view.
  • the resulting panoramic images are of an annular shape and generally must be sliced open and “unwarped” to create a panoramic image of a rectangular shape.
  • the unwarping step causes image distortion which, together with the optical distortions caused by the catadioptric optics having unevenly distributed angles along the radial axis (the vertical axis of the view), must be compensated for by specialised application software.
  • Patent document US 2007/0211955 to Pan discloses a perspective correction method allowing e-panning without image distortion, wherein the image correction step is performed on image slices (horizontal sections of the wide-angle image) by repositioning each pixel to a corresponding point on a cylindrical surface.
  • This method consumes significant processing power and bandwidth for respectively correcting and transmitting the images whenever fast user motion is involved during navigation, and is therefore not optimal for providing a seamless navigation experience at relatively high user directed panning speed.
  • occlusion meaning, with regard to 2D images, the non-projection of a surface to a point of observation, and with regard to a 3-D space, the effect of one object blocking another object from view.
  • Limitations of current virtual tour technology, such as object occlusion, have had the detrimental result that virtual tours have never materialized outside of the real estate industry.
  • StreetView and applications of the like provide visualisation at road-view level, that is, visiting a city as viewed from a car, wherein the user follows a pathway formed by a plurality of panoramas accessible along main streets, the user following the pathway by “jumping” (generally from a graphical interface allowing clicking of on screen icons) from a panorama or point of view to the next distant point of view.
  • This application uses a series of standard photographic images, taken by a multiple-camera system mounted to produce images representative of multiple view angles; panoramic images are produced by computation (stitching) of still images from the multiple cameras.
  • such panoramic images cannot provide an accurate representation of geometric objects, for example buildings, due to the inherent discontinuity (break) of such panoramic images; this discontinuity is due to the physical impossibility of superposing a single nodal point from multiple cameras and view angles.
  • “STREET VIEW” products and the like provide images which suffer from trapezoidal distortion whenever the view angle is not pointing toward the horizon; this distortion is due to the perspective.
  • “STREET VIEW”'s images do not reflect human vision behaviour, which keeps vertical lines mostly parallel whenever a viewer tilts the view gently above or below the horizon.
  • Google “STREET VIEW” also creates ground plane distortion, where planar ground seems to be inclined due to unwanted motion of the cameras caused by inertial forces.
  • Other current walk-through products such as “EVERYSCAPE” (www.everyscape.com by Everyscape, Waltham, Mass.) and “EARTHMINE” (www.earthmine.com by Earthmine inc., Berkeley, Calif.) also produce trapezoidal distortion, which makes them unfit for applications requiring continuous undistorted images (i.e. images which more closely correspond to human vision), as for example, for virtual shopping.
  • the trapezoidal distortion drawback is also inherent to virtual walk-through applications based on 3D virtual images, which can be used, for example, for visiting a virtual building using a real-time 3D engine, such as Second Life (www.secondlife.com by Linden Research Inc, San Francisco, Calif.), or video games.
  • “EARTHMINE” provides commercial online walk-through for applications such as management of buildings and assets, telemetric measurement and other cadastral works.
  • the product combines high resolution still images and 3D mesh information to provide pathways wherein the user jumps from one distant view point to another.
  • EVERYSCAPE provides commercial online panoramic products wherein motion between two consecutive view points is simulated by video postproduction effects. This product does not allow the user to pan and tilt the viewing angle during displacement along the travel path. During the travel motion effect, images are no longer panoramic unless the fields of view of the images representative of the next fixed point are constrained to the motion axis.
  • the prior art describes several techniques whose purpose is to reduce the bandwidth associated with the transmission of panoramic images and applications between a server and the user's remote terminal, allowing a user to navigate a walkthrough space while downloading data.
  • Predefined pathways have the additional benefit of simplifying user navigation, notably by preventing the user from searching available paths or from hitting objects repetitively during motion, as would be the case when user tries to walk through walls or door images.
  • U.S. Pat. Nos. 6,388,688 and 6,580,441, both to Schileru-Rey, disclose a computer system and method that allow interactive navigation and exploration of spatial environments, wherein pathways are represented by branches, and intersections in the real environment are represented by nodes. The user selects which path to follow from the node.
  • a branch represents a video sequence or animation played during motion between two adjacent view points.
  • Virtual objects can be integrated into specific branches or nodes, without assigning a geographic coordinate to the virtual objects; each object is linked to at least one branch or node and displayed when the user is travelling on said branch or node, as illustrated by the sketch below.
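For illustration, a minimal sketch of this node/branch model, assuming hypothetical Python class and field names (none of which come from the '688/'441 patents):

```python
# Sketch of the node/branch walkthrough model: intersections are nodes,
# pathways are branches, and virtual objects attach to branches or nodes
# rather than to geographic coordinates. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Branch:
    start: str                                    # node id where the branch begins
    end: str                                      # node id where the branch ends
    media: str                                    # video/animation played while travelling the branch
    objects: list = field(default_factory=list)   # virtual objects shown on this branch

@dataclass
class Node:
    node_id: str
    branches: list = field(default_factory=list)  # paths selectable at this intersection
    objects: list = field(default_factory=list)   # virtual objects shown at this node

# A user at a node selects one of its branches; the associated media plays,
# and any attached virtual objects are displayed during that traversal.
lobby, hall = Node("lobby"), Node("hall")
corridor = Branch("lobby", "hall", media="lobby_to_hall.mp4",
                  objects=["welcome_sign"])
lobby.branches.append(corridor)
```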
  • U.S. Pat. No. 7,103,232 to Kotake discloses an IBR system with improved broadcasting performance, where a panoramic image is created by stitching several images from several cameras, preferably video cameras, pointing to distinct points of view, the cameras being synchronised by use of a single time code.
  • the '232 system provides panoramic images divided into six image sections of 60° horizontal field of view each, and typically broadcasts only two of the six sections (providing a 120° field of view) at any given time, with the aim of reducing processing power and communication bandwidth.
  • the '232 solution is not optimized, however, for walk-through applications allowing fast movement across the horizontal plane beyond 120°; moreover, the '232 patent does not disclose broadcasting images of different image resolutions, meaning that it only covers broadcasting images at the highest possible image resolution.
  • U.S. Pat. No. 6,633,317 to Jiang Li discloses a data transfer scheme, dubbed spatial video streaming, allowing a client to selectively retrieve image segments associated with the viewer's current viewpoint and viewing direction, rather than transmitting the image data in the typical frame by frame manner.
  • the method of the '317 patent divides the walkthrough space into a grid; each cell of the grid is assigned to at least one image of the surrounding scene as viewed from that cell; images are characterised similarly to a concentric mosaic in that each cell is represented by a sequence of image columns.
  • The '317 patent allows transmission of part of the images (compressed or not) in an attempt to anticipate the viewer's change of view point within the walkthrough space, starting with image data corresponding to viewpoints immediately adjacent to the current viewpoint, with subsequent image data associated with viewpoints radiating progressively out from the current viewpoint.
  • The '317 method is well suited to open walkthrough spaces where the user can move in any direction using multiple sources of image data (simple 2D images, panoramic images or concentric mosaics), such as in a typical 3D environment.
  • this method is not suited to the optimal transmission of full panoramic images in situations where the user travels along predefined pathways consisting of several view points in a linear arrangement within a network of pathways within the walkthrough space.
  • this method is also not optimized, in terms of response time, to allow the user to change his travel plan, for example by making a U-turn or travelling along another pathway.
  • because this method allows travel in any direction (rather than only along a predefined pathway), the amount of data downloaded to represent a given view point is greater, and the method is therefore less suited for a fast and responsive viewing experience on the Internet or other network media having limited bandwidth.
  • U.S. Pat. No. 6,693,649 to Lipscomb discloses a solution for non-linear mapping between media and display allowing “hotspots”, defined as an outline of two points connected by straight lines, to be used in the context of panoramas. Such “hotspots” are referenced to each image using two angle coordinates or two pixel coordinates; such values are only valid for each distinct image.
  • What is needed is an optimized system or method for the real-time construction and broadcasting of panoramic walkthrough applications, which allows the user, from each view point or geographical coordinate along a network of pathways, to have a complete view from a first person point of view, the view covering substantially 360° in field of view.
  • What is needed is such a system or method that combines high rate fluid panoramic imaging broadcasting and the possibility of seamlessly providing higher quality images in which the visual perspective perception based on human vision is preserved.
  • a system, apparatus and method for creating interactive panoramic walk-through applications having a 2D image acquisition system is provided, comprising: a holding device, such as a vehicle, equipped with a camera connected to a catadioptric optical system providing a near 360° field of view; a memory device adapted to store data including images and geographic coordinate information related to these images; a communication device for transferring data from the memory device to a computer; a fixation and stabilisation system connecting the camera to the holding means for aligning the camera perpendicular to the horizontal plane; a processor and associated software for performing image modification steps; and, optionally, 3D virtual images.
  • the image capture system includes a location measurement device (GPS), a distance measurement device (odometer) and an inertial measurement unit (IMU) for measuring rate of acceleration and changes in rotational attributes (attitude) of the vehicle, the fixation device or camera.
  • the stored data includes the image, date and time of image capture, geographical coordinate information, and other information, notably image reference and image group, so as to identify, for example, a particular district or street, as well as camera settings such as aperture, and image-related information such as camera model, speed and ISO reference.
  • the image modification steps include steps for providing panoramic images based on a two-point perspective: unwarping images, vertically stretching the image in proportion to the divergence of the field of view from the horizon of the optical system, and expanding horizontal edges.
  • the image modification steps optionally include the step of automatic blurring of portions of images, such as faces or car number plates.
  • Software operates a computer to perform image modification steps such as processing images in two resolutions, where low resolution is used for interactive walkthrough motion (view point translation) and high resolution for interactive panoramic motion (view point rotation).
  • preservation of visual perspective based on human vision is provided, notably by use of a two-point perspective that does not produce the trapezoidal distortion inherent in standard 360° environment interactive applications.
  • FIG. 1 is a schematic side view of a catadioptric (mirror-based) panoramic optical system.
  • FIG. 2 is a schematic view of a panoramic image made using the catadioptric based panoramic optical system of FIG. 1 .
  • FIG. 3 is a schematic side view of a lens based panoramic optical system.
  • FIG. 4 is a schematic view of a panoramic image made using the lens based panoramic optical system of FIG. 3 .
  • FIG. 5 is a flow chart showing the unwarping modification steps.
  • FIG. 6A-6C are schematic views of a panoramic image, made from either of the systems of FIG. 1 and FIG. 3, following each of the unwarping modification steps of FIG. 5.
  • FIG. 7 is a schematic view of a panoramic image before and after modification steps to compensate for vertical distortion.
  • FIG. 8 is a side view showing the vehicle, image capture system, measurement means and attached odometer over a schematic ruler representing the distance of the vehicle's travel along a road.
  • FIG. 9 is a schematic view of the data storage apparatus of the present invention.
  • FIG. 10 is a floor plan view showing the division of a panoramic virtual 3D image into four sections to be compatible with the standard field of view (<179.9°) in standard 3D applications.
  • FIG. 11 is a schematic view showing four images resulting from the rendering of a 360° field of view in a standard 3D application by the section rendering step, dividing a 360° panoramic field of view (FOV) into the four 3D images of 90° FOV of FIG. 10.
  • FIG. 12 is a schematic view showing the steps of assembly and panoramic distortion over the images of FIG. 11, wherein image 250 shows an expansion of horizontal edges, image 252 shows a distortion of the image to compensate for standard perspective where images are projected on a plane, and image 254 shows the cutting of image bulbs (discontinuous sections of the image) at the top and bottom of the image.
  • FIG. 13 is a schematic view of the panoramic image of FIG. 7 after modification steps to expand horizontal edges. The same step is typically applied to image 254 of FIG. 12.
  • FIG. 14 is a flow chart showing the two-point perspective distortion steps.
  • FIG. 15 is a schematic view showing the steps of modifying a panoramic image in order to provide a resulting image based on a two-point perspective similar to human vision.
  • an image capture system 20 consists of a panoramic optic 30, 30′, a camera 40 and a memory device 50 such as a computer, mounted on a vehicle 70 or other portable holding device.
  • the panoramic optic 30, 30′ is a physical panoramic optic providing 2D panoramic images.
  • the optic 30, 30′ includes either a “lens and mirror” based optic system (catadioptric system) 32, 38, 42 as shown in FIG. 1, or a physical optical panoramic system 33 (consisting of an ultra wide angle lens or fisheye system with lenses providing more than 200° of continuous vertical field of view), without a mirror, as shown in FIG. 3.
  • Both systems 30, 30′ are commercially available and reflect the substantially 360 degree panoramic field of view into the lens based optics connected to camera 40.
  • the mirror shape and lens used are specifically chosen and disposed such that the effective camera 40 maintains a single viewpoint.
  • Such a lens is available from Nikon, Canon, and other vendors, for example Bellissimo Inc, Carlson Nev. (www.0-360.com).
  • the single viewpoint means the complete panorama is effectively imaged or viewed from a single point in space. Thus, one can simply warp the acquired image into a cylindrical or spherical panorama.
  • the image capture system can use a panoramic lens optic (not shown) commercially available or specifically designed to be adapted to the image capture system 20 of the present invention.
  • This panoramic lens optic is composed of a lens assembly that distorts the field of view (FOV), wherein the FOV is expanded to cover an additional 90° in all directions beyond the camera's original FOV, providing in total at least 180° of vertical FOV, and ideally at least 240° of vertical FOV.
  • Such a panoramic lens allows more vertical field of view than a catadioptric system, but accentuates chromatic aberration and is more vulnerable to dust, drops and lens flare artefacts.
  • catadioptric systems 32 and physical optical panoramic systems 33 provide images almost free of chromatic aberrations or discontinuities (breaks). Moreover, since a complete panorama is obtained in each image shot, dynamic scenes can be captured.
  • a first advantage of a physical optical panoramic and catadioptric system over multiple camera systems is that the former avoids the need to stitch multiple images to create a full panoramic image and image color and exposure are consistent inside one point of view over the 360° range.
  • a second advantage is that the geometric nodal point does not need to be simulated, as is the case with stitched motion images.
  • the accuracy of an object's geometry in the image is not relative to its distance from the camera.
  • with stitched multiple-camera systems, by contrast, objects located proximate to the camera are discontinuous and produce ghost images and artefacts over the resulting panoramic image.
  • the camera 40 used in the system 30, 30′ can be any kind of imaging device (e.g., a conventional film camera, video camera, etc.), but is typically a high resolution digital camera, having CCD or CMOS sensors of typically 12 megapixels of resolution or more, with a controllable aperture and a fast response time of typically 4.5 images/second or more.
  • Speed of displacement of the vehicle varies; it is typically 10 km/h for outdoor and 2.5 km/h for indoor image acquisition applications; this provides a density of three to four or more images per meter for indoor applications (at 2.5 km/h), down to one image per meter for outdoor applications (at 10 km/h).
  • Typical speeds and numbers of images per meter disclosed in this document are provided by example and do not constitute a limitation on the applicable field of the present invention. Images can be captured at a higher vehicle velocity, in which case satisfactory images can be captured using a camera with better sensitivity, or at a lower image resolution, or at a lower capture rate allowing fewer view points along a pathway. An identical or higher number of view points may be captured using a faster capture device. Higher or lower density (images per meter) may be achieved based on the requirements of the specific application field, and on hardware evolution.
  • image bracketing and HDR (high dynamic range) are used to enhance the dynamic range of the final image.
  • Image bracketing and HDR mix multiple images of the same viewpoint, each image having a different exposure time.
  • Image bracketing and HDR require the immobility of the vehicle during image acquisition, the drawback being a slower image capture process.
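As an illustration only, a bracketed set can be fused with OpenCV's Mertens exposure fusion; the patent does not prescribe a particular HDR algorithm, and the file names below are placeholders:

```python
# Exposure-fusion sketch: several shots of the same (static) viewpoint with
# different exposure times are merged into one image with enhanced dynamic
# range. The vehicle must be immobile while the bracketed set is captured.
import cv2
import numpy as np

exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

merge = cv2.createMergeMertens()       # fusion without a camera response curve
fused = merge.process(exposures)       # float image with values in [0, 1]
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```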
  • the digital camera 40 is coupled to a catadioptric optic system 32 by an optic apparatus 42, such as is commercially available from manufacturers such as Nikon and Canon, and then by a standard connector 38 provided by the catadioptric lens manufacturer for each proprietary optic mounting format.
  • the digital camera 40 could also be coupled with the panoramic lens 33 by an optic apparatus 43 that is commercially available from the above-mentioned manufacturers.
  • the memory device receives and stores the images transferred from the camera 40 , together with other information received from measurement device 60 such as geographic coordinates (including altitude) related to the images as well as geometric orientation, acceleration, rate of rotation on all three axes (attitude) and travel distance information of the capture vehicle and/or the measurement device.
  • Memory device 50 is typically a computer installed with an operating system, proprietary software and a logical device with multiple processing cores and/or CPU arrangement.
  • the proprietary software manages the distribution of the load of image acquisition to multiple threads on multiple CPUs, processing cores or logical processing units. The distribution of load is achieved by attributing the processing work of each subsequent image to another CPU core or logical processing unit.
  • In a computer 50 having four logical processing units, a typical image capture sequence according to the present invention would be processed in sequential order as follows: (i) the first image's acquisition processing work is performed on “logical processing unit 1”, (ii) the second image's on “logical processing unit 2”, (iii) the third image's on “logical processing unit 3”, (iv) the fourth image's on “logical processing unit 4”, (v) the fifth image's on “logical processing unit 1”, (vi) the sixth image's on “logical processing unit 2”, and so on.
  • the distribution of acquisition processing across multiple CPUs increases the rate of image acquisition in a given time, resulting in better image acquisition performance and increased image acquisition reliability, as sketched below.
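A minimal sketch of this load distribution, assuming a pool of worker processes standing in for the logical processing units (a pool's exact scheduling is runtime-dependent rather than strictly round-robin):

```python
# Per-image acquisition work is handed to a pool of workers so that
# consecutive images are processed concurrently on different cores.
from concurrent.futures import ProcessPoolExecutor

def process_capture(image_index: int) -> str:
    # placeholder for the per-image acquisition work (read-out, encode, store)
    return f"image {image_index} processed"

if __name__ == "__main__":
    # With 4 workers, images are spread across units much as in the
    # sequence described above (1..4, then back to 1, and so on).
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(process_capture, range(1, 9)):
            print(result)
```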
  • Images are distributed from memory device 50 to multiple storage devices 52 to achieve the high data bandwidth required by the in-motion image capture method of the present invention.
  • Memory device 50 and multiple storage devices 52 are located onboard vehicle 70 or remote thereto.
  • a communication device 54 allows transfer of data from the memory device 50 to a central computer 80 .
  • Data is stored in a source database 400 on the central computer 80 , wherein each image has a unique identification.
  • Each unique image ID is associated in the database 400 with a specific time reference: the time when the image was captured. Because the time reference needs high precision, it is given as a universal time reference provided by the GPS unit, which is typically more precise than the internal computer clock. With the time reference, the image capture location can easily be retrieved.
  • the measurement device 60 mounted on the vehicle 70 comprises a GPS tracking device 62 or a similar device able to determine geographic coordinate information from a satellite signal, radio signal or otherwise.
  • Each image is recorded in the memory device 50 or on a central computer 80 with the associated geographic coordinate information, namely the location of image capture, which is stored either on a dedicated recording device, on the memory device 50 such as an on-board computer, or on a remote central computer 80.
  • Data is transferred using a communication protocol such as USB, Bluetooth, Ethernet, WiFi, and stored on the destination apparatus in any standard database format.
  • Geographic coordinates, also referred to herein as “GPS data” 162, are stored with a specific GPS universal time reference for each image, allowing the determination of the exact geographic location at which each image was taken.
  • Memory device 50 is synchronised to the GPS clock to store universal time reference with any stored data.
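A sketch of what such time-keyed storage could look like, assuming a simple SQLite schema; the table and column names are hypothetical, not taken from the patent:

```python
# Each image record carries the GPS universal time of capture, so the capture
# location can be recovered by matching timestamps against the GPS track.
import sqlite3

db = sqlite3.connect("source_images.db")
db.execute("""CREATE TABLE IF NOT EXISTS images (
    image_id   TEXT PRIMARY KEY,
    gps_time   REAL NOT NULL,   -- universal time reference from the GPS unit
    lat        REAL, lon REAL, alt REAL,
    heading    REAL,            -- from an electronic compass, if available
    image_path TEXT)""")
db.execute("INSERT OR REPLACE INTO images VALUES (?, ?, ?, ?, ?, ?, ?)",
           ("IMG_000001", 1257415200.0, 46.233, 7.360, 512.0, 90.0,
            "store/IMG_000001.jpg"))
db.commit()

# Retrieval by time: the image whose GPS timestamp is nearest a queried moment.
row = db.execute("SELECT image_id, lat, lon FROM images "
                 "ORDER BY ABS(gps_time - ?) LIMIT 1", (1257415201.0,)).fetchone()
```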
  • a system and method for the integration of virtual objects in an interactive panoramic walk through application, using a method for determining geographic coordinates with increased precision is described in PCT application entitled SYSTEM AND METHOD FOR THE PRECISE INTEGRATION OF VIRTUAL OBJECTS TO INTERACTIVE PANORAMIC WALK-THROUGH APPLICATIONS, by Lindemann et al. which is being concurrently filed with the instant application and is incorporated by reference hereto.
  • GPS devices have limited precision in altitude tracking
  • other devices, such as an altimeter or any altitude tracking device, can be used in conjunction with the GPS device to enhance the precision of the altitude tracking of images.
  • direction can be obtained from an electronic compass or other direction tracking device, thereby enhancing the precision of the recorded path of images.
  • An odometer 66 is connected to vehicle 70 for indicating distance traveled between any two image locations, thus improving the precision of the geographic coordinates associated with each image.
  • Odometer 66 may be an electronic or mechanical device.
  • An Inertial Measurement Unit (IMU) device 67 on board vehicle 70 detects the current rate of acceleration of the vehicle as well as changes in rotational attributes (attitude), including pitch, roll and yaw. Such data may be used to correct image inconsistencies caused by travel over uneven surfaces. It should be noted that the vehicle's acceleration or speed does not affect the capture density (the number of images along a pathway), because successive images are automatically triggered as a function of the distance between any two successive images, as provided, for example, by GPS data, odometer data 166 and/or IMU data 167, as sketched below.
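To make the distance-based triggering concrete, here is a hedged sketch; the function name and the 0.33 m interval are illustrative (the interval corresponds roughly to the three images per meter cited above for indoor capture):

```python
# A new image fires every `capture_interval` metres of travel, so vehicle
# speed does not change the density of view points along the pathway.
def capture_positions(odometer_readings, capture_interval=0.33):
    """Yield travel distances (metres) at which an image should be captured.

    odometer_readings: increasing distances, e.g. from odometer 66, or
    emulated from GPS and/or IMU data as described above.
    """
    next_trigger = 0.0
    for distance in odometer_readings:
        while distance >= next_trigger:
            yield next_trigger
            next_trigger += capture_interval

# list(capture_positions([0.0, 0.5, 1.1])) -> [0.0, 0.33, 0.66, 0.99]
```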
  • the host vehicle 70, for image capture of outdoor locations, can be any vehicle adapted for circulation on roads, namely cars, trucks, or any vehicle adapted to limited circulation areas or indoor circulation, such as golf carts, electric vehicles and mobility scooters (scooters for the disabled), etc.
  • FIG. 8 shows vehicle 70 as a small car.
  • remote controlled vehicles, unmanned vehicles, robots, and stair-climbing robots can be used as the host vehicle 70 for the image capture system 20 .
  • the image capture system 20 can also be carried by a human and, for some special applications, by small animals such as rats.
  • a flying machine can be used, in which case an odometer is emulated by use of GPS data and/or triangulation using radio signals. Triangulation techniques using radio signals require that two or more emitters be located at known positions.
  • source images 210 (2D panoramic images) from the image capture system 20 are modified using a computer with a logical device such as central computer 80 .
  • Image modification steps comprise unwarping 312, compensation of vertical distortion 322, expansion of horizontal edges 332 and two-point perspective distortion 342, in order to obtain release images 280 that can be broadcast by web server 82.
  • source images 210 obtained by a panoramic optic 30, 30′ are typically circular in shape, referred to in the art as annular images.
  • Unwarping 312 of source images 210 is achieved using conventional software techniques, so as to form cylindrical images.
  • the unwarping operation is typically performed in three consecutive operations: in a first operation 313, the image is centered and aligned relative to a grid consisting typically of geographic directions (North-East-South-West axes) or corresponding degrees of the 360 degree panorama, where the North axis direction is referred to as 0 degrees for convenience; in a second operation 314, the circular shaped image is opened up after “slicing” a section from the image center to an edge, typically the bottom point of the image in the “South” or 180 degree direction, the resulting image 211 being in the shape of a circular arc; and in a third operation 315, the circular arc is further unwarped to form an unwarped image 212 of rectangular shape.
  • FIGS. 6A-6C, 7 and 13 show section marks on the vertical axis, where the “A” mark indicates the South or 180 degree direction, “B” indicates the East or 90 degree direction, “C” indicates the North or 0 degree direction, and “D” indicates the West or 270 degree direction.
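A compact sketch of operations 313-315, assuming a centred annular image and nearest-neighbour sampling (a production implementation would also align the 0 degree axis precisely and use the finer sampling methods described below):

```python
# Polar-to-rectangular unwarp of an annular source image: each output column
# is an angle around the panorama, each output row a radius on the annulus.
import numpy as np

def unwarp_annular(src, r_inner, r_outer, out_w=2048, out_h=512):
    cy, cx = src.shape[0] / 2.0, src.shape[1] / 2.0
    # column -> angle; the +pi offset places the "slice" at the 180 degree
    # ("South") direction, which becomes the left and right image edges
    theta = (np.arange(out_w) / out_w) * 2.0 * np.pi + np.pi
    # row -> radius; the top output row maps to the outer ring
    radius = r_outer - (np.arange(out_h) / out_h) * (r_outer - r_inner)
    tt, rr = np.meshgrid(theta, radius)
    src_x = (cx + rr * np.cos(tt)).astype(int).clip(0, src.shape[1] - 1)
    src_y = (cy + rr * np.sin(tt)).astype(int).clip(0, src.shape[0] - 1)
    return src[src_y, src_x]      # rectangular image 212
```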
  • the field of view 34 of the catadioptric optic is typically unevenly distributed along the vertical axis of the mirror, causing optical deformations whereby the image 212 appears compressed near the upper and lower edges of the image.
  • compensation of vertical distortion 322 is performed as shown in FIG. 7 , and involves modifying each unwarped image 212 by software operation to compensate for the compression of the upper and lower edges of the images, to provide a resulting image 222 that is typically larger along the vertical axis, compared to the original unwarped image 212 .
  • Compensation of vertical distortion step 322 is performed by applying a function curve to affect pixel distribution along the vertical axis.
  • an empty data buffer corresponding to a destination image 222 is set up, resulting in a blank image.
  • correspondence between coordinates in the source image 212 and destination image 222 is determined using a function curve.
  • Pixel values can be color, hue or intensity values, CMYK values, or RGB values.
  • At least one known image sampling method, such as “median”, “summed area”, “bilinear” or “trilinear”, is applied to obtain sub-pixel data whenever a pixel's coordinates, size and shape in the original image 212 do not match the resulting pixel's coordinates, size and shape in the destination image 222.
  • (One resulting pixel can match a source pixel exactly, or can be a variable portion of one or several source pixels.) This allows the destination image to have a different vertical resolution (number of pixels) compared to the source image 212.
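The three sub-steps above can be sketched as follows. The function curve shown is purely hypothetical (a sine-based curve that expands the compressed upper and lower rows); the real curve is derived from the optic's measured distortion, as described next, and only bilinear sampling between rows is shown:

```python
# Destination-driven remap: an empty buffer is filled by looking up source
# rows through a function curve, with bilinear interpolation between rows.
import numpy as np

def compensate_vertical(src, curve, stretch=1.25):
    """curve maps a destination row fraction in [0, 1] to a source row
    fraction in [0, 1]; source and destination heights may differ."""
    h_src = src.shape[0]
    dst_h = int(h_src * stretch)                   # taller destination image 222
    dst = np.empty((dst_h,) + src.shape[1:], dtype=src.dtype)
    for j in range(dst_h):
        pos = curve(j / (dst_h - 1)) * (h_src - 1) # fractional source row
        j0 = int(np.floor(pos))
        j1 = min(j0 + 1, h_src - 1)
        w = pos - j0                               # weight between adjacent rows
        dst[j] = ((1 - w) * src[j0] + w * src[j1]).astype(src.dtype)
    return dst

# Hypothetical curve: identity at the centre, expansion at the top and bottom.
example_curve = lambda t: (np.sin(np.pi * (t - 0.5)) + 1) / 2
```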
  • the function curve is determined by either measuring the curvature of different optic elements of the panoramic optic 30 , typically the main lens or mirror, or by measuring the vertical distortion produced by said optic elements.
  • Vertical distortion is measured by acquiring the image of a calibration model (a 2D image) at a given distance with the panoramic optic 30 and camera 40 in a calibration room having visible demarcations (for example, a geometrically precise grid painted on a room wall). This helps determine distance discrepancies between the image and the calibration model.
  • after compensation, a measurement unit (for example, 1 vertical meter) at the distance between a tree 213, shown on both images 212 and 222, and the position of the capture optic 30 fills the same number of pixels near the horizon as near the top or bottom of the image.
  • This measurement unit, which fills a number of pixels N near the horizon of the image, will fill the same number of pixels N at the top or bottom of the image.
  • before compensation, the same measurement unit at the same distance fills fewer pixels near the top and bottom of the image compared to the horizon (the vertical middle of the image).
  • the system 20 of the present invention also allows the use of 3D virtual images, alone or in combination with 2D (optical) panoramic images, for the purpose of creating and broadcasting interactive panoramic walk-through applications.
  • Current commercially available 3D rendering software engines used in CAD (computer-aided design), 3D modeling and gaming applications are not meant to provide panoramic images having a near 360 degree horizontal field of view.
  • the computation of a virtual 360 degree panoramic image is thus typically achieved by transforming a 3D scene 240, originating from 3D modeling, for example, into a panoramic image 254 equivalent to the resulting image 222.
  • computing a virtual 360 degree panoramic image is performed, for each 3D scene 240, in a first step 253, by rendering four images, called sections 244 (numbered 1 to 4 on each of FIGS. 10-12 and 15), having a 90° horizontal field of view from a single nodal point 241, each section 244 pointing toward one of the four right angle directions (front, left, back and right). From the nodal point 241, each section 244 is thus oriented at a view angle of 90° from the preceding and following sections.
  • the combination of the four original sections 244 can be represented by a single rectangular shape image 250 .
  • the combination of the four 90° view sections can be used to define the inner faces of a box; consequently, image 250 represents the juxtaposition over the same plane of the four inner faces of a box viewed from a central position inside this box.
  • image 250 is modified as if image 250 were then projected over a continuous surface of cylindrical shape.
  • This image modification is done by vertically stretching the resulting image 250, section by section, along a sinusoidal curve using software that performs the sinusoidal stretching logical step 352, resulting in modified image 252.
  • This sinusoidal stretching logical step 352 is described in detail below.
  • an empty data buffer corresponding to the destination image 252 is set up (resulting in a blank image).
  • the sinusoidal stretching logical step 352 is performed by evaluating the resulting image's pixels using a sinusoidal function curve that points to the original image's pixels.
  • This function curve maps the vertical coordinates in the destination image 252 to the corresponding coordinates in the source image 250.
  • This sinusoidal function curve decreases in amplitude for vertical coordinates that are farther from the top edge of the section, the amplitude reaching a null value (0) at the vertical center of the section; the amplitude then continues to decrease as a negative number and reaches the exact opposite value at the bottom edge of the section.
  • This sinusoidal function curve is calculated from the start of an image section 244 to the end of the same section. The same function curve is used horizontally for each section, which results in an image plane 252 that contains four sections stretched in the same way with respect to each section's local coordinate system.
  • pixel values at a given coordinate of the source image 250 are copied, according to the pixel color value, to the corresponding coordinates (given by said sinusoidal function curve of the second sub-step) in the destination image 252, on a pixel-per-pixel basis.
  • At least one known image sampling method, such as “median”, “summed area”, “bilinear” or “trilinear”, is applied to obtain sub-pixel data whenever a pixel's coordinates, size and shape in the source image 250 do not match the resulting pixel's coordinates, size and shape in the destination image 252 (one resulting pixel can match a source pixel exactly or can be a variable portion of one or many source pixels), allowing the destination image 252 to have a different vertical resolution (number of pixels per mm) compared to the source image 250.
  • Pixel values can be either color, hue, intensity values, CMYK (Cyan, Magenta, Yellow, Key) values or pixel RGB (Red, Green, Blue) values.
  • the sinusoidal stretching step 352 is performed to compensate for the distance difference (i) between the view point and the pixels laid over a flat surface (the face of the box discussed above) and (ii) between the view point and the same pixels laid over a cylindrical surface (the inner surface of a cylinder as viewed from within).
  • the area filled by the projection of an object grows in inverse proportion to the distance between said object and the view point.
  • the center of an edge of a flat surface (any face of the box) is located closer to the viewpoint, compared to the center of an edge of an equivalent cylindrical surface (as if the box would be contained within the arc of a cylinder).
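One standard way to realise this flat-face-to-cylinder re-projection is a per-column vertical remap whose scale follows a cosine across each 90° section; the sketch below is a hedged approximation of step 352 (the patent's exact curve, amplitude handling and sampling differ), using nearest-neighbour sampling for brevity:

```python
# Re-project the four flat box faces of image 250 toward a cylinder: rows are
# remapped more strongly at section edges, where the flat face diverges most
# from the cylindrical surface.
import numpy as np

def plane_to_cylinder(src, sections=4):
    h, w = src.shape[:2]
    sec_w = w // sections
    c = (h - 1) / 2.0
    dst = np.zeros_like(src)
    for x in range(w):
        # angle of this column within its 90 degree section: [-45, +45) degrees
        local = ((x % sec_w) + 0.5) / sec_w - 0.5
        phi = local * (np.pi / 2.0)
        # cos(phi) = 1 at the section centre, ~0.707 at its edges; clipped rows
        # at top/bottom correspond to the "bulbs" removed by step 354
        ys = ((np.arange(h) - c) / np.cos(phi) + c).round().astype(int)
        dst[:, x] = src[ys.clip(0, h - 1), x]
    return dst          # approximation of image 252
```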
  • a logical step 354 is performed which removes the circular area on the upper and lower edges of each image's section of image 252 , so that ultimately a single rectangular shape image 254 is created.
  • each image 222 (at this stage, image 254 and image 222 can be processed by the system in the same manner), which is either a panoramic image from the image capture system 30 or an image resulting from the modification of a 3D image, is further modified by the expansion of horizontal edges logical step 332 shown in FIG. 13.
  • Image 222 is typically divided into two sections. The left edge of image 222 is cropped, copied and pasted on the opposite edge of the image to form expanded image 232.
  • the left border section of each image is sliced into a sub-section along the vertical axis, and the edge sub-section is cropped and displaced to the opposite edge of the image, so that two identical sub-sections of the image appear on both opposite sides of the image.
  • the expansion of horizontal edges step 332 reduces the processing workload necessary for the real-time performance of the two-point perspective distortion step 342, allowing an output rate of typically 15 or more images per second. This rate is sufficiently fast to create the illusion of seamless motion, without jerky movements, during displacement (panoramic motion or rotation of the view) within the image. The increase in system performance is achieved because the expansion of horizontal edges 332 avoids the need for the image broadcasting system to manipulate two images simultaneously.
  • the system of the present invention achieves perpetual panoramic motion without slowdown: when the border of an image is reached, the image broadcasting system simply displays the other side of the same image, in a manner invisible to the user, as sketched below.
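A hedged sketch of step 332; the strip width is illustrative and should be at least as wide as the largest viewport, so that panning past the seam always finds duplicated pixels:

```python
# Copy the left-edge strip of panoramic image 222 onto its right edge
# (image 222 -> image 232): the display engine can then pan past 360 degrees
# while reading from one contiguous buffer, wrapping back invisibly.
import numpy as np

def expand_horizontal_edges(img, strip_width=256):
    return np.concatenate([img, img[:, :strip_width]], axis=1)
```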
  • a virtual walk-through application providing a convincing natural, immersive navigation environment should be visually as close as possible to human vision during panoramic motion (view point rotation, horizontal displacement inside one panoramic image), or translation motion (view point moving in space forward or backward on the pathway), irrespective of motion speed.
  • a panoramic image closer to human vision during panoramic motion with optimised calculation performance is provided by the use of two-point perspective distortion.
  • the image is vertically compressed depending on vertical elevation (distance to the horizon) in order to compensate for significant horizontal distortions which occur further from the horizon.
  • Such distortion is systematically caused by the cylindrical projection of the image and is more severe toward the vertical lower and upper parts of the image.
  • the present invention provides compensation of horizontal distortion similar to the projection of a cylindrical view to a planar view.
  • the present invention departs from the traditional projection of a plane inside a cylinder by assembling an image from the juxtaposition of several vertical image strips 235 , each of them optionally having a different width, and by stretching said strips. This avoids the need to recalculate the position of each pixel of the image according to the projection of a plane inside a cylinder. Only a selected vertical and a horizontal portion of the image are displayed and visible to the user at any given time.
  • the method of the present invention described below works in a “staircase-like” manner (visible in the zoomed image 236 of FIG. 15), whereby the width of each stripe is controlled and reduced, and the number of stripes increased, to provide the illusion of a progressive curvature, that is, a perspective closer to human vision.
  • the panoramic images 232 are further modified by the two-point perspective distortion step 342 in order to provide images based on a two-point perspective.
  • the two-point perspective distortion step 342 is achieved by applying two basic operations, as shown in FIG. 15, allowing vertical parallel lines in the real world to appear as parallel lines on screen during panoramic motion and interactive navigation.
  • In a first operation 343, the viewable portion 234 of any image 232 viewed by a user on screen is divided into vertical slices 235 (stripes) in real time, which are each vertically stretched along sinusoidal lines in real time to recover a two-point perspective.
  • the sinusoidal virtual canvas (the vertical slices) is in a fixed position relative to the viewport, which is the rectangular window through which the image is viewed by the user.
  • the viewable portion 234 travels across the fixed vertical slices 235, during which time only a portion of the panoramic image is visible on screen.
  • In a second operation 345, vertical panoramic interactive motion is achieved by simple vertical translation of the vertical slices 235 without affecting the sinusoidal distortion (that is, the relative position and relative size of each stripe with regard to every other stripe). All the slices 235 are then vertically compressed in proportion to the angular divergence of the field of view from the horizon, in order to minimise vertical distortion of the two-point perspective.
  • the two-point perspective distortion 342 avoids the trapezoidal distortion found in the regular three-point perspective used in 3D computer graphics.
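An illustrative sketch of the strip-based stretching: the viewable portion is cut into fixed vertical strips and each strip is vertically scaled along a sinusoid across the viewport, producing the "staircase-like" approximation described above. The curve, strip count and stretch amount are assumptions, not values from the patent:

```python
import numpy as np

def two_point_perspective(view, n_strips=64, max_stretch=0.15):
    h, w = view.shape[:2]
    c = (h - 1) / 2.0
    out = np.zeros_like(view)
    edges = np.linspace(0, w, n_strips + 1).astype(int)
    for i in range(n_strips):
        x0, x1 = edges[i], edges[i + 1]
        # strip centre position across the viewport, mapped to [-pi/2, pi/2]
        t = ((x0 + x1) / 2.0 / w - 0.5) * np.pi
        scale = 1.0 + max_stretch * np.cos(t)      # strongest at the view centre
        # vertical stretch of the whole strip about the horizon line
        ys = ((np.arange(h) - c) / scale + c).round().astype(int).clip(0, h - 1)
        out[:, x0:x1] = view[ys, x0:x1]
    return out   # vertical real-world lines stay parallel on screen
```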
  • the image modification steps of the present invention process images in two resolutions: low resolution for interactive walkthrough motion (view point translation), and high resolution for interactive panoramic motion (view point rotation) applications.
  • the system of the present invention can include logical means for the automatic blurring, masking or removal of objects appearing on images, such as human faces and car plates.
  • Automatic blurring of faces is achieved by a face recognition algorithm.
  • Using image tracking software known in the art, each individual face is identified and tracked throughout any sequence of images.
  • Software then blurs the images at the coordinates corresponding to recognizable faces.
  • Automatic blurring of car plates is achieved by a car plate recognition algorithm.
  • Using image tracking software known in the art, each individual car plate is identified and tracked throughout any sequence of images.
  • Software then blurs car plates at the coordinates corresponding to recognizable car plates.
  • Identical processes can be used to mask objects from images. The processes above are not limited to faces or car plates, but can be used to blur or mask any objects specified by the system's operator, with corresponding software adaptation.
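As a hedged illustration of this blurring pipeline (the patent does not name a specific algorithm), OpenCV's bundled Haar face detector can stand in for the face recognition step; a licence-plate detector would slot into the same pattern:

```python
# Detect faces and blur the corresponding image regions in place.
import cv2

def blur_faces(image_path, out_path):
    img = cv2.imread(image_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        roi = img[y:y + h, x:x + w]
        img[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imwrite(out_path, img)
```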
  • panoramic resulting images 232 are stored on the central computer 80, each image being associated with tridimensional (XYZ) geographic coordinates indicative of the image's location of capture and, optionally, other references such as: date of capture, with day and time; project or name of location; image group, so as to identify, for example, a particular district or street; digital photography settings, such as aperture setting, speed, ISO (a measure of the light sensitivity of the digital sensor), exposure, light measure and camera model; identification of mounted photo filters such as UV, neutral gray, etc.; vehicle information such as vehicle speed and model; and camera operator identification.
  • the stored images can also be associated with Geographic Information System (GIS) data, such as layers of the utilities' services infrastructure (water, cable, electric distribution infrastructure) and contact information associated with commercial establishments located in the vicinity.
  • the system and method for creating panoramic walk-through applications herein provide image modification steps that enable the seamless and transparent integration of panoramic images (from a panoramic optic 20 ) with virtual (3D) images, wherein virtual images and panoramic images can be combined in a plurality of ways, for example, to mask or super-impose each other. It is therefore possible to combine panoramic images, virtual images and 3D objects without limitation in any panoramic view; the resulting view having a seamless appearance during walkthrough interactive motion and panoramic motion.
  • the system of the present invention positions each image and all virtual objects in a 3D space using absolute geographic coordinates. Any panoramic image, whether edited from a source image 210 or a virtual image 240, has geographic coordinates in 3D space. Due to the use of absolute coordinates, the system of the present invention allows the addition (pinning) of virtual 3D objects to any panoramic image or virtual image.
  • Masking operations are achieved by the system by defining a mask, that is, a representation in shape and volume of a real object appearing on the panoramic images.
  • This technique is well known in the art and used to integrate 3D objects in photos in the real-estate 3D industry, wherein said technique is limited to static view. Effectively, the mask position and orientation are correct for a given 3D object only for a given view point.
  • the use of absolute 3D space coordinates for referencing both panoramic images and virtual images inside the 3D space allows any given 3D mask to be used on any panoramic image in a pathway.
  • the mask becomes visible from any view point that has a view on the mask's geographic coordinates.
  • the 3D mask is designed to exist at a position and with a shape identical to, respectively, the 3D position and shape of a real physical object viewable on one or more panoramic images.
  • the 3D mask pixels will occlude the pixels of the 3D object; in which case, pixels of the panoramic image representative of the physical object may become visible when the mask is rendered (the mask is "transparent" to the extent that pixels of the 3D object behind the mask pixels also become transparent, revealing the pixels of the background panoramic image).
  • the mask can be displayed with a virtual shadow from, or cast a virtual shadow onto, other 3D objects to enhance the realism of the integration of 3D in the real image.
  • a 3D mask can be occluded by a 3D object when the pixels representing said 3D object are located, relative to the view line, in front of (closer to the viewpoint than) the pixels of said mask.
  • because the masking process is determined on a pixel per pixel basis, complex situations may arise: for example, a 3D object can be partially occluded by a 3D mask that is itself partially occluded by another 3D object.
  • the flexibility of the masking process allows a near visually perfect integration of 3D objects inside the real images used in interactive panoramic walk-through applications.
  • 3D computer graphic layering using masking operations produces new panoramic 3D layered images, based on a real world panoramic image, composed of pixels from 3D objects and 3D masks (a compositing sketch follows below).
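  • The per-pixel layering just described can be sketched as a depth comparison. The snippet below is a simplified illustration only (NumPy arrays and all names are assumptions, not the system's actual implementation): an object pixel is kept where it lies in front of the 3D mask, and wherever the mask wins it stays transparent, revealing the background panoramic image.

```python
import numpy as np

def composite(pano, obj_rgb, obj_depth, mask_depth):
    """Per-pixel layering: draw a 3D object's pixel only where it is
    closer to the viewpoint than the 3D mask. pano and obj_rgb are
    HxWx3 arrays; obj_depth and mask_depth are HxW arrays holding
    np.inf where the object or mask does not cover the pixel."""
    out = pano.copy()
    obj_visible = obj_depth < mask_depth     # object in front of the mask
    out[obj_visible] = obj_rgb[obj_visible]  # draw only the winning pixels
    # where the mask is closer, nothing is drawn: the background panorama
    # shows through, which is the "transparent mask" behaviour above
    return out
```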
  • These panoramic 3D layered images are stored in the database 400, with geographic coordinates of the 3D space equivalent to the coordinates of the source panoramic image, and tagged with a reference to the 3D computer graphic layering program.
  • Said 3D computer graphic layering program reference can be a 3D application source, a file name, a project name, an application name, or a combination thereof.
  • 3D computer graphic layering can be used to mix 3D images from different 3D rendering packages, or to add objects in the 3D space without the need to re-render the full 3D panoramic images (re-rendering is typically necessary when inserting new objects into already rendered 3D panoramic images).
  • 3D computer graphic layering can therefore be used for the insertion and management of advertising, wherein a virtual advertisement can be inserted by defining a 2D (flat) or 3D object and superimposing an advertising image on said object.
  • Images are available in alternative versions, broadcast depending on the context: typically, one panoramic representation of a location is edited without any advertising, and several other copies are made available to different advertisers. A software-based management tool allows advertisers to select an advertisement to be imported into the system; the selection and display of the advertising to be broadcast can depend upon, for example, available user information such as location and language, and the identity of the web portal from which the user accesses the panoramic application (see the selection sketch below).
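  • A hypothetical sketch of such context-dependent selection is shown below; the criteria, their priority order and the ad-free default are illustrative assumptions rather than the system's actual rules.

```python
def select_variant(variants, user):
    """Pick which pre-rendered panoramic variant to broadcast, based on
    whatever user context is available (web portal, language, location)."""
    for criterion in ("portal", "language", "location"):
        value = user.get(criterion)
        if value and (criterion, value) in variants:
            return variants[(criterion, value)]
    return variants["default"]  # the edition without any advertising

# e.g. variants = {("language", "fr"): "pano_ad_fr.jpg",
#                  ("portal", "shop.example"): "pano_ad_shop.jpg",
#                  "default": "pano_clean.jpg"}
# select_variant(variants, {"language": "fr"}) -> "pano_ad_fr.jpg"
```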

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Instructional Devices (AREA)

Abstract

The system and method of the invention provide for creating, storing and broadcasting interactive panoramic walk-through applications. The combination of images is determined by the user's choice of direction of displacement at each intersection point and from each view point or geographical coordinate, in order to provide a complete view from a first person point of view. The system provides a visual perspective comparable to the human visual experience.

Description

  • This application claims the benefit of U.S. Provisional Application No. 61/111,346, entitled SYSTEM AND METHOD FOR CREATING AND BROADCASTING INTERACTIVE PANORAMIC WALK-THROUGH APPLICATIONS, filed Nov. 5, 2008.

  • FIELD OF THE INVENTION
  • The present invention relates generally to virtual tours. More specifically, the present invention relates to virtual walk-through applications using panoramic images, 3D images or a combination of both.

  • BACKGROUND OF THE INVENTION
  • A virtual tour (or virtual reality tour) is a virtual reality simulation of an existing location, which is usually built using contents consisting principally of 2D panoramic images, sequences of linked still images or video sequences, and/or image-based rendering (IBR) consisting of image-based models of existing physical locations, as well as other multimedia content such as sound effects, music, narration and text. A virtual tour is accessed on a personal computer (typically connected to the Internet) or a mobile terminal. Although not replacing real travel, virtual tours aim at evoking an experience of moving through the represented space. Virtual tours can be especially useful for universities and the real estate industry, looking to attract prospective students and tenants/buyers, respectively, by eliminating for the consumer the cost of travel to numerous individual locations.

  • The word panorama indicates an unbroken view, so essentially a panorama in that respect could be either a series of photographic images or panning video footage. However, the terms 'panoramic tour' and 'virtual tour' are generally associated with virtual tours created using still cameras. Such virtual tours created with still cameras are made up of a number of images taken from a single view point. The camera and lens are rotated around what is referred to as the nodal point (the exact point at the back of the lens where the light converges). These images are stitched together using specialist software to create a panorama representing a near 360 degree viewing angle, as viewed from a single "view point"; the panoramas are each resized and configured for optimal on-line use. Some 'panographers' will then add navigation features, such as hotspots (allowing the user to "jump" from one viewpoint or panorama to the next), and integrate geographic information such as plans or maps.

  • Current virtual tour photographic techniques suffer from several limitations. A seamless panoramic image cannot be created from still images whenever such still images are captured from different nodal points or, for two consecutive images, from a single nodal point but with different focal lengths or focus distances. Images captured from a single camera rotating on its nodal point can be stitched seamlessly, but this solution cannot be used for applications involving axial translation, where, for example, images are captured from a vehicle in motion.

  • Catadioptric optical systems provide images having a 360° horizontal field of view and a near 180° vertical field of view. The resulting panoramic images are of an annular shape and generally must be sliced open and "unwarped" to create a panoramic image of a rectangular shape. The unwarping step causes image distortion which, together with the optical distortions caused by the catadioptric optics having unevenly distributed angles along its radial axis (vertical axis of the view), must be compensated by specialised application software.

  • Patent document US 2007/0211955 to Pan discloses a perspective correction method allowing e-panning without image distortion, wherein an image correction step is performed on image slices (horizontal sections of the wide-angle image) by repositioning each pixel to a corresponding point on a cylindrical surface. This method consumes significant processing power and bandwidth for respectively correcting and transmitting the images whenever fast user motion is involved during navigation, and is therefore not optimal for providing a seamless navigation experience at relatively high user-directed panning speeds.

  • Also, with current image capture solutions, objects near the camera are responsible for the occlusion of distant objects; "occlusion" meaning, with regard to 2D images, the non-projection of a surface to a point of observation, and with regard to a 3D space, the effect of one object blocking another object from view. Limitations of current virtual tour technology, such as object occlusion, have had the detrimental result that virtual tours have never materialized outside of the real estate industry.

  • Virtual Walk-Through ("VWT") applications constitute an evolution over virtual tours. This technology eliminates the occlusion limitation by enabling the user to travel to a point where distant objects are no longer occluded.

  • Commercial online walk-through products such as Google "Street View" provide virtual outdoor walk-throughs of cities using images captured from a camera mounted on a road vehicle which circulates on motorways and roads at speeds ranging from 30 km/h to 80 km/h. These products are limited to outdoor views wherein any two consecutive points of view are positioned at a relatively long distance from each other. Typically, Street View and applications of the like provide visualisation at road-view level, that is, visiting a city as viewed from a car, wherein the user follows a pathway formed by a plurality of panoramas accessible along main streets, the user following the pathway by "jumping" (generally from a graphical interface allowing clicking of on-screen icons) from a panorama or point of view to the next distant point of view. This application uses a series of standard photographic images, taken by multiple camera systems mounted to produce images representative of multiple view angles; panoramic images are produced by computation (stitching) of still images from the multiple cameras. Such panoramic images cannot provide an accurate representation of geometric objects, for example buildings, due to the inherent discontinuities (breaks) of such panoramic images; these discontinuities are due to the physical impossibility of superposing a single nodal point from multiple cameras and view angles. Furthermore, "STREET VIEW" products and the like provide images which suffer from trapezoidal distortion whenever the view angle is not pointing toward the horizon; this distortion is due to the perspective. Although geometrically correct, "STREET VIEW"'s images do not reflect human vision behaviour, which keeps vertical lines mostly parallel whenever a viewer gently tilts his view above or below the horizon.

  • Google "STREET VIEW" also creates ground plane distortion, where planar ground seems to be inclined due to unwanted motion of the cameras caused by inertial forces. Other current walk-through products, such as "EVERYSCAPE" (www.everyscape.com by Everyscape, Waltham, Mass.) and "EARTHMINE" (www.earthmine.com by Earthmine inc., Berkeley, Calif.), also produce trapezoidal distortion, which makes them unfit for applications requiring continuous undistorted images (i.e. images which more closely correspond to human vision), as for example virtual shopping.

  • The trapezoidal distortion drawback is also inherent to virtual walk-through applications based on 3D virtual images, which can be used, for example, for visiting a virtual building using a real-time 3D engine, such as Second Life (www.secondlife.com by Linden Research Inc, San Francisco, Calif.) or video games.

  • "EARTHMINE" provides commercial online walk-throughs for applications such as the management of buildings and assets, telemetric measurement and other cadastral works. The product combines high resolution still images and 3D mesh information to provide pathways wherein the user jumps from one distant view point to another.

  • "EVERYSCAPE" provides commercial online panoramic products wherein motion between two consecutive view points is simulated by video postproduction effects. This product does not allow the user to pan and tilt the viewing angle during displacement along the travel path. During the travel motion effect, images are no longer panoramic unless the fields of view of the images representative of the next fixed point are constrained to the motion axis.

  • In sum, current virtual walk-through applications and systems suffer from several important limitations. Travel along pathways is achieved by jumping from one view point to another, instead of by a fluid travel motion. Views suffer from a high occlusion rate, as many objects are never visible at all along pathways. Generally, images are not standard panoramic images but rather patchy assemblies of 2D images with many discontinuities on each image.

  • The prior art describes several techniques whose purpose is to reduce the bandwidth associated with the transmission of panoramic images and applications between a server and the user's remote terminal, allowing a user to navigate a walkthrough space while downloading data.

  • The use of predefined pathways has been widely adopted to prevent the storage and transmission of redundant image data. Predefined pathways have the additional benefit of simplifying user navigation, notably by preventing the user from searching for available paths or from hitting objects repetitively during motion, as would be the case when the user tries to walk through wall or door images.

  • U.S. Pat. Nos. 6,388,688 and 6,580,441, both to Schileru-Rey, disclose a computer system and method that allow interactive navigation and exploration of spatial environments, wherein pathways are represented by branches, and intersections in the real environment are represented by nodes. The user selects which path to follow from the node. A branch represents a video sequence or animation played during motion between two adjacent view points. Virtual objects can be integrated into specific branches or nodes, without assigning a geographic coordinate to the virtual objects; each object is linked to at least one branch or node and displayed when the user is travelling on said branch or node.

  • U.S. Pat. No. 7,103,232 to Kotake discloses an IBR system with improved broadcasting performance, where a panoramic image is created by stitching several images from several cameras, preferably video cameras, pointing toward distinct points of view, the cameras being synchronised by use of a single time code. The '232 system provides panoramic images divided into six image sections of 60° horizontal field of view each, and typically broadcasts only two of the six sections' images (providing a 120° field of view) at any given time, with the aim of reducing processing power and communication bandwidth. The '232 solution is not optimized, however, for walk-through applications allowing fast movement across the horizontal plane beyond 120°; moreover, the '232 patent does not disclose broadcasting images of different image resolutions, meaning that it only covers broadcasting of images at the highest possible resolution.

  • U.S. Pat. No. 6,633,317 to Jiang Li discloses a data transfer scheme, dubbed spatial video streaming, allowing a client to selectively retrieve image segments associated with the viewer's current viewpoint and viewing direction, rather than transmitting the image data in the typical frame-by-frame manner. The method of the '317 patent divides the walkthrough space into a grid; each cell of the grid is assigned at least one image of the surrounding scene as viewed from that cell; images are characterised similarly to a concentric mosaic, in that each cell is represented by a sequence of image columns. The '317 method allows transmission of only part of the images (compressed or not) in an attempt to anticipate the viewer's change of view point within the walkthrough space, starting with image data corresponding to viewpoints immediately adjacent to the current viewpoint, and continuing with image data associated with viewpoints radiating progressively out from the current viewpoint. The '317 patent is well suited to open walkthrough spaces where the user can move in any direction using multiple sources of image data (simple 2D images, panoramic images or concentric mosaics), such as in a typical 3D environment. However, this method is not suited to the optimal transmission of full panoramic images in situations where the user travels along predefined pathways consisting of several view points in a linear arrangement within a network of pathways in the walkthrough space. Additionally, being view-direction sensitive, this method is not optimized, in terms of response time, to allow the user to change his travel plan, for example by making a U-turn or travelling along another pathway. Finally, as this method allows travel in any direction (rather than only along a predefined pathway), the amount of data downloaded to represent a given view point is greater, and the method is therefore less suited to a fast and responsive viewing experience on the Internet or other network media having limited bandwidth.

  • Consequently, no system of the prior art provides a system optimized for seamless broadcasting of fluid motion where the user can orientate (pan and tilt) the field of view during motion and where the user can stop the motion anywhere along the travel path, in order to discover local objects in detail without occlusion.

  • Integration of virtual objects (such as images, icons, etc.) into panoramas has been limited to the integration of two dimensional objects in specific view point images, wherein said objects are not visible from distant view points.

  • U.S. Pat. No. 6,693,649 to Lipscomb discloses a solution for non-linear mapping between media and display allowing "hotspots", defined as an outline of points connected by straight lines, to be used in the context of panoramas. Such "hotspots" are referenced to each image using two angle or two pixel coordinates; such values are only valid for each distinct image.

  • Allocation of a third dimension value to virtual objects and determination of precise geographic location information for each view point are prerequisites for the seamless integration of hotspots and other virtual objects into panoramas, where said objects would be visible from any point having a direct line of sight to the object. Consequently, no prior art system provides advanced features based on geographical information, such as the ability to pin an element of information on any location in a view, such element staying spatially fixed to that point during travel. The present method and system for creating and assembling interactive walkthrough applications overcome these shortcomings, as will now be described.

  • Given the market need for immersive online applications, what is needed therefore is a system and method for providing seamless, quality, fluid walk-through navigation using any combination of 2D panoramic images, virtual 3D images and, optionally, virtual objects or other images.

  • What is needed is an optimized system or method for the real-time construction and broadcasting of panoramic walkthrough applications, which allows the user, from each view point or geographical coordinate along a network of pathways, to have a complete view from a first person point of view, the view covering substantially 360° in field of view. What is needed is such a system or method that combines high rate, fluid panoramic imaging broadcasting with the possibility of seamlessly providing higher quality images in which the visual perspective perception based on human vision is preserved.

  • What is needed is a system or method for creating and broadcasting interactive panoramic walk-through applications that can combine still panoramic images that can be captured from either indoor or outdoor locations, and virtual scenes.

  • Finally, what is needed is a system or method for creating and broadcasting interactive panoramic walk-through applications that can make genuinely interactive functions available and accessible in the images.

  • SUMMARY OF THE INVENTION
  • A system, apparatus and method for creating interactive panoramic walk-through applications having a 2D image acquisition system is provided. The system comprises a holding device such as a vehicle equipped with a camera connected to a catadioptric optical system providing a near 360° field of view; a memory device adapted to store data including images and geographic coordinate information related to these images; a communication device for transferring data from the memory device to a computer; a fixation and stabilisation system connecting the camera to the holding device and aligning the camera perpendicular to the horizontal plane; a processor and associated software for performing image modification steps; and, optionally, 3D virtual images.

  • The image capture system includes a location measurement device (GPS), a distance measurement device (odometer) and an inertial measurement unit (IMU) for measuring rate of acceleration and changes in rotational attributes (attitude) of the vehicle, the fixation device or camera.

  • The stored data includes the image, date and time of image capture, geographical coordinate information, and other information, notably image reference and image group, so as to identify, for example, a particular district or street, as well as camera settings such as aperture, and image-related information such as camera model, speed and ISO reference.

  • The image modification steps include providing panoramic images based on a two points perspective, unwarping images, vertically stretching the image proportionally to the divergence of the field of view from the horizon of the optical system, and expanding horizontal edges. The image modification steps optionally include the automatic blurring of portions of images, such as faces or car plates.

  • Software operates a computer to perform the image modification steps, such as processing images in two resolutions, where low resolution is used for interactive walkthrough panoramic motion, and high resolution for interactive panoramic motion.

  • According to another preferred embodiment of the present invention, preservation of a visual perspective based on human vision is provided, notably by use of a two points perspective that does not produce the trapezoidal distortion inherent in standard 360° environment interactive applications.

  • It is an object of the invention to provide a system and method for providing immersive, interactive and intuitive walk-through applications using 2D panoramic true images, virtual 3D images or a combination of both, that can provide seamless quality walk-through navigation and high quality imaging.

  • It is another object of the present invention to provide a system or method that combines high rate panoramic imaging broadcasting and the possibility of seamlessly providing higher quality images in which visual perspective based on human vision is preserved.

  • It is another object of the present invention to provide a system or method for creating and broadcasting interactive panoramic walk-through applications that can combine indoor and outdoor images, based on 2D panoramic images and virtual 3D images.

  • It is another object of the present invention to provide a system or method for creating and broadcasting interactive panoramic walk-through applications that can provide genuinely interactive functions accessible in the images.

  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic side view of a catadioptric (mirror-based) panoramic optical system.
  • FIG. 2 is a schematic view of a panoramic image made using the catadioptric based panoramic optical system of FIG. 1.
  • FIG. 3 is a schematic side view of a lens based panoramic optical system.
  • FIG. 4 is a schematic view of a panoramic image made using the lens based panoramic optical system of FIG. 3.
  • FIG. 5 is a flow chart showing the unwarping modification steps.
  • FIGS. 6A-6C are schematic views of a panoramic image, made with either of the systems of FIG. 1 and FIG. 3, following each of the unwarping modification steps of FIG. 5.
  • FIG. 7 is a schematic view of a panoramic image before and after the modification steps to compensate for vertical distortion.
  • FIG. 8 is a side view showing the vehicle, image capture system, measurement means and attached odometer over a schematic ruler representing the distance of the vehicle's travel along a road.
  • FIG. 9 is a schematic view of the data storage apparatus of the present invention.
  • FIG. 10 is a floor plan view showing the division of a panoramic virtual 3D image into four sections to be compatible with the standard field of view (<˜179.9°) of standard 3D applications.
  • FIG. 11 is a schematic view showing the four images resulting from the rendering of a 360° field of view in a standard 3D application by the rendering-of-sections step, providing the division of a 360° panoramic field of view (FOV) into the four 3D images of 90° FOV of FIG. 10.
  • FIG. 12 is a schematic view showing the steps of assembly and panoramic distortion over the images of FIG. 11, wherein image 250 shows an expansion of horizontal edges, image 252 shows a distortion of the image to compensate for the standard perspective where images are projected on a plane, and image 254 shows a cut of the image bulbs (discontinuous sections of the image) at the top and bottom of the image.
  • FIG. 13 is a schematic view of the panoramic image of FIG. 7 after the modification steps to expand horizontal edges. The same step is typically applied to image 254 of FIG. 12.
  • FIG. 14 is a flow chart showing the two points perspective distortion steps.
  • FIG. 15 is a schematic view showing the steps of modifying a panoramic image in order to provide a resulting image based on a two points perspective similar to human vision.

  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • Image Capture System
  • Referring now to FIGS. 1-4 and 8-9, an image capture system 20 consists of a panoramic optic 30, 30′, a camera 40 and a memory device 50 such as a computer, mounted on a vehicle 70 or other portable holding device.

  • The panoramic optic 30, 30′ is a physical panoramic optic providing 2D panoramic images. The optic 30, 30′ includes either a "lens and mirror" based optic system (catadioptric system) 32, 38, 42, as shown in FIG. 1, or a physical optical panoramic system 33 (consisting of an ultra wide angle lens or fisheye system with lenses providing more than 200° of continuous vertical field of view), without mirror, as shown in FIG. 3. Both systems 30, 30′ are commercially available and reflect the substantially 360 degree panoramic field of view into the lens based optics connected to camera 40.

  • The mirror shape and lens used are specifically chosen and disposed such that the effective camera 40 maintains a single viewpoint. Such a lens is available from Nikon, Canon and other vendors, for example Bellissimo Inc, Carlson Nev. (www.0-360.com). The single viewpoint means the complete panorama is effectively imaged or viewed from a single point in space. Thus, one can simply warp the acquired image into a cylindrical or spherical panorama.

  • Optionally, in place of the catadioptric system, the image capture system can use a panoramic lens optic (not shown), commercially available or specifically designed to be adapted to the image capture system 20 of the present invention. This panoramic lens optic is composed of a lens assembly that distorts the field of view (FOV), wherein the FOV is expanded (on the geometric opposite) to cover an additional 90° in all directions from the camera's original FOV, providing in total at least 180° of vertical FOV, and ideally at least 240° of vertical FOV. Such a panoramic lens allows more vertical field of view than a catadioptric system, but accentuates chromatic aberration and is more vulnerable to dust, drops and lens flare artefacts.

  • The most significant advantage of catadioptric systems 32 and physical optical panoramic systems 33 (mono camera systems) over multi-camera image capture systems is to provide images almost free of chromatic aberrations or discontinuities (breaks). Moreover, since a complete panorama is obtained on each image shot, dynamic scenes can be captured.

  • A first advantage of physical optical panoramic and catadioptric systems over multiple camera systems is that the former avoid the need to stitch multiple images to create a full panoramic image, and image color and exposure are consistent inside one point of view over the 360° range. A second advantage is that the geometric nodal point does not need to be simulated, as is the case with stitched motion images. Moreover, with a physical optical panoramic or catadioptric system, the accuracy of an object's geometry in the image is not relative to its distance from the camera. In a multiple camera system where the nodal point is simulated by software techniques, objects located proximate to the camera are discontinuous and produce ghost images and artefacts over the resulting panoramic image.

  • The camera 40 used in the system 30, 30′ can be any kind of imaging device (e.g., a conventional camera with chemical film, a video camera, etc.), but is typically a high resolution digital camera having CCD or CMOS sensors of typically 12 Megapixels of resolution or more, with controllable aperture and a fast response time of typically 4.5 images/second or more.

  • A fast response time is required to obtain still images during image acquisition while the vehicle 70 is in motion. The speed of displacement of the vehicle varies; it is typically 10 km/h and 2.5 km/h for outdoor and indoor image acquisition applications respectively; this provides for a resolution of three to four or more images per meter for indoor applications (at 2.5 km/h) to one image per meter for outdoor applications (at 10 km/h). The typical speeds and numbers of images per meter disclosed in this document are provided by example and do not constitute a limitation to the applicable field of the present invention. Images can be captured at a higher vehicle velocity, during which satisfactory images can be captured using a camera with a better sensitivity, a lower image resolution, or a lower capture rate allowing fewer view points along a pathway. An identical or higher number of view points may be captured using a faster capture device. Higher or lower density (images per meter) may be achieved based on the requirements of the specific application field and on hardware evolution.

  • For low light environments, image bracketing and high dynamic range (HDR) image processing are used to enhance the dynamic spectrum of the final image. Image bracketing and HDR are techniques used to mix multiple images of the same viewpoint, each image having a different exposure time. Image bracketing and HDR require the immobility of the vehicle during image acquisition, the drawback being a slower image capture process.
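
  • A minimal fusion sketch is given below, assuming OpenCV and three bracketed exposures with illustrative file names; Mertens exposure fusion is one possible way to merge a bracket, and the patent does not prescribe a specific HDR algorithm.

```python
import cv2
import numpy as np

# Three exposures of the same (static) viewpoint are fused into a single
# image with an extended dynamic range; Mertens fusion needs no exposure
# times, only the bracketed frames themselves.
exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]
fused = cv2.createMergeMertens().process(exposures)  # float32 in [0, 1]
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```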

  • The digital camera 40 is coupled to a catadioptric optic system 32 by an optic apparatus 42, such as is commercially available from manufacturers such as Nikon and Canon, then by a standard connector 38 provided by the catadioptric lens manufacturer for each proprietary optic mounting format. The digital camera 40 could also be coupled with the panoramic lens 33 by an optic apparatus 43 that is commercially available from the above-mentioned manufacturers.

  • The memory device, such as computer 50, receives and stores the images transferred from the camera 40, together with other information received from measurement device 60, such as geographic coordinates (including altitude) related to the images, as well as geometric orientation, acceleration, rate of rotation on all three axes (attitude) and travel distance information of the capture vehicle and/or the measurement device. Memory device 50 is typically a computer installed with an operating system, proprietary software and a logical device with multiple processing cores and/or a multi-CPU arrangement. The proprietary software manages the distribution of the load of image acquisition to multiple threads on multiple CPUs, processing cores or logical processing units. The distribution of load is achieved by attributing the processing work of each subsequent image to another CPU core or logical processing unit. In a computer 50 having four logical processing units, a typical image capture sequence according to the present invention would be processed in sequential order as follows: (i) the first image's acquisition processing work is performed on "logical processing unit 1", (ii) the second image's on "logical processing unit 2", (iii) the third image's on "logical processing unit 3", (iv) the fourth image's on "logical processing unit 4", (v) the fifth image's on "logical processing unit 1", and (vi) the sixth image's on "logical processing unit 2". The distribution of acquisition processing across multiple CPUs increases the rate of image acquisition in a given time, resulting in better image acquisition performance and accrued image acquisition reliability.

  • Images are distributed from memory device 50 to multiple storage devices 52 to achieve the high data bandwidth required by the in-motion image capture method of the present invention. Memory device 50 and multiple storage devices 52 are located onboard vehicle 70 or remote thereto. A communication device 54 allows transfer of data from the memory device 50 to a central computer 80. Data is stored in a source database 400 on the central computer 80, wherein each image has a unique identification. Each unique image ID is associated in the database 400 with a specific time reference, the time reference being the time when the image was captured. Because the time reference needs high precision, it is given as a universal time reference provided by the GPS unit, which is typically more precise than the internal computer clock. With the time reference, the image capture location can easily be retrieved.
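
  • A minimal sketch of such a source database appears below; the SQLite schema, column names and nearest-time lookup are illustrative assumptions, showing only how a unique image ID, its GPS universal time reference and its capture location can be tied together and queried.

```python
import sqlite3

db = sqlite3.connect("source_images.db")
db.execute("""CREATE TABLE IF NOT EXISTS images (
    image_id INTEGER PRIMARY KEY,   -- unique identification of the image
    gps_time REAL UNIQUE,           -- universal time from the GPS unit
    x REAL, y REAL, z REAL,         -- geographic coordinates of capture
    path     TEXT)""")

def location_at(gps_time):
    """Retrieve the capture location recorded closest to a GPS time."""
    return db.execute(
        "SELECT x, y, z FROM images ORDER BY ABS(gps_time - ?) LIMIT 1",
        (gps_time,)).fetchone()
```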

  • Data Measurement Means
  • The measurement device 60 mounted on the vehicle 70 comprises a GPS tracking device 62 or a similar device able to determine geographic coordinate information from a satellite signal, radio signal or otherwise. Each image is recorded in the memory device 50 or on a central computer 80 with the associated geographic coordinate information, namely of the location of image capture, which is stored either on a dedicated recording device, on the memory device 50 such as an on-board computer, or on a remote central computer 80. Data is transferred using a communication protocol such as USB, Bluetooth, Ethernet or WiFi, and stored on the destination apparatus in any standard database format.

  • Geographic coordinates, also referred to herein as "GPS data" 162, are stored with a specific GPS universal time reference to images, allowing the determination of the exact geographic location at which each image was taken. Memory device 50 is synchronised to the GPS clock to store the universal time reference with any stored data. A system and method for the integration of virtual objects in an interactive panoramic walk-through application, using a method for determining geographic coordinates with increased precision, is described in the PCT application entitled SYSTEM AND METHOD FOR THE PRECISE INTEGRATION OF VIRTUAL OBJECTS TO INTERACTIVE PANORAMIC WALK-THROUGH APPLICATIONS, by Lindemann et al., which is being concurrently filed with the instant application and is incorporated herein by reference.

  • Because GPS devices have limited precision in altitude tracking, other devices such as an altimeter or any altitude tracking device can be used in conjunction with the GPS device to enhance the precision of the altitude tracking of images. Further, because GPS devices have limited precision in direction tracking, direction can be obtained from an electronic compass or other direction tracking device, thereby enhancing the precision of the recorded path of images.

  • An odometer 66 is connected to vehicle 70 for indicating the distance traveled between any two image locations, thus improving the precision of the geographic coordinates associated with each image. Odometer 66 may be an electronic or mechanical device. An Inertial Measurement Unit, IMU device 67, on board vehicle 70 detects the current rate of acceleration of the vehicle as well as changes in rotational attributes (attitude), including pitch, roll and yaw. Such data may be used to correct image inconsistencies caused by travel over uneven surfaces. It should be noted that the vehicle's acceleration or speed does not affect the capture density (number of images along a pathway), because successive images are automatically triggered as a function of the distance value between any two successive images, as provided, for example, by GPS data, odometer data 166 and/or IMU data 167.
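
  • The distance-based triggering can be sketched as below: capture depends only on the distance travelled (here from odometer readings), never on speed, so the density of view points along a pathway stays constant. The one-metre spacing is an example value from the text, and the function names are illustrative.

```python
def capture_triggers(odometer_readings, spacing_m=1.0):
    """Yield the odometer distances at which an image should be taken,
    given monotonically increasing distance readings in metres."""
    next_trigger = 0.0
    for distance in odometer_readings:
        while distance >= next_trigger:
            yield next_trigger
            next_trigger += spacing_m

# list(capture_triggers([0.0, 0.4, 1.1, 2.05])) -> [0.0, 1.0, 2.0]:
# one image per metre, regardless of how fast the vehicle covers it
```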

  • Vehicle
  • The host vehicle 70, for image capture of outdoor locations, can be any vehicle adapted for circulation on roads, namely cars, trucks, or any vehicle adapted to limited circulation areas or indoor circulation, such as golf carts, electric vehicles and mobility scooters (scooters for the disabled), etc. FIG. 8 shows vehicle 70 as a small car. For typically smaller, steep roads, as well as for image capture of indoor locations, remote controlled vehicles, unmanned vehicles, robots and stair-climbing robots can be used as the host vehicle 70 for the image capture system 20. In a miniaturized version, the image capture system 20 can also be carried by a human and, for some special applications, by small animals such as rats. Where the terrain is difficult, a flying machine can be used, in which case an odometer is emulated by the use of GPS data and/or triangulation using radio signals. Triangulation techniques using radio signals require that two or more emitters be located at known positions.

  • Logical Means for Performing Image Modification Steps
  • According to a preferred embodiment of the present invention, source images 210 (2D panoramic images) from the image capture system 20 are modified using a computer with a logical device, such as central computer 80. Image modification steps comprise the steps of unwarping 312, compensation of vertical distortion 322, expansion of horizontal edges 332 and two points perspective distortion 342, in order to obtain release images 280 that can be broadcast by web server 82.

  • As shown in FIGS. 1, 3, 5 and 6A-6C, source images 210 obtained by a panoramic optic 30, 30′ are typically circular in shape, referred to in the art as annular images. Unwarping 312 of source images 210 is achieved using conventional software techniques, so as to form cylindrical images. The unwarping operation is typically performed in three consecutive operations: in a first operation 313, the image is centered and aligned relative to a grid consisting typically of geographic directions (North-East-South-West axis) or corresponding degrees of the 360 degree panorama, where the North axis direction is referred to as 0 degrees for convenience; in a second operation 314, the circular shaped image is opened up after "slicing" a section from the image center to an edge, typically the bottom point of the image on the "South" or 180 degree direction; the resulting image 211 is in the shape of a circular arc; and in a third operation 315, the circular arc is further unwarped to form an unwarped image 212 of rectangular shape. For ease of understanding, the images of FIGS. 6A-6C, 7 and 13 show section marks on the vertical axis, where the "A" mark indicates the South or 180 degree direction, "B" indicates the East or 90 degree direction, "C" indicates the North or 0 degree direction, and "D" indicates the West or 270 degree direction.
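
  • The three unwarping operations can be condensed into a single polar-to-rectangular remapping, sketched below with NumPy; the assumed optical centre, slice direction and nearest-neighbour sampling are simplifications (a production pipeline would calibrate the centre and interpolate), and all names are illustrative.

```python
import numpy as np

def unwarp_annular(annular, r_in, r_out, out_w, out_h):
    """Unwarp an annular source image into a rectangular panorama: each
    destination column is a view angle, each destination row a radius
    between the inner and outer circles of the annulus."""
    h, w = annular.shape[:2]
    cx, cy = w / 2.0, h / 2.0                     # assumed optical centre
    theta = 2 * np.pi * np.arange(out_w) / out_w  # slice opened at one angle
    radius = r_out - (r_out - r_in) * np.arange(out_h) / (out_h - 1)
    xs = (cx + radius[:, None] * np.cos(theta[None, :])).astype(int)
    ys = (cy + radius[:, None] * np.sin(theta[None, :])).astype(int)
    return annular[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]
```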

  • As shown in FIGS. 2 and 4, the field of view 34 of a catadioptric optic is typically unevenly distributed along the vertical axis of the mirror, causing optical deformations, where the image 212 appears compressed near the upper and lower edges of the image. To correct this vertical unevenness, compensation of vertical distortion 322 is performed as shown in FIG. 7, and involves modifying each unwarped image 212 by a software operation to compensate for the compression of the upper and lower edges of the images, to provide a resulting image 222 that is typically larger along the vertical axis compared to the original unwarped image 212.

  • Compensation of vertical distortion step 322 is performed by applying a function curve to affect pixel distribution along the vertical axis. In a first sub-step, an empty data buffer corresponding to a destination image 222 is set up, resulting in a blank image. In a second sub-step, the correspondence between coordinates in the source image 212 and the destination image 222 is determined using a function curve. In a third sub-step, pixel values (color, hue, intensity, CMYK or pixel RGB values) at given coordinates of source image 212 are copied to pixel values at corresponding coordinates on destination image 222, on a pixel per pixel basis. In the third sub-step, at least one image sampling method, such as the known "median", "summed area", "bilinear" or "trilinear" image sampling methods, is applied to obtain sub-pixel data whenever a pixel's coordinates, size and shape on the original image 212 do not match the resulting pixel's coordinates, size and shape on the destination image 222 (one resulting pixel can match a source pixel exactly or can be a variable portion of one or several source pixels), thereby allowing the destination image to have a different vertical resolution (number of pixels) compared to source image 212.

  • The function curve is determined either by measuring the curvature of the different optic elements of the panoramic optic 30, typically the main lens or mirror, or by measuring the vertical distortion produced by said optic elements. Vertical distortion is measured by acquiring the image of a calibration model (a 2D image) at a given distance with the panoramic optic 30 and camera 40 in a calibration room having visible demarcations (for example, a geometrically precise grid painted on a room wall). This helps determine distance discrepancies between the image and the calibration model. In other words, on the destination image 222, a measurement unit (for example 1 vertical meter) at a given distance (for illustration purposes, a measurement unit of 1 vertical meter at the distance between a tree, 213 on both images 212 and 222, and the position of the capture optic 30) fills the same number of pixels near the horizon and near the top or bottom of the image: a measurement unit which fills a number of pixels N near the horizon of the image will fill the same number of pixels N at the top or bottom of the image. On the unwarped image 212, the same measurement unit at the same distance fills fewer pixels near the top and bottom of the image compared to the horizon (image vertical middle).

  • Use of Virtual 3D Images Instead of Real Camera Image.
  • The system 20 of the present invention also allows the use of 3D virtual images, alone or in combination with 2D (optical) panoramic images, for the purpose of creating and broadcasting interactive panoramic walk-through applications. Current commercially available 3D rendering software engines used in CAD ("computer-aided design"), 3D modeling and gaming applications are not meant to provide panoramic images having a near 360 degree horizontal field of view. As shown in FIGS. 10 to 15, the computation of a virtual 360 degree panoramic image is thus typically achieved by transforming a 3D scene 240, originating from 3D modeling for example, into a panoramic image 254 equivalent to the resulting image 222. Because standard 3D software is not meant to render images that have a horizontal field of view larger than ˜179.9°, the computation of a virtual 360 degree panoramic image is achieved by dividing the 360° field of view into 4 sections of 90° fields of view, which in FIG. 10 are represented by numbers 1, 2, 3 and 4.

  • According to a preferred embodiment of the present invention, whose purpose is to increase image broadcasting speed and overall system performance, the computation of a virtual 360 degree panoramic image is performed, for each 3D scene 240, in a first step 253 by rendering four images, called sections 244 (numbered 1 to 4 on each of FIGS. 10-12 and 15), having a 90° horizontal field of view from a single nodal point 241, with each section 244 pointing toward one of the four right angle directions (front, left, back and right). From the nodal point 241, each section 244 is thus oriented at a view angle of 90° from the preceding and following sections. The combination of the four original sections 244 can be represented by a single rectangular shape image 250. In 3D space, the combination of the four 90° view sections can be used to define the inner faces of a box; consequently, image 250 represents the juxtaposition over the same plane of the four inner faces of a box viewed from a central position inside this box.

  • As disclosed by US 2007/0211955 to Pan with respect to panoramic imaging, and as is known in 3D imaging, the projection of wide angle or panoramic images over a cylindrical field of view reduces the number of computing steps necessary to provide real-time compensation of image distortion, and thus eases the creation of interactive panoramic motion. This is due to the fact that, in this kind of image, each image section has the same proportion ratio independently of the viewing angle.

  • In a second step, according to a preferred embodiment of the present invention, as shown in FIG. 12, image 250 is modified as if image 250 were projected over a continuous surface of cylindrical shape. This image modification is done by stretching the resulting image 250 vertically, section per section, along a sinusoidal curve, using software which effects the sinusoidal stretching logical step 352 and results in modified image 252. This sinusoidal stretching logical step 352 is described in detail below.

  • In a first sub-step, an empty data buffer corresponding to the destination image 252 is set up (resulting in a blank image).

  • In a second sub-step, the sinusoidal stretching logical step 352 is performed by evaluating the resulting image's pixels using a sinusoidal function curve that points to the original image's pixels. This function curve computes, from the vertical coordinates in the destination image 252, the corresponding coordinates in the source image 250. The amplitude of this sinusoidal function curve decreases for vertical coordinates that are farther from the top edge of the section, reaching a null value (0) at the vertical center of the section; the amplitude then continues to decrease as a negative number and reaches the exact opposite value at the bottom edge of the section. This sinusoidal function curve is calculated from the start of an image section 244 to the end of the same section. The same function curve is used horizontally for each section, which results in an image plane 252 that contains 4 sections stretched in the same way with respect to each section's local coordinate system.

  • In a third sub-step, pixel values at given coordinates of source image 250 are copied to pixel values at corresponding coordinates (given by said sinusoidal function curve of the second sub-step) on the destination image 252, on a pixel per pixel basis. During this third sub-step, at least one image sampling method, such as the known "median", "summed area", "bilinear" or "trilinear" image sampling methods, is applied to obtain sub-pixel data whenever the pixel's coordinates, size and shape on the source image 250 do not match the resulting pixel's coordinates, size and shape on the destination image 252 (one resulting pixel can match a source pixel exactly or can be a variable portion of one or many source pixels), allowing the destination image 252 to have a different vertical resolution (number of pixels per mm) compared to source image 250. Pixel values can be color, hue or intensity values, CMYK (Cyan, Magenta, Yellow, Key) values, or pixel RGB (Red, Green, Blue) values.

  • The sinusoidal stretching step 352 is performed to compensate the distance difference (i) between the view point and pixels laid over a flat surface (a face of the box discussed above) and (ii) between the view point and the same pixels laid over a cylindrical surface (the inner surface of a cylinder as viewed from within). In geometric perspective, the area filled by the projection of an object grows inversely proportionally to the distance between said object and the view point. The center of an edge of a flat surface (any face of the box) is located closer to the viewpoint than the center of an edge of an equivalent cylindrical surface (as if the box were contained within the arc of a cylinder). In the geometric projection of a box onto a cylinder, the upper and lower edges of each face of the box, projected from the common center of the box and the cylinder, become arcs that follow a portion of a sine curve. Any pixel of the image is then positioned in the resulting image 252, relative to its vertical distance from the top and bottom edges of the box, at the same relative distance from the top and bottom edges (arcs) of the cylindrically projected box. The top and bottom edges of the faces of the box projected onto the cylinder are comparable to the top and bottom edges in the modified image 252.

  • In a third step, a logical step 354 is performed which removes the circular areas on the upper and lower edges of each image section of image 252, so that ultimately a single rectangular shape image 254 is created.

  • In order to increase image broadcasting speed and overall system performance, each image 222 (at this stage, image 254 and image 222 can be processed by the system in the same manner), which is either a panoramic image from the image capture system 30 or an image resulting from the modification of a 3D image, is further modified by the expansion of horizontal edges logical step 332 shown in FIG. 13. Image 222 is typically divided in two sections. The left edge of image 222 is cropped, copied and pasted on the opposite edge of the image to form expanded image 232. In other words, the left border section of each image is sliced into a sub-section on the vertical axis, and the edge sub-section is copied to the opposite edge of the image, so that two identical sub-sections of the image appear on both opposite sides of the image. The expansion of horizontal edges step 332 reduces the processing workload necessary for the real-time performance of the two points perspective distortion step 342, allowing an output rate of typically 15 or more images per second. This rate is sufficiently fast to create the illusion of a seamless motion, without jerky movements, during displacement (image panoramic motion or rotation of the view) within the image. The increase in system performance is achieved because the expansion of horizontal edges 332 avoids the need for the image broadcasting system to manipulate two images simultaneously, which would otherwise be the case every time the interactive panoramic view reaches the vertical image split at the 0°/360° image border. The system of the present invention achieves perpetual panoramic motion without slowdown: when the border of an image is reached, the image broadcasting system simply displays the other side of the same image, in a manner invisible to the user.

  • A virtual walk-through application providing a convincing, natural, immersive navigation environment should be visually as close as possible to human vision during panoramic motion (view point rotation, horizontal displacement inside one panoramic image) or translation motion (view point moving in space forward or backward on the pathway), irrespective of motion speed.

  • According to a preferred embodiment of the present invention, a panoramic image closer to human vision during panoramic motion, with optimised calculation performance, is provided by the use of two points perspective distortion. During vertical panoramic motion, the image is vertically compressed depending on the vertical elevation (distance to the horizon) in order to compensate for the significant horizontal distortions which occur further from the horizon. Such distortion is systematically caused by the cylindrical projection of the image and is more severe toward the vertical lower and upper parts of the image. The present invention provides compensation of horizontal distortion similar to the projection of a cylindrical view to a planar view. In order to speed up the image modification steps to real time, the present invention departs from the traditional projection of a plane inside a cylinder by assembling an image from the juxtaposition of several vertical image strips 235, each of them optionally having a different width, and by stretching said strips. This avoids the need to recalculate the position of each pixel of the image according to the projection of a plane inside a cylinder. Only a selected vertical and horizontal portion of the image is displayed and visible to the user at any given time. The method of the present invention described below works in a "staircase-like" manner (visible on the zoomed image 236 of FIG. 15), whereby the width of each strip is controlled and reduced, and the number of strips increased, to provide the illusion of a progressive curvature, that is, a perspective closer to human vision.

  • For this purpose, the panoramic images 232 are further modified by the two points perspective distortion step 342 in order to provide images based on a two points perspective. The two points perspective distortion step 342 is achieved by applying two basic operations, as shown in FIG. 15, allowing vertical parallel lines in the real world to appear as parallel lines on screen during panoramic motion and interactive navigation.

  • In a first operation 343 (shown in FIG. 14), the viewable portion 234 of any image 232 viewed by a user on screen is divided in real time into vertical slices 235 (stripes), which are each vertically stretched along sinusoidal lines in real time to recover a two points perspective. The sinusoidal virtual canvas (the vertical slices) is in a fixed position relative to the viewport, which is the rectangular window through which the image is viewed by the user. As the user pans his view during navigation, the viewable portion 234 travels across the fixed vertical slices 235, during which time only a portion of the panoramic image is visible on screen.
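
    A minimal sketch of this first operation, under the assumption that the sinusoidal stretch factor follows a 1/cos law approximating a cylinder-to-plane correction (the specification does not give the exact curve), could look as follows:

```python
import cv2
import numpy as np

def stretch_slices(view: np.ndarray, n_slices: int, half_fov_rad: float) -> np.ndarray:
    """Divide the viewable portion into vertical slices and stretch each
    slice vertically by a sinusoidal factor ("staircase" approximation)."""
    h, w = view.shape[:2]
    out = []
    for i in range(n_slices):
        x0, x1 = i * w // n_slices, (i + 1) * w // n_slices
        # Horizontal angle of the slice centre relative to the view axis.
        theta = ((x0 + x1) / (2 * w) - 0.5) * 2 * half_fov_rad
        scale = 1.0 / np.cos(theta)            # assumed sinusoidal stretch law
        strip = cv2.resize(view[:, x0:x1], (x1 - x0, int(h * scale)))
        y0 = (strip.shape[0] - h) // 2         # centre-crop back to viewport
        out.append(strip[y0:y0 + h])
    return np.hstack(out)
```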

  • In a second operation 345, vertical panoramic interactive motion is achieved by a simple vertical translation of the vertical slices 235, without affecting the sinusoidal distortion (that is, the relative position and relative size of each stripe with respect to every other stripe). All the slices 235 are then vertically compressed in proportion to the angular divergence of the field of view from the horizon, in order to minimise the vertical distortion of the two points perspective. The two points perspective distortion 342 avoids the trapezoidal distortion found in the regular three point perspective used in 3D computer graphics.
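
    Continuing the sketch above (same assumptions; the cosine compression law is likewise an assumption rather than the disclosed formula):

```python
def vertical_pan(slices_img: np.ndarray, pitch_rad: float, viewport_h: int) -> np.ndarray:
    """Second operation: translate the stretched slices vertically and
    compress them in proportion to the view's angular divergence from the
    horizon (pitch 0 = looking at the horizon = no compression)."""
    h, w = slices_img.shape[:2]
    new_h = max(viewport_h, int(h * np.cos(pitch_rad)))
    compressed = cv2.resize(slices_img, (w, new_h))
    # Vertical translation: slide the viewport window with the pitch angle.
    y0 = int((new_h - viewport_h) * (0.5 - pitch_rad / np.pi))
    y0 = int(np.clip(y0, 0, new_h - viewport_h))
    return compressed[y0:y0 + viewport_h]
```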

  • The image modification steps of the present invention process images in two resolutions: low resolution for interactive walkthrough panoramic motion (view point translation), and high resolution for interactive panoramic motion (view point rotation) applications.

  • Additional image modification steps are possible, through the use of a logical processor.

  • In order to ensure protection of privacy and to avoid infringement of privacy rights and rights to one's own image, the system of the present invention can include logical means for the automatic blurring, masking or removal of objects appearing in images, such as human faces and car plates. Automatic blurring of faces is achieved by a face recognition algorithm: using image tracking software known in the art, each individual face is identified and tracked throughout any sequence of images, and, based on the locations of faces or the image-based coordinates provided by the tracking software, software blurs the image at the coordinates corresponding to recognizable faces. Automatic blurring of car plates is achieved in the same manner by a car plate recognition algorithm, with each individual car plate identified, tracked and blurred at its image coordinates. Identical processes can be used to mask objects in images. The processes above are not limited to faces or car plates, but can be used to blur or mask any objects specified by the system's operator, with corresponding software adaptation.
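
    A minimal sketch of such a blurring step, assuming OpenCV's stock Haar-cascade detector stands in for the recognition and tracking software the patent leaves unspecified (plate blurring would follow the same pattern with a plate detector):

```python
import cv2

# Stock face detector shipped with OpenCV (an assumption, not the patent's).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(image):
    """Blur every region the detector reports as a face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                      minNeighbors=5):
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 0)
    return image
```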

  • Management of Images
  • According to a preferred embodiment of the present invention, the resulting panoramic images 232 are stored on the central computer 80, each image being associated with tridimensional (XYZ) geographic coordinates indicative of the image's location of capture and, optionally, other references such as: date of capture, with day and time; project or name of location; image group, so as to identify for example a particular district or street; digital photography settings, such as aperture setting, speed, ISO (a measure of the light sensitivity of the digital sensor), exposure, light measure and camera model; identifiers for mounted photo filters such as UV, neutral gray, etc.; vehicle information such as vehicle speed and model; and camera operator identification.
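
    One illustrative way to hold such a per-image record (field names are hypothetical, not the patent's schema):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class PanoramicImageRecord:
    image_path: str
    xyz: Tuple[float, float, float]          # geographic coordinates of capture
    captured_at: Optional[datetime] = None   # date of capture, with day and time
    project: Optional[str] = None            # project or name of location
    group: Optional[str] = None              # e.g. a particular district or street
    camera: dict = field(default_factory=dict)   # aperture, speed, ISO, model...
    filters: list = field(default_factory=list)  # mounted filters: UV, neutral gray...
    vehicle: dict = field(default_factory=dict)  # vehicle speed and model
    operator_id: Optional[str] = None            # camera operator identification
```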

  • Precise reference to the geographic coordinates indicative of the image's location of capture allows, among other things, the combination or superposition of the panoramic images of the present invention with any other heterogeneous geo-referenced digital information. Views of a street can therefore be combined with data of a GIS (“Geographic Information System”) application, such as, for example, layers of the utilities' services infrastructure (water, cable, electric distribution infrastructure), or contact information associated with commercial establishments located in the vicinity. PCT application No. ______, to Lindemann et al., entitled SYSTEM AND METHOD FOR CREATING AND BROADCASTING INTERACTIVE PANORAMIC WALK-THROUGH APPLICATIONS, filed concurrently with the instant application and incorporated herein by reference, discloses a method for the assembly and broadcasting of interactive panoramic walk-through applications, using an object-based database. Still further, PCT application No. ______, to Lindemann et al., entitled SYSTEM AND METHOD FOR THE PRECISE INTEGRATION OF VIRTUAL OBJECTS TO INTERACTIVE PANORAMIC WALK-THROUGH APPLICATIONS, filed concurrently with the instant application and incorporated herein by reference, discloses a method for the integration of virtual objects into interactive panoramic walk-through applications, using a method for determining geographic coordinates with increased precision. The foregoing PCT applications provide functionality which enhances the user experience or increases the efficiency of the system 20 of the invention.

  • 3D Computer Graphic Layering
  • According to a preferred embodiment of the present invention, the system and method for creating panoramic walk-through applications herein provide image modification steps that enable the seamless and transparent integration of panoramic images (from a panoramic optic 20) with virtual (3D) images, wherein virtual images and panoramic images can be combined in a plurality of ways, for example to mask or superimpose each other. It is therefore possible to combine panoramic images, virtual images and 3D objects without limitation in any panoramic view, the resulting view having a seamless appearance during walkthrough interactive motion and panoramic motion.

  • The system of the present invention positions each image and each virtual object in a 3D space using absolute geographic coordinates. Any panoramic image, whether edited from a source image 210 or from a virtual image 240, has geographic coordinates in 3D space. Due to the use of absolute coordinates, the system of the present invention allows the addition (pinning) of virtual 3D objects to any panoramic image or virtual image.

  • Masking operations are achieved by the system through the definition of a mask, that is, a representation of the shape and volume of a real object appearing in the panoramic images. This technique is well known in the art and is used to integrate 3D objects into photos in the real-estate 3D industry, where the technique is limited to static views: in effect, the mask position and orientation are correct for a given 3D object only from a given view point.

  • According to an embodiment of the present invention, the use of absolute 3D space coordinates for referencing both panoramic images and virtual images inside the 3D space allows any given 3D mask to be used on any panoramic image in a pathway. The mask becomes visible from any view point that has a view of the mask's geographical coordinates.

  • The 3D mask is designed to exist at a position and with a shape identical, respectively, to the 3D position and shape of a real physical object viewable in one or more panoramic images. When the 3D mask covering a real physical object visible in a panoramic image is located closer to the viewpoint of the panoramic image than a 3D object (i.e., the object is located further from the viewpoint), the 3D mask pixels will occlude the pixels of the 3D object; in that case, pixels of the panoramic image representative of the physical object become visible when the mask is rendered (the mask is “transparent” to the extent that pixels of the 3D object behind the mask pixels also become transparent, revealing the pixels of the background panoramic image). In some cases, the mask can be displayed with a virtual shadow from, or can cast a virtual shadow onto, other 3D objects to enhance the realism of the integration of 3D in the real image.

  • A 3D mask can itself be occluded by a 3D object when the pixels representing said 3D object are located, relative to the view line, in front of (closer to the viewpoint than) the pixels of said mask. As the masking process is determined on a pixel-per-pixel basis, complex situations may arise: for example, a 3D object can be partially occluded by a 3D mask that is in turn partially occluded by another 3D object. The flexibility of the masking process allows a near visually perfect integration of 3D objects inside the real images used in interactive panoramic walk-through applications.
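
    A sketch of this pixel-per-pixel occlusion test, assuming depth buffers for the rendered 3D objects and 3D masks (smaller depth = closer to the viewpoint; the arrays and the convention are assumptions):

```python
import numpy as np

def composite(pano: np.ndarray, obj_rgb: np.ndarray,
              obj_depth: np.ndarray, mask_depth: np.ndarray) -> np.ndarray:
    """Pixel-per-pixel layering of panorama, 3D objects and 3D masks."""
    out = obj_rgb.copy()
    # Where the mask is closer than the object, the mask "wins" the depth
    # test and is rendered transparent, revealing the background panorama.
    mask_wins = mask_depth < obj_depth
    out[mask_wins] = pano[mask_wins]
    # Where no 3D object was rendered at all, the panorama shows through.
    empty = np.isinf(obj_depth)
    out[empty] = pano[empty]
    return out
```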

  • According to a preferred embodiment of the present invention, 3D computer graphic layering using masking operations produces new panoramic 3D layered images, based on a real world panoramic image, composed of pixels from 3D objects and 3D masks. These panoramic 3D layered images are stored in the database 400 with geographic coordinates of the 3D space equivalent to the coordinates of the source panoramic image, and are tagged with a reference to the 3D computer graphic layering program. Said reference can be a 3D application source, a file name, a project name or an application name, or both. 3D computer graphic layering can be used to mix 3D images from different 3D rendering packages, or to add objects in the 3D space without the need to re-render the full 3D panoramic images (typically necessary when inserting new objects in already rendered 3D panoramic images). 3D computer graphic layering can therefore be used for the insertion and management of advertising, wherein a virtual advertisement can be inserted by defining a 2D (flat) or 3D object and superimposing an advertising image on said object.

  • Images are available in alternative versions broadcast depending on the context: typically, one panoramic representation of a location is edited without any advertising, and several other copies are made available to different advertisers. A software-based management tool allows advertisers to select an advertisement to be imported into the system; the selection and display of the advertising to be broadcast can depend upon, for example, available user information such as location and language, and the identity of the web portal from which the user is accessing the panoramic application.
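
    An illustrative selector for such context-dependent variants (the rule set and keys are hypothetical):

```python
def select_variant(variants: dict, user: dict) -> str:
    """Pick the advertising variant of a panoramic image to broadcast.

    `variants` maps (portal, language) keys to image ids; None acts as a
    wildcard, and variants["default"] is the advertising-free edit.
    """
    portal, lang = user.get("portal"), user.get("language")
    for key in ((portal, lang), (portal, None), (None, lang)):
        if key in variants:
            return variants[key]
    return variants["default"]
```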

  • Other characteristics and modes of execution of the invention are described in the appended claims.

  • Further, the invention should be considered as comprising all possible combinations of every feature described in the instant specification, appended claims, and/or drawing figures which may be considered new, inventive and industrially applicable.

  • Multiple variations and modifications are possible in the embodiments of the invention described here. Although certain illustrative embodiments of the invention have been shown and described here, a wide range of modifications, changes, and substitutions is contemplated in the foregoing disclosure. While the above description contains many specifics, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of one or another preferred embodiment thereof. In some instances, some features of the present invention may be employed without a corresponding use of the other features. Accordingly, it is appropriate that the foregoing description be construed broadly and understood as being given by way of illustration and example only, the spirit and scope of the invention being limited only by the claims which ultimately issue in this application.

Claims (20)

1. A method for creating panoramic images for use in walk-through applications comprises the steps of:

(a) capturing panoramic images using a panoramic optic system (30, 30′);

(b) unwarping images, comprising the sub-steps of:

(i) centering and aligning the image (210) relative to a grid;

(ii) opening up the images after slicing a section from the image center to an edge to form an image (211) having a circular arc shape; and

(iii) unwarping the circular arc shape image to form an unwarped image (212) of rectangular shape;

(c) vertical stretching (322) of the image proportional to the divergence of the field of view to the horizon of the optical system; and

(d) expanding horizontal edges (332), comprising the sub-steps of

(i) cropping and copying a section of the image (222) located on the left or right edge of the image; and

(ii) pasting said section on the opposite edge of the image to form an expanded image (232), so that two identical sub-sections of the image appear on opposite sides of the image.

2. The method of claim 1, wherein the vertical stretching of image step (322) comprises the sub-steps of:

(i) setting up a data buffer corresponding to a destination image (222);

(ii) using a function curve to determine correspondence between coordinates in the source image (212) and destination image (222); and

(iii) copying pixel values of source image (212) to pixel values at corresponding coordinates on destination image (222) on a pixel per pixel basis, using an image sampling method selected from a group consisting of “median”, “summed area”, “bilinear” and “trilinear” image sampling methods.
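
As an illustrative, non-authoritative sketch of these sub-steps, the following Python assumes a per-row function curve and picks the “bilinear” option from the claimed group of sampling methods; the curve in the usage comment is a placeholder, not one disclosed in the specification:

```python
import numpy as np

def vertical_stretch(src: np.ndarray, dst_h: int, curve) -> np.ndarray:
    """Sub-steps of claim 2: (i) set up a destination buffer, (ii) map
    destination rows to source rows through a function curve, (iii) copy
    pixel values row by row with (bi)linear sampling."""
    h, w, c = src.shape
    dst = np.empty((dst_h, w, c), dtype=src.dtype)      # (i) data buffer
    for y in range(dst_h):
        sy = curve(y / (dst_h - 1)) * (h - 1)           # (ii) function curve
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        t = sy - y0
        # (iii) blend the two neighbouring source rows.
        dst[y] = ((1 - t) * src[y0] + t * src[y1]).astype(src.dtype)
    return dst

# Hypothetical usage: stretch an unwarped image to 900 rows along a sine curve.
# stretched = vertical_stretch(unwarped, 900, curve=lambda v: np.sin(v * np.pi / 2))
```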

3. A method for computing a virtual 360 degree panoramic image from a virtual 360 degree scene (240), for use in walk-through applications, comprising the steps of:

(a) rendering (253) the scene in four sections (244), each having a 90° horizontal field of view, each section being oriented, from the nodal point (241), at a view angle of 90°, from the preceding and following sections, said sections being represented in a single image (250) of rectangular shape;

(b) vertically stretching (352) the single image, section per section, along a sinusoidal curve; and

(c) removing (354) upper and lower edge sections to provide a resulting image (254) of rectangular shape.

4. The method of claim 3, further comprising the additional step of expanding (332) horizontal edges, comprising the sub-steps of:

(a) cropping and copying a section of the image (254) located on the horizontal edge of the image; and

(b) pasting the section on the opposite edge of the image to form an elongated image (232) which then comprises the section on both of its opposite sides.

5. The method of claim 3, wherein the vertical stretching step (352) comprises the sub-steps of:

(i) setting up a data buffer corresponding to a destination image (252);

(ii) sinusoidal stretching (352) using a function curve to compute the vertical coordinates in the destination image (252) to find the corresponding coordinates in the source image (250); and

(iii) copying pixel values of source image (250) to pixel values at corresponding coordinates on destination image (252) on a pixel per pixel basis, using an image sampling method selected from a group consisting of “median”, “summed area”, “bilinear” and “trilinear” image sampling methods.

6. The method of claim 2, wherein the pixel values comprise values selected from a group consisting of “Hue, Lightness, Saturation values”, “CMYK values” and “pixel RGB values”.

7. A method for computing a virtual 360 degree panoramic image based on two points perspective, for use in assembling and broadcasting of walkthrough applications, providing panoramic images closer to human vision during panoramic motion, comprising the steps of:

(a) dividing the viewable portion (234) of a given panoramic image (232) into vertical slices (235);

(b) stretching said vertical slices (235) along sinusoidal lines to recover a two points perspective, wherein a group of vertical slices is in a horizontal fixed position relative to a viewport;

(c) providing vertical panoramic interactive motion by vertical translation of said vertical slices (235) without affecting sinusoidal distortion, wherein said group of vertical slices are vertically compressed proportional to the angular divergence of the field of view to the horizon, in order to minimise vertical distortion of two points perspective.

8. The method of claim 7, wherein said vertical stripes are of non-uniform width.

9. The method of claim 7, wherein said steps are performed in real time and broadcast by a web server (82) to a user terminal, in response to user motion in a walkthrough space.

10. The method of claim 1, wherein the image modification steps further include the step of automatically blurring portions of images, such as faces or car number plates.

11. The method of claim 1, wherein the image modification steps process images in two resolutions: low resolution for interactive walkthrough panoramic motion, and high resolution for interactive panoramic motion.

12. A system implementing the method of claim 1, comprising:

(a) an image capture apparatus (20) for capturing panoramic images, further comprising a camera (40), an optic system (30, 30′) connected to the camera, the optic system providing a substantially 360° field of view, and a memory device (50);

(b) a communication device for transferring data from the memory device to a computer (80);

(d) instructions executable on a computer for performing image modification steps; and

(e) a database for storing image related data.

13. The system of claim 12, wherein the image capture apparatus further comprises a memory device to store data including images, geographic coordinate information related to said images, and distance information.

14. The system of claim 12, wherein the image capture system comprises a location measurement device (GPS) and a distance measurement device.

15. The system of claim 12, wherein the image capture system comprises an inertial measurement unit for measuring rates of acceleration and changes in rotational attributes of the vehicle, fixation means or camera.

16. The system of claim 12, wherein the image related data includes the image, date and time of image capture, geographical coordinate information, project or group related information, so as to identify for example a particular district or street, and camera settings such as aperture, speed and ISO.

17. The system of claim 1, wherein the computer which executes instructions for performing image modification steps processes images in two resolutions: low resolution for interactive walkthrough panoramic motion, and high resolution for interactive panoramic motion.

18. The method of claim 5, wherein the pixel values comprise values selected from a group consisting of “Hue, Lightness, Saturation values”, “CMYK values” and “pixel RGB values”.

19. A system implementing the method of claim 2, comprising:

(a) an image capture apparatus (20) for capturing panoramic images, further comprising a camera (40), an optic system (30, 30′) connected to the camera, the optic system providing a substantially 360° field of view, and a memory device (50);

(b) a communication device for transferring data from the memory device to a computer (80);

(d) instructions executable on a computer for performing image modification steps; and

(e) a database for storing image related data.

20. A system implementing the method of claim 3, comprising:

(a) an image capture apparatus (20) for capturing panoramic images, further comprising a camera (40), an optic system (30, 30′) connected to the camera, the optic system providing a substantially 360° field of view, and a memory device (50);

(b) a communication device for transferring data from the memory device to a computer (80);

(d) instructions executable on a computer for performing image modification steps; and

(e) a database for storing image related data.

US13/127,474 2008-11-05 2009-11-05 System and method for creating interactive panoramic walk-through applications Abandoned US20110211040A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/127,474 US20110211040A1 (en) 2008-11-05 2009-11-05 System and method for creating interactive panoramic walk-through applications

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US11134608P 2008-11-05 2008-11-05
PCT/IB2009/007335 WO2010052548A2 (en) 2008-11-05 2009-11-05 System and method for creating interactive panoramic walk-through applications
US13/127,474 US20110211040A1 (en) 2008-11-05 2009-11-05 System and method for creating interactive panoramic walk-through applications

Publications (1)

Publication Number Publication Date
US20110211040A1 true US20110211040A1 (en) 2011-09-01

Family

ID=41666646

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/127,479 Expired - Fee Related US8893026B2 (en) 2008-11-05 2009-11-05 System and method for creating and broadcasting interactive panoramic walk-through applications
US13/127,474 Abandoned US20110211040A1 (en) 2008-11-05 2009-11-05 System and method for creating interactive panoramic walk-through applications

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/127,479 Expired - Fee Related US8893026B2 (en) 2008-11-05 2009-11-05 System and method for creating and broadcasting interactive panoramic walk-through applications

Country Status (2)

Country Link
US (2) US8893026B2 (en)
WO (3) WO2010052548A2 (en)

Families Citing this family (84)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7925439B2 (en) * 2006-10-19 2011-04-12 Topcon Positioning Systems, Inc. Gimbaled satellite positioning system antenna
US9454847B2 (en) * 2009-02-24 2016-09-27 Google Inc. System and method of indicating transition between street level images
US20100241525A1 (en) * 2009-03-18 2010-09-23 Microsoft Corporation Immersive virtual commerce
US20100306672A1 (en) * 2009-06-01 2010-12-02 Sony Computer Entertainment America Inc. Method and apparatus for matching users in multi-user computer simulations
US8581900B2 (en) * 2009-06-10 2013-11-12 Microsoft Corporation Computing transitions between captured driving runs
WO2011011737A1 (en) * 2009-07-24 2011-01-27 Digimarc Corporation Improved audio/video methods and systems
JP5406813B2 (en) * 2010-10-05 2014-02-05 株式会社ソニー・コンピュータエンタテインメント Panorama image display device and panorama image display method
US9876953B2 (en) 2010-10-29 2018-01-23 Ecole Polytechnique Federale De Lausanne (Epfl) Omnidirectional sensor array system
US20120179983A1 (en) * 2011-01-07 2012-07-12 Martin Lemire Three-dimensional virtual environment website
US9930225B2 (en) * 2011-02-10 2018-03-27 Villmer Llc Omni-directional camera and related viewing software
US8601380B2 (en) 2011-03-16 2013-12-03 Nokia Corporation Method and apparatus for displaying interactive preview information in a location-based user interface
US9036000B1 (en) * 2011-09-27 2015-05-19 Google Inc. Street-level imagery acquisition and selection
JP5659305B2 (en) 2011-11-07 2015-01-28 株式会社ソニー・コンピュータエンタテインメント Image generating apparatus and image generating method
JP5865388B2 (en) 2011-11-07 2016-02-17 株式会社ソニー・コンピュータエンタテインメント Image generating apparatus and image generating method
JP5769813B2 (en) * 2011-11-07 2015-08-26 株式会社ソニー・コンピュータエンタテインメント Image generating apparatus and image generating method
US9838687B1 (en) * 2011-12-02 2017-12-05 Amazon Technologies, Inc. Apparatus and method for panoramic video hosting with reduced bandwidth streaming
US9723223B1 (en) 2011-12-02 2017-08-01 Amazon Technologies, Inc. Apparatus and method for panoramic video hosting with directional audio
US9516225B2 (en) 2011-12-02 2016-12-06 Amazon Technologies, Inc. Apparatus and method for panoramic video hosting
WO2013114473A1 (en) * 2012-02-02 2013-08-08 パナソニック株式会社 Server, terminal device, image retrieval method, image processing method, and program
US10130872B2 (en) 2012-03-21 2018-11-20 Sony Interactive Entertainment LLC Apparatus and method for matching groups to users for online communities and computer simulations
US10186002B2 (en) 2012-03-21 2019-01-22 Sony Interactive Entertainment LLC Apparatus and method for matching users to groups for online communities and computer simulations
DE202013012425U1 (en) * 2012-08-08 2016-10-28 Google Inc. INDIVIDUAL PICTURES FROM ONE POINT OF INTEREST IN A PICTURE
KR20140030668A (en) * 2012-09-03 2014-03-12 엘지전자 주식회사 Mobile terminal and control method therof
US9235923B1 (en) * 2012-09-28 2016-01-12 Google Inc. Systems and methods for providing a visualization of satellite sightline obstructions
CN103854335B (en) * 2012-12-05 2017-05-03 厦门雅迅网络股份有限公司 Automobile data recorder panoramic video generation method
US9218368B2 (en) 2012-12-21 2015-12-22 Dropbox, Inc. System and method for organizing files based on an identification code
KR102089614B1 (en) * 2013-08-28 2020-04-14 삼성전자주식회사 Method for taking spherical panoramic image and an electronic device thereof
US9528834B2 (en) 2013-11-01 2016-12-27 Intelligent Technologies International, Inc. Mapping techniques using probe vehicles
US9781356B1 (en) 2013-12-16 2017-10-03 Amazon Technologies, Inc. Panoramic video viewer
US9971844B2 (en) * 2014-01-30 2018-05-15 Apple Inc. Adaptive image loading
US9787799B2 (en) 2014-02-27 2017-10-10 Dropbox, Inc. Systems and methods for managing content items having multiple resolutions
US10885104B2 (en) * 2014-02-27 2021-01-05 Dropbox, Inc. Systems and methods for selecting content items to store and present locally on a user device
USD781318S1 (en) 2014-04-22 2017-03-14 Google Inc. Display screen with graphical user interface or portion thereof
US9607411B2 (en) * 2014-04-23 2017-03-28 Ebay Inc. Specular highlights on photos of objects
US10321126B2 (en) 2014-07-08 2019-06-11 Zspace, Inc. User input device camera
US10275935B2 (en) 2014-10-31 2019-04-30 Fyusion, Inc. System and method for infinite synthetic image generation from multi-directional structured image array
WO2016071244A2 (en) 2014-11-06 2016-05-12 Koninklijke Philips N.V. Method and system of communication for use in hospitals
US10516893B2 (en) * 2015-02-14 2019-12-24 Remote Geosystems, Inc. Geospatial media referencing system
US10121223B2 (en) * 2015-03-02 2018-11-06 Aerial Sphere, Llc Post capture imagery processing and deployment systems
TWI581051B (en) * 2015-03-12 2017-05-01 Three - dimensional panoramic image generation method
EP3289572B1 (en) 2015-04-28 2019-06-26 Signify Holding B.V. Metadata in multi image scenes
US11095869B2 (en) 2015-09-22 2021-08-17 Fyusion, Inc. System and method for generating combined embedded multi-view interactive digital media representations
US10147211B2 (en) 2015-07-15 2018-12-04 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US10222932B2 (en) * 2015-07-15 2019-03-05 Fyusion, Inc. Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations
US11006095B2 (en) 2015-07-15 2021-05-11 Fyusion, Inc. Drone based capture of a multi-view interactive digital media
US10242474B2 (en) 2015-07-15 2019-03-26 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
EP3338106B1 (en) * 2015-08-17 2022-06-22 C360 Technologies, Inc. Generating objects in real time panoramic video
US10104286B1 (en) 2015-08-27 2018-10-16 Amazon Technologies, Inc. Motion de-blurring for panoramic frames
US10609379B1 (en) 2015-09-01 2020-03-31 Amazon Technologies, Inc. Video compression across continuous frame edges
US9843724B1 (en) 2015-09-21 2017-12-12 Amazon Technologies, Inc. Stabilization of panoramic video
US11783864B2 (en) 2015-09-22 2023-10-10 Fyusion, Inc. Integration of audio into a multi-view interactive digital media representation
US10198355B2 (en) 2015-10-29 2019-02-05 Dropbox, Inc. Proving a dynamic digital content cache
JP6733267B2 (en) * 2016-03-31 2020-07-29 富士通株式会社 Information processing program, information processing method, and information processing apparatus
US11212437B2 (en) * 2016-06-06 2021-12-28 Bryan COLIN Immersive capture and review
CN109863754B (en) * 2016-06-07 2021-12-28 维斯比特股份有限公司 Virtual reality 360-degree video camera system for live streaming
CN106384367B (en) * 2016-08-26 2019-06-14 深圳拍乐科技有限公司 A kind of method at the automatic stabilisation visual angle of panorama camera
US11202017B2 (en) 2016-10-06 2021-12-14 Fyusion, Inc. Live style transfer on a mobile device
CN106507086B (en) * 2016-10-28 2018-08-31 北京灵境世界科技有限公司 A kind of 3D rendering methods of roaming outdoor scene VR
US10536693B2 (en) 2016-11-22 2020-01-14 Pixvana, Inc. Analytic reprocessing for data stream system and method
US10437879B2 (en) 2017-01-18 2019-10-08 Fyusion, Inc. Visual search using multi-view interactive digital media representations
FR3063558A1 (en) * 2017-03-02 2018-09-07 Stmicroelectronics (Rousset) Sas METHOD FOR CONTROLLING THE REAL-TIME DETECTION OF A SCENE BY A WIRELESS COMMUNICATION APPARATUS AND APPARATUS THEREFOR
US10313651B2 (en) 2017-05-22 2019-06-04 Fyusion, Inc. Snapshots at predefined intervals or angles
US11069147B2 (en) 2017-06-26 2021-07-20 Fyusion, Inc. Modification of multi-view interactive digital media representation
KR102434402B1 (en) * 2017-09-19 2022-08-22 한국전자통신연구원 Apparatus and method for providing mixed reality content
CN108108396B (en) * 2017-12-01 2020-10-02 上海市环境科学研究院 A kind of aircraft aerial picture stitching management system
US10423886B2 (en) 2017-12-29 2019-09-24 Forward Thinking Systems, LLC Electronic logs with compliance support and prediction
US10791268B2 (en) 2018-02-07 2020-09-29 Structionsite Inc. Construction photograph integration with 3D model images
US10339384B2 (en) 2018-02-07 2019-07-02 Structionsite Inc. Construction photograph integration with 3D model images
US10592747B2 (en) 2018-04-26 2020-03-17 Fyusion, Inc. Method and apparatus for 3-D auto tagging
US10467758B1 (en) 2018-07-13 2019-11-05 Structionsite Inc. Imagery-based construction progress tracking
FI20185717A1 (en) * 2018-08-30 2020-03-01 Tridify Oy Automatic generation of a virtual reality walkthrough
US10983677B2 (en) 2018-11-16 2021-04-20 Dropbox, Inc. Prefetching digital thumbnails from remote servers to client devices based on a dynamic determination of file display criteria
US10997697B1 (en) 2018-12-28 2021-05-04 Gopro, Inc. Methods and apparatus for applying motion blur to overcaptured content
CN111486865A (en) * 2019-01-29 2020-08-04 北京理工大学 Transfer alignment filter, transfer alignment method and guided aircraft using the same
US10616483B1 (en) 2019-02-27 2020-04-07 Hong Kong Applied Science and Technology Research Institute Company Limited Apparatus and method of generating electronic three-dimensional walkthrough environment
CN110060201B (en) * 2019-04-15 2023-02-28 深圳市数字城市工程研究中心 Hot spot interaction method for panoramic video
US10754893B1 (en) * 2019-09-09 2020-08-25 Forward Thinking Systems, LLC Providing access to vehicle videos
CN110888962B (en) * 2019-12-05 2023-11-10 徐书诚 Computer system for realizing internet geographic heterogeneous information display
CN113179420B (en) * 2021-04-26 2022-08-30 本影(上海)网络科技有限公司 City-level wide-area high-precision CIM scene server dynamic stream rendering technical method
WO2022231482A1 (en) * 2021-04-29 2022-11-03 Хальдун Саид Аль-Зубейди Virtual transport system and operating method thereof
CN114005025A (en) * 2021-09-28 2022-02-01 山东农业大学 A method, device and system for automatic monitoring of high-point panoramic color-changing trees
CN114115632B (en) * 2021-12-03 2024-11-29 国网浙江省电力有限公司电力科学研究院 Quick system of immersive virtual reality training platform and use method
CN114898059B (en) * 2022-05-24 2024-11-26 北京百度网讯科技有限公司 Topological method, device and computer program product for scattered-point panorama
HRP20230614A1 (en) * 2023-06-27 2025-01-03 MMM Agramservis d.o.o. Device and procedure for camera set stabilization on autonomous vehicle for monitoring agricultural areas

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0772842B1 (en) * 1994-05-19 2003-11-12 Geospan Corporation Method for collecting and processing visual and spatial position information
US6282362B1 (en) * 1995-11-07 2001-08-28 Trimble Navigation Limited Geographical position/image digital recording and display system
JPH11168754A (en) * 1997-12-03 1999-06-22 Mr System Kenkyusho:Kk Image recording method, image database system, image recorder, and computer program storage medium
JP3634677B2 (en) * 1999-02-19 2005-03-30 キヤノン株式会社 Image interpolation method, image processing method, image display method, image processing apparatus, image display apparatus, and computer program storage medium
US6388688B1 (en) 1999-04-06 2002-05-14 Vergics Corporation Graph-based visual navigation through spatial environments
US6693649B1 (en) 1999-05-27 2004-02-17 International Business Machines Corporation System and method for unifying hotspots subject to non-linear transformation and interpolation in heterogeneous media representations
US6895126B2 (en) * 2000-10-06 2005-05-17 Enrico Di Bernardo System and method for creating, storing, and utilizing composite images of a geographic location
JP2002209208A (en) * 2001-01-11 2002-07-26 Mixed Reality Systems Laboratory Inc Image processing unit and its method, and storage medium
US7126630B1 (en) * 2001-02-09 2006-10-24 Kujin Lee Method and apparatus for omni-directional image and 3-dimensional data acquisition with data annotation and dynamic range extension method
AU2002254217A1 (en) * 2001-02-24 2002-09-12 Eyesee360, Inc. Method and apparatus for processing photographic images
JP2002258740A (en) * 2001-03-02 2002-09-11 Mixed Reality Systems Laboratory Inc Method and device for recording picture and method and device for reproducing picture
JP3764953B2 (en) * 2002-04-17 2006-04-12 立山マシン株式会社 Method for recording image conversion parameters in an annular image
JP2003344526A (en) * 2002-05-31 2003-12-03 Mitsubishi Heavy Ind Ltd Instrument and method for measuring flying object
US7570261B1 (en) * 2003-03-06 2009-08-04 Xdyne, Inc. Apparatus and method for creating a virtual three-dimensional environment, and method of generating revenue therefrom
US6968973B2 (en) 2003-05-31 2005-11-29 Microsoft Corporation System and process for viewing and navigating through an interactive video tour
TW200734965A (en) 2006-03-10 2007-09-16 Sony Taiwan Ltd A perspective correction panning method for wide-angle image
US7925982B2 (en) * 2006-09-01 2011-04-12 Cheryl Parker System and method of overlaying and integrating data with geographic mapping applications
EP2142883B1 (en) * 2007-04-22 2017-10-18 Ilookabout INC. Method of obtaining geographically related images using a vehicle

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030080963A1 (en) * 1995-11-22 2003-05-01 Nintendo Co., Ltd. High performance low cost video game system with coprocessor providing high speed efficient 3D graphics and digital audio signal processing
US20050180623A1 (en) * 1996-10-25 2005-08-18 Frederick Mueller Method and apparatus for scanning three-dimensional objects
US6504571B1 (en) * 1998-05-18 2003-01-07 International Business Machines Corporation System and methods for querying digital image archives using recorded parameters
US20040257384A1 (en) * 1999-05-12 2004-12-23 Park Michael C. Interactive image seamer for panoramic images
US7110617B2 (en) * 2000-10-27 2006-09-19 Microsoft Corporation Rebinning methods and arrangements for use in compressing image-based rendering (IBR) data
US6633317B2 (en) * 2001-01-02 2003-10-14 Microsoft Corporation Image-based walkthrough system and process employing spatial video streaming
US7103232B2 (en) * 2001-03-07 2006-09-05 Canon Kabushiki Kaisha Storing and processing partial images obtained from a panoramic image
US20030026588A1 (en) * 2001-05-14 2003-02-06 Elder James H. Attentive panoramic visual sensor
US6865028B2 (en) * 2001-07-20 2005-03-08 6115187 Canada Inc. Method for capturing a panoramic image by means of an image sensor rectangular in shape
US20040125044A1 (en) * 2002-09-05 2004-07-01 Akira Suzuki Display system, display control apparatus, display apparatus, display method and user interface device
US20040169724A1 (en) * 2002-12-09 2004-09-02 Ekpar Frank Edughom Method and apparatus for creating interactive virtual tours
US20080260290A1 (en) * 2004-02-03 2008-10-23 Koninklijke Philips Electronic, N.V. Changing the Aspect Ratio of Images to be Displayed on a Screen
US7411628B2 (en) * 2004-05-07 2008-08-12 Micronas Usa, Inc. Method and system for scaling, filtering, scan conversion, panoramic scaling, YC adjustment, and color conversion in a display controller
US20090058988A1 (en) * 2007-03-16 2009-03-05 Kollmorgen Corporation System for Panoramic Image Processing

Cited By (165)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10666865B2 (en) 2008-02-08 2020-05-26 Google Llc Panoramic camera with multiple image sensors using timed shutters
US10397476B2 (en) * 2008-02-08 2019-08-27 Google Llc Panoramic camera with multiple image sensors using timed shutters
US20110043341A1 (en) * 2009-08-18 2011-02-24 Toshiba Alpine Automotive Technology Corporation Image display apparatus for vehicle
US8400329B2 (en) * 2009-08-18 2013-03-19 Toshiba Alpine Automotive Technology Corporation Image display apparatus for vehicle
US20120076426A1 (en) * 2009-09-16 2012-03-29 Olaworks, Inc. Method and system for matching panoramic images using a graph structure, and computer-readable recording medium
US8472678B2 (en) * 2009-09-16 2013-06-25 Intel Corporation Method and system for matching panoramic images using a graph structure, and computer-readable recording medium
US8447136B2 (en) * 2010-01-12 2013-05-21 Microsoft Corporation Viewing media in the context of street-level images
US20110173565A1 (en) * 2010-01-12 2011-07-14 Microsoft Corporation Viewing media in the context of street-level images
US9002535B2 (en) * 2010-05-11 2015-04-07 Irobot Corporation Navigation portals for a remote vehicle control user interface
US20120072052A1 (en) * 2010-05-11 2012-03-22 Aaron Powers Navigation Portals for a Remote Vehicle Control User Interface
US20110292076A1 (en) * 2010-05-28 2011-12-01 Nokia Corporation Method and apparatus for providing a localized virtual reality environment
US9122707B2 (en) * 2010-05-28 2015-09-01 Nokia Technologies Oy Method and apparatus for providing a localized virtual reality environment
US20120133639A1 (en) * 2010-11-30 2012-05-31 Microsoft Corporation Strip panorama
US10163131B2 (en) * 2011-01-20 2018-12-25 Ebay Inc. Three dimensional proximity recommendation system
US9183588B2 (en) * 2011-01-20 2015-11-10 Ebay, Inc. Three dimensional proximity recommendation system
US11461808B2 (en) 2011-01-20 2022-10-04 Ebay Inc. Three dimensional proximity recommendation system
US10997627B2 (en) 2011-01-20 2021-05-04 Ebay Inc. Three dimensional proximity recommendation system
US20120188169A1 (en) * 2011-01-20 2012-07-26 Ebay Inc. Three dimensional proximity recommendation system
US20160063551A1 (en) * 2011-01-20 2016-03-03 Ebay Inc. Three dimensional proximity recommendation system
US10535079B2 (en) * 2011-01-20 2020-01-14 Ebay Inc. Three dimensional proximity recommendation system
US20190087860A1 (en) * 2011-01-20 2019-03-21 Ebay Inc. Three dimensional proximity recommendation system
US10360945B2 (en) 2011-08-09 2019-07-23 Gopro, Inc. User interface for editing digital media objects
US10038842B2 (en) 2011-11-01 2018-07-31 Microsoft Technology Licensing, Llc Planar panorama imagery generation
US9324184B2 (en) 2011-12-14 2016-04-26 Microsoft Technology Licensing, Llc Image three-dimensional (3D) modeling
US10008021B2 (en) 2011-12-14 2018-06-26 Microsoft Technology Licensing, Llc Parallax compensation
US9406153B2 (en) 2011-12-14 2016-08-02 Microsoft Technology Licensing, Llc Point of interest (POI) data positioning in image
US20130236063A1 (en) * 2012-03-07 2013-09-12 Xerox Corporation Multiple view transportation imaging systems
US8731245B2 (en) * 2012-03-07 2014-05-20 Xerox Corporation Multiple view transportation imaging systems
US9488489B2 (en) 2012-09-28 2016-11-08 Google Inc. Personalized mapping with photo tours
US9197682B2 (en) 2012-12-21 2015-11-24 Nokia Technologies Oy Method, apparatus, and computer program product for generating a video stream of a mapped route
US12058296B2 (en) * 2012-12-31 2024-08-06 Virtually Anywhere Interactive, Llc Content management for virtual tours
EP2779114A1 (en) * 2013-03-11 2014-09-17 Dai Nippon Printing Co., Ltd. Apparatus for interactive virtual walkthrough
US8705893B1 (en) 2013-03-14 2014-04-22 Palo Alto Research Center Incorporated Apparatus and method for creating floor plans
US20140267883A1 (en) * 2013-03-14 2014-09-18 Konica Minolta Laboratory U.S.A., Inc. Method of selecting a subset from an image set for generating high dynamic range image
US8902328B2 (en) * 2013-03-14 2014-12-02 Konica Minolta Laboratory U.S.A., Inc. Method of selecting a subset from an image set for generating high dynamic range image
US9244940B1 (en) 2013-09-27 2016-01-26 Google Inc. Navigation paths for panorama
US9658744B1 (en) 2013-09-27 2017-05-23 Google Inc. Navigation paths for panorama
WO2015134537A1 (en) * 2014-03-04 2015-09-11 Gopro, Inc. Generation of video based on spherical content
US10084961B2 (en) 2014-03-04 2018-09-25 Gopro, Inc. Automatic generation of video from spherical content using audio/visual analysis
US9760768B2 (en) 2014-03-04 2017-09-12 Gopro, Inc. Generation of video from spherical content using edit maps
US9754159B2 (en) 2014-03-04 2017-09-05 Gopro, Inc. Automatic generation of video from spherical content using location-based metadata
US9652667B2 (en) 2014-03-04 2017-05-16 Gopro, Inc. Automatic generation of video from spherical content using audio/visual analysis
US20150278029A1 (en) * 2014-03-27 2015-10-01 Salesforce.Com, Inc. Reversing object manipulations in association with a walkthrough for an application or online service
US9983943B2 (en) * 2014-03-27 2018-05-29 Salesforce.Com, Inc. Reversing object manipulations in association with a walkthrough for an application or online service
US9571785B2 (en) * 2014-04-11 2017-02-14 International Business Machines Corporation System and method for fine-grained control of privacy from image and video recording devices
US10531038B2 (en) * 2014-04-11 2020-01-07 International Business Machines Corporation System and method for fine-grained control of privacy from image and video recording devices
US20150296170A1 (en) * 2014-04-11 2015-10-15 International Business Machines Corporation System and method for fine-grained control of privacy from image and video recording devices
US20170048480A1 (en) * 2014-04-11 2017-02-16 International Business Machines Corporation System and method for fine-grained control of privacy from image and video recording devices
USD1008302S1 (en) 2014-04-22 2023-12-19 Google Llc Display screen with graphical user interface or portion thereof
US11860923B2 (en) 2014-04-22 2024-01-02 Google Llc Providing a thumbnail image that follows a main image
USD1006046S1 (en) 2014-04-22 2023-11-28 Google Llc Display screen with graphical user interface or portion thereof
US10540804B2 (en) * 2014-04-22 2020-01-21 Google Llc Selecting time-distributed panoramic images for display
USD933691S1 (en) 2014-04-22 2021-10-19 Google Llc Display screen with graphical user interface or portion thereof
USD994696S1 (en) 2014-04-22 2023-08-08 Google Llc Display screen with graphical user interface or portion thereof
USD877765S1 (en) 2014-04-22 2020-03-10 Google Llc Display screen with graphical user interface or portion thereof
US20180261000A1 (en) * 2014-04-22 2018-09-13 Google Llc Selecting time-distributed panoramic images for display
USD934281S1 (en) 2014-04-22 2021-10-26 Google Llc Display screen with graphical user interface or portion thereof
US11163813B2 (en) 2014-04-22 2021-11-02 Google Llc Providing a thumbnail image that follows a main image
US9830745B1 (en) 2014-04-24 2017-11-28 Google Llc Automatically generating panorama tours
US10643385B1 (en) 2014-04-24 2020-05-05 Google Llc Automatically generating panorama tours
US9189839B1 (en) 2014-04-24 2015-11-17 Google Inc. Automatically generating panorama tours
US11481977B1 (en) 2014-04-24 2022-10-25 Google Llc Automatically generating panorama tours
US9342911B1 (en) 2014-04-24 2016-05-17 Google Inc. Automatically generating panorama tours
US12002163B1 (en) 2014-04-24 2024-06-04 Google Llc Automatically generating panorama tours
US9841291B2 (en) 2014-06-27 2017-12-12 Google Llc Generating turn-by-turn direction previews
US9377320B2 (en) 2014-06-27 2016-06-28 Google Inc. Generating turn-by-turn direction previews
US10775188B2 (en) 2014-06-27 2020-09-15 Google Llc Generating turn-by-turn direction previews
US11067407B2 (en) 2014-06-27 2021-07-20 Google Llc Generating turn-by-turn direction previews
US9002647B1 (en) 2014-06-27 2015-04-07 Google Inc. Generating turn-by-turn direction previews
US9898857B2 (en) 2014-07-17 2018-02-20 Google Llc Blending between street view and earth view
US9418472B2 (en) 2014-07-17 2016-08-16 Google Inc. Blending between street view and earth view
US10074013B2 (en) 2014-07-23 2018-09-11 Gopro, Inc. Scene and activity identification in video summary generation
US11069380B2 (en) 2014-07-23 2021-07-20 Gopro, Inc. Scene and activity identification in video summary generation
US10339975B2 (en) 2014-07-23 2019-07-02 Gopro, Inc. Voice-based video tagging
US11776579B2 (en) 2014-07-23 2023-10-03 Gopro, Inc. Scene and activity identification in video summary generation
US10776629B2 (en) 2014-07-23 2020-09-15 Gopro, Inc. Scene and activity identification in video summary generation
US9984293B2 (en) 2014-07-23 2018-05-29 Gopro, Inc. Video scene classification by activity
CN105407261A (en) * 2014-08-15 2016-03-16 索尼公司 Image processing device and method, and electronic equipment
US10643663B2 (en) 2014-08-20 2020-05-05 Gopro, Inc. Scene and activity identification in video summary generation based on motion detected in a video
US10262695B2 (en) 2014-08-20 2019-04-16 Gopro, Inc. Scene and activity identification in video summary generation
US10192585B1 (en) 2014-08-20 2019-01-29 Gopro, Inc. Scene and activity identification in video summary generation based on motion detected in a video
US9876954B2 (en) * 2014-10-10 2018-01-23 Iec Infrared Systems, Llc Calibrating panoramic imaging system in multiple dimensions
US10367996B2 (en) 2014-10-10 2019-07-30 Iec Infrared Systems, Llc Calibrating panoramic imaging system in multiple dimensions
US20160104285A1 (en) * 2014-10-10 2016-04-14 IEC Infrared Systems LLC Calibrating Panoramic Imaging System In Multiple Dimensions
US20160105649A1 (en) * 2014-10-10 2016-04-14 IEC Infrared Systems LLC Panoramic View Imaging System With Drone Integration
US10033924B2 (en) 2014-10-10 2018-07-24 Iec Infrared Systems, Llc Panoramic view imaging system
US10084960B2 (en) * 2014-10-10 2018-09-25 Iec Infrared Systems, Llc Panoramic view imaging system with drone integration
US10096341B2 (en) 2015-01-05 2018-10-09 Gopro, Inc. Media identifier generation for camera-captured media
US10559324B2 (en) 2015-01-05 2020-02-11 Gopro, Inc. Media identifier generation for camera-captured media
US9966108B1 (en) 2015-01-29 2018-05-08 Gopro, Inc. Variable playback speed template for video editing application
US11688034B2 (en) 2015-05-20 2023-06-27 Gopro, Inc. Virtual lens simulation for video and photo cropping
US10817977B2 (en) 2015-05-20 2020-10-27 Gopro, Inc. Virtual lens simulation for video and photo cropping
US11164282B2 (en) 2015-05-20 2021-11-02 Gopro, Inc. Virtual lens simulation for video and photo cropping
US10679323B2 (en) 2015-05-20 2020-06-09 Gopro, Inc. Virtual lens simulation for video and photo cropping
US10529051B2 (en) 2015-05-20 2020-01-07 Gopro, Inc. Virtual lens simulation for video and photo cropping
US10535115B2 (en) 2015-05-20 2020-01-14 Gopro, Inc. Virtual lens simulation for video and photo cropping
US10186012B2 (en) 2015-05-20 2019-01-22 Gopro, Inc. Virtual lens simulation for video and photo cropping
US10529052B2 (en) 2015-05-20 2020-01-07 Gopro, Inc. Virtual lens simulation for video and photo cropping
US10395338B2 (en) 2015-05-20 2019-08-27 Gopro, Inc. Virtual lens simulation for video and photo cropping
US9769367B2 (en) 2015-08-07 2017-09-19 Google Inc. Speech and computer vision-based control
US10136043B2 (en) 2015-08-07 2018-11-20 Google Llc Speech and computer vision-based control
US9733881B2 (en) * 2015-08-28 2017-08-15 International Business Machines Corporation Managing digital object viewability for a transparent display system
US20170060512A1 (en) * 2015-08-28 2017-03-02 International Business Machines Corporation Managing digital object viewability for a transparent display system
DE102015217492A1 (en) * 2015-09-14 2017-03-16 Volkswagen Aktiengesellschaft Device and method for creating or updating an environment map of a motor vehicle
US10373298B2 (en) * 2015-09-15 2019-08-06 Huawei Technologies Co., Ltd. Image distortion correction method and apparatus
US10186298B1 (en) 2015-10-20 2019-01-22 Gopro, Inc. System and method of generating video from video clips based on moments of interest within the video clips
US10204273B2 (en) 2015-10-20 2019-02-12 Gopro, Inc. System and method of providing recommendations of moments of interest within video clips post capture
US10789478B2 (en) 2015-10-20 2020-09-29 Gopro, Inc. System and method of providing recommendations of moments of interest within video clips post capture
US10748577B2 (en) 2015-10-20 2020-08-18 Gopro, Inc. System and method of generating video from video clips based on moments of interest within the video clips
US11468914B2 (en) 2015-10-20 2022-10-11 Gopro, Inc. System and method of generating video from video clips based on moments of interest within the video clips
US9819865B2 (en) 2015-10-30 2017-11-14 Essential Products, Inc. Imaging device and method for generating an undistorted wide view image
US9813623B2 (en) 2015-10-30 2017-11-07 Essential Products, Inc. Wide field of view camera for integration with a mobile device
US9906721B2 (en) * 2015-10-30 2018-02-27 Essential Products, Inc. Apparatus and method to record a 360 degree image
US10218904B2 (en) 2015-10-30 2019-02-26 Essential Products, Inc. Wide field of view camera for integration with a mobile device
US20170193704A1 (en) * 2015-12-11 2017-07-06 Nokia Technologies Oy Causing provision of virtual reality content
US9838641B1 (en) 2015-12-30 2017-12-05 Google Llc Low power framework for processing, compressing, and transmitting images at a mobile image capture device
US9836819B1 (en) 2015-12-30 2017-12-05 Google Llc Systems and methods for selective retention and editing of images captured by mobile image capture device
US11159763B2 (en) 2015-12-30 2021-10-26 Google Llc Low power framework for controlling image sensor mode in a mobile image capture device
US10225511B1 (en) 2015-12-30 2019-03-05 Google Llc Low power framework for controlling image sensor mode in a mobile image capture device
US10732809B2 (en) 2015-12-30 2020-08-04 Google Llc Systems and methods for selective retention and editing of images captured by mobile image capture device
US9836484B1 (en) 2015-12-30 2017-12-05 Google Llc Systems and methods that leverage deep learning to selectively store images at a mobile image capture device
US10728489B2 (en) 2015-12-30 2020-07-28 Google Llc Low power framework for controlling image sensor mode in a mobile image capture device
US10607651B2 (en) 2016-01-08 2020-03-31 Gopro, Inc. Digital media editing
US11049522B2 (en) 2016-01-08 2021-06-29 Gopro, Inc. Digital media editing
US10109319B2 (en) 2016-01-08 2018-10-23 Gopro, Inc. Digital media editing
US11238635B2 (en) 2016-02-04 2022-02-01 Gopro, Inc. Digital media editing
US10424102B2 (en) 2016-02-04 2019-09-24 Gopro, Inc. Digital media editing
US10769834B2 (en) 2016-02-04 2020-09-08 Gopro, Inc. Digital media editing
US10083537B1 (en) 2016-02-04 2018-09-25 Gopro, Inc. Systems and methods for adding a moving visual element to a video
US9812175B2 (en) 2016-02-04 2017-11-07 Gopro, Inc. Systems and methods for annotating a video
US10565769B2 (en) 2016-02-04 2020-02-18 Gopro, Inc. Systems and methods for adding visual elements to video content
WO2017158229A1 (en) * 2016-03-17 2017-09-21 Nokia Technologies Oy Method and apparatus for processing video information
US10783609B2 (en) 2016-03-17 2020-09-22 Nokia Technologies Oy Method and apparatus for processing video information
US9794632B1 (en) 2016-04-07 2017-10-17 Gopro, Inc. Systems and methods for synchronization based on audio track changes in video editing
US9838731B1 (en) 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing with audio mixing option
US10341712B2 (en) 2016-04-07 2019-07-02 Gopro, Inc. Systems and methods for audio track selection in video editing
US10841557B2 (en) 2016-05-12 2020-11-17 Samsung Electronics Co., Ltd. Content navigation
US10185891B1 (en) 2016-07-08 2019-01-22 Gopro, Inc. Systems and methods for compact convolutional neural networks
US9928630B2 (en) 2016-07-26 2018-03-27 International Business Machines Corporation Hiding sensitive content visible through a transparent display
CN109154499A (en) * 2016-08-18 2019-01-04 深圳市大疆创新科技有限公司 System and method for enhancing stereoscopic display
US9836853B1 (en) 2016-09-06 2017-12-05 Gopro, Inc. Three-dimensional convolutional neural networks for video highlight detection
US10560657B2 (en) 2016-11-07 2020-02-11 Gopro, Inc. Systems and methods for intelligently synchronizing events in visual content with musical features in audio content
US10284809B1 (en) 2016-11-07 2019-05-07 Gopro, Inc. Systems and methods for intelligently synchronizing events in visual content with musical features in audio content
US10262639B1 (en) 2016-11-08 2019-04-16 Gopro, Inc. Systems and methods for detecting musical features in audio content
US10546566B2 (en) 2016-11-08 2020-01-28 Gopro, Inc. Systems and methods for detecting musical features in audio content
US10534966B1 (en) 2017-02-02 2020-01-14 Gopro, Inc. Systems and methods for identifying activities and/or events represented in a video
US10991396B2 (en) 2017-03-02 2021-04-27 Gopro, Inc. Systems and methods for modifying videos based on music
US11443771B2 (en) 2017-03-02 2022-09-13 Gopro, Inc. Systems and methods for modifying videos based on music
US10127943B1 (en) 2017-03-02 2018-11-13 Gopro, Inc. Systems and methods for modifying videos based on music
US10679670B2 (en) 2017-03-02 2020-06-09 Gopro, Inc. Systems and methods for modifying videos based on music
US10185895B1 (en) 2017-03-23 2019-01-22 Gopro, Inc. Systems and methods for classifying activities captured within images
US10083718B1 (en) 2017-03-24 2018-09-25 Gopro, Inc. Systems and methods for editing videos based on motion
US10789985B2 (en) 2017-03-24 2020-09-29 Gopro, Inc. Systems and methods for editing videos based on motion
US11282544B2 (en) 2017-03-24 2022-03-22 Gopro, Inc. Systems and methods for editing videos based on motion
US10187690B1 (en) 2017-04-24 2019-01-22 Gopro, Inc. Systems and methods to detect and correlate user responses to media content
US11023999B2 (en) * 2017-08-31 2021-06-01 Canon Kabushiki Kaisha Image processing apparatus, information processing system, information processing method, and storage medium
US10400929B2 (en) 2017-09-27 2019-09-03 Quick Fitting, Inc. Fitting device, arrangement and method
CN109282744A (en) * 2018-08-01 2019-01-29 北京农业信息技术研究中心 Crop node unit phenotype monitoring device and method
US20200195845A1 (en) * 2018-12-14 2020-06-18 Axis Ab System for panoramic imaging
US10827118B2 (en) * 2018-12-14 2020-11-03 Axis Ab System for panoramic imaging
EP3667414A1 (en) * 2018-12-14 2020-06-17 Axis AB A system for panoramic imaging
CN110047035A (en) * 2019-04-15 2019-07-23 深圳市数字城市工程研究中心 Panoramic video hotspot interaction system and interaction equipment
US10969047B1 (en) 2020-01-29 2021-04-06 Quick Fitting Holding Company, Llc Electrical conduit fitting and assembly
US11035510B1 (en) 2020-01-31 2021-06-15 Quick Fitting Holding Company, Llc Electrical conduit fitting and assembly
TWI758787B (en) * 2020-07-16 2022-03-21 江俊昇 A method for civil engineering design in a single software interface

Also Published As

Publication number | Publication date
US8893026B2 (en) 2014-11-18
WO2010052558A2 (en) 2010-05-14
US20110214072A1 (en) 2011-09-01
WO2010052550A2 (en) 2010-05-14
WO2010052548A2 (en) 2010-05-14
WO2010052558A3 (en) 2010-07-08
WO2010052548A3 (en) 2010-08-19
WO2010052550A9 (en) 2010-08-26

Similar Documents

Publication | Publication Date | Title
US20110211040A1 (en) 2011-09-01 System and method for creating interactive panoramic walk-through applications
US12211160B2 (en) 2025-01-28 Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
US9858717B2 (en) 2018-01-02 System and method for producing multi-angle views of an object-of-interest from images in an image dataset
Kopf et al. 2010 Street slide: browsing street level imagery
US8963943B2 (en) 2015-02-24 Three-dimensional urban modeling apparatus and method
AU2008322565B2 (en) 2013-09-05 Method and apparatus of taking aerial surveys
US20150207991A1 (en) 2015-07-23 Digital 3d/360 degree camera system
JP2011515760A (en) 2011-05-19 Visualizing camera feeds on a map
Huang et al. 2008 Panoramic imaging: sensor-line cameras and laser range-finders
CN106296783A (en) 2017-01-04 A spatial representation method combining a global 3D view of a space with panoramic images
US20090167786A1 (en) 2009-07-02 Methods and apparatus for associating image data
US20030225513A1 (en) 2003-12-04 Method and apparatus for providing multi-level blended display of arbitrary shaped textures in a geo-spatial context
US20070070233A1 (en) 2007-03-29 System and method for correlating captured images with their site locations on maps
Jian et al. 2017 Augmented virtual environment: fusion of real-time video and 3D models in the digital earth system
JP4272966B2 (en) 2009-06-03 3DCG synthesizer
JP2020008664A (en) 2020-01-16 Driving simulator
JP2004265396A (en) 2004-09-24 Image forming system and image forming method
Koeva 2016 3D modelling and interactive web-based visualization of cultural heritage objects
JP2010045693A (en) 2010-02-25 Image acquiring system for generating a three-dimensional moving image of a line
CN111524230A (en) 2020-08-11 A method and computer system for linked browsing of a three-dimensional model and an unfolded panorama
WO2009069165A2 (en) 2009-06-04 Transition method between two three-dimensional geo-referenced maps
CN110751616B (en) 2022-02-18 Indoor and outdoor panoramic house-viewing video fusion method
CN114494563B (en) 2022-10-11 Method and device for fusion display of aerial video on digital earth
KR20090132317A (en) 2009-12-30 Method and system for providing video-based additional information
CN111210514B (en) 2023-04-18 Method for fusing photos into three-dimensional scene in batch

Legal Events

Date | Code | Title | Description
2014-09-11 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION