Yu et al., 2021 - Google Patents
Document ID
- 393376939480649588

Authors
- Yu A
- Li R
- Tancik M
- Li H
- Ng R
- Kanazawa A

Publication year
- 2021

Publication venue
- Proceedings of the IEEE/CVF International Conference on Computer Vision
Snippet
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects. Our method can render 800x800 images at more than 150 FPS, which is over 3000 times …
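The snippet describes an octree-based scene representation that still supports view-dependent effects. In the published PlenOctrees method, each octree leaf stores a density value together with spherical-harmonic (SH) coefficients per color channel, so color can be evaluated for any viewing direction without querying a neural network. The Python sketch below only illustrates that idea; the `PlenOctreeLeaf` class, its field names, and the degree-1 SH truncation are assumptions made for this example, not the authors' code.

```python
import numpy as np

# Minimal sketch (assumed names, not the authors' implementation): a
# PlenOctree-style leaf stores a volume density plus spherical-harmonic (SH)
# coefficients per color channel; view-dependent RGB is recovered by
# evaluating the SH basis in the viewing direction and summing weighted terms.

SH_C0 = 0.28209479177387814   # degree-0 real SH constant, 1 / (2*sqrt(pi))
SH_C1 = 0.48860251190291992   # degree-1 real SH constant, sqrt(3 / (4*pi))

def eval_sh_basis(d):
    """Real SH basis up to degree 1 for a direction vector d."""
    x, y, z = d / np.linalg.norm(d)
    return np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])

class PlenOctreeLeaf:
    """Hypothetical leaf node: density sigma + (3 channels x 4) SH coefficients."""
    def __init__(self, sigma, sh_coeffs):
        self.sigma = sigma              # volume density used in compositing
        self.sh_coeffs = sh_coeffs      # array of shape (3, 4): RGB x SH basis

    def color(self, view_dir):
        """View-dependent color: sigmoid of the per-channel SH expansion."""
        basis = eval_sh_basis(view_dir)       # shape (4,)
        raw = self.sh_coeffs @ basis          # shape (3,)
        return 1.0 / (1.0 + np.exp(-raw))     # squash to (0, 1)

# Querying the same leaf from two directions gives different colors,
# which is how the octree supports view-dependent effects.
rng = np.random.default_rng(0)
leaf = PlenOctreeLeaf(sigma=25.0, sh_coeffs=rng.normal(0.0, 0.3, size=(3, 4)))
print(leaf.color(np.array([0.0, 0.0, 1.0])))
print(leaf.color(np.array([1.0, 0.0, 0.0])))
```

Rendering in the paper then amounts to traversing the octree along each camera ray and alpha-compositing the densities and view-dependent colors of the leaves it crosses, which is what enables the reported real-time frame rates.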
Concepts (machine-extracted)
- rendering (ID 238000009877): title, abstract, description; count 53
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00—3D [Three Dimensional] image rendering › G06T15/10—Geometric effects › G06T15/20—Perspective computation › G06T15/205—Image-based rendering
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00—3D [Three Dimensional] image rendering › G06T15/04—Texture mapping
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T9/00—Image coding, e.g. from bit-mapped to non bit-mapped › G06T9/001—Model-based coding, e.g. wire frame
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00—3D [Three Dimensional] image rendering › G06T15/06—Ray-tracing
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects › G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects › G06T17/005—Tree description, e.g. octree, quadtree
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T13/00—Animation
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00—3D [Three Dimensional] image rendering › G06T15/005—General purpose rendering architectures
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00—3D [Three Dimensional] image rendering › G06T15/50—Lighting effects
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T11/00—2D [Two Dimensional] image generation
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2210/00—Indexing scheme for image generation or computer graphics › G06T2210/32—Image data format
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2210/00—Indexing scheme for image generation or computer graphics › G06T2210/08—Bandwidth reduction
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS › G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints › G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image › G06K9/46—Extraction of features or characteristics of the image
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2200/00—Indexing scheme for image data processing or generation, in general
- G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING › G06F—ELECTRICAL DIGITAL DATA PROCESSING › G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yu et al. | 2021 | PlenOctrees for real-time rendering of neural radiance fields |
Yariv et al. | 2023 | BakedSDF: Meshing neural SDFs for real-time view synthesis |
Xu et al. | 2022 | Point-NeRF: Point-based neural radiance fields |
Wang et al. | 2023 | F2-NeRF: Fast neural radiance field training with free camera trajectories |
Lazova et al. | 2023 | Control-NeRF: Editable feature volumes for scene rendering and manipulation |
Li et al. | 2024 | Spacetime Gaussian feature splatting for real-time dynamic view synthesis |
Tewari et al. | 2022 | Advances in neural rendering |
Cao et al. | 2023 | HexPlane: A fast representation for dynamic scenes |
Li et al. | 2022 | Neural 3D video synthesis from multi-view video |
Liu et al. | 2020 | Neural sparse voxel fields |
Attal et al. | 2023 | HyperReel: High-fidelity 6-DoF video with ray-conditioned sampling |
Reiser et al. | 2021 | KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs |
Liu et al. | 2022 | DeVRF: Fast deformable voxel radiance fields for dynamic scenes |
Rosu et al. | 2023 | PermutoSDF: Fast multi-view reconstruction with implicit surfaces using permutohedral lattices |
Dai et al. | 2020 | Neural point cloud rendering via multi-plane projection |
Pittaluga et al. | 2019 | Revealing scenes by inverting structure from motion reconstructions |
Zhang et al. | 2022 | Differentiable point-based radiance fields for efficient view synthesis |
Olszewski et al. | 2019 | Transformable bottleneck networks |
Alatan et al. | 2007 | Scene representation technologies for 3DTV—A survey |
CN113706714A (en) | 2021-11-26 | Novel view synthesis method based on depth image and neural radiance field |
Peng et al. | 2023 | Representing volumetric videos as dynamic MLP maps |
Weng et al. | 2020 | Vid2Actor: Free-viewpoint animatable person synthesis from video in the wild |
Yang et al. | 2021 | Deep optimized priors for 3D shape modeling and reconstruction |
Liu et al. | 2023 | Real-time neural rasterization for large scenes |
Wu et al. | 2022 | Learning to generate 3D shapes from a single example |