US10068311B2 - Varying effective resolution by screen location by changing active color sample count within multiple render targets - Google Patents
- Tue Sep 04 2018
Info
- Publication number: US10068311B2
- Application number: US14/246,061 (US201414246061A)
- Authority: US (United States)
- Prior art keywords: pixel, active, screen, metadata, regions
- Prior art date: 2014-04-05
- Legal status: Active, expires 2034-09-07 (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0189—Sight systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/80—Shading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4023—Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/36—Level of detail
Definitions
- aspects of the present disclosure are related to computer graphics.
- the present disclosure is related to varying resolution by screen location.
- GPU graphics processing unit
- the GPU is a specialized electronic circuit designed to accelerate the creation of images in a frame buffer intended for output to a display.
- GPUs are used in embedded systems, mobile phones, personal computers, tablet computers, portable game devices, workstations, and game consoles.
- a GPU is typically designed to be efficient at manipulating computer graphics. GPUs often have a highly parallel processing architecture that makes the GPU more effective than a general-purpose CPU for algorithms where processing of large blocks of data is done in parallel.
- the CPU may send the GPU instructions, commonly referred to as draw commands, that instruct the GPU to implement a particular graphics processing task, e.g. render a particular texture that has changed with respect to a previous frame in an image.
- draw commands may be coordinated by the CPU with a graphics application programming interface (API) in order to issue graphics rendering commands that correspond to the state of the particular application's virtual environment.
- API graphics application programming interface
- a GPU may perform a series of processing tasks in a “graphics pipeline” to translate the visuals in the virtual environment into images that can be rendered onto a display.
- a typical graphics pipeline may include performing certain rendering or shading operations on virtual objects in the virtual space, transformation and rasterization of the virtual objects in the scene to produce pixel data suitable for output display, and additional rendering tasks on the pixels (or fragments) before outputting the rendered image on a display.
- Virtual objects of an image are often described in virtual space in terms of shapes known as primitives, which together make the shapes of the objects in the virtual scene.
- objects in a three-dimensional virtual world to be rendered may be reduced to a series of distinct triangle primitives having vertices defined in terms of their coordinates in three-dimensional space, whereby these polygons make up the surfaces of the objects.
- Each polygon may have an associated index that can be used by the graphics processing system to distinguish a given polygon from other polygons.
- each vertex may have an associated index that can be used to distinguish a given vertex from other vertices.
- a graphics pipeline may perform certain operations on these primitives to produce visuals for the virtual scene and transform this data into a two-dimensional format suitable for reproduction by the pixels of the display.
- graphics primitive information is used to refer to data representative of a graphics primitive.
- data includes, but is not limited to, vertex information (e.g., data representing vertex positions or vertex indices) and polygon information, e.g., polygon indices and information that associates particular vertices with particular polygons.
- the GPU may perform rendering tasks by implementing programs commonly known as shaders.
- a typical graphics pipeline may include vertex shaders, which may manipulate certain properties of the primitives on a per-vertex basis, as well as pixel shaders (also known as “fragment shaders”), which operate downstream from the vertex shaders in the graphics pipeline and may manipulate certain values on a per-pixel basis before transmitting the pixel data to a display.
- the fragment shaders may manipulate values relevant to applying textures to primitives.
- the pipeline may also include other shaders at various stages in the pipeline, such as geometry shaders that use the output of the vertex shaders to generate a new set of primitives, as well as compute shaders (CS) which may be implemented by a GPU to perform certain other general computational tasks.
- Graphical display devices having a wide field of view have been developed.
- Such devices include head mounted display (HMD) devices.
- HMD head mounted display
- a small display device is worn on a user's head.
- the display device has a display optic in front of one eye (monocular HMD) or each eye (binocular HMD).
- An HMD device typically includes sensors that can sense the orientation of the device and change the scene shown by the display optics as the user's head moves.
- most stages of rendering scenes for wide FOV displays are performed by planar rendering where all parts of the screen have the same number of pixels per unit area.
- FIG. 1A and FIG. 1B are simplified diagrams illustrating certain parameters of wide field of view (FOV) displays.
- FOV wide field of view
- FIG. 1C illustrates different solid angles for different portions of a wide FOV display.
- FIGS. 2A-2C illustrate examples of the relative importance of pixels in different regions of different wide FOV displays in accordance with aspects of the present disclosure.
- FIG. 2D illustrates an example of different pixel resolution for different regions of a screen of a FOV display in accordance with aspects of the present disclosure.
- FIG. 3A is a block diagram of a graphics processing system in accordance with aspects of the present disclosure.
- FIG. 3B is a block diagram of a graphics processing pipeline in accordance with aspects of the present disclosure.
- FIGS. 4A-4C schematically illustrate an example of varying effective resolution by screen location by changing active color sample count within multiple render targets in accordance with aspects of the present disclosure.
- FIG. 4D is a schematic diagram illustrating an example of a metadata configuration for implementing pixel active sample count varying by screen location in accordance with aspects of the present disclosure.
- FIG. 4E is a schematic diagram illustrating an alternative example of a metadata configuration for implementing pixel active sample count varying by screen location in accordance with aspects of the present disclosure.
- FIGS. 1A-1C illustrate a previously unappreciated problem with large FOV displays.
- FIG. 1A illustrates a 90 degree FOV display
- FIG. 1B illustrates a 114 degree FOV display.
- three dimensional geometry is rendered using a planar projection to the view plane.
- rendering geometry onto a high FOV view plane is very inefficient.
- edge regions 112 and central regions 114 of view plane 101 are the same area but represent very different solid angles, as seen by a viewer 103 . Consequently, pixels near the edge of the screen hold much less meaningful information than pixels near the center.
- these regions have the same number of pixels and the time spent rendering equal sized regions on the screen is the same.
- FIGS. 2A-2C illustrate the relative importance of different portions of a large FOV display in two dimensions for different sized fields of view.
- FIG. 2A expresses the variance in solid angle for each square of a planar checkerboard perpendicular to the direction of view, in the case that the checkerboard subtends an angle of 114 degrees. In other words, it expresses the inefficiency of conventional planar projective rendering to a 114 degree FOV display.
- FIG. 2B expresses the same information for a 90 degree FOV display. In such planar projective rendering, the projection compresses tiles 202 in the image 201 that are at the edges and tiles 203 at the corners into smaller solid angles compared to tiles 204 at the center.
- because each tile in the image 201 has the same number of pixels in screen space, there is an inefficiency factor of roughly 4× for rendering the edge tiles 202 compared to the center tiles 204.
- conventional rendering of the edge tiles 202 therefore involves roughly 4 times as much processing per unit solid angle as for the center tiles 204.
- for the corner tiles 203, the inefficiency factor is roughly 8×.
- averaged over the whole image 201, the inefficiency factor is roughly 2.5×.
- the inefficiency is dependent on the size of the FOV. For example, for the 90 degree FOV display shown in FIG. 2B, the inefficiency factors are roughly 2× for rendering the edge tiles 202, roughly 3× for rendering the corner tiles 203, and roughly 1.7× overall for rendering the image 201.
- Another way of looking at this situation is shown in FIG. 2C, in which the screen 102 has been divided into rectangles of approximately equal "importance" in terms of pixels per unit solid angle subtended. Each rectangle makes roughly the same contribution to the final image as seen through the display.
- the planar projection distorts the importance of edge rectangles 202 and corner rectangles 203 .
- the corner rectangles 203 might make less of a contribution to the center rectangles due to the display optics, which may choose to make the visual density of pixels (as expressed as pixels per solid angle) higher towards the center of the display.
- it would be advantageous for an image 210 for a wide FOV display to have pixel densities that are smaller at edge regions 212, 214, 216, 218 than at center regions 215 and smaller at corner regions 211, 213, 217, and 219 than at the edge regions 212, 214, 216, 218, as shown in FIG. 2D. It would also be advantageous to render a conventional graphical image on the screen of a wide FOV display in a way that gets the same effect as varying the pixel densities across the screen without having to significantly modify the underlying graphical image data or data format or the processing of the data.
- part of the graphics pipeline uses metadata that specifies the number of active color samples per pixel in different regions of the screen.
- the metadata is associated with the screen, not the image.
- the image data is not changed, but in the graphics pipeline pixel shader execution is done only over the active color samples. For example, in the image data there may be four color samples per pixel.
- for a full resolution region of the screen, the metadata may specify the active count to be four, in which case the pixel shader is invoked for all four color samples. In a ¾ resolution region, the active count may be three, in which case the pixel shader is invoked for three of the four color samples (e.g., the first 3).
- in a ½ resolution region, the active count would be two and the pixel shader would be invoked for two of the color samples. In a ¼ resolution region, the active count would be one and the pixel shader would be invoked for only one of the four color samples.
- FIG. 3A illustrates a block diagram of a computer system 300 that may be used to implement graphics processing according to aspects of the present disclosure.
- the system 300 may be an embedded system, mobile phone, personal computer, tablet computer, portable game device, workstation, game console, and the like.
- the system 300 generally may include a central processor unit (CPU) 302 , a graphics processor unit (GPU) 304 , and a memory 308 that is accessible to both the CPU and GPU.
- the CPU 302 and GPU 304 may each include one or more processor cores, e.g., a single core, two cores, four cores, eight cores, or more.
- the memory 308 may be in the form of an integrated circuit that provides addressable memory, e.g., RAM, DRAM, and the like.
- the memory 308 may include graphics memory 328 that may store graphics resources and temporarily store graphics buffers 305 of data for a graphics rendering pipeline.
- the graphics buffers 305 may include, e.g., vertex buffers for storing vertex parameter values, index buffers for holding vertex indices, depth buffers (e.g., Z-buffers) for storing depth values of graphics content, stencil buffers, frame buffers for storing completed frames to be sent to a display, and other buffers.
- the graphics memory 328 is shown as part of the main memory.
- the graphics memory could be a separate component, possibly integrated into the GPU 304 .
- the CPU 302 and GPU 304 may access the memory 308 using a data bus 309 . In some cases, it may be useful for the system 300 to include two or more different buses.
- the memory 308 may contain data that can be accessed by the CPU 302 and GPU 304 .
- the GPU 304 may include a plurality of compute units configured to perform graphics processing tasks in parallel. Each compute unit may include its own dedicated local memory store, such as a local data share.
- the CPU may be configured to execute CPU code 303 C , which may include an application that utilizes graphics, a compiler and a graphics API.
- the graphics API can be configured to issue draw commands to programs implemented by the GPU.
- the CPU code 303 C may also implement physics simulations and other functions.
- the GPU 304 may be configured to operate as discussed above.
- the GPU may execute GPU code 303 G , which may implement shaders, such as compute shaders CS, vertex shaders VS, and pixel shaders PS, as discussed above.
- the system may include one or more buffers 305 , which may include a frame buffer FB.
- the GPU code 303 G may also optionally implement other types of shaders (not shown), such as pixel shaders or geometry shaders.
- Each compute unit may include its own dedicated local memory store, such as a local data share.
- the GPU 304 may include a texture unit 306 configured to perform certain operations for applying textures to primitives as part of a graphics pipeline.
- the CPU code 303 c and GPU code 303 g and other elements of the system 300 are configured so that a rasterization stage of the graphics pipeline receives metadata MD specifying an active sample configuration for a particular region of the display device 316 among a plurality of regions of the display device.
- the rasterization stage receives pixel data for one or more pixels in the particular region.
- the pixel data specifies the same sample count (number of color samples for each pixel) over the entire surface.
- the active sample count is less than or equal to the color sample count and the color sample count is two or more.
- the rasterization stage invokes a pixel shader PS only for active samples.
- the metadata MD specifies different active sample configurations for regions that are to have different pixel sample resolutions (number of pixel samples per unit area of the display). In this way pixel sample resolution can vary for different regions of the display device 316 and the graphics processing load can be reduced for low-resolution regions of the display simply by reducing the active sample count for these regions relative to high resolution regions.
- the metadata MD includes a mask of active samples for each region.
- the GPU 304 performs a logical AND between the mask and the samples covered by a primitive to determine the active samples for the primitive for which the pixel shader is to be invoked.
- the metadata MD specifies an active sample count for a particular region of the display device 316 among a plurality of regions of the display device.
- the active sample count is less than or equal to the color sample count and the color sample count is two or more.
- the rasterization stage invokes a pixel shader PS only for a number of the color samples for the pixel equal to the active sample count, typically sequential color samples starting with the first in some consistently defined sample order.
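- as an illustration of the mask-based variant described in the items above, the following minimal C++ sketch (hypothetical types and names, not taken from the patent) shows how a per-region active-sample mask might be ANDed with a primitive's coverage mask to decide which color samples of a pixel receive pixel shader invocations.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical per-region metadata: one bit per color sample of a pixel.
struct RegionMetadata {
    uint8_t activeSampleMask;   // e.g. 0b1111 = all four color samples active
};

// A color sample is shaded only if it is covered by the primitive AND marked
// active for the screen region that the pixel falls in.
inline uint8_t SamplesToShade(uint8_t primitiveCoverage, RegionMetadata region) {
    return primitiveCoverage & region.activeSampleMask;
}

int main() {
    RegionMetadata halfResEdgeRegion{0b0011};   // only 2 of 4 samples active
    uint8_t coverage = 0b1011;                  // primitive covers samples 0, 1 and 3
    uint8_t shade = SamplesToShade(coverage, halfResEdgeRegion);
    std::printf("shade mask: 0x%x\n", shade);   // 0x3 -> invoke shader for samples 0 and 1
    return 0;
}
```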
- the CPU code 303 c , GPU code 303 g , and texture unit 306 may be further configured to implement certain texture mapping operations to implement modifications to texture mapping operations in conjunction with screen location dependent variable pixel resolution.
- a pixel shader PS and the texture unit 306 can be configured to generate one or more texture coordinates UV per pixel location XY for a primitive to provide a coordinate set for one or more texture mapping operations, calculate gradient values Gr from the texture coordinates UV (possibly including corrections to account for differing sample density over the screen) and determine a level of detail (LOD) for a texture to apply to the primitive.
- LOD level of detail
- certain components of the GPU e.g., certain types of shaders or the texture unit 306 , may be implemented as special purpose hardware, such as an application-specific integrated circuit (ASIC), Field Programmable Gate Array (FPGA), or a system on chip (SoC or SOC).
- ASIC application-specific integrated circuit
- FPGA Field Programmable Gate Array
- SoC system on chip
- an application-specific integrated circuit is an integrated circuit customized for a particular use, rather than intended for general-purpose use.
- a Field Programmable Gate Array is an integrated circuit designed to be configured by a customer or a designer after manufacturing—hence “field-programmable”.
- the FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an ASIC.
- HDL hardware description language
- SoC system on a chip or system on chip
- SOC system on chip
- IC integrated circuit
- a typical SoC includes the following hardware components:
- DMA controllers route data directly between external interfaces and memory, bypassing the processor core and thereby increasing the data throughput of the SoC.
- a typical SoC includes both the hardware components described above, and executable instructions (e.g., software or firmware) that controls the processor core(s), peripherals and interfaces.
- some or all of the functions of the shaders or the texture unit 306 may alternatively be implemented by appropriately configured software instructions executed by a software programmable general purpose computer processor. Such instructions may be embodied in a computer-readable medium, e.g., memory 308 or storage device 315 .
- the system 300 may also include well-known support functions 310 , which may communicate with other components of the system, e.g., via the bus 309 .
- Such support functions may include, but are not limited to, input/output (I/O) elements 311 , power supplies (P/S) 312 , a clock (CLK) 313 and cache 314 .
- the GPU 304 may include its own GPU cache 314 G, and the GPU may be configured so that programs running on the GPU 304 can read-through or write-through the GPU cache 314 G.
- the system 300 may include the display device 316 to present rendered graphics 317 to a user.
- the display device 316 is a separate component that works in conjunction with the system 300 .
- the display device 316 may be in the form of a flat panel display, head mounted display (HMD), cathode ray tube (CRT) screen, projector, or other device that can display visible text, numerals, graphical symbols or images.
- the display 316 is a large field of view (FOV) device having a curved screen.
- the display device 316 displays rendered graphic images 317 processed in accordance with various techniques described herein.
- the system 300 may optionally include a mass storage device 315 such as a disk drive, CD-ROM drive, flash memory, tape drive, or the like to store programs and/or data.
- the system 300 may also optionally include a user interface unit 318 to facilitate interaction between the system 300 and a user.
- the user interface 318 may include a keyboard, mouse, joystick, light pen, game controller, or other device that may be used in conjunction with a graphical user interface (GUI).
- GUI graphical user interface
- the system 300 may also include a network interface 320 to enable the device to communicate with other devices over a network 322 .
- the network 322 may be, e.g., a local area network (LAN), a wide area network such as the internet, a personal area network, such as a Bluetooth network or other type of network.
- the system 300 is configured to implement portions of a graphics rendering pipeline.
- FIG. 3B illustrates an example of a graphics rendering pipeline 330 in accordance with aspects of the present disclosure.
- the rendering pipeline 330 may be configured to render graphics as images that depict a scene having a two-dimensional or preferably three-dimensional geometry in virtual space (sometimes referred to herein as "world space").
- the early stages of the pipeline may include operations performed in virtual space before the scene is rasterized and converted to screen space as a set of discrete picture elements suitable for output on the display device 316 .
- various resources contained in the graphics memory 328 may be utilized at the pipeline stages and inputs and outputs to the stages may be temporarily stored in buffers contained in the graphics memory before the final values of the images are determined.
- the rendering pipeline may operate on input data 332 , which may include one or more virtual objects defined by a set of vertices that are set up in virtual space and have geometry that is defined with respect to coordinates in the scene.
- the early stages of the pipeline may include what is broadly categorized as a vertex processing stage 334 in FIG. 3B , and this may include various computations to process the vertices of the objects in virtual space.
- This may include vertex shading computations 336 , which may manipulate various parameter values of the vertices in the scene, such as position values (e.g., X-Y coordinate and Z-depth values), color values, lighting values, texture coordinates, and the like.
- the vertex shading computations 336 are performed by one or more programmable vertex shaders.
- the vertex processing stage may optionally include additional vertex processing computations, such as tessellation and geometry shader computations 338 which may be optionally used to generate new vertices and new geometries in virtual space.
- the pipeline 330 may then proceed to rasterization processing stages 340 associated with converting the scene geometry into screen space and a set of discrete picture elements, i.e., pixels.
- the virtual space geometry may be transformed to screen space geometry through operations that may essentially compute the projection of the objects and vertices from virtual space to the viewing window (or “viewport”) of the scene.
- the vertices may define a set of primitives.
- the rasterization processing stage 340 depicted in FIG. 3B may include primitive assembly operations 342 , which may set up the primitives defined by each set of vertices in the scene.
- Each vertex may be defined by an index, and each primitive may be defined with respect to these vertex indices, which may be stored in index buffers in the graphics memory 328 .
- the primitives may preferably include at least triangles defined by three vertices each, but may also include point primitives, line primitives, and other polygonal shapes.
- certain primitives may optionally be culled. For example, those primitives whose indices indicate a certain winding order may be considered to be back-facing and may be culled from the scene.
- the rasterization processing stages may include scan conversion operations 344 , which may sample the primitives at each pixel and generate fragments (sometimes referred to as pixels) from the primitives for further processing when the samples are covered by the primitive.
- multiple samples for each pixel are taken within the primitives during the scan conversion operations 344 , which may be used for anti-aliasing purposes.
- different pixels may be sampled differently. For example, some edge pixels may contain a lower sampling density than center pixels to optimize certain aspects of the rendering for certain types of display device 316 , such as head mounted displays (HMDs).
- HMDs head mounted displays
- the fragments (or “pixels”) generated from the primitives during scan conversion 344 may have parameter values that may be interpolated to the locations of the pixels from the vertex parameter values 339 of the vertices of the primitive that created them.
- the rasterization stage 340 may include parameter interpolation operations 346 stage to compute these interpolated fragment parameter values 349 , which may be used as inputs for further processing at the later stages of the pipeline.
- a coarse rasterization 343 can be done to find all the predefined screen subsections (sometimes referred to herein as coarse rasterization tiles) that the primitive overlaps.
- sub-section dependent metadata MD, e.g., an active sample count or other parameters, are received that allow the effective resolution to be modified for that subsection.
- Scan conversion 344 and subsequent processing stages generate the final pixel values by performing pixel processing only on the specified number of active samples for the relevant subsection or subsections.
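- the coarse rasterization step described above can be pictured with the following C++ sketch (hypothetical data layout; the 32×32 tile size is only an example mentioned elsewhere in this document): it finds the coarse tiles overlapped by a primitive's screen-space bounding box so that each tile's metadata, such as its active sample count, can be fetched before fine rasterization of that subsection.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical screen tiling: the screen is divided into coarse tiles
// (e.g. 32x32 pixels), each with its own active color sample count.
struct ScreenTiling {
    int tileSize = 32;
    int tilesX = 0, tilesY = 0;
    std::vector<uint8_t> activeSampleCount;   // one entry per coarse tile
};

// Return the indices of the coarse tiles overlapped by a primitive's
// screen-space bounding box, clamped to the screen.
std::vector<int> OverlappedTiles(const ScreenTiling& t,
                                 float minX, float minY, float maxX, float maxY) {
    int x0 = std::max(0, static_cast<int>(minX) / t.tileSize);
    int y0 = std::max(0, static_cast<int>(minY) / t.tileSize);
    int x1 = std::min(t.tilesX - 1, static_cast<int>(maxX) / t.tileSize);
    int y1 = std::min(t.tilesY - 1, static_cast<int>(maxY) / t.tileSize);
    std::vector<int> tiles;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            tiles.push_back(y * t.tilesX + x);  // index into activeSampleCount
    return tiles;
}
```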
- the graphics pipeline 330 may include further pixel processing operations, indicated generally at 350 in FIG. 3B , to further manipulate the interpolated parameter values 349 and perform further operations determining how the fragments contribute to the final pixel values for display 316 .
- Some of these pixel processing tasks may include pixel shading computations 352 that may be used to further manipulate the interpolated parameter values 349 of the fragments.
- the pixel shading computations may be performed by a programmable pixel shader, and pixel shader invocations 348 may be initiated based on the sampling of the primitives during the rasterization processing stages 340 .
- the pixel shader invocations 348 may also be initiated based on the metadata MD specifying the active sample count for each pixel in the particular region of the display device 316 in which a primitive is to be rendered. For each pixel in the particular region, pixel shader invocations 348 occur only for a number of the color samples for the pixel equal to the active sample count.
- FIG. 4A illustrates an example of how the metadata MD could be configured to specify different active color samples for different regions 401 of the display screen 316 .
- each region may correspond to a fixed size portion of the display.
- each region may correspond to a variable size portion of the display.
- the metadata MD can define each region 401 by ranges of pixels in the vertical and horizontal directions.
- the metadata MD can define each region by coarse rasterization tiles of some size, e.g. 32 pixels ⁇ 32 pixels.
- the metadata associated with a particular region includes information specifying the active color sample count for that region.
- the metadata may be stored in the form of a table in the memory 308 and/or graphics memory 328 .
- each pixel 403 of each region 401 of the display screen 316 can be defined to have 8 depth samples and 4 color samples.
- the metadata for any particular region can specify an active color sample count of 1, 2, 3, or 4 for that region, depending on the desired resolution for that region.
- central regions of the screen 316 are desired to have full resolution, so the metadata MD specifies an active sample count of 4 for those regions.
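- one possible shape for such a metadata table, sketched in C++ under the assumptions stated here (a simple 3×3 grid of regions with a FIG. 4A-style full-resolution center, hypothetical names, an illustrative screen size), is shown below; the depth and color sample counts of the pixel format stay fixed while only the active color sample count varies per region.

```cpp
#include <array>
#include <cstdint>

// The underlying pixel format is the same everywhere on the screen,
// e.g. 8 depth samples and 4 color samples per pixel.
struct PixelFormat {
    static constexpr int kDepthSamples = 8;
    static constexpr int kColorSamples = 4;
};

// Hypothetical metadata for a 3x3 grid of screen regions: the center region
// keeps all 4 color samples active (full resolution), edges keep 2, corners 1.
struct RegionGrid {
    int regionsX = 3, regionsY = 3;
    int screenW = 1920, screenH = 1080;        // illustrative screen size
    std::array<uint8_t, 9> activeColorSamples = {
        1, 2, 1,    // top corners and top edge
        2, 4, 2,    // left edge, center, right edge
        1, 2, 1 };  // bottom corners and bottom edge

    uint8_t ActiveCountForPixel(int px, int py) const {
        int rx = px * regionsX / screenW;      // which column of regions
        int ry = py * regionsY / screenH;      // which row of regions
        return activeColorSamples[ry * regionsX + rx];
    }
};
```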
- the pixel shader invocations 348 are unrolled as a default and depth samples are always written. If the invocations are always unrolled, the pixel shading computations 352 super-sample the pixels by virtue of being invoked for each active color sample, as opposed to multi-sampling them.
- if the metadata MD specifies 4 color samples per pixel for a central region 401 C and 2 color samples per pixel in an edge region 401 E, it equates to 2× higher resolution in the horizontal and vertical directions in the central region 401 C compared to the edge region 401 E.
- FIG. 4B illustrates an example of a single sample per pixel image in which a 2-pixel by 2-pixel quad 406 represents 4 color samples that fully describe four pixels. Depth samples are not differentiated from color samples at this point in the example. For the triangle 405 , 3 of the 4 pixel-locations are covered by the triangle, so this one quad is passed to the pixel-shader PS for pixel shading computations 352 as a single fragment with 3 covered samples. The pixel shader PS shades 4 color samples and stores 3 of them in the frame-buffer FB.
- in FIG. 4C, by contrast, there are four color samples per pixel, so the 16 samples shown only correspond to 2×2 pixels, while in the example in FIG. 4B it was 4×4 pixels.
- This diagram shows the three fragments created when the active pixels are unrolled so that pixel shader invocation occurs on each covered sample; each fragment contains one covered sample.
- the active sample count can be varied by selectively disabling samples specified by the metadata MD. For example, if the upper right and lower left samples in each pixel are rendered inactive, only the one active sample location is covered and therefore only the middle of the three fragments depicted in FIG. 4C would be passed to the pixel-shader PS.
- the metadata is fixed for the optics and FOV of the display 316 .
- FIG. 4D illustrates an example of how the metadata MD could be configured to specify different active pixel samples (or active color samples) for different subsections 401 of the display screen 316 .
- central subsections of the screen 316 are desired to have full resolution, and subsections further from the center have progressively lower resolution.
- each pixel 403 of each region 401 of the display screen 316 can be defined to have a fixed number of depth and color samples, e.g., 8 depth samples and 4 color samples.
- the metadata for any particular region can specify an active color sample count of 1, 2, 3, or 4 for that region, depending on the desired resolution for that region.
- the metadata could vary to implement foveal rendering for eye tracking.
- the system 300 includes hardware for tracking a user's gaze, i.e., where a user's eye is pointing, and relating this information to a corresponding screen location that the user is looking at.
- One example of such hardware could include a digital camera in a known location with respect to the screen of the display device 316 and pointed in the general direction of a user. The digital camera could be part of the user interface 318 or a separate component.
- the CPU code 303 C could include image analysis software that analyzes images from the camera to determine (a) if the user is in the image; (b) if the user is facing the camera; (c) if the user is facing the screen; (d) if the user's eyes are visible; (e) the orientation of the pupils of the user's eyes relative to the user's head; and (f) the orientation of the user's head relative to the camera. From the known position and orientation of the camera with respect to the screen, the orientation of the pupils of the user's eyes relative to the user's head and the orientation of the user's head relative to the camera the image analysis software could determine whether the user is looking at the screen and, if so, screen space coordinates for the portion 401 of the screen the user is looking at.
- the CPU code 303 c could then pass these screen coordinates to the GPU code 303 G , which could determine the subsection or subsections containing the portion 401 .
- the GPU code could then modify the metadata MD accordingly so that the pixel resolution is highest in the subsection or subsections containing the portion 401 and progressively lower in subsections further away from the portion 401 , as shown in FIG. 4E .
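- the foveated case can be sketched as follows in C++ (hypothetical function and thresholds; the radii are illustrative values, not taken from the patent): the per-tile metadata is recomputed whenever the tracked gaze point moves, so that tiles near the gaze keep all color samples active and tiles farther away keep progressively fewer.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Recompute the active color sample count for every coarse tile based on the
// distance of the tile center from the tracked gaze position (in pixels).
void UpdateMetadataForGaze(std::vector<uint8_t>& activeSampleCount,
                           int tilesX, int tilesY, int tileSize,
                           float gazeX, float gazeY) {
    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            float cx = (tx + 0.5f) * tileSize;
            float cy = (ty + 0.5f) * tileSize;
            float d = std::hypot(cx - gazeX, cy - gazeY);
            // Illustrative thresholds: full resolution near the gaze point,
            // progressively lower resolution farther away.
            uint8_t count = d < 200.f ? 4 : d < 400.f ? 3 : d < 700.f ? 2 : 1;
            activeSampleCount[ty * tilesX + tx] = count;
        }
    }
}
```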
- the pixel shading computations 352 may output values to one or more buffers 305 in graphics memory 328 , sometimes referred to as render targets, or if multiple, as multiple render targets (MRTs).
- MRTs allow pixel shaders to optionally output to more than one render target, each with the same screen dimensions but potentially with a different pixel format.
- Render target format limitations often mean that any one render target can only accept up to four independent output values (channels) and that the formats of those four channels are tightly tied to each other.
- MRTs allow a single pixel shader to output many more values in a mix of different formats.
- render targets are “texture-like”, in that they store values per screen space pixel, but, for various performance reasons, render target formats are becoming more specialized in recent hardware generations, sometimes (but not always) requiring what is called a “resolve” to reformat the data before it is compatible with being read in by the texture units.
- the pixel processing 350 may generally culminate in render output operations 356, which may include what are commonly known as raster operations (ROP). The raster operations are simply run multiple times per pixel, once for each render target among the multiple render targets (MRTs).
- the final pixel values 359 may be determined in a frame buffer, which may optionally include merging fragments, applying stencils, depth tests, and certain per sample processing tasks.
- the final pixel values 359 include the collected output to all active render targets (MRTs).
- the GPU 304 uses the final pixel values 359 to make up a finished frame 360 , which may optionally be displayed on the pixels of the display device 316 in real-time.
- the pixel processing operations 350 may also include texture mapping operations 354, which may be performed to some extent by one or more shaders (e.g., pixel shaders PS, compute shaders CS, vertex shaders VS or other types of shaders) and to some extent by the texture units 306.
- the shader computations 352 include calculating texture coordinates UV from screen space coordinates XY, sending the texture coordinates to the texture operations 354, and receiving texture data TX.
- the texture coordinates UV could be calculated from the screen space coordinates XY in an arbitrary fashion, but typically are calculated from interpolated input values or sometimes from the results of previous texture operations.
- Gradients Gr are often directly calculated from quads of texture coordinates by the texture units 306 (Texture Operations hardware units), but can optionally be calculated explicitly by the pixel shader computations 352 and passed to the texture operations 354 rather than relying on the texture units 306 to perform the default calculation.
- the texture operations 356 generally include the following stages, which can be performed by some combination of a pixel shader PS and a texture unit 306 .
- gradient values Gr are calculated from the texture coordinates UV (potentially with corrections for non-orthonormality of the sample locations) and used to determine a level of detail (LOD) for a texture to apply to the primitive.
- LOD level of detail
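- as a rough C++ sketch of that stage (standard mip-map LOD selection, with a hypothetical per-region scale factor standing in for the sample-density correction mentioned above; the names are not from the patent):

```cpp
#include <algorithm>
#include <cmath>

// Select a texture level of detail from the screen-space gradients of the
// texture coordinates UV. The gradients are scaled into texel units and the
// LOD is the log2 of the larger footprint; sampleDensityScale is a stand-in
// for a correction applied where sample density differs across the screen.
float SelectLod(float dudx, float dvdx, float dudy, float dvdy,
                float texWidth, float texHeight, float sampleDensityScale = 1.0f) {
    float fx = std::hypot(dudx * texWidth, dvdx * texHeight);  // texels per pixel in x
    float fy = std::hypot(dudy * texWidth, dvdy * texHeight);  // texels per pixel in y
    float footprint = std::max(fx, fy) * sampleDensityScale;
    return std::log2(std::max(footprint, 1.0f));               // clamp at the base level
}
```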
- Additional aspects of the present disclosure include a graphics processing method, comprising: receiving metadata specifying an active sample configuration for a particular region of the display device among a plurality of regions of the display device; receiving pixel data for one or more pixels in the particular region, wherein the pixel data specifies the same number of color samples for each pixel; and for each pixel in the particular region, invoking a pixel shader only for color samples specified to be active samples by the active sample configuration.
- An additional aspect of the present disclosure includes a graphics processing method in which different regions of a screen of a display device have different pixel resolution.
- Another additional aspect is a computer-readable medium having computer executable instructions embodied therein that, when executed, implement one or both of the foregoing methods.
- a further aspect is an electromagnetic or other signal carrying computer-readable instructions for performing one or both of the foregoing methods.
- An additional further aspect is a computer program product downloadable from a communication network and/or stored on a computer-readable and/or microprocessor-executable medium, characterized in that it comprises program code instructions for implementing one or both of the foregoing methods.
- Another additional further aspect is a graphics processing system configured to implement one or both of the foregoing methods.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Optics & Photonics (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Architecture (AREA)
- Image Generation (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
A graphics processing unit (GPU) is configured to receive metadata specifying an active sample configuration for a particular region of a display device among a plurality of regions of the display device and receive pixel data for one or more pixels in the particular region. The pixel data specifies the same number of color samples for each pixel. For each pixel in the particular region, the GPU invokes a pixel shader only for color samples specified to be active samples by the configuration.
Description
This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. 14/246,064, to Tobias Berghoff, entitled “METHOD FOR EFFICIENT CONSTRUCTION OF HIGH RESOLUTION DISPLAY BUFFERS”, filed the same day as the present application, the entire contents of which are herein incorporated by reference.
This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. 14/246,067, to Tobias Berghoff, entitled “GRAPHICS PROCESSING ENHANCEMENT BY TRACKING OBJECT AND/OR PRIMITIVE IDENTIFIERS”, filed the same day as the present application, the entire contents of which are herein incorporated by reference.
This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. 14/246,068, to Mark Evan Cerny, entitled “GRADIENT ADJUSTMENT FOR TEXTURE MAPPING TO NON-ORTHONORMAL GRID”, filed the same day as the present application, the entire contents of which are herein incorporated by reference.
This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. 14/246,063, to Mark Evan Cerny, entitled “VARYING EFFECTIVE RESOLUTION BY SCREEN LOCATION BY ALTERING RASTERIZATION PARAMETERS”, filed the same day as the present application, the entire contents of which are herein incorporated by reference.
This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. 14/246,066, to Mark Evan Cerny, entitled “VARYING EFFECTIVE RESOLUTION BY SCREEN LOCATION IN GRAPHICS PROCESSING BY APPROXIMATING PROJECTION OF VERTICES ONTO CURVED VIEWPORT”, filed the same day as the present application, the entire contents of which are herein incorporated by reference.
This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. 14/246,062, to Mark Evan Cerny, entitled “GRADIENT ADJUSTMENT FOR TEXTURE MAPPING FOR MULTIPLE RENDER TARGETS WITH RESOLUTION THAT VARIES BY SCREEN LOCATION”, filed the same day as the present application, the entire contents of which are herein incorporated by reference.
FIELD OF THE DISCLOSURE
Aspects of the present disclosure are related to computer graphics. In particular, the present disclosure is related to varying resolution by screen location.
BACKGROUND
Graphics processing typically involves coordination of two processors, a central processing unit (CPU) and a graphics processing unit (GPU). The GPU is a specialized electronic circuit designed to accelerate the creation of images in a frame buffer intended for output to a display. GPUs are used in embedded systems, mobile phones, personal computers, tablet computers, portable game devices, workstations, and game consoles. A GPU is typically designed to be efficient at manipulating computer graphics. GPUs often have a highly parallel processing architecture that makes the GPU more effective than a general-purpose CPU for algorithms where processing of large blocks of data is done in parallel.
The CPU may send the GPU instructions, commonly referred to as draw commands, that instruct the GPU to implement a particular graphics processing task, e.g. render a particular texture that has changed with respect to a previous frame in an image. These draw commands may be coordinated by the CPU with a graphics application programming interface (API) in order to issue graphics rendering commands that correspond to the state of the particular application's virtual environment.
In order to render textures for a particular program, a GPU may perform a series of processing tasks in a “graphics pipeline” to translate the visuals in the virtual environment into images that can be rendered onto a display. A typical graphics pipeline may include performing certain rendering or shading operations on virtual objects in the virtual space, transformation and rasterization of the virtual objects in the scene to produce pixel data suitable for output display, and additional rendering tasks on the pixels (or fragments) before outputting the rendered image on a display.
Virtual objects of an image are often described in virtual space in terms of shapes known as primitives, which together make the shapes of the objects in the virtual scene. For example, objects in a three-dimensional virtual world to be rendered may be reduced to a series of distinct triangle primitives having vertices defined in terms of their coordinates in three-dimensional space, whereby these polygons make up the surfaces of the objects. Each polygon may have an associated index that can be used by the graphics processing system to distinguish a given polygon from other polygons. Likewise, each vertex may have an associated index that can be used to distinguish a given vertex from other vertices. A graphics pipeline may perform certain operations on these primitives to produce visuals for the virtual scene and transform this data into a two-dimensional format suitable for reproduction by the pixels of the display. The term graphics primitive information (or simply “primitive information”), as used herein, is used to refer to data representative of a graphics primitive. Such data includes, but is not limited to, vertex information (e.g., data representing vertex positions or vertex indices) and polygon information, e.g., polygon indices and information that associates particular vertices with particular polygons.
As part of the graphics pipeline, the GPU may perform rendering tasks by implementing programs commonly known as shaders. A typical graphics pipeline may include vertex shaders, which may manipulate certain properties of the primitives on a per-vertex basis, as well as pixel shaders (also known as “fragment shaders”), which operate downstream from the vertex shaders in the graphics pipeline and may manipulate certain values on a per-pixel basis before transmitting the pixel data to a display. The fragment shaders may manipulate values relevant to applying textures to primitives. The pipeline may also include other shaders at various stages in the pipeline, such as geometry shaders that use the output of the vertex shaders to generate a new set of primitives, as well as compute shaders (CS) which may be implemented by a GPU to perform certain other general computational tasks.
Graphical display devices having a wide field of view (FOV) have been developed. Such devices include head mounted display (HMD) devices. In an HMD device, a small display device is worn on a user's head. The display device has a display optic in front of one eye (monocular HMD) or each eye (binocular HMD). An HMD device typically includes sensors that can sense the orientation of the device and change the scene shown by the display optics as the user's head moves. Conventionally, most stages of rendering scenes for wide FOV displays are performed by planar rendering where all parts of the screen have the same number of pixels per unit area.
To provide a realistic experience it is desirable for the graphics presented by a wide FOV display device to be of high quality and efficiently rendered.
It is within this context that the present disclosure arises.
BRIEF DESCRIPTION OF THE DRAWINGS
The teachings of the present disclosure can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
FIG. 1A and FIG. 1B are simplified diagrams illustrating certain parameters of wide field of view (FOV) displays.
FIG. 1C illustrates different solid angles for different portions of a wide FOV display.
FIGS. 2A-2C illustrate examples of the relative importance of pixels in different regions of different wide FOV displays in accordance with aspects of the present disclosure.
FIG. 2D illustrates an example of different pixel resolution for different regions of a screen of a FOV display in accordance with aspects of the present disclosure.
FIG. 3A is a block diagram of a graphics processing system in accordance with aspects of the present disclosure.
FIG. 3B is a block diagram of a graphics processing pipeline in accordance with aspects of the present disclosure.
FIGS. 4A-4C schematically illustrate an example of varying effective resolution by screen location by changing active color sample count within multiple render targets in accordance with aspects of the present disclosure.
FIG. 4D is a schematic diagram illustrating an example of a metadata configuration for implementing pixel active sample count varying by screen location in accordance with aspects of the present disclosure.
FIG. 4E is a schematic diagram illustrating an alternative example of a metadata configuration for implementing pixel active sample count varying by screen location in accordance with aspects of the present disclosure.
Although the following detailed description contains many specific details for the purposes of illustration, anyone of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the invention. Accordingly, the exemplary embodiments of the invention described below are set forth without any loss of generality to, and without imposing limitations upon, the claimed invention.
INTRODUCTION
FIGS. 1A-1C illustrate a previously unappreciated problem with large FOV displays. FIG. 1A illustrates a 90 degree FOV display and FIG. 1B illustrates a 114 degree FOV display. In a conventional large FOV display, three dimensional geometry is rendered using a planar projection to the view plane. However, it turns out that rendering geometry onto a high FOV view plane is very inefficient. As may be seen in FIG. 1C, edge regions 112 and central regions 114 of view plane 101 are the same area but represent very different solid angles, as seen by a viewer 103. Consequently, pixels near the edge of the screen hold much less meaningful information than pixels near the center. When rendering the scene conventionally, these regions have the same number of pixels and the time spent rendering equal sized regions on the screen is the same.
FIGS. 2A-2C illustrate the relative importance of different portions of a large FOV display in two dimensions for different sized fields of view. FIG. 2A expresses the variance in solid angle for each square of a planar checkerboard perpendicular to the direction of view, in the case that the checkerboard subtends an angle of 114 degrees. In other words, it expresses the inefficiency of conventional planar projective rendering to a 114 degree FOV display. FIG. 2B expresses the same information for a 90 degree FOV display. In such planar projective rendering, the projection compresses tiles 202 in the image 201 that are at the edges and tiles 203 at the corners into smaller solid angles compared to tiles 204 at the center. Because of this compression, and the fact that each tile in the image 201 has the same number of pixels in screen space, there is an inefficiency factor of roughly 4× for rendering the edge tiles 202 compared to the center tiles 204. By this it is meant that conventional rendering of the edge tiles 202 involves roughly 4 times as much processing per unit solid angle as for the center tiles 204. For the corner tiles 203, the inefficiency factor is roughly 8×. When averaged over the whole image 201, the inefficiency factor is roughly 2.5×.
The inefficiency is dependent on the size of the FOV. For example, for the 90 degree FOV display shown in FIG. 2B, the inefficiency factors are roughly 2× for rendering the edge tiles 202, roughly 3× for rendering the corner tiles 203, and roughly 1.7× overall for rendering the image 201.
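The factors quoted above can be checked with a small numerical sketch. The following C++ program (not part of the patent; the 8×8 tile layout and the 114 degree FOV are assumptions chosen for illustration) sums the solid angle dΩ = dA/(1 + x² + y²)^(3/2) over each tile of a view plane at unit distance and compares a center tile with an edge tile and a corner tile; the resulting ratios are of the same order as the inefficiency factors described above, with the exact values depending on the FOV and the tile layout.

```cpp
#include <cmath>
#include <cstdio>

// Numerically estimate the solid angle subtended by a rectangular tile of a
// planar view plane at unit distance from the viewer, by summing
// d(omega) = dA / (1 + x^2 + y^2)^(3/2) over sub-samples of the tile.
double TileSolidAngle(double x0, double x1, double y0, double y1, int n = 256) {
    double sum = 0.0;
    double dx = (x1 - x0) / n, dy = (y1 - y0) / n;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double x = x0 + (i + 0.5) * dx;
            double y = y0 + (j + 0.5) * dy;
            sum += dx * dy / std::pow(1.0 + x * x + y * y, 1.5);
        }
    return sum;
}

int main() {
    const double pi = 3.14159265358979323846;
    double half = std::tan(114.0 * 0.5 * pi / 180.0);  // half-extent of a 114 degree FOV plane
    double tile = 2.0 * half / 8.0;                     // assume an 8x8 grid of tiles
    double center = TileSolidAngle(-tile / 2, tile / 2, -tile / 2, tile / 2);
    double edge   = TileSolidAngle(half - tile, half, -tile / 2, tile / 2);
    double corner = TileSolidAngle(half - tile, half, half - tile, half);
    std::printf("center/edge ratio ~ %.1f, center/corner ratio ~ %.1f\n",
                center / edge, center / corner);
    return 0;
}
```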
Another way of looking at this situation is shown in FIG. 2C, in which the screen 102 has been divided into rectangles of approximately equal "importance" in terms of pixels per unit solid angle subtended. Each rectangle makes roughly the same contribution to the final image as seen through the display. One can see how the planar projection distorts the importance of edge rectangles 202 and corner rectangles 203. In fact, the corner rectangles 203 might make less of a contribution to the center rectangles due to the display optics, which may choose to make the visual density of pixels (as expressed as pixels per solid angle) higher towards the center of the display.
Based on the foregoing observations, it would be advantageous for an image 210 for a wide FOV display to have pixel densities that are smaller at edge regions 212, 214, 216, 218 than at center regions 215 and smaller at corner regions 211, 213, 217, and 219 than at the edge regions 212, 214, 216, 218 as shown in FIG. 2D. It would also be advantageous to render a conventional graphical image on the screen of a wide FOV display in a way that gets the same effect as varying the pixel densities across the screen without having to significantly modify the underlying graphical image data or data format or the processing of the data.
According to aspects of the present disclosure these advantages can be obtained in the graphics pipeline by varying the number of active color samples for which pixel shaders are invoked for different regions of the screen of a large FOV display device.
To implement this, part of the graphics pipeline uses metadata that specifies the number of active color samples per pixel in different regions of the screen. The metadata is associated with the screen, not the image. The image data is not changed, but in the graphics pipeline pixel shader execution is done only over the active color samples. For example, in the image data there may be four color samples per pixel. For a full resolution region of the screen, the metadata may specify the active count to be four, in which case, the pixel shader is invoked for all four color samples. In a ¾ resolution region, the active count may be three, in which case, the pixel shader is invoked for three of the four color samples (e.g., the first 3). In a ½ resolution region, the active count would be two and the pixel shader would be invoked for two of the color samples. In a ¼ resolution region, the active count would be one and the pixel shader would be invoked for only one of the four color samples.
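By way of a non-limiting illustration (not part of the original description), the count form of this gating could look like the following C++ sketch; shadePixel, shadeColorSample, and activeCountForRegion are hypothetical names, and the simple center/edge rule inside activeCountForRegion is only a placeholder for a real per-region metadata lookup:

```cpp
#include <algorithm>
#include <array>

// Sketch: every pixel carries kColorSamples color samples, but per-region
// metadata (modeled here as a trivial center/edge rule) limits how many of
// them the pixel shader is actually invoked for.
constexpr int kColorSamples = 4;
constexpr int kScreenW = 1024;

// Placeholder for a lookup into the per-region metadata: half resolution
// (2 active samples) at the left/right edges, full resolution in the middle.
int activeCountForRegion(int x, int /*y*/)
{
    const bool edge = (x < kScreenW / 4) || (x >= 3 * kScreenW / 4);
    return edge ? 2 : kColorSamples;
}

struct Pixel { std::array<std::array<float, 4>, kColorSamples> color{}; };

// Stand-in for one pixel shader invocation on one color sample.
void shadeColorSample(Pixel& p, int s) { p.color[s] = {1.f, 0.f, 0.f, 1.f}; }

void shadePixel(Pixel& p, int x, int y)
{
    const int active = std::min(activeCountForRegion(x, y), kColorSamples);
    for (int s = 0; s < active; ++s)   // samples beyond the active count are skipped
        shadeColorSample(p, s);
}
```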
System and Apparatus
Aspects of the present disclosure include graphics processing systems that are configured to implement graphics processing with variable pixel sample resolution. By way of example, and not by way of limitation, FIG. 3A illustrates a block diagram of a computer system 300 that may be used to implement graphics processing according to aspects of the present disclosure. According to aspects of the present disclosure, the system 300 may be an embedded system, mobile phone, personal computer, tablet computer, portable game device, workstation, game console, and the like.
The system 300 generally may include a central processor unit (CPU) 302, a graphics processor unit (GPU) 304, and a memory 308 that is accessible to both the CPU and GPU. The CPU 302 and GPU 304 may each include one or more processor cores, e.g., a single core, two cores, four cores, eight cores, or more. The memory 308 may be in the form of an integrated circuit that provides addressable memory, e.g., RAM, DRAM, and the like. The memory 308 may include graphics memory 328 that may store graphics resources and temporarily store graphics buffers 305 of data for a graphics rendering pipeline. The graphics buffers 305 may include, e.g., vertex buffers for storing vertex parameter values, index buffers for holding vertex indices, depth buffers (e.g., Z-buffers) for storing depth values of graphics content, stencil buffers, frame buffers for storing completed frames to be sent to a display, and other buffers. In the example shown in FIG. 3A, the graphics memory 328 is shown as part of the main memory. In alternative implementations, the graphics memory could be a separate component, possibly integrated into the GPU 304.
By way of example, and not by way of limitation, the CPU 302 and GPU 304 may access the memory 308 using a data bus 309. In some cases, it may be useful for the system 300 to include two or more different buses. The memory 308 may contain data that can be accessed by the CPU 302 and GPU 304. The GPU 304 may include a plurality of compute units configured to perform graphics processing tasks in parallel. Each compute unit may include its own dedicated local memory store, such as a local data share.
The CPU may be configured to execute CPU code 303C, which may include an application that utilizes graphics, a compiler, and a graphics API. The graphics API can be configured to issue draw commands to programs implemented by the GPU. The CPU code 303C may also implement physics simulations and other functions. The GPU 304 may be configured to operate as discussed above. In particular, the GPU may execute GPU code 303G, which may implement shaders, such as compute shaders CS, vertex shaders VS, and pixel shaders PS, as discussed above. To facilitate passing of data between the compute shaders CS and the vertex shaders VS, the system may include one or more buffers 305, which may include a frame buffer FB. The GPU code 303G may also optionally implement other types of shaders (not shown), such as pixel shaders or geometry shaders. Each compute unit may include its own dedicated local memory store, such as a local data share. The GPU 304 may include a texture unit 306 configured to perform certain operations for applying textures to primitives as part of a graphics pipeline.
According to aspects of the present disclosure, the CPU code 303C and GPU code 303G and other elements of the system 300 are configured so that a rasterization stage of the graphics pipeline receives metadata MD specifying an active sample configuration for a particular region of the display device 316 among a plurality of regions of the display device. The rasterization stage receives pixel data for one or more pixels in the particular region. The pixel data specifies the same sample count (number of color samples for each pixel) over the entire surface. The active sample count is less than or equal to the color sample count, and the color sample count is two or more. For each pixel in the particular region, the rasterization stage invokes a pixel shader PS only for active samples. The metadata MD specifies different active sample configurations for regions that are to have different pixel sample resolutions (number of pixel samples per unit area of the display). In this way pixel sample resolution can vary for different regions of the display device 316, and the graphics processing load can be reduced for low-resolution regions of the display simply by reducing the active sample count for these regions relative to high-resolution regions.
In some implementations, the metadata MD includes a mask of active samples for each region. The GPU 304 performs a logical AND between the mask and the samples covered by a primitive to determine the active samples for the primitive for which the pixel shader is to be invoked.
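A hedged C++ sketch of this mask form follows (not taken from the disclosure; the 4-sample layout and all names are assumptions):

```cpp
#include <cstdint>

// Sketch of the mask form of the metadata: bit i of the region mask set means
// color sample i is active for that screen region. The rasterizer ANDs the
// region mask with the primitive's coverage mask and invokes the pixel shader
// only for the surviving samples.
constexpr int      kColorSamples   = 4;
constexpr uint32_t kAllSamplesMask = (1u << kColorSamples) - 1;

// Stand-in for invoking the pixel shader on one color sample of one pixel.
void shadeSample(int /*sampleIndex*/) {}

void shadeActiveCoveredSamples(uint32_t regionActiveMask, uint32_t coverageMask)
{
    const uint32_t toShade = regionActiveMask & coverageMask & kAllSamplesMask;
    for (int s = 0; s < kColorSamples; ++s)
        if (toShade & (1u << s))
            shadeSample(s);
}

// Example: a half-resolution region with mask 0b0101 (samples 0 and 2 active)
// and a primitive covering samples 0, 1, and 3 (mask 0b1011) shades only sample 0.
```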
In an alternative implementation, the metadata MD specifies an active sample count for a particular region of the display device 316 among a plurality of regions of the display device. The active sample count is less than or equal to the color sample count, and the color sample count is two or more. For each pixel in the particular region, the rasterization stage invokes a pixel shader PS only for a number of the color samples for the pixel equal to the active sample count, typically sequential color samples starting with the first in some consistently defined sample order.
In some implementations, the CPU code 303C, GPU code 303G, and texture unit 306 may be further configured to implement certain modifications to texture mapping operations in conjunction with screen-location-dependent variable pixel resolution. For example, a pixel shader PS and the texture unit 306 can be configured to generate one or more texture coordinates UV per pixel location XY for a primitive to provide a coordinate set for one or more texture mapping operations, calculate gradient values Gr from the texture coordinates UV (possibly including corrections to account for differing sample density over the screen), and determine a level of detail (LOD) for a texture to apply to the primitive.
By way of example, and not by way of limitation, certain components of the GPU, e.g., certain types of shaders or the texture unit 306, may be implemented as special purpose hardware, such as an application-specific integrated circuit (ASIC), Field Programmable Gate Array (FPGA), or a system on chip (SoC or SOC).
As used herein and as is generally understood by those skilled in the art, an application-specific integrated circuit (ASIC) is an integrated circuit customized for a particular use, rather than intended for general-purpose use.
As used herein and as is generally understood by those skilled in the art, a Field Programmable Gate Array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing—hence “field-programmable”. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an ASIC.
As used herein and as is generally understood by those skilled in the art, a system on a chip or system on chip (SoC or SOC) is an integrated circuit (IC) that integrates all components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio-frequency functions—all on a single chip substrate. A typical application is in the area of embedded systems.
A typical SoC includes the following hardware components:
- One or more processor cores (e.g., microcontroller, microprocessor, or digital signal processor (DSP) cores).
- Memory blocks, e.g., read only memory (ROM), random access memory (RAM), electrically erasable programmable read-only memory (EEPROM) and flash memory.
- Timing sources, such as oscillators or phase-locked loops.
- Peripherals, such as counter-timers, real-time timers, or power-on reset generators.
- External interfaces, e.g., industry standards such as universal serial bus (USB), FireWire, Ethernet, universal asynchronous receiver/transmitter (USART), serial peripheral interface (SPI) bus.
- Analog interfaces including analog to digital converters (ADCs) and digital to analog converters (DACs).
- Voltage regulators and power management circuits.
These components are connected by either a proprietary or industry-standard bus. Direct Memory Access (DMA) controllers route data directly between external interfaces and memory, bypassing the processor core and thereby increasing the data throughput of the SoC.
A typical SoC includes both the hardware components described above and executable instructions (e.g., software or firmware) that control the processor core(s), peripherals and interfaces.
According to aspects of the present disclosure, some or all of the functions of the shaders or the texture unit 306 may alternatively be implemented by appropriately configured software instructions executed by a software programmable general purpose computer processor. Such instructions may be embodied in a computer-readable medium, e.g., memory 308 or storage device 315.
The system 300 may also include well-known support functions 310, which may communicate with other components of the system, e.g., via the bus 309. Such support functions may include, but are not limited to, input/output (I/O) elements 311, power supplies (P/S) 312, a clock (CLK) 313 and cache 314. In addition to the cache 314, the GPU 304 may include its own GPU cache 314G, and the GPU may be configured so that programs running on the GPU 304 can read-through or write-through the GPU cache 314G.
The system 300 may include the display device 316 to present rendered graphics 317 to a user. In alternative implementations, the display device 316 is a separate component that works in conjunction with the system 300. The display device 316 may be in the form of a flat panel display, head mounted display (HMD), cathode ray tube (CRT) screen, projector, or other device that can display visible text, numerals, graphical symbols or images. In particularly useful implementations, the display 316 is a large field of view (FOV) device having a curved screen. The display device 316 displays rendered graphic images 317 processed in accordance with various techniques described herein.
The system 300 may optionally include a mass storage device 315 such as a disk drive, CD-ROM drive, flash memory, tape drive, or the like to store programs and/or data. The system 300 may also optionally include a user interface unit 318 to facilitate interaction between the system 300 and a user. The user interface 318 may include a keyboard, mouse, joystick, light pen, game controller, or other device that may be used in conjunction with a graphical user interface (GUI). The system 300 may also include a network interface 320 to enable the device to communicate with other devices over a network 322. The network 322 may be, e.g., a local area network (LAN), a wide area network such as the internet, or a personal area network, such as a Bluetooth network or other type of network. These components may be implemented in hardware, software, or firmware, or some combination of two or more of these.
Graphics Pipeline
According to aspects of the present disclosure, the system 300 is configured to implement portions of a graphics rendering pipeline. FIG. 3B illustrates an example of a graphics rendering pipeline 330 in accordance with aspects of the present disclosure.
The rendering pipeline 330 may be configured to render graphics as images that depict a scene having a two-dimensional or preferably three-dimensional geometry in virtual space (sometimes referred to herein as “world space”). The early stages of the pipeline may include operations performed in virtual space before the scene is rasterized and converted to screen space as a set of discrete picture elements suitable for output on the display device 316. Throughout the pipeline, various resources contained in the graphics memory 328 may be utilized at the pipeline stages, and inputs and outputs to the stages may be temporarily stored in buffers contained in the graphics memory before the final values of the images are determined.
The rendering pipeline may operate on input data 332, which may include one or more virtual objects defined by a set of vertices that are set up in virtual space and have geometry that is defined with respect to coordinates in the scene. The early stages of the pipeline may include what is broadly categorized as a vertex processing stage 334 in FIG. 3B, and this may include various computations to process the vertices of the objects in virtual space. This may include vertex shading computations 336, which may manipulate various parameter values of the vertices in the scene, such as position values (e.g., X-Y coordinate and Z-depth values), color values, lighting values, texture coordinates, and the like. Preferably, the vertex shading computations 336 are performed by one or more programmable vertex shaders. The vertex processing stage may optionally include additional vertex processing computations, such as tessellation and geometry shader computations 338, which may optionally be used to generate new vertices and new geometries in virtual space. Once the vertex processing stage 334 is complete, the scene is defined by a set of vertices, each of which has a set of vertex parameter values 339.
The pipeline 330 may then proceed to rasterization processing stages 340 associated with converting the scene geometry into screen space and a set of discrete picture elements, i.e., pixels. The virtual space geometry may be transformed to screen space geometry through operations that may essentially compute the projection of the objects and vertices from virtual space to the viewing window (or “viewport”) of the scene. The vertices may define a set of primitives.
The rasterization processing stage 340 depicted in FIG. 3B may include primitive assembly operations 342, which may set up the primitives defined by each set of vertices in the scene. Each vertex may be defined by an index, and each primitive may be defined with respect to these vertex indices, which may be stored in index buffers in the graphics memory 328. The primitives may preferably include at least triangles defined by three vertices each, but may also include point primitives, line primitives, and other polygonal shapes. During the primitive assembly stage 342, certain primitives may optionally be culled. For example, those primitives whose indices indicate a certain winding order may be considered to be back-facing and may be culled from the scene.
After primitives are assembled, the rasterization processing stages may include scan conversion operations 344, which may sample the primitives at each pixel and generate fragments (sometimes referred to as pixels) from the primitives for further processing when the samples are covered by the primitive. Optionally, multiple samples for each pixel are taken within the primitives during the scan conversion operations 344, which may be used for anti-aliasing purposes. In certain implementations, different pixels may be sampled differently. For example, some edge pixels may contain a lower sampling density than center pixels to optimize certain aspects of the rendering for certain types of display device 316, such as head mounted displays (HMDs). The fragments (or “pixels”) generated from the primitives during scan conversion 344 may have parameter values that may be interpolated to the locations of the pixels from the vertex parameter values 339 of the vertices of the primitive that created them. The rasterization stage 340 may include parameter interpolation operations 346 to compute these interpolated fragment parameter values 349, which may be used as inputs for further processing at the later stages of the pipeline.
According to aspects of the present disclosure, between primitive assembly 342 and scan conversion 344 certain operations take place that account for the fact that different subsections of the screen have different pixel resolutions. In particular implementations, once the screen locations for the vertices of a primitive are known, a coarse rasterization 343 can be done to find all the predefined screen subsections (sometimes referred to herein as coarse rasterization tiles) that the primitive overlaps. For each subsection that the primitive overlaps, subsection-dependent metadata MD, e.g., an active sample count or other parameters, is received that allows the effective resolution to be modified for that subsection. Scan conversion 344 and subsequent processing stages generate the final pixel values by performing pixel processing only on the specified number of active samples for the relevant subsection or subsections.
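A minimal C++ sketch of this coarse step, assuming 32-pixel square subsections, a 1920×1080 screen, and a conservative bounding-box overlap test (a real coarse rasterizer would also test the triangle's edges); all names and dimensions are illustrative:

```cpp
#include <algorithm>
#include <vector>

// Find every fixed-size screen subsection (coarse tile) overlapped by a
// triangle's screen-space bounding box; the returned indices would be used to
// fetch that subsection's metadata (e.g., its active sample count).
constexpr int kTile = 32;
constexpr int kScreenW = 1920, kScreenH = 1080;
constexpr int kTilesX = (kScreenW + kTile - 1) / kTile;
constexpr int kTilesY = (kScreenH + kTile - 1) / kTile;

struct Vec2 { float x, y; };

std::vector<int> overlappedTiles(Vec2 v0, Vec2 v1, Vec2 v2)
{
    const float minX = std::min({v0.x, v1.x, v2.x}), maxX = std::max({v0.x, v1.x, v2.x});
    const float minY = std::min({v0.y, v1.y, v2.y}), maxY = std::max({v0.y, v1.y, v2.y});

    const int tx0 = std::clamp(int(minX) / kTile, 0, kTilesX - 1);
    const int tx1 = std::clamp(int(maxX) / kTile, 0, kTilesX - 1);
    const int ty0 = std::clamp(int(minY) / kTile, 0, kTilesY - 1);
    const int ty1 = std::clamp(int(maxY) / kTile, 0, kTilesY - 1);

    std::vector<int> tiles;
    for (int ty = ty0; ty <= ty1; ++ty)
        for (int tx = tx0; tx <= tx1; ++tx)
            tiles.push_back(ty * kTilesX + tx);   // index into the per-tile metadata
    return tiles;
}
```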
The graphics pipeline 330 may include further pixel processing operations, indicated generally at 350 in FIG. 3B, to further manipulate the interpolated parameter values 349 and perform further operations determining how the fragments contribute to the final pixel values for display 316. Some of these pixel processing tasks may include pixel shading computations 352 that may be used to further manipulate the interpolated parameter values 349 of the fragments. The pixel shading computations may be performed by a programmable pixel shader, and pixel shader invocations 348 may be initiated based on the sampling of the primitives during the rasterization processing stages 340. As noted above, the pixel shader invocations 348 may also be initiated based on the metadata MD specifying the active sample count for each pixel in the particular region of the display device 316 in which a primitive is to be rendered. For each pixel in the particular region, pixel shader invocations 348 occur only for a number of the color samples for the pixel equal to the active sample count.
FIG. 4A illustrates an example of how the metadata MD could be configured to specify different active color samples for different regions 401 of the display screen 316. In some implementations, each region may correspond to a fixed size portion of the display. In other implementations, each region may correspond to a variable size portion of the display. In further implementations, the metadata MD can define each region 401 by ranges of pixels in the vertical and horizontal directions. In yet further implementations, the metadata MD can define each region by coarse rasterization tiles of some size, e.g., 32 pixels×32 pixels. The metadata associated with a particular region includes information specifying the active color sample count for that region. By way of example and not by way of limitation, the metadata may be stored in the form of a table in the memory 308 and/or graphics memory 328.
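For instance, as an assumption-laden sketch rather than a required layout, such a table could store one active color sample count per 32×32-pixel coarse tile and be indexed from a pixel position as follows (screen dimensions and names are illustrative):

```cpp
#include <array>
#include <cstdint>

// One byte per 32x32-pixel coarse tile holding that region's active color
// sample count (1..4), laid out row-major across the screen.
constexpr int kTile = 32;
constexpr int kScreenW = 1920, kScreenH = 1080;
constexpr int kTilesX = (kScreenW + kTile - 1) / kTile;
constexpr int kTilesY = (kScreenH + kTile - 1) / kTile;

using MetadataTable = std::array<uint8_t, kTilesX * kTilesY>;

uint8_t activeColorSampleCount(const MetadataTable& md, int pixelX, int pixelY)
{
    const int tx = pixelX / kTile;
    const int ty = pixelY / kTile;
    return md[ty * kTilesX + tx];
}
```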
By way of example, and not by way of limitation, each pixel 403 of each region 401 of the display screen 316 can be defined to have 8 depth samples and 4 color samples. The metadata for any particular region can specify an active color sample count of 1, 2, 3, or 4 for that region, depending on the desired resolution for that region. In the example illustrated in FIG. 4A, central regions of the screen 316 are desired to have full resolution, so the metadata MD specifies an active sample count of 4 for those regions.
In such implementations, much of the processing in the graphics pipeline 330 occurs as normal. For example, primitive assembly 342 and other portions of rasterization processing 340, such as scan conversion 344 and parameter interpolation 346, would be implemented conventionally. The screen would have a single pixel format, e.g., specifying the color sample count, the depth sample count, the location, and other parameters. A new feature of aspects of the present disclosure is that for pixel shader invocation 348, in each region 401 the metadata MD specifies the active color sample count and pixel shaders are invoked a number of times equal to the active color sample count.
In certain implementations, the pixel shader invocations 348 are unrolled as a default and depth samples are always written. If the invocations are always unrolled, the pixel shading computations 352 super-sample the pixels by virtue of being invoked for each active color sample, as opposed to multi-sampling them.
For example, if the metadata MD specifies 4 color samples per pixel for a central region 401C and 2 color samples per pixel in an edge region 401E, it equates to 2× higher resolution in the horizontal and vertical directions in the central region 401C compared to the edge region 401E.
FIG. 4B illustrates an example of a single sample per pixel image in which a 2-pixel by 2-pixel quad 406 represents 4 color samples that fully describe four pixels. Depth samples are not differentiated from color samples at this point in the example. For the triangle 405, 3 of the 4 pixel locations are covered by the triangle, so this one quad is passed to the pixel shader PS for pixel shading computations 352 as a single fragment with 3 covered samples. The pixel shader PS shades 4 color samples and stores 3 of them in the frame buffer FB.
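A hedged C++ sketch of this quad behaviour, with a stand-in shader and illustrative names: all four lanes of the quad are shaded (quads are shaded together so that screen-space derivatives are available), but only the covered lanes are stored:

```cpp
#include <array>

struct Color { float r, g, b, a; };

// Stand-in for a pixel shader invocation on one lane of the 2x2 quad.
Color shadeLane(int laneX, int laneY)
{
    return {float(laneX), float(laneY), 0.f, 1.f};
}

// Shade every lane of the quad, but write only covered lanes to the frame buffer.
void shadeQuad(std::array<std::array<Color, 2>, 2>& frameBuffer,
               const bool covered[2][2])
{
    for (int y = 0; y < 2; ++y)
        for (int x = 0; x < 2; ++x) {
            const Color c = shadeLane(x, y);   // shaded regardless of coverage
            if (covered[y][x])
                frameBuffer[y][x] = c;         // stored only if covered
        }
}
```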
In FIG. 4C, by contrast, there are four color samples per pixel, so the 16 samples shown only correspond to 2×2 pixels, while in the example in FIG. 4B it was 4×4 pixels. This diagram shows the three fragments created when the active pixels are unrolled so that pixel shader invocation occurs on each covered sample; each fragment contains one covered sample.
The active sample count can be varied by selectively disabling samples specified by the metadata MD. For example, if the upper right and lower left samples in each pixel are rendered inactive, only the one active sample location is covered and therefore only the middle of the three fragments depicted in FIG. 4C would be passed to the pixel shader PS.
In some implementations, the metadata is fixed for the optics and FOV of the display 316. An example of such a metadata configuration is shown schematically in FIG. 4D.
FIG. 4D illustrates an example of how the metadata MD could be configured to specify different active pixel samples (or active color samples) for different subsections 401 of the display screen 316. In the example illustrated in FIG. 4D, central subsections of the screen 316 are desired to have full resolution, and subsections further from the center have progressively lower resolution. By way of example, and not by way of limitation, each pixel 403 of each region 401 of the display screen 316 can be defined to have a fixed number of depth and color samples, e.g., 8 depth samples and 4 color samples. The metadata for any particular region can specify an active color sample count of 1, 2, 3, or 4 for that region, depending on the desired resolution for that region.
In alternative implementations, the metadata could vary to implement foveal rendering for eye tracking. In such implementations, the system 300 includes hardware for tracking a user's gaze, i.e., where a user's eye is pointing, and relating this information to a corresponding screen location that the user is looking at. One example of such hardware could include a digital camera in a known location with respect to the screen of the display device 316 and pointed in the general direction of a user. The digital camera could be part of the user interface 318 or a separate component. The CPU code 303C could include image analysis software that analyzes images from the camera to determine (a) if the user is in the image; (b) if the user is facing the camera; (c) if the user is facing the screen; (d) if the user's eyes are visible; (e) the orientation of the pupils of the user's eyes relative to the user's head; and (f) the orientation of the user's head relative to the camera. From the known position and orientation of the camera with respect to the screen, the orientation of the pupils of the user's eyes relative to the user's head, and the orientation of the user's head relative to the camera, the image analysis software could determine whether the user is looking at the screen and, if so, screen space coordinates for the portion 401 of the screen the user is looking at. The CPU code 303C could then pass these screen coordinates to the GPU code 303G, which could determine the subsection or subsections containing the portion 401. The GPU code could then modify the metadata MD accordingly so that the pixel resolution is highest in the subsection or subsections containing the portion 401 and progressively lower in subsections further away from the portion 401, as shown in FIG. 4E.
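A speculative C++ sketch of such a metadata update follows; the region grid size, fall-off radii, and all names are assumptions rather than details taken from the disclosure:

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Rebuild the per-region metadata so regions near the gaze point keep 4 active
// color samples and counts fall off with distance from the gaze point.
constexpr int kRegionsX = 16, kRegionsY = 9;
using MetadataTable = std::array<uint8_t, kRegionsX * kRegionsY>;

MetadataTable buildFoveatedMetadata(float gazeX, float gazeY,       // gaze in [0,1] screen UV
                                    float screenAspect = 16.f / 9.f)
{
    MetadataTable md{};
    for (int ry = 0; ry < kRegionsY; ++ry)
        for (int rx = 0; rx < kRegionsX; ++rx) {
            const float cx = (rx + 0.5f) / kRegionsX;                // region center
            const float cy = (ry + 0.5f) / kRegionsY;
            const float dx = (cx - gazeX) * screenAspect;
            const float dy = cy - gazeY;
            const float dist = std::sqrt(dx * dx + dy * dy);
            // Closest regions: 4 active samples; then 3, 2, and finally 1.
            const uint8_t count = dist < 0.15f ? 4 : dist < 0.3f ? 3 : dist < 0.5f ? 2 : 1;
            md[ry * kRegionsX + rx] = count;
        }
    return md;
}
```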
Referring again to FIG. 3B, the pixel shading computations 352 may output values to one or more buffers 305 in graphics memory 328, sometimes referred to as render targets, or if multiple, as multiple render targets (MRTs). MRTs allow pixel shaders to optionally output to more than one render target, each with the same screen dimensions but potentially with a different pixel format. Render target format limitations often mean that any one render target can only accept up to four independent output values (channels) and that the formats of those four channels are tightly tied to each other. MRTs allow a single pixel shader to output many more values in a mix of different formats. The formats of render targets are “texture-like”, in that they store values per screen space pixel, but, for various performance reasons, render target formats are becoming more specialized in recent hardware generations, sometimes (but not always) requiring what is called a “resolve” to reformat the data before it is compatible with being read in by the texture units.
The pixel processing 350 may generally culminate in render output operations 356, which may include what are commonly known as raster operations (ROP). Raster operations are simply run multiple times per pixel, once for each render target among the multiple render targets (MRTs). During the output operations 356, the final pixel values 359 may be determined in a frame buffer, which may optionally include merging fragments, applying stencils, depth tests, and certain per sample processing tasks. The final pixel values 359 include the collected output to all active render targets (MRTs). The GPU 304 uses the final pixel values 359 to make up a finished frame 360, which may optionally be displayed on the pixels of the display device 316 in real-time.
The output operations 350 may also include texture mapping operations 354, which may be performed to some extent by one or more shaders (e.g., pixel shaders PS, compute shaders CS, vertex shaders VS or other types of shaders) and to some extent by the texture units 306. The shader computations 352 include calculating texture coordinates UV from screen space coordinates XY, sending the texture coordinates to the texture operations 354, and receiving texture data TX. The texture coordinates UV could be calculated from the screen space coordinates XY in an arbitrary fashion, but typically are calculated from interpolated input values or sometimes from the results of previous texture operations. Gradients Gr are often directly calculated from quads of texture coordinates by the texture units 306 (texture operations hardware units), but can optionally be calculated explicitly by the pixel shader computations 352 and passed to the texture operations 354 rather than relying on the texture units 306 to perform the default calculation.
The texture operations 356 generally include the following stages, which can be performed by some combination of a pixel shader PS and a texture unit 306. First, one or more texture coordinates UV per pixel location XY are generated and used to provide a coordinate set for each texture mapping operation. Then, gradient values Gr are calculated from the texture coordinates UV (potentially with corrections for non-orthonormality of the sample locations) and used to determine a level of detail (LOD) for a texture to apply to the primitive.
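For reference, a hedged C++ sketch of the conventional, uncorrected gradient-to-LOD step (the per-region sample-density corrections contemplated above are deliberately omitted; names are illustrative):

```cpp
#include <algorithm>
#include <cmath>

// Gradients are finite differences of texture coordinates across a 2x2 quad,
// and the isotropic LOD is log2 of the larger screen-space footprint,
// expressed in texels of the base mip level.
struct UV { float u, v; };

float textureLod(UV quad[2][2], float texWidth, float texHeight)
{
    // du/dx, dv/dx from the horizontal neighbours; du/dy, dv/dy from the vertical ones.
    const float dudx = (quad[0][1].u - quad[0][0].u) * texWidth;
    const float dvdx = (quad[0][1].v - quad[0][0].v) * texHeight;
    const float dudy = (quad[1][0].u - quad[0][0].u) * texWidth;
    const float dvdy = (quad[1][0].v - quad[0][0].v) * texHeight;

    const float lenX = std::sqrt(dudx * dudx + dvdx * dvdx);
    const float lenY = std::sqrt(dudy * dudy + dvdy * dvdy);
    return std::log2(std::max({lenX, lenY, 1e-8f}));
}
```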
Additional Aspects
Additional aspects of the present disclosure include a graphics processing method, comprising: receiving metadata specifying an active sample configuration for a particular region of the display device among a plurality of regions of the display device; receiving pixel data for one or more pixels in the particular region, wherein the pixel data specifies the same number of color samples for each pixel; and for each pixel in the particular region, invoking a pixel shader only for color samples specified to be active samples by the active sample configuration.
An additional aspect of the present disclosure includes a graphics processing method in which different regions of a screen of a display device have different pixel resolution.
Another additional aspect is a computer-readable medium having computer executable instructions embodied therein that, when executed, implement one or both of the foregoing methods.
A further aspect is an electromagnetic or other signal carrying computer-readable instructions for performing one or both of the foregoing methods.
An additional further aspect is a computer program product downloadable from a communication network and/or stored on a computer-readable and/or microprocessor-executable medium, characterized in that it comprises program code instructions for implementing one or both of the foregoing methods.
Another additional further aspect is a graphics processing system configured to implement one or both of the foregoing methods.
While the above is a complete description of the preferred embodiment of the present invention, it is possible to use various alternatives, modifications and equivalents. Therefore, the scope of the present invention should be determined not with reference to the above description but should, instead, be determined with reference to the appended claims, along with their full scope of equivalents. Any feature described herein, whether preferred or not, may be combined with any other feature described herein, whether preferred or not. In the claims that follow, the indefinite article “A”, or “An” refers to a quantity of one or more of the item following the article, except where expressly stated otherwise. The appended claims are not to be interpreted as including means-plus-function limitations, unless such a limitation is explicitly recited in a given claim using the phrase “means for.”
Claims (33)
1. A method for graphics processing with a graphics processing system having a graphics processing unit coupled to a display device, comprising: receiving metadata specifying an active sample configuration for a particular region of a screen of the display device among a plurality of regions of the screen; receiving pixel data for one or more pixels of an image in the particular region, wherein the pixel data specifies the same number of color samples for each pixel; and for each pixel in the particular region that is covered by a primitive, invoking a pixel shader only for color samples for the pixel specified to be active samples by the active sample configuration, wherein invoking the pixel shader includes unrolling pixel shader invocations by default, wherein pixel shading computations of the pixel shader super-sample each pixel by virtue of being invoked for each active color sample of each pixel in the particular region that is covered by the primitive, wherein the metadata specifies different active sample configurations for regions of the screen that have different resolutions.
2. The method of
claim 1, wherein the metadata specifies a mask of active samples for the particular region and wherein invoking the pixel shader only for active samples includes performing a logical AND between the mask and a set of samples covered by a primitive to determine the active samples for the primitive for which the pixel shader is to be invoked.
3. The method of
claim 1, wherein the metadata specifies an active sample count for the particular region of the plurality of regions; wherein the active sample count is less than or equal to the number of color samples, wherein the number of color samples is two or more;
wherein invoking the pixel shader only for active samples includes for each pixel in the particular region, invoking a pixel shader only for a number of the color samples for the pixel equal to the active sample count.
4. The method of
claim 1, wherein the metadata is configured such that an active sample count for one or more regions of the plurality located near a center of the screen is greater than an active sample count for one or more regions of the plurality located near a periphery of the screen.
5. The method of
claim 1, wherein the metadata is configured such that an active sample count for one or more regions of the plurality located near a center of the screen is greater than an active sample count for one or more regions of the plurality located near an edge of the screen and wherein the active count for the one or more regions located near the edge of the screen is greater than an active count for one or more regions of the plurality located near a corner of the screen.
6. The method of
claim 1, wherein the display device is characterized by a field of view of 90 degrees or more.
7. The method of
claim 1, wherein the display device is a head-mounted display device.
8. The method of
claim 1, further comprising determining a portion of a screen of the display device that a user is looking at and wherein the metadata is configured to vary the pixel resolution such that pixel resolution is highest for one or more subsections of the screen containing the portion the user is looking at.
9. The method of
claim 1, wherein the metadata is static for given optics and a given field of view of the display device.
10. The method of
claim 1, wherein the metadata is configured to specify different active color samples for different regions of the screen.
11. The method of
claim 1, wherein, each region of the plurality corresponds to a fixed size portion of the screen.
12. The method of
claim 1, wherein each region of the plurality corresponds to a variable size portion of the screen.
13. The method of
claim 1, wherein the metadata defines each region of the plurality of regions by ranges of pixels in vertical and horizontal directions.
14. The method of
claim 1, wherein the metadata defines each region of the plurality by coarse rasterization tiles of some size.
15. The method of
claim 1, wherein a portion of the metadata associated with a particular region of the plurality includes information specifying an active color sample count for the particular region.
16. The method of
claim 1, wherein the metadata is stored in the form of a table in a memory and/or graphics memory.
17. A system for graphics processing, comprising
a graphics processing unit (GPU) configured to
receive metadata specifying an active sample configuration for a particular region of a screen of a display device among a plurality of regions of the screen;
receive pixel data for one or more pixels of an image in the particular region, wherein the pixel data specifies the same number of color samples for each pixel; and
for each pixel in the particular region that is covered by a primitive, invoke a pixel shader only for color samples for the pixel specified to be active samples by the active sample configuration, wherein invoking the pixel shader includes unrolling pixel shader invocations by default, wherein pixel shading computations of the pixel shader super-sample each pixel by virtue of being invoked for each active color sample of each pixel in the particular region that is covered by the primitive, and wherein the metadata specifies different active sample configurations for regions of the screen that have different resolutions.
18. The system of
claim 17, wherein the metadata specifies a mask of active samples for the particular region and wherein the graphics processing unit is configured to invoke the pixel shader only for active samples by performing a logical AND between the mask and a set of samples covered by a primitive to determine the active samples for the primitive for which the pixel shader is to be invoked.
19. The system of
claim 17, wherein the metadata specifies an active sample count for the particular region of the plurality of regions, wherein the active sample count is less than or equal to the number of color samples, wherein the number of color samples is two or more; and
wherein the GPU is configured to invoke the pixel shader only for active samples includes for each pixel in the particular region by invoking a pixel shader only for a number of the color samples for the pixel equal to the active sample count.
20. The system of
claim 17, wherein the metadata is configured such that an active sample count for one or more regions of the plurality located near a center of the screen is greater than an active sample count for one or more regions of the plurality located near a periphery of the screen.
21. The system of
claim 17, wherein the metadata is configured such that an active sample count for one or more regions of the plurality located near a center of the screen is greater than an active sample count for one or more regions of the plurality located near an edge of the screen and wherein the active count for the one or more regions located near the edge of the screen is greater than an active count for one or more regions of the plurality located near a corner of the screen.
22. The system of
claim 17, further comprising the display device, wherein the display device is characterized by a field of view of 90 degrees or more.
23. The system of
claim 17, further comprising the display device, wherein the display device is a head-mounted display device.
24. The system of
claim 17, wherein the system is configured to determine a portion of a screen of the display device that a user is looking at and wherein the metadata is configured to vary the pixel resolution such that pixel resolution is highest for one or more subsections of the screen containing the portion the user is looking at.
25. The system of
claim 17, wherein the system is configured to use static metadata for given optics and field of view of the display device.
26. The system of
claim 17, wherein the metadata is configured to specify different active color samples for different regions of the screen.
27. The system of
claim 17, wherein, each region of the plurality corresponds to a fixed size portion of the screen.
28. The system of
claim 17, wherein each region of the plurality corresponds to a variable size portion of the screen.
29. The system of
claim 17, wherein the metadata defines each region of the plurality of regions by ranges of pixels in vertical and horizontal directions.
30. The system of
claim 17, wherein the metadata defines each region of the plurality by coarse rasterization tiles of some size.
31. The system of
claim 17, wherein a portion of the metadata associated with a particular region of the plurality includes information specifying an active color sample count for the particular region.
32. The system of
claim 17, further comprising a memory and/or graphics memory, wherein the metadata is stored in the form of a table in the memory and/or graphics memory.
33. A non-transitory computer-readable medium having computer executable instructions embodied therein that, when executed, implement a method for graphics processing with a graphics processing system having a graphics processing unit coupled to a display device, the method comprising:
receiving metadata specifying an active sample configuration for a particular region of a screen of the display device among a plurality of regions of the screen;
receiving pixel data for one or more pixels of an image in the particular region, wherein the pixel data specifies the same number of color samples for each pixel; and
for each pixel in the particular region that is covered by a primitive, invoking a pixel shader only for color samples for the pixel specified to be active samples by the active sample configuration, wherein invoking the pixel shader includes unrolling pixel shader invocations by default, wherein pixel shading computations of the pixel shader super-sample each pixel by virtue of being invoked for each active color sample of each pixel in the particular region that is covered by the primitive, and wherein the metadata specifies different active sample configurations for regions of the screen that have different resolutions.
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/246,061 US10068311B2 (en) | 2014-04-05 | 2014-04-05 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
EP15773599.4A EP3129979A4 (en) | 2014-04-05 | 2015-03-23 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
JP2016560642A JP6652257B2 (en) | 2014-04-05 | 2015-03-23 | Varying the effective resolution with screen position by changing the active color sample count in multiple render targets |
PCT/US2015/021971 WO2015153165A1 (en) | 2014-04-05 | 2015-03-23 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
KR1020167027635A KR101922482B1 (en) | 2014-04-05 | 2015-03-23 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
US16/119,274 US10614549B2 (en) | 2014-04-05 | 2018-08-31 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
JP2020005290A JP7033617B2 (en) | 2014-04-05 | 2020-01-16 | Varying the effective resolution depending on the position of the screen by changing the active color sample count within multiple render targets |
US16/807,044 US11302054B2 (en) | 2014-04-05 | 2020-03-02 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
JP2020046386A JP7004759B2 (en) | 2014-04-05 | 2020-03-17 | Varying the effective resolution depending on the position of the screen by changing the active color sample count within multiple render targets |
JP2021050413A JP7112549B2 (en) | 2014-04-05 | 2021-03-24 | Varying the effective resolution with screen position by changing the active color sample count within multiple render targets |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/246,061 US10068311B2 (en) | 2014-04-05 | 2014-04-05 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/119,274 Continuation US10614549B2 (en) | 2014-04-05 | 2018-08-31 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150287165A1 US20150287165A1 (en) | 2015-10-08 |
US10068311B2 true US10068311B2 (en) | 2018-09-04 |
Family
ID=54210194
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/246,061 Active 2034-09-07 US10068311B2 (en) | 2014-04-05 | 2014-04-05 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
US16/119,274 Active US10614549B2 (en) | 2014-04-05 | 2018-08-31 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/119,274 Active US10614549B2 (en) | 2014-04-05 | 2018-08-31 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
Country Status (5)
Country | Link |
---|---|
US (2) | US10068311B2 (en) |
EP (1) | EP3129979A4 (en) |
JP (4) | JP6652257B2 (en) |
KR (1) | KR101922482B1 (en) |
WO (1) | WO2015153165A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190108643A1 (en) * | 2016-06-06 | 2019-04-11 | Sz Dji Osmo Technology Co., Ltd. | Image processing for tracking |
US10614549B2 (en) | 2014-04-05 | 2020-04-07 | Sony Interactive Entertainment Europe Limited | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
US10685425B2 (en) | 2014-04-05 | 2020-06-16 | Sony Interactive Entertainment LLC | Varying effective resolution by screen location by altering rasterization parameters |
US10783696B2 (en) | 2014-04-05 | 2020-09-22 | Sony Interactive Entertainment LLC | Gradient adjustment for texture mapping to non-orthonormal grid |
US11106928B2 (en) | 2016-06-06 | 2021-08-31 | Sz Dji Osmo Technology Co., Ltd. | Carrier-assisted tracking |
US11302054B2 (en) | 2014-04-05 | 2022-04-12 | Sony Interactive Entertainment Europe Limited | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015154004A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters |
US10037620B2 (en) | 2015-05-29 | 2018-07-31 | Nvidia Corporation | Piecewise linear irregular rasterization |
US11403099B2 (en) | 2015-07-27 | 2022-08-02 | Sony Interactive Entertainment LLC | Backward compatibility by restriction of hardware resources |
US9805176B2 (en) * | 2015-07-30 | 2017-10-31 | Toshiba Tec Kabushiki Kaisha | Shared system and terminal device |
US9799134B2 (en) | 2016-01-12 | 2017-10-24 | Indg | Method and system for high-performance real-time adjustment of one or more elements in a playing video, interactive 360° content or image |
US10915333B2 (en) | 2016-03-30 | 2021-02-09 | Sony Interactive Entertainment Inc. | Deriving application-specific operating parameters for backwards compatiblity |
US10275239B2 (en) | 2016-03-30 | 2019-04-30 | Sony Interactive Entertainment Inc. | Deriving application-specific operating parameters for backwards compatiblity |
US10303488B2 (en) | 2016-03-30 | 2019-05-28 | Sony Interactive Entertainment Inc. | Real-time adjustment of application-specific operating parameters for backwards compatibility |
GB2553353B (en) * | 2016-09-05 | 2021-11-24 | Advanced Risc Mach Ltd | Graphics processing systems and graphics processors |
CN106651991B (en) * | 2016-09-12 | 2023-10-31 | 广州久邦世纪科技有限公司 | Intelligent mapping realization method and system thereof |
GB2560306B (en) * | 2017-03-01 | 2020-07-08 | Sony Interactive Entertainment Inc | Image processing |
CN110520781A (en) | 2017-03-27 | 2019-11-29 | 阿维甘特公司 | Central fovea display can be turned to |
WO2018190826A1 (en) * | 2017-04-12 | 2018-10-18 | Hewlett-Packard Development Company, L.P. | Transfer to head mounted display |
US10685473B2 (en) * | 2017-05-31 | 2020-06-16 | Vmware, Inc. | Emulation of geometry shaders and stream output using compute shaders |
JP2019028368A (en) * | 2017-08-02 | 2019-02-21 | 株式会社ソニー・インタラクティブエンタテインメント | Rendering device, head-mounted display, image transmission method, and image correction method |
CN109388448B (en) * | 2017-08-09 | 2020-08-04 | 京东方科技集团股份有限公司 | Image display method, display system, and computer-readable storage medium |
US12217385B2 (en) | 2017-08-09 | 2025-02-04 | Beijing Boe Optoelectronics Technology Co., Ltd. | Image processing device, method and computer-readable storage medium to determine resolution of processed regions |
US10650586B2 (en) * | 2017-08-10 | 2020-05-12 | Outward, Inc. | Automated mesh generation |
CN115842907A (en) * | 2018-03-27 | 2023-03-24 | 京东方科技集团股份有限公司 | Rendering method, computer product and display device |
EP3598393B1 (en) * | 2018-07-16 | 2023-11-08 | Huawei Technologies Co., Ltd. | Rendering using a per-tile msaa level |
KR102166106B1 (en) * | 2018-11-21 | 2020-10-15 | 스크린엑스 주식회사 | Method and system for generating multifaceted images using virtual camera |
CA3122089A1 (en) | 2018-12-07 | 2020-06-11 | Avegant Corp. | Steerable positioning element |
JP7460637B2 (en) * | 2019-01-07 | 2024-04-02 | エイヴギャント コーポレイション | Control System and Rendering Pipeline |
CN217739617U (en) | 2019-03-29 | 2022-11-04 | 阿维甘特公司 | System for providing steerable hybrid display using waveguides |
NO346115B1 (en) * | 2019-12-20 | 2022-02-28 | Novotech As | 3D rendering |
WO2021142486A1 (en) | 2020-01-06 | 2021-07-15 | Avegant Corp. | A head mounted system with color specific modulation |
GB2591803B (en) | 2020-02-07 | 2022-02-23 | Imagination Tech Ltd | Graphics processing method and system for rendering items of geometry based on their size |
GB2591802B (en) | 2020-02-07 | 2022-03-23 | Imagination Tech Ltd | Graphics processing method and system for rendering items of geometry based on their size |
US20230232005A1 (en) * | 2022-01-18 | 2023-07-20 | Cisco Technology, Inc. | Representing color indices by use of constant partitions |
CN114473709B (en) * | 2022-02-21 | 2023-04-28 | 宁波维真显示科技股份有限公司 | 3DLED trimming processing device and method |
GB2624205B (en) * | 2022-11-10 | 2025-02-12 | Imagination Tech Ltd | Graphics processing system and method of rendering |
Citations (102)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4513317A (en) | 1982-09-28 | 1985-04-23 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Retinally stabilized differential resolution television display |
US5130794A (en) * | 1990-03-29 | 1992-07-14 | Ritchey Kurtis J | Panoramic display system |
US5224208A (en) | 1990-03-16 | 1993-06-29 | Hewlett-Packard Company | Gradient calculation for texture mapping |
US5422653A (en) * | 1993-01-07 | 1995-06-06 | Maguire, Jr.; Francis J. | Passive virtual reality |
US5602391A (en) | 1995-02-23 | 1997-02-11 | Hughes Electronics | Quincunx sampling grid for staring array |
US5777913A (en) * | 1995-12-27 | 1998-07-07 | Ericsson Inc. | Resolution enhancement of fixed point digital filters |
USH1812H (en) * | 1997-10-24 | 1999-11-02 | Sun Microsystems, Inc. | Method for encoding bounding boxes of drawing primitives to be rendered for multi-resolution supersampled frame buffers |
JP2000155850A (en) | 1998-11-20 | 2000-06-06 | Sony Corp | Texture mapping device and rendering device equipped with the same device and information processor |
US6313838B1 (en) | 1998-02-17 | 2001-11-06 | Sun Microsystems, Inc. | Estimating graphics system performance for polygons |
US20020057279A1 (en) | 1999-05-20 | 2002-05-16 | Compaq Computer Corporation | System and method for displaying images using foveal video |
US6417861B1 (en) | 1999-02-17 | 2002-07-09 | Sun Microsystems, Inc. | Graphics system with programmable sample positions |
US6469700B1 (en) | 1998-06-24 | 2002-10-22 | Micron Technology, Inc. | Per pixel MIP mapping and trilinear filtering using scanline gradients for selecting appropriate texture maps |
US20030086603A1 (en) | 2001-09-07 | 2003-05-08 | Distortion Graphics, Inc. | System and method for transforming graphical images |
US20030112238A1 (en) | 2001-10-10 | 2003-06-19 | Cerny Mark Evan | System and method for environment mapping |
US20030112240A1 (en) | 2001-10-10 | 2003-06-19 | Cerny Mark Evan | System and method for point pushing to render polygons in environments with changing levels of detail |
US20030122833A1 (en) | 2001-12-31 | 2003-07-03 | Doyle Peter L. | Efficient graphics state management for zone rendering |
US20030234784A1 (en) | 2002-06-21 | 2003-12-25 | Radek Grzeszczuk | Accelerated visualization of surface light fields |
US20040036692A1 (en) | 2002-08-23 | 2004-02-26 | Byron Alcorn | System and method for calculating a texture-mapping gradient |
US20040169663A1 (en) | 2003-03-01 | 2004-09-02 | The Boeing Company | Systems and methods for providing enhanced vision imaging |
US6804066B1 (en) | 2001-05-23 | 2004-10-12 | University Of Central Florida | Compact lens assembly for the teleportal augmented reality system |
US20040227703A1 (en) | 2003-05-13 | 2004-11-18 | Mcnc Research And Development Institute | Visual display with increased field of view |
US20050017983A1 (en) | 2003-02-20 | 2005-01-27 | Liao Qun Feng | Approximation of level of detail calculation in cubic mapping without attribute delta function |
US6906723B2 (en) | 2001-03-29 | 2005-06-14 | International Business Machines Corporation | Generating partials for perspective corrected texture coordinates in a four pixel texture pipeline |
US20050225670A1 (en) * | 2004-04-02 | 2005-10-13 | Wexler Daniel E | Video processing, such as for hidden surface reduction or removal |
US6967663B1 (en) | 2003-09-08 | 2005-11-22 | Nvidia Corporation | Antialiasing using hybrid supersampling-multisampling |
TWI250785B (en) | 2003-04-28 | 2006-03-01 | Toshiba Corp | Image rendering device and image rendering method |
US20060077209A1 (en) | 2004-10-07 | 2006-04-13 | Bastos Rui M | Pixel center position displacement |
JP2006293627A (en) | 2005-04-08 | 2006-10-26 | Toshiba Corp | Plotting method and plotting device |
US20060256112A1 (en) | 2005-05-10 | 2006-11-16 | Sony Computer Entertainment Inc. | Statistical rendering acceleration |
US20060277520A1 (en) | 2001-09-11 | 2006-12-07 | The Regents Of The University Of California | Method of locating areas in an image such as a photo mask layout that are sensitive to residual processing effects |
US20070018988A1 (en) | 2005-07-20 | 2007-01-25 | Michael Guthe | Method and applications for rasterization of non-simple polygons and curved boundary representations |
US20070165035A1 (en) | 1998-08-20 | 2007-07-19 | Apple Computer, Inc. | Deferred shading graphics pipeline processor having advanced features |
US20070183649A1 (en) | 2004-03-15 | 2007-08-09 | Koninklijke Philips Electronic, N.V. | Image visualization |
US7336277B1 (en) | 2003-04-17 | 2008-02-26 | Nvidia Corporation | Per-pixel output luminosity compensation |
US7339594B1 (en) | 2005-03-01 | 2008-03-04 | Nvidia Corporation | Optimized anisotropic texture sampling |
US20080062164A1 (en) | 2006-08-11 | 2008-03-13 | Bassi Zorawar | System and method for automated calibration and correction of display geometry and color |
US20080106489A1 (en) | 2006-11-02 | 2008-05-08 | Brown Lawrence G | Systems and methods for a head-mounted display |
US20080113792A1 (en) | 2006-11-15 | 2008-05-15 | Nintendo Co., Ltd. | Storage medium storing game program and game apparatus |
US20080129748A1 (en) | 2003-11-19 | 2008-06-05 | Reuven Bakalash | Parallel graphics rendering system supporting parallelized operation of multiple graphics processing pipelines within diverse system architectures |
US7426724B2 (en) | 2004-07-02 | 2008-09-16 | Nvidia Corporation | Optimized chaining of vertex and fragment programs |
JP2008233765A (en) | 2007-03-23 | 2008-10-02 | Toshiba Corp | Image display device and method |
US20090002380A1 (en) | 2006-11-10 | 2009-01-01 | Sony Computer Entertainment Inc. | Graphics Processing Apparatus, Graphics Library Module And Graphics Processing Method |
US7511717B1 (en) | 2005-07-15 | 2009-03-31 | Nvidia Corporation | Antialiasing using hybrid supersampling-multisampling |
TW200919376A (en) | 2007-07-31 | 2009-05-01 | Intel Corp | Real-time luminosity dependent subdivision |
JP2009116550A (en) | 2007-11-05 | 2009-05-28 | Fujitsu Microelectronics Ltd | Plotting processor, plotting processing method and plotting processing program |
US20090141033A1 (en) | 2007-11-30 | 2009-06-04 | Qualcomm Incorporated | System and method for using a secondary processor in a graphics system |
TW201001329A (en) | 2008-03-20 | 2010-01-01 | Qualcomm Inc | Multi-stage tessellation for graphics rendering |
US20100002000A1 (en) | 2008-07-03 | 2010-01-07 | Everitt Cass W | Hybrid Multisample/Supersample Antialiasing |
US20100007662A1 (en) | 2008-06-05 | 2010-01-14 | Arm Limited | Graphics processing systems |
US20100104162A1 (en) | 2008-10-23 | 2010-04-29 | Immersion Corporation | Systems And Methods For Ultrasound Simulation Using Depth Peeling |
US20100110102A1 (en) | 2008-10-24 | 2010-05-06 | Arm Limited | Methods of and apparatus for processing computer graphics |
US20100156919A1 (en) | 2008-12-19 | 2010-06-24 | Xerox Corporation | Systems and methods for text-based personalization of images |
US20100214294A1 (en) | 2009-02-20 | 2010-08-26 | Microsoft Corporation | Method for tessellation on graphics hardware |
WO2010111258A1 (en) | 2009-03-24 | 2010-09-30 | Advanced Micro Devices, Inc. | Method and apparatus for angular invariant texture level of detail generation |
US7876332B1 (en) | 2006-12-20 | 2011-01-25 | Nvidia Corporation | Shader that conditionally updates a framebuffer in a computer graphics system |
US7907792B2 (en) | 2006-06-16 | 2011-03-15 | Hewlett-Packard Development Company, L.P. | Blend maps for rendering an image frame |
US7916155B1 (en) | 2007-11-02 | 2011-03-29 | Nvidia Corporation | Complementary anti-aliasing sample patterns |
US20110090250A1 (en) | 2009-10-15 | 2011-04-21 | Molnar Steven E | Alpha-to-coverage using virtual samples |
US20110090242A1 (en) | 2009-10-20 | 2011-04-21 | Apple Inc. | System and method for demosaicing image data using weighted gradients |
US20110134136A1 (en) | 2009-12-03 | 2011-06-09 | Larry Seiler | Computing Level of Detail for Anisotropic Filtering |
US20110188744A1 (en) | 2010-02-04 | 2011-08-04 | Microsoft Corporation | High dynamic range image generation and rendering |
US20110216069A1 (en) | 2010-03-08 | 2011-09-08 | Gary Keall | Method And System For Compressing Tile Lists Used For 3D Rendering |
US8044956B1 (en) | 2007-08-03 | 2011-10-25 | Nvidia Corporation | Coverage adaptive multisampling |
US8090383B1 (en) | 2004-02-17 | 2012-01-03 | Emigh Aaron T | Method and system for charging for a service based on time spent at a facility |
US20120014576A1 (en) * | 2009-12-11 | 2012-01-19 | Aperio Technologies, Inc. | Signal to Noise Ratio in Digital Pathology Image Analysis |
US20120069021A1 (en) | 2010-09-20 | 2012-03-22 | Samsung Electronics Co., Ltd. | Apparatus and method of early pixel discarding in graphic processing unit |
US8144156B1 (en) | 2003-12-31 | 2012-03-27 | Zii Labs Inc. Ltd. | Sequencer with async SIMD array |
US20120092366A1 (en) | 2010-10-13 | 2012-04-19 | Qualcomm Incorporated | Systems and methods for dynamic procedural texture generation management |
US8207975B1 (en) | 2006-05-08 | 2012-06-26 | Nvidia Corporation | Graphics rendering pipeline that supports early-Z and late-Z virtual machines |
US8228328B1 (en) | 2006-11-03 | 2012-07-24 | Nvidia Corporation | Early Z testing for multiple render targets |
US8233004B1 (en) | 2006-11-06 | 2012-07-31 | Nvidia Corporation | Color-compression using automatic reduction of multi-sampled pixels |
US20120206452A1 (en) | 2010-10-15 | 2012-08-16 | Geisner Kevin A | Realistic occlusion for a head mounted augmented reality display |
US8300059B2 (en) | 2006-02-03 | 2012-10-30 | Ati Technologies Ulc | Method and apparatus for selecting a mip map level based on a min-axis value for texture mapping |
US20120293519A1 (en) | 2011-05-16 | 2012-11-22 | Qualcomm Incorporated | Rendering mode selection in graphics processing units |
US20120293486A1 (en) | 2011-05-20 | 2012-11-22 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US20130021358A1 (en) | 2011-07-22 | 2013-01-24 | Qualcomm Incorporated | Area-based rasterization techniques for a graphics processing system |
US20130063440A1 (en) | 2011-09-14 | 2013-03-14 | Samsung Electronics Co., Ltd. | Graphics processing method and apparatus using post fragment shader |
US20130093766A1 (en) | 2007-08-07 | 2013-04-18 | Nvidia Corporation | Interpolation of vertex attributes in a graphics processor |
US20130114680A1 (en) | 2010-07-21 | 2013-05-09 | Dolby Laboratories Licensing Corporation | Systems and Methods for Multi-Layered Frame-Compatible Video Delivery |
US20130120380A1 (en) | 2011-11-16 | 2013-05-16 | Qualcomm Incorporated | Tessellation in tile-based rendering |
WO2013076994A1 (en) | 2011-11-24 | 2013-05-30 | Panasonic Corporation | Head-mounted display device |
US20130141445A1 (en) | 2011-12-05 | 2013-06-06 | Arm Limited | Methods of and apparatus for processing computer graphics |
US20130265309A1 (en) | 2012-04-04 | 2013-10-10 | Qualcomm Incorporated | Patched shading in graphics processing |
US8581929B1 (en) | 2012-06-05 | 2013-11-12 | Francis J. Maguire, Jr. | Display of light field image data using a spatial light modulator at a focal length corresponding to a selected focus depth |
US20130300740A1 (en) | 2010-09-13 | 2013-11-14 | Alt Software (Us) Llc | System and Method for Displaying Data Having Spatial Coordinates |
US20130342547A1 (en) | 2012-06-21 | 2013-12-26 | Eric LUM | Early sample evaluation during coarse rasterization |
US20140049549A1 (en) | 2012-08-20 | 2014-02-20 | Maxim Lukyanov | Efficient placement of texture barrier instructions |
US20140063016A1 (en) | 2012-07-31 | 2014-03-06 | John W. Howson | Unified rasterization and ray tracing rendering environments |
US20140362101A1 (en) | 2013-06-10 | 2014-12-11 | Sony Computer Entertainment Inc. | Fragment shaders perform vertex shader computations |
US20140362081A1 (en) | 2013-06-10 | 2014-12-11 | Sony Computer Entertainment Inc. | Using compute shaders as front end for vertex shaders |
US20140362102A1 (en) | 2013-06-10 | 2014-12-11 | Sony Computer Entertainment Inc. | Graphics processing hardware for using compute shaders as front end for vertex shaders |
US20140362100A1 (en) | 2013-06-10 | 2014-12-11 | Sony Computer Entertainment Inc. | Scheme for compressing vertex shader output parameters |
US20150089367A1 (en) * | 2013-09-24 | 2015-03-26 | Qnx Software Systems Limited | System and method for forwarding an application user interface |
US20150287158A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters |
US20150287230A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location |
US20150287167A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Varying effective resolution by screen location in graphics processing by approximating projection of vertices onto curved viewport |
US20150287166A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Varying effective resolution by screen location by altering rasterization parameters |
US20150287232A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Gradient adjustment for texture mapping to non-orthonormal grid |
US20160246323A1 (en) | 2015-02-20 | 2016-08-25 | Sony Computer Entertainment America Llc | Backward compatibility through use of spoof clock and fine grain frequency control |
US20170031834A1 (en) | 2015-07-27 | 2017-02-02 | Sony Interactive Entertainment America Llc | Backward compatibility by restriction of hardware resources |
US20170031732A1 (en) | 2015-07-27 | 2017-02-02 | Sony Computer Entertainment America Llc | Backward compatibility by algorithm matching, disabling features, or throttling performance |
US20170123961A1 (en) | 2015-11-02 | 2017-05-04 | Sony Computer Entertainment America Llc | Backward compatibility testing of software in a mode that disrupts timing |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6496187B1 (en) * | 1998-02-17 | 2002-12-17 | Sun Microsystems, Inc. | Graphics system configured to perform parallel sample to pixel calculation |
JP2002503855A (en) * | 1998-02-17 | 2002-02-05 | サン・マイクロシステムズ・インコーポレーテッド | Graphics system with variable resolution supersampling |
US6731298B1 (en) | 2000-10-02 | 2004-05-04 | Nvidia Corporation | System, method and article of manufacture for z-texture mapping |
EP1496475B1 (en) | 2003-07-07 | 2013-06-26 | STMicroelectronics Srl | A geometric processing stage for a pipelined graphic engine, corresponding method and computer program product therefor |
US7728841B1 (en) * | 2005-12-19 | 2010-06-01 | Nvidia Corporation | Coherent shader output for multiple targets |
US10068311B2 (en) | 2014-04-05 | 2018-09-04 | Sony Interactive Entertainment LLC | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
- 2014
  - 2014-04-05 US US14/246,061 patent/US10068311B2/en active Active
- 2015
  - 2015-03-23 EP EP15773599.4A patent/EP3129979A4/en active Pending
  - 2015-03-23 JP JP2016560642A patent/JP6652257B2/en active Active
  - 2015-03-23 KR KR1020167027635A patent/KR101922482B1/en active IP Right Grant
  - 2015-03-23 WO PCT/US2015/021971 patent/WO2015153165A1/en active Application Filing
- 2018
  - 2018-08-31 US US16/119,274 patent/US10614549B2/en active Active
- 2020
  - 2020-01-16 JP JP2020005290A patent/JP7033617B2/en active Active
  - 2020-03-17 JP JP2020046386A patent/JP7004759B2/en active Active
- 2021
  - 2021-03-24 JP JP2021050413A patent/JP7112549B2/en active Active
Patent Citations (120)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4513317A (en) | 1982-09-28 | 1985-04-23 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Retinally stabilized differential resolution television display |
US5224208A (en) | 1990-03-16 | 1993-06-29 | Hewlett-Packard Company | Gradient calculation for texture mapping |
US5130794A (en) * | 1990-03-29 | 1992-07-14 | Ritchey Kurtis J | Panoramic display system |
US5422653A (en) * | 1993-01-07 | 1995-06-06 | Maguire, Jr.; Francis J. | Passive virtual reality |
US5602391A (en) | 1995-02-23 | 1997-02-11 | Hughes Electronics | Quincunx sampling grid for staring array |
US5777913A (en) * | 1995-12-27 | 1998-07-07 | Ericsson Inc. | Resolution enhancement of fixed point digital filters |
USH1812H (en) * | 1997-10-24 | 1999-11-02 | Sun Microsystems, Inc. | Method for encoding bounding boxes of drawing primitives to be rendered for multi-resolution supersampled frame buffers |
US6313838B1 (en) | 1998-02-17 | 2001-11-06 | Sun Microsystems, Inc. | Estimating graphics system performance for polygons |
US6469700B1 (en) | 1998-06-24 | 2002-10-22 | Micron Technology, Inc. | Per pixel MIP mapping and trilinear filtering using scanline gradients for selecting appropriate texture maps |
US20070165035A1 (en) | 1998-08-20 | 2007-07-19 | Apple Computer, Inc. | Deferred shading graphics pipeline processor having advanced features |
JP2000155850A (en) | 1998-11-20 | 2000-06-06 | Sony Corp | Texture mapping device and rendering device equipped with the same device and information processor |
US6476819B1 (en) | 1998-11-20 | 2002-11-05 | Sony Corporation | Apparatus and method for assigning shrinkage factor during texture mapping operations |
US6417861B1 (en) | 1999-02-17 | 2002-07-09 | Sun Microsystems, Inc. | Graphics system with programmable sample positions |
US20020057279A1 (en) | 1999-05-20 | 2002-05-16 | Compaq Computer Corporation | System and method for displaying images using foveal video |
US6906723B2 (en) | 2001-03-29 | 2005-06-14 | International Business Machines Corporation | Generating partials for perspective corrected texture coordinates in a four pixel texture pipeline |
US6804066B1 (en) | 2001-05-23 | 2004-10-12 | University Of Central Florida | Compact lens assembly for the teleportal augmented reality system |
US20030086603A1 (en) | 2001-09-07 | 2003-05-08 | Distortion Graphics, Inc. | System and method for transforming graphical images |
US20060277520A1 (en) | 2001-09-11 | 2006-12-07 | The Regents Of The University Of California | Method of locating areas in an image such as a photo mask layout that are sensitive to residual processing effects |
US7046245B2 (en) | 2001-10-10 | 2006-05-16 | Sony Computer Entertainment America Inc. | System and method for environment mapping |
US7081893B2 (en) | 2001-10-10 | 2006-07-25 | Sony Computer Entertainment America Inc. | System and method for point pushing to render polygons in environments with changing levels of detail |
US20030112240A1 (en) | 2001-10-10 | 2003-06-19 | Cerny Mark Evan | System and method for point pushing to render polygons in environments with changing levels of detail |
US20030112238A1 (en) | 2001-10-10 | 2003-06-19 | Cerny Mark Evan | System and method for environment mapping |
US8031192B2 (en) | 2001-10-10 | 2011-10-04 | Sony Computer Entertainment America Llc | System and method for generating additional polygons within the contours of a rendered object to control levels of detail |
US7786993B2 (en) | 2001-10-10 | 2010-08-31 | Sony Computer Entertainment America Llc | Environment mapping |
US20070002049A1 (en) | 2001-10-10 | 2007-01-04 | Cerny Mark E | System and method for generating additional polygons within the contours of a rendered object to control levels of detail |
US8174527B2 (en) | 2001-10-10 | 2012-05-08 | Sony Computer Entertainment America Llc | Environment mapping |
US20100283783A1 (en) | 2001-10-10 | 2010-11-11 | Mark Evan Cerny | Environment Mapping |
US20060001674A1 (en) | 2001-10-10 | 2006-01-05 | Sony Computer Entertainment America Inc. | Environment mapping |
US20030122833A1 (en) | 2001-12-31 | 2003-07-03 | Doyle Peter L. | Efficient graphics state management for zone rendering |
US20030234784A1 (en) | 2002-06-21 | 2003-12-25 | Radek Grzeszczuk | Accelerated visualization of surface light fields |
US20040036692A1 (en) | 2002-08-23 | 2004-02-26 | Byron Alcorn | System and method for calculating a texture-mapping gradient |
US20050017983A1 (en) | 2003-02-20 | 2005-01-27 | Liao Qun Feng | Approximation of level of detail calculation in cubic mapping without attribute delta function |
JP2004265413A (en) | 2003-03-01 | 2004-09-24 | The Boeing Co. | System and method for giving environmental image to view point of display |
US20040169663A1 (en) | 2003-03-01 | 2004-09-02 | The Boeing Company | Systems and methods for providing enhanced vision imaging |
US7336277B1 (en) | 2003-04-17 | 2008-02-26 | Nvidia Corporation | Per-pixel output luminosity compensation |
US7161603B2 (en) | 2003-04-28 | 2007-01-09 | Kabushiki Kaisha Toshiba | Image rendering device and image rendering method |
TWI250785B (en) | 2003-04-28 | 2006-03-01 | Toshiba Corp | Image rendering device and image rendering method |
US20040227703A1 (en) | 2003-05-13 | 2004-11-18 | Mcnc Research And Development Institute | Visual display with increased field of view |
US6967663B1 (en) | 2003-09-08 | 2005-11-22 | Nvidia Corporation | Antialiasing using hybrid supersampling-multisampling |
US20080129748A1 (en) | 2003-11-19 | 2008-06-05 | Reuven Bakalash | Parallel graphics rendering system supporting parallelized operation of multiple graphics processing pipelines within diverse system architectures |
US8144156B1 (en) | 2003-12-31 | 2012-03-27 | Zii Labs Inc. Ltd. | Sequencer with async SIMD array |
US8090383B1 (en) | 2004-02-17 | 2012-01-03 | Emigh Aaron T | Method and system for charging for a service based on time spent at a facility |
US20070183649A1 (en) | 2004-03-15 | 2007-08-09 | Koninklijke Philips Electronics N.V. | Image visualization |
US20050225670A1 (en) * | 2004-04-02 | 2005-10-13 | Wexler Daniel E | Video processing, such as for hidden surface reduction or removal |
US7426724B2 (en) | 2004-07-02 | 2008-09-16 | Nvidia Corporation | Optimized chaining of vertex and fragment programs |
US20060077209A1 (en) | 2004-10-07 | 2006-04-13 | Bastos Rui M | Pixel center position displacement |
US7339594B1 (en) | 2005-03-01 | 2008-03-04 | Nvidia Corporation | Optimized anisotropic texture sampling |
US7355604B2 (en) | 2005-04-08 | 2008-04-08 | Kabushiki Kaisha Toshiba | Image rendering method and image rendering apparatus using anisotropic texture mapping |
JP2006293627A (en) | 2005-04-08 | 2006-10-26 | Toshiba Corp | Plotting method and plotting device |
US20060256112A1 (en) | 2005-05-10 | 2006-11-16 | Sony Computer Entertainment Inc. | Statistical rendering acceleration |
US7511717B1 (en) | 2005-07-15 | 2009-03-31 | Nvidia Corporation | Antialiasing using hybrid supersampling-multisampling |
US20070018988A1 (en) | 2005-07-20 | 2007-01-25 | Michael Guthe | Method and applications for rasterization of non-simple polygons and curved boundary representations |
US8300059B2 (en) | 2006-02-03 | 2012-10-30 | Ati Technologies Ulc | Method and apparatus for selecting a mip map level based on a min-axis value for texture mapping |
US8207975B1 (en) | 2006-05-08 | 2012-06-26 | Nvidia Corporation | Graphics rendering pipeline that supports early-Z and late-Z virtual machines |
US7907792B2 (en) | 2006-06-16 | 2011-03-15 | Hewlett-Packard Development Company, L.P. | Blend maps for rendering an image frame |
US20080062164A1 (en) | 2006-08-11 | 2008-03-13 | Bassi Zorawar | System and method for automated calibration and correction of display geometry and color |
US20080106489A1 (en) | 2006-11-02 | 2008-05-08 | Brown Lawrence G | Systems and methods for a head-mounted display |
US8228328B1 (en) | 2006-11-03 | 2012-07-24 | Nvidia Corporation | Early Z testing for multiple render targets |
US8233004B1 (en) | 2006-11-06 | 2012-07-31 | Nvidia Corporation | Color-compression using automatic reduction of multi-sampled pixels |
US8149242B2 (en) | 2006-11-10 | 2012-04-03 | Sony Computer Entertainment Inc. | Graphics processing apparatus, graphics library module and graphics processing method |
US20090002380A1 (en) | 2006-11-10 | 2009-01-01 | Sony Computer Entertainment Inc. | Graphics Processing Apparatus, Graphics Library Module And Graphics Processing Method |
US20080113792A1 (en) | 2006-11-15 | 2008-05-15 | Nintendo Co., Ltd. | Storage medium storing game program and game apparatus |
US7876332B1 (en) | 2006-12-20 | 2011-01-25 | Nvidia Corporation | Shader that conditionally updates a framebuffer in a computer graphics system |
JP2008233765A (en) | 2007-03-23 | 2008-10-02 | Toshiba Corp | Image display device and method |
TW200919376A (en) | 2007-07-31 | 2009-05-01 | Intel Corp | Real-time luminosity dependent subdivision |
US8044956B1 (en) | 2007-08-03 | 2011-10-25 | Nvidia Corporation | Coverage adaptive multisampling |
US20130093766A1 (en) | 2007-08-07 | 2013-04-18 | Nvidia Corporation | Interpolation of vertex attributes in a graphics processor |
US7916155B1 (en) | 2007-11-02 | 2011-03-29 | Nvidia Corporation | Complementary anti-aliasing sample patterns |
JP2009116550A (en) | 2007-11-05 | 2009-05-28 | Fujitsu Microelectronics Ltd | Plotting processor, plotting processing method and plotting processing program |
US20090141033A1 (en) | 2007-11-30 | 2009-06-04 | Qualcomm Incorporated | System and method for using a secondary processor in a graphics system |
TW201001329A (en) | 2008-03-20 | 2010-01-01 | Qualcomm Inc | Multi-stage tessellation for graphics rendering |
US20100007662A1 (en) | 2008-06-05 | 2010-01-14 | Arm Limited | Graphics processing systems |
US20100002000A1 (en) | 2008-07-03 | 2010-01-07 | Everitt Cass W | Hybrid Multisample/Supersample Antialiasing |
US20100104162A1 (en) | 2008-10-23 | 2010-04-29 | Immersion Corporation | Systems And Methods For Ultrasound Simulation Using Depth Peeling |
US20100110102A1 (en) | 2008-10-24 | 2010-05-06 | Arm Limited | Methods of and apparatus for processing computer graphics |
US20100156919A1 (en) | 2008-12-19 | 2010-06-24 | Xerox Corporation | Systems and methods for text-based personalization of images |
US20100214294A1 (en) | 2009-02-20 | 2010-08-26 | Microsoft Corporation | Method for tessellation on graphics hardware |
WO2010111258A1 (en) | 2009-03-24 | 2010-09-30 | Advanced Micro Devices, Inc. | Method and apparatus for angular invariant texture level of detail generation |
US8669999B2 (en) | 2009-10-15 | 2014-03-11 | Nvidia Corporation | Alpha-to-coverage value determination using virtual samples |
US20110090250A1 (en) | 2009-10-15 | 2011-04-21 | Molnar Steven E | Alpha-to-coverage using virtual samples |
TW201143466A (en) | 2009-10-20 | 2011-12-01 | Apple Inc | System and method for demosaicing image data using weighted gradients |
US20110090242A1 (en) | 2009-10-20 | 2011-04-21 | Apple Inc. | System and method for demosaicing image data using weighted gradients |
US20110134136A1 (en) | 2009-12-03 | 2011-06-09 | Larry Seiler | Computing Level of Detail for Anisotropic Filtering |
US20120014576A1 (en) * | 2009-12-11 | 2012-01-19 | Aperio Technologies, Inc. | Signal to Noise Ratio in Digital Pathology Image Analysis |
US20110188744A1 (en) | 2010-02-04 | 2011-08-04 | Microsoft Corporation | High dynamic range image generation and rendering |
US20110216069A1 (en) | 2010-03-08 | 2011-09-08 | Gary Keall | Method And System For Compressing Tile Lists Used For 3D Rendering |
US20130114680A1 (en) | 2010-07-21 | 2013-05-09 | Dolby Laboratories Licensing Corporation | Systems and Methods for Multi-Layered Frame-Compatible Video Delivery |
US20130300740A1 (en) | 2010-09-13 | 2013-11-14 | Alt Software (Us) Llc | System and Method for Displaying Data Having Spatial Coordinates |
US20120069021A1 (en) | 2010-09-20 | 2012-03-22 | Samsung Electronics Co., Ltd. | Apparatus and method of early pixel discarding in graphic processing unit |
US20120092366A1 (en) | 2010-10-13 | 2012-04-19 | Qualcomm Incorporated | Systems and methods for dynamic procedural texture generation management |
US20120206452A1 (en) | 2010-10-15 | 2012-08-16 | Geisner Kevin A | Realistic occlusion for a head mounted augmented reality display |
US20120293519A1 (en) | 2011-05-16 | 2012-11-22 | Qualcomm Incorporated | Rendering mode selection in graphics processing units |
US20120293486A1 (en) | 2011-05-20 | 2012-11-22 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US20130021358A1 (en) | 2011-07-22 | 2013-01-24 | Qualcomm Incorporated | Area-based rasterization techniques for a graphics processing system |
US20130063440A1 (en) | 2011-09-14 | 2013-03-14 | Samsung Electronics Co., Ltd. | Graphics processing method and apparatus using post fragment shader |
US20130120380A1 (en) | 2011-11-16 | 2013-05-16 | Qualcomm Incorporated | Tessellation in tile-based rendering |
WO2013076994A1 (en) | 2011-11-24 | 2013-05-30 | Panasonic Corporation | Head-mounted display device |
US20130141445A1 (en) | 2011-12-05 | 2013-06-06 | Arm Limited | Methods of and apparatus for processing computer graphics |
JP2013137756A (en) | 2011-12-05 | 2013-07-11 | Arm Ltd | Method for processing computer graphics and device for processing computer graphics |
US20130265309A1 (en) | 2012-04-04 | 2013-10-10 | Qualcomm Incorporated | Patched shading in graphics processing |
US8581929B1 (en) | 2012-06-05 | 2013-11-12 | Francis J. Maguire, Jr. | Display of light field image data using a spatial light modulator at a focal length corresponding to a selected focus depth |
US20130342547A1 (en) | 2012-06-21 | 2013-12-26 | Eric LUM | Early sample evaluation during coarse rasterization |
US20140063016A1 (en) | 2012-07-31 | 2014-03-06 | John W. Howson | Unified rasterization and ray tracing rendering environments |
US20140049549A1 (en) | 2012-08-20 | 2014-02-20 | Maxim Lukyanov | Efficient placement of texture barrier instructions |
US20140362101A1 (en) | 2013-06-10 | 2014-12-11 | Sony Computer Entertainment Inc. | Fragment shaders perform vertex shader computations |
US20140362081A1 (en) | 2013-06-10 | 2014-12-11 | Sony Computer Entertainment Inc. | Using compute shaders as front end for vertex shaders |
US20140362102A1 (en) | 2013-06-10 | 2014-12-11 | Sony Computer Entertainment Inc. | Graphics processing hardware for using compute shaders as front end for vertex shaders |
US20140362100A1 (en) | 2013-06-10 | 2014-12-11 | Sony Computer Entertainment Inc. | Scheme for compressing vertex shader output parameters |
US20150089367A1 (en) * | 2013-09-24 | 2015-03-26 | Qnx Software Systems Limited | System and method for forwarding an application user interface |
US20150287158A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters |
US20150287230A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location |
US20150287167A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Varying effective resolution by screen location in graphics processing by approximating projection of vertices onto curved viewport |
US20150287166A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Varying effective resolution by screen location by altering rasterization parameters |
US20150287232A1 (en) | 2014-04-05 | 2015-10-08 | Sony Computer Entertainment America Llc | Gradient adjustment for texture mapping to non-orthonormal grid |
US9495790B2 (en) | 2014-04-05 | 2016-11-15 | Sony Interactive Entertainment America Llc | Gradient adjustment for texture mapping to non-orthonormal grid |
US20170061671A1 (en) | 2014-04-05 | 2017-03-02 | Sony Interactive Entertainment America Llc | Gradient adjustment for texture mapping to non-orthonormal grid |
US20160246323A1 (en) | 2015-02-20 | 2016-08-25 | Sony Computer Entertainment America Llc | Backward compatibility through use of spoof clock and fine grain frequency control |
US20170031834A1 (en) | 2015-07-27 | 2017-02-02 | Sony Interactive Entertainment America Llc | Backward compatibility by restriction of hardware resources |
US20170031732A1 (en) | 2015-07-27 | 2017-02-02 | Sony Computer Entertainment America Llc | Backward compatibility by algorithm matching, disabling features, or throttling performance |
US20170123961A1 (en) | 2015-11-02 | 2017-05-04 | Sony Computer Entertainment America Llc | Backward compatibility testing of software in a mode that disrupts timing |
Non-Patent Citations (75)
Title |
---|
Co-Pending U.S. Appl. No. 14/246,061, to Tobias Berghoff, filed Apr. 5, 2014. |
Co-Pending U.S. Appl. No. 14/246,062, to Mark Evan Cerny, filed Apr. 5, 2014. |
Co-Pending U.S. Appl. No. 14/246,063, to Mark Evan Cerny, filed Apr. 5, 2014. |
Co-Pending U.S. Appl. No. 14/246,064, to Tobias Berghoff, filed Apr. 5, 2014. |
Co-Pending U.S. Appl. No. 14/246,066, to Mark Evan Cerny, filed Apr. 5, 2014. |
Co-Pending U.S. Appl. No. 14/246,067, to Tobias Berghoff, filed Apr. 5, 2014. |
Co-Pending U.S. Appl. No. 14/246,068, to Mark Evan Cerny, filed Apr. 5, 2014. |
Extended European Search Report dated Aug. 29, 2017 for European Patent Application No. 15773477.3. |
Extended European Search Report dated Oct. 2, 2017 for European Patent Application No. EP15773048. |
Extended European Search Report dated Sep. 22, 2017 for European Patent Application No. 15772990. |
Extended European Search Report dated Sep. 22, 2017 for European Patent Application No. 15772568.0. |
Final Office Action for U.S. Appl. No. 14/246,062, dated Jul. 15, 2016. |
Final Office Action for U.S. Appl. No. 14/246,063, dated Jun. 21, 2016. |
Final Office Action for U.S. Appl. No. 14/246,064, dated Jul. 11, 2016. |
Final Office Action for U.S. Appl. No. 14/246,066, dated Jul. 20, 2016. |
Final Office Action for U.S. Appl. No. 14/246,067, dated Jun. 17, 2016. |
International Search Report and Written Opinion for International Application No. PCT/US2015/024303, dated Jul. 1, 2015. |
International Search Report and Written Opinion for International Application No. PCT/US2015/21951, dated Jul. 1, 2015. |
International Search Report and Written Opinion for International Application No. PCT/US2015/21956, dated Jul. 1, 2015. |
International Search Report and Written Opinion for International Application No. PCT/US2015/21971, dated Jul. 1, 2015. |
International Search Report and Written Opinion for International Application No. PCT/US2015/21978, dated Jul. 1, 2015. |
International Search Report and Written Opinion for International Application No. PCT/US2015/21984, dated Jul. 1, 2015. |
International Search Report and Written Opinion for International Application No. PCT/US2015/21987, dated Jul. 1, 2015. |
International Search Report and Written Opinion for International Application No. PCT/US2015/021982, dated Nov. 30, 2017. |
International Search Report and Written Opinion for International Application No. PCT/US2015/21982, dated Jul. 1, 2015. |
Jacob Munkberg et al: "Hierarchical stochastic motion blur rasterization", High Performance Graphics, Association for Computing Machinery, Inc., Aug. 5, 2011 (Aug. 5, 2011), pp. 107-118. |
John D. Owens, Mike Houston, David Luebke, Simon Green, John E. Stone, and James C. Phillips, "GPU Computing", Proceedings of the IEEE, May 2008, pp. 879-899. * |
Kayvon Fatahalian et al: "Reducing shading on GPUs using quad-fragment merging", ACM Transactions on Graphics (TOG), ACM, US, vol. 29, No. 4, Jul. 26, 2010, pp. 1-8, XP058157954, ISSN: 0730-0301, DOI: 10.1145/1778765.1778804. |
Marries Van De Hoef et al: "Comparison of multiple rendering techniques", Jun. 4, 2010 (Jun. 9, 2010). |
Marries Van De Hoef et al: "Comparison of mutiple rendering techniques", Jun. 4, 2010 (Jun. 9, 2010). |
Matthaus G. Chajdas, Morgan McGuire, David Luebke; "Subpixel Reconstruction Antialiasing for Deferred Shading" in i3D, Feb. 2011. |
Non-Final Office Action for U.S. Appl. No. 14/246,062, dated Jan. 14, 2016. |
Non-Final Office Action for U.S. Appl. No. 14/246,063, dated Jan. 4, 2016. |
Non-Final Office Action for U.S. Appl. No. 14/246,063, dated Nov. 23, 2016. |
Non-Final Office Action for U.S. Appl. No. 14/246,064, dated Dec. 8, 2016. |
Non-Final Office Action for U.S. Appl. No. 14/246,064, dated Feb. 1, 2015. |
Non-Final Office Action for U.S. Appl. No. 14/246,066, dated Dec. 30, 2016. |
Non-Final Office Action for U.S. Appl. No. 14/246,066, dated Feb. 5, 2016. |
Non-Final Office Action for U.S. Appl. No. 14/246,067, dated Jan. 22, 2016. |
Non-Final Office Action for U.S. Appl. No. 14/246,067, dated Oct. 27, 2016. |
Non-Final Office Action for U.S. Appl. No. 14/246,068, dated Jan. 14, 2016. |
Non-Final Office Action for U.S. Appl. No. 14/678,445, dated Dec. 30, 2016. |
Non-Final Office Action for U.S. Appl. No. 15/587,825, dated Jun. 30, 2017. |
Non-Final Office Action for U.S. Appl. No. 15/717,041, dated Dec. 14, 2017. |
Non-Final/Final Office Action for U.S. Appl. No. 14/246,061, dated Aug. 24, 2017. |
Notice of Allowance dated Jul. 17, 2018 for U.S. Appl. No. 15/717,041. |
Notice of Allowance dated May 31, 2018 for U.S. Appl. No. 15/587,825. |
Notice of Allowance for U.S. Appl. No. 14/246,062, dated Jan. 4, 2017. |
Notice of Allowance for U.S. Appl. No. 14/246,066, dated Aug. 4, 2017. |
Notice of Allowance for U.S. Appl. No. 14/246,068, dated Jul. 15, 2016. |
Notification of Reason(s) for Refusal dated Sep. 12, 2017 for Japanese Patent application No. 2016-559961. |
Office Action dated Aug. 1, 2017 for Korean Patent Application No. 10-2016-7027635. |
Office Action dated Aug. 29, 2017 for TW Application No. 105138883. |
Office Action dated May 28, 2018 for Korean Patent Application No. 2016-7027633. |
Office Action dated Oct. 3, 2017 for JP Application No. 2016-560398. |
Office Action dated Oct. 31, 2017 for Japan Patent Application 2016-560646. |
Office Action dated Sep. 5, 2017 for Japanese Patent Application No. 2016-560642. |
Scott Kircher et al: "Inferred lighting: fast dynamic lighting and shadows for opaque and translucent objects", Sandbox 2009: Proceedings; 4th ACM SIGGRAPH Symposium on Video Games; New Orleans, Louisiana, Aug. 4-6, 2009, ACM, New York. |
Shirman, L., Kamen, Y., "A new look at mipmap level estimation techniques", Computers and Graphics, Elsevier, GB, vol. 23, No. 2, Apr. 1, 1999, pp. 223-231, XP004165786, ISSN: 0097-8493, DOI: 10.1016/S0097-8493(99)00032-1. |
Steve Marschner et al: "Geometry-Aware Framebuffer Level of Detail", Eurographics Symposium on Rendering 2008, Jan. 1, 2008 (Jan. 1, 2008). |
Taiwan Office Action for TW Application No. 104108777, dated Jun. 27, 2016. |
Taiwanese Office Action for TW Application No. 104108773, dated Dec. 22, 2015. |
Taiwanese Office Action for TW Application No. 104108774, dated Sep. 12, 2016. |
U.S. Appl. No. 14/246,062, to Mark Evan Cerny, filed Apr. 5, 2014. |
U.S. Appl. No. 14/246,063, to Mark Evan Cerny, filed Apr. 5, 2014. |
U.S. Appl. No. 14/246,064, to Tobias Berghoff, filed Apr. 5, 2014. |
U.S. Appl. No. 14/246,066, to Mark Evan Cerny, filed Apr. 5, 2014. |
U.S. Appl. No. 14/246,067, to Tobias Berghoff, filed Apr. 5, 2014. |
U.S. Appl. No. 14/246,068, to Mark Evan Cerny, filed Apr. 5, 2014. |
U.S. Appl. No. 14/678,445, to Mark Evan Cerny, filed Apr. 3, 2015. |
U.S. Appl. No. 61/975,774, to Mark Evan Cerny, filed Apr. 5, 2014. |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10614549B2 (en) | 2014-04-05 | 2020-04-07 | Sony Interactive Entertainment Europe Limited | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
US10685425B2 (en) | 2014-04-05 | 2020-06-16 | Sony Interactive Entertainment LLC | Varying effective resolution by screen location by altering rasterization parameters |
US10783696B2 (en) | 2014-04-05 | 2020-09-22 | Sony Interactive Entertainment LLC | Gradient adjustment for texture mapping to non-orthonormal grid |
US11238639B2 (en) | 2014-04-05 | 2022-02-01 | Sony Interactive Entertainment LLC | Gradient adjustment for texture mapping to non-orthonormal grid |
US11302054B2 (en) | 2014-04-05 | 2022-04-12 | Sony Interactive Entertainment Europe Limited | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
US11301956B2 (en) | 2014-04-05 | 2022-04-12 | Sony Interactive Entertainment LLC | Varying effective resolution by screen location by altering rasterization parameters |
US20190108643A1 (en) * | 2016-06-06 | 2019-04-11 | Sz Dji Osmo Technology Co., Ltd. | Image processing for tracking |
US10902609B2 (en) * | 2016-06-06 | 2021-01-26 | Sz Dji Osmo Technology Co., Ltd. | Image processing for tracking |
US11106928B2 (en) | 2016-06-06 | 2021-08-31 | Sz Dji Osmo Technology Co., Ltd. | Carrier-assisted tracking |
US11568626B2 (en) | 2016-06-06 | 2023-01-31 | Sz Dji Osmo Technology Co., Ltd. | Carrier-assisted tracking |
Also Published As
Publication number | Publication date |
---|---|
EP3129979A1 (en) | 2017-02-15 |
JP6652257B2 (en) | 2020-02-19 |
US10614549B2 (en) | 2020-04-07 |
WO2015153165A1 (en) | 2015-10-08 |
JP2017517025A (en) | 2017-06-22 |
JP2020170506A (en) | 2020-10-15 |
US20150287165A1 (en) | 2015-10-08 |
JP7033617B2 (en) | 2022-03-10 |
JP2020091877A (en) | 2020-06-11 |
JP7112549B2 (en) | 2022-08-03 |
KR20170012201A (en) | 2017-02-02 |
JP7004759B2 (en) | 2022-01-21 |
EP3129979A4 (en) | 2017-11-08 |
US20180374195A1 (en) | 2018-12-27 |
JP2021108154A (en) | 2021-07-29 |
KR101922482B1 (en) | 2018-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10614549B2 (en) | 2020-04-07 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
US11301956B2 (en) | 2022-04-12 | Varying effective resolution by screen location by altering rasterization parameters |
US10438319B2 (en) | 2019-10-08 | Varying effective resolution by screen location in graphics processing by approximating projection of vertices onto curved viewport |
US10102663B2 (en) | 2018-10-16 | Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location |
US10438312B2 (en) | 2019-10-08 | Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters |
US11302054B2 (en) | 2022-04-12 | Varying effective resolution by screen location by changing active color sample count within multiple render targets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2014-07-31 | AS | Assignment |
Owner name: SONY COMPUTER ENTERTAINMENT EUROPE LIMITED, UNITED KINGDOM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BERGHOFF, TOBIAS;REEL/FRAME:033440/0138 Effective date: 20140731
Owner name: SONY COMPUTER ENTERTAINMENT AMERICA LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BERGHOFF, TOBIAS;REEL/FRAME:033440/0138 Effective date: 20140731 |
2016-05-05 | AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT AMERICA LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT AMERICA LLC;REEL/FRAME:038626/0637 Effective date: 20160331 |
2017-11-29 | AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT EUROPE LIMITED, UNITED KINGDOM Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT EUROPE LIMITED;REEL/FRAME:044250/0931 Effective date: 20160729 |
2018-07-24 | AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT LLC, CALIFORNIA Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:SONY INTERACTIVE ENTERTAINMENT AMERICA LLC;SONY INTERACTIVE ENTERTAINMENT LLC;REEL/FRAME:046444/0115 Effective date: 20180323 |
2018-08-20 | STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
2022-03-04 | MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |