US20080123748A1 - Compression circuitry for generating an encoded bitstream from a plurality of video frames - Google Patents
- Publication date: Thu May 29 2008
Info
- Publication number: US20080123748A1 (application US 12/020,668)
- Authority: US (United States)
- Prior art keywords: macroblocks, circuit, idct, dct, generating
- Prior art date: 2002-03-18
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- All classifications fall under H—ELECTRICITY, H04—ELECTRIC COMMUNICATION TECHNIQUE, H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION, H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
- H04N19/42—characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/126—details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
- H04N19/13—adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/137—motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/15—data rate or code amount at the encoder output, by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
- H04N19/172—the coding unit being an image region, the region being a picture, frame or field
- H04N19/176—the coding unit being an image region, the region being a block, e.g. a macroblock
- H04N19/61—transform coding in combination with predictive coding
- H04N19/91—entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- The present invention relates to motion picture compression circuits for pictures such as television pictures, and more particularly to a compression circuit complying with H.261 and MPEG standards.
- FIGS. 1A-1C schematically illustrate three methods for compressing motion pictures in accordance with H.261 and MPEG standards.
- According to H.261 standards, pictures may be of intra or predicted type.
- According to MPEG standards, the pictures can also be of bidirectional type.
- Intra (“I”) pictures are not coded with reference to any other pictures.
- Predicted (“P”) pictures are coded with reference to a past intra or past predicted picture.
- Bidirectional (“B”) pictures are coded with reference to both a past picture and a following picture.
- FIG. 1A illustrates the compression of an intra picture I 1 .
- Picture I 1 is stored in a memory area M 1 before being processed.
- The pictures have to be stored initially in a memory since they arrive line by line whereas they are processed square by square, the size of each square being generally 16 by 16 pixels.
- Thus, before starting to process picture I 1 , memory area M 1 must be filled with at least 16 lines.
- A macroblock includes four 8 by 8-pixel luminance blocks and two or four 8 by 8-pixel chrominance blocks. The processes hereinafter described are carried out on blocks of 8 by 8 pixels.
- Each macroblock of picture I 1 is submitted at 10 to a discrete cosine transform (DCT) followed at 11 by a quantization (Q).
- A DCT transforms a matrix of pixels (a block) into a matrix whose upper left corner coefficient tends to have a relatively high value, while the other coefficients decrease rapidly as the position moves down and to the right.
- Quantization involves dividing the coefficients of the transformed matrix so that a large number of the coefficients at a distance from the upper left corner are cancelled.
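These two steps can be sketched in Python. The function names and the single uniform quantization step are illustrative; the standards use fast DCT factorizations and position-dependent quantization matrices:

```python
import math

def dct2(block):
    # Naive 8x8 two-dimensional DCT-II; real encoders use fast factorizations.
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

def quantize_block(coeffs, step):
    # Dividing by the step cancels the small coefficients away from the
    # upper left corner; a single uniform step stands in for the
    # position-dependent quantization matrices of the standards.
    return [[int(round(c / step)) for c in row] for row in coeffs]

flat = [[100] * 8 for _ in range(8)]   # a uniform block of pixels
q = quantize_block(dct2(flat), 16)
# Only the DC (upper left) coefficient survives: q[0][0] == 50.
```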
- The quantized matrices are subjected to zigzag scanning (ZZ) and to run/level coding (RLC).
- Zigzag scanning improves the chances of obtaining consecutive series of zero coefficients, each series being followed by a non-zero coefficient.
- The run/level coding mainly includes replacing each series from the ZZ scanning with a pair of values, one representing the number of successive zero coefficients and the other representing the first following non-zero coefficient.
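A minimal sketch of both steps (the diagonal-sort construction of the scan order is an illustrative shortcut, not the tabulated order defined by the standards):

```python
def zigzag_order(n=8):
    # Diagonal scan order for an n x n block: sort positions by diagonal,
    # alternating the direction of travel on each diagonal.
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def run_level(scanned):
    # Replace each run of zeros plus its following non-zero coefficient
    # with a (run, level) pair; a trailing all-zero run is simply dropped.
    pairs, run = [], 0
    for c in scanned:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    return pairs

scanned = [50, 0, 0, 3, 0, 0, 0, -1] + [0] * 56
print(run_level(scanned))   # [(0, 50), (2, 3), (3, -1)]
```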
- The pairs of values from the RLC are subjected to variable length coding (VLC), which includes replacing the more frequent pairs with short codes and the less frequent pairs with long codes, with the aid of correspondence tables defined by the H.261 and MPEG standards.
- The quantization step can be varied from one block to the next by multiplication by a quantization coefficient. That quantization coefficient is inserted, during variable length coding, in headers preceding the compressed data corresponding to the macroblocks.
- Macroblocks of an intra picture are used to compress macroblocks of a subsequent picture of predicted or bidirectional type.
- Decoding of a predicted or bidirectional picture is therefore achieved from a previously decoded intra picture.
- This previously decoded intra picture does not exactly correspond to the actual picture initially received by the compression circuit, since the initial picture is altered by the quantization at 11 .
- For this reason, the compression of a predicted or bidirectional picture is carried out from a reconstructed intra picture I 1 r rather than from the real intra picture I 1 , so that decoding is carried out under the same conditions as encoding.
- The reconstructed intra picture I 1 r is stored in a memory area M 2 and is obtained by subjecting the macroblocks provided by the quantization 11 to the reverse processing, that is, at 15 an inverse quantization (Q −1 ) followed at 16 by an inverse DCT (DCT −1 ).
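The reason the reconstruction loop is needed is that quantization is lossy, as a one-coefficient sketch shows (the step value 16 is arbitrary):

```python
def quant(value, step):
    # Forward quantization: divide and round to the nearest integer index.
    return int(round(value / step))

def dequant(index, step):
    # Inverse quantization: only multiples of the step can be recovered.
    return index * step

coeff = 103.0
rec = dequant(quant(coeff, 16), 16)
# quant(103.0, 16) == 6, so rec == 96: a loss of 7 relative to the input.
# This loss is why the encoder must predict from reconstructed pictures,
# not from the original ones.
```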
- FIG. 1B illustrates the compression of a predicted picture P 4 .
- The predicted picture P 4 is stored in a memory area M 1 .
- A previously processed intra picture I 1 r has been reconstructed in a memory area M 2 .
- The processing of the macroblocks of the predicted picture P 4 is carried out from so-called predictor macroblocks of the reconstructed picture I 1 r .
- Each macroblock of picture P 4 (reference macroblock) is subject to motion estimation (ME) at 17 (generally, the motion estimation is carried out only with the four luminance blocks of the reference macroblocks).
- This motion estimation includes searching, in a window of picture I 1 r , for the macroblock that is nearest, or most similar, to the reference macroblock.
- The nearest macroblock found in the window is the predictor macroblock; its position is determined by a motion vector V provided by the motion estimation.
- The predictor macroblock is subtracted at 18 from the current reference macroblock, and the resulting difference macroblock is subjected to the process described in relation to FIG. 1A .
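A brute-force sketch of such a search follows. The block size, window size and sum-of-absolute-differences cost are illustrative choices, not the method prescribed by the patent:

```python
def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def find_predictor(ref_block, picture, top, left, search=2, size=4):
    # Exhaustive search in a small window around (top, left); returns the
    # motion vector (dy, dx) of the best match and the difference block.
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(picture) - size and 0 <= x <= len(picture[0]) - size:
                cand = [row[x:x + size] for row in picture[y:y + size]]
                cost = sad(ref_block, cand)
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx), cand)
    _, vector, predictor = best
    diff = [[r - p for r, p in zip(rr, pr)]
            for rr, pr in zip(ref_block, predictor)]
    return vector, diff

picture = [[r * 8 + c for c in range(8)] for r in range(8)]
ref_block = [row[3:7] for row in picture[3:7]]       # a block taken at (3, 3)
vector, diff = find_predictor(ref_block, picture, 2, 2)
# vector == (1, 1) and diff is all zeros: a perfect match one pixel
# down and to the right of the search origin.
```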
- The predicted pictures serve to compress other predicted pictures and bidirectional pictures.
- For this purpose, the predicted picture P 4 is reconstructed (P 4 r ) in a memory area M 3 by an inverse quantization at 15 , an inverse DCT, and the addition at 19 of the predictor macroblock that was subtracted at 18 .
- The vector V provided by the motion estimation 17 is inserted in a header preceding the data provided by the variable length coding of the currently processed macroblock.
- FIG. 1C illustrates the compression of a bidirectional picture B 2 .
- Bidirectional pictures are provided for in MPEG standards only.
- The processing of bidirectional pictures differs from that of predicted pictures in that the motion estimation 17 consists in finding two predictor macroblocks in two pictures, I 1 r and P 4 r , that were previously reconstructed in memory areas M 2 and M 3 , respectively.
- Pictures I 1 r and P 4 r respectively correspond to a picture preceding the currently processed bidirectional picture and to a picture following it.
- The mean value of the two obtained predictor macroblocks is calculated and subtracted at 18 from the currently processed macroblock.
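The averaging amounts to a pixel-wise mean of the two predictors; the half-up integer rounding below is an assumption of this sketch:

```python
def average_predictors(past, future):
    # Pixel-wise mean of the past and future predictor macroblocks,
    # with half-up integer rounding (assumed here).
    return [[(a + b + 1) // 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(past, future)]

past = [[10, 20], [30, 40]]
future = [[12, 21], [30, 45]]
print(average_predictors(past, future))   # [[11, 21], [30, 43]]
```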
- The bidirectional picture is not reconstructed, because it is not used to compress another picture.
- The motion estimation 17 provides two vectors V 1 and V 2 indicating the respective positions of the two predictor macroblocks in pictures I 1 r and P 4 r with respect to the reference macroblock of the bidirectional picture.
- Vectors V 1 and V 2 are inserted in a header preceding the data provided by the variable length coding of the currently processed macroblock.
- For a predicted picture, the reference macroblock is submitted to either predicted processing with the vector that is found, predicted processing with a zero vector, or intra processing.
- For a bidirectional picture, the reference macroblock is submitted to either bidirectional processing with the two vectors, predicted processing with only one of the vectors, or intra processing.
- Thus, a predicted picture and a bidirectional picture may contain macroblocks of different types.
- The type of a macroblock is also data inserted in a header during variable length coding.
- The motion vectors can be defined with an accuracy of half a pixel.
- To fetch a predictor macroblock designated by a non-integer vector, the predictor macroblock determined by the integer part of the vector is fetched first; this macroblock is then submitted to so-called "half-pixel filtering", which includes averaging the macroblock with the same macroblock shifted down and/or to the right by one pixel, depending on whether the two components of the vector are integer or non-integer.
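A sketch of half-pixel filtering, operating on the (size + 1)-square pixel area fetched at the integer part of the vector (the rounding rule is an assumption):

```python
def half_pixel(area, half_y, half_x):
    # area is the (size + 1) x (size + 1) pixel region starting at the
    # integer part of the vector; half_y / half_x indicate which vector
    # components carry a half-pixel fraction.
    size = len(area) - 1
    out = []
    for y in range(size):
        row = []
        for x in range(size):
            taps = [area[y][x]]
            if half_y:
                taps.append(area[y + 1][x])
            if half_x:
                taps.append(area[y][x + 1])
            if half_y and half_x:
                taps.append(area[y + 1][x + 1])
            row.append((sum(taps) + len(taps) // 2) // len(taps))
        out.append(row)
    return out

area = [[0, 2, 4], [6, 8, 10], [12, 14, 16]]
pred = half_pixel(area, half_y=True, half_x=False)
# pred == [[3, 5], [9, 11]]: each pixel is averaged with its neighbour below.
```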
- The predictor macroblocks may also be subjected to low-pass filtering. For this purpose, information is provided with the vector, indicating whether filtering has to be carried out or not.
- Pictures are organized in groups of pictures (GOPs). A GOP generally begins with an intra picture. It is usual, in a GOP, to have a periodical series, starting from the second picture, of several successive bidirectional pictures followed by a predicted picture, for example of the form IBBPBBPBB . . . , where I is an intra picture, B a bidirectional picture, and P a predicted picture.
- The processing of each bidirectional picture B is carried out from macroblocks of the previous intra or predicted picture and from macroblocks of the next predicted picture.
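Because each B picture depends on the following anchor picture, pictures are encoded out of display order. A sketch of the reordering, with trailing-B handling simplified:

```python
def coding_order(display_order):
    # B pictures reference the *next* I or P anchor, so the anchor must be
    # transmitted before the Bs that depend on it: each run of Bs is
    # swapped with the anchor that follows it.
    out, pending_bs = [], []
    for pic in display_order:
        if pic == "B":
            pending_bs.append(pic)
        else:                       # "I" or "P": an anchor picture
            out.append(pic)
            out.extend(pending_bs)
            pending_bs = []
    out.extend(pending_bs)          # trailing Bs wait for the next GOP's anchor
    return out

print(coding_order(list("IBBPBB")))   # ['I', 'P', 'B', 'B', 'B', 'B']
```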
- The various functional blocks used in a typical prior art implementation are shown in FIG. 2 .
- The motion estimation engine and the memory for storing macroblocks and video pictures have been omitted.
- A reference macroblock is supplied to a subtraction circuit, where the predictor for that macroblock is subtracted (in the case of B and P pictures only).
- The resultant error block (or the original macroblock, for I pictures) is passed to a DCT block, then to a quantization block for quantization.
- The quantized macroblock is forwarded to an encoding process and to an inverse quantization block.
- The encoding process takes the quantized macroblock and zig-zag encodes it, performs run level coding on the resultant data, then variable length packs the result, outputting the now encoded bitstream.
- The bitstream is monitored and can be controlled via feedback to a rate control system, which adjusts the quantization (and dequantization) to meet certain objectives for the bitstream.
- A typical objective is a maximum bit-rate, although other factors can also be used.
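Such feedback can be sketched with a simple proportional rule; the increment of 2 and the step bounds are illustrative, not taken from the patent:

```python
def update_quant_step(step, bits_produced, bits_target, min_step=1, max_step=62):
    # Coarser quantization when the encoder overshoots its bit budget,
    # finer quantization when it undershoots.
    if bits_produced > bits_target:
        return min(max_step, step + 2)
    if bits_produced < bits_target:
        return max(min_step, step - 2)
    return step

step = update_quant_step(16, bits_produced=9000, bits_target=8000)
# step == 18: the overshoot forces a coarser quantizer.
```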
- The inverse quantization block in FIG. 2 is the start of a reconstruction chain that is used to generate a reconstructed version of each frame, so that the frames in which the motion prediction engine searches for matching macroblocks are the same as those that will be regenerated during decoding proper.
- The macroblock is inverse DCT transformed in the IDCT block and added to the original predictor used to generate the error macroblock. This reconstructed block is stored in memory for subsequent use in the motion estimation process.
- The various blocks required to generate the encoded output stream have different computational requirements, which can vary according to the particular application or user-selected restrictions. Throttling of the output bitstream to meet bandwidth requirements is typically handled by manipulating the quantization step.
- A decoder circuit comprises: a processor configured to inverse quantize macroblocks to generate inverse quantized macroblocks; an inverse discrete cosine transformation circuit that processes the inverse quantized macroblocks from the processor to generate IDCT transformed macroblocks; and an addition circuit that adds a single IDCT transformed macroblock and a corresponding predictor macroblock to generate a reconstructed picture macroblock.
- A method for decoding an encoded bitstream comprises: inverse quantizing decoded macroblocks in a processor to generate inverse quantized macroblocks; generating inverse discrete cosine transformation (IDCT) transformed macroblocks from the inverse quantized macroblocks; and adding a single IDCT transformed macroblock and a corresponding predictor macroblock to generate a reconstructed picture macroblock.
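The decoding method can be sketched end to end with a naive IDCT; the DC-only test block, the zero predictor and the step value are illustrative:

```python
import math

def idct2(coeffs):
    # Naive 8x8 two-dimensional inverse DCT, mirroring the forward DCT.
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for x in range(N):
        for y in range(N):
            s = 0.0
            for u in range(N):
                for v in range(N):
                    cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
                    cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
                    s += (cu * cv * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[x][y] = s
    return out

def decode_macroblock(quantized, predictor, step):
    # Inverse quantize, inverse transform, then add the predictor macroblock.
    dequant = [[q * step for q in row] for row in quantized]
    error = idct2(dequant)
    return [[round(e) + p for e, p in zip(er, pr)]
            for er, pr in zip(error, predictor)]

quant = [[50] + [0] * 7] + [[0] * 8 for _ in range(7)]   # DC-only block
zero_pred = [[0] * 8 for _ in range(8)]                  # intra: no predictor
mb = decode_macroblock(quant, zero_pred, step=16)
# Every reconstructed pixel is the flat DC value 100.
```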
- A video compression circuit comprises: a discrete cosine transform (DCT) circuit for accepting prediction error macroblocks and generating DCT transformed macroblocks; a processor configured to quantize the DCT transformed macroblocks to generate quantized macroblocks, and to inverse quantize the quantized macroblocks to generate inverse quantized macroblocks; an inverse discrete cosine transform (IDCT) circuit, wherein the IDCT circuit transforms the inverse quantized macroblocks to generate reconstructed prediction error macroblocks; and an addition circuit for adding a single reconstructed prediction error macroblock and a corresponding predictor macroblock to generate respective reconstructed macroblocks for use in the encoding of other macroblocks.
- A method of generating a compressed video bitstream comprises: generating DCT transformed macroblocks by applying prediction error macroblocks to a discrete cosine transform (DCT) circuit; quantizing the DCT transformed macroblocks to generate quantized macroblocks; inverse quantizing the quantized macroblocks to generate inverse quantized macroblocks; generating reconstructed prediction error macroblocks by applying the inverse quantized macroblocks to an inverse discrete cosine transform (IDCT) circuit; and adding a single reconstructed prediction error macroblock and a corresponding predictor macroblock to generate respective reconstructed macroblocks for use in the encoding of other macroblocks.
- An encoder/decoder circuit comprises: a discrete cosine transform (DCT) circuit to generate DCT transformed macroblocks from prediction error macroblocks; a processor configured to quantize the DCT transformed macroblocks to generate quantized macroblocks, and to inverse quantize the quantized macroblocks to generate inverse quantized macroblocks; an inverse discrete cosine transform (IDCT) circuit to transform the inverse quantized macroblocks to generate reconstructed prediction error macroblocks; an addition circuit to add a single reconstructed prediction error macroblock and a corresponding predictor macroblock to generate respective reconstructed macroblocks; and a control circuit to configure the encoder/decoder circuit to encode or decode a bitstream.
- A method for encoding and decoding in an encoder/decoder circuit having a control circuit to configure the encoder/decoder circuit for encoding or decoding mode comprises: generating DCT transformed macroblocks by applying prediction error macroblocks to a discrete cosine transform (DCT) circuit; quantizing the DCT transformed macroblocks to generate quantized macroblocks; inverse quantizing the quantized macroblocks to generate inverse quantized macroblocks; generating reconstructed prediction error macroblocks by applying the inverse quantized macroblocks to an inverse discrete cosine transform (IDCT) circuit; and adding the reconstructed prediction error macroblocks and corresponding predictor macroblocks to generate respective reconstructed macroblocks; wherein the reconstructed macroblocks are useful either as decoded reconstructed picture macroblocks or for encoding other macroblocks.
- FIGS. 1A to 1C , previously described, illustrate three picture compression processes according to H.261 and MPEG standards, in accordance with the prior art.
- FIG. 2 , previously described, is a schematic of the functional blocks in a typical MPEG encoding scheme, in accordance with the prior art.
- FIG. 3 is a schematic of an encoder loop.
- FIG. 4 is a schematic of compression circuitry for generating an encoded bitstream from a plurality of video frames.
- FIG. 3 shows an overview of the functional blocks of one embodiment, in which hardware functionality is represented by rectangular blocks and software functionality is represented by an oval block.
- The functional blocks include a subtraction circuit 300 for subtracting each predictor macroblock, as supplied by the motion estimation engine (described later), from its corresponding picture macroblock, to generate a prediction error macroblock. For an I picture, there is no predictor, so the macroblock is passed through the subtraction circuit with no change.
- The prediction error macroblock is supplied to a DCT circuit 301 , where a forward discrete cosine transform (DCT) is performed.
- The DCT transformed macroblocks are streamed to a processor 302 , where quantization and run/level coding are performed; variable length coding can also take place in software.
- Alternatively, variable length coding and packing, or just packing, is performed in hardware, since this provides a drastic increase in performance compared to software coding running on a general purpose processor.
- The processor 302 also performs inverse quantization (Q −1 ), and the resultant inverse quantized macroblocks are sent to an inverse DCT (IDCT) circuit 303 via a streaming interface.
- An inverse DCT is performed, and the resultant reconstructed error macroblock is added to the original predictor macroblock (for P and B pictures only) by an addition circuit 304 .
- The predictor macroblocks have meanwhile been delayed in a delay buffer 305 .
- For an I picture, the macroblock is already fully reconstructed after the IDCT circuit.
- The resultant reconstructed macroblocks are then stored in memory for use by the motion estimation engine in generating predictors for future macroblocks. This is necessary because it is the reconstructed macroblocks that a decoder will subsequently use to reconstruct the pictures.
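One pass of this encoder loop can be sketched with the transform pair injected as parameters; identity stand-ins are used in the demonstration so the arithmetic stays exact:

```python
def encode_and_reconstruct(macroblock, predictor, step, dct, idct):
    # One pass of the encoder loop: subtract the predictor, transform and
    # quantize the prediction error, then rebuild the macroblock exactly
    # as a decoder would, for use as a future prediction reference.
    error = [[m - p for m, p in zip(mr, pr)]
             for mr, pr in zip(macroblock, predictor)]
    quantized = [[int(round(c / step)) for c in row] for row in dct(error)]
    rec_error = idct([[q * step for q in row] for row in quantized])
    reconstructed = [[round(e) + p for e, p in zip(er, pr)]
                     for er, pr in zip(rec_error, predictor)]
    return quantized, reconstructed

identity = lambda block: block        # stand-in for the DCT/IDCT pair
q, rec = encode_and_reconstruct([[10]], [[4]], 2, identity, identity)
# q == [[3]] and rec == [[10]]: with an exact transform and an error that
# is a multiple of the step, the 1x1 "macroblock" reconstructs losslessly.
```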
- FIG. 4 shows a more detailed version of the embodiment of FIG. 3 , and like features are denoted by corresponding reference numerals.
- The motion estimation engine 400 for use with the encoding circuitry is also shown.
- The motion estimation engine 400 determines the best matching macroblock (or average of two macroblocks) for each macroblock in the frame (for B and P pictures only) and subtracts it from the macroblock being considered to generate a prediction error macroblock.
- The method of selecting predictor macroblocks is not a part of the present solution and so is not described in greater detail herein.
- The motion estimation engine 400 outputs the macroblocks, associated predictor macroblocks and vectors, and other information such as frame type and encoding modes, to the DCT/IDCT circuitry via a direct link. Alternatively, this information can be transferred over a data bus. Data bus transfer principles are well known and so are not described in detail.
- The DCT and IDCT steps are performed in a DCT/IDCT block 401 , which includes combined DCT/IDCT circuitry 301 / 303 that is selectable to perform either operation on incoming data.
- The input is selected by way of a multiplexer 402 , the operation of which will be described in greater detail below.
- The output of the multiplexer is supplied to the delay block 305 and the DCT/IDCT circuitry 301 / 303 .
- Additional data supplied by the motion estimation engine 400 , such as the motion vector(s) and encoding decisions (intra/non-intra, MC/no MC, field/frame prediction, field/frame DCT), is routed past the delay and DCT/IDCT blocks to a first streaming data interface SDI 403 .
- The outputs of the delay block and the DCT/IDCT circuitry are supplied to an addition circuit 304 , the output of which is sent to memory 450 .
- The output of the DCT/IDCT block 301 / 303 is also supplied to the first SDI port 403 .
- The first SDI port 403 accepts data from the DCT/IDCT block 301 / 303 and the multiplexer 402 and converts it into a format suitable for streaming transmission to a corresponding second streaming SDI port 404 .
- The streaming is controlled by a handshake arrangement between the respective SDI ports.
- The second streaming SDI port 404 takes the streaming data from the first SDI port 403 and converts it back into a format suitable for use within the processor 302 .
- The processor performs quantization 405 , inverse quantization 406 and zig-zag/run level coding 407 as described previously. It will be appreciated that the particular implementations of these steps in software are not relevant, and so are not described in detail.
- The macroblock is returned to a third SDI port 408 , which operates in the same way as the first streaming port to convert and stream the data to a fourth SDI port 409 , which converts the data for synchronous use and supplies it to the multiplexer 402 .
- The processor 302 outputs the run level coded data to a fifth SDI port 410 , which, in a similar fashion to the first and third SDI ports, formats the data for streaming transmission to a sixth SDI port 411 , which in turn reformats the data into a synchronous format.
- The data is then variable length coded and packed in hardware VLC circuitry 412 .
- The particular workings of the hardware VLC packing circuitry 412 are well known in the art, are not critical to understanding the present solution, and so will not be described in detail. Indeed, as mentioned previously, the VLC operation can be performed in software by the processor, for a corresponding cost in processor cycles.
- The multiplexer and the DCT/IDCT block 301 / 303 need to be controlled to ensure that the correct data is being fed to the DCT/IDCT block and that the correct operation is being performed.
- When a forward DCT is required, the multiplexer 402 is controlled to provide data from the bus (supplied by the motion estimation engine) to the DCT/IDCT block 301 / 303 , which is set to DCT mode.
- When an inverse DCT is required, the multiplexer 402 sends data from the fourth SDI port 409 to the DCT/IDCT block 301 / 303 , which is set to IDCT mode.
- In general, the DCT and IDCT functions of the DCT/IDCT block 301 / 303 will be performed in an interleaved manner, with one or more DCT operations being interleaved with one or more IDCT operations, depending upon the order of I, P and B pictures being encoded.
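The resulting interleaving can be sketched by assuming one forward pass per picture and an IDCT pass only for the I and P pictures that must be reconstructed:

```python
def schedule_transforms(picture_types):
    # Every picture needs a forward DCT pass; only I and P pictures need
    # an IDCT pass, because B pictures are never reconstructed.
    ops = []
    for pic in picture_types:
        ops.append((pic, "DCT"))
        if pic in ("I", "P"):
            ops.append((pic, "IDCT"))
    return ops

print(schedule_transforms(["I", "B", "B", "P"]))
# [('I', 'DCT'), ('I', 'IDCT'), ('B', 'DCT'), ('B', 'DCT'),
#  ('P', 'DCT'), ('P', 'IDCT')]
```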
- The encoding circuitry described above can also perform decoding of an encoded MPEG stream, because the inverse quantization software and the IDCT hardware are common to the encoding and decoding processes. There are at least three ways this can be achieved:
- In a first option, the dequantized coefficient blocks can be streamed from the processor to the DCT/IDCT block 301 / 303 via the third and fourth SDI ports 408 and 409 .
- The results of the IDCT are then read back via the first and second SDI ports 403 and 404 .
- Option 1 can be extended to allow more of the decoding load to be passed to the DCT/IDCT block 401 .
- In this case, the predictor blocks are read into the delay buffer 305 .
- The coefficient blocks are then read in via the same route by the DCT/IDCT block 301 / 303 (in IDCT mode).
- The predictor and IDCT processed macroblocks are combined by the addition circuitry 304 and written to system memory via the system data bus.
- In a third option, the motion estimation block is configured to provide the predictor blocks to the delay buffer 305 via the multiplexer 402 .
- The coefficient blocks are provided to the DCT/IDCT block 301 / 303 (in IDCT mode), and the remainder of the procedure is as per the second decoding arrangement.
Abstract
Data is discrete cosine transformed and streamed to a processor where quantized and inverse quantized blocks are generated. A second streaming data connection streams the inverse quantized blocks to an inverse discrete cosine transform block to generate reconstructed prediction error macroblocks. An addition circuit adds each reconstructed prediction error macroblock and its corresponding predictor macroblock to generate a respective reconstructed macroblock. The quantized macroblocks are zig-zag scanned, run level coded and variable length coded to generate an encoded bitstream.
Description
-
CROSS-REFERENCE
-
This application is a divisional of U.S. Application for patent Ser. No. 10/391,442, filed Mar. 17, 2003, which claims priority from European Application for Patent No. 02251932.6 filed on Mar. 18, 2002, the disclosures of which are hereby incorporated by reference.
BACKGROUND OF THE INVENTION
-
1. Technical Field of the Invention
-
The present invention relates to motion picture compression circuits for pictures such as television pictures, and more particularly to a compression circuit complying with H.261 and MPEG standards.
-
2. Description of Related Art
- FIGS. 1A-1C schematically illustrate three methods for compressing motion pictures in accordance with H.261 and MPEG standards. According to H.261 standards, pictures may be of intra or predicted type. According to MPEG standards, the pictures can also be of bidirectional type.
-
Intra (“I”) pictures are not coded with reference to any other pictures. Predicted (“P”) pictures are coded with reference to a past intra or past predicted picture. Bidirectional (“B”) pictures are coded with reference to both a past picture and a following picture.
- FIG. 1A illustrates the compression of an intra picture I1. Picture I1 is stored in a memory area M1 before being processed. The pictures have to be initially stored in a memory since they arrive line by line whereas they are processed square by square, the size of each square being generally 16 by 16 pixels. Thus, before starting to process picture I1, memory area M1 must be filled with at least 16 lines.
-
The pixels of a 16 by 16-pixel square are arranged in a so-called “macroblock”. A macroblock includes four 8 by 8-pixel luminance blocks and two or four 8 by 8-pixel chrominance blocks. The processes hereinafter described are carried out by blocks of 8 by 8 pixels.
-
The blocks of each macroblock of picture I1 are submitted at 10 to a discrete cosine transform (DCT) followed at 11 by a quantization (Q). A DCT transforms a matrix of pixels (a block) into a matrix whose upper left corner coefficient tends to have a relatively high value, while the other coefficients rapidly decrease in magnitude as the position moves down and to the right. Quantization involves dividing the coefficients of the transformed matrix, such that a large number of the coefficients away from the upper left corner are cancelled (become zero).
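The DCT at 10 and the quantization at 11 can be illustrated with a short, naive Python sketch (the function names, block contents and quantization step below are illustrative choices of the editor, not taken from the patent or the standards):

```python
import math

N = 8  # block size used by H.261/MPEG

def dct2d(block):
    """Orthonormal 2-D DCT-II of an N x N block (naive O(N^4) form)."""
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize(coeffs, step):
    """Divide each coefficient by a step and round; small high-frequency
    coefficients are cancelled (become zero)."""
    return [[int(round(c / step)) for c in row] for row in coeffs]

# A flat (constant) block: all energy collapses into the top-left (DC) term.
flat = [[10] * N for _ in range(N)]
coeffs = dct2d(flat)
q = quantize(coeffs, 16)
```

For this constant block the DC coefficient is 10 × 8 = 80, and after quantization every other coefficient is zero, which is exactly the energy compaction that the subsequent zigzag/run-level stages exploit.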
-
At 12, the quantized matrices are subjected to zigzag scanning (ZZ) and to run/level coding (RLC). Zigzag scanning increases the likelihood of producing long series of consecutive zero coefficients. The run/level coding mainly includes replacing each such series from the ZZ scanning with a pair of values, one representing the number of successive zero coefficients (the run) and the other representing the first following non-zero coefficient (the level).
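The zigzag scan order and the run/level pairing can be sketched as follows (an illustrative sketch; names and the end-of-block marker are the editor's, and the real standards append a dedicated EOB code):

```python
def zigzag_order(n=8):
    """Visit an n x n block along anti-diagonals, alternating direction."""
    order = []
    for d in range(2 * n - 1):
        diag = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        order.extend(diag if d % 2 else diag[::-1])
    return order

def run_level_code(scan):
    """Replace each run of zeros and its terminating non-zero value
    with a (run, level) pair; trailing zeros end the block."""
    pairs, run = [], 0
    for v in scan:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    pairs.append("EOB")  # marker standing in for the real end-of-block code
    return pairs

zz = zigzag_order()
# A scan with two non-zero coefficients and 62 zeros collapses to two pairs.
scan = run_level_code([5, 0, 0, 3] + [0] * 60)
```

Note how the 64-entry scan is reduced to just (0, 5), (2, 3) and the end-of-block marker, which is what makes the subsequent variable length coding effective.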
-
At 13, the pairs of values from the RLC are subjected to variable length coding (VLC), which includes replacing the more frequent pairs with short codes and the less frequent pairs with long codes, with the aid of correspondence tables defined by the H.261 and MPEG standards. The quantization step can be varied from one block to the next by multiplication by a quantization coefficient. That quantization coefficient is inserted during variable length coding into headers preceding the compressed data corresponding to macroblocks.
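The principle of the VLC tables can be shown with a toy prefix code (this table is NOT the real H.261/MPEG table, which is much larger; the escape format below is likewise an illustrative invention of the editor):

```python
# Toy prefix-free VLC table: frequent (run, level) pairs get short codes.
TOY_TABLE = {
    (0, 1): "1",
    (0, -1): "011",
    (1, 1): "0101",
    (0, 2): "01001",
}

def vlc_encode(pairs):
    bits = []
    for p in pairs:
        if p == "EOB":
            bits.append("0000")               # toy end-of-block code
        elif p in TOY_TABLE:
            bits.append(TOY_TABLE[p])
        else:                                  # escape: fixed-length fallback
            run, level = p
            bits.append("00011" + format(run, "06b")
                        + format(level & 0xFF, "08b"))
    return "".join(bits)

bits = vlc_encode([(0, 1), (1, 1), "EOB"])
```

Two common pairs plus the end-of-block marker compress to nine bits, whereas a rare pair falls back to the 19-bit escape form; the real standards use exactly this frequent-pairs-get-short-codes structure.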
-
Macroblocks of an intra picture are used to compress macroblocks of a subsequent picture of predicted or bidirectional type. Thus, decoding of a predicted or bidirectional picture is achieved from a previously decoded intra picture. This previously decoded intra picture does not exactly correspond to the actual picture initially received by the compression circuit, since the initial picture is altered by the quantization at 11. Thus, the compression of a predicted or bidirectional picture is carried out from a reconstructed intra picture I1r rather than from the real intra picture I1, so that decoding is carried out under the same conditions as encoding.
-
The reconstructed intra picture I1r is stored in a memory area M2 and is obtained by subjecting the macroblocks provided by the quantization 11 to a reverse processing, that is, an inverse quantization (Q−1) at 15 followed by an inverse DCT (DCT−1) at 16.
- FIG. 1B illustrates the compression of a predicted picture P4. The predicted picture P4 is stored in a memory area M1. A previously processed intra picture I1r has been reconstructed in a memory area M2.
-
The processing of the macroblocks of the predicted picture P4 is carried out from so-called predictor macroblocks of the reconstructed picture I1 r. Each macroblock of picture P4 (reference macroblock) is subject to motion estimation (ME) at 17 (generally, the motion estimation is carried out only with the four luminance blocks of the reference macroblocks).
-
This motion estimation includes searching in a window of picture I1r for a macroblock that is nearest, or most similar, to the reference macroblock. The nearest macroblock found in the window is the predictor macroblock. Its position is determined by a motion vector V provided by the motion estimation. The predictor macroblock is subtracted at 18 from the current reference macroblock. The resulting difference macroblock is subjected to the process described with relation to FIG. 1A.
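The window search described above can be sketched as a brute-force sum-of-absolute-differences (SAD) block match (an illustrative sketch with the editor's own names and test pattern; real motion estimation engines use hierarchical or heavily parallel hardware searches):

```python
def sad(frame, x, y, block, bsize=16):
    """Sum of absolute differences between a block and the candidate at (x, y)."""
    return sum(abs(frame[y + j][x + i] - block[j][i])
               for j in range(bsize) for i in range(bsize))

def full_search(ref_frame, block, bx, by, radius=8, bsize=16):
    """Find the predictor in ref_frame nearest to `block`, searching a window
    of +/- radius pixels around the block's own position (bx, by).
    Returns the motion vector (dx, dy) of the best match."""
    h, w = len(ref_frame), len(ref_frame[0])
    best = (None, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= w - bsize and 0 <= y <= h - bsize:
                cost = sad(ref_frame, x, y, block, bsize)
                if cost < best[1]:
                    best = ((dx, dy), cost)
    return best[0]

# A reference frame with a non-repeating pattern, and a block cut from it
# at (x=6, y=8); searching around (4, 6) should recover the (2, 2) offset.
ref = [[(i * 7 + j * 13) % 251 for i in range(32)] for j in range(32)]
blk = [row[6:22] for row in ref[8:24]]
mv = full_search(ref, blk, 4, 6, radius=4)
```

The vector returned by this search plays the role of the motion vector V that is later written into the macroblock header.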
-
Like the intra pictures, the predicted pictures serve to compress other predicted pictures and bidirectional pictures. For this purpose, the predicted picture P4 is reconstructed (P4r) in a memory area M3 by an inverse quantization at 15, an inverse DCT at 16, and the addition at 19 of the predictor macroblock that was subtracted at 18.
-
The vector V provided by the motion estimation 17 is inserted in a header preceding the data provided by the variable length coding of the currently processed macroblock.
- FIG. 1C illustrates the compression of a bidirectional picture B2. Bidirectional pictures are provided for in MPEG standards only. The processing of bidirectional pictures differs from the processing of predicted pictures in that the motion estimation 17 consists of finding two predictor macroblocks in two pictures, I1r and P4r, respectively, that were previously reconstructed in memory areas M2 and M3. Generally, pictures I1r and P4r respectively correspond to a picture preceding the bidirectional picture currently being processed and to a picture following it.
-
At 20, the mean value of the two obtained predictor macroblocks is calculated and is subtracted at 18 from the currently processed macroblock.
-
The bidirectional picture is not reconstructed because it is not used to compress another picture.
-
The motion estimation 17 provides two vectors V1 and V2 indicating the respective positions of the two predictor macroblocks in pictures I1r and P4r with respect to the reference macroblock of the bidirectional picture. Vectors V1 and V2 are inserted in a header preceding the data provided by the variable length coding of the currently processed macroblock.
-
In a predicted picture, an attempt is made to find a predictor macroblock for each reference macroblock. However, in some cases, using the predictor macroblock that is found may provide a smaller compression rate than that obtained by using an unmoved predictor macroblock (zero motion vector), or even smaller than the simple intra processing of the reference macroblock. Thus, depending upon these cases, the reference macroblock is submitted to either predicted processing with the vector that is found, predicted processing with a zero vector, or intra processing.
-
In a bidirectional picture, an attempt is made to find two predictor macroblocks for each reference macroblock. For each of the two predictor macroblocks, the process providing the best compression rate is determined, as indicated above with respect to a predicted picture. Thus, depending on the result, the reference macroblock is submitted to either bidirectional processing with the two vectors, predicted processing with only one of the vectors, or intra processing.
-
Thus, a predicted picture and a bidirectional picture may contain macroblocks of different types. The type of a macroblock is also data inserted in a header during variable length coding. According to MPEG standards, the motion vectors can be defined with an accuracy of half a pixel. To search for a predictor macroblock with a non-integer vector, first the predictor macroblock determined by the integer part of the vector is fetched; then this macroblock is subjected to so-called "half-pixel filtering", which includes averaging the macroblock with the same macroblock shifted down and/or to the right by one pixel, depending on which of the two components of the vector are non-integer. According to H.261 standards, the predictor macroblocks may be subjected to low-pass filtering; for this purpose, information is provided with the vector, indicating whether the filtering has to be carried out or not.
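The half-pixel fetch can be sketched as a bilinear average (an illustrative sketch with the editor's own naming convention of doubled coordinates; the exact rounding rules of the MPEG standards are not reproduced here):

```python
def half_pel(frame, x2, y2, bsize=16):
    """Fetch a bsize x bsize predictor at half-pixel position (x2/2, y2/2).
    Depending on which vector components are fractional, the full-pel block
    is averaged with the block shifted right and/or down by one pixel."""
    x, y = x2 // 2, y2 // 2
    half_x, half_y = x2 % 2, y2 % 2
    out = []
    for j in range(bsize):
        row = []
        for i in range(bsize):
            s = frame[y + j][x + i]
            n = 1
            if half_x:
                s += frame[y + j][x + i + 1]; n += 1
            if half_y:
                s += frame[y + j + 1][x + i]; n += 1
            if half_x and half_y:
                s += frame[y + j + 1][x + i + 1]; n += 1
            row.append((s + n // 2) // n)  # rounded integer average
        out.append(row)
    return out

# A horizontal ramp: the half-pel sample falls midway between neighbors.
frame = [[2 * i for i in range(20)] for _ in range(20)]
pred = half_pel(frame, 1, 0, bsize=2)
```

On the ramp, the half-pel predictor at x = 0.5 interpolates the values 0 and 2 to 1, exactly midway between its full-pel neighbors.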
-
The succession of types (intra, predicted, bidirectional) is assigned to the pictures in a predetermined way, in a so-called group of pictures (GOP). A GOP generally begins with an intra picture. It is usual, in a GOP, to have a periodic series, starting from the second picture, of several successive bidirectional pictures followed by a predicted picture, for example of the form IBBPBBPBB . . . , where I is an intra picture, B a bidirectional picture, and P a predicted picture. The processing of each bidirectional picture B is carried out from macroblocks of the previous intra or predicted picture and from macroblocks of the next predicted picture.
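The fixed assignment of picture types can be sketched as follows (an illustrative sketch; the function name and parameters are the editor's, and real encoders may vary the GOP structure adaptively):

```python
def gop_types(n_pictures, b_per_anchor=2, gop_len=9):
    """Assign I/P/B types in display order for the pattern IBBPBBPBB..."""
    types = []
    for k in range(n_pictures):
        pos = k % gop_len                      # position within the GOP
        if pos == 0:
            types.append("I")                  # each GOP starts with an I
        elif pos % (b_per_anchor + 1) == 0:
            types.append("P")                  # anchor after each B run
        else:
            types.append("B")
    return "".join(types)
```

Note that this is the display order; since each B picture references the following P picture, the encoder must process (and transmit) that P picture before the B pictures that precede it in display order.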
-
The various functional blocks that are used in a typical prior art implementation are shown in FIG. 2. For clarity, the motion estimation engine and the memory for storing macroblocks and video pictures have been omitted.
-
In FIG. 2, a reference macroblock is supplied to a subtraction circuit, where the predictor for that macroblock is subtracted (in the case of B and P pictures only). The resultant error block (or the original macroblock, for I pictures) is passed on to a DCT block, then to a quantization block for quantization.
-
The quantized macroblock is forwarded to an encoding process and an inverse quantization block. The encoding process takes the quantized macroblock and zig-zag encodes it, performs run level coding on the resultant data, then variable length packs the result, outputting the now encoded bitstream.
-
The bitstream is monitored and can be controlled via feedback to a rate control system. This controls quantization (and dequantization) to meet certain objectives for the bitstream. A typical objective is a maximum bit-rate, although other factors can also be used.
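One step of such a feedback loop can be sketched crudely (an illustrative sketch only; real controllers such as MPEG-2 Test Model 5 model buffer fullness and picture complexity rather than taking unit steps, and the names here are the editor's):

```python
def update_quantizer(q, bits_produced, bits_target, q_min=1, q_max=31):
    """One step of a crude feedback rate controller: if the encoder is
    producing more bits than the target, coarsen quantization (raise q),
    otherwise refine it, clamping to the legal quantizer range."""
    if bits_produced > bits_target:
        q = min(q_max, q + 1)
    elif bits_produced < bits_target:
        q = max(q_min, q - 1)
    return q
```

A coarser quantizer cancels more coefficients, so the runs of zeros lengthen and the VLC output shrinks, pulling the bit-rate back toward the target.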
-
The inverse quantization block in FIG. 2 is the start of a reconstruction chain that is used to generate a reconstructed version of each frame, so that the frames in which the motion estimation engine searches for matching macroblocks are the same as those that will be regenerated during decoding proper. After inverse quantization, the macroblock is inverse DCT transformed in the IDCT block and added to the original predictor used to generate the error macroblock. This reconstructed block is stored in memory for subsequent use in the motion estimation process.
-
The various blocks required to generate the encoded output stream have different computational requirements, which themselves can vary according to the particular application or user selected restrictions. Throttling of the output bitstream to meet bandwidth requirements is typically handled by manipulating the quantization step.
-
Pure hardware architectures, while potentially the most efficient, suffer from lack of flexibility since they can support only a restricted range of standards; moreover they have long design/verification cycles. On the other hand, pure software solutions, while being the most flexible, require high-performance processors unsuited to low-cost consumer applications.
-
It would be desirable to provide an architecture that allowed for relatively flexible bitstream control while reducing the amount of software-based processing power required.
SUMMARY
-
In an embodiment, a decoder circuit comprises: a processor configured to inverse quantize macroblocks to generate inverse quantized macroblocks; an inverse discrete cosine transformation circuit that processes the inverse quantized macroblocks from the processor to generate IDCT transformed macroblocks; and an addition circuit that adds a single IDCT transformed macroblock and a corresponding predictor macroblock to generate a reconstructed picture macroblock.
-
In an embodiment, a method for decoding an encoded bitstream, comprises: inverse quantizing decoded macroblocks in a processor to generate inverse quantized macroblocks; generating inverse discrete cosine transformation (IDCT) transformed macroblocks from the inverse quantized macroblocks; and adding a single IDCT transformed macroblock and a corresponding predictor macroblock to generate a reconstructed picture macroblock.
-
In an embodiment, a video compression circuit comprises: a discrete cosine transform (DCT) circuit for accepting prediction error macroblocks and generating DCT transformed macroblocks; a processor being configured to quantize the DCT transformed macroblocks to generate quantized macroblocks, and to inverse quantize the quantized macroblocks to generate inverse quantized macroblocks; an inverse discrete cosine transform (IDCT) circuit, wherein the IDCT circuit transforms the inverse quantized macroblocks to generate reconstructed prediction error macroblocks; and an addition circuit for adding a single reconstructed prediction error macroblock and a corresponding predictor macroblock to generate respective reconstructed macroblocks for use in the encoding of other macroblocks.
-
In an embodiment, a method of generating a compressed video bitstream comprises: generating DCT transformed macroblocks by applying prediction error macroblocks to a discrete cosine transform (DCT) circuit; quantizing the DCT transformed macroblocks to generate quantized macroblocks; inverse quantizing the quantized macroblocks to generate inverse quantized macroblocks; generating reconstructed prediction error macroblocks by applying the inverse quantized macroblocks to an IDCT circuit; and adding a single reconstructed prediction error macroblock and a corresponding predictor macroblock to generate respective reconstructed macroblocks for use in the encoding of other macroblocks.
-
In an embodiment, an encoder/decoder circuit comprises: a discrete cosine transform (DCT) circuit to generate DCT transformed macroblocks from prediction error macroblocks; a processor configured to quantize the DCT transformed macroblocks to generate quantized macroblocks, and to inverse quantize the quantized macroblocks to generate inverse quantized macroblocks; an inverse discrete cosine transform (IDCT) circuit to transform the inverse quantized macroblocks to generate reconstructed prediction error macroblocks; an addition circuit to add a single reconstructed prediction error macroblock and a corresponding predictor macroblock to generate respective reconstructed macroblocks; and a control circuit to configure the encoder/decoder circuit to encode or decode a bitstream.
-
In an embodiment, a method for encoding and decoding in an encoder/decoder circuit having a control circuit to configure the encoder/decoder circuit for encoding or decoding mode comprises: generating DCT transformed macroblocks by applying prediction error macroblocks to a discrete cosine transform (DCT) circuit; quantizing the DCT transformed macroblocks to generate quantized macroblocks; inverse quantizing the quantized macroblocks to generate inverse quantized macroblocks; generating reconstructed prediction error macroblocks by applying the inverse quantized macroblocks to the IDCT circuit; and adding the reconstructed prediction error macroblocks and corresponding predictor macroblocks to generate respective reconstructed macroblocks; wherein the reconstructed macroblocks are useful either as decoded reconstructed picture macroblocks or for encoding other macroblocks.
BRIEF DESCRIPTION OF THE DRAWINGS
-
A more complete understanding of the method and apparatus may be acquired by reference to the following Detailed Description when taken in conjunction with the accompanying Drawings wherein:
- FIGS. 1A to 1C, previously described, illustrate three picture compression processes according to H.261 and MPEG standards, in accordance with the prior art;
- FIG. 2, previously described, is a schematic of the functional blocks in a typical MPEG encoding scheme, in accordance with the prior art;
- FIG. 3 is a schematic of an encoder loop; and
- FIG. 4 is a schematic of compression circuitry for generating an encoded bitstream from a plurality of video frames.
DETAILED DESCRIPTION OF THE DRAWINGS
- FIG. 3 shows an overview of the functional blocks of one embodiment, in which hardware functionality is represented by rectangular blocks and software functionality is represented by an oval block.
-
The functional blocks include a subtraction circuit 300 for subtracting each predictor macroblock, as supplied by the motion estimation engine (described later), from its corresponding picture macroblock, to generate a prediction error macroblock. For an I picture, there is no predictor, so the macroblock is passed through the subtraction circuit with no change.
-
The prediction error macroblock is supplied to a DCT circuit 301 where a forward discrete cosine transform (DCT) is performed. Such hardware and its operation are well known in the prior art and will not be described here in further detail.
-
The output of the DCT is streamed to a processor 302 (described later) which performs the quantization, zig-zag coding and run level coding steps of the encoding process. The resultant data is variable length coded and output as an encoded bitstream. In the simplified schematic of FIG. 3, the variable length coding takes place in software. However, in an alternative embodiment described later, the variable length coding and packing, or just the packing, is performed in hardware, since this provides a drastic increase in performance compared to software coding running on a general purpose processor.
-
The processor 302 also performs inverse quantization (Q−1), and the resultant inverse quantized macroblocks are sent to an inverse DCT (IDCT) circuit 303 via a streaming interface. An inverse DCT is performed and the resultant reconstructed error macroblock is added to the original predictor macroblock (for P and B pictures only) by an addition circuit 304. The predictor macroblocks have been delayed in a delay buffer 305. For I and P pictures, the macroblock is fully reconstructed after the IDCT circuit. The resultant reconstructed macroblocks are then stored in memory for use by the motion estimation engine in generating predictors for future macroblocks. This is necessary because it is the reconstructed macroblocks that a decoder will subsequently use to reconstruct the pictures.
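The reconstruction performed by the addition circuit 304, with the predictor held in the delay buffer 305, can be summarized in a few lines (an illustrative sketch with the editor's own names; clipping to the 8-bit pixel range stands in for whatever saturation the actual hardware applies):

```python
def reconstruct(predictor, idct_error, intra=False):
    """Mirror of the decoder's reconstruction: add the IDCT'd prediction
    error to the delayed predictor, clipping to the 8-bit pixel range.
    For intra macroblocks there is no predictor to add."""
    out = []
    for prow, erow in zip(predictor, idct_error):
        out.append([max(0, min(255, (0 if intra else p) + e))
                    for p, e in zip(prow, erow)])
    return out
```

Because the encoder stores this reconstruction rather than the original pixels, its motion search operates on exactly the frames a decoder will later regenerate.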
- FIG. 4 shows a more detailed version of the embodiment of FIG. 3, and like features are denoted by corresponding reference numerals. In FIG. 4, the motion estimation engine 400 for use with the encoding circuitry is also shown. The motion estimation engine 400 determines the best matching macroblock (or average of two macroblocks) for each macroblock in the frame (for B and P pictures only) and subtracts it from the macroblock being considered to generate a prediction error macroblock. The method of selecting predictor macroblocks is not a part of the present solution and so is not described in greater detail herein.
-
The motion estimation engine 400 outputs the macroblocks, associated predictor macroblocks and vectors, and other information such as frame type and encoding modes, to the DCT/IDCT circuitry via a direct link. Alternatively, this information can be transferred over a data bus. Data bus transfer principles are well known and so are not described in detail.
-
The DCT and IDCT steps are performed in a DCT/IDCT block 401, which includes combined DCT/IDCT circuitry 301/303 that is selectable to perform either operation on incoming data. The input is selected by way of a multiplexer 402, the operation of which will be described in greater detail below. The output of the multiplexer is supplied to the delay block 305 and the DCT/IDCT circuitry 301/303. Additional data supplied by the motion estimation engine 400, such as the motion vector(s) and the encoding decisions (intra/non-intra, MC/no MC, field/frame prediction, field/frame DCT), is routed past the delay and DCT/IDCT blocks to a first streaming data interface (SDI) port 403.
-
The outputs of the delay block and the DCT/IDCT circuitry are supplied to an addition circuit 304, the output of which is sent to memory 450. The output of the DCT/IDCT block 301/303 is also supplied to the first SDI port 403.
-
The first SDI port 403 accepts data from the DCT/IDCT block 301/303 and the multiplexer 402 and converts it into a format suitable for streaming transmission to a corresponding second streaming SDI port 404. The streaming is controlled by a handshake arrangement between the respective SDI ports. The second streaming SDI port 404 takes the streaming data from the first SDI port 403 and converts it back into a format suitable for use within the processor 302.
-
Once the data has been transformed back into a synchronous format, the processor performs quantization 405, inverse quantization 406 and zig-zag/run level coding 407 as described previously. It will be appreciated that the particular implementation of these steps in software is not relevant, and so is not described in detail.
-
After inverse quantization, the macroblock is returned to a third SDI port 408, which operates in the same way as the first streaming port to convert and stream the data to a fourth SDI port 409, which converts the data for synchronous use and supplies it to the multiplexer 402.
-
The processor 302 outputs the run level coded data to a fifth SDI port 410, which, in a similar fashion to the first and third SDI ports, formats the data for streaming transmission to a sixth SDI port 411, which in turn reformats the data into a synchronous format. The data is then variable length coded and packed in hardware VLC circuitry 412. The particular workings of the hardware VLC packing circuitry 412 are well known in the art, are not critical to understanding the present solution and so will not be described in detail. Indeed, as mentioned previously, the VLC operation can be performed in software by the processor, for a corresponding cost in processor cycles.
-
It will be appreciated that a number of control lines and ancillary details have been omitted for clarity. For example, the multiplexer and DCT/IDCT block 301/303 need to be controlled to ensure that the correct data is being fed to the DCT/IDCT block and that the correct operation is being performed. When the initial DCT operation 301 is being performed, the multiplexer 402 is controlled to provide data from the bus (supplied by the motion estimation engine) to the DCT/IDCT block 301/303, which is set to DCT mode. However, when performing the IDCT operation 303, the multiplexer 402 sends data from the fourth SDI port 409 to the DCT/IDCT block 301/303, which is set to IDCT mode.
-
Similarly, some support hardware that would exist in an actual implementation has been omitted. An obvious example is buffers on the various inputs and outputs. It would be usual in such circuitry to include FIFO buffers supporting the SDI ports to maximize throughput. For the purposes of clarity, such support hardware is not explicitly shown; however, it will be understood by those skilled in the art to be implicitly present in any practical application.
-
It will be appreciated that, in the encoding mode described above, the DCT and IDCT functions of the DCT/IDCT block 301/303 will be performed in an interleaved manner, with one or more DCT operations being interleaved with one or more IDCT operations, depending upon the order of I, P and B pictures being encoded.
-
With slight modifications to control software and circuitry, the encoding circuitry described above can perform decoding of an encoded MPEG stream. This is because the inverse quantization software and IDCT hardware are common to the encoding and decoding process. There are at least three ways this can be achieved:
- Option 1. If it is only required to offload the IDCT processing from the processor, the dequantized coefficient blocks can be streamed from the processor to the DCT/IDCT block 301/303 via the third and fourth SDI ports 408 and 409. The results of the IDCT are then read back via the first and second SDI ports 403 and 404.
-
Option 2. Option 1 can be extended to allow more of the decoding load to be passed to the DCT/IDCT block 401. In particular, the predictor blocks are read into the delay buffer 305. The coefficient blocks are then read in via the same route by the DCT/IDCT block 301/303 (in IDCT mode). After the IDCT has taken place, the predictor and IDCT processed macroblocks are combined by the addition circuitry 304 and written to system memory via the system data bus.
-
Option 3. In an alternative to option 2, the motion estimation block is configured to provide the predictor blocks to the delay buffer 305 via the multiplexer 402. The coefficient blocks are provided to the DCT/IDCT block 301/303 (in IDCT mode), and the remainder of the procedure is as per the second decoding arrangement.
-
Although preferred embodiments of the method and apparatus of the present invention have been illustrated in the accompanying Drawings and described in the foregoing Detailed Description, it will be understood that the invention is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the spirit of the invention as set forth and defined by the following claims.
Claims (48)
1. A decoder circuit, comprising:
a processor configured to inverse quantize macroblocks to generate inverse quantized macroblocks;
an inverse discrete cosine transformation circuit that processes the inverse quantized macroblocks from the processor to generate IDCT transformed macroblocks; and
an addition circuit that adds a single IDCT transformed macroblock and a corresponding predictor macroblock to generate a reconstructed picture macroblock.
2. The decoder circuit of claim 1, further comprising a delay buffer for storing the corresponding predictor macroblocks.
3. The decoder circuit of claim 2, wherein a motion estimation engine provides the corresponding predictor macroblocks to the delay buffer.
4. The decoder circuit of claim 3, further comprising a first streaming data connection for streaming the inverse quantized macroblocks from the processor to the IDCT circuit.
5. The decoder circuit of claim 4, wherein the IDCT circuit processes data at a rate determined by the arrival of data from the relevant data connection.
6. The decoder circuit of claim 5, wherein the IDCT circuit processes data at a rate determined by a handshake control signal.
7. The decoder circuit of claim 6, further comprising a macroblock memory to store the reconstructed picture macroblocks.
8. A method for decoding an encoded bitstream, comprising:
inverse quantizing decoded macroblocks in a processor to generate inverse quantized macroblocks;
generating inverse discrete cosine transformation (IDCT) transformed macroblocks from the inverse quantized macroblocks; and
adding a single IDCT transformed macroblock and a corresponding predictor macroblock to generate a reconstructed picture macroblock.
9. The method according to claim 8, further comprising storing the corresponding predictor macroblocks in a delay buffer.
10. The method according to claim 9, further comprising receiving the corresponding predictor macroblocks from a motion estimation engine.
11. The method according to claim 10, further comprising streaming the inverse quantized macroblocks from the processor to the IDCT circuit.
12. The method according to claim 11, wherein generating the IDCT transformed macroblocks takes place at a rate determined by the arrival of data.
13. The method according to claim 12, wherein generating the IDCT transformed macroblocks takes place at a rate determined by a handshake control signal.
14. The method according to claim 13, further comprising storing the reconstructed picture macroblocks in a macroblock memory.
15. A video compression circuit, comprising:
a discrete cosine transform (DCT) circuit for accepting prediction error macroblocks and generating DCT transformed macroblocks;
a processor being configured to quantize the DCT transformed macroblocks to generate quantized macroblocks, and to inverse quantize the quantized macroblocks to generate inverse quantized macroblocks;
an inverse discrete cosine transform (IDCT) circuit, wherein the IDCT circuit transforms the inverse quantized macroblocks to generate reconstructed prediction error macroblocks; and
an addition circuit for adding a single reconstructed prediction error macroblock and a corresponding predictor macroblock to generate respective reconstructed macroblocks for use in the encoding of other macroblocks.
16. The compression circuit of claim 15, further comprising means for zig-zag scanning, run level coding and variable length coding the quantized macroblocks to generate an encoded bitstream.
17. The compression circuit of claim 16, wherein the means for zig-zag scanning and run length coding is the processor configured to implement the zig-zag scanning and run length coding, and the means for variable length coding is a hardware VLC packer.
18. The compression circuit of claim 17, further comprising:
a first streaming data connection for streaming the DCT transformed macroblocks from the DCT transformation circuit to the processor;
a second streaming data connection for streaming the inverse quantized macroblocks from the processor to the IDCT transformation circuit; and
a third streaming data connection for streaming the run length coded data from the processor to the hardware VLC packer.
19. The compression circuit of claim 18, wherein the DCT circuit, the IDCT circuit, and the hardware VLC packer process data at a rate determined by the arrival of data from the relevant data connection.
20. The compression circuit according to claim 19, wherein the DCT circuit, the IDCT circuit, and the hardware VLC packer process data at a rate determined by a handshake control signal.
21. The compression circuit according to claim 20, further comprising a motion estimation engine for supplying the prediction error macroblocks to the DCT circuit.
22. The compression circuit according to claim 21, further comprising a macroblock memory for storing the reconstructed macroblocks.
23. A method of generating a compressed video bitstream, the method comprising:
generating DCT transformed macroblocks by applying prediction error macroblocks to a discrete cosine transform (DCT) circuit;
quantizing the DCT transformed macroblocks to generate quantized macroblocks;
inverse quantizing the quantized macroblocks to generate inverse quantized macroblocks;
generating reconstructed prediction error macroblocks by applying the inverse quantized macroblocks to an inverse discrete cosine transform (IDCT) circuit; and
adding a single reconstructed prediction error macroblock and a corresponding predictor macroblock to generate respective reconstructed macroblocks for use in the encoding of other macroblocks.
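The loop recited in claim 23 (DCT, quantize, inverse quantize, IDCT, then add the predictor) can be sketched numerically. This is a minimal model, not the claimed circuitry: the orthonormal matrix DCT and the flat quantizer step `q` are assumptions for illustration, and the function and variable names are hypothetical.

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix: forward 2-D DCT is C @ X @ C.T,
# inverse is C.T @ Y @ C (C is orthogonal, so C @ C.T == I).
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def encode_reconstruct(pred_error, q=16):
    """One pass of the claimed loop: DCT -> quantize -> inverse
    quantize -> IDCT, returning the reconstructed prediction error."""
    coeffs = C @ pred_error @ C.T      # DCT transformed block
    levels = np.round(coeffs / q)      # quantized block
    recovered = levels * q             # inverse quantized block
    return C.T @ recovered @ C         # reconstructed prediction error

rng = np.random.default_rng(0)
pred_error = rng.integers(-64, 64, size=(N, N)).astype(float)
predictor = rng.integers(0, 256, size=(N, N)).astype(float)

recon_error = encode_reconstruct(pred_error)
reconstructed = predictor + recon_error  # the addition-circuit step
```

Because the transform is orthonormal, the only loss is quantization rounding, which is why the encoder can reuse `reconstructed` as the reference for predicting later macroblocks, exactly as a decoder would see it.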
24. The method according to
claim 23, further comprising generating an encoded bitstream by zig-zag scanning, run level coding and variable length coding the quantized macroblocks.
25. The method according to
claim 24, wherein the zig-zag scanning and run level coding of the quantized macroblocks are performed by a processor configured to implement the zig-zag scanning and run level coding, and the variable length coding is performed on the run level coded macroblocks by a hardware VLC packer.
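The zig-zag scanning and run level coding steps can be illustrated in a few lines. The scan order shown is the conventional one for 8x8 blocks (walking anti-diagonals so low-frequency coefficients come first); that particular order is an assumption, since the claims do not fix one, and the helper names are hypothetical.

```python
def zigzag_order(n=8):
    # Conventional zig-zag scan: sort positions by anti-diagonal,
    # alternating the traversal direction on odd/even diagonals.
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def run_level(block):
    # Emit (zero_run, level) pairs over the zig-zag sequence;
    # trailing zeros are left to an implied end-of-block code.
    pairs, run = [], 0
    for i, j in zigzag_order(len(block)):
        v = block[i][j]
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    return pairs

# A mostly-zero quantized block: DC = 5, one AC coefficient = -3.
block = [[0] * 8 for _ in range(8)]
block[0][0], block[1][0] = 5, -3
pairs = run_level(block)  # -> [(0, 5), (1, -3)]
```

Because quantized blocks are mostly zeros, the run level pairs are far more compact than the raw 64 coefficients, which is what makes the subsequent variable length coding effective.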
26. The method according to
claim 25, further comprising:
streaming the DCT transformed macroblocks from the DCT transformation circuit to the processor;
streaming the inverse quantized macroblocks from the processor to the IDCT transformation circuit; and
streaming the run level coded data to the hardware VLC packer.
27. The method according to
claim 26, wherein generating the DCT transformed macroblocks, generating the reconstructed prediction error macroblocks, and generating the encoded bitstream take place at a rate determined by the arrival of data from the relevant data connection.
28. The method according to
claim 27, wherein generating the DCT transformed macroblocks, generating the reconstructed prediction error macroblocks, and generating the encoded bitstream take place at a rate determined by a handshake control signal.
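The rate behavior recited in claims 27 and 28, where each circuit processes data only as it arrives over a streaming connection, gated by a handshake signal, can be modeled with a small valid/ready sketch. The class and signal names here are hypothetical, and this is a software analogy rather than the claimed hardware.

```python
class Stage:
    """Minimal model of a data-driven pipeline stage: it fires only
    when upstream has asserted 'valid', so its processing rate is set
    by data arrival and the handshake, not by a free-running clock."""
    def __init__(self, fn):
        self.fn = fn
        self.data = None
        self.valid = False          # the handshake control signal

    def ready(self):
        return not self.valid       # stage can accept a new datum

    def push(self, x):              # upstream asserts valid with data
        assert self.ready()
        self.data, self.valid = x, True

    def step(self):                 # fires only when valid data waits
        if not self.valid:
            return None             # stalled: nothing has arrived
        out = self.fn(self.data)
        self.data, self.valid = None, False
        return out

# Stand-in transform; a real stage would wrap the DCT, IDCT, or packer.
idct_stage = Stage(lambda coeffs: [c * 2 for c in coeffs])
```

Chaining such stages gives back-pressure for free: a downstream stall keeps `ready()` false, which in turn throttles every upstream producer.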
29. The method according to
claim 28, further comprising receiving the prediction error macroblocks from a motion estimation engine.
30. The method according to
claim 29, further comprising storing the reconstructed macroblocks in a macroblock memory.
31. An encoder/decoder circuit, comprising:
a discrete cosine transform (DCT) circuit to generate DCT transformed macroblocks from prediction error macroblocks;
a processor configured to quantize the DCT transformed macroblocks to generate quantized macroblocks, and to inverse quantize the quantized macroblocks to generate inverse quantized macroblocks;
an inverse discrete cosine transform (IDCT) circuit to transform the inverse quantized macroblocks to generate reconstructed prediction error macroblocks;
an addition circuit to add a single reconstructed prediction error macroblock and a corresponding predictor macroblock to generate respective reconstructed macroblocks; and
a control circuit to configure the encoder/decoder circuit to encode or decode a bitstream.
32. The encoder/decoder of
claim 31, wherein the encoder/decoder circuit, when configured for decoding mode, uses the processor to inverse quantize macroblocks, together with the IDCT circuit and the addition circuit, to generate the reconstructed macroblocks.
33. The encoder/decoder of
claim 32, further comprising means for zig-zag scanning, run level coding and variable length coding the quantized macroblocks to generate an encoded bitstream.
34. The encoder/decoder of
claim 33, wherein the means for zig-zag scanning and run level coding is the processor configured to implement the zig-zag scanning and run level coding, and the means for variable length coding is a hardware VLC packer.
35. The encoder/decoder of
claim 34, further comprising a delay buffer for storing the corresponding predictor macroblocks.
36. The encoder/decoder of
claim 35, wherein a motion estimation engine provides the corresponding predictor macroblocks to the delay buffer.
37. The encoder/decoder of
claim 36, further comprising:
a first streaming data connection for streaming DCT transformed macroblocks to the processor;
a second streaming data connection for streaming the inverse quantized macroblocks from the processor to the IDCT circuit; and
a third streaming data connection for streaming the run level coded data from the processor to the hardware VLC packer.
38. The encoder/decoder of
claim 37, wherein the DCT circuit, the IDCT circuit, and the hardware VLC packer process data at a rate determined by the arrival of data from the relevant data connection.
39. The encoder/decoder of
claim 38, wherein the DCT circuit, the IDCT circuit, and the hardware VLC packer process data at a rate determined by a handshake control signal.
40. The encoder/decoder of
claim 39, further comprising a macroblock memory to store the reconstructed macroblocks.
41. A method for encoding and decoding in an encoder/decoder circuit having a control circuit to configure the encoder/decoder circuit for encoding or decoding mode, comprising:
generating DCT transformed macroblocks by applying prediction error macroblocks to a discrete cosine transform (DCT) circuit;
quantizing the DCT transformed macroblocks to generate quantized macroblocks;
inverse quantizing the quantized macroblocks to generate inverse quantized macroblocks;
generating reconstructed prediction error macroblocks by applying the inverse quantized macroblocks to an inverse discrete cosine transform (IDCT) circuit; and
adding the reconstructed prediction error macroblocks and corresponding predictor macroblocks to generate respective reconstructed macroblocks;
wherein the reconstructed macroblocks are useful either as decoded reconstructed picture macroblocks or for encoding other macroblocks.
42. The method according to
claim 41, further comprising generating an encoded bitstream by zig-zag scanning, run level coding and variable length coding the quantized macroblocks.
43. The method according to
claim 42, wherein the zig-zag scanning and run level coding of the quantized macroblocks are performed by a processor configured to implement the zig-zag scanning and run level coding, and the variable length coding is performed on the run level coded macroblocks by a hardware VLC packer.
44. The method according to
claim 43, further comprising storing the corresponding predictor macroblocks in a delay buffer.
45. The method according to
claim 44, further comprising receiving the corresponding predictor macroblocks and the prediction error macroblocks from a motion estimation engine.
46. The method according to
claim 45, further comprising:
streaming the DCT transformed macroblocks to the processor;
streaming the inverse quantized macroblocks from the processor to the IDCT circuit; and
streaming the run level coded data from the processor to the hardware VLC packer.
47. The method according to
claim 46, wherein generating the DCT transformed macroblocks, generating the reconstructed prediction error macroblocks, and generating the encoded bitstream take place at a rate determined by the arrival of data from the relevant data connection.
48. The method according to
claim 47, wherein generating the DCT transformed macroblocks, generating the reconstructed prediction error macroblocks, and generating the encoded bitstream take place at a rate determined by a handshake control signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/020,668 US20080123748A1 (en) | 2002-03-18 | 2008-01-28 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP02251932A EP1347650A1 (en) | 2002-03-18 | 2002-03-18 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
EP02251932.6 | 2002-03-18 | ||
US10/391,442 US7372906B2 (en) | 2002-03-18 | 2003-03-17 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
US12/020,668 US20080123748A1 (en) | 2002-03-18 | 2008-01-28 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/391,442 Division US7372906B2 (en) | 2002-03-18 | 2003-03-17 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080123748A1 true US20080123748A1 (en) | 2008-05-29 |
Family
ID=27771942
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/391,442 Active 2025-05-08 US7372906B2 (en) | 2002-03-18 | 2003-03-17 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
US12/020,668 Abandoned US20080123748A1 (en) | 2002-03-18 | 2008-01-28 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/391,442 Active 2025-05-08 US7372906B2 (en) | 2002-03-18 | 2003-03-17 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
Country Status (2)
Country | Link |
---|---|
US (2) | US7372906B2 (en) |
EP (2) | EP2309759A1 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2309759A1 (en) * | 2002-03-18 | 2011-04-13 | STMicroelectronics Limited | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
JP3680846B2 (en) * | 2003-05-28 | 2005-08-10 | セイコーエプソン株式会社 | MOVING IMAGE COMPRESSION DEVICE AND IMAGING DEVICE USING THE SAME |
JP3680845B2 (en) * | 2003-05-28 | 2005-08-10 | セイコーエプソン株式会社 | Compressed video decompression device and image display device using the same |
US20050105612A1 (en) * | 2003-11-14 | 2005-05-19 | Sung Chih-Ta S. | Digital video stream decoding method and apparatus |
KR100787655B1 (en) * | 2003-12-22 | 2007-12-21 | 닛본 덴끼 가부시끼가이샤 | Method and device for encoding moving picture |
KR100800772B1 (en) * | 2004-05-26 | 2008-02-01 | 마츠시타 덴끼 산교 가부시키가이샤 | Apparatus, Method, Program and Media for Motion Vector Coding |
CN1972168B (en) * | 2005-11-25 | 2010-04-28 | 杭州中天微系统有限公司 | Programmable variable-length bit-stream processor |
US7929599B2 (en) | 2006-02-24 | 2011-04-19 | Microsoft Corporation | Accelerated video encoding |
CN103067718B (en) * | 2013-01-30 | 2015-10-14 | 上海交通大学 | Be applicable to the one-dimensional discrete cosine inverse transform module circuit of digital video decoding |
- 2002-03-18 EP EP10183232A patent/EP2309759A1/en not_active Withdrawn
- 2002-03-18 EP EP02251932A patent/EP1347650A1/en not_active Ceased
- 2003-03-17 US US10/391,442 patent/US7372906B2/en active Active
- 2008-01-28 US US12/020,668 patent/US20080123748A1/en not_active Abandoned
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6091857A (en) * | 1991-04-17 | 2000-07-18 | Shaw; Venson M. | System for producing a quantized signal |
US5198901A (en) * | 1991-09-23 | 1993-03-30 | Matsushita Electric Corporation Of America | Derivation and use of motion vectors in a differential pulse code modulation system |
US5535288A (en) * | 1992-05-18 | 1996-07-09 | Silicon Engines, Inc. | System and method for cross correlation with application to video motion vector estimator |
US6104751A (en) * | 1993-10-29 | 2000-08-15 | Sgs-Thomson Microelectronics S.A. | Apparatus and method for decompressing high definition pictures |
US5768433A (en) * | 1994-03-30 | 1998-06-16 | Sgs-Thomson Microelectronics S.A. | Picture compression circuit |
US5774206A (en) * | 1995-05-10 | 1998-06-30 | Cagent Technologies, Inc. | Process for controlling an MPEG decoder |
US5680482A (en) * | 1995-05-17 | 1997-10-21 | Advanced Micro Devices, Inc. | Method and apparatus for improved video decompression by adaptive selection of video input buffer parameters |
US5805483A (en) * | 1995-12-13 | 1998-09-08 | Samsung Electronics Co., Ltd. | Method of converting data outputting sequence in inverse DCT and circuit thereof |
US6256349B1 (en) * | 1995-12-28 | 2001-07-03 | Sony Corporation | Picture signal encoding method and apparatus, picture signal transmitting method, picture signal decoding method and apparatus and recording medium |
US6480541B1 (en) * | 1996-11-27 | 2002-11-12 | Realnetworks, Inc. | Method and apparatus for providing scalable pre-compressed digital video with reduced quantization based artifacts |
US7075986B2 (en) * | 1996-11-27 | 2006-07-11 | Realnetworks, Inc. | Method and apparatus for providing scalable pre-compressed digital video with reduced quantization based artifacts |
US6052415A (en) * | 1997-08-26 | 2000-04-18 | International Business Machines Corporation | Early error detection within an MPEG decoder |
US6788740B1 (en) * | 1999-10-01 | 2004-09-07 | Koninklijke Philips Electronics N.V. | System and method for encoding and decoding enhancement layer data using base layer quantization data |
US20010033617A1 (en) * | 2000-04-19 | 2001-10-25 | Fumitoshi Karube | Image processing device |
US6842484B2 (en) * | 2001-07-10 | 2005-01-11 | Motorola, Inc. | Method and apparatus for random forced intra-refresh in digital image and video coding |
US7015921B1 (en) * | 2001-12-31 | 2006-03-21 | Apple Computer, Inc. | Method and apparatus for memory access |
US6996173B2 (en) * | 2002-01-25 | 2006-02-07 | Microsoft Corporation | Seamless switching of scalable video bitstreams |
US7372906B2 (en) * | 2002-03-18 | 2008-05-13 | Stmicroelectronics Limited | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
Also Published As
Publication number | Publication date |
---|---|
US20030231710A1 (en) | 2003-12-18 |
EP2309759A1 (en) | 2011-04-13 |
EP1347650A1 (en) | 2003-09-24 |
US7372906B2 (en) | 2008-05-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4138056B2 (en) | 2008-08-20 | Multi-standard decompression and / or compression device |
US5768537A (en) | 1998-06-16 | Scalable MPEG2 compliant video encoder |
US20080123748A1 (en) | 2008-05-29 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
EP0585051B1 (en) | 2001-03-07 | Image processing method and apparatus |
EP0690392A1 (en) | 1996-01-03 | Method and device for transcoding a sequence of coded digital signals |
KR19990036188A (en) | 1999-05-25 | Method and apparatus for decoding encoded digital video signal |
EP0756803B1 (en) | 2000-03-29 | A transcoder |
WO2009130886A1 (en) | 2009-10-29 | Moving image coding device, imaging device and moving image coding method |
US5748240A (en) | 1998-05-05 | Optimal array addressing control structure comprising an I-frame only video encoder and a frame difference unit which includes an address counter for addressing memory addresses |
EP1838108A1 (en) | 2007-09-26 | Processing video data at a target rate |
JP2000050263A (en) | 2000-02-18 | Image coder, decoder and image-pickup device using them |
KR100312421B1 (en) | 2001-12-12 | A conversion method of the compressed moving video on the video communication system |
KR20010083718A (en) | 2001-09-01 | Method and apparatus for transformation and inverse transformation of image for image compression coding |
JPH10271516A (en) | 1998-10-09 | Compression coder, coding method, decoder and decoding method |
KR101147744B1 (en) | 2012-05-25 | Method and Apparatus of video transcoding and PVR of using the same |
JP4201839B2 (en) | 2008-12-24 | Overhead data processor of image processing system that effectively uses memory |
KR100364748B1 (en) | 2002-12-16 | Apparatus for transcoding video |
KR100598093B1 (en) | 2006-07-07 | Video Compression Device with Low Memory Bandwidth and Its Method |
US6097843A (en) | 2000-08-01 | Compression encoding apparatus, encoding method, decoding apparatus, and decoding method |
US20010021221A1 (en) | 2001-09-13 | Transcoding method and device |
KR100323235B1 (en) | 2002-02-19 | Algorithm and Implementation Method of a Low-Complexity Video Encoder |
JP2001189662A (en) | 2001-07-10 | Method for changing bit rate of data stream of encoded video picture |
JP3141149B2 (en) | 2001-03-05 | Image coding device |
JP4100067B2 (en) | 2008-06-11 | Image information conversion method and image information conversion apparatus |
JP4140163B2 (en) | 2008-08-27 | Encoding method converter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2013-04-24 | STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |