Hindawi Publishing Corporation
EURASIP Journal on Applied Signal Processing
Volume 2006, Article ID 57308, Pages 1–21
DOI 10.1155/ASP/2006/57308

Motion Estimation and Signaling Techniques for 2D+t Scalable Video Coding

M. Tagliasacchi, D. Maestroni, S. Tubaro, and A. Sarti
Dipartimento di Elettronica e Informazione, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy

Received 1 March 2005; Revised 5 August 2005; Accepted 12 September 2005

We describe a fully scalable wavelet-based 2D+t (in-band) video coding architecture. We propose new coding tools specifically designed for this framework, aimed at two goals: reducing the computational complexity at the encoder without sacrificing compression, and improving the coding efficiency, especially at low bitrates. To this end, we focus our attention on motion estimation and motion vector encoding. We propose a fast motion estimation algorithm that works in the wavelet domain and exploits the geometrical properties of the wavelet subbands. We show that its computational complexity grows linearly with the size of the search window, while approaching the performance of a full search strategy. We extend the proposed motion estimation algorithm to work with blocks of variable size, in order to better capture local motion characteristics and thus improve the rate-distortion behavior. Given this motion field representation, we propose a motion vector coding algorithm that adaptively scales the motion bit budget according to the target bitrate, improving the coding efficiency at low bitrates. Finally, we show how to optimally scale the motion field when the sequence is decoded at reduced spatial resolution. Experimental results illustrate the advantages of each individual coding tool presented in this paper. Based on these simulations, we define the best configuration of coding parameters and we compare the proposed codec with MC-EZBC, a widely used reference codec implementing the t+2D framework.

Copyright © 2006 Hindawi Publishing Corporation. All rights reserved.

1. INTRODUCTION

Today's video streaming applications require codecs to provide a bitstream that can be flexibly adapted to the characteristics of the network and the receiving device. Such codecs are expected to fulfill the scalability requirements so that encoding is performed only once, while decoding takes place each time at different spatial resolutions, frame rates, and bitrates. Consider, for example, streaming a video content to TV sets, PDAs, and cellphones at the same time. Each device has its own constraints in terms of bandwidth, display resolution, and battery life. For this reason it is useful for the end users to subscribe to a scalable video stream, in such a way that a representation of the video content matching the device characteristics can be extracted at decoding time. Wavelet-based video codecs have proved able to naturally fit this application scenario, by decomposing the video sequence into a plurality of spatio-temporal subbands. Combined with an embedded entropy coding of the wavelet coefficients such as JPEG2000 [1], SPIHT (set partitioning in hierarchical trees) [2], EZBC (embedded zero-block coding) [3], or ESCOT (motion-based embedded subband coding with optimized truncation) [4], it is possible to support spatial, temporal, and SNR (signal-to-noise ratio) scalability.
Broadly speaking, two families of wavelet-based video codecs have been described in the literature:

(i) t+2D schemes [5–7]: the video sequence is first filtered in the temporal direction along the motion trajectories (MCTF, motion-compensated temporal filtering [8]) in order to tackle temporal redundancy. Then, a 2D wavelet transform is carried out in the spatial domain. Motion estimation/compensation takes place in the spatial domain, hence conventional coding tools used in nonscalable video codecs can easily be reused;

(ii) 2D+t (or in-band) schemes [9, 10]: each frame of the video sequence is wavelet-transformed in the spatial domain, followed by MCTF. Motion estimation/compensation is carried out directly in the wavelet domain.

Due to the nonlinear motion warping operator needed in the temporal filtering stage, the order of the two transforms does not commute. In fact, the wavelet transform is not shift-invariant, and care has to be taken since the motion estimation/compensation task is performed in the wavelet domain. Several approaches have been proposed in the literature to tackle this issue. Although known under different names (low-band-shift [11], ODWT (overcomplete discrete wavelet transform) [12], redundant DWT [10]), all the solutions are different implementations of the algorithme à trous [13], which computes an overcomplete wavelet decomposition by omitting the decimators in the fast DWT algorithm and stretching the wavelet filters by inserting zeros. A two-level ODWT of a 1D signal is illustrated in Figure 1, where H_0(z) and H_1(z) are, respectively, the wavelet low-pass and high-pass filters used in the conventional critically sampled DWT, and H_i^(k)(z) is the dilated version of H_i(z) obtained by inserting k − 1 zeros between two consecutive samples. The extension to 2D signals is straightforward with a separable approach.

[Figure 1: Two-level overcomplete DWT (ODWT) of a 1D signal according to the algorithme à trous implementation.]

Despite its higher complexity, a 2D+t scheme comes with the advantage of reducing the impact of the blocking artifacts caused by the failure of block-based motion models. Such artifacts are canceled out by the inverse DWT spatial transform, without the need for some sort of deblocking filter. This greatly enhances the perceptual quality of reconstructed sequences, especially at low bitrates. Furthermore, as shown in [14, 15], 2D+t approaches naturally fit the spatial scalability requirements, providing higher coding efficiency when the sequence is decoded at reduced spatial resolution. This is because in-band motion compensation limits the drift that occurs when the decoder does not have access to all the wavelet subbands used at the encoder side. Finally, 2D+t schemes naturally support multi-hypothesis motion compensation, taking advantage of the redundancy of the ODWT [10].
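As a concrete illustration of the algorithme à trous (our sketch, not from the paper; Haar taps stand in for the 9/7 filters used later), one level of the undecimated transform is simply a convolution with dilated filters and no downsampling:

```python
import numpy as np

def dilate(h, k):
    """Insert k - 1 zeros between consecutive taps: H_i^(k)(z) = H_i(z^k)."""
    out = np.zeros((len(h) - 1) * k + 1)
    out[::k] = h
    return out

def atrous_level(x, h0, h1, k):
    """One ODWT level: filter with dilated taps, keep all samples (no decimation)."""
    low = np.convolve(x, dilate(h0, k), mode="same")
    high = np.convolve(x, dilate(h1, k), mode="same")
    return low, high

# Haar analysis filters, used here only as a simple stand-in for the 9/7 pair.
h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)

x = np.random.randn(64)
l1, hb1 = atrous_level(x, h0, h1, 1)   # level 1: undilated filters
l2, hb2 = atrous_level(l1, h0, h1, 2)  # level 2: filters dilated by 2

# Every subband has as many samples as x, unlike the critically sampled DWT.
assert l2.shape == hb1.shape == hb2.shape == x.shape
```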
1.1. Motivations and goals

In this paper we present a fully scalable video coding architecture based on a 2D+t approach. Our contribution is at the system level. Figure 2 depicts the overall video coding architecture, emphasizing the modules we focus on in this paper.

[Figure 2: Block diagram of the proposed scalable 2D+t coding architecture. Call-outs point to the novel features described in this paper: motion estimation (ME) by FIBME with variable size block matching, scalable MV encoding of the motion information, spatial-domain DWT, in-band MCTF (ODWT), and EZBC coding of the wavelet subband coefficients.]

It is widely acknowledged that motion modeling is of fundamental importance in the design of a video coding architecture that aims to match the coding efficiency of state-of-the-art codecs. As an example, much of the coding gain observed in the recent H.264/AVC standard [16] is due to more sophisticated motion modeling tools (variable block sizes, quarter-pixel motion accuracy, multiple reference frames, etc.). Motion modeling is particularly relevant when sequences are decoded at low bitrates and at reduced spatial resolution, because a significant fraction of the bit budget is then allocated to motion-related information. This motivates us to focus on motion estimation/compensation and motion signaling techniques to improve the coding efficiency of the proposed 2D+t wavelet-based video codec. While achieving better compression, we also want to keep the computational complexity of the encoder under control, in order to design a practical architecture.

In Section 2, we describe the details of the proposed 2D+t scalable video codec (see Figure 2). Based on this coding framework, we propose novel techniques to improve the coding efficiency and reduce the complexity of the encoder. Specifically, we propose the following:

(i) in Section 2.1, a fast motion estimation algorithm designed to work in the wavelet domain (FIBME, fast in-band motion estimation), exploiting the geometrical properties of the wavelet subbands. Section 2.1 also compares the computational complexity of the proposed approach with that of an exhaustive full search;

(ii) in Section 2.2, an extension of the FIBME algorithm to blocks of variable size;

(iii) in Section 2.3, a scalable representation of the motion model, suitable for variable block sizes, which allows the bit budget allocated to motion to be adapted to the target bitrate;

(iv) in Section 2.4, a formal analysis describing how the motion field estimated at full resolution can be adapted to reduced spatial resolutions. We show that motion vector truncation, adopted in the reference implementation of the MC-EZBC codec [5], is not the optimal choice when the motion field resolution needs to be scaled.

The paper builds upon our previous work that appeared in [17–19].

1.2. Related works

In 2D+t wavelet-based video codecs, motion estimation/compensation needs to be carried out directly in the wavelet domain. Although a great deal of work focuses on how to avoid the shift variance of the wavelet transform [9–11, 20], the problem of fast motion estimation in 2D+t schemes has not been thoroughly investigated in the literature. References [21, 22] propose similar algorithms for in-band motion estimation. In both cases different motion vectors are assigned to each scale of wavelet subbands. In order to decrease the complexity of the motion search, these algorithms work in a multi-resolution fashion, so that the motion search at a given resolution is initialized with the estimate obtained at the lower resolution. The proposed fast motion estimation algorithm shares the multi-resolution approach of [21, 22].
Despite this similarity, the proposed algorithm takes full advantage of the geometrical properties of the wavelet subbands, and different motion vectors are used to compensate subbands at the same scale but with different orientation (see Section 2.1), giving more flexibility in the modeling of local motion.

Variable size block matching is well known in the literature, at least when applied in the spatial domain. The state-of-the-art H.264/AVC standard [16] efficiently exploits this technique. In [23], a hierarchical variable size block matching (HVSBM) algorithm is used in the context of a t+2D wavelet-based codec. The MC-EZBC codec [5] adopts the same algorithm in the motion estimation phase. The authors of [24] independently proposed a variable size block matching strategy within their 2D+t wavelet-based codec. Their search for the best motion partition is close to the idea of H.264/AVC, since all the possible block partitions are tested in order to determine the optimal one. On the other hand, the algorithm proposed in this paper (see Section 2.2) is more similar to the HVSBM algorithm [23], as the search is suboptimal but faster.

Scalability of motion vectors was first proposed in [25] and later further discussed in [26], where JPEG2000 is used to encode the motion field components. The work in [26] assumes that fixed block sizes (or regular meshes) are used in the motion estimation phase. More recently, other works have appeared in the literature [27–29], describing coding algorithms for motion fields with arbitrary block sizes, specifically designed for wavelet-based scalable video codecs. The algorithm described in this paper has been designed independently and shares the general approach of [26, 27], since the motion field is quantized when decoding at low bitrates. Despite these similarities, the proposed entropy coding scheme is novel and is inspired by SPIHT [2], allowing lossy-to-lossless representation of the motion field (see Section 2.3).

2. PROPOSED 2D+t CODEC

Figure 2 illustrates the functional modules that compose the proposed 2D+t codec. First, a group of pictures (GOP) is fed to the input, and each frame is wavelet-transformed in the spatial domain using Daubechies 9/7 filters. Then, in-band MCTF is performed using the redundant representation of the ODWT to combat shift variance. Motion is estimated (ME) by variable size block matching with the FIBME algorithm (fast in-band motion estimation) described in Section 2.1. Finally, the wavelet coefficients are entropy coded with EZBC (embedded zero-block coding), while motion vectors are encoded in a scalable way by the algorithm proposed in Section 2.3.

In the following, we concentrate on the description of the in-band MCTF module, as it provides the background needed to introduce the proposed fast motion estimation algorithm. MCTF is usually performed taking advantage of the lifting implementation. This technique splits direct wavelet temporal filtering into a sequence of prediction and update steps, in such a way that the process is both perfectly invertible and computationally efficient. In our implementation a simple Haar transform is used, although the extension to longer filters such as 5/3 [6, 7] is conceptually straightforward.
In the Haar case, the input frames are recursively processed two by two, according to the following lifting equations:

\[ H = \frac{1}{\sqrt{2}}\bigl(B - W_{A\to B}(A)\bigr), \qquad L = \sqrt{2}\,A + W_{B\to A}(H), \tag{1} \]

where A and B are two successive frames and W_{A→B}(·) is a motion warping operator that warps frame A into the coordinate system of frame B. L and H are, respectively, the low-pass and high-pass temporal subbands. These two lifting steps are then iterated on the L subbands of the GOP, so that for each GOP only one low-pass subband is obtained.

The prediction step is the counterpart of motion-compensated prediction in conventional closed-loop schemes: the energy of frame H is lower than that of the original frame, thus achieving compression. The update step can be thought of as a motion-compensated averaging along the motion trajectories: the updated frames are free from temporal aliasing artifacts, and at the same time L requires fewer bits than frame A for the same quality, because of the motion-compensated denoising performed by the update step.

In the 2D+t scenario, temporal filtering occurs in the wavelet domain, and the reference frame is therefore available in its overcomplete version in order to combat the shift variance of the wavelet transform. In what follows we illustrate an implementation of the lifting structure that works in the overcomplete wavelet domain. Figure 3 shows the current and the overcomplete reference frame together with the estimated motion vector (dx, dy) in the wavelet domain. For the sake of clarity, we refer to one wavelet subband at decomposition level 1 (LH_1, HL_1, or HH_1).

[Figure 3: In-band MCTF: (a) temporal filtering at the encoder side (MCTF analysis); (b) temporal filtering at the decoder side (MCTF synthesis).]

The computation of H_i is rather straightforward. For each coefficient of the current frame, the corresponding wavelet-transformed coefficient in the overcomplete transformed reference frame is subtracted:

\[ H_i(x, y) = \frac{1}{\sqrt{2}}\Bigl(B_i(x, y) - A_i^{O}\bigl(2^i x + dx,\, 2^i y + dy\bigr)\Bigr), \tag{2} \]

where A_i^O is the overcomplete wavelet-transformed reference frame subband at level i; it has the same number of samples as the original frame. The computation of the L_i subband is not as trivial. While H_i shares its coordinate system with the current frame, L_i uses the reference frame coordinate system. A straightforward implementation could be

\[ L_i(x, y) = \sqrt{2}\,A_i^{O}\bigl(2^i x,\, 2^i y\bigr) + H_i\bigl(x - dx/2^i,\, y - dy/2^i\bigr). \tag{3} \]

The problem here is that the coordinates (x − dx/2^i, y − dy/2^i) might be noninteger valued, even if full-pixel accuracy of the displacements is used in the spatial domain. Consider, for example, the coefficient D in the L_i subband, as shown in Figure 3. This coefficient should be computed as the sum of coefficients E and F. Unfortunately, the latter does not exist in subband H_i, which suggests that an interpolated version of H_i is needed. First, we compute the inverse DWT (IDWT) of the H_i subbands, which transforms them back to the spatial domain. Then, we take the overcomplete DWT of the result, obtaining H_i^O.
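Before deriving the in-band equations, it may help to see the spatial-domain lifting of (1) in code. The sketch below is our illustration, not the authors' implementation: a global integer translation (np.roll) stands in for the block-based warping operator W, and perfect reconstruction holds because analysis and synthesis reuse the same warp.

```python
import numpy as np

def warp(frame, dx, dy):
    """Toy warping operator W: a global integer translation via np.roll.
    The real codec uses block-based motion and the overcomplete DWT."""
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

def haar_mctf_analysis(A, B, dx, dy):
    """Lifting steps of (1): predict B from warped A, then update A with warped H."""
    H = (B - warp(A, dx, dy)) / np.sqrt(2.0)       # prediction step, W_{A->B}
    L = np.sqrt(2.0) * A + warp(H, -dx, -dy)       # update step, W_{B->A}
    return L, H

def haar_mctf_synthesis(L, H, dx, dy):
    """Inverse lifting: undo the update, then the prediction."""
    A = (L - warp(H, -dx, -dy)) / np.sqrt(2.0)
    B = np.sqrt(2.0) * H + warp(A, dx, dy)
    return A, B

A = np.random.randn(16, 16)
B = warp(A, 3, 1) + 0.01 * np.random.randn(16, 16)  # B is roughly A shifted by (3, 1)
L, H = haar_mctf_analysis(A, B, 3, 1)
A2, B2 = haar_mctf_synthesis(L, H, 3, 1)
assert np.allclose(A, A2) and np.allclose(B, B2)    # perfect reconstruction
print(np.abs(H).mean() < np.abs(B).mean())          # True: H is low-energy
```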
The L_i subband can now be computed as

\[ L_i(x, y) = \sqrt{2}\,A_i^{O}\bigl(2^i x,\, 2^i y\bigr) + \mathrm{ODWT}\bigl\{\mathrm{IDWT}\{H_i\}\bigr\}\bigl(2^i x - dx,\, 2^i y - dy\bigr) = \sqrt{2}\,A_i^{O}\bigl(2^i x,\, 2^i y\bigr) + H_i^{O}\bigl(2^i x - dx,\, 2^i y - dy\bigr). \tag{4} \]

The process is repeated in reverse order at the decoder side. The decoder receives L_i and H_i. First the overcomplete copy of H_i is computed through the IDWT–ODWT cascade. The reference frame is reconstructed as

\[ A_i(x, y) = \frac{1}{\sqrt{2}}\Bigl(L_i(x, y) - \mathrm{ODWT}\bigl\{\mathrm{IDWT}\{H_i\}\bigr\}\bigl(2^i x - dx,\, 2^i y - dy\bigr)\Bigr) = \frac{1}{\sqrt{2}}\Bigl(L_i(x, y) - H_i^{O}\bigl(2^i x - dx,\, 2^i y - dy\bigr)\Bigr). \tag{5} \]

At this point, the overcomplete version of the reference frame must be reconstructed via IDWT–ODWT in order to compute the current frame:

\[ B_i(x, y) = \sqrt{2}\,H_i(x, y) + \mathrm{ODWT}\bigl\{\mathrm{IDWT}\{A_i\}\bigr\}\bigl(2^i x + dx,\, 2^i y + dy\bigr) = \sqrt{2}\,H_i(x, y) + A_i^{O}\bigl(2^i x + dx,\, 2^i y + dy\bigr). \tag{6} \]

Figure 3 summarizes the overall process, illustrating the temporal analysis at the encoder and the synthesis at the decoder. Notice that the combined IDWT–ODWT operation takes place three times: once at the encoder and twice at the decoder. In the actual implementation, the IDWT–ODWT cascade can be combined in order to reduce the memory bandwidth and the computational complexity, according to the complete-to-overcomplete (CODWT) algorithm described in [20].

2.1. Fast in-band motion estimation

The wavelet in-band prediction mechanism (2D+t), as illustrated in [9], works by computing the residual error after block matching. For each wavelet block, the best-matching wavelet block is searched in the overcomplete wavelet-transformed reference frame, using a full search approach. The computational complexity can be expressed in terms of the number of required operations as

\[ T = 2\,W^2 N^2, \tag{7} \]

where W is the size of the search window and N is the block size. Indeed, for every candidate motion vector, at least N² subtractions and N² summations are needed to compute the MAD (mean absolute difference) of the residuals, and there are W² different motion vectors to be tested.

The proposed fast motion estimation algorithm is based on optical flow estimation techniques. The family of differential algorithms, including Lucas-Kanade [30] and Horn-Schunck [31], assumes that the intensity remains unchanged along the motion trajectories. This results in the brightness constraint in differential form:

\[ I_x v_x + I_y v_y + I_t = 0, \tag{8} \]

where

\[ I_x = \frac{\partial I(x, y, t)}{\partial x}, \qquad I_y = \frac{\partial I(x, y, t)}{\partial y}, \qquad I_t = \frac{\partial I(x, y, t)}{\partial t} \tag{9} \]

are the horizontal, vertical, and temporal gradients, respectively. Notice that when I_x ≫ I_y, that is, when the local texture is almost vertically oriented, only the dx component (dx = v_x dt) can be accurately estimated, because dividing (8) by I_x gives

\[ v_x + \frac{I_y}{I_x}\,v_y + \frac{I_t}{I_x} \simeq v_x + \frac{I_t}{I_x} = 0. \tag{10} \]

This is the so-called "aperture problem" [32]: when the observation window is too small, we can only estimate the optical flow component that is parallel to the local gradient. The aperture is indeed a problem for traditional motion estimation methods, but the proposed algorithm turns this fact to its advantage.

For the sake of clarity, let us consider a pair of images that exhibit a limited displacement between corresponding elements, and let us focus on the HL subband only (before subsampling). This subband is low-pass filtered along the vertical axis and high-pass filtered along the horizontal axis. The output of this separable filter resembles the spatial horizontal gradient I_x; in fact, the HL subbands tend to preserve only those details that are oriented along the vertical direction. This suggests that the family of HL subbands, all sharing the same orientation, can be used to accurately estimate the dx motion vector component. Similarly, LH subbands have details oriented along the horizontal axis, and are therefore suitable for computing the dy component.
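The orientation selectivity exploited here is easy to check numerically. In this sketch (our illustration; a one-level separable Haar split stands in for the 9/7 decomposition), purely vertical detail lands in the HL band and purely horizontal detail in the LH band:

```python
import numpy as np

def haar_subbands(img):
    """One-level separable Haar split; returns (LL, HL, LH, HH), where HL is
    high-pass along x and low-pass along y (vertically oriented detail)."""
    lo = lambda a, ax: (np.take(a, range(0, a.shape[ax], 2), ax)
                        + np.take(a, range(1, a.shape[ax], 2), ax)) / np.sqrt(2)
    hi = lambda a, ax: (np.take(a, range(0, a.shape[ax], 2), ax)
                        - np.take(a, range(1, a.shape[ax], 2), ax)) / np.sqrt(2)
    L, H = lo(img, 1), hi(img, 1)                  # filter along x first
    return lo(L, 0), lo(H, 0), hi(L, 0), hi(H, 0)  # then along y

vert = np.tile(np.random.randn(64), (64, 1))   # intensity varies along x only
horz = vert.T                                  # intensity varies along y only

_, HL_v, LH_v, _ = haar_subbands(vert)
_, HL_h, LH_h, _ = haar_subbands(horz)
print((HL_v**2).sum() > (LH_v**2).sum())   # True: vertical detail -> HL
print((LH_h**2).sum() > (HL_h**2).sum())   # True: horizontal detail -> LH
```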
For each wavelet block, a coarse full search is first applied to the LL_K subband only, where K is the number of DWT decomposition levels. This initial computation determines a good starting point (dx^FS, dy^FS) for the fast search algorithm (the superscript FS stands for full search), which reduces the risk of getting trapped in local minima. As the LL_K subband has 2^{2K} times fewer samples than the whole wavelet block, this block matching step is not computationally expensive. In fact, its computational complexity, expressed in terms of the number of additions and multiplications, is

\[ T = 2\left(\frac{W}{2^K}\right)^2\left(\frac{N}{2^K}\right)^2 = \frac{W^2 N^2}{2^{4K-1}}. \tag{11} \]

At this point we can focus on the HL subbands: a block matching process on these subbands computes the horizontal displacement, estimating the dx component for block k, whose top-left corner has coordinates (x_k, y_k). The search window is reduced to W/4, as we only need to refine the coarse estimate provided by the full search:

\[ dx_k = \arg\min_{dx_j}\;\Biggl[\sum_{i=1}^{K} \mathrm{MAD}_{HL_i}\bigl(x_k, y_k,\, dx^{FS}+dx_j,\, dy^{FS}\bigr) + \mathrm{MAD}_{LL_K}\bigl(x_k, y_k,\, dx^{FS}+dx_j,\, dy^{FS}\bigr)\Biggr], \tag{12} \]

where MAD_{HL_i}(x_k, y_k, dx, dy) and MAD_{LL_K}(x_k, y_k, dx, dy) are the MADs obtained by compensating block (x_k, y_k) in the subbands HL_i and LL_K, respectively, with the motion vector (dx, dy):

\[ \mathrm{MAD}_{HL_i}\bigl(x_k, y_k, dx, dy\bigr) = \sum_{x=x_{k,i}}^{x_{k,i}+N/2^i}\;\sum_{y=y_{k,i}}^{y_{k,i}+N/2^i} \Bigl| HL_i^{\mathrm{cur}}(x, y) - HL_i^{\mathrm{ref},O}\bigl(2^i x + dx,\, 2^i y + dy\bigr) \Bigr|, \]
\[ \mathrm{MAD}_{LL_K}\bigl(x_k, y_k, dx, dy\bigr) = \sum_{x=x_{k,K}}^{x_{k,K}+N/2^K}\;\sum_{y=y_{k,K}}^{y_{k,K}+N/2^K} \Bigl| LL_K^{\mathrm{cur}}(x, y) - LL_K^{\mathrm{ref},O}\bigl(2^K x + dx,\, 2^K y + dy\bigr) \Bigr|, \tag{13} \]

where (x_{k,i}, y_{k,i}) = (⌊x_k/2^i⌋, ⌊y_k/2^i⌋) are the top-left corner coordinates of block k in the subband at level i. Because of the shift-varying behavior of the wavelet transform, block matching is performed on the overcomplete DWT of the reference frame (HL_i^{ref,O}(·) and LL_K^{ref,O}(·)).

Similarly, we can work on the LH subbands to estimate the dy component. In order to improve the accuracy of the estimate, this second stage takes (x_k + dx^FS + dx_k, y_k + dy^FS) as a starting point:

\[ dy_k = \arg\min_{dy_j}\;\Biggl[\sum_{i=1}^{K} \mathrm{MAD}_{LH_i}\bigl(x_k, y_k,\, dx^{FS}+dx_k,\, dy^{FS}+dy_j\bigr) + \mathrm{MAD}_{LL_K}\bigl(x_k, y_k,\, dx^{FS}+dx_k,\, dy^{FS}+dy_j\bigr)\Biggr], \tag{14} \]

where MAD_{LH_i}(x_k, y_k, dx, dy) is defined analogously to MAD_{HL_i}(x_k, y_k, dx, dy).

We refer to this motion estimation algorithm as fast in-band motion estimation (FIBME). With a modest computational effort, it reaches a solution that compares favorably with a full search approach. Its computational complexity is

\[ T \simeq 2 \cdot \frac{2}{3} \cdot \frac{W}{4}\,N^2 + \frac{W^2 N^2}{2^{4K-1}} = \frac{1}{3}\,W N^2 + \frac{W^2 N^2}{2^{4K-1}}, \tag{15} \]

since W/4 candidates are evaluated for the horizontal component and another W/4 for the vertical one, and each comparison involves either the HL or the LH subbands, whose size is approximately one third of the whole wavelet block (neglecting the LL_K subband).
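To fix ideas, here is a compact sketch of the search loop. It is our illustration, not the authors' implementation: `cur` holds the critically sampled subbands of the current frame and `ref_o` the full-size overcomplete subbands of the reference, both as dictionaries of NumPy arrays keyed "LL", "HL1", ..., "LH1", ...; integer-pel accuracy is assumed and the HH subbands are ignored for brevity.

```python
import numpy as np

def mad_band(cur_band, ref_band_o, x0, y0, n, scale, dx, dy):
    """Eq. (13): MAD between an n-by-n block of a critically sampled subband
    of the current frame (top-left (x0, y0)) and the overcomplete reference
    subband, displaced by the full-resolution vector (dx, dy)."""
    total, cnt = 0.0, 0
    for y in range(y0, y0 + n):
        for x in range(x0, x0 + n):
            ry, rx = scale * y + dy, scale * x + dx
            if 0 <= ry < ref_band_o.shape[0] and 0 <= rx < ref_band_o.shape[1]:
                total += abs(cur_band[y, x] - ref_band_o[ry, rx])
                cnt += 1
    return total / max(cnt, 1)

def family_cost(cur, ref_o, fam, xk, yk, N, K, dx, dy):
    """Cost of (12)/(14): MADs over one oriented family plus the LL_K term."""
    c = mad_band(cur["LL"], ref_o["LL"], xk >> K, yk >> K, N >> K, 1 << K, dx, dy)
    for i in range(1, K + 1):
        band = "%s%d" % (fam, i)
        c += mad_band(cur[band], ref_o[band], xk >> i, yk >> i, N >> i, 1 << i, dx, dy)
    return c

def fibme(cur, ref_o, xk, yk, N=16, K=2, W=16):
    """FIBME for the wavelet block with top-left corner (xk, yk)."""
    # 1) Coarse full search on LL_K alone (2^(2K) times fewer samples).
    grid = [(dx, dy) for dx in range(-W // 2, W // 2) for dy in range(-W // 2, W // 2)]
    dxf, dyf = min(grid, key=lambda d: mad_band(cur["LL"], ref_o["LL"], xk >> K,
                                                yk >> K, N >> K, 1 << K, *d))
    # 2) Refine dx with a horizontal scan over the HL family (window W/4), eq. (12).
    dxk = min(range(-W // 8, W // 8 + 1),
              key=lambda j: family_cost(cur, ref_o, "HL", xk, yk, N, K, dxf + j, dyf))
    # 3) Refine dy with a vertical scan over the LH family, eq. (14).
    dyk = min(range(-W // 8, W // 8 + 1),
              key=lambda j: family_cost(cur, ref_o, "LH", xk, yk, N, K,
                                        dxf + dxk, dyf + j))
    return dxf + dxk, dyf + dyk
```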
If we keep the block size N fixed, the proposed algorithm runs in linear time with respect to the search window size, while the complexity of a full search grows with its square. The speedup factor with respect to the full search, in terms of number of operations, is

\[ \mathrm{speedup} = \frac{2 W^2 N^2}{\tfrac{1}{3} W N^2 + W^2 N^2 / 2^{4K-1}} \simeq 6W. \tag{16} \]

It is worth pointing out that this speedup factor refers to the motion estimation task, which is only part of the overall computational burden at the encoder. In Section 3, we give more precise indications, based on experimental evidence, about the actual encoding time speedup, including wavelet transforms, motion compensation, and entropy coding. At a fraction of the cost of the full search, the proposed algorithm achieves a solution that is suboptimal; nevertheless, Section 3 shows through extensive experimental results on different test sequences that the coding efficiency loss is limited to approximately 0.5 dB on sequences with large motion.
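As a quick sanity check of (15) and (16), with parameter values chosen by us only for illustration, the operation counts can be tabulated directly:

```python
# Operation counts from (7) and (15); W, N, K are illustrative values.
def full_search_ops(W, N):
    return 2 * W**2 * N**2                               # eq. (7)

def fibme_ops(W, N, K):
    return W * N**2 / 3 + W**2 * N**2 / 2**(4 * K - 1)   # eq. (15)

for W in (16, 32, 64):
    N, K = 16, 3
    s = full_search_ops(W, N) / fibme_ops(W, N, K)
    print(f"W={W:3d}: speedup = {s:6.1f}  (6W = {6 * W})")
# The 6W approximation of (16) holds when the coarse LL_K full-search term
# W^2 N^2 / 2^(4K-1) is negligible, i.e., when 3W << 2^(4K-1).
```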
We have investigated the accuracy of our search algorithm in the case of large displacements. Without the initial full search on the LL_K subband, our approach tends to give a bad estimate of the horizontal component when the vertical displacement is too large: as the search window scrolls horizontally, it cannot match the displaced reference block. We observed that the maximum allowed vertical displacement is approximately as large as the impulse response of the low-pass filter used by the critically sampled DWT, since that filter operates along the vertical direction, stretching the details proportionally to its impulse response extension.

The same conclusions can be drawn from Figure 4. A wavelet block from the DWT-transformed current frame is taken as the current block, while the ODWT-transformed reference frame is taken as the reference. For all possible displacements (dx, dy), the MAD of the prediction residuals is computed by compensating only the HL subband family, that is, the one that we argue is suitable for estimating the horizontal displacement. In Figure 4(a), the global minimum of this function is equal to zero and is located at (0, 0); around the global minimum there is a region, elongated in the vertical direction, characterized by low MAD values. Let us now consider a sequence of two images, one obtained from the other through a translation (dx', dy') = (10, 5) (see Figure 4(b)). Considering a wavelet block on the current image, Figure 4 shows the MAD value for all possible displacements. A full search algorithm would identify the global minimum M. Our algorithm starts from point A(0, 0) and proceeds horizontally both ways to search for the minimum (B). If dy' is not too large, the horizontal search finds its optimum in the elongated valley centered on the global minimum, so the horizontal component is estimated quite accurately. The vertical component can then be estimated without problems using the LH subband family. In conclusion, coarsely initializing the algorithm with a full search provides better results in the case of large dy' displacements without significantly affecting the computational complexity.

[Figure 4: Error surface as a function of the candidate motion vector (dx, dy): (a) global minimum in (0, 0); (b) global minimum in (15, 5).]

2.2. Variable size block matching

As described so far, the FIBME fast search algorithm works with wavelet blocks of fixed size. We propose a simple extension that adopts blocks of variable size by generalizing the HVSBM (hierarchical variable size block matching) algorithm [23] to the wavelet domain. Let us consider a three-level wavelet decomposition and a wavelet block of size 16 × 16 (refer to Figure 5(a)). In the fixed size implementation, only one motion vector is assigned to each wavelet block. If we focus on the lowest frequency subband, the wavelet block covers a 2 × 2 pixel area. Splitting this area into four and taking the descendants of each element, we generate four 8 × 8 wavelet blocks, which are the offspring of the 16 × 16 parent block (see Figure 5(b)). Block matching is performed on those smaller wavelet blocks to estimate four distinct motion vectors. In that figure, all the elements with the same color are assigned the same motion vector.

[Figure 5: Motion field assigned to a 16 × 16 wavelet block: (a) one vector per wavelet block; (b) four vectors per wavelet block; (c) further splitting of the wavelet subblock.]

As in HVSBM, we build a quadtree-like structure where each node stores the motion vector, the rate R needed to encode it, and the distortion D (MAD). A pruning algorithm is then used to select the optimal splitting configuration for a given bitrate budget [23]; a sketch of such a pruning pass is given below.
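The pruning step of [23] is rate-distortion driven; the following is a minimal Lagrangian variant (our sketch, not the reference implementation): a node's children are kept only if their aggregate cost D + λR beats the parent's.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    mv: tuple            # motion vector (dx, dy) estimated for this block
    rate: float          # bits to encode mv (and the split flag)
    dist: float          # MAD of the block compensated with mv
    children: List["Node"] = field(default_factory=list)

def prune(node: Node, lam: float) -> float:
    """Bottom-up Lagrangian pruning: returns the best cost D + lam * R of the
    subtree rooted at node, collapsing splits that do not pay for themselves."""
    leaf_cost = node.dist + lam * node.rate
    if not node.children:
        return leaf_cost
    split_cost = sum(prune(c, lam) for c in node.children)
    if split_cost < leaf_cost:
        return split_cost
    node.children = []          # keep the parent block as a single leaf
    return leaf_cost
```

Sweeping the multiplier trades motion rate against prediction distortion; the bitrate budget of [23] then corresponds to picking the value whose pruned tree meets the budget.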
The number B of different block sizes, relative to the wavelet block size N and the wavelet decomposition level K, is

\[ B = \log_2\frac{N}{2^K} + 1, \tag{17} \]

corresponding to the block sizes

\[ \frac{N}{2^i} \times \frac{N}{2^i}, \qquad i = 0, \dots, B-1. \tag{18} \]

If we do not want to be forced to work with fixed block sizes, we need at least two different block sizes. With B ≥ 2, we have

\[ \log_2\frac{N}{2^K} \ge 1 \;\Longrightarrow\; \frac{N}{2^K} \ge 2 \;\Longrightarrow\; N \ge 2^{K+1}. \tag{19} \]

Having fixed K, this sets a lower bound on the size of the smallest wavelet block. By setting N = 16 and K = 3, three different block sizes are allowed: 16 × 16, 8 × 8, and 4 × 4. We can take this approach one step further in order to overcome the lower bound. If N = 2^K, or if we have already split the wavelet block so that only one pixel is left in the LL_K subband, a further split can be performed according to the following scheme. In order to provide a motion field of finer granularity, we can assign a new motion vector to each of the subbands LH_K, HL_K, HH_K, plus one to the refined version of LL_K alone. This produces four children motion vectors, as shown in Figure 5(c). In this case, the motion vector shown in subband HL_3 is the same one used to compensate all the coefficients in subbands HL_3, HL_2, and HL_1. The same figure shows a further splitting step performed on the wavelet block of the LH_3 subband: the splitting can be iterated at lower scales, by assigning one motion vector to each one-pixel subblock at level K − 1 (in subband LH_2 in this example). Figure 5(c) shows that the wavelet block rooted at the blue pixel (in the top-left position) of subband LH_3 is split into four subblocks in the LH_2 subband. These refinement steps allow us to compensate elements of different subbands that correspond to the same spatial location with different motion vectors. We emphasize that this last splitting step is what distinguishes the proposed algorithm from spatial-domain variable size block matching: in our case it is possible to compensate the same spatial region with separate motion vectors, according to the local texture orientation. Following this simple procedure, we can generate subblocks of arbitrary size in the wavelet domain.

2.3. Scalable coding of motion vectors

In both t+2D and 2D+t wavelet-based video codecs, SNR scalability is achieved by truncating the embedded representation of the wavelet coefficients. In this way, only the texture information is scaled, while the motion information is losslessly encoded, occupying a fixed amount of the bit budget that is decided at encoding time, unaware of the decoding bitrate. This has two major drawbacks. First, the video sequence cannot be encoded at a target bitrate lower than the one necessary to losslessly encode the motion vectors. Second, no optimal tradeoff between the motion and residual bit budgets can be computed.

Recently, it has been demonstrated [26] that, in the case of open-loop wavelet-based video coders, it is possible to use during decoding a quantized version of the motion field together with the residual coefficients computed at the encoder with the lossless version of the motion. A scalable representation of the motion is achieved in [26] by coding the motion field as a two-component image using a JPEG2000 scheme. This is possible as long as the motion vectors are disposed on a regular lattice, as is the case for fixed size block matching or deformable meshes with equally spaced control points.

In this section, we introduce an algorithm that builds a scalable representation of the motion vectors and is specifically designed to work with the blocks of variable size produced by the motion estimation algorithm presented in Sections 2.1 and 2.2. Block sizes range from N_max × N_max to N_min × N_min, and they tend to be smaller in regions characterized by complex motion. Neighboring blocks usually exhibit a high degree of similarity, so a coding algorithm able to reduce their spatial redundancy is needed. In the standard implementation of HVSBM [23], a simple nearest-neighbor predictor is used for this purpose. Although it achieves good lossless coding efficiency, it does not provide a scalable representation. The proposed algorithm aims at the same lossless performance while also allowing a scalable representation of the motion information.

In order to tackle spatial redundancy, a multi-resolution pyramid of the motion field is built in a bottom-up fashion; a sketch of this construction follows below. As shown in Figure 6, variable size block matching generates a quadtree-like representation of the motion model. At the beginning, only the leaf nodes carry a value, namely the two components of the motion vector. For each component, we compute the value of an internal node as the simple average of its four offspring; each offspring is then coded as the difference between its own value and that of its parent. These steps are iterated up the motion vector tree. The root node thus contains an average of the motion vectors over the whole image. Depending on the size of the image and on N_min, the root node might have fewer than four offspring.
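A minimal sketch of this bottom-up averaging and differencing follows (ours; for simplicity it assumes a complete quadtree over a regular 2^d × 2^d grid of leaves, whereas the actual tree follows the variable block sizes):

```python
import numpy as np

def build_pyramid(leaves):
    """Bottom-up pyramid of one MV component: each parent is the average of
    its 2x2 children; children are stored as differences from the parent.
    leaves: 2^d x 2^d array of leaf motion vector components."""
    levels = [leaves.astype(float)]
    while levels[-1].shape[0] > 1:
        c = levels[-1]
        parent = (c[0::2, 0::2] + c[0::2, 1::2] +
                  c[1::2, 0::2] + c[1::2, 1::2]) / 4.0
        levels.append(parent)
    # Replace every non-root level by differences from the (upsampled) parent.
    diffs = [levels[-1]]                      # root keeps its average value
    for c, p in zip(reversed(levels[:-1]), levels[:0:-1]):
        diffs.append(c - np.kron(p, np.ones((2, 2))))
    return diffs                              # root first, leaves last

def reconstruct(diffs):
    """Invert build_pyramid: accumulate differences down the tree."""
    cur = diffs[0]
    for d in diffs[1:]:
        cur = np.kron(cur, np.ones((2, 2))) + d
    return cur

mv_x = np.random.randint(-8, 8, (8, 8))      # toy 8x8 field of dx components
assert np.allclose(reconstruct(build_pyramid(mv_x)), mv_x)
```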
Figure 6 illustrates a toy example that clarifies this multi-resolution representation. The motion vector components are the numbers indicated just below each leaf node; the averages computed at intermediate nodes are shown in grey, while the values to be encoded are written in bold typeface. The same figure also shows the labeling convention we use: each node is identified by a pair (i, d), where d represents the depth in the tree and i is the index, starting from zero, of the nodes at a given depth.

[Figure 6: Quadtree-like representation of the motion model generated by the variable size block matching algorithm, showing for each node the motion vector component mv_x, the average of the children motion vectors mv_avg, the encoded difference Δmv_x, and the node coordinates (i, d).]

Since the motion field usually exhibits a certain amount of spatial redundancy, the leaf nodes are likely to have small absolute values. In other words, walking down from the root to the leaves, we can expect the same sort of energy decay that is typical of wavelet coefficients across subbands following parent-children relationships. This suggests that the ideas underpinning wavelet-based image coders can be exploited here as well: if an intermediate node is insignificant with respect to a given threshold, then its descendants are likely to be insignificant too. This is why the proposed algorithm inherits some of the basic concepts of SPIHT [2] (set partitioning in hierarchical trees).

Before detailing the steps of the algorithm, it is important to point out that, in the quadtree representation built so far, the node values should be multiplied by a weighting factor that depends on their depth in the tree. Consider one node and its four offspring. If we wish to achieve a lossy representation of the motion field, these nodes will be quantized. An error in the parent node affects all of its offspring, while the same error has fewer consequences when it involves one of the children. If we use the mean squared error as distortion measure, the parent node needs to be multiplied by a factor of 2, so that errors are weighted equally and the same quantization step sizes can be used regardless of the node depth.

The proposed algorithm encodes the nodes of the quadtree from top to bottom, starting from the most significant bitplane. As in SPIHT, the algorithm is divided into a sorting pass, which identifies the nodes that are significant with respect to a given threshold, and a refinement pass, which refines the nodes already found significant in the previous passes. Four lists are maintained, both at the encoder and at the decoder, to keep track of the status of each node. The LIV (list of insignificant vectors) contains the nodes that have not been found significant yet, and the LIS (list of insignificant sets) contains the nodes whose descendants are all insignificant. Conversely, the LSV (list of significant vectors) and the LSS (list of significant sets) contain the nodes found significant, or whose descendants are significant, respectively. A node can be moved from the LIV to the LSV and from the LIS to the LSS, but not vice versa. Only the nodes in the LSV are refined during the refinement pass.
The following notation is used:

(i) P(i, d): coordinates of the parent of node i at depth d;
(ii) O(i, d): set of coordinates of all offspring of node i at depth d;
(iii) D(i, d): set of coordinates of all descendants of node i at depth d;
(iv) H(0, 0): coordinates of the quadtree root node.

The algorithm is described in detail by the pseudocode listed in Algorithm 1. Note that d̃ keeps track of the depth of the current node: instead of scaling all the intermediate nodes by a factor of 2 with respect to their offspring, the significance test at depth d̃ is carried out at bitplane n + d̃, that is, S_{n+d̃}(i_d̃, d̃). As in SPIHT, encoding and decoding use the same algorithm, with the word "output" replaced by "input" at the decoder side. The symbols emitted by the encoder are arithmetic coded.

(1) Initialization:
    (1.1) output msb = n = ⌊log2 max_{(i,d)} |c_{i,d}|⌋
    (1.2) output max_depth = max(d)
    (1.3) set the LSS and the LSV as empty lists; add H(0, 0) to the LIV and to the LIS.
(2) Sorting pass:
    (2.1) set d̃ = 0; set i_d = 0 (d = 0, 1, ..., max_depth)
    (2.2) if 0 ≤ n + d̃ ≤ msb do:
        (2.2.1) if entry (i_d̃, d̃) is in the LIV do:
            (i) output S_{n+d̃}(i_d̃, d̃)
            (ii) if S_{n+d̃}(i_d̃, d̃) = 1, then move (i_d̃, d̃) to the LSV and output the sign of c_{i_d̃, d̃}
    (2.3) if entry (i_d̃, d̃) is in the LIS do:
        (2.3.1) if n + d̃ < msb do:
            (i) S_D = 0
            (ii) for h = d̃ + 1 to max_depth do:
                 for each (j, h) ∈ D(i_d̃, d̃), if S_{n+h}(j, h) = 1 then S_D = 1
            (iii) output S_D
            (iv) if S_D = 1, then move (i_d̃, d̃) to the LSS, add each (k, l) ∈ O(i_d̃, d̃) to the LIV and to the LIS, increment d̃ by 1, and go to Step (2.2)
    (2.4) if entry (i_d̃, d̃) is in the LSS, then increment d̃ by 1 and go to Step (2.2).
(3) Refinement pass:
    (3.1) if 0 ≤ n + d̃ ≤ msb do:
        (i) if entry (i_d̃, d̃) is in the LSV and was not included during the last sorting pass, then output the nth most significant bit of |c_{i_d̃, d̃}|
    (3.2) if d̃ ≥ 1 do:
        (i) increment i_d̃ by 1
        (ii) if (i_d̃, d̃) ∈ O(P(i_d̃ − 1, d̃)), then go to Step (2.2); otherwise decrement d̃ by 1 and go to Step (3).
(4) Quantization step update: decrement n by 1 and go to Step (2).

Algorithm 1: Pseudocode of the proposed scalable motion vector encoding algorithm.

The bitstream produced by the proposed algorithm is fully embedded, so that it can be truncated at any point to obtain a quantized representation of the motion field. In [26], it is proved that for small displacement errors there is a linear relation between the MSE (mean square error) of the quantized motion field parameters, MSE_W, and the MSE of the prediction residue, MSE_r:

\[ \mathrm{MSE}_r = k\,\frac{\psi_x + \psi_y}{2}\,\mathrm{MSE}_W, \tag{20} \]

where the motion sensitivity factors are defined as

\[ \psi_x = \frac{1}{(2\pi)^2}\int S_f(\omega)\,\omega_x^2\,d\omega, \qquad \psi_y = \frac{1}{(2\pi)^2}\int S_f(\omega)\,\omega_y^2\,d\omega, \tag{21} \]

and S_f(ω) is the power spectrum of the current frame f(x, y). Using this result, it is possible to estimate a priori the optimal bit allocation between motion information and residual coefficients [26]. Informally speaking, at low bitrates the motion field can be heavily quantized in order to reduce its bit budget and save bits for the residual information; at high bitrates the motion field is usually sent losslessly, as it occupies a small fraction of the overall target bitrate.
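The sensitivity factors in (21) are easy to estimate numerically. The sketch below is our illustration (the discrete frequency grid is normalized to [−π, π), and constants are handled only up to the scale factor k of (20)); it approximates ψ_x and ψ_y from a frame's periodogram.

```python
import numpy as np

def motion_sensitivity(frame):
    """Approximate the psi_x, psi_y of (21) from the periodogram of a frame.
    Frequencies are mapped to [-pi, pi); the absolute scale is immaterial
    here, since (20) already carries an unknown constant k."""
    f = frame - frame.mean()
    S = np.abs(np.fft.fft2(f)) ** 2 / f.size          # periodogram estimate of S_f
    wy = 2 * np.pi * np.fft.fftfreq(f.shape[0])       # omega_y per row
    wx = 2 * np.pi * np.fft.fftfreq(f.shape[1])       # omega_x per column
    WX, WY = np.meshgrid(wx, wy)
    norm = (2 * np.pi) ** 2 * f.size
    return (S * WX**2).sum() / norm, (S * WY**2).sum() / norm

# A frame with mostly vertical detail is more sensitive to dx errors:
x = np.arange(128)
frame = np.sin(0.8 * x)[None, :] * np.ones((128, 1))  # vertical stripes
psi_x, psi_y = motion_sensitivity(frame)
print(psi_x > psi_y)   # True
```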
2.4. Motion vectors and spatial scalability

A spatially scalable video codec is able to deliver a sequence at a resolution lower than the original one, in order to fit the display capabilities of the receiving device. Wavelet-based video coders address spatial scalability in a straightforward way. At the end of the spatio-temporal analysis, each frame of a GOP of size T represents a temporal subband, further decomposed into spatial subbands up to level K. Each GOP thus consists of the subbands LL^t_i, LH^t_i, HL^t_i, HH^t_i, with spatial subband index i = 1, ..., K and temporal subband index t = 1, ..., T. Assume we want to decode the sequence at a resolution 2^{k−1} times lower than the original one: we only need to send the subbands with i = k, ..., K. At the decoder side, the spatial decomposition and the motion-compensated temporal filtering are inverted in the synthesis phase. It is a decoder task to adapt the full-resolution motion field to the resolution of the received subbands. In this section we analytically compare the following two approaches:

(a) the original motion vectors are truncated and rounded in order to match the resolution of the decoded sequence;
(b) the original motion vectors are retained, and a full-resolution sequence is interpolated starting from the received subbands.

The former implementation tends to be computationally simpler, but it is not as efficient as the latter in terms of coding quality, as demonstrated in the following. Furthermore, truncation is the technique adopted in the MC-EZBC [5] reference software, used as a benchmark in Section 3.

Let us concentrate on a one-dimensional discrete signal x(n) and its translated version by an integer displacement d, that is, y(n) = x(n − d). Their 2D counterparts are the current and the reference frame, respectively; we are thus neglecting motion compensation errors due to complex motion, reflections, and illumination changes. Temporal analysis is carried out with the lifting implementation of the Haar transform along the motion trajectory d:

\[ H(n) = \frac{1}{\sqrt{2}}\bigl(y(n) - x(n-d)\bigr) = 0, \qquad L(n) = \sqrt{2}\,x(n) + H(n+d) = \sqrt{2}\,x(n). \tag{22} \]
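For a pure integer translation, the prediction residual vanishes and the temporal low-pass reduces to a scaled copy of x, which a few lines confirm (our check; the wrap-around introduced by np.roll stands in for proper boundary handling):

```python
import numpy as np

x = np.random.randn(256)
d = 4
y = np.roll(x, d)                       # y(n) = x(n - d), circular for simplicity
H = (y - np.roll(x, d)) / np.sqrt(2)    # prediction along the trajectory: H = 0
L = np.sqrt(2) * x + np.roll(H, -d)     # update step: L = sqrt(2) * x
assert np.allclose(H, 0) and np.allclose(L, np.sqrt(2) * x)
```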
[...]

3. EXPERIMENTAL RESULTS

[...]

[Figures 10 and 12 (rate-distortion curves, average Y-PSNR versus rate in kbps): Figure 10 compares t+2D (MC-EZBC), 2D+t with full search, and 2D+t with FIBME; Figure 12, "Nonscalable versus scalable motion vectors," compares t+2D (MC-EZBC), 2D+t with nonscalable MVs, and 2D+t with scalable MVs; additional panels compare 2D+t with and without MV truncation. Test sequences: Foreman (QCIF@15/30 fps, CIF@30 fps), Football (CIF@30 fps), Soccer and City (4CIF@60 fps).]

4. CONCLUSIONS

In this paper, we present a fully scalable 2D+t video codec, introducing novel algorithms in the motion estimation and signaling part. The proposed fast motion estimation algorithm achieves good coding efficiency at a fraction of the computational complexity of a full search approach. The scalable representation of the motion information improves the objective and subjective quality of sequences [...] motion modeling in wavelet-based 2D+t video coding schemes.

REFERENCES

[1] D. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards and Practice, Kluwer Academic, Boston, Mass, USA, 2002.
[2] A. Said and W. A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 243–250, 1996.
[4] J. Xu, Z. Xiong, S. Li, and Y.-Q. Zhang, "Three-dimensional embedded subband coding with optimized truncation (3D ESCOT)," Applied and Computational Harmonic Analysis, vol. 10, no. 3, pp. 290–315, 2001.
[5] P. Chen and J. W. Woods, "Bidirectional MC-EZBC with lifting implementation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 10, pp. 1183–1194, 2004.
[6] A. Secker and D. Taubman, "Lifting-based invertible motion adaptive transform (LIMAT) framework for highly scalable video compression," IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1530–1542, 2003.
[7] G. Pau, C. Tillier, B. Pesquet-Popescu, and H. Heijmans, "Motion compensation and scalability in lifting-based video coding," Signal Processing: Image Communication, vol. 19, no. 7, pp. 577–600, 2004.
[8] J.-R. Ohm, "Three-dimensional subband coding with motion compensation," IEEE Transactions on Image Processing, vol. 3, no. 5, pp. 559–571, 1994.
[11] H.-W. Park and H.-S. Kim, "Motion estimation using low-band-shift method for wavelet-based moving-picture coding," IEEE Transactions on Image Processing, vol. 9, no. 4, pp. 577–587, 2000.
[12] J. C. Ye and M. van der Schaar, "Fully scalable 3D overcomplete wavelet video coding using adaptive motion compensated temporal filtering," in Visual Communications and Image Processing (VCIP '03), T. Ebrahimi and T. Sikora, Eds., Lugano, Switzerland, July 2003.

[...] (1991) as an Associate Professor, and from December 2004 as a Full Professor. In the first years of activity, he worked on problems related to speech analysis, motion estimation/compensation for video analysis and coding, and vector quantization applied to hybrid video coding. In the past few years, his research interests have focused on image and video analysis for the geometric and radiometric modeling of 3D [...]

Ngày đăng: 22/06/2014, 23:20

Từ khóa liên quan

Mục lục

  • Introduction

    • Motivations and goals

    • Related works

    • Proposed 2D+t codec

      • Fast in-band motion estimation

      • Variable size block matching

      • Scalable coding of motion vectors

      • Motion vectors and spatial scalability

      • Experimental results

      • Conclusions

      • REFERENCES

Tài liệu cùng người dùng

Tài liệu liên quan