Modulation and Coding Course - Lecture 11





Modulation, Demodulation and Coding Course
Period - 2007
Catharina Logothetis
Lecture 11

Last time, we talked about:
- Another class of linear codes, known as convolutional codes
- The structure of the encoder and different ways of representing it

Today, we are going to talk about:
- The state diagram and trellis representation of the code
- How decoding is performed for convolutional codes
- What a maximum likelihood decoder is
- What soft decisions and hard decisions are
- How the Viterbi algorithm works

Block diagram of the DCS

Information source -> Rate 1/n convolutional encoder -> Modulator -> Channel -> Demodulator -> Rate 1/n convolutional decoder -> Information sink

- Input sequence: m = (m_1, m_2, ..., m_i, ...)
- Codeword sequence: U = G(m) = (U_1, U_2, U_3, ..., U_i, ...), where each branch word U_i = (u_{1i}, ..., u_{ji}, ..., u_{ni}) carries n coded bits
- Received sequence: Z = (Z_1, Z_2, Z_3, ..., Z_i, ...), where Z_i = (z_{1i}, ..., z_{ji}, ..., z_{ni}) are the n demodulator outputs for branch word i
- Decoded sequence: m̂ = (m̂_1, m̂_2, ..., m̂_i, ...)

State diagram

- A finite-state machine only encounters a finite number of states.
- State of a machine: the smallest amount of information that, together with the current input to the machine, can predict the output of the machine.
- In a convolutional encoder, the state is represented by the content of the memory. Hence, there are 2^(K-1) states.

State diagram - cont'd

- A state diagram is a way to represent the encoder.
- A state diagram contains all the states and all possible transitions between them.
- Only two transitions initiate from each state.
- Only two transitions end up in each state.

For the example rate-1/2 code, with states S0 = 00, S1 = 01, S2 = 10, S3 = 11 and branches labeled input/output (branch word):

Current state | Input | Next state | Output
S0 = 00       |   0   |    S0      |   00
S0 = 00       |   1   |    S2      |   11
S1 = 01       |   0   |    S0      |   11
S1 = 01       |   1   |    S2      |   00
S2 = 10       |   0   |    S1      |   10
S2 = 10       |   1   |    S3      |   01
S3 = 11       |   0   |    S1      |   01
S3 = 11       |   1   |    S3      |   10

Trellis

- The trellis diagram is an extension of the state diagram that shows the passage of time.
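The state-transition behavior of the example encoder can be reproduced mechanically. A minimal Python sketch, assuming a rate-1/2, K = 3 encoder with generator polynomials (111, 101), which is consistent with the transitions and outputs shown in these slides:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2, K=3 convolutional encoder; state = the two previous input bits."""
    state = 0
    out = []
    for b in bits:
        reg = (b << 2) | state              # [current bit, m_(i-1), m_(i-2)]
        u1 = bin(reg & g1).count("1") & 1   # first coded bit of the branch word
        u2 = bin(reg & g2).count("1") & 1   # second coded bit
        out.append((u1, u2))
        state = reg >> 1                    # shift register: next state
    return out

# Rebuild the state-transition table: for each state and input bit,
# print the next state and the output branch word.
for s in range(4):
    for b in (0, 1):
        reg = (b << 2) | s
        u1 = bin(reg & 0b111).count("1") & 1
        u2 = bin(reg & 0b101).count("1") & 1
        print(f"S{s}={s:02b}  input {b} -> S{reg >> 1}, output {u1}{u2}")
```

Encoding the example message with its tail bits, `conv_encode([1, 0, 1, 0, 0])`, yields the branch words 11 10 00 10 11, matching the codeword U used later in the decoding example.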
Example of a section of trellis for the rate-1/2 code, between times t_i and t_{i+1}: from S0 = 00, branches 0/00 to S0 and 1/11 to S2; from S1 = 01, branches 0/11 to S0 and 1/00 to S2; from S2 = 10, branches 0/10 to S1 and 1/01 to S3; from S3 = 11, branches 0/01 to S1 and 1/10 to S3.

Trellis - cont'd

A trellis diagram for the example code, for the input bits 1 0 1 followed by the tail bits 0 0, producing the output bits 11 10 00 10 11. (Figures: five trellis sections from t_1 to t_6 with the branch labels above; the second figure highlights the branches actually followed by this input sequence.)

AWGN channels

For BPSK modulation, the transmitted sequence corresponding to the codeword U^(m) is denoted by S^(m) = (S_1^(m), S_2^(m), ..., S_i^(m), ...), where S_i^(m) = (s_{1i}^(m), ..., s_{ji}^(m), ..., s_{ni}^(m)) and s_{ji} = ±√E_c.

The log-likelihood function becomes

  γ_{U^(m)} = Σ_{i=1}^{∞} Σ_{j=1}^{n} z_{ji} s_{ji}^(m) = <Z, S^(m)>

the inner product, or correlation, between Z and S^(m). Maximizing the correlation is equivalent to minimizing the Euclidean distance.

ML decoding rule: choose the path with minimum Euclidean distance to the received sequence.

Soft and hard decisions

In hard decision:
- The demodulator makes a firm, or hard, decision on whether a one or a zero was transmitted, and provides no other information to the decoder, such as how reliable the decision is.
- Hence, its output is only zero or one (the output is quantized to only two levels); these are called "hard bits".
- Decoding based on hard bits is called "hard-decision decoding".
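The equivalence between maximizing correlation and minimizing Euclidean distance can be checked numerically. A small sketch (the received vector z below is made up for illustration), using the fact that all BPSK candidate sequences with s_ji = ±1 have the same energy:

```python
from itertools import product

def correlation(z, s):
    return sum(zi * si for zi, si in zip(z, s))

def sq_distance(z, s):
    return sum((zi - si) ** 2 for zi, si in zip(z, s))

# For s_ji = ±1 (E_c = 1), ||z - s||^2 = ||z||^2 + n - 2<z, s>, so the
# max-correlation candidate is exactly the min-distance candidate.
z = [0.9, -0.4, 1.2, -0.1]                  # hypothetical soft demodulator outputs
candidates = list(product([-1, 1], repeat=4))
best_corr = max(candidates, key=lambda s: correlation(z, s))
best_dist = min(candidates, key=lambda s: sq_distance(z, s))
print(best_corr == best_dist)               # the two ML rules agree
```

Here the candidates are unconstrained ±1 sequences; in decoding, the same argument applies with the candidates restricted to valid trellis paths.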
Soft and hard decisions - cont'd

In soft decision:
- The demodulator provides the decoder with some side information together with the decision.
- The side information gives the decoder a measure of confidence in the decision.
- The demodulator outputs, called "soft bits", are quantized to more than two levels.
- Decoding based on soft bits is called "soft-decision decoding".
- On AWGN channels a gain of about 2 dB, and on fading channels a larger gain, is obtained by using soft-decision rather than hard-decision decoding.

The Viterbi algorithm

- The Viterbi algorithm performs maximum likelihood decoding.
- It finds a path through the trellis with the best metric (maximum correlation or minimum distance).
- It processes the demodulator outputs in an iterative manner.
- At each step in the trellis, it compares the metrics of all paths entering each state, and keeps only the path with the best metric, called the survivor, together with its metric.
- It proceeds through the trellis by eliminating the least likely paths.
- It reduces the decoding complexity to L · 2^(K-1).

The Viterbi algorithm - cont'd

Viterbi algorithm:

A. Do the following setup:
- For a data block of L bits, form the trellis. The trellis has L + K - 1 sections, or levels, and starts at time t_1 and ends at time t_{L+K}.
- Label all the branches in the trellis with their corresponding branch metrics.
- For each state in the trellis at time t_i, denoted S(t_i) ∈ {0, 1, ..., 2^(K-1) - 1}, define a parameter Γ(S(t_i), t_i).

B. Then do the following:
1. Set Γ(0, t_1) = 0 and i = 2.
2. At time t_i, compute the partial path metrics for all the paths entering each state.
3. Set Γ(S(t_i), t_i) equal to the best partial path metric entering each state at time t_i. Keep the survivor path and delete the dead paths from the trellis.
4. If i < L + K, increase i by 1 and return to step 2.
5. Start at state zero at time t_{L+K}. Follow the surviving branches backwards through the trellis. The path thus defined is unique and corresponds to the ML codeword.

Example of hard-decision Viterbi decoding

m = (101), U = (11 10 00 10 11), Z = (11 10 11 10 01)

(Figure: the five-section trellis from t_1 to t_6 with the usual branch labels.)

Example of hard-decision Viterbi decoding - cont'd

Label all the branches with their branch metric, the Hamming distance between the branch word and the received branch word, and compute the state metrics Γ(S(t_i), t_i). (Figure: the labeled trellis.)
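The steps above can be sketched in Python for the example rate-1/2, K = 3 code, using the Hamming distance as branch metric and assuming the trellis is terminated in state zero by K - 1 zero tail bits (a sketch of the procedure, not the slides' own implementation):

```python
def viterbi_hard(pairs, L, K=3):
    """Hard-decision Viterbi decoder for the rate-1/2, K=3 example code,
    assuming zero termination with K-1 tail bits."""
    n_states = 2 ** (K - 1)

    def step(s, b):  # next state and branch word for input bit b in state s
        reg = (b << 2) | s
        u1 = bin(reg & 0b111).count("1") & 1
        u2 = bin(reg & 0b101).count("1") & 1
        return reg >> 1, (u1, u2)

    INF = float("inf")
    gamma = [0] + [INF] * (n_states - 1)     # step 1: Gamma(0, t1) = 0
    paths = [[] for _ in range(n_states)]    # survivor input bits per state
    for i, (z1, z2) in enumerate(pairs):
        inputs = (0, 1) if i < L else (0,)   # tail sections carry only zeros
        new_gamma = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if gamma[s] == INF:
                continue
            for b in inputs:
                ns, (u1, u2) = step(s, b)
                # steps 2-3: partial path metric = old metric + Hamming distance
                m = gamma[s] + (u1 != z1) + (u2 != z2)
                if m < new_gamma[ns]:        # keep the survivor, drop dead paths
                    new_gamma[ns] = m
                    new_paths[ns] = paths[s] + [b]
        gamma, paths = new_gamma, new_paths
    return tuple(paths[0][:L])               # step 5: trace back from state zero

# The slides' example: Z = (11 10 11 10 01)
print(viterbi_hard([(1, 1), (1, 0), (1, 1), (1, 0), (0, 1)], L=3))
```

Running this on the example received sequence reproduces the slides' result m̂ = (100).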
Example of hard-decision Viterbi decoding - cont'd

At each step i = 2, 3, 4, 5, 6, the partial path metrics Γ(S(t_i), t_i) are computed for all paths entering each state; the survivor is kept and the dead paths are deleted. (Figures: the trellis after each step, showing the accumulated state metrics.)

Trace back and then:

m̂ = (100), Û = (11 10 11 00 00)

Example of soft-decision Viterbi decoding

m = (101), U = (11 10 00 10 11), and the received sequence Z is a soft, multi-level demodulator output. The branch metrics are now correlations rather than Hamming distances, and at each state the path with the largest partial path metric Γ(S(t_i), t_i) survives. Decoding gives:

m̂ = (101), Û = (11 10 00 10 11)

(Figure: the trellis with fractional branch metrics such as 5/3 and -5/3 and the accumulated partial path metrics. The soft decoder recovers the transmitted message, whereas the hard decoder above made an error.)
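A soft-decision variant replaces the Hamming branch metric with a correlation metric and keeps the largest partial path metric instead of the smallest. A sketch assuming bit b maps to the BPSK symbol 2b - 1 (the received values below are made up for illustration; they are not the slides' Z):

```python
def viterbi_soft(z, L, K=3):
    """Soft-decision Viterbi decoder: maximize the correlation <Z, S>
    over zero-terminated trellis paths of the rate-1/2, K=3 example code."""
    n_states = 2 ** (K - 1)

    def step(s, b):  # next state and branch word for input bit b in state s
        reg = (b << 2) | s
        u1 = bin(reg & 0b111).count("1") & 1
        u2 = bin(reg & 0b101).count("1") & 1
        return reg >> 1, (u1, u2)

    NEG = float("-inf")
    gamma = [0.0] + [NEG] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    pairs = [(z[2 * i], z[2 * i + 1]) for i in range(len(z) // 2)]
    for i, (z1, z2) in enumerate(pairs):
        inputs = (0, 1) if i < L else (0,)   # tail sections carry only zeros
        new_gamma = [NEG] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if gamma[s] == NEG:
                continue
            for b in inputs:
                ns, (u1, u2) = step(s, b)
                # correlation branch metric with BPSK symbols (0 -> -1, 1 -> +1)
                m = gamma[s] + z1 * (2 * u1 - 1) + z2 * (2 * u2 - 1)
                if m > new_gamma[ns]:        # keep the LARGEST metric here
                    new_gamma[ns] = m
                    new_paths[ns] = paths[s] + [b]
        gamma, paths = new_gamma, new_paths
    return tuple(paths[0][:L])

# A hypothetical noisy version of the BPSK signal for U = (11 10 00 10 11)
z = [0.8, 1.1, 0.9, -0.4, -1.2, -0.7, 1.0, -0.9, 0.6, 1.1]
print(viterbi_soft(z, L=3))
```

Because every received value here keeps the sign of the transmitted symbol, the decoder recovers m̂ = (101), in line with the soft-decision example above.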

Posted: 29/10/2013, 13:15
