Display Interfaces: Fundamentals and Standards
Robert L. Myers
Copyright © 2002 John Wiley & Sons, Ltd
ISBN: 0-471-49946-3

Wiley-SID Series in Display Technology
Series Editor: Anthony C. Lowe, The Lambent Consultancy, Braishfield, UK

Display Systems: Design and Applications – Lindsay W. MacDonald and Anthony C. Lowe (Eds)
Electronic Display Measurement: Concepts, Techniques and Instrumentation – Peter A. Keller
Projection Displays – Edward H. Stupp and Matthew S. Brennesholtz
Liquid Crystal Displays: Addressing Schemes and Electro-Optical Effects – Ernst Lueder
Reflective Liquid Crystal Displays – Shin-Tson Wu and Deng-Ke Yang
Colour Engineering: Achieving Device Independent Colour – Phil Green and Lindsay MacDonald (Eds)
Display Interfaces: Fundamentals and Standards – Robert L. Myers

Published in Association with the Society for Information Display

Display Interfaces: Fundamentals and Standards
Robert L. Myers, Hewlett-Packard Company, USA
John Wiley & Sons, Ltd

Copyright © 2002 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England
Telephone: (+44) 1243 779777
Email (for orders and customer service enquiries): cs-books@wiley.co.uk
Visit our Home Page on www.wileyeurope.com or www.wiley.com
Reprinted March 2003

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to permreq@wiley.co.uk, or faxed to (+44) 1243 770571.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Other Wiley Editorial Offices:
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 33 Park Road, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1

ISBN 0-471-49946-3

Typeset in Times from files supplied by the author. Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire. This book is printed on acid-free paper responsibly manufactured from sustainable forestry, in which at least two trees are planted for each one used for paper production.

Contents

Series Editor's Foreword
Preface

1 Basic Concepts in Display Systems
  1.1 Introduction
    1.1.1 Basic components of a display system
  1.2 Imaging Concepts
    1.2.1 Vector-scan and raster-scan systems; pixels and frames
    1.2.2 Spatial formats vs resolution; fields
    1.2.3 Moving images; frame rates
    1.2.4 Three-dimensional imaging
  1.3 Transmitting the Image Information

2 The Human Visual System
  2.1 Introduction
  2.2 The Anatomy of the Eye
  2.3 Visual Acuity
  2.4 Dynamic Range and Visual Response
  2.5 Chromatic Aberrations
  2.6 Stereopsis
  2.7 Temporal Response and Seeing Motion
  2.8 Display Ergonomics
  References

3 Fundamentals of Color
  3.1 Introduction
  3.2 Color Basics
  3.3 Color Spaces and Color Coordinate Systems
  3.4 Color Temperature
  3.5 Standard Illuminants
  3.6 Color Gamut
  3.7 Perceptual Uniformity in Color Spaces; the CIE L*u*v* Space
  3.8 MacAdam Ellipses and MPCDs
  3.9 The Kelly Chart
  3.10 Encoding Color

4 Display Technologies and Applications
  4.1 Introduction
  4.2 The CRT Display
  4.3 Color CRTs
  4.4 Advantages and Limitations of the CRT
  4.5 The "Flat Panel" Display Technologies
  4.6 Liquid-Crystal Displays
  4.7 Plasma Displays
  4.8 Electroluminescent (EL) Displays
  4.9 Organic Light-Emitting Devices (OLEDs)
  4.10 Field-Emission Displays (FEDs)
  4.11 Microdisplays
  4.12 Projection Displays
    4.12.1 CRT projection
  4.13 Display Applications

5 Practical and Performance Requirements of the Display Interface
  5.1 Introduction
  5.2 Practical Channel Capacity Requirements
  5.3 Compression
  5.4 Error Correction and Encryption
  5.5 Physical Channel Bandwidth
  5.6 Performance Concerns for Analog Connections
    5.6.1 Cable impedance
    5.6.2 Shielding and filtering
    5.6.3 Cable losses
    5.6.4 Cable termination
    5.6.5 Connectors
  5.7 Performance Concerns for Digital Connections

6 Basics of Analog and Digital Display Interfaces
  6.1 Introduction
  6.2 "Bandwidth" vs Channel Capacity
  6.3 Digital and Analog Interfaces with Noisy Channels
  6.4 Practical Aspects of Digital and Analog Interfaces
  6.5 Digital vs Analog Interfacing for Fixed-Format Displays
  6.6 Digital Interfaces for CRT Displays
  6.7 The True Advantage of Digital
  6.8 Performance Measurement of Digital and Analog Interfaces
    6.8.1 Analog signal parameters and measurement
    6.8.2 Transmission-line effects and measurements
    6.8.3 Digital systems

7 Format and Timing Standards
  7.1 Introduction
  7.2 The Need for Image Format Standards
  7.3 The Need for Timing Standards
  7.4 Practical Requirements of Format and Timing Standards
  7.5 Format and Timing Standard Development
  7.6 An Overview of Display Format and Timing Standards
  7.7 Algorithms for Timings – The VESA GTF Standard

8 Standards for Analog Video – Part I: Television
  8.1 Introduction
  8.2 Early Television Standards
  8.3 Broadcast Transmission Standards
  8.4 Closed-Circuit Video; The RS-170 and RS-343 Standards
  8.5 Color Television
  8.6 NTSC Color Encoding
  8.7 PAL Color Encoding
  8.8 SECAM
  8.9 Relative Performance of the Three Color Systems
  8.10 Worldwide Channel Standards
  8.11 Physical Interface Standards for "Television" Video
    8.11.1 Component vs composite video interfaces
    8.11.2 The "RCA Phono" connector
    8.11.3 The "F" connector
    8.11.4 The BNC connector
    8.11.5 The N connector
    8.11.6 The SMA and SMC connector families
    8.11.7 The "S-Video"/mini-DIN connector
    8.11.8 The SCART or "Peritel" connector

9 Standards for Analog Video – Part II: The Personal Computer
  9.1 Introduction
  9.2 Character-Generator Display Systems
  9.3 Graphics
  9.4 Early Personal Computer Displays
  9.5 The IBM PC
  9.6 MDA/Hercules
  9.7 CGA and EGA
  9.8 VGA – The Video Graphics Array
  9.9 Signal Standards for PC Video
  9.10 Workstation Display Standards
  9.11 The "13W3" Connector
  9.12 EVC – The VESA Enhanced Video Connector
  9.13 The Transition to Digital Interfaces
  9.14 The Future of Analog Display Interfaces

10 Digital Display Interface Standards
  10.1 Introduction
  10.2 Panel Interface Standards
  10.3 LVDS/EIA-644
  10.4 PanelLink™ and TMDS™
  10.5 GVIF™
  10.6 Digital Monitor Interface Standards
  10.7 The VESA Plug & Display™ Standard
  10.8 The Compaq/VESA Digital Flat Panel Connector – DFP
  10.9 The Digital Visual Interface™
  10.10 The Apple Display Connector
  10.11 Digital Television
  10.12 General-Purpose Digital Interfaces and Video
  10.13 Future Directions for Digital Display Interfaces

11 Additional Interfaces to the Display
  11.1 Introduction
  11.2 Display Identification
  11.3 The VESA Display Information File (VDIF) Standard
  11.4 The VESA EDID and DDC Standards
  11.5 ICC Profiles and the sRGB Standard
  11.6 Display Control
  11.7 Power Management
  11.8 The VESA DDC-CI and MCCS Standards
  11.9 Supplemental General-Purpose Interfaces
  11.10 The Universal Serial Bus
  11.11 IEEE-1394/"FireWire™"

12 The Impact of Digital Television and HDTV
  12.1 Introduction
  12.2 A Brief History of HDTV Development
  12.3 HDTV Formats and Rates
  12.4 Digital Video Sampling Standards
    12.4.1 Sampling structure
    12.4.2 Selection of sampling rate
    12.4.3 The CCIR-601 standard
    12.4.4 4:2:0 Sampling
  12.5 Video Compression Basics
    12.5.1 The discrete cosine transform (DCT)
  12.6 Compression of Motion Video
  12.7 Digital Television Encoding and Transmission
  12.8 Digital Content Protection
  12.9 Physical Connection Standards for Digital Television
  12.10 Digital Cinema
  12.11 The Future of Digital Video

13 New Displays, New Applications, and New Interfaces
  13.1 Introduction
  13.2 Color, Resolution, and Bandwidth
  13.3 Technological Limitations for Displays and Interfaces
  13.4 Wireless Interfaces
  13.5 The Virtual Display – Interfaces for HMDs
  13.6 The Intelligent Display – DPVL and Beyond
  13.7 Into the Third Dimension
  13.8 Conclusions

Glossary

Bibliography, References, and Recommended Further Reading
  Printed Resources
    Fundamentals, Human Vision, and Color Science
    Display Technology
    Television Broadcast Standards and Digital/High-Definition Television
    Computer Display Interface Standards
    Other Interfaces and Standards
  On-Line Resources
    Standards Organizations and Similar Groups
    Other Recommended On-Line Resources

Index

Foreword

By their nature, display interfaces and the standards that govern their use are ephemeral. They are the more so because extremely rapid developments in the field have been driven by increasing pixel content of displays and by requirements for increased colour depth and update rates. So, why write a book on this subject? There are several reasons, but foremost among them is the fact that the nature and the performance limitations of display interfaces are often ill understood by many professionals involved in display and display system development. That is why this latest addition to the Wiley-SID Series in Display Technology pays particular attention to the principles that underlie display interfaces and their architecture.

In the first four chapters, the author includes information on basic concepts, the human visual system, the fundamentals of colour and different display technologies, to enable an inexperienced reader to acquire sufficient background information to address the remaining nine chapters of the book. In these chapters, all aspects of display interfaces are addressed, starting with performance requirements and the basics of analogue and digital interfaces. Then follow discussions of standards for format and timing, analogue video (for TV and computers) and digital interfaces. Other interfaces than those used to convey image data to the display are also discussed; these are the interfaces which, among other functions, enable a computer to identify and then correctly address a newly connected display. The book concludes with a discussion of the impact of digital and HDTV and of the changes that will be necessary if future interface designs are to be able to deal with ever-increasing display pixel content.

Throughout the book, a great deal of practical information, with examples of commonly used hardware, is provided. This is backed up by a section containing references to source material available in print or from the web, and a glossary in which all the commonly used terms are defined. Interface architectures and the standards that govern them will certainly change. Even so, this volume will remain a valuable handbook for engineers and scientists who are working in the field and a lucid and easy to
read introduction to the subject for those who are not.

Anthony C. Lowe
Braishfield, UK, 2002

ADDITIONAL INTERFACES TO THE DISPLAY

deep, although there is no set limit on the number of separate "branches" ("downstream" ports) which may be supported by each hub. Hubs may be either "powered" or "unpowered"; as the name would imply, a powered hub has its own power supply, and does not rely on the +5 VDC line provided by its "upstream" connection. In either case, all USB "downstream" ports (such as those provided by the host or a hub for the connection of peripherals) must provide up to 500 mW of power (100 mA) on the +5 VDC line when a peripheral is connected and initialized. Under host control, a given peripheral may then be provided with up to 2.5 W (500 mA), if this remains within the limitations of the host or hub.

USB was designed to permit "hot plugging" of all peripheral devices, meaning that any device may be connected or disconnected while the rest of the system remains powered up. Upon connection of each device (or at initialization of a previously connected set of devices at system power-up), the device is identified by the host controller and assigned an address within the current structure. The speed at which the device operates is also identified, and the port to which it is connected is then configured to provide the desired level of service (either the low-speed 1.5 Mbit/s mode or the full 12 Mbit/s). As each device is identified, the proper driver for that device or device class may be loaded by the host system. The USB specifications provide for sufficient standardization of device operation that a "generic" USB driver, at least for a given device class (such as human-input devices), may typically be used with all devices of that class connected to the system.

USB provides for a mix of isochronous and non-isochronous (or "asynchronous") data streams to be supported simultaneously; as noted in the previous chapter, an "isochronous" data transmission is one
in which the receipt of the data by the receiving device is time-critical. An example of this is the transmission of digital audio, in which the data representing each sample must be received within the defined sample period. To support such transmissions, the USB controller allocates the available capacity on the interface as best it can, giving priority to devices requiring such isochronous flow. Thus, it is possible for one or more isochronous devices to "hog the bus" at the expense of others.

For any transmission type or rate, the USB interface transmits data serially on a single differential pair, in a "half-duplex" mode. All operations must be initiated by the host controller; no direct communications (i.e., "peer-to-peer" transmissions) are permitted between peripherals. Owing to the transmission protocol and format, the length of any USB cable is strictly limited to a maximum of 5 m (slightly under 16.5 feet); to extend any branch beyond this limit requires at least one intermediate hub. A given physical product may contain both device and hub functions, although these will still be treated as logically separate blocks by the controller. As a device is a separate function from a hub, such configurations can appear to provide "daisy-chain" connections, although the limitation to five levels in the tiered-star topology represents a hard limit on the length of such a "chain." For maximum flexibility, then, most hubs will provide multiple downstream ports.

The specification defines two physical connectors, shown in Figure 11-3. The "type A" connector, the more rectangular, "flatter" of the two, is used as the "downstream" connection from hosts and hubs. The "type B" connector, which is more nearly square in cross-section, is used on USB peripherals.

In April 2000, a new USB consortium led by Compaq, Hewlett-Packard, Lucent, NEC, Intel, and Microsoft released the USB 2.0 specifications, which greatly increased the data rate supported by the system. USB 2.0 remains
completely compatible with the earlier specification (which in its latest revision was "USB 1.1"), but permits data rates up to 480 Mbit/s. USB 1.1 peripherals can be used in a USB 2.0 system, although the port to which they are connected will then be configured by the host controller to either the 12 Mbit/s or 1.5 Mbit/s mode, preventing higher-speed devices from being connected downstream of that port. USB 2.0 and USB 1.1 hubs may be mixed in a given system, but the user must be aware of which is which, such that the higher-speed USB 2.0 devices are not connected to a hub which cannot support them. Existing USB cabling, if compliant with the original standard, is completely capable of supporting the faster rate.

Figure 11-3 USB connectors and pinouts. As shown in the photograph, USB connectors come in several styles and sizes; this picture shows the "A" and "B" type standard-sized connectors (in white), plus an example of a mini-USB connector. The "A" connector, the flatter of the two, is typically the "upstream" connection – the output of a hub or controller, for example. The more squared-off "B" type connector is the "downstream" connection, used at the input to USB devices. (Photograph courtesy of Don Chambers/Total Technologies, Inc.; used by permission.)

11.11 IEEE-1394/"FireWire™"

In 1986, Apple Computer developed a new high-speed serial interface which was introduced under the name "FireWire™." The specifications for this interface were later standardized by the Institute of Electrical and Electronics Engineers as IEEE-1394, released in its original form in 1995. Currently, this interface is commonly known under both the "1394" and "FireWire" names, although properly the latter refers only to Apple's implementation (and is a trademark of that company). Other proprietary names for this interface have also been used by various companies, such as "i.Link™" for Sony's implementation. We refer to it here simply as "1394." This standard has become the digital interface
of choice in several markets, especially in both consumer and professional digital audio/video (A/V) systems. IEEE-1394 has also been selected as the underlying physical/electrical interface standard for the VESA/CEA Home Network standard.

In many respects, 1394 resembles USB; both are serial interfaces intended for the easy connection of various types of devices in a "networked" manner. Both permit "hot-plugging" (connection and disconnection of devices at any time), and both provide for power to be carried by the physical interface along with data. And, with the recent introduction of the USB 2.0 specification, the data rates supported are at least comparable; the original IEEE-1394-1995 standard defined operation at 100, 200, and 400 Mbit/s.

However, there are significant differences between the two that continue to distinguish them in the market. The most obvious is in the topology of the system. Unlike USB, which relies on a single controller as the master of the entire "tree" of hubs and devices, 1394 is based on peer-to-peer communications. Any device connected to the interface may request control of the bus, under defined protocols and capacity-allocation limits; there is no single master controller. 1394 has a cable length limit similar to USB's – in this case, 4.5 m under the original specification – but permits the use of "repeaters" to extend the connection up to 16 times between devices. Note that longer cable lengths may be possible at lower data rates, or using expected future specifications for improved cabling. 1394 also lends itself to optical connections, which may permit repeaterless connections of 50 to 100 m or more. Each 1394 bus can support up to 63 separate devices, but there is also a provision for "bridges" between buses. A maximum of 1,023 buses may be interconnected via 1394 bridges, for an ultimate limit of 64,449 interconnected devices.

Physically, the standard 1394 connector provides six contacts in a
rectangular shell which has a slight resemblance to the USB Type A connector. This same connector type, however, is used for all 1394 ports. The connector and its pinout are shown in Figure 11-4. A 1394 cable comprises two twisted pairs, individually shielded, for the data channels; these are crossed between the data contacts at each end, such that each device sees a separate "transmit" and "receive" pair. The remaining two contacts are for power and ground, in this case a DC supply of up to 1.5 A at 8 to 40 VDC. An overall shield is also specified, covering all six conductors. A smaller 4-pin connector has also been used in some products, which deletes the power connection and its return.

Figure 11-4 IEEE-1394/"FireWire" connector and pinout. Unlike USB, the IEEE-1394 standard uses the same connector at both ends of the cable. The interface is based on two twisted-pair connections, both carrying data and data strobe signals, but these are switched from one end of the cable assembly to the other such that each device sees a "transmit" and "receive" pair. The pinout shown here is for the male connector. (Photograph courtesy of Don Chambers/Total Technologies, Inc.; used by permission. "FireWire" is a trademark of Apple Computer Corp.)
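The topology limits quoted above lend themselves to a quick arithmetic check. The following sketch is illustrative only (the function and constant names are invented for this example, not part of any standard); it simply tests a proposed 1394 layout against the figures given in the text: 4.5 m per cable segment under the original specification, up to 16 repeater hops between devices, 63 devices per bus, and 1,023 bridged buses.

```python
# Hypothetical helper: checks a proposed IEEE-1394 layout against the
# limits quoted in the text. Names and structure are invented for
# illustration; only the numeric limits come from the discussion above.

MAX_SEGMENT_M = 4.5        # cable length per segment, original spec
MAX_HOPS = 16              # repeaters may extend a link up to 16 times
MAX_DEVICES_PER_BUS = 63   # devices addressable on one bus
MAX_BRIDGED_BUSES = 1023   # buses interconnectable via bridges

def topology_ok(segment_lengths_m, devices_per_bus, num_buses):
    """Return True if the proposed layout stays within the quoted limits."""
    if len(segment_lengths_m) > MAX_HOPS:
        return False                       # too many repeater hops on one link
    if any(s > MAX_SEGMENT_M for s in segment_lengths_m):
        return False                       # a cable segment is too long
    if devices_per_bus > MAX_DEVICES_PER_BUS or num_buses > MAX_BRIDGED_BUSES:
        return False
    return True

# The "ultimate limit" of interconnected devices quoted in the text:
# 63 devices on each of 1,023 bridged buses.
ultimate_limit = MAX_DEVICES_PER_BUS * MAX_BRIDGED_BUSES   # 64,449
```

Run at the limits (sixteen 4.5 m segments, 63 devices, 1,023 buses) the check passes, and the device product reproduces the 64,449 figure cited in the text.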
Like USB, the 1394 standard defines support for both isochronous and asynchronous communications, and so also lends itself well to supporting timing-critical, streaming-data applications such as digital video or audio transmission. Any single device may request up to 65% of the available capacity of the bus, and a maximum of 85% of the total capacity may be allocated to such requests across all devices on the bus. This ensures that some capacity will always be available for asynchronous communications, preventing such traffic from being completely shut down through an "overload" of isochronous allocations.

Also like USB, the 1394 standard continues to be developed. A recently completed revision to the original specifications will extend the capacity of this system beyond the original 100/200/400 Mbit/s levels, adding 800, 1600, and 3200 Mbit/s as supported rates. With this increase in capacity, 1394 can continue to support the high-speed connections required by mass storage devices, high-definition television, and multiple digital audio streams.

The slightly more complex cabling design, and especially the more elaborate protocol and the requirement that all devices be peers (and so effectively have "controller" capability), result in the 1394 bus being somewhat more costly to implement, overall, than the USB system. However, this increased cost does buy increased capabilities, as described above. Both systems are likely to continue to co-exist, with USB being the more attractive choice as a "desktop" PC peripheral interconnect, and 1394 remaining the standard for digital A/V system connections and "home network" and similar applications.

12 The Impact of Digital Television and HDTV

12.1 Introduction

As was noted in Chapter 8, broadcast television represents a particularly interesting case of a "display interface," as it requires the
transmission of a fairly high-resolution, full-color, moving picture, and a significant amount of supplemental information (audio, for instance), over a very limited channel. This is especially true when considering the case of high-definition television, or "HDTV," as it has developed over the last several decades. HDTV began simply as an effort to bring significantly higher image quality to the television consumer, through an increase in the "resolution" (line count and effective video bandwidth, at least) of the transmitted imagery. However, as should be apparent from the earlier discussion of the original broadcast TV standards, making a significant increase in the information content of the television signal over these original standards (now referred to as "standard definition television", or "SDTV") is not an easy task. Television channels, as used by any of the standard systems, do not have much in the way of readily apparent additional capacity available. Further, constraints imposed by the desired broadcast characteristics (range, power required for the transmission, etc.)
and the desired cost of consumer receivers limit the range of practical choices for the broadcast television spectrum. And, of course, the existing channels were already allocated – and in some markets already filled to capacity. In the absence of any way to generate brand-new broadcast spectrum, the task of those designing the first HDTV systems seemed daunting indeed.

In a sense, new broadcast spectrum was being created, in the form of cable television systems and television broadcast via satellite. Initially, both of these new distribution models used essentially the same analog video standards as had been used by conventional over-the-air broadcasting. Satellite distribution, however, was not initially intended for direct reception by the home viewer; it was primarily for network feeds to local broadcast outlets, but individuals (and manufacturers of consumer equipment) soon discovered that these signals were relatively easy to receive. The equipment required was fairly expensive, and the receiving antenna (generally, a parabolic "dish" type) inconveniently large, but neither was outside the high end of the consumer television market. Moving satellite television into the mainstream of this market, however, required the development of the "direct broadcast by satellite", or DBS, model. DBS, as exemplified in North America by such services as Dish Network or DirecTV, and in Europe by BSkyB, required more powerful, purpose-built satellites, so as to enable reception by small antennas and inexpensive receivers suited to the consumer market. But DBS also required a further development that would eventually have an enormous impact on all forms of television distribution – the use of digital encoding and processing techniques. More than any other single factor, the success of DBS in the consumer TV market changed "digital television" from being a means for professionals to achieve effects and store video in the studio, to
displacing the earlier analog broadcast standards across the entire industry. And the introduction of digital technology also caused a rapid and significant change in the development of HDTV. No longer simply a higher-definition version of the existing systems, "HDTV" efforts transformed into the development of what is now more correctly referred to as "Digital Advanced Television" (DATV). It is in this form that a true revolution in television is now coming into being, one that will provide far more than just more pixels on the screen.

12.2 A Brief History of HDTV Development

Practical high-definition television was first introduced in Japan, through the efforts of NHK (the state-owned Japan Broadcasting Corporation). Following a lengthy development begun in the early 1970s, NHK began its first satellite broadcasts using an analog HDTV system in 1989. Known to the Japanese public as "Hi-Vision", the NHK system is also commonly referred to within the TV industry as "MUSE," which properly refers to the encoding method (MUltiple Sub-Nyquist Sampling Encoding) employed. The Hi-Vision/MUSE system is based on a raster definition of 1125 total lines per frame (1035 of which are active), 2:1 interlaced with a 60.00 Hz field rate. If actually transmitted in its basic form, such a format would require a bandwidth well in excess of 20 MHz for the luminance components alone. However, the MUSE encoding used a fairly sophisticated combination of bandlimiting, subsampling of the analog signal, and "folding" of the signal spectrum (employing sampling/modulation to shift portions of the complete signal spectrum to other bands within the channel, similar to the methods used in NTSC and PAL to interleave the color and luminance information), resulting in a final transmitted bandwidth of slightly more than 8 MHz. The signal was transmitted from the broadcast satellite using conventional FM.

While technically an analog system, the MUSE broadcasting system also required the use of a significant amount
of digital processing and some digital data transmission. Up to four channels of digital audio could be transmitted during the vertical blanking interval, and digitally calculated motion-vector information was provided to the receiver. This permitted the receiver to compensate for blur introduced due to the fact that moving portions of the image are transmitted at a lower bandwidth than the stationary portions. (This technique was employed for additional bandwidth reduction, under the assumption that detail in a moving object is less visible, and therefore less important to the viewer, than detail that is stationary within the image.) An image format of 1440 × 1035 pixels was assumed for the MUSE HDTV signal.

While this did represent the first HDTV system actually deployed on a large scale, it also experienced many of the difficulties that hinder the acceptance of current HDTV standards. Both the studio equipment and home receivers were expensive – early Hi-Vision receivers in Japan cost in excess of ¥4.5 million (approx. US$35,000 at the time) – but really gave only one benefit: a higher-definition, wider-aspect-ratio image. Both consumers and broadcasters saw little reason to make the required investment until and unless more and clearer benefits of a new system became available. So while HDTV broadcasting did grow in Japan – under the auspices of the government-owned network – MUSE or Hi-Vision cannot be viewed as a true commercial success. The Japanese government eventually abandoned its plans for long-term support of this system, and instead announced in the 1990s that Japan would transition to the digital HDTV system being developed in the US.

In the meantime, beginning in the early 1980s, efforts were begun in both Europe and North America to develop successors to the existing television broadcast systems. This involved both HDTV and DBS development, although initially as quite separate things. Both markets had already seen the
introduction of alternatives to conventional, over-the-air broadcasting, in the form of cable television systems and satellite receivers, as noted above. Additional "digital" services were also being introduced, such as the teletext systems that became popular primarily in Europe. But all of these alternates or additions remained strongly tied to the analog transmission standards of conventional television. And, again due to the lack of perceived benefit vs. required investment for HDTV, there was initially very little enthusiasm from the broadcast community to support development of a new standard.

This began to change around the middle of the decade, as TV broadcasters perceived a growing threat from cable services and pre-recorded videotapes. As these were not constrained by the limitations of over-the-air broadcast, it was feared that they could offer the consumer significant improvements in picture quality and thereby divert additional market share from broadcasting. In the US, the Federal Communications Commission (FCC) was petitioned to initiate an effort to establish a broadcast HDTV standard, and in 1987 the FCC's Advisory Committee on Advanced Television Service (ACATS) was formed. In Europe, a similar effort was initiated in 1986 through the establishment of a program administered jointly by several governments and private companies (the "Eureka-95" project, which began with the stated goal of introducing HDTV to the European television consumer by 1992). Both received numerous proposals, which originally were either fully analog systems or analog/digital hybrids. Many of these involved some form of "augmentation" of the existing broadcast standards (the "NTSC" and "PAL" systems), or at least were compatible with them to a degree which would have permitted the continued support of these standards indefinitely.

However, in 1990 this situation changed dramatically. Digital video techniques, developed for computer (CD-ROMs, especially) and DBS use, had advanced to
the point where an all-digital HDTV system was clearly possible, and in June of that year just such a system was proposed to the FCC and ACATS. General Instrument Corp. presented the all-digital “DigiCipher” system, which (as initially proposed) was capable of sending a 1408 × 960 image, using a 2:1 interlaced scanning format with a 60 Hz field rate, over a standard 6 MHz television channel. The advantages of an all-digital approach, particularly in enabling the level of compression which would permit transmission in a standard channel, were readily apparent, and very quickly the remaining proponents of analog systems either changed their proposals to an all-digital type or dropped out of consideration.

THE IMPACT OF DIGITAL TELEVISION AND HDTV

Table 12-1  The final four US digital HDTV proposals.(a)

  Proponent                 Name of proposal                Image/scan format             Additional comments
  Zenith/AT&T               “Digital Spectrum Compatible”   1280 × 720 progressive scan   59.94 Hz frame rate; 4-VSB modulation
  MIT/General Instruments   ATVA                            1280 × 720 progressive scan   59.94 Hz frame rate; 16-QAM modulation
  ATRC(b)                   Advanced Digital Television     1440 × 960 2:1 interlaced     59.94 Hz field rate; SS-QAM modulation
  General Instruments       “DigiCipher”                    1408 × 960 2:1 interlaced     59.94 Hz field rate; 16-QAM modulation

(a) The proponents of these formed the “Grand Alliance” in 1993, which effectively merged these proposals and other input into the final US digital television proposal.
(b) “Advanced Television Research Consortium”; members included Thomson, Philips, NBC, and the David Sarnoff Research Center.

By early 1992, the field of contenders in the US was down to just four proponent organizations (listed in Table 12-1), each with an all-digital proposal, and a version of the NHK MUSE system. One year later, the NHK system had also been eliminated from consideration, but a special subcommittee of the ACATS was unable to determine a clear winner from among the remaining four. Resubmission and retesting of the candidate systems was proposed,
but the chairman of the ACATS also encouraged the four proponent organizations to attempt to come together to develop a single joint proposal. This was achieved in May, 1993, with the announcement of the formation of the so-called “Grand Alliance,” made up of member companies from the original four proponent groups (AT&T, the David Sarnoff Research Center, General Instrument Corp., MIT, North American Philips, Thomson Consumer Electronics, and Zenith). This alliance finally produced what was to become the US HDTV standard (adopted in 1996), which today is generally referred to as the “ATSC” (Advanced Television Systems Committee, the descendant of the original NTSC group) system. Before examining the details of this system, we should review progress in Europe over this same period. The Eureka-95 project mentioned above can in many ways be seen as the European “Grand Alliance,” as it represented a cooperative effort of many government and industry entities, but it was not as successful as its American counterpart. Europe had already developed an enhanced television system for direct satellite broadcast, in the form of a Multiplexed Analog Components (MAC) standard, and the Eureka program was determined to build on this with the development of a compatible, high-definition version (HD-MAC). A 1250-line, 2:1 interlaced, 50 Hz field rate production standard was developed for HD-MAC, with the resulting signal again fit into a standard DBS channel through techniques similar to those employed by MUSE. However, the basic MAC DBS system itself never became well established in Europe, as the vast majority of DBS systems instead used the conventional PAL system. The HD-MAC system was placed in further jeopardy by the introduction of an enhanced, widescreen augmentation of the existing PAL system. By early 1993, Thomson and Philips – two leading European manufacturers of both consumer television receivers and studio equipment – announced that they were discontinuing efforts in
HD-MAC receivers, in favor of widescreen PAL. Both Eureka-95 and HD-MAC were dead.

As in the US, European efforts would now focus on the development of all-digital HDTV systems. In 1993, a new consortium, the Digital Video Broadcasting (DVB) Project, was formed to produce standards for both standard-definition and high-definition digital television transmission in Europe. The DVB effort has been very successful, and has generated numerous standards for broadcast, cable, and satellite transmission of digital television, supporting formats that span a range from somewhat below the resolution of the existing European analog systems, to high-definition video comparable to the US system. However, while there is significant similarity between the two, there are again sufficient differences so as to make the American and European standards incompatible at present. As discussed later, the most significant differences lie in the definitions of the broadcast encoding and modulation schemes used, and there remains some hope that these eventually will be harmonized. A worldwide digital television standard, while yet to be certain, remains the goal of many.

During the development of all of the digital standards, interest in these efforts from other industries – and especially the personal computer and film industries – was growing rapidly. Once it became clear that the future of broadcast television was digital, the computer industry saw a clear interest in this work. Computer graphics systems, already an important part of television and film production, would clearly continue to grow in these applications, and personal computers in general would also become new outlets for entertainment video. Many predicted the convergence of the television receiver and home computer markets, resulting in combination “digital appliances” which would handle the tasks of both. The film industry also had obvious interests and concerns relating to the development of HDTV. A
large-screen, wide-screen, and high-definition system for the home viewer could potentially impact the market for traditional cinematic presentation. On the other hand, if its standards were sufficiently capable, a digital video production system could potentially augment or even replace traditional film production. Interested parties from both became vocal participants in the HDTV standards arena, although often pulling the effort in different directions.

12.3 HDTV Formats and Rates

One of the major areas of contention in the development of digital and high-definition television standards was the selection of the standard image formats and frame/field rates. The only common goal of HDTV efforts in general was the delivery of a “higher resolution” image – the transmitted video would have to provide more scan lines, with a greater amount of detail in each, than was available with existing standards. In “digital” terms, more pixels were needed. But exactly how much of an increase was needed to make HDTV worthwhile, and how much could practically be achieved?
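To put the “more pixels” question in perspective, a quick arithmetic sketch comparing pixel counts for format sizes that appear in this chapter may help. The labels below are informal descriptions of ours, not standard designations.

```python
# Pixel-count comparison between "standard definition" formats and the
# HDTV formats discussed in this chapter. Pure arithmetic, no spec claims.

formats = {
    "525/60 SDTV (720 x 480)": (720, 480),
    "625/50 SDTV (720 x 576)": (720, 576),
    "720-line progressive HD (1280 x 720)": (1280, 720),
    "1080-line interlaced HD (1920 x 1080)": (1920, 1080),
}

base = 720 * 480  # reference: 525/60 SDTV active pixel count
for name, (h, v) in formats.items():
    pixels = h * v
    print(f"{name}: {pixels:,} pixels ({pixels / base:.1f}x 525/60 SDTV)")
```

Note that “doubling the resolution” in each axis roughly quadruples the pixel count, which is why the 1920 × 1080 format carries about six times the data of a 720 × 480 frame.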
Using the existing broadcast standards – at roughly 500–600 scan lines per frame – as a starting point, the goal that most seemed to aim for was a doubling of this count. “HDTV” was assumed to refer to a system that would provide about 1000–1200 lines, and comparable resolution along the horizontal axis. However, many argued that this overlooked a simple fact of life for the existing standards: that they were not actually capable of delivering the assumed 500 or so scan lines’ worth of vertical resolution. As was covered in Chapter 8, several factors combine to reduce the delivered resolution of interlaced systems such as the traditional analog television standards. It was therefore argued that an “HD” system could provide the intended “doubling of resolution” by employing a progressive-scan format of between 700 and 800 scan lines per frame. These would also benefit from the other advantages of a progressive system over interlacing, such as the absence of line “twitter” and no need to provide for the proper interleaving of the fields. So two distinct classes of HDTV proposals became apparent – those using 2:1 interlaced formats of between roughly 1000 and 1200 active lines, and progressive-scan systems of about 750 active lines. Further complicating the format definition problem were requirements, from various sources, for the line rate, frame or field rates, and/or pixel sampling rates to be compatible with existing standards. The computer industry also weighed in with the requirement that any digital HD formats selected as standards employ “square pixels” (see Chapter 1), so that they would be better suited to manipulation by computer-graphics systems. Besides the image format, the desired frame rate for the standard system was the other major point of concern. The computer industry had by this time already moved to display rates of 75 Hz and higher, to meet ergonomic requirements for “flicker-free” displays, and wanted an HDTV
rate that would also meet these. The film industry had long before standardized on frame rates of 24 fps (in North America) and 25 fps (in Europe), and this also argued for a frame or field rate of 72 or 75 Hz for HDTV. Film producers were also concerned about the choice of image formats – HDTV was by this time assumed to be “widescreen,” but using a 16:9 aspect ratio that did not match any widescreen film format. Alternatives up to 2:1 were proposed. But the television industry also had legacies of its own: the 50 Hz field rate common to European standards, and the 60 (and then 59.94+) Hz rate of the North American system. The huge amount of existing source material using these standards, and the expectation that broadcasting under the existing systems would continue for some time following the introduction of HDTV, argued for HD’s timing standards to be strongly tied to those of standard broadcast television. Discussions of these topics, throughout the late 1980s and early 1990s, often became quite heated arguments. In the end, the approved standards in both the US and Europe represent a compromise between these many factors. All use – primarily – square-pixel formats. And, while addressing the desire for progressive scan by many, the standards also permit the use of some interlaced formats as well, both as a matter of providing for compatibility with existing source material at the low end, and as a means of permitting greater pixel counts at the high. The recognized transmission formats and frame rates of the major HDTV standards in the world as of this writing are listed in Table 12-2. (It should be noted that, technically, there are no format and rate standards described in the official US HDTV broadcast specifications. The information shown in this table represents the last agreed-to set of format and rate standards proposed by the Grand Alliance and the ATSC. In a last-minute compromise, due to objections raised by certain computer-industry interests, this table was
dropped from the proposed US rules. It is, however, still expected to be followed by US broadcasters for their HDTV systems.) Note that both systems include support for “standard definition” programming. In the DVB specifications, the 720 × 576 format is a common “square-pixel” representation of 625/50 PAL/SECAM transmissions. The US standard supports two “SDTV” formats: 640 × 480, which is a standard “computer” format and one that represents a “square-pixel” version of 525/60 video, and 720 × 480. The latter format, while not using “square” pixels, is the standard for 525/60 DVD recordings, and may be used for either 4:3 (standard aspect ratio) or 16:9 “widescreen” material. It should also be noted that, while the formats shown in the table are those currently expected to be used under these systems, there is nothing fundamental to either system that would prevent the use of the other’s formats and rates. Both systems employ a packetized data-transmission scheme and use very similar compression methods. (Again, the biggest technical difference is in the modulation system used for terrestrial broadcast.) There is still some hope for reconciliation of the two into a single worldwide HDTV broadcast standard.

Table 12-2  Common “HDTV” broadcast standard formats

  Standard                     Image format (H × V)      Rates/scan format                  Comments
  US “ATSC” HDTV               640 × 480                 60/59.94 Hz; 2:1 interlaced        “Standard definition” TV; displayed as 4:3 only
    broadcast standard         720 × 480                 60/59.94 Hz; 2:1 interlaced        SDTV; std. DVD format; displayed as 4:3 or 16:9
                               1280 × 720                24/30/60 Hz;(a) progressive        Square-pixel 16:9 format
                               1920 × 1080               24/30/60 Hz;(a) progressive and    Square-pixel 16:9 format; 2:1 interlaced
                                                           2:1 interlaced                     at 59.94/60 Hz only
  DVB (as used in existing     720 × 576(b)              (b)                                SDTV; std. DVD format for 625/50 systems
    625/50 markets)            1440 × 1152(b)            (b)                                2× SDTV format; non-square pixels
                               1920 × 1152(b)            (b)                                1152-line version of common 1920 × 1080 format; non-square at 16:9
                               2048 × 1152(b)            (b)                                Square-pixel 16:9 1152-line format
  Japan/NHK “MUSE”             1440 × 1035 (effective)   59.94 Hz; 2:1 interlaced           Basically an analog system; will be made obsolete by adoption of an all-digital standard

(a) The ATSC proposal originally permitted transmission of these formats at 24.00, 30.00 and 60.00 frames or fields per second, as well as at the so-called “NTSC-compatible” (N/1.001) versions of these.
(b) In regions currently using analog systems based on the 625/50 format, DVB transmissions would likely use 25 or 50 frames or fields per second, either progressive-scan or 2:1 interlaced. However, the DVB standards are not strongly tied to a particular rate or scan format, and could, for example, readily be used at the “NTSC” 59.94+ Hz field rate.

12.4 Digital Video Sampling Standards

“Digital television” generally does not refer to a system that is completely digital, from image source to output at the display. Cameras remain, to a large degree, analog devices (even the CCD image sensors used in many video cameras are, despite being fixed-format, fundamentally analog in operation), and of course existing analog source material (such as video tape) is very often used as the input to “digital” television systems. To a large degree, then, digital television standards have been shaped by the specifications required for sampling analog video, the first step in the process of conversion to digital form. Three parameters are commonly given to describe this aspect of digital video – the number and nature of the samples. First, the sampling clock selection is fundamental to any video digitization system. In digital video standards based on the sampling of analog signals, this clock generally must be related to the basic timing parameters of the original analog standard. This is needed both to keep the sampling grid synchronized with the analog video (such that the samples occur in repeatable locations within each frame), and so that the sampling clock itself may be
easily generated from the analog timebase. With the clock selected, the resulting image format in pixels is set – the number of samples per line derives from the active line period divided by the sample period, and the number of active lines is presumably already fixed within the existing analog standard.

12.4.1 Sampling structure

Both of these will be familiar to those coming from a computer-graphics background, but the third parameter is often the source of some confusion. The sampling structure of a digital video system is generally given via a set of three numbers; these describe the relationship between the sampling clocks used for the various components of the video signal. As in analog television, many digital TV standards recognize that the information relating to color does not need to be provided at the same bandwidth as the luminance channel, and therefore these “chroma” components will be subsampled relative to the luminance signal. An example of a sampling structure, stated in the conventional manner, is “YCRCB, 4:2:2”, indicating that the color-difference signals “CR” and “CB” are each being sampled at one-half the rate of the luminance signal Y. (The reason for this being “4:2:2” and not “2:1:1” will become clear in a moment.) Note that “CR” and “CB” are commonly used to refer to the color-difference signals in digital video practice, as in “YCRCB” rather than the “YUV” label common in analog video.

12.4.2 Selection of sampling rate

Per the Nyquist sampling theorem, any analog signal may be sampled and the original information fully recovered only if the sampling rate exceeds twice the bandwidth of the original signal. (Note that the requirement is based on the bandwidth, and not the upper frequency limit of the original signal in the absolute sense.)
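The Nyquist limit and the line-locked-clock idea can both be checked with a few lines of arithmetic, using the standard “NTSC” relationships (line rate = 4.5 MHz/286, color subcarrier = 227.5 × line rate). The helper name below is ours, not from any standard.

```python
# Minimal numeric check of the Nyquist criterion and of a line-locked
# sampling clock, using the standard "NTSC" timing relationships.

def nyquist_min_rate(bandwidth_hz):
    """Lowest usable sampling rate: just above twice the signal bandwidth."""
    return 2.0 * bandwidth_hz

# NTSC luminance is bandlimited to 4.2 MHz, so the floor is 8.4 MHz:
print(nyquist_min_rate(4.2e6) / 1e6)   # 8.4 (MHz)

# Locking the sampling clock to the line rate fixes the number of
# samples per line.  For NTSC: line rate = 4.5 MHz / 286, and the
# color subcarrier fsc = 227.5 x line rate; the "4fsc" clock is 4 x fsc.
line_rate = 4.5e6 / 286                # ~15,734.27 Hz
fsc = 227.5 * line_rate                # ~3.579545 MHz
print(round(4 * fsc / line_rate))      # 910 total samples per line
```

Because the 4fsc clock is an exact multiple of the line rate (4 × 227.5 = 910), every scan line yields the same 910 samples, in the same positions, frame after frame.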
If we were to sample, say, the luminance signal of a standard NTSC transmission (with a bandwidth restricted to 4.2 MHz), the minimum sampling rate would be 8.4 MHz. Or we might simply require that we properly sample the signal within the standard North American 6 MHz television channel, which then gives a lower limit on the sampling rate of 12 MHz. However, as noted above, it is also highly desirable that the selected sampling rate be related to the basic timing parameters of the analog system. This will result in a stable number and location of samples per line, etc. A convenient reference frequency in the color television standards, and one that is already related to (and synchronous with) the line rate, is the color subcarrier frequency (commonly 3.579545+ MHz for the “NTSC” system, or 4.433618+ MHz for PAL). These rates were commonly used as the basis for the sampling clock rate in the original digital television standards. To meet the requirements of the Nyquist theorem, a standard sampling rate of four times the color subcarrier was used, and so these are generally referred to as the “4fsc” standards. The basic parameters for such systems for both “NTSC” (525 lines/frame at a 59.94 Hz field rate) and “PAL” (625/50) transmissions are given in Table 12-3. Note that these are intended to be used for sampling the composite analog video signal, rather than the separate components, and so no “sampling structure” information, per the above discussion, is given.

Table 12-3  4fsc sampling standard rates and the resulting digital image formats

  Parameter                                    525/60 “NTSC” video    625/50
  Color subcarrier (typ.; MHz)                 3.579545               4.433619
  4 × color subcarrier (sample rate; MHz)      14.318182              17.734475
  No. of samples per line (total)              910                    1135
  No. of samples per line (active)             768                    948
  No. of lines per frame (active)              485                    575

12.4.3 The CCIR-601 standard

The 4fsc standards suffer from being incompatible between NTSC and PAL versions. Efforts to develop a sampling
specification that would be usable with both systems resulted in CCIR Recommendation 601, “Encoding Parameters for Digital Television for Studios.” CCIR-601 is a component digital video standard which, among other things, establishes a single common sampling rate usable with all common analog television systems. It is based on the least common multiple of the line rates for both the 525/60 and 625/50 common timings. This is 2.25 MHz, which is 143 times the standard 525/60 line rate (15,734.26+ Hz), and 144 times the standard 625/50 rate (15,625 Hz). The minimum acceptable luminance sampling rate based on this least common multiple is 13.5 MHz (six times the 2.25 MHz LCM), and so this was selected as the common sampling rate under CCIR-601. (Note that as this is a component system, the minimum acceptable sampling rate is set by the luminance signal bandwidth, not the full channel width.) Acceptable sampling of the color-difference signals could be achieved at one-fourth this rate, or 3.375 MHz (1.5 times the LCM of the line rates). The 3.375 MHz rate was therefore established as the actual reference frequency for CCIR-601 sampling, and the sampling structure descriptions are based on that rate. Common sampling structures used with this rate include:

• 4:1:1 sampling. The luminance signal is sampled at 13.5 MHz (four times the 3.375 MHz reference), while the color-difference signals are sampled at the reference rate.

• 4:2:2 sampling. The luminance signal is sampled at 13.5 MHz (four times the 3.375 MHz reference), but the color-difference signals are sampled at twice the reference rate (6.75 MHz). This is the most common structure for studio use, as it provides greater bandwidth for the color-difference signals; the use of 4:1:1 is generally limited to low-end applications for this reason.

• 4:4:4 sampling. All signals are sampled at the 13.5 MHz rate. This provides for equal bandwidth for all, and so may also be used for the base RGB signal set (which is assumed to require equal bandwidth for all three signals). It is also used in YCRCB applications for the highest possible quality.

The resulting parameters for CCIR-601 4:2:2 sampling for both the “NTSC” (525/60 scanning) and “PAL” (625/50 scanning) systems are given in Table 12-4.

Table 12-4  CCIR-601 sampling for the 525/60 and 625/50 systems and the resulting digital image formats

  Parameter                                  525/60 “NTSC” video    625/50
  Sample rate (luminance channel)            13.5 MHz               13.5 MHz
  No. of Y samples per line (total)          858                    864
  No. of Y samples per line (active)         720                    720
  No. of lines per frame (active)            480                    576
  Chrominance sampling rate (for 4:2:2)      6.75 MHz               6.75 MHz
  Samples per line, CR and CB (total)        429                    432
  Samples per line, CR and CB (active)       360                    360

12.4.4 4:2:0 Sampling

It should be noted that another sampling structure, referred to as “4:2:0,” is also in common use, although it does not strictly fit into the nomenclature of the above. Many digital video standards, and especially those using the MPEG-2 compression technique (which will be covered in the following section), recognize that the same reasoning that applies to subsampling the color components in a given line – that the viewer will not see the results of limiting the chroma bandwidth – can also be applied in the vertical direction. “4:2:0” sampling refers to an encoding in which the color-difference signals are subsampled as in 4:2:2 (i.e., sampled at half the rate of the luminance signal), but then also averaged over multiple lines. The end result is to provide a single sample of each of the color signals for every four samples of luminance, but with those four samples representing a 2 × 2 array (Figure 12-1) rather than four successive samples in the same line (as in 4:1:1 sampling). This is most often achieved by beginning with a 4:2:2 sampling structure, and averaging the color-difference samples over multiple lines as shown. (Note that in this example, the original video is assumed to be 2:1 interlace scanned – but the averaging is still performed over lines which are adjacent in the complete frame, meaning that the operation must span two successive fields.)

Figure 12-1  4:2:2 and 4:2:0 sampling structures. In 4:2:2 sampling, the color-difference signals CR and CB are sampled at half the rate of the luminance signal Y. So-called “4:2:0” sampling also reduces the effective bandwidth of the color-difference signals in the vertical direction, by creating a single set of CR and CB samples for each 2 × 2 block of Y samples. Each is derived by averaging data from four adjacent lines in the complete 4:2:2 structure (merging the two fields), as shown; the value of sample z in the 4:2:0 structure on the right is equal to (a + 3b + 3c + d)/8. The color-difference samples are normally considered as being located with the odd luminance sample points.

12.5 Video Compression Basics

As was noted in Chapter 6, one of the true advantages of a digital system over its analog counterpart is the ability to apply digital processing to the signal. In the case of broadcast television, digital processing was necessary in order to deliver the expected increase in resolution and overall quality, while still sending the transmission over a standard television channel. In short, it was the availability of practical and relatively low-cost digital compression and decompression hardware that made digital HDTV possible. This same capability can also be exploited in another way, and has been in commercial systems – by reducing the channel capacity required for a given transmission, digital compression also permits multiple standard-definition transmissions, along with additional data, to be broadcast in a single standard channel. Consider one frame in one standard “HD” format – a 1920 × 1080, 2:1 interlaced transmission. If we assume an RGB representation at 8 bits per color, each frame contains almost 50 million bits of data; transmitting this at a 60 Hz frame rate would require
almost a 375 Mbytes/s sustained data rate. Clearly, the interlaced scanning format will help here, reducing the rate by a factor of two. We can also change to a more efficient representation; for example, a YCRCB signal set, with 8 bits/sample of each, and then apply the 4:2:2 subsampling described above. This would reduce the required rate by another third, to approximately 124 Mbytes/s, or just under 1 Gbit/s. But by Shannon’s theorem for the data capacity of a bandlimited, noisy channel, a 6 MHz TV channel is capable of carrying not more than about 20 Mbit/s. (This assumes a signal-to-noise ratio of 10 dB, not an unreasonable limit for television broadcast.) Transmitting the HDTV signal via this channel, as was the intention announced by the FCC in calling for an all-digital system, will require a further reduction or compression of the data by a factor of about 50:1! (Note that this can also be expressed as requiring that the transmitted data stream correspond to less than one bit per pixel of the original image.)
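The chain of reductions just described can be verified with simple arithmetic. The sketch below reproduces the paragraph’s assumptions (8 bits per sample, 60 Hz, a 6 MHz channel at 10 dB SNR); note the exact result is closer to 48:1, which the text rounds to “about 50:1.”

```python
import math

# Reproducing the HDTV data-rate arithmetic from the text.
h, v = 1920, 1080
rate = h * v * 3 * 8 * 60      # RGB, 8 bits/sample, 60 frames/s: ~3 Gbit/s
rate /= 2                      # 2:1 interlacing halves the rate
rate *= 2 / 3                  # YCrCb 4:2:2: 2 samples per pixel instead of 3

print(rate / 8 / 1e6)          # ~124 Mbytes/s
print(rate / 1e9)              # just under 1 Gbit/s

# Shannon capacity of a 6 MHz channel at 10 dB SNR (power ratio of 10):
capacity = 6e6 * math.log2(1 + 10)   # ~20.8 Mbit/s
print(rate / capacity)               # compression factor needed: ~48:1

# Equivalently: channel bits available per original pixel (interlaced,
# so 1920 x 1080 x 30 complete frames' worth of pixels per second):
print(capacity / (h * v * 30))       # ~0.33 bits/pixel -- well under one
```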
Digital television, and especially digital HDTV, requires the use of sophisticated compression techniques. There are many different techniques used to compress video transmissions. Entire books can, and have, been written to describe these in detail, and we will not be able to duplicate that depth of coverage here. However, a review of the basics of compression, and some specific information regarding how digital video data is compressed in the current DTV and HDTV standards, is needed here. Recall (from Chapter 5) that compression techniques may be broadly divided into two categories: lossless and lossy, depending on whether or not the original data can be completely recovered from the compressed form of the information (assuming no losses due to noise in the transmission process itself). Lossless compression is possible only when redundancy exists in the original data. The redundant information may be removed without impacting the receiver’s ability to recover the original, although any such process generally increases the sensitivity of the transmission to noise and distortion. (This must be true, since …
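To make the redundancy point concrete, here is a toy lossless coder: run-length encoding. Real video codecs are far more sophisticated, but the defining property is the same – redundant data can be removed and then restored exactly. The function names below are ours, for illustration only.

```python
# Toy lossless compression: run-length encoding. It only "wins" when
# the input is redundant (long runs of identical samples), and decoding
# recovers the original data exactly.

def rle_encode(samples):
    """Collapse runs of identical values into [value, count] pairs."""
    runs = []
    for s in samples:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return runs

def rle_decode(runs):
    """Expand [value, count] pairs back into the original sample list."""
    return [value for value, count in runs for _ in range(count)]

# A flat stretch of a video line (e.g., black level then white level)
# is highly redundant and compresses well:
line = [16] * 40 + [235] * 24
runs = rle_encode(line)
assert rle_decode(runs) == line       # fully recoverable: lossless
print(len(line), "samples ->", len(runs), "runs")   # 64 samples -> 2 runs
```

A line of random noise, by contrast, would produce one run per sample and the “compressed” form would actually be larger – illustrating why lossless compression depends entirely on redundancy in the source.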
