Digital Logic Testing and Simulation, Second Edition, by Alexander Miczo
ISBN 0-471-43995-9  Copyright © 2003 John Wiley & Sons, Inc.

CHAPTER 12

Behavioral Test and Verification

12.1 INTRODUCTION

The first 11 chapters of this text focused on manufacturing test. Its purpose is to answer the question, "Was the IC fabricated correctly?" In this, the final chapter, the emphasis shifts to design verification, which attempts to answer the question, "Was the IC designed correctly?" For many years, manufacturing test development and design verification followed parallel paths. Designs were entered via schematics, and then stimuli were created and applied to the design. Design correctness was confirmed manually; the designer applied stimuli and examined simulation response to determine if the circuit responded correctly. Manufacturing correctness was determined by simulating vectors against a netlist that was assumed to be correct. These vectors were applied to the fabricated circuit, and the response of the ICs was compared to the response predicted by the simulator. Thoroughness of design verification test suites could be evaluated by means of toggle counts, while thoroughness of manufacturing test suites was evaluated by means of fault simulation.

In recent years, most design starts have grown so large that it is not feasible to use functional vectors for manufacturing test, even if they provide high fault coverage, because it usually takes so many vectors to test all the functional corners of the design that the cost of the time spent on the tester becomes prohibitive. DFT techniques are needed both to achieve acceptable fault coverage and to reduce the amount of time spent on the tester. A manufacturing test based on scan targets defects more directly in the structure of the circuit. A downside to this was pointed out in Section 7.2; that is, some defects may best be detected using stimuli that target functionality.

While manufacturing test relies increasingly on DFT to achieve high fault coverage, design verification is also changing. Larger, more complex designs created by large teams of designers incorporate more functionality, along with the necessary handshaking protocols, all of which must be verified. Additionally, the use of core modules, and the need to verify equivalence of different levels of abstraction for a given design, have made it a greater challenge to select the best methodology for a given design. What verification method (or methods) should be selected? Tools have been developed to assist in all phases of the traditional approach—that is, apply stimuli and evaluate response. But there is also a gradual shift in the direction of formal verification.

Despite the shift in emphasis, there remains considerable overlap in the tools and algorithms for design verification and manufacturing test, and we will occasionally refer back to the first 11 chapters. Additionally, we will see that, in the final analysis, manufacturing test and design verification share a common goal: reliable delivery of computation, control, and communication. If it doesn't work correctly, the customer doesn't care whether the problem occurred in the design or the fabrication.

12.2 DESIGN VERIFICATION: AN OVERVIEW

The purpose of design verification is to demonstrate that a design was implemented correctly. By way of contrast, the purpose of design validation is to show that the design satisfies a given set of requirements.[1]
A succinct and informal way to differentiate between them is by noting that[2]

    Validation asks, "Am I building the right product?"
    Verification asks, "Am I building the product right?"

Seen from this perspective, validation implies an intimate knowledge of the problem that the IC is designed to solve. An IC created to solve a problem is described by a data sheet composed of text and waveforms. The text verbally describes IC behavior in response to stimuli applied to its I/O pins. Sometimes that behavior will be very complex, spanning many vectors, as when stimuli are first applied in order to configure one or more internal control registers. Then, behavior depends on both the contents of the control registers and the applied stimuli. The waveforms provide a detailed visual description of stimulus and response, together with timing, that shows the relative order in which signals are applied and outputs respond.

Design verification, on the other hand, must show that the design, expressed at the RTL or structural level, implements the operations described in the data sheet or whatever other specification exists. Verification at the RTL level can be accomplished by means of simulation, but there is a growing tendency to supplement simulation with formal methods such as model checking. At the structural level the use of equivalence checking is becoming standard procedure. In this operation the RTL model is compared to a structural model, which may have been synthesized by software or created manually. Equivalence checking can determine if the two levels of abstraction are equivalent. If they differ, equivalence checking can identify where they differ and can also identify what logic values cause a difference in response.

The emphasis in this chapter is on design verification. When performing verification, the target device can be viewed as a white box or a black box. During white-box testing, detailed knowledge is available describing the internal workings of the device to be tested. This knowledge can be used to direct the verification effort. For example, an engineer verifying a digital circuit may have schematics, block diagrams, RTL code that may or may not be suitably annotated, and textual descriptions including timing diagrams and state transition graphs. All or a subset of these can be used to advantage when developing test programs. Some examples of this were seen in Chapter 9. The logic designer responsible for the correctness of the design, armed with knowledge of the internal workings of the design, writes stimuli based on this knowledge; hence he or she is performing white-box testing.

During black-box testing it is assumed that there is no visibility into the internal workings of the device being tested. A functional description exists which outlines, in more or less detail, how the device must respond to various externally applied stimuli. This description, or specification, may or may not describe behavior of the device in the presence of all possible combinations of inputs. For example, a microprocessor may have op-code combinations that are left unused and unspecified. From one release to the next, these unused op-codes may respond very differently if invoked. PCB designers, concerned with obtaining ICs that work correctly with other ICs plugged into the same PCB or backplane, are most likely to perform black-box testing, unless they are able to persuade their vendor to provide them with more detailed information.
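As a small illustration of the black-box view, the sketch below exercises a device strictly through its pins and compares the response against values taken from a data sheet. The device name, signal names, stimulus, and expected response are all hypothetical; the point is simply that nothing internal to the device is referenced.

    module tb_blackbox;
      reg        clk, rst_n;
      reg  [7:0] din;
      wire [7:0] dout;

      // Device under test: only its port list is known to this testbench.
      // (The definition of 'widget' itself is not shown.)
      widget dut (.clk(clk), .rst_n(rst_n), .din(din), .dout(dout));

      always #5 clk = ~clk;   // free-running clock, 10 time units per period

      initial begin
        clk = 0; rst_n = 0; din = 8'h00;
        repeat (2) @(posedge clk);
        rst_n = 1;

        // Apply a stimulus described in the data sheet and check the
        // documented response two clocks later.
        @(posedge clk) din = 8'h3C;
        repeat (2) @(posedge clk);
        if (dout !== 8'hC3)
          $display("ERROR: expected C3, observed %h at time %0t", dout, $time);
        $finish;
      end
    endmodule

A white-box test of the same device might instead reach into the hierarchy to observe internal registers directly.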
Some of the tools used for design verification of ICs have their roots in software testing. Tools for software testing are sometimes characterized as static analysis and dynamic analysis tools. Static analysis tools evaluate software before it has run. An example of such a tool is Lint. It is not uncommon, when porting a software system to another host environment and recompiling all of the source code for the program, to experience a situation where source code that compiled without complaint on the original host now either refuses to compile or produces a long list of ominous-sounding warnings during compilation. The fact is, no two compilers will check for exactly the same syntax and/or semantic violations. One compiler may attempt to interpret the programmer's intention, while a second compiler may flag the error and refuse to generate an object module, and a third compiler may simply ignore the error.

Lint is a tool that examines C code and identifies such things as unused variables, variables that are used before being initialized, and argument mismatches. Commercial versions of Lint exist both for programming languages and for hardware design languages. A lint program attempts to discover all fatal and nonfatal errors in a program before it is executed. It then issues a list of warnings about code that could cause problems. Sometimes the programmer or logic designer is aware of the coding practice and does not consider it to be a problem. In such cases, a lint program will usually permit the user to mask out those messages so that more meaningful messages don't become lost in a sea of detail.

In contrast to static analysis tools, dynamic analysis tools operate while the code is running. In software, these tools detect such things as memory leaks, bounds violations, null pointers, and pointers out of range. They can also identify source code that has been exercised and, more importantly, code that has not been exercised. Additionally, they can point out lines of code that have been exercised over only a partial range of their variables.

12.3 SIMULATION

Over the years, simulation performance has benefited from steady advances in both software and hardware, as well as in modeling techniques. Section 2.12 provides a taxonomy of methods used to improve simulation performance. Nonetheless, it must be pointed out that the style of the code written by the logic designer, as well as the level of abstraction, can greatly influence simulation performance.

12.3.1 Performance Enhancements

Several approaches to speeding up simulation were discussed in Chapter 2. Many of these approaches impose restrictions on design style. For example, asynchronous circuit design requires that the simulator maintain a detailed record of the precise times at which events occur. This is accomplished by means of delay values, which facilitate prediction of problems resulting from races and hazards, as well as setup and hold violations, but slow down simulation.

But why the emphasis on speed? The system analyst wants to study as many alternatives as possible at the conceptual level before committing to a detailed design. For example, the system analyst may want to model and study new or revised op-codes for a microprocessor architecture. Or the analyst may want to know how many transactions a bank teller machine can perform in a given period of time.
Throughput, memory, and bandwidth requirements for system-level designs can all be more thoroughly evaluated at higher levels of abstraction. Completely new applications can be modeled in order to perform feasibility studies whose purpose is to decide how to divide functionality between software and hardware. Developing a high-level model that runs quickly, and coding the model very early in the conceptual design phase, offers the additional benefit that diagnostic engineers can begin writing and debugging their programs earlier in the project.

The synchronous circuit, when rank-ordered and using zero delay, can be simulated much more efficiently than the asynchronous circuit, because it is only necessary to evaluate each element once during each clock period. Timing analysis, performed at the structural or gate level, is then used to ensure that path delays do not exceed the clock period and do not violate setup and hold times. Synchronous design also makes it possible to employ compiled code, rather than interpreted code, which uses complex tables to link signals and variables. A Verilog or VHDL model can be compiled into C or C++ code, which is then compiled to the native language of the host computer. This can provide further reduction in simulation times, as well as significant savings in memory usage, since variables can be linked directly, rather than through tables and pointers.

The amount of performance gain realized by compiled code depends on how it is implemented. The simplest approach, from an implementation standpoint, is to have all of the compiled code execute on every clock cycle. Alternatively, a pseudo-event-driven implementation can separate the model into major functions and execute the compiled code only for those functions in which one or more inputs have changed. This requires overhead to determine which blocks should be executed, but that cost can be offset by the savings from not executing blocks of code unnecessarily.

The type of circuit being simulated is another factor that determines how much gain is realized by performing rank-ordered, zero-delay simulation. In a pure combinational, gate-level circuit, such as a multiplier array, if timing-based, event-driven simulation is performed, logic gates may be evaluated multiple times in each clock cycle because logic events occur at virtually every time slot during that period. These events propagate forward, through the cone they are in, and converge at different times on the output of that cone. As a result, logic gates at or near the output of the cone may be evaluated tens or hundreds of times. Thus, in a large combinational array, rank-ordered, zero-delay simulation may realize a 10 to 100 times improvement in simulation speed.

Traditionally, point accelerators have been used to speed up various facets of the design task, such as simulation. The use of scan in an emulation model makes it possible to stop on any clock and dump out the contents of registers in order to pinpoint the source of an incorrect response. However, while they can significantly speed up simulation, point accelerators have their drawbacks. They tend to be quite costly and, unlike a general-purpose workstation, when not being used for simulation they stand idle. There is also the risk that if an accelerator goes down for any length of time, it can leave several logic designers idle while a marketing window of opportunity slowly slips away.
Also, the point accelerator is a low-volume product, hence costly to update, while the general-purpose workstation is always on an upward spiral, performance-wise. So the workstation, over time, closes the performance gap with the accelerator.

By way of contrast, a cycle simulator (cf. Section 2.12), incorporating some or all of the features described here, can provide major performance improvements over an event-driven simulator. As a software solution, it can run on any number of readily available workstations, thus accommodating several engineers. If a single machine fails, the project can continue uninterrupted. If a simulation task can be partitioned across multiple processors, further performance gains can be obtained. The chief requirement is that the circuit be partitioned so that results only need be communicated at the end of each cycle, a task far easier to perform in the synchronous environment required for cycle simulation. Flexibility is another advantage of cycle simulation; algorithm enhancements to a software product are much easier to implement than upgrades to hardware.

It was mentioned earlier that a user can often influence the speed or efficiency of simulation. One of the tools supported by some commercial simulators is the profiler. It monitors the amount of CPU time spent in each part of the circuit model being simulated. At the end of simulation a profiler can identify the amount of CPU time spent on any line or group of lines of code. For compute-intensive operations such as simulation, it is not unusual for 80–95% of the CPU time to be spent simulating a very small part of the circuit model. If it is known, for instance, that 5% of the code consumes 80% of the CPU time, then that part of the code can be reviewed with the intention of writing it more efficiently, perhaps at a higher level of abstraction. Streamlining the code can sometimes produce a significant improvement in simulation performance.

12.3.2 HDL Extensions and C++

There is a growing acceptance of high-level languages (HLLs), particularly C and C++, for conceptual or system-level modeling. One reason for this is the fact that a model expressed in an HLL usually executes more rapidly than the same model expressed in an RTL language. This is based, at least in part, on the fact that when a Verilog or VHDL model is executing as compiled code, it is first translated into C or C++. This intermediate translation may introduce inefficiencies that the system engineer hopes to avoid by directly encoding his or her system-level model in C or C++. Another attraction of HLLs is their support for complex mathematical functions and similar utilities. These enable the system analyst to quickly describe and simulate complex features or operations of a system-level model without becoming sidetracked from the main focus by having to write such utility routines.

To assist in the use of C++ for logic design, vendors provide class libraries.[3] These extend the capabilities of C++ by including libraries of functions, data types, and other constructs, as well as a simulation kernel. To the user, these additions make the C++ model look more like an HDL model while it remains legal C++ code. For example, the library will provide a function that implements a wait for an active clock edge. Other problems solved by the library include interconnection methodology, time sequencing, concurrency, data types, performance tracking, and debugging.
Because digital hardware functions operate concurrently, devices such as the timing wheel (cf. Section 2.9.1) have been invented to solve the concurrency issue at the gate level. The C++ library must provide a corresponding capability. Data types that must be addressed in C++ include tri-state logic and data bus widths that do not map onto the host machine's native word widths. After the circuit model has been expressed in terms of the library functions and data types, the entire circuit model may then be linked with a simulation kernel.

An alternative to C++ for speeding up the simulation process, and reducing the effort needed to create testbenches, is to extend Verilog and VHDL. The IEEE periodically releases new specifications that extend the capabilities of these languages. The release of Verilog-2001, for example, incorporates some of the more attractive features of VHDL, such as the "generate" feature. Vendors are also extending Verilog and VHDL with proprietary constructs that provide more support for describing operations at higher levels of abstraction, as well as support for testbench verification capabilities—for example, constructs that permit complex monitoring actions to be compressed into just a few lines of code. Oftentimes an activity such as monitoring events during simulation—an activity that might take many lines of code in a Verilog testbench, and something that occurs frequently during debug—may be implemented very efficiently in a language extension. The extensions have the advantage that they are supersets of Verilog or VHDL; hence the learning curve is quite small for the logic designer already familiar with one of these languages. A danger of deviating from existing standards, such as Verilog and VHDL, is that a solution that provides major benefits while simulating a design may not be compatible with existing tools, such as an industry-standard synthesis tool or a design verification tool.

As a result, it becomes necessary for a design team to first make a value judgment as to whether there is sufficient payback to resort to the use of C++ or one of the extension languages. The extension language may be an easier choice. The circuit under design is restricted to Verilog or VHDL, while the testbench is able to use all the features of Verilog or VHDL plus the more powerful extensions provided by the vendor.

If C++ is chosen for system-level analysis, then once the system analyst is satisfied that the algorithms are performing correctly, it becomes necessary to convert the algorithms to Verilog or VHDL for implementation. Just as there are translators that convert Verilog and VHDL to C or C++ to speed up simulation, there are translators that convert C or C++ to Verilog or VHDL in order to take advantage of industry-standard synthesis tools. The problem with automating the conversion of C++ to RTL is that C++ is quite powerful, with many features that bear no resemblance to hardware, so it is necessary to place restrictions on the language features that are used, just as synthesis tools currently restrict Verilog and VHDL to a synthesizable subset. Without the restrictions, the translator may fail completely. Restrictions on the language, in turn, place restrictions on the user, who may find that a well-designed block of code employs constructs that are not supported by the particular translator being used by the design team. This necessitates recoding the function, often in a less expressive form.
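To give a flavor of the monitoring code referred to above, the sketch below watches a hypothetical request/acknowledge pair and reports a violation if the acknowledge does not arrive within a fixed number of clocks. In plain Verilog the check takes a block of code like this one; a proprietary testbench extension would typically express it in a line or two. All signal names and the timeout value are assumptions.

    // Plain-Verilog event monitor: every req must be followed by an ack
    // within MAX_WAIT clock cycles (signal names are hypothetical).
    module req_ack_monitor (input clk, input rst_n, input req, input ack);
      parameter MAX_WAIT = 8;
      integer count;
      reg     waiting;

      always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
          waiting <= 1'b0;
          count   <= 0;
        end
        else if (req && !waiting) begin      // start timing a new request
          waiting <= 1'b1;
          count   <= 0;
        end
        else if (waiting) begin
          if (ack)
            waiting <= 1'b0;                 // handshake completed
          else if (count == MAX_WAIT) begin
            $display("MONITOR: ack not seen within %0d cycles at time %0t",
                     MAX_WAIT, $time);
            waiting <= 1'b0;
          end
          else
            count <= count + 1;
        end
      end
    endmodule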
12.3.3 Co-design and Co-verification

Many digital systems have grown so large and complex that it is, for all practical purposes, impossible to design and verify them in the traditional manner—that is, by coding them in an HDL and applying stimuli by means of a testbench. Confidence in the correctness of the design is only gained when it is seen to be operating in an environment that closely resembles its final destination. This is often accomplished through the use of co-design and co-verification.* Co-design develops the hardware and software components of a system simultaneously, whereas co-verification simultaneously executes and verifies the hardware and software components.

*Co-design and co-verification often appear in the literature without the hyphen—that is, as codesign and coverification.

Traditionally, hardware and software were kept at arm's length while designing a system. Studies would first be performed, architectural changes would be investigated, and the hardware design would be "frozen," meaning that no more changes would be accepted unless it could be demonstrated that they were absolutely essential to the proper functioning of the product. The amount of systems analysis would depend on the category of the development effort: Is it a completely new product, or an enhancement (cf. Section 1.4)? If it is an enhancement to an existing product, such as a computer to which a few new op-codes are to be added, then compatibility with existing products is essential, and that becomes a constraint on the process. A completely new product permits much greater freedom of expression while investigating and experimenting with various configurations.

The co-design process may be focused on finding the best performance, given a cost parameter. Alternatively, the performance may be dictated by the marketplace, and the goal is to find the most economical implementation, subject to the performance requirements. Given the constraints, the design effort then shifts toward identifying an acceptable hardware/software partition. Another design parameter that must be determined is control concurrency. A system's control concurrency is defined by the functional behavior and interaction of its processes.[4] Control concurrency is determined by merging or splitting process behaviors, or by moving functions from one process to another. In all of these activities, there is a determined effort to keep open channels of communication between the software and hardware developers so that the implications of tradeoffs are completely understood.

The task of communicating between diverse subsystems, some implemented in software and some in hardware, or some in an HDL and some in a programming language, presents a challenge that often requires an ad hoc solution. The flow in Figure 12.1 represents a generic co-design methodology.[5] In this diagram, the hardware may be modeled in Verilog, VHDL, or C++, or it could be modeled using field-programmable gate arrays (FPGAs). Specification of the hardware depends on its purpose. Decisions must be made regarding datapath sizes, number and size of registers, technology, and so on.

Figure 12.1 Generic co-design methodology. (The flow proceeds from system specification and algorithm development through hardware-software partitioning; hardware, software, and interface synthesis; system simulation; design verification; and system evaluation. If the result is judged successful the flow is done; otherwise it returns for another partitioning.)
The interface between hardware and software must handle communications between them. If the model is described in Verilog, running under Unix, then the Verilog programming language interface (PLI) can communicate with software processes using the Unix socket facility. After the design has been verified, system evaluation determines whether the system, as partitioned and implemented, satisfies performance requirements at or under cost objectives. If some aspect of the design falls short, then another partitioning is performed. This process can be repeated until objectives are met, or some optimum flow is achieved. Note that if the entire system is developed using C++, many communications problems are solved, since everything can be compiled and linked as one large executable.

12.4 MEASURING SIMULATION THOROUGHNESS

As indicated previously, many techniques exist for speeding up simulation, thus permitting more stimuli to be applied to a design in a given period of time. However, in design verification, as in manufacturing test, it is important not just to run a lot of stimuli, but also to measure the thoroughness of those stimuli. Writing stimuli blindly, without evaluating their effectiveness, may result in high quantities of low-quality test stimuli that repeatedly exercise the same functionality. This slows down the simulations without detecting any new bugs in the design. Coverage analysis can identify where attention needs to be directed in order to improve the thoroughness of the verification effort. Then, the percentage coverage of the RTL, rather than the quantity of testbench code, becomes the criterion for deciding when to bring design verification to a halt.

12.4.1 Coverage Evaluation

Chapter 7 explored a number of topics, including toggle coverage (Section 7.8.4), gate-level fault simulation (Section 7.5.2), behavioral fault simulation (Section 7.8.3), and code coverage (Section 7.8.5). Measuring toggle coverage during simulation was a common practice many years ago. It was appealing because it did not significantly impact simulation time, nor did it require much memory. However, its appeal for design verification is rather limited now because it requires a gate-level model. If a designer simulates at the gate level and finds a bug, it usually becomes necessary to resynthesize the design, and designers find it inconvenient to interrupt verification and resynthesize each time a bug is uncovered, particularly in the early stages of design verification when many bugs are often found in rapid succession. As pointed out in Section 7.8.4, toggle count remains useful for identifying and correcting hot spots in a design—that is, areas of a die that experience excessive amounts of logic activity, causing heat buildup.

It was also argued in Chapter 7 that fault simulation can provide a measure of the thoroughness of design verification vectors. But, like toggle count, it relies on a gate-level model. Code coverage has the advantage that it can be used while simulating at the RTL level. If a bug is found, the RTL is corrected and simulation continues. The RTL is not synthesized until there is confidence in its correctness. As pointed out in Section 7.8.5, code coverage can be used to measure block coverage, expression coverage, path coverage, and coverages specific to state machines, such as branch coverage.
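As a hedged illustration of how these measures differ, consider the small fragment below (all names hypothetical). Block coverage is satisfied as soon as both branches of each if statement have executed at least once; expression coverage further asks whether each operand of the condition enable && !halt has, on its own, determined the branch taken; path coverage asks whether all four combinations of outcomes of the two if statements have been exercised.

    module cov_example (input clk, input enable, input halt, output reg done);
      parameter LIMIT = 15;
      reg [3:0] count;

      always @(posedge clk) begin
        if (enable && !halt)     // expression coverage: did enable alone, and
          count <= count + 1;    // halt alone, ever decide this branch?
        else
          count <= count;

        if (count == LIMIT)      // path coverage: were all four combinations
          done <= 1'b1;          // of the two branch outcomes exercised?
        else
          done <= 1'b0;
      end
    endmodule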
When running code coverage, the user can identify modules of interest and omit those that are not of interest. For example, the logic designer may include in the design a module pulled down from a library or obtained from a vendor. The module may already have been thoroughly checked out and is currently being used in other designs, so there is confidence in its design. Hence it can be omitted from the coverage analysis.

Code coverage measures controllability; that is, it identifies all the states visited during verification. For example, we are given the equation

    WE = CS & ArraySelect & SectorSelect & WriteRequest;

What combinations of the input variables are applied to that expression? Does the variable SectorSelect ever control the response of WE? In order for SectorSelect to control WE, it must assume the values 0 and 1 while the other three inputs must be 1. For this expression, a code coverage tool can give a coverage percentage, similar to a fault coverage percentage, indicating how many of the variables have controlled the expression at one time or another during simulation. Block coverage, which indicates only whether or not a line of code was ever exercised, is a poor measure of coverage. When verifying logic, it is not uncommon to get the right response for the wrong reason, what is sometimes referred to as coincidental correctness. For example, two condition code bits in a processor may determine a conditional jump, but the one that triggered the jump may not be the one currently being investigated.

Consider a state machine: it is desirable to visit all states, and it is desirable to traverse all arcs. But, in a typical state machine, several variables can control the state transitions. Given a compound expression that controls the transition from Si to Sj, a thorough verification requires that each of the variables, at some point during verification, causes or determines the transition to Sj. In general, equations can be evaluated to determine which variables controlled the equation and, more importantly, which variable never controlled the equation throughout the course of simulation. An important goal of code coverage is to verify that the input vectors established logic values on internal signals in such a way that the outcome of a logic transaction depends only on one particular signal, namely, the signal under consideration.

Behavioral fault simulation, in contrast to code coverage, measures both controllability and observability. A fault must be sensitized, and its effects must be propagated to an observable output before it can be counted as detected. One drawback to behavioral fault simulation is the fact that the industry has never settled on an acceptable family of faults, in contrast to gate-level fault simulation, where stuck-at-1 and stuck-at-0 faults have been accepted for more than a quarter-century. Given a fault coverage number estimated using a gate-level model, test engineers can usually make a reasonably accurate prediction of how many tester escapes to [...]

[...] checker can put the logic designer's mind at ease. It might, however, be argued that if the logic designer used code coverage and obtained 100% expression coverage, and verified that the circuit responded correctly for all stimuli, then the designer has already checked the condition.

Example: Consider the fifo example cited earlier. Somewhere in the logic there may be [...]
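The truncated passage above refers to a FIFO checker. The book's own listing is not visible in this excerpt, but a minimal sketch of the kind of assertion such a checker might implement, flagging a push while the FIFO is full or a pop while it is empty, is shown below; every signal name is hypothetical.

    // Sketch of a FIFO assertion checker: complain about a push when the
    // FIFO is full or a pop when it is empty.  The FIFO is assumed to
    // export 'full' and 'empty' flags.
    module fifo_checker (input clk, input push, input pop,
                         input full, input empty);
      always @(posedge clk) begin
        if (push && full)
          $display("ASSERTION: push to a full FIFO at time %0t", $time);
        if (pop && empty)
          $display("ASSERTION: pop from an empty FIFO at time %0t", $time);
      end
    endmodule

Such a checker is typically instantiated alongside the FIFO in the testbench and left in place for the duration of design verification.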
[...] during design verification and served as the acceptance test for ICs provided by the foundry. Part of the reason for low coverage stems from decisions by logic designers regarding the importance of verifying various parts of the design. It is not uncommon for a logic designer to make subjective decisions as to which parts of a design are "complicated" and need to be thoroughly checked out, based on his or [...]

[...] (see, for example, Figure 9.30). A gate-level ATPG, attempting to justify an assignment to the state machine, may spend countless hours of CPU time trying to justify a logic 1 on two or more flip-flops when the implementation only permits a single flip-flop to be at logic 1 at any given time. By abstracting out details and explicitly identifying legal behavior of the state machine, this difficulty can be avoided. [...]

[...] individual logic elements, and arcs corresponded to connections between elements. The nodes were represented by descriptor cells containing pointers and other data (see Figure 2.21). The pointers described I/O connections between the output of one element and the inputs of other elements. The ATPG used the pointers to traverse a circuit, tracing through the interconnections in order to propagate logic values [...] forward to primary outputs and justify assignments back toward the inputs. For logic elements in an RTL circuit the descriptor cells bear a similarity, but functions of greater complexity require more entries in the descriptor cell. In addition, linking elements via pointers is more complex. In gate-level circuits the inputs of logic gates are identical in function, but in RTL circuits the inputs may be [...]

    [...]
      2'b10: Y[3:0] = 4'b0100;
      2'b01: Y[3:0] = 4'b0010;
      2'b00: Y[3:0] = 4'b0001;
    endcase

When a behavioral ATPG parses an RTL model and associates RTL constructs with logic functions, the actions are similar to those performed during logic synthesis. The major difference lies in what must be done after the RTL description has been parsed. Whereas synthesis software is simply concerned with mapping an [...]

[...] or vice versa. One of the terms might be modified by adding a variable to the product. Sometimes the failure to include a variable in the sensitivity list, particularly if it is a long list, can cause a logic designer to puzzle for quite some time over the cause of an erroneous response in an equation that appears well-formed. The misuse of blocking and non-blocking assignments in Verilog procedural statements [...]

[...] Design error injection in HDL designs is quite similar to software testing. One noticeable difference is the fact that the response of an HDL model can be examined at I/O pins. But, recalling our previous discussion, logic designers may choose not to drive an internal state to an I/O pin. Hence it may be necessary to capture internal state at registers and state machines and then output that information to a file where [...]

[...] sufficient to sensitize (i.e., control) the faults. But, as we have just seen, code coverage measures controllability, and its metrics are well understood and accepted. So, if the goal is simply to sensitize logic, then code coverage is adequate. Another means for determining the thoroughness of coverage is through the use of event monitors and assertion checkers.[8] The event monitor is a block of code that [...]
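Two of the coding pitfalls mentioned in the fragments above, an operand omitted from a sensitivity list and blocking assignments used where non-blocking assignments were intended, can be illustrated with a short, hypothetical sketch:

    module pitfalls (input a, input b, input c, input d, input clk,
                     output reg y, output reg q1, output reg q2);

      // Pitfall 1: 'c' is missing from the sensitivity list, so y is not
      // recomputed when only c changes, and simulation disagrees with the
      // synthesized combinational logic.
      always @(a or b)               // should be: always @(a or b or c)
        y = (a & b) | c;

      // Pitfall 2: blocking assignments in a clocked block make the result
      // order-dependent; q2 picks up the new value of q1 in the same clock
      // rather than the value q1 held before the edge.
      always @(posedge clk) begin
        q1 = d;                      // intended: q1 <= d;
        q2 = q1;                     // intended: q2 <= q1;
      end
    endmodule

Both constructs are legal Verilog, which is precisely why they can puzzle a designer; a lint tool of the kind described earlier, or a careful code review, is usually what catches them.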
[...] its entirety, the RTPG system developed by this project was dynamic: test generation was interleaved with the execution of instructions. This made it possible for the logic designer to create test programs during the early stages of design, while implementing the op-codes. The test program generated by RTPG is made up of three parts: initial state, instructions, and expected [...]
