Digital Logic Testing and Simulation, Part 10

The objective is to develop one or more paths that define a transition either from the current state of the circuit or from a reset state and, by means of the Petri net, identify a sequence of inputs that will drive the circuit to the Goal. In either case, the traversal is directed by means of input stimuli. Because several paths may exist from leaf nodes to the Goal, several options exist. One option is to traverse the shortest path from the leaf node to the Goal. Another option is to traverse a path that includes branches that have not yet been traversed, in order to exercise heretofore unexercised logic. It may be desirable to spread out the traversals, choosing different paths each time, so that a manufacturing test program exercises all paths approximately the same number of times rather than exercising the same path repeatedly. However, recognizing that tester time can be quite expensive, the goal of a manufacturing test program usually is to be as short and efficient as possible, so it may be desirable to find the "least cost" path.

From Figure 12.10 it can be seen that the searches can grow out of control quickly, and the example circuit was rather small. The SCIRTSS project spent much effort developing cost functions to help navigate through the logic and prune the search trees, both to find shortest paths and to control the growth of goal trees.

12.8 THE TEST DESIGN EXPERT

The Test Design Expert (TDX) was a commercial endeavor motivated by the SCIRTSS system, and it bore some resemblance to it. But TDX included strategies, techniques, and refinements that took it beyond SCIRTSS, and some of the features that it had in common with SCIRTSS were evolved and refined. Inputs to TDX included a netlist and an RTL description. It also employed a map file to link storage elements in the RTL with their instantiated counterparts in the netlist.

12.8.1 An Overview of TDX

Like SCIRTSS, TDX used search heuristics to explore RTL models, but as it evolved it added additional software tools. The Supervisor continuously selected and applied tools to the task of generating vector sequences. TDX included a gate-level ATPG called DEPOT (DEductive, Path-Oriented Trace), which implemented both the D-algorithm and the PODEM algorithm; TDX could select either of them under Supervisor control.

TDX included a testability analyzer that was a variant of SCOAP, and the controllability/observability (C/O) numbers that it generated from the netlist were used by DEPOT to help find the best path through combinational logic. The best path usually meant propagating a fault to a destination storage element or primary output while making the smallest possible number of justification assignments to storage elements. This was an important consideration because the more logic assignments that are made to flip-flops while sensitizing a fault in a sequential circuit, the more difficult it becomes to justify the values on those flip-flops while trying to sensitize the fault or propagate the fault effect through the RTL code. Behavioral C/O numbers were also computed and used.

A full-timing, gate-level concurrent fault simulator was integrated with the other components of TDX. It provided a detailed analysis of fault coverage for the vectors generated by TDX. It also performed logic simulation to verify that sequences generated by BATG, TDX's behavioral test generator, had the intended effect and were not sidetracked by races and hazards.
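The SCOAP-style controllability numbers mentioned above can be illustrated with a small sketch. The recurrences below follow the classic SCOAP combinational controllability definitions (CC0/CC1); TDX's analyzer was only a variant of SCOAP, so its actual cost model surely differed, and the netlist format and gate set here are illustrative assumptions.

```python
# Minimal sketch of SCOAP-style combinational controllability (CC0/CC1).
# The netlist format and gate set are hypothetical; TDX used a SCOAP variant
# whose exact weighting is not described here.

def controllability(netlist, primary_inputs):
    """netlist: dict mapping net -> (gate_type, [input_nets]), in topological order."""
    cc0 = {n: 1 for n in primary_inputs}   # cost of setting a net to 0
    cc1 = {n: 1 for n in primary_inputs}   # cost of setting a net to 1
    for net, (gate, ins) in netlist.items():
        if gate == "AND":
            cc1[net] = sum(cc1[i] for i in ins) + 1   # all inputs must be 1
            cc0[net] = min(cc0[i] for i in ins) + 1   # any single input at 0
        elif gate == "OR":
            cc0[net] = sum(cc0[i] for i in ins) + 1
            cc1[net] = min(cc1[i] for i in ins) + 1
        elif gate == "NOT":
            cc0[net] = cc1[ins[0]] + 1
            cc1[net] = cc0[ins[0]] + 1
    return cc0, cc1

# Example: f = (a AND b) OR c
netlist = {"n1": ("AND", ["a", "b"]), "f": ("OR", ["n1", "c"])}
print(controllability(netlist, ["a", "b", "c"]))
```

Lower numbers mean a net is easier to drive to the given value, which is why DEPOT preferred sensitization choices whose required assignments carried small C/O costs.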
Outputs from TDX included a test vector file, a response file, and various reports, including fault coverage and testability analysis information.

The key components of TDX are shown in Figure 12.11. Input files included a netlist, an RTL description, and a map file, which linked gate-level flip-flops and latches with their RTL counterparts. The RTL description could be provided in VHDL or Verilog. Regardless of which language was used to describe the circuit, when parsed it was translated into a behavioral intermediate form (BIF) that expressed the RTL code as a collection of cause-and-effect rules. The netlist could also be provided in either VHDL or Verilog. A fault list compiler read the netlist and produced a list of the traditional stuck-at faults. The search heuristics were a collection of strategies, algorithms, and heuristics that could be invoked as needed by the TDX Supervisor to solve problems.

The Supervisor controlled the overall operation of TDX. The first step was to read in the netlist and the BIF and compile a knowledge base. The knowledge base differs from a database in that it "is more explicit about the objects in its universe and how information flows between them."[23] The Supervisor analyzed the RTL description and broke it down into groups corresponding to state machines, counters, multiplexers, and other familiar structures. Some parts of a circuit are described using long series of detailed RTL equations; those parts of the design were stored as equations.

Figure 12.11 The Test Design Expert (TDX). [Block diagram: the TDX Supervisor coordinates the structural description, RTL description, optional user inputs, knowledge base, fault simulator, search heuristics, testability analyzer, combinational ATPGs, and fault list manager, producing test vectors and fault reports.]

When reading in the RTL description, much of the initial processing was similar to that performed by a synthesis program; that is, RTL constructs were recognized and mapped into appropriate data structures corresponding to common hardware functions. In one respect, however, TDX diverged from synthesis programs. A synthesis program works just fine with gate-level modules intermingled with RTL; in fact, it might accept either a pure RTL description or a mixed RTL/gate description and produce identical gate-level netlists from them. The synthesizable subset handles low-level detail quite well but may have trouble with higher levels of abstraction. TDX, conversely, was quite adept at handling higher levels of abstraction but often stumbled with circuits that contained too much low-level detail. Complex, handcrafted modules in the data path part of the RTL that obscured functionality could sometimes prove difficult for TDX.

When the Supervisor invoked the fault list compiler to compile a fault list, it would compile either a full fault list or a fault sample of a size chosen by the user. The fault simulator was a full-timing, gate-level concurrent fault simulator that could accurately fault simulate both synchronous and asynchronous circuits. It was tightly integrated with the rest of the TDX system, so it could fault simulate a sequence of arbitrary length, pass control back to the Supervisor, which would then invoke the ATPG, and then regain control from the Supervisor and resume fault simulating from the point where it had previously left off. Alternatively, it could operate standalone on the same fault list that had been previously processed by TDX.
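Returning to the fault list compiler mentioned above: its core job, enumerating stuck-at-0 and stuck-at-1 faults on every gate pin, can be sketched in a few lines. The Gate record and netlist layout are assumptions made for illustration, and a real compiler (TDX's included) would also collapse equivalent faults, a step omitted here.

```python
# Illustrative sketch of a stuck-at fault list compiler.
# The netlist representation is assumed: each gate has a name, a type,
# input nets, and an output net. Fault collapsing is omitted.

from collections import namedtuple

Gate = namedtuple("Gate", "name kind inputs output")

def compile_fault_list(gates):
    """Enumerate stuck-at-0/1 faults on every gate input and output pin."""
    faults = []
    for g in gates:
        for pin in list(g.inputs) + [g.output]:
            faults.append((g.name, pin, "SA0"))
            faults.append((g.name, pin, "SA1"))
    return faults

gates = [Gate("U1", "AND", ("a", "b"), "n1"),
         Gate("U2", "OR",  ("n1", "c"), "f")]
print(len(compile_fault_list(gates)))   # 12 uncollapsed faults
```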
TDX could initialize a circuit, or it could accept initialization stimuli from the user. Often, particularly when the circuit required complex, timing-critical sequences, the designer could accomplish the initialization more efficiently. Sometimes it was preferable to use design verification suites to get coverage up to a certain level. In such cases it was not necessary for TDX to generate vectors until the productivity of the user's vectors began to diminish. When the user provided test vectors, these were passed directly to the fault simulator, which determined the coverage for those vectors. The faults detected by the test vector suite were dropped by the fault list manager, so no effort would be spent generating tests for them.

Several hooks included in the fault simulator permitted it to communicate with the Supervisor. The Supervisor could, at any point during test generation, query the fault simulator to determine which faults had been detected and which undetected faults had produced error signals that caused one or more flip-flops to assume an incorrect value. Some of the capabilities of TDX included:

Gate-level combinational ATPG (D-algorithm and PODEM)
Trapped fault propagation
Controllability/observability analysis (gate level and behavioral)
Creation/manipulation of goal trees
Constraint propagation
Functional walk
Learn mode

Like SCIRTSS, TDX could target undetected faults for sensitization, or it could identify trapped faults. If the Supervisor determined that there were trapped faults, it could choose one for propagation. If several trapped faults existed, the Supervisor could select one based on various criteria. The criteria were not hard and fast; they could vary during a run. As a result, a fault trapped in a flip-flop might be selected at one point during test generation, and the same fault trapped in the same flip-flop might be rejected at some other time during the run.

For example, suppose a fault becomes trapped in a flip-flop. Suppose that flip-flop becomes immediately observable at an output if a tri-state enable is set to 1, and suppose the tri-state enable is easily controllable. It is possible that several undetected faults are trapped in that flip-flop. In that case, it is desirable to enable the tri-state control and detect the faults. However, it is also possible that another flip-flop has trapped faults, and those faults originate in a region of the design where controllability and observability are very poor. The other flip-flop, even if it requires a more complex sequence to flush out the faults, may be a more desirable objective.

The concept of targeting trapped faults was introduced in Section 7.9.2, where the SOFTG system was discussed. There it was pointed out that the tendency to grab a trapped fault must be tempered by the realization that a trapped fault can lead to a dead end. This is illustrated by the circuit of Figure 12.12, a variable-length, byte-wide shift register. The shift length is programmed by loading a value into register RS that determines which of the registers R1–R16 will be selected and clocked into destination register R17 on the next active clock edge. Suppose a fault effect appears in register R3. It may be an opportune time to propagate that fault forward and position it closer to an output.
But if R3 is not currently selected by the multiplexer, then it must be selected by loading the correct value into RS through the select bits S3-0. However, note what happens when the value at S3-0 is clocked into RS: the contents of R3 are propagated to R4 and are replaced by the contents of R2. A knowledgeable human would recognize and allow for that possibility by loading RS with the bits required to choose R4.

The problem of dead ends in sequential logic can be quite serious. The problem occurs regardless of whether the fault became trapped serendipitously while another fault was the object of propagation or was deliberately sensitized by DEPOT. In other words, the process of selecting and sensitizing a fault may succeed in propagating a fault effect from the fault origin to a target flip-flop, but it may, in the sensitization process, produce a trapped fault that cannot be propagated any further.

Figure 12.12 Dead end for a trapped fault. [Figure: an 8-bit-wide register chain R0 through R16 fed from DI7-0; a multiplexer, selected by the 4-bit register RS (loaded from S3-0), chooses one register to be clocked into destination register R17, which drives DO7-0.]

Note that a strategy that might be employed by an experienced test engineer, knowing that a fault is trapped in R3, would be to set the select bits S3-0 = (1,1,1,1) and clock the circuit until the value in R3, including the fault effect, propagates through all the registers to the output DO7-0. By using this lookahead strategy the targeted fault is propagated forward, and in the process other faults may also be flushed out of the circuit. However, if a trapped fault reaches R16, it is at a dead end unless RS has already been set to select R17.

Dead ends are a major problem for sequential test pattern generation. In the example just given, the 16-stage shift register, the trapped fault remained alive; it just did not go where it was expected to go. More often, the trapped fault gets blocked in the combinational logic between the flip-flop in which it is trapped and the destination flip-flop. In the fault simulator, the fault is converged at the point where it becomes blocked.

In general, whenever a fault is being sensitized or propagated, an effort must be made to sensitize and propagate simultaneously. Consider the circuit in Figure 12.13. A fault is sensitized at the input of a NOR gate. When a clock edge is applied, the D will be clocked into a flip-flop, where it will become trapped. However, another flip-flop receives a 1 when the clock is applied. Unfortunately, that 1 becomes inverted and blocks the D from propagating any further. In order to successfully propagate the fault effect in this circuit, it must simultaneously be sensitized in the combinational logic and propagated through the logic corresponding to the next time image. When that happens, requirements will be imposed on the destination flip-flops (the bank of flip-flops on the right). These requirements will then have to be justified simultaneously with the sensitization of the stuck-at fault. Note that a 1 was assigned to one of the flip-flops in the left bank in order to justify a 0 from the NOR gate. That assignment can be changed to a 0, and the other input to the NOR gate can be assigned a 1.

Figure 12.13 Encountering a dead end. [Figure: two banks of flip-flops separated by combinational logic, annotated with the values 0, 1, D, and x discussed in the text.]

An alternate solution, if the flip-flop that receives a D has a hold mode, is to force that flip-flop into the hold state. But that also requires looking ahead.
In this case, rather than look ahead into the next time frame, the state search must simultaneously sensitize the fault and justify the hold mode for the target flip-flop. If the trapped fault can be held in the target flip-flop for an indeterminate number of time frames, then a propagation path can be set up while the target flip-flop retains the fault effect. Eventually, the trapped fault propagates forward. Of course, if the destination is not a primary output, the same considerations hold at the new destination; that is, the fault could be at a dead end if care is not taken to hold it until a propagation path is established.

When trapped faults are selected by the fault simulator and passed on to the Supervisor, the corresponding RTL-level storage elements in which they are trapped are identified by means of a map file. The Supervisor then selects from among the heuristics. The fault chosen for propagation may be trapped in a data path, or it may be trapped in control logic. If it is trapped in a data path, then the object is to propagate it forward toward an output. If the fault is trapped in control logic, then it can usually be observed only indirectly, by means of its effects on the data path elements. For example, suppose the fault-free circuit is attempting to perform a logic AND on two arguments X and Y, and a fault in the control section causes an OR operation to be selected. Then, for the values Xi = 0 and Yi = 1, the fault-free circuit responds with a 0 and the faulty circuit responds with a 1.

The propagation in this case is not done by chance. BATG must be able to recognize whether a fault effect currently being processed is in control logic or data-flow logic. Control logic includes such things as status registers and mode control registers. For example, suppose a particular mode control register determines the display resolution and number of colors chosen by a graphics chip. Such a register is often write-only; it cannot be directly read out. In order to determine its contents, the data coming out of the graphics chip must be inspected. If a defect exists in the mode control register, data will come out at the wrong rate, or the wrong data will come out, in which case the defect will be identified. BATG must have enough intelligence built into it to understand, at some level, that it must set up data registers with values that cause incorrect values in a mode control register to appear at the output pins of the chip as incorrect data, in order for control register faults to be identified.

Another example of indirect identification of register bits occurs when a status register for an ALU must be checked (see Figure 3.1). To determine whether an overflow occurred during an ALU operation, a conditional jump instruction is executed. If an overflow is supposed to occur during an ALU operation, then the jump address should appear on the address bus. If the next sequential address appears, the overflow bit of the status register must be stuck at the wrong value. In this case, BATG must set up the processor to first perform an ALU operation, and it must determine what arguments are needed to either induce or avoid an overflow, depending on which of these conditions is being checked.

Trapped fault selection is one of the activities that can be guided by heuristics. In the early phase of test pattern generation, it is usually desirable to flush out as many faults as possible, as quickly as possible.
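The AND-versus-OR example above shows how a control fault is observed only through the data path. The sketch below compares the fault-free and faulty data-path results bit by bit; the operand values are illustrative, and the two expressions simply stand in for the fault-free and faulty ALU operations.

```python
# Observing a control-logic fault through the data path.
# Fault-free control selects AND; the assumed fault makes the ALU perform OR instead.
X, Y = 0b00001111, 0b01010101

good   = X & Y   # fault-free machine: AND
faulty = X | Y   # faulty machine: OR selected by the defective control logic

# Any bit position i with Xi = 0 and Yi = 1 (or vice versa) exposes the fault.
print(f"good   = {good:08b}")           # 00000101
print(f"faulty = {faulty:08b}")         # 01011111
print(f"diff   = {good ^ faulty:08b}")  # 01011010 -> nonzero, fault observed
```

The test generator's job is precisely to choose operands so that this difference is nonzero and then to propagate it to an observable output.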
Flushing faults out early can help to reduce fault simulation time, and it can help to avoid performing complex searches on faults that would normally fall out as a byproduct of other searches. Selection criteria include the ease or difficulty of flushing out trapped faults. If a large register, say a 32- or 64-bit data register, has many trapped faults, then it is a candidate for propagation. If two or more registers have comparable numbers of trapped faults, then another level of decision must be employed to further refine the selection.

One of the difficult things for an ATPG program to do is to make judgment calls. In the shift register of Figure 12.12, a comprehensive test strategy needs to consider how many faults can be detected by propagating a value through the entire shift register. If fault coverage for the register is high and only a few faults are undetected, there may be no benefit in adding many clock cycles to the test in order to propagate a value through the entire shift register. Another complicating factor is the level of effort required to set up the values required at S3-0. It was assumed that this could be done in one clock cycle; in reality, S3-0 may require that a state machine traverse many states in order to reach the state that enables the needed values onto S3-0. This is an area where TDX could have benefited from a rule-based system, invoking the system to make decisions based on the current fault coverage percentage, the ease or difficulty of sensitizing and propagating a fault through the RTL, the payback in estimated number of additional fault detections, and so on.

To determine how to rate trapped faults in terms of difficulty, it must be possible to link error signals back to their fault origins. The fault origins, in turn, are linked to the input or output of the gate at which they originate. From there, the controllability and observability numbers at the gate input or output can be used to estimate a level of difficulty, and hence a value, for that fault. It should be noted, however, that the mechanical computation of C/O numbers does not always provide an accurate indication of the ease or difficulty of controlling or observing the fault. It is possible for faults in control sections and the data path to have similar C/O numbers, but the effects of faults in the data path are directly observable, while faults in control logic are indirectly observable; that is, they are detected by observing their effects on operations performed in the data path. If it can be determined that many faults in a control section are trapped in a register, then the register contents can be given a high value during the selection process.

To determine whether a fault effect originated in the control or data path part of the circuit, the data structures can be examined by tracing from the gate associated with the fault origin back to the flip-flops that drive that gate and forward to the flip-flops that are driven by it. From the map file these are easily associated with their RTL-level counterparts, which can then be related to their purpose in the circuit.

As we saw in the preceding paragraphs, trapped faults, a seemingly innocuous concept, can introduce many complexities when all the issues are considered. In real-life circuits, many flip-flops and registers are buried deep in the circuit and require many clock cycles to control and observe. Others may be easy to control and difficult to observe, or vice versa.
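One way to picture the selection heuristic just described is to score each register holding trapped faults by the number of faults it holds, discounted by an estimated cost of propagating its contents to an output. The scoring formula, register names, and cost values below are illustrative assumptions, not the actual TDX cost function.

```python
# Hypothetical ranking of trapped-fault registers.
# Each candidate: (register name, number of trapped faults, estimated observe cost),
# where the observe cost might come from SCOAP-style observability numbers.
candidates = [
    ("R_DATA", 14, 6.0),    # many trapped faults, moderately hard to observe
    ("R_STAT",  3, 2.0),    # few faults, easy to observe
    ("R_CTRL",  9, 25.0),   # control register, observable only indirectly
]

def score(reg):
    name, trapped, observe_cost = reg
    return trapped / (1.0 + observe_cost)   # illustrative payoff-per-effort measure

best = max(candidates, key=score)
print(best[0], round(score(best), 2))   # picks R_DATA in this example
```

A rule-based system of the kind the text suggests would replace this single formula with rules that also weigh current coverage, test length budget, and how often similar propagations have succeeded before.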
When considering trapped faults, it would be useful to know in advance which of the flip-flops and registers are easy to control and/or observe. A register may have many trapped faults that are desirable to propagate to the output, but it may be exceedingly difficult to propagate the contents of that register to the output. Conversely, the contents of a register may be easy to propagate to the output; it may, in fact, directly drive an internal bus that is connected to an output port.

Part of the task of TDX was to learn about the circuit. The controllability and observability (C/O) numbers generated by the testability analysis program were a first-stage attempt to understand C/O issues. From there, other means were used to evaluate the ease or difficulty of propagating values to outputs. In effect, BATG was constantly refining its understanding of how the circuit behaved and how it could be controlled and observed.

As fault coverage increased, the heuristics for selecting trapped faults often changed. If BATG learned how to sensitize and propagate faults in a particular area of a design, it might be desirable to continue developing test sequences for that area until all or nearly all of the faults in that region were detected. An alternative might be to address faults in a function for which there was little or no coverage. The rationale for this is to get overall fault coverage up to some desirable level with the least number of vectors. This is motivated by the fact that the user may want to hold down the overall test length (cost) while getting the best possible fault coverage within that test length constraint. Also, as has been pointed out in the literature, test quality is influenced to some extent by how well fault coverage is distributed.[24] When fault coverage reaches some predefined level, it is possible at that point to begin attacking individual faults, or small clusters of faults, with more refined heuristics.

12.8.2 DEPOT

By virtue of its architecture, TDX could propagate and justify values derived from the RTL model by means of error modeling, or it could propagate and justify stuck-at faults identified by a gate-level ATPG that could determine what state the circuit had to be in for a fault to become sensitized. To that end, a gate-level combinational test pattern generator was developed; it supported the D-algorithm and a variant of PODEM. This gate-level ATPG was called DEPOT (DEductive, Path-Oriented Trace).

Conceptually, when DEPOT was running, TDX behaved, to all appearances, like any other scan-based ATPG, at least for synchronous circuits. It selected an undetected fault, then worked its way forward to a flip-flop or primary output, and justified assignments back to primary inputs and/or flip-flops. At this point, however, the similarity to a scan-based system ceased. A priority for DEPOT was to sensitize a fault with the smallest possible number of state assignments. Sequential state searches were costly in terms of computation, and the greater the number of state assignments, the greater the search space and the greater the likelihood of conflicts. Because PODEM was given a list of inputs (primary inputs and flip-flops) to which assignments were to be made, and these inputs were selected by tracing back from assignments that required justification, it would often make assignments that were not essential to sensitizing a fault.
It turned out that, for the purpose of generating the smallest list of assignments to flip-flops and I/O pins, the D-algorithm generally proved to be more frugal. SCOAP numbers were used to guide justification and sensitization, and these numbers were used more effectively by the D-algorithm.

Another priority for DEPOT was to try to match the existing state of the circuit. If two or more sensitization solutions exist and one of them matches the current state more closely than the others, it is usually the more desirable solution, since fewer goals are generated. It is possible, however, that only one storage element needs to be changed from its existing state in order to sensitize a fault, but that element may be extremely difficult to control. A cost function involving heuristics, including controllability/observability numbers, helped to make a decision in those cases.

An optimal strategy was to look for easy solutions. For example, a sensitized path may already exist for an undetected fault from its origin to the data input of a flip-flop. Since the concurrent fault simulator had a complete record of fault effects, it could examine logic gates driving flip-flop inputs, looking for fault effects that corresponded to undetected faults (cf. Figure 3.10). If one or more such fault effects were found, then toggling the clock would cause that fault, and possibly others, to become trapped.

Strategies that were under consideration (but not implemented) included Boolean differences and binary decision diagrams (BDDs). Given a fault to be sensitized in a particular cone, the object was to find a closed-form expression sensitizing that fault within the cone (cf. Section 4.13.1). The expression could then be evaluated analytically, relative to the current state of the circuit, to find the best sensitization state. The best sensitization state might be the one that most closely matches the current state of the circuit, or it might be the one deemed least expensive (easiest) based on some cost function. If a fault exists in two or more cones and closed-form expressions could be generated for each of the cones, a more comprehensive cost function could be implemented.

Section 12.6.3 introduced the concept of primitive function test patterns (PFTP) for members of the library of parameterized models (LPM). It was pointed out that a comprehensive set of vectors, based on the functionality of individual members of the LPM, has the advantage that all inputs to an n-wide data port can be assigned simultaneously, permitting more faults to be detected, or more classes of faults to be addressed, such as shorts between adjacent pins of the function. These vectors can be used in place of vectors generated by DEPOT for specific faults, or vectors generated by DEPOT can be merged with the library vectors. Another option is to use the PFTP vectors first and then, if faults escape detection, use DEPOT to target those faults that remain undetected.

History files were another TDX feature. Information useful in a history file included a record of successes and failures while trying to drive the circuit into a specific state. It was found that success or failure in reaching a target state often depended on the current state of the circuit. Sometimes the target state could be reached merely by toggling the clock; at other times long, complex sequences were required.
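A history file of the kind just described might be organized as a small table keyed by target state, recording the outcomes of past attempts from various starting states. The record layout and state names below are hypothetical illustrations; the actual format TDX used is not described here.

```python
# Hypothetical history-file records for state-reaching attempts.
from collections import defaultdict

history = defaultdict(list)   # target state -> list of attempt records

def record_attempt(target, start, succeeded, sequence_length):
    history[target].append({"start": start,
                            "succeeded": succeeded,
                            "length": sequence_length})

def estimated_cost(target):
    """Crude cost estimate: average length of successful past attempts, or None."""
    wins = [a["length"] for a in history[target] if a["succeeded"]]
    return sum(wins) / len(wins) if wins else None

record_attempt("SM1=0010", start="SM1=0000", succeeded=True,  sequence_length=4)
record_attempt("SM1=0010", start="SM1=1100", succeeded=False, sequence_length=30)
print(estimated_cost("SM1=0010"))   # 4.0
```

An estimate like this, combined with C/O numbers, is one plausible way a supervisor could decide whether reaching a sensitization state is likely to be cheap or expensive.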
It proved useful sometimes to tag a particularly difficult-to-reach state to indicate that, if it were reached while trying to achieve some other goal, it should then be considered for exploitation. This is one of those examples of trying to develop rules that mimic the behavior of the human engineer who, while developing sequences to either verify a design or create a manufacturing test program, may break off a particular approach and pursue another target of opportunity that appears to give a greater payback with less effort.

The history file is also useful when analyzing closed-form expressions, such as those obtained from Boolean differences, to identify preferred sensitization states. History files can become enormous, so they must be limited to key control constructs such as state machines, mode control registers, and status registers. History, together with controllability and observability values for these registers, can then become part of a more global evaluation process. This higher level of analysis provides a payback when what looks like a less expensive solution turns out to be the more expensive one, or vice versa.

12.8.3 The Fault Simulator

Since the original intent of TDX was to generate stimuli for manufacturing tests, a fault simulator was needed to compute fault coverage and to identify undetected faults. It was a full-timing concurrent fault simulator, able to accurately fault simulate both synchronous and asynchronous circuits. The simulator supported both an event-driven engine and a read/write array for processing zero-delay elements. If the elements of a combinational block of logic all had zero delay, they would be rank-ordered. This provided some of the benefits of cycle simulation, with a payback magnified by the fact that rank-ordering not only reduced the number of logic event evaluations but also reduced the number of fault event evaluations, of which there tended to be, on average, about ten times as many.

The fault simulator was able to fault simulate subsequences provided by the Supervisor, then return control to the Supervisor. After fault simulation, the Supervisor would request that the fault simulator identify a trapped fault for propagation, in which case BATG would be invoked to perform RTL-level propagation of the fault; if there were no trapped fault candidates, the Supervisor would select a fault from the list of undetected faults and invoke DEPOT.

If there were several trapped faults, the Supervisor could identify particular registers or flip-flops in which it was interested, or it could request that the fault simulator return a linked list identifying all of the storage devices that contained trapped faults. Identifying trapped faults in particular registers or flip-flops was often more valuable during the early stages of a run, when it was likely that all or almost all of the storage devices would have trapped faults. During this stage, general-purpose registers might hold many trapped faults as a result of ALU operations. As the run progressed and fault coverage increased, the distribution of fault effects became more sparse, and the likelihood of finding trapped faults decreased in inverse proportion to the fault coverage.

The fault simulator was also used as a logic simulator. In this mode it ignored the fault effects.
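Returning to the zero-delay rank-ordering mentioned above: it amounts to a topological levelization in which each gate is assigned a level one greater than the deepest of its inputs, and gates are then evaluated in level order so that every gate is evaluated at most once per vector. A minimal sketch, with an assumed netlist format:

```python
# Minimal rank-ordering (levelization) of a zero-delay combinational block.
# Netlist format is assumed: net -> (gate_type, [input_nets]); primary inputs are level 0.

def rank_order(netlist, primary_inputs):
    level = {n: 0 for n in primary_inputs}

    def level_of(net):
        if net not in level:
            _, ins = netlist[net]
            level[net] = 1 + max(level_of(i) for i in ins)
        return level[net]

    return sorted(netlist, key=level_of)   # evaluation order: drivers before users

netlist = {"n1": ("AND", ["a", "b"]),
           "n2": ("NOT", ["n1"]),
           "f":  ("OR",  ["n2", "c"])}
print(rank_order(netlist, ["a", "b", "c"]))   # ['n1', 'n2', 'f']
```

Because the same ordering is reused for the fault machines carried by a concurrent fault simulator, the savings apply to fault events as well as logic events, which is where the tenfold leverage mentioned above comes from.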
After a sequence of vectors was generated, the simulator logic-simulated them to determine whether the correct destination state was reached at the end of the sequence. If the sequence caused the circuit to behave as intended, then the sequence would be fault simulated to (a) identify faults that were detected by the subsequence and (b) identify trapped faults. If the sequence failed to drive the [...]

... result of SM1 being in state 0010 and SM2 being in state 010. Suppose also that the instruction register INST must contain the value 0101 1101 and that the status register must contain XXXX01XX; that is, Status[3:2] must contain the value 01. Suppose that three of the four requirements are satisfied and that the only requirement not satisfied is the requirement that INST = 0101 1101. In a combinational circuit ...

... their formal training and experience. Within that framework of knowledge and understanding, their so-called frame of reference, humans often understand the general concepts but may not know all the details pertaining to how a particular instance of a device works. That is where trial and error begins. The human guesses at a solution and then attempts to ratify that guess. In digital logic this ratification ...

... given circuits f and g, to perform Apply(⊕, Bf, Bg), where Bf and Bg are the ROBDDs for the functions f and g. The comparison of two circuits f and g is accomplished by comparing cones of combinational logic bounded by primary outputs and internal flip-flops. This can be seen in Figure 12.25 (cf. also Figure 7.21). The combinational cone driving flip-flop DFFi has inputs A, Bd, C, and Dd. Inputs A and C are primary ...

... fault simulation revealed that there were undetected faults remaining after the precomputed sequence of vectors had been applied, then DEPOT could go in and generate test vectors for the remaining undetected faults.

12.9 DESIGN VERIFICATION

In the early days of digital IC design, manufacturing test and design verification shared many common tools and methods. Simulation was the workhorse of both test and ...

... through combinational logic. In the example from Figure 12.14 just described, if one considers only the fact that three of the goals are satisfied, it would be tempting to ignore those three goals and only be concerned with the unsatisfied goal. However, the human, recognizing that we are dealing with a CPU and that the instruction register and the state machines are inextricably bound up and interdependent ...

... in TDX. During the period when TDX was in development (1988–1992), gate count in logic circuits was growing rapidly, and Moore's law was in full force. Entire PCBs were being subsumed into one or two ICs. In addition, memory elements and other custom-designed modules were beginning to appear on the same die as the random logic. Design-for-test was becoming a more important aspect of test because, even ...

... number of time frames, and it pointed directly at areas of a design where partial scan was needed. Features of TDX that were particularly effective at identifying areas of a design in need of DFT support were the initialization phase and functional walk. In the initialization phase, each flip-flop and latch was established as a stand-alone goal, with the object being to initialize it and cause it to switch ...
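One of the excerpts above refers to performing Apply(⊕, Bf, Bg) on ROBDDs: two cones are equivalent exactly when the XOR of their functions is identically 0. The sketch below makes the same check by brute-force truth-table enumeration rather than with BDDs; the cone functions are stand-ins, not circuits from the text.

```python
# Brute-force check of what Apply(XOR, Bf, Bg) == 0 establishes:
# two combinational cones are equivalent iff f XOR g is 0 for every input assignment.
from itertools import product

def equivalent(f, g, n_inputs):
    return all(f(*bits) == g(*bits) for bits in product((0, 1), repeat=n_inputs))

# Stand-in cone functions of inputs (A, B, C):
f = lambda a, b, c: (a & b) | c
g = lambda a, b, c: (a | c) & (b | c)   # distributed form of the same function
h = lambda a, b, c: a ^ b ^ c

print(equivalent(f, g, 3))   # True  -> cones match
print(equivalent(f, h, 3))   # False -> a mismatch would flag a design error
```

An ROBDD-based Apply reaches the same verdict symbolically, without enumerating all input combinations, which is what makes it practical for cones with many inputs.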
... fact, the human is often not even aware that he or she is thinking in terms of goals. Goal ordering and prioritizing is done instinctively and subconsciously. As part of this training and experience, the human imposes different levels of understanding on different types of circuits, recognizing from experience and intuition the very different nature of a peripheral chip in contrast to a microprocessor. This ...

... either true or false and can be verified; hence it is a proposition. An atomic proposition is a basic proposition, one that cannot be broken down into two or more smaller units. A compound proposition is one that is composed of two or more atomic propositions connected by logical connectives, such as AND, OR, NOT, XOR, equivalence, and implies. When discussing propositional logic, it is customary ...

... if A is false, or if A is true and B is true. The expression A → B is only false if A is true and B is false, which can be interpreted to mean that a true premise cannot imply a false conclusion. We have used propositional logic throughout the text; it is the backbone of the digital industry. However, it has its limitations, which in turn has led to extensions. Predicate logic is one such extension. A predicate ...
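The behavior of implication described in the last excerpt, that A → B is false only when A is true and B is false, is exactly the truth table of ¬A ∨ B; a short enumeration confirms this.

```python
# A -> B is false only in the single case A = true, B = false,
# so it is equivalent to (not A) or B.
from itertools import product

for A, B in product((False, True), repeat=2):
    implies = (not A) or B
    print(f"A={A!s:5} B={B!s:5}  A->B = {implies}")
# Only the row A=True, B=False prints False: a true premise cannot imply a false conclusion.
```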
