FLOATING POINT ARITHMETIC ON FPGA   3DJH_ PROJECT REPORT ON IMPLEMENTATION OF FLOATING POINT ARITHMETIC ON FPGA DIGITAL SYSTEM ARCHITECTURE WINTER SEMESTER-2010 BY SUBHASH C (200911005) A N MANOJ KUMAR (200911030) PARTH GOSWAMI (200911049) FLOATING POINT ARITHMETIC ON FPGA   3DJH_ ACKNOWLEDGEMENT: We would like to express our deep gratitude to Dr.Rahul Dubey, who not only gave us this opportunity to work on this project, but also guided and encouraged us throughout the course. He and TAs of the course, Neeraj Chasta and Purushothaman, patiently helped us throughout the project. We take this as opportunity to thank them and our classmates and friends for extending their support and worked together in a friendly learning environment. And last but not the least, we would like to thank non-teaching lab staff who patiently helped us to understand that all kits were working properly. By Subhash C A N Manoj Kumar Parth Goswami FLOATING POINT ARITHMETIC ON FPGA   3DJH_ CONTENTS 1. PROBLEM STATEMENT 4 2. ABSTRACT 4 3. INTRODUCTION 5 3.1. FLOATING POINT FORMAT USED 6 3.2. DETECTION OF SPECIAL INPUTS 6 4. FLOATING POINT ADDER/SUBTRACTOR 8 5. FLOATING POINT MULTIPLIER 9 5.1. ARCHITECTURE FOR FLOATING POINT MULTIPLICATION 10 5.2. DESIGNED 4 * 4 BIT MULTIPLIER. 12 6. VERIFICATION PLAN 14 7. SIMULATION RESULTS & RTL SCHEMATICS 15 8. FLOOR PLAN OF DESIGN & MAPPING REPORT 21 9. POWER ANALYSIS USING XPOWER ANALYZER 25 10. CONCLUSION 26 11. FUTURE SCOPE 26 12. REFERENCES 27 13. APPENDIX 28 FLOATING POINT ARITHMETIC ON FPGA   3DJH_ 1. PROBLEM STATEMENT: Implement the arithmetic (addition/subtraction & multiplication) for IEEE-754 single precision floating point numbers on FPGA. Display the resultant value on LCD screen. 2. ABSTRACT: Floating point operations are hard to implement on FPGAs because of the complexity of their algorithms. On the other hand, many scientific problems require floating point arithmetic with high levels of accuracy in their calculations. Therefore, we have explored FPGA implementations of addition and multiplication for IEEE-754 single precision floating-point numbers. For floating point multiplication, in IEEE single precision format, we have to multiply two 24 bits. As we know that in Spartan 3E, 18 bit multiplier is already there. The main idea is to replace the existing 18 bit multiplier with a dedicated 24 bit multiplier designed with small 4 bit multiplier. For floating point addition, exponent matching and shifting of 24 bit mantissa and sign logic are coded in behavioral style. Entire our project is divided into 4 modules. 1. Designing of floating point adder/subtractor. 2. Designing of floating point multiplier. 3. Creation of combined control & data paths. 4. I/O interfacing: Interfacing of LCD for displaying the output and tacking inputs from block RAM. Prototypes have been implemented on Xilinx Spartan 3E. 
FLOATING POINT ARITHMETIC ON FPGA   3DJH_ 3. INTRODUCTION: Image and digital signal processing applications require high floating point calculations throughput, and nowadays FPGAs are being used for performing these Digital Signal Processing (DSP) operations. Floating point operations are hard to implement on FPGAs as their algorithms are quite complex. In order to combat this performance bottleneck, FPGAs vendors including Xilinx have introduced FPGAs with nearly 254 18x18 bit dedicated multipliers. These architectures can cater the need of high speed integer operations but are not suitable for performing floating point operations especially multiplication. Floating point multiplication is one of the performance bottlenecks in high speed and low power image and digital signal processing applications. Recently, there has been significant work on analysis of high-performance floating-point arithmetic on FPGAs. But so far no one has addressed the issue of changing the dedicated 18x18 multipliers in FPGAs by an alternative implementation for improvement in floating point efficiency. It is a well known concept that the single precision floating point multiplication algorithm is divided into three main parts corresponding to the three parts of the single precision format. In FPGAs, the bottleneck of any single precision floating-point design is the 24x24 bit integer multiplier required for multiplication of the mantissas. In order to circumvent the aforesaid problems, we designed floating point multiplication and addition. The designed architecture can perform both single precision floating point addition as well as single precision floating point multiplication with a single dedicated 24x24 bit multiplier block designed with small 4x4 bit multipliers. The basic idea is to replace the existing 18x18 multipliers in FPGAs by dedicated 24x24 bit multiplier blocks which are implemented with dedicated 4x4 bit multipliers. This architecture can also be used for integer multiplication as well. FLOATING POINT ARITHMETIC ON FPGA   3DJH_ 3.1. FLOATING POINT FORMAT USED: As mentioned above, the IEEE Standard for Binary Floating Point Arithmetic (ANSI/IEEE Std 754-1985) will be used throughout our work. The single precision format is shown in Figure 1. Numbers in this format are composed of the following three fields: 1-bit sign, S: A value of ¶1· indicates that the number is negative, and a ¶0· indicates a positive number. Bias-127 exponent, e = E + bias: This gives us an exponent range from Emin = -126 to Emax = 127. Fraction, f/mantissa: The fractional part of the number. The fractional part must not be confused with the significand, which is 1 plus the fractional part. The leading 1 in the significand is implicit. When performing arithmetic with this format, the implicit bit is usually made explicit. To determine the value of a floating point number in this format we use the following formula: Value = (-1) sign x 2 (exponent-127) x 1.f22f21f20 f1f0 Fig 1. Representation of floating point number 3.2. 
3.2. DETECTION OF SPECIAL INPUTS:
IEEE-754 single precision floating point numbers support three special inputs.

Signed infinities: The two infinities, +inf and -inf, represent the maximum positive and negative real numbers, respectively, that can be represented in the floating-point format. Infinity is always represented by a zero significand (fraction and integer bit) and the maximum biased exponent allowed in the specified format (255 in decimal for the single-real format). The signs of infinities are observed, and comparisons are possible. Infinities are always interpreted in the affine sense; that is, -inf is less than any finite number and +inf is greater than any finite number. Arithmetic on infinities is always exact. Exceptions are generated only when the use of infinity as a source operand constitutes an invalid operation. Whereas de-normalized numbers represent an underflow condition, the two infinity values represent the result of an overflow condition, where the normalized result of a computation has a biased exponent greater than the largest allowable exponent for the selected result format.

NaNs: Since NaNs are non-numbers, they are not part of the real number line; the encoding space for NaNs in the FPU floating-point formats lies beyond the ends of the real number line. This space includes any value with the maximum allowable biased exponent and a non-zero fraction (the sign bit is ignored for NaNs). The IEEE standard defines two classes of NaNs: quiet NaNs (QNaNs) and signaling NaNs (SNaNs). A QNaN is a NaN with the most significant fraction bit set; an SNaN is a NaN with the most significant fraction bit clear. QNaNs are allowed to propagate through most arithmetic operations without signaling an exception. SNaNs generally signal an invalid-operation exception whenever they appear as operands in arithmetic operations.

Zero: Though zero is not a special input in the same sense, if one of the operands is zero the result is known without performing any operation, so zero, denoted by a zero exponent and zero mantissa, is also detected. A further reason to detect zeroes is that the adder would otherwise misinterpret a zero operand as the decimal value 1 after adding the hidden '1' to the mantissa.

4. FLOATING POINT ADDER/SUBTRACTOR:
Floating-point addition has three main parts:
1. Adding the hidden '1' and aligning the mantissas to make the exponents equal.
2. Addition of the aligned mantissas.
3. Normalization and rounding of the result.

The initial mantissa is 23 bits wide; after adding the hidden '1' it is 24 bits wide. First the exponents are compared by subtracting one from the other and looking at the sign (the MSB, which is the carry) of the result. To equalize the exponents, the mantissa of the number with the lesser exponent is shifted right d times, where d is the absolute value of the difference between the exponents. The sign of the larger number is anchored. The XOR of the sign bits of the two numbers decides the operation (addition or subtraction) to be performed. Because the shifting may cause loss of some bits, and to prevent this to some extent, the mantissas that are actually added are wider than 24 bits; in our implementation, the mantissas to be added are 25 bits wide. The two mantissas are added (or subtracted) and the most significant 24 bits of the absolute value of the result form the normalized mantissa for the final packed floating point result. The XOR of the anchor sign bit and the sign of the result forms the sign bit of the final packed floating point result. The remaining part of the result is the exponent: before normalization its value is the anchored exponent, which is the larger of the two exponents. In normalization, the leading zeroes are detected and the mantissa is shifted so that a leading one appears; the exponent changes accordingly, forming the exponent of the final packed floating point result. The whole process is shown in the figure below, and a small sketch of the datapath follows it.

Fig 2. Architecture for floating point adder/subtractor
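The following Verilog fragment is a minimal behavioral sketch of the align / add / normalize sequence described above. It is not the report's appendix code: the module name and port names are our own, rounding is omitted (the guard bit is simply truncated), and special inputs (zero, infinity, NaN) are assumed to have been filtered out beforehand.

// Minimal behavioral sketch of the align / add / normalize flow of Section 4.
// Illustrative only: no rounding, no special-input handling, names are ours.
module fp_addsub_sketch (
    input  wire [31:0] a,
    input  wire [31:0] b,
    output reg  [31:0] sum
);
    reg        sa, sb, s_anchor, s_res;
    reg [7:0]  ea, eb, e_res, d;
    reg [24:0] ma, mb, m_big, m_small;   // hidden '1' + 23-bit fraction + 1 guard bit
    reg [25:0] m_sum;
    integer    i;

    always @* begin
        sa = a[31];  ea = a[30:23];  ma = {1'b1, a[22:0], 1'b0};
        sb = b[31];  eb = b[30:23];  mb = {1'b1, b[22:0], 1'b0};

        // Compare exponents, anchor the larger operand, align the smaller one.
        if (ea >= eb) begin
            d = ea - eb;  e_res = ea;  s_anchor = sa;
            m_big = ma;   m_small = mb >> d;
        end else begin
            d = eb - ea;  e_res = eb;  s_anchor = sb;
            m_big = mb;   m_small = ma >> d;
        end

        // XOR of the signs selects effective addition or subtraction; the
        // absolute value of the difference keeps the mantissa positive.
        if (sa ^ sb) begin
            if (m_big >= m_small) begin
                m_sum = {1'b0, m_big} - {1'b0, m_small};  s_res = s_anchor;
            end else begin
                m_sum = {1'b0, m_small} - {1'b0, m_big};  s_res = ~s_anchor;
            end
        end else begin
            m_sum = {1'b0, m_big} + {1'b0, m_small};      s_res = s_anchor;
        end

        // Normalize: shift right once on carry-out, otherwise shift left
        // until the leading '1' reaches bit 24, adjusting the exponent.
        if (m_sum[25]) begin
            m_sum = m_sum >> 1;  e_res = e_res + 1;
        end else begin
            for (i = 0; i < 25; i = i + 1)
                if (!m_sum[24] && (m_sum != 0) && (e_res != 0)) begin
                    m_sum = m_sum << 1;  e_res = e_res - 1;
                end
        end

        // Truncate the guard bit and pack the result (no rounding here).
        sum = (m_sum == 0) ? 32'd0 : {s_res, e_res, m_sum[23:1]};
    end
endmodule

Rounding, listed as part 3 of the algorithm, would operate on the guard bit that this sketch simply truncates.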
5. FLOATING POINT MULTIPLIER:
The single precision floating point multiplication algorithm is divided into three main parts corresponding to the three parts of the single precision format. The first part of the product, the sign, is determined by an exclusive-OR of the two input signs. The second part, the exponent of the product, is calculated by adding the two input exponents (and subtracting the bias of 127 so that the result remains in biased form). The third part, the significand of the product, is determined by multiplying the two input significands, each with a '1' concatenated to it. The figures below show the architecture and flowchart of the single precision floating point multiplier. It can easily be observed from the figures that the 24x24 bit integer multiplier is the main performance bottleneck for high speed and low power operation. In FPGAs, the availability of dedicated 18x18 multipliers instead of dedicated 24x24 bit multiply blocks further complicates this problem.

5.1. DESIGNED ARCHITECTURE FOR MULTIPLICATION IN FPGAS:
We proposed the idea of a combined floating point multiplier and adder for FPGAs, in which the existing 18x18 bit multipliers in FPGAs are replaced with dedicated blocks of 24x24 bit integer multipliers designed with 4x4 bit multipliers. In the designed architecture, the dedicated 24x24 bit multiplication block is fragmented into four parallel 12x12 bit multiplication modules, where AH, AL, BH and BL are each 12 bits. The 12x12 multiplication modules are in turn implemented using small 4x4 bit multipliers, so the whole 24x24 bit multiplication operation is divided into 36 4x4 multiply modules working in parallel. Each 12-bit number A & B to be multiplied is divided into three 4-bit groups A3, A2, A1 and B3, B2, B1 respectively. The flowchart and the architecture for the multiplier block are shown below, and a small sketch of the divide-and-conquer decomposition follows them.

fig 3. Flowchart for floating point multiplication

[...]

fig 4. Designed architecture for floating point multiplication

Additional Advantages: The additional advantage of the proposed CIFM is that the floating point multiplication operation can now be performed easily in an FPGA without any resource or performance bottleneck. In single precision floating point multiplication, the mantissas are of 23 bits...
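To illustrate the decomposition that Fig 4 describes, the fragment below is a minimal sketch of how a 24x24 bit product can be assembled from four 12x12 partial products (AH/AL and BH/BL), with each 12x12 product in turn assembled from nine 4x4 products. The module names mul12x12_sketch and mul24x24_sketch are illustrative and are not the MULTIPLIER24BIT module from the appendix.

// Sketch of the divide-and-conquer multiplier organization of Section 5.1:
// a 24x24 product from four 12x12 partial products, and a 12x12 product
// from nine 4x4 partial products.  Names are illustrative only.
module mul12x12_sketch (
    input  wire [11:0] a,
    input  wire [11:0] b,
    output wire [23:0] p
);
    wire [3:0] a3 = a[11:8], a2 = a[7:4], a1 = a[3:0];
    wire [3:0] b3 = b[11:8], b2 = b[7:4], b1 = b[3:0];

    // Nine 4x4 partial products, weighted by the positions of the 4-bit groups.
    assign p = (a1 * b1)
             + ((a1 * b2 + a2 * b1)           << 4)
             + ((a1 * b3 + a2 * b2 + a3 * b1) << 8)
             + ((a2 * b3 + a3 * b2)           << 12)
             + ((a3 * b3)                     << 16);
endmodule

module mul24x24_sketch (
    input  wire [23:0] a,
    input  wire [23:0] b,
    output wire [47:0] p
);
    wire [11:0] ah = a[23:12], al = a[11:0];
    wire [11:0] bh = b[23:12], bl = b[11:0];
    wire [23:0] pll, plh, phl, phh;

    // Four parallel 12x12 multipliers.
    mul12x12_sketch m_ll (.a(al), .b(bl), .p(pll));
    mul12x12_sketch m_lh (.a(al), .b(bh), .p(plh));
    mul12x12_sketch m_hl (.a(ah), .b(bl), .p(phl));
    mul12x12_sketch m_hh (.a(ah), .b(bh), .p(phh));

    // Combine the partial products with the appropriate shifts:
    // a*b = AH*BH*2^24 + (AH*BL + AL*BH)*2^12 + AL*BL
    assign p = ({24'd0, pll})
             + ({24'd0, plh} << 12)
             + ({24'd0, phl} << 12)
             + ({24'd0, phh} << 24);
endmodule

In the report's architecture the 4x4 products map onto the proposed dedicated 4x4 multiplier blocks; the sketch simply uses the * operator on 4-bit operands to stand in for them.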
[...]

7. SIMULATION RESULTS & RTL SCHEMATICS:

VARIOUS SIGNALS - DESCRIPTION (ADDER/SUBTRACTOR MODULE):
1. A, B: input 32-bit single precision numbers
...

TEST FOR ADDER MODULE:

VARIOUS SIGNALS - DESCRIPTION (MULTIPLIER MODULE):
1. IN1, IN2: input 32-bit single precision numbers
2. OUT: output 32-bit single precision number
3. SA, SB, EA, EB, MA, MB: sign, exponent and mantissa parts of the inputs
4. PFPM: 48-bit multiplication result
5. SPFPM: shifted result of the multiplication
6. EFPM: exponent result (output of exponent ...

RTL SCHEMATIC FOR DATAPATH & CONTROLLER:

TEST FOR DATAPATH & CONTROLLER:

8. FLOOR PLAN OF DESIGN & MAPPING REPORT:

======================================================================
Device utilization summary:
Selected Device: 3s500efg320-5
Number of Slices:        1616 out of 4656   34%
Number of 4 input LUTs:  2881 out of 9312   30%
Number of IOs:             38
Number of bonded IOBs:     38 out of  232   16%
Number of BRAMs:            2 out of   20   10%
Number of GCLKs:            1 out of   24    4%
======================================================================

9. POWER ANALYSIS USING XPOWER ANALYZER:

TEMPERATURE ANALYSIS:

10. CONCLUSION:
We have successfully implemented the arithmetic (adder/subtractor & multiplication) for IEEE single precision floating point numbers on FPGA, and displayed the corresponding output values on the LCD as well.

11. FUTURE SCOPE:
As we have used a MUX to select the outputs of the two computational blocks, both the adder and the multiplier are active even though only one of them needs to be active at a time. This consumes a lot of dynamic power, which could be reduced by disabling one of them when not required. One more addition that can ... is very much likely to occur in floating point operations).

12. REFERENCES:
1. www.xilinx.com
2. Himanshu Thapliyal, Hamid R. Arabnia, A. P. Vinod, "Combined integer and floating point multiplication in FPGAs".
3. Computer Arithmetic: Algorithms ...
[...] www.randelshofer.ch/fhw/gri/lcd-init, for some part of the code in LCD interfacing.
6. http://babbage.cs.qc.cuny.edu/IEEE-754/Decimal.html, for Java applets regarding floating point conversions.
7. HITACHI HD44780 LCD data sheet.

13. APPENDIX
VERILOG CODE FOR FLOATING ...

// INSTANTIATION OF FLOATING POINT ADDER MODULE
fpadder adder(.A(IN1), .B(IN2), .C(FPADD));
// INSTANTIATION OF FLOATING POINT MULTIPLICATION MODULE
FLOATINGMULTIPLICATION multiplication(.IN1(IN1), .IN2(IN2), .OUT(FPMUL));
// ASSIGNING THE REQUIRED VALUE TO THE OUTPUT VARIABLE DEPENDING ON THE CONTROL (CNTR) VALUE
//assign OUT = cntr ? FPADD : FPMUL;
assign OUT1 = cntr ? FPADD : FPMUL;
endmodule

// MODULE FOR FLOATING POINT ...

[...]

// GENERATION OF SIGN BIT USING XOR GATE
xor (SFP, SA, SB);
// INSTANTIATION OF EXPONENT ADDITION MODULE TO ADD EXPONENTS
EXPONENTADDITION FPEXP(.A(EA), .B(EB), .E(EFPM));
// INSTANTIATING 24 BIT MULTIPLIER MODULE TO MULTIPLY FRACTION PART
MULTIPLIER24BIT FPMUL(.A(MA), .B(MB), .P(PFPM));
// ...
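Because only fragments of the appendix survive here, the following is a minimal sketch of how the combined datapath fragment above could be wrapped into a complete module. It assumes 32-bit IN1/IN2 operands, a 1-bit cntr select with cntr = 1 choosing the adder result, and the fpadder and FLOATINGMULTIPLICATION submodules with the port names used in the fragment; the actual port widths and module header in the report may differ.

// Minimal wrapper sketch around the combined datapath fragment above.
// Assumptions (not confirmed by the report): 32-bit operands, 1-bit cntr,
// cntr = 1 selects the adder result, cntr = 0 selects the multiplier result.
module datapath_sketch (
    input  wire [31:0] IN1,
    input  wire [31:0] IN2,
    input  wire        cntr,
    output wire [31:0] OUT1
);
    wire [31:0] FPADD, FPMUL;

    // INSTANTIATION OF FLOATING POINT ADDER MODULE
    fpadder adder (.A(IN1), .B(IN2), .C(FPADD));

    // INSTANTIATION OF FLOATING POINT MULTIPLICATION MODULE
    FLOATINGMULTIPLICATION multiplication (.IN1(IN1), .IN2(IN2), .OUT(FPMUL));

    // OUTPUT SELECTED BY THE CONTROL (CNTR) VALUE
    assign OUT1 = cntr ? FPADD : FPMUL;
endmodule

As noted in the Future Scope section, both blocks remain active in this organization; gating or disabling the unused block would reduce dynamic power.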