Floating-point operations are hard to implement on FPGAs because of the complexity of their algorithms. On the other hand, many scientific problems require single-precision floating-point arithmetic with high levels of accuracy in their calculations. Therefore, we have explored FPGA implementations of addition and multiplication for IEEE single-precision floating-point numbers. Customizations were performed where possible in order to save chip area or get the most out of our prototype board. The implementations trade off area and speed for accuracy. The adder is a bit-parallel adder, and the multiplier is a digit-serial multiplier. Prototypes have been implemented on Altera FLEX 8000s, and peak rates of 7 MFlops for 32-bit addition and 2.3 MFlops for 32-bit multiplication have been obtained.
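To make the multiplier's task concrete, the following is a minimal software sketch of the textbook single-precision multiplication algorithm that such hardware implements: XOR the signs, add the biased exponents (removing one bias), multiply the 24-bit significands, then normalize. This is an illustrative model, not the paper's digit-serial design; rounding, zeros, subnormals, infinities, and NaNs are deliberately omitted, and the helper `bits` is ours.

```python
import struct

def bits(x: float) -> int:
    """32-bit pattern of a Python float rounded to single precision."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def fp32_mul_sketch(a_bits: int, b_bits: int) -> int:
    """Multiply two IEEE single-precision numbers given as 32-bit patterns.
    Handles normalized numbers only; the result is truncated, not rounded."""
    sign = ((a_bits >> 31) ^ (b_bits >> 31)) & 1
    exp = ((a_bits >> 23) & 0xFF) + ((b_bits >> 23) & 0xFF) - 127
    # Make the implicit leading 1 explicit: 24-bit significands.
    sig_a = (a_bits & 0x7FFFFF) | 0x800000
    sig_b = (b_bits & 0x7FFFFF) | 0x800000
    prod = sig_a * sig_b              # 48-bit product, value in [1, 4)
    if prod & (1 << 47):              # product in [2, 4): renormalize
        prod >>= 1
        exp += 1
    frac = (prod >> 23) & 0x7FFFFF    # keep 23 fraction bits, drop leading 1
    return (sign << 31) | ((exp & 0xFF) << 23) | frac
```

For example, `fp32_mul_sketch(bits(1.5), bits(2.0))` yields the bit pattern of 3.0 (`0x40400000`).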
As mentioned above, the IEEE Standard for Binary Floating-Point Arithmetic (ANSI/IEEE Std 754-1985) will be used throughout our work. We use the single-precision format, in which numbers are composed of the following three fields:
1-bit sign, S: A value of ‘1’ indicates that the number is negative, and a ‘0’ indicates a positive number.
Bias-127 exponent, e = E + bias: This gives us an exponent range from Emin = -126 to Emax = 127.
Fraction, f: The fractional part of the number. The fractional part must not be confused with the significand, which is 1 plus the fractional part. The leading 1 in the significand is implicit. When performing arithmetic with this format, the implicit bit is usually made explicit. To determine the value of a floating-point number in this format we use the following formula:
Value = (-1)^S x 2^(e-127) x 1.f22f21...f0
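The field decomposition and value formula above can be sketched in a few lines of Python; the helper below is ours, for illustration, and handles normalized numbers only (zero, subnormals, infinity, and NaN have special encodings not covered by this formula).

```python
def decode_fp32(pattern: int):
    """Split a 32-bit pattern into the three single-precision fields and
    evaluate Value = (-1)^S * 2^(e-127) * 1.f for normalized numbers."""
    S = (pattern >> 31) & 1        # 1-bit sign
    e = (pattern >> 23) & 0xFF     # bias-127 exponent
    f = pattern & 0x7FFFFF         # 23-bit fraction; leading 1 is implicit
    value = (-1.0) ** S * 2.0 ** (e - 127) * (1.0 + f / 2.0 ** 23)
    return S, e, f, value
```

For example, `decode_fp32(0x3F800000)` returns `(0, 127, 0, 1.0)`: sign positive, biased exponent 127 (so E = 0), and an all-zero fraction giving a significand of exactly 1.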
The main objectives throughout our work were to minimize the number of logic cells required for the adder and the multiplier, while at the same time keeping the speed of the operations at a reasonable level and maintaining IEEE 32-bit accuracy. The results presented above show that these requirements have been satisfied to a great extent; however, this does not mean that further improvements are not possible. In the rest of this section, we present some of the ideas we have for making our designs faster, smaller, and more accurate. We have shown that IEEE single-precision floating-point arithmetic can be successfully implemented on FPGAs. Our implementations give a respectable performance, though not at the level of custom implementations.