Floating Point Divider Hardware Implementation Details

I am trying to implement a 32-bit floating-point divider in hardware, and I am wondering if I can get any suggestions on the tradeoffs between the different algorithms?

My floating-point unit currently supports multiplication and addition/subtraction, but I am not going to switch it to a fused multiply-add (FMA) architecture, since this is an embedded platform where I am trying to minimize area usage.


Solution 1:

A very long time ago I came across this neat and easy-to-implement float/fixed-point division algorithm, used in military FPUs of that period:

  1. The inputs must be unsigned and shifted so that x < y and both are in the range <0.5 ; 1>.

    Don't forget to store the difference of the shifts, sh = shx - shy, and the original signs.

  2. Find f (by iterating) so that y*f -> 1. After that, x*f -> x/y, which is the division result.

  3. Shift x*f back by sh and restore the result sign (sig = sigx*sigy).

    The x*f can be computed easily like this:

    z=1-y
    (x*f)=(x/y)=x*(1+z)*(1+z^2)*(1+z^4)*(1+z^8)*(1+z^16)...(1+z^(2^n))
    

    where

    n = log2(number of fractional bits for fixed point, or mantissa bit size for floating point)
    

    You can also stop early, as soon as z^(2^n) underflows to zero on fixed-bit-width data types (see the fixed-point sketch after this list).
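
Since the target is hardware, the same iteration in fixed point could look something like this minimal sketch (the Q2.30 format and the q_mul/q_div_core names are just an example choice, not a fixed part of the algorithm):

//---------------------------------------------------------------------------
#include <cstdint>
typedef int32_t q2_30;                  // Q2.30 fixed point: value = raw / 2^30
const q2_30 Q_ONE = 1<<30;              // 1.0 in Q2.30

q2_30 q_mul(q2_30 a,q2_30 b)            // fixed-point multiply, 64-bit intermediate
    {
    return (q2_30)(((int64_t)a*(int64_t)b)>>30);
    }
// core of the division: x,y already shifted into range as in step 1,
// so that z = 1-y satisfies |z| < 1 and the product converges
q2_30 q_div_core(q2_30 x,q2_30 y)
    {
    q2_30 z=Q_ONE-y;                    // z = 1-y
    q2_30 r=x+q_mul(x,z);               // r = x*(1+z)
    for (;;)
        {
        z=q_mul(z,z);                   // z = z^(2^k)
        if (z==0) break;                // remaining factors are ~1 -> stop
        r=r+q_mul(r,z);                 // r *= (1+z^(2^k))
        }
    return r;                           // ~ x/y in Q2.30 (quotient must fit below 2)
    }
//---------------------------------------------------------------------------

In hardware this is just one subtractor and a multiplier reused roughly log2(bit width) times, so it should fit a multiply/add-only datapath.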

[Edit2] I had a bit of time and mood for this, so here is a 32-bit IEEE 754 C++ implementation:

I removed the old (bignum) examples to avoid confusion for future readers (they are still accessible in the edit history if needed).

//---------------------------------------------------------------------------
#include <cstdint>                      // for uint32_t
typedef uint32_t DWORD;                 // 32-bit unsigned integer
// IEEE 754 single masks
const DWORD _f32_sig    =0x80000000;    // sign
const DWORD _f32_exp    =0x7F800000;    // exponent
const DWORD _f32_exp_sig=0x40000000;    // exponent sign
const DWORD _f32_exp_bia=0x3F800000;    // exponent bias
const DWORD _f32_exp_lsb=0x00800000;    // exponent LSB
const DWORD _f32_exp_pos=        23;    // exponent LSB bit position
const DWORD _f32_man    =0x007FFFFF;    // mantissa
const DWORD _f32_man_msb=0x00400000;    // mantissa MSB
const DWORD _f32_man_bits=       23;    // mantissa bits
//---------------------------------------------------------------------------
float f32_div(float x,float y)
    {
    union _f32          // float bits access
        {
        float f;        // 32bit floating point
        DWORD u;        // 32 bit uint
        };
    _f32 xx,yy,zz; int sh; DWORD zsig; float z;
    //      result signum        abs value
    xx.f=x; zsig =xx.u&_f32_sig; xx.u&=(0xFFFFFFFF^_f32_sig);
    yy.f=y; zsig^=yy.u&_f32_sig; yy.u&=(0xFFFFFFFF^_f32_sig);
    // initial exponent difference sh and normalize exponents to speed up shift in range
    sh =0;
    sh-=((xx.u&_f32_exp)>>_f32_exp_pos)-(_f32_exp_bia>>_f32_exp_pos); xx.u&=(0xFFFFFFFF^_f32_exp); xx.u|=_f32_exp_bia;
    sh+=((yy.u&_f32_exp)>>_f32_exp_pos)-(_f32_exp_bia>>_f32_exp_pos); yy.u&=(0xFFFFFFFF^_f32_exp); yy.u|=_f32_exp_bia;
    // shift input in range
    while (xx.f> 1.0f) { xx.f*=0.5f; sh--; }
    while (xx.f< 0.5f) { xx.f*=2.0f; sh++; }
    while (yy.f> 1.0f) { yy.f*=0.5f; sh++; }
    while (yy.f< 0.5f) { yy.f*=2.0f; sh--; }
    while (xx.f<=yy.f) { yy.f*=0.5f; sh++; }
    // divider block
    z=(1.0f-yy.f);
    zz.f=xx.f*(1.0f+z);
    for (;;)
        {
        z*=z; if (z==0.0f) break;
        zz.f*=(1.0f+z);
        }
    // shift result back
    for (;sh>0;) { sh--; zz.f*=0.5f; }
    for (;sh<0;) { sh++; zz.f*=2.0f; }
    // set signum
    zz.u&=(0xFFFFFFFF^_f32_sig);
    zz.u|=zsig;
    return zz.f;
    }
//---------------------------------------------------------------------------
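
Here is a quick sanity check you could run against the compiler's native / operator (the test values are arbitrary):

//---------------------------------------------------------------------------
#include <cstdio>
#include <cmath>
int main()
    {
    // arbitrary test pairs: compare f32_div with the native divide
    const float t[4][2]={ {1.0f,3.0f}, {-7.25f,0.1f}, {123456.0f,-0.0078125f}, {0.5f,0.5f} };
    for (int i=0;i<4;i++)
        {
        float a=t[i][0], b=t[i][1];
        float d0=f32_div(a,b);          // this implementation
        float d1=a/b;                   // reference
        printf("%g / %g = %g (native %g, diff %g)\n",a,b,d0,d1,fabsf(d0-d1));
        }
    return 0;
    }
//---------------------------------------------------------------------------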

I wanted to keep it simple, so it is not optimized yet. You can, for example, replace all the *=0.5 and *=2.0 operations with exponent increments/decrements (see the sketch below) ... If you compare this with FPU results from the float operator /, it will be a bit less precise, because most FPUs compute on an 80-bit internal format while this implementation uses only 32 bits.
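
For example, something like this (just a sketch, valid only for positive normalized values; no overflow/underflow or denormal handling, and the helper names are only placeholders):

//---------------------------------------------------------------------------
// *=2.0 / *=0.5 replaced by exponent field increment/decrement
float f32_mul2(float a)
    {
    union { float f; DWORD u; } t;
    t.f=a; t.u+=_f32_exp_lsb;           // exponent + 1  ->  a*2.0
    return t.f;
    }
float f32_div2(float a)
    {
    union { float f; DWORD u; } t;
    t.f=a; t.u-=_f32_exp_lsb;           // exponent - 1  ->  a*0.5
    return t.f;
    }
//---------------------------------------------------------------------------

The whole shift-back loop could then collapse into a single subtraction of sh*_f32_exp_lsb from zz.u, provided the resulting exponent stays within the valid range.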

As you can see, I am using only +, -, * from the FPU. This can be sped up by using fast squaring algorithms like

  • Fast bignum square computation

especially if you want to use big bit widths ...

Do not forget to implement normalization and/or overflow/underflow correction.
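
For example, a rough check before the shift-back step could look like this (the saturation policy is up to you):

//---------------------------------------------------------------------------
// rough sketch: detect exponent overflow/underflow before the shift-back step
// e  = biased exponent of zz after the divider block
// sh = accumulated shift count (the shift-back lowers the exponent by sh)
bool f32_shift_ok(DWORD zz_u,int sh)
    {
    int e=(int)((zz_u&_f32_exp)>>_f32_exp_pos);     // biased exponent
    int r=e-sh;                                     // exponent after shift-back
    return (r>=1)&&(r<=254);                        // stays in normalized range?
    }
// if r > 254 -> overflow  : return signed infinity (exponent 0xFF, mantissa 0)
// if r < 1   -> underflow : return signed zero (or construct a denormal)
//---------------------------------------------------------------------------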