Is it possible to take a decimal number as input from the user without using float/double data type?

The number then has to be used in further calculations.

I'm writing a function to calculate the square root of a number without using the cmath library. I'm doing this for a DSP that doesn't have an FPU, so I can't use float/double.

If I use a string to take the decimal number as input from the user, how can I convert it into a number? And once it is converted, what should the return type of the square root function be (the root will be a fractional number)?

If possible, please suggest some alternative way instead of strings.

I know this is related to fixed-point arithmetic, but I don't know how to implement it in C++.


Solution 1:

Foreword: Compilers can implement floating point operations in software for CPUs that do not have floating point hardware, so it is often not a problem to use float and double on such systems.

I recommend using the standard fundamental floating point types as long as they are supported by your compiler.

Is it possible to take a decimal number as input from the user without using float/double data type?

Yes. User input is done using character streams. You can read input into a string without involving any numeric type.
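
For example, a minimal sketch of reading the input as text (plain standard C++; the prompt text is just for illustration):

    #include <iostream>
    #include <string>

    int main() {
        std::string input;
        std::cout << "Enter a decimal number: ";
        std::getline(std::cin, input);  // e.g. "3.1416" is read as text; no float involved
        // `input` can now be parsed into whichever representation you choose
    }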

And this number has to be used further for calculation.

To do calculation, you must first decide how you would like represent the number. There are several alternatives to hardware floating point:

  • Fixed point: Use a scaled integer; for example, with an implicit scale factor of 1/10000, the integer 100 represents 0.0100 (see the parsing sketch after this list).
  • Software floating point: Use one integer to represent the mantissa, another integer to represent the exponent, and a boolean to represent the sign.
  • Rational numbers: Use one integer to represent the numerator and another to represent the denominator.
  • Probably many others...
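
To make the fixed-point option concrete, here is a hedged sketch that parses a decimal string into a scaled integer. The scale factor of 10000 (four fractional digits) and the helper name parse_fixed are arbitrary choices for this example, and error handling for malformed input is omitted:

    #include <cstdint>
    #include <string>

    // Illustrative scale factor: four decimal digits after the point,
    // so the integer 100 represents 0.0100.
    constexpr std::int64_t SCALE = 10000;

    // Hypothetical helper: parse e.g. "12.34" into 123400.
    std::int64_t parse_fixed(const std::string& s) {
        std::int64_t result = 0;
        std::size_t i = 0;
        bool negative = false;
        if (i < s.size() && (s[i] == '+' || s[i] == '-')) {
            negative = (s[i] == '-');
            ++i;
        }
        for (; i < s.size() && s[i] != '.'; ++i)   // integer part
            result = result * 10 + (s[i] - '0');
        result *= SCALE;
        if (i < s.size() && s[i] == '.') {          // fractional part
            std::int64_t place = SCALE / 10;
            for (++i; i < s.size() && place > 0; ++i, place /= 10)
                result += (s[i] - '0') * place;
        }
        return negative ? -result : result;
    }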

Each of these has different implementations for the different arithmetic operations.

Fixed point is the simplest and probably the most efficient, but it has both a small range and poor precision near zero. Strictly speaking, the precision is equal across the entire range, but that is poor compared to floating point, which has high precision near zero and very poor precision far from zero.
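
Continuing the sketch above, the basic operations stay in plain integer arithmetic; the only subtlety is that multiplication produces a doubly-scaled result that must be divided back down:

    #include <cstdint>

    constexpr std::int64_t SCALE = 10000;  // same scale as in the parsing sketch

    // Addition keeps the scale: a/S + b/S = (a + b)/S.
    std::int64_t fixed_add(std::int64_t a, std::int64_t b) {
        return a + b;
    }

    // Multiplication doubles the scale: (a/S) * (b/S) = (a*b)/S^2,
    // so divide once by SCALE to get back to units of 1/S.
    // Note that a*b can overflow std::int64_t for large operands.
    std::int64_t fixed_mul(std::int64_t a, std::int64_t b) {
        return (a * b) / SCALE;
    }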

Software floating point can potentially reproduce hardware behaviour by following the ubiquitous IEEE-754 standard.
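
As a bare illustration of the representation only (the field names are mine; real arithmetic additionally requires normalisation, rounding, and the special cases that IEEE-754 specifies):

    #include <cstdint>

    // value = (negative ? -1 : 1) * mantissa * 2^exponent
    struct SoftFloat {
        std::uint32_t mantissa;
        std::int32_t  exponent;
        bool          negative;
    };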

Rational numbers have problems with overflow as well as redundant representations. I don't think they are used much except together with arbitrary-precision integers.
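
For completeness, a minimal sketch of the rational representation: normalising by the greatest common divisor removes the redundant representations (2/4 and 1/2 denote the same number), and the multiplication shows where overflow bites:

    #include <cstdint>
    #include <numeric>  // std::gcd (C++17)

    struct Rational {
        std::int64_t num;  // numerator
        std::int64_t den;  // denominator, assumed positive and nonzero here
    };

    Rational normalise(Rational r) {
        std::int64_t g = std::gcd(r.num, r.den);  // gcd of the absolute values
        return { r.num / g, r.den / g };
    }

    Rational mul(Rational a, Rational b) {
        // Numerators and denominators multiply, so intermediate values
        // grow quickly; this is where overflow becomes a problem.
        return normalise({ a.num * b.num, a.den * b.den });
    }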

(the root will be a fractional number)

Technically, most roots are irrational and thus not fractional. But since irrational numbers cannot be represented exactly by computers (except in symbolic form), the best we can achieve is some fractional number close to the actual root.
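
Tying this back to the original question: with the fixed-point representation from above, a square root can be computed using only integer arithmetic. One simple, robust method is binary search for the integer square root; since sqrt(x/S) = sqrt(x*S)/S, taking the integer root of x*SCALE yields a result in the same 1/SCALE units:

    #include <cstdint>

    constexpr std::int64_t SCALE = 10000;  // same scale as in the earlier sketches

    // Integer square root: the largest r with r*r <= n (binary search).
    std::int64_t isqrt(std::int64_t n) {
        std::int64_t lo = 0, hi = n / 2 + 1;
        while (lo < hi) {
            std::int64_t mid = lo + (hi - lo + 1) / 2;
            if (mid <= n / mid) lo = mid;  // same as mid*mid <= n, but avoids overflow
            else hi = mid - 1;
        }
        return lo;
    }

    // Fixed-point square root: x and the result are both in units of 1/SCALE.
    // Note that x * SCALE can itself overflow for large x; a wider intermediate
    // type would be needed in production code.
    std::int64_t fixed_sqrt(std::int64_t x) {
        return isqrt(x * SCALE);
    }

For example, fixed_sqrt(20000) (i.e. 2.0000) returns 14142, i.e. 1.4142. This also answers the return-type question: the square root function simply returns the same integer type as its fixed-point input, with the scale factor implicit.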