How do I interpret precision and scale of a number in a database?
I have the following column specified in a database: decimal(5,2)
How does one interpret this?
According to the properties on the column as viewed in SQL Server Management Studio, I can see that it means: decimal(Numeric precision, Numeric scale).
What do precision and scale mean in real terms?
It would be easy to interpret this as a decimal with 5 digits and two decimal places... i.e. 12345.12
P.S. I've been able to determine the correct answer from a colleague but had great difficulty finding an answer online. As such, I'd like to have the question and answer documented here on stackoverflow for future reference.
Numeric precision refers to the maximum number of digits that are present in the number.
e.g. 1234567.89 has a precision of 9
Numeric scale refers to the maximum number of decimal places
e.g. 123456.789 has a scale of 3
Thus the maximum allowed value for decimal(5,2) is 999.99
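You can check this in practice with a minimal T-SQL sketch (the temp table and column names below are made up for illustration):

```sql
-- decimal(5,2): at most 5 digits in total, 2 of them after the decimal point
CREATE TABLE #PrecisionDemo (Amount decimal(5,2));

INSERT INTO #PrecisionDemo (Amount) VALUES (999.99);   -- OK: largest value that fits
INSERT INTO #PrecisionDemo (Amount) VALUES (-999.99);  -- OK: smallest value that fits
INSERT INTO #PrecisionDemo (Amount) VALUES (1000.00);  -- fails with an arithmetic overflow error:
                                                       -- 4 digits before the point + 2 after = 6 > 5

SELECT Amount FROM #PrecisionDemo;
DROP TABLE #PrecisionDemo;
```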
Precision of a number is the number of digits.
Scale of a number is the number of digits after the decimal point.
What is generally implied when setting precision and scale on a field definition is that they represent maximum values.
For example, a decimal field defined with precision=5 and scale=2 would allow the following values (a worked T-SQL sketch follows the note at the end):
- 123.45 (p=5, s=2)
- 12.34 (p=4, s=2)
- 12345 (p=5, s=0)
- 123.4 (p=4, s=1)
- 0 (p=0, s=0)
The following values are not allowed or would cause a data loss:
- 12.345 (p=5, s=3) => could be truncated into 12.35 (p=4, s=2)
- 1234.56 (p=6, s=2) => could be truncated into 1234.6 (p=5, s=1)
- 123.456 (p=6, s=3) => could be truncated into 123.46 (p=5, s=2)
- 123450 (p=6, s=0) => out of range
Note that the allowed range is determined by the precision and scale together: |value| < 10^(p-s). For decimal(5,2) that means |value| < 1000, i.e. at most 999.99.
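As a rough illustration of these cases under SQL Server defaults (the temp table below is hypothetical; note that SQL Server rounds, rather than truncates, extra decimal places when converting to a lower scale):

```sql
-- Hypothetical table with a decimal(5,2) column to walk through the cases above
CREATE TABLE #ScaleDemo (Val decimal(5,2));

INSERT INTO #ScaleDemo (Val) VALUES (123.45);   -- stored as 123.45 (fits exactly)
INSERT INTO #ScaleDemo (Val) VALUES (12.345);   -- stored as 12.35 (third decimal rounded away)
INSERT INTO #ScaleDemo (Val) VALUES (1234.56);  -- fails: 4 integer digits + 2 decimals need precision 6
INSERT INTO #ScaleDemo (Val) VALUES (123450);   -- fails: outside the range |value| < 10^(5-2)

SELECT Val FROM #ScaleDemo;  -- returns 123.45 and 12.35; the failed inserts add nothing
DROP TABLE #ScaleDemo;
```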
...