How do I use floating point number literals when using generic types?
Regular float literals do not work:
extern crate num_traits;
use num_traits::float::Float;

fn scale_float<T: Float>(x: T) -> T {
    x * 0.54
}

fn main() {
    let a: f64 = scale_float(1.23);
}
error[E0308]: mismatched types
 --> src/main.rs:6:9
  |
6 |     x * 0.54
  |         ^^^^ expected type parameter, found floating-point variable
  |
  = note: expected type `T`
             found type `{float}`
Solution 1:
The literal 0.54 has a concrete floating-point type, not T, so it must be converted to T before the multiplication. Use the FromPrimitive trait:
use num_traits::{cast::FromPrimitive, float::Float};

fn scale_float<T: Float + FromPrimitive>(x: T) -> T {
    // from_f64 returns Option<T>, so the conversion must be unwrapped
    x * T::from_f64(0.54).unwrap()
}
Or use the standard library From / Into traits:
fn scale_float<T>(x: T) -> T
where
    T: Float,
    f64: Into<T>,
{
    x * 0.54.into()
}
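Be aware that the f64: Into<T> bound restricts T in practice: f32 does not implement From<f64>, so here T can effectively only be f64, making the FromPrimitive version above the more general one. A sketch of a call that compiles under these bounds:

fn main() {
    // T = f64 satisfies f64: Into<f64> via the reflexive From impl
    let a: f64 = scale_float(1.23);
    println!("{}", a);
}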
See also:
- How do I use number literals with the Integer trait from the num crate?
Solution 2:
You can't create a Float from a literal directly. I suggest an approach similar to the FloatConst trait:
use num_traits::float::Float;

// Each concrete float type supplies its own copy of the constant
trait SomeDomainSpecificScaleFactor {
    fn factor() -> Self;
}

impl SomeDomainSpecificScaleFactor for f32 {
    fn factor() -> Self {
        0.54
    }
}

impl SomeDomainSpecificScaleFactor for f64 {
    fn factor() -> Self {
        0.54
    }
}

fn scale_float<T: Float + SomeDomainSpecificScaleFactor>(x: T) -> T {
    x * T::factor()
}
(link to playground)
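A minimal usage sketch (illustrative values only), showing that each concrete type picks up its own factor() implementation:

fn main() {
    let a: f64 = scale_float(1.23_f64); // uses the f64 impl
    let b: f32 = scale_float(1.23_f32); // uses the f32 impl
    println!("{} {}", a, b);
}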