How do you generate a random double uniformly distributed between 0 and 1 from C++?
Solution 1:
An old school solution like:
double X=((double)rand()/(double)RAND_MAX);
Should meet all your criteria (portable, standard and fast). Obviously the random number generator has to be seeded; the standard procedure is something like:
srand((unsigned)time(NULL));
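For reference, a minimal complete program built from those two lines might look like the following (the loop count and variable names are arbitrary):
#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    srand((unsigned)time(NULL));                       // seed once, at program start
    for (int i = 0; i < 5; ++i) {
        double X = (double)rand() / (double)RAND_MAX;  // value in [0,1]; RAND_MAX itself is reachable
        std::cout << X << '\n';
    }
}
Note that this expression can return exactly 1.0; the rand()-based snippet further below shows the usual trick for a half-open [0,1) range.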
Solution 2:
In C++11 and C++14 we have much better options with the random header. The presentation rand() Considered Harmful by Stephan T. Lavavej explains why we should eschew the use of rand() in C++ in favor of the random header, and N3924: Discouraging rand() in C++14 further reinforces this point.
The example below is a modified version of the sample code on the cppreference site and uses the std::mersenne_twister_engine engine and std::uniform_real_distribution, which generates numbers in the [0,1) range (see it live):
#include <cmath>     // std::round
#include <iostream>
#include <iomanip>
#include <map>
#include <random>

int main()
{
    std::random_device rd;
    std::mt19937 e2(rd());
    std::uniform_real_distribution<> dist(0, 1);

    std::map<int, int> hist;
    for (int n = 0; n < 10000; ++n) {
        ++hist[std::round(dist(e2))];
    }

    for (auto p : hist) {
        std::cout << std::fixed << std::setprecision(1) << std::setw(2)
                  << p.first << ' ' << std::string(p.second/200, '*') << '\n';
    }
}
The output will be similar to the following:
0 ************************
1 *************************
Since the post mentioned that speed was important, we should consider the cppreference section that describes the different random number engines (emphasis mine):
The choice of which engine to use involves a number of tradeoffs: the **linear congruential engine is moderately fast and has a very small storage requirement for state**. The lagged Fibonacci generators are very fast even on processors without advanced arithmetic instruction sets, at the expense of greater state storage and sometimes less desirable spectral characteristics. The Mersenne twister is slower and has greater state storage requirements but with the right parameters has the longest non-repeating sequence with the most desirable spectral characteristics (for a given definition of desirable).
So if there is a desire for a faster generator, perhaps ranlux24_base or ranlux48_base would be better choices than mt19937.
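Swapping engines is a one-line change in the program above; a sketch using std::ranlux24_base (printing raw values rather than a histogram) could look like this:
#include <iostream>
#include <random>

int main()
{
    std::random_device rd;
    std::ranlux24_base e2(rd());                 // drop-in replacement for std::mt19937
    std::uniform_real_distribution<> dist(0, 1);

    for (int n = 0; n < 5; ++n) {
        std::cout << dist(e2) << '\n';           // uniform doubles in [0,1)
    }
}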
rand()
If you are forced to use rand(), then the C FAQ entry How can I generate floating-point random numbers? gives us an example similar to this for generating a random number on the interval [0,1):
#include <stdlib.h>

double randZeroToOne()
{
    return rand() / (RAND_MAX + 1.);
}
and to generate a random number in the range [M,N):
double randMToN(double M, double N)
{
    return M + (rand() / (RAND_MAX / (N - M)));
}
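A minimal usage sketch for the two helpers (the prototypes below simply refer to the functions defined above; seed once with srand as in Solution 1):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

double randZeroToOne(void);           /* defined above */
double randMToN(double M, double N);  /* defined above */

int main(void)
{
    srand((unsigned)time(NULL));      /* seed once */
    printf("[0,1):  %f\n", randZeroToOne());
    printf("[5,10): %f\n", randMToN(5.0, 10.0));
    return 0;
}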
Solution 3:
The uniform_real class from the Boost Random library is what you need (in current Boost it is boost::random::uniform_real_distribution).
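A minimal sketch of how that might look with current Boost (the seed value and variable names are arbitrary):
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_real_distribution.hpp>
#include <iostream>

int main()
{
    boost::random::mt19937 gen(12345u);                            // any seed
    boost::random::uniform_real_distribution<double> dist(0.0, 1.0);

    for (int i = 0; i < 5; ++i) {
        std::cout << dist(gen) << '\n';                            // doubles in [0,1)
    }
}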
Solution 4:
The C++11 standard library contains a decent framework and a couple of serviceable generators, which is perfectly sufficient for homework assignments and off-the-cuff use.
However, for production-grade code you should know exactly what the specific properties of the various generators are before you use them, since all of them have their caveats. Also, none of them passes standard tests for PRNGs like TestU01, except for the ranlux generators if used with a generous luxury factor.
If you want solid, repeatable results then you have to bring your own generator.
If you want portability then you have to bring your own generator.
If you can live with restricted portability then you can use boost, or the C++11 framework in conjunction with your own generator(s).
More detail - including code for a simple yet fast generator of excellent quality and copious links - can be found in my answers to similar topics:
- General purpose random number generation
- Very fast uniform distribution random number generator
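To illustrate the earlier point about using the C++11 framework in conjunction with your own generator, here is a minimal sketch: a SplitMix64-style engine (the constants are the published SplitMix64 ones) wrapped so it satisfies the UniformRandomBitGenerator requirements and can drive the standard distributions. This wrapper is an illustration, not one of the vetted generators discussed in the linked answers.
#include <cstdint>
#include <iostream>
#include <random>

struct splitmix64
{
    using result_type = std::uint64_t;
    std::uint64_t state;

    explicit splitmix64(std::uint64_t seed = 0x9E3779B97F4A7C15ull) : state(seed) {}

    static constexpr result_type min() { return 0; }
    static constexpr result_type max() { return UINT64_MAX; }

    result_type operator()()
    {
        std::uint64_t z = (state += 0x9E3779B97F4A7C15ull);
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBull;
        return z ^ (z >> 31);
    }
};

int main()
{
    splitmix64 gen(42);                                  // your own engine
    std::uniform_real_distribution<double> dist(0.0, 1.0);

    for (int i = 0; i < 5; ++i) {
        std::cout << dist(gen) << '\n';                  // doubles in [0,1)
    }
}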
For professional uniform floating-point deviates there are two more issues to consider:
- open vs. half-open vs. closed range, i.e. (0,1), [0,1) or [0,1]
- method of conversion from integral to floating-point (precision, speed)
Both are actually two sides of the same coin, as the method of conversion takes care of the inclusion/exclusion of 0 and 1. Here are three different methods for the half-open interval:
#include <cstdint>   // int64_t, used in method c

// exact values computed with bc
#define POW2_M32 2.3283064365386962890625e-010             // 2^-32
#define POW2_M64 5.421010862427522170037264004349e-020     // 2^-64

// method a: built from two 32-bit draws
double random_double_a ()
{
    double lo = random_uint32() * POW2_M64;   // low word scaled by 2^-64
    return lo + random_uint32() * POW2_M32;   // high word scaled by 2^-32
}

// method b: one 64-bit draw scaled by 2^-64
double random_double_b ()
{
    return random_uint64() * POW2_M64;
}

// method c: signed conversion, then shifted back into [0,1)
double random_double_c ()
{
    return int64_t(random_uint64()) * POW2_M64 + 0.5;
}
(random_uint32() and random_uint64() are placeholders for your actual functions and would normally be passed as template parameters.)
Method a demonstrates how to create a uniform deviate that is not biased by excess precision for lower values; the 64-bit version is not shown because it is simpler and just involves masking off 11 bits. The distribution is uniform for all functions, but without this trick there would be more distinct values in the area closer to 0 than elsewhere (finer grid spacing due to the varying ulp).
Method c shows how to get a uniform deviate faster on certain popular platforms where the FPU knows only a signed 64-bit integral type. What you see most often is method b, but there the compiler has to generate lots of extra code under the hood to preserve the unsigned semantics.
Mix and match these principles to create your own tailored solution.
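As one illustration of mixing these ingredients (my own sketch, not taken from the paper cited below): a deviate on the open interval (0,1) can be produced by keeping only 52 random bits, so that the half-offset grid is exactly representable in a double:
// 2^-52, exact
#define POW2_M52 2.220446049250313080847263336181640625e-016

double random_double_open ()
{
    // 52 bits plus the 0.5 offset fit a double's 53-bit mantissa exactly;
    // the result lies on a uniform grid strictly inside (0,1)
    return ((random_uint64() >> 12) + 0.5) * POW2_M52;
}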
All this is explained in Jürgen Doornik's excellent paper Conversion of High-Period Random Numbers to Floating Point.
Solution 5:
Here's how you'd do it if you were using C++ TR1.
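A minimal sketch of the TR1 approach (assuming a pre-C++11 compiler that ships TR1; the header is typically <tr1/random> with GCC/Clang, while MSVC provides the same classes in <random>):
#include <tr1/random>   // <random> on MSVC; the classes live in std::tr1
#include <ctime>
#include <iostream>

int main()
{
    std::tr1::mt19937 eng(static_cast<unsigned long>(std::time(0)));
    std::tr1::uniform_real<double> dist(0.0, 1.0);
    std::tr1::variate_generator<std::tr1::mt19937&, std::tr1::uniform_real<double> >
        rnd(eng, dist);                       // binds engine and distribution

    for (int i = 0; i < 5; ++i) {
        std::cout << rnd() << '\n';           // doubles in [0,1)
    }
}
In C++11 and later, prefer the <random> approach from Solution 2; TR1 is only relevant for older toolchains.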