Why does Java read a big file faster than C++?
I have a 2 GB file (inputfile.txt) in which every line is a single word, like:
apple
red
beautiful
smell
spark
input
I need to write a program that reads every word in the file and prints the word count. I wrote one in Java and one in C++, but the result is surprising: Java runs 2.3 times faster than C++. My code is as follows:
C++:
#include <cstdio>
#include <ctime>
#include <fstream>
#include <iostream>
#include <string>
using namespace std;

#define NANO 1000000000.0   // nanoseconds per second

int main() {
    struct timespec ts, te;
    double cost;
    clock_gettime(CLOCK_REALTIME, &ts);

    ifstream fin("inputfile.txt");
    string word;
    int count = 0;
    while (fin >> word) {
        count++;
    }
    cout << count << endl;

    clock_gettime(CLOCK_REALTIME, &te);
    cost = te.tv_sec - ts.tv_sec + (double)(te.tv_nsec - ts.tv_nsec) / NANO;
    printf("Run time: %-15.10f s\n", cost);
    return 0;
}
Output:
5e+08
Run time: 69.311 s
Java:
import java.io.BufferedReader;
import java.io.FileReader;

public class WordCount {
    public static void main(String[] args) throws Exception {
        long startTime = System.currentTimeMillis();
        FileReader reader = new FileReader("inputfile.txt");
        BufferedReader br = new BufferedReader(reader);
        String str = null;
        int count = 0;
        while ((str = br.readLine()) != null) {
            count++;
        }
        System.out.println(count);
        long endTime = System.currentTimeMillis();
        System.out.println("Run time: " + (endTime - startTime) / 1000 + "s");
    }
}
Output:
5.0E8
Run time: 29 s
Why is Java faster than C++ in this situation, and how do I improve the performance of C++?
You aren't comparing the same thing. The Java program reads lines, splitting only on newlines, while the C++ program reads whitespace-delimited "words", which is a little extra work.
Try istream::getline.
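Since the Java version counts lines, the closer C++ equivalent is a getline loop. A minimal sketch (using the free-function std::getline from <string>; file name as in the question):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream fin("inputfile.txt");
    std::string line;
    long count = 0;
    // getline only scans for '\n'; it skips the locale-aware
    // whitespace classification that operator>> applies per character.
    while (std::getline(fin, line)) {
        count++;
    }
    std::cout << count << '\n';
    return 0;
}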
Later
You might also try an elementary read operation: read into a byte array and scan it for newlines.
Even later
On my old Linux notebook, jdk1.7.0_21 and (don't tell me it's old) gcc 4.3.3 take about the same time, comparing with C++ getline. (We have established that reading words is slower.) There isn't much difference between -O0 and -O2, which doesn't surprise me, given the simplicity of the code in the loop.
Last note
As I suggested, fin.read(buffer, LEN) with LEN = 1 MB and using memchr to scan for '\n' results in another speed improvement of about 20%, which makes C (there isn't any C++ left by now) faster than Java.
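A sketch of that block-read variant, assuming the 1 MB buffer mentioned above (the loop shape and variable names are mine, not necessarily the answerer's exact code):

#include <cstring>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    std::ifstream fin("inputfile.txt", std::ios::binary);
    const std::size_t LEN = 1 << 20;   // 1 MB read buffer
    std::vector<char> buffer(LEN);
    long count = 0;
    // Keep reading while full blocks arrive, and handle the final
    // partial block (the stream has failed, but gcount() > 0).
    while (fin.read(buffer.data(), LEN) || fin.gcount() > 0) {
        const char* p = buffer.data();
        const char* end = p + fin.gcount();
        // memchr scans the raw bytes for '\n' with no per-character
        // stream machinery at all.
        while ((p = static_cast<const char*>(std::memchr(p, '\n', end - p))) != nullptr) {
            count++;
            p++;   // resume just past the newline we found
        }
    }
    std::cout << count << '\n';   // counts '\n' bytes, i.e. complete lines
    return 0;
}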
There are a number of significant differences in the way the languages handle I/O, all of which can make a difference, one way or another.
Perhaps the first (and most important) question is how the data in the text file is encoded. If it is single-byte characters (ISO 8859-1 or UTF-8), then Java has to convert it into UTF-16 before processing; depending on the locale, C++ may (or may not) also convert or do some additional checking.
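If you want to take locale conversion out of the picture on the C++ side, one experiment (not something the question's code did) is to imbue the stream with the classic "C" locale before opening the file:

#include <fstream>
#include <iostream>
#include <locale>
#include <string>

int main() {
    std::ifstream fin;
    // Imbue before open() so the trivial codecvt of the "C" locale
    // is used for the whole read; no locale-dependent conversion.
    fin.imbue(std::locale::classic());
    fin.open("inputfile.txt");

    std::string word;
    long count = 0;
    while (fin >> word)
        count++;
    std::cout << count << '\n';
    return 0;
}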
As has been pointed out (partially, at least), in C++, >> uses a locale-specific isspace, while getline will simply compare for '\n', which is probably faster. (Typical implementations of isspace use a bitmap, which means an additional memory access for each character.)
Optimization levels and specific library implementations may also vary. It's not unusual in C++ for one library implementation to be 2 or 3 times faster than another.
Finally, a most significant difference: C++ distinguishes between text files and binary files. You've opened the file in text mode; this means that it will be "preprocessed" at the lowest level, before even the extraction operators see it. This depends on the platform: on Unix platforms, the "preprocessing" is a no-op; on Windows, it will convert CRLF pairs into '\n', which will have a definite impact on performance. If I recall correctly (I've not used Java for some years), Java expects higher-level functions to handle this, so functions like readLine will be slightly more complicated. Just guessing here, but I suspect that the additional logic at the higher level costs less in runtime than the buffer preprocessing at the lower level. (If you are testing under Windows, you might experiment with opening the file in binary mode in C++. This should make no difference in the behavior of the program when you use >>; any extra CR will be considered whitespace. With getline, you'll have to add logic to remove any trailing '\r' yourself.)
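A sketch of that binary-mode experiment with getline, including the trailing-'\r' handling described above:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Binary mode bypasses the CRLF -> '\n' translation layer on Windows.
    std::ifstream fin("inputfile.txt", std::ios::binary);
    std::string line;
    long count = 0;
    while (std::getline(fin, line)) {
        // In binary mode a CRLF file leaves '\r' at the end of each
        // line, so it has to be stripped by hand.
        if (!line.empty() && line.back() == '\r')
            line.pop_back();
        count++;
    }
    std::cout << count << '\n';
    return 0;
}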
I would suspect that the main difference is that java.io.BufferedReader performs better than std::ifstream because it buffers while the ifstream does not: the BufferedReader reads large chunks of the file in advance and hands them to your program from RAM when you call readLine(), while the std::ifstream only reads a few bytes at a time when you prompt it by calling the >> operator.
Sequential access of large amounts of data from the hard drive is usually much faster than accessing many small chunks one at a time.
A fairer comparison would be to compare std::ifstream to the unbuffered java.io.FileReader.
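One way to test the buffering hypothesis from the C++ side is to hand the stream a much larger buffer via pubsetbuf. Note that the standard leaves it implementation-defined whether (and how) the request is honored, and it has to be made before the file is opened, so treat this as an experiment rather than a guaranteed fix:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<char> buf(1 << 20);   // ask for a 1 MB stream buffer
    std::ifstream fin;
    // pubsetbuf must be called before open(); what later calls do
    // is implementation-defined at best.
    fin.rdbuf()->pubsetbuf(buf.data(), buf.size());
    fin.open("inputfile.txt");

    std::string word;
    long count = 0;
    while (fin >> word)
        count++;
    std::cout << count << '\n';
    return 0;
}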
I am no expert in C++, but at least the following factors can affect performance here:
1. OS-level caching of the file.
2. In Java you are using a buffered reader, whose buffer defaults to 8192 characters. I am not sure how C++ streams handle this.
3. The file is so big that the JIT will probably kick in, and it probably compiles the Java bytecode better than a C++ build with no optimization turned on.
Since I/O is the dominant cost here, I would guess that 1 and 2 are the main reasons.