Why is Ruby so much slower on Windows?
What are the specific technical causes of Ruby being so much slower on Windows? People report about a 3X speed drop compared to Linux/OS X, and there are some vague discussions about the Windows builds of Ruby using a compiler that produces slow code, but I can't find any specific details.
Anybody know the specifics? I'm not interested in hurf durf Windoze sucks yuk yuks.
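For anyone trying to quantify the gap rather than rely on reports, here is a minimal sketch (my own illustration, not from any answer) of a CPU-bound micro-benchmark that could be run unchanged on both platforms; it mainly exercises method dispatch, where interpreter speed shows up:

```ruby
require 'benchmark'

# Naive recursive Fibonacci: a CPU-bound workload that stresses
# method dispatch in the interpreter.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

elapsed = Benchmark.realtime { fib(25) }
puts format('fib(25) took %.3f s', elapsed)
```

Comparing the printed time from the same Ruby version on Linux and on Windows gives a like-for-like number instead of an anecdote.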
Solution 1:
I would guess there are a few possible causes, and they probably all add up:
- Because Ruby is mainly developed on Linux, it ends up being optimised for Linux as a matter of course. The code is regularly tested on Windows and everything works, but the net result is still that developers spend more time optimising for Linux than for Windows.
- In my experience, recent versions of gcc (4.3 and later) produce more efficient code than recent versions of Visual Studio (at least 2005). My tests included, in both cases, spending about a day finding the best code-optimisation options.
- Related to the first point: if you compile the same project with gcc for Windows and for Linux, I usually observe a performance drop of about 20% on Windows compared to Linux. Here again, I suppose this is because Linux (and Unices in general) is a primary target for gcc while Windows is a port; less time is spent optimising for Windows than for Linux.
In the end, if one wanted to optimise Ruby for Windows, a significant amount of time (and money; as far as I know, profilers on Windows don't come for free) would have to be spent with a profiler, optimising the bottlenecks. And everything would have to be re-tested on Linux to make sure there was no loss of performance there.
Of course, all of this should be tested again with the new YARV interpreter.
Solution 2:
I've not done much work with the source code of the YARV interpreter, so the following comments pertain only to the 1.8.6 MRI interpreter.
In the course of trying to write a C extension for Ruby in Visual Studio, I discovered to my horror that the downloadable Windows binaries of Ruby 1.8.6 are compiled using Visual C++ 6.0, which was released shortly after the end of the Second World War. Since then, compilers (and the processors they target) have advanced considerably. While the Linux builds get the latest gcc goodness, the Windows build limps along with last century's compiler technology. That's one reason. (Disclaimer: supposedly 1.9 is to be built with MinGW, of which I am not a fan, but which must still be better than VC6.)
Without knowing which operations in particular you find slower on Windows it's hard to comment further, but I will note that I found Ruby's I/O implementation to be considerably less performant for both network and local file I/O. I never delved into the implementation of the I/O primitives far enough to see why, but I assume it presumes that the fast I/O constructs on Linux are also the fast I/O constructs on Windows, which is almost never the case.
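A rough, illustrative sketch (my own, not from the answer above) of how one might compare the I/O layer across platforms: write and re-read a few megabytes through Ruby's `IO` and time each step. The file name prefix and sizes are arbitrary choices.

```ruby
require 'benchmark'
require 'tempfile'

data = 'x' * (1024 * 1024) # one 1 MB chunk

file = Tempfile.new('io_bench')
# Time writing 8 MB through Ruby's buffered IO layer.
write_time = Benchmark.realtime do
  8.times { file.write(data) }
  file.flush
end

# Time reading the whole file back in one call.
file.rewind
contents = nil
read_time = Benchmark.realtime { contents = file.read }
file.close
file.unlink

puts format('write: %.3f s, read: %.3f s', write_time, read_time)
```

Running the same script on Linux and on Windows would show whether the gap is on the write path, the read path, or both.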
Solution 3:
Not a complete answer to your question, but there was a great discussion on the Deep Fried Bytes podcast that covered the same question in the IronPython context. I understand your question pertains to Ruby, but some of the same issues may affect Ruby as well.
Also, the discussion does a good job of looking a bit deeper than "Windows sucks", so it's worth checking out.
Solution 4:
First, you need to distinguish between the older MRI interpreter (versions up to 1.8) and the newer YARV, which is the official interpreter for Ruby 1.9. Ruby 1.9 brings big performance improvements and a different design, so one needs to know which version you are talking about. I am guessing that what you've read refers to the 1.8.x line, which is the only one that has a one-click installer so far.
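A quick way to make that distinction explicit when reporting numbers is to print the interpreter details alongside any benchmark (the example platform strings in the comments are illustrative):

```ruby
# Identify the interpreter a benchmark was run on, so results can be
# attributed to MRI 1.8 vs YARV (Ruby 1.9+).
puts RUBY_VERSION   # e.g. "1.8.6" or "1.9.1"
puts RUBY_PLATFORM  # e.g. "i386-mswin32" for a VC-built Windows binary
# RUBY_ENGINE only exists from Ruby 1.9 onwards.
puts defined?(RUBY_ENGINE) ? RUBY_ENGINE : 'ruby (pre-1.9, no RUBY_ENGINE)'
```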
Also, it would be good to know if you are talking about Ruby on Rails performance or Ruby in general. I know that there should be a clear distinction between these two, but because Ruby on Rails is the main use of Ruby, people often talk about its performance as if they were speaking about Ruby's performance.
As for the compiler, Ruby can be built using any recent version of Visual Studio, which is more than fine. I guess that if such a performance difference does exist, one should look at the implementation of the interpreter and see if there is something that would make a difference between a POSIX and a Windows system.
Solution 5:
In general the performance drop is not 300%; it is closer to 50%-100%. Casual Jim's explanation is one of the reasons data-processing scripts are slower on Windows than on Unix variants and Linux.
In the more general case, the only thing I can think of is that Ruby development is Linux-centred, which has led to many Unix-isms in the way Ruby was built. Also, since most active developers are not Windows users, very little Windows optimisation expertise is present in the team, and most performance-optimisation decisions are focused on making things faster on Unix systems.
A specific example of this is that Ruby uses copy-on-write semantics when passing parameters, which, according to what I've read, can't be done properly on Windows, adding a lot of overhead to method calls.
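If that claim were true, per-call overhead should grow with the number of arguments, which is something one could probe directly. A hypothetical micro-benchmark sketch (method names are mine, purely illustrative):

```ruby
require 'benchmark'

# Isolate method-call overhead by comparing zero-argument calls
# against calls that pass several arguments. If argument passing
# were costlier on one platform, the gap between t0 and t3 should
# widen on that platform.
def no_args; nil; end
def three_args(a, b, c); nil; end

n = 200_000
t0 = Benchmark.realtime { n.times { no_args } }
t3 = Benchmark.realtime { n.times { three_args(1, 'two', :three) } }

puts format('%d calls -- no args: %.3f s, three args: %.3f s', n, t0, t3)
```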
I can't figure out, though, what Casual Jim did to deserve the -8 vote.