Javascript Performance: While vs For Loops
The other day during a tech interview, one of the questions asked was "how can you optimize Javascript code?"
To my own surprise, he told me that while loops were usually faster than for loops.
Is that even true? And if yes, why is that?
You should have countered that a negative while loop would be even faster! See: JavaScript loop performance - Why is to decrement the iterator toward 0 faster than incrementing.
In while versus for, these two sources document the speed phenomenon pretty well by running various loops in different browsers and comparing the results in milliseconds:
https://blogs.oracle.com/greimer/entry/best_way_to_code_a and:
http://www.stoimen.com/blog/2012/01/24/javascript-performance-for-vs-while/.
Conceptually, a for loop is basically a packaged while loop that is specifically geared towards incrementing or decrementing (progressing over the logic according to some order or some length). For example,
for (let k = 0; k < 20; ++k) {…}
can be sped up by making it a negative while loop:
var k = 20;
while (--k) {…}
and as you can see from the measurements in the links above, the time saved really does add up for very large numbers.
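If you'd rather see the difference yourself than trust the links, a rough timing sketch like the one below can be pasted into a browser console or Node. The 1e8 iteration count and the labels are just my own choices for illustration, and the absolute numbers will vary wildly by engine and machine:

var N = 1e8; // iteration count chosen arbitrarily for illustration
var t0 = Date.now();
for (var i = 0; i < N; i++) { /* loop body would go here */ }
console.log('for loop: ' + (Date.now() - t0) + ' ms');
var t1 = Date.now();
var j = N;
while (--j) { /* loop body would go here */ }
console.log('negative while loop: ' + (Date.now() - t1) + ' ms');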
While this is a great answer as far as minute measurements of speed and efficiency go, I'd have to defer to @Pointy's original statement.
The right answer would have been that it's generally pointless to worry about such minutiae, since any effort you put into such optimizations could be rendered a complete waste by the next checkin to V8 or SpiderMonkey.
Since Javascript is executed client side, and originally had to be coded per browser for full cross-browser compatibility (back before ECMA was even involved it was worse), the speed difference may not even be a meaningful answer at this point, given how significantly browsers and their compiler engines have optimized and adopted Javascript.
We're not even talking about non-strict, script-only environments such as applications in GAS, so while the answers and questions are fun, they would most likely be more trivial than useful in real-world application.
To expound on this topic you first need to understand where it originally comes from: compiling vs interpreting. Let's take a brief history of the evolution of languages and then jump back to compiling vs interpreting. While not required reading, you can just read Compiling vs Interpreting for the quick answer, but for in-depth understanding I'd recommend reading through both Compiling vs Interpreting and the Evolution of Programming (showing how they are applied today).
COMPILING VS INTERPRETING
Compiled language coding is a method of programming in which you write your code in a form a compiler understands; some of the more recognized languages today are Java, C++ and C#. These languages are written with the intent that a compiler program then translates the code into the machine code or bytecode used by your target machine.
Interpreted code is code that is processed Just In Time (JIT) at the time of execution without being compiled first; it skips that step and allows for quicker writing, debugging, additions/changes, etc. It also never stores the script's interpretation for future use; it re-interprets the script each time a method is called. The interpreted code is run within a defined and intended runtime environment (for Javascript that is usually a browser), which, once it has interpreted the script, outputs the desired result. Interpreted scripts are never meant to be stand-alone software; they are always looking to plug into a valid runtime environment to be interpreted. This is why a script is not an executable: it will never communicate directly with an operating system. If you look at the system processes occurring, you'll never see your script being processed; instead you see the program being processed, which is processing your script in its runtime environment.
So writing a hello script in Javascript means that the browser interprets the code and defines what hello is, and while this occurs the browser is translating that code back down to machine-level code, saying "I have this script and my environment wants to display the word hello," so the machine then processes that into a visual representation of your script. It's a constant process, which is why you have processors in computers and a constant action of processing occurring on the system. Nothing is ever static; processes are constantly being performed no matter the situation.
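To make that concrete, a hello script really can be a single statement handed to the browser's runtime environment; a minimal sketch (nothing more than this is needed) might be:

// Minimal "hello" script: the browser's engine processes this statement and the
// rendering pipeline turns the result into the visible word on screen.
document.body.textContent = 'hello';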
Compilers usually compile the code into a defined bytecode system, or machine code language, that is now a static version of your code. It will not be re-interpreted by the machine unless the source code is recompiled. This is why you see a runtime error appear after compilation, which a programmer then has to debug in the source and recompile. Interpreter-intended scripts (like Javascript or PHP) are simply instructions that are not compiled before being run, so the source code is easily edited and fixed without the need for additional compiling steps, as the compilation is done in real time.
Not All Compiled Code is Created Equal
An easy way to illustrate this is video game systems: the Playstation vs the Xbox. Xbox systems are built to support the .net framework to optimize coding and development. C# utilizes this framework in conjunction with a Common Language Runtime in order to compile the code into bytecode. Bytecode is not compiled code in the strict sense; it's an intermediate step placed in the process that allows code to be written more quickly and on a grander scale for programs, and that is then interpreted when the code is executed at runtime using, you guessed it, Just In Time (JIT) compilation. The difference is that this code is only interpreted once; once compiled, the program will not re-interpret that code again unless it is restarted.
Interpreted script languages will never compile the code, so a function in an interpreted script is constantly being re-processed while a compiled bytecode's function is interpreted once and the instructions are stored until the program's runtime is stopped. The benefit is that the bytecode can be ported to another machine's architecture provided you have the necessary resources in place. This is why you have to install .net and possibly updates and frameworks to your system in order for a program to work correctly.
The Playstation does not use a .net framework for its machine; you will need to code in C++. C++ is meant to be compiled and assembled for a particular system architecture. The code will never be interpreted and will need to be exactly correct in order to run. You can never easily move this type of language the way you can an intermediate language. It's made specifically for that machine's architecture and will never be interpreted otherwise.
So you see, even compiled languages are not inherently finalized, fully compiled products. Compiled languages are meant, in the strict definition, to be compiled fully for use. Interpreted languages are meant to be interpreted by a program; they are also the most portable languages in programming because they only need a program installed that understands the script, but they also use the most resources because they are constantly being re-interpreted. Intermediate languages (such as Java and C#) are hybrids of these two, compiling in part but also requiring outside resources in order to be functional. Once run, they compile again, which is a one-time interpretation while in runtime.
Evolution of Programming
Machine Code
The lowest form of coding, this code is strictly binary in its representation (I won't get into ternary computation, as it's largely theoretical and beyond the practical scope of this discussion). Computers understand the natural values: on/off, true/false. This is machine-level numerical code, which is different from the next level, assembly code.
Assembly Code
The next level up is assembly language. This is the first point at which a language is translated to be used by a machine. This code is made up of mnemonics, symbols and operands that are then sent to the machine as machine-level code. This is important to understand, because when you first start programming most people assume it's either/or: either I compile or I interpret. No coding language beyond low-level machine code consists of compile-only instructions or interpret-only instructions!
We went over this in "Not All Compiled Code is Created Equal"; assembly language is the first instance of it. Machine code is what the machine reads, but assembly language is what a human can read. As computers process faster, through better technological advancements, our lower-level languages become more condensed and no longer need to be written by hand. Assembly language used to be the high-level coding language, as it was the quicker method of coding a machine. It was essentially a syntax language that, once assembled (the lowest form of compiling), converted directly to machine language. An assembler is a compiler, but not all compilers are assemblers.
High Level Coding
High-level coding languages are languages one step above assembly, and there may even be a level above them (this would be bytecode/intermediate languages). These languages are compiled from their defined syntax structure into either the machine code needed, the bytecode to be interpreted, or a hybrid of the two combined with a special compiler that allows assembly to be written inline. High-level coding, like its predecessor assembly, is meant to reduce the workload of the developer and remove any chance of critical errors in redundant tasks, like the building of executable programs. In today's world you will rarely see a developer work in assembly just to crunch data for the benefit of size alone. More often, a developer may have a situation, like in video game console development, where they need a speed increase in the process. Because high-level compilers are tools that seek to ease the development process, they may not compile the code in the most efficient manner for that system architecture 100% of the time. In that case assembly code would be written to maximize the system's resources. But you'll never see a person writing in machine code, unless you just meet a weirdo.
THE SUMMARY
If you made it this far, congratulations! You just listened to more about this stuff in one sitting than my wife could in a lifetime. The OP's question was about the performance of while vs for loops. The reason this is a moot point by today's standards is two-fold.
Reason One
The days of interpreting Javascript are gone. All major browsers (yes, even Opera and Netscape) utilize a Javascript engine that compiles the script before executing it. The performance tweaks discussed by JS developers in terms of non-call-out methods are obsolete areas of study when looking at native functions within the language. The code is already compiled and optimized before ever being a part of the DOM. It's not interpreted again while that page is up, because that page is the runtime environment. Javascript has really become an intermediate language more than an interpreted script. The reason it will never be called an intermediate scripting language is that Javascript is never compiled ahead of time. That's the only reason. Beyond that, its function in a browser environment is a miniature version of what happens with bytecode.
Reason Two
The chances of you writing a script, or library of scripts, that would ever require as much processing power as a desktop application on a website are almost nil. Why? Because Javascript was never created with the intent to be an all-encompassing language. Its creation was simply to provide a medium-level programming method that would allow processes to be done that weren't provided by HTML and CSS, while alleviating the development struggles of requiring dedicated high-level coding languages, specifically Java.
CSS and JS were not supported for most of the early ages of web development. Until around 1997 CSS was not a safe integration, and JS fought even longer. Everything besides HTML is a supplemental language in the web world.
HTML is specifically the building blocks of a site. You'd never write javascript to fully frame a website; at most you'd do DOM manipulation while building a site.
You'd never style your site in JS as it's just not practical. CSS handles that process.
You'd never store data, except temporarily, using Javascript. You'd use a database.
So what are we left with then? Increasingly just functions and processes. CSS3 and its future iterations are going to take all methods of styling away from Javascript. You see that already with animations and pseudo-states (hover, active, etc.).
The only valid argument for optimization of code in Javascript at this point is for badly written functions, methods and operations that could be helped by optimization of the user's formula/code pattern. As long as you learn proper and efficient coding patterns, Javascript, in today's age, suffers no loss of performance from its native functions.
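For instance (a rough sketch with made-up sample data, not a benchmark), the kind of badly written pattern worth fixing is invariant work repeated inside the loop body; fixing that matters far more than choosing for over while:

var items = ['a', 'b', 'c']; // sample data for illustration
// Wasteful pattern: recompiles the same regular expression on every pass.
for (var i = 0; i < items.length; i++) {
  var pattern = new RegExp('^[a-z]+$');
  console.log(pattern.test(items[i]));
}
// Better pattern: hoist the invariant work out of the loop and cache the length.
var pattern = new RegExp('^[a-z]+$');
for (var i = 0, len = items.length; i < len; i++) {
  console.log(pattern.test(items[i]));
}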
for (var k = 0; k < 20; ++k) { ... }
can be sped up by making it a negative while loop:
var k = 20; while(--k){ ... };
A more accurate test would be to use for to the same extent as while. The only difference is that using for loops offers more description. If we wanted to be super crazy we could just forgo the entire header:
var k = 0;
for (;;) { if (++k >= 20) break; /* do stuff until the break */ }
// or we could do everything in the header
for (var i = 1, d = i * 2, f = Math.pow(d, i); f < 1E9; i++, d = i * 2, f = Math.pow(d, i)) { console.log(f); }
Either way...in NodeJS v0.10.38 I'm handling a JavaScript loop of 10^9 iterations in a quarter second, with for being on average about 13% faster. But that really has no effect on my future decisions about which loop to use or the amount I choose to describe in a loop.
> t=Date.now();i=1E9;
> while(i){--i;b=i+1}console.log(Date.now()-t);
292
> t=Date.now();i=1E9;
> while(--i){b=i+1}console.log(Date.now()-t);
285
> t=Date.now();i=1E9;
> for(;i>0;--i){b=i+1}console.log(Date.now()-t);
265
> t=Date.now();i=1E9;
> for(;i>0;){--i;b=i+1}console.log(Date.now()-t);
246
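If you want to re-run those comparisons yourself, wrapping each variant in a small helper keeps the measurements consistent; the name time here is just my own shorthand, not a built-in, and it's worth running each variant a few times so the engine has warmed up:

function time(label, fn) { // hypothetical helper for repeatable timings
  var t = Date.now();
  fn();
  console.log(label + ': ' + (Date.now() - t) + ' ms');
}
time('while, decrement in body', function () {
  var i = 1E9, b;
  while (i) { --i; b = i + 1; }
});
time('for, decrement in header', function () {
  var b;
  for (var i = 1E9; i > 0; --i) { b = i + 1; }
});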