Is Objects.requireNonNull less efficient than the old way?
Since JDK 7 I've been happily using the method it introduced to reject null values passed to a method that cannot accept them:
private void someMethod(SomeType pointer, SomeType anotherPointer) {
    Objects.requireNonNull(pointer, "pointer cannot be null!");
    Objects.requireNonNull(anotherPointer, "anotherPointer cannot be null!");
    // Rest of method
}
I think this method makes for very tidy code which is easy to read, and I'm trying to encourage colleagues to use it. But one (particularly knowledgeable) colleague is resistant, and says that the old way is more efficient:
private void someMethod(SomeType pointer, SomeType anotherPointer) {
    if (pointer == null) {
        throw new NullPointerException("pointer cannot be null!");
    }
    if (anotherPointer == null) {
        throw new NullPointerException("anotherPointer cannot be null!");
    }
    // Rest of method
}
He says that calling requireNonNull involves placing another method on the JVM call stack and will result in worse performance than a simple == null check.
So my question: is there any evidence of a performance penalty being incurred by using the Objects.requireNonNull methods?
Let's look at the implementation of requireNonNull in Oracle's JDK:
public static <T> T requireNonNull(T obj) {
    if (obj == null)
        throw new NullPointerException();
    return obj;
}
So that's very simple. The JVM (Oracle's, anyway) includes an optimizing two-stage just-in-time compiler to convert bytecode to machine code. It will inline trivial methods like this if it can get better performance that way.
So no, not likely to be slower, not in any meaningful way, not anywhere that would matter.
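If you want to see those inlining decisions for yourself, HotSpot can print them. Here is a minimal sketch (the class name, method, and loop count are mine, purely for illustration): run it with -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining and look for requireNonNull being reported as inlined into the caller once the loop gets hot.

import java.util.Objects;

public class InlineCheck {
    private static int length(String s) {
        // Same shape of check as in the question
        Objects.requireNonNull(s, "s cannot be null!");
        return s.length();
    }

    public static void main(String[] args) {
        int sum = 0;
        // Enough iterations to make the loop hot and trigger JIT compilation
        for (int i = 0; i < 1_000_000; i++) {
            sum += length("hello");
        }
        System.out.println(sum);
    }
}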
So my question: is there any evidence of a performance penalty being incurred by using the Objects.requireNonNull methods?
The only evidence that would matter would be performance measurements of your codebase, or of code designed to be highly representative of it. You can test this with any decent performance testing tool, but unless your colleague can point to a real-world example of a performance problem in your codebase related to this method (rather than a synthetic benchmark), I'd tend to assume the two of you have bigger fish to fry.
As a bit of an aside, I noticed your sample method is a private method. So only code your team is writing calls it directly. In those situations, you might look at whether you have a use case for assertions rather than runtime checks. Assertions have the advantage of not executing in "released" code at all, and thus being faster than either alternative in your question. Obviously there are places you need runtime checks, but those are usually at gatekeeping points, public methods and such. Just FWIW.
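For illustration, an assertion-based version of the check might look like this (note that assert throws AssertionError rather than NullPointerException, and only runs when the JVM is started with -ea / -enableassertions):

private void someMethod(SomeType pointer, SomeType anotherPointer) {
    // Evaluated only when assertions are enabled (java -ea ...);
    // with assertions disabled these lines cost nothing at runtime.
    assert pointer != null : "pointer cannot be null!";
    assert anotherPointer != null : "anotherPointer cannot be null!";
    // Rest of method
}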
Formally speaking, your colleague is right:
- If someMethod() or the corresponding trace is not hot enough, the bytecode is interpreted and an extra stack frame is created.
- If someMethod() is called at the 9th level of depth from the hot spot, the requireNonNull() calls shouldn't be inlined because of the MaxInlineLevel JVM option.
- If the method is not inlined for either of the above reasons, the argument by T.J. Crowder comes into play, if you use concatenation for producing the error message (a Supplier-based overload that avoids this is sketched below).
- Even if requireNonNull() is inlined, the JVM wastes time and space on performing the inlining.
On the other hand, there is the FreqInlineSize JVM option, which prohibits inlining methods that are too big (in bytecodes). The method's bytecodes are counted on their own, without accounting for the size of the methods it calls. Thus, extracting pieces of code into independent methods can sometimes be useful; in the example with requireNonNull(), this extraction has already been made for you.
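On the concatenation point above: since JDK 8 there is also an overload, Objects.requireNonNull(T, Supplier<String>), whose message is only built when the check actually fails. A sketch in the style of the question's method (the message text is just an example):

private void someMethod(SomeType pointer, SomeType anotherPointer) {
    // The lambda is evaluated only if pointer is null, so no string
    // concatenation happens on the common, non-null path.
    Objects.requireNonNull(pointer,
            () -> "pointer cannot be null! (anotherPointer was " + anotherPointer + ")");
    // Rest of method
}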
If you want evidence ... then the way to get it is to write a micro-benchmark.
(I recommend looking at the Caliper project first! Or JMH ... per Boris's recommendation. Either way, don't try to write a micro-benchmark from scratch. There are too many ways to get it wrong.)
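For what it's worth, a minimal JMH sketch of exactly this comparison could look like the following (class and method names are mine and purely illustrative; you would build it with the usual jmh-core and annotation-processor dependencies):

import java.util.Objects;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class NullCheckBenchmark {

    private Object o = new Object();   // never null on the measured path

    @Benchmark
    public Object manualCheck() {
        // The "old way": explicit check and throw
        if (o == null) {
            throw new NullPointerException("o cannot be null!");
        }
        return o;
    }

    @Benchmark
    public Object requireNonNull() {
        // The JDK 7 utility method
        return Objects.requireNonNull(o, "o cannot be null!");
    }
}

Returning the object from each benchmark method keeps the JIT from eliminating the check as dead code.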
However, you can tell your colleague two things:
The JIT compiler does a good job of inlining small method calls, and it is likely that this will happen in this case.
If it didn't inline the call, the chances are that the difference in performance would only be 3 to 5 instructions, and it is highly unlikely that it would make a significant difference.
Yes, there is evidence that the difference between a manual null check and Objects.requireNonNull() is negligible. OpenJDK committer Aleksey Shipilev created benchmarking code that proves this while fixing JDK-8073479; here are his conclusion and performance numbers:
TL;DR: Fear not, my little friends, use Objects.requireNonNull.
Stop using these obfuscating Object.getClass() checks,
those rely on non-related intrinsic performance, potentially
not available everywhere.
Runs are done on i5-4210U, 1.7 GHz, Linux x86_64, JDK 8u40 EA.
The explanations are derived from studying the generated code
("-prof perfasm" is your friend here), the disassembly is skipped
for brevity.
Out of box, C2 compiled:
Benchmark Mode Cnt Score Error Units
NullChecks.branch avgt 25 0.588 ± 0.015 ns/op
NullChecks.objectGetClass avgt 25 0.594 ± 0.009 ns/op
NullChecks.objectsNonNull avgt 25 0.598 ± 0.014 ns/op
Object.getClass() is intrinsified.
Objects.requireNonNull is perfectly inlined.
where branch, objectGetClass and objectsNonNull are defined as follows:
@Benchmark
public void objectGetClass() {
    o.getClass();
}

@Benchmark
public void objectsNonNull() {
    Objects.requireNonNull(o);
}

@Benchmark
public void branch() {
    if (o == null) {
        throw new NullPointerException();
    }
}
Your colleague is most likely wrong.
The JVM is very intelligent and will most likely inline the Objects.requireNonNull(...) method. Any performance difference is questionable, and there will definitely be much more significant optimizations at play than this one. You should use the utility method from the JDK.