Is it better to use System.arraycopy(...) than a for loop for copying arrays?
I want to create a new array of objects by joining two smaller arrays.
Neither array can be null, but either may have length 0.
I can't choose between these two approaches: are they equivalent, or is one more efficient (for example, does System.arraycopy() copy whole chunks of memory)?
MyObject[] things = new MyObject[publicThings.length+privateThings.length];
System.arraycopy(publicThings, 0, things, 0, publicThings.length);
System.arraycopy(privateThings, 0, things, publicThings.length, privateThings.length);
or
MyObject[] things = new MyObject[publicThings.length+privateThings.length];
for (int i = 0; i < things.length; i++) {
    if (i < publicThings.length) {
        things[i] = publicThings[i];
    } else {
        things[i] = privateThings[i - publicThings.length];
    }
}
Is the only difference the look of the code?
EDIT: thanks for the linked question, but the discussion there seems unresolved:
is it truly faster only for native types
(byte[], Object[], char[])? In all other cases a type check is executed for each element, which would apply in my case, so the two would be equivalent... no?
On another linked question, they say that size matters a lot:
for sizes above 24, System.arraycopy() wins; for sizes smaller than 10, a manual for loop is better...
Now I'm really confused.
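These thresholds are easy to sanity-check on your own JVM. Below is a rough timing sketch; note it is only a sketch, since a single pass without JIT warm-up is noisy (a harness like JMH would be the proper tool), but it lets you plug in your own array type and size:

```java
import java.util.Arrays;

public class CopyTiming {
    public static void main(String[] args) {
        // Use a reference type to include the array-store checks discussed above.
        Integer[] src = new Integer[1000];
        Arrays.fill(src, 42);
        Integer[] dst = new Integer[src.length];

        long t0 = System.nanoTime();
        System.arraycopy(src, 0, dst, 0, src.length);
        long t1 = System.nanoTime();

        // Manual element-by-element copy for comparison.
        for (int i = 0; i < src.length; i++) {
            dst[i] = src[i];
        }
        long t2 = System.nanoTime();

        System.out.println("arraycopy: " + (t1 - t0) + " ns, loop: " + (t2 - t1) + " ns");
    }
}
```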
Solution 1:
public void testHardCopyBytes()
{
    byte[] bytes = new byte[0x5000000]; /* ~83 MB buffer */
    byte[] out = new byte[bytes.length];
    for (int i = 0; i < out.length; i++)
    {
        out[i] = bytes[i];
    }
}

public void testArrayCopyBytes()
{
    byte[] bytes = new byte[0x5000000]; /* ~83 MB buffer */
    byte[] out = new byte[bytes.length];
    System.arraycopy(bytes, 0, out, 0, out.length);
}
I know JUnit tests aren't really the best for benchmarking, but
testHardCopyBytes took 0.157s to complete
and
testArrayCopyBytes took 0.086s to complete.
I think it depends on the virtual machine, but it looks as if System.arraycopy copies blocks of memory instead of copying single array elements, which would definitely improve performance.
EDIT:
It looks like System.arraycopy's performance is all over the place.
When Strings are used instead of bytes, and arrays are small (size 10),
I get these results:
String HC: 60306 ns
String AC: 4812 ns
byte HC: 4490 ns
byte AC: 9945 ns
Here is what it looks like when arrays are at size 0x1000000. It looks like System.arraycopy definitely wins with larger arrays.
Strs HC: 51730575 ns
Strs AC: 24033154 ns
Bytes HC: 28521827 ns
Bytes AC: 5264961 ns
How peculiar!
Thanks, Daren, for pointing out that references copy differently. It made this a much more interesting problem!
Solution 2:
Arrays.copyOf(T[], int)
is easier to read.
Internally it uses System.arraycopy(),
which is a native call.
You can't get it faster!
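For the concatenation case in the question, Arrays.copyOf can handle the first half and a single System.arraycopy the second. A minimal sketch (the helper name concat is hypothetical, not a standard API):

```java
import java.util.Arrays;

public class ConcatDemo {
    // Concatenates two arrays of the same component type into a new array.
    static <T> T[] concat(T[] first, T[] second) {
        // copyOf allocates the larger result and copies `first` into it.
        T[] result = Arrays.copyOf(first, first.length + second.length);
        // One arraycopy appends `second` after `first`.
        System.arraycopy(second, 0, result, first.length, second.length);
        return result;
    }

    public static void main(String[] args) {
        String[] publicThings = {"a", "b"};
        String[] privateThings = {"c"};
        System.out.println(Arrays.toString(concat(publicThings, privateThings)));
        // prints [a, b, c]
    }
}
```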
Solution 3:
It depends on the virtual machine, but System.arraycopy should give you the closest you can get to native performance.
I've worked for two years as a Java developer on embedded systems (where performance is a huge priority), and everywhere System.arraycopy could be used, I've mostly used it or seen it used in existing code. It's always preferred over loops when performance is an issue. If performance isn't a big issue, though, I'd go with the loop: it's much easier to read.