Can rbind be parallelized in R?

As I sit here waiting for some R scripts to run... I was wondering: is there any way to parallelize rbind in R?

I frequently find myself waiting for this call to complete, since I deal with large amounts of data:

do.call("rbind", LIST)

Solution 1:

I haven't found a way to do this in parallel so far either. However, for my dataset (a list of about 1500 data frames totaling 4.5M rows), the following snippet seemed to help:

# Repeatedly rbind adjacent pairs, halving the number of data frames
# on each pass until only one remains.
while(length(lst) > 1) {
    # indices of the first element of each pair
    idxlst <- seq(from=1, to=length(lst), by=2)

    lst <- lapply(idxlst, function(i) {
        # odd element left over at the end: carry it forward unchanged
        if(i == length(lst)) { return(lst[[i]]) }

        return(rbind(lst[[i]], lst[[i+1]]))
    })
}

where lst is the list of data frames. It was about 4 times faster than do.call(rbind, lst), or even do.call(rbind.fill, lst) (with rbind.fill from the plyr package). Each iteration halves the number of data frames.
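For convenience, the same idea can be wrapped in a function and timed against do.call; the function name and the toy list below are just for illustration:

pairwise_rbind <- function(lst) {
  # halve the list each pass by rbinding adjacent pairs
  while (length(lst) > 1) {
    idx <- seq(from = 1, to = length(lst), by = 2)
    lst <- lapply(idx, function(i) {
      if (i == length(lst)) lst[[i]] else rbind(lst[[i]], lst[[i + 1]])
    })
  }
  lst[[1]]
}

# toy comparison -- timings depend heavily on the shape of your data
lst <- replicate(200, data.frame(x = rnorm(100), y = rnorm(100)), simplify = FALSE)
system.time(a <- do.call(rbind, lst))
system.time(b <- pairwise_rbind(lst))
nrow(a) == nrow(b)  # same total number of rows either way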

Solution 2:

Since you say you want to rbind data.frame objects, you should use the data.table package. It has a function called rbindlist that speeds up rbind drastically. I am not 100% sure, but I would bet that any use of rbind triggers a copy, whereas rbindlist does not. In any case, a data.table is a data.frame, so you lose nothing by trying.

EDIT:

library(data.table)
system.time(dt <- rbindlist(pieces))
   user  system elapsed 
   0.12    0.00    0.13 
tables()
     NAME  NROW MB COLS                        KEY
[1,] dt   1,000 8  X1,X2,X3,X4,X5,X6,X7,X8,...    
Total: 8MB

Lightning fast...
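If the data frames don't all share the same columns (the rbind.fill case from Solution 1), rbindlist can handle that too in recent data.table versions; the snippet below assumes pieces is your list of data frames:

library(data.table)
# use.names matches columns by name, fill pads missing columns with NA
# (check ?rbindlist for the arguments available in your data.table version)
dt <- rbindlist(pieces, use.names = TRUE, fill = TRUE)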

Solution 3:

I doubt that you can get this to work faster by parallelizing it. Apart from the fact that you would probably have to write it yourself (thread one rbinds items 1 and 2 while thread two rbinds items 3 and 4, etc., and when they're done the results are 'rebound', something like that; I don't see a non-C way of improving this), it is going to involve copying large amounts of data between your threads, which is typically the thing that is slow in the first place.

In C, you can share objects between threads, so there you could have all your threads write to the same memory. I wish you the best of luck with that :-)

Finally, as an aside: rbinding data.frames is just slow. If you know up front that all your data.frames have exactly the same structure and contain no pure character columns, you can probably use the trick from this answer to one of my questions. If your data.frame does contain character columns, I suspect you're best off handling those separately (do.call(c, lapply(LIST, "[[", "myCharColName"))), performing the trick on the rest, and then reuniting them.
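I won't reproduce the linked answer here, but the general shape of the trick is to build the result column-wise instead of row-wise. A minimal sketch of one such variant, assuming every data.frame in LIST has identical column names and types (fast_rbind is just an illustrative name, not the linked answer's code):

fast_rbind <- function(LIST) {
  cols <- names(LIST[[1]])
  # concatenate each column across all data frames in one shot,
  # instead of paying for a copy on every rbind
  out <- lapply(cols, function(cn) unlist(lapply(LIST, "[[", cn), use.names = FALSE))
  names(out) <- cols
  as.data.frame(out, stringsAsFactors = FALSE)
}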

Solution 4:

Here's a solution; it naturally extends to rbind.fill, merge, and other data frame list functions:

But as with all my answers/questions, verify it yourself :)

require(snowfall)
require(rbenchmark)

# Splits the list of data frames into one chunk per core, rbinds each chunk
# on a worker, then rbinds the partial results. Call it as rbinder(lst) or
# rbinder(lst, cores=4); note it expects a single list in '...'.
rbinder <- function(..., cores=NULL){
  if(is.null(cores)){
    do.call("rbind", ...)
  }else{
    # chunk boundaries: cores+1 cut points over the length of the list
    sequ <- as.integer(seq(1, length(...), length.out=cores+1))
    # build "list1 = ...[1:k], list2 = ...[(k+1):m], ..." and evaluate it
    listOLists <- paste(paste("list", seq(cores), sep=""), " = ...[",  c(1, sequ[2:cores]+1), ":", sequ[2:(cores+1)], "]", sep="", collapse=", ")
    dfs <- eval(parse(text=paste("list(", listOLists, ")")))
    # rbind each chunk in parallel, then rbind the per-core results
    suppressMessages(sfInit(parallel=TRUE, cores))
    dfs <- sfLapply(dfs, function(x) do.call("rbind", x))
    suppressMessages(sfStop())
    do.call("rbind", dfs)
  }
}

# 1000 one-row data frames with 1000 columns each
pieces <- lapply(seq(1000), function(.) data.frame(matrix(runif(1000), ncol=1000)))

benchmark(do.call("rbind", pieces), rbinder(pieces), rbinder(pieces, cores=4), replications = 10)

# With an Intel i5 3570k:
#                         test replications elapsed relative user.self sys.self user.child sys.child
# 1   do.call("rbind", pieces)           10  116.70    6.505    115.79     0.10         NA        NA
# 3 rbinder(pieces, cores = 4)           10   17.94    1.000      1.67     2.12         NA        NA
# 2            rbinder(pieces)           10  116.03    6.468    115.50     0.05         NA        NA
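For what it's worth, the same chunk-then-combine idea can also be written with the base parallel package; a rough, unbenchmarked sketch (rbinder2 is just an illustrative name, and mclapply forks, so on Windows you would need parLapply instead):

library(parallel)

rbinder2 <- function(lst, cores = 2) {
  # one contiguous chunk per core
  chunks <- split(lst, cut(seq_along(lst), cores, labels = FALSE))
  # rbind each chunk on a worker, then rbind the short list of partial results
  partial <- mclapply(chunks, function(x) do.call(rbind, unname(x)), mc.cores = cores)
  do.call(rbind, unname(partial))
}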