data.table vs dplyr: can one do something well the other can't or does poorly?
Solution 1:
We need to cover at least these aspects to provide a comprehensive answer/comparison (in no particular order of importance): Speed, Memory usage, Syntax and Features.
My intent is to cover each one of these as clearly as possible from a data.table perspective.
Note: unless explicitly mentioned otherwise, by referring to dplyr, we refer to dplyr's data.frame interface whose internals are in C++ using Rcpp.
The data.table syntax is consistent in its form - DT[i, j, by]. Keeping i, j and by together is by design. By keeping related operations together, it is possible to optimise operations for speed and, more importantly, memory usage, and also to provide some powerful features, all while maintaining consistency in syntax.
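As a quick illustrative sketch of that general form (the toy table below is invented for this illustration, not part of any benchmark):

library(data.table)
DT <- data.table(x = 1:6, y = c(10, 20, 30, 40, 50, 60), grp = c("a", "a", "b", "b", "c", "c"))

# i: which rows, j: what to compute, by: grouped by what
# "on the rows where x > 2, compute sum(y), grouped by grp"
DT[x > 2, sum(y), by = grp]
#    grp  V1
# 1:   b  70
# 2:   c 110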
1. Speed
Quite a few benchmarks (though mostly on grouping operations) have been added to the question already, showing that data.table gets faster than dplyr as the number of groups and/or rows to group by increases. These include benchmarks by Matt on grouping from 10 million to 2 billion rows (100GB in RAM) over 100 to 10 million groups and varying grouping columns, which also compare pandas. See also the updated benchmarks, which include Spark and pydatatable as well.
On benchmarks, it would be great to cover these remaining aspects as well:
- Grouping operations involving a subset of rows - i.e., DT[x > val, sum(y), by = z] type operations.
- Benchmark other operations such as update and joins.
- Also benchmark memory footprint for each operation in addition to runtime.
2. Memory usage
- Operations involving filter() or slice() in dplyr can be memory inefficient (on both data.frames and data.tables). See this post. Note that Hadley's comment talks about speed (that dplyr is plenty fast for him), whereas the major concern here is memory.
- The data.table interface at the moment allows one to modify/update columns by reference (note that we don't need to re-assign the result back to a variable).
# sub-assign by reference, updates 'y' in-place
DT[x >= 1L, y := NA]
But dplyr will never update by reference. The dplyr equivalent would be (note that the result needs to be re-assigned):
# copies the entire 'y' column
ans <- DF %>% mutate(y = replace(y, which(x >= 1L), NA))
A concern with this is referential transparency. Updating a data.table object by reference, especially within a function, may not always be desirable. But this is an incredibly useful feature: see this post and this one for interesting cases. And we want to keep it. (A small sketch of the issue appears at the end of this item.)
Therefore we are working towards exporting a shallow() function in data.table that will provide the user with both possibilities. For example, if it is desirable not to modify the input data.table within a function, one can then do:
foo <- function(DT) {
    DT = shallow(DT)          ## shallow copy DT
    DT[, newcol := 1L]        ## does not affect the original DT
    DT[x > 2L, newcol := 2L]  ## no need to copy (internally), as this column exists only in shallow copied DT
    DT[x > 2L, x := 3L]       ## have to copy (like base R / dplyr does always); otherwise the original DT will
                              ## also get modified.
}
By not using shallow(), the old functionality is retained:
bar <- function(DT) {
    DT[, newcol := 1L]  ## old behaviour, original DT gets updated by reference
    DT[x > 2L, x := 3L] ## old behaviour, update column x in original DT.
}
By creating a shallow copy using shallow(), we understand that you don't want to modify the original object. We take care of everything internally to ensure that, while also ensuring that the columns you modify are copied only when absolutely necessary. When implemented, this should settle the referential transparency issue altogether while providing the user with both possibilities.
Also, once shallow() is exported, dplyr's data.table interface should avoid almost all copies. So those who prefer dplyr's syntax can use it with data.tables. But it will still lack many features that data.table provides, including (sub)-assignment by reference.
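As a minimal sketch of the referential transparency concern mentioned above (the function and object names here are invented for illustration; shallow() is not exported yet, so this only shows the current by-reference behaviour):

library(data.table)
DT <- data.table(x = 1:3, y = c(10, 20, 30))

add_flag <- function(d) {
  d[, flag := x > 1L]   ## := updates the caller's data.table by reference
}

add_flag(DT)
"flag" %in% names(DT)   ## TRUE - the original DT gained a column, even though nothing was re-assigned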
- Aggregate while joining:
Suppose you have two data.tables as follows:
DT1 = data.table(x=c(1,1,1,1,2,2,2,2), y=c("a", "a", "b", "b"), z=1:8, key=c("x", "y"))
#    x y z
# 1: 1 a 1
# 2: 1 a 2
# 3: 1 b 3
# 4: 1 b 4
# 5: 2 a 5
# 6: 2 a 6
# 7: 2 b 7
# 8: 2 b 8
DT2 = data.table(x=1:2, y=c("a", "b"), mul=4:3, key=c("x", "y"))
#    x y mul
# 1: 1 a   4
# 2: 2 b   3
And you would like to get sum(z) * mul for each row in DT2 while joining by columns x,y. We can either:
- 1) aggregate DT1 to get sum(z), 2) perform a join and 3) multiply, or:
data.table way:
DT1[, .(z = sum(z)), keyby = .(x,y)][DT2][, z := z*mul][]
dplyr equivalent:
DF1 %>% group_by(x, y) %>% summarise(z = sum(z)) %>% right_join(DF2) %>% mutate(z = z * mul)
- or do it all in one go (using the by = .EACHI feature):
DT1[DT2, list(z=sum(z) * mul), by = .EACHI]
What is the advantage?
- We don't have to allocate memory for the intermediate result.
- We don't have to group/hash twice (once for aggregation and once for joining).
- And more importantly, the operation we want to perform is clear by looking at j in the second approach.
Check this post for a detailed explanation of by = .EACHI. No intermediate results are materialised, and the join+aggregate is performed all in one go. Have a look at this, this and this post for real usage scenarios.
In dplyr you would have to join and aggregate, or aggregate first and then join, neither of which is as efficient in terms of memory (which in turn translates to speed).
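To make the by = .EACHI point concrete, here is the example above run end to end (a sketch; the output comments are the expected result for the sample DT1/DT2):

library(data.table)
DT1 = data.table(x=c(1,1,1,1,2,2,2,2), y=c("a", "a", "b", "b"), z=1:8, key=c("x", "y"))
DT2 = data.table(x=1:2, y=c("a", "b"), mul=4:3, key=c("x", "y"))

# two-step: aggregate, then join, then multiply (materialises the aggregated table)
DT1[, .(z = sum(z)), keyby = .(x, y)][DT2][, z := z * mul][]

# one go: for each row of DT2, j is evaluated on the matching rows of DT1
DT1[DT2, .(z = sum(z) * mul), by = .EACHI]
#    x y  z
# 1: 1 a 12   # sum(1, 2) * 4
# 2: 2 b 45   # sum(7, 8) * 3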
- Update and joins:
Consider the data.table code shown below:
DT1[DT2, col := i.mul]
adds/updates DT1's column col with mul from DT2 on those rows where DT2's key columns match DT1. I don't think there is an exact equivalent of this operation in dplyr, i.e., without resorting to a *_join operation, which would have to copy the entire DT1 just to add a new column to it, which is unnecessary. Check this post for a real usage scenario.
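For concreteness, here is that update-while-joining applied to the same sample tables (a sketch; the output comments are the expected result):

library(data.table)
DT1 = data.table(x=c(1,1,1,1,2,2,2,2), y=c("a", "a", "b", "b"), z=1:8, key=c("x", "y"))
DT2 = data.table(x=1:2, y=c("a", "b"), mul=4:3, key=c("x", "y"))

# adds column 'col' to DT1 by reference, only on the rows whose key (x, y) matches DT2
DT1[DT2, col := i.mul]
DT1[]   # print the updated table
#    x y z col
# 1: 1 a 1   4
# 2: 1 a 2   4
# 3: 1 b 3  NA
# 4: 1 b 4  NA
# 5: 2 a 5  NA
# 6: 2 a 6  NA
# 7: 2 b 7   3
# 8: 2 b 8   3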
To summarise, it is important to realise that every bit of optimisation matters. As Grace Hopper would say, Mind your nanoseconds!
3. Syntax
Let's now look at syntax. Hadley commented here:
Data tables are extremely fast but I think their concision makes it harder to learn and code that uses it is harder to read after you have written it ...
I find this remark pointless because it is very subjective. What we can perhaps try is to contrast consistency in syntax. We will compare data.table and dplyr syntax side-by-side.
We will work with the dummy data shown below:
DT = data.table(x=1:10, y=11:20, z=rep(1:2, each=5))
DF = as.data.frame(DT)
- Basic aggregation/update operations.
# case (a)
DT[, sum(y), by = z]                                        ## data.table syntax
DF %>% group_by(z) %>% summarise(sum(y))                    ## dplyr syntax
DT[, y := cumsum(y), by = z]
ans <- DF %>% group_by(z) %>% mutate(y = cumsum(y))

# case (b)
DT[x > 2, sum(y), by = z]
DF %>% filter(x>2) %>% group_by(z) %>% summarise(sum(y))
DT[x > 2, y := cumsum(y), by = z]
ans <- DF %>% group_by(z) %>% mutate(y = replace(y, which(x > 2), cumsum(y)))

# case (c)
DT[, if(any(x > 5L)) y[1L] - y[2L] else y[2L], by = z]
DF %>% group_by(z) %>% summarise(if (any(x > 5L)) y[1L] - y[2L] else y[2L])
DT[, if(any(x > 5L)) y[1L] - y[2L], by = z]
DF %>% group_by(z) %>% filter(any(x > 5L)) %>% summarise(y[1L] - y[2L])
- data.table syntax is compact and dplyr's quite verbose. Things are more or less equivalent in case (a).
- In case (b), we had to use filter() in dplyr while summarising. But while updating, we had to move the logic inside mutate(). In data.table however, we express both operations with the same logic - operate on rows where x > 2, but in the first case get sum(y), whereas in the second case update those rows of y with its cumulative sum. This is what we mean when we say the DT[i, j, by] form is consistent. (A short worked sketch of case (b) follows this list.)
- Similarly in case (c), when we have an if-else condition, we are able to express the logic "as-is" in both data.table and dplyr. However, if we would like to return just those rows where the if condition is satisfied and skip otherwise, we cannot use summarise() directly (AFAICT). We have to filter() first and then summarise, because summarise() always expects a single value. While it returns the same result, using filter() here makes the actual operation less obvious. It might very well be possible to use filter() in the first case as well (it does not seem obvious to me), but my point is that we should not have to.
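Here is case (b) run on a fresh copy of the dummy data, as a small worked sketch (a fresh DT is created so the := examples above don't affect the values; the output comments are the expected result):

library(data.table)
DT <- data.table(x = 1:10, y = 11:20, z = rep(1:2, each = 5))

# subset + aggregate: on rows where x > 2, sum(y) per group of z
DT[x > 2, sum(y), by = z]
#    z V1
# 1: 1 42    # 13 + 14 + 15
# 2: 2 90    # 16 + 17 + 18 + 19 + 20

# same i and by, but j now updates those rows of y by reference
DT[x > 2, y := cumsum(y), by = z]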
- Aggregation / update on multiple columns
# case (a)
DT[, lapply(.SD, sum), by = z]                              ## data.table syntax
DF %>% group_by(z) %>% summarise_each(funs(sum))            ## dplyr syntax
DT[, (cols) := lapply(.SD, sum), by = z]
ans <- DF %>% group_by(z) %>% mutate_each(funs(sum))

# case (b)
DT[, c(lapply(.SD, sum), lapply(.SD, mean)), by = z]
DF %>% group_by(z) %>% summarise_each(funs(sum, mean))

# case (c)
DT[, c(.N, lapply(.SD, sum)), by = z]
DF %>% group_by(z) %>% summarise_each(funs(n(), mean))
- In case (a), the code is more or less equivalent. data.table uses the familiar base function lapply(), whereas dplyr introduces *_each() along with a bunch of functions to funs().
- data.table's := requires column names to be provided, whereas dplyr generates them automatically.
- In case (b), dplyr's syntax is relatively straightforward. Improving aggregations/updates on multiple functions is on data.table's list.
- In case (c) though, dplyr would return n() as many times as there are columns, instead of just once. In data.table, all we need to do is to return a list in j. Each element of the list will become a column in the result. So we can, once again, use the familiar base function c() to concatenate .N to a list, which returns a list.
Note: Once again, in data.table, all we need to do is return a list in j. Each element of the list will become a column in the result. You can use c(), as.list(), lapply(), list() etc. base functions to accomplish this, without having to learn any new functions. You will need to learn just the special variables - .N and .SD at least. The equivalents in dplyr are n() and . (the dot).
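For a concrete feel of case (c), here it is on a fresh copy of the dummy data (a sketch; the expected per-group values are shown as comments rather than the exact printed table):

library(data.table)
DT <- data.table(x = 1:10, y = 11:20, z = rep(1:2, each = 5))

# .N (group size) and the per-column sums, returned together as a single list in j
DT[, c(.N, lapply(.SD, sum)), by = z]
# group z = 1: count 5, sum(x) = 15, sum(y) = 65
# group z = 2: count 5, sum(x) = 40, sum(y) = 90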
- Joins
dplyr provides separate functions for each type of join, whereas data.table allows joins using the same syntax DT[i, j, by] (and with reason). It also provides an equivalent merge.data.table() function as an alternative.
setkey(DT1, x, y)

# 1. normal join
DT1[DT2]                                        ## data.table syntax
left_join(DT2, DT1)                             ## dplyr syntax

# 2. select columns while join
DT1[DT2, .(z, i.mul)]
left_join(select(DT2, x, y, mul), select(DT1, x, y, z))

# 3. aggregate while join
DT1[DT2, .(sum(z) * i.mul), by = .EACHI]
DF1 %>% group_by(x, y) %>% summarise(z = sum(z)) %>% inner_join(DF2) %>% mutate(z = z*mul) %>% select(-mul)

# 4. update while join
DT1[DT2, z := cumsum(z) * i.mul, by = .EACHI]
??

# 5. rolling join
DT1[DT2, roll = -Inf]
??

# 6. other arguments to control output
DT1[DT2, mult = "first"]
??
- Some might find a separate function for each join much nicer (left, right, inner, anti, semi etc.), whereas others might like data.table's DT[i, j, by], or merge() which is similar to base R.
- However dplyr joins do just that. Nothing more. Nothing less.
- data.tables can select columns while joining (2), and in dplyr you will need to select() on both data.frames before joining, as shown above. Otherwise you would materialise the join with unnecessary columns only to remove them later, and that is inefficient.
- data.tables can aggregate while joining using the by = .EACHI feature (3) and also update while joining (4). Why materialise the entire join result to add/update just a few columns?
- data.table is capable of rolling joins (5) - roll forward, LOCF, roll backward, NOCB, nearest. (A small sketch of a rolling join follows this list.)
- data.table also has the mult = argument which selects "first", "last" or "all" matches (6).
- data.table has the allow.cartesian = TRUE argument to protect from accidental invalid joins.
Once again, the syntax is consistent with DT[i, j, by], with additional arguments allowing for controlling the output further.
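As a small sketch of a rolling join (5), since there is no dplyr equivalent to show - the prices/queries tables below are invented for illustration:

library(data.table)
prices  <- data.table(time = c(1L, 5L, 10L), price = c(100, 101, 102))
queries <- data.table(time = c(2L, 8L, 12L))

# roll = TRUE: last observation carried forward (LOCF)
prices[queries, on = "time", roll = TRUE]
#    time price
# 1:    2   100
# 2:    8   101
# 3:   12   102

prices[queries, on = "time", roll = -Inf]      # NOCB: next observation carried backward
prices[queries, on = "time", roll = "nearest"] # closest observation on either side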
- do()...
dplyr's summarise is specially designed for functions that return a single value. If your function returns multiple/unequal values, you will have to resort to do(). You have to know beforehand what all your functions return.
DT[, list(x[1], y[1]), by = z]                  ## data.table syntax
DF %>% group_by(z) %>% summarise(x[1], y[1])    ## dplyr syntax

DT[, list(x[1:2], y[1]), by = z]
DF %>% group_by(z) %>% do(data.frame(.$x[1:2], .$y[1]))

DT[, quantile(x, 0.25), by = z]
DF %>% group_by(z) %>% summarise(quantile(x, 0.25))
DT[, quantile(x, c(0.25, 0.75)), by = z]
DF %>% group_by(z) %>% do(data.frame(quantile(.$x, c(0.25, 0.75))))

DT[, as.list(summary(x)), by = z]
DF %>% group_by(z) %>% do(data.frame(as.list(summary(.$x))))
- .SD's equivalent is . (the dot).
- In data.table, you can throw pretty much anything in j - the only thing to remember is that it should return a list, so that each element of the list gets converted to a column.
- In dplyr, you cannot do that. You have to resort to do(), depending on how sure you are as to whether your function would always return a single value. And it is quite slow.
Once again, data.table's syntax is consistent with DT[i, j, by]. We can just keep throwing expressions in j without having to worry about these things.
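As a short sketch of the "anything in j, as long as it returns a list" point, here is the summary() example above on a fresh copy of the dummy data (the six columns come from the names of summary()'s output; expected values shown as comments):

library(data.table)
DT <- data.table(x = 1:10, y = 11:20, z = rep(1:2, each = 5))

# as.list() turns summary()'s named vector into a list, so each statistic becomes a column
DT[, as.list(summary(x)), by = z]
#    z Min. 1st Qu. Median Mean 3rd Qu. Max.
# 1: 1    1       2      3    3       4    5
# 2: 2    6       7      8    8       9   10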
Have a look at this SO question and this one. I wonder if it would be possible to express the answer as straightforwardly using dplyr's syntax...
To summarise, I have particularly highlighted several instances where dplyr's syntax is either inefficient, limited or fails to make operations straightforward. This is mainly because data.table gets quite a bit of backlash about its "harder to read/learn" syntax (like the one pasted/linked above). Most posts that cover dplyr talk about the most straightforward operations. And that is great. But it is important to realise its syntax and feature limitations as well, and I am yet to see a post on it.
data.table has its quirks as well (some of which I have pointed out and which we are attempting to fix). We are also attempting to improve data.table's joins, as I have highlighted here.
But one should also consider the number of features that dplyr lacks in comparison to data.table.
4. Features
I have pointed out most of the features here and also in this post. In addition:
- fread - a fast file reader - has been available for a long time now.
- fwrite - a parallelised fast file writer - is now available. See this post for a detailed explanation of the implementation and #1664 for keeping track of further developments.
- Automatic indexing - another handy feature to optimise base R syntax as is, internally.
- Ad-hoc grouping: dplyr automatically sorts the results by grouping variables during summarise(), which may not always be desirable.
- Numerous advantages in data.table joins (for speed / memory efficiency and syntax) mentioned above.
- Non-equi joins: allows joins using other operators <=, <, >, >= along with all the other advantages of data.table joins. (A small sketch follows this list.)
- Overlapping range joins were implemented in data.table recently. Check this post for an overview with benchmarks.
- setorder() function in data.table that allows really fast reordering of data.tables by reference.
- dplyr provides an interface to databases using the same syntax, which data.table does not at the moment.
- data.table provides faster equivalents of set operations (written by Jan Gorecki) - fsetdiff, fintersect, funion and fsetequal - with an additional all argument (as in SQL).
- data.table loads cleanly with no masking warnings and has a mechanism described here for [.data.frame compatibility when passed to any R package. dplyr changes the base functions filter, lag and [ which can cause problems; e.g. here and here.
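As promised in the non-equi joins bullet above, here is a small sketch (the events/windows tables are invented for illustration):

library(data.table)
events  <- data.table(id = 1:5, t = c(2, 4, 6, 8, 10))
windows <- data.table(grp = c("a", "b"), start = c(1, 5), end = c(5, 9))

# for each window, find the events whose t falls inside [start, end]
events[windows, on = .(t >= start, t <= end), .(grp, id, t = x.t)]
#    grp id t
# 1:   a  1 2
# 2:   a  2 4
# 3:   b  3 6
# 4:   b  4 8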
Finally:
- On databases - there is no reason why data.table cannot provide a similar interface, but this is not a priority now. It might get bumped up if users would very much like that feature.. not sure.
- On parallelism - Everything is difficult, until someone goes ahead and does it. Of course it will take effort (being thread safe). Progress is being made currently (in v1.9.7 devel) towards parallelising known time-consuming parts for incremental performance gains, using OpenMP.
Solution 2:
Here's my attempt at a comprehensive answer from the dplyr perspective, following the broad outline of Arun's answer (but somewhat rearranged based on differing priorities).
Syntax
There is some subjectivity to syntax, but I stand by my statement that the concision of data.table makes it harder to learn and harder to read. This is partly because dplyr is solving a much easier problem!
One really important thing that dplyr does for you is that it constrains your options. I claim that most single table problems can be solved with just five key verbs: filter, select, mutate, arrange and summarise, along with a "by group" adverb. That constraint is a big help when you're learning data manipulation, because it helps order your thinking about the problem. In dplyr, each of these verbs is mapped to a single function. Each function does one job, and is easy to understand in isolation.
You create complexity by piping these simple operations together with %>%. Here's an example from one of the posts Arun linked to:
diamonds %>%
filter(cut != "Fair") %>%
group_by(cut) %>%
summarize(
AvgPrice = mean(price),
MedianPrice = as.numeric(median(price)),
Count = n()
) %>%
arrange(desc(Count))
Even if you've never seen dplyr before (or even R!), you can still get the gist of what's happening because the functions are all English verbs. The disadvantage of English verbs is that they require more typing than [, but I think that can be largely mitigated by better autocomplete.
Here's the equivalent data.table code:
diamondsDT <- data.table(diamonds)
diamondsDT[
cut != "Fair",
.(AvgPrice = mean(price),
MedianPrice = as.numeric(median(price)),
Count = .N
),
by = cut
][
order(-Count)
]
It's harder to follow this code unless you're already familiar with data.table. (I also couldn't figure out how to indent the repeated [ in a way that looks good to my eye.) Personally, when I look at code I wrote 6 months ago, it's like looking at code written by a stranger, so I've come to prefer straightforward, if verbose, code.
Two other minor factors that I think slightly decrease readability:
- Since almost every data.table operation uses [, you need additional context to figure out what's happening. For example, is x[y] joining two data.tables or extracting columns from a data frame? This is only a small issue, because in well-written code the variable names should suggest what's happening.
- I like that group_by() is a separate operation in dplyr. It fundamentally changes the computation, so I think it should be obvious when skimming the code, and it's easier to spot group_by() than the by argument to [.data.table.
I also like that the pipe isn't limited to just one package. You can start by tidying your data with tidyr, and finish up with a plot in ggvis. And you're not limited to the packages that I write - anyone can write a function that forms a seamless part of a data manipulation pipe. In fact, I rather prefer the previous data.table code rewritten with %>%:
diamonds %>%
data.table() %>%
.[cut != "Fair",
.(AvgPrice = mean(price),
MedianPrice = as.numeric(median(price)),
Count = .N
),
by = cut
] %>%
.[order(-Count)]
And the idea of piping with %>% is not limited to just data frames and is easily generalised to other contexts: interactive web graphics, web scraping, gists, run-time contracts, ...
Memory and performance
I've lumped these together, because, to me, they're not that important. Most R users work with well under 1 million rows of data, and dplyr is sufficiently fast for that size of data that you're not aware of processing time. We optimise dplyr for expressiveness on medium data; feel free to use data.table for raw speed on bigger data.
The flexibility of dplyr also means that you can easily tweak performance characteristics using the same syntax. If the performance of dplyr with the data frame backend is not good enough for you, you can use the data.table backend (albeit with a somewhat restricted set of functionality). If the data you're working with doesn't fit in memory, then you can use a database backend.
All that said, dplyr performance will get better in the long-term. We'll definitely implement some of the great ideas of data.table like radix ordering and using the same index for joins & filters. We're also working on parallelisation so we can take advantage of multiple cores.
Features
A few things that we're planning to work on in 2015:
- the readr package, to make it easy to get files off disk and into memory, analogous to fread().
- More flexible joins, including support for non-equi-joins.
- More flexible grouping, like bootstrap samples, rollups and more.
I'm also investing time into improving R's database connectors, the ability to talk to web apis, and making it easier to scrape html pages.
Solution 3:
In direct response to the Question Title...
dplyr definitely does things that data.table cannot.
Your point #3
dplyr abstracts (or will) potential DB interactions
is a direct answer to your own question but isn't elevated to a high enough level. dplyr is truly an extendable front-end to multiple data storage mechanisms, whereas data.table is an extension to a single one.
Look at dplyr as a back-end agnostic interface, with all of the targets using the same grammar, where you can extend the targets and handlers at will. data.table is, from the dplyr perspective, one of those targets.
You will never (I hope) see a day that data.table attempts to translate your queries to create SQL statements that operate with on-disk or networked data stores.
dplyr can possibly do things data.table will not or might not do as well.
Based on the design of working in-memory, data.table could have a much more difficult time extending itself into parallel processing of queries than dplyr.
In response to the in-body questions...
Usage
Are there analytical tasks that are a lot easier to code with one or the other package for people familiar with the packages (i.e. some combination of keystrokes required vs. required level of esotericism, where less of each is a good thing).
This may seem like a punt, but the real answer is no. People familiar with these tools seem to use either the one most familiar to them or the one that is actually right for the job at hand. With that being said, sometimes you want to present a particular readability, sometimes a level of performance, and when you need a high enough level of both you may just need another tool to go along with what you already have to make clearer abstractions.
Performance
Are there analytical tasks that are performed substantially (i.e. more than 2x) more efficiently in one package vs. another.
Again, no. data.table excels at being efficient in everything it does, whereas dplyr gets the burden of being limited in some respects to the underlying data store and registered handlers.
This means when you run into a performance issue with data.table you can be pretty sure it is in your query function, and if it is actually a bottleneck with data.table then you've won yourself the joy of filing a report. This is also true when dplyr is using data.table as the back-end; you may see some overhead from dplyr but odds are it is your query.
When dplyr has performance issues with back-ends you can get around them by registering a function for hybrid evaluation or (in the case of databases) manipulating the generated query prior to execution.
Also see the accepted answer to when is plyr better than data.table?
Solution 4:
Reading Hadley and Arun's answers one gets the impression that those who prefer dplyr's syntax would have in some cases to switch over to data.table or compromise on long running times.
But as some have already mentioned, dplyr can use data.table as a backend. This is accomplished using the dtplyr package, which recently had its 1.0.0 release. Learning dtplyr incurs practically zero additional effort.
When using dtplyr, one uses the function lazy_dt() to declare a lazy data.table, after which standard dplyr syntax is used to specify operations on it. This would look something like the following:
new_table <- mtcars2 %>%
lazy_dt() %>%
filter(wt < 5) %>%
mutate(l100k = 235.21 / mpg) %>% # liters / 100 km
group_by(cyl) %>%
summarise(l100k = mean(l100k))
new_table
#> Source: local data table [?? x 2]
#> Call: `_DT1`[wt < 5][, `:=`(l100k = 235.21/mpg)][, .(l100k = mean(l100k)),
#> keyby = .(cyl)]
#>
#> cyl l100k
#> <dbl> <dbl>
#> 1 4 9.05
#> 2 6 12.0
#> 3 8 14.9
#>
#> # Use as.data.table()/as.data.frame()/as_tibble() to access results
The new_table object is not evaluated until calling as.data.table()/as.data.frame()/as_tibble() on it, at which point the underlying data.table operation is executed.
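For example (a minimal sketch continuing the snippet above), collecting the result is what forces the data.table call shown in the Call: field to actually run:

# forces evaluation; returns an ordinary tibble with the cyl / l100k rows shown above
as_tibble(new_table)
# or keep it as a data.table (requires data.table to be attached)
as.data.table(new_table)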
I've recreated a benchmark analysis done by data.table author Matt Dowle back in December 2018, which covers the case of operations over large numbers of groups. I found that dtplyr indeed enables, for the most part, those who prefer the dplyr syntax to keep using it while enjoying the speed offered by data.table.