Speed up fetching posts for my social network app by using query instead of observing a single event repeatedly

Update: we now also cover this question in an AskFirebase episode.

Loading many items from Firebase doesn't have to be slow, since you can pipeline the requests. But your code makes pipelining impossible, which indeed leads to suboptimal performance.

In your code, you request an item from the server, wait for that item to return, and only then request the next one. In a simplified sequence diagram that looks like:

Your app                     Firebase 
                             Database

        -- request item 1 -->
                               S  L
                               e  o
                               r  a
                               v  d
                               e  i
        <-  return item  1 --  r  n
                                  g
        -- request item 2 -->
                               S  L
                               e  o
                               r  a
                               v  d
                               e  i
                               r  n
        <-  return item  2 --     g
        -- request item 3 -->
                 .
                 .
                 .
        -- request item 30-->
                               S  L
                               e  o
                               r  a
                               v  d
                               e  i
                               r  n
                                  g
        <-  return item 30 --

In this scenario you're waiting for 30 times your roundtrip time + 30 times the time it takes to load the data from disk. If (for the sake of simplicity) we say that a roundtrip takes 1 second and loading an item from disk also takes 1 second, that leads to 30 * (1 + 1) = 60 seconds.

In Firebase applications you'll get much better performance if you send all the requests (or at least a reasonable number of them) in one go:

Your app                     Firebase 
                             Database

        -- request item 1 -->
        -- request item 2 -->  S  L
        -- request item 3 -->  e  o
                 .             r  a
                 .             v  d
                 .             e  i
        -- request item 30-->  r  n
                                  g
        <-  return item  1 --     
        <-  return item  2 --      
        <-  return item  3 --
                 .
                 .
                 .
        <-  return item 30 --

If we again assume a 1 second roundtrip and 1 second of loading per item, you're waiting for 30 * 1 (loading) + 1 (roundtrip) = 31 seconds.

So: all requests go through the same connection. Given that, the only difference between get(1), get(2), get(3) and getAll([1,2,3]) is some overhead for the frames.

I set up a jsbin to demonstrate the behavior. The data model is very simple, but it shows off the difference.

function loadVideosSequential(videoIds, callback) {
  if (videoIds.length > 0) {
    // Request one video, and only request the next one after this one has returned
    db.child('videos').child(videoIds[0]).once('value', snapshot => {
      if (videoIds.length > 1) {
        loadVideosSequential(videoIds.slice(1), callback);
      } else {
        callback();
      }
    });
  }
}

function loadVideosParallel(videoIds, callback) {
  // Fire off all requests at once; the Firebase client pipelines them over a single connection
  Promise.all(
    videoIds.map(id => db.child('videos').child(id).once('value'))
  ).then(callback);
}
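
To compare the two approaches on your own data, you could time them roughly like this. This is just a sketch: db is assumed to point at your database, and the keys in ids are placeholders you'd replace with your own.

const ids = ['video1', 'video2', 'video3']; // placeholder keys, use your own

console.time('sequential');
loadVideosSequential(ids, () => {
  console.timeEnd('sequential');

  // Only start the parallel run once the sequential run has completed
  console.time('parallel');
  loadVideosParallel(ids, () => console.timeEnd('parallel'));
});

Since both functions signal completion through a callback, the parallel measurement only starts after the sequential one has finished, so the two timings don't interfere with each other.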

For comparison: sequentially loading 64 items takes 3.8 seconds on my system, while loading them pipelined (as the Firebase client does natively) takes 600ms. The exact numbers will depend on your connection (latency and bandwidth), but the pipelined version should always be significantly faster.