When does the TCP engine decide to send an ACK?

In my LAN, I have a router that runs a Samba server and my PC connects to the router.

I captured traffic with Wireshark during an upload to the server and a download from it.

The Wireshark capture shows that:

  • During the upload, the server sends an ACK every 0.6 ms on average
  • During the download, my PC sends an ACK every 0.025 ms on average

As a consequence, the download generates about 120,000 frames while the upload only generates 70,000 frames. The download rate is about 12.7 MB/s while the upload rate is 20 MB/s.

I want to figure out the possible reason for this difference.


Solution 1:

There are two main mechanisms for reducing the number of small packets on the wire - the Nagle algorithm (which coalesces small outgoing data segments) and delayed ACKs (which hold an ACK back briefly so it can be piggybacked on data or cover several received segments at once) - both described in RFC 1122. Both are optional, so there will be hosts which are either configured not to use them or lack the implementation entirely. Samba in particular can be instructed to disable the Nagle algorithm by setting socket options = TCP_NODELAY in its configuration.
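
For reference, here is a minimal C sketch of what socket options = TCP_NODELAY amounts to at the sockets API level, assuming a Linux/POSIX environment; the socket setup is a placeholder, not Samba's actual code:

    #include <stdio.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */
    #include <sys/socket.h>

    int main(void)
    {
        /* Create a TCP socket; for Samba this would be the SMB connection. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* Disable the Nagle algorithm: small writes are sent immediately
           instead of being coalesced while an ACK is still outstanding. */
        int one = 1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
            perror("setsockopt(TCP_NODELAY)");
            return 1;
        }

        /* ... connect() / send() as usual ... */
        close(fd);
        return 0;
    }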

The difference between your upstream and downstream data rates for SMB file copies most likely has causes other than an abundance of TCP ACK packets, though.

Solution 2:

A TCP implementation typically ACKs every other data segment, so you should normally see two data packets received and then one ACK sent. The sender is not waiting for each ACK in any case; it will continue to transmit until the window is full, even in the absence of an ACK.

There are other factors potentially at play here, such as Nagle and delayed ACK, but it doesn't look like you're seeing their effects.
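
If you want to experiment with delayed-ACK behaviour on the receiving side, Linux exposes a TCP_QUICKACK socket option. The sketch below is illustrative only - TCP_QUICKACK is Linux-specific and not permanent, so it is usually re-armed after each receive because the kernel may revert to delayed ACKs:

    #include <stdio.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_QUICKACK (Linux-specific) */
    #include <sys/socket.h>

    /* Re-arm quick ACKs on a connected socket. Call this after each recv(),
       because the kernel may silently fall back to delayed ACKs. */
    static int enable_quickack(int fd)
    {
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* After connect()/accept(), and again after every recv(): */
        if (enable_quickack(fd) < 0)
            perror("setsockopt(TCP_QUICKACK)");

        close(fd);
        return 0;
    }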