Will increasing net.core.somaxconn make a difference?

Setting net.core.somaxconn to higher values is only needed on highly loaded servers where the new-connection rate is so high or bursty that having 128 not-yet-accepted connections (50% more on BSDs: 128 backlog + 64 half-open) is considered normal, or when you need to delegate the definition of "normal" to the application itself.
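For reference, the current cap can be read straight from procfs; a minimal sketch (Linux-only, assuming /proc is mounted at the usual path):

```c
/* Minimal sketch: read the current net.core.somaxconn cap, e.g. to log it
 * at service start-up. Linux-only, assumes /proc is mounted. */
#include <stdio.h>

int read_somaxconn(void)
{
    FILE *f = fopen("/proc/sys/net/core/somaxconn", "r");
    int value = -1;

    if (f) {
        if (fscanf(f, "%d", &value) != 1)
            value = -1;
        fclose(f);
    }
    return value;   /* -1 on error, otherwise the current system-wide cap */
}

int main(void)
{
    printf("net.core.somaxconn = %d\n", read_somaxconn());
    return 0;
}
```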

Some administrators use a high net.core.somaxconn to hide problems with their services, so from the user's point of view it looks like a latency spike instead of an interrupted connection or a timeout (controlled by net.ipv4.tcp_abort_on_overflow on Linux).

The listen(2) manual says that net.core.somaxconn acts only as an upper boundary for the application, which is free to choose something smaller (usually set in the app's config). Some apps, though, just call listen(fd, -1), which means "set the backlog to the maximum value allowed by the system".
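A minimal sketch of what that looks like on the application side (port 8080 and the backlog of 1024 are arbitrary example values); whatever number the app passes, the kernel silently clamps it to net.core.somaxconn:

```c
/* Minimal sketch: the backlog passed to listen() is only a request; the
 * kernel silently clamps it to net.core.somaxconn. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);            /* arbitrary example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Ask for 1024 pending connections; if net.core.somaxconn is lower
     * (128 by default on older kernels), the effective backlog is lower too.
     * listen(fd, -1) would request the maximum the system allows. */
    if (listen(fd, 1024) < 0) {
        perror("listen");
        return 1;
    }

    pause();    /* keep the listener alive so the queue can be observed */
    close(fd);
    return 0;
}
```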

The real cause is either a low processing rate (e.g. a single-threaded blocking server) or an insufficient number of worker threads/processes (e.g. multi-process/multi-threaded blocking software like Apache/Tomcat).
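As a toy illustration of the "low processing rate" case (the port and the 1-second handler are made up for the example): once requests arrive faster than this loop can serve them, the accept queue fills up no matter how large the backlog is.

```c
/* Minimal sketch of a single-threaded blocking server that spends 1 second
 * per request; above ~1 request/second the accept queue only grows. */
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_addr.s_addr = htonl(INADDR_ANY),
                                .sin_port = htons(8080) };  /* example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;
    if (listen(fd, 128) < 0)
        return 1;

    for (;;) {
        int client = accept(fd, NULL, NULL);   /* one connection at a time */
        if (client < 0)
            continue;
        sleep(1);                              /* simulated slow handler */
        const char reply[] = "HTTP/1.0 200 OK\r\n\r\nhi\n";
        write(client, reply, sizeof(reply) - 1);
        close(client);
    }
}
```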

PS. Sometimes it's preferable to fail fast and let the load balancer do its job (retry) than to make the user wait; for that purpose we set net.core.somaxconn to any value, limit the application backlog to e.g. 10, and set net.ipv4.tcp_abort_on_overflow to 1.
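A rough sketch of that fail-fast setup (Linux-only, needs root for the sysctl write; the port is an arbitrary example): a deliberately tiny application backlog plus tcp_abort_on_overflow=1, so an overflow becomes an immediate reset that the load balancer can retry instead of a client-visible stall.

```c
/* Minimal fail-fast sketch: small listen() backlog + tcp_abort_on_overflow=1. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    /* Equivalent to: sysctl -w net.ipv4.tcp_abort_on_overflow=1 (needs root) */
    FILE *f = fopen("/proc/sys/net/ipv4/tcp_abort_on_overflow", "w");
    if (f) {
        fputs("1\n", f);
        fclose(f);
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);            /* arbitrary example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 10) < 0) {               /* deliberately small backlog */
        perror("bind/listen");
        return 1;
    }

    pause();
    return 0;
}
```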

PPS. Old versions of the Linux kernel have a nasty bug of truncating the somaxconn value to its 16 lower bits (i.e. casting the value to uint16_t), so raising that value above 65535 can even be dangerous. For more information see: http://patchwork.ozlabs.org/patch/255460/
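A quick way to see why that is dangerous; the numbers below just show what a uint16_t cast does to a few example values:

```c
/* Illustration of the truncation bug: on affected kernels the value behaves
 * as if cast to uint16_t, so "bigger" can mean "much smaller". */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int requested[] = { 1024, 65535, 65536, 100000 };

    for (int i = 0; i < 4; i++)
        printf("somaxconn = %-6d -> effective %u\n",
               requested[i], (uint16_t)requested[i]);
    /* 65536 -> 0 and 100000 -> 34464, i.e. worse than intended */
    return 0;
}
```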

If you want to go into more detail about the backlog internals in Linux, feel free to read: How TCP backlog works in Linux.