Java 11 HttpClient Http2 Too many streams Error
I am using the Java 11 HttpClient to post requests to an HTTP/2 server. The HttpClient object is created as a singleton Spring bean, as shown below.
@Bean
public HttpClient getClient() {
    return HttpClient.newBuilder()
            .version(Version.HTTP_2)
            .executor(Executors.newFixedThreadPool(20))
            .followRedirects(Redirect.NORMAL)
            .connectTimeout(Duration.ofSeconds(20))
            .build();
}
I am using the sendAsync method to send the requests asynchronously.
When I hit the server continuously, after some time I start receiving the error "java.io.IOException: too many concurrent streams". I used a fixed thread pool when building the client to try to overcome this error, but it still fails the same way.
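The requests are sent roughly like this (a simplified sketch, not the exact code; the URI, payload and response handling are placeholders):

HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://example.com/api"))          // placeholder URI
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString("{...}"))   // placeholder body
        .build();

// client is the injected HttpClient bean from above
client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
        .thenAccept(response -> System.out.println(response.statusCode())); // real handling omitted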
The exception stack trace is:
java.util.concurrent.CompletionException: java.io.IOException: too many concurrent streams
at java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1108) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235) ~[?:?]
at java.net.http/jdk.internal.net.http.MultiExchange.responseAsyncImpl(MultiExchange.java:345) ~[java.net.http:?]
at java.net.http/jdk.internal.net.http.MultiExchange.lambda$responseAsync0$2(MultiExchange.java:250) ~[java.net.http:?]
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?]
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1705) ~[?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.base/java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.io.IOException: too many concurrent streams
at java.net.http/jdk.internal.net.http.Http2Connection.reserveStream(Http2Connection.java:440) ~[java.net.http:?]
at java.net.http/jdk.internal.net.http.Http2ClientImpl.getConnectionFor(Http2ClientImpl.java:103) ~[java.net.http:?]
at java.net.http/jdk.internal.net.http.ExchangeImpl.get(ExchangeImpl.java:88) ~[java.net.http:?]
at java.net.http/jdk.internal.net.http.Exchange.establishExchange(Exchange.java:293) ~[java.net.http:?]
at java.net.http/jdk.internal.net.http.Exchange.responseAsyncImpl0(Exchange.java:425) ~[java.net.http:?]
at java.net.http/jdk.internal.net.http.Exchange.responseAsyncImpl(Exchange.java:330) ~[java.net.http:?]
at java.net.http/jdk.internal.net.http.Exchange.responseAsync(Exchange.java:322) ~[java.net.http:?]
at java.net.http/jdk.internal.net.http.MultiExchange.responseAsyncImpl(MultiExchange.java:304) ~[java.net.http:?]
Can someone help me fix this issue?
The server is Tomcat 9, and its max concurrent streams setting is left at the default.
"When I try to hit the server continuously"

The server has a setting for max_concurrent_streams that is communicated to the client during the initial establishment of an HTTP/2 connection.
If you blindly "hit the server continuously" using sendAsync, you are not waiting for previous requests to finish, and eventually you exceed the max_concurrent_streams value and receive the error above.
The solution is to concurrently send a number of requests that is less than max_concurrent_streams; after that, you only send a new request when a previous one completes.
This can easily be implemented on the client using a Semaphore or something similar.
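A minimal sketch of that idea (the limit, class and method names below are just illustrative; the cap must stay below the max_concurrent_streams value the server advertises):

import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

public class ThrottledSender {

    private static final int MAX_CONCURRENT_STREAMS = 50; // illustrative; keep below the server's limit
    private final Semaphore permits = new Semaphore(MAX_CONCURRENT_STREAMS);
    private final HttpClient client = HttpClient.newBuilder()
            .version(HttpClient.Version.HTTP_2)
            .build();

    public CompletableFuture<HttpResponse<String>> send(HttpRequest request) {
        // Blocks the caller when MAX_CONCURRENT_STREAMS requests are already in flight.
        permits.acquireUninterruptibly();
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                // Free the slot whether the exchange succeeded or failed.
                .whenComplete((response, error) -> permits.release());
    }
}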
Unfortunately, the approach with a Semaphore, suggested by @sbordet, didn't work for me. I tried this:
// semaphores is a map from a request key to a Semaphore; getRequestKey, the
// pools and MAX_CONCURRENT_REQUESTS_NUMBER are defined elsewhere in my code.
var semaphore = semaphores.computeIfAbsent(getRequestKey(request),
        k -> new Semaphore(MAX_CONCURRENT_REQUESTS_NUMBER));
CompletableFuture.runAsync(semaphore::acquireUninterruptibly, WAITING_POOL)
        .thenComposeAsync(ignored -> httpClient.sendAsync(request, responseBodyHandler), ASYNC_POOL)
        .whenComplete((response, e) -> semaphore.release());
There's no guarantee that a connection stream has been released by the time execution passes to the next CompletableFuture, where the semaphore is released. The approach worked for me under normal execution; however, if there are any exceptions, it seems that the connection stream may be closed after semaphore.release() is invoked.
Finally, I ended up using OkHttp. It handles the problem (it simply waits until some streams are freed up if the number of concurrent streams reaches max_concurrent_streams), and it also handles the GOAWAY frame. With the Java HttpClient I had to implement retry logic to handle this, as it just throws an IOException if the server sends a GOAWAY frame.
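For reference, that retry logic can be sketched roughly as follows (simplified, not my exact code; MAX_RETRIES is arbitrary, and it retries on any IOException, which is how a GOAWAY surfaces in the JDK client):

import java.io.IOException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class RetryingSender {

    private static final int MAX_RETRIES = 3; // illustrative value
    private final HttpClient client = HttpClient.newHttpClient();

    public CompletableFuture<HttpResponse<String>> send(HttpRequest request) {
        return send(request, MAX_RETRIES);
    }

    private CompletableFuture<HttpResponse<String>> send(HttpRequest request, int retriesLeft) {
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .handle((response, error) -> {
                    if (error == null) {
                        return CompletableFuture.completedFuture(response);
                    }
                    // The failure may be reported wrapped in a CompletionException.
                    Throwable cause = error instanceof CompletionException ? error.getCause() : error;
                    if (cause instanceof IOException && retriesLeft > 0) {
                        return send(request, retriesLeft - 1); // retry, e.g. after a GOAWAY
                    }
                    return CompletableFuture.<HttpResponse<String>>failedFuture(cause);
                })
                .thenCompose(f -> f); // flatten the nested future
    }
}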