Is there a way to specify a timeout for the whole execution of HttpClient?

I have tried the following:

// SO_TIMEOUT: max inactivity between two consecutive data packets
httpClient.getParams().setParameter("http.socket.timeout", timeout * 1000);
// Max time to establish the connection
httpClient.getParams().setParameter("http.connection.timeout", timeout * 1000);
// Max time to wait for a connection from the connection manager
httpClient.getParams().setParameter("http.connection-manager.timeout", new Long(timeout * 1000));
httpClient.getParams().setParameter("http.protocol.head-body-timeout", timeout * 1000);

It actually works fine, except that if the remote host keeps sending back data - even at one byte per second - it will continue to read forever! I want to interrupt the connection after 10 seconds at most, whether or not the host is still responding.


Solution 1:

For newer versions of HttpClient (e.g. HttpComponents 4.3 - https://hc.apache.org/httpcomponents-client-4.3.x/index.html):

int CONNECTION_TIMEOUT_MS = timeoutSeconds * 1000; // Timeout in millis.
RequestConfig requestConfig = RequestConfig.custom()
    .setConnectionRequestTimeout(CONNECTION_TIMEOUT_MS) // time to lease a connection from the pool
    .setConnectTimeout(CONNECTION_TIMEOUT_MS)           // time to establish the connection
    .setSocketTimeout(CONNECTION_TIMEOUT_MS)            // max inactivity between two data packets
    .build();

HttpPost httpPost = new HttpPost(URL);
httpPost.setConfig(requestConfig);
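
For completeness, a minimal usage sketch (assuming HttpClient 4.3+, with CloseableHttpClient/HttpClients from org.apache.http.impl.client; the response handling is illustrative). Note that setSocketTimeout() bounds each individual read, not the whole exchange, so a host trickling bytes can still run past the budget - the very problem the question describes.

try (CloseableHttpClient client = HttpClients.createDefault();
     CloseableHttpResponse response = client.execute(httpPost)) {
    // Any of the three timeouts firing surfaces here as an exception.
    System.out.println(response.getStatusLine());
}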

Solution 2:

There is currently no way to set a maximum request duration of that sort: basically you want to say "I don't care whether or not any specific request stage times out, but the entire request must not last longer than 15 seconds (for example)".

Your best bet is to run a separate timer; when it expires, fetch the connection manager used by the HttpClient instance and shut it down, which should terminate the link. Let me know if that works for you.
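
A minimal sketch of that idea, assuming HttpClient 4.x where getConnectionManager() is still available (it is deprecated from 4.3 on); the class name and deadline are illustrative:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.http.client.HttpClient;

public class HardDeadline {

    private static final ScheduledExecutorService WATCHDOG =
            Executors.newSingleThreadScheduledExecutor();

    // Shuts down the client's connection manager after maxSeconds,
    // which kills any request still in flight on that client.
    public static void enforce(final HttpClient httpClient, long maxSeconds) {
        WATCHDOG.schedule(new Runnable() {
            @Override
            public void run() {
                httpClient.getConnectionManager().shutdown();
            }
        }, maxSeconds, TimeUnit.SECONDS);
    }
}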

Solution 3:

Works fine, as proposed by Femi. Thanks!

Timer timer = new Timer();
timer.schedule(new TimerTask() {
    public void run() {
        // Abort the in-flight request once the deadline hits.
        if (getMethod != null) {
            getMethod.abort();
        }
    }
}, timeout * 1000);
// Remember to call timer.cancel() when the request completes normally,
// so an already-finished request is not aborted later.

Solution 4:

Timer is evil! Using a Timer, an executor, or any other mechanism that creates a thread or runnable object per request is a very bad idea. Think wisely before you do it, or you will quickly run into all kinds of memory issues in any realistic environment: 1000 requests/min means 1000 threads or workers per minute, and the GC suffers. The solution I propose requires only one watchdog thread and will save you resources, time, and nerves. Basically, you do three steps:

  1. Put the request in a cache.
  2. Remove the request from the cache when it completes.
  3. Abort any request that does not complete within your limit.

The cache, along with the watchdog thread, might look like this:

import org.apache.http.client.methods.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

public class RequestCache {

    private static final long expireInMillis = 300000; // 5-minute limit
    private static final Map<HttpUriRequest, Long> cache = new ConcurrentHashMap<>();
    private static final ScheduledExecutorService exe = Executors.newScheduledThreadPool(1);

    static {
        // run clean-up every minute (schedule() alone would run it only once)
        exe.scheduleAtFixedRate(RequestCache::cleanup, 1, 1, TimeUnit.MINUTES);
    }

    public static void put(HttpUriRequest request) {
        cache.put(request, System.currentTimeMillis() + expireInMillis);
    }

    public static void remove(HttpUriRequest request) {
        cache.remove(request);
    }

    private static void cleanup() {
        long now = System.currentTimeMillis();
        // find expired requests: those whose deadline is already in the past
        List<HttpUriRequest> expired = cache.entrySet().stream()
                .filter(e -> e.getValue() < now)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

        // abort and evict them
        expired.forEach(r -> {
            if (!r.isAborted()) {
                r.abort();
            }
            cache.remove(r);
        });
    }
}

And here is pseudocode showing how to use the cache:

import org.apache.http.client.methods.*;

public class RequestSample {

    public void processRequest() {
        HttpUriRequest req = null;
        try {
            req = createRequest(); // build the request (placeholder)

            RequestCache.put(req); // register it with the watchdog

            execute(req);          // run it (placeholder)

        } finally {
            if (req != null) {     // createRequest() may have thrown
                RequestCache.remove(req);
            }
        }
    }
}