nginx reverse proxy with plantuml instance behind subfolder *within* subdomain, unknown error (plantuml issue?)

I'm struggling with the following set-up:

  • local gitlab instance (http) behind a reverse proxy accessible on gitlab.mydomain.com
  • plantuml instance (http) behind the same proxy, which I want to access under gitlab.mydomain.com/plantuml (also configured as such in gitlab)

So this is my current nginx setup (shown here as an "aggregated" pseudo-config):

http {
    ...
    server {
        listen 5443 ...;
        # general webserver (with error pages etc.)
    }

    # reverse proxy for gitlab:
    server {
        listen 5443 ssl;
        listen [::]:5443 ssl;

        server_name gitlab.*;

        include /etc/nginx/ssl.conf; # certificates etc.

        # already tried ^~, but now according to
        # https://stackoverflow.com/a/28478928/12771809
        # (though this doesn't appear to be the real problem)
        location ~ ^/plantuml/(.*)$ {
            include /etc/nginx/proxy.conf;
            resolver myrouter valid=30s;
            set $upstream_app myserver;
            set $upstream_port 7080;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            rewrite /plantuml(.*) $1 break;
        }

        location / {
            include /etc/nginx/proxy.conf; # standard proxy headers
            resolver myrouter valid=30s;
            set $upstream_app myserver;
            set $upstream_port 6080;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        }
    }
}

The reverse proxy itself works great (I can use all gitlab functionality just fine), as do other services behind different vhosts on the same domain.

But the plantuml instance doesn't work. If I connect to gitlab.mydomain.com/plantuml/png/U9oLLS6Ec... I get:

Bad Message 400
reason: Bad Request

The debug log shows:

2020/11/18 16:30:43 [debug] 102#102: *11 name was resolved to 192.168.1.60
2020/11/18 16:30:43 [debug] 102#102: resolve name done: 0
2020/11/18 16:30:43 [debug] 102#102: resolver expire
2020/11/18 16:30:43 [debug] 102#102: *11 get rr peer, try: 1
2020/11/18 16:30:43 [debug] 102#102: *11 stream socket 44
2020/11/18 16:30:43 [debug] 102#102: *11 epoll add connection: fd:44 ev:80002005
2020/11/18 16:30:43 [debug] 102#102: *11 connect to 192.168.1.60:7080, fd:44 #93
2020/11/18 16:30:43 [debug] 102#102: *11 http upstream connect: -2
2020/11/18 16:30:43 [debug] 102#102: *11 posix_memalign: 0000560AAA19E860:128 @16
2020/11/18 16:30:43 [debug] 102#102: *11 event timer add: 44: 240000:752739282
2020/11/18 16:30:43 [debug] 102#102: *11 http finalize request: -4, "/png/U9oLLS6Ec......"

The last line shows me that the rewrite worked, and if I open http://192.168.1.60:7080/png/U9oLLS6Ec... in a browser on the LAN, I can see the diagram.

The debug log further shows:

2020/11/18 16:30:43 [debug] 102#102: *11 http upstream process header
2020/11/18 16:30:43 [debug] 102#102: *11 malloc: 0000560AAA1A82E0:4096
2020/11/18 16:30:43 [debug] 102#102: *11 recv: eof:0, avail:-1
2020/11/18 16:30:43 [debug] 102#102: *11 recv: fd:44 198 of 4096
2020/11/18 16:30:43 [debug] 102#102: *11 http proxy status 400 "400 Bad Request"
2020/11/18 16:30:43 [debug] 102#102: *11 http proxy header: "Content-Type: text/html;charset=iso-8859-1"
2020/11/18 16:30:43 [debug] 102#102: *11 http proxy header: "Content-Length: 54"
2020/11/18 16:30:43 [debug] 102#102: *11 http proxy header: "Connection: close"
2020/11/18 16:30:43 [debug] 102#102: *11 http proxy header: "Server: Jetty(9.4.33.v20201020)"
2020/11/18 16:30:43 [debug] 102#102: *11 http proxy header done
2020/11/18 16:30:43 [debug] 102#102: *11 http2 header filter
2020/11/18 16:30:43 [debug] 102#102: *11 http2 push resources
2020/11/18 16:30:43 [debug] 102#102: *11 http2 output header: ":status: 400"
2020/11/18 16:30:43 [debug] 102#102: *11 http2 output header: "server: nginx"
2020/11/18 16:30:43 [debug] 102#102: *11 http2 output header: "date: Wed, 18 Nov 2020 15:30:43 GMT"
2020/11/18 16:30:43 [debug] 102#102: *11 http2 output header: "content-type: text/html;charset=iso-8859-1"
2020/11/18 16:30:43 [debug] 102#102: *11 http2 output header: "content-length: 54"
2020/11/18 16:30:43 [debug] 102#102: *11 http2:131 create HEADERS frame 0000560AAA1A8188: len:57 fin:0

I don't really understand that part, but it looks as though the error occurs inside plantuml, since the Server header shows Jetty?

EDIT:

I managed to run plantuml directly under Jetty with debug logging enabled, like this:

java -Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.StdErrLog -Dorg.eclipse.jetty.LEVEL=DEBUG -Djetty.contextpath=/ -jar jetty-runner-9.4.34.v20201102.jar plantuml-v1.2020.19.war

and I changed the proxy to forward to this instance (which works fine if I address it directly).
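For completeness, the proxy change amounted to pointing the plantuml location at this debug instance instead of the original one on port 7080, roughly like this (a sketch; it assumes jetty-runner's default port 8080 and the same host as before):

location ~ ^/plantuml/(.*)$ {
    include /etc/nginx/proxy.conf;
    resolver myrouter valid=30s;
    set $upstream_app myserver;
    set $upstream_port 8080;   # jetty-runner's default port (assumption)
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    rewrite /plantuml(.*) $1 break;
}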

The error is exactly reproducible. There's a lot of log output, but this part strikes me as odd:

2020-11-18 20:28:48.814:DBUG:oejs.HttpChannel:qtp1663166483-40: REQUEST for //gitlab.mydomain.com/ on HttpChannelOverHttp@30ef9a4a{s=HttpChannelState@5e5175f8{s=IDLE rs=BLOCKING os=OPEN is=IDLE awp=false se=false i=true al=0},r=1,c=false/false,a=IDLE,uri=//gitlab.mydomain.com/,age=1}
GET //gitlab.mydomain.com/ HTTP/1.1
Host: gitlab.mydomain.com
Upgrade: close
Connection: close
X-Forwarded-For: xxx.xxx.xxx.xxx
X-Forwarded-Host: gitlab.mydomain.com
X-Forwarded-Proto: https
X-Forwarded-Ssl: on
X-Real-IP: xxx.xxx.xxx.xxx

Shouldn't the GET line read something like /png/blabla... instead of //gitlab.mydomain.com/?

EDIT2:

Slightly changing the location block now gets the URI part into the GET line as well, at least in the log, although Jetty still reports uri=null in the parse exception (which I suppose is part of the problem):

2020-11-18 21:22:33.606:DBUG:oejh.HttpParser:qtp1663166483-22: Parse exception: HttpParser{s=CONTENT,0 of -1} for HttpChannelOverHttp@7dd961cb{s=HttpChannelState@79e6eaa8{s=IDLE rs=BLOCKING os=OPEN is=IDLE awp=false se=false i=true al=0},r=0,c=false/false,a=IDLE,uri=null,age=0}
org.eclipse.jetty.http.BadMessageException: 400: null
        at org.eclipse.jetty.server.HttpChannelOverHttp.upgrade(HttpChannelOverHttp.java:432)
        at org.eclipse.jetty.server.HttpChannelOverHttp.headerComplete(HttpChannelOverHttp.java:357)
        at org.eclipse.jetty.http.HttpParser.parseFields(HttpParser.java:1225)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1508)
        at org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:364)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:261)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
        at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:773)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:905)
        at java.base/java.lang.Thread.run(Thread.java:832)
2020-11-18 21:22:33.607:DBUG:oejh.HttpParser:qtp1663166483-22: CONTENT --> CLOSE
2020-11-18 21:22:33.607:DBUG:oejs.HttpChannel:qtp1663166483-22: REQUEST for //gitlab.mydomain.com/png/XxXxXxXx on HttpChannelOverHttp@7dd961cb{s=HttpChannelState@79e6eaa8{s=IDLE rs=BLOCKING os=OPEN is=IDLE awp=false se=false i=true al=0},r=1,c=false/false,a=IDLE,uri=//gitlab.mydomain.com/png/XxXxXxXx,age=0}
GET //gitlab.mydomain.com/png/XxXxXxXx HTTP/1.1
Host: gitlab.mydomain.com

Solution:

The problem was as "stupid" as it was hidden (lesson learnt: never "just" include boilerplate).

The standard proxy.conf I included (it ships with one of the nginx containers I tried) contains the following (which is a lot and probably includes things I don't need):

## Version 2020/10/04 - Changelog: https://github.com/linuxserver/docker-swag/commits/master/root/defaults/proxy.conf

# Timeout if the real server is dead
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

# Proxy Connection Settings
proxy_buffers 32 4k;
proxy_connect_timeout 240;
proxy_headers_hash_bucket_size 128;
proxy_headers_hash_max_size 1024;
proxy_http_version 1.1;
proxy_read_timeout 240;
# proxy_redirect  http://  $scheme://;
proxy_redirect off;
proxy_send_timeout 240;

# Proxy Cache and Cookie Settings
proxy_cache_bypass $cookie_session;
#proxy_cookie_path / "/; Secure"; # enable at your own risk, may break certain apps
proxy_no_cache $cookie_session;

# Proxy Header Settings
proxy_set_header Early-Data $ssl_early_data;
proxy_set_header Host $host;
proxy_set_header Proxy "";
proxy_set_header Upgrade $connection_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Real-IP $remote_addr;

Removing the include of this file, combined with a clean location /plantuml/ { ... } block, solved the problem.
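For reference, a minimal "clean" variant of that block could look like this (a sketch of what I mean; the proxy_set_header lines are just plain forwarding headers of my own choosing, not taken from the include):

location /plantuml/ {
    resolver myrouter valid=30s;
    set $upstream_app myserver;
    set $upstream_port 7080;
    # plain forwarding headers only, no Upgrade/Connection handling
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    # strip the /plantuml prefix before passing the request upstream
    rewrite ^/plantuml/(.*)$ /$1 break;
    proxy_pass http://$upstream_app:$upstream_port;
}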

And the culprit was the line:

proxy_set_header Upgrade $connection_upgrade;

Without it, everything works.
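This also matches the Jetty log above: the request Jetty received contains Upgrade: close (presumably because $connection_upgrade maps to "close" when the client sends no Upgrade header of its own), and Jetty's HttpChannelOverHttp.upgrade() rejects that with a 400. If you want to keep a shared include, the usual WebSocket-proxying pattern only forwards the Upgrade header when the client actually asked for an upgrade, roughly like this (a sketch, assuming the corresponding map is defined somewhere in the http block; this is not the linuxserver file itself):

# in the http block:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# in the proxied location:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;          # empty (and therefore not sent) for normal requests
proxy_set_header Connection $connection_upgrade;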