Docker build fails to fetch packages from archive.ubuntu.com inside bash script used in Dockerfile

Trying to build a Docker image that executes a prerequisites installation script from inside the Dockerfile fails because apt-get cannot fetch packages from archive.ubuntu.com.

Running the apt-get command directly in the Dockerfile via RUN works flawlessly, despite being behind a corporate proxy, which is set up via ENV instructions in the Dockerfile. Executing apt-get from a bash script in a terminal inside the resulting Docker container, or as "postCreateCommand" in a devcontainer.json of Visual Studio Code, also works as expected. But it does not work when the very same bash script is invoked from inside the Dockerfile. It simply reports:

Starting installation of package iproute2
Reading package lists...
Building dependency tree...
The following additional packages will be installed:
libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
Suggested packages:
iproute2-doc
The following NEW packages will be installed:
iproute2 libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 971 kB of archives.
After this operation, 3,287 kB of additional disk space will be used.
Err:1 http://archive.ubuntu.com/ubuntu focal/main amd64 libcap2 amd64 1:2.32-1
Could not resolve 'archive.ubuntu.com'

... more output ...

E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/libc/libcap2/libcap2_2.32-1_amd64.deb  Could not resolve 'archive.ubuntu.com'

... more output ...

For example, a snippet of the Dockerfile looks like this:

FROM ubuntu:20.04 as builderImage

USER root

ARG HTTP_PROXY_HOST_IP='http://172.17.0.1'
ARG HTTP_PROXY_HOST_PORT='3128'
ARG HTTP_PROXY_HOST_ADDR=$HTTP_PROXY_HOST_IP':'$HTTP_PROXY_HOST_PORT

ENV http_proxy=$HTTP_PROXY_HOST_ADDR
ENV https_proxy=$http_proxy
ENV HTTP_PROXY=$http_proxy
ENV HTTPS_PROXY=$http_proxy
ENV ftp_proxy=$http_proxy
ENV FTP_PROXY=$http_proxy

# it always helps to sort packages alphanumerically to keep an overview ;)
RUN apt-get update && \
    apt-get -y upgrade && \
    apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
    && \
    apt-get -y install \
        default-jdk \
        git \
        python3 python3-pip

SHELL ["/bin/bash", "-c"]

ADD ./env-setup.sh .
RUN chmod +x env-setup.sh && ./env-setup.sh

CMD ["bash"]

A minimal version of the environment script env-setup.sh, which is invoked by the Dockerfile, looks like this:

#!/bin/bash

packageCommand="apt-get";
sudo $packageCommand update;
packageInstallCommand="$packageCommand install";
package="iproute2"
packageInstallCommand+=" -y";
sudo $packageInstallCommand $package;

Of course, the variables are only there because the full script works through a list of packages to install, among other things.
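For illustration, that fuller variant essentially boils down to something like this (the additional package names are just placeholders):

#!/bin/bash

packageCommand="apt-get";
packageInstallCommand="$packageCommand install";
packageInstallCommand+=" -y";
packages=("iproute2" "curl" "vim");

sudo $packageCommand update;
for package in "${packages[@]}"; do
    echo "Starting installation of package $package";
    sudo $packageInstallCommand $package;
done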

Hopefully that covers everything essential to the question:

Why does the execution of apt-get work with a RUN instruction, and also when running the bash script inside the container after it has been created, but not from the very same bash script while building the image from the Dockerfile?

I was hoping to find the answer with the help of an extensive web search, but unfortunately everything I found covered other cases, not this one.


As pointed out in the comment section underneath the question:

using sudo to launch the command, wiping out all the current vars set in the current environment, more specifically your proxy settings

That is exactly what happens here: sudo launches apt-get with a reset environment, so the http_proxy/https_proxy variables set via ENV in the Dockerfile never reach it, and without the proxy the build cannot resolve archive.ubuntu.com.

The solution is either to remove sudo from the bash script and invoke the script as root inside the Dockerfile (the Dockerfile above already sets USER root).
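A minimal sketch of that variant, i.e. the same env-setup.sh with sudo simply dropped, so apt-get runs as root and inherits the proxy variables set via ENV:

#!/bin/bash

packageCommand="apt-get";
$packageCommand update;
packageInstallCommand="$packageCommand install";
package="iproute2"
packageInstallCommand+=" -y";
$packageInstallCommand $package;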

Or to keep sudo and preserve the environment variables by applying sudo -E.
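With -E (--preserve-env), sudo keeps the caller's environment, so the proxy variables set via ENV reach apt-get, assuming the sudo policy in the image allows preserving the environment:

#!/bin/bash

packageCommand="apt-get";
sudo -E $packageCommand update;
packageInstallCommand="$packageCommand install";
package="iproute2"
packageInstallCommand+=" -y";
sudo -E $packageInstallCommand $package;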