How to connect an API with a web SPA through Docker
I have an API built with PHP (Lumen) and an ecommerce front end built with React. Both of them work fine on their own. The problem comes when I try to make them work together through Docker. I'd like to deploy the whole app by running just a single command.
The problem is that the React app doesn't connect to the API.
I tried the answer from @Suman Kharel on this post:
Proxying API Requests in Docker Container running react app
But it doesn't work. Does anyone know how I can sort it out?
Here is my repo on bitbucket.
https://bitbucket.org/mariogarciait/ecommerce-submodule/src/master/
Hopefully someone knows what I am doing wrong.
Thanks
Solution 1:
If you want to start all of your apps with a single command using Docker, the only option is docker-compose.
That said, docker-compose is best suited for testing or a very limited production infrastructure; the recommended approach is to run each artifact on its own host.
Please read these to understand some points:
- one service per container
- docker ip vs localhost
- docker : links/networks vs variables
When you use docker-compose, all the services are deployed on the same machine, each one in its own container, and only one main process runs inside each container.
So if you enter a container (for example a Node.js web app) and list the processes, you will see something like this:
nodejs .... 3001
And inside another container, such as a Postgres database:
postgres .... 5432
So if the Node.js web app needs to connect to the database from inside its container, it must use the IP of the Postgres database instead of localhost, because inside the Node.js container only one process is listening on localhost:
localhost 3001
So using localhost:5432 won't work inside the Node.js container. The solution is to use the IP of the Postgres container instead of localhost, e.g. 10.10.100.101:5432.
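For example, a quick way to find a container's IP from the host (the container name db is just an illustration) is:

  # print the container's IP address on its Docker network
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db
  # then connect from the Node.js container using that IP, e.g. postgres://user:pass@10.10.100.101:5432/mydb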
Solutions
When we have several containers (docker-compose) with dependencies between them, Docker offers us:
- Deprecated: container links
- Docker networks
In summary, with these features Docker creates a kind of "special network" in which all your containers live in peace, without the complications of IPs!
Docker networks with host.docker.internal
For testing, a quick deployment, or a very limited production environment, you could use a feature available in recent versions of docker-compose (1.29.2) and Docker.
Add this at the end of your docker-compose file:
networks:
  mynetwork:
    driver: bridge
And add this to each of your services:
networks:
  - mynetwork
And if some container needs the host machine's IP, use host.docker.internal instead of the IP:
environment:
  - DATABASE_HOST=host.docker.internal
  - API_BASE_URL=host.docker.internal:8020/api
Finally, in the services that use host.docker.internal, add this:
extra_hosts:
  - "host.docker.internal:host-gateway"
Note: This was tested on Ubuntu, not on Mac or Windows, since real applications are rarely deployed on those operating systems.
Environment variables approach
In my opinion, Docker links or networks are a kind of illusion or deceit, because they only work on one machine (development or staging), hiding dependencies and other complex topics from us, which become important when your apps leave your laptop and go to real servers, ready to be used by your users.
Anyway, if you will use docker-compose for development or real purposes, these steps will help you manage the IPs between your containers:
- get the local IP of your machine and store it in a variable like $MACHINE_HOST in a script such as startup.sh (see the sketch after the example below)
- remove links or networks from docker-compose.yml
- use $MACHINE_HOST to refer to another container from inside your container.
Example:
db:
  image: mysql:5.7.22
  container_name: db_ecommerce
  ports:
    - "5003:3306"
  environment:
    MYSQL_DATABASE: lumen
    MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}
api-php:
  container_name: api_ecommerce
  ports:
    - "8020:80"
    - "445:443"
  environment:
    - DATABASE_HOST=$MACHINE_HOST
    - DATABASE_USER=$DATABASE_USER
    - DATABASE_PASSWORD=$DATABASE_PASSWORD
    - ETC=$ETC
web-react:
  container_name: react_ecommerce
  ports:
    - 3001:3000
  environment:
    - API_BASE_URL=$MACHINE_HOST:8020/api
- Finally, just run your startup.sh, which exports the variables and then runs the classic:
docker-compose up -d
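A minimal sketch of such a startup.sh, assuming a Linux host (how the local IP is detected and the variable values are assumptions; adjust them to your setup):

  #!/usr/bin/env bash
  # startup.sh - export the host IP and the other variables, then start the stack
  export MACHINE_HOST=$(hostname -I | awk '{print $1}')   # assumption: first local IP of the machine
  export DATABASE_USER=root                               # illustrative values
  export DATABASE_PASSWORD=secret
  docker-compose up -d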
Also, in your React app, read the URL of your API from a variable instead of the proxy in package.json:
process.env.REACT_APP_API_BASE_URL
Check this to learn how to read environment variables from a React app.
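For instance, a request from the React app could read that variable like this (a sketch; the /products endpoint is only an illustration):

  // base URL injected at build time through REACT_APP_API_BASE_URL
  const API_BASE_URL = process.env.REACT_APP_API_BASE_URL;

  // example call against the API (endpoint name is illustrative)
  fetch(`${API_BASE_URL}/products`)
    .then((res) => res.json())
    .then((products) => console.log(products));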
Here you can find more detailed steps on how to use the MACHINE_HOST variable:
- https://stackoverflow.com/a/57241558/3957754
Advice
- Use variables instead of hardcoded values in your docker-compose.yml file
- Separate your environments: development, testing, and production
- Building images belongs in the development stage. In other words, don't use build in your docker-compose.yml; at most, it can be an alternative for local development
- For the testing and production stages, just run containers that were built and uploaded during the development stage (to a Docker registry)
- If you use a proxy or an environment variable to read the URL of your API in your React app, your build will only work in one environment. If you need to move it between several environments (testing, staging, uat, etc.), you must perform a new build, because the proxy or environment variable is hardcoded inside your bundle.js at build time.
- This is not a problem just for React; it also exists in Angular, Vue, etc. Check the "Limitation 1: Every environment requires a separate build" section on this page
- You can evaluate https://github.com/utec/geofrontend-server to fix the previously explained problem (and others, like authentication) if it applies to you.
- If your plan is to expose your web app to real users, the web app and the API must have different domains, and of course use https. Example:
- ecommerce.zenit.com for your React app
- api.zenit.com or ecommerce-api.zenit.com for your PHP API
- Finally, if you want to avoid this headache of infrastructure complications and you don't have a team of devops engineers and sysadmins, you can use Heroku, DigitalOcean, OpenShift, or other platforms like them. Almost all of them are Docker compatible, so you just need to perform a git push of each repo with its Dockerfile inside. The platform will interpret your Dockerfile, deploy it, and assign you a ready-to-use http domain for testing, or a proper domain for production (after acquiring the domain and certificate).