Why can't a pod connect to another network? (In the new version of Kubernetes)

I reported this issue to google here: https://issuetracker.google.com/issues/111986281

And they said that it is an issue in Kubernetes 1.9:

Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.

The solution is at the following link: https://cloud.google.com/kubernetes-engine/docs/troubleshooting#autofirewall

Basically:

First, find your cluster's network:

gcloud container clusters describe [CLUSTER_NAME] --format='get(network)'

Then get the cluster's IPv4 CIDR used for the containers:

gcloud container clusters describe [CLUSTER_NAME] --format='get(clusterIpv4Cidr)'

Finally, create a firewall rule for the network, with the CIDR as the source range, and allow all protocols:

gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" --network="[NETWORK]" --source-ranges="[CLUSTER_IPV4_CIDR]" --allow=tcp,udp,icmp,esp,ah,sctp
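
If it helps, here is a minimal shell sketch that chains the three commands above. CLUSTER_NAME and ZONE are placeholders for your own values, and the --zone flag assumes a zonal cluster (use --region for a regional one):

CLUSTER_NAME="my-cluster"   # placeholder, replace with your cluster name
ZONE="us-central1-a"        # placeholder, replace with your cluster's zone

# Step 1: find the cluster's network
NETWORK=$(gcloud container clusters describe "$CLUSTER_NAME" --zone "$ZONE" --format='get(network)')

# Step 2: get the cluster's IPv4 CIDR used for containers
CLUSTER_IPV4_CIDR=$(gcloud container clusters describe "$CLUSTER_NAME" --zone "$ZONE" --format='get(clusterIpv4Cidr)')

# Step 3: allow traffic from the pod CIDR to all VMs on the network
gcloud compute firewall-rules create "${CLUSTER_NAME}-to-all-vms-on-network" \
  --network="$NETWORK" \
  --source-ranges="$CLUSTER_IPV4_CIDR" \
  --allow=tcp,udp,icmp,esp,ah,sctp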


Since you have two different database servers in GCP, they might have different configurations. Are you using Cloud SQL or database servers installed on GCE VMs?

For Cloud SQL, make sure the external IP addresses of your cluster nodes are whitelisted in the authorized networks of the Cloud SQL instance.

If you are running your database on GCE VMs, I'd recommend checking the firewall rules to make sure they allow incoming connections to the server on the right port and protocol. You might also verify the binding address of your database process to see whether it accepts incoming connections from external IP addresses (you can check this by running "sudo netstat -plnt" to see processes and their binding addresses). This link may help.
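
For example, to list the node external IPs you would whitelist for Cloud SQL, and to check the binding address on a database VM (port 3306 is only an assumption for MySQL; adjust it for your database):

# EXTERNAL-IP column shows the node addresses to add to Cloud SQL authorized networks
kubectl get nodes -o wide

# On the database VM: 0.0.0.0:3306 means it accepts connections on all interfaces,
# 127.0.0.1:3306 means it only accepts local connections
sudo netstat -plnt | grep 3306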