How does Variational AutoEncoder (VAE) get mean and variance?

Could somebody explain to me how the VAE in this tutorial works (look only at the cell that starts with `class Sampling`)?

The input (batch_size=64, flattened_pixels=784) goes through a Dense(64, 'relu') layer. The result then goes through two Dense(32) layers in parallel; one output they call z_mean, the other z_log_var. Why should the outputs of two identical Dense(32) layers fed the same input be a mean and a log variance? Reading about this topic, I found that z_mean becomes a mean and z_log_var becomes a log variance during the loss minimization process.
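To make the setup concrete, here is a minimal sketch of how I understand the encoder (sizes as above; the Sampling class follows the tutorial's, up to renaming):

    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers

    class Sampling(layers.Layer):
        """Draws z ~ N(z_mean, exp(z_log_var)) via the reparameterization trick."""
        def call(self, inputs):
            z_mean, z_log_var = inputs
            epsilon = tf.random.normal(shape=tf.shape(z_mean))
            return z_mean + tf.exp(0.5 * z_log_var) * epsilon

    inputs = keras.Input(shape=(784,))                 # flattened 28x28 pixels
    h = layers.Dense(64, activation="relu")(inputs)
    z_mean = layers.Dense(32, name="z_mean")(h)        # one head of the fork
    z_log_var = layers.Dense(32, name="z_log_var")(h)  # the other head, same input
    z = Sampling()([z_mean, z_log_var])                # z = mean + std * epsilon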

Also, as far as I know:

$D_{\mathrm{KL}}(p\,\|\,q) = H(p,q) - H(p) = -\sum p\log q - \left(-\sum p\log p\right)$
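As a quick sanity check of that identity, with made-up discrete distributions p and q:

    import numpy as np

    p = np.array([0.2, 0.5, 0.3])   # made-up distributions, just for illustration
    q = np.array([0.1, 0.6, 0.3])

    cross_entropy = -np.sum(p * np.log(q))
    entropy = -np.sum(p * np.log(p))
    kl = np.sum(p * np.log(p / q))

    assert np.isclose(kl, cross_entropy - entropy)  # KL = H(p,q) - H(p)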

I know what KL divergence is, and I understand binary and categorical cross-entropy, but I still can't understand why the loss calculated by the formula below forces one output of Dense(32) to be a mean and the other output to be a log variance:

    kl_loss = -0.5 * tf.reduce_mean(
        z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
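If I understand the tutorial correctly, this KL term is then combined with a reconstruction loss, so the full objective is something like the sketch below (my own paraphrase, not the tutorial's exact code; `data`, `reconstruction`, and `kl_loss` here are hypothetical stand-in tensors):

    import tensorflow as tf

    # Hypothetical stand-ins for a batch, its decoder output, and the KL term.
    data = tf.random.uniform((64, 784))
    reconstruction = tf.random.uniform((64, 784))
    kl_loss = tf.constant(0.1)

    # Reconstruction error summed over pixels, averaged over the batch;
    # the tutorial may use binary cross-entropy instead of squared error.
    reconstruction_loss = tf.reduce_mean(
        tf.reduce_sum(tf.square(data - reconstruction), axis=-1))
    total_loss = reconstruction_loss + kl_loss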

The $n=1$ case of this answer obtains $D_{\mathrm{KL}}\big(\mathcal{N}(\mu,\sigma^2)\,\|\,\mathcal{N}(0,1)\big) = -\frac12(1+\ln\sigma^2-\sigma^2-\mu^2)$. This is your formula with z_mean as $\mu$ and z_log_var as $\ln\sigma^2$. Nothing intrinsic to the two Dense(32) layers makes one a mean and the other a log variance: they acquire those roles because the Sampling layer uses them as such, drawing $z = \mu + e^{\frac12\ln\sigma^2}\epsilon$ with $\epsilon\sim\mathcal{N}(0,I)$, and because this KL term penalizes them as such. Gradient descent then drives each head toward values for which the Gaussian interpretation is consistent.
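As a numerical sanity check (my own sketch, with made-up encoder outputs), the tutorial's expression agrees with that closed form averaged over batch and latent dimensions:

    import numpy as np

    rng = np.random.default_rng(0)
    z_mean = rng.normal(size=(64, 32))     # stand-ins for the encoder outputs
    z_log_var = rng.normal(size=(64, 32))

    # The tutorial's expression, written in NumPy.
    kl_loss = -0.5 * np.mean(z_log_var - np.square(z_mean) - np.exp(z_log_var) + 1)

    # Closed form KL( N(mu, sigma^2) || N(0, 1) ), averaged the same way.
    mu, var = z_mean, np.exp(z_log_var)
    kl_closed_form = np.mean(0.5 * (mu**2 + var - np.log(var) - 1))

    assert np.isclose(kl_loss, kl_closed_form)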