Without loss of generality, let us assume that $y = (0,\dotsc, 0)$ is the origin. We write $x = (x_1, \dotsc, x_d)$.

Again without loss of generality, let us assume that every $x_i$ is non-negative: whenever $x_i < 0$, we may "mirror" all the paths with respect to the $i$-th coordinate hyperplane.

We write $S$ for the sum of all $x_i$.

For parity reasons, we should also assume that $n = 2m + S$ for some non-negative integer $m$; otherwise there is no such path.

With all these assumptions, the number of paths connecting $x$ and $y$, as a function of $m$, is given by: $$f_d(m) = n!\sum_{a_1 + \dotsc + a_d = m}\frac{1}{\prod_{i = 1}^d\left(a_i!(a_i + x_i)!\right)}.$$

Reason: let $a_i$ be the number of steps taken in the negative direction along the $i$-th axis. We then have to take $a_i + x_i$ steps in the positive direction along the $i$-th axis. The sum of all the $a_i$ is half of the steps that we "wasted", hence equal to $m$; and once the numbers $a_i$ are fixed, we only have to choose how these steps are arranged among the $n$ steps in total, and there are $\frac{n!}{\prod_{i = 1}^d\left(a_i!(a_i + x_i)!\right)}$ such arrangements.
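For a quick sanity check, here is a minimal Python sketch (the helper names `f_exact` and `f_brute` are mine, and the test cases are arbitrary small instances) that compares the formula with a brute-force enumeration of all $(2d)^n$ step sequences:

```python
from itertools import product
from math import factorial

def f_exact(d, m, x):
    """Evaluate f_d(m): sum over all ways (a_1, ..., a_d) to split the
    m 'wasted' negative steps among the d axes, with n = 2m + sum(x)."""
    n = 2 * m + sum(x)
    total = 0
    for a in product(range(m + 1), repeat=d):
        if sum(a) != m:
            continue
        ways = factorial(n)
        for ai, xi in zip(a, x):
            ways //= factorial(ai) * factorial(ai + xi)
        total += ways
    return total

def f_brute(d, m, x):
    """Enumerate all (2d)^n step sequences and count those ending at x."""
    n = 2 * m + sum(x)
    steps = [(i, s) for i in range(d) for s in (+1, -1)]
    count = 0
    for seq in product(steps, repeat=n):
        pos = [0] * d
        for i, s in seq:
            pos[i] += s
        if pos == list(x):
            count += 1
    return count

# tiny instances only, so the brute force stays fast
for d, m, x in [(1, 2, (1,)), (2, 1, (1, 1)), (2, 2, (0, 2)), (3, 1, (1, 0, 1))]:
    assert f_exact(d, m, x) == f_brute(d, m, x)
print("formula agrees with brute-force enumeration")
```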


Example:

For $d = 1$, the function we have is simply $f_1(m) = \binom n m = \binom {2m + S} m$.

By Stirling's formula, this is asymptotic to $\frac{2^{2m + S}}{\sqrt{\pi m}}$. (We say a function $g(m)$ is asymptotic to $h(m)$ if the ratio $g(m)/h(m)$ tends to $1$ as $m$ tends to infinity.)
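As an illustration (not part of the original argument), the convergence of this ratio can be checked numerically; working with logarithms via `lgamma` avoids overflow for large $m$:

```python
from math import exp, lgamma, log, pi

# ratio of f_1(m) = C(2m+S, m) to its approximation 2^(2m+S) / sqrt(pi m)
S = 3  # arbitrary choice
for m in [10, 100, 1000, 10000]:
    n = 2 * m + S
    log_exact = lgamma(n + 1) - lgamma(m + 1) - lgamma(n - m + 1)
    log_approx = n * log(2) - 0.5 * log(pi * m)
    print(m, exp(log_exact - log_approx))  # tends to 1 as m grows
```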



For $d = 2$, it turns out that there is again a closed formula.

We have \begin{eqnarray*} f_2(m) &=& n!\sum_{a_1 + a_2 = m}\frac{1}{a_1!a_2!(a_1 + x_1)!(a_2 + x_2)!}= \binom n m \sum_{a = 0}^m\binom m a \binom{m + S}{a + x_2}\\ &=& \binom n m \sum_{a = 0}^m\binom m a \binom{m + S}{m + x_1 - a} = \binom n m \binom n {m + x_1}, \end{eqnarray*} where we substituted $a = a_2$, used the symmetry $\binom{m+S}{a+x_2} = \binom{m+S}{m + x_1 - a}$, and applied Vandermonde's identity in the last step.

Therefore, by Stirling, $f_2(m)$ is asymptotic to $\frac{2^{2n}}{\pi m}$.
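Here is a short sketch (the names `f2_sum` and `f2_closed` are mine) verifying the closed form against the defining sum for small parameters:

```python
from math import comb, factorial

def f2_sum(m, x1, x2):
    """Direct evaluation of the defining sum for d = 2, with a = a_2."""
    n = 2 * m + x1 + x2
    return sum(
        factorial(n) // (factorial(a) * factorial(m - a)
                         * factorial(a + x2) * factorial(m - a + x1))
        for a in range(m + 1)
    )

def f2_closed(m, x1, x2):
    """The closed form C(n, m) * C(n, m + x1) obtained via Vandermonde."""
    n = 2 * m + x1 + x2
    return comb(n, m) * comb(n, m + x1)

for m in range(6):
    for x1 in range(4):
        for x2 in range(4):
            assert f2_sum(m, x1, x2) == f2_closed(m, x1, x2)
print("closed form agrees with the sum")
```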


This should give you a general sense of how $f_d$ grows with respect to $m$. More specifically, my feeling is that for every $d$ there exists a number $\alpha_d$ such that $f_d(m)$ grows like $\alpha_d^m$, up to a polynomial factor in $m$.

A trivial upper bound is $\alpha_d \leq 4d^2$: every step has only $2d$ different choices, hence the total number of paths of length $n$ is $(2d)^n = (2d)^S(4d^2)^m$, which is $(4d^2)^m$ up to a constant factor. The cases above, where $\alpha_1 = 4$ and $\alpha_2 = 16$, suggest that this trivial bound gives the right growth rate, i.e. $\alpha_d = 4d^2$, with a polynomial correction of order $m^{-d/2}$.
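Under that guess, $f_d(m)^{1/m}$ should tend to $4d^2$. A quick numerical check for $d = 1, 2$ (using the closed forms above with $x = 0$, so $S = 0$ and $n = 2m$; logarithms again avoid overflow):

```python
from math import exp, lgamma

def log_binom(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

# with x = 0: f_1(m) = C(2m, m) and f_2(m) = C(2m, m)^2
for m in [10, 100, 1000, 10000]:
    log_f1 = log_binom(2 * m, m)
    print(m, exp(log_f1 / m), exp(2 * log_f1 / m))  # tend to 4 and 16
```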

The idea of the proof is to imitate the $d = 2$ case. But to keep this answer at a reasonable length, I will pause the discussion here and leave it to interested readers.


P.S. The problem obviously has an interpretation in terms of random walks on lattices, so maybe there are probabilists who already know this or are interested in it.


The best way to approach this is to instead consider a random walk on $\mathbb Z^d$ starting at the origin and ask for the probability that it ends at a specific point $z$ after $n$ steps. The reason is that we can define a random variable $X_i$ for each step, chosen uniformly from the $2d$ possible steps, so that the ending position is just $X_1+\ldots + X_n$. These probabilities are just the quantities you want divided by $(2d)^n$, but thinking about it in these terms lets us bring in a powerful machine: the central limit theorem.

So, first of all, let's jump straight to the answer and come back to fill in the pesky details. Let $Z_n$ be the random variable giving the position after $n$ steps. The multivariate central limit theorem states that $$\frac{1}{\sqrt{n}}\cdot Z_n \rightarrow N\left(0,\frac{1}{d}\cdot I_d\right)$$ where $N$ is the multivariate normal distribution, $I_d$ is the $d\times d$ identity matrix, and the arrow means convergence in distribution. To unpack that with less jargon: the distribution of $\frac{1}{\sqrt{n}}\cdot Z_n$ tends to be the same as if we had chosen each coordinate independently from a normal distribution with variance $\frac{1}{d}$, which is the variance of each coordinate of each $X_i$.
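A quick Monte Carlo sanity check of this scaling (the parameters $d$, $n$ and the number of trials below are arbitrary choices of mine): each coordinate of $\frac{1}{\sqrt n} Z_n$ should have empirical variance close to $\frac1d$.

```python
import random

d, n, trials = 3, 400, 5000
var_sum = [0.0] * d
for _ in range(trials):
    pos = [0] * d
    for _ in range(n):
        axis = random.randrange(d)           # uniform over the d axes
        pos[axis] += random.choice((-1, 1))  # uniform over the two directions
    for j in range(d):
        var_sum[j] += pos[j] ** 2 / n        # contribution to Var(Z_n[j] / sqrt(n))
print([v / trials for v in var_sum])         # each entry should be close to 1/d
```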

In particular, unless anything really bad happens, this means that the probability of being at a particular point $z$ after $n$ steps is roughly the probability of choosing, from the normal distribution, a point $p$ such that the closest point in $\frac{1}{\sqrt{n}}\mathbb Z^d$ with the correct parity is $\frac{1}{\sqrt{n}}z$. The probability density function of $N\left(0,\frac{1}d\cdot I_d\right)$ evaluated at $\frac{1}{\sqrt{n}}\cdot z$ is given (with the product running over the $d$ coordinates) by $$\prod_i \frac{\sqrt{d}}{\sqrt{2\pi}} \cdot \exp\left(-\frac{1}2\cdot \left(\frac{\sqrt{d}\cdot z_i}{\sqrt{n}}\right)^2\right) = \left(\frac{\sqrt{d}}{\sqrt{2\pi}}\right)^d\cdot \exp\left(-\frac{d\|z\|^2}{2n}\right).$$ The probability of this point being the closest one is roughly this value of the PDF times the volume of the region in which $\frac{1}{\sqrt{n}}z$ is the closest point of the right parity; this volume is $2\left(\frac{1}{\sqrt{n}}\right)^d$, which gives the probability of ending at a $z$ of the right parity as roughly $$2\left(\frac{\sqrt{d}}{\sqrt{2\pi n}}\right)^d\cdot \exp\left(-\frac{d\|z\|^2}{2n}\right).$$ If we multiply this probability by the count of $(2d)^n$ equally likely paths of length $n$, we get the following result:

The number of paths of length $n$ starting at the origin and ending at some point $z$ is approximately $$2(2d)^n\left(\frac{\sqrt{d}}{\sqrt{2\pi n}}\right)^d\cdot \exp\left(-\frac{d\|z\|^2}{2n}\right)$$
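To see how good this approximation already is for moderate $n$, here is a sketch (the function names are mine) that computes the exact counts by dynamic programming for $d = 2$ and compares them with the formula:

```python
from math import exp, pi

def path_counts(d, n):
    """Exact number of n-step walks from the origin to each reachable
    point of Z^d, via dynamic programming over the step count."""
    counts = {(0,) * d: 1}
    for _ in range(n):
        new = {}
        for pos, c in counts.items():
            for i in range(d):
                for s in (-1, 1):
                    q = list(pos)
                    q[i] += s
                    q = tuple(q)
                    new[q] = new.get(q, 0) + c
        counts = new
    return counts

def clt_approx(d, n, z):
    """The approximation 2 (2d)^n (d/(2 pi n))^(d/2) exp(-d |z|^2 / (2n))."""
    norm2 = sum(c * c for c in z)
    return 2 * (2 * d) ** n * (d / (2 * pi * n)) ** (d / 2) * exp(-d * norm2 / (2 * n))

d, n = 2, 40
counts = path_counts(d, n)
for z in [(0, 0), (2, 0), (4, 2), (6, 6)]:
    print(z, counts[z], round(clt_approx(d, n, z)))
```

For instance, at the origin the exact count is $\binom{40}{20}^2 \approx 1.90\cdot 10^{22}$, while the approximation gives about $1.92\cdot 10^{22}$.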

To be a bit more formal, all that we can use "convergence in distribution" to say is that if we choose some (measurable) region $A$ of $\mathbb R^d$ whose boundary has no $d$-volume, then the probability that $\frac{1}{\sqrt{n}}Z_n$ is in $A$ tends to, as $n$ goes to $\infty$, the probability that the same would happen for a normal distribution.

To use this at the origin, you can look at the probability that the ending position is within a ball of radius $\frac{\alpha}{\sqrt{n}}$ around the origin, and conclude formally that this probability converges to the probability that the given normal distribution lands within $\alpha$ of the origin. I'm struggling to recall a nice way to show this, but the probabilities of landing at the various points in this region are not too different from each other (essentially, the random walk does not remember where it started), so the probability of landing at a given point turns out to be really close to the total probability of the region divided by the number of points therein, which is also pretty much the asymptotic given before.


As mentioned by @Shalop in the comments, we can justify the asymptotics of the probabilities of ending up at a given point by resorting to some form of a local central limit theorem, which directly yields such results. For instance, Theorem 3 of this paper can be applied to this process after a suitable change of coordinates to remove the parity issues.

One can also put together various ad hoc approaches to prove this particular result: by noting that the probability of ending up at a particular point $z$ decreases as the absolute value of each coordinate of $z$ increases, or that the ratio of adjacent probabilities can often be interpreted through events like "the probability of crossing a certain hyperplane, given that we ended up at some point" (which are very close to $1$), or that the differences of adjacent probabilities shrink quickly (which can be established via the Fourier methods in my other answer).