Probability of the entering direction of a simple random walk
The $\frac14$ limit is of course true, but the conjectured error term seems to be off by a log; the first-order correction can be computed exactly using discrete potential theory. This post will not be self-contained; I refer to these lecture notes for details on the background claims.
Given a function $h:\mathbb{Z}^2\to \mathbb{R}$, define its discrete Laplacian
$$
\Delta h(x):=\sum_e (h(x+e)-h(x)),
$$
the sum being over the four neighbours of $x$. If $\Omega\subset \mathbb{Z}^2$ and $v\in \Omega$, let $G_\Omega(\cdot ,v):\mathbb{Z}^2\to \mathbb{R}$ be the Green's function, that is, the unique function satisfying $-\Delta G_\Omega(\cdot,v)=\delta_{\cdot,v}$ in $\Omega$, and $G_\Omega(\cdot,v)\equiv 0$ outside $\Omega$. If the random walk starts from $x$, then we have $$\mathbb{P}(X_{\tau}=v,X_{\tau-1}=v+e)=G_{\Omega\setminus\{v\}}(x,v+e),$$ where $\tau=\min\{t:X_t\notin\Omega \setminus\{v\}\}$: indeed, with this normalization $G_{\Omega\setminus\{v\}}(x,v+e)$ equals $\frac14$ times the expected number of visits to $v+e$ before $\tau$, and each such visit is followed by a step to $v$ with probability $\frac14$. Also,
$$
G_{\Omega\setminus\{v\}}(\cdot,v+e)=-\frac{G_\Omega(v,v+e)}{G_\Omega(v,v)} G_{\Omega}(\cdot,v)+G_\Omega(\cdot,v+e),
$$
because the right-hand side vanishes at $v$ and outside $\Omega$, and its Laplacian equals $-\delta_{\cdot,v+e}$ in $\Omega\setminus\{v\}$, so it satisfies the defining conditions of the Green's function of $\Omega\setminus\{v\}$.
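As a sanity check, this one-point-removal identity can be verified numerically by realizing discrete Green's functions as matrix inverses. The following sketch is my own illustration (not from the lecture notes); it assumes NumPy, takes $\Omega$ to be a small box, and uses the normalization $\Delta h(x)=\sum_e(h(x+e)-h(x))$:

```python
import numpy as np

def green(points):
    """Green's function matrix G[i, j] = G(points[i], points[j]) on `points`:
    the inverse of -Delta with zero boundary values outside, where
    Delta h(x) = sum_e (h(x+e) - h(x)) over the four lattice neighbours."""
    idx = {p: i for i, p in enumerate(points)}
    A = np.zeros((len(points), len(points)))
    for p, i in idx.items():
        A[i, i] = 4.0                      # -Delta has 4 on the diagonal
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + d[0], p[1] + d[1])
            if q in idx:
                A[i, idx[q]] = -1.0        # and -1 for neighbours inside
    return np.linalg.inv(A), idx

N = 10                                     # Omega = box {-N,...,N}^2
omega = [(i, j) for i in range(-N, N + 1) for j in range(-N, N + 1)]
G, idx = green(omega)

v, x, e = (0, 0), (5, 3), (1, 0)
ve = (v[0] + e[0], v[1] + e[1])

# Green's function of Omega with the single point v removed
Gm, idxm = green([p for p in omega if p != v])

lhs = Gm[idxm[x], idxm[ve]]
rhs = -G[idx[v], idx[ve]] / G[idx[v], idx[v]] * G[idx[x], idx[v]] + G[idx[x], idx[ve]]
print(abs(lhs - rhs))
```

Since the identity is exact at the discrete level, the two sides agree up to floating-point error.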
Now we plug into the above identity a family $\Omega_R$ of growing domains, so that the rescaled $\Omega_R$ converge: $$R^{-1}\Omega_R\to\Omega,\quad R\to\infty,$$ say, in the Hausdorff sense. As explained in the lecture notes, if $h_R$ solves the discrete Dirichlet problem $\Delta h_R\equiv 0$ in $\Omega_R$, $h_R(x)=\varphi(xR^{-1})$ outside $\Omega_R$ for a continuous $\varphi$, then $h_R(Rx)=h(x)+o(1)$, uniformly over compact subsets of $\Omega$, where $h$ solves the (usual) Dirichlet problem in $\Omega$ with boundary conditions $\varphi$. In fact, all the "discrete derivatives" of $h_R$ also converge to the corresponding derivatives of $h$.
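A minimal numerical illustration of this convergence, again my own sketch: assuming NumPy, take $\Omega_R$ to be the lattice disk of radius $R$ and boundary data $\varphi=\operatorname{Re}\hat z^4$, which is harmonic in the whole plane, so the continuum solution is $\varphi$ itself, while the discrete solution differs from it (degree-four harmonic polynomials are not discrete harmonic) and converges as $R$ grows:

```python
import numpy as np

# boundary data phi(z) = Re(z^4): harmonic in the plane, so the continuum
# Dirichlet solution in the unit disk is phi itself
phi = lambda x, y: x**4 - 6 * x**2 * y**2 + y**4

def solve_dirichlet(R):
    """Discrete Dirichlet problem in the lattice disk {|x| < R} with values
    phi(x/R) at the first points outside; returns a dict point -> value."""
    inside = [(i, j) for i in range(-R, R + 1) for j in range(-R, R + 1)
              if i * i + j * j < R * R]
    idx = {p: k for k, p in enumerate(inside)}
    A = np.zeros((len(inside), len(inside)))
    b = np.zeros(len(inside))
    for (i, j), k in idx.items():
        A[k, k] = 4.0
        for q in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if q in idx:
                A[k, idx[q]] = -1.0
            else:                          # neighbour outside: boundary value
                b[k] += phi(q[0] / R, q[1] / R)
    h = np.linalg.solve(A, b)
    return {p: h[k] for p, k in idx.items()}

errs = []
for R in (15, 30):
    h = solve_dirichlet(R)
    p = (3 * R // 10, R // 5)              # a lattice point well inside
    errs.append(abs(h[p] - phi(p[0] / R, p[1] / R)))
    print(R, errs[-1])
```

With this choice the error comes purely from the mismatch between discrete and continuum harmonicity and appears to decay roughly like $R^{-2}$ here.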
Moreover, as explained in the notes, we can construct the discrete full-plane Green's function, or discrete analogue of the logarithm: the unique function $G_0:\mathbb{Z}^2\to\mathbb{R}$ with the properties $-\Delta G_0=\delta_{\cdot,0}$, $G_0(0)=0$, and $G_0(x)=-\frac{1}{2\pi}\log|x|+c+O(|x|^{-2})$ as $|x|\to\infty$. We can write $$ G_{\Omega_R}(x,v)=G_0(x-v)+\tilde{G}_{\Omega_R}(x,v), $$ where $\tilde{G}_{\Omega_R}$ solves the discrete Dirichlet problem in $\Omega_R$ with boundary data $\varphi(x)=-G_0(x-v)=\frac{1}{2\pi}\log|x-v|-c+O(R^{-2})=\frac{1}{2\pi}\log R+\frac{1}{2\pi}\log\frac{|x-v|}{R}-c+O(R^{-2})$. From the above remark on convergence of solutions to the Dirichlet problem, we deduce that $$ G_{\Omega_R}(x,v)=G_0(x-v)+\frac{1}{2\pi}\log R - c + \tilde{g}_\Omega(xR^{-1},vR^{-1})+o(1), $$ where $\tilde{g}_\Omega(\cdot,\hat{v})$ solves the Dirichlet problem in $\Omega$ with boundary data $\frac{1}{2\pi}\log |\cdot-\hat{v}|$.

Note that if $x$ and $v$ are at distance of order $R$ from each other, this can be written as $$ G_{\Omega_R}(x,v)=g_\Omega(x,v)+o(1), $$ where $g_\Omega$ is the Green's function of $\Omega$, and here and below we abuse notation by writing $g_\Omega(x,v)$ for $g_\Omega(xR^{-1},vR^{-1})$. What is more, the remark about convergence of discrete derivatives, together with the symmetry of the Green's function, implies that if $x$ and $v$ are at distance of order $R$ from each other and from the boundary, then $$G_{\Omega_R}(x,v+e)=G_{\Omega_R}(x,v)+R^{-1}\nabla_{2} g_\Omega(x,v)\cdot e+o(R^{-1}),$$ where $\nabla_{2}$ denotes the gradient in the second argument.

Putting everything together, and using that $G_0(e)=-\frac14$ for each neighbour $e$ of the origin (which follows from $-\Delta G_0(0)=1$, $G_0(0)=0$ and symmetry), we arrive at $$ \frac{G_{\Omega_R}(v,v+e)}{G_{\Omega_R}(v,v)}=\frac{-\frac14+\frac{1}{2\pi}\log R-c+\tilde{g}_\Omega(vR^{-1},vR^{-1})+o(1)}{\frac{1}{2\pi}\log R-c+\tilde{g}_\Omega(vR^{-1},vR^{-1})+o(1)}=1-\frac{2\pi}{4\log R}+O\left(\frac{1}{(\log R)^2}\right), $$ and $$ G_{\Omega_R\setminus\{v\}}(x,v+e)=\frac{2\pi}{4\log R}g_\Omega(x,v)+R^{-1}\nabla_2 g_\Omega(x,v)\cdot e+o((\log R)^{-1})+o_e(R^{-1}), $$ where $o(\cdot)$ does not depend on $e$ and $o_e(\cdot)$ is allowed to depend on $e$.
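The two structural ingredients used here can also be tested numerically: the increment $G_{\Omega_R}(v,v)-G_{\Omega_R}(v,v+e)$ should approach $-G_0(e)=\frac14$, and $G_{\Omega_R}(v,v)-\frac{1}{2\pi}\log R$ should stabilize (for the lattice disk centred at $v$, where the boundary data $\frac{1}{2\pi}\log|\cdot|$ vanishes, so $\tilde g_\Omega(\hat v,\hat v)=0$). A sketch under the same assumptions as before (NumPy, lattice disk, normalization $\Delta h=\sum_e(h(x+e)-h(x))$):

```python
import numpy as np
from math import log, pi

def green_column(R, v):
    """The column G_{Omega_R}(., v) on the lattice disk {|x| < R}, where
    -Delta G = delta_v, Delta h(x) = sum_e (h(x+e) - h(x)), G = 0 outside."""
    pts = [(i, j) for i in range(-R, R + 1) for j in range(-R, R + 1)
           if i * i + j * j < R * R]
    idx = {p: k for k, p in enumerate(pts)}
    A = np.zeros((len(pts), len(pts)))
    for (i, j), k in idx.items():
        A[k, k] = 4.0
        for q in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if q in idx:
                A[k, idx[q]] = -1.0
    rhs = np.zeros(len(pts))
    rhs[idx[v]] = 1.0
    g = np.linalg.solve(A, rhs)
    return {p: g[k] for p, k in idx.items()}

diag = {}
for R in (16, 32):
    g = green_column(R, (0, 0))
    diag[R] = (g[(0, 0)], g[(1, 0)])
    # increment at the pole, and the constant G(v,v) - log(R)/(2 pi)
    print(R, g[(0, 0)] - g[(1, 0)], g[(0, 0)] - log(R) / (2 * pi))
```

Doubling $R$ should increase $G_{\Omega_R}(v,v)$ by close to $\frac{\log 2}{2\pi}$, confirming the $\frac{1}{2\pi}\log R$ term.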
It follows that the event $\{S_R>T_v\}$, that the walk hits $v$ before leaving $\Omega_R$ (here $T_v$ denotes the hitting time of $v$ and $S_R$ the exit time of $\Omega_R$), has probability $$ \frac{2\pi\, g_\Omega(x,v)}{\log R}+ o((\log R)^{-1}), $$ obtained by summing the last display over the four directions $e$ (the gradient terms cancel, since $\sum_e e=0$), and the conditional probability to enter $v$ through the move $v+e\to v$ is $$ \frac{1}{4}+\frac{\log R}{2\pi R}\,\frac{\nabla_2g_\Omega(x,v)\cdot e}{g_\Omega(x,v)}+o\left(\frac{\log R}{R}\right). $$
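To see the effect numerically, here is one more sketch of mine under the same assumptions (NumPy, lattice disk of radius $R$), with $v$ at the centre and $x$ at rescaled position $(\frac12,0)$; for the unit disk the continuum Green's function with pole at the origin is $g(\hat x,0)=-\frac{1}{2\pi}\log|\hat x|$, with $\nabla_2 g(\hat x,0)=\frac{1}{2\pi}(\hat x/|\hat x|^2-\hat x)$:

```python
import numpy as np
from math import log, pi

R = 32
v, x = (0, 0), (R // 2, 0)   # v at the centre, x at rescaled position (1/2, 0)

# Green's function of the lattice disk minus {v}: a single linear solve gives
# the whole column G_{Omega_R \ {v}}(., x) = G_{Omega_R \ {v}}(x, .) (symmetry)
pts = [(i, j) for i in range(-R, R + 1) for j in range(-R, R + 1)
       if i * i + j * j < R * R and (i, j) != v]
idx = {p: k for k, p in enumerate(pts)}
A = np.zeros((len(pts), len(pts)))
for (i, j), k in idx.items():
    A[k, k] = 4.0
    for q in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
        if q in idx:
            A[k, idx[q]] = -1.0
rhs = np.zeros(len(pts))
rhs[idx[x]] = 1.0
col = np.linalg.solve(A, rhs)

dirs = ((1, 0), (-1, 0), (0, 1), (0, -1))
p = {e: col[idx[(e[0], e[1])]] for e in dirs}   # P_x(X_tau = v, X_{tau-1} = v+e)
hit = sum(p.values())                           # P_x(hit v before leaving)
cond = {e: p[e] / hit for e in dirs}            # entering-direction distribution

# first-order prediction on the unit disk, at xh = 1/2 on the axis
xh = 0.5
g = -log(xh) / (2 * pi)
grad1 = (xh / xh**2 - xh) / (2 * pi)            # first component of grad_2 g
pred = 0.25 + (log(R) / (2 * pi * R)) * grad1 / g
print(hit, cond[(1, 0)], pred)
```

At such a modest $R$ the agreement with the first-order term is only qualitative, since the $o(1)$ constants above are comparable to $\frac{1}{\log R}$; but the walk is visibly more likely to enter $v$ from the side facing $x$, and the two perpendicular directions stay at $\frac14$ by symmetry.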