Proving monotonicity of this ratio of Hypergeometric functions
Let $n\in\Bbb N$, $\omega=0,1,\dots,n$, and $\nu,z>0$. We define $$ \tilde g_{n,\omega}(z,\nu):=\frac{z^{n-\omega}\partial_z^n z^\omega {_1F_0}(1;-;z)_\nu}{{_1F_0}(1;-;z)_\nu}, $$ where $$ {_1F_0}(1;-;z)_\nu:=\frac{1-z^{\nu+1}}{1-z}=(\nu+1)z^{\nu+1}\mathbf F(1,\nu+2;2;1-z) $$ is a continuous interpolation of the truncated geometric series, and $\mathbf F(a,b;c;z)=F(a,b;c;z)/\Gamma(c)$ denotes the regularized Gauss hypergeometric function.
Conjecture: Under the specified restrictions on the parameters, $|\tilde g_{n,\omega}(z,\nu)|$ is nondecreasing in $z$ and is strictly increasing in $z$ when $n-\omega-\nu\notin\Bbb N$.
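For what it is worth, the conjecture is easy to probe numerically. The following mpmath sketch (the parameter values and grid are arbitrary test choices of mine, not part of the problem) checks the interpolation identity above and then evaluates $\tilde g$ directly from the definition by numerical differentiation, reporting whether $|\tilde g|$ increases along a grid in $z$:

```python
# Numerical probe of the conjecture (illustration only, not a proof).
# The parameters and grid below are arbitrary test choices.
from mpmath import mp, mpf, diff, hyp2f1, gamma

mp.dps = 30  # working precision

def S(z, nu):
    """The continuous truncation 1F0(1;-;z)_nu = (1 - z^(nu+1))/(1 - z)."""
    return (1 - z**(nu + 1)) / (1 - z)

def g_tilde(n, omega, z, nu):
    """z^(n-omega) d^n/dz^n [z^omega S(z,nu)] / S(z,nu), straight from the definition."""
    return z**(n - omega) * diff(lambda t: t**omega * S(t, nu), z, n) / S(z, nu)

nu, z = mpf('1.7'), mpf('0.3')
# interpolation identity: S(z,nu) = (nu+1) z^(nu+1) F(1,nu+2;2;1-z)/Gamma(2)
assert abs(S(z, nu) - (nu + 1)*z**(nu + 1)*hyp2f1(1, nu + 2, 2, 1 - z)/gamma(2)) < mpf('1e-25')

n, omega = 4, 2                                   # n - omega - nu is not a natural number
zs = [mpf(k)/4 for k in range(1, 17) if k != 4]   # grid in z > 0, skipping z = 1
vals = [abs(g_tilde(n, omega, zz, nu)) for zz in zs]
print("strictly increasing on the sampled grid:", all(a < b for a, b in zip(vals, vals[1:])))
```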
I had been seeking to prove this conjecture but was unsuccessful on my own and turned to the math SE community for help in finishing the proof; see below for what I tried as a potential path forward.
I now believe the following proof settles the conjecture in the affirmative. Ways to simplify the proof are welcome.
Theorem
Let $$ {_1F_0}(1;-;z)_\nu:=\frac{1-z^{\nu+1}}{1-z}=(\nu+1)z^{\nu+1}\mathbf F(1,\nu+2;2;1-z), $$ where $\mathbf F(a,b;c;z)=F(a,b;c;z)/\Gamma(c)$ is the regularized Gauss hypergeometric function. Then, for $n\in\Bbb N$, $\omega=0,1,\dots,n$, $z>0$, and $\nu>0$, the function $$ \tilde g_{n,\omega}(z,\nu):=\frac{z^{n-\omega}\partial_z^n z^\omega {_1F_0}(1;-;z)_\nu}{{_1F_0}(1;-;z)_\nu} $$ is such that $|\tilde g_{n,\omega}(z,\nu)|$ is nondecreasing in $z$ when $n-\omega-\nu\in\Bbb N$ and strictly increasing in $z$ otherwise.
Proof
Using differentiation formulas for the hypergeometric function, $\tilde g$ can be expressed in closed form as $$ \tilde g_{n,\omega}(z,\nu)=\frac{n!\,(\omega+\nu+1)^{(n+1)}}{\nu+1}\frac{\mathbf F(1,\omega+\nu+2;n+2;1-z)}{\mathbf F(1,\nu+2;2;1-z)}, $$ where $(a)^{(n)}=\Gamma(a+1)/\Gamma(a-n+1)$ is the falling factorial. It is trivial to show that $\tilde g_{n,\omega}(z,\nu)$ is nondecreasing in the special case $n-\omega-\nu\in\Bbb N$, since $(\omega+\nu+1)^{(n+1)}=0$ under such conditions and $\tilde g_{n,\omega}$ therefore vanishes identically.
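As a quick cross-check of this closed form (an mpmath sketch with arbitrary test parameters of my own choosing; it is not part of the proof), one can compare it against direct numerical differentiation of the definition:

```python
# Cross-check the closed form of g~ against direct differentiation of the
# definition (arbitrary test parameters; illustration only).
from mpmath import mp, mpf, diff, hyp2f1, gamma, ff, factorial

mp.dps = 30

def S(z, nu):
    return (1 - z**(nu + 1)) / (1 - z)

def g_direct(n, omega, z, nu):
    return z**(n - omega) * diff(lambda t: t**omega * S(t, nu), z, n) / S(z, nu)

def regF(a, b, c, x):
    """Regularized Gauss hypergeometric function F(a,b;c;x)/Gamma(c)."""
    return hyp2f1(a, b, c, x) / gamma(c)

def g_closed(n, omega, z, nu):
    # ff(a, k) is the falling factorial a^((k)) = Gamma(a+1)/Gamma(a-k+1)
    pref = factorial(n) * ff(omega + nu + 1, n + 1) / (nu + 1)
    return pref * regF(1, omega + nu + 2, n + 2, 1 - z) / regF(1, nu + 2, 2, 1 - z)

n, omega, nu, z = 3, 1, mpf('0.6'), mpf('2.5')
print(g_direct(n, omega, z, nu))
print(g_closed(n, omega, z, nu))   # the two values should agree to working precision
```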
Now assume $n-\omega-\nu\notin\Bbb N$, so that the leading constant factor is nonzero. Furthermore, by way of the integral representation $$ \mathbf F(a,b;c;z)=\int_0^1\frac{t^{b-1}(1-t)^{c-b-1}(1-zt)^{-a}}{\Gamma(b)\Gamma(c-b)}\,\mathrm dt,\quad |\operatorname{ph}(1-z)|<\pi,\ \Re c>\Re b>0, $$ each hypergeometric term can be shown to be strictly positive over the entire parameter space, so that $$ |\tilde g_{n,\omega}(z,\nu)|=\left\lvert\frac{n!\,(\omega+\nu+1)^{(n+1)}}{\nu+1}\right\rvert\frac{\mathbf F(1,\omega+\nu+2;n+2;1-z)}{\mathbf F(1,\nu+2;2;1-z)}. $$

For a strictly positive function $f(z)$, $\operatorname{sgn}(\partial_z\log f)=\operatorname{sgn}(\partial_z f)$; thus, we begin by evaluating the logarithmic derivative of $|\tilde g_{n,\omega}|$, yielding $$ \partial_z\log |\tilde g_{n,\omega}(z,\nu)|=\Psi_{0,0}(z,\nu)-\Psi_{n,\omega}(z,\nu), $$ where $$ \Psi_{n,\omega}(z,\nu)=(\omega+\nu+2) \frac{\mathbf F(2,\omega+\nu+3;n+3;1-z)}{\mathbf F(1,\omega+\nu+2;n+2;1-z)}. $$ For brevity, let $\beta=n+1$ and $\gamma=\omega+\nu+2$; then $$ \begin{aligned} \Psi_{n,\omega}(z,\nu) &=\gamma\frac{\mathbf F(2,\gamma+1;\beta+2;1-z)}{\mathbf F(1,\gamma;\beta+1;1-z)}\\ &=\frac{\gamma}{z}\int_0^1\frac{zx}{1+(z-1)x}\frac{(1-x)^{\beta-1}(1+(z-1)x)^{-\gamma}}{\operatorname B(1,\beta)F(1,\gamma;\beta+1;-(z-1))}\,\mathrm dx\\ &=\frac{\gamma}{z}\,\mathsf E\left(\frac{zX}{1+(z-1)X}\right), \end{aligned} $$ where $X\sim\operatorname{GH}(1,\beta,\gamma,z-1)$ is distributed according to the Gauss hypergeometric distribution $$ f(x;\alpha,\beta,\gamma,\xi)=\frac{x^{\alpha-1}(1-x)^{\beta-1}(1+\xi x)^{-\gamma}}{\operatorname B(\alpha,\beta)F(\alpha,\gamma;\alpha+\beta;-\xi)},\quad 0<x<1, $$ defined for $\alpha,\beta>0$, $\gamma\in\Bbb R$, and $\xi>-1$. For mathematical convenience, define $W=zX(1+(z-1)X)^{-1}$, so that $\Psi_{n,\omega}(z,\nu)=(\gamma/z)\,\mathsf E W$. Since $W$ is a monotone increasing transformation of $X$ for all $z>0$, we have $$ F_W(w)=\mathsf P(zX(1+(z-1)X)^{-1}\leq w)= \mathsf P(X\leq (z(w^{-1}-1)+1)^{-1}), $$ which, after a significant amount of work, yields $$ F_W(w)=1-\frac{\operatorname B_{(1-z)(1-w)}(\beta,\gamma-\beta)}{\operatorname B_{1-z}(\beta,\gamma-\beta)},\quad 0<w<1, $$ where $\operatorname B_s(\alpha,\beta)$ is the incomplete beta function. Differentiating the derived CDF with respect to $w$ then provides the density in the form $$ f_W(w)=\frac{(1-w)^{\beta-1}(1-(1-z)(1-w))^{\gamma-\beta-1}}{(1-z)^{-\beta}\operatorname B_{1-z}(\beta,\gamma-\beta)}. $$

Now let $\delta>0$ and consider the family of density functions $f_\beta=\{f_W(w\mid\beta):\beta\geq 1\}$. For the likelihood ratio we have $$ \frac{f_{\beta+\delta}}{f_\beta}(w)=\frac{(1-z)^\delta\operatorname B_{1-z}(\beta,\gamma-\beta)}{\operatorname B_{1-z}(\beta+\delta,\gamma-\beta-\delta)}\left(\frac{1-w}{1-(1-w)(1-z)}\right)^\delta=: C\,(h(w,z))^\delta, $$ where the constant $C$ is easily shown to be strictly positive. Differentiating the likelihood ratio with respect to $w$ yields $$ \partial_w\frac{f_{\beta+\delta}}{f_\beta}(w)=-\delta C\frac{(1-w)^{\delta-1}}{(1-(1-w)(1-z))^{\delta+1}}, $$ which is strictly negative for $w\in(0,1)$; thus, the family of densities $f_\beta$ admits a strictly decreasing monotone likelihood ratio.
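As an aside, the chain of manipulations above is easy to spot-check numerically. The following mpmath sketch (arbitrary test parameters; normalizations are computed by quadrature rather than in closed form) compares $\Psi_{n,\omega}$ computed from the hypergeometric ratio, from the expectation over the Gauss hypergeometric density of $X$, and from the derived density $f_W$:

```python
# Spot-check of the probabilistic representation derived above: compare
# Psi_{n,omega} computed (i) from the regularized-2F1 ratio, (ii) as
# (gamma/z) E[zX/(1+(z-1)X)] with X ~ GH(1,beta,gamma,z-1), and (iii) as
# (gamma/z) E[W] using the derived density shape of W, normalized by quadrature.
# Arbitrary test parameters; illustration only.
from mpmath import mp, mpf, hyp2f1, gamma as Gamma, quad

mp.dps = 30

def regF(a, b, c, x):
    return hyp2f1(a, b, c, x) / Gamma(c)

n, omega, nu, z = 3, 1, mpf('1.3'), mpf('0.4')
beta, gam = n + 1, omega + nu + 2

# (i) definition via the hypergeometric ratio
psi_hyp = gam * regF(2, gam + 1, beta + 2, 1 - z) / regF(1, gam, beta + 1, 1 - z)

# (ii) expectation over the (unnormalized) GH(1, beta, gam, z-1) density of X
gh = lambda x: (1 - x)**(beta - 1) * (1 + (z - 1)*x)**(-gam)
EW_X = quad(lambda x: z*x/(1 + (z - 1)*x) * gh(x), [0, 1]) / quad(gh, [0, 1])

# (iii) expectation over the (unnormalized) derived density of W
fW = lambda w: (1 - w)**(beta - 1) * (1 - (1 - z)*(1 - w))**(gam - beta - 1)
EW_W = quad(lambda w: w * fW(w), [0, 1]) / quad(fW, [0, 1])

print(psi_hyp, gam/z*EW_X, gam/z*EW_W)   # all three should agree to working precision
```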
It follows that $$ \partial_w\frac{f_{\beta+\delta}}{f_\beta}(w)<0\implies F_{\beta}(w)<F_{\beta+\delta}(w)\ \forall w\in(0,1)\implies \mathsf E W_\beta>\mathsf E W_{\beta+\delta}, $$ which, upon recalling $\beta=n+1$, shows $$ \partial_n\mathsf E W<0\implies \operatorname{sgn}(\partial_n\Psi_{n,\omega})=\operatorname{sgn}(\gamma/z\,\partial_n\mathsf E W)=-\operatorname{sgn}(\gamma). $$ With this result, suppose we impose the constraint $\nu>-2\implies\gamma>0$. It follows that $\Psi_{n,\omega}(z,\nu)>0$ and $\partial_n\Psi_{n,\omega}(z,\nu)<0$; hence, for $\nu>-2$ we have proven $$ \Psi_{n,\omega}(z,\nu)<\Psi_{\omega,\omega}(z,\nu). $$

Next, we consider the behavior of $\Psi_{\omega,\omega}(z,\nu)$ in $\omega$. To aid in the following calculations, we introduce the operators $$ \begin{aligned} \mathcal A_1^k F(a_1,a_2;a_3;s) &=F(a_1+k,a_2;a_3;s),\\ \mathcal A_2^k F(a_1,a_2;a_3;s) &=F(a_1,a_2+k;a_3;s),\\ \mathcal A_3^k F(a_1,a_2;a_3;s) &=F(a_1,a_2;a_3+k;s), \end{aligned} $$ for which $\mathcal A_i^0=\mathcal I$ is the identity. In terms of these operators we then have $$ \Psi_{\omega,\omega}(z,\nu) =\frac{\frac{\omega+\nu+2}{\omega+2}\mathcal A_1\mathcal A_2\mathcal A_3F(1,\omega+\nu+2;\omega+2;1-z)}{F(1,\omega+\nu+2;\omega+2;1-z)}, $$ which, upon application of the contiguous-relation identities (Eqs. $10$ and $13$) \begin{gather*} \mathcal A_1=\mathcal I+\frac{a_2}{a_3}s\mathcal A_1\mathcal A_2\mathcal A_3,\\ \mathcal A_1^{-1}=\frac{a_1(s-1)}{a_1-a_3}\mathcal A_1+\frac{2a_1+(a_2-a_1)s-a_3}{a_1-a_3}\mathcal I, \end{gather*} permits us to write $$ \Psi_{\omega,\omega}(z,\nu) =\frac{1}{1-z}\left(\frac{\omega+1}{z F(1,\omega+\nu+2;\omega+2;1-z)}+\frac{1}{z}\left((\omega+\nu+1)(1-z)-\omega\right)-1\right). $$ Introducing the relations Eq. $07.23.03.0122.01$, $$ F(1,\beta;\gamma;s)=(\gamma-1)s^{1-\gamma}(1-s)^{-(\beta-\gamma+1)}\operatorname B_s(\gamma-1,\beta-\gamma+1), $$ and Eq. $06.19.20.0003.01$, $$ \partial_\alpha\operatorname B_s(\alpha,\beta)=\operatorname B_s(\alpha,\beta)\log s-\frac{s^\alpha}{\alpha^2}{_3F_2}(1-\beta,\alpha,\alpha;\alpha+1,\alpha+1;s), $$ we are able to derive, after a significant amount of work, $$ \partial_\omega\Psi_{\omega,\omega}(z,\nu)=\frac{1}{1-z}\left(\frac{{_3F_2}(-\nu,\omega+1,\omega+1;\omega+2,\omega+2;1-z)}{z^{\nu+2}F(1,\omega+\nu+2;\omega+2;1-z)^2}-1\right). $$ Using the integral representation of the generalized hypergeometric function in Eq. $16.5.2$, this result is equivalent to the expected value $$ \partial_\omega\Psi_{\omega,\omega}(z,\nu)=\frac{1}{1-z}\mathsf E\left(\frac{h(X)}{h(1)}-1\right), $$ where $$ h(X)=(1-(1-z)X)F(1,\omega+\nu+2;\omega+2;(1-z)X) $$ and $X\sim\operatorname{GH}(\omega+1,1,-\nu,z-1)$. Upon evaluating the derivative $$ \partial_X h(X)=(1-z)\frac{\nu}{\omega+2}F(2,\omega+\nu+2;\omega+3;(1-z)X), $$ and noting that $0<h(X)<\infty$ for all $X\in(0,1)$, we observe that if $\nu>0$ then $$ \begin{aligned} \partial_X h(X) \begin{cases} >0, &0<z<1\\ <0, &z>1 \end{cases} &\implies h(X)/h(1) \begin{cases} <1, &0<z<1\\ >1, &z>1 \end{cases}\\ &\implies\frac{(h(X)/h(1)-1)}{1-z}<0,\quad \forall z\in\Bbb R^+\setminus\{1\}\\ &\implies\partial_\omega\Psi_{\omega,\omega}(z,\nu)<0,\quad \forall z\in\Bbb R^+\setminus\{1\}. \end{aligned} $$ Similarly, at the boundary point $z=1$ we find $$ \Psi_{\omega,\omega}(1,\nu)=1+\frac{\nu}{\omega+2}\implies \partial_\omega \Psi_{\omega,\omega}(1,\nu)=-\frac{\nu}{(\omega+2)^2}<0. $$
Consequently, if $\nu>0$, then $\partial_\omega\Psi_{\omega,\omega}(z,\nu)<0$ and $$ \begin{aligned} \Psi_{n,\omega}(z,\nu)<\Psi_{\omega,\omega}(z,\nu)<\Psi_{0,0}(z,\nu) &\implies \partial_z\log |\tilde g_{n,\omega}(z,\nu)|>0\\ &\implies \partial_z|\tilde g_{n,\omega}(z,\nu)|>0, \end{aligned} $$ which completes the proof. $\quad\quad\square$
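As a final numerical sanity check (again an mpmath sketch with arbitrary test values, not part of the proof), the intermediate closed form for $\Psi_{\omega,\omega}$ can be compared against its definition, and the claimed decrease in $\omega$ probed with a forward difference:

```python
# Spot-check the closed form of Psi_{omega,omega} and its decrease in omega.
# A forward difference stands in for the omega-derivative; arbitrary test values.
from mpmath import mp, mpf, hyp2f1, gamma as Gamma, nstr

mp.dps = 30

def regF(a, b, c, x):
    return hyp2f1(a, b, c, x) / Gamma(c)

def Psi_def(omega, z, nu):
    """Psi_{omega,omega} from its definition as a ratio of regularized 2F1's."""
    return (omega + nu + 2) * regF(2, omega + nu + 3, omega + 3, 1 - z) \
                            / regF(1, omega + nu + 2, omega + 2, 1 - z)

def Psi_closed(omega, z, nu):
    """The closed form obtained from the contiguous relations."""
    F = hyp2f1(1, omega + nu + 2, omega + 2, 1 - z)
    return ((omega + 1)/(z*F) + ((omega + nu + 1)*(1 - z) - omega)/z - 1) / (1 - z)

z, nu, h = mpf('2.5'), mpf('0.8'), mpf('1e-8')
for omega in range(4):
    a, b = Psi_def(omega, z, nu), Psi_closed(omega, z, nu)
    slope = (Psi_def(omega + h, z, nu) - a) / h   # forward difference in omega
    print(omega, nstr(a, 12), nstr(b, 12), nstr(slope, 6))
# The first two columns should agree, and the slope should be negative if the
# claim that Psi_{omega,omega} decreases in omega holds.
```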