Minimal sufficient statistic for $\operatorname{Uniform}(-\theta,\theta)$
Solution 1:
If $\max\{-X_{(1)}, X_{(n)}\}$ is sufficient, then $(X_{(1)},X_{(n)})$ is automatically sufficient as well, since the maximum can be computed from the pair. But the pair cannot then be minimal sufficient: a minimal sufficient statistic must be a function of every sufficient statistic, and the pair is not a function of the maximum, because knowing the maximum does not determine the pair.
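For a concrete illustration (sample values chosen arbitrarily): with $n=2$, the samples $x=(-2,1)$ and $y=(-1,2)$ satisfy $$\max\{-x_{(1)},x_{(2)}\} = \max\{-y_{(1)},y_{(2)}\} = 2,$$ yet $(x_{(1)},x_{(2)})=(-2,1)\ne(-1,2)=(y_{(1)},y_{(2)})$; the pair cannot be recovered from the maximum.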
Being sufficient does not mean the statistic gives enough information to reconstruct the data; rather, it means the statistic gives all the information in the data that is relevant to inference about $\theta$, given that the proposed model is right. The model here is that the observations come from a uniform distribution on an interval symmetric about $0.$ But the data may also contain information calling that model into question, and the sufficient statistic does not retain that information.
By definition, sufficiency of the maximum means that the conditional distribution of the data given the maximum does not depend on $\theta.$
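If you want to see this numerically, here is a minimal simulation sketch (assuming NumPy and SciPy; the sample size $n$, the replication count, and the two $\theta$ values are arbitrary choices). It uses the fact that $\max\{-X_{(1)},X_{(n)}\}=\max_i|X_i|$ and that, given this maximum $M$, the remaining points are conditionally i.i.d. $\operatorname{Uniform}(-M,M)$, so the scaled interior points $X_i/M$ should look $\operatorname{Uniform}(-1,1)$ regardless of $\theta$:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def scaled_interior(theta, n=5, reps=20000):
    """For each of `reps` samples of size n from Uniform(-theta, theta),
    scale every observation by M = max(-X_(1), X_(n)) = max_i |X_i|
    and pool the points that do not achieve M."""
    x = rng.uniform(-theta, theta, size=(reps, n))
    m = np.abs(x).max(axis=1)      # M for each sample
    r = x / m[:, None]             # scaled sample; the achiever maps to +/-1
    return r[np.abs(r) < 1.0]      # keep only the interior points

a = scaled_interior(theta=1.0)
b = scaled_interior(theta=5.0)

# If the conditional distribution given M is free of theta, the pooled
# scaled interior points should match in distribution across thetas
# (in fact Uniform(-1, 1)); the KS statistic should be near zero.
print(ks_2samp(a, b))
```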
You are trying to show that $\dfrac{\mathbb{1}_{[\max\{-X_{(1)},X_{(n)}\}<\theta]}}{\mathbb{1}_{[\max\{-Y_{(1)},Y_{(n)}\}<\theta]}}$ does not depend on $\theta$ when the two maxima are equal. I think in cases like this, where $0/0$ can appear, one should phrase the result as saying that if the two maxima are equal then there is some number $c\ne0$ such that $$ \mathbb{1}_{[\max\{-X_{(1)},X_{(n)}\}<\theta]} = c\,\mathbb{1}_{[\max\{-Y_{(1)},Y_{(n)}\}<\theta]} $$ and that this equality continues to hold as $\theta$ ranges over the parameter space $(0,\infty).$
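A quick numerical sanity check of this phrasing (a sketch assuming NumPy; the sample values below are made up for illustration):

```python
import numpy as np

def ind(theta, x):
    """Indicator 1[ max(-x_(1), x_(n)) < theta ]."""
    return float(max(-min(x), max(x)) < theta)

x = [-2.0, 0.5, 1.0]    # max(-x_(1), x_(n)) = 2
y = [-1.0, 0.3, 2.0]    # same maximum, different pair (x_(1), x_(n))
z = [-3.0, 0.0, 1.0]    # different maximum: 3

thetas = np.linspace(0.1, 10.0, 1000)

# Equal maxima: the indicators agree (c = 1) at every theta, including
# theta <= 2, where both are zero and a ratio would read 0/0.
assert all(ind(t, x) == ind(t, y) for t in thetas)

# Unequal maxima: no single c != 0 works for all theta; e.g. at
# theta = 2.5 one indicator is 1 while the other is 0.
assert ind(2.5, x) != ind(2.5, z)
```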
Solution 2:
The forward direction of your biconditional is true by definition of sufficiency; the non-trivial direction is the reverse. So fix $x$ and $y$ and suppose $${\cal L}(x,\theta)=k\,{\cal L}(y,\theta)\tag1$$ where $k$ is a constant independent of $\theta$ (this formulation addresses your concern about $0/0$; note that $k$ may depend on $x$ and $y$). You can regard (1) as a statement about two functions of $\theta$, asserting that one is a constant multiple of the other.
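For reference, the criterion being applied is the Lehmann and Scheffé likelihood-ratio criterion (found in standard texts such as Casella and Berger's Statistical Inference), stated here in the $0/0$-safe form: $T$ is minimal sufficient provided that, for every pair of samples $x$ and $y$, $$ {\cal L}(x,\theta)=k\,{\cal L}(y,\theta)\ \text{ for all } \theta \text{ and some constant } k=k(x,y) \quad\Longleftrightarrow\quad T(x)=T(y). $$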
As the linked post noted, $${\cal L}(x,\theta) = \frac1{(2\theta)^n}\,I\big(\theta>\max\{-x_{(1)},x_{(n)}\}\big). $$ So, as a function of $\theta$, the LHS of (1) equals $1/(2\theta)^n$ when $\theta$ exceeds the threshold $\max\{-x_{(1)},x_{(n)}\}$ and is zero otherwise, while the RHS equals $k/(2\theta)^n$ when $\theta$ exceeds the threshold $\max\{-y_{(1)},y_{(n)}\}$ and is zero otherwise. The only way (1) can hold for all $\theta$ is if the two thresholds are identical: if they differed, any $\theta$ strictly between them would make exactly one side of (1) vanish. (We also deduce $k=1$ for free.) This proves that $\max\{-x_{(1)},x_{(n)}\}=\max\{-y_{(1)},y_{(n)}\}$, and therefore $T(X):=\max\{-X_{(1)},X_{(n)}\}$ is minimal sufficient.
On the other hand, it does not follow that $(x_{(1)},x_{(n)})=(y_{(1)},y_{(n)})$, since (1) can hold when $y=-x$: the two samples then share the same maximum but, in general, have different pairs.
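To make that concrete, here is a small sketch (assuming NumPy; the sample values and the $\theta$ grid are illustrative) showing that $y=-x$ produces the very same likelihood function as $x$, so (1) holds with $k=1$ even though the order-statistic pairs differ:

```python
import numpy as np

def likelihood(theta, x):
    """L(x, theta) = (2 theta)^(-n) * 1[ theta > max(-x_(1), x_(n)) ]."""
    x = np.asarray(x)
    m = max(-x.min(), x.max())
    return np.where(theta > m, (2.0 * theta) ** (-len(x)), 0.0)

x = np.array([-0.4, 1.3, 2.0])   # (x_(1), x_(n)) = (-0.4, 2.0), M = 2
y = -x                           # (y_(1), y_(n)) = (-2.0, 0.4), same M = 2

thetas = np.linspace(0.5, 10.0, 500)

# Identical likelihood functions: (1) holds with k = 1, yet the pairs
# (x_(1), x_(n)) and (y_(1), y_(n)) differ; the pair is sufficient
# but not minimal.
assert np.allclose(likelihood(thetas, x), likelihood(thetas, y))
```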