Tough integrals that can be easily beaten by using simple techniques

This question is just idle curiosity. Today I found that an integral problem can be evaluated easily using simple techniques, as in my answer to

\begin{equation} \int_0^{\pi/2}\frac{\cos{x}}{2-\sin{2x}}dx \end{equation}

I was also shocked (and impressed) by user @Tunk-Fey's answer and user @David H's answer, where they use simple techniques to beat the following tough integrals hands down:

\begin{equation} \int_0^\infty\frac{x-1}{\sqrt{2^x-1}\ \ln\left(2^x-1\right)}\ dx \end{equation}

and

\begin{equation} \int_{0}^{\infty}\frac{\ln x}{\sqrt{x}\,\sqrt{x+1}\,\sqrt{2x+1}}\ dx \end{equation}

So I'm wondering about super tough definite integrals that can easily be beaten using simple techniques in only a few lines. Can anyone provide such an integral together with its evaluation?

To avoid too many possible answers and to narrow the answer set, I'm interested in tough integrals that can easily be beaten using only clever substitutions, simple algebraic manipulations, trigonometric identities, or the following property:

\begin{equation} \int_b^af(x)\ dx=\int_b^af(a+b-x)\ dx \end{equation}
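(As an aside, here is a quick numerical illustration of my own, not part of the question itself: this very property cracks the first integral above. Reflecting $x\mapsto\frac{\pi}{2}-x$ turns $\cos x$ into $\sin x$ while leaving $2-\sin 2x=1+(\sin x-\cos x)^2$ fixed; averaging the two copies and substituting $u=\sin x-\cos x$ gives $\frac12\int_{-1}^{1}\frac{du}{1+u^2}=\frac{\pi}{4}$. The sketch below confirms the value with a composite Simpson rule.)

```python
# Illustrative check (my own code, not from the original post): the
# reflection property predicts that the integral equals pi/4.
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

val = simpson(lambda x: math.cos(x) / (2 - math.sin(2 * x)), 0, math.pi / 2)
print(val, math.pi / 4)   # both ~0.785398...
```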

I'd request avoiding contour/residue integration, special functions (except the gamma, beta, and Riemann zeta functions), and complicated theorems. I'd also request avoiding standard integrals like

\begin{align} \int_{-1}^1\frac{\cos x}{1+e^{1/x}}\ dx&=\sin 1\tag1\\[10pt] \int_0^\infty\frac{\log ax}{b^2+c^2x^2}\ dx&=\frac{\pi\log\left(\!\frac{ab}{c}\!\right)}{2bc}\tag2\\[10pt] \int_0^1\frac{1}{(ax+b(1-x))^2}\ dx&=\frac{1}{ab}\tag3\\[10pt] \int_0^{\pi/2}\frac{\sin^kx}{\sin^kx+\cos^kx}\ dx&=\frac{\pi}{4}\tag4\\[10pt] \int_0^\infty\frac{e^{-ax}-e^{-bx}}{x}\ dx&=\log\left(\!\frac{b}{a}\!\right)\tag5 \end{align}
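(For what it's worth, a throwaway numerical spot-check of (3) and (4); the parameter values $a=2$, $b=5$, $k=3$ are arbitrary choices of mine, not from the question:)

```python
# Spot-check of standard integrals (3) and (4) with arbitrary parameters.
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a, b, k = 2.0, 5.0, 3.0
# (3): integral over [0, 1] should equal 1/(a*b).
i3 = simpson(lambda x: 1 / (a * x + b * (1 - x)) ** 2, 0, 1)
# (4): integral over [0, pi/2] should equal pi/4 for any k.
i4 = simpson(lambda x: math.sin(x) ** k / (math.sin(x) ** k + math.cos(x) ** k),
             0, math.pi / 2)
print(i3, 1 / (a * b))   # ~0.1 and 0.1
print(i4, math.pi / 4)   # both ~0.785398...
```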


My favourite example of this is @SangchulLee's solution to @VladimirReshetnikov's question, which asks to verify the correctness of the identity $$\int_0^{\infty} \frac{dx}{\sqrt[4]{7 + \cosh x}}= \frac{\sqrt[4]{6}}{3\sqrt{\pi}} \Gamma\left(\frac14\right)^2 .$$

The other answers indicate the "toughness" of this integral, resorting to all sorts of heavy machinery: elliptic integrals, hypergeometric functions, or outright Mathematica.

However, the integral can be brilliantly shown to be a few substitutions away from the form of a beta function integral.

Lee makes a chain of simple substitutions, which he describes here, to obtain $$\int_0^{\infty} \frac{dx}{(a+ \cosh x)^s} = \frac1{(a+1)^s} \int_0^1 \frac{v^{s-1}}{\sqrt{(1-v)(1-\frac{a-1}{a+1} v)}} dv.$$
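(A quick numerical check of this reduction formula, with arbitrary test values $a=3$, $s=1$ of my own choosing. The $v$-integral has an integrable $1/\sqrt{1-v}$ singularity at $v=1$, so the code first substitutes $v=\sin^2\theta$, which makes the integrand smooth: the right side becomes $\frac{1}{(a+1)^s}\int_0^{\pi/2}\frac{2\sin^{2s-1}\theta}{\sqrt{1-c\sin^2\theta}}\,d\theta$ with $c=\frac{a-1}{a+1}$.)

```python
# Numerical check of the reduction formula (illustrative; a = 3, s = 1 are
# arbitrary test values, not tied to the original problem).
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a, s = 3.0, 1.0
c = (a - 1) / (a + 1)

# Left side: integrand decays like 2*exp(-s*x); tail beyond 40 is negligible.
lhs = simpson(lambda x: 1 / (a + math.cosh(x)) ** s, 0, 40)
# Right side after v = sin^2(theta), which removes both endpoint singularities.
rhs = simpson(lambda t: 2 * math.sin(t) ** (2 * s - 1)
              / math.sqrt(1 - c * math.sin(t) ** 2),
              0, math.pi / 2) / (a + 1) ** s
print(lhs, rhs)   # the two sides agree
```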

The closed form of this integral for general $a$ is certainly non-elementary, but our special case $a=7$ and $s=\frac14$ is different for a very neat reason:

When $a=7,$ $\frac{a-1}{a+1}$ is equal to $\frac34$, and since we have the triple angle formula $\displaystyle \, \cosh(3 x)=4\cosh^3 x-3 \cosh x,$ the integral can be rewritten (with $v=\operatorname{sech}^2 t$) as $$2^{5/4} \int_0^{\infty} \frac{\cosh t}{\sqrt{\cosh 3t}} dt$$ which can be easily brought to the form of a beta function.
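(Numerically, the original integral, the $\cosh 3t$ form, and the claimed closed form all agree, which a few lines of Python confirm; the truncation points 200 and 120 below are my own choices, picked so that the exponentially decaying tails are negligible:)

```python
# Illustrative numerical check of the a = 7, s = 1/4 chain (my own code).
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Original integral: integrand decays like 2^{1/4} exp(-x/4).
i_orig = simpson(lambda x: (7 + math.cosh(x)) ** -0.25, 0, 200)
# After the substitutions: integrand decays like exp(-t/2)/sqrt(2).
i_beta = 2 ** 1.25 * simpson(lambda t: math.cosh(t) / math.sqrt(math.cosh(3 * t)),
                             0, 120)
# Claimed closed form: 6^{1/4} Gamma(1/4)^2 / (3 sqrt(pi)).
closed = 6 ** 0.25 * math.gamma(0.25) ** 2 / (3 * math.sqrt(math.pi))
print(i_orig, i_beta, closed)   # all three agree
```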

(Note that we can find a similar closed form (with $a=7$) for $s=3/4$.)


Ramanujan's master theorem can be applied to a wide range of (sometimes extremely complicated) definite integrals, allowing them to be evaluated in less than a line of computation. The theorem states:

If $f(x)$ has a series expansion of the form:

$$f(x) = \sum_{k=0}^{\infty}(-1)^{k}\frac{\lambda(k)}{k!} x^k$$

then

$$\int_0^{\infty}x^{s-1}f(x) dx = \Gamma(s)\lambda(-s)$$

Here one has to use an appropriate analytic continuation of $\lambda(k)$. This is a generalization of an old theorem obtained by Glaisher stating that:

$$\int_0^{\infty}\sum_{k=0}^{\infty}(-1)^k a_k x^{2k}dx = \frac{\pi}{2}a_{-\frac{1}{2}}$$

where again an appropriate analytic continuation of the expansion coefficients is assumed.
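(Two one-line applications, checked numerically below; the code and parameter choices are my own illustration, not part of the answer. For Ramanujan's master theorem take $f(x)=e^{-ax}$, so $\lambda(k)=a^k$ and $\int_0^\infty x^{s-1}e^{-ax}\,dx=\Gamma(s)\,a^{-s}$. For Glaisher's theorem take $f(x)=e^{-x^2}=\sum_k (-1)^k x^{2k}/k!$, so $a_k=1/\Gamma(k+1)$, whose continuation gives $a_{-1/2}=1/\Gamma(\tfrac12)$ and hence $\int_0^\infty e^{-x^2}dx=\frac{\pi}{2}\cdot\frac{1}{\sqrt\pi}=\frac{\sqrt\pi}{2}$.)

```python
# Numerical checks of the two master-theorem examples (illustrative only;
# s = 2.5 and a = 3 are arbitrary test values).
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

s, a = 2.5, 3.0
# Ramanujan: Int_0^inf x^{s-1} e^{-a x} dx = Gamma(s) * a^{-s}.
rmt = simpson(lambda x: x ** (s - 1) * math.exp(-a * x), 0, 60)
# Glaisher: Int_0^inf e^{-x^2} dx = (pi/2) * a_{-1/2} = sqrt(pi)/2.
glaisher = simpson(lambda x: math.exp(-x * x), 0, 10)
print(rmt, math.gamma(s) * a ** -s)
print(glaisher, math.sqrt(math.pi) / 2)
```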

Example: From the series expansion of the Bessel function of the first kind $J_{\alpha}(x)$ it is very easy to compute that:

$$\int_0^{\infty}x^p J_{\alpha}(x)dx = 2^p\frac{\Gamma\left(\frac{\alpha + 1 +p}{2}\right)}{\Gamma\left(\frac{\alpha + 1 -p}{2}\right)}$$

for $-(\alpha + 1)<p<\frac{1}{2}$.
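(As an independent sanity check of my own, the formula can be tested at a half-integer order where the Bessel function is elementary: $J_{1/2}(x)=\sqrt{\frac{2}{\pi x}}\sin x$, so taking $\alpha=\frac12$, $p=-\frac12$ reduces the left side to $\sqrt{\frac{2}{\pi}}\int_0^\infty\frac{\sin x}{x}\,dx$, while the right side is $2^{-1/2}\,\Gamma(\tfrac12)/\Gamma(1)=\sqrt{\pi/2}$. The Dirichlet integral converges only conditionally, so the code below averages two consecutive partial integrals taken at the zeros $x=n\pi$, which cancels the leading oscillatory tail term.)

```python
# Illustrative check of the Bessel formula at alpha = 1/2, p = -1/2.
import math

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

sinc = lambda x: 1.0 if x == 0 else math.sin(x) / x

# Partial integrals up to 500*pi and 501*pi; their average kills the O(1/n)
# alternating tail, leaving an O(1/n^2) error.
s500 = simpson(sinc, 0, 500 * math.pi, 100000)
s501 = s500 + simpson(sinc, 500 * math.pi, 501 * math.pi, 200)
lhs = math.sqrt(2 / math.pi) * (s500 + s501) / 2

# Right side of the formula with alpha = 1/2, p = -1/2.
rhs = 2 ** -0.5 * math.gamma(0.5) / math.gamma(1.0)
print(lhs, rhs, math.sqrt(math.pi / 2))   # all ~1.2533
```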