EF LINQ to SQL, dividing by zero error, generated query puts parameters in wrong order
You appear to be labouring under the assumption that `AND` and `OR` in T-SQL will always short-circuit in the order specified in the query. This is absolutely not the case.

It is true that SQL Server will normally short-circuit a logical expression. After all, why do more work than necessary? But it may not do so in the order specified in the query. Logical operators are not specified to execute in any particular order, and the optimizer often chooses to switch them around based on things like estimates of short-circuiting likelihood or the amount of work involved in evaluation, as long as the operator precedence rules are followed (`AND` before `OR`, etc.).
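As a quick illustration of the precedence rule (column names here are just for the example, not taken from your query): `AND` binds more tightly than `OR`, so these two filters are equivalent in T-SQL:

```sql
-- AND binds more tightly than OR, so both filters mean the same thing:
WHERE StartYear = 2020 OR IntervalInYears <> 0 AND StartYear < 2020
WHERE StartYear = 2020 OR (IntervalInYears <> 0 AND StartYear < 2020)
```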
Because the space of all possible execution plans is too vast to evaluate, the optimizer uses aggressive pruning to remove options based on heuristics. These two predicates:
```sql
(
    StockRequests.[StartYear] = @stockYear OR
    (
        0 <> StockIntervals.[IntervalInYears] AND
        0 = (@stockYear - StockRequests.[StartYear]) % StockIntervals.[IntervalInYears]
    )
)
```

and

```sql
(
    StockRequests.[StartYear] = @stockYear OR
    (
        StockIntervals.[IntervalInYears] <> 0 AND
        0 = (@stockYear - StockRequests.[StartYear]) % StockIntervals.[IntervalInYears]
    )
)
```
are exactly the same as far as query intention is concerned. The question is what the optimizer will choose to do with them. In your case, it so happens that putting the comparators one way around causes certain optimizations to fall into place (or not), and therefore the `AND` can get flipped around.
As you can see from this fiddle, which is running on SQL Server 2019, both of your options short-circuited correctly, as did flipping around the `AND`. I had to flip the `OR` to get it to fail, and then the order of the `AND` did not matter. Note that the logic was not changed in any of the queries, and that the order of the `AND` or `=` comparators themselves does not force the hand of the optimizer; it just sometimes guides it down a certain path.
So it's very dependent on what the optimizer decides to do, and you cannot guarantee upfront that it will always do it correctly. Yes, you saw it do that a hundred times, but the hundred-and-first could change, perhaps because of statistics changes, an update to SQL Server, a change to the cardinality estimator version or the database compatibility level, or any of the many things that can cause a recompile.
The only guaranteed way to ensure short-circuiting in a particular order is to use `CASE` (or `NULLIF`, which compiles into a `CASE`). This is documented by Microsoft; it will work as long as you do not use any aggregate functions.
In other words, do not expect something like

```sql
CASE WHEN x > 0 THEN SUM(1 / x) END
```

to work, because the `SUM` is often evaluated at an earlier stage. It only works with scalar values. I would expect the same issue to apply to subqueries and window functions.
You can therefore work around your problem by using `NULLIF`:
```sql
(
    StockRequests.[StartYear] = @stockYear OR
    (
        StockIntervals.[IntervalInYears] <> 0 AND
        0 = (@stockYear - StockRequests.[StartYear]) % NULLIF(StockIntervals.[IntervalInYears], 0)
    )
)
```
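Since `NULLIF(a, 0)` is shorthand for `CASE WHEN a = 0 THEN NULL ELSE a END`, the same workaround can be written with an explicit `CASE` (a sketch against your predicate; when the interval is zero the modulo yields `NULL`, the comparison evaluates to unknown, and no division is attempted):

```sql
(
    StockRequests.[StartYear] = @stockYear OR
    0 = (@stockYear - StockRequests.[StartYear]) %
        CASE WHEN StockIntervals.[IntervalInYears] = 0 THEN NULL
             ELSE StockIntervals.[IntervalInYears]
        END
)
```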
In Entity Framework you can use something like `(value == 0 ? (int?)null : value)` (the cast to `int?` is needed so that both arms of the conditional have the same type):
```csharp
query.Where(x =>
    x.StartYear <= selectedYear &&
    (
        x.StartYear == selectedYear ||
        (x.StockInterval.IntervalInYears != 0
            // Parenthesize the subtraction: % binds more tightly than -
            && (selectedYear - x.StartYear)
               % (x.StockInterval.IntervalInYears == 0 ? (int?)null : x.StockInterval.IntervalInYears)
               == 0)
    )
);
```