Why do we prefer classical logic over non-classical logic?
In classical logic, we have puzzles such as the paradoxes of material implication. If a non-classical logic such as relevance logic fixes those problems, why do we still use classical logic?
Relevance logic enables us to avoid some prima facie issues with classical logic. But at a high price. For a start, in many systems of relevance logic we lose disjunctive syllogism, i.e. the rule that from $A \lor B$ and $\neg A$ you can infer $B$; yet intuitively that is an absolutely fundamental valid rule.
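To see why relevance logicians feel pushed to give it up, here is a sketch of C. I. Lewis's well-known derivation of explosion, which uses disjunctive syllogism together with two other classically innocuous rules:

$$
\begin{array}{r l l}
1. & A \land \neg A & \text{assumption} \\
2. & A & \text{from 1, } \land\text{-elimination} \\
3. & \neg A & \text{from 1, } \land\text{-elimination} \\
4. & A \lor B & \text{from 2, } \lor\text{-introduction} \\
5. & B & \text{from 3 and 4, disjunctive syllogism}
\end{array}
$$

So if we keep disjunctive syllogism alongside $\land$-elimination and $\lor$-introduction, a contradiction entails any $B$ whatsoever. A relevance logician who rejects explosion must therefore give up one of these rules, and disjunctive syllogism is the usual casualty (the details vary between systems).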
This is rather typical when choosing between formal logical systems. We start with a bunch of logical 'intuitions' we'd like a formal logic to conform to. Here's a selection: disjunctive syllogism is OK, conditional proof is OK, modus ponens is OK, entailment is unrestrictedly transitive, the indicative conditional isn't truth-functional, a contradiction doesn't imply every proposition, etc. etc. And then we find that we can't consistently satisfy all those desiderata together. Drat! What to do?
We have to look around for "best buys" that satisfy enough of the desiderata that we most care about meeting (or care about meeting in a particular context). You pay your money and you make your choice!
And experience shows that if we most care about modelling the reasoning of mathematicians doing standard textbook maths, for example, then classical logic is actually brilliant in all sorts of ways (it is a great fit AND has beautiful proof-systems AND has an elegant and intuitive semantics, etc.). Given its great positive virtues we then learn to live with the supposed failures of "relevance" (the conditional is material, a contradiction entails anything): these failures are, in the context, usually deemed a price worth paying. And that's why we (most of us) stick to classical logic (most of the time, for most purposes).
But there is no One True Logic written on tablets of stone. It's a question of costs and benefits: and your weighing of the costs and benefits can reasonably differ from the majority view.
The "paradoxes of material implication" are not paradoxes, in the sense of contradictions, they are just non-intuitive. And any "logic", classical or not, is a human construction, which some people prefer to use to think about the real world, but this is by no means necessary; just because a logic doesn't exactly correspond how you think reality behaves is no count against its being interesting to think about, or its being useful as an approximation to or model of reality.
Mathematics is all about modeling. When modeling, you are building a bridge, or rather a translation mechanism, between a problem in the real-world domain and some formal language, formal axioms, and a logical system that lets you symbolically manipulate the axioms and their consequences.
The model is not trying to be a faithful representation of reality. It is trying to be a good enough approximation of reality, while providing useful and powerful ways to deduce properties of the real-life problem by symbolically manipulating marks on a piece of paper according to the laws of the chosen logic.
The models that get used are precisely those that yield good enough approximations together with powerful enough proof techniques. Today, classical logic achieves good (if not excellent) results on both counts. Of course, things can change, and there are reasons to consider other logical systems; indeed, people are researching non-classical logical systems and their applications. But before you trade in a perfectly good horse for a slightly better one, you need to think carefully.
A very common view of mathematics is "mathematical platonism", which holds that mathematical objects exist in some sense and that mathematics is the study of these objects. The rules of classical logic are closely tied to this viewpoint.
The "paradoxical" formulas of material implication are verified when we interpret them as talking about truth values in a fixed model. For example, if we know that $p$ and $q$ are statements about a fixed model, we know that $p \to (q \lor \lnot q)$ will be true in the model, by reasoning by cases about the truth values of $p$ and $q$ in the model.
The completeness theorem for first-order logic says that a formula is provable in first-order logic if and only if it is true in every model. This statement has two parts:
1. If we can prove a formula in first-order logic, the formula is true in all models. To a platonist, this means that if we already have a fixed model in mind, and we prove a formula, we know the formula will be true in that model.

2. A formula that is true in every model is already provable in first-order logic, so we cannot extend first-order logic to a properly stronger logic (that is, one that proves more sentences) while simultaneously maintaining (1).
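In the standard notation, writing $\vdash \varphi$ for "$\varphi$ is provable in first-order logic" and $\models \varphi$ for "$\varphi$ is true in every model", the two parts are:

$$\vdash \varphi \;\Longrightarrow\; \models \varphi \qquad \text{(soundness: part 1)}$$

$$\models \varphi \;\Longrightarrow\; \vdash \varphi \qquad \text{(completeness: part 2)}$$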
The completeness theorem thus says that first-order logic is the strongest possible logic (in terms of proving the most sentences) whose results are sound when applied to an arbitrary model that we have already fixed. That is exactly the sort of logic that we would want, as mathematical platonists, in order to study a collection of pre-existing structures.
Relevance logic, for example, makes fewer implication formulas provable. That is of interest if we are trying to study the natural-language "implies" relationship, but it is less interesting if we are trying to study the field of real numbers and we want to generate as many formulas as possible that are true in that structure.
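For a concrete contrast: both of the following are classical tautologies, true in every model and hence classically provable, yet neither is a theorem of standard relevance logics such as $\mathbf{R}$:

$$p \to (q \to p) \qquad\qquad (p \land \lnot p) \to q$$

To someone studying the field of real numbers, these are just two more truths about that structure; to someone modelling the natural-language "implies", they are exactly the sort of irrelevant conditionals to be weeded out.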