Why is the continuum hypothesis believed to be false by the majority of modern set theorists?
I have written extensively on closely related matters. I suggest you take a look here. What I will say now is more or less there or in the links there (sometimes with additional details), but I hope you find the description here useful, even if it is somewhat of a caricature.
Naturally, there are set theorists who consider the question of the truth value of $\mathsf{CH}$ meaningless: set theory is the study of the consequences of the Zermelo–Fraenkel (+ Choice) axioms, and that's it. Part of what one studies is consistency results, and their associated consistency strength, but on this view truth only makes sense in the context of a proof, meaning a formal proof in first-order logic; such proofs only make sense within an axiomatic theory, and the theory that has been agreed upon (more or less universally) is $\mathsf{ZFC}$, which is not enough to settle the continuum hypothesis.
In fact, there may be a somewhat agnostic minority within this group that does not even consider the issue of the consistency of $\mathsf{ZFC}$ a settled matter. All we know is that either it is consistent, but its consistency is unprovable (at least within $\mathsf{ZFC}$ itself), or else it is inconsistent, and we have not yet uncovered its contradictions. (Within this group, most seem to consider that restricting replacement to $\Sigma_2$ formulas ought to be on the "safe" side. Others do not even agree on this. But this is another matter.)
(From a philosophical point of view, I consider this view unsatisfactory, and the equating of proof with first-order proof problematic, but let's not get into this. Note that mathematically there is nothing wrong with this view, in that it in no way restricts what kind of results one would consider valid, or what kind of structures one would study.)
Among set theorists who consider that set theory is not just the study of the consequences of $\mathsf{ZFC}$, there is a more or less unified consensus that one of the goals of work in the field is to identify natural extensions of the current list of axioms that will eventually be recognized as part of what we should consider the basic list. There is a program, essentially initiated by Gödel, that hopes to identify these extensions and thereby obtain a more complete description of the universe of sets. The main success of the program so far is the identification of the large cardinal hierarchy, which (I believe) most set theorists agree is part of the set-theoretic universe, even if a traditional adherence to $\mathsf{ZFC}$ still leads us to explicitly mention the use of large cardinals every time they are invoked.
We cannot yet fully explain the coherence of the theories given by large cardinals, but we have partial results. This coherence (at the arithmetic and, later, at the projective level), together with the companion generic absoluteness results, is one of the main sources of their acceptance. The problem with large cardinals is that they do not settle $\mathsf{CH}$.
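To make the generic absoluteness alluded to here concrete, a representative theorem of Woodin (I am stating it from memory, so the exact hypotheses should be checked against the literature) is that, in the presence of sufficiently many large cardinals, the theory of $L(\mathbb{R})$ cannot be changed by forcing:
$$\text{If there is a proper class of Woodin cardinals, then } L(\mathbb{R})^V \equiv L(\mathbb{R})^{V[G]} \text{ for every set-generic extension } V[G].$$
In particular, projective statements are then absolute between $V$ and its forcing extensions.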
One can think of large cardinals as providing the universe with "height". One may expect that the true universe of sets should also be "wide", and so after large cardinals one would pursue extensions that imply this width. The most successful set of principles providing the universe with such richness comes in the form of reflection principles. These principles (typically, more or less natural generalizations of reflection principles outright provable in $\mathsf{ZFC}$) tend to imply that $\mathsf{CH}$ is false (in fact, they usually imply that the continuum is $\aleph_2$). To my mind, the strongest current arguments for $\lnot\mathsf{CH}$ come from the embracing of strong reflection principles.
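To give one concrete instance of this pattern (the attributions are from memory and should be double-checked): Foreman, Magidor, and Shelah showed that Martin's maximum $\mathsf{MM}$, discussed below, decides the size of the continuum,
$$\mathsf{MM} \implies 2^{\aleph_0} = \aleph_2,$$
and the same conclusion already follows from reflection-style consequences of $\mathsf{MM}$ such as Todorcevic's strong reflection principle.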
(These principles come in several flavors. One can see many large cardinal hypotheses formulated in terms of elementary embeddings as essentially giving us local reflection principles. Others establish that many initial stages of the universe resemble the full universe in non-trivial ways. There are yet others.)
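For instance, the standard embedding characterization of measurability (included here only to illustrate the formulation; officially one works with ultrafilters or with set-sized fragments, since this version quantifies over classes) is
$$\kappa \text{ is measurable} \iff \text{there is an elementary } j\colon V\to M \text{ with } M \text{ transitive and } \operatorname{crit}(j)=\kappa,$$
and stronger large cardinal axioms demand that $M$ resemble $V$ more and more closely.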
Other arguments have been proposed. Determinacy, which contradicts choice but holds in some inner models (such as $L(\mathbb{R})$) if we accept large cardinals, implies that the continuum is very large. We expect that this should be reflected at the definable (projective) level, so not only do we expect $\mathsf{CH}$ to fail, we expect a theory that would provide us with explicit counterexamples. Donald Martin explicitly stated that, if one is to argue against $\mathsf{CH}$, this would be a reasonable program for achieving that goal; see for example
Donald A. Martin. Hilbert's first problem: the continuum hypothesis. In Mathematical developments arising from Hilbert problems (Proc. Sympos. Pure Math., Northern Illinois Univ., De Kalb, Ill., 1974), pp. 81–92. Amer. Math. Soc., Providence, R. I., 1976. MR0434826 (55 #7790).
The most successful result in this direction seems to be Woodin's realization that $\mathsf{MM}$, Martin's maximum (a strengthening of reflection principles, describing in a precise way how the universe satisfies a sort of "saturation"), implies that the projective ordinal $\boldsymbol{\delta}^1_2$ is $\omega_2$, thus indicating that there are projective pre-well-orderings of the reals that witness the falsehood of $\mathsf{CH}$.
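Here $\boldsymbol{\delta}^1_2$ is the supremum of the lengths of the $\boldsymbol{\Delta}^1_2$ pre-well-orderings of the reals. As a rough summary (attributions from memory):
$$\mathsf{ZFC}\vdash \boldsymbol{\delta}^1_2 \le \omega_2 \quad(\text{Kunen–Martin}), \qquad \mathsf{MM} \implies \boldsymbol{\delta}^1_2 = \omega_2 \quad(\text{Woodin}),$$
so under $\mathsf{MM}$ this projective ordinal takes the largest value it possibly can.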
For an account of other arguments against $\mathsf{CH}$, see Believing the axioms, by Penelope Maddy, particularly the second part: J. Symbolic Logic 53 (1988), no. 2, 481–511 and no. 3, 736–764. These papers are also an excellent introduction to the question of what counts as evidence when a formal proof is impossible.
A recent argument against $\mathsf{CH}$ was proposed by Woodin; see here and here. As explained in the first link above, there were some problems with this argument, which weaken its conclusion.
There are also set theorists who expect $\mathsf{CH}$ to be true. I do not consider the argument that $\mathsf{CH}$ trivializes the theory of cardinal characteristics of the continuum to carry much weight, for two reasons. First, the theory of Tukey reductions remains in place regardless of $\mathsf{CH}$. Second, $\mathsf{CH}$ provides us with a rich variety of objects in analysis, the theory of linear orders, algebra, etc., that more than makes up for the coincidence of a few cardinals. In fact, there is an empirical dichotomy: forcing axioms such as $\mathsf{MM}$ give us much control ("few objects"), while $\mathsf{CH}$ gives us great chaos ("as many examples as possible").
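As a small illustration of what "trivializes" means here (standard definitions, included just for concreteness): letting $\mathfrak{b}$ and $\mathfrak{d}$ be the least cardinalities of an unbounded, respectively dominating, family in $(\omega^\omega,\le^*)$, we have
$$\aleph_1 \le \mathfrak{b} \le \mathfrak{d} \le 2^{\aleph_0},$$
so under $\mathsf{CH}$ all of these coincide, while without $\mathsf{CH}$ it is consistent, for example, that $\mathfrak{b}<\mathfrak{d}$.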
The strongest arguments for $\mathsf{CH}$ that I know of come from three fronts: One, $\mathsf{CH}$ gives us a rich theory of generic embeddings. This view is detailed in the work of Matt Foreman. Foreman identified a natural hierarchy of generic embeddings, and (in the 1980s) showed that some of these embeddings actually imply $\mathsf{CH}$. These are natural statements that in a sense generalize large cardinals. On the other hand, there are also natural generic embedding statements that contradict $\mathsf{CH}$, so this argument, even if appealing, is at best inconclusive.
Two, rather than $\mathsf{CH}$ itself, we pursue a view of the universe as an "$L$-like" model, and it is a basic consequence of the resulting fine-structural theory that $\mathsf{CH}$ (in fact, $\mathsf{GCH}$) holds. This second view fits naturally with the inner model program of the California school, and any progress in this program can be seen as strengthening the case for $\mathsf{CH}$. However, this second approach loses some strength, as the pursuit of the inner model program is largely independent of the "background" theory, and conflating the two may be seen as a distraction rather than an advantage. Certainly, the fact that there are nice inner models in the universe does not immediately suggest that the universe itself is one of them. (On the other hand, one could argue that $\mathsf{HOD}$ should be but, again, that's another matter.)
There is yet a third argument, perhaps the strongest. Woodin has a nice result showing that $\mathsf{CH}$, itself a $\Sigma^2_1$ statement, is "maximal" among $\Sigma^2_1$ statements, in that any such statement that can be proved consistent with large cardinals via forcing is actually a consequence of $\mathsf{CH}$ (+ large cardinals).
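The precise formulation I have in mind is Woodin's $\Sigma^2_1$ absoluteness theorem (I am quoting the large cardinal hypothesis from memory, so it should be checked): assuming a proper class of measurable Woodin cardinals,
$$V[G]\models\mathsf{CH} \text{ and } V[H]\models\mathsf{CH} \implies V[G] \text{ and } V[H] \text{ satisfy the same } \Sigma^2_1 \text{ sentences},$$
for any set-generic extensions $V[G]$ and $V[H]$. In particular, a $\Sigma^2_1$ statement that can be forced at all already holds in every forcing extension satisfying $\mathsf{CH}$.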
There is not really a corresponding result for the negation of $\mathsf{CH}$. There are, however, maximality results of another kind (found through the theory of $\mathbb P_{max}$ forcing).
The result in itself is of course not "proof" that $\mathsf{CH}$ is true. If one wants to make a case, one should aim at presenting this maximality as part of a larger picture, and there have been many problems trying to carry this out. (See here for an extension.)
A very attractive alternative has emerged recently. We understand that a good set theory should allow us to interpret other theories. So, if we want to study models of $\mathsf{CH}$, the theory should be able to provide us with such models, without any artificial restrictions. Likewise, if we prefer models of $\mathsf{MM}$, displaying such models should be possible as well. The method of forcing gives us a tool for carrying out these interpretations. If it turns out that two theories are mutually interpretable, then on mathematical grounds there is really no reason to prefer one over the other. Such a choice would in principle be guided by extra-mathematical considerations, perhaps of an aesthetic nature.
Taking this point of view seriously leads to a multiverse theory. There are several candidate theories here (there are some additional details at the first link). In all of them, $\mathsf{CH}$ is true in some universes and false in others, and it does not quite make sense to ask whether it is true or false in absolute terms. The multiverse approach I prefer gives us a "partial order" of universes and their forcing extensions. If a universe in the multiverse is a forcing extension of another model, then that ground model is part of the multiverse as well. Part of the formalization of the theory may be in deciding whether there is a "distinguished ground model" or not. If there is, of course it makes sense to ask whether $\mathsf{CH}$ holds in it, but deciding this does not appear to me to constitute a solution to the continuum problem. In fact, one could say that the continuum problem is not a "real question" in this view of set theory. The clearest presentation of this approach that I am aware of is this essay by Steel.
Is there really a majority preferring the negation of $\mathsf{CH}$? The picture provided by reflection principles, forcing axioms, Woodin's results with $\mathbb P_{max}$, etc., is very compelling. On the other hand, the search for inner models of large cardinals gives us very nice theories implying $\mathsf{CH}$, including what Woodin calls "ultimate $L$", and then there are Woodin's maximality results. It appears to me that forcing axioms are more popular (though I may be biased). Certainly the proofs of their consistency (though involved) are less technical than the proofs of maximality from $\mathsf{CH}$ and its proposed generalizations, so it seems natural that more set theorists study their consequences and consider them.
Popularity, of course, cannot be the arbiter of truth: whatever axioms we decide best explain the universe of sets ought not to be chosen on that basis. But the popularity of forcing axioms and reflection principles is due to mathematical reasons: They have great explanatory power. Many classification results can be carried out in their presence. They serve as organizing principles. These are all benchmarks of a good theory.
The principles discussed by Maddy in her essays suggest that axioms giving us a rich universe of sets should be preferred over the alternative. This explains why principles such as $V=L$ (which implies $\mathsf{CH}$), preferred by some model theorists (but see here), are actually not given much weight: they unnecessarily restrict the interpretative power of the theory. (For an opposing view to this argument against $V=L$, see here.)
I expect in the future the multiverse view will gain weight. Beyond this, I cannot really say what the ultimate status of $\mathsf{CH}$ will be.