Why should casting be avoided? [closed]
I generally avoid casting types as much as possible since I am under the impression that it's poor coding practice and may incur a performance penalty.
But if someone asked me to explain why exactly that is, I would probably look at them like a deer in headlights.
So why/when is casting bad?
Is it general for Java, C#, and C++, or does every runtime environment deal with it on its own terms?
Specifics for any language are welcome; for example, why is it bad in C++?
Solution 1:
You've tagged this with three languages, and the answers are really quite different between the three. Discussion of C++ more or less implies discussion of C casts as well, and that gives (more or less) a fourth answer.
Since it's the one you didn't mention explicitly, I'll start with C. C casts have a number of problems. One is that they can do any of a number of different things. In some cases, the cast does nothing more than tell the compiler (in essence): "shut up, I know what I'm doing" -- i.e., it ensures that even when you do a conversion that could cause problems, the compiler won't warn you about those potential problems. Just for example, char a = (char)123456;. The exact result of this is implementation defined (it depends on the size and signedness of char), and except in rather strange situations, probably isn't useful. C casts also vary in whether they're something that happens only at compile time (i.e., you're just telling the compiler how to interpret/treat some data) or something that happens at run time (e.g., an actual conversion from double to long).
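To make that concrete, here is a minimal sketch of those two points. The printed values assume a typical 8-bit char and an ordinary double-to-long truncation; the standard only promises implementation-defined behaviour for the narrowing cast, as noted above.

```cpp
#include <cstdio>

int main() {
    // The cast silences the compiler, but 123456 cannot fit in a char.
    // On a typical 8-bit char only the low byte survives -- the exact
    // result is implementation defined.
    char a = (char)123456;
    std::printf("a = %d\n", (int)a);

    // A "compile-time only" cast: reinterpret the same bytes through a
    // different pointer type. No conversion code is generated at all.
    double d = 3.9;
    const unsigned char* bytes = (const unsigned char*)&d;
    std::printf("first byte of d: %u\n", (unsigned)bytes[0]);

    // A "run-time" cast: an actual conversion is performed,
    // truncating the double to a long.
    long n = (long)d;   // n == 3, the fractional part is discarded
    std::printf("n = %ld\n", n);
    return 0;
}
```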
C++ attempts to deal with that to at least some extent by adding a number of "new" cast operators, each of which is restricted to only a subset of the capabilities of a C cast. This makes it more difficult to (for example) accidentally do a conversion you really didn't intend -- if you only intend to cast away constness on an object, you can use const_cast and be sure that the only thing it can affect is whether an object is const, volatile, or not. Conversely, a static_cast is not allowed to affect whether an object is const or volatile. In short, you have most of the same types of capabilities, but they're categorized so one cast can generally only do one kind of conversion, where a single C-style cast can do two or three conversions in one operation. The primary exception is that you can use a dynamic_cast in place of a static_cast in at least some cases, and despite being written as a dynamic_cast, it'll really end up as a static_cast. For example, you can use dynamic_cast to traverse up or down a class hierarchy -- but a cast "up" the hierarchy is always safe, so it can be done statically, while a cast "down" the hierarchy isn't necessarily safe so it's done dynamically.
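Sketched in code, the division of labour looks roughly like this (Base, Derived, and Unrelated are invented names for the illustration):

```cpp
#include <iostream>

struct Base { virtual ~Base() = default; };
struct Derived : Base { void hello() { std::cout << "Derived\n"; } };
struct Unrelated : Base {};

void demo(const int& ci) {
    // const_cast can only add or remove const/volatile -- nothing else.
    int& mutable_ref = const_cast<int&>(ci);
    (void)mutable_ref;
}

int main() {
    Derived d;

    // Casting "up" the hierarchy is always safe; dynamic_cast here is
    // resolved statically, exactly as a static_cast would be.
    Base* base = dynamic_cast<Base*>(&d);

    // Casting "down" needs a run-time check: the pointer form yields
    // nullptr if the object isn't really of the target type.
    if (Derived* back = dynamic_cast<Derived*>(base)) {
        back->hello();
    }
    if (dynamic_cast<Unrelated*>(base) == nullptr) {
        std::cout << "not an Unrelated, cast refused at run time\n";
    }

    // static_cast converts with no run-time check at all, and it is
    // not allowed to cast away constness.
    double x = 3.7;
    int truncated = static_cast<int>(x);   // 3
    std::cout << truncated << '\n';

    int value = 42;
    demo(value);
    return 0;
}
```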
Java and C# are much more similar to each other. In particular, with both of them casting is (virtually?) always a run-time operation. In terms of the C++ cast operators, it's usually closest to a dynamic_cast in terms of what's really done -- i.e., when you attempt to cast an object to some target type, the compiler inserts a run-time check to see whether that conversion is allowed, and throws an exception if it's not. The exact details (e.g., the name used for the "bad cast" exception) vary, but the basic principle remains mostly similar (though, if memory serves, Java does make casts applied to the few non-object types like int much closer to C casts -- but these types are used rarely enough that 1) I don't remember that for sure, and 2) even if it's true, it doesn't matter much anyway).
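For comparison with the Java/C# behaviour just described, the reference form of C++'s dynamic_cast follows the same "check at run time, throw on failure" pattern. A rough sketch, with made-up types:

```cpp
#include <iostream>
#include <typeinfo>

struct Animal { virtual ~Animal() = default; };
struct Dog : Animal {};
struct Cat : Animal {};

int main() {
    Dog dog;
    Animal& a = dog;

    try {
        // The run-time check fails: 'a' really refers to a Dog, so
        // converting the reference to Cat& throws std::bad_cast --
        // much like a ClassCastException / InvalidCastException.
        Cat& c = dynamic_cast<Cat&>(a);
        (void)c;
    } catch (const std::bad_cast& e) {
        std::cout << "bad cast: " << e.what() << '\n';
    }
    return 0;
}
```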
Looking at things more generally, the situation's pretty simple (at least IMO): a cast (obviously enough) means you're converting something from one type to another. When/if you do that, it raises the question "Why?" If you really want something to be a particular type, why didn't you define it to be that type to start with? That's not to say there's never a reason to do such a conversion, but anytime it happens, it should prompt the question of whether you could re-design the code so the correct type was used throughout. Even seemingly innocuous conversions (e.g., between integer and floating point) should be examined much more closely than is common. Despite their seeming similarity, integers should really be used for "counted" types of things and floating point for "measured" kinds of things. Ignoring the distinction is what leads to some of the crazy statements like "the average American family has 1.8 children." Even though we can all see how that happens, the fact is that no family has 1.8 children. They might have 1, or they might have 2, or they might have more than that -- but never 1.8.
Solution 2:
Lots of good answers here. Here's the way I look at it (from a C# perspective).
Casting usually means one of two things:
- I know the runtime type of this expression but the compiler does not know it. Compiler, I am telling you, at runtime the object that corresponds to this expression is really going to be of this type. As of now, you know that this expression is to be treated as being of this type. Generate code that assumes that the object will be of the given type, or, throw an exception if I'm wrong.
- Both the compiler and the developer know the runtime type of the expression. There is another value of a different type associated with the value that this expression will have at runtime. Generate code that produces the value of the desired type from the value of the given type; if you cannot do so, then throw an exception.
Notice that those are opposites. There are two kinds of casts! There are casts where you are giving a hint to the compiler about reality - hey, this thing of type object is actually of type Customer - and there are casts where you are telling the compiler to perform a mapping from one type to another - hey, I need the int that corresponds to this double.
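The answer is phrased in C# terms, but the same two flavours can be sketched in C++ (Customer comes from the answer's own example; the Person base type is invented for the sketch):

```cpp
#include <iostream>

struct Person { virtual ~Person() = default; };
struct Customer : Person { void bill() { std::cout << "billing\n"; } };

int main() {
    Person* p = new Customer;   // statically a Person, really a Customer

    // Kind 1: a hint about reality. The representation doesn't change;
    // a run-time check confirms the object really is a Customer and
    // lets us treat it as one.
    if (Customer* c = dynamic_cast<Customer*>(p)) {
        c->bill();
    }

    // Kind 2: a mapping between types. A genuinely new value of a
    // different type is produced from the old one.
    double price = 19.99;
    int whole = static_cast<int>(price);   // 19 -- different value, different bits
    std::cout << whole << '\n';

    delete p;
    return 0;
}
```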
Both kinds of casts are red flags. The first kind of cast raises the question "why exactly is it that the developer knows something that the compiler doesn't?" If you are in that situation then the better thing to do is usually to change the program so that the compiler does have a handle on reality. Then you don't need the cast; the analysis is done at compile time.
The second kind of cast raises the question "why isn't the operation being done in the target data type in the first place?" If you need a result in ints then why are you holding a double in the first place? Shouldn't you be holding an int?
Some additional thoughts here:
Link
Solution 3:
Casting errors are always reported as run-time errors in Java. Using generics or templating turns these errors into compile-time errors, making it much easier to detect when you have made a mistake.
As I said above, this isn't to say that all casting is bad. But if it is possible to avoid it, it's best to do so.
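A rough sketch of the templating half of that claim, in C++ (the Java generics version is analogous):

```cpp
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Without templates: store everything as void* and cast it back.
    // Nothing stops us from casting to the wrong type -- the mistake
    // only shows up (if at all) when the program misbehaves at run time.
    std::vector<void*> untyped;
    std::string name = "Alice";
    untyped.push_back(&name);
    std::string* s = static_cast<std::string*>(untyped[0]);  // we must remember the type
    std::cout << *s << '\n';

    // With a templated container the element type is part of the
    // container's type, so a wrong assumption is rejected at compile
    // time and no cast is needed when reading elements back.
    std::vector<std::string> typed;
    typed.push_back("Alice");
    std::cout << typed[0] << '\n';
    // typed.push_back(42);   // would not compile: 42 is not a std::string
    return 0;
}
```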
Solution 4:
Casting is not inherently bad; it's just that it's often misused as a means to achieve something that really should either not be done at all, or done more elegantly.
If it was universally bad, languages would not support it. Like any other language feature, it has its place.
My advice would be to focus on your primary language, and understand all its casts, and associated best practices. That should inform excursions into other languages.
The relevant C# docs are here.
There is a great summary on C++ options at a previous SO question here.