How should I prepare my 32-bit Delphi programs for an eventual 64-bit compiler? [duplicate]
Solution 1:
First up, a disclaimer: although I work for Embarcadero, I can't speak for my employer. What I'm about to write is based on my own opinion of how a hypothetical 64-bit Delphi should work, but there may or may not be competing opinions and other foreseen or unforeseen incompatibilities and events that cause alternative design decisions to be made.
That said:
There are two integer types, NativeInt and NativeUInt, whose size will float between 32-bit and 64-bit depending on platform. They've been around for quite a few releases. No other integer types will change size depending on the bitness of the target.
Make sure that any place that relies on casting a pointer value to an integer or vice versa is using NativeInt or NativeUInt for the integer type. TComponent.Tag should be NativeInt in later versions of Delphi.
I'd suggest not using NativeInt or NativeUInt for non-pointer-based values. Try to keep your code semantically the same between 32-bit and 64-bit. If you need 32 bits of range, use Integer; if you need 64 bits, use Int64. That way your code should run the same on both bitnesses. Only if you're casting to and from a Pointer value of some kind, such as a reference or a THandle, should you use NativeInt.
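For example, here's a minimal sketch of the kind of pointer round-trip that needs a pointer-sized integer (the procedure name is just for illustration):

    procedure RoundTripExample;
    var
      Obj: TObject;
      Slot: NativeInt;
    begin
      Obj := TObject.Create;
      try
        Slot := NativeInt(Obj);        // pointer-sized on both 32-bit and 64-bit
        Assert(TObject(Slot) = Obj);   // the reference round-trips intact
        // Slot := Integer(Obj);       // not safe: Integer stays 32-bit on 64-bit targets
      finally
        Obj.Free;
      end;
    end;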
Use PByte for pointer arithmetic where possible, in preference to NativeInt or NativeUInt. It will suffice for most purposes, and is more typesafe because it can't be (easily) mistaken for a normal integer type, and vice versa.

Pointer-like things should follow similar rules to pointers: object references (obviously), but also things like HWND, THandle, etc.
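A minimal sketch of typed pointer arithmetic with PByte (the routine is hypothetical):

    procedure FillPattern(Buffer: Pointer; Size: Integer);
    var
      P: PByte;
      I: Integer;
    begin
      P := PByte(Buffer);
      for I := 0 to Size - 1 do
      begin
        P^ := Byte(I and $FF);   // write through the typed pointer
        Inc(P);                  // typed increment; no NativeInt round-trip needed
      end;
    end;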
Don't rely on internal details of strings and dynamic arrays, like their header data.
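As an illustration of the kind of code that advice rules out, here is an anti-pattern sketch that assumes the classic 32-bit string header layout; the offset is exactly the sort of internal detail a 64-bit RTL would be free to change:

    function UnsafeLength(const S: AnsiString): Integer;
    begin
      // Reads the length field straight out of the string header (-4 in the
      // classic 32-bit layout). Don't do this; use Length() instead.
      if Pointer(S) = nil then
        Result := 0
      else
        Result := PInteger(NativeInt(Pointer(S)) - 4)^;
    end;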
Our general policy on API changes for 64-bit should be to keep the same API between 32-bit and 64-bit where possible, even if it means that the 64-bit API does not necessarily take advantage of the machine. For example, TList will probably only handle MaxInt div SizeOf(Pointer) elements, in order to keep Count, indexes etc. as Integer. Because the Integer type won't float (i.e. change size depending on bitness), we don't want to have ripple effects on customer code: any indexes that round-tripped through an Integer-typed variable, or for-loop index, would be truncated and potentially cause subtle bugs.
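For illustration, this is the kind of everyday code the policy is meant to keep compiling and behaving identically on both platforms:

    uses Classes;

    procedure FreeAll(List: TList);
    var
      I: Integer;
    begin
      // Count stays Integer, so the Integer-typed loop index works unchanged;
      // an Int64-typed Count would silently narrow through I here.
      for I := 0 to List.Count - 1 do
        TObject(List[I]).Free;   // assumes the list holds object references
    end;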
Where APIs are extended for 64-bit, they will most likely be done with an extra function / method / property to access the extra data, and this API will also be supported in 32-bit. For example, the Length() standard routine will probably return values of type Integer for arguments of type string or dynamic array; if one wants to deal with very large dynamic arrays, there may be a LongLength() routine as well, whose implementation in 32-bit is the same as Length(). Length() would throw an exception in 64-bit if applied to a dynamic array with more than 2^32 elements.
Related to this, there will probably be improved error checking for narrowing operations in the language, especially narrowing 64-bit values to 32-bit locations. This would hurt the usability of assigning the return value of Length() to locations of type Integer if Length() returned Int64. On the other hand, specifically for compiler-magic functions like Length(), there may be some advantage taken of the magic, e.g. switching the return type based on context. But similar advantage can't be taken in non-magic APIs.
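A small sketch of that usability concern (none of this is final API, as stressed above):

    procedure LengthExample;
    var
      A: array of Byte;
      N: Integer;
    begin
      SetLength(A, 100);
      N := Length(A);   // fine as long as Length() keeps returning Integer
      // If Length() returned Int64, this assignment would fall under the
      // stricter 64-to-32-bit narrowing checks described above, or need a cast.
    end;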
Dynamic arrays will probably support 64-bit indexing. Note that Java arrays are limited to 32-bit indexing, even on 64-bit platforms.
Strings probably will be limited to 32-bit indexing. We have a hard time coming up with realistic reasons for people wanting 4GB+ strings that really are strings, and not just managed blobs of data, for which dynamic arrays may serve just as well.
There may be a built-in assembler, but with restrictions, such as not being able to freely mix it with Delphi code; there are also rules around exceptions and stack frame layout that need to be followed on x64.
Solution 2:
First of all, FreePascal already offers 64-bit support. It's not Delphi, though.
Second of all, I expect about the same problems that existed when Delphi 1 was upgraded to Delphi 2. The biggest problem is mostly address-space related: pointers will be widened from 4 bytes to 8 bytes. In Win16 they used to be 2 bytes, and a trick was needed to get over the 64 KB boundary by using segments and offsets for pointers. (With the possibility of using default segments for several tasks.)
It's also likely that certain data types will become bigger than they are now. The Integer type will most likely be 8 bytes. (It used to be just 2 bytes back in Windows 2.) Enumerations will likely become bigger too. But most other data types are likely to keep their current size, so not too many changes here.
Another issue will be memory requirements. Since pointers will be 8 bytes long, an application that uses a lot of them will also eat up a lot more memory. A list with 10,000 pointers will grow from 40,000 bytes to 80,000 bytes. Expect to need a bit more memory than on a 32-bit system.
Speed will also change a bit. Since the processor now handles 8 bytes at a time, it can process data faster. But since pointers and some data types become bigger, moving them to and from devices or memory will be a bit slower. In general, your applications will be slightly faster, but some parts might actually become slower!
Finally, changes in the Windows API will require you to use the 64-bit API functions. Maybe the Delphi compiler will do something smart to allow code to call 32-bit API functions, but that would hurt performance because the processor would have to switch between native 64-bit mode and emulated 32-bit mode.
Solution 3:
Depending on your code, you can try to compile it using FreePascal, which supports both 32-bit and 64-bit compilation. The compiler will warn you about possibly erroneous places in your code.
Solution 4:
Many similar questions were asked when it was announced that Delphi 2009 would only create Unicode applications. In the end it turned out that most existing code ran just fine without changes. The tricky parts were code that assumed SizeOf(Char) = 1, and 3rd-party components that might be doing that.
I would expect the move to 64-bit Delphi to be a similar experience. Everything just works out of the box, except for code that plays tricks with pointers and assumes that SizeOf(Pointer) = 4 or SizeOf(Pointer) = SizeOf(Integer). You can already fix such issues today by calling SizeOf(Pointer) rather than hardcoding 4, and by using NativeInt or NativeUInt when you need pointer-sized integers.
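For instance, a small sketch of sizing a buffer without hardcoding the pointer width (the routine name is made up):

    procedure AllocatePointerTable(Count: Integer);
    var
      Table: Pointer;
      Bytes: NativeInt;
    begin
      // SizeOf(Pointer) is 4 on 32-bit and 8 on 64-bit, so the same source
      // allocates the right amount on either platform.
      Bytes := NativeInt(Count) * SizeOf(Pointer);
      GetMem(Table, Bytes);
      try
        // ... use the table ...
      finally
        FreeMem(Table);
      end;
    end;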
You should use SizeOf(Pointer) rather than SizeOf(NativeInt) if you want your code to work with Delphi 2007. Delphi 2007 has an unfortunate bug that causes SizeOf(NativeInt) to return 8 instead of 4 as it should. This was fixed in Delphi 2009.
Solution 5:
The vast majority of simple applications should work just fine. As far as I can see, only applications that manually make use of pointers are at risk. Indeed, if a pointer is now 64-bit and you use it in calculations together with Integers or Cardinals (which are still 32-bit by default), you will get into trouble. I also think it is rather common for declarations of API functions that take pointers as arguments to use Cardinal instead of the (unsigned) native integer type.
To make code that works well on any platform, one should use NativeUInt (IIRC; I don't have a Delphi compiler at hand right now) instead of Cardinal when working with pointers and integers simultaneously.
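A minimal sketch of what such a declaration could look like; SomeApiCall and some.dll are hypothetical placeholders, the point is the parameter type:

    // Cardinal here would truncate addresses on a 64-bit target;
    // NativeUInt stays pointer-sized on both platforms.
    function SomeApiCall(Buffer: NativeUInt; Size: Cardinal): LongBool; stdcall;
      external 'some.dll';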