Will you still feed me, when I’m 64-bit...

Porting software to 64-bit compatibility can have unexpected security implications.

64-bit architecture is well and truly here, but 32-bit software is still in wide use. However, porting software to a 64-bit platform can have unexpected security implications, even without any code changes in the programs, drivers or operating systems involved. This is particularly dangerous where code has already been subject to a code review and assessed as free from exploitable vulnerabilities in a 32-bit environment: the same code can become vulnerable as soon as it is compiled for a 64-bit system.

With the wide availability of x64 CPUs, many organisations are now switching to 64-bit operating systems and applications. This is driven by the increasing memory requirements of applications and servers, the decreasing cost of the new hardware and the widely-available support for applications and operating systems.

When code reviews are conducted of C/C++ applications that were developed on 32-bit systems and then ported to 64-bit, certain classes of security vulnerability are commonly identified.

This article gives a brief overview of these types of vulnerability and what to do about them.

It should be noted that these classes of vulnerability are not new and similar issues have been found and exploited before. However, the migration to 64-bit technology is regularly leaving organisations exposed to risk, particularly when there is a reliance on security reviews and assurance activities performed previously on a different architecture.

Why can problems occur?

On 32-bit systems, the amount of input an application can accept is naturally limited by the available address space. For example, on Microsoft Windows a user-mode process is by default limited to 2 gigabytes of address space, and the space actually available for memory allocations is considerably less, since it must also accommodate the binaries, stacks and heaps. (The /3GB boot switch raises the user-mode limit to 3 gigabytes, but this is not the default setting.)

However, on 64-bit systems these limits are greatly increased and allocation of much larger memory blocks may be possible, particularly with the large amounts of RAM now available on 64-bit systems. Whilst good practice dictates that the size of any data passed to a function is checked, it is often the case that developers make assumptions about the maximum possible size of that data – and these assumptions could be based on the upper limit for a memory allocation on the platform itself. When transferred to a 64-bit system, these deviations from best practice can become exploitable if an attacker can introduce large amounts of data into the application.

While providing large amounts of data to an application may not seem practical as an attack in some situations, it should be remembered that on a 20Mbit line it will only take about half an hour to send 4 gigabytes of data. As many applications will happily sit there unattended and unmonitored accepting input, this is a perfectly viable attack. Similarly, local application or kernel vulnerabilities which require large amounts of memory are even more likely to be exploited, as allocating and filling 4 gigabytes of memory will only take seconds on modern systems.


Below are some examples of vulnerabilities that can occur on 64-bit systems but would not be exploitable on 32-bit systems.

Integer overflows

Where the size of some input is obtained and then added to (for example, incremented to make space for a terminating character), and that size is held in a 32-bit unsigned integer, the addition can overflow if the amount of data supplied approaches the maximum value of that integer. On 32-bit systems there would never be enough memory to hold 0xFFFFFFFF bytes of data alongside the program code and the operating system, so the size could never grow large enough to trigger the overflow. On 64-bit systems, however, this becomes a real possibility.
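As a minimal sketch of this pattern (the function names and structure are illustrative, not taken from any particular codebase), consider a routine that copies input and appends a terminator:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the flawed pattern: the length is held
 * in a 32-bit unsigned int and incremented for a terminator.
 * On a 32-bit system 'len' could never reach UINT_MAX, so the
 * addition could never wrap; on a 64-bit system it can. */
char *copy_with_terminator_flawed(const char *data, unsigned int len)
{
    char *buf = malloc(len + 1);   /* len == UINT_MAX => malloc(0) */
    if (buf == NULL)
        return NULL;
    memcpy(buf, data, len);        /* writes far past the tiny buffer */
    buf[len] = '\0';
    return buf;
}

/* Safer: keep the size in size_t and check for wrap before adding. */
char *copy_with_terminator_checked(const char *data, size_t len)
{
    if (len == SIZE_MAX)
        return NULL;               /* len + 1 would overflow */
    char *buf = malloc(len + 1);
    if (buf == NULL)
        return NULL;
    memcpy(buf, data, len);
    buf[len] = '\0';
    return buf;
}
```

The checked variant also illustrates the general principle: perform the overflow test before the arithmetic, not after.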

Truncation in conversion from “long” to “int”

On 32-bit systems, the types ‘unsigned int’, ‘long’ and ‘size_t’ are all 32 bits wide, and code frequently uses them interchangeably. On 64-bit systems they are no longer equivalent: under the LP64 model used by Linux and macOS, ‘long’ and ‘size_t’ grow to 64 bits while ‘int’ remains 32 bits, and under the LLP64 model used by 64-bit Windows, only ‘size_t’ grows. In situations where these types have not been used in the correct manner, exploitable conditions can exist.
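A minimal sketch of the truncation hazard (the function and values here are illustrative): a length that fits comfortably in size_t on a 64-bit build silently loses its upper 32 bits when stored in a narrower type.

```c
#include <stddef.h>

/* Hypothetical sketch: a length computed as size_t but stored in
 * 'unsigned int' - a harmless habit when both types were 32 bits
 * wide. On a 64-bit build the assignment truncates, so an input
 * of 4 GB + 7 bytes appears to be just 7 bytes long, and any
 * bounds check performed on the truncated value will pass. */
unsigned int stored_length(size_t real_len)
{
    unsigned int len = (unsigned int)real_len; /* truncates on 64-bit */
    return len;
}
```

Code that later uses the original 64-bit length for a copy, but the truncated one for the allocation or the check, is directly exploitable.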


As the previous examples show, the migration of software from 32-bit to 64-bit systems can introduce new vulnerabilities, or make previously unexploitable vulnerabilities exploitable. Consequently, it is recommended that the migration process always includes a code review focused on security. As we have seen, the assumptions made by programmers and relied upon in previous code reviews may no longer hold true.

The following recommendations provide general guidance on identifying and resolving the types of issues which could be encountered:

  • Are there any size limits on incoming data? If not, it is very likely that the code handling the incoming data is flawed, or that the functions which later consume the input do not handle it safely. Reallocation operations in network applications have proven particularly vulnerable. In many scenarios, limiting the input data to prevent excessive amounts of memory being allocated is a reasonable control to enforce.
  • Review any usage of ‘int’ types for length, offset and size values. Any use of a 32-bit integer for these kinds of values should be investigated, as such code is flawed in the great majority of cases; each instance found will need to be evaluated to determine its impact. Developers may wish to review the use of ‘int’ across the application as a whole, and use safer types such as ‘long’ or, preferably, ‘size_t’ (noting that ‘long’ remains 32 bits wide on 64-bit Windows).
  • When code is first compiled for a 64-bit platform, it is important that attention is paid to any compiler warnings, especially those concerning truncation and casting of integer types. These can often indicate bugs which might be exploitable.
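For instance (the fragment below is illustrative), compiling this code for a 64-bit target with warnings enabled, e.g. `gcc -Wall -Wextra -Wconversion`, flags the implicit narrowing that would otherwise go unnoticed:

```c
#include <string.h>

/* strlen() returns size_t, which is 64 bits wide on a 64-bit
 * build; returning it through 'unsigned int' narrows the value.
 * With -Wconversion, GCC and Clang warn that this implicit
 * conversion may change the value. */
unsigned int input_length(const char *s)
{
    return strlen(s);   /* implicit size_t -> unsigned int */
}
```

Treating such warnings as errors (`-Werror`) during the porting effort forces each narrowing to be reviewed and made explicit.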


