Securing Processors after Meltdown and Spectre
Can bug-free software resist attacks when used in our computers or smartphones? One would like to think so, but a closer look at how these devices operate shows that this is far from certain.
The race for performance...
There are several abstraction layers in computing. First come the programs we are familiar with, which are intended for end users. These run on an operating system, which communicates with hardware by means of drivers. The hardware understands a fixed set of instructions, which can be implemented in a number of different ways, known as microarchitectures.
In order to write a program, a developer only needs to interact with the operating system, not to understand it perfectly. Similarly, creating such a system means communicating with the hardware, not having a full grasp of all of its components. Each layer is therefore designed as an interface to the layer above it, and can be used without fully mastering its inner workings. This facilitates the development of applications as well as changes to hardware, enabling manufacturers to alter how hardware works without having to rewrite all of the software that runs on it.
This is all the better since manufacturers are always fine-tuning hardware. The main selling point of a computer or smartphone is its performance, as we want our favorite programs to open quickly and be responsive. For this reason, the component that has evolved the most in our devices is the central processing unit (CPU), which acts as the machine's brain: it receives and processes instructions, makes calculations, accesses memory to retrieve data, and verifies authorizations for each of these operations. Manufacturers have made increasingly complex processors to keep up with the performance race, adding hardware to optimize these procedures, as well as a series of "tricks" to help CPUs save time.
To the detriment of security?
In early January 2018, two major attacks targeting processors, called Meltdown and Spectre, were disclosed and widely debated. These critical vulnerabilities had been discovered independently, only a few months apart, by several research teams, including at Google and the Graz University of Technology in Austria. Meltdown and Spectre exploited the "tricks" used by processors, which sometimes need information that is not yet available in order to complete the next operation. Rather than wait for this data, the CPU speculates on the next steps to be performed. It sometimes guesses wrong, in which case it discards the speculative work to avoid producing an incorrect result. The attacks consist in tricking the processor into executing operations it should never finalize. The processor eventually detects the mistake and discards the results, so the program's visible output is unaffected; yet the operations were nonetheless carried out, and they leave traces in the CPU's internal units. An attacker can thus indirectly retrieve elements of this "ghost execution," even when the program contains no software vulnerability.
While exploiting this ghost execution was the novelty of Spectre and Meltdown, recovering traces of secret data at the microarchitectural level is hardly new. Microarchitectural attacks have been theorized for more than twenty years, and the first practical demonstrations date back over a decade. Spectre and Meltdown built on this body of research.
For a few years now, scientists have taken a particular interest in programs that use the cache, a small, ultrafast memory located inside the processor. Processors are fitted with caches because main memory, also known as RAM, is quite slow compared with their calculation speed. Caches are very fast but have limited capacity: even a recent model holds only a few megabytes, barely enough for a high-definition photo. They are nonetheless invaluable for storing recently used data, so as to speed up subsequent accesses.
This raises a series of security problems: a malicious program can infer what another program is doing simply by measuring how long its own memory accesses take, making it possible to attack certain implementations of otherwise secure encryption algorithms, such as AES or RSA.
Changing paradigm to improve processor security
While the various abstraction layers have the advantage of facilitating software development and backward compatibility, they have drawbacks in terms of security, which was long thought to be primarily a software matter, to the exclusion of hardware.
Yet software ultimately runs on hardware, and the two are therefore impossible to dissociate. The fact that developing software does not require understanding all of the subtleties of hardware, which is itself increasingly complex, leaves the door open to vulnerabilities such as the cache attacks described above, or Spectre and Meltdown. It therefore seems that this widespread paradigm must change in order to secure our information systems.
This raises a number of issues. First, there is as yet no effective method for detecting whether such attacks are taking place in a system, as they leave no trace in the software or files present on the machine. Nor is there an effective method for verifying whether a program, such as an encryption implementation, leaks secret information through the microarchitecture. To complicate matters, these attacks are made possible by optimization techniques that have been part of processors for fifteen to twenty years and are crucial to their performance, so manufacturers cannot simply do away with them. Put simply, since the attacks are a by-product of performance optimization, preventing the former without sacrificing the latter is proving to be quite a challenge!
The analysis, views and opinions expressed in this section are those of the authors and do not necessarily reflect the position or policies of the CNRS.