April 23, 2015

New computer chips could defend against memory-access attacks

Chuck Bednar (@BednarChuck)

A team of MIT engineers and computer scientists who developed a method for thwarting cyberattacks in the cloud two years ago has adapted the technique for use in computer chips, keeping hackers from deducing your data from memory-access patterns, even without gaining direct access to it.

The researchers, who first presented a layout of a custom-built chip that uses their technique at the Architectural Support for Programming Languages and Operating Systems conference last month, have designed the circuitry to query multiple memory addresses in order to mask the one from which it is actually retrieving data.

By disguising memory-access patterns, MIT professor Srini Devadas and his colleagues hoped to protect people from potential threats in the cloud. Now, by adapting the technology to chips used in home systems, they are looking to keep prying eyes from stealing your computer’s data.

Preventing attacks by accessing more data than necessary

However, the researchers explained that querying multiple addresses requires shipping far more data between the chip and the system’s memory than would normally be required. So they came up with a way to minimize the amount of extra information needed: storing addresses in a “tree,” a data structure in which each “node” is attached only to the ones above and below it.

Each address is randomly assigned to a path through the tree – or, more specifically, a sequence of nodes that stretches from the top of the tree to its bottom without backtracking. When the chip requires the data stored at a specific node, it also requests the data stored at every other node along the same path.

In their prior work, the members of Devadas’ group proved that pulling data from one lone path was as effective at throwing off a potential cyberattacker as if the chip had collected data from every single memory address in use. Once it reads the path, however, their chip must also write data back to the whole path to further obscure which node was the target.
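To make the path-based lookup concrete, here is a minimal Python sketch of the idea described above. The tree size, the `position_map`, and the function names are illustrative assumptions on my part; a real design, like the Oblivious RAM scheme the MIT group builds on, would also encrypt every block and remap a block to a fresh random path after each access.

```python
import random

# A complete binary tree stored as an array: node 0 is the root, and the
# children of node i sit at 2*i + 1 and 2*i + 2.
LEVELS = 4                      # tree height; 2**LEVELS leaves
NODES = 2 ** (LEVELS + 1) - 1   # total nodes in the tree

def path_nodes(leaf):
    """Node indices from the root down to the given leaf (no backtracking)."""
    node = (NODES // 2) + leaf  # leaves occupy the last level of the array
    path = []
    while True:
        path.append(node)
        if node == 0:
            break
        node = (node - 1) // 2  # step up to the parent
    return list(reversed(path))

# Each address is randomly assigned to a path (equivalently, to a leaf).
position_map = {addr: random.randrange(2 ** LEVELS) for addr in range(8)}

def oblivious_read(addr, memory):
    """Fetch EVERY node on the block's path, hiding which one we wanted."""
    leaf = position_map[addr]
    return [memory[n] for n in path_nodes(leaf)]
```

Because every read touches a full root-to-leaf path, an observer watching memory traffic sees the same access shape no matter which block was actually requested.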

“Back in 2012, we were working on how to run real life programs using a cryptographic technique called ‘Fully Homomorphic Encryption (FHE),’” MIT graduate student Chris Fletcher, first author of the research, told redOrbit via email. “FHE is like a magic trick: it allows a cloud server to compute on your encrypted data without ever decrypting it! At no point does the cloud server learn anything about your data, giving the best possible security. The problem is FHE is very slow, and its overhead explodes further if you try to run complicated programs.”

“Our idea was to approximate the security of FHE with a processor chip to dramatically reduce the overheads,” he added. “When the program runs inside the chip, you trust the processor to keep it safe (it's very difficult to break the processor open and see what is going on inside). When your program and data need to leave the processor – say, to read or write some external database – we use a cryptographic technique called Oblivious RAM (ORAM) to scramble the read/write. So, if you trust just the processor (as opposed to the whole cloud) you get a similar level of security as FHE.  At the same time, you can now run complicated programs for just the cost of scrambling the external requests (much, much cheaper than FHE).”

Using different nodes for reading and writing data

For maximum effect, the chip typically does not write data back to the node from which it read it, they added. Also, most nodes lie on more than one path, so when the chip writes information to memory, it pushes that data as far down the tree as it can, searching for a vacancy just before the block’s assigned path branches off from the path it has just read.

The goal, explained Albert Kwon, an MIT graduate student in electrical engineering and computer science and one of the researchers involved in the study, is to avoid congestion at the top of the tree. Even when writing the data, the researchers added, the chip still needs to follow the sequence of nodes in the path, to keep hackers from inferring anything about the stored data.
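The write-back rule above can be illustrated with a small sketch: a block lands at the deepest node shared by the path just read and the block's own assigned path, i.e. the last node before the two paths branch apart. The node indices here are hypothetical, chosen to match a small binary tree.

```python
def deepest_shared_node(path_read, path_assigned):
    """Last node (root-first paths) the two root-to-leaf paths have in common."""
    shared = None
    for a, b in zip(path_read, path_assigned):
        if a != b:
            break       # the paths have branched apart
        shared = a
    return shared

# Paths are listed root-first, e.g. from a small binary tree.
read_path     = [0, 1, 3, 7, 15]   # the path the chip just read
assigned_path = [0, 1, 4, 9, 19]   # the block's own assigned path

# The block can be written back no deeper than node 1, where the paths diverge.
```

Pushing each block to that deepest shared node is what keeps data from piling up near the root of the tree.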

Prior attempts at these types of systems required sorting memory addresses based on where they were located on the tree, but the new MIT chip is outfitted with an extra memory circuit whose storage slots can be mapped to the sequence of nodes in any path through the tree. As the system determines the final location of a data block, it stores the block in the corresponding slot; all of the blocks are then read out in order, the researchers noted.
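One way to picture that slot-based buffer is a small array with one slot per level of the path: once a block's final node is decided, it is dropped into the matching slot, and the whole buffer is streamed out top-to-bottom with no sorting step. This is purely an illustrative reading of the description above, not the chip's actual circuit.

```python
LEVELS = 4
slots = [None] * (LEVELS + 1)   # one slot per level of a root-to-leaf path

# Hypothetical placements: (level on the path, block) pairs, decided in
# whatever order the blocks' final locations are resolved.
placements = [(2, "blockA"), (0, "blockB"), (4, "blockC")]
for level, block in placements:
    slots[level] = block

# Read the slots out in order: the blocks emerge root-to-leaf, no sort needed.
in_order = list(slots)
```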

Fletcher told redOrbit that past versions of this technology did not “sufficiently scramble requests made outside the processor. Say you wish to read an external database. For complete privacy, you need to hide where you read and what you read.”

“Hiding what you read can be accomplished by just using standard encryption, but up until now no proposal has also been able to hide where you read,” he added. “This is a big deal. Say you are looking up an online map for directions. Prior work would just encrypt the map. But with maps you want to hide where you are, where you want to go, etc. So all that matters is that you hide where you read on the map.”

Chips featuring the system may be a decade away

Their new chip also improves efficiency in another way. Instead of writing data back every time it reads, it does so only on every fifth read; during the others, it discards the decoy data. When the time comes to write the data back out, it will therefore have an average of five more blocks to store on the last series of nodes from which it read.
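The every-fifth-read policy above can be sketched as a small counter that batches pending blocks and flushes them onto the most recently read path. The class and method names here are hypothetical, and the real chip manages this in hardware rather than in a software queue.

```python
WRITE_INTERVAL = 5  # write back only on every fifth read

class LazyWriter:
    def __init__(self):
        self.pending = []   # real blocks waiting to be written back
        self.reads = 0

    def on_read(self, block, path):
        """Record one read; flush all pending blocks every fifth call."""
        self.reads += 1
        self.pending.append(block)        # keep the real block around
        if self.reads % WRITE_INTERVAL == 0:
            # Write all pending blocks (about five) onto the path just read.
            written, self.pending = self.pending, []
            return ("write", path, written)
        return ("skip", path, [])         # this read's decoy data is discarded
```

Skipping four out of every five write-backs cuts traffic, at the cost of needing room for the accumulated blocks on the flushed path.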

In most cases, there will be enough free area on the tree to accommodate the extra blocks, but in the rare instances where this is not the case, the chip’s protocol of placing data as close to the bottom of the tree as possible will help prevent or relieve congestion at the top, the MIT team said.

Furthermore, the team is confident that their system can be added to existing chips with relative ease, and that the security features can be toggled on and off as needed. Unfortunately, it may be a while before these chips hit store shelves: Fletcher told redOrbit that he believes it will take five to 10 years before the technology is available to the general public.
