May 19, 2012
Faster, More Energy Efficient Computer Components On The Way
A pair of new computer components unveiled late this week -- one which will require less energy to store and retrieve information, and one which improves power and resource efficiency by occasionally allowing errors to occur -- could one day fundamentally change the technology behind desktops, laptops, and similar devices.
The first of those two units is known as a "memristor," and according to BBC News Science and Technology Reporter Jason Palmer, its properties "make it suitable both for computing and for far faster, denser memory."
The unit can remember how much current has passed through it, even after the device containing it has been powered off, and experts told BBC News that it can be manufactured more affordably these days thanks to modern semiconductor techniques.
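That memory effect can be illustrated with the linear ion-drift model published by HP Labs researchers in 2008. The sketch below is a generic textbook model with arbitrary assumed parameter values, not a description of the UCL device:

```python
# Illustrative simulation of the linear ion-drift memristor model
# (Strukov et al., HP Labs, 2008). Parameter values are arbitrary
# assumptions chosen for demonstration, not measured device data.

R_ON, R_OFF = 100.0, 16000.0   # bounding resistances (ohms), assumed
MU, D = 1e-14, 1e-8            # ion mobility (m^2/(V*s)) and film thickness (m)
DT = 1e-6                      # simulation time step (s)

def resistance(w):
    """Device resistance as a function of doped-region width w (0..D)."""
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

def step(w, v):
    """Advance the state w by one time step under applied voltage v."""
    i = v / resistance(w)               # current through the device
    dw = MU * (R_ON / D) * i * DT       # linear drift of the doped boundary
    return min(max(w + dw, 0.0), D)     # clamp to physical bounds

w = 0.1 * D                             # initial state
for _ in range(2000):                   # "write": drive with a positive bias
    w = step(w, 1.0)
r_after_write = resistance(w)

# "Power off": with no voltage applied, no current flows, so the state
# w -- and hence the resistance -- stays put. Reading later recovers it.
for _ in range(2000):
    w = step(w, 0.0)
assert abs(resistance(w) - r_after_write) < 1e-9
print(f"stored resistance: {r_after_write:.1f} ohms")
```

The key point the model captures is that the resistance depends on the integrated history of current, and that history persists with the power removed.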
"The history-dependent nature of their electrical properties would make them able to carry out calculations, but most interest has focused on developing them for memory applications, to replace the widespread 'flash' solid-state memory of USB sticks and memory cards," Palmer wrote on Friday.
"We're reaching the limits of what we can do with flash memory in terms of increasing the storage density, and it's also relatively high power and not as fast as we would like," added Anthony Kenyon of University College London (UCL), who along with colleagues at the school and from France and Spain detail their work with memristors in the Journal of Applied Physics. "Flash memory devices switch at 10,000 nanoseconds (billionths of a second) or so, and in our device we can't measure how fast it is“¦ Our equipment only goes down to 90 nanoseconds. It's at least as fast as that and probably faster."
He also told Palmer that, while their memristor concepts may be less advanced than similar components being developed by other teams, he believes the ease of manufacturing and the low cost of materials could make them more attractive to consumer electronics manufacturers. Kenyon said that they were in preliminary talks with some "fairly major names" in the industry about making their technology commercially available.
On Thursday, researchers at Rice University announced via press release a new computer chip that they say "challenges the industry's 50-year pursuit of accuracy" -- a design which they argue "improves power and resource efficiency by allowing for occasional errors" and is "at least 15 times more efficient than today's technology."
Prototypes of the new chip were unveiled this week at the ACM International Conference on Computing Frontiers in Cagliari, Italy -- where research completed by experts at the Houston, Texas-based school, as well as Nanyang Technological University (NTU) in Singapore, Switzerland's Center for Electronics and Microtechnology (CSEM) and the University of California, Berkeley, earned best-paper honors, the university announced.
"It is exciting to see this technology in a working chip that we can measure and validate for the first time," Project Leader and Rice-NTUS Institute for Sustainable and Applied Infodynamics (ISAID) Director Krishna Palem said in a statement. "Our work since 2003 showed that significant gains were possible, and I am delighted that these working chips have met and even exceeded our expectations."
"The paper received the highest peer-review evaluation of all the Computing Frontiers submissions this year," added Paolo Faraboschi, the program co-chair of the ACM Computing Frontiers conference and an employee of Hewlett Packard Laboratories. "Research on approximate computation matches the forward-looking charter of Computing Frontiers well, and this work opens the door to interesting energy-efficiency opportunities of using inexact hardware together with traditional processing elements."
The goal of the project, according to the university's press release, is to create microchips that require a fraction of the power of modern-day microprocessors by being inexact in certain processes.
"The concept is deceptively simple: Slash power use by allowing processing components -- like hardware for adding and multiplying numbers -- to make a few mistakes," the researchers explain. "By cleverly managing the probability of errors and limiting which calculations produce errors, the designers have found they can simultaneously cut energy demands and dramatically boost performance."
Two examples of this approach include a process known as "pruning," which does away with some infrequently used areas of a microchip's digital circuits, and "confined voltage scaling," which harnesses improvements in processing speed performance to reduce the amount of power required to operate.
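One simple way to see the trade-off the researchers describe is a "lower-part-OR" style inexact adder, which replaces the carry logic on the low-order bits with a cheap OR. The sketch below is a generic illustration of approximate addition, not the pruned circuits from the Rice/NTU/CSEM paper; the bit width `k` and the random test inputs are assumptions made for demonstration:

```python
# Hedged sketch of an "inexact adder": the k least-significant bits are
# combined with a bitwise OR instead of full carry-propagating addition,
# trading a small, bounded error for simpler (cheaper) low-order logic.
import random

def inexact_add(a: int, b: int, k: int = 4) -> int:
    """Add two non-negative ints, approximating the low k bits."""
    mask = (1 << k) - 1
    high = ((a >> k) + (b >> k)) << k   # exact addition of the high bits
    low = (a & mask) | (b & mask)       # cheap OR instead of carry chains
    return high | low

# Measure the average relative error over random 16-bit operands.
random.seed(0)
errs = []
for _ in range(10000):
    a, b = random.randrange(1 << 16), random.randrange(1 << 16)
    exact = a + b
    approx = inexact_add(a, b)
    errs.append(abs(approx - exact) / max(exact, 1))
mean_err = sum(errs) / len(errs)
print(f"mean relative error with k=4: {mean_err:.4%}")
```

Because only the low-order bits are approximated, the result can never be off by more than a handful of least-significant units, which is why error-tolerant workloads such as image processing can absorb the inaccuracy.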
The Rice University researchers said that in simulated tests conducted last year, they discovered that the smaller pruned chips ran twice as fast as their traditional counterparts while needing less than half as much energy.
More recent tests demonstrated that pruned chips could consume less than a third as much energy as ordinary chips when their output "deviated from the correct value" by just one-fourth of a percent, study co-author and graduate student Avinash Lingamneni said. When size and speed gains were factored in, the researchers found the pruned chips could be up to 7.5 times more efficient than regular chips -- a figure that rose to as much as 15 times more efficient when larger deviations were allowed, he added.
"Particular types of applications can tolerate quite a bit of error," Christian Enz, project co-investigator and chief of the CSEM branch of the research, explained. "For example, the human eye has a built-in mechanism for error correction. We used inexact adders to process images and found that relative errors up to 0.54 percent were almost indiscernible, and relative errors as high as 7.5 percent still produced discernible images."
Palem said that prototype hearing aids utilizing the pruned chips are expected by 2013.