February 21, 2013
Ultra-HDTVs Get New Ultra-Efficient Coding Chip
redOrbit Staff & Wire Reports - Your Universe Online
Researchers at the Massachusetts Institute of Technology (MIT) have unveiled a new high-efficiency video coding (HEVC) chip during this week’s International Solid-State Circuits Conference in San Francisco, the MIT News Office reported on Wednesday.
“It is now possible for us to figure out ways in which different types of video data actually interact with hardware,” said Mehul Tikekar, an MIT graduate student in electrical engineering and computer science, and co-author of a recent paper about the work. “People don’t really know, ‘What is the hardware complexity of doing, say, different types of video streams?’”
The work is important because several manufacturers have recently debuted new ultrahigh-definition, or UHD, television models (also known as 4K or Quad HD), with four times the resolution of today’s HDTVs. These UHD TVs require the new HEVC standard.
Broadcom introduced the first commercial HEVC chip last month during the Consumer Electronics Show in Las Vegas, saying the processor would go into large-volume production in mid-2014.
Like older coding standards, HEVC exploits the fact that most of the pixels remain constant in successive frames of video. As a result, broadcasters can typically transmit just the moving pixels rather than transmitting entire frames, saving an enormous amount of bandwidth.
The first step in the encoding process is to calculate so-called “motion vectors,” or mathematical descriptions of the motion of objects in the frame. However, that description will not produce a perfectly faithful image on the receiving end since the orientation of a moving object and the way it is illuminated can change as it moves.
As a result, additional information must be added to correct the motion estimates that are based solely on these vectors. Finally, to conserve even more bandwidth, the motion vectors and the corrective information are run through a standard data-compression algorithm, and the results are sent to the receiver.
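The three encoding steps can be sketched in miniature, using made-up one-dimensional “frames” and zlib as a stand-in for the standard’s actual entropy coder (HEVC works on 2-D blocks with far richer prediction; the names and data here are purely illustrative):

```python
import zlib

# Step 1: find a motion vector; step 2: compute a corrective residual;
# step 3: compress both before transmission.

prev_frame = [0, 0, 5, 6, 7, 0, 0, 0]
curr_frame = [0, 0, 0, 5, 6, 8, 0, 0]  # object moved right, lighting changed

def predict(prev, shift):
    """Shift prev by `shift` pixels, padding with zeros."""
    n = len(prev)
    return [prev[i - shift] if 0 <= i - shift < n else 0 for i in range(n)]

def best_motion_vector(prev, curr, max_shift=2):
    """Pick the shift of prev that best predicts curr (smallest error)."""
    def error(shift):
        return sum(abs(c - p) for c, p in zip(curr, predict(prev, shift)))
    return min(range(-max_shift, max_shift + 1), key=error)

mv = best_motion_vector(prev_frame, curr_frame)          # step 1
prediction = predict(prev_frame, mv)
residual = [c - p for c, p in zip(curr_frame, prediction)]  # step 2

# Step 3: compress motion vector + residual (stand-in for entropy coding).
payload = zlib.compress(bytes([mv % 256] + [r % 256 for r in residual]))

# Receiver side: prediction from the motion vector plus the residual
# reconstructs the frame exactly.
decoded = [p + r for p, r in zip(prediction, residual)]
assert decoded == curr_frame
```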
The new MIT chip performs this process in reverse to decode the video, and the researchers increased its efficiency by “pipelining” the decoding process. A portion of the data is decompressed and passed on to a motion-compensation circuit; as soon as motion compensation begins, the decompression circuit takes in the next chunk of data. Once motion compensation is complete, the data passes to a circuit that applies the corrective information and, finally, to a filtering circuit that smooths out any rough edges that remain.
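The four decoding stages can be sketched with Python generators standing in for the hardware circuits (the stage names follow the article; the per-stage “work” here is a placeholder, not actual decoding logic):

```python
# Because each generator pulls the next chunk as soon as it has handed one
# downstream, all four stages operate on different chunks at once -- the
# essence of pipelining (emulated lazily here; real hardware runs the
# stages truly in parallel).

def decompress(stream):
    for chunk in stream:          # stage 1: entropy decoding
        yield ("decompressed", chunk)

def motion_compensate(stages):
    for tag, chunk in stages:     # stage 2: apply motion vectors
        yield ("motion-compensated", chunk)

def apply_correction(stages):
    for tag, chunk in stages:     # stage 3: add the corrective data
        yield ("corrected", chunk)

def deblocking_filter(stages):
    for tag, chunk in stages:     # stage 4: smooth out rough edges
        yield ("filtered", chunk)

bitstream = ["chunk0", "chunk1", "chunk2"]
pipeline = deblocking_filter(apply_correction(motion_compensate(decompress(bitstream))))
for tag, chunk in pipeline:
    print(tag, chunk)
```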
AN ESTABLISHED TECHNOLOGY LEARNS A FEW NEW TRICKS
While pipelining is fairly standard in most video chips, the MIT researchers also developed a few new tricks to further improve efficiency. For instance, the application of the corrective data is a single calculation known as matrix multiplication. Matrix multiplication involves numbers in the rows of one matrix being multiplied by numbers in the columns of another, with the results added together to produce entries in a new matrix.
“We observed that the matrix has some patterns in it,” Tikekar explained.
In the new standard, a 32-by-32 matrix, representing a 32-by-32 block of pixels, is multiplied by another 32-by-32 matrix that contains corrective information. In principle, the corrective matrix could contain 1,024 different values, but in practice, “there are only 32 unique numbers,” explained Tikekar. “So we can efficiently implement one of these [multiplications] and then use the same hardware to do the rest.”
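The hardware-reuse idea can be sketched as follows, using a small hypothetical 4-by-4 matrix with two unique coefficient magnitudes in place of HEVC’s 32-by-32 matrix with 32 unique values:

```python
# Because the matrix contains only a few unique coefficient magnitudes,
# each product pixel * magnitude is computed once and reused across every
# row -- the way time-shared multiplier hardware would reuse one circuit.

matrix = [
    [4,  9,  4,  9],
    [9, -4,  9, -4],
    [4, -9,  4, -9],
    [9,  4, -9, -4],
]
pixels = [1, 2, 3, 4]

def matvec_with_reuse(m, x):
    magnitudes = {abs(c) for row in m for c in row}
    # Each pixel meets each unique magnitude exactly once.
    products = {}
    multiplies = 0
    for j, xj in enumerate(x):
        for mag in magnitudes:
            products[(j, mag)] = xj * mag
            multiplies += 1
    # Every row is then just additions and subtractions of cached products.
    out = []
    for row in m:
        acc = 0
        for j, c in enumerate(row):
            p = products[(j, abs(c))]
            acc += p if c >= 0 else -p
        out.append(acc)
    return out, multiplies

out, multiplies = matvec_with_reuse(matrix, pixels)
naive = [sum(c * xj for c, xj in zip(row, pixels)) for row in matrix]
assert out == naive
print(multiplies, "multiplies instead of", len(matrix) * len(matrix[0]))  # 8 instead of 16
```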
Tikekar’s colleague Chiraag Juvekar, another MIT graduate student who took part in the project, developed a more efficient way to store video data in memory. The “naive way” would be to store the values of each row of pixels at successive memory addresses, he said. In that arrangement, the values of pixels that are next to each other in a row would also be adjacent in memory, but the values of the pixels below them would be far away.
In video decoding, however, “it is highly likely that if you need the pixel on top, you also need the pixel right below it,” Juvekar said.
“So we optimize the data into small square blocks that are stored together. When you access something from memory, you not only get the pixels on the right and left, but you also get the pixels on the top and bottom in the same request.”
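A rough sketch of the two layouts, with hypothetical image and tile sizes, shows why the square blocks help:

```python
# In row-major order, the pixel below (x, y) is a full image-width away in
# memory; with 4x4 tiles stored contiguously, vertical neighbors inside a
# tile land in the same small region (and, in hardware, the same memory
# request). Sizes below are made up for illustration.

WIDTH = 64
TILE = 4

def row_major_addr(x, y):
    return y * WIDTH + x

def tiled_addr(x, y):
    tile_index = (y // TILE) * (WIDTH // TILE) + (x // TILE)
    within = (y % TILE) * TILE + (x % TILE)
    return tile_index * TILE * TILE + within

# Distance in memory between a pixel and the pixel directly below it:
print(abs(row_major_addr(5, 10) - row_major_addr(5, 11)))  # 64 apart
print(abs(tiled_addr(5, 10) - tiled_addr(5, 11)))          # 4 apart
```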
Anantha Chandrakasan, Professor of Electrical Engineering and head of MIT’s Department of Electrical Engineering and Computer Science, led the team of researchers, which specializes in low-power devices.
The team is following up on their current work by trying to reduce the power consumption of the chip even further in order to prolong the battery life of quad-HD cell phones or tablet computers. Tikekar said that one of the design modifications that they plan to investigate is the use of several smaller decoding pipelines that work in parallel. Reducing the computational demands on each group of circuits would also reduce the chip’s operating voltage, he explained.
The researchers’ HEVC chip design was executed by the Taiwan Semiconductor Manufacturing Company, while Texas Instruments funded the chip’s development.