February 12, 2012
Experts Develop ‘Fastest Possible’ Data Transmission Method
A team of U.S. and Israeli researchers say they have developed a new type of encoding scheme that will guarantee the fastest possible delivery of data, regardless of wireless connection strength or the amount of interference present on the network.
The discovery, announced Friday in a Massachusetts Institute of Technology (MIT) press release and to be detailed in an upcoming issue of the journal IEEE Transactions on Information Theory, works by creating one large codeword for each message to be sent, with different portions of that codeword acting as mini-codewords in their own right.
"Say, for instance, that the long codeword – call it the master codeword – consists of 30,000 symbols," explained Larry Hardesty of the MIT News Office. "The first 10,000 symbols might be the ideal encoding if there´s a minimum level of noise in the channel. But if there´s more noise, the receiver might need the next 5,000 symbols as well, or the next 7,374. If there´s a lot of noise, the receiver might require almost all of the 30,000 symbols. But once it has received enough symbols to decode the underlying message, it signals the sender to stop."
In their study, MIT professor Gregory Wornell and his colleagues, Uri Erez of Israel's Tel Aviv University and Mitchell Trott of Google, say they have developed a mathematical method that guarantees the transmitted codeword is both decodable and as short as possible given the traffic or "noise" present on the channel.
"In order to decode a message, the receiver needs to know the numbers by which the codewords were multiplied," Hardesty said. "Those numbers – along with the number of fragments into which the initial message is divided and the size of the chunks of the master codeword – depend on the expected variability of the communications channel. Wornell surmises, however, that a few standard configurations will suffice for most wireless applications."
According to the MIT News Office, only the first chunk of the master codeword must be transmitted in its entirety. After that, the recipient can complete the decoding process from individual chunks alone. As a result, the size of that initial portion "is calibrated to the highest possible channel quality that can be expected for a particular application," and the complexity of the decoding process depends on the number of fragments into which the original message is divided.
A patent application for the technology, filed by the research team last September, states: "A powerful new class of methods for encoding digital data for reliable transmission over unreliable communication channels is described. With this method, the message bits are divided into multiple submessages and the bits in each layer are encoded using a standard error correction code to provide a plurality of subcodewords."
"A first linear transformation is applied to each of the subcodewords. The so-transformed subcodewords from the different submessages are then combined to form a first redundancy block to be transmitted," the patent also said. "Additional redundancy blocks are generated by repeating this process on the same message but with jointly related nonidentical sets of linear transformations. The result is a set of codewords for each message which are then used to generate a transmitted waveform in one of several different ways, depending upon the application."