Two Separate Studies On MERS-Coronavirus Offer Better Understanding Of Newly Emerging Disease

Lawrence LeBlond for redOrbit.com – Your Universe Online

Two separate studies this week are helping scientists gain a better understanding of the dangerous Middle East respiratory syndrome (MERS) coronavirus, a disease that has sickened more than 300 people and left nearly a hundred dead since it was first reported in September 2012.

HUMAN ANTIBODIES

The first study comes from scientists at Dana-Farber Cancer Institute who have identified natural human antibodies against the virus that causes MERS, a step that could help in the development of new treatments against the newly emerging disease.

Currently there is no vaccine or antiviral treatment for MERS, which has a mortality rate of more than 40 percent.

The team, led by Wayne Marasco, MD, an infectious disease expert at Dana-Farber, found that these “neutralizing” antibodies prevented a key part of the virus from attaching to protein receptors that allow the virus to infect human cells. Their findings are published in the Proceedings of the National Academy of Sciences.

“This panel of neutralizing antibodies offers the possibility of developing human monoclonal antibody-based immunotherapy, especially for health care workers,” noted the authors.

Marasco and colleagues discovered the MERS antibodies using a “library” of some 27 billion human antibodies they had created and maintained in a freezer at the lab – this is one of the largest such libraries in the world, note the authors.

The research team pulled seven MERS-specific neutralizing antibodies from the library after screening it with samples of the virus.

MERS coronavirus “has on its surface an array of spike-shaped proteins that bind to host cells – specifically to receptor proteins called DPP4 on the surface of cells that line human airways. The neutralizing antibodies identified in the study prevented the virus’ spikes from binding to the DPP4 receptors,” wrote the authors.

The researchers selected one of the antibodies, labeled 3b11, as a lead candidate for further research. The antibody has been produced in sufficient quantities to begin testing in non-human primates and mice to determine whether it protects against the virus, according to Marasco. However, those studies have been delayed because no good animal model for MERS has been developed.

Marasco said that an antibody-based treatment for MERS would be administered by injection and may provide protection from the disease for about three weeks.

CAMEL LINK

A second study this week has found more definitive evidence linking camels to the ongoing MERS outbreak.

Camels have been one of the key suspects since a study last August found antibodies against MERS coronavirus, or a closely related virus, in the animals. Subsequent research found evidence of the virus itself in camels, and several human cases have since been reported after close contact with dromedaries.

In this latest camel study, scientists from the Center for Infection and Immunity at Columbia University’s Mailman School of Public Health, King Saud University, and EcoHealth Alliance extracted a complete, live, infectious sample of MERS coronavirus from two camels in Saudi Arabia. A lab analysis of the sample matched MERS coronavirus found in humans, indicating that the virus in camels is capable of infecting humans and could very well be the source of the outbreak.

The team, publishing a paper in the journal mBio, examined nasal samples collected during a countrywide survey of camels and selected samples from the two animals with the highest viral loads. They obtained complete genomic sequences from both animals, as well as virus from nasal samples of several other camels.

The consensus genomic sequences were consistent with viruses found in human cases; however, the researchers noted that samples from camels contained more than one virus genotype. Over the course of 48 hours of culture in primate cells, the genomic variation of viruses narrowed, mirroring the lower sequence diversity found in humans with MERS.

“The finding of infectious virus strengthens the argument that dromedary camels are reservoirs for MERS-CoV,” said first author Thomas Briese, PhD, associate director of the Center for Infection and Immunity and associate professor of Epidemiology at the Mailman School. “The narrow range of MERS viruses in humans and a very broad range in camels may explain in part why human disease is uncommon: because only a few genotypes are capable of cross-species transmission.”

“Given these new data, we are now investigating potential routes for human infection through exposure to camel milk or meat products,” said co-author Abdulaziz N. Alagaili, PhD, director of the Mammals Research Chair at King Saud University. “This report builds on work published earlier this year when our team found that three-quarters of camels in Saudi Arabia carry MERS virus.”

MERS IN EGYPT

The first case of MERS, a dangerous SARS-like virus, has been reported in Egypt, the state news agency MENA said on Saturday, as reported by the Associated Press.

The case occurred in a 27-year-old civil engineer who was diagnosed April 26 after returning from Saudi Arabia, where MERS is prevalent. MENA said the man was quarantined upon his arrival at Cairo airport on Friday and transported to a local hospital. He was reportedly being treated for pneumonia and was in stable condition at last report.

Egypt becomes the 15th country to report a MERS case and the latest one outside the Arabian Peninsula. Concerns have been rising in Egypt because of the recent evidence of the virus in camels there.

According to Saudi Arabia’s Ministry of Health, five more people in the kingdom have died from MERS as of Sunday, April 27. The ministry said that 92 people have died and 313 have contracted the virus since September 2012.

The World Health Organization (WHO) released its own numbers on Saturday, April 26, reporting 261 cases and 93 deaths (the agency does not always immediately confirm reports from government and media sources).

Researchers Develop 3D Printer Capable Of Making Fabrics, Stuffed Animals

redOrbit Staff & Wire Reports – Your Universe Online

Modern 3D printing technology has been used to create a variety of different objects, ranging from artificial bones to batteries and beyond, but researchers from Carnegie Mellon University and Disney Research Pittsburgh have developed a new printer capable of creating something far more soft and cuddly.

Using a device described as a cross between a 3D printer and a sewing machine, the researchers report that they can turn wool and wool-blended yarns into 3D fabric objects made from a type of loose felt. The results, CMU Human-Computer Interaction Institute professor Scott Hudson said in a statement Monday, were similar in nature to materials that were knitted by hand.

“I really see this material being used for things that are held close. We’re really extending the set of materials available for 3D printing and opening up new possibilities for what can be manufactured,” he explained.

Those possibilities could include apparel, accessories such as scarves and hats, and possibly even teddy bears and other types of stuffed animals. In addition, it could ultimately be used to create parts for so-called “soft robots,” or machines specifically designed for human contact.

Hudson, who discussed the device Monday during the 2014 CHI Conference on Human Factors in Computing Systems in Toronto, said that it functions much like other 3D printers in that it builds objects from computerized designs.

“This printer allows the substantial advantages of additive manufacturing techniques (including rapid turn-around prototyping of physical objects and support for high levels of customization and configuration) to be employed with a new class of material,” according to Disney Research.

By creating a material that is “a form of loose felt formed when fibers from an incoming feed of yarn are entangled with the fibers in layers below it,” the company added that their printer “extends 3D printing from typically hard and precise forms into a new set of forms which embody a different aesthetic of soft and imprecise objects, and provides a new capability for researchers to explore the use of this class of materials in interactive devices.”

The so-called felting printer can be used to quickly develop prototypes of objects and to customize products, and uses a process similar to the Fused Deposition Modeling (FDM) technique common in low-end 3D printers, the researchers explained. In FDM printers, melted plastic is extruded in a thin line into one layer, and then additional layers are added, adhering to each other as the plastic cools in order to achieve the desired shape.

However, in the felting printer, the printer head feeds out yarn instead of melted plastic. The printer head has a barbed felting needle attached to it, and that needle repeatedly pierces the yarn and drags the individual fibers down into the yarn and the layers below. This intertwines the fibers and causes the layers to bond together.

Hudson said that the printer does not have the same dimensional accuracy as conventional 3D printers, since the yarn is far thicker than the layers of plastic deposited during FDM printing. Likewise, the felt is not as strong as ordinary fabric, so a layer of nylon mesh had to be incorporated into the printing process wherever soft objects attach to harder ones, to keep the material from tearing at the attachment point.

During his presentation, Hudson reportedly demonstrated techniques for bridging hard and soft materials, as well as for altering how stiff the soft objects are and how electronic components could be added to the process. Currently, the printer is only capable of producing fabric objects, but Hudson said he believed that it could be possible to develop a device capable of producing both fabric and plastic elements in a single fabrication.

Smartphone Sensors Can Leave Trackable Fingerprints

Jonathan Damery, ECE ILLINOIS

Fingerprints — those swirling residues left on keyboards and doorknobs — are mostly invisible. They can affirm your onetime presence, but they cannot be used to track your day-to-day activities.

They cannot tell someone in real time that after exercising at the gym, you went to the office by bus and played video games during lunch. But what if our hand-held electronics are leaving real-time fingerprints instead? Fingerprints so intrinsic to the device that, like our own, they cannot be removed?

Research by Associate Professor Romit Roy Choudhury and graduate students Sanorita Dey and Nirupam Roy has demonstrated that these fingerprints exist within smartphone sensors, mainly because of imperfections during the hardware manufacturing process.

In some ways, it’s like cutting out sugar cookies. Even using the same dinosaur-shaped cutter, each cookie will come out slightly different: a blemish here, a pock there. For smartphone sensors, these imperfections simply occur at the micro- or nanoscale.

Their findings were published at the Network and Distributed System Security Symposium (NDSS), a major conference on wireless and web security, held last February in San Diego. The research also won the best poster award at the HotMobile international workshop in 2013.

Other collaborators on this project are Professors Srihari Nelakuditi and Wenyuan Xu at the University of South Carolina (USC). Nirupam and Sanorita completed their master’s degrees at USC and jointly won the MS Thesis Dissertation Award.

The researchers focused specifically on the accelerometer, a sensor that tracks three-dimensional movements of the phone — essential for countless applications, including pedometers, sleep monitoring, mobile gaming — but their findings suggest that other sensors could leave equally unique fingerprints.

“When you manufacture the hardware, the factory cannot produce the identical thing in millions,” Roy said. “So these imperfections create fingerprints.”

Of course, these fingerprints are only visible when accelerometer data signals are analyzed in detail. Most applications do not require this level of analysis, yet the data shared with all applications — your favorite game, your pedometer — bear the mark. Should someone want to perform this analysis, they could do so.

The researchers tested more than 100 devices over the course of nine months: 80 standalone accelerometer chips used in popular smartphones, 25 Android phones, and 2 tablets.

The accelerometers came from a variety of manufacturers, to ensure that the fingerprints weren’t simply defects resulting from a particular production line. With 96 percent accuracy, the researchers could discriminate one sensor from another.

“We do not need to know any other information about the phone — no phone number or SIM card number,” Dey said. “Just by looking at the data, we can tell you which device it’s coming from. It’s almost like another identifier.”

In the real world, this suggests that even when a smartphone application doesn’t have access to location information (via the familiar “this application would like to use your current location” prompt), there are other means of identifying the user’s activities. The data could be obtained by an innocuous-seeming game or chat service simply recording and sending accelerometer readings. There are no regulations mandating consent.

To collect the fingerprints, the researchers — like any would-be attacker — needed to sample the accelerometer data. Each accelerometer was vibrated using a single vibration motor — like those that buzz when a text message is received — for two-second intervals. During those periods, the accelerometer detected the movement and the readings were fed to a supervised-learning tool, which decoded the fingerprint.
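
In machine-learning terms, this is a supervised classification problem: record each device’s response to the same stimulus, reduce each recording to a handful of statistical features, and train a classifier to tell the devices apart. The sketch below illustrates that general pipeline under stated assumptions (NumPy, SciPy and scikit-learn available; a hypothetical list of labeled two-second traces); it is not the researchers’ actual code.

```python
# Illustrative sketch only: a possible pipeline for telling accelerometer chips
# apart from their vibration responses. The feature set, classifier, and data
# layout are assumptions for this example, not the authors' actual code.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def trace_features(trace):
    """Summarize one two-second, 3-axis recording (array of shape [n_samples, 3])."""
    feats = []
    for axis in trace.T:
        feats += [axis.mean(), axis.std(), skew(axis), kurtosis(axis)]
    return feats


def train_fingerprint_classifier(recordings):
    """`recordings` is a hypothetical list of (trace, device_id) pairs collected
    while each phone's vibration motor buzzed for two seconds."""
    X = np.array([trace_features(trace) for trace, _ in recordings])
    y = np.array([device_id for _, device_id in recordings])
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    # Accuracy on held-out recordings: how reliably the devices can be told apart.
    return clf, clf.score(X_test, y_test)
```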

“Even if you erase the app in the phone, or even erase and reinstall all software,” Roy said, “the fingerprint still stays inherent. That’s a serious threat.”

At this point, however, there is no absolute solution. Smartphone cases made of rubber or plastic do little to mask the signal. Deliberately injecting white noise in the sensor data can smudge the fingerprint, but such noise can also affect the operation of the application, making your pedometer inaccurate and functionally useless.

If accelerometer data were processed directly on the phone or tablet, rather than on the cloud, the fingerprint could be scrubbed before sending information to the application. That is, the pedometer application might only receive basic information like “300 steps taken,” rather than receiving the raw accelerometer data. This, however, imposes a load on the phone’s processor and, more importantly, reduces the phone’s battery life.
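
To picture the on-device approach, consider a pedometer that shares only its final count. The sketch below is a minimal example, with an assumed sampling rate and threshold, that counts upward threshold crossings of the acceleration magnitude and returns a single number; the raw samples, and the fingerprint they carry, never have to leave the device.

```python
# Illustrative sketch only: on-device aggregation that shares a derived count
# ("300 steps taken") rather than raw accelerometer samples. The sampling rate
# and threshold below are assumptions chosen for the example.
import numpy as np


def step_count(trace, fs=50.0, threshold=1.2):
    """Count steps in a 3-axis accelerometer trace (in g) sampled at fs Hz.

    A step is counted each time the acceleration magnitude crosses `threshold`
    upward, with a 0.3 s refractory period so one stride is not counted twice.
    """
    magnitude = np.linalg.norm(trace, axis=1)   # combine x, y and z
    refractory = int(0.3 * fs)
    steps, last_step = 0, -refractory
    for i in range(1, len(magnitude)):
        crossed_up = magnitude[i - 1] < threshold <= magnitude[i]
        if crossed_up and i - last_step >= refractory:
            steps += 1
            last_step = i
    return steps   # only this number would ever leave the device
```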

The research also suggests that other sensors in the phone — gyroscopes, magnetometers, microphones, cameras, and so forth — could possess the same types of idiosyncratic differences. So even if, at a large scale, the accuracy of accelerometer fingerprints diminishes, when combined with prints from other sensors, an attack could be even more precise.

“Imagine that your right hand fingerprint, by some chance, matches with mine,” Roy Choudhury said. “But your left-hand fingerprint also matching with mine is extremely unlikely. So even if accelerometers don’t have unique fingerprints across millions of devices, we believe that by combining with other sensors such as the gyroscope, it might still be possible to track a particular device over time and space.”

For smartphone users and e-book readers, smartwatch wearers and tablet devotees, perhaps the most critical take-home message, in the short run anyway, is the importance of vigilance.

“Don’t share your accelerometer data without thinking about how legitimate or how secure that application is,” Dey said. “Even if it’s using only the sensor data, still it can attack you in some way. The consumer should be aware.”

Decrease In Large Wildlife Populations Drives Increase In Zoonotic Diseases

Smithsonian

Populations of large wildlife are declining around the world, while zoonotic diseases (those transmitted from animals to humans) are on the rise. A team of Smithsonian scientists and colleagues have discovered a possible link between the two. They found that in East Africa, the loss of large wildlife directly correlated with a significant increase in rodents, which often carry disease-causing bacteria dangerous to humans. The team’s research is published in the Proceedings of the National Academy of Sciences, April 28.

“Our study shows us that ecosystem health, wildlife health and human health are all related,” said Kristofer Helgen, curator of mammals at the Smithsonian’s National Museum of Natural History and co-author of the research.

Large animals, such as elephants, giraffes, antelope and zebras, have a profound influence on their ecosystems by feeding on vast amounts of vegetation and compacting and disturbing soil. As populations of these large species decline, the ecosystems they once dominated change in many ways. The team’s main question was whether the loss of large wildlife influences the risk of people contracting diseases spread by rodents—a pressing question, as more than 60 percent of infectious human diseases are zoonotic.

“Understanding the linkages between biodiversity loss and zoonotic disease is important for both public health and nature conservation programs,” said Hillary Young, former Smithsonian post-doctoral fellow and current assistant professor at the University of California, Santa Barbara. “While this correlation has been the topic of much scientific debate, ours is one of the only studies to offer clear experimental evidence.” Young is the lead author of the research paper.

Using 24 acres of savanna in central Kenya that had been fenced off for 15 years to keep large animals out, the scientists examined rodent populations inside and outside the area for three years. They also tracked the presence of Bartonella infections in the rodents and their fleas. Bartonella, a group of bacteria found around the world, can cause bartonellosis in humans—an infectious disease that can lead to joint swelling, liver damage, memory loss and other symptoms.

The team regularly trapped rodents in the area, represented by several species of mice, rats and gerbils. Each rodent was identified to species, sexed, weighed and marked. A blood sample and fleas, if they were present, were collected from each rodent for testing before it was released where it was captured.

The team found that rodent abundance, and consequently flea abundance, doubled inside the area that excluded large wildlife. Freed from competing with large animals for food, the rodent population grew twofold, and the number of rodents and fleas infected with Bartonella doubled along with it.

The removal of large wildlife from the ecosystem could be directly linked to the increase in rodents and the rodent-borne disease, thus increasing risk to humans. These results suggest that a partial solution to problems of rodent-borne disease could come in the form of wildlife conservation.

“Africa’s large wildlife faces many threats—elephants, rhinos and other large mammals continue to decline in the face of growing human populations, expanding agriculture and the impacts of poaching and wildlife trade,” said Helgen. “While we know that conservation is good for wildlife and for economies reliant on tourism, our study shows a less-intuitive dimension of conservation that could greatly benefit the people living alongside wildlife.”

This study is the first of several more to come. The team plans to expand its research to a wider suite of infectious diseases to see which might respond similarly and which do not. They will also undertake further studies not only in carefully controlled experimental sites but in the “real world” where humans have already altered the landscape and eradicated much of the large wildlife.

The team’s research has implications well beyond Africa. “While rodent-borne diseases are a major issue in Africa, they are everywhere—Europe, Asia, North and South America,” Young said. “What we find here may very well be applicable in other parts of the world.”

NASA Honors William Shatner With Distinguished Public Service Medal

[ Watch the Video: William Shatner Hosts Curiosity’s ‘Grand Entrance’ to Mars ]

NASA

After nearly 50 years of warping across galaxies and saving the universe from a variety of alien threats and celestial disasters, Star Trek’s William Shatner finally went where no other member of Starfleet has gone before. This weekend, the acclaimed actor and director was honored with NASA’s Distinguished Public Service Medal, the highest award the agency bestows on non-government personnel.

The honor was presented to Shatner Saturday evening in Los Angeles at his annual Hollywood Charity Horse Show, where he raises money for a variety of children’s causes. The citation for the medal reads, “For outstanding generosity and dedication to inspiring new generations of explorers around the world, and for unwavering support for NASA and its missions of discovery.”

“William Shatner has been so generous with his time and energy in encouraging students to study science and math, and for inspiring generations of explorers, including many of the astronauts and engineers who are a part of NASA today, ” said David Weaver, NASA’s associate administrator for the Office of Communications at NASA Headquarters in Washington.  “He’s most deserving of this prestigious award.”

A life-long advocate of science and space exploration, Shatner gained worldwide fame and became a cultural icon for his portrayal of Captain James Tiberius Kirk, commander of the starship USS Enterprise in NBC’s science fiction television series “Star Trek” from 1966 to 1969. It was a role he would reprise in an animated version of the series in 1973, seven major films from 1979 to 1994, and more recent “Star Trek” video games.

Shatner’s relationship with NASA dates back to the original series, with references to the space agency and its programs incorporated into storylines throughout the television and film franchises. In 1976, when NASA was preparing to introduce a reusable spacecraft as the successor to the Apollo program, a new space shuttle prototype, originally to be named Constitution, was dubbed Enterprise in honor of the Star Trek universe and the work of Shatner and his series co-stars.

More recently, Shatner donated his time and vocal talent to host the NASA documentary celebrating the 30th anniversary of space shuttle missions. To honor the final flight of shuttle Discovery in 2011, he agreed to recreate his famous Star Trek television introduction in one of the last wake-up calls for the astronauts of the STS-133 mission.

In 2012, he hosted a video presentation previewing the dramatic mission of the Mars rover Curiosity and voiced his support for NASA spinoff technologies that come as a result of investments in science, technology and exploration.

Other past recipients of the NASA Distinguished Public Service Medal include astrophysicist Neil deGrasse Tyson, former NASA Jet Propulsion Laboratory director and Voyager project scientist Edward Stone, theoretical physicist and astronomer Lyman Spitzer, and science fiction writer Robert Heinlein. The award is presented to those who “… have personally made a contribution representing substantial progress to the NASA mission. The contribution must be so extraordinary that other forms of recognition would be inadequate.”

Besides his acting and directing talents, Shatner is a prolific author, having written or co-written nearly 50 books, and is an accomplished horse rider and breeder.

Costly Kidney Stone Treatment Complications Are Quite Common

Brett Smith for redOrbit.com – Your Universe Online

While surgical procedures to treat kidney stones are fairly common these days, about one in seven results in complications that require additional, potentially expensive care, according to a new study in the journal Surgery.

The study revealed that the average cost of these complications, in the form of emergency care, is about $30,000.

“Our findings provide a good starting point to understand why these complications are happening and how they can be prevented, because the costs to patients who suffer complications and to the health care system are substantial,” said study author Dr. Charles D. Scales, assistant professor of surgery at Duke University, in a recent statement.

Common therapies for kidney stones include blasting the stone with shock waves from outside the body, threading a scope through the urethra to remove stones, and extracting stones through a small incision made in the lower back. The total annual cost of these procedures in the United States is estimated at around $10 billion, according to the study team.

In the study, scientists from Duke, the RAND Corporation and UCLA reviewed the outcomes of more than 93,000 privately insured patients who underwent treatment for kidney stones. To determine whether follow-up care was needed after the initial procedure, the researchers looked at emergency room trips and hospital admissions within 30 days of the initial kidney stone treatment, a longer window than previous studies had examined.

The study team found that patients who had their kidney stone operation at hospitals that perform high volumes of the procedure were significantly less likely to have complications.

When complications did occur, they were least frequent after shock wave treatment, affecting 12 percent of patients. Ureteroscopy, the next most common procedure, led to more follow-up visits, affecting 15 percent of patients. Complications after shock wave therapy, however, carried higher costs, at more than $32,000 per emergency visit, while charges for complications of incision-based surgery were highest, averaging more than $47,000 when a complication-related visit took place.

Scales said additional study is needed to determine why these procedures produce such different complication rates and costs, but noted that patients may not be prepared for complications following minimally invasive procedures.

“From the patient perspective, an unplanned emergency department visit or hospital admission after a low-risk ambulatory procedure is a significant event,” Scales said. “Kidney stones are excruciatingly painful and primarily affect people who are of working age. These patients face not only the cost of treatment, but also the financial difficulties from time off work due to pain and treatment.”

With the costs related to medical procedures currently under more scrutiny than ever, these findings are particularly relevant to public policy decisions, Scales said.

“Reducing unplanned emergency visits and hospitalizations associated with kidney stone treatments could result in significant cost savings if the causes can be identified and addressed,” Scales said.

He added that his future research will probably include determining why problems occur and how they can be prevented.

New Research Analyzes Trampoline Fractures On Children

Indiana University
Trampoline accidents sent an estimated 288,876 people, most of them children, to hospital emergency departments with broken bones from 2002 to 2011, at a cost of more than $400 million, according to an analysis by researchers at the Indiana University School of Medicine.
Including all injuries, not just fractures, hospital emergency rooms received more than 1 million visits from people injured in trampoline accidents during those 10 years, boosting the emergency room bills to just over $1 billion, according to the study.
The research, published online in the Journal of Pediatric Orthopedics, is the first to analyze trampoline fracture patterns in a large population drawn from a national database, said the study’s lead author, Randall T. Loder, M.D., chair of the IU School of Medicine Department of Orthopaedic Surgery and a surgeon at Riley Hospital for Children at IU Health.
“There have not been any large-scale studies of these injuries,” Dr. Loder said. “We wanted to document the patterns of injury. This gives us an idea of the magnitude of the problem across the country.”
Dr. Loder and his colleagues retrieved data for all trampoline-related injuries for the decade beginning in 2002 from the National Electronic Injury Surveillance System, which collects data from a sample of 100 hospitals across the country. Using statistical techniques, they estimated there were just over 1 million emergency department visits, with 288,876 of them involving fractures.
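Surveillance systems of this kind attach a statistical weight to each sampled case, indicating roughly how many injuries nationwide that case represents; summing the weights over the cases of interest yields figures like the 288,876 fractures cited above. The sketch below illustrates that calculation in general terms, with hypothetical field names, and is not the authors' analysis code.
```python
# Illustrative sketch only: how national figures are typically produced from a
# weighted hospital sample such as NEISS. Field names here are hypothetical,
# and this is not the authors' analysis code.
def national_estimate(cases):
    """Sum case weights for trampoline-related fractures.

    Each element of `cases` is a dict like
        {"product": "trampoline", "diagnosis": "fracture", "weight": 17.3},
    where `weight` says how many injuries nationwide that sampled case represents.
    """
    return sum(case["weight"]
               for case in cases
               if case["product"] == "trampoline"
               and case["diagnosis"] == "fracture")
```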
About 60 percent of the fractures were upper-extremity injuries, notably fingers, hands, forearms and elbows. Lower-extremity fractures most commonly were breaks in the lower leg — the tibia and fibula — and ankles. Just over 4 percent involved fractures to the axial skeleton, including the spine, head, and ribs and sternum. An estimated 2,807 spinal fractures were reported during the period studied.
“Fortunately, there were fewer spine injuries than might have been expected, but those can be catastrophic,” said Meagan Sabatino, clinical research coordinator for pediatric orthopedic surgery and a study co-author.
While the average age for most of the injuries was about 9 years old, the average age for axial skeleton injuries was substantially higher at 16.6 years old.
“They’re probably jumping higher, with more force,” Dr. Loder said.
“And believe me, teenagers are risk takers. Younger kids may not understand potential outcomes of their actions, but they’re not so much risk takers. Teenagers, they’ll just push the limit,” he said.
Year by year, the researchers reported that emergency department visits rose steadily from just under 40,000 in 1991 to a peak of about 110,000 in 2004. Since then the numbers have fallen, to just over 80,000 in 2011.
“The number of injuries has declined, but not fast enough,” Dr. Loder said.
Because the data are collected only from hospitals, both the numbers of injuries and costs are likely significantly underestimated because some patients likely went to urgent care centers or family physicians for treatment. Moreover, the data do not include costs for non-emergency-room care, from surgery to subsequent physical therapy or other treatments for more serious injuries.
Nearly all of the fractures — 95 percent — occurred at the patient’s home. Noting that both the American Academy of Pediatrics and the American Academy of Orthopedic Surgeons strongly advise against home trampoline use, the researchers endorsed more education and better prevention strategies directed to homeowners.
In an interview, Dr. Loder went further, saying he would like to see home trampolines banned.
“I think trampolines should not be allowed in backyards. It’s that simple,” he said. “It’s a significant public health problem.”
Also contributing to the study was William Schultz, a medical student at the IU School of Medicine.
The IU Department of Orthopaedic Surgery provided support for the research.

Chronic Stress In Marriage Can Lead To Depression

April Flowers for redOrbit.com – Your Universe Online

Marriage is hard work and can often lead to stress. A new study from the University of Wisconsin-Madison reveals that such marital stress could make people more susceptible to depression.

According to the research team, people who experience chronic stress in their marriage are less able to savor positive experiences, and are more likely to report other depressive symptoms. The findings, published in a recent issue of the Journal of Psychophysiology, have important implications for researchers by helping them better understand what makes some people vulnerable to depression, and develop better tools to combat the issue.

The team was led by Dr. Richard Davidson, UW-Madison William James and Vilas Professor of Psychology and Psychiatry. Davidson is also the founder of the Center for Investigating Healthy Minds at the UW’s Waisman Center.

“This is not an obvious consequence, if you will, of marital stress, but it’s one I think is extraordinarily important because of the cascade of changes that may be associated,” Davidson said in a recent statement. “This is the signature of an emotional style that reveals vulnerability to depression.”

Many prior studies have shown that married people are typically happier and healthier than their single counterparts. Other studies have shown that when marriage goes wrong, it can become one of the most significant long-term social stressors. This is why the UW-Madison research team thought that chronic marital stress might work as a model for how other common daily stressors might lead to depression and similar conditions.

“How is it that a stressor gets under your skin and how does that make some more vulnerable to maladaptive responses?” says UW-Madison graduate student Regina Lapate.

The research team recruited married adult participants for the longitudinal study, which was part of the National Institute on Aging-funded Midlife in the United States (MIDUS). The study subjects were asked to complete questionnaires, rating their stress on a six-point scale. The questions included such items as how often they felt let down by their partner, or how frequently they were criticized by their partner. The participants were also evaluated for depression.

The researchers repeated the questionnaire and depression assessment approximately nine years later.

As a means of measuring their resilience – which is how fast a person recovers from negative experiences – the participants were invited to the laboratory in year 11 to undergo emotional response testing.

The researchers measured the electrical activity of the corrugator supercilii, or frowning muscle, while the participants were shown 90 images – a mix of negative, neutral and positive pictures – to assess the intensity and duration of their responses.

The frowning muscle has a basal level of tension at all times. During a positive emotional response, the muscle becomes more relaxed, while the tension is increased during a negative emotional response.

Prior studies have found that measuring how activated or relaxed the frown muscle becomes, and how long it takes to return to the basal level of tension, is a reliable method for assessing emotional response and depression.

“It’s a nice way to get at what people are experiencing without asking people for their emotional response: ‘How are you feeling?'” Lapate says.

According to previous research, people with depression have a fleeting response after positive emotional triggers. How long it takes for either the activation or relaxation response to subside is the focus of the current study.

“If you measure at just one time point, you are losing valuable information,” says Lapate.

The most significant time measurement, according to the team, was the five to eight seconds following a positive image exposure.
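
To make the timing idea concrete, the sketch below shows one simple way such a measure could be computed: take a pre-stimulus baseline of the rectified corrugator signal, then ask how far below baseline (that is, how relaxed) the muscle remains in the five-to-eight-second window after a positive image appears. The sampling rate, baseline rule and window here are assumptions for illustration, not the study’s exact procedure.

```python
# Illustrative sketch only: one simple way to quantify how long a corrugator
# (frown-muscle) response persists after a positive image. The sampling rate,
# baseline rule, and 5-8 s window are assumptions, not the study's exact method.
import numpy as np


def late_window_relaxation(emg, fs=100.0, baseline_s=2.0, window=(5.0, 8.0)):
    """Mean corrugator relaxation (baseline minus signal) 5-8 s after the image appears.

    `emg` is a rectified EMG trace sampled at fs Hz that starts `baseline_s`
    seconds before the image appears. A positive return value means the muscle
    is still more relaxed than baseline in the late window, i.e. the positive
    emotional response is persisting.
    """
    baseline = emg[: int(baseline_s * fs)].mean()
    start = int((baseline_s + window[0]) * fs)
    stop = int((baseline_s + window[1]) * fs)
    return float(baseline - np.mean(emg[start:stop]))
```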

Participants who had reported higher levels of marital stress showed shorter-lived responses to the positive images than those who had reported more satisfaction in their marriages. For negative responses, however, there was no significant difference.

Davidson wants to focus his future research on finding ways to help people overcome this weakened ability to enjoy positive experiences, enabling them to develop more resilience to stress.

“To paraphrase the bumper sticker: ‘Stress happens,'” says Davidson. “There is no such thing as leading a life completely buffered from the slings and arrows of everyday life.”

Davidson hopes his results will lead to the discovery of better tools to stop this kind of stress from occurring in the first place.

“How can we use simple interventions to actually change this response?” he asks. “What can we do to learn to cultivate a more resilient emotional style?”

Eating Less Red Meat, Reducing Food Waste Would Reduce Agricultural CO2 Emissions

redOrbit Staff & Wire Reports – Your Universe Online
Cutting global red meat consumption and reducing food waste could drastically lower the annual carbon emissions of the worldwide agriculture industry, according to a new report released Friday by Climate Focus and California Environmental Associates.
The report, entitled Strategies for Mitigating Climate Change in Agriculture, entailed the review and synthesis of a vast array of different literature on agriculture and climate change (including some unpublished data). The result of that analysis was the development of 12 key strategies intended to eliminate agriculture’s climate footprint, the organizations behind the study said in a statement.
According to the report’s findings, changes to practices in Brazil, China, the European Union and the US could have the greatest impact on global emission levels. The authors stress that the role consumption plays in producing food-related emissions is often overlooked, and that changing diets and cutting back on food waste in key nations could eliminate over three gigatons of CO2 emissions each year.
“By reducing the climate impact of the food we eat, we can improve our health and the health of the planet,” explained study co-author and Climate Focus Director Dr. Charlotte Streck. “By making the way we produce food more efficient, farmers can reap the benefits of increased production while decreasing the environmental impacts of farming.”
“The energy and transport sectors have seen a significant growth in innovation needed to ensure the long term sustainability of the sectors. It is time that agriculture followed,” she added. “There are so many ways in which policymakers can help farmers boost productivity while mitigating climate change. We need to dispel the notion, once and for all, that productivity and sustainability can’t work hand in hand.”
One of the report’s key findings is that 70 percent of the direct greenhouse gas emissions linked to the agriculture industry originate from cows, sheep and other grazing livestock. The bulk of these emissions would be eliminated if there was less demand for beef products, particularly in two countries: the US, which is currently the largest consumer of red meat on Earth, and China, where demand for such meats is expected to increase rapidly.
Americans are already beginning to eat less meat, with per capita consumption of red meat dropping from a peak of 88.8 pounds in 1976 to just 58.7 pounds in 2009, though consumption remains high compared with other nations, the authors said. In China, by contrast, beef consumption is expected to increase by 116 percent by 2050 – bad news, considering beef livestock carbon emissions are six times those of poultry on a per-unit basis.
“Because China already has a climate-friendly diet and hasn’t yet embraced beef, it’s still possible to discourage the consumption of more beef without changing the country’s traditional beliefs and culture,” said co-author Amy Dickie, Director of Philanthropic Services at California Environmental Associates. “Steering the Chinese diet in a more climate-friendly direction would yield enormous benefits for the country’s health and food security – as well as the global climate.”
The US and China are also two of the countries that could help reduce agriculture’s climate footprint by limiting food waste, along with Sub-Saharan Africa and South and Southeast Asia. Between 30 and 40 percent of the global food supply is lost along the supply chain from farmer to consumer, waste the study attributes to inefficient production, storage, distribution and consumption practices.
One measure would be to resolve the confusion between “sell-by” and “best-by” dates and other food labels that often causes American consumers to dispose of perfectly edible food. Consumers in the US and European Union should also be encouraged to stop discarding food based on its shape or color, American and Chinese restaurants should cut back on portion sizes, and Southeast Asia and Sub-Saharan Africa would benefit from improved food cooling and storage practices that could reduce losses from spoilage.

Cancer Researchers Print Living Tumors Via 3D Printer

Brett Smith for redOrbit.com – Your Universe Online

For years, cancer-research scientists have been using two-dimensional petri dish models to study and test potential treatments. But now, a team of scientists led by Drexel University’s Wei Sun has developed a method for 3-D printing living tumors – a development that could revolutionize cancer research.

According to a report on the development recently published in the journal Biofabrication, the team was able to print viable tumors using a mixture of cervical cancer cells and a hydrogel that resembles a common ointment.

“This is the first time to report that one can build a 3D in vitro tumor model through 3D Printing technology,” said Sun, the director of Drexel’s research center at the Shanghai Advanced Research Institute, in a recent statement. “This may lead to a new paradigm for cancer research and for individual cancer therapies. We have developed a technological platform and would like to work with biologists and pathologists to encourage them to use the developed platform for 3D biology and disease studies.”

Because tumors within the body have a distinct surface area, form and cellular structure, cell culture samples grown in a lab come with inherent limitations, meaning results from tests using these specimens can differ from the response of an actual tumor to a treatment. Until now, however, such cell cultures were researchers’ best option.

“Two-dimensional cell culture models are traditionally used for biology study and drug screening,” Sun said. “However, two-dimensional culture models can not represent true 3D physiological tissues so it lacks the microenvironment characteristics of natural 3D tissues in vivo. This inherent inadequacy leads to shortcomings in cancer research and anti-tumor drug development. On the other hand, 3D tumor models can represent true tumor 3D pathological organizations and will lead to a new paradigm for cancer study.”

[ Watch the Video: Researchers Create Live Cancer Tumors With A 3D Printer ]

In the study, the researchers took several main variables into account: width of nozzle, pace and pressure of extrusion, design and dimensions of deposition, as well as viscosity and temperature of substrates. The team was able to produce cancer cells with a 90 percent survival rate that grew into spheroid-shaped tumors in about eight days.

“The keys to keeping the cells alive were controlling the temperature of the nozzle and using a hearty strain of cancer cells,” Sun said. “We chose the Hela cell, which is a robust form of cervical cancer that has been used in research for many years. Because of this, we had a good idea as to how it would behave under certain conditions. This allowed us to control the variables of the extrusion process until we were able to successfully create a model.”

The researchers compared their 3D-printed model against a two-dimensional culture sample using a standard anti-cancer drug. The printed tumors exhibited more resistance to the treatment than the same cancer cells grown in a petri dish – an illustration of the gap that can exist between laboratory test outcomes and the actual success rates of cancer treatments.

The researchers said they plan to work on printing tumors made from multiple different cells – a characteristic often present in those extracted from cancer patients. They said they also want to find ways to connect the 3-D models to tissues and blood vessels that they have also printed, which would help to replicate how tumors grow in the body.

“We will try to understand the cell-cell and cell-substrate communication and immune responses for the printed tumor-like models,” Sun said. “Our goal is to take this tumor-like model and make it into a more of an in vivo simulation. And to apply it to study the development, invasion and metastasis of cancer, to test the efficacy and safety of new cancer drugs, as well as the specific therapy for individual cancer patient”

Image 2 (below): After eight days, the printed mixture of cervical cancer cells and hydrogel grows into living spheroid tumors – shown here in fluorescent dye. Credit: Wei Sun, Drexel University

SpaceX Could Save The Government Money By Launching Satellites

Lawrence LeBlond for redOrbit.com – Your Universe Online

SpaceX has proven to NASA that it is a viable contender for launching scientific missions to the International Space Station, and now it wants to prove it can also send payloads into space for the US government – a job it says would save the country $1 billion annually.

However, despite the company’s record as a capable launch provider, a longstanding “monopoly” over national security launches is preventing SpaceX from getting that chance. CEO Elon Musk has now announced that his company is filing an official protest over what it calls United Launch Alliance’s (ULA) unfair control of national security launches, according to a report from The Verge.

More than anything else, Musk said he just wants the current deals reexamined.

“Let’s shine some sunlight on this,” he said during a conference call, picked up by The Verge. “As I’ve said, sunlight is the best disinfectant. If everything’s fine, then I guess that’s great. But that seems unlikely to me.”

Musk said the US Air Force would do well to cancel its current 36-core contract that the ULA has recently bid on, and wait until SpaceX receives its formal certification. Once that goal is met, Musk said that a legitimate competition can ensue to see which of the companies is truly most deserving of the Air Force deal.

It seems like a fair scenario for Musk and his SpaceX team, but it is likely one that Boeing and Lockheed will not welcome.

When Boeing and Lockheed joined forces in 2006 to begin government launch operations under the ULA name, it ultimately led to the company pulling in $3.5 billion in annual funding from the government, according to a handout distributed at a SpaceX media event today.

Musk is confident that his company can do a much better job at a much lower cost to the government.

“What we feel is that this is not right. That the national security launches should be put up for competition,” Musk said, as cited by The Verge, adding that since SpaceX has made it clear that it can launch NASA satellites, there would be no logical reason to think it couldn’t handle government satellites as well.

With that $1 billion in savings, the government could put that money to far better uses, such as funding an entire year’s worth of operations for a dozen F-16 squadrons, Musk explained.

According to TechCrunch, SpaceX is currently building out several launch facilities in Texas and Florida. In Florida, the company is modifying Launch Pad 39A at NASA’s Kennedy Space Center, the pad from which Apollo 11 launched in 1969. These facilities could potentially handle future launches for the government if SpaceX were to win a contract.

Brain Wave Study May Yield Improvements In Visual Perception

[ Watch the Video: Are Alpha Waves The Future Of Brain Monitoring? ]

Brett Smith for redOrbit.com – Your Universe Online

Have you ever mistakenly driven through a red light or been unable to see a familiar face in a crowded room?

By using a unique method to test brain waves, researchers from the University of Illinois at Urbana-Champaign and City College of New York have detailed new information on how the mind processes sensory stimuli that may or may not be perceived, according to a new report in the Journal of Cognitive Neuroscience.

“When we have different things competing for our attention, we can only be aware of so much of what we see,” said study author Kyle Mathewson, a postdoctoral fellow in the Beckman Institute for Advanced Science and Technology at UIUC. “For example, when you’re driving, you might really be concentrating on obeying traffic signals.”

Mathewson said this lack of awareness is particularly important when it comes to recognizing an unexpected event – such as a small child running into the middle of the road.

“In the car, we may see something so brief or so faint, while we’re paying attention to something else, that the event won’t come into our awareness,” Mathewson said. “If you present this scenario hundreds of times to someone, sometimes they will see the unexpected event, and sometimes they won’t because their brain is in a different preparation state.”

In the study, the researchers used both electroencephalography (EEG) and the event-related optical signal (EROS) to evaluate 16 participants and chart the electrical and optical information onto individual MRI brain images.

While EEG records electrical activity across the scalp, EROS uses infrared light delivered via optical fibers to measure changes in the optical properties of active areas of the cerebral cortex. Because the skull blurs electrical signals, EEG sensors are not the best option for determining where brain signals originate. EROS, which measures how light is scattered, can non-invasively pinpoint activity inside the brain.

“EROS is based on near-infrared light,” the researchers said. “It exploits the fact that when neurons are active, they swell a little, becoming slightly more transparent to light: this allows us to determine when a particular part of the cortex is processing information, as well as where the activity occurs.”

The researchers’ methods allowed them to chart where the alpha oscillations were coming from. They found that alpha waves are generated in the cuneus, which sits in an area of the brain that processes visual information.

However, by focusing attention and becoming more aware, the brain’s executive function can engage and suppress the alpha waves, making it possible to see objects or events that might have been missed in a more relaxed mental state.
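
In EEG terms, “suppressing alpha” means a drop in spectral power in the 8-12 Hz band over visual cortex. The sketch below shows a standard way to estimate that quantity from a single channel, assuming NumPy and SciPy are available; the parameters are illustrative rather than those used in the study.

```python
# Illustrative sketch only: estimating alpha-band (8-12 Hz) power from a single
# EEG channel, the quantity whose drop signals "alpha suppression". Assumes
# NumPy and SciPy; the parameters are illustrative, not those used in the study.
import numpy as np
from scipy.signal import welch


def alpha_power(eeg, fs=256.0, band=(8.0, 12.0)):
    """Average spectral power of one EEG channel in the alpha band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-second windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.trapz(psd[mask], freqs[mask]))        # integrate PSD over the band

# A lower alpha_power during focused attention than during a relaxed baseline
# is the signature of alpha suppression described above.
```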

“We found that the same brain regions known to control our attention are involved in suppressing the alpha waves and improving our ability to detect hard-to-see targets,” said study author Diane Beck, a member of the Beckman’s Cognitive Neuroscience Group.

“Knowing where the waves originate means we can target that area specifically with electrical stimulation” said Mathewson. “Or we can also give people moment-to-moment feedback, which could be used to alert drivers that they are not paying attention and should increase their focus on the road ahead, or in other situations alert students in a classroom that they need to focus more, or athletes, or pilots and equipment operators.”

Untangling Brazil’s Controversial New Forest Code

Approved in 2012, Brazil’s new Forest Code has few admirers. Agricultural interests argue that it threatens the livelihoods of farmers. Environmentalists counter that it imperils millions of hectares of forest, threatening to release the billions of tons of carbon they contain. A new study, co-authored by Woods Hole Research Center (WHRC) scientists Michael Coe, Marcia Macedo and Brazilian colleagues, published this week in Science, aims to clarify the new law. Entitled “Cracking Brazil’s Forest Code,” the article is the first to quantify the implications of recent changes to the Forest Code and identify new opportunities and challenges for conservation.
The Brazilian Forest Code is the largest single protector of forests on private properties, which contain over half of Brazil’s remaining forests and savannahs. Though championed by conservationists, the law has proved challenging to enforce. As global demand for beef and animal feed increased in the early 2000s, annual deforestation in the Brazilian Amazon surged to more than 20,000 km2 per year – prompting global outrage and a redoubling of efforts to improve enforcement. These pressures inspired a backlash from agribusiness interests, who lobbied to reduce the burden put on landowners to conserve and restore forests.
The new Forest Code is the product of a long and bitter debate in the Brazilian congress, fueled by tensions between the agribusiness lobby, government enforcement agencies, and conservationists. According to the study, the new law, by granting amnesty to landowners who deforested illegally before 2008, reduces the area to be reforested from 500,000 km2 to 210,000 km2. “The agribusiness lobby should see this as a big win,” explains lead author Britaldo Soares-Filho of the Federal University of Minas Gerais (UFMG), “but if they continue to boycott and sabotage the Forest Code, they will be shooting themselves in the foot.” Ultimately, he warns, “agricultural productivity depends on the conservation of native ecosystems and the climate stability they provide.”
The recent changes affect conservation in all Brazilian biomes, including the Amazon, Cerrado, and Atlantic Forest. “Brazil has done a great job reducing deforestation in the Amazon, but the other biomes have been short-changed in the process,” notes Dr. Macedo. Only 50% of the Cerrado remains intact, and deforestation there is increasing. The study estimates that the new law allows legal deforestation of an additional 400,000 km2 of the Cerrado. “That’s an area almost the size of California. Allowing that to happen would be an environmental disaster,” emphasizes Dr. Macedo.
Despite big losses for the environment, the law also introduced two key conservation measures that could pave the way for commoditizing standing forests in all biomes. First, it creates a new market that allows landowners to trade surplus forests (those that could be legally deforested) on one property, to offset restoration requirements on another. The study found that, if fully implemented, this could reduce the areas requiring restoration to as little as 5,500 km2 of arable land. The new law also creates an online land registry system that streamlines the process for landowners to register their property boundaries and environmental information. More advanced monitoring and documentation of over 5 million rural properties will dramatically improve enforcement. According to Dr. Coe, “No other country has attempted a registry of this scale. By allowing greater transparency, the system has the potential to help improve compliance and therefore become a big win for the environment.”
The Forest Code continues to be difficult to enforce and some worry that the amnesty provided for illegal deforestation may set a dangerous precedent, creating the expectation of impunity for future deforestation. “To be effective, the Forest Code must be tied to economic incentives that reward landowners who conserve native vegetation,” says co-author Raoni Rajão of UFMG.
Fortunately, private initiatives have sprung up to support compliance in the form of international certification standards, commodity roundtables, and boycotts of products produced on newly deforested land. New public initiatives like Brazil’s Low-Carbon Agriculture Program, which provides US$1.5 billion in annual subsidized loans to improve agricultural production, while reducing associated carbon emissions, are also key. Such initiatives will be critical if Brazil hopes to succeed in reconciling environmental conservation and agricultural development.

One In 13 Schoolchildren Take Medication For Behavioral Disorders

Lawrence LeBlond for redOrbit.com – Your Universe Online

According to a new government report, one in 13 US schoolchildren is taking at least one medication for emotional or behavioral difficulties, and more than half of the parents of these children report that the drugs are actually working for their kids.

The report was released by the National Center for Health Statistics (NCHS), a research arm of the US Centers for Disease Control and Prevention (CDC).

“We can’t advise parents on what they should do, but I think it’s positive that over half of parents reported that medications helped ‘a lot,’” said report author LaJeana Howie, a statistical research scientist at NCHS.

For the report, which gleaned data from the National Health Interview Survey, Howie and her colleagues found that among children between the ages of 6 and 17, 7.5 percent used prescribed medication for emotional or behavioral difficulties in 2011-2012.

Among those who were prescribed medications, males and non-Hispanic white children were more likely than females and children of other ethnic groups to be taking such drugs. Among females, the percentage who used prescribed medications was higher for older girls (6.3 percent) than for younger girls (4.0 percent). There was no significant difference between younger and older males.

Prescriptions also varied based on health insurance status and poverty status. A higher percentage (9.9 percent) of children with Medicaid or CHIP coverage used prescribed medications compared with those who had private insurance (6.7 percent) or were uninsured (2.7 percent).

Children living in poverty (9.2 percent) used prescribed medicines more than children in families with incomes at 100-200 percent of the poverty level (6.6 percent). Rates among children in families with incomes at 200-400 percent of the poverty level and those with incomes above 400 percent of the poverty level were similar – 7.3 percent and 7.2 percent, respectively.

The report also found differences among children in the parent’s perception of the benefit of medication for emotional or behavioral difficulties.

Overall, about 55 percent of the children had a parent report that the medication being used helped the child “a lot”; 26 percent said they helped “some”; and about 19 percent said the medications either helped “a little” or “not at all.”

Howie and her colleagues noted that they were not able to identify which specific disorders the children were being treated for. However, they believe there is a strong likelihood that attention-deficit/hyperactivity disorder (ADHD) could be the main reason medications are prescribed, based on the fact that 81 percent of children with emotional or behavioral difficulties have been diagnosed with ADHD at some point in their lives.

The researchers were also unable to identify the specific medications prescribed to the children in the report.

An expert not involved in the research agreed that ADHD is likely one of the most common conditions involved.

“Although the authors don’t really talk about the diagnoses, ADHD is likely the most overwhelming diagnosis. Oppositional defiant disorder, anxiety and depression are other likely diagnoses,” Dr. Andrew Adesman, chief of developmental and behavioral pediatrics at Steven and Alexandra Cohen Children’s Medical Center of New York, told HealthDay reporter Serena Gordon.

Regarding the findings in the report, Howie noted that it is difficult to speculate on what factors account for the differences seen.

For his part, Adesman said there are many factors that might contribute to more use of medications in people living under the poverty line and for those on government insurance programs.

“There may be parenting challenges, such as more single-parent households, medications may be more available than access to behavioral treatments, there may be more logistical issues with nonpharmaceutical interventions, like getting time off from work,” Adesman told HealthDay. “Many more families have access to prescription medications than to non-pharmaceutical interventions. There’s a lack of mental health treatment parity.”

“It’s encouraging that children who are identified as taking prescription medications are benefiting from those medications,” Adesman said. However, he added, “There are nonpharmaceutical treatments for virtually all psychiatric diagnoses in children. For households where a child has significant emotional or behavioral difficulties, counseling, behavior management and some forms of psychotherapy can be helpful as well.”

“Over the past two decades, the use of medication to treat mental health problems has increased substantially among all school-aged children and in most subgroups of children,” the researchers wrote in the report. “Data collected by national health surveys play a key role in monitoring and understanding the factors associated with the expanded use of medication for the emotional and behavioral problems of children.”

Iron Consumption Could Increase Heart Disease Risk: Study

Indiana University
A new study from the Indiana University School of Public Health-Bloomington has bolstered the link between red meat consumption and heart disease by finding a strong association between heme iron, found only in meat, and potentially deadly coronary heart disease.
The study found that heme iron consumption increased the risk for coronary heart disease by 57 percent, while no association was found between nonheme iron, which is in plant and other non-meat sources, and coronary heart disease.
The study was published online ahead of print in the Journal of Nutrition. Along with first author Jacob Hunnicutt, a graduate student in the school’s Department of Epidemiology and Biostatistics, the study’s co-authors are Ka He and Pengcheng Xun, faculty members in the department.
Hunnicutt said the link between iron intake, body iron stores and coronary heart disease has been debated for decades by researchers, with epidemiological studies providing inconsistent findings. The new IU research, a meta-analysis, examined 21 previously published studies and data involving 292,454 participants during an average 10.2 years of follow-up.
The new study is unique because it looks at the associations of total iron consumption as well as heme and nonheme iron intake in comparison to the risk of coronary heart disease. The only positive association involved the intake of heme iron.
The body treats the two kinds of iron differently. It can regulate the absorption of nonheme iron from vegetable sources, including iron supplements, but it has far less control over the absorption of heme iron from meat sources.
“The observed positive association between heme iron and risk of CHD may be explained by the high bioavailability of heme iron and its role as the primary source of iron in iron-replete participants,” the researchers wrote in the journal article. “Heme iron is absorbed at a much greater rate in comparison to nonheme iron (37 percent vs. 5 percent). Once absorbed, it may contribute as a catalyst in the oxidation of LDLs, causing tissue-damaging inflammation, which is a potential risk factor for CHD.”
Iron stores in the body increase over time. The only way to reduce iron in the body is by bleeding, donating blood or menstruation. Some dietary choices, such as coffee and tea, also can inhibit iron absorption.

Skin Layer Grown From Human Stem Cells Could Replace Animals In Drug, Cosmetics Testing

King’s College London

An international team led by King’s College London and the San Francisco Veteran Affairs Medical Center (SFVAMC) has developed the first lab-grown epidermis – the outermost skin layer – with a functional permeability barrier akin to real skin. The new epidermis, grown from human pluripotent stem cells, offers a cost-effective alternative lab model for testing drugs and cosmetics, and could also help to develop new therapies for rare and common skin disorders.

The epidermis, the outermost layer of human skin, forms a protective interface between the body and its external environment, preventing water from escaping and microbes and toxins from entering. Tissue engineers have been unable to grow epidermis with the functional barrier needed for drug testing, and have been further limited in producing an in vitro (lab) model for large-scale drug screening by the number of cells that can be grown from a single skin biopsy sample.

The new study, published in the journal Stem Cell Reports, describes the use of human induced pluripotent stem cells (iPSC) to produce an unlimited supply of pure keratinocytes – the predominant cell type in the outermost layer of skin – that closely match keratinocytes generated from human embryonic stem cells (hESC) and primary keratinocytes from skin biopsies. These keratinocytes were then used to manufacture 3D epidermal equivalents in a high-to-low humidity environment to build a functional permeability barrier, which is essential in protecting the body from losing moisture, and preventing the entry of chemicals, toxins and microbes.

A comparison of epidermal equivalents generated from iPSC, hESC and primary human keratinocytes (skin cells) from skin biopsies showed no significant difference in their structural or functional properties compared with the outermost layer of normal human skin.

Dr Theodora Mauro, leader of the SFVAMC team, says: “The ability to obtain an unlimited number of genetically identical units can be used to study a range of conditions where the skin’s barrier is defective due to mutations in genes involved in skin barrier formation, such as ichthyosis (dry, flaky skin) or atopic dermatitis. We can use this model to study how the skin barrier develops normally, how the barrier is impaired in different diseases and how we can stimulate its repair and recovery.”

Dr Dusko Ilic, leader of the team at King’s College London, says: “Our new method can be used to grow much greater quantities of lab-grown human epidermal equivalents, and thus could be scaled up for commercial testing of drugs and cosmetics. Human epidermal equivalents representing different types of skin could also be grown, depending on the source of the stem cells used, and could thus be tailored to study a range of skin conditions and sensitivities in different populations.”

How Productive Are The Ore Factories In The Deep Sea?

GEOMAR Helmholtz Centre for Ocean Research Kiel

GEOMAR scientists demonstrate in “Nature” the supply routes of black smokers

Hydrothermal vents in the deep sea, the so-called “black smokers”, are fascinating geological formations. They are home to unique ecosystems, but are also potential suppliers of raw materials for the future. They are driven by volcanic “power plants” in the seafloor and release amounts of energy that could meet the needs of a small town. But how exactly do they extract this energy from the volcanic rock? Researchers at GEOMAR Helmholtz Centre for Ocean Research Kiel have now used computer simulations to understand the underground supply routes. The study is published in the international journal Nature.

About ten years after the first moon landing, scientists on Earth made a discovery proving that our home planet still holds plenty of surprises in store for us. Looking through the portholes of the submersible ALVIN near the bottom of the Pacific Ocean in 1979, American scientists saw for the first time chimneys, several meters tall, from which black, mineral-saturated water at about 300 degrees Celsius shot out. We have since learned that these “black smokers”, also called hydrothermal vents, exist in all oceans. They occur along the boundaries of tectonic plates, along the submarine volcanic chains. However, to date many details of these systems remain unexplained.

One question that has long been intensively discussed in research is: where and how deep does seawater penetrate into the seafloor to take up heat and minerals before it leaves the ocean floor at hydrothermal vents? This is enormously important both for the cooling of the underwater volcanoes and for the amount of material dissolved. Using a complex 3-D computer model, scientists at GEOMAR Helmholtz Centre for Ocean Research Kiel were now able to trace the paths of the water toward the black smokers.

In general, it is well known that seawater penetrates into the Earth’s interior through cracks and crevices along the plate boundaries. The seawater is heated by the magma; the hot water rises again, leaches metals and other elements from the ground and is released as a black colored solution. “However, in detail it is somewhat unclear whether the water enters the ocean floor in the immediate vicinity of the vents and flows upward immediately, or whether it travels long distances underground before venting,” explains Dr. Jörg Hasenclever from GEOMAR.

This question is not only important for the fundamental understanding of processes on our planet. It also has very practical implications. Some of the materials leached from the underground are deposited on the seabed and form ore deposits that may be of economic interest. There is a major debate, however, about how large the resource potential of these deposits might be. “When we know which paths the water travels underground, we can better estimate the quantities of materials released by black smokers over thousands of years,” says Hasenclever.

Hasenclever and his colleagues used, for the first time, a high-resolution computer model of the seafloor to simulate a section of a mid-ocean ridge in the Pacific measuring 16 kilometers wide and six kilometers in both length and depth. Among the data used by the model was the heat distribution in the oceanic crust, which is known from seismic studies. In addition, the model also considered the permeability of the rock and the special physical properties of water.

The simulation required several weeks of computing time. The result: “There are actually two different flow paths – about half the water seeps in near the vents, where the ground is very warm. The other half seeps in at greater distances and migrates for kilometers through the seafloor before exiting years later.” Thus, the current study partially confirmed results from a computer model, which were published in 2008 in the scientific journal “Science”. “However, the colleagues back then were able to simulate only a much smaller region of the ocean floor and therefore identified only the short paths near the black smokers,” says Hasenclever.

The current study is based on fundamental work on the modeling of the seafloor, which was conducted in the group of Professor Lars Rüpke within the framework of the Kiel Cluster of Excellence “The Future Ocean”. It provides scientists worldwide with the basis for further investigations to see how much ore is actually on and in the seabed, and whether or not deep-sea mining on a large scale could ever become worthwhile. “So far, we only know the surface of the ore deposits at hydrothermal vents. Nobody knows exactly how much metal is really deposited there. All the discussions about the pros and cons of deep-sea ore mining are based on a very thin database,” says co-author Prof. Dr. Colin Devey from GEOMAR. “We need to collect a lot more data on hydrothermal systems before we can make reliable statements”.

Original publication: Hasenclever, J., S. Theissen-Krah, L. H. Rüpke, J. P. Morgan, K. Iyer, S. Petersen, C. W. Devey: Hybrid shallow on-axis and deep off-axis hydrothermal circulation at fast-spreading ridges, Nature, http://dx.doi.org/10.1038/nature13174

Existing Cochlear Technology Used To Re-grow Auditory Nerves

[ Watch The Video: Bionic Ear Delivers DNA To Regrow Auditory Nerve Cells ]

University of New South Wales

Researchers at UNSW Australia have for the first time used electrical pulses delivered from a cochlear implant to deliver gene therapy, thereby successfully regrowing auditory nerves.

The research also heralds a possible new way of treating a range of neurological disorders, including Parkinson’s disease, and psychiatric conditions such as depression through this novel way of delivering gene therapy.

The research is published today (Thursday 24 April) in the prestigious journal Science Translational Medicine.

“People with cochlear implants do well with understanding speech, but their perception of pitch can be poor, so they often miss out on the joy of music,” says UNSW Professor Gary Housley, who is the senior author of the research paper.

“Ultimately, we hope that after further research, people who depend on cochlear implant devices will be able to enjoy a broader dynamic and tonal range of sound, which is particularly important for our sense of the auditory world around us and for music appreciation,” says Professor Housley, who is also the Director of the Translational Neuroscience Facility at UNSW Medicine.

The research, which has the support of Cochlear Limited through an Australian Research Council Linkage Project grant, has been five years in development.

[ Watch The Video: Regenerated Auditory Nerves ]

The work centers on regenerating surviving nerves after age-related or environmental hearing loss, using existing cochlear technology. The cochlear implants are “surprisingly efficient” at localized gene therapy in the animal model, when a few electric pulses are administered during the implant procedure.

“This research breakthrough is important because while we have had very good outcomes with our cochlear implants so far, if we can get the nerves to grow close to the electrodes and improve the connections between them, then we’ll be able to have even better outcomes in the future,” says Jim Patrick, Chief Scientist and Senior Vice-President, Cochlear Limited.

It has long been established that the auditory nerve endings regenerate if neurotrophins – a naturally occurring family of proteins crucial for the development, function and survival of neurons – are delivered to the auditory portion of the inner ear, the cochlea.

But until now, research has stalled because safe, localized delivery of the neurotrophins can’t be achieved using drug delivery, nor by viral-based gene therapy.

[ Watch The Video: Cochlea After Gene Delivery ]

Professor Housley and his team at UNSW developed a way of using electrical pulses delivered from the cochlear implant to deliver the DNA to the cells close to the array of implanted electrodes. These cells then produce neurotrophins.

“No-one had tried to use the cochlear implant itself for gene therapy,” says Professor Housley. “With our technique, the cochlear implant can be very effective for this.”

While the neurotrophin production dropped away after a couple of months, Professor Housley says ultimately the changes in the hearing nerve may be maintained by the ongoing neural activity generated by the cochlear implant.

“We think it’s possible that in the future this gene delivery would only add a few minutes to the implant procedure,” says the paper’s first author, Jeremy Pinyon, whose PhD is based on this work. “The surgeon who installs the device would inject the DNA solution into the cochlea and then fire electrical impulses to trigger the DNA transfer once the implant is inserted.”

Integration of this technology into other ‘bionic’ devices such as electrode arrays used in deep brain stimulation (for the treatment of Parkinson’s disease and depression, for example) could also afford opportunities for safe, directed gene therapy of complex neurological disorders.

“Our work has implications far beyond hearing disorders,” says co-author Associate Professor Matthias Klugmann, from the UNSW Translational Neuroscience Facility research team. “Gene therapy has been suggested as a treatment concept even for devastating neurological conditions and our technology provides a novel platform for safe and efficient gene transfer into tissues as delicate as the brain.”

Moderate Exercise Keeps Hippocampus Healthy In People At Risk For Alzheimer’s Disease

University of Maryland
Maintaining hippocampus volume is key to delaying cognitive decline and onset of dementia symptoms in those with genetic risk for Alzheimer’s

A study of older adults at increased risk for Alzheimer’s disease shows that moderate physical activity may protect brain health and stave off shrinkage of the hippocampus – the brain region responsible for memory and spatial orientation that is attacked first in Alzheimer’s disease. Dr. J. Carson Smith, a kinesiology researcher in the University of Maryland School of Public Health who conducted the study, says that while all of us will lose some brain volume as we age, those with an increased genetic risk for Alzheimer’s disease typically show greater hippocampal atrophy over time. The findings are published in the open-access journal Frontiers in Aging Neuroscience.
“The good news is that being physically active may offer protection from the neurodegeneration associated with genetic risk for Alzheimer’s disease,” Dr. Smith suggests. “We found that physical activity has the potential to preserve the volume of the hippocampus in those with increased risk for Alzheimer’s disease, which means we can possibly delay cognitive decline and the onset of dementia symptoms in these individuals. Physical activity interventions may be especially potent and important for this group.”
Dr. Smith and colleagues, including Dr. Stephen Rao from the Cleveland Clinic, tracked four groups of healthy older adults ages 65-89, who had normal cognitive abilities, over an 18-month period and measured the volume of their hippocampus (using structural magnetic resonance imaging, or MRI) at the beginning and end of that time period. The groups were classified both for low or high Alzheimer’s risk (based on the absence or presence of the apolipoprotein E epsilon 4 allele) and for low or high physical activity levels.
Of all four groups studied, only those at high genetic risk for Alzheimer’s who did not exercise experienced a decrease in hippocampal volume (3%) over the 18-month period. All other groups, including those at high risk for Alzheimer’s but who were physically active, maintained the volume of their hippocampus.
“This is the first study to look at how physical activity may impact the loss of hippocampal volume in people at genetic risk for Alzheimer’s disease,” says Dr. Kirk Erickson, an associate professor of psychology at the University of Pittsburgh. “There are no other treatments shown to preserve hippocampal volume in those that may develop Alzheimer’s disease. This study has tremendous implications for how we may intervene, prior to the development of any dementia symptoms, in older adults who are at increased genetic risk for Alzheimer’s disease.”
Individuals were classified as high risk for Alzheimer’s if a DNA test identified the presence of a genetic marker – having one or both of the apolipoprotein E-epsilon 4 allele (APOE-e4 allele) on chromosome 19 – which increases the risk of developing the disease. Physical activity levels were measured using a standardized survey, with low activity being two or fewer days/week of low intensity activity, and high activity being three or more days/week of moderate to vigorous activity.
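The 2x2 grouping described above can be summarized in a short sketch. The snippet below is purely illustrative, not the authors’ code; the function and field names are hypothetical, and borderline cases (for example, frequent low-intensity activity) are assumed here to fall into the low-activity group.

    # Illustrative Python sketch of the study's 2x2 grouping (hypothetical names, not the authors' code).
    def classify_participant(apoe_e4_alleles, days_per_week, intensity):
        # High Alzheimer's risk: one or both APOE-e4 alleles present.
        risk = "high risk" if apoe_e4_alleles >= 1 else "low risk"
        # High activity: three or more days/week of moderate-to-vigorous activity.
        # Everything else (including frequent low-intensity activity) is treated as low activity here.
        if days_per_week >= 3 and intensity in ("moderate", "vigorous"):
            activity = "high activity"
        else:
            activity = "low activity"
        return risk, activity

    print(classify_participant(1, 4, "moderate"))  # ('high risk', 'high activity')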
“We know that the majority of people who carry the APOE-e4 allele will show substantial cognitive decline with age and may develop Alzheimer’s disease, but many will not. So, there is reason to believe that there are other genetic and lifestyle factors at work,” Dr. Smith says. “Our study provides additional evidence that exercise plays a protective role against cognitive decline and suggests the need for future research to investigate how physical activity may interact with genetics and decrease Alzheimer’s risk.”
Dr. Smith has previously shown that a walking exercise intervention for patients with mild cognitive decline improved cognitive function by improving the efficiency of brain activity associated with memory. He is planning to conduct a prescribed exercise intervention in a population of healthy older adults with genetic and other risk factors for Alzheimer’s disease and to measure the impact on hippocampal volume and brain function.

Study Finds High School Athletes Suffered 2.5 Million Basketball Injuries In 6 Seasons

[ Watch the Video: Basketball Injuries In High School Athletes On The Rise ]

Nationwide Children’s Hospital

Study calls for more access to on-site athletic trainers to properly assess injuries

Basketball is a popular high school sport in the United States with 1 million participants annually. A recently published study by researchers in the Center for Injury Research and Policy at Nationwide Children’s Hospital is the first to compare and describe the occurrence and distribution patterns of basketball-related injuries treated in emergency departments and the high school athletic training setting among adolescents and teens.

The study, published online in the Journal of Athletic Training, examined data relating to adolescents 13-19 years of age who were treated in US emergency departments (EDs) from 2005 through 2010 and those treated in the high school athletic training setting during the 2005-2006 through the 2010-2011 academic years for an injury associated with basketball. Nationally, 1,514,957 patients with basketball-related injuries were treated in EDs and 1,064,551 were treated in the athletic training setting.

The study found that in general, injuries that are more easily diagnosed and treated, such as sprains/strains, were more likely to be treated onsite by an athletic trainer while more serious injuries, such as fractures, that require more extensive diagnostic and treatment procedures were more commonly treated in an ED.

“Athletic trainers play a really important role in helping to assess those more mild or moderate injuries and that helps alleviate a burden on the health care system and on families,” said Lara McKenzie, PhD, the study’s lead author and principal investigator in the Center for Injury Research and Policy at Nationwide Children’s. “They are right there on the sidelines. They are there when some of these things happen. And they can be a great resource for families to evaluate that injury immediately.”

In 1998, the American Medical Association recommended all high school sports programs enlist an athletic medicine unit consisting of a physician director and an athletic trainer, yet as of 2009, the National Athletic Trainers’ Association estimated only 42 percent of high school sports teams met this recommendation. With more than half of US high school athletes not having access to an athletic trainer during practice or competition, a vast majority of injured players wind up in urgent care facilities and emergency departments, some unnecessarily.

Dr. McKenzie, also a faculty member at The Ohio State University College of Medicine, said that while athletic trainers cannot treat every injury, they can make the system more efficient by only sending athletes to the hospital when it is necessary and helping athletes return to play when it is safe.

“We are there to prevent injuries, evaluate them quickly, treat them immediately and try our best to make sure that as we return them to play we do it in the most safe and efficient way possible,” said Kerry Waple, ATC, certified athletic trainer in Sports Medicine at Nationwide Children’s. “There are a lot of injuries that happen that are winding up in urgent cares and emergency departments that don’t need to be there.”

The Sports Medicine team at Nationwide Children’s partners with 13 high schools across central Ohio and provides on-site certified athletic trainers for athletes during both practice and competition. The athletic trainers provide immediate triage, diagnosis and treatment to injured athletes and help them return to play safely and at the right time. In addition, Nationwide Children’s Sports Medicine experts provide injury prevention techniques to high school and middle school athletes.

Data for this study were obtained from the National Electronic Injury Surveillance System (NEISS), which is operated by the U.S. Consumer Product Safety Commission. The NEISS database provides information on consumer product-related and sports- and recreation-related injuries treated in hospital emergency departments across the country. Data for this study were also obtained from the High School Reporting Information Online™ database.

Physicists Consider Implications About The Universe’s First Light

The Kavli Foundation
The world was stunned by the recent announcement that a telescope at the South Pole had detected a cosmic fossil from the earliest moments of creation. During a live Google Hangout, four astrophysicists discussed the implications
Last month, scientists announced the first hard evidence for cosmic inflation, the process by which the infant universe swelled from microscopic to cosmic size in an instant. This almost unimaginably fast expansion was first theorized more than three decades ago, yet only now has “smoking gun” proof emerged.
What is this result and what does it mean for our understanding of the universe? Late last week, two members of the discovery team discussed the finding and its implications with two of the field’s preeminent thought leaders.
Walter Ogburn is a postdoctoral researcher at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University, and a member of the discovery team. For him, the exciting thing “is not just confirming that inflation happened— many of us already had a pretty good idea that was likely to be the case—but having a chance to figure out exactly how it happened, what it was that drove it, whether there are new particles and new fields that participated in it, and which of the many models could be correct.”
That’s made possible by the strength of the detected signal. Far from the quiet whisper that many expected, the signal turned out to be a relatively loud drone. That brings with it many implications.
“The theoretical community is abuzz,” says theorist Michael S. Turner, Director of the Kavli Institute for Cosmological Physics (KICP) and the Bruce V. and Diana M. Rauner Distinguished Service Professor at the University of Chicago. Turner, who was not involved in the experiment, continues: “We got the signal we were looking for—that’s good—but we shouldn’t have gotten one according to the highbrow theorists because they said it should be too small. So we also got a surprise. And often in science, that’s the case. We like the experimenters to find what we predict, but we also like surprises.”
This surprise is still so new that additional implications keep coming to light each week. It’s already clear that the result rules out many theoretical models of inflation—most of them, in fact—because they predict a signal much weaker than the one detected. In addition, the discovery also seems to disprove a theory that says that the universe expands, collapses and expands again in an ongoing cycle.
More than that, the result could very well be what Turner calls a “crack in the cosmic egg,” offering clues that even the most accepted theoretical assumptions contain inaccuracies.
“There have been hints for a while now that maybe something else is going on,” says KICP Deputy Director John Carlstrom, who leads two other experiments that study the universe’s first light. “Maybe we need to… allow some new physics in there. Maybe there are more neutrinos. Maybe they’re more massive than we thought. Or maybe it’s something none of us have thought of yet.”
Theorists will carefully consider these ideas and their implications over the coming months and years. Meanwhile, the signal still needs to be experimentally confirmed. Results from other telescopes, including the Planck satellite and the South Pole Telescope, are expected in the coming year. After that, the next step will be to measure more carefully the characteristics of the signal, searching for evidence of how inflation took place and how exactly the universe worked in its high-energy infancy. Those results may shed light on some of our biggest questions about how the universe began and how the forces of nature are unified.
But for now, the community is still buzzing with this first evidence of cosmic inflation.
“It’s a funny thing when you’re on the inside of a discovery like this,” says Abigail Vieregg, an active member of the discovery team and a professor at the University of Chicago and KICP. “It’s only when you release the results to the world and watch the reaction of the community that, at least for me, it really hits home how important it is. If this is what we think it is, it’s a very big deal.”

Researchers Analyze Similarities Between Gyms And Fast Food Restaurants

redOrbit Staff & Wire Reports – Your Universe Online
Gyms and McDonalds may seem like polar opposites, but the authors of research appearing in a recent edition of the journal Sport, Education and Society claim that fast food restaurants and fitness centers have more in common than you might think.
In the study, University of Gothenburg professor Thomas Johansson and Linnaeus University senior lecturer Jesper Andreasson analyze how fitness professionals view their occupation within the framework of the global wellness industry, as well as a phenomenon they call the “McDonaldisation” of gym culture.
Their work was based in part on interviews with personal trainers and group fitness instructors, as well as an investigation of Les Mills, an industry leader that operates on a franchise model. Currently, more than 14,000 gyms in 80 different countries have paid for permission to run Les Mills fitness classes, with a total of four million participants taking part during an average week, the authors explained in a March 25 statement.
Like McDonalds and other franchise-based fast food restaurants, Les Mills “implies a standardized set of techniques that look basically the same in all forms of group fitness training,” Johansson explained. “It’s really a business empire built around group fitness.”
The business revolves around a head trainer presenting movements that are strictly adhered to, with music playing while people taking the class are performing the exercises. Updated instructions are delivered every three months to all instructors throughout each of the Les Mills-certified fitness centers, resulting in local educators having extremely little influence over the fitness classes they teach.
Since they are unable to change the movements, music or the delivery method of the exercise instruction, Andreasson noted, the instructors have few opportunities to tap into their full field of experience. As a result, their talents are not fully utilized: they must adhere closely to the pre-designated terminology and choreography because the gyms promote the classes as a standardized suite of services.
“Even though gym and fitness franchises differ from hamburger restaurant chains, there are crucial similarities, but also differences. One can, for example, discern a tendency towards the construction of predesigned and highly monitored programs,” the authors wrote. “Homogenization is also apparent when looking at the body ideals produced, as fitness professionals work on their own or clients’ bodies, which makes it possible to anticipate a global body ideal.”
“The social and cultural patterns of self-regulation and self-government found in gym and fitness culture can be understood and analyzed in a global context,” they added. “What we find is an intriguing and complex mixture of regulation, control and standardization, on the one hand, and a struggle to express the body, to be ‘free’ and to transgress the boundaries set by the commercial global fitness industry, on the other.”

What Gave Us The Advantage Over Extinct Types Of Humans?

The answer lies in changes in the way our genes work

In parallel with modern man (Homo sapiens), there were other, extinct types of humans with whom we lived side by side, such as Neanderthals and the recently discovered Denisovans of Siberia. Yet only Homo sapiens survived. What was it in our genetic makeup that gave us the advantage?

The truth is that little is known about our unique genetic makeup as distinguished from our archaic cousins, and how it contributed to the fact that we are the only species among them to survive. Even less is known about our unique epigenetic makeup, but it is exactly such epigenetic changes that may have shaped our own species.

While genetics deals with the DNA sequence itself and the heritable changes in the DNA (mutations), epigenetics deals with heritable traits that are not caused by mutations. Rather, chemical modifications to the DNA can efficiently turn genes on and off without changing the sequence. This epigenetic regulatory layer controls where, when and how genes are activated, and is believed to be behind many of the differences between human groups.

Indeed, many epigenetic changes distinguish us from the Neanderthal and the Denisovan, as researchers at the Hebrew University of Jerusalem and colleagues in Europe have now shown.

In an article just published in Science, Dr. Liran Carmel, Prof. Eran Meshorer and David Gokhman of the Alexander Silberman Institute of Life Sciences at the Hebrew University, along with scientists from Germany and Spain, have reconstructed, for the first time, the epigenome of the Neanderthal and the Denisovan. Then, by comparing this ancient epigenome with that of modern humans, they identified genes whose activity had changed only in our own species during our most recent evolution.

Among those genetic pattern changes, many are expressed in brain development. Numerous changes were also observed in the immune and cardiovascular systems, whereas the digestive system remained relatively unchanged.

On the negative side, the researchers found that many of the genes whose activity is unique to modern humans are linked to diseases like Alzheimer’s disease, autism and schizophrenia, suggesting that these recent changes in our brain may underlie some of the psychiatric disorders that are so common in humans today.

By reconstructing how genes were regulated in the Neanderthal and the Denisovan, the researchers provide the first insight into the evolution of gene regulation along the human lineage and open a window onto a new field that allows the study of gene regulation in species that went extinct hundreds of thousands of years ago.


Mother’s Age Also Influences Autism Risk

Brett Smith for redOrbit.com – Your Universe Online

While several recent studies on the risk of a child developing an autism spectrum disorder (ASD) have focused on paternal age or even the age of the grandfather, a new Sweden-based study has found that risk for ASD increases with the age of both parents – but particularly for would-be mothers.

The study, published in the International Journal of Epidemiology, revealed that ASD risk with parental age increased linearly for older fathers, while risk increased rapidly for women after the age of 30.

“The open question at hand really is, what biological mechanisms underlie these age effects?” said Brian K. Lee, an assistant professor in the Drexel University School of Public Health, in a recent statement.

To reach their conclusion, the researchers reviewed a large population database sample of nearly 420,000 children born in Sweden between 1984 and 2003. The study team considered several possible confounding factors that could fluctuate with parental age and also affect risk, including family income and each parent’s psychiatric history. The study also used a detailed case-finding strategy based on all routes to care in a socialized healthcare system, allowing it to identify more ASD cases than other studies might.

The researchers said their objective was to analyze the effect of parental age in greater detail by investigating possible differences in the risk of ASD with and without intellectual disability – one of the most common co-occurring diagnoses with ASD, and one with a substantial effect on well-being. The researchers said theirs was the first population-based study with an ASD sample sufficiently large to analyze ASD risk in populations of children with and without intellectual disability.

“When considering risk factors, we can’t necessarily lump all ASD cases together, even though they fall under a broad umbrella of autism,” Lee said. “We need to keep an open mind in case intellectual disability might be a marker of a different underlying mechanism.”

Upon discovering that ASD with intellectual disability had a stronger connection with older parents than ASD without intellectual disability, the study team said their work supports further investigation of possible different causes.

Lee noted that, even though age effects are important indicators of risk at the population level, they carry little weight for an individual couple’s family planning, because the overall risk remains low.

“The absolute risk of having a child with ASD is still approximately 1 in 100 in the overall sample, and less than 2 in 100 even for mothers up to age 45,” Lee said.

Are Forensics Experts Relying On Inconsistent Fingerprint Technology?

Lawrence LeBlond for redOrbit.com – Your Universe Online

Forensic detectives have been relying on unique fingerprints to catch criminals for more than a century, but a new report by the Home Office’s first Forensic Science Regulator said human fingerprints may not be as unique as once thought.

Mike Silverman, the man who introduced the first automated fingerprint detection system to the Metropolitan Police, claims that this foundation of forensic investigation and identification is flawed, noting that human error, partial prints and false positives are making fingerprinting less reliable as solid evidence in whodunit cases.

“No two fingerprints are ever exactly alike in every detail; even two impressions recorded immediately after each other from the same finger,” said Silverman. “And the fingerprint often isn’t perfect, particularly at a crime scene. It might be dirty or smudged. There are all sorts of things that reduce the accuracy.”

“And not everyone’s fingerprints have been recorded so it’s impossible to prove that no two are the same,” he added. “It’s improbable, but so is winning the lottery, and people do that every week.”

Silverman said that other factors also reduce the ‘uniqueness’ of fingerprints, such as some skin conditions, which can make fingertips smooth. As well, elderly citizens’ skin changes in elasticity, making their prints seem warped. Furthermore, families are also known to share similar patterns.

Silverman stressed that, because of these printing inconsistencies, it is important that juries are made aware that not all fingerprints are unique.

“Too often they see programmes like CSI and that raises their expectations. What you see on CSI or Silent Witness simply doesn’t exist,” Silverman said, as cited by Mail Online’s Sian Boyle.

“[In reality] it requires an expert examiner to determine whether a print taken from a crime scene and one taken from a subject are likely to have originated from the same finger,” Silverman told The Telegraph’s Sarah Knapton.

There have been numerous criminal cases in which innocent people were wrongly accused on the basis of inaccurate fingerprint evidence.

One such case occurred in 2004, when Brandon Mayfield was wrongly linked to the Madrid train bombings by US federal fingerprint experts. Another case involved Scottish police officer Shirley McKie, who was wrongly accused of having been at a murder scene in 1997 after a print supposedly matching hers was found near the body.

“What both cases clearly demonstrate is that, despite the way fingerprint evidence is portrayed in the media, all comparisons ultimately involve some human element and, as a result, they are vulnerable to human error,” said Silverman, who now works as a private forensic consultant.

Unlike DNA analysis, which gives a statistical probability of a match, fingerprint experts rely on evidence that constitutes either a 100 percent certain match or a 100 percent certain exclusion.

The problem is that not all experts make the same judgment on whether a print matches a mark at a crime scene when presented with the same evidence twice.

A recent study by Southampton University found that two thirds of experts who were unknowingly shown the same sets of fingerprints twice came to a different conclusion on the second occasion.

Fingerprint technology was first used by Scotland Yard in 1901, after an earlier study by Dr Henry Faulds suggested that fingerprints could be useful for identifying individuals. That paper, published in the journal Nature in 1880, did not interest the Met Police at the time, wrote The Telegraph’s Knapton.

Dr Faulds remained undeterred. He approached Charles Darwin with the idea, and Darwin passed it along to his cousin Francis Galton, who published a book on the forensic science of fingerprints claiming that the odds of two persons having the same prints were about one in 64 million.

On the basis of this book and later research, the Fingerprint Bureau was founded in 1901; it eventually grew into the Forensic Science Service (FSS), which provided forensic services to all UK police forces.

The FSS remained in service until 2010, when the private sector took over all forensic work. The Met Police recently re-established its own forensics lab.

Silverman, who worked with police on the murder cases of Damilola Taylor and Rachel Nickell, believes the closing of the FSS could lead to future miscarriages of justice.

“Police forces have to slash their budgets and the easy thing not to spend money on is forensic services,” he told Knapton. “You have to ask yourself what price you put on justice.”

No Significant Connection Found Between Mental Illness And Crime

Gerard LeBlond for www.redorbit.com – Your Universe Online

Are mental illness and crime linked?

According to a recent study published by the American Psychological Association, only 7.5 percent of all crimes were directly related to a serious mental disorder. In the analysis, researchers studied 429 crimes that were committed by 143 different offenders with three types of mental illness.

Based on the results, the study authors stated that three percent of the crimes these individuals committed were directly related to symptoms of major depression, four percent to symptoms of schizophrenia and 10 percent to symptoms of bipolar disorder.

“When we hear about crimes committed by people with mental illness, they tend to be big headline-making crimes so they get stuck in people’s heads. The vast majority of people with mental illness are not violent, not criminal and not dangerous,” said lead researcher Jillian Peterson, PhD, of Normandale Community College.

For the study, former defendants of a mental health court in Minneapolis completed two-hour interviews about their criminal histories and mental health symptoms over an average of 15 years. The study, published online in the journal Law and Human Behavior, is the first to analyze the direct connection between crimes and the symptoms of the offenders’ mental illnesses.

There were no predictable patterns linking crime and mental illness in the study. Of the offenders whose crimes were directly related to their illness, two-thirds said they had also committed crimes for reasons unrelated to mental illness, such as poverty, unemployment, substance abuse or homelessness.

“Is there a small group of people with mental illness committing crimes again and again because of their symptoms? We didn’t find that in this study,” Peterson said.

More than 1.2 million people with mental illness are currently held in jails and prisons, and people with mental illness are on parole or probation at two to four times the rate of people without any mental disorder.

The study assessed the crimes committed by people with the three major mental disorders and placed them into three categories: no relationship to the crime; mostly unrelated to the crime; and related or directly related to the crime.

When the study was complete, the researchers found that, combining the latter two categories, only 18 percent of the crimes were related to the offenders’ disorders. Among offenders with bipolar disorder, 62 percent of crimes were directly or mostly related to their symptoms, compared with 23 percent for schizophrenia and 15 percent for depression. Peterson also said the bipolar result may have been inflated because some participants may have been on drugs or alcohol.

The participants were mostly male, with an average age of 40; 42 percent were white, 42 percent black and 16 percent other races. Of these offenders, 85 percent had substance abuse problems. Crimes of a violent nature were not included in the study, but the participants described other violent crimes they had committed. The study also did not examine how substance abuse interacted with mental illness to influence behavior.

The researchers suggested that treatment programs for mentally ill offenders should be expanded to include cognitive-behavioral treatment addressing criminal thinking, anger management and other behavioral issues. Programs that meet basic needs after incarceration, including drug treatment, housing and employment, are also essential, according to Peterson.

Climate Not The Only Factor Ushering In Change For Northern Forests

USDA Forest Service
In the most densely forested and most densely populated quadrant of the United States, forests reflect two centuries of human needs, values and practices. Disturbances associated with those needs, such as logging and clearing forests for agriculture and development, have set the stage for management issues of considerable concern today, a U.S. Forest Service study reports.
The report – Five anthropogenic factors that will radically alter forest conditions and management needs in the Northern United States – was published recently by the journal Forest Science and is part of the Northern Forest Futures Project, an effort led by the Forest Service’s Northern Research Station to forecast forest conditions over the next 50 years in the 20-state region extending from Maine to Minnesota and from Missouri to Maryland.
“In our research, we found five short- and long-term factors that will be highly influential regardless of the nature and magnitude of the effects of climate change,” said lead author Stephen Shifley, a research forester with the Northern Research Station. “Addressing these issues today will make northern forests more resilient to the effects of climate change and to any other natural or anthropogenic disturbances in the long term.”
The five factors identified in the study are:
  • Northern forests lack age-class diversity and will uniformly grow old without management interventions or natural disturbances. Nearly 60 percent of northern forest land is clustered in age classes spanning 40 to 80 years; young forests (age 20 years or less) make up only 8 percent of all forests in the region; and forests older than 100 years are 5 percent of forests.
  • The area of forest land in the North will decrease as a consequence of expanding urban areas. Cities in the 20-state region are expected to gain another 27 million people in the next 50 years and grow by about 5 million hectares.
  • Invasive species will alter forest density, diversity, and function. The US North has the dubious distinction of having the greatest number of invasive insects and plants per county, owing to nearly three centuries of active commerce, diverse tree species that provide suitable habitats, and the means for invasive species to spread.
  • Management intensity for timber is low in Northern forests and likely to remain so. A low propensity or low capacity for forest management reduces options for addressing perceived problems such as low forest diversity, invasive species, and other insect or disease problems.
  • Management for non-timber objectives will gain relevance but will be challenging to implement. An unintended consequence of reduced timber harvesting may be reduced capacity to subsidize other restoration activities – either through revenue from timber sales or through manipulation of vegetation and woody fuels during logging.
Actions that researchers and land managers can take to address these issues include:

  • Develop quantifiable state and regional goals for forest diversity.
  • Understand the spatial and structural impacts of urban expansion on forests.
  • Develop symbiotic relationships among forest owners, forest managers, forest industry and the general public to support contemporary conservation goals.
  • Work to understand the many dimensions of forest change.

“The northern quadrant of the United States includes 172 million acres of forest land and 124 million people,” said Michael T. Rains, Director of the Northern Research Station and the Forest Products Laboratory. “In the next 50 years, the link between forests and economic and human health will grow. The Northern Forest Futures Project is helping identify the individual and collective steps needed to ensure healthy and resilient futures for trees and people alike.”
Co-authors on the study included Forest Service scientists Keith Moser, Dave Nowak, Pat Miles, Brett Butler, and Eric Greenfield; Francisco X. Aguilar of the University of Missouri, and Ryan DeSantis of the University of California Cooperative Extension.

Codeine Prescriptions To Children In The ER Continue Despite Dangers

Lawrence LeBlond for redOrbit.com – Your Universe Online
Codeine, or 3-methylmorphine, is an opioid that is used to treat mild to moderate pain and to suppress cough. While it is widely prescribed to adults, it is of great concern that US emergency rooms continue to prescribe it to children, despite its potentially harmful effects.
A report published in the May issue of the journal Pediatrics offers some solutions to this dangerous practice, including changing providers’ prescribing behavior to promote the use of better alternatives to codeine, such as ibuprofen and hydrocodone.
“Despite strong evidence against the use of codeine in children, the drug continues to be prescribed to large numbers of them each year,” lead author Sunitha Kaiser, MD, UCSF assistant clinical professor of pediatrics at UCSF Benioff Children’s Hospital San Francisco, noted in a statement. “It can be prescribed in any clinical setting, so it is important to decrease codeine prescription to children in other settings such as clinics and hospitals, in addition to emergency rooms.”
There is much variability in how children process codeine. A third of all children receive no symptom relief from taking the drug and one in 12 can accumulate toxic amounts of the opioid, leading to breathing difficulties and possible death.
The use of codeine in children is highly unadvised by national and international organizations.
The American Academy of Pediatrics (AAP) issued guidelines in 1997 (reaffirmed in 2006) warning of the dangers and lack of documented effectiveness of codeine use in children with cough and cold. As well, the American College of Chest Physicians issued a guideline in 2006 advising against reliance on the drug for pediatric cough.
Previously, there was no information on the extent to which codeine was being prescribed to children in ERs around the country. It is generally prescribed for everything from painful injuries to cough due to cold in children.
To determine just how often codeine is prescribed in US emergency rooms, Kaiser and her colleagues used the National Hospital and Ambulatory Medical Care Survey from the National Center for Health Statistics. The study looked at codeine prescriptions to children ages three to 17 who visited the ER between 2001 and 2010.
The team found the rates of codeine prescriptions decreased from 3.7 percent to 2.9 percent during the 10-year period. However, they found that too many children are still being prescribed the drug, with between 559,000 and 877,000 codeine prescriptions written per year – averaging out to nearly 2,000 codeine prescriptions per day.
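As a rough, illustrative check of that daily figure (not a calculation from the study itself), dividing the reported annual totals by 365 gives the implied range of prescriptions per day:

    # Approximate prescriptions per day implied by the reported annual totals (illustrative only).
    for annual_total in (559_000, 877_000):
        print(round(annual_total / 365))  # about 1,530 and 2,400 per day

The reported figure of “nearly 2,000 per day” sits roughly in the middle of that range.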
The team also found no decline in codeine prescriptions associated with the 2006 guideline.
Based on their findings, the authors said codeine prescriptions were higher in children between the ages of eight and 12 and in regions outside the Northeast, and lower in non-Hispanic children or those with Medicaid.
“Further research is needed to determine the reasons for these lower rates so we can reduce codeine prescriptions to all children,” Kaiser said.
“Many children are at risk of not getting any benefit from codeine, and we know there are safer, more effective alternatives available,” Kaiser added. “A small portion of children are at risk of fatal toxicity from codeine, mainly in situations that make them more vulnerable to the effects of high drug levels such as after a tonsillectomy.”
Instead of codeine, Kaiser thinks ERs should be prescribing ibuprofen, which is equal to or better for treating injury pain. As well, she maintains that hydrocodone is a much safer and more effective alternative to codeine. Also, dark honey has been shown to be better at suppressing cough than most over-the-counter medicines, and the AAP recommends it for children over 12 months of age.

Corn-Based Biofuel Production Could Generate More CO2 Than Gasoline

redOrbit Staff & Wire Reports – Your Universe Online

Research published Sunday in the journal Nature Climate Change is calling into question whether or not corn crop residue can be used to meet US government mandates to increase the production of biofuels and reduce greenhouse gas emissions.

In fact, University of Nebraska-Lincoln Department of Agronomy & Horticulture assistant professor Adam Liska and his colleagues report that using corn residue to create ethanol or other types of biofuel actually reduces soil carbon and could generate more greenhouse gases than gasoline.

According to the researchers, the stalks, leaves and cobs left in cornfields following a harvest have been viewed as a good resource for the production of cellulosic ethanol. The US Department of Energy has invested more than $1 billion in federal funds to support research into developing ethanol from this material (also known as corn stover), though the process has not yet been extensively commercialized.

However, using a supercomputer model developed at the university’s Holland Computing Center, Liska’s team estimated the impact of residue removal on 128 million acres dispersed across 12 different states in the Corn Belt. They discovered that removing crop residue from these cornfields would cause the creation of an extra 50 to 70 grams of carbon dioxide per megajoule of biofuel energy produced.

The total annual emissions, averaged over a period of five years, would be approximately 100 grams of CO2 per megajoule, which would be seven percent higher than gasoline emissions, the study authors explained. That also puts the fuel roughly 62 grams above the ceiling implied by the 60 percent greenhouse gas emission reduction required by the 2007 Energy Independence and Security Act.
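
Those figures fit together arithmetically. Here is a quick reconstruction using only the numbers quoted above (a sketch, not the study's own lifecycle accounting):

```python
# Reconstructing the reported 62-gram gap from the figures quoted above
# (a rough sketch; the paper's lifecycle accounting is more detailed).
residue_ethanol = 100.0                      # g CO2 per MJ, five-year average for corn-residue ethanol
gasoline = residue_ethanol / 1.07            # implied gasoline baseline (~93.5 g/MJ), since 100 is "seven percent higher"
target = gasoline * (1 - 0.60)               # ceiling implied by a 60 percent reduction (~37.4 g/MJ)
print(round(residue_ethanol - target, 1))    # ~62.6 g/MJ above the target, consistent with the "62 grams" in the text
```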

Furthermore, Liska and his associates report that the carbon emission rate remained constant, no matter how much stover was removed. If less residue is removed, there is less decline in soil carbon, but it also results in a lower biofuel energy yield, the professor noted.

“To mitigate increased carbon dioxide emissions and reduced soil carbon, the study suggests planting cover crops to fix more carbon in the soil,” the university said in a statement. “Cellulosic ethanol producers also could turn to alternative feedstocks, such as perennial grasses or wood residue, or export electricity from biofuel production facilities to offset emissions from coal-fueled power plants.”

Liska said that his team attempted to find flaws in the study, but were unable to do so. “If this research is accurate, and nearly all evidence suggests so, then it should be known sooner rather than later, as it will be shown by others to be true regardless,” he added. “Many others have come close recently to accurately quantifying this emission.”

The study was funded through a three-year grant from the Energy Department valued at $500,000, and used carbon dioxide measurements taken between 2001 and 2010 in order to validate a soil carbon model developed using information from three dozen international field studies.

The researchers used USDA soil maps and crop yields to extrapolate potential CO2 emissions across 580 million 30 meter by 30 meter regions in the Corn Belt states referred to as “geospatial cells.” Their findings indicate that the highest net loss of carbon from residue removal occurred in the states of Minnesota, Iowa and Wisconsin, due to cooler temperatures and an increased amount of carbon in the soil.
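
The cell count is also consistent with the acreage mentioned earlier; a quick unit-conversion check (my own arithmetic, using the standard acre-to-square-meter conversion, not figures from the paper):

```python
# Checking that ~580 million 30 m x 30 m cells cover the 128 million acres modeled.
acres = 128e6
m2_per_acre = 4046.86            # square meters per acre (standard conversion)
cell_area_m2 = 30 * 30           # 900 m2 per "geospatial cell"

cells = acres * m2_per_acre / cell_area_m2
print(f"{cells:.2e}")            # prints ~5.76e+08, i.e. roughly the 580 million cells cited
```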

Exotic Materials Study Could Lead To Advanced Electronic Devices

redOrbit Staff & Wire Reports – Your Universe Online

In an attempt to help scale down the size of electronic devices to atomic dimensions, researchers from Cornell University and the Brookhaven National Laboratory have demonstrated how to convert a particular transition metal oxide from a metal to an insulator by reducing its size to less than one nanometer thick.

In research currently appearing online and scheduled for publication in the May edition of the journal Nature Nanotechnology, the study authors explain how they were able to synthesize atomically thin samples of a lanthanum nickelate (LaNiO3) utilizing a precise growth technique known as molecular-beam epitaxy (MBE).

Lead researcher Kyle Shen, a physics professor at Cornell, and his colleagues discovered that the material abruptly changed from a metal to an insulator when its thickness was reduced to less than one nanometer.

Following that change, the conductivity is switched off, preventing electrons from flowing through the material – a trait which could be beneficial for use in nanoscale switches or transistors, according to the study authors.

Using a unique system that combines MBE film growth with a method known as angle-resolved photoemission spectroscopy (ARPES), the researchers changed the thickness of their oxide films on an atom-by-atom basis and tracked how the movements and interactions of the electrons in the material were altered as a result.

They found that once the films were less than three nickel atoms thick, the electrons formed an unorthodox nanoscale pattern similar to that of a checkerboard. The discovery demonstrates the ability to control exotic transition metal oxides’ electronic properties at the nanometer scale, while also revealing the surprisingly cooperative interactions which rule electron behavior in these types of extremely thin substances.

The authors report that their work helps pave the way for the use of oxides in the creation of next-gen electronic devices. They wrote that these transition metal oxides have several advantages over conventional semiconductors, including the fact that their “high carrier densities and short electronic length scales are desirable for miniaturization” and that the strong interactions “open new avenues for engineering emergent properties.”

In addition to Shen, first author Phil King, former Kavli postdoctoral fellow and current University of St. Andrews faculty member; industrial chemistry professor Darrell Schlom; Haofei Wei, Yuefeng Nie, Masaki Uchida, Carolina Adamo, and Shabo Zhu of Cornell University; and Xi He and Ivan Božović of the Brookhaven National Laboratory were also involved in the research.

Their work was supported by the Kavli Institute at Cornell for Nanoscale Science, the Office of Naval Research, the National Science Foundation (NSF) through the Cornell Center for Materials Research, and the US Department of Energy.

Gecko-Inspired Adhesive Material Now Usable On Wood, Other Surfaces

redOrbit Staff & Wire Reports – Your Universe Online

The University of Massachusetts Amherst scientists behind a super-adhesive material inspired by gecko feet have described a new, more versatile version of their invention that can be used on real-world surfaces.

The improved version of the reusable material known as Geckskin is capable of strongly adhering to a greater variety of surfaces. However, just like a gecko’s feet, it is able to detach from those surfaces easily, the researchers explain in the most recent issue of the journal Advanced Materials.

“Imagine sticking your tablet on a wall to watch your favorite movie and then moving it to a new location when you want, without the need for pesky holes in your painted wall,” polymer science and engineering professor Al Crosby explained in a statement Thursday.

Previously, Crosby, polymer science researcher Dan King, biology professor Duncan Irschick and their colleagues demonstrated that Geckskin could hold loads of up to 700 pounds on smooth surfaces such as glass.

However, in their new paper, they explain how they have expanded their design theory in order to allow the material to stick powerfully to a greater variety of surfaces, including wood and drywall. It accomplishes this not by mimicking miniature, nanoscopic hairs typically found on gecko feet, but by improving on “draping adhesion” derived from the lizards’ skin-tendon-bone system, the study authors explained.

“The key to making a strong adhesive connection is to conform to a surface while still maximizing stiffness,” King said. He and his co-developers were able to create this ability in Geckskin through a combination of soft elastomers and extremely stiff fabrics such as those made of glass or carbon fiber. By altering the relative stiffness of those materials, Crosby’s team was able to optimize the material for several different applications.

The researchers then compared the performance of three different versions of Geckskin to an actual Tokay gecko in order to verify their claims about the material. They reported that, as anticipated, one version of Geckskin was able to match (and perhaps even exceed) the living gecko’s performance on all tested surfaces.

“The gecko’s ability to stick to a variety of surfaces is critical for its survival, but it’s equally important to be able to release and re-stick whenever it wants,” Irschick said. “Geckskin displays the same ability on different commonly used surfaces, opening up great possibilities for new technologies in the home, office or outdoors.”

“It’s been a lot of fun thinking about all of the different things you ever would want to hang somewhere, and then doing it. Geckskin changes the way you think,” added Crosby, who in February 2012 described the adhesive material as approximately 16 square inches, or “about the size of an index card.”

Could A Dual Laser Beam Help Us Control The Weather?

Brett Smith for redOrbit.com – Your Universe Online

While controlling the weather may have been fodder for classic episodes of the G.I. Joe cartoons, researchers from the University of Central Florida have been researching just that and say they may have found a way to do it by using a dual laser beam.

Condensation, lightning and storm activity are all associated with considerable amounts of statically charged particles. Exciting those particles with the proper kind of laser could possibly cause a shower in cases of extreme drought, according to the Florida team.

The researchers’ report in Nature Photonics describes how a laser beam dressed with a second beam prevents the dissipation of energy necessary to excite charged particles in a cloud.

While high-intensity lasers can travel millions of miles, “when a laser beam becomes intense enough, it behaves differently than usual – it collapses inward on itself,” said study author Matthew Mills, a graduate student in the university’s Center for Research and Education in Optics and Lasers (CREOL).

“The collapse becomes so intense that electrons in the air’s oxygen and nitrogen are ripped off creating plasma – basically a soup of electrons,” Mills added.

When this happens, the laser beam pushes back outward – creating a battle between the spreading and collapsing of an ultra-short laser pulse. Known as filamentation, the tension helps to create a filament or “light string” that only develops temporarily until the properties of air make the beam disperse.
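
The collapse Mills describes sets in only once the beam's power exceeds a critical power for self-focusing. The paper is not quoted on numbers here, but the standard textbook estimate, with an assumed Ti:sapphire wavelength and an approximate literature value for air's nonlinear index, gives a sense of the threshold:

```python
import math

# Standard critical power for self-focusing of a Gaussian beam:
#   P_cr ~ 3.77 * lambda^2 / (8 * pi * n0 * n2)
# The wavelength and n2 below are assumptions, not values from the Nature Photonics paper.
wavelength = 800e-9    # m, typical Ti:sapphire wavelength
n0 = 1.0               # linear refractive index of air
n2 = 3e-23             # m^2/W, approximate nonlinear index of air from the literature

p_cr = 3.77 * wavelength ** 2 / (8 * math.pi * n0 * n2)
print(f"{p_cr / 1e9:.1f} GW")   # a few gigawatts of peak power are needed before the beam collapses and filaments
```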

“Because a filament creates excited electrons in its wake as it moves, it artificially seeds the conditions necessary for rain and lightning to occur,” Mills said.

Previous efforts have generated “electrical events” in clouds, raising the threat of lightning strikes affecting any close-proximity effort to seed a cloud with a laser.

“What would be nice is to have a sneaky way which allows us to produce an arbitrary long ‘filament extension cable.’ It turns out that if you wrap a large, low intensity, doughnut-like ‘dress’ beam around the filament and slowly move it inward, you can provide this arbitrary extension,” Mills said.

“Since we have control over the length of a filament with our method, one could seed the conditions needed for a rainstorm from afar,” Mills added. “Ultimately, you could artificially control the rain and lightning over a large expanse with such ideas.”

Currently, the Florida researchers have only been able to extend their ‘dressed’ beam about 7 feet, but said they are planning to stretch the filament even farther.

“This work could ultimately lead to ultra-long optically induced filaments or plasma channels that are otherwise impossible to establish under normal conditions,” said Demetrios Christodoulides, a professor of optics who is overseeing work on the project.

“In principle such dressed filaments could propagate for more than 50 meters or so, thus enabling a number of applications,” Christodoulides said. “This family of optical filaments may one day be used to selectively guide microwave signals along very long plasma channels, perhaps for hundreds of meters.”

The study team said other potential uses include long-distance sensors and spectrometers to identify the chemical makeup of objects.

Researchers Have Fully Sequenced The Deadly Human Pathogen Cryptococcus

By Marla Vacek Broadfoot, Duke University

Ten-year effort yields map for finding weaknesses in the fungus

Within each strand of DNA lies the blueprint for building an organism, along with the keys to its evolution and survival. These genetic instructions can give valuable insight into why pathogens like Cryptococcus neoformans — a fungus responsible for a million cases of pneumonia and meningitis every year — are so malleable and dangerous.

Now researchers have sequenced the entire genome and all the RNA products of the most important pathogenic lineage of Cryptococcus neoformans, a strain called H99. The results, which appear April 17 in PLOS Genetics, also describe a number of genetic changes that can occur after laboratory handling of H99 that make it more susceptible to stress, hamper its ability to sexually reproduce and render it less virulent.

The study provides a playbook that can be used to understand how the pathogen causes disease and develop methods to keep it from evolving into even deadlier strains.

“We are beginning to get a grasp on what makes this organism tick. By having a carefully annotated genome of H99, we can investigate how this and similar organisms can change and mutate and begin to understand why they aren’t easily killed by antifungal medications,” said study coauthor John Perfect, M.D., a professor of medicine at Duke who first isolated H99 from a patient with cryptococcal meningitis 36 years ago.

The fungus Cryptococcus neoformans is a major human pathogen that primarily infects individuals with compromised immune systems, such as patients undergoing transplant or those afflicted with HIV/AIDS. Researchers have spent many years conducting genetic, molecular and virulence studies on Cryptococcus neoformans, focusing almost exclusively on the H99 strain originally isolated at Duke. Interestingly, investigators have noticed that over time, the strain became less and less virulent as they grew it in the laboratory.

“Virulence, or the ability of this organism to cause disease in mice or humans, is not very stable. It changes, and can rapidly be lost or gained. When the organism is in the host it is in one state, but when we take it out of the host and begin growing it in the laboratory it begins mutating,” said Fred Dietrich, senior study author and associate professor of molecular genetics and microbiology at Duke University School of Medicine.

Dietrich and his colleagues decided that the best way to investigate how the virulence of this pathogen could change over time was to develop a carefully annotated genomic map of the H99 strain, both in its original state and after it had been cultured. In an effort that took ten years and dozens of collaborators, the researchers sequenced the original H99 and nine other cultured variants, analyzing both the genome (the genetic code written in the DNA) and the transcriptome (the RNA molecules that occupy the second step in the flow of genetic information from DNA to RNA to protein).

The researchers found that the organism possessed a number of molecular tricks — such as the ability to produce genetic messages from both strands of DNA — that enable it to adapt and survive in changing conditions.

“Cryptococcus neoformans has to cope with a large number of different stresses and probably needs a very flexible metabolism. It is tempting to hypothesize that its complex RNA metabolism provides a mechanism to achieve such flexibility,” said Guilhem Janbon, Ph.D., lead study author and faculty member in molecular mycology at the Pasteur Institute.

They also discovered that the original and cultured strains were surprisingly similar to each other. After scanning the 20 million A’s, C’s, T’s and G’s that make up the pathogen’s genetic code, they found only 11 single nucleotide variants and 11 insertions or deletions that could explain why cultured strains behaved differently.

“Our results provide the groundwork needed to understand how this organism causes disease, because the next step will involve mutating every gene one by one to see which ones are required for pathogenesis,” said Joseph Heitman, M.D., Ph.D., senior study author and professor and chair of molecular genetics and microbiology at Duke.

The results will not only help researchers study this particular organism, but can also serve as a starting point for studies on other strains of Cryptococcus neoformans.

“This genome will serve as an important reference for the field, to enable a wide range of analysis from examining individual genes to comparing the genomes and transcriptomes of other strains and conditions,” said Christina Cuomo, Ph.D., study co-author and leader of the Fungal Genome Sequencing and Analysis Group at the Broad Institute. “We are already leveraging this genome to identify variants in the sequence of hundreds of additional isolates of Cryptococcus neoformans.”

Gene Variant Raises Risk For Aortic Tear And Rupture

Researchers from Yale School of Medicine and Celera Diagnostics have confirmed the significance of a genetic variant that substantially increases the risk of a frequently fatal thoracic aortic dissection or full rupture. The study appears online in PLOS ONE.

Thoracic aortic aneurysms, or bulges in the artery wall, can develop without pain or other symptoms. If they lead to a tear — dissection — or full rupture, the patient will often die without immediate treatment. Therefore, better identification of patients at risk for aortic aneurysm and dissection is considered essential.

The research team, following up on a previous genome-wide association study by researchers at Baylor College of Medicine, investigated variations in FBN1, the gene encoding fibrillin-1, a protein essential for a strong arterial wall. After studying hundreds of patients at Yale, they confirmed what was found in the Baylor study: that one variant, known as rs2118181, put patients at significantly increased risk of aortic tear and rupture.

“Although surgical therapy is remarkable and effective, it is incumbent on us to move to a higher genetic level of understanding of these diseases,” said senior author John Elefteriades, M.D., the William W. L. Glenn Professor of Surgery (Section of Cardiac Surgery) at Yale School of Medicine, and director of the Aortic Institute at Yale-New Haven Hospital. “Such studies represent important steps along that path.”

The researchers hope their confirmation of the earlier study may help lead to better clinical care of patients who may be at high risk of this fatal condition. “Patients with this mutation may merit earlier surgical therapy, before aortic dissection has a chance to occur,” Elefteriades says. Yale cardiothoracic surgeons will now begin assessing this gene in clinical patients with aneurysm disease.

CU Researchers Discover Target For Treating Dengue Fever

Two recent papers by a University of Colorado School of Medicine researcher and colleagues may help scientists develop treatments or vaccines for Dengue fever, West Nile virus, Yellow fever, Japanese encephalitis and other disease-causing flaviviruses.

Jeffrey S. Kieft, PhD, associate professor of biochemistry and molecular genetics at the School of Medicine and an early career scientist with the Howard Hughes Medical Institute, and colleagues recently published articles in the scholarly journals eLife and Science that explain how flaviviruses produce a unique RNA molecule that leads to disease.

More than 40 percent of people around the world are at risk of being bitten by mosquitoes infected with the virus that causes Dengue fever and more than 100 million people are infected, according to eLife. Many develop headaches, pain and fever, but some develop a life-threatening condition where tiny blood vessels in the body begin to leak. Other flaviviruses, such as West Nile virus, are rapidly spreading around the globe. Flaviviruses are considered dangerous emerging pathogens.

The eLife paper shows that the virus causing Dengue fever and other closely related viruses like West Nile and Japanese encephalitis use instructions encoded on a single strand of RNA to take over an infected cell and reproduce. The viruses also exploit an enzyme that cells use to destroy RNA to instead produce short stretches of RNA that, among other things, may help the virus avoid the immune system of its host. Ironically, these viruses use a structured RNA molecule to resist an enzyme that normally “chews up” RNA.

The Science paper reveals the discovery that the resistant RNA folds up into an unprecedented “knot-like” structure. The enzyme, normally adept at breaking up RNA structure, encounters this particular structured RNA and cannot “untangle” it; thus the enzyme is thwarted. This is the first time this sort of RNA structure has been observed and it has characteristics that may be amenable to targeting by new drugs. To discover this structure, the researchers used a technique called x-ray crystallography, which allowed them to determine the structures of individual molecules.

This understanding of how an RNA found in many different flaviviruses thwarts a powerful enzyme may help scientists develop treatments or vaccines.

Differences Between Neanderthals And Modern Man Caused By Genetic Switches

Brett Smith for redOrbit.com – Your Universe Online

With Neanderthals and modern humans sharing more than 99.8 percent of their genetic material, the DNA differences between the two species are fairly minimal. A new study has found that the differences seen in phenotypes are mostly caused by certain genes being “switched on” or “switched off.”

According to the study published in the journal Science, genetic switches that affect the size and shape of limbs, as well as those that affect the development of the brain, are the most pronounced differences.

The study highlights the importance of researching the epigenome, the layer of regulation responsible for switching certain genes on or off. Recent research has revealed how the epigenome can affect everything from cancer risk to the subtle differences between identical twins, each of whom has a copy of the same genetic material. The switching off of genes is typically achieved through a process called methylation, in which a methyl group, made up of one carbon atom and three hydrogen atoms, is attached to a gene.

To uncover the epigenomic differences between Neanderthals and modern humans, scientists took genetic material from limb bones of a living person, a Neanderthal and a Denisovan – an extinct Stone Age human that lived in Eurasia.

The study team was able to find approximately 2,200 regions that were switched on in today’s humans but switched off in either or both extinct species, or the other way around. One of the main differences identified by the team was a group of five genes called HOXD, which affects the appearance and size of limbs and was largely silenced in both ancient species, the scientists said. The HOXD differences could explain Neanderthals’ characteristic shorter limbs, bowleggedness and oversized hands and fingers.

Chris Stringer of the Natural History Museum in London, who was not directly involved in the study, told Reuters that the HOXD gene finding “may help to explain how these ancient humans were able to build stronger bodies, better adapted to the physical rigors of Stone Age life.”

The study team noted that the epigenome can be affected by lifestyle and environmental factors. This means the differences observed could be unique to the individuals sampled rather than representative of an entire species.

The researchers also found major epigenomic differences with respect to genes known to be related to neurological and psychiatric disorders such as autism and Alzheimer’s disease. These genes were silenced in the Neanderthal samples.

Stem Cells Created From Adult Cells

Brett Smith for redOrbit.com – Your Universe Online
In a significant breakthrough, a team of scientists from California and Seoul, South Korea, has been able to create viable stem cells from an adult donor that perfectly match the donor’s DNA, according to a new report in the journal Cell Stem Cell.
The development, referred to as “therapeutic cloning,” involves the production of embryonic cells for scientific purposes, and many people object to this type of research on moral or religious grounds. Debate over this type of work was stoked in 1997 with the announcement that the technique had been used to clone a sheep, Dolly. In 2005, the United Nations called for a ban on cloning, and the United States government currently prohibits the use of federal dollars for cloning research.
The scientists behind the latest development, which was partially funded by the government of South Korea, acknowledged that if the embryos in their study were implanted in a uterus they could have developed into a fetus.
“Without regulations in place, such embryos could also be used for human reproductive cloning, although this would be unsafe and grossly unethical,” study author Dr. Robert Lanza, chief scientist of Massachusetts-based biotech Advanced Cell Technology, told Reuters reporter Sharon Begley.
To produce viable stem cells from an adult donor, the researchers first inserted DNA from an adult skin cell into a donated ovum. The scientists then delivered an electric shock to fuse the genetic material to the ovum. Eventually, the ovum divides and multiplies – becoming a viable embryo in five or six days. Pluripotent stem cells, which can become any type of cell in the body, are located on the interior of this embryo.
Last year, a team of Oregon scientists reported on their success in combining genetic material from fetal and infant cells with DNA-extracted eggs. The team was able to develop their eggs into approximately 150-cell embryos.
The Oregon team said a major aspect of their success was allowing the engineered eggs to sit for 30 minutes before hitting them with the charge of electricity that – like Dr. Frankenstein’s monster – set the eggs on the path to becoming alive.
In the new study, the researchers waited two hours before triggering the egg, which Lanza said allowed them to succeed.
“It gives you time for the massive amount of genetic reprogramming required” for the adult DNA to develop into an embryo, he told Begley.
Although the scientists achieved success, the process is highly inefficient: the team was able to produce only one embryo from 39 attempts for each of the two adult donors, aged 35 and 75. According to Lanza, the low success rate and high cost of the technique mean that, as things currently stand, only extremely wealthy individuals would be able to generate stem cells from their own DNA.
Lanza added that another major barrier to widespread adoption is the scarcity of donated human eggs, which are extracted through a sometimes painful procedure.
On a more positive note, Lanza said a unique stem cell for each person may not be necessary for stem cell therapies as only “100 human embryonic stem cell lines would generate a complete match for over half the (US) population.”

Lasting Drought Effect In The Rainy Eastern US

April Flowers for redOrbit.com – Your Universe Online

So far, the spring of 2014 has seen more than 40 percent of the western US in a drought that the USDA has deemed “severe” or “exceptional.” In 2013, the drought was just as severe, and in 2012 it spread to the humid eastern states.

Looking at the effects of a drought in geological terms, it would be easy to assume that a three-year drought is inconsequential in the long term, despite the devastating effects it might have on farmers and crops in the short term.

A new study, published in a recent issue of Ecological Monographs and led by researchers at the Harvard Forest and Columbia’s Lamont-Doherty Earth Observatory, reveals that short-lived but severe climatic events can trigger cascades of ecosystem changes that can last for centuries.

Scientists have found some of the most compelling evidence of ecosystem response to drought and other challenges in the trunks of the oldest trees. The team analyzed tree rings up to 400 years old from a study area spanning more than 300,000 square miles of the eastern US. Their results point to ways in which seemingly stable forests could abruptly change over the next century.

“Trees are great recorders of information,” Dave Orwig, an ecologist at the Harvard Forest, said in a recent statement. “They can give us a glimpse back in time.”

Across the broadleaf forests of the eastern US — Kentucky, Tennessee, North Carolina and Arkansas — the tree records revealed that the simultaneous death of many trees opened huge gaps in the forest. These gaps allowed for the growth of a new generation of saplings.

The team searched for historical records showing that the dead trees had succumbed to logging, ice storms or hurricanes, but found none. Instead, the researchers believe the trees were weakened by repeated drought leading up to the 1770s. An intense drought from 1772 to 1775 further weakened the forest, and an unseasonable, devastating frost in 1774 was the final nail in the coffin for these forests. Until this study, that frost was known only from historical diaries, such as Thomas Jefferson’s Garden Book, where he recounts “a frost which destroyed almost every thing” at Monticello that was “equally destructive thro the whole country and the neighboring colonies.”

The large gaps in the forest created by these severe events allowed for an oversized generation of new trees — a baby boom-type situation — that reshaped the old-growth forests that still stand in the Southeast today.

“Many of us think these grand old trees in our old-growth forests have always been there and stood the test of time,” said Neil Pederson of the Lamont-Doherty Earth Observatory. “What we now see is that big events, including climatic extremes, created large portions of these forests in short order through the weakening and killing of existing trees.”

Pederson will join Harvard Forest as a senior ecologist in the fall of 2014. He notes that as climate warms, extreme events like increasing drought conditions and earlier springs like the one in 1774 could easily recreate the conditions that changed the eastern forests so abruptly in the 17th and 18th centuries.

“We are seeing more and more evidence of climate events weakening trees, making them more likely to succumb to insects, pathogens, or the next severe drought,” said Orwig.

Pederson added, “With this perspective, the changes predicted by models under future climate change seem more real.”

Image Caption: The rings of old-growth trees – one ring for every year the tree has been alive – can reveal centuries of forest change. Drought often results in very narrow spaces between rings. Photo by Neil Pederson

Children Spot Objects More Quickly When Prompted By Words Instead Of Images

April Flowers for redOrbit.com – Your Universe Online
Spoken language prompts children to spot objects more quickly than images, according to a new study from Indiana University.
As any book lover will tell you, language is transformative. Its transformative powers, however, may reach farther than an emotional reaction to novels or poetry.
The study, published in Developmental Science, reveals that spoken language — more so than images — taps into a child’s cognitive system and enhances their ability to learn, as well as to navigate cluttered environments.
Cognitive scientists Catarina Vales and Linda Smith say that their findings open up new avenues for research into the way language shapes the course of developmental disabilities such as ADHD, difficulties with school, and other attention-related problems.
The researchers had participating children play a series of “I spy” games — which are widely used to study attention and memory in adults — in which the children were asked to look for one image in a crowded scene on a computer screen. The children were shown an image of the object they needed to find, such as a bed hidden among a group of couches.
“If the name of the target object was also said, the children were much faster at finding it and less distracted by the other objects in the scene,” Vales, a graduate student in the Department of Psychological and Brain Sciences, said in a recent statement.
“What we’ve shown is that in 3-year-old children, words activate memories that then rapidly deploy attention and lead children to find the relevant object in a cluttered array,” said Smith, Chancellor’s Professor in the Department of Psychological and Brain Sciences. “Words call up an idea that is more robust than an image and to which we more rapidly respond. Words have a way of calling up what you know that filters the environment for you.”
According to Smith, the study “is the first clear demonstration of the impact of words on the way children navigate the visual world and is a first step toward understanding the way language influences visual attention, raising new testable hypotheses about the process.”
The way language is used can change how people perceive the world around them, Vales said.
“We also know that language will change the way people perform in a lot of different laboratory tasks,” she said. “And if you have a child with ADHD who has a hard time focusing, one of the things parents are told to do is to use words to walk the child through what she needs to do. So there is this notion that words change cognition. The question is ‘how?'”
The findings “begin to tell us precisely how words help, the kinds of cognitive processes words tap into to change how children behave. For instance, the difference between search times with and without naming the target object indicates a key role for a kind of brief visual memory known as working memory, which helps us remember what we just saw as we look to something new. Words put ideas in working memory faster than images,” Vales said.
These findings also suggest that language could play a crucial role in a number of developmental disabilities.
“Limitations in working memory have been implicated in almost every developmental disability, especially those concerned with language, reading and negative outcomes in school,” Smith said. “These results also suggest the culprit for these difficulties may be language in addition to working memory.
“This study changes the causal arrow a little bit. People have thought that children have difficulty with language because they don’t have enough working memory to learn language. This turns it around because it suggests that language may also make working memory more effective.”
The study takes this further, showing how the findings have implications for child development.
“Children learn in the real world, and the real world is a cluttered place,” Smith said. “If you don’t know where to look, chances are you don’t learn anything. The words you know are a driving force behind attention. People have not thought about it as important or pervasive, but once children acquire language, it changes everything about their cognitive system.”
“Our results suggest that language has huge effects, not just on talking, but on attention — which can determine how children learn, how much they learn and how well they learn,” Vales said.

New Technique Detects Microscopic Diabetes-Related Eye Damage

Indiana University researchers have detected new early-warning signs of the potential loss of sight associated with diabetes. This discovery could have far-reaching implications for the diagnosis and treatment of diabetic retinopathy, potentially impacting the care of over 25 million Americans.

“We had not expected to see such striking changes to the retinas at such early stages,” said Ann Elsner, professor and associate dean in the IU School of Optometry and lead author of the study. “We set out to study the early signs, in volunteer research subjects whose eyes were not thought to have very advanced disease. There was damage spread widely across the retina, including changes to blood vessels that were not thought to occur until the more advanced disease states.”

These important early-warning signs were invisible to existing diagnostic techniques, requiring new technology based on adaptive optics. Stephen Burns, professor and associate dean at the IU School of Optometry, designed and built an instrument that used small mirrors with tiny moveable segments to reflect light into the eye to overcome the optical imperfections of each person’s eye.

“It is shocking to see that there can be large areas of retina with insufficient blood circulation,” he said. “The consequence for individual patients is that some have far more advanced damage to their retinas than others with the same duration of diabetes.”

Because these changes had not been observable in prior studies, it is not known whether improved control of blood sugar or a change in medications might stop or even reverse the damage. Further research can help determine who has the most severe damage and whether the changes can be reversed.

The study, “In vivo adaptive optics microvascular imaging in diabetic patients without clinically severe diabetic retinopathy,” was published in the journal Biomedical Optics Express. It is available at http://www.opticsinfobase.org/boe/abstract.cfm?uri=boe-5-3-961.

Diabetes has long been known to damage the retina, the irreplaceable network of nerve cells that capture light and give the first signal in the process of seeing. This damage to the retina, known as diabetic retinopathy, is the leading cause of vision loss in the U.S. for individuals under the age of 75.

The changes seen in the study subjects included corkscrew-shaped capillaries. These capillaries were not merely slightly thicker and therefore distorted; rather, the blood vessel walls had grown in length to form these loops. This is visible only at microscopic levels, making it difficult to determine which patients have the more advanced disease, because these eyes look similar when viewed with the typical instruments found in the clinic. Yet some of these patients already have sight-threatening complications.

Diabetes also is known to result in a variety of types of damage to capillaries, the body’s smallest blood vessels. The more commonly known changes, such as microaneurysms along the capillaries, were also present in the study, but seen in much greater detail. In addition to the corkscrew appearance and microaneurysms, along with the hemorrhages in the later stages of the disease, there is also a thickening of the walls of blood vessels. This is thought to be associated with poor blood flow or failure to properly regulate blood flow.

In the study, patients with diabetes had significantly thicker blood vessel walls than found in controls of similar ages, even for relatively small diameter blood vessels. The capillaries varied in width in the diabetic patients, with some capillaries closed so that they no longer transported blood within the retina. On average, though, the capillaries that still had flowing blood were broader for the patients with diabetes. These diabetic patients had been thought to have fairly mild symptoms. In fact, the transport of oxygen and glucose to the retina is already compromised.

Previous diagnostic techniques have been unable to uncover several of these changes in living patients. Simply magnifying the image of the retina is not sufficient. The view through the imperfect optics of the human eye has to be corrected.

The instrument designed by Burns takes advantage of adaptive optics to obtain a sharp image, and also minimized optical errors throughout the instrument. Using this approach, the tiny capillaries in the eye appear quite large on a computer screen. These blood vessels are shown in a video format, allowing careful focus and observation of blood cells moving through the blood vessels. After imaging each patient’s eye, highly magnified retinal images are then pieced together with software, providing still images or videos.

Genetic Diversity Is Key To Tiger Conservation

Brett Smith for redOrbit.com – Your Universe Online

While many tiger conservation efforts focus on the numbers of a population, maintaining the big cats’ genetic diversity is just as important, according to a report published on Thursday in the Journal of Heredity.

“Numbers don’t tell the entire story,” study author Elizabeth Hadly, a professor in environmental biology at Stanford University, said in a recent statement.

According to the study, because the tiger’s decline has taken place so recently, populations have retained a relatively high amount of diversity. Previous research has shown that the greater the gene flow among tiger communities, the more genetic diversity is retained and the better the odds of species survival become.

Calling the genetic diversity of a population “the basis for adaptation,” the study team used data on previously sequenced mitochondrial fragments from 5 of the 6 existing tiger subspecies to determine the level of population growth necessary to maintain the current levels of diversity.

The scientists discovered that as populations become more fragmented, the gene pool of each tiger subspecies shrinks, and genetic diversity along with it. This decrease in diversity can result in reduced reproduction rates, swifter spread of disease and more cardiac defects, among other issues.

The study team also found that for tiger communities to preserve their genetic diversity for the next 150 years, tiger populations would have to grow to approximately 98,000 individuals if gene flow among subpopulations were delayed by 25 years. In contrast, the population would have to expand to approximately 60,000 if sufficient gene flow were immediately established.
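
The study’s mitochondrial-sequence model is not reproduced here, but the textbook drift relationship H_t = H_0 (1 - 1/(2Ne))^t gives a feel for why connectivity matters as much as raw numbers. The effective sizes and the five-year generation time in the sketch below are illustrative assumptions, not the paper’s parameters:

```python
# Illustrative only: classic loss-of-heterozygosity-by-drift formula,
# not the model used in the Journal of Heredity paper.
def retained_heterozygosity(effective_size, generations):
    """Fraction of starting heterozygosity retained after `generations` of genetic drift."""
    return (1 - 1 / (2 * effective_size)) ** generations

generations = 150 / 5   # ~150 years at an assumed 5-year generation time

print(round(retained_heterozygosity(1000, generations), 3))  # ~0.985: one connected population with Ne = 1,000
print(round(retained_heterozygosity(100, generations), 3))   # ~0.860: an isolated fragment with Ne = 100
                                                             # loses diversity roughly ten times faster
```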

Unfortunately, neither of these scenarios is realistic due to limiting factors such as prey availability, the study team said.

“Since genetic variability is the raw material for future evolution, our results suggest that without interbreeding subpopulations of tigers, the genetic future for tigers is not viable,” said study author Uma Ramakrishnan, a biology researcher at the National Centre for Biological Sciences in Bangalore, India.

The scientists advised concentrating conservation efforts on facilitating wildlife corridors and other means that allow breeding between different tiger populations, which would maintain genetic diversity more efficiently than increasing population sizes. They added that crossbreeding wild and captive tiger subspecies would be another important focus for tiger conservation.

“Every tiger—including those in zoos, which presently outnumber those in the wild—is important as a potential reserve of the genetic diversity of the species,” the study authors wrote in their conclusion. “Research efforts should aim to estimate ongoing gene flow between protected areas, and immediate efforts toward cross-boundary tiger breeding should be considered.”

“This is very much counter to the ideas that many managers and countries have now – that tigers in zoos are almost useless and that interbreeding tigers from multiple countries is akin to genetic pollution,” Hadly said. “In this case, survival of the species matters more than does survival of the exclusive traits of individual populations.”

The study team cited efforts to conserve Florida panthers as an example of the importance of maintaining the genetic diversity of a population. A program introduced a closely related subspecies to the Florida population, and since the introduction the panther population has risen in numbers and exhibited fewer cases of genetic disorders and poor fitness.

Exploring Ways To Protect Fish From Barotrauma

Tom Rickey, PNNL

Fish-friendly hydropower is the goal of international team

Think of the pressure change you feel when an elevator zips you up multiple floors in a tall building. Imagine how you’d feel if that elevator carried you all the way up to the top of Mt. Everest — in the blink of an eye.

That’s similar to what many fish experience when they travel through the turbulent waters near a dam. For some, the change in pressure is simply too big, too fast, and they die or are seriously injured.

In an article in the March issue of the journal Fisheries, ecologists from the Department of Energy’s Pacific Northwest National Laboratory and colleagues from around the world explore ways to protect fish from the phenomenon, known as barotrauma.

Among the findings: Modifying turbines to minimize dramatic shifts in pressure offers an important way to keep fish safe when passing through dams. The research is part of a promising body of work that aims to reduce such injuries by improving turbine designs in dams around the world.

PNNL researchers are working with officials and scientists from Laos, Brazil, and Australia — areas where hydropower is booming — to apply lessons learned from experience in the Pacific Northwest, where salmon is king and water provides about two-thirds of the region’s power. There, billions of dollars have been spent since 1950 to save salmon endangered largely by the environmental impact of hydropower.

“Hydropower is a tremendous resource, often available in areas far from other sources of power, and critical to the future of many people around the globe,” said Richard Brown, a senior research scientist at PNNL and the lead author of the Fisheries paper.

“We want to help minimize the risk to fish while making it possible to bring power to schools, hospitals, and areas that desperately need it,” added Brown.

Harnessing the power of water flowing downhill to spin turbines is the most convenient energy source in many parts of the world, and it’s a clean, renewable source of energy to boot.

In Brazil, several dozen dams are planned along the Amazon, Madeira and Xingu rivers — an area that teems with more than 5,000 species of fish, and where some of the largest hydropower projects in the world are being built. In southeastern Australia, hydropower devices are planned in the area drained by the Murray-Darling river system. And in Southeast Asia, hundreds of dams and smaller hydro structures are planned in the Lower Mekong River Basin.

The authors say the findings from a collaboration that spans four continents improve our understanding of hydropower and will benefit fish around the globe. New results about species in the Mekong or Amazon regions, for instance, can inform fish-friendly practices in those regions of the United States where barotrauma has not been extensively studied.

To ‘Everest’ and back in an instant

Dams vary considerably in the challenges they pose to migrating fish, and the challenges are magnified when a fish must pass through more than one dam or hydro structure. At some, mortality is quite high, while at others, such as along the Columbia River, most fish are able to pass over or through a single dam safely, thanks to extensive measures to keep fish safe. Some fish spill harmlessly over the top, while others pass through pipes or other structures designed to route fish around the dam or steer them clear of the energy-producing turbines.

Still, at most dams, the tremendous turbulence of the water can hurt or disorient fish, and the blades of a turbine can strike them. The new study focuses on a third problem, barotrauma — damage that happens at some dams when a fish experiences a large change in pressure.

Depending on its specific path, a fish traveling through a dam can experience an enormous drop in pressure, similar to the change from sea level to the top of Mt. Everest, in an instant. Just as fast, as the waters swirl, the fish suddenly finds itself back at its normal pressure.

Those sudden changes can have a catastrophic effect on fish, most of which are equipped with an organ known as a swim bladder — like a balloon — to maintain buoyancy at a desired depth. When the fish goes deeper and pressures are greater, the swim bladder shrinks; when the fish rises and pressure is reduced, the organ increases in size.

For some fish, the pressure shift means the swim bladder instantly expands four-fold or eight-fold, like an air bag that inflates suddenly. This rapid expansion can result in internal injuries or even death.
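
Boyle’s law (P1·V1 = P2·V2 for a gas at constant temperature) is enough to see where figures like four-fold or eight-fold come from. A minimal sketch, with the acclimation depth and nadir pressure chosen purely for illustration rather than taken from the Fisheries paper:

```python
# Boyle's-law sketch of swim bladder expansion during turbine passage.
# Acclimation depth and nadir pressure are illustrative assumptions, not paper values.
ATM_KPA = 101.325   # atmospheric pressure at the surface, kPa

def expansion_factor(acclimation_depth_m, nadir_kpa):
    """Ratio of swim bladder volume at the pressure nadir to its acclimated volume,
    treating the gas as ideal and the decompression as effectively instantaneous."""
    acclimated_kpa = ATM_KPA + 9.81 * acclimation_depth_m   # ~9.81 kPa per meter of fresh water
    return acclimated_kpa / nadir_kpa

print(round(expansion_factor(5, 20), 1))   # ~7.5x for a fish acclimated at 5 m depth hitting a 20 kPa nadir
print(round(expansion_factor(0, 25), 1))   # ~4.1x even for a surface-acclimated fish at a 25 kPa nadir
```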

Factors at play include the specific path of a fish, the amount of water going through a turbine, the design of the turbine, the depth of water where the fish usually lives, and the physiology of the fish itself.

“To customize a power plant that is the safest for the fish, you must understand the species of fish in that particular river, their physiology, and the depth at which they normally reside, as well as the tremendous forces that the fish can be subjected to,” said Brown.

PNNL scientists have found that trying to keep minimum pressure higher in all areas near the turbine is key for preventing barotrauma. That reduces the amount of pressure change a fish is exposed to and is a crucial component for any turbine that is truly “fish friendly.” Preventing those extremely low pressures also protects a turbine from damage, reducing shutdowns and costly repairs.

Lower Mekong River Basin

Brown and PNNL colleague Zhiqun (Daniel) Deng have made several trips to work with scientists in Southeast Asia, where dozens of dams are planned along the Mekong River and its tributaries. The Mekong starts out high in Tibet and travels more than 2,700 miles, touching China, Myanmar, Laos, Thailand, Cambodia, and Vietnam. The team estimates more than 1,200 species of fish make their home in the Mekong, including the giant Mekong catfish and the giant freshwater stingray, as well as the endangered Irrawaddy dolphin.

The scientists estimate that the region’s fish account for almost half of the protein in the diet of the people of Laos and nearly 80 percent for the people of Cambodia. Four out of five households in the region rely heavily on fish for food, jobs, or both.

“Many people in Southeast Asia rely on fish both for food and their livelihood; it’s a huge issue, crucial in the lives of many people. Hydropower is also a critical resource in the region,” said Deng, a PNNL chief scientist and an author of the paper.

“Can we reduce the impact of dams on fish, to create a sustainable hydropower system and ensure the food supply and livelihoods of people in these regions? Can others learn from our experiences in the Pacific Northwest? This is why we do research in the laboratory — to make an impact in the real world, on people’s lives,” added Deng.

The same team of scientists just published a paper in the Journal of Renewable and Sustainable Energy, focusing broadly on creating sustainable hydro in the Lower Mekong River Basin. The paper discusses the potential for hydropower sources in the region (30 gigawatts), migratory patterns of its fish, the importance of fish-friendly technology, and further studies needed to understand hydro’s impact on fish of the Mekong.

Authors of the Fisheries paper include scientists from PNNL, the National University of Laos, and the Living Aquatic Resources Research Center in Laos, the Federal University of São João Del-Rei in Brazil, the University of British Columbia; and from Australia, the Port Stephens Fisheries Institute in New South Wales, the Narrandera Fisheries Centre in New South Wales, and Fishway Consulting Services. PNNL’s work in the area has been funded by the U.S. Army Corps of Engineers, DOE’s Office of Energy Efficiency and Renewable Energy, and AusAid, the Australian Agency for International Development.

Pygmy Slow Loris, Nycticebus pygmaeus

The pygmy slow loris (Nycticebus pygmaeus) is a primate that can be found in Laos, eastern areas of Cambodia, the Yunnan Province of China, and areas east of the Mekong River in Vietnam. It prefers to reside in secondary, semi-evergreen, and mixed deciduous forests. The species was formally described in 1907 by J. Lewis Bonhote and was once classified together with all other slow lorises as a single species, although nine distinct species are now recognized.

The pygmy slow loris reaches an average body length between 7.7 and 9.1 inches, with a weight between 13 and 30 ounces. Although there are no significant size differences between males and females, all adults vary in weight seasonally, most likely gaining weight in order to survive harsh winters. The fur on its upper body and back is red to reddish brown in color, with a black stripe extending down the back, while the underbelly is greyish-black with yellowish-orange markings. The stripe on the back, and the silver hairs that are sometimes present, can be absent, leading some to believe that seasonal variations in color occur or that these markings may represent a separate species. The sides of the face and head are reddish in color, and a white stripe extends down its nose from its forehead. The dark circles around its eyes fade as it grows older.

Wild pygmy slow lorises are typically found in groups of two to four individuals and are active during the nighttime hours. Males hold territories that they will defend and they are known to scent mark over the scent of other males. This species communicates using many vocalizations including short whistles that are used by mothers and young and loud whistles that are used by females to attract a mate. Like all loris species, it will rub a toxic secretion from a gland near its elbow onto its fur, which is used to deter predators.

In captivity, females are able to breed for four to five days between the months of June and October and they are known to be slightly violent towards potential mates. After breeding, females can be pregnant between 184 and 200 days, after which time they will give birth to one or two young. Weaning occurs at about twenty-four weeks of age and sexual maturity is thought to be reached at sixteen to eighteen months of age.

Like other slow lorises, the pygmy slow loris is an omnivore, feeding on insects and fruit, among other food types. It prefers to consume plant materials like gum, which is readily available within its range. Its diet does depend on seasonal changes and it has been observed barely moving during the winter months in order to conserve energy.  This species is thought to live for up to twenty years.

The pygmy slow loris is threatened by habitat loss in all areas of its range. Because of political issues in its range and its nocturnal lifestyle, information about its population numbers is limited and highly variable. The species is also threatened by hunting and the illegal wildlife trade, which has increased in many areas due to urban growth and economic fluctuations. It has already experienced local extinctions in some areas of its range and is expected to undergo more in the future. The species does occur in protected areas and has its own Species Survival Plan, which includes a successful captive breeding program. The pygmy slow loris appears in Appendix I of CITES and on the IUCN Red List with a conservation status of “Vulnerable.”

Image Caption: A Pygmy Slow Loris (Nycticebus pygmaeus) at the Duke Lemur Center in Durham, North Carolina. Credit: David Haring/Wikipedia (CC BY-SA 3.0)

Water World Theory Of Life’s Origins Outlined In New Study

Whitney Clavin, NASA’s Jet Propulsion Laboratory
Life took root more than four billion years ago on our nascent Earth, a wetter and harsher place than now, bathed in sizzling ultraviolet rays. What started out as simple cells ultimately transformed into slime molds, frogs, elephants, humans and the rest of our planet’s living kingdoms. How did it all begin?
A new study from researchers at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., and the Icy Worlds team at NASA’s Astrobiology Institute, based at NASA’s Ames Research Center in Moffett Field, Calif., describes how electrical energy naturally produced at the sea floor might have given rise to life. While the scientists had already proposed this hypothesis — called “submarine alkaline hydrothermal emergence of life” — the new report assembles decades of field, laboratory and theoretical research into a grand, unified picture.
According to the findings, which also can be thought of as the “water world” theory, life may have begun inside warm, gentle springs on the sea floor, at a time long ago when Earth’s oceans churned across the entire planet. This idea of hydrothermal vents as possible places for life’s origins was first proposed in 1980 by other researchers, who found them on the sea floor near Cabo San Lucas, Mexico. Called “black smokers,” those vents bubble with scalding hot, acidic fluids. In contrast, the vents in the new study — first hypothesized by scientist Michael Russell of JPL in 1989 — are gentler, cooler and percolate with alkaline fluids. One such towering complex of these alkaline vents was found serendipitously in the North Atlantic Ocean in 2000, and dubbed the Lost City.
“Life takes advantage of unbalanced states on the planet, which may have been the case billions of years ago at the alkaline hydrothermal vents,” said Russell. “Life is the process that resolves these disequilibria.” Russell is lead author of the new study, published in the April issue of the journal Astrobiology.
Other theories of life’s origins describe ponds, or “soups,” of chemicals, pockmarking Earth’s battered, rocky surface. In some of those chemical soup models, lightning or ultraviolet light is thought to have fueled life in the ponds.
The water world theory from Russell and his team says that the warm, alkaline hydrothermal vents maintained an unbalanced state with respect to the surrounding ancient, acidic ocean — one that could have provided so-called free energy to drive the emergence of life. In fact, the vents could have created two chemical imbalances. The first was a proton gradient, where protons — which are hydrogen ions — were concentrated more on the outside of the vent’s chimneys, also called mineral membranes. The proton gradient could have been tapped for energy — something our own bodies do all the time in cellular structures called mitochondria.
The second imbalance could have involved an electrical gradient between the hydrothermal fluids and the ocean. Billions of years ago, when Earth was young, its oceans were rich with carbon dioxide. When the carbon dioxide from the ocean and fuels from the vent — hydrogen and methane — met across the chimney wall, electrons may have been transferred. These reactions could have produced more complex carbon-containing, or organic compounds — essential ingredients of life as we know it. Like proton gradients, electron transfer processes occur regularly in mitochondria.
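To give a rough sense of how much energy such a gradient can hold, the standard chemiosmotic free-energy relation from textbook bioenergetics can be worked through in a few lines of Python. This is an illustrative sketch, not a formula from the study itself, and the example numbers are chosen to resemble a mitochondrial membrane rather than an ancient vent chimney.

    # Illustrative only: the standard chemiosmotic free-energy relation,
    # not a formula taken from the Astrobiology paper. It shows how a
    # proton gradient across a membrane (or a vent's mineral chimney wall)
    # stores energy that can be tapped to do chemical work.

    R = 8.314    # gas constant, J/(mol*K)
    F = 96485.0  # Faraday constant, C/mol

    def proton_gradient_energy(delta_psi_volts, delta_pH, temp_kelvin=298.0):
        """Free energy change (J/mol) as protons move from outside to inside.

        delta_psi_volts: electrical potential, inside minus outside.
        delta_pH: pH inside minus pH outside (positive if the inside is more alkaline).
        A negative result means the gradient releases energy as protons flow in.
        """
        return F * delta_psi_volts - 2.303 * R * temp_kelvin * delta_pH

    # Example: an interior 150 mV more negative and one pH unit more alkaline
    # than the outside yields roughly -20 kJ per mole of protons.
    print(proton_gradient_energy(-0.150, 1.0) / 1000.0)  # ~ -20.2 kJ/mol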
“Within these vents, we have a geological system that already does one aspect of what life does,” said Laurie Barge, second author of the study at JPL. “Life lives off proton gradients and the transfer of electrons.”
As is the case with all advanced life forms, enzymes are the key to making chemical reactions happen. In our ancient oceans, minerals may have acted like enzymes, interacting with chemicals swimming around and driving reactions. In the water world theory, two different types of mineral “engines” might have lined the walls of the chimney structures.
“These mineral engines may be compared to what’s in modern cars,” said Russell. “They make life ‘go’ like the car engines by consuming fuel and expelling exhaust. DNA and RNA, on the other hand, are more like the car’s computers because they guide processes rather than make them happen.”
One of the tiny engines is thought to have used a mineral known as green rust, allowing it to take advantage of the proton gradient to produce a phosphate-containing molecule that stores energy. The other engine is thought to have depended on a rare metal called molybdenum. This metal also is at work in our bodies, in a variety of enzymes. It assists with the transfer of two electrons at a time rather than the usual one, which is useful in driving certain key chemical reactions.
“We call molybdenum the Douglas Adams element,” said Russell, explaining that the atomic number of molybdenum is 42, which also happens to be the answer to the “ultimate question of life, the universe and everything” in Adams’ popular book, “The Hitchhiker’s Guide to the Galaxy.” Russell joked, “Forty-two may in fact be one answer to the ultimate question of life!”
The team’s origins of life theory applies not just to Earth but also to other wet, rocky worlds.
“Michael Russell’s theory originated 25 years ago and, in that time, JPL space missions have found strong evidence for liquid water oceans and rocky sea floors on Europa and Enceladus,” said Barge. “We have learned much about the history of water on Mars, and soon we may find Earth-like planets around faraway stars. By testing this origin-of-life hypothesis in the lab at JPL, we may explain how life might have arisen on these other places in our solar system or beyond, and also get an idea of how to look for it.”
For now, the ultimate question of whether the alkaline hydrothermal vents are the hatcheries of life remains unanswered. Russell says the necessary experiments are jaw-droppingly difficult to design and carry out, but decades later, these are problems he and his team are still happy to tackle.
The California Institute of Technology in Pasadena manages JPL for NASA.

Gene Variant Confers Higher Alzheimer’s Risk For Women

Brett Smith for redOrbit.com – Your Universe Online

New research from scientists at Stanford University has revealed a genetic variant that raises the risk of developing Alzheimer’s disease in women, but not in men.

After analyzing data on numerous older individuals who were followed over time, the scientists identified a genetic variant called ApoE4 that confers the sex-specific elevated risk, according to their report in the Annals of Neurology.

While more women suffer from Alzheimer’s than men, women also tend to live longer, which increases their odds of developing the cognitive disorder. However, the study researchers found that the risk from the genetic variant remained even after accounting for age.

“Even after correcting for age, women appear to be at greater risk,” said Dr. Michael Greicius, study author and assistant professor of neurology at Stanford.

In the study, researchers examined records stored in two large, openly available databases. In one repository, the scientists reviewed clinical assessments of 5,000 people whose cognitive test results were normal at the outset and 2,200 people who had initially shown indications of mild cognitive impairment.

Within both groups, being an ApoE4 carrier was associated with a higher probability of Alzheimer’s disease. In addition, the team saw that among those who began with normal cognitive function, the increased risk was minimal for men, while women who carried the ApoE4 variant had nearly double the odds of progressing to mild cognitive impairment or Alzheimer’s disease compared with non-carriers.

“Our study showed that, among healthy older controls, having one copy of the ApoE4 variant confers a substantial Alzheimer’s disease risk in women, but not in men,” Greicius said.

From the second database, researchers examined imaging information and measurements of a number of biomarkers from spinal fluid that signal mild cognitive impairment and ultimately Alzheimer’s disease. Evaluation of 1,000 patients’ files from this collection not only validated ApoE4’s sex-specific effect, but it also produced evidence that may assist investigators in exploring, and potentially explaining, the molecular components connecting ApoE4 to Alzheimer’s disease, Greicius said.

The ApoE gene codes for a protein vital for transporting fatty substances throughout the body. This is especially critical in the nervous system, as brain function relies on the rapid rearrangement of these fatty substances along and among nerve cell membranes. The ApoE gene comes in three varieties – ApoE2, ApoE3 and ApoE4 – which are inherited based on variants in the gene’s sequence. This means the ApoE protein also comes in three versions, whose structures and functions differ.

Most people carry two copies of the ApoE3 variant – one from each parent. However, approximately 20 percent of people have at least one copy of the risk-elevating ApoE4 variant, and a small proportion carries two ApoE4 copies. Several previous reports have confirmed that ApoE4 is a high-risk factor for Alzheimer’s disease, with one copy of ApoE4 doubling or quadrupling that risk. Having two copies has been found to boost the risk of Alzheimer’s by 10 times.
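
As a rough illustration of how those figures scale with ApoE4 copy number, here is a tiny Python sketch; the multipliers are simply the approximate ranges quoted above, and the function and variable names are assumptions for illustration, not part of the study (which focused on the sex-specific effect rather than these population-level figures).

    # Illustrative only: approximate Alzheimer's relative-risk multipliers
    # by ApoE4 copy number, using the ranges quoted in the article.
    # These are population-level figures, not a clinical tool.

    APPROX_RISK_MULTIPLIER = {
        0: (1.0, 1.0),    # no ApoE4 copies: baseline risk
        1: (2.0, 4.0),    # one copy: roughly double to quadruple the risk
        2: (10.0, 10.0),  # two copies: roughly ten times the risk
    }

    def risk_range(apoe4_copies):
        """Return the (low, high) approximate risk multiplier for 0, 1 or 2 copies."""
        return APPROX_RISK_MULTIPLIER[apoe4_copies]

    print(risk_range(1))  # (2.0, 4.0)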

Greicius said the results of the study mean clinicians need to take different approaches with Alzheimer’s patients based on their sex.

“These days, a lot of people are getting genotyped either in the clinic or commercially,” he said. “People come to me and say, ‘I have an ApoE4 gene, what should I do?’ If that person is a man, I would tell him that his risk is not increased much if at all. If it’s a woman, my advice will be different.”

Climbing Mount Everest To Learn More About Type 2 Diabetes

April Flowers for redOrbit.com – Your Universe Online

Most people who engage in extreme sports, such as climbing Mount Everest, do so in an attempt to gain insight about themselves. A group of researchers from the University of Southampton and University College London (UCL) climbed Mount Everest to gain new insight into the molecular process by which some people develop type 2 diabetes. Their findings, published in PLOS ONE, could point to new ways of preventing people from developing the condition.

The study was designed to assess the mechanisms by which low oxygen levels in the body, called hypoxia, are associated with insulin resistance.

Insulin allows the body to regulate blood sugar levels, which is necessary for good health. When cells fail to respond to insulin, the patient is said to have developed insulin resistance. This resistance leaves too much sugar in the body, which is toxic and leads to type 2 diabetes.

Following sustained exposure to hypoxia at high altitude (six to eight weeks), the researchers found that several markers of insulin resistance were increased. The change in these biomarkers was also related to increased blood levels of markers of inflammation and oxidative stress.

The researchers collected their data as part of a 2007 study called Caudwell Xtreme Everest. This study was coordinated by the UCL Center for Altitude, Space and Extreme Environment Medicine (CASE Medicine) and led by Mike Grocott, Professor of Anaesthesia and Critical Care at Southampton.

Grocott is also the co-founder of UCL CASE Medicine and currently leads the Critical Care Research Area within the Southampton National Institute for Health Research (NIHR) Respiratory Biomedical Research Unit.

“These results have given us useful insight into the clinical problem of insulin resistance. Fat tissue in obese people is believed to exist in a chronic state of mild hypoxia because the small blood vessels are unable to supply sufficient oxygen to fat tissue. Our study was unique in that it enabled us to see things in healthy people at altitude that we might normally only see in obese people at sea level. The results suggest possible interventions to reduce progression towards full-blown diabetes, including measures to reduce oxidative stress and inflammation within the body,” Grocott said in a statement.

Twenty-four participants traveled to Mount Everest, undergoing assessments of glucose control, body weight changes and inflammation biomarkers at Everest Base Camp. Half of the study group remained at Base Camp, at an altitude of approximately 17,400 feet, while half climbed to a maximum elevation of approximately 29,000 feet—the highest point on the mountain. Measurements were taken for both groups at six weeks and eight weeks into the trip.

The research team wanted to increase scientific understanding of critically ill patients, as well as achieve the first ever blood oxygen measurement for a human at 27,600 feet on the Balcony of Everest—a small platform past Camp IV where climbers can rest and take in the view at the top of the world.

Hypoxia is a fundamental problem for patients who are critically ill, and this study is part of an extensive and continuing program of research into hypoxia, improving the care of the critically ill, and human performance at extreme altitudes. The same team of researchers has continued this program with Xtreme Everest 2 in the spring of 2013.

Dr. Daniel Martin, Senior Lecturer and Honorary Consultant, UCL Division of Surgery and Interventional Science and Director of UCL CASE Medicine, added, “These exciting results give us a unique insight into the possible mechanism of insulin resistance in diabetes and provide some clues as to where we should be thinking about focusing further research on novel treatments for this disease. It also demonstrates the value to patients at sea level of studies carried out on healthy volunteers at high altitude. Our high altitude experimental model for investigating everyday illnesses that involve tissue hypoxia is a fantastic way to test hypotheses that would otherwise be very difficult to explore.”

People Often Perceive Calorie Content Based On Texture Of Food: Study

April Flowers for redOrbit.com – Your Universe Online

Our taste in food is an intimately personal thing. Some foods we savor, and some we despise. Taste, texture and temperature are just a few of the factors that can make a difference.

A team of researchers wondered whether the way we chew and eat our food could also affect how much we consume. Their results, published in a recent issue of the Journal of Consumer Research, suggest that people perceive foods that are hard or have a rough texture as having fewer calories.

The research team included Dipayan Biswas and Courtney Szocs of the University of South Florida, Aradhna Krishna of the University of Michigan, and Donald R. Lehmann of Columbia University.

“We studied the link between how a food feels in your mouth and the amount we eat, the types of food we choose, and how many calories we think we are consuming,” the team wrote in a statement.

The research team conducted five laboratory studies in which participants were asked to sample foods that were hard, soft, rough or smooth, and then to estimate the calorie content of each food.

In one of the studies, the research team asked participants to watch and evaluate a series of television ads. The participants were given cups filled with bite-sized brownie pieces as a token of appreciation for their time. After watching the ads, half of the participants were asked about the caloric content of the brownies and half were not. Each of these two groups was then split in half again, with one half receiving hard brownie bites and the other receiving soft ones.

Participants who were not made to focus on caloric content consumed a higher volume of brownies when they were soft instead of hard. The group that was asked to focus on caloric content, however, consumed a higher volume of hard brownie bites rather than soft ones.

The researchers suggest that brands interested in promoting the health benefits of their products can emphasize texture as well as draw attention to low-calorie options. “Understanding how the texture of food can influence calorie perceptions, food choice, and consumption amount can help nudge consumers towards making healthier choices,” the authors conclude.

Eating Rice On A Regular Basis Linked To Better Health And Diet Quality

April Flowers for redOrbit.com – Your Universe Online
People can improve their diets, according to a new study from Baylor College of Medicine, simply by enjoying white or brown rice as part of their daily routine. The research, funded by the US Department of Agriculture and the USA Rice Federation, was published online in a recent issue of Food and Nutrition Sciences.
The researchers, led by Theresa Nicklas, DrPH, of Baylor College of Medicine, analyzed the National Health and Nutrition Examination Survey (NHANES) datasets from 2005-2010 for the study. Then, using a nationally representative sample of 14,386 US adults, they evaluated the association of rice consumption with overall diet quality and key nutrient intakes.
“Our results show that adults who eat rice had diets more consistent with what is recommended in the U.S. Dietary Guidelines, and they showed higher amounts of potassium, magnesium, iron, folate and fiber while eating less saturated fat and added sugars,” said Nicklas. “Eating rice is also associated with eating more servings of fruit, vegetables, meat and beans,” she added.
Americans consume approximately 27 pounds of enriched white and brown rice per person per year. Around 70 percent of rice consumption consists of enriched white rice. There are a variety of grain-based foods in the average American diet, with rice standing out because it is primarily eaten as an intact grain. Rice is naturally sodium free, with only trace amounts of fat, and no saturated fat. This allows consumers to control the addition of fat, salt and flavors at their own discretion.
The current study builds on two prior studies that demonstrated the positive contribution of rice to diet quality. In 2009, an observational study using NHANES datasets and data from the Continuing Survey of Food Intakes by Individuals (CSFII) found that rice eaters consumed significantly less fat. Rice eaters also consumed more iron, potassium, fiber, meat, vegetables and grains. A follow-up study in 2010 also used the NHANES datasets, but included children in the study group. That study confirmed that rice consumption was associated with greater intake of a range of healthier foods and nutrients. Because the majority of rice consumed was white rice, the researchers suggest that when rice is eaten with other foods such as fruit, vegetables, meat and beans, it provides valuable nutrients while boosting the beneficial effects on an individual’s diet.
“These studies taken together demonstrate that if you focus on eating the right combination of foods, it will help Americans get closer to meeting their nutrient needs. The key recommendation of the Dietary Guidelines is, after all, that our goal should be to aim for a healthy eating pattern. These studies show that rice eaters are doing this,” said Anne Banville, vice president of the USA Rice Federation.
A human clinical trial also found that adding brown or white rice to a meal increased satiety and feelings of fullness more than a calorically equivalent glucose solution control. Taken together with the cross-sectional findings, this leads the research team to suggest that both enriched white rice and whole grain brown rice be recommended as part of a healthy diet.

New Screening Method Could Detect Autism In 9 Month Old Infants

redOrbit Staff & Wire Reports – Your Universe Online
The identification of two new biomarkers could help medical researchers identify autism spectrum disorders (ASD) in children as young as nine months old – one year earlier than the average screening age.
According to lead author Carole A. Samango-Sprouse, an associate clinical professor of pediatrics at George Washington University, head circumference and head tilting reflex are reliable ways to determine whether or not children between the ages of 9 and 12 months could be autistic.
While the US Centers for Disease Control and Prevention (CDC) reports that ASD can be identified in children as young as two years old, most children are not diagnosed until the age of four. Although multiple research papers claim parents of autistic children have anecdotally noticed developmental problems during the first year of life, the investigators point out that there has been no official diagnostic method to identify those children.
“While the ‘gold standard’ screening tool is the M-CHAT questionnaire, it must be read and completed by parents and then interpreted by a health care provider,” said Samango-Sprouse.
“What physicians are missing is a quick and effective screening measure that can easily be given to all infants regardless of background and identify ASD before 12 months. This screening is also helpful in identifying those babies who may not initially appear to be at risk and would otherwise be missed until much later in life,” she added.
The authors looked at the use of head circumference and head tilting reflex as biomarkers that could potentially be assessed during routine, non-illness-related visits to primary care providers. Nearly 1,000 patients underwent both screenings during their four-, six-, and nine-month well-baby visits and were evaluated at the end of nine months, they explained.
Infants with head circumferences at or above the 75th percentile, those with a head circumference discrepancy of at least 10 percent in comparison to the baby’s height, or those who did not pass a head tilting reflex test were deemed to be at-risk for ASD or a developmental language delay. A neurodevelopmental specialist and pediatric neurologist were then brought in to evaluate those children and differentiate between the two disorders.
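To make that screening rule concrete, here is a minimal, illustrative Python sketch of the decision logic as the article describes it. The function name and inputs are assumptions for illustration only, and because the article does not spell out how the 10 percent discrepancy is measured, the sketch assumes it is the gap between the head-circumference and height percentiles; it is not a clinical tool or the study’s actual protocol.

    # Illustrative sketch only: the at-risk rule as described in the article.
    # Names are hypothetical; the 10 percent discrepancy is assumed to be
    # measured between head-circumference and height percentiles.

    def flag_at_risk(head_circ_percentile, height_percentile, passed_head_tilt_reflex):
        """Return True if an infant meets any of the article's at-risk criteria
        for ASD or developmental language delay."""
        large_head = head_circ_percentile >= 75
        discrepancy = (head_circ_percentile - height_percentile) >= 10
        failed_reflex = not passed_head_tilt_reflex
        return large_head or discrepancy or failed_reflex

    # Example: an infant at the 80th percentile for head circumference and the
    # 60th for height would be flagged even if the reflex test is passed.
    print(flag_at_risk(80, 60, True))  # True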
Forty-nine infants displayed abnormal results without previous diagnosis. Of those, 15 were identified as at-risk for autism and 34 were at-risk for developmental language delay. Furthermore, 14 of the 15 children who were at-risk for ASD were clinically diagnosed with the disorder when they turned three years old.
“We will continue looking at the efficacy of the head circumference and head tilting reflex as a screening tool for these disorders,” said Dr. Andrea Gropman, a contributor to the study and division chief of neurogenetics at Children’s National Hospital in Washington, DC.
“As with all developmental delays, especially ASD, the sooner we can identify those children who are at risk, the sooner we can intervene and provide appropriate treatment. In other words, the sooner we identify these delays, the better the outcome for those affected,” she added.
In related news, research published earlier this month in the Journal of Pediatric Nursing found that children suffering from ASD could benefit from interacting with dogs. According to the authors, those pets could provide the youngsters with unconditional love, companionship, stress relief and opportunities to learn responsibility.
Gretchen Carlisle, a research fellow at the University of Missouri College of Veterinary Medicine’s Research Center for Human-Animal Interaction, and her co-authors interviewed 70 parents with autistic children and found that nearly two-thirds of those families had pet dogs. Of those families, 94 percent said that the kids had bonded with their pets, and even 70 percent of non-dog owners said that their children liked dogs.

StarCraft II Study Indicates Cognitive Decline Begins At Age 24

redOrbit Staff & Wire Reports – Your Universe Online

If performance in computer or video games is any indication, a person has already reached his or her peak cognitive performance levels by the age of 24, according to new research published in a recent edition of the journal PLOS ONE.

Lead author Joe Thompson, a doctoral student in psychology at Simon Fraser University, and his colleagues set out to determine when humans begin to experience age-related decline in their cognitive motor skills, and how they compensate for those performance issues.

They reviewed the digital performance records of more than 3,300 StarCraft II players aged 16 to 44. The authors explain that the real-time strategy game is an immensely popular title that often inspires players to compete for real-world money. The records can be readily replayed and represent nearly 900 hours of “strategic real-time cognitive-based moves performed at varied skill levels,” they noted.

Using complex statistical modeling, Thompson, his thesis supervisor, SFU professor Dr. Mark Blair, and statistics and actuarial science doctoral student Andrew Henrey sifted through those hundreds of hours’ worth of data and distilled information about how players responded to their opponents, as well as their reaction times during matches.

“After around 24 years of age, players show slowing in a measure of cognitive speed that is known to be important for performance. This cognitive performance decline is present even at higher levels of skill,” Thompson, who completed the paper as his thesis, explained in a statement Friday.

“Older players, though slower, seem to compensate by employing simpler strategies and using the game’s interface more efficiently than younger players, enabling them to retain their skill, despite cognitive motor-speed loss,” he added. For example, the authors note that older gamers typically use shortcuts and sophisticated command keys to help compensate for the decline in their real-time decision-making ability.

According to Thompson, the results suggest that the cognitive-motor capabilities of a person do not remain fixed throughout his or her entire adult life. They are in a state of constant flux, he said, and an individual’s daily performance levels are the result of the ongoing interplay between changes and adaptations.

While the PLOS ONE paper does not address how our ever more computerized world could eventually affect the adaptive behaviors we use to compensate for declining cognitive motor skills, the authors said our increasingly digital world will continue to provide a wealth of data that can help scientists with similar social research projects in the near future.