High Arctic Species On Thin Ice

A new assessment of the Arctic’s biodiversity reports a 26 percent decline in species populations in the high Arctic.

Lemmings, caribou and red knot are among the species whose populations have declined over the past 34 years, according to the first report from The Arctic Species Trend Index (ASTI), which provides crucial information on how the Arctic’s ecosystems and wildlife are responding to environmental change.

While some of these declines may be part of a natural cycle, there is concern that pressures such as climate change may be exacerbating natural cyclic declines.

In contrast, population levels of species living in the sub-Arctic and low Arctic are relatively stable and in some cases, increasing. Populations of marine mammals, including bowhead whales found in the low Arctic, may have benefited from the recent tightening of hunting laws. Some fish species have also experienced population increases in response to rising sea temperatures.

“Rapid changes to the Arctic’s ecosystems will have consequences for the Arctic that will be felt globally. The Arctic is host to abundant and diverse wildlife populations, many of which migrate annually from all regions of the globe. This region acts as a critical component in the Earth’s physical, chemical, and biological regulatory system,” says lead author Louise McRae from the Zoological Society of London (ZSL).

Data collected on migratory Arctic shorebirds show that their numbers have also decreased. Further research is now needed to determine whether this is the result of changes in the Arctic or at other stopover sites on their migration.

Louise McRae adds: “Migratory Arctic species such as brent goose, dunlin and turnstone are regular visitors to the UK’s shores. We need to sit up and take notice of what’s happening in other parts of the world if we want to continue to experience a diversity of wildlife on our own doorstep.”

The ASTI includes almost 1,000 datasets on Arctic species population trends, including representation from 35 percent of all known vertebrate species found in the Arctic.

Co-author Christoph Zöckler from the UNEP-World Conservation Monitoring Centre says: “The establishment of these results comes at a crucial time for finding accurate indicators to monitor global biodiversity as governments strive to meet their targets of reducing biodiversity loss.”

The findings of the first ASTI report will be presented at the ‘State of the Arctic’ Conference in Miami, USA. The full report will be available to download from www.asti.is on Wednesday March 17th, 2010.

The Arctic Species Trend Index (ASTI) uses population monitoring data to track trends in marine, terrestrial and freshwater Arctic vertebrate species. The index provides a composite measure of overall Arctic vertebrate population trends and can also be broken down by taxonomy, biome, region or migratory status. It currently draws on almost 1,000 Arctic vertebrate population datasets. The ASTI was commissioned by the Arctic Council’s CAFF Circumpolar Biodiversity Monitoring Program, and its development was a collaboration between the CBMP, the Zoological Society of London, the UNEP World Conservation Monitoring Centre and the World Wide Fund for Nature.
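
The article does not spell out the index arithmetic. As an illustration only, the sketch below assumes a Living Planet Index-style calculation (the approach ZSL uses for comparable composite indices): average the logarithm of each population's year-over-year change, then chain those averages into a single index. The input series are invented.

    # Illustrative sketch of a composite population-trend index, assuming an
    # LPI-style method; the ASTI's exact arithmetic is not described in this
    # article, and the series below are made up.
    from math import log10

    def composite_index(populations, base=100.0):
        """populations: list of yearly abundance series, all of equal length."""
        years = len(populations[0])
        index = [base]
        for t in range(1, years):
            # mean log10 of the year-over-year change across all populations
            rates = [log10(p[t] / p[t - 1]) for p in populations if p[t - 1] > 0 and p[t] > 0]
            index.append(index[-1] * 10 ** (sum(rates) / len(rates)))
        return index

    # Two invented series declining by 26 percent overall: the index falls
    # from 100 to roughly 74.
    print(composite_index([[1000, 900, 740], [50, 45, 37]]))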

The Circumpolar Biodiversity Monitoring Program (CBMP) was implemented by the Conservation of Arctic Flora and Fauna (CAFF) Working Group of the Arctic Council. The Circumpolar Biodiversity Monitoring Program is the cornerstone program of CAFF. The Government of Canada currently leads the program. The purpose of the Program is to harmonize and enhance long-term biodiversity monitoring efforts across the Arctic. The Program operates as an international network of scientists, local resource users, community experts and others. The resulting information is used to assist decision making from the global to local level to conserve biodiversity. All eight Arctic nations participate in the program, as do six international indigenous organizations and over 60 organizational partners.

Doctor Decries British Hospital Ban on Sitting, Flowers

The practices in British health care facilities of prohibiting hospital visitors from sitting on a patient’s bed and of banning flowers from hospital rooms were the topic of a Wednesday editorial in the BMJ medical journal.

In the commentary, author Dr. Iona Heath, a London-based general practitioner, argued that, “Rules that mostly diminish rather than enhance the joys of life have no place in hospitals, where joy is too often in short supply.”

“Doctors should never be discouraged from sitting, because patients consistently estimate that they have been given more time when the doctor sits down rather than stands,” she wrote in the March 16 column. “Standing makes the conversation seem hurried even when it is not…[And] some of the most intimate and effective interactions between doctor and patient that I have either witnessed or experienced have occurred while the doctor has been sitting on the patient’s bed. Such interactions are precious and should be made easier rather than more difficult.”

When contacted by AP Medical Writer Maria Cheng for comment, officials from the UK Department of Health said that there was no national ban on either sitting or flowers, noting that such policy is dictated by individual hospitals and health care facilities.

“It is considered good practice by some (hospitals) that visitors and staff should not sit on beds, in order to reduce the risk of transmitting infections from one patient to the next,” the agency told Cheng.

In her editorial, Heath warned against becoming too concerned about these types of safety practices and neglecting to treat patients with “humanity, common sense, and even humor.”

“Too many patients report that the technological care in hospital is excellent but that the human dimension of care is often lacking,” she advises.

Drought Creating Drinking Water Shortage in China

Southwestern parts of China face a shortage of drinking water due to a rare drought that has dried up rivers and threatens to cripple farms in the Guizhou, Yunnan, and Sichuan provinces, as well as the Guangxi region and the city of Chongqing, the state-controlled Global Times is reporting.

According to a March 17, 2010 AFP article, the Global Times reported that rainfall in these regions has been 60 percent below normal over the past six months.

A reported 17 million people in Guizhou province were experiencing shortages of drinkable water, with 86 of the province’s 88 cities under drought conditions.

The government is distributing emergency water supplies, according to the AFP story, and the Global Times reports that the drought conditions are the worst the country has faced in at least a century. Attempts to seed clouds and induce rainfall have been unsuccessful, according to Xinhua news agency reports, due to a lack of moisture in the clouds.

“Meteorologists have predicted the situation could worsen in coming months as hot and dry weather was expected to continue and water demand rises as farmers turn soon to their spring planting,” writes the AFP.

According to a Bloomberg report posted online at BusinessWeek.com, the drought has benefited Chinese water supply companies, whose stocks rose on Wednesday.

“The drought has left people short of water and will benefit water companies because of demand for their resources,” Shenyin Wanguo Securities Company economist Li Huiyong told the news service in a phone interview. “It is controllable and hasn’t affected most of the main grain producing provinces.”

Preventive Behaviors Limited Household Transmission Of H1N1 Influenza

Simple, common sense behaviors, including having a discussion at home about how to prevent influenza, can help limit the spread of H1N1 in a household, according to a study of the initial outbreak in New York City in 2009. Published in the April 1 issue of The Journal of Infectious Diseases, the study is available online.

People with influenza symptoms are often told to stay home from work or school, which is why scientists need to understand how household transmission works and how to control it, not only in responding to H1N1 but also in preparing for future pandemics.

Anne Marie France, PhD, MPH, and her colleagues at the New York City Department of Health and Mental Hygiene and the Centers for Disease Control and Prevention (CDC) surveyed household members of ill students from the New York City high school where the H1N1 outbreak was first documented in April 2009. Because H1N1 was not yet established in the community, secondary cases of influenza-like illness were most likely acquired at home. One-third of the school’s students were sick with influenza and told to stay home, and 322 households representing 702 household contacts responded to the survey.  Seventy-nine contacts reported influenza-like illness, representing an 11.3 percent secondary attack rate (SAR), with half of the cases occurring within three days and 87 percent within seven days after the initial student reported symptoms.
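
The secondary attack rate quoted above is straightforward arithmetic: ill household contacts divided by all household contacts. A minimal sketch using only the figures reported in this article:

    # Secondary attack rate (SAR) from the figures quoted above; the study's
    # own analysis was, of course, more involved than this.
    ill_contacts = 79
    total_contacts = 702

    sar = ill_contacts / total_contacts
    print(f"Secondary attack rate: {sar:.1%}")  # prints 11.3%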

Having a household discussion about how to prevent transmission was associated with a 40 percent reduction in risk for influenza among others in the household. Providing care for the sick student increased the risk among parents, the researchers found, while watching television and playing video games with the student was a risk factor for siblings.

The finding that a household discussion had a protective effect is especially relevant, given that a vaccine might not be available early in a pandemic. “This is important because it indicates that behavioral changes can be effective in decreasing the risk for secondary illness within a household,” Dr. France said. “Understanding the risk and prevention factors that determine household transmission is very important to containing influenza, particularly if the strain of influenza is severe, and it is determined that attempting to contain it is critical to the national management of a pandemic.”

The study also found that the risk of acquiring influenza-like illness was most strongly related to age, with the highest SAR (30 percent) among contacts under 5 years of age and the lowest (2.1 percent) in those aged 55 or older. The findings highlight that children can be the principal spreaders of an infection in the early stages of an epidemic, especially in the household, and suggest children should be the focus of preventive measures.

Future studies on household transmission “should attempt to measure the details of interaction between ill and initially non-ill household members,” Dr. France noted, such as hand washing and covering coughs, to determine how these behaviors, in addition to minimizing time spent with ill household members, factor into preventing transmission.

In an accompanying editorial, Ruth Lynfield, MD, of the Minnesota Department of Health, agreed and observed that the findings “are useful in reinforcing public health recommendations for infection control within households of infected individuals.” When early action is most important at the beginning of a pandemic, Dr. Lynfield wrote, implementation is best reinforced by “data that support simple interventions in the household that are important for infection prevention.”

The study also found a protective effect associated with preventive antiviral treatment, or prophylaxis. But the authors and the accompanying editorial highlight reports of the development of antiviral resistance and the need to reserve these drugs for influenza patients most at risk for developing complications, in line with recommendations from the Centers for Disease Control and Prevention (CDC).

Fast Facts

1. A household discussion about influenza prevention and transmission reduced the risk of family members passing on the virus to each other by 40 percent.

2. Transmission of the virus was rapid, with half of secondary influenza cases (in which one family member infected another) occurring within three days and almost 90 percent within one week.

3. To help prevent the spread of influenza, cover your nose and mouth when you cough or sneeze. Wash your hands often with soap and water.

Potent Radiation Treatment Provides Tumor Control For Patients With Inoperable Lung Cancer

Early findings suggest a radiation therapy that involves numerous highly-focused and potent radiation beams provides targeted tumor control in nearly all patients, reduces treatment-related illness, and may ultimately improve survival for patients with inoperable non-small cell lung cancer, according to a study in the March 17 issue of JAMA, a theme issue on cancer.

Robert Timmerman, M.D., of the University of Texas Southwestern Medical Center, Dallas, presented the findings of the study at a JAMA media briefing.

Patients with inoperable early stage lung cancer are generally offered conventional radiation treatment (most commonly given during 20-30 outpatient treatments) or observed without specific cancer therapy. “Outcomes are not ideal with either approach. Conventional radiotherapy fails to durably control the primary lung tumor in 60 percent to 70 percent of patients. More than half of patients ultimately die specifically from progressive lung cancer with observation, and 2-year survival is less than 40 percent with either approach,” the authors write. Stereotactic body radiation therapy (SBRT) is a noninvasive cancer treatment in which numerous small, highly focused, and accurate radiation beams are used to deliver potent doses in 1 to 5 treatments to tumor targets.

Dr. Timmerman and colleagues conducted the Radiation Therapy Oncology Group (RTOG) 0236 trial, the first North American multicenter, cooperative group study to test SBRT in treating medically inoperable patients with early stage non-small cell lung cancer. The Phase 2 study included patients 18 years or older with biopsy-proven peripheral T1-T2N0M0 non-small cell tumors (measuring less than 5 cm. in diameter) and medical conditions that would not allow surgical treatment. Radiation treatment lasted between 1.5 and 2 weeks. The study opened May 2004 and closed October 2006, with data analyzed through August 2009. The final study population included 55 patients (44 with T1 tumors and 11 patients with T2 tumors), with a median (midpoint) follow-up of 34.4 months.

The primary outcome measured for the study was 2-year actuarial primary tumor control; secondary end points were disease-free survival (i.e., primary tumor, involved lobe, regional, and disseminated recurrence [the reappearance or return of a cancer in multiple areas of the body]), treatment-related toxicity and overall survival.

Of all the patients in the study, only one experienced a documented tumor recurrence or progression at the primary site. The 3-year primary tumor control rate was 97.6 percent. Three patients had recurrence within the involved lobe; the 3-year primary tumor and involved lobe (local) control rate was 90.6 percent. Combining local and regional failures, the 3-year local-regional control rate was 87.2 percent. Disseminated recurrence as some component of recurrence was reported in 11 patients. The 3-year rate of disseminated failure was 22.1 percent with 8 such failures occurring prior to 24 months.

Disease-free survival and overall survival at 3 years were 48.3 percent and 55.8 percent, respectively. Median disease-free survival and overall survival for all patients were 34.4 months and 48.1 months, respectively. Seven patients (12.7 percent) and 2 patients (3.6 percent) were reported to experience protocol-specified treatment-related grade 3 and 4 adverse events, respectively. No grade 5 treatment-related adverse events were reported. Higher grades indicate greater severity of adverse event, with grade 5 indicating death.

“The main finding in this prospective study was the high rate of primary tumor control (97.6 percent at 3 years). Primary tumor control is an essential requirement for the cure of lung cancer,” the authors write. “Stereotactic body radiation therapy as delivered in RTOG 0236 provided more than double the rate of primary tumor control than previous reports describing conventional radiotherapy.”

“The RTOG 0236 trial demonstrated that technologically intensive treatments like SBRT can be performed in a cooperative group so long as the proper infrastructure and support are put in place. The RTOG will be building on RTOG 0236 to (1) design a trial to address the rather high rate of disseminated failure observed after treatment, (2) complete a trial to determine a safe and effective dose for central lung tumors and (3) complete a trial to refine the dose of SBRT for peripheral tumors.”

(JAMA 2010;303[11]:1070-1076)

Mercury Craters Receive New Names

The International Astronomical Union (IAU) recently approved a proposal from the MESSENGER Science Team to confer names on 10 impact craters on Mercury. The newly named craters were imaged during the mission’s three flybys of Mercury in January and October 2008 and September 2009.

The IAU has been the arbiter of planetary and satellite nomenclature since its inception in 1919. In keeping with the established naming theme for craters on Mercury, all of the craters are named after famous deceased artists, musicians, or authors.

“All of the newly named features figure importantly in ongoing analysis of Mercury’s geological history,” says MESSENGER Principal Investigator Sean Solomon of the Carnegie Institution of Washington. “The MESSENGER Science Team is pleased that the IAU has responded promptly to our latest request for new names, so that the identities of these craters in the scientific literature can be clearly conveyed.”

The newly named craters include:

* Bek, named for the chief royal sculptor (active c. 1340 B.C.) during the reign of Pharaoh Akhenaten, a Pharaoh of the 18th dynasty of Egypt. Bek is credited with the development of the “Amarna Style,” the distinctive and often peculiar combination of the exceptionally mannered and the naturalistic.

* Copland, for Aaron Copland (1900-1990), an American composer of concert and film music, as well as an accomplished pianist. He was instrumental in forging a distinctly American style of composition and is widely known as the dean of American composers.

* Debussy, for Claude Debussy (1862-1918), among the most important of French composers and one of the most prominent figures working within the field of impressionist music. He was a central figure in European music at the turn of the 20th Century.

* Dominici, for Maria de Dominici (1645-1703), a Maltese sculptor and painter said to have made portable cult figures used for street processions on religious feast days.

* Firdousi, for Hakīm Abu’l-Qāsim Firdawsī Tūsī (935-1020), a revered Persian poet and author of the Shāhnāmeh, the national epic of the Persian people and of the Iranian world.

* Geddes, for Wilhelmina Geddes (1887-1955), an Irish stained-glass artist and member of the Arts and Crafts Movement. Her work represented a rejection of the Late Victorian approach, and she created a new view of men in stained glass windows, portraying them with close-shaven crew cuts.

* Hokusai, for Katsushika Hokusai (1760-1849), a Japanese artist and printmaker of the Edo period. He was Japan’s leading expert on Chinese painting and is best-known as author of the woodblock print series, Thirty-six Views of Mount Fuji, which includes the iconic and internationally recognized print, The Great Wave off Kanagawa, created during the 1820s.

* Kipling, for Rudyard Kipling (1865-1936), a British author and poet regarded as a major innovator in the art of the short story. He is best known for his works of fiction, poems, and many short stories, including those in The Jungle Book (1894).

* Picasso, for Pablo Picasso (1881-1973), a Spanish painter, draughtsman, and sculptor best known for co-founding the Cubist movement and for the wide variety of styles embodied in his work.

* Steichen, for Edward Steichen (1879-1973), an American photographer, painter, and art gallery and museum curator. He was the most frequently featured photographer in Alfred Stieglitz’s groundbreaking magazine Camera Work during its run from 1903 to 1917.

These 10 newly named craters join 42 other craters named since MESSENGER’s first Mercury flyby in January 2008.

The Hot – And Cold – Interventional Radiology Treatments For Recurrent Prostate Cancer

First reported cases of promising future treatment: magnetic resonance-guided ablation (destruction) using laser thermal therapy or cryoablation to treat recurrent prostate cancer following surgery, say Mayo Clinic clinicians

The first known patient cases using magnetic resonance-guided heat (laser interstitial thermal therapy) or cold (cryoablation) to treat prostate cancer recurrence after surgical removal of the prostate gland were presented by physicians at the Society of Interventional Radiology’s 35th Annual Scientific Meeting in Tampa. Many of these patients have also failed salvage radiation treatment and are left with limited therapeutic options. MR-guided focal therapy offers a new treatment choice for these patients because early prostate cancer recurrences are best detected with MR imaging.

“Magnetic resonance-guided ablation may prove to be a promising new treatment for prostate cancer recurrences; it tailors treatment modality (imaging) and duration to lesion size and location and provides a less invasive and minimally traumatic alternative for men,” said David A. Woodrum, M.D., Ph.D., an interventional radiologist at the Mayo Clinic in Rochester, Minn. “The safe completion of four clinical cases using MR-guided ablation therapy to treat prostate cancer in patients who had failed surgery demonstrates this technology’s potential,” he said, stressing, however, that the use of ablation therapy in treating prostate cancer is relatively new.

In the study, a major collaboration between physicians from the Mayo Clinic radiology and urology departments, the four patients with recurrent prostate cancer had previously been treated with radical prostatectomy, an operation to remove the entire prostate gland and adjacent structures. The men then underwent salvage therapy with either MR-guided laser interstitial thermal therapy, which uses high temperatures generated by local absorption of laser energy, or cryoablation, which destroys cancerous tissue by freezing it with extremely cold gas. By using MR imaging with temperature mapping and/or ice ball growth monitoring, clinicians tailored treatments to lesion size and location.

“Immediately after treatment, we found no definite residual tumor. The treatment preserved the patients’ baseline sexual and urinary function and had no major complications,” added Woodrum.

Prostate cancer is the third most common cause of death from cancer in men and accounted for 27,360 deaths last year. Most of the 192,280 men diagnosed with cancer of the prostate (adenocarcinoma) in 2009 were older than 50 years of age. Prostate cancer management decisions often depend on the degree of spread or stage of the cancer. Treatment alternatives may include surgery, radiation therapy, cryosurgery, chemotherapy, active surveillance or manipulation of hormones that affect the cancer. Cancer recurrence rates after surgical resection can reach 15 percent or more, leaving a significant number of men who require salvage therapy with radiation therapy or hormones; however, each has drawbacks. Some men continue to have detectable residual prostate cancer even after surgical removal and radiation treatment; for these individuals this potential new therapy with imaging guidance presents an innovative method to target and treat the cancer when all others have failed.

In this retrospective review, each of the four men was found to have post-surgical recurrent prostate cancer detected by MR imaging. Two were treated with MR-guided laser interstitial thermal therapy; the other two were treated with cryoablation. Biopsy-proven cancer lesions, roughly 6 millimeters and larger, were located in the prostate bed just inferior to the bladder and anterior to the rectum, where the prostate gland had previously resided. The men had no detectable metastases at the time of treatment. For both ablation methods, two to three probes or applicators were used in each case. Intermittent MR imaging was employed during the procedures for placement of the probes/applicators and to actively monitor ablation size during treatment to completely cover the lesion.

While stressing that this treatment is still in its formative stages, Lance A. Mynderse, M.D., a Mayo Clinic urologist, noted that additional work is needed to determine which patients will be best suited for the ablation procedure and to examine middle- and long-term results for efficacy.

Abstract 156: “MR-Guided Trans-perineal Laser and Cryoablation of Locally Recurrent Prostate Adenocarcinoma Following Radical Prostatectomy,” D.A. Woodrum, L.A. Mynderse, A. Kawashima, K.R. Gorny, T.D. Atwell, K.K. Amrami, H. Bjarnason, M.R. Callstrom and E.F. McPhail, all at Mayo Clinic, Rochester, Minn.; and B. Bolster, Siemens, Rochester, Minn., SIR 35th Annual Scientific Meeting March 13, 2010, Tampa, Fla. This abstract can be found at www.SIRmeeting.org.

NASA Finds Deep Ice Holds Living Creatures

Scientists trying to catch a glimpse of what the underbelly of an ice sheet in Antarctica looks like got the surprise of a lifetime when they found a shrimp-like amphipod and a jellyfish thriving in the subfreezing dark water.

Six hundred feet below the ice where no light is found, scientists had believed that nothing more than microbes could exist. But when the NASA team lowered a video camera to the depths to look around, they watched as a curious 3-inch-long Lyssianasid amphipod came swimming by and snagged on to the camera cable. Scientists also pulled up a tentacle from what they believe to be a jellyfish.

NASA ice scientist Robert Bindschadler, who will be presenting the findings and a video at an American Geophysical Union meeting Wednesday, told The Associated Press, “We were operating on the presumption that nothing’s there.”

The video may inspire experts to reassess what they know about life in harsh environments. It did have scientists reflecting on the possibilities that if shrimp-like creatures can thrive below 600 feet of sea ice, why not in other hostile places like Jupiter’s frozen moon Europa?

“They are looking at the equivalent of a drop of water in a swimming pool that you would expect nothing to be living in and they found not one animal but two,” said biologist Stacy Kim of the Moss Landing Marine Laboratories in California.

“This is a first for the sub-glacial environment with that level of sophistication,” said microbiologist Cynan Ellis-Evans of the British Antarctic Survey, who was intrigued by the finding. He noted that there have been similar findings of complex life in retreating ice shelves, but nothing this deep under the ice before. He did say that it was possible they were just passing through from far away and don’t live there permanently.

But Kim, a co-author of the study, doubts it. The site is at least 12 miles from open seas. It is unlikely that the two creatures swam from a great distance and were randomly filmed in such a small area, she said.

However, what puzzled scientists was what food source could be available there. While some microbes can make their own food out of chemicals in the ocean, the amphipod and the jellyfish cannot, Kim said.

The key question is ‘How do they survive?’ “It’s pretty amazing when you find a huge puzzle like that on a planet where we thought we know everything,” Kim said.

Swiss Company Unveils Watch Made From Dinosaur Poop

Looking for a unique gift to get that special someone? Perhaps you should check out the latest creation from Swiss watchmaker Artya and designer Yvan Arpa.

It’s a timepiece made almost entirely of dinosaur poop.

Yes, you read that right. Dinosaur poop. The Vesenaz, Switzerland-based company unveiled the designer coprolite wristwatch on Monday. The fossilized dung used in the watch’s creation came from a plant eater that lived in what is now the United States some 100 million years ago. The strap, on the other hand, was fashioned from the skin of an American cane toad.

According to an Artya corporate press release, “A relic of the Jurassic period, it has taken millions of years for this organic substance to embrace its present warm and matchless tints. Designed with an understated aesthetic sense, the dial is free of indexes or any other pointless features. In its mineral aspect, it forcefully underscores the pristine strength emanating from the very dawn of life. As a true memento, it is encircled in a round case sculpted in stainless steel grade 316 or, as an affirmation to its prehistoric lineage, in bronze with its characteristic blazing hues.”

“In the manner of all Artya collections, the watchcase is considered as a challenging arena for excelling in decoration,” the press release continues. “Entirely manufactured in the secrecy of the Guer Man workshops tucked away in Ticino, Manuel Zanetti has chiseled, engraved and hammered away to obtain masculine and grainy curves and patterns. With plenty of character, the case makes a firm statement by an impressive architectural array of bold, contrasting volumes that enhance its generous proportions to perfection.”

The timepiece features self-winding mechanical movement; hour, minute, and second functions; a rotor polished with sodium bicarbonate; a 40-plus hour power reserve; non-reflective sapphire coating; water resistance to 30 meters; and a two-year warranty.

How much would all this cost you? The Associated Press (AP) reports that the timepiece will be valued at $11,290. While that might sound like a hefty price tag for a wristwatch, it’s not every day you can buy a watch made from fossilized dinosaur stool, right?

Seeking Dark Matter On A Desktop

Desktop experiments could point the way to dark matter discovery, complementing grand astronomical searches and deep underground observations. According to recent theoretical results, small blocks of matter on a tabletop could reveal elusive properties of the as-yet-unidentified dark matter particles that make up a quarter of the universe, potentially making future large-scale searches easier. This finding was announced today by theorists from the Stanford Institute for Materials and Energy Science (SIMES), a joint institute of the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University, at the American Physical Society meeting in Portland, Oregon.

“Tabletop experiments can be extremely illuminating,” said condensed matter theorist Shoucheng Zhang, who published the results with SIMES colleagues Rundong Li, Jing Wang and Xiao-Liang Qi. “We can make observations in tabletop experiments that help us figure the deeper mysteries of the universe.”

In a paper published in the March 7 online edition of Nature Physics, Zhang and his colleagues describe an experimental set-up that could detect for the first time the axion, a theoretical tiny, lightweight particle conjectured to permeate the universe. With its very small mass and lack of electric charge, the axion is a candidate for the mysterious dark matter particle. Yet, despite much effort, the axion has never been observed experimentally.

That may change thanks to the SIMES theorists’ forefront research into topological insulators. In this small, newly discovered subset of materials, electrons travel with great difficulty through the interior but flow with much less resistance on the surface, much as they can in superconductive materials. Even better, they do this at room temperature. This leads to unusual properties that may be important for applications such as spintronics, an emerging technology that could allow for a new class of low-power, high-density, superior-performance electronic devices.

In their research into other applications for topological insulators, Zhang and his colleagues discovered that the electromagnetic behavior of topological insulators is described by the very same mathematical equations that describe the behavior of axions; wondrously, the laws of the universe related to axions are mirrored in this new class of materials. As a result of this mathematical parallel, the theorists posit that experiments on topological insulators can reveal much about the axions that are predicted to pervade the universe.

“That both are described by the same mathematical equation is the beauty of physics,” said Zhang. “Mathematics is so powerful: it means we can study these things in topological insulators as if they were a baby universe.”

In their paper, Zhang and his colleagues describe one particular class of topological insulator in which the parallel mathematics related to axions is most apparent, and suggest several experiments that could be performed to “see” axions in the electromagnetic behavior of topological insulators. These experiments could offer additional insight into the physical characteristics of the axion, insight that would simplify the astronomical search by giving observers a better idea of where to look for evidence of the axion hidden behind the overall roar of the universe.

“If we ‘see’ an axion in a tabletop experiment, it will be extremely illuminating,” Zhang said. “It will help shed light on the dark matter mystery.”

Image Caption: Shoucheng Zhang of the Stanford Institute for Materials and Energy Science. (Photo courtesy Brad Plummer)

A Golden Bullet For Cancer

Nanoparticles provide a targeted version of photothermal therapy for cancer

In a lecture he delivered in 1906, the German physician Paul Ehrlich coined the term Zauberkugel, or “magic bullet,” as shorthand for a highly targeted medical treatment.

Magic bullets, also called silver bullets because of the folkloric belief that only silver bullets can kill supernatural creatures, remain the goal of drug development efforts today.

A team of scientists at Washington University in St. Louis is currently working on a magic bullet for cancer, a disease whose treatments are notoriously indiscriminate and nonspecific. But their bullets are gold rather than silver. Literally.

The gold bullets are gold nanocages that, when injected, selectively accumulate in tumors. When the tumors are later bathed in laser light, the surrounding tissue is barely warmed, but the nanocages convert light to heat, killing the malignant cells.

In an article just published in the journal Small, the team describes the successful photothermal treatment of tumors in mice.

The team includes Younan Xia, Ph.D., the James M. McKelvey Professor of Biomedical Engineering in the School of Engineering and Applied Science; Michael J. Welch, Ph.D., professor of radiology and developmental biology in the School of Medicine; Jingyi Chen, Ph.D., research assistant professor of biomedical engineering; and Charles Glaus, Ph.D., a postdoctoral research associate in the Department of Radiology.

“We saw significant changes in tumor metabolism and histology,” says Welch, “which is remarkable given that the work was exploratory, the laser ‘dose’ had not been maximized, and the tumors were ‘passively’ rather than ‘actively’ targeted.”

Why the nanocages get hot

The nanocages themselves are harmless. “Gold salts and gold colloids have been used to treat arthritis for more than 100 years,” says Welch. “People know what gold does in the body and it’s inert, so we hope this is going to be a nontoxic approach.”

“The key to photothermal therapy,” says Xia, “is the cages’ ability to efficiently absorb light and convert it to heat.”

Suspensions of the gold nanocages, which are roughly the same size as a virus particle, are not always yellow, as one would expect, but instead can be any color in the rainbow.

They are colored by something called a surface plasmon resonance. Some of the electrons in the gold are not anchored to individual atoms but instead form a free-floating electron gas, Xia explains. Light falling on these electrons can drive them to oscillate as one. This collective oscillation, the surface plasmon, picks a particular wavelength, or color, out of the incident light, and this determines the color we see.

Medieval artisans made ruby-red stained glass by mixing gold chloride into molten glass, a process that left tiny gold particles suspended in the glass, says Xia.

The resonance, and the color, can be tuned over a wide range of wavelengths by altering the thickness of the cages’ walls. For biomedical applications, Xia’s lab tunes the cages to 800 nanometers, a wavelength that falls in a window of tissue transparency that lies between 750 and 900 nanometers, in the near-infrared part of the spectrum.

Light in this sweet spot can penetrate as deep as several inches in the body (either from the skin or the interior of the gastrointestinal tract or other organ systems).

The conversion of light to heat arises from the same physical effect as the color. The resonance has two parts. At the resonant frequency, light is typically both scattered off the cages and absorbed by them.

By controlling the cages’ size, Xia’s lab tailors them to achieve maximum absorption.

Passive targeting

“If we put bare nanoparticles into your body,” says Xia, “proteins would deposit on the particles, and they would be captured by the immune system and dragged out of the bloodstream into the liver or spleen.”

To prevent this, the lab coated the nanocages with a layer of PEG, a nontoxic chemical most people have encountered in the form of the laxatives GoLyTELY or MiraLAX. PEG resists the adsorption of proteins, in effect disguising the nanoparticles so that the immune system cannot recognize them.

Instead of being swept from the bloodstream, the disguised particles circulate long enough to accumulate in tumors.

A growing tumor must develop its own blood supply to prevent its core from being starved of oxygen and nutrients. But tumor vessels are as aberrant as tumor cells. They have irregular diameters and abnormal branching patterns, but most importantly, they have thin, leaky walls.

The cells that line a tumor’s blood vessel, normally packed so tightly they form a waterproof barrier, are disorganized and irregularly shaped, and there are gaps between them.

The nanocages infiltrate through those gaps efficiently enough that they turn the surface of the normally pinkish tumor black.

A trial run

In Welch’s lab, mice bearing tumors on both flanks were randomly divided into two groups. The mice in one group were injected with the PEG-coated nanocages and those in the other with buffer solution. Several days later the right tumor of each animal was exposed to a diode laser for 10 minutes.

The team employed several different noninvasive imaging techniques to follow the effects of the therapy. (Welch is head of the oncologic imaging research program at the Siteman Cancer Center of Washington University School of Medicine and Barnes-Jewish Hospital and has worked on imaging agents and techniques for many years.)

During irradiation, thermal images of the mice were made with an infrared camera. As is true of cells in other animals that automatically regulate their body temperature, mouse cells function optimally only if the mouse’s body temperature remains between 36.5 and 37.5 degrees Celsius (roughly 98 to 100 degrees Fahrenheit).

At temperatures above 42 degrees Celsius (107 degrees Fahrenheit) the cells begin to die as the proteins whose proper functioning maintains them begin to unfold.

In the nanocage-injected mice, the skin surface temperature increased rapidly from 32 degrees Celsius to 54 degrees Celsius (129 degrees Fahrenheit).

In the buffer-injected mice, however, the surface temperature remained below 37 degrees Celsius (98.6 degrees Fahrenheit).
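
The Celsius figures in this section convert to Fahrenheit with the standard formula F = C x 9/5 + 32. A minimal sketch (labels are just shorthand for the temperatures reported above) comparing each value against the roughly 42-degree threshold at which cells begin to die:

    # Convert the Celsius values reported above and flag those at or above the
    # ~42 C threshold at which cells begin to die.
    def c_to_f(celsius):
        return celsius * 9 / 5 + 32

    readings_c = {
        "mouse body temperature, low end": 36.5,
        "mouse body temperature, high end": 37.5,
        "cell-death threshold": 42.0,
        "nanocage-injected tumor surface": 54.0,
        "buffer-injected tumor surface, max": 37.0,
    }

    for label, c in readings_c.items():
        status = "at or above threshold" if c >= 42.0 else "below threshold"
        print(f"{label}: {c:.1f} C = {c_to_f(c):.1f} F ({status})")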

To see what effect this heating had on the tumors, the mice were injected with a radioactive tracer incorporated in a molecule similar to glucose, the main energy source in the body. Positron emission and computerized tomography (PET and CT) scans were used to record the concentration of the glucose lookalike in body tissues; the higher the glucose uptake, the greater the metabolic activity.

The tumors of nanocage-injected mice were significantly fainter on the PET scans than those of buffer-injected mice, indicating that many tumor cells were no longer functioning.

The tumors in the nanocage-treated mice were later found to have marked histological signs of cellular damage.

Active targeting

The scientists have just received a five-year, $2,129,873 grant from the National Cancer Institute to continue their work with photothermal therapy.

Despite their results, Xia is dissatisfied with passive targeting. Although the tumors took up enough gold nanocages to give them a black cast, only 6 percent of the injected particles accumulated at the tumor site.

Xia would like that number to be closer to 40 percent so that fewer particles would have to be injected. He plans to attach tailor-made ligands to the nanocages that recognize and lock onto receptors on the surface of the tumor cells.

In addition to designing nanocages that actively target the tumor cells, the team is considering loading the hollow particles with a cancer-fighting drug, so that the tumor would be attacked on two fronts.

But the important achievement, from the point of view of cancer patients, is that any nanocage treatment would be narrowly targeted and thus avoid the side effects patients dread.

The TV and radio character the Lone Ranger used only silver bullets, allegedly to remind himself that life was precious and not to be lightly thrown away. If he still rode today, he might consider swapping silver for gold.

Image 1: The color of a suspension of nanocages depends on the thickness of the cages’ walls and the size of pores in those walls. Like their color, their ability to absorb light and convert it to heat can be precisely controlled. Credit: WUSTL

Image 2: Infrared images made while tumors were irradiated with a laser show that in nanocage-injected mice (left), the surface of the tumor quickly became hot enough to kill cells. In buffer-injected mice (right), the temperature barely budged. This specificity is what makes photothermal therapy so attractive as a cancer therapy. Credit: WUSTL

Image 3: Gold nanocages (right) are hollow boxes made by precipitating gold on silver nanocubes (left). The silver simultaneously erodes from within the cube, entering solution through pores that open in the clipped corners of the cube. Credit: WUSTL

‘Microtentacles’ Play Role In How Breast Cancer Spreads

University of Maryland research may provide potential target for new therapies to limit metastasis of primary cancers

Researchers at the University of Maryland Marlene and Stewart Greenebaum Cancer Center have discovered that “microtentacles,” or extensions of the plasma membrane of breast cancer cells, appear to play a key role in how cancers spread to distant locations in the body. Targeting these microtentacles might prove to be a new way to prevent or slow the growth of these secondary cancers, the scientists say.

They report in an article to be published online March 15, 2010, in the journal Oncogene that a protein called “tau” promotes the formation of these microtentacles on breast tumor cells which break away from primary cancers and circulate in the bloodstream. While twisted remnants of tau protein have been seen in the brain tissue of patients with Alzheimer’s disease, this is the first report that tau could play a role in tumor metastasis by changing the shape of cancer cells. These tau-induced microtentacles can help the cells reattach to the walls of small blood vessels to create new pockets of cancer.

“Our study demonstrates that tau promotes the creation of microtentacles in breast tumor cells. These microtentacles increase the ability of circulating breast tumor cells to reattach in the small capillaries of the lung, where they can survive until they can seed new cancers,” says the senior author, Stuart S. Martin, Ph.D., a researcher at the University of Maryland Greenebaum Cancer Center and associate professor of physiology at the University of Maryland School of Medicine. Michael A. Matrone, Ph.D., is the study’s lead author.

Healthy cells are programmed to die (a process called apoptosis) after they break off of epithelial layers that cover internal organs in the body. They also can be crushed if they are forced through small capillaries. However, cancer cells are able to survive for weeks, months and even years in the body. Once they are trapped in small blood vessels, the cells can squeeze through microscopic gaps in the vessels’ lining and spread to organs such as the brain, lung and liver.

“We hope that through our research, we will be able to identify drugs that will target the growth of these microtentacles and help to stop the spread of the original cancer. Drugs that reduce tau expression may hold potential to inhibit tumor metastasis,” Dr. Martin says.

He notes that metastatic cancers are the leading cause of death in people with cancer, but methods used to treat primary tumors have limited success in treating metastatic cancer. In breast cancer, metastases can develop years after primary tumors are first discovered.

Tau is present in a subset of chemotherapy-resistant breast cancers and is also associated with poor prognosis, but Dr. Martin adds, “While tau expression has been studied in breast cancers for contributing to chemotherapy resistance, the protein’s role in tumor cells circulating in the bloodstream hasn’t been investigated. And that’s the focus of our research.”

In this recent study, the University of Maryland researchers analyzed breast tumor cells from 102 patients and found that 52 percent had tau in their metastatic tumors and 26 percent (27 patients) showed a significant increase in tau as their cancer progressed. Twenty-two of these patients even had tau in metastatic tumors despite having none in their primary tumors.

Dr. Martin says more studies are needed to determine if tau is a clear predictor of metastasis. Given the complex nature of tumors, there most likely are other factors involved in causing cancers to spread, he says.

“Metastasis is a very major concern for people diagnosed with cancer, and the discovery of these microtentacles and the role that tau plays in their formation is a very exciting development that holds great promise for developing new drugs,” says E. Albert Reece, M.D., Ph.D., M.B.A., acting president of the University of Maryland, Baltimore, and dean of the University of Maryland School of Medicine.

The University of Maryland, Baltimore, has filed patents on the microtentacle discoveries of Dr. Martin’s lab group and is looking to partner with biopharmaceutical companies on new drug development. The researchers identified these cell extensions while they were studying the effects of two drugs that prevent cell division, or mitosis. Most chemotherapy drugs target cell division, aiming to slow or stop tumor growth.

Dr. Martin says his team found that a popular chemotherapy drug, taxol, actually causes cancer cell microtentacles to grow longer and allows tumor cells to reattach faster, which may have important treatment implications for breast cancer patients. Their studies are continuing.

“We think more research is needed into how chemotherapies that slow down cell division affect metastasis. The timing of giving these drugs can be particularly important. If you treat people with taxol before surgery to shrink the primary tumor, levels of circulating tumor cells go up 1,000 to 10,000 fold, potentially increasing metastasis,” he adds.

The study being published in Oncogene was funded by grants from the National Cancer Institute, the USA Medical Research and Materiel Command, and the Flight Attendants Medical Research Institute.

Einstein’s Theory Applies Beyond The Solar System

A team led by Princeton University scientists has tested Albert Einstein’s theory of general relativity to see if it holds true at cosmic scales. And, after two years of analyzing astronomical data, the scientists have concluded that Einstein’s theory, which describes the interplay between gravity, space and time, works as well in vast distances as in more local regions of space.

The scientists’ analysis of more than 70,000 galaxies demonstrates that the universe — at least up to a distance of 3.5 billion light years from Earth — plays by the rules set out by Einstein in his famous theory.

Ever since the physicist Arthur Eddington measured starlight bending around the sun during a 1919 eclipse and proved Einstein’s theory of general relativity, the scientific world has accepted its tenets. But until now, according to the team, no one had tested the theory so thoroughly and robustly at distances and scales that go beyond the solar system. 

Reinabelle Reyes, a Princeton graduate student in the Department of Astrophysical Sciences, along with co-authors Rachel Mandelbaum, an associate research scholar, and James Gunn, the Eugene Higgins Professor of Astronomy, outlined their assessment in the March 11 edition of Nature.

Other scientists collaborating on the paper include Tobias Baldauf, Lucas Lombriser and Robert Smith of the University of Zurich and Uros Seljak of the University of California-Berkeley.

The results are important, they said, because they shore up current theories explaining the shape and direction of the universe, including ideas about “dark energy,” and dispel some hints from other recent experiments that general relativity may be wrong.

“All of our ideas in astronomy are based on this really enormous extrapolation, so anything we can do to see whether this is right or not on these scales is just enormously important,” Gunn said. “It adds another brick to the foundation that underlies what we do.”

First published in 1915, Einstein’s general theory of relativity remains a pivotal breakthrough in modern physics. It redefined humanity’s understanding of the fabric of existence — gravity, space and time — and ultimately explained everything from black holes to the Big Bang.

The groundbreaking theory showed that gravity can affect space and time, a key to understanding basic forces of physics and natural phenomena, including the origin of the universe. Shockingly, the flow of time, Einstein said, could be affected by the force of gravity. Clocks located a distance from a large gravitational source will run faster than clocks positioned more closely to that source, Einstein said. For scientists, the theory provides a basis for their understanding of the universe and the foundation for modern research in cosmology.
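
As an illustration of the clock effect described above (not drawn from the article itself): in the standard Schwarzschild solution of general relativity, a clock at distance r from a mass M runs slow by the factor sqrt(1 - 2GM/(r c^2)), so clocks farther from Earth tick slightly faster. The sketch below uses textbook constants and the familiar GPS example.

    # Illustrative only: gravitational time dilation from the Schwarzschild
    # factor sqrt(1 - 2GM/(r c^2)); constants and radii are textbook values.
    from math import sqrt

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of Earth, kg
    C = 2.998e8          # speed of light, m/s

    def tick_rate(r_meters):
        """Clock rate at radius r relative to a clock far from the mass."""
        return sqrt(1 - 2 * G * M_EARTH / (r_meters * C ** 2))

    surface = tick_rate(6.371e6)    # Earth's surface
    gps = tick_rate(2.657e7)        # GPS orbital radius, about 20,200 km up

    # Gravity alone makes a GPS-altitude clock gain a few tens of
    # microseconds per day relative to a surface clock.
    print(f"{(gps - surface) * 86400 * 1e6:.1f} microseconds per day")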

In recent years, several alternatives to general relativity have been proposed. These modified theories of gravity depart from general relativity on large scales to circumvent the need for dark energy, an elusive force that must exist if the calculations of general relativity balance out. But because these theories were designed to match the predictions of general relativity about the expansion history of the universe, a factor that is central to current cosmological work, it has become crucial to know which theory is correct, or at least represents reality as best as can be approximated.

“We knew we needed to look at the large-scale structure of the universe and the growth of smaller structures composing it over time to find out,” Reyes said. The team used data from the Sloan Digital Sky Survey, a long-term, multi-institution telescope project mapping the sky to determine the position and brightness of several hundred million celestial objects.

By calculating the clustering of these galaxies, which stretch nearly one-third of the way to the edge of the universe, and analyzing their velocities and distortion from intervening material, the researchers have shown that Einstein’s theory explains the nearby universe better than alternative theories of gravity.

The Princeton scientists studied the effects of gravity on these objects over long periods of time. They observed how this elemental force drives galaxies to clump into larger collections of themselves and how it shapes the expansion of the universe. They also studied the effects of a phenomenon known as “weak” gravitational lensing on galaxies as further evidence.

In weak lensing, matter — galaxies and groups of galaxies — that is closer to viewers bends light to change the shape of more distant objects, according to Mandelbaum. The effect is subtle, making viewers feel as if they are looking through a window made of old glass. Studying data collected from telescope surveys of regions showing what the universe looked like 5 billion years ago, the scientists could search for common factors in the distortion of multiple galaxies.

And, because relativity calls for the curvature of space to be equal to the curvature of time, the researchers could calculate whether light was influenced in equal amounts by both, as it should be if general relativity holds true.

“This is the first time this test was carried out at all, so it’s a proof of concept,” Mandelbaum said. “There are other astronomical surveys planned for the next few years. Now that we know this test works, we will be able to use it with better data that will be available soon to more tightly constrain the theory of gravity.”

Astronomers made the discovery a decade ago that the expansion of the universe was speeding up. They attributed this acceleration to dark energy, which they hypothesized pervaded otherwise empty space and exerted a repulsive gravitational force. Dark energy could be a cosmological constant, proposed by Einstein in his theory of general relativity, or it could be a new form of energy whose properties evolve with time.

Firming up the predictive powers of Einstein’s theory can help scientists better understand whether current models of the universe make sense, the scientists said.

“Any test we can do in building our confidence in applying these very beautiful theoretical things but which have not been tested on these scales is very important,” Gunn said. “It certainly helps when you are trying to do complicated things to understand fundamentals. And this is a very, very, very fundamental thing.”

By Kitta MacPherson, Princeton

Image 1: This image shows some of the 70,000 luminous galaxies in the Sloan Digital Sky Survey analyzed in this paper. Image: Sloan Digital Sky Survey Collaboration

Image 2: Princeton University scientists (from left) Reinabelle Reyes, James Gunn and Rachel Mandelbaum led a team that analyzed more than 70,000 galaxies and demonstrated that the universe — at least up to a distance of 3.5 billion light years from Earth — plays by the rules set out by Einstein in his theory of general relativity. Image: Princeton University, Office of Communications, Brian Wilson

Birth Control Pill Could Fight Cancer, Heart Disease

According to a new study, published on the British Medical Journal website Thursday, women who have taken an oral contraceptive pill are less likely to experience fatal health problems.

The Royal College of General Practitioners’ (RCGP) Oral Contraception Study, which was led by University of Aberdeen professor Philip Hannaford, followed 46,000 women for 40 years.

According to a March 11 press release, “The results show that in the longer term, women who used oral contraception had a significantly lower rate of death from any cause, including heart disease and all cancers (notably bowel, uterine body and ovarian cancers)” compared to those who had never used the birth control pill.

“We have known for a while that whilst women use the pill they have a small excess risk of disease but that seems to wear off,” Hannaford told BBC Scotland. “What we have never known is, what are the really long-term effects? This study, after following up a large group of women for 39 years, has shown there is no increased risk among women who have used the pill, in fact there is a small 12-percent drop.”

According to the study, younger women were more at risk than older women. Those under 30 years of age experienced 20 more deaths per 100,000, while 30-39 year olds experienced four additional deaths. However, women aged 40-49 experienced 14 fewer deaths per 100,000, while that number jumped to 86 fewer deaths for 50-59 year olds, 122 fewer for 60-69 year olds, and 308 fewer for females over 70 years old.

These most recent findings contrast with earlier ones from the RCGP Oral Contraception Study, which had reported that use of birth control pills could result in an increased risk of stroke or cardiovascular disease, particularly among smokers and older women.

On the Net:

Glenn Close’s Genome Sequencing Complete

American movie star Glenn Close has joined an elite list of celebrities who have had their genome sequenced in the name of science.

Close joins the ranks of notable celebrities such as Archbishop Desmond Tutu and Craig Venter.

Glenn Close, who is known for her movie roles in “Fatal Attraction”, “The Big Chill” and “101 Dalmatians”, said the offer was too good to pass up.

“For me, anything that can move the science forward is worthwhile,” Close said in a telephone interview with Reuters. “It’s pretty well publicized that I have mental health issues in my family.”

Close is a founder of the nonprofit group BringChange2Mind (BC2M), which raises awareness about mental illness, including bipolar disorder and schizophrenia, both of which have affected her family. BC2M also provides support and information to the mentally ill and their families. She has spoken out about the legacy of mental illness in her own family.

San Diego-based genome mapping company Illumina, which sequenced Close’s genome, is one of many companies that have drastically reduced the cost of producing a map of the human genome. The first human genome cost $3 billion and took more than a decade to complete. Illumina charges $48,000 for the kind of sequencing it performed for Close, though the company declined to say whether she was charged.

Scientists believe that within five years technology will be able to bring the cost down to around $1,000, which would be less than the cost of an advanced CT scan.

Scientists hope that genetic mapping of a person’s DNA will someday reveal genetic causes of common diseases or determine a person’s risk for genetic illness or disease. Eventually, they predict human genome mapping will be a routine part of the medical record.

Glenn Close’s husband, David Shaw, the founder and former head of IDEXX Laboratories Inc, had the connections that led to his wife’s DNA being mapped, rather than her arranging it through her own celebrity connections.

“Jay Flatley, who is the head of Illumina, called me up,” Close told Reuters. “He said there are very few named women who have gotten this done.” She was excited and proud to be one of the first, she added.

“We are very excited to work with Glenn Close to produce the first named female sequence,” said Jay Flatley, president and CEO of Illumina, in a recent press release. “We are entering a new era in genomic health where information from an individual’s genome will increasingly inform lifestyle decisions and ultimately assist with health management. Ms. Close has been active in health issues, and her participation helps bring attention to the potential benefits of individuals gaining access to their genetic information. With this information, physicians will be able to make better healthcare decisions for their patients in the future.”

Close said she will go over the results with a genetics counselor next month “and find out as much as I want to know.” She said that if something in her genome sparks scientific interest, she will consider making it public.

On the Net:

Huffing Biggest Drug Threat for Tweens

For parents, the frontlines of the war on drugs may involve common household items such as nail polish, glue, bleach, or air freshener — this according to the results of a new national health study.

In fact, according to information compiled by the Substance Abuse and Mental Health Services Administration (SAMHSA) from 2006 through 2008, more 12-year-old children are “huffing” or inhaling dangerous chemicals than using marijuana and cocaine combined.

Furthermore, according to a March 11 SAMHSA press release that helped kick off National Inhalants & Poisons Awareness Week, their findings “show a rate of lifetime inhalant use among 12 year olds of 6.9 percent, compared to a rate of 5.1 percent for nonmedical use of prescription type drugs; a rate of 1.4 percent for marijuana; a rate of 0.7 percent for use of hallucinogens; and a 0.1 rate for cocaine use.”

In an interview with Reuters, SAMHSA Administrator Pamela S. Hyde called the findings “frustrating because the danger comes from a variety of very common household products that are legal, they’re easy to get, they’re lying around the home and it’s easy for kids to buy them. Kids and parents don’t think of these things as dangerous because they were never meant to be used to be intoxicating.”

According to the National Drug Intelligence Center (NDIC), symptoms of inhalant abuse are similar to alcohol intoxication and include dizziness, hallucinations, belligerent behavior, impaired judgment, disorientation, inattentiveness, and depression, especially when substances are used over a long period of time.

Furthermore, the NDIC website warns that death from huffing harmful products “can occur after a single use or after prolonged use. Sudden sniffing death (SSD) may result within minutes of inhalant abuse from irregular heart rhythm leading to heart failure. Other causes of death include asphyxiation, aspiration, or suffocation.”

On the Net:

Cosmic ‘Dark Flow’ Tracked Deeper Into Universe

Distant galaxy clusters mysteriously stream at a million miles per hour along a path roughly centered on the southern constellations Centaurus and Hydra. A new study led by Alexander Kashlinsky at NASA’s Goddard Space Flight Center in Greenbelt, Md., tracks this collective motion — dubbed the “dark flow” — to twice the distance originally reported.

“This is not something we set out to find, but we cannot make it go away,” Kashlinsky said. “Now we see that it persists to much greater distances — as far as 2.5 billion light-years away.” The new study appears in the March 20 issue of The Astrophysical Journal Letters.

The clusters appear to be moving along a line extending from our solar system toward Centaurus/Hydra, but the direction of this motion is less certain. Evidence indicates that the clusters are headed outward along this path, away from Earth, but the team cannot yet rule out the opposite flow. “We detect motion along this axis, but right now our data cannot state as strongly as we’d like whether the clusters are coming or going,” Kashlinsky said.

The dark flow is controversial because the distribution of matter in the observed universe cannot account for it. Its existence suggests that some structure beyond the visible universe — outside our “horizon” — is pulling on matter in our vicinity.

Cosmologists regard the microwave background — a flash of light emitted 380,000 years after the universe formed — as the ultimate cosmic reference frame. Relative to it, all large-scale motion should show no preferred direction.

The hot X-ray-emitting gas within a galaxy cluster scatters photons from the cosmic microwave background (CMB). Because galaxy clusters don’t precisely follow the expansion of space, the wavelengths of scattered photons change in a way that reflects each cluster’s individual motion.

This results in a minute shift of the microwave background’s temperature in the cluster’s direction. The change, which astronomers call the kinematic Sunyaev-Zel’dovich (KSZ) effect, is so small that it has never been observed in a single galaxy cluster.
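
For readers who want the quantitative version, the standard textbook expression for the kinematic SZ temperature shift, which is not spelled out in the article itself, is roughly

\[
\frac{\Delta T_{\mathrm{kSZ}}}{T_{\mathrm{CMB}}} \approx -\,\tau_e\,\frac{v_r}{c},
\]

where \(\tau_e\) is the cluster gas’s optical depth to Thomson scattering, \(v_r\) is the cluster’s line-of-sight peculiar velocity, and \(c\) is the speed of light. With typical optical depths of order a percent or less and peculiar velocities of a few hundred kilometers per second, the resulting shift is tiny compared with the microwave background’s intrinsic fluctuations and with instrument noise, which is why it cannot be picked out for any single cluster.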

But in 2000, Kashlinsky, working with Fernando Atrio-Barandela at the University of Salamanca, Spain, demonstrated that it was possible to tease the subtle signal out of the measurement noise by studying large numbers of clusters.

In 2008, armed with a catalog of 700 clusters assembled by Harald Ebeling at the University of Hawaii and Dale Kocevski, now at the University of California, Santa Cruz, the researchers applied the technique to the three-year WMAP data release. That’s when the mystery motion first came to light.

The new study builds on the previous one by using the five-year results from WMAP and by doubling the number of galaxy clusters.

“It takes, on average, about an hour of telescope time to measure the distance to each cluster we work with, not to mention the years required to find these systems in the first place,” Ebeling said. “This is a project requiring considerable follow-through.”

According to Atrio-Barandela, who has focused on understanding the possible errors in the team’s analysis, the new study provides much stronger evidence that the dark flow is real. For example, the brightest clusters at X-ray wavelengths hold the greatest amount of hot gas to distort CMB photons. “When processed, these same clusters also display the strongest KSZ signature — unlikely if the dark flow were merely a statistical fluke,” he said.

In addition, the team, which now also includes Alastair Edge at the University of Durham, England, sorted the cluster catalog into four “slices” representing different distance ranges. They then examined the preferred flow direction for the clusters within each slice. While the size and exact position of this direction display some variation, the overall trends among the slices exhibit remarkable agreement.

The researchers are currently working to expand their cluster catalog in order to track the dark flow to about twice the current distance. Improved modeling of hot gas within the galaxy clusters will help refine the speed, axis, and direction of motion.

Future plans call for testing the findings against newer data released from the WMAP project and the European Space Agency’s Planck mission, which is also currently mapping the microwave background.

Image Caption: The colored dots are clusters within one of four distance ranges, with redder colors indicating greater distance. Colored ellipses show the direction of bulk motion for the clusters of the corresponding color. Images of representative galaxy clusters in each distance slice are also shown. Credit: NASA/Goddard/A. Kashlinsky, et al.

On the Net:

Study Affirms General Relativity, Existence Of Dark Matter

Sloan Digital Survey data provide test that rules out alternative theories of gravity

An analysis of more than 70,000 galaxies by University of California, Berkeley, University of Zurich and Princeton University physicists demonstrates that the universe, at least up to a distance of 3.5 billion light years from Earth, plays by the rules set out 95 years ago by Albert Einstein in his General Theory of Relativity.

By calculating the clustering of these galaxies, which stretch nearly one-third of the way to the edge of the universe, and analyzing their velocities and distortion from intervening material, the researchers have shown that Einstein’s theory explains the nearby universe better than alternative theories of gravity.

One major implication of the new study is that the existence of dark matter is the most likely explanation for the observation that galaxies and galaxy clusters move as if under the influence of some unseen mass, in addition to the stars astronomers observe.

“The nice thing about going to the cosmological scale is that we can test any full, alternative theory of gravity, because it should predict the things we observe,” said co-author Uros Seljak, a professor of physics and of astronomy at UC Berkeley, a faculty scientist at Lawrence Berkeley National Laboratory, and a professor of physics at the Institute of Theoretical Physics at the University of Zurich. “Those alternative theories that do not require dark matter fail these tests.”

In particular, the tensor-vector-scalar gravity (TeVeS) theory, which tweaks general relativity to avoid resorting to the existence of dark matter, fails the test.

The result conflicts with a report late last year that the very early universe, between 8 and 11 billion years ago, did deviate from the general relativistic description of gravity.

Seljak and his current and former students, including first authors Reinabelle Reyes, a Princeton University graduate student, and Rachel Mandelbaum, a recent Princeton Ph.D. recipient, report their findings in the March 11 issue of the journal Nature. The other co-authors are Tobias Baldauf, Lucas Lombriser and Robert E. Smith of the University of Zurich, and James E. Gunn, professor of physics at Princeton and father of the Sloan Digital Sky Survey.

Einstein’s General Theory of Relativity holds that gravity warps space and time, which means that light bends as it passes near a massive object, such as the core of a galaxy. The theory has been validated numerous times on the scale of the solar system, but tests on a galactic or cosmic scale have been inconclusive.

“There are some crude and imprecise tests of general relativity at galaxy scales, but we don’t have good predictions for those tests from competing theories,” Seljak said.

Such tests have become important in recent decades because the idea that some unseen mass permeates the universe disturbs some theorists and has spurred them to tweak general relativity to get rid of dark matter. TeVeS, for example, says that the acceleration a body’s gravity produces depends not only on that body’s mass, but also on the magnitude of the acceleration itself.

The discovery of dark energy, an enigmatic force that is causing the expansion of the universe to accelerate, has led to other theories, such as one dubbed f(R), to explain the expansion without resorting to dark energy.

Tests to distinguish between competing theories are not easy, Seljak said. A theoretical cosmologist, he noted that cosmological experiments, such as detections of the cosmic microwave background, typically involve measurements of fluctuations in space, while gravity theories predict relationships between density and velocity, or between density and gravitational potential.

“The problem is that the size of the fluctuation, by itself, is not telling us anything about underlying cosmological theories. It is essentially a nuisance we would like to get rid of,” Seljak said. “The novelty of this technique is that it looks at a particular combination of observations that does not depend on the magnitude of the fluctuations. The quantity is a smoking gun for deviations from general relativity.”

Three years ago, a team of astrophysicists led by Pengjie Zhang of Shanghai Observatory suggested using a quantity dubbed EG to test cosmological models. EG reflects the amount of clustering in observed galaxies and the amount of distortion of galaxies caused by light bending as it passes through intervening matter, a process known as weak lensing. Weak lensing can make a round galaxy look elliptical, for example.

“Put simply, EG is proportional to the mean density of the universe and inversely proportional to the rate of growth of structure in the universe,” he said. “This particular combination gets rid of the amplitude fluctuations and therefore focuses directly on the particular combination that is sensitive to modifications of general relativity.”
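
For context, a commonly quoted (and slightly simplified) form of the general-relativity prediction for this quantity, which is not given explicitly in the article, is

\[
E_G \simeq \frac{\Omega_{m,0}}{f(z)}, \qquad f(z) \equiv \frac{d\ln\delta}{d\ln a},
\]

where \(\Omega_{m,0}\) is the present-day mean matter density in units of the critical density, \(\delta\) is the amplitude of matter density fluctuations and \(a\) is the cosmic scale factor, so that \(f\) measures the growth rate of structure. Because modified-gravity models such as f(R) and TeVeS alter the relationship between the lensing potential and the growth of structure, they predict different values of \(E_G\) at the same redshift, which is what makes the measurement a clean discriminant.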

Using data on more than 70,000 bright, and therefore distant, red galaxies from the Sloan Digital Sky Survey, Seljak and his colleagues calculated EG and compared it to the predictions of TeVeS, f(R) and the cold dark matter model of general relativity enhanced with a cosmological constant to account for dark energy.

The predictions of TeVeS were outside the observational error limits, while general relativity fit nicely within the experimental error. The EG predicted by f(R) was somewhat lower than that observed, but within the margin of error.

In an effort to reduce the error and thus test theories that obviate dark energy, Seljak hopes to expand his analysis to perhaps a million galaxies when SDSS-III’s Baryon Oscillation Spectroscopic Survey (BOSS), led by a team at LBNL and UC Berkeley, is completed in about five years. To reduce the error even further, by perhaps as much as a factor of 10, requires an even more ambitious survey called BigBOSS, which has been proposed by physicists at LBNL and UC Berkeley, among other places.

Future space missions, such as NASA’s Joint Dark Energy Mission (JDEM) and the European Space Agency’s Euclid mission, will also provide data for a better analysis, though perhaps 10-15 years from now.

Seljak noted that these tests do not tell astronomers the actual identity of dark matter or dark energy. That can only be determined by other types of observations, such as direct detection experiments.

By Robert Sanders, UC Berkeley

Image 1: A partial map of the distribution of galaxies in the Sloan Digital Sky Survey, going out to a distance of 7 billion light years. The amount of galaxy clustering that we observe today is a signature of how gravity acted over cosmic time, and allows us to test whether general relativity holds over these scales. (M. Blanton, Sloan Digital Sky Survey)

Image 2: An image of a galaxy cluster in the Sloan Digital Sky Survey, showing some of the 70,000 bright elliptical galaxies that were analyzed to test general relativity on cosmic scales. (Sloan Digital Sky Survey)

On the Net:

SpaceX Aborts Falcon 9 Rocket Test

On Tuesday, Space Exploration Technologies (SpaceX) aborted an engine test of its Falcon 9 rocket, which will eventually fly cargo and astronauts to the International Space Station (ISS).

The company had planned to fire up the engines at its Cape Canaveral, Florida launch site, where the rocket is being prepped for a company-sponsored demonstration flight in the coming months.

The test was aborted just two seconds before engine ignition.

A NASA video camera showed flames and small puffs of smoke around the base of the rocket.

Space Exploration Technologies said in a statement that the flames were from the burn-off of liquid oxygen and kerosene.

“Given that this was our first abort event on this pad, we decided to scrub for the day to get a good look at the rocket before trying again. Everything looks great at first glance,” spokeswoman Emily Shanklin said in an e-mail to Reuters.

The rocket sits on a refurbished oceanside launch pad at Cape Canaveral Air Force Station, just south of the space shuttle launch pads at the Kennedy Space Center.

SpaceX is building launch vehicles and spacecraft to fill the void that will be left when the government-run space shuttles are retired. 

The company has 15 Falcon 9 contracts with NASA, 12 of which are cargo resupply missions to the space station.  The contracts are worth about $1.9 billion.

Options for additional cargo-delivery flights could bring the total value of the contracts to more than $3 billion.

President Barack Obama plans to add $6 billion to NASA’s budget over the next five years to help develop space taxis, which would transport astronauts to and from the space station.

NASA has already turned over station crew transportation to Russia ahead of the space shuttle fleet’s planned retirement later this year. Russia charges the United States about $51 million per seat for rides to the ISS on its Soyuz rockets.

NASA has split its commercial rocket development and cargo delivery services between SpaceX and Virginia-based Orbital Sciences Corp., which plans to debut its Taurus II rocket and Cygnus spacecraft sometime before April 2011.

On the Net:

What Causes The Alcohol Cravings That Drive Relapse?

New research provides exciting insight into the molecular mechanisms associated with addiction and relapse. The study, published by Cell Press in the March 11 issue of the journal Neuron, uncovers a crucial mechanism that facilitates motivation for alcohol after extended abstinence and opens new avenues for potential therapeutic intervention.

Previous work has suggested that people, places, and objects associated with alcohol use are potent triggers for eliciting relapse and that cravings for both alcohol and drugs can increase across protracted abstinence. However, the specific molecular mechanisms that underlie pathological alcohol seeking are not well defined.

“Animal paradigms can model crucial aspects of human addiction, and these paradigms will help elucidate the molecular and cellular mechanisms that drive drug-seeking behaviors and, as a consequence, facilitate the development of novel therapeutic interventions for addiction,” explains lead study author Dr. F. Woodward Hopf from the Ernest Gallo Clinic and Research Center at the University of California, San Francisco.

Dr. Hopf and colleagues were particularly interested in studying how alcohol addiction affected a part of the brain called the nucleus accumbens (NAcb) core, which is known to be important for allowing stimuli to drive motivated, goal-directed behaviors. The researchers examined the brains of rats that had experienced nearly 2 months of alcohol or sugar self-administration followed by a 3-week abstinence period.

The rats that had consumed alcohol, but not those that had consumed sugar, exhibited increased electrical activity in the NAcb core after abstinence. The increased activity was due to an inhibition of small-conductance calcium-activated potassium (SK) channels.

Importantly, pharmacological activation of SK channels produced greater inhibition of NAcb activity in the alcohol- versus sucrose-abstinent rats and significantly reduced alcohol but not sucrose seeking after abstinence. The authors concluded that decreased SK currents and increased excitability in the NAcb core represent a critical mechanism that facilitates the motivation to seek alcohol after abstinence.

“Our findings are particularly exciting because the FDA-approved drug chlorzoxazone, which has been used for more than 30 years as a muscle relaxant, can activate SK channels,” says Dr. Antonello Bonci, a senior author on the project. “Although SK channels are not the only target of this drug and it can present a variety of clinical side effects, it provides an unexpected and very exciting opportunity to design human clinical trials to examine whether chlorzoxazone, or other SK activators, reduce excessive or pathological alcohol drinking.”

The researchers include F. Woodward Hopf, M. Scott Bowers, Shao-Ju Chang, Billy T. Chen, Miguel Martin, Taban Seif, Saemi L. Cho, Kay Tye, and Antonello Bonci, all of the University of California, San Francisco.

On the Net:

Pottery Leads to Discovery of Peace-seeking Women in American Southwest

From the time of the Crusades to the modern day, war refugees have struggled to integrate into their new communities. They are often economically impoverished and socially isolated, which results in increased conflict, systematic violence and warfare within and between communities as the new immigrants interact and compete with the previously established inhabitants. Now, University of Missouri researcher Todd VanPool believes pottery found throughout the North American Southwest comes from a religion of peace-seeking women in the violent 13th-century American Southwest. These women sought a way to integrate newly arriving refugees and prevent the spread of the warfare that had decimated communities to the north.

First discovered in Arizona in the 1930s, Salado pottery created a debate among archaeologists; the mystery of the pottery’s origin and significance became known as “the Salado problem.” According to VanPool, the Salado tradition was a grassroots movement against violence. The pottery was found across three major cultural areas of the ancient Southwest: the ancestral Puebloan of northern Arizona and New Mexico, the Mogollon of southern New Mexico and the Hohokam of central and southern Arizona, all with different religious traditions. Even though the pottery spanned three different cultural areas, it communicated the same, specific set of religious messages. It was buried with both the elite and non-elite and painted with complex geometric motifs and animals, such as horned serpents. Instead of celebrating local elites, the symbols on Salado pottery emphasized fertility and cooperation.

“In my view, the fact that the new religion is reflected solely in pottery, a craft not usually practiced by men, suggests that it was a movement that helped bring women together and decreased competition among females,” said VanPool, who is an assistant professor of anthropology in the MU College of Arts and Science. “Women across the region may have been ethnically diverse, but their participation in the same religious system would have helped decrease conflict and provided a means of connecting different ethnic groups.”

Salado pottery dates from the 13th to 15th centuries in which there was major political and cultural conflict in the American Southwest. Brutal executions and possible cannibalism forced thousands of people to abandon their native regions and move to areas of Arizona and New Mexico. Another source of conflict appeared after the female refugees and their children arrived in their new homelands.

“Conflict was defused through the direct action of women who sought to decrease the tensions that threatened to destroy their communities,” VanPool said. “The rise of the Salado tradition allowed threatened communities to stabilize over much of modern-day Arizona and New Mexico, altering the course of Southwestern prehistory. Given that the Salado system lasted from 1275 to around 1450, it was most certainly successful.”

VanPool’s research has been published in Archaeology magazine. A more extended version has been published as a chapter in Innovations in Cultural Systems: Contributions from Evolutionary Anthropology, published by MIT Press (2010).

On the net:

Are Home Abortion Drugs Safe?

Swedish scientists are touting both the safety and effectiveness of home-based abortion medications for women who are 50 to 63 days pregnant, according to a March 9, 2010 Reuters report.

Researchers at the Karolinska Institutet in Stockholm, led by Dr. Helena Kopp Kallner, state that this is the first survey to look at the use of the home-based procedure beyond 49 days post-conception. The procedure requires women to take a dose of mifepristone (also known as RU-486), followed three to four days later by a dose of misoprostol (Cytotec).

According to the Reuters report, Kallner and her staff “offered the option of surgical or medical abortion for all women up to 63 days pregnant who sought treatment at their clinic between January 2004 and April 2007.” Nearly 400 of the women chose the home-based treatment.

The procedure was a success for all but four of the 203 women who were 50 days pregnant or less, as well as for 186 of the 192 women who were between 50 and 62 days pregnant. Ten of the women required surgery to successfully complete their abortions, while 60% of all those participating required additional pain medication during the process.
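
For a quick sense of what those counts mean as rates, the comparison works out to roughly 98 percent versus 97 percent success. The short calculation below is purely illustrative, computed from the figures quoted above rather than reported separately by the study.

# Success rates implied by the counts quoted above; the percentages are computed
# here for illustration and are not figures reported by the study itself.
earlier_group = (203 - 4) / 203   # women at 50 days' gestation or less
later_group = 186 / 192           # women in the longer-gestation group

print(f"Shorter gestation: {earlier_group:.1%}")   # ~98.0%
print(f"Longer gestation:  {later_group:.1%}")     # ~96.9%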

According to the report, subjects who were further along in pregnancy reported more bleeding, but the difference between this group and the women at shorter gestations could have been due to chance, given the small number of people in the study. The researchers concluded that home misoprostol use is “safe and highly acceptable also to women with gestational length of 50-63 days as compared with shorter gestation.”

Mifepristone is a synthetic steroid compound which can be used as an emergency contraceptive in addition to an abortion-inducing agent. It was designed by Roussel Uclaf S.A., a French healthcare and chemical corporation. It was approved for home use in the United States in 2000, though many European countries require that it be administered in a hospital or health clinic.

On the Net:

New Scale Developed To Measure Anxiety Outcomes

Large validation study shows a reliable and valid tool for routine practice

A new questionnaire and outcomes measurement scale developed by the department of psychiatry at Rhode Island Hospital has proven to be a reliable and valid measure of anxiety. The scale can easily be incorporated into routine clinical practice when treating psychiatric disorders. The study appears online ahead of print in the Journal of Clinical Psychiatry.

To determine the impact of treatment on any medical disorder, it is necessary to evaluate outcomes. Standardized scales are increasingly recommended as an outcome measurement tool in the treatment of psychiatric disorders. If scales are to be incorporated into clinical practices, it is necessary to develop measures that are feasible and have good psychometric properties. With this in mind, Mark Zimmerman, MD, director of outpatient psychiatry at Rhode Island Hospital, and his colleagues developed the Clinically Useful Anxiety Outcome Scale (CUXOS).

As Zimmerman says, “If the optimal delivery of mental health treatment ultimately depends on examining outcome, then precise, reliable, valid, informative, and user-friendly measurement is critical to evaluating the quality and efficiency of care in clinical practice.” He also notes, “Clinicians are already overburdened with paperwork, and adding to this load by requiring repeated detailed evaluations using instruments that are available is unlikely to meet success.” The researchers note that only 11 percent of psychiatrists routinely use standardized measures to assess outcomes when treating depression or anxiety disorders.

The CUXOS was designed to be brief for patients to complete and quick for clinicians to score. In the study, nearly 1,000 psychiatric outpatients filled out the CUXOS, which took less than one and a half minutes to complete, and clinicians rated the severity of depression, anxiety, and anger on standardized scales. Each CUXOS could be scored in less than 15 seconds.

The researchers also had a subset of patients complete other self-report symptom severity scales in order to examine discriminant and convergent validity. Another subset completed the CUXOS twice in order to examine test-retest reliability. In addition, sensitivity to change was examined in patients with panic disorder and generalized anxiety disorder.

Zimmerman, who is also an associate professor of psychiatry and human behavior at The Warren Alpert Medical School of Brown University, says that the scale was found to have high internal consistency and test-retest reliability. Further, it was more highly correlated with other self-report measures of anxiety than with measures of depression, substance abuse problems, eating disorders and anger. It was also more closely aligned with clinician severity ratings of anxiety than depression and anger, and the CUXOS scores were significantly higher in psychiatric outpatients with anxiety disorders than other psychiatric disorders. Finally, it was found to be a valid measure of symptom change.

Zimmerman says, “We believe that the use of standardized scales should be the standard of care and routinely used to measure outcome when treating psychiatric disorders. Only in this way can we ensure that we are having an impact on our patients.”

The researchers also note that there is no shortage of self-report questionnaires, and the development of any new scale should be questioned. They believe, however, that the CUXOS distinguishes itself in several respects and is intended as a general measure of the severity of psychic and somatic anxiety.

“We have developed what we believe to be an effective tool that can easily be incorporated into clinicians’ routines. However, future research should explore both clinicians’ and patients’ perspectives as to whether the use of general or disorder-specific scales is preferred,” Zimmerman concludes.

The study was based on work in the Rhode Island Methods to Improve Diagnostic Assessment and Services (MIDAS) Project, for which Zimmerman is the principal investigator. It is a unique integration of research quality diagnostic methods into a community-based outpatient practice affiliated with an academic medical center.

Along with Zimmerman, other researchers in the study are Iwona Chelminski, PhD, Diane Young, PhD, and Kristy Dalrymple, PhD, all of Rhode Island Hospital and Alpert Medical School.

On the Net:

FCC Plans to Give High-Speed Internet to All Americans

The Federal Communications Commission wants to utilize government programs that could help make high-speed Internet accessible for all Americans.

The Universal Service Fund, a program that subsidizes telephone service in poor and rural areas, is being studied by the FCC, which wants to revamp the program as part of its national broadband plan, due to Congress on March 17.

The FCC says it wants to transform the program over the next ten years to pay for high-speed Internet access instead of the voice services it currently finances. Creating a new program — the Connect America fund — within the Universal Service Fund to subsidize broadband is their main goal. They would also like to see a Mobility Fund to expand the reach of 3G wireless networks.

“It’s time to migrate this 20th-century program,” said Blair Levin, the FCC official overseeing the broadband plan. “We need to move the current system from the traditional networks to the new networks.”

The Universal Service Fund was originally established to ensure that all Americans had access to a basic telephone line. Today, the program subsidizes phone service for the poor, funds Internet access for education and even pays for high-speed Internet in rural health networks. But its main function is still bringing phone service to remote areas of the country where it makes no financial sense for private companies to build networks.

Funding for the program comes from surcharges that businesses and consumers pay on their long-distance bills. The $8-billion-a-year program is losing its revenue base, and with the FCC now hoping to use it for broadband subsidies, more may need to be done to keep it afloat.

The FCC plans to lay out several options to pay for the proposals it outlined, including one that requires no extra financing from Congress and one that would accelerate the construction of broadband networks if Congress approves a $9 billion one-time pledge.

The FCC says they also plan to save money by subsidizing only one broadband service in any area. Critics of the program are worried that if the plan goes through, the FCC would look to wireless companies for broadband instead of landline systems. In response, Levin said Connect America would not favor one technology over another.

Any changes made to the Universal Service Fund would also affect the “intercarrier compensation” system. The FCC says its proposal also outlines revamping that system as well. Rural phone companies rely heavily on both systems.

On the Net:

Universal Service Fund

Federal Communications Commission

For California Vintners, It’s Not Easy Being Green

“Green” labels do not pack the same wallop for California wines that they do for low-energy appliances, organically grown produce and other environmentally friendly products, but it’s not because there’s anything wrong with the wine, a new UCLA-led study has found.
 
In fact, wines made with organically grown grapes actually rate higher on a widely accepted ranking, said Magali Delmas, a UCLA environmental economist and the study’s lead author. And these wines tend to command a higher price than their conventionally produced counterparts, so long as wineries don’t use the word “organic” on their labels.
 
But when wineries do use eco-labels, prices plummet.
 
“You’ve heard of the French paradox?” quipped Delmas, associate professor of management at UCLA’s Institute of the Environment and the UCLA Anderson School of Management. “Well, this is the American version. You’d expect anything with an eco-label to command a higher price, but that’s just not the case with California wine.”
 
The anomaly points to a marketing conundrum for environmentally friendly vintners and a buying opportunity for oenophiles, say Delmas and her co-author, Laura E. Grant, a Ph.D. candidate in environmental science and management at the University of California, Santa Barbara.
 
“Wine made with organic grapes, especially if it has an eco-label, is a really good deal,” Grant said. “For the price of conventional wine, you get a significantly better quality wine.”
 
The findings appear in the current issue of the peer-reviewed scholarly journal Business and Society, the official organ of the International Association for Business and Society. The organization is devoted to research on corporate social responsibility and sustainability issues.
 
Delmas, an economist and sociologist by training, specializes in analyzing incentives that induce companies to engage in environmentally beneficial practices. Grant, also an economist, is married to a sommelier.
 
The researchers studied 13,426 wines from 1,495 California wineries. Vintages ranged from 1998 to 2005, and more than 30 varietals and 25 appellations were represented.
 
First, Delmas and Grant tracked down each wine’s rating from Wine Spectator, a prominent wine publication. Then they tabulated the number of wines made with grapes that had been certified by a third party as organically grown, a grueling and expensive process that obligates the vineyard to devote considerably more time and effort to cultivating grapes than conventional agricultural methods, which rely on chemical herbicides, pesticides and fertilizers.
 
The researchers also looked at whether wineries chose to label their certified wines as organically grown or whether they chose to keep their efforts to themselves.
 
Certification and eco-labels had no impact on pricing or ratings for cheaper wines, the researchers found. But using organically grown grapes proved to be a double-edged sword for wines that cost more than $25.
 
So long as they didn’t carry eco-labels, these wines commanded a 13-percent higher price than conventionally produced wines of the same varietal, appellation and year. Their ratings on Wine Spectator’s 100-point scale, in which wines tend to range between the mid-50s and high 90s, were also higher. Wines made from organically grown grapes averaged one point higher than their conventionally produced counterparts.
 
While the higher Wine Spectator scores still prevailed when producers slapped eco-labels on their bottles, the financial rewards for going to the trouble of making certified wine evaporated. The “made from organically grown grapes” label not only wiped out the price premium for using certified grapes but actually drove prices 7 percent below those for conventionally produced wines, the researchers found.
 
The average price for a wine with an eco-label was $37.65. By contrast, a certified wine without an eco-label commanded an average price of $40.54.
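
Those two averages are consistent with the roughly 7 percent penalty described above; the check below is a simple illustration using the prices quoted in the study, not an additional result.

with_eco_label = 37.65      # average price of a certified wine carrying an eco-label
without_eco_label = 40.54   # average price of a certified wine without an eco-label

# Relative discount of labeled versus unlabeled certified wines.
penalty = (without_eco_label - with_eco_label) / without_eco_label
print(f"Eco-label price penalty: {penalty:.1%}")   # about 7.1%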
 
While the researchers don’t have an easy explanation for the price drop associated with eco-labeling, they aren’t stumped when it comes to the higher price that certified wines are able to command.
 
“Wine made with organically grown grapes is higher quality,” Delmas said. “Growers have to devote more time and attention and take better care of organically certified vines than conventional vines, and our results show that these efforts are apparent in the product.”
 
In addition to leaving residues on the grapes, the pesticides, herbicides and inorganic fertilizers used in conventional growing interfere with a vine’s ability to absorb naturally occurring chemicals in the soil, according to vintners quoted in the study. As a result, wines made with organically grown grapes are more likely to absorb these chemicals, which are said to provide the distinctive flavor of the site where the grapes were grown, a wine’s much-prized “terroir.”
 
Still, the researchers believe vintners will be surprised at the magnitude of the impact that certification has on price and quality. Delmas and Grant suspect that the price-penalty associated with eco-labels will be less surprising for vintners. In their study, the researchers found that only one-third of vintners using organically certified grapes advertised the fact on wine labels.
 
“Producers of two-thirds of these wines must suspect that consumers, for whatever reason, wouldn’t appreciate the use of organically grown grapes,” Delmas said. “Otherwise, why would they refrain from drawing attention to this benefit on their labels?” 
 
As for the reasons that eco-labels drive down prices, the researchers have a number of theories. Many have to do with confusion in consumers’ minds over the difference between wine made with organically grown grapes and organic wine, which is made without the benefit of such chemical preservatives as sulfites. Preservatives can be used in certified wine.
 
“Organic wine earned its bad reputation in the ’70s and ’80s,” Grant said. “Considered ‘hippie wine,’ it tended to turn to vinegar more quickly than non-organic wine. This negative association still lingers.”
 
Even today, the absence of sulfites reduces the shelf-life of organic wines, making them less stable, the researchers said.
 
“Without added sulfites, the wine turns into vinegar after a while, and you’re likely to lose out on the opportunity for your wine to mature into something considerably richer than when purchased, which is the promise of fine wine,” Delmas said. “So while no-sulfites-added is fine for white wines such as Chardonnay that you usually drink ‘young,’ it is not good for a red wine like a Cabernet Sauvignon that you want to keep to drink in a year or two.”
 
Moreover, the benefits of wine from organically grown grapes may not be as clear to consumers as the benefits from other environmentally friendly products. Researchers who have looked into the motives of consumers of green products have found that benefitting the environment is only one incentive, and probably not the strongest one. Generally, green consumers are primarily motivated by some kind of personal benefit.
 
“Consumers buy organically grown food because they think it is going to improve their health,” Delmas said. “That motivation doesn’t go a long way with wine. If consumers want to drink something healthy, they’ll reach for wheat grass, not an alcoholic beverage.”
 
That all could change once consumers realize that wine made with organic grapes actually holds the prospect of another compelling personal benefit: a better-tasting product.
 
“Vintners and regulators really need to communicate better what wine with organically grown grapes means and the potential impact on quality,” Delmas said. “I don’t think they’ve done that, and I think it’s too bad. It’s a real missed opportunity.”

By Meg Sullivan, UCLA

On the Net:

Who Does What On Wikipedia?

The quality of entries in the world’s largest open-access online encyclopedia depends on how authors collaborate, UA Eller College Professor Sudha Ram finds.

The patterns of collaboration between Wikipedia contributors have a direct effect on the data quality of an article, according to a new paper co-authored by a University of Arizona professor and graduate student.

Sudha Ram, a professor in UA’s Eller College of Management, co-authored the article with Jun Liu, a graduate student in the management information systems (MIS) department. Their work in this area received a “Best Paper Award” at the International Conference on Information Systems, or ICIS.

“Most of the existing research on Wikipedia is at the aggregate level, looking at total number of edits for an article, for example, or how many unique contributors participated in its creation,” said Ram, who is a McClelland Professor of MIS in the Eller College.

“What was missing was an explanation for why some articles are of high quality and others are not,” she said. “We investigated the relationship between collaboration and data quality.”

Wikipedia has an internal quality rating system for entries, with featured articles at the top, followed by A, B, and C-level entries. Ram and Liu randomly collected 400 articles at each quality level and applied a data provenance model they developed in an earlier paper.

“We used data mining techniques and identified various patterns of collaboration based on the provenance or, more specifically, who does what to Wikipedia articles,” Ram says. “These collaboration patterns either help increase quality or are detrimental to data quality.”

Ram and Liu identified seven specific roles that Wikipedia contributors play.

Starters, for example, create sentences but seldom engage in other actions. Content justifiers create sentences and justify them with resources and links. Copy editors contribute primarily through modifying existing sentences. Some users, the all-round contributors, perform many different functions.

“We then clustered the articles based on these roles and examined the collaboration patterns within each cluster to see what kind of quality resulted,” Ram said. “We found that all-round contributors dominated the best-quality entries. In the entries with the lowest quality, starters and casual contributors dominated.”
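
The paper’s exact pipeline is not described here, but the underlying idea, summarizing each article by how much of its work came from each contributor role and then grouping articles with similar role profiles, can be sketched roughly as follows. The role names, the toy numbers and the choice of k-means are illustrative assumptions, not the authors’ actual method or data.

import numpy as np
from sklearn.cluster import KMeans

# Each row is one article; each column is the (hypothetical) share of its edits
# contributed by a given role. Real profiles would come from edit-history mining.
ROLES = ["starter", "content_justifier", "copy_editor", "all_round", "casual"]
articles = np.array([
    [0.10, 0.15, 0.20, 0.50, 0.05],   # all-round contributors dominate
    [0.55, 0.05, 0.05, 0.05, 0.30],   # starters and casual contributors dominate
    [0.12, 0.18, 0.22, 0.43, 0.05],
    [0.50, 0.08, 0.07, 0.05, 0.30],
])

# Group articles with similar collaboration patterns; the resulting groups could
# then be compared against Wikipedia's quality ratings (featured, A, B, C).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(articles)
print(dict(zip(range(len(articles)), labels)))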

To generate the best-quality entries, she says, people in many different roles must collaborate. Ram and Liu suggest that the results of this study should spark the design of software tools that can help improve quality.

“A software tool could prompt contributors to justify their insertions by adding links,” she said, “and down the line, other software tools could encourage specific role setting and collaboration patterns to improve overall quality.”

The impetus behind the paper came from Ram’s involvement in UA’s $50 million iPlant Collaborative, which aims to unite the international scientific community around solving plant biology’s “grand challenge” questions. Ram’s role as a faculty advisor is to develop a cyberinfrastructure to facilitate collaboration.

“We initially suggested wikis for this, but we faced a lot of resistance,” she said. Scientists expressed concerns ranging from lack of experience using the wikis to lack of incentive.

“We wondered how we could make people collaborate,” Ram said. “So we looked at the English version of Wikipedia. There are more than three million entries, and thousands of people contribute voluntarily on a daily basis.”

The results of this research have helped guide recommendations to the iPlant collaborators.

“If we want scientists to be collaborative,” Ram said, “we need to assign them to specific roles and motivate them to police themselves and justify their contributions.”

By Liz Warren-Pederson, UA Eller College of Management

On the Net:

The Battle For Earth’s Atmosphere

Scientists at the University of Rochester have discovered that the Earth’s magnetic field 3.5 billion years ago was only half as strong as it is today, and that this weakness, coupled with a strong wind of energetic particles from the young Sun, likely stripped water from the early Earth’s atmosphere.

The findings, presented in today’s issue of Science, suggest that the magnetopause (the boundary where the Earth’s magnetic field successfully deflects the Sun’s incoming solar wind) was only half the distance from Earth it is today.

“With a weak magnetosphere and a rapid-rotating young Sun, the Earth was likely receiving as many solar protons on an average day as we get today during a severe solar storm,” says John Tarduno, a geophysicist at the University of Rochester and lead author of the study. “That means the particles streaming out of the Sun were much more likely to reach Earth. It’s very likely the solar wind was removing volatile molecules, like hydrogen, from the atmosphere at a much greater rate than we’re losing them today.” Tarduno says the loss of hydrogen implies a loss of water as well, meaning there may be much less water on Earth today than in its infancy.

To find the strength of the ancient magnetic field, Tarduno and his colleagues from the University of KwaZulu-Natal visited sites in Africa that were known to contain rocks in excess of 3 billion years of age. Not just any rocks of that age would do, however. Certain igneous rocks called dacites contain small millimeter-sized quartz crystals, which in turn have tiny nanometer-sized magnetic inclusions. The magnetization of these inclusions acts as a minute compass, locking in a record of the Earth’s magnetic field as the dacite cooled from molten magma to hard rock. Simply finding rocks of this age is difficult enough, but such rocks have also witnessed billions of years of geological activity that could have reheated them and possibly changed their initial magnetic record. To reduce the chance of this contamination, Tarduno picked out the best-preserved grains of feldspar and quartz from 3.5 billion-year-old dacite outcroppings in South Africa.

Complicating the search for the right rocks further, the effect of the solar wind interacting with the atmosphere can induce a magnetic field of its own, so even if Tarduno did find a rock that had not been altered in 3.5 billion years, he had to make sure the magnetic record it contained was generated by the Earth’s core and not induced by the solar wind.

Once he isolated the ideal crystals, Tarduno used a device called a superconducting quantum interference device, or SQUID magnetometer, which is normally used to troubleshoot computer chips because it’s extremely sensitive to the smallest magnetic fields. Tarduno pioneered the use of single-crystal analyses with SQUID magnetometers. However, for this study, even standard SQUID magnetometers lacked the necessary sensitivity, so Tarduno employed a new magnetometer with sensors closer to the sample than in previous instruments.

Using the new magnetometer, Tarduno, Research Scientist Rory Cottrell, and University of Rochester students were able to confirm that the 3.5 billion-year-old silicate crystals had recorded a field much too strong to be induced by the solar wind-atmosphere interaction, and so must have been generated by Earth’s core.

“We gained a pretty solid idea of how strong Earth’s field was at that time, but we knew that was only half the picture,” says Tarduno. “We needed to understand how much solar wind that magnetic field was deflecting because that would tell us what was probably happening to Earth’s atmosphere.”

The solar wind can strip away a planet’s atmosphere and bathe its surface in lethal radiation. Tarduno points to Mars as an example of a planet that likely lost its magnetosphere early in its history, letting the bombardment of solar wind slowly erode its atmosphere. To discover what kind of solar wind the Earth had to contend with, Tarduno employed the help of Eric Mamajek, assistant professor of physics and astronomy at the University of Rochester.

“There is a strong correlation between how old a Sun-like star is and the amount of matter it throws off as solar wind,” says Mamajek. “Judging from the rotation and activity we expect of our Sun at a billion years of age, we think that it was shedding material at a rate about 100 times stronger than the average rate observed in modern times.”

While the life cycle of stars like our Sun is well known, says Mamajek, astrophysicists have only a handful of stars for which they know the amount of mass lost as solar wind. Mamajek says the amount of X-rays radiated from a star, regardless of its apparent brightness, can give a good estimate of how much material the star is radiating as solar wind. Though the Sun at this age was likely about 23% dimmer than it would appear to us today, it was giving off much more radiation as X-rays, and driving a much more powerful solar wind.

“We estimate the solar wind at that time was a couple of orders of magnitude stronger,” says Mamajek. “With Earth’s weaker magnetosphere, the standoff point between the two was probably less than five Earth radii. That’s less than half of the distance of 10.7 radii it is today.”
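
Those figures are in line with the standard pressure-balance scaling for the magnetopause standoff distance, in which the standoff radius grows with the dipole field strength and shrinks with the solar wind’s dynamic pressure. The short check below uses the halved field and the roughly hundredfold stronger wind quoted above; the scaling law is textbook physics, and the arithmetic is ours rather than the paper’s.

# Pressure-balance scaling for the magnetopause standoff distance:
#   r_standoff  is proportional to  (B_dipole**2 / P_solar_wind) ** (1/6)
modern_standoff = 10.7        # Earth radii, as quoted above for the present day
field_ratio = 0.5             # ancient dipole field relative to today's
wind_pressure_ratio = 100.0   # ancient solar wind pressure relative to today's (two orders of magnitude)

ancient_standoff = modern_standoff * (field_ratio**2 / wind_pressure_ratio) ** (1 / 6)
print(f"Estimated ancient standoff: {ancient_standoff:.1f} Earth radii")   # roughly 4, i.e. "less than five"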

Tarduno says that in addition to the smaller magnetopause allowing the solar wind to strip away more water vapor from the early Earth, the skies might have been filled with more polar aurora. The Earth’s magnetic field bends toward vertical at the poles and channels the solar wind toward the Earth’s surface there. When the solar wind strikes the atmosphere, it releases photons that appear as shifting patterns of light at night.

With the weakened magnetosphere, the area where the solar wind is channeled toward the surface, called the magnetic polar cap, would have been three times larger than it is today, says Tarduno.

“On a normal night 3.5 billion years ago you’d probably see the aurora as far south as New York,” says Tarduno.

The study involved colleagues from the University of KwaZulu-Natal (South Africa), NASA, the Chinese Academy of Geological Sciences (Beijing), and University of Oslo (Norway) and was supported by the John Simon Guggenheim Foundation and the National Science Foundation.

Image 1: The larger auroral oval relative to the modern is the result of a weaker dipole magnetic field and stronger solar wind dynamic pressure. The auroral intensity is brighter due to solar wind densities many times greater than those today, and the dominant color reflects greater energies of the precipitating particles and the mildly reducing Paleoarchean atmosphere. Credit: Courtesy J. Tarduno and R. Cottrell. University of Rochester

Image 2: The Barberton Mountain Land, South Africa, is the site yielding data on the ancient magnetic field. Credit: University of Rochester

On the Net:

The Precursors of Life-enabling Molecules in the Orion Nebula

ESA’s Herschel Space Observatory has revealed the chemical fingerprints of potential life-enabling organic molecules in the Orion Nebula, a nearby stellar nursery in our Milky Way galaxy. This detailed spectrum, obtained with the Heterodyne Instrument for the Far Infrared (HIFI) – one of Herschel’s three innovative instruments – demonstrates the gold mine of information that Herschel-HIFI will provide on how organic molecules form in space.

The spectrum, one of the first to be obtained with HIFI since it returned to full health in January 2010 following technical difficulties, clearly demonstrates that the instrument is working well. Striking features in the HIFI spectrum include a rich, dense pattern of “spikes”, each representing the emission of light from a specific molecule in the Orion Nebula. This nebula is known to be one of the most prolific chemical factories in space, although the full extent of its chemistry and the pathways for molecule formation are not well understood. By sifting through the pattern of spikes in this spectrum, astronomers have identified a few common molecules that appear everywhere in the spectrum. The identification of the many other emission lines is currently ongoing.

By clearly identifying the lines associated with the more common molecules, astronomers can then begin to tease out the signature of particularly interesting molecules that are the direct precursors to life-enabling molecules. A characteristic feature of the Orion spectrum is the spectral richness: among the molecules that can be identified in this spectrum are water, carbon monoxide, formaldehyde, methanol, dimethyl ether, hydrogen cyanide, sulphur oxide, sulphur dioxide and their isotope analogues. It is expected that new organic molecules will also be identified.

“This HIFI spectrum, and the many more to come, will provide a virtual treasure trove of information regarding the overall chemical inventory and on how organics form in a region of active star formation. It harbors the promise of a deep understanding of the chemistry of space once we have the full spectral surveys available,” said Edwin Bergin of the University of Michigan, principal investigator of the HEXOS Key Program on Herschel.

Unprecedented high resolution

HIFI was designed to provide extremely high-resolution spectra and to open new wavelength ranges for investigation, which are inaccessible to ground-based telescopes. “It is astonishing to see how well HIFI works,” said Frank Helmich, HIFI principal investigator of SRON Netherlands Institute for Space Research. “We obtained this spectrum in a few hours and it already beats any other spectrum, at any other wavelength, ever taken of Orion. Organics are everywhere in this spectrum, even at the lowest levels, which hints at the fidelity of HIFI. The development of HIFI took eight years but it was really worth waiting for.”

“HIFI’s unprecedented high resolution and stability allows us to construct very detailed models of the density and temperature structure of star-forming clouds,” said Tom Phillips of the California Institute of Technology. “This view allows us to pierce the veil of star formation and more directly study the chemistry associated with the birth of stars, planets, and in some sense, life.”

The spectrum was obtained only one month after HIFI resumed operations on-board Herschel. In August 2009, HIFI experienced an unexpected voltage spike in the electronic system, probably caused by a high-energy cosmic particle, resulting in the instrument shutting down. The mission team studied the problem and developed a solution that prevents harmful side-effects of this type of event. On 14 January, 2010, HIFI was successfully switched back on using its spare electronics and restarted a sequence of testing and verification, ahead of science observations commencing on 28 February. It now rejoins the other two Herschel instruments, SPIRE and PACS, in their exploration of the far-infrared Universe.

Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia, with important participation from NASA.

HIFI, a high-resolution spectrometer, was designed and built by a nationally-funded consortium led by SRON Netherlands Institute for Space Research. The consortium includes institutes from France, Germany, USA, Canada, Ireland, Italy, Poland, Russia, Spain, Sweden, Switzerland, and Taiwan.

Identification of the many spectral features visible in the Orion spectrum with transitions of particular molecular species requires sophisticated molecular spectroscopy databases, which collect the results from many years of laboratory spectroscopy work. The assignments for this HIFI spectrum were made using the Cologne Database of Molecular Spectroscopy (CDMS) and an equivalent database maintained at NASA’s Jet Propulsion Laboratory.

Image Caption: The HIFI spectrum of the Orion Nebula, superimposed on a Spitzer image of Orion. A characteristic feature is the spectral richness: among the organic molecules identified in this spectrum are water, carbon monoxide, formaldehyde, methanol, dimethyl ether, hydrogen cyanide, sulphur oxide, sulphur dioxide and their isotope analogues. It is expected that new molecules will also be identified. This spectrum is the first glimpse at the spectral richness of regions of star and planet formation. It harbors the promise of a deep understanding of the chemistry of space once the complete spectral surveys are available. This HIFI spectrum was obtained for the Herschel HEXOS Key Program, a scientific investigation using the Herschel HIFI and PACS instruments to perform full line surveys of five sources in the Orion and Sagittarius B2 molecular clouds. The scientific rights of these Herschel observations are owned by the HEXOS consortium, led by E. Bergin (University of Michigan). Credit: ESA, HEXOS and the HIFI consortium

On the Net:

Supermarket Lighting Enhances Nutrient Level Of Fresh Spinach

Far from being a food spoiler, the fluorescent lighting in supermarkets actually can boost the nutritional value of fresh spinach, scientists are reporting. The finding could lead to improved ways of preserving and enhancing the nutritional value of spinach and perhaps other veggies, they suggest in a study in ACS’ bi-weekly Journal of Agricultural and Food Chemistry.

Gene Lester, Donald J. Makus, and D. Mark Hodges note that fresh spinach is a nutritional powerhouse, packed with vitamin C, vitamin E, folate (a B vitamin), and healthful carotenoid antioxidants. Supermarkets often display fresh spinach in clear plastic containers at around 39 degrees Fahrenheit in showcases that may be exposed to fluorescent light 24 hours a day. Lester, Makus, and Hodges wondered how this continuous light exposure might affect spinach’s nutritional value.

The scientists exposed fresh spinach leaves to continuous light or darkness under simulated retail storage conditions for three to nine days. Spinach stored in light for as little as three days had significantly higher levels of vitamins C, K and E and of folate, as well as higher levels of the healthful carotenoids (plant pigments) lutein and zeaxanthin. After nine days of continuous light exposure, for instance, folate levels had increased between 84 and 100 percent, and vitamin K levels had increased between 50 and 100 percent, depending on the spinach variety tested. By contrast, spinach leaves stored in continuous darkness tended to have declining or unchanged levels of nutrients, the scientists say.

Image Caption: Spinach on display under 24-hour light in supermarkets actually gains in content of some nutrients. Credit: Marc Villalobos, USDA-ARS

On the Net:

Warmer Weather Brings On More Cocaine-Related Deaths

In a study published online March 2 in the journal Addiction, researchers in the United States have discovered that accidental overdose deaths involving cocaine rise when the average weekly ambient temperature passes 24 degrees Celsius (75 degrees Fahrenheit). Using mortality data from New York City’s Office of the Chief Medical Examiner for 1990 through 2006, and temperature data from the National Oceanic and Atmospheric Administration, the researchers found that accidental overdose deaths wholly or partly attributable to cocaine use rose significantly once the weekly ambient temperature passed that threshold, and the number of cocaine-related overdose deaths continued to rise as temperatures continued to climb.

Cocaine-related overdose deaths increase as the ambient temperature rises because cocaine increases the core body temperature, impairs the cardiovascular system’s ability to cool the body, and decreases the sense of heat-related discomfort that ordinarily motivates people to avoid becoming overheated. Cocaine users who become overheated (hyperthermic) can overdose on lower amounts of cocaine because their bodies are under more stress.

The study’s findings correct previous research that associated an increase in cocaine-related mortality with much higher temperatures (31.1 degrees Celsius, or 87.9 degrees Fahrenheit). Because cocaine-related overdose fatalities begin to rise at lower ambient temperatures than was previously thought, it is now apparent that cocaine users are at risk for longer periods of each year. Between 1990 and 2006, the average weekly temperature in New York City rose above 24 degrees Celsius for about seven weeks per year.

The study showed no difference in the number of drug overdoses in New York City among weeks where the average temperature was between -10 and 24 degrees Celsius. Above 24 degrees Celsius, however, there were 0.25 more drug overdoses per 1,000,000 residents per week for every two-degree increase in weekly average temperature. Given that over 8.2 million people live in New York City, the study’s findings predict that at least two more people per week will die of a drug overdose in the city for each two-degree rise in temperature above 24 degrees Celsius, compared to weeks with average temperatures of 24 degrees and below.
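
That projection follows directly from the figures quoted above. The short calculation below reproduces it; the 0.25-per-million weekly rate and the 8.2 million population figure come from the article, while the temperatures plugged in at the end are purely illustrative.

# Back-of-the-envelope check of the excess-overdose projection quoted above.
# Inputs are the figures reported in the article; temperatures are examples.
rate_per_million = 0.25      # extra overdoses per 1,000,000 residents per week,
                             # for every 2 degrees Celsius above 24 degrees
population_millions = 8.2    # approximate New York City population

def excess_overdoses(weekly_avg_temp_c):
    """Expected extra weekly overdoses relative to a week at or below 24 C."""
    if weekly_avg_temp_c <= 24.0:
        return 0.0
    two_degree_steps = (weekly_avg_temp_c - 24.0) / 2.0
    return rate_per_million * population_millions * two_degree_steps

print(excess_overdoses(26.0))   # about 2 extra deaths per week
print(excess_overdoses(30.0))   # about 6 extra deaths per week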

The authors of this study point out the need for public health interventions in warm weather, such as delivering health-related warnings to high-risk groups. Prevention efforts could also include making air conditioning available in locations where cocaine use is common such as urban areas with a known high prevalence of cocaine use, and within those urban areas, particular neighborhoods with elevated numbers of cocaine-related deaths or arrests. As lead author Dr. Amy Bohnert explains, “Cocaine users are at a high risk for a number of negative health outcomes and need public health attention, particularly when the weather is warm.”

Bohnert A., Prescott M., Vlahov D., Tardiff K., and Galea S. Ambient temperature and risk of death from accidental drug overdose in New York City, 1990-2006. Addiction 2010; doi:10.1111/j.1360-0443.2009.02887.x

On the Net:

Many Kids With Insomnia Have Heart Problems

SAN FRANCISCO — Children with insomnia and shorter sleep duration had impaired modulation of heart rhythm during sleep, Pennsylvania researchers reported at the American Heart Association’s 50th Annual Conference on Cardiovascular Disease Epidemiology and Prevention.

In a study of young children, researchers showed that insomnia symptoms were consistently associated with impaired heart rate variability measures. They also found a significant but less consistent pattern linking shortened sleep duration with decreased heart rate variability.

Heart rate variability is the beat-to-beat variation in heart rate. In a healthy person, beat-to-beat intervals change slightly in response to automatic functions like breathing.
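
As a generic illustration of how such variability is quantified, the sketch below computes two standard time-domain measures from a made-up series of beat-to-beat (RR) intervals; it is not the cardiac autonomic modulation analysis used in the study.

# Generic illustration of quantifying heart rate variability (HRV) from
# beat-to-beat RR intervals in milliseconds. The interval values are invented.
import math

rr_intervals_ms = [812, 798, 825, 840, 805, 790, 818, 832, 809, 801]

def sdnn(rr):
    """Standard deviation of RR intervals, a common time-domain HRV measure."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / (len(rr) - 1))

def rmssd(rr):
    """Root mean square of successive differences, often linked to parasympathetic activity."""
    diffs = [(b - a) ** 2 for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(diffs) / len(diffs))

mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
print("mean heart rate: %.1f beats per minute" % (60000.0 / mean_rr))
print("SDNN: %.1f ms   RMSSD: %.1f ms" % (sdnn(rr_intervals_ms), rmssd(rr_intervals_ms)))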

The study included 612 elementary school children in the first to fifth grades. The children’s average age was 9; 25 percent were non-white and 49 percent were boys. All were generally in good health. Their parents completed the Pediatric Behavior Scale, including two questions that focused on symptoms of insomnia.

Researchers examined the children overnight in a sleep laboratory with polysomnography (PSG), a standardized method for measuring sleep disorders. The researchers measured sleep duration, trouble falling asleep, the number of wake-ups and problems going back to sleep if awakened. They also measured cardiac autonomic modulation (CAM), the balance of the sympathetic and the parasympathetic control of the heart rate rhythm.

A balance is needed between the sympathetic modulation that “excites” the heart and the parasympathetic modulation that “calms” the heart, said Mr. Fan He, the lead-author of the study and a graduate student at Penn State University College of Medicine in Hershey, Pa. “The balance between the sympathetic and the parasympathetic provides a favorable profile for the heart.”

The study showed:

— Children with reported insomnia had impaired CAM with a shift towards more sympathetic or excitable activation of the heart rhythm. There was a 3 percent to 5 percent reduction in the parasympathetic modulation of heart rhythm in children with insomnia.
— Children with longer sleep duration had a slower heart rate indicative of a balance of heart rhythm, with a shift towards more parasympathetic modulation. The heart rate of children who slept eight hours was two beats per minute slower than that of kids who slept only seven hours.
— Insomnia and short sleep duration, even in young children, resulted in a physiological activation of the sympathetic modulation.

“Kids who sleep a longer duration have a healthier heart regulation profile compared to kids who sleep shorter durations,” said Duanping Liao, M.D., Ph.D., co-author of the study and professor of epidemiology at Penn State University College of Medicine in Hershey, Pa. “Their hearts are more excitable if they have insomnia. If the heart is too excited, that means it is beating too fast and usually that isn’t good. These data indicate that among young children with insomnia symptoms reported by their parents, there already is an impairment of cardiovascular autonomic regulation, long before they reach the traditional high-risk period for cardiovascular disease.”

Parents should encourage their children to have healthy bedtime habits that promote sleep, Liao said. “Watching television before going to bed and waking up to return text messages are examples of activities that could have a harmful effect on healthy sleep patterns in children.”

Liao called for further studies in children to determine the impact of sleep deprivation and stress and the possible long-term risk of cardiovascular disease and obesity. “Previous studies have shown a strong association of heart rhythm regulation and heart risk in adults. It’s quite possible that this kind of stress can have a long-term impact even at a young age.”

The study was funded by the National Heart, Lung, and Blood Institute.

Other co-authors are: Xian Li, M.D., M.S.; Sol Rodriguez-Colon, M.S.; Alexandros N. Vgontzas, M.D.; Chuntao Wu, M.D., Ph.D.; and Edward O. Bixler, Ph.D. Author disclosures are on the abstract.

On the Net:

American Heart Association

ESO Telescope Spies The Cosmic Bat

An Island of Stars in the Making on the Outskirts of Orion

The delicate nebula NGC 1788, located in a dark and often neglected corner of the Orion constellation, is revealed in a new and finely nuanced image that ESO is releasing today. Although this ghostly cloud is rather isolated from Orion’s bright stars, the latter’s powerful winds and light have had a strong impact on the nebula, forging its shape and making it home to a multitude of infant suns.

Stargazers all over the world are familiar with the distinctive profile of the constellation of Orion (the Hunter). Fewer know about the nebula NGC 1788, a subtle, hidden treasure just a few degrees away from the bright stars in Orion’s belt.

NGC 1788 is a reflection nebula, whose gas and dust scatter the light coming from a small cluster of young stars in such a way that the tenuous glow forms a shape reminiscent of a gigantic bat spreading its wings. Very few of the stars belonging to the nebula are visible in this image, as most of them are obscured by the dusty cocoons surrounding them. The most prominent, named HD 293815, can be distinguished as the bright star in the upper part of the cloud, just above the center of the image and the pronounced dark lane of dust extending through the nebula.

Although NGC 1788 appears at first glance to be an isolated cloud, observations covering a field beyond the one presented in this image have revealed that bright, massive stars, belonging to the vast stellar groupings in Orion, have played a decisive role in shaping NGC 1788 and stimulating the formation of its stars. They are also responsible for setting the hydrogen gas ablaze in the parts of the nebula facing Orion, leading to the red, almost vertical rim visible in the left half of the image.

All the stars in this region are extremely young, with an average age of only a million years, a blink of an eye compared to the Sun’s age of 4.5 billion years. Analyzing them in detail, astronomers have discovered that these “preschool” stars fall naturally into three well-separated classes: the slightly older ones, located on the left side of the red rim; the fairly young ones, to its right, making up the small cluster enclosed in the nebula and illuminating it; and finally the very youngest stars, still deeply embedded in their nascent dusty cocoons, further to the right. Although none of the latter are visible in this image because of the obscuring dust, dozens of them have been revealed through observations at infrared and millimeter wavelengths.

This fine distribution of stars, with the older ones closer to Orion and the younger ones concentrated on the opposite side, suggests that a wave of star formation, generated around the hot and massive stars in Orion, propagated throughout NGC 1788 and beyond.

This image has been obtained using the Wide Field Imager on the MPG/ESO 2.2-meter telescope at ESO’s La Silla Observatory in Chile.

More information

ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organization in Europe and the world’s most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious program focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organizing cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and VISTA, the largest survey telescope. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 42-meter European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

Image 1: The delicate nebula NGC 1788, located in a dark and often neglected corner of the Orion constellation, is revealed in this finely nuanced image. Although this ghostly cloud is rather isolated from Orion’s bright stars, their powerful winds and light have a strong impact on the nebula, forging its shape and making it a home to a multitude of infant suns. This image has been obtained using the Wide Field Imager on the MPG/ESO 2.2-meter telescope at ESO’s La Silla Observatory in Chile. It combines images taken through blue, green and red filters, as well as a special filter designed to let through the light of glowing hydrogen. The field is about 30 arcminutes across; North is up, and East to the left. Credit: ESO

Image 2: The delicate nebula NGC 1788 is located in a dark and often neglected corner of the Orion constellation. Although this ghostly cloud is rather isolated from Orion’s bright stars, their powerful winds and light have a strong impact on the nebula, forging its shape and making it a home to a multitude of infant suns. This image from the Digitized Sky Survey 2 covers a field of view of 3 x 2.9 degrees, and shows that the Bat Nebula is part of much larger nebulosity. Credit: ESO/Digitized Sky Survey 2

On the Net:

Insurance To Cover Medical Marijuana Use Nationwide

On Monday, a Rancho Cordova-based insurer launched the first nationally available insurance coverage designed specifically for the medical marijuana industry, according to the Sacramento Bee.

Only 14 states currently allow the use of cannabis for medical purposes, but Statewide Insurance Services is offering coverage in all 50 states.

“Given the growth in the industry, I think it’s only a matter of time” before other states allow medical marijuana,  Mike Aberle, a commercial insurance agent with the local firm and national director of its Medical Marijuana Specialty Division, told the Bee.

He added: “Now that we can offer (services) in all 50 states, we can start the minute they go legal, without delay.”

Aberle said the program covers “all aspects of the industry,” including medical marijuana dispensaries (MMDs for short), workers’ compensation, general liability, auto insurance (motor vehicles that transport the product), equipment breakdown/damage, property/product loss (including pot spoilage) and operations related to marijuana growing.

California voters first opened the door for dispensaries and commercial insurers in 1996 when they approved Proposition 215, which allows physicians to recommend cannabis for treatment of cancer, anorexia, AIDS, chronic pain, spasticity, glaucoma, arthritis, migraines or “any other illness for which marijuana provides relief.”

The Obama administration said last year that it would not arrest marijuana growers and sellers that abide by state laws.  Federal officials prosecuted them previously.

Some in the medical marijuana industry estimate that there are over 2,000 dispensaries statewide.

Aberle started the process of forming Statewide’s MMD unit in 2007.  It has provided insurance to clients in California, Colorado, New Mexico and Rhode Island since then.

Aberle said he started ramping up the national program last year.

He said premiums range from $650 annually up to $25,000 a year, with different variables affecting the price.  He said that a typical policy has annual premiums in the $1,000 to $4,000 range.

Max Del Real, a lobbyist with California Capitol Solutions in Sacramento, told the Bee that Statewide’s national program is a milestone in an industry that needs insurance protections for everyone in the distribution chain, from growers of the product to those who use it.

“It’s very big, especially right now with public safety. Safety protocols need to be put into place,” he said.

Del Real said he represents dispensaries and other segments of the medical marijuana industry throughout California.  He said that growers remain the most unprotected group.

“How do we move out of residential areas and into commercial and industrial space?” he asked. “A lot of people are trying to get their minds around the cultivation of medical marijuana.”

Del Real said governments throughout California are still deciding numerous issues, such as whether they will require insurance for dispensaries.
 
“There is a big thing of catching up going on,” he said. “Each community is passing its own laws, and that becomes problematic.”

The growth of MMDs has come so fast that some cities in California have drafted ordinances and moratoriums to halt new openings.

On the Net:

Climategate Scientist Admits To ‘Awful Emails’

A British climate researcher at the heart of the Climategate scandal admitted he wrote some “pretty awful” emails to skeptics when he refused their requests for data.

However, Phil Jones, of the University of East Anglia’s Climatic Research Unit, said his decision not to release the data about temperatures from around the world was right, and it was not “standard practice” to do so.

“I have obviously written some pretty awful emails,” Jones told British lawmakers in response to a question about a message he sent to a skeptic in which he refused to release data saying he believed it would be misused.

The scientist’s confession came at a parliamentary hearing in Britain.

The Climategate row erupted ahead of key climate talks in Copenhagen in December, after more than 1,000 emails and 3,000 other documents were hacked from the university’s server and posted online.

Skeptics said they had evidence that scientists were manipulating climate data in a bid to exaggerate the case for manmade global warming as world leaders met to try and strike a new accord on climate change.

Jones had referred in one private email to a “trick” being employed to tweak temperature statistics to “hide the decline.”

Since then, he has insisted the emails were taken out of context and has dismissed allegations that he exaggerated warming evidence as “complete rubbish.”
 
Jones said in his defense that the data was not publicly available in the U.S., adding that the scientific journals that published his papers had never asked to see it.

He also said the unit struggled after being hit by a “deluge” of requests for data last July, made under freedom of information legislation.

The scientist said that 80 percent of the data used to create a series of average global temperatures showing the world was getting warmer had been released.

Jones insisted that the scientific findings on climate change were robust and verifiable.

Climategate is the term coined by the British media for the scandal and the many investigations into it, specifically into the disclosure of data from the unit.

On the Net:

Obesity And Depression Linked

Obesity appears to be associated with an increased risk of depression, and depression also appears associated with an increased risk of developing obesity, according to a meta-analysis of previously published studies in the March issue of Archives of General Psychiatry, one of the JAMA/Archives journals.

“Both depression and obesity are widely spread problems with major public health implications,” the authors write as background information in the article. “Because of the high prevalence of both depression and obesity, and the fact that they both carry an increased risk for cardiovascular disease, a potential association between depression and obesity has been presumed and repeatedly been examined.” Understanding the relationship between the two conditions over time could help improve prevention and intervention strategies.

Floriana S. Luppino, M.D., of Leiden University Medical Center and GGZ Rivierduinen, Leiden, the Netherlands, and colleagues analyzed the results of 15 previously published studies involving 58,745 participants that examined the longitudinal (over time) relationship between depression and overweight or obesity.

“We found bidirectional associations between depression and obesity: obese persons had a 55 percent increased risk of developing depression over time, whereas depressed persons had a 58 percent increased risk of becoming obese,” the authors write. “The association between depression and obesity was stronger than the association between depression and overweight, which reflects a dose-response gradient.”

Sub-analyses demonstrated that the association between obesity and later depression was more pronounced among Americans than among Europeans, and stronger for diagnosed depressive disorder compared with depressive symptoms.

Evidence of a biological link between overweight, obesity and depression remains uncertain and complex, but several theories have been proposed, the authors note. Obesity may be considered an inflammatory state, and inflammation is associated with the risk of depression. Because thinness is considered a beauty ideal in both the United States and Europe, being overweight or obese may contribute to body dissatisfaction and low self-esteem that places individuals at risk for depression. Conversely, depression may increase weight over time through interference with the endocrine system or the adverse effects of antidepressant medication.

The findings are important for clinical practice, the authors note. “Because weight gain appears to be a late consequence of depression, care providers should be aware that within depressive patients weight should be monitored. In overweight or obese patients, mood should be monitored. This awareness could lead to prevention, early detection and co-treatment for the ones at risk, which could ultimately reduce the burden of both conditions,” they conclude.

(Arch Gen Psychiatry. 2010;67[3]:220-229)

On the Net:

New Tool Measures Treatment Effectiveness for Stem Cell Transplant Complications

First-of-its-kind tool will allow for better screening, more accurate assessment of treatment effectiveness for stem cell transplant complications

ORLANDO, FL – Researchers from The University of Texas M. D. Anderson Cancer Center have developed a new assessment tool to measure the severity of symptoms that can complicate stem cell transplantation. The tool assesses symptoms resulting from chronic graft-versus-host disease (cGVHD), and was presented with supporting research at the 2010 Bone and Marrow Transplant Tandem Meeting.

cGVHD – a chronic disease that requires the close management of symptoms for an indefinite period of time – usually develops more than three months after allogeneic hematopoietic stem cell transplantation (alloHSCT). cGVHD occurs when transplanted donor cells recognize the recipient as foreign and begin to attack the patient’s organs and tissue.

The disease causes physically debilitating symptoms in 40 to 80 percent of transplant patients. Inadequate diagnosis and assessment is a major barrier to successful treatment of symptoms. According to the Center for International Blood and Marrow Transplant Research, approximately 19,000 allogeneic stem cell transplants were performed worldwide in 2006.

Using the existing M. D. Anderson Symptom Inventory, or core MDASI, a systematic, patient-reported outcome measure for clinical and research use, researchers developed a reliable and sensitive measuring system for cGVHD. On a scale of zero to 10, the new tool rates the severity of symptoms common to patients with the disease and to what extent those symptoms interfere with their daily life. The MDASI-cGVHD is one of 11 MDASI tools for symptom management used by clinicians at M. D. Anderson.

“There was a real need to develop this tool because chronic graft-versus-host disease is a vexing side effect that can become a serious condition in a very short period of time, threatening the success of the transplant and creating a dilemma for many patients who do not live near where they received their transplant or do not have access to a transplant specialist,” said Loretta Williams, Ph.D., R.N., A.O.C.N., O.C.N., an instructor in the Department of Symptom Research at M. D. Anderson and lead author on the study. “Now we can assess quickly in person or over the phone whether a patient has developed symptoms of cGVHD and make arrangements for the patient to receive further assessment and treatment if necessary.”

To develop the tool and establish its validity and reliability, researchers studied 116 patients with cGVHD and 58 without cGVHD, all of whom had had a transplant more than three months before. Using the core MDASI, each patient rated the severity of 13 symptoms common in cancer patients and to what extent those symptoms affected six “interference items,” or general aspects and activities of daily life. Patients also rated 14 additional cGVHD-specific symptom items, generated from patient interviews and expert panel ratings.

Statistical analysis determined that five of the 14 cGVHD-related symptoms were clinically significant and unique to cGVHD patients, including muscle weakness, skin problems, eye problems, joint stiffness or soreness and changes in sexual function. The resulting comprehensive MDASI, which combined the 13 core symptoms with the five cGVHD symptoms, was sensitive to the presence of the disease. Results from the survey also correlated significantly with a patient’s report of overall quality of life.

“For the first time ever, we have a reliable tool to provide better support to patients following their transplant,” Williams said. “The survey will help open the conversation about chronic graft-versus-host disease between doctors and patients and identify complications more quickly. This may lead to better outcomes following stem cell transplantation.”

The MDASI measures symptom severity and interference levels on a scale of 0 to 10, with zero meaning the symptom was not present and did not interfere with a patient’s daily life, and 10 meaning the symptom’s severity and interference in daily life was the worst imaginable. Core symptoms include pain, fatigue, nausea, shortness of breath, poor memory, difficulty sleeping, lack of appetite, dry mouth, sadness, vomiting and numbness or tingling. The six “interference items” include general activity, mood, work, personal relations, walking and overall enjoyment of life.
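
As a rough sketch of how 0-to-10 ratings like these can be summarized, the example below averages placeholder severity and interference ratings. The item lists are abridged and the numbers invented; this is not the instrument's official scoring procedure.

# Minimal sketch of summarizing 0-10 symptom ratings into mean severity and
# interference scores, in the spirit of the MDASI described above. Item names
# are abridged and the ratings are placeholders, not real patient data or the
# official scoring algorithm.
core_severity = {
    "pain": 3, "fatigue": 6, "nausea": 1, "shortness of breath": 2,
    "difficulty sleeping": 5, "dry mouth": 4,
}
cgvhd_severity = {
    "muscle weakness": 5, "skin problems": 7, "eye problems": 4,
    "joint stiffness or soreness": 6, "changes in sexual function": 2,
}
interference = {
    "general activity": 4, "mood": 3, "work": 5,
    "personal relations": 2, "walking": 3, "enjoyment of life": 4,
}

def mean_score(items):
    """Average of the 0-10 ratings in one category."""
    return sum(items.values()) / len(items)

print("core symptom severity:  %.1f / 10" % mean_score(core_severity))
print("cGVHD symptom severity: %.1f / 10" % mean_score(cgvhd_severity))
print("interference:           %.1f / 10" % mean_score(interference))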

On the Net:

University of Texas M. D. Anderson Cancer Center

Pandemic Flu, Like H1N1, Shows Signs Of Tamiflu Resistance

If the behavior of the seasonal form of the H1N1 influenza virus is any indication, scientists say that chances are good that most strains of the pandemic H1N1 flu virus will become resistant to Tamiflu, the main drug stockpiled for use against it.

Researchers at Ohio State University have traced the evolutionary history of the seasonal H1N1 influenza virus, which first infected humans during the 1918 pandemic. It is one of three seasonal influenza A viruses that commonly infect humans. The others are H1N2 and H3N2.

Within H1N1, two strains of virus circulate in humans: a seasonal form and the pandemic form of influenza known as swine flu, which has sickened millions and killed thousands of people since it first emerged in North America last spring.

Over time, the H1N1 strain of seasonal influenza surviving around the world has developed mutations that have caused it to become resistant to oseltamivir-based agents. Tamiflu is the brand name for oseltamivir phosphate.

“Something happened in 2008, when drug resistance took hold,” said Daniel Janies, associate professor of biomedical informatics at Ohio State and primary author of the study. “The drug-resistant isolates became the ones that survived all over the world. This is just static now. The seasonal H1N1 influenza virus is fixed at resistant.”

Janies and colleagues have traced the history of the same mutation in the pandemic H1N1 strain of the virus as well, with data from its emergence last spring until December 2009. And they are starting to see the same kinds of mutation in this virus, changes to an amino acid that allow the virus to resist the effects of oseltamivir, that they saw in the seasonal H1N1 flu.

“It is a pretty good bet that whatever pressure is in the environment, excessive use of Tamiflu or something else, that was driving seasonal influenza to become resistant to Tamiflu is also going to apply to pandemic influenza,” Janies said. “We can see it happening already.

“This has potential to indicate that we are going to have to think of something else to use to treat pandemic H1N1 influenza.”

The same study showed that resistance to a second antiviral drug, zanamivir, known by the brand name Relenza, is not as prevalent, suggesting this medication might be a good alternative to Tamiflu, he said.

The research appears online in the International Journal of Health Geographics.

So far, most pandemic H1N1 strains that have been isolated from humans are susceptible to Tamiflu. As of Feb. 3, 2010, 225 cases of pandemic H1N1 were reported to be resistant to the drug out of the predicted millions of cases of illnesses with swine flu across the United States and elsewhere in the world.

But those resistant cases, as well as the way mutations have led to Tamiflu resistance in seasonal H1N1, offer clues about how the virus changes itself to survive against the popular drug.

The two types of H1N1 virus, seasonal and pandemic, are similar on the surface, where their proteins interact with cells in the human body. But the internal genes of the viruses are configured differently.

The researchers zeroed in on specific points in the neuraminidase protein, the protein that the “N” refers to in these virus subtype names. Resistance to oseltamivir in H1N1 can evolve as a result of a point mutation at one of several locations on this protein, Janies said.

He and colleagues analyzed mutations in neuraminidase proteins from 1,210 seasonal H1N1 viruses isolated around the world between September 2004 and December 2009. For pandemic H1N1, the researchers examined mutations in specific points on neuraminidase proteins of 1,824 viruses collected between March 2009 and December 2009.

“With the rapid availability of public sequence data on pandemic influenza, we are able to essentially watch evolution in real time,” Janies said.

Once they selected the isolates for study, the researchers used powerful supercomputers to analyze the evolution of these proteins and their various mutations. The computational power allows them to match similar regions on the proteins and put the mutation data into context in time and geography.
              
One result of these computations is called a phylogenetic tree, which documents the history of mutations, including those that cause drug resistance. Phylogenetics is the study of the evolutionary relationships and features among various biological species, genes or proteins that share a common ancestor.

In tracing the history of neuraminidase in pandemic and seasonal H1N1, the group found that mutations in the same amino acid position in both seasonal and pandemic H1N1 drove the viruses toward resistance to antivirals.

“Basically a change in the amino acid changes how the neuraminidase protein folds, and the molecule in Tamiflu no longer has the ability to interfere with the virus,” Janies said.

The researchers also used a technique in which they compared different types of mutations, those that do cause antiviral resistance and others that don’t have that effect, to see which type of mutation is more common.

“We look at the ratio of mutations that do confer resistance vs. those that don’t, and if the ratio is higher than 1, it means that change is being promoted by natural selection rather than chance. Something is driving the evolution of drug resistance,” Janies said. “We could see that happening in seasonal influenza and in the data we have so far for pandemic influenza, as well.

“A Darwinian would say that something changed that made the Tamiflu-resistant strain more fit than the wild type,” he said.
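
The comparison Janies describes can be pictured with a deliberately simplified toy calculation, shown below. The counts are invented and the per-site normalization is an assumption made for illustration; it is a stand-in for, not a reproduction of, the group's actual phylogenetic analysis.

# Toy version of the ratio comparison described above: compare how often
# resistance-conferring changes appear, per available site, with how often
# other changes appear. All numbers are invented for illustration.
resistance_changes = 18    # hypothetical observed changes that confer resistance
resistance_sites = 1       # number of positions where such changes can occur
neutral_changes = 40       # hypothetical observed changes with no resistance effect
neutral_sites = 60         # positions where such neutral changes can occur

rate_resistance = resistance_changes / resistance_sites   # changes per site
rate_neutral = neutral_changes / neutral_sites

ratio = rate_resistance / rate_neutral
print("resistance-change rate / neutral-change rate = %.1f" % ratio)
# A ratio well above 1 suggests the change is being promoted by natural
# selection rather than chance alone, as described in the article.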

The group also examined mutations that alter these two strains of H1N1 viruses’ responses to Relenza. Resistance to that drug is relatively rare, Janies said, which could be attributed to less frequent use of the drug or to the possibility that mutations leading to resistance to Relenza aren’t tolerated by the virus itself, so those strains die off.

Janies noted that there is another phenomenon with flu that could further make the pandemic strains difficult to treat. In at least 50 geographic regions identified by the analysis, both seasonal and pandemic H1N1 viruses are co-circulating, including Tamiflu-resistant strains. Because the flu virus in general is not precise when it makes copies of itself, this means that a drug-susceptible pandemic strain might exchange a gene with a drug-resistant viral strain and add it to the new genome.

“And then we would have drug-resistant pandemic influenza without any mutation. It’s a random swap of the whole gene,” Janies said of this phenomenon, which is called reassortment.

“That’s how we got into this situation with pandemic influenza. We have something that’s called pandemic H1N1, but all of its internal genes are different. It underwent a few rounds of reassortment and it’s a virus we’ve never seen before because its genome is highly reshuffled compared to seasonal H1N1. This same process could confer resistance to a drug,” he said.
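
A toy simulation, shown below, illustrates the reassortment mechanism: when two strains co-infect a cell, each of the eight gene segments of a progeny virus can come from either parent, so a susceptible strain can pick up a resistant neuraminidase gene without any new mutation. The strain labels are illustrative only.

# Toy sketch of reassortment: a progeny genome draws each of the eight
# influenza gene segments at random from one of two co-infecting parents.
# Strain labels are illustrative, not real isolates.
import random

SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "MP", "NS"]

pandemic_h1n1 = {seg: "pandemic (Tamiflu-susceptible)" for seg in SEGMENTS}
seasonal_h1n1 = {seg: "seasonal (Tamiflu-resistant)" for seg in SEGMENTS}

def reassort(parent_a, parent_b, rng=random):
    """Build a progeny genome by picking each segment from one parent at random."""
    return {seg: rng.choice([parent_a[seg], parent_b[seg]]) for seg in SEGMENTS}

progeny = reassort(pandemic_h1n1, seasonal_h1n1)
for seg, origin in progeny.items():
    print("%s: %s" % (seg, origin))

if progeny["NA"].startswith("seasonal"):
    print("This progeny virus inherited the resistant NA segment by reassortment.")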

The researchers have plotted areas where pandemic influenza and drug-resistant seasonal influenza circulate together into Google Earth using software called Pointmap. Regions in the United States and Japan are among those in which pandemic flu isolates carry the Tamiflu-resistant mutation. The regions of co-circulation can be seen at http://pointmap.osu.edu.

The computing power used in this study was supplied by the Ohio Supercomputer Center and the Ohio State University Medical Center. This work is funded by the U.S. Army Research Laboratory and Office.

Janies co-authored the study with Igor Voronkin, Jonathon Studer, Jori Hardman, Boyan Alexandrov, Travis Treseder and Chandni Valson, all of Ohio State’s Department of Biomedical Informatics.

By Emily Caldwell, Ohio State University

On the Net:

Many Parents Still Believe Vaccines Can Cause Autism

Research has found that one in four U.S. parents believe that vaccines can cause autism in healthy children.

The study, which was based on a survey of 1,552 parents, found that most of them still continue to follow the advice of their children’s doctors.  Extensive research has found no tie between autism and vaccines.

“Nine out of 10 parents believe that vaccination is a good way to prevent diseases for their children,” lead author Dr. Gary Freed of the University of Michigan told the Associated Press (AP). “Luckily their concerns don’t outweigh their decision to get vaccines so their children can be protected from life-threatening illnesses.”

Dr. Melinda Wharton of the U.S. Centers for Disease Control and Prevention told AP that in 2008, unvaccinated school-age children contributed to measles outbreaks in California, Illinois, Washington, Arizona and New York.  Thirteen percent of the kids who were sick that year were hospitalized.

“It’s fortunate that everybody recovered,” Wharton added, noting that measles can be deadly. “If we don’t vaccinate, these diseases will come back.”

A now-retracted 1998 study led to the fear of a vaccine-autism connection. The retraction came after a council that regulates Britain’s doctors decided that the study’s author had acted dishonestly and unethically.

The University of Michigan based the new study on a survey of parents conducted before the 1998 study was retracted. Much has been written about research that has failed to link vaccines and autism, and mainstream advocacy groups like Autism Speaks strongly encourage parents to have their kids vaccinated.

“Now that it’s been shown to be an outright fraud, maybe it will convince more parents that this should not be a concern,” said Freed, whose study appears in the April issue of Pediatrics, released Monday.

Some doctors have taken a stand against the vaccine-autism claims, telling parents who refuse to have their children vaccinated to find another doctor.

A statement from a group of doctors in Philadelphia outlines its doctors’ adamant support for government recommended vaccines and their belief that “vaccines do not cause autism or other developmental disabilities.”

“Furthermore, by not vaccinating your child you are taking selfish advantage of thousands of others who do vaccinate their children … We feel such an attitude to be self-centered and unacceptable,” the statement says, urging those who “absolutely refuse” vaccines to find another physician.

“We call it the manifesto,” Dr. Bradley Dyer of All Star Pediatrics in Lionville, Pa., told AP.

Dyer said that dozens of doctors have asked to distribute the statement, and only a handful of parents have taken their children to other places.

“Parents have said, ‘Thank you for saying that. We feel much better about it,'” Dyer said.

The new study is based on results from parents with children 17 and younger who filled out the questions online.  It used a sample from a randomly selected pool of nationally representative participants.  Parents were given Internet access if they did not already have it, to ensure families of all incomes were included.  The survey did not mention vaccines in the invitation to participate, and the vaccine questions appeared among others on unrelated topics.

Twenty-five percent of parents said they thought “some vaccines cause autism in healthy children.”  Of the mothers that participated, 29 percent agreed with that statement, while 17 percent of fathers agreed with it.

About 12 percent of parents said they would refuse a vaccine for their child that a doctor recommended.  Fifty-six percent of those said they had refused the relatively new vaccine against human papillomavirus, or HPV, which causes cervical cancer.  Thirty-two percent refused vaccines against meningococcal disease, 32 percent refused chickenpox vaccines and 18 percent refused measles-mumps-rubella.

Parents who refused the HPV vaccine cited various reasons.

Parents who refused the MMR vaccine said they had read or heard about problems with it or felt its risks were too great.

Dr. Gary S. Marshall of the University of Louisville School of Medicine, the author of a vaccine handbook for doctors, said the findings would help doctors craft better ways to talk with parents.

“For our children’s sake, we have to think like scientists,” Marshall, who was not involved in the new study, told AP. “We need to do a better job presenting the data so parents understand how scientists have reached this conclusion that vaccines don’t cause autism.”

On the Net:

Measuring The Age And Size Of The Universe

Using entire galaxies as lenses to look at other galaxies, researchers have a newly precise way to measure the size and age of the universe and how rapidly it is expanding, on par with other techniques. The measurement determines a value for the Hubble constant, which indicates the size of the universe, and confirms the age of the universe as 13.75 billion years old, within 170 million years. The results also confirm the strength of dark energy, responsible for accelerating the expansion of the universe.

These results, by researchers at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at the US Department of Energy’s SLAC National Accelerator Laboratory and Stanford University, the University of Bonn, and other institutions in the United States and Germany, will be published in The Astrophysical Journal in March. The researchers used data collected by the NASA/ESA Hubble Space Telescope, and showed the improved precision these data provide in combination with measurements from the Wilkinson Microwave Anisotropy Probe (WMAP).

The team used a technique called gravitational lensing to measure the distances light traveled from a bright, active galaxy to the earth along different paths. By understanding the time it took to travel along each path and the effective speeds involved, researchers could infer not just how far away the galaxy lies but also the overall scale of the universe and some details of its expansion.

Oftentimes it is difficult for scientists to distinguish between a very bright light far away and a dimmer source lying much closer. A gravitational lens circumvents this problem by providing multiple clues as to the distance light travels. That extra information allows them to determine the size of the universe, often expressed by astrophysicists in terms of a quantity called Hubble’s constant.
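
As a rough illustration of how Hubble's constant sets a cosmic timescale, the short calculation below converts an assumed round value of the constant into the corresponding Hubble time, the inverse of the constant (1/H0). The value used here is not the one reported by the study, and the 13.75-billion-year age quoted above also depends on the measured matter and dark-energy content, which this sketch ignores.

# Rough illustration of the timescale set by Hubble's constant: the Hubble
# time 1/H0. The H0 value below is an assumed round number, not the study's
# result, and the calculation ignores the matter and dark-energy corrections
# that go into the actual age estimate.
KM_PER_MPC = 3.0857e19          # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16      # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Hubble time 1/H0 expressed in billions of years."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC
    return 1.0 / h0_per_second / SECONDS_PER_GYR

print("H0 = 70 km/s/Mpc  ->  1/H0 = %.1f billion years" % hubble_time_gyr(70.0))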

“We’ve known for a long time that lensing is capable of making a physical measurement of Hubble’s constant,” KIPAC Kavli Fellow Phil Marshall said. However, gravitational lensing had never before been used in such a precise way. This measurement provides an equally precise measurement of Hubble’s constant as long-established tools such as observation of supernovae and the cosmic microwave background. “Gravitational lensing has come of age as a competitive tool in the astrophysicist’s toolkit,” Marshall said.

Though researchers do not know when light left its source, they can still compare arrival times. Marshall likens it to four cars taking four different routes between places on opposite sides of a large city, such as Stanford University to Lick Observatory, through or around San Jose. And like automobiles facing traffic snarls, light can encounter delays, too.

“The traffic density in a big city is like the mass density in a lens galaxy,” Marshall said. “If you take a longer route, it need not lead to a longer delay time. Sometimes the shorter distance is actually slower.”

The gravitational lens equations account for all the variables such as distance and density, and provide a better idea of when light left the background galaxy and how far it traveled.

In the past, this method of distance estimation was plagued by errors, but physicists now believe it is comparable with other measurement methods. With this technique, the researchers have come up with a more accurate lensing-based value for Hubble’s constant, and a better estimation of the uncertainty in that constant. By both reducing and understanding the size of error in calculations, they can achieve better estimations on the structure of the lens and the size of the universe.

There are several factors scientists still need to account for in determining distances with lenses. For example, dust in the lens can skew the results. The Hubble Space Telescope has infra-red filters useful for eliminating dust effects. The images also contain information about the number of galaxies lying around the line of vision; these contribute to the lensing effect at a level that needs to be taken into account.

Marshall says several groups are working on extending this research, both by finding new systems and further examining known lenses. Researchers are already aware of more than twenty other astronomical systems suitable for analysis with gravitational lensing.

This research was supported in part by the Department of Energy Office of Science.

The Kavli Institute for Particle Astrophysics and Cosmology, initiated by a grant from Fred Kavli and the Kavli Foundation, is a joint institute of Stanford University and SLAC National Accelerator Laboratory.

SLAC is a multi-program laboratory exploring frontier questions in astrophysics, photon science, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford for the U.S. Department of Energy Office of Science.

Image 1: When a large nearby object, such as a galaxy, blocks a distant object, such as another galaxy, the light can detour around the blockage. But instead of taking a single path, light can bend around the object along two or four different routes, thus doubling or quadrupling the amount of information scientists receive. As the brightness of the background galaxy nucleus fluctuates, physicists can measure the ebb and flow of light from the four distinct paths, such as in the B1608+656 system imaged above. (Image courtesy Sherry Suyu.)

Image 2: KIPAC Kavli Fellow Phil Marshall

On the Net:

Poorer Breast Cancer Survival Associated With Micrometastases In Axillary Lymph Nodes

Metastases that were 2 millimeters or less in diameter (“micrometastases”) in axillary lymph nodes detected on examination of a single section of the lymph nodes were associated with poorer disease-free and overall survival in breast cancer patients, according to a new study published online February 26 in the Journal of the National Cancer Institute.

The prognostic relevance of isolated tumor cells and micrometastases in lymph nodes in breast cancer patients has become a major issue since the introduction of the sentinel lymph node procedure. Recently, patients with minimal lymph node involvement detected after a sentinel node procedure in the Dutch MIRROR study were found to have a reduced disease-free survival.

To better understand this issue, Maaike de Boer, M.D., and Vivianne C.G. Tjan-Heijnen, M.D., Ph.D., Division of Medical Oncology, at Maastricht University Medical Centre, the Netherlands, and colleagues performed a systematic review of literature published between 1977 and 2008 on the association of isolated tumor cells and micrometastases in axillary lymph nodes and survival. A total of 58 articles were included and divided into three categories according to the methods used to detect the small metastases: cohort studies with single-section pathological examination of axillary lymph nodes; occult metastases studies with retrospective examination of negative lymph nodes by step sectioning and/or immunohistochemistry; and sentinel lymph node biopsy studies with intensified work-up of the sentinel but not of the non-sentinel lymph nodes.

The presence (vs. the absence) of metastases that were 2 mm or less in diameter was associated with poorer overall survival among cohort studies and with poorer overall survival and poorer disease-free survival among occult metastases studies.

On the Net:

The Pig And Its Pancreas

A unique model for a common disease

The increasing prevalence of type 2 diabetes places a huge burden on its victims and poses a tremendous challenge to healthcare systems. Half of all heart attacks and stroke cases, but also many other deleterious conditions, can be ascribed to the effects of this metabolic syndrome. In Germany alone, some seven million people currently suffer from the disease, and the number of cases worldwide is projected to reach 370 million by the year 2030. Type 2 diabetes results from a combination of genetic and environmental factors which cause the organism to become resistant to the action of insulin. This hormone controls the level of glucose in the blood, so insulin resistance leads to a chronic rise in glucose concentrations. A team of LMU researchers led by Professor Eckhard Wolf and Professor Rüdiger Wanke has now introduced a new model system for the study of the disease. They have created a genetically modified strain of pigs that consistently develop the essential symptoms of type 2 diabetes. “The physiology of the pig is actually very similar to that of humans”, says Wolf. “Our model therefore provides a unique tool for the development and testing of new approaches to the diagnosis and therapy of diabetes.”

After a meal, the concentration of glucose in the blood rises, causing the beta-cells of the pancreas to secrete a correlated amount of insulin. The hormone in turn stimulates uptake of glucose by several tissues, including the skeletal muscles. In cases of type 2 diabetes, this regulatory circuit is disturbed. Cells exposed to insulin fail to respond, and the consequent failure to remove the glucose causes its level to remain high. This state of chronic hyperglycemia has deleterious effects on many organ systems, leading to cardiovascular disease, kidney failure and blindness, for example. Up until a few decades ago, the disease, which remains incurable, was largely confined to the elderly, but it has since become more and more prevalent among young adults, adolescents and even children. The younger the age of onset, the greater the chance that increasingly severe conditions will develop as time passes.

The so-called incretin hormones, GIP (short for glucose-dependent insulin-releasing polypeptide) and GLP-1 (glucagon-like peptide 1), are produced in the intestine after ingestion of a meal, and are transported via the circulation to the pancreas. There they stimulate the synthesis and secretion of insulin by binding to specific receptor molecules on the beta cells. GLP-1 has already proven effective in the treatment of diabetes. GIP, on the other hand, has shown very limited efficacy in patients with diabetes, and whether this lack of responsiveness is a cause or a consequence of the diabetic condition itself remains controversial. “In our genetically modified (transgenic) pigs, which produce a partially defective GIP receptor, the response to GIP is also very weak”, reports Dr. Simone Renner, who is first author on the new publication and research associate at the chair for Molecular Animal Breeding and Biotechnology. “Our results suggest that inability to respond to GIP leads not only to a fall in glucose utilization and insulin secretion, but is also associated with a reduction in the mass of beta cells in the pancreas. This would argue that impaired response to GIP is more likely to be a cause rather than a consequence of diabetes. We hope that our model will help to accelerate the translation of the latest research findings into clinical applications.”

The pig is a particularly suitable model, because its metabolism and physiology closely resemble our own. The transgenic pigs not only display a weak response to GIP, they also display other traits that are typical of type 2 diabetes in humans. For instance, the efficiency of both glucose utilization and insulin secretion falls off with increasing age, as in humans. The number of insulin-producing beta cells in the pancreas is also lower than normal, due to the fact that the cells divide less frequently. Thus, the new model system provides a variety of opportunities for innovative research on diabetes. Among other things, the system should be ideal for testing and improving therapeutic regimes based on incretins, which already represent an important treatment option. One might also be able to utilize the system in the development of imaging techniques for direct measurement of beta cell mass in patients. Indeed, the Munich team have now established a total of four different genetic models that are relevant to diabetes, and therefore provided researchers with a unique and invaluable research resource.

Publication: “Glucose intolerance and reduced proliferation of pancreatic β-cells in transgenic pigs with impaired GIP function”, Simone Renner, Christiane Fehlings, Nadja Herbach, Andreas Hofmann, Dagmar C. von Waldthausen, Barbara Keßler, Karin Ulrichs, Irina Chodnevskaja, Vasiliy Moskalenko, Werner Amselgruber, Burkhard Göke, Alexander Pfeifer, Rüdiger Wanke, Eckhard Wolf, Diabetes, Online Ahead of Print, 26 February 2010, doi:10.2337/db09-0519

On the Net:

Mars Express Ready For Closest Approach To Phobos

On March 3, 2010 Mars Express will make its closest ever approach to Phobos, the larger of the two Martian moons. During a series of flybys, spanning six weeks, all seven instruments onboard Mars Express will be utilized to study Phobos. The close approach provides a first opportunity to perform a unique gravity experiment that may reveal the distribution of mass within this intriguing moon.

ESA’s Mars Express spacecraft orbits the Red Planet in a highly elliptical, polar orbit that brings it close to Phobos every five months. It is the only spacecraft currently in orbit around Mars whose orbit reaches far enough from the planet to provide a close-up view of Phobos. Over the course of twelve flybys, taking place between February 16 and March 26, 2010, Mars Express will pass within 1400 km of the surface of Phobos. The Mars orbiter will make its closest ever approach to Phobos, just 50 km above the surface, on March 3, 2010.

The suite of seven experiments onboard Mars Express is primarily used to study the atmosphere, surface and subsurface of the Red Planet, but these science instruments can also be used to investigate Phobos. During this current series of flybys, all Mars Express instruments will be used to study Phobos, taking advantage not only of the close approach to the moon but also, for the gravity experiment during the closest flyby, of the proximity of Mars to the Earth.

Phobos – a moon of unknown origin

Phobos, the larger of the two Martian moons, remains one of the few objects in the Solar System whose origin cannot be easily explained. By studying Phobos with the Mars Express instruments, scientists hope to contribute to the understanding of the moon’s nature and origin. Phobos (and Deimos) could be captured asteroids (early measurements of the composition of both moons were compatible with this idea), or they could have formed from material ejected following a large collision with Mars. Other theories hold that the moons could be surviving planetesimals, or that they formed from the break-up of a moon created early in the formation of the Solar System. Knowing how the mass is distributed within Phobos is an important step in understanding the interior of the moon, and this in turn will provide crucial insight into the moon’s origin.

Studying Phobos close-up – unique science

At a distance of just 50 km above the surface of Phobos, Mars Express will make the most precise measurements to date of the moon’s gravity field, using the X-band (8.4 GHz) channel of the Mars Radio Science (MaRS) instrument. This instrument relies on observing the phase, amplitude, polarization and propagation times of radio signals transmitted from the spacecraft and received at ground-station antennas on Earth. The radio signals are affected by the medium through which they propagate, by the gravitational influence of Mars on the spacecraft and, finally, by the performance of the various systems involved, both on the spacecraft and on the ground. In addition, during this series of close flybys, the gravitational attraction of Phobos will slightly disturb the trajectory of the spacecraft. The difference between the predicted trajectory (without Phobos) and the actually observed trajectory will reveal the forces acting on the spacecraft, and from them the gravity field of the moon. To make these measurements, the spacecraft operates in two-way link mode, with an X-band uplink and downlink.
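To make the scale of such a measurement concrete, here is a minimal numerical sketch using only the X-band frequency quoted above; the velocity perturbations are arbitrary illustrative values, not measured quantities.

```python
# Sketch of the Doppler-tracking principle behind the MaRS gravity experiment:
# a small change in the spacecraft's line-of-sight velocity, caused by Phobos'
# gravity, shifts the frequency of the two-way X-band link. The velocity
# values below are illustrative only.

C = 299_792_458.0          # speed of light, m/s
F_X_BAND = 8.4e9           # X-band frequency quoted above, Hz

def two_way_doppler_shift(radial_velocity_m_s: float) -> float:
    """Frequency shift of a two-way (uplink + downlink) radio link for a
    given line-of-sight velocity change, to first order in v/c."""
    return 2.0 * F_X_BAND * radial_velocity_m_s / C

# A few millimetres per second of velocity change along the line of sight
# produces a frequency shift of only a fraction of a hertz.
for v in (1e-3, 1e-2, 0.1):   # m/s
    print(f"dv = {v*1000:6.1f} mm/s  ->  df = {two_way_doppler_shift(v):.3f} Hz")
```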

This series of flybys happens to occur when the orbits of Earth and Mars bring the two planets close together, which means that Mars Express is ideally positioned to maximize the signal-to-noise ratio of the two-way X-band radio link. NASA’s Deep Space Network (DSN) 70-meter radio station at Robledo, Spain, will track the radio signal from Mars Express and pick up the subtle changes caused by the Doppler effect as the gravity of Phobos affects the spacecraft’s velocity. In addition, ESA’s Cebreros station will also be listening to the signal.

Mapping the mass distribution of Phobos

Analysis of Mars Express data will provide key coefficients of the gravity field. The most important coefficient, the mass of Phobos, has been determined from previous flybys at higher altitudes, but it does not provide any information about how the mass is distributed. Calculation of the density of Phobos, using the mass and volume, gives a value too low to be consistent with a solid, non-porous body, which has led to speculation about the composition of the moon and about how its mass is distributed.
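As a rough illustration of that density argument, the back-of-the-envelope calculation below uses approximate literature values for Phobos’ mass and mean radius and an assumed solid-rock grain density; none of these figures come from the article itself.

```python
# Back-of-the-envelope check of the bulk-density argument made above.
# All numbers are approximate literature values / assumptions, not figures
# from this article.

import math

MASS_PHOBOS_KG = 1.07e16        # approximate mass of Phobos
MEAN_RADIUS_KM = 11.1           # approximate mean radius
GRAIN_DENSITY = 2800.0          # kg/m^3, assumed solid chondritic rock

volume_m3 = (4.0 / 3.0) * math.pi * (MEAN_RADIUS_KM * 1e3) ** 3
bulk_density = MASS_PHOBOS_KG / volume_m3            # roughly 1.9 g/cm^3
porosity = 1.0 - bulk_density / GRAIN_DENSITY        # roughly 30% void space

print(f"bulk density ~ {bulk_density:.0f} kg/m^3")
print(f"implied porosity ~ {porosity:.0%} (if the grain-density assumption holds)")
```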

Measurement of the gravity field coefficients from a lower altitude, as will be achieved during this series of flybys, will improve the accuracy of the mass and allow the subsequent, smaller coefficients, such as the J2 coefficient, to be determined for the first time. To determine the mass distribution of Phobos, these coefficients are required along with the libration, a measure of how Phobos rotates, which has already been determined from Mars Express HRSC images. Knowledge of these various parameters allows the three principal moments of inertia to be derived; these in turn describe the mass distribution of Phobos. Models of Phobos’ interior are being developed and will be tested against the findings of the current and future close flybys.
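As a sketch of how these pieces fit together, the snippet below uses the standard definitions of the degree-2 gravity coefficients and treats the libration as directly supplying the ratio (B - A)/C; the numerical inputs are placeholders, not measured values for Phobos.

```python
# Sketch of how the degree-2 gravity coefficients and the libration, taken
# together, pin down the principal moments of inertia A <= B <= C, using the
# standard (unnormalised) definitions
#     J2  = [C - (A + B)/2] / (M R^2)
#     C22 = (B - A) / (4 M R^2)
# and a libration-derived ratio (B - A)/C. All inputs below are placeholders.

def moments_of_inertia(M, R, J2, C22, b_minus_a_over_c):
    """Solve for (A, B, C) given mass M, reference radius R, J2, C22 and the
    libration-derived ratio (B - A)/C."""
    MR2 = M * R * R
    b_minus_a = 4.0 * MR2 * C22                # from C22
    C = b_minus_a / b_minus_a_over_c           # from the libration
    a_plus_b = 2.0 * (C - J2 * MR2)            # from J2
    A = 0.5 * (a_plus_b - b_minus_a)
    B = 0.5 * (a_plus_b + b_minus_a)
    return A, B, C

# Placeholder inputs purely to exercise the algebra:
A, B, C = moments_of_inertia(M=1.07e16, R=11.1e3,
                             J2=0.10, C22=0.02, b_minus_a_over_c=0.1)
print(f"A = {A:.3e}, B = {B:.3e}, C = {C:.3e} kg m^2")
```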

Studying Phobos close-up – continuing investigation

In addition to the new science performed during the gravity experiment, this series of flybys will see Mars Express build on knowledge gained from previous flybys. HRSC data obtained during previous flybys has led to the development of a new topographical atlas of Phobos (see M. Wählisch et al. (2009) for further details and the Phobos atlas website). On-going investigations include: improving the accuracy of the location of Phobos (see J. Oberst et al. (2006), V. Lainey et al. (2007), K. Willner et al. (2008), P. Rosenblatt et al. (2008)) and therefore knowledge of its constantly changing orbit as it spirals slowly towards Mars; measurements of the surface to determine its composition (see B. Gondet et al. (2008), S. Perrier et al. (2004)), study of the origin of grooves (see J. Murray et al. (2006)), shape (see K. Willner et al. (2010)) and sub-surface properties; as well as studying how the surface interacts with the solar wind.

Mars Express data will provide an important contribution to understanding the nature and origin of Phobos but this alone will not provide a definitive answer. Further exploration is required, and in 2011 the Russian Phobos-Grunt (Phobos-Soil) mission is scheduled to launch to retrieve a sample from Phobos to return for study on Earth. Images taken by the Mars Express HRSC instrument during this series of flybys will be used to support the final selection of the Phobos-Grunt landing site.

Reference publications

M. Wählisch et al., “A new topographic image atlas of Phobos”, Earth Planet. Sci. Lett. (2009), doi:10.1016/j.epsl.2009.11.003

K. Willner et al., “Phobos control point network, rotation, and shape”, Earth Planet. Sci. Lett. (2009),  doi:10.1016/j.epsl.2009.07.033

J. Oberst et al., “Astrometric observations of Phobos and Deimos with the SRC on Mars Express”, Astronomy and Astrophysics, Volume 447, Number 3, pp 1145 - 1151, 2006, doi:10.1051/0004-6361:20053929

V. Lainey et al., “First numerical ephemerides of the Martian moons”, Astronomy and Astrophysics, Volume 465, Number 3, pp 1075 - 1084, 2007, doi:10.1051/0004-6361:20065466

K. Willner et al., “New astrometric observations of Phobos with the SRC on Mars Express”, Astronomy and Astrophysics, Volume 488, Number 1, pp 361 - 364, 2008, doi:10.1051/0004-6361:200809787

P. Rosenblatt et al., “Accurate Mars Express orbits to improve the determination of the mass and ephemeris of the Martian moons”, Planetary and Space Science, Volume 56, Issue 7, pp 1043 - 1053, 2008, doi:10.1016/j.pss.2008.02.004

J. Murray et al., “New Evidence on the Origin of Phobos’ Parallel Grooves from HRSC Mars Express”, 37th Annual Lunar and Planetary Science Conference, March 13-17, 2006, League City, Texas, abstract no. 2195

B. Gondet et al., “Phobos Observations by the OMEGA/Mars Express Hyperspectral Imager”, 39th Lunar and Planetary Science Conference, March 10-14, 2008, League City, Texas, abstract no. 1832

S. Perrier et al., “Spatially Resolved UV albedo spectra of PHOBOS with SPICAM on Mars Express”, American Astronomical Society, DPS meeting 36, Session 31.09, Bulletin of the American Astronomical Society, Volume 36, p 1137, 2004

Image 2: Orbits of Phobos and Mars Express. Credit: ESA

Image 3: Digital terrain model of Phobos derived from HRSC data. Credit: M. Wählisch et al. (2009)

On the Net:

The Psychology of Anthropomorphism and Dehumanization

People talk to their plants, pray to humanlike gods, name their cars, and even dress their pets up in clothing. We have a strong tendency to give nonhuman entities human characteristics (known as anthropomorphism), but why? In a new report in Current Directions in Psychological Science, a journal of the Association for Psychological Science, psychological scientists Adam Waytz from Harvard University and Nicholas Epley and John T. Cacioppo from the University of Chicago examine the psychology of anthropomorphism.

The term anthropomorphism was coined by the Greek philosopher Xenophanes when describing the similarity between religious believers and their gods: Greek gods were depicted with light skin and blue eyes, while African gods had dark skin and brown eyes. Neuroscience research has shown that similar brain regions are involved when we think about the behavior of both humans and nonhuman entities, suggesting that anthropomorphism may rely on the same processes we use to think about other people.

Anthropomorphism carries many important implications. For example, thinking of a nonhuman entity in human ways renders it worthy of moral care and consideration. In addition, anthropomorphized entities become responsible for their own actions; that is, they become deserving of punishment and reward.

Although we like to anthropomorphize, we do not assign human qualities to every object we encounter. What accounts for this selectivity? One factor is similarity. An entity is more likely to be anthropomorphized the more similar it appears to humans (for example, through humanlike movements or physical features such as a face). Various motivations may also influence anthropomorphism. For example, lacking social connections with other people might motivate lonely individuals to seek out connections from nonhuman items. Anthropomorphism also helps us to simplify and make sense of complicated entities. The authors observe that, according to the World Meteorological Organization, “the naming of hurricanes and storms (a practice that originated with the names of saints, sailors’ girlfriends, and disliked political figures) simplifies and facilitates effective communication to enhance public preparedness, media reporting, and the efficient exchange of information.”

Anthropomorphism in reverse is known as dehumanization, in which humans are represented as nonhuman objects or animals. There are numerous historical examples of dehumanization, including the Nazis’ persecution of Jews during the Holocaust and the torture at the Abu Ghraib prison in Iraq. These examples also suggest that those engaging in dehumanization are usually part of a cohesive group acting against outsiders; that is, individuals who feel socially connected may have an increased tendency towards dehumanization. The authors note, “Social connection may have benefits for a person’s own health and well-being but may have unfortunate consequences for intergroup relations by enabling dehumanization.”

The authors conclude that few of us “have difficulty identifying other humans in a biological sense, but it is much more complicated to identify them in a psychological sense.”

On the Net:

Mother’s Sensitivity May Help Language Growth In Autistic Children

Researchers at the University of Miami show that maternal responsiveness can predict language growth among children in the early stages of autism

A new study by researchers from the University of Miami shows that maternal sensitivity may influence language development among children who go on to develop autism. Although parenting style is not considered a cause of autism, the report examines how early parenting can promote resiliency in this population. The study, entitled “A Pilot Study of Maternal Sensitivity in the Context of Emergent Autism,” is published online this month and will appear in an upcoming issue of the Journal of Autism and Developmental Disorders.

“Language problems are among the most important areas to address for children with autism, because they represent a significant impairment in daily living and communication,” says Daniel Messinger, associate professor in the department of psychology at the University of Miami (UM) College of Arts and Sciences and principal investigator of a larger study of infants at-risk for autism, which includes this study.

Maternal sensitivity is defined in the study as a combination of warmth, responsiveness to the child’s needs, respect for his or her emerging independence, positive regard for the child, and maternal structuring, which refers to the way in which a mother engages and teaches her child in a sensitive manner. For example, if a child is playing with colored rings, the mother might say, “This is the green ring,” thus teaching the child about his environment, says Messinger.

In this study, maternal sensitivity (and primarily, sensitive structuring) was more predictive of language growth among toddlers developing autism than among children who did not go on to an autism diagnosis. One possible explanation is that children with autism may be more dependent on their environment to learn certain skills that seem to come more naturally to other children.

“Parenting may matter even more for children with developmental problems such as autism because certain things that tend to develop easily in children with typical neurological development, like social communication, don’t come as naturally for kids with autism, so these skills need to be taught,” says Jason K. Baker, a postdoctoral fellow at the Waisman Center, University of Wisconsin-Madison, who conducted the study with Messinger while at UM.

For the study, 33 children were assessed in the lab at 18, 24, 30 and 36 months of age. Some of the children had an older sibling diagnosed with autism and were considered high risk for autism.

At the 18-month assessment, the researchers videotaped a five-minute period of mother-child free play in which the mothers were asked to play as they would at home. Aspects of maternal sensitivity were scored on seven-point scales ranging from absence of sensitive behavior to extremely sensitive behavior. Children’s language was assessed at 2 and 3 years. At the three-year visit, when the children were old enough to be evaluated, 12 of the children from the high-risk group received an autism-spectrum diagnosis.
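For readers wondering what “more predictive” means in practice, the sketch below shows one common way such a moderation question can be framed: a regression with a sensitivity-by-group interaction, fitted here to synthetic data. It is not the authors’ actual analysis, and all variable names and numbers are invented.

```python
# Illustrative sketch (not the authors' analysis) of the moderation question
# described above: does maternal sensitivity predict language growth more
# strongly in the group that goes on to an autism diagnosis? Synthetic data,
# made-up variable names.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 33
df = pd.DataFrame({
    "sensitivity": rng.uniform(1, 7, n),     # 7-point sensitivity score
    "asd_group": rng.integers(0, 2, n),      # 1 = later autism diagnosis
})
# Synthetic outcome: language gain from 2 to 3 years, with a steeper
# sensitivity slope in the autism group (the pattern the study reports).
df["language_growth"] = (
    5 + 1.0 * df.sensitivity + 3.0 * df.asd_group * (df.sensitivity - 4)
    + rng.normal(0, 2, n)
)

# The sensitivity-by-group interaction term carries the moderation effect.
model = smf.ols("language_growth ~ sensitivity * asd_group", data=df).fit()
print(model.summary().tables[1])
```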

The study was funded by the National Institutes of Health. Its findings parallel previous treatment research indicating that when children with autism increase their connection to the environment they do much better, Baker says. Understanding the benefits of sensitive structuring in the development of language among young children with emergent autism provides scientific support for early intervention programs that focus on parent-child interactions. “We know that parenting doesn’t cause autism. The message here is that parents can make a difference in helping their children fight against autism,” Baker says.

On the Net:

ESA Highlights Potential Of Satellite Data For The European Investment Bank

The European Investment Bank has an annual lending portfolio of around 75 billion euros (approximately $101 billion USD), operating globally in more than 130 countries. The Bank has been increasingly mainstreaming environmental considerations into its lending portfolio, boosting the need to monitor the impact of the projects it funds. As its environmental commitments have increased, so too has the demand for geospatial information.

Earth observation (EO) from space can provide consistent, accurate and timely information on the state of the global environment that could help the Bank to assess the feasibility, monitor the progress and quantify the environmental impact of its investment projects.

ESA began collaborating with the European Investment Bank (EIB) by selecting three EIB projects as pilots for initial small-scale demonstrations of how EO can help to monitor their implementation. Specific EO-information services relevant to each project were defined with the help of the Luxembourg-based company LuxSpace, acting as a local technical agent on behalf of ESA. Under the technical guidance of ESA, these specifications were used to select the best offers from value-adding companies across Europe to carry out the work.

First demonstrations

In 2007, the EIB signed a loan for a nickel-cobalt mining and processing project in the Ambatovy region in Eastern Madagascar. The project includes the development of an inland mine site, a refinery close to the harbor on the coast and a connecting 220-km-long pipeline.

Due to the impact on the environment caused by the construction, the borrowers proposed a series of measures to manage the conservation of biodiversity. The measures included the establishment of forest buffer zones at the mine site, the management of a substantial off-set area of primary forest and the monitoring of biodiversity fragmentation along the pipeline.

To assess the impact of the project and to monitor the effectiveness of the adopted measures, the Bank required information on the mining area, an area covered by primary forest, and the 1-km buffer zone around the pipeline.

ESA selected the Belgium-based Keyobs service provider to supply the EIB with a set of up-to-date high-resolution orthophoto maps and land-cover maps of the primary forests and the disturbed areas, based on the most recent EO data. Five SPOT-5 satellite images acquired during 2008 were georeferenced and orthorectified using Landsat ETM+ images and the 90-m digital elevation model.

“The service has been a success. It provides valuable information about the land cover of the area, demonstrates to the users exactly what can be monitored and allows them to make an informed decision about whether and how to proceed with EO-based monitoring of this important project,” said the EIB’s Mr Eberhard Gschwindt, Technical Advisor in the EIB Projects Directorate.

The second EIB demonstration focused on reforestation monitoring on Kolombangara, a small island located in the Western Province of the Solomon Islands. Replanting the forests began in 2007 and will continue until 2011.

The EIB needed information on the progress of replanting of the forests. To achieve this, historic and recent satellite data were used to set the baseline that enabled land cover change detection maps to be created. ESA selected the Danish-based Geographic Resource Analysis and Science (GRAS) service provider who supplied the EIB with mosaics of satellite images covering the entire island, providing information on the progress of the reforestation as well as the impact on the remaining regions of the island.
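The change-detection step described here can be pictured with a small, generic sketch: compare a baseline land-cover classification with a recent one and tally the transitions. This is only an illustration of the idea, not the provider’s actual processing chain, and the class codes and arrays are invented.

```python
# Generic land-cover change-detection sketch: compare a baseline
# classification with a recent one, pixel by pixel, and tally transitions
# (e.g. cleared land -> replanted forest). Illustrative data only.

import numpy as np

# Hypothetical class codes: 0 = water, 1 = bare/cleared, 2 = forest
baseline = np.array([[1, 1, 2],
                     [1, 2, 2],
                     [0, 1, 1]])
recent   = np.array([[2, 1, 2],
                     [2, 2, 2],
                     [0, 2, 1]])

# Change map: True wherever the class differs between the two dates
changed = baseline != recent

# Transition matrix: rows = baseline class, columns = recent class
n_classes = 3
transitions = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(transitions, (baseline.ravel(), recent.ravel()), 1)

replanted = transitions[1, 2]   # pixels that went from cleared to forest
print(f"{changed.sum()} of {changed.size} pixels changed; "
      f"{replanted} pixels replanted (cleared -> forest)")
```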

“The service provides useful information about the land cover of the area, both in the EIB project area and on the rest of the island, and it verifies the progress of the replanting,” said the EIB’s Mr Harald Jahn, Head of Division Services & SMEs, Agro-Industry (including Biofuels) in the Projects Directorate.

The third demonstration concerned the construction of the Egnatia Highway in Greece. The EIB provided investment support for the road transport project in 2002, and construction is now in its final stage around the 37-km-long section of the West Egnatia Motorway from Panagia to Grevena.

To monitor the progress of the construction work and to assess its impact on the landscape during and after the project, the EIB requested up-to-date information on the construction works and on the fragmentation of sensitive areas, with special attention placed on the brown bear habitat in the Pindos forest region.

ESA selected Geoville, an international consulting company based in Austria and Luxembourg, to produce and deliver a time series of land cover/habitat maps based on historic and recent satellite imagery from SPOT, showing the state of the landscape before and during the construction works.

“The service provided the required information on the progress of the construction work. We have no doubts about the technical soundness, but we had difficulty understanding what the results mean for the project,” said the EIB’s Mr Claus Eberhard, Economist in the Projects Directorate. “We would now like to take this forward by discussing the potential, and the limits, of these tools, based on what we’ve learned during this pilot exercise.”

Next steps

Following these initial demonstrations, ESA held a workshop at EIB to discuss the value of EO to support investment and development projects. The EIB has expressed further interest in EO capabilities and is currently looking at additional projects to start a more substantial evaluation together with ESA of the benefits of EO information across a wider range of the Bank’s activities.

“As a pilot study, this has been a very useful learning exercise for all involved. From our user perspective it has allowed us to understand better the potential, as well as the possible shortcomings, of EO information. We intend now to build on this initial experience, considering the possibility of introducing a systematic use of EO for project monitoring and longer-term project evaluation,” said the EIB’s Mr Peter Carter, Associate Director Environment and Social Office of the Projects Directorate.

Image 1: The Danish-based Geographic Resource Analysis and Science (GRAS) service provider, selected by ESA, supplied the European Investment Bank with this mosaic of satellite images, based on SPOT and Quickbird, to help it monitor the progress of a re-plantation project on Kolombangara, a small island located in the Western Province of the Solomon Islands. Replanting the forests began in 2007 and will continue until 2011. Credits: Geographic Resource Analysis and Science (GRAS)

Image 2: The Belgium-based Keyobs service provider, selected by ESA, supplied the European Investment Bank with satellite images to help it assess the impact of a nickel-cobalt mining and processing project in the Ambatovy region in Eastern Madagascar. The satellite images, acquired during 2008 by SPOT-5 and Landsat ETM+, were classified to extract the land cover information. The principal classes are forested areas, non-forested areas and bare soil. Credits: Keyobs

Image 3: Geoville, an international consulting company based in Austria and Luxemburg, was selected by ESA to supply the European Investment Bank with a time series of land cover/habitat maps based on historic and recent satellite imagery from SPOT, showing the state of the landscape before and during the construction works of the Egnatia Highway project in Greece. Credits: Geoville

Image 4: Geoville, an international consulting company based in Austria and Luxemburg, was selected by ESA to supply the European Investment Bank with a time series of land cover/habitat maps based on historic and recent satellite imagery from SPOT, showing the state of the landscape before and during the construction works of the Egnatia Highway project in Greece. This high-resolution land cover map was created to serve as the future baseline. The update was done in a way to guarantee compatibility with existing data and indicators. Compared to the Corine Land Cover-based maps, this map provides a much higher level of spatial detail of the land cover information and therefore allows more precise analysis of changes. The results show that the impact of the road construction is strongest during the construction phase. Once finalized, the fragmentation will decrease again due to the removal of temporary roads. Credits: Geoville

On the Net:

Massive Planet Being Torn Apart By Its Own Tides

Scientists have opportunity to watch a planetary “death march”

An international group of astrophysicists has determined that a massive planet outside our Solar System is being distorted and destroyed by its host star, a finding that helps explain the unexpectedly large size of the planet, WASP-12b.

It’s a discovery that not only explains what’s happening to WASP-12b; it also means scientists have a one-of-a-kind opportunity to observe how a planet enters this final stage of its life. “This is the first time that astronomers are witnessing the ongoing disruption and death march of a planet,” says UC Santa Cruz professor Douglas N.C. Lin, a co-author of the new study and the founding director of the Kavli Institute for Astronomy and Astrophysics (KIAA) at Peking University, which was deeply involved with the research.

The findings are being published in the February 25 issue of Nature.

The research was led by Shu-lin Li of the National Astronomical Observatories of China. Li, a graduate of KIAA, and his research team analyzed observational data on the planet to show how the gravity of its parent star is both inflating its size and spurring its rapid dissolution.

WASP-12b, discovered in 2008, is one of the most enigmatic of the 400-plus planets that have been found outside our Solar System over the past 15 years. It orbits a star in the constellation Auriga that is roughly similar in mass to our Sun. Like most known extra-solar planets, it is large and gaseous, resembling Jupiter and Saturn in this respect. But unlike Jupiter, Saturn or most other extra-solar planets, it orbits its parent star at extremely close range: 75 times closer than the Earth is to the Sun, or just over 1 million miles. It is also larger than astrophysical models would predict. Its mass is estimated to be almost 50% larger than Jupiter’s and its radius 80% larger, giving it six times Jupiter’s volume. It is also unusually hot, with a daytime temperature of more than 2500°C.
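The figures quoted above hang together arithmetically, as a quick check shows.

```python
# Quick consistency check of the quoted figures: a radius about 80% larger
# than Jupiter's gives roughly six times Jupiter's volume, and an orbit 75
# times tighter than Earth's works out to just over a million miles.

AU_MILES = 92.96e6          # mean Earth-Sun distance in miles

radius_ratio = 1.8                      # 80% larger than Jupiter
volume_ratio = radius_ratio ** 3        # ~5.8, i.e. about six times
orbit_miles = AU_MILES / 75             # ~1.24 million miles

print(f"volume ratio ~ {volume_ratio:.1f} x Jupiter")
print(f"orbital distance ~ {orbit_miles/1e6:.2f} million miles")
```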

Some mechanism must be responsible for expanding this planet to such an unexpected size, say the researchers. They have focused their analysis on tidal forces, which they say are strong enough to produce the effects observed on WASP-12b.

On Earth, tidal forces between the Earth and the Moon cause local sea levels to rise and fall modestly twice a day. WASP-12b, however, is so close to its host star that the gravitational forces are enormous. The tremendous tidal forces acting on the planet completely change its shape into something resembling a rugby ball or an American football.

These tides not only distort the shape of WASP-12b. By continuously deforming the planet, they also create friction in its interior. The friction produces heat, which causes the planet to expand. “This is the first time that there is direct evidence that internal heating (or ‘tidal heating’) is responsible for puffing up the planet to its current size,” says Lin.

Huge as it is, WASP-12b faces an early demise, say the researchers. In fact, its size is part of its problem. It has ballooned to such a point that it cannot retain its mass against the pull of its parent star’s gravity. As the study’s lead author Li explains, “WASP-12b is losing its mass to the host star at a tremendous rate of six billion metric tons each second. At this rate, the planet will be completely destroyed by its host star in about ten million years. This may sound like a long time, but for astronomers it’s nothing. The planet’s remaining lifetime is roughly 500 times shorter than the current age of the Earth.”
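The quoted numbers are easy to check to order of magnitude: at six billion metric tons per second, ten million years is indeed enough time to strip away roughly a Jupiter’s worth of mass.

```python
# Order-of-magnitude check of the quoted mass-loss figure.

JUPITER_MASS_KG = 1.898e27
SECONDS_PER_YEAR = 3.156e7

loss_rate_kg_s = 6e9 * 1e3                    # 6 billion metric tons/s in kg/s
total_lost = loss_rate_kg_s * 10e6 * SECONDS_PER_YEAR   # mass lost in 10 Myr

print(f"mass lost in 10 Myr ~ {total_lost:.2e} kg "
      f"~ {total_lost / JUPITER_MASS_KG:.1f} Jupiter masses")
```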

The material that is stripped off WASP-12b does not fall directly onto the parent star. Instead, it forms a disk around the star and slowly spirals inwards. A careful analysis of the orbital motion of WASP-12b provides circumstantial evidence for the gravitational influence of a second, lower-mass planet in the disk. This planet is most likely a massive version of the Earth, a so-called “super-Earth.”

The disk of planetary material and the embedded super-Earth are detectable with currently available telescope facilities. Their properties can be used to further constrain the history and fate of the mysterious planet WASP-12b.

In addition to KIAA, support for the WASP-12b research came from NASA, the Jet Propulsion Laboratory and the National Science Foundation. Along with Li and Lin, co-authors include UC Santa Cruz professor Jonathan Fortney and Neil Miller, a graduate student at the university.

Image 1: Illustration of WASP-12b in orbit about its host star (Credit: ESA/C Carreau)

Image 2: The WASP-12 system. The massive gas giant WASP-12b is shown in purple with the transparent region representing its atmosphere. The gas giant planet’s orbit is somewhat non-circular. This indicates that there is probably an unseen lower mass planet in the system, shown in brown, that is perturbing the larger planet’s orbit. Mass from the gas giant’s atmosphere is pulled off and forms a disk around the star, shown in red. (Courtesy: KIAA/Graphic: Neil Miller)

On the Net:

Kavli Institute for Astronomy and Astrophysics (KIAA) at Peking University

Nature

Neutrons Playing Big Role In Scientific Advancement

Unless you’re interested in isotopic labeling, neutrons don’t figure much into chemistry. Neutral in charge and a bit bigger than a proton, the neutron neither gives an atom its name nor determines much about its reactivity.

But neutrons have some unsung properties that make them useful for investigating matter. Because they are neutral, they can penetrate deeper into a sample than electrons can. Because they have mass and spin, they have a magnetic moment and can probe magnetism. Because they interact with nuclei rather than electron orbitals, they are sensitive to light elements and can even distinguish between hydrogen and deuterium. And they’re nondestructive. These features are inspiring researchers to use neutrons to analyze a variety of materials, from coal and complex fluids to cell membranes, membrane proteins and magnetic materials.

“As far as I’m concerned, neutrons are the most powerful structural probe that inorganic and materials chemists have to characterize their materials,” says Robert J. Cava, a Princeton University chemistry professor who uses neutrons to characterize the structure and magnetic properties of transition-metal complexes. “Nothing is better than a neutron.”

Unlike most techniques that use electromagnetic radiation or electrons to probe samples, however, neutrons cannot easily be generated in individual labs. The two primary ways to obtain them involve nuclear reactions: One is through splitting uranium for a net yield of two neutrons. The other is through spallation, in which a beam of protons is aimed at a heavy target, such as tungsten or mercury, to induce a different nuclear reaction that also ejects neutrons.

The U.S. has three main neutron user facilities. The Center for Neutron Research at the National Institute of Standards & Technology (NIST) uses a uranium reactor. Oak Ridge National Laboratory (ORNL) has two neutron sources, one reactor and one spallation. And at Los Alamos National Laboratory (LANL), spallation rules.

All neutron experiments involve directing a beam of neutrons at a sample and then detecting the angle and the energy at which the neutrons are scattered after interacting with the atoms in the sample. The two kinds of neutron sources each have their own benefits. Spallation yields intense, pulsed neutron beams, allowing time-of-flight experiments, says Stephen E. Nagler, chief scientist of the Neutron Scattering Science Division at ORNL. The continuous stream of neutrons from reactors, however, can work better when researchers want to cool the neutrons so they have longer wavelengths, which can be important when looking at protein or polymer conformations in which the length scales of interest are hundreds or thousands of angstroms.
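The time-of-flight idea rests on the de Broglie relation between a neutron’s speed and its wavelength; the sketch below shows the conversion, with an arbitrary flight-path length assumed for illustration.

```python
# A neutron's wavelength is set by its speed (de Broglie relation), and in a
# pulsed, time-of-flight instrument the speed is measured from how long a
# pulse takes to cross a known flight path. Flight path below is illustrative.

H_PLANCK = 6.626e-34       # J*s
M_NEUTRON = 1.675e-27      # kg

def wavelength_angstrom(flight_path_m: float, time_of_flight_s: float) -> float:
    """Neutron wavelength (in angstroms) from a measured time of flight."""
    velocity = flight_path_m / time_of_flight_s
    return H_PLANCK / (M_NEUTRON * velocity) * 1e10

# e.g. a neutron crossing a 10 m flight path in 12.6 ms is a ~5 Å cold neutron
print(f"{wavelength_angstrom(10.0, 12.6e-3):.1f} Å")
```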

One use of neutron-scattering experiments is to study porous materials. In one example, a group led by Andrzej P. Radlinski of Griffith University’s Nanoscale Science & Technology Centre, in Australia, and Yuri B. Melnichenko of ORNL investigated the ability of underground coal to sequester CO2 (Langmuir 2009, 25, 2385). When a neutron beam is directed at a sample of coal, the way the neutrons scatter from the sample gives information about the size distribution, density, and chemical composition of the sample’s pores.
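In scattering experiments of this kind, the length scale probed is set by the momentum transfer Q, which depends on the neutron wavelength and the scattering angle; a minimal sketch, with illustrative values, is shown below.

```python
# Minimal sketch of the standard small-angle scattering relation: the
# momentum transfer Q set by wavelength and scattering angle determines the
# length scale probed, roughly d ~ 2*pi/Q. Values below are illustrative.

import math

def momentum_transfer(wavelength_A: float, scattering_angle_deg: float) -> float:
    """Q = (4*pi/lambda) * sin(theta/2), in inverse angstroms."""
    theta = math.radians(scattering_angle_deg)
    return 4.0 * math.pi / wavelength_A * math.sin(theta / 2.0)

wavelength = 6.0  # Å, assumed typical of a cold-neutron SANS instrument
for angle in (0.2, 1.0, 5.0):            # scattering angles in degrees
    q = momentum_transfer(wavelength, angle)
    print(f"angle = {angle:4.1f} deg  Q = {q:.4f} 1/Å  d ~ {2*math.pi/q:6.0f} Å")
```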

The researchers found that coal can pack five times the CO2 present in the same volume of air at the same temperature and pressure. Condensation of CO2 is most pronounced in pores of about 20-40 Å in diameter, compared with larger pores, and is also affected by local mineral composition. The coal microstructures also are not affected by exposure to CO2, at least for a period of several days. The results, Melnichenko says, show that neutron scattering can be used to pinpoint the sorption capacity of different coals. The technique may also be of use in studying membranes that capture CO2 from flue gases at coal-burning power plants.

But it’s not just solids that can be analyzed by using neutrons. Norman J. Wagner, a chemical engineering professor at the University of Delaware, is interested in the property of shear thickening, or the ability of some suspensions (think cornstarch mixed with water) to become solidlike under stress. He studies the neutron scattering of colloidal suspensions under flow conditions to try to understand the mechanism of shear thickening. “Small-angle neutron scattering is a unique way to probe something while it’s flowing so you can see how the nano- and microscale structure is changing,” Wagner says.

Shear thickening is a property to be exploited in applications such as “soft” body armor, but it can also be the bane of materials processing, leading to stopped-up equipment or ruined products, Wagner says. Using neutron scattering, he and colleagues studied silica particles in polyethylene glycol to try to validate the fundamental theory behind the behavior of shear-thickening fluids, that is, to figure out what structures are being formed and to compare the results against theoretical models (Rheol. Acta 2009, 48, 897).

They found that, as predicted, fluids under stress self-assemble into individual close-packed, transient “hydroclusters” (colloidal particles that have been squeezed together) rather than larger aggregates. This flow-induced hydroclustering is responsible for the dramatic increase in solution viscosity and elasticity that occurs during shear thickening. The basic knowledge obtained from neutron-scattering measurements will lead to improved processes and products down the road, Wagner says. “If we can measure the nanostructure, we can rationally engineer better nanocomposites,” he says.

Solution properties such as diffusion dynamics can also be studied by neutron scattering. Eugene Mamontov, Huimin Luo, and Sheng Dai at ORNL have looked at protic ionic liquids, which are of interest as proton-conducting fuel-cell electrolytes and novel media for gas separation. Techniques such as X-ray scattering cannot illuminate hydrogen atoms, Dai notes, so neutron scattering is crucial to understanding how the liquids work.

In particular, the group found that the ionic liquid N,N,N′,N′-tetramethylguanidinium bis(perfluoroethylsulfonyl)imide has two different proton-diffusion processes at temperatures above its melting point (J. Phys. Chem. B 2009, 113, 159). The faster process is spatially restricted to an area with a radius of about 8 Å and is localized motion that is correlated to rotational dynamics, Dai says. The slower process involves long-range transfer of the protons on the NH2 group of the guanidinium cation and is likely the key to providing proton conduction, the quality necessary for an ionic liquid to be a good electrolyte.

Neutrons can also be used to study biological systems, especially membranes and membrane proteins that have proven difficult to look at by other techniques. NIST research chemist Hirsh Nanda and colleagues have been using neutron scattering to investigate the formation of HIV type 1. One aspect of this research is to study the conformations of the Gag protein, which assembles on the inner surface of the infected cell’s membrane and is cleaved into several domains that eventually bud into a new virus.

Much of the work on Gag has focused on its “compact” solution conformations, but Gag is known to take on a more extended structure during viral assembly. Nanda and colleagues are using a tethered membrane system developed at NIST (Biointerphases 2007, 2, 21) to study the interaction of the protein with membranes. By analyzing the neutron-scattering patterns of Gag on the membrane, they’re hoping to elucidate the path of the conformational changes and eventual viral assembly.

So far, they’ve found that Gag by itself stays in its compact form on the membrane. “But another function of the protein is to drag RNA into the virus,” Nanda says. “If we also introduce nucleic acids, we can induce a change in conformation to the extended form. You need interactions of protein with both membrane and viral RNA to produce the extended conformation.” The RNA also appears to cross-link several Gag molecules together to form an extended conformation. The group is now studying other protein interactions that may also modulate Gag protein extension and lead to viral budding.

Other researchers are using neutron scattering to look in detail at cell membranes themselves. Ka Yee C. Lee, a chemistry professor at the University of Chicago, is using neutrons to study the interactions of lipids in cell membranes and other molecules, such as proteins, polymers, or additional lipids.

Lee does neutron experiments alongside X-ray measurements, in particular to find out how a protein associates with a membrane and how the interaction affects the ordering or packing of the lipids. “If we rely only on X-ray studies, we sometimes don’t get the contrast necessary to distinguish between the lipid group and a protein or peptide,” Lee says. If she selectively deuterates the lipids and uses deuterated water in her solutions, however, neutrons can then illuminate the details of the protein-membrane interactions.
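The contrast gained by deuteration can be put in numbers: hydrogen and deuterium have very different coherent scattering lengths, so swapping light water for heavy water (or deuterating the lipids) shifts the scattering length density dramatically. The sketch below uses standard tabulated scattering lengths and approximate densities.

```python
# Why deuteration creates contrast, in numbers: compare the neutron
# scattering length densities of H2O and D2O. Scattering lengths are
# standard tabulated values; densities are approximate.

AVOGADRO = 6.022e23

# Coherent scattering lengths in femtometres (1 fm = 1e-5 Å)
B_H, B_D, B_O = -3.739, 6.671, 5.803

def sld(sum_b_fm: float, molar_mass: float, density_g_cm3: float) -> float:
    """Scattering length density in units of 1e-6 / Å^2."""
    number_density_per_A3 = density_g_cm3 * AVOGADRO / molar_mass / 1e24
    return number_density_per_A3 * sum_b_fm * 1e-5 / 1e-6

print(f"H2O: {sld(2*B_H + B_O, 18.02, 1.00):+.2f} e-6/Å^2")
print(f"D2O: {sld(2*B_D + B_O, 20.03, 1.11):+.2f} e-6/Å^2")
```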

One of Lee’s projects has focused on the role of cholesterol in membranes, to answer fundamental questions of whether and how cholesterol interacts with other membrane components. “There is a lot of interest in what people now refer to as a lipid raft, or domains that are formed at the membrane surface that can lead to local sequestration of proteins” for functions such as signaling, Lee says. One theory explaining how this happens involves cholesterol interacting with other membrane lipid components to give rise to the raft. In keeping with this theory, initial work from Lee’s lab indicates that lipids and cholesterol can associate to form a kind of lipidic alloy (Phys. Rev. Lett. 2009, 103, 028103), and her group is now examining the molecular interactions and their dynamics.

For his part, Princeton’s Cava uses neutron diffraction to study inorganic solids. For the ability to nail down the positions of light elements such as B, C, N, or O, or alkali metals such as Li and Na, neutron scattering “is the method of choice by which solid-state chemists determine the structure and formulas of the compounds they synthesize,” Cava says. The technique can also be used to elucidate the magnetic structure of materials.

In one recent study, Cava, his former graduate student D. Vincent West, and colleagues used neutron diffraction to study a new family of anhydrous sulfates, A2+Mn5(SO4)6, where A is Pb, Ba, or Sr (J. Solid State Chem. 2009, 182, 1343). They expected that the tetrahedral structure of the SO42- anion would yield a triangular crystal lattice in the compounds. The triangular geometry in turn would lead to a phenomenon called geometric frustration, which neutralizes typical nearest-neighbor magnetic interactions and opens the system to ground states with novel properties.

The compounds turned out to be composed of a unique, Mn2+-containing Mn2O9 dimer along with chains of alternating MnO6-AO12 polyhedra. The structure is also layered, with the cations lining up akin to stacked honeycombs.

On the magnetism front, all three compounds transition from disordered to ordered magnetic moments below 10 K, which Cava, West, and colleagues believe stems from the magnetic interaction between the two Mn atoms in the Mn2O9 dimers. Other differences in how the compounds respond to magnetic fields likely involve the magnetic moments within the polyhedral chains, where the Pb, Ba, or Sr atom can more readily affect the compound’s magnetic properties. “The chemistry of these materials suggests a broader family of materials whereby the magnetic properties can be tuned through chemical substitution,” West says. “In addition, the unique triangular geometry in this structure represents a new perspective that may prove valuable for understanding magnetism in solids.”

All of the neutron user facilities in the U.S. are in various phases of expansion. NIST’s Center for Neutron Research is adding a second instrument hall, aiming to increase its measurement capacity and users by 25% when the project is completed in 2012. ORNL is also planning to add a second experimental hall to its spallation neutron source, a move that would double its capacity.

Doubling of capacity is also a goal at LANL, although the funding for that has not yet been secured, says Alan J. Hurd, director of LANL’s Manuel Lujan Jr. Neutron Scattering Center. Hurd notes that all of the neutron centers in the U.S. tend to receive twice as many requests for experiment time as they can accommodate, so he has no doubt that the extra capacity will be well used once it’s built.

“We’re finding more and more that neutrons can answer questions you really can’t get at in another way,” Delaware’s Wagner says. With greater knowledge of neutrons’ capabilities and increased availability, scientific progress undoubtedly awaits.

On the Net:

American Chemical Society

An Emotion Detector For Baby

Baby monitors of the future could translate infant cries, so that parents will know for certain whether their child is sleepy, hungry, needing a change, or in pain. Japanese scientists report details of a statistical computer program that can analyze a baby’s crying in the International Journal of Biometrics.

As any new parent knows, babies have a very loud method of revealing their emotional state – crying. Unfortunately, the parenting handbook does not offer guidance on how to determine what the crying means. Parents sometimes learn with experience that their child’s cries may be slightly different depending on their cause, whether hunger or discomfort.

Now, engineers in Japan have turned to kansei engineering, an approach to product design invented in the 1970s by Professor Mitsuo Nagamachi, Dean of Hiroshima International University, which aims to “measure” feelings and emotions.

Tomomasa Nagashima of the Department of Computer Science and Systems Engineering at the Muroran Institute of Technology, in Hokkaido, and colleagues explain that the fundamental problem in building an emotion detector for a baby’s crying is that the baby cannot confirm verbally what its cries mean. Various researchers have tried to classify infant emotions based on an analysis of the crying pattern, but with little success so far.

The team employed a sound-pattern-recognition approach that uses a statistical analysis of the frequency content and power spectrum of cries to classify different types of crying. They were then able to correlate the different recorded audio spectra with a baby’s emotional state, as confirmed by the child’s parents. In their tests, recordings of babies crying because of a painful genetic disorder were used to make the distinction between pained cries and other types of crying more obvious. They achieved a 100% success rate in a validation test classifying pained cries versus “normal” cries.
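As a generic illustration of this kind of pipeline (and emphatically not the authors’ exact statistical method), the sketch below computes band-averaged power-spectrum features from audio and feeds them to a simple classifier; the recordings and labels are stand-ins.

```python
# Generic sketch of a cry-classification pipeline, not the authors' method:
# compute the power spectrum of each recording, average it over frequency
# bands, and train a simple classifier. Random signals stand in for real,
# labelled cry recordings.

import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def spectral_features(audio: np.ndarray, fs: int, n_bands: int = 16) -> np.ndarray:
    """Log power averaged over evenly spaced frequency bands."""
    freqs, power = welch(audio, fs=fs, nperseg=1024)
    bands = np.array_split(power, n_bands)
    return np.log10([band.mean() + 1e-12 for band in bands])

fs = 8000
rng = np.random.default_rng(1)
recordings = [rng.normal(size=fs * 2) for _ in range(20)]   # stand-in audio
labels = [0] * 10 + [1] * 10          # e.g. 0 = "normal" cry, 1 = pained cry

X = np.vstack([spectral_features(a, fs) for a in recordings])
clf = SVC(kernel="linear").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```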

The research establishes a sound theoretical method for classifying infant emotions, albeit currently limited to a specific emotion, based on analysis of the audio spectra of a baby’s cries. The technique might one day be incorporated into a portable electronic device, or app, to help parents or carers decide on a course of action when their child is crying.

“Statistical method for classifying cries of baby based on pattern recognition of power spectrum” in Int. J. Biometrics, 2010, 2, 113-123

On the Net:

The Toxicity Of Antimicrobial Silver In Products Can Be Reduced

Chemists at the University of Helsinki have managed to manufacture new polymer-stabilized silver nanoparticles. The result is significant because the antimicrobial characteristics of silver are used in textiles, floor coatings and paints even though the health impacts of silver nanoparticles are not entirely known. Finnish researchers now think that exposure to silver can be reduced by chemically binding the nanoparticles to polymers. The research results will soon be published in a leading journal in the field, Colloid and Polymer Science.

Nanoparticles (a nanometer is equal to one billionth of a meter) are a topic of debate both in research and everyday life. The antimicrobial characteristics of silver, on the other hand, have been well-known for a long time and it has numerous commercial applications. Supermarkets carry an abundance of products with added silver or silver nanoparticles. These include antimicrobial textiles, containers, shower curtains, tabletops, floor coatings, paints and glues. Colloidal silver water for internal use as well as creams and deodorants, and even wound dressing products, containing silver that are used externally are also available.

In the US, the registration of new insecticides containing silver nanoparticles has raised debate about their safety. The question can be justifiably asked as to whether conclusions on the toxicity of silver nanoparticles can be drawn on the basis of earlier safety information on the toxicity of silver ions and metallic silver (1).

The method developed at the University of Helsinki offers one way to reduce the toxicity of silver. Nanoparticles can be manufactured through various methods that are based on reducing metal salts, in this case silver nitrate, in the presence of a stabilizing compound. Polymer-stabilized silver nanoparticles have been successfully manufactured at the Laboratory of Polymer Chemistry at the University of Helsinki. The work has exploited the laboratory’s prior experience with gold nanoparticles and the expertise of the School of Science and Technology of Aalto University and its European cooperation partners.

In Helsinki, the stabilizing component used in the manufacturing process is a polymer with a reactive thiol end group. It is known that thiol groups bind effectively with silver, which enables the effective colloidal stabilization of silver nanoparticles and binding to polymers. The polymer is in itself a soft, rubber-like acrylate, which contains a water-soluble block that enables silver ions to be released from the otherwise hydrophobic coating. The idea is that these silver nanoparticles could be used as a coating or its component.

Many mechanisms relating to the toxicity of silver to micro-organisms have been put forward. It has been demonstrated that silver ions react in cells with the thiol groups of proteins. There is also evidence to show that silver ions damage DNA by inhibiting its replication. Silver’s ability to form extremely sparingly soluble salts is also considered one of its impact mechanisms. When chloride ions precipitate as silver chloride from the cytoplasm of cells, cell respiration is inhibited. The antibacterial efficiency of silver nanoparticles is also well-known, especially against Gram-negative bacteria such as E. coli. The silver nanoparticles work by releasing silver ions and by penetrating cells.

Silver, silver ions and silver nanoparticles have generally been considered to be quite harmless to people. However, the most recent research has demonstrated that nanoparticles also penetrate mammalian cells and damage their genetic material. There is even evidence to suggest that silver nanoparticles may actively find their way into cells through endocytosis. Inside the cell, hydrogen peroxide formed in cell respiration oxidizes silver nanoparticles and releases silver ions from them, consequently increasing the toxicity. Thus, it can even be assumed that silver nanoparticles are cyto- or genotoxic. Moreover, it has been demonstrated that silver nanoparticles penetrate the skin via pores and glands. If the skin is damaged, this facilitates the penetration of silver particles through the skin.

It is therefore important that coatings containing silver nanoparticles do not release nanoparticles. According to Finnish researchers, the effect of the coating should only be based on silver ions dissolving from them. Consequently, nanoparticles should be as well bound to the coating as possible, enabling a reduction in the possible exposure to silver nanoparticles.

References:

1) Erickson B.E. (2009) Nanosilver pesticides. Chemical and Engineering news, 87, 48, 25-26.

2) Niskanen, Jukka; Shan, Jun; Tenhu, Heikki; Jiang, Hua; Kauppinen, Esko; Barranco, Violeta; Pico, Fernando; Yliniemi, Kirsi; Kontturi, Kyösti. Synthesis of copolymer-stabilized silver nanoparticles for coating materials. Colloid and Polymer Science, in press. DOI: 10.1007/s00396-009-2178-x

On the Net:

Gene Mutation Linked To Autism-Like Symptoms In Mice

When a gene implicated in human autism is disabled in mice, the rodents show learning problems and obsessive, repetitive behaviors, researchers at UT Southwestern Medical Center have found.

The researchers also report that a drug affecting a specific type of nerve function reduced the obsessive behavior in the animals, suggesting a potential way to treat repetitive behaviors in humans. The findings appear in the Feb. 24 issue of the Journal of Neuroscience.

“Clinically, this study highlights the possibility that some autism-related behaviors can be reversed through drugs targeting specific brain function abnormalities,” said Dr. Craig Powell, assistant professor of neurology and psychiatry at UT Southwestern and the study’s senior author.

“Understanding one abnormality that can lead to increased, repetitive motor behavior is not only important for autism, but also potentially for obsessive-compulsive disorder, compulsive hair-pulling and other disorders of excessive activity,” Dr. Powell said.

The study focused on a protein called neuroligin 1, or NL1, which helps physically hold nerve cells together so they can communicate better with one another. Previous investigations have implicated mutations in proteins related to NL1 in human autism and mental retardation.

In the latest study, the UT Southwestern researchers studied mice that had been genetically engineered to lack NL1. These mice were normal in many ways, but they groomed themselves excessively and were not as good at learning a maze as normal mice.

The altered mice showed weakened nerve signaling in a part of the brain called the hippocampus, which is involved in learning and memory, and in another brain region involved in grooming.

When treated with a drug called D-cycloserine, which activates nerves in those brain regions, the excessive grooming lessened.

“Our goal was not to make an ‘autistic mouse’ but rather to understand better how autism-related genes might alter brain function that leads to behavioral abnormalities,” Dr. Powell said. “By studying mice that lack neuroligin-1, we hope to understand better how this molecule affects communication between neurons and how that altered communication affects behavior.

“This study is important because we were able to link the altered neuronal communication to behavioral effects using a specific drug to ‘treat’ the behavioral abnormality.”

Future studies, Dr. Powell said, will focus on understanding in more detail how NL1 operates in nerve cells.

Other UT Southwestern researchers participating in the study were co-lead authors Jacqueline Blundell, former postdoctoral researcher in neurology, and Dr. Cory Blaiss, postdoctoral researcher in neurology; Felipe Espinosa, senior research scientist in neurology; and graduate student Christopher Walz.

Researchers at Stanford University also contributed to this work.

The research was supported by Autism Speaks, the Simons Foundation, the National Institute of Mental Health, BRAINS for Autism, and the Hartwell Foundation.

On the Net: