CO2 Is The Missing Link To Past Global Climate Changes

CO2 levels explain why temperatures in tropical and arctic waters have risen and fallen together for the past 2.7 million years

Increasingly, the Earth’s climate appears to be more connected than anyone would have imagined. El Niño, the weather pattern that originates in a patch of the equatorial Pacific, can spawn heat waves and droughts as far away as Africa.

Now, a research team led by Brown University has established that the climate in the tropics over at least the last 2.7 million years changed in lockstep with the cyclical spread and retreat of ice sheets thousands of miles away in the Northern Hemisphere. The findings appear to cement the link between the recent Ice Ages and temperature changes in tropical oceans. Based on that new link, the scientists conclude that carbon dioxide has played the lead role in dictating global climate patterns, beginning with the Ice Ages and continuing today.

“We think we have the simplest explanation for the link between the Ice Ages and the tropics over that time and the apparent role of carbon dioxide in the intensification of Ice Ages and corresponding changes in the tropics,” said Timothy Herbert of Brown University and the lead author of the paper in Science. Herbert added, “but we don’t know why. The answer lies in the ocean, we’re pretty sure.”

Candace Major of the National Science Foundation agrees: “This research certainly supports the idea of global sensitivity of climate to carbon dioxide as the first order of control on global temperature patterns,” she says. “It also points to a strong sensitivity of global temperature to the levels of greenhouse gases on very long timescales, and shows that resulting climatic impacts are felt from the tropics to the poles.”

The research team, including scientists from Luther College in Iowa, Lafayette College in Pennsylvania, and the University of Hong Kong, analyzed cores taken from the seabed at four locations in the tropical oceans: the Arabian Sea, the South China Sea, the eastern Pacific and the equatorial Atlantic Ocean.

The cores tell the story. The researchers zeroed in on tropical ocean surface temperatures because these vast bodies, which make up roughly half of the world’s oceans, in large measure orchestrate the amount of water vapor in the atmosphere – the most prevalent greenhouse gas – and thus rainfall patterns worldwide.

Looking at the chemical remains of tiny marine organisms that lived in the sunlit zone of the ocean, the scientists were able to extract surface temperatures for the oceans over the last 3.5 million years, well before the beginning of the Ice Ages. The geologists found that, beginning about 2.7 million years ago, tropical ocean surface temperatures dropped by 1 to 3 degrees C (1.8 to 5.4 F) during each Ice Age, when ice sheets spread in the Northern Hemisphere and significantly cooled oceans in the northern latitudes. Even more compelling, the tropics also changed when Ice Age cycles switched from roughly 41,000-year to 100,000-year intervals.

“The tropics are reproducing this pattern both in the cooling that accompanies the glaciation in the Northern Hemisphere and the timing of those changes,” Herbert said. “The biggest surprise to us was how similar the patterns looked all across the tropics since about 2.7 million years ago. We didn’t expect such similarity.”

Climate scientists have a record of carbon dioxide levels for the last 800,000 years – spanning the last seven Ice Ages – from ice cores taken in Antarctica. They have deduced that carbon dioxide levels in the atmosphere fell by about 30 percent during each cycle, and that most of that carbon dioxide was absorbed by high-latitude oceans such as the North Atlantic and the Southern Ocean. According to the new findings, this pattern began 2.7 million years ago, and the amount of atmospheric carbon dioxide absorbed by the oceans has increased with each successive Ice Age. Geologists know the Ice Ages have gotten progressively colder – leading to larger ice sheets – because they have found debris on the seabed of the North Atlantic and North Pacific left by icebergs that broke from the land-bound sheets.

“It seems likely that changes in carbon dioxide were the most important reason why tropical temperatures changed, along with the water vapor feedback,” Herbert said.

Herbert acknowledges that the team’s findings leave important questions. One is why carbon dioxide began to play a major role when the Ice Ages began 2.7 million years ago. Also left unanswered is why carbon dioxide appears to have magnified the intensity of successive Ice Ages from the beginning of the cycles to the present. The researchers do not understand why the timing of the Ice Age cycles shifted from roughly 41,000-year to 100,000-year intervals.

Contributing authors are Laura Cleaveland Peterson at Luther College, Kira Lawrence at Lafayette College and Zhonghui Liu at the University of Hong Kong. The U.S. National Science Foundation and the Evolving Earth Foundation funded the research. The cores came from the Ocean Drilling Program, sponsored by the NSF, and the Integrated Ocean Drilling Program.

Image 2: Sedimentary cores taken from the ocean floor in four locations show that climate patterns in the tropics have mirrored Ice Age cycles for the last 2.7 million years and that carbon dioxide has played the leading role in determining global climate patterns. Cores from site 806 were used as controls. Credit: Timothy Herbert, Brown University

Scientist Predicts Human Extinction In 100 Years

A professor of microbiology believes that humans will be wiped out within the next 100 years.

Frank Fenner, professor at the Australian National University and the man who helped eradicate smallpox, told The Australian newspaper this week that “Homo sapiens will become extinct, perhaps within 100 years.”

“A lot of other animals will, too. It’s an irreversible situation. I think it’s too late. I try not to express that because people are trying to do something, but they keep putting it off.”

Fenner played a leading role in eradicating the variola virus, which causes smallpox. He has also received many awards and honors, published hundreds of scientific papers, and written or co-written 22 books.

Fenner says the real trouble is the population explosion and “unbridled consumption.”

According to the U.N., the number of Homo sapiens is projected to exceed 6.9 billion this year.  Fenner is pessimistic about the outcome of cutting greenhouse gas emissions.

“We’ll undergo the same fate as the people on Easter Island,” he says. “Climate change is just at the very beginning. But we’re seeing remarkable changes in the weather already.”

“The Aborigines showed that without science and the production of carbon dioxide and global warming, they could survive for 40,000 or 50,000 years. But the world can’t. The human species is likely to go the same way as many of the species that we’ve seen disappear.”

“Mitigation would slow things down a bit, but there are too many people here already.”

Other scientists share his pessimism.

Stephen Boyden, a colleague of Fenner and a long-time friend, says there is deep pessimism among some ecologists.

“Frank may be right, but some of us still harbor the hope that there will come about an awareness of the situation and, as a result, the revolutionary changes necessary to achieve ecological sustainability,” Boyden, an immunologist who turned to human ecology later in his career, told The Australian.

“That’s where Frank and I differ. We’re both aware of the seriousness of the situation, but I don’t accept that it’s necessarily too late. While there’s a glimmer of hope, it’s worth working to solve the problem. We have the scientific knowledge to do it but we don’t have the political will.”

Fenner, who is now 95 years old, will open the Healthy Climate, Planet and People symposium at the Australian Academy of Science next week, as part of the AAS Fenner conference series.

“As the population keeps growing to seven, eight or nine billion, there will be a lot more wars over food,” he says.

“The grandchildren of today’s generations will face a much more difficult world.”

Low Calcium Intake Linked With Increased Risk Of Osteoporosis And Hypertension In Postmenopausal Women

Italian postmenopausal women who have a low calcium intake show a higher risk of developing both osteoporosis and hypertension (a chronic medical condition in which arterial blood pressure is elevated) than those who consume higher levels of calcium, according to research presented today at EULAR 2010, the Annual Congress of the European League Against Rheumatism in Rome, Italy.

In this Italian study of 825 postmenopausal women with hypertension, a significantly higher proportion of the women who consumed less calcium from dairy sources (35.4%) had a concurrent diagnosis of both hypertension and osteoporosis, compared with women who consumed more calcium (19.3%; p<0.001).

Further statistical analyses revealed that a lower calcium intake was associated with an increased risk of developing hypertension or osteoporosis over time when compared with controls (hypertension: OR 1.43, 95% CI 1.12-1.82; osteoporosis: OR 1.46, 95% CI 1.15-1.85). Women who consumed a lower amount of calcium were shown to be most likely to develop both conditions over time compared with women consuming a higher amount of calcium (OR 1.60, 95% CI 1.09-2.34).
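
For readers curious where figures like these come from: an odds ratio is calculated from a simple two-by-two table of exposure versus outcome. The sketch below, in Python, uses invented counts purely for illustration (the study’s published estimates are adjusted and matched, so they cannot be reproduced from a raw table like this).

```python
import math

# Hypothetical 2x2 table -- illustrative counts only, NOT data from the EULAR study.
# Rows: dairy-calcium intake (low vs. higher); columns: hypertension (yes / no).
low_ca_with, low_ca_without = 120, 280     # low calcium intake
high_ca_with, high_ca_without = 95, 330    # higher calcium intake

# Odds ratio: odds of the outcome in the low-intake group divided by the odds
# in the higher-intake group.
odds_ratio = (low_ca_with / low_ca_without) / (high_ca_with / high_ca_without)

# Approximate 95% confidence interval on the log-odds scale (Woolf method).
se_log_or = math.sqrt(1 / low_ca_with + 1 / low_ca_without +
                      1 / high_ca_with + 1 / high_ca_without)
ci_lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_lower:.2f}-{ci_upper:.2f}")
# With these made-up counts: OR = 1.49, 95% CI 1.09-2.04
```

An odds ratio above 1, with a confidence interval that excludes 1, indicates a higher risk in the low-intake group, which is how the figures quoted above should be read.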

“Our study confirms that there may be a link between hypertension and low bone mass and that a low calcium intake could be a risk factor for the development of osteoporosis in postmenopausal women,” said Professor Maria Manara, Department of Rheumatology, Gaetano Pini Institute, Milan, Italy, and lead author of the study. “Our study has also shown that a low calcium intake from dairy foods may be involved in this association and could be considered a risk factor for the development of hypertension and osteoporosis.”

The 825 subjects involved in the study were recruited from a cohort of 9,898 postmenopausal women referred to the Osteometabolic unit of the Gaetano Pini Institute in Italy since 2002. Calcium intake from dairy sources was assessed by the number of standard servings of ~300 mg calcium consumed in a week, and subjects were stratified into ‘quartiles’ (lower 25%, median 50% and upper 25%). For each case, three controls were selected and matched for age. Women who had been treated with diuretics (drugs known to affect the generation of new bone material) were excluded from the study.

NASA Prepares for Potentially Damaging 2011 Meteor Shower

NASA is evaluating the potential risks to spacecraft posed by the upcoming Draconid meteor shower in 2011, a seven-hour storm of tiny space rocks that could possibly damage major Earth-orbiting spacecraft like the International Space Station.

Meteor shower risk assessment is more art than science, and forecasters’ predictions for the 2011 Draconids vary. Because of this, spacecraft operators are being notified so they can weigh defensive measures.

Current forecast models predict a strong Draconid storm, possibly full-blown, on October 8, 2011, according to William Cooke of the Meteoroid Environment Office at NASA’s Marshall Space Flight Center in Huntsville, Alabama.

Cooke confirmed that the Draconids do pose some threats to spacecraft.

Cooke and Danielle Moser of Stanley, Inc., also of Huntsville, presented their Draconid data at Meteoroids 2010 — an international conference on minor bodies in the solar system held May 24-28 in Breckenridge, Colorado.

Predicted intensity rates for the 2011 storm could range from a few dozen to several hundred per hour.

A meteoroid stream model at the Marshall Space Flight Center takes past Draconid showers into account in predicting a maximum hourly rate, and suggests that the 2011 shower could reach several hundred per hour.

Cooke told Space.com that the strong Draconid storms of 1985 and 1998 caused no significant electrical problems for spacecraft, but cautioned that the upcoming 2011 event should not be taken lightly.

Due to the Draconids’ slow speed, however, the chance of electrical anomalies is relatively low, Cooke noted.

Also, some spacecraft are well protected against such issues. The International Space Station, for example, is heavily armored against orbital debris. “We don’t expect anything to go wrong there,” said Cooke.

“I have no concerns about the space station. Even if the Draconids were a full-scale meteor storm I would be confident that the space station program would take the right steps to mitigate the risk,” Cooke said.

He noted, however, that crew members aboard the ISS should avoid spacewalks during the event.

For Hubble, if the risk seems high enough, operators will most likely point the observatory away from the Draconid radiant — the point from which the shower emanates.

“Any time you take a mitigation strategy, like changing a spacecraft’s attitude or turning off high-voltage, that incurs risk as well,” Cooke said.

Each spacecraft is unique, and components have differing thresholds for damage, so programs are encouraged to conduct analytical studies to determine whether or not mitigation strategies are necessary for next year’s Draconids.

Cooke noted that past meteor shower threats to spacecraft, such as the Leonids in 1998, produced more hype than actual impacts.

“We really didn’t understand what was going on,” he added. “Now we have a much better feel. But the Leonids did sensitize spacecraft operators to worry about meteor showers. Perhaps, sometimes, they worry more than they should.”

Crocs And Fish Key To Human Evolution

Almost two million years ago, early humans began eating foods such as crocodiles, turtles and fish – a diet that could have played an important role in the evolution of human brains and our footsteps out of Africa, according to new research.

In what is the first evidence of consistent amounts of aquatic foods in the human diet, an international team of researchers has discovered early stone tools and cut marked animal remains in northern Kenya. The work has just been published in the Proceedings of the National Academy of Science (PNAS).

“This site in Africa is the first evidence that early humans were eating an extremely broad diet,” says Dr Andy Herries from the University of New South Wales (UNSW), the only researcher from Australia to have worked with the team. The project represents a collaborative effort with the National Museums of Kenya and is led by David Braun of the University of Cape Town in South Africa and Jack Harris of Rutgers University in the US.

The researchers found evidence of the early humans eating both freshwater fish and land animals at the site in the northern Rift Valley of Kenya. It is thought that small-bodied early Homo would have scavenged the remains of these creatures, rather than hunting them.

“This find is important because fish in particular has been associated with brain development and it is after this period that we see smaller-brained hominin species evolving into larger-brained Homo species – Homo erectus – the first hominin to leave Africa,” says Dr Herries, of the School of Medical Sciences.

“A broader diet as suggested by the site’s archaeology may have been the catalyst for brain development and humanity’s first footsteps out of Africa.”

Dr Herries dated the archaeological remains using palaeomagnetism, a technique that determines the ancient direction of the Earth’s magnetic field recorded in sediments.

Naturally Occurring Protein Prevents, Reverses Brain Damage Caused By Meningitis

Studies suggest role for IL-10 in prevention and treatment of potentially devastating neurological disease in newborns

The bacterium Escherichia coli K1 is the most common cause of meningitis in premature infants and the second most common cause of the disease in newborns. “The ineffectiveness of antibiotics in treating newborns with meningitis and the emergence of antibiotic-resistant strains of bacteria require new strategies,” explains Nemani V. Prasadarao, PhD, associate professor of infectious disease at Childrens Hospital Los Angeles.

Meningitis is inflammation of the membranes covering the brain and spinal cord. It can result from viral or bacterial infection. Bacterial meningitis can be very serious, possibly resulting in hearing loss, brain damage, or death, even when treated. Although the mortality rate can be decreased through the use of antibiotics, significant neurological consequences, such as mental retardation, still occur in 30 to 40 percent of survivors.

“A recent surge in antibiotic-resistant strains of E. coli K1 is likely to significantly increase the rates of illness and death,” said Prasadarao. “Also, the diagnosis of meningitis is difficult until the bacteria reach the cerebrospinal fluid. By that time, brain damage has begun. With large numbers of circulating bacteria, treatment with antibiotics can result in biochemical reactions that may cause septic shock and ultimately, organ failure. So identifying alternatives to antibiotic therapy is crucial.”

One of a class of proteins known as cytokines, IL-10 is involved in immune function. “We found that during an episode of bacteremia, when a large number of bacteria are circulating in normally sterile blood, IL-10 acts to clear antibiotic-sensitive as well as antibiotic-resistant E. coli from the circulation of infected mice,” said Rahul Mittal, Ph.D., lead author on the paper and a post-doctoral fellow in Prasadarao’s lab.

They also determined that E. coli infection produced damage to the mouse brain comparable to that seen in humans. Three-dimensional imaging studies of infected animal and human infant brains showed similar gross morphological changes. “When we gave IL-10 to mice 48 hours after infection, those changes to the brain were reversed,” said Mittal.

Tumor necrosis factor (TNF) is a cytokine active in producing inflammation. When the researchers replicated these experiments using an antibiotic or an anti-TNF agent, the brain damage resulting from E. coli infection was not prevented.

The team also discovered a mechanism of action for IL-10 protection. In culture, using mouse and human white blood cells called neutrophils, they found that exposing these cells to IL-10 produced an increase in the number of a certain type of receptor on the surface of the neutrophils. An increase in the CR 3 receptor led to enhanced killing of bacteria.

Another white blood cell, called a macrophage, works to clear bacteria from the blood by engulfing or “eating” the pathogen. Similar to what was seen in neutrophils, macrophages treated with IL-10 showed an increase in CR 3 receptors that enhanced their ability to destroy invading bacteria.

To confirm that the CR 3 receptor is critical to the protective effect of IL-10 against E. coli, CR 3 expression was suppressed in a group of mice. Before exposing the animals to bacteria, white blood cells were examined and the CR 3 receptor was determined to be absent. These animals were exposed to E. coli and then treated with IL-10. The mice were found subsequently to have bacteria in the CSF and morphological changes indicating brain damage. The protective effect of IL-10 during bacteremia was absent in animals without CR 3 receptors. The researchers further concluded that the crucial increase in CR 3 receptors was a result of IL-10 suppressing an important inflammatory agent, prostaglandin E-2.

“Since diagnosing meningitis is difficult until bacteria reach the central nervous system, finding an agent that can clear the bacteria while also preventing or restoring the damaged brain is very exciting,” said Mittal.

These studies provide a basis for exploring the use of IL-10 in newborns.

Co-authoring the paper published in the Journal of Experimental Medicine (May 24, 2010) are Rahul Mittal and Kerstin Goth, of Childrens Hospital Los Angeles, Ignacio Gonzalez-Gomez, Ashok Panigrahy, and Nemani V. Prasadarao, of Childrens Hospital Los Angeles and the Keck School of Medicine of USC, Los Angeles, and Richard Bonnet, Université d’Auvergne, France.

Leaded Gasoline A Source Of Lead Exposure In Latter 20th Century

Leaded gasoline was responsible for about two-thirds of the toxic lead that African-American children in Cleveland ingested or inhaled during the latter two-thirds of the 20th century, according to a new study in Science of the Total Environment.

Researchers from Case Western Reserve University say what they’ve shown in Cleveland probably applies to many cities across the U.S. and reinforces concerns about the health threat for children in countries still using leaded gasoline. However, they emphasize that the results do not minimize the ongoing importance of current childhood lead exposure due to the persistence and deterioration of leaded paint, which was used as late as the 1960s.

Extrapolation from lead analyses of teeth from 124 residents of urban Cleveland neighborhoods shows that “at the peak of leaded gasoline usage, in the 1960’s and early 70’s, the levels of lead in the bloodstream were likely to be toxic,” said Norman Robbins, emeritus professor of neurosciences at Case Western Reserve School of Medicine. Research by others has shown that these levels of lead are associated with significant neurological and behavioral defects lasting into adulthood, he said.

“It raises the question, has leaded gasoline had a lasting effect on many present-day Cleveland adults?” Robbins said.

Robbins, who began the study 17 years ago, put together an interdisciplinary team to determine the predominant recent historical source of lead exposure within the city. Leaded gasoline, lead paint, and lead soldering in food cans had all been implicated.

“The findings are important today,” said Jiayang Sun, professor of statistics at CWRU and a co-author of the study. “Some countries are still using leaded gasoline.”

The United Nations Environment Program says Afghanistan, Myanmar and North Korea rely on leaded gasoline while Algeria, Bosnia, Egypt, Iraq, Serbia and Yemen sell both leaded and unleaded gasoline.

The researchers here used a comprehensive analysis of data collected from multiple sources, including the Cleveland tooth enamel data from 1936 to 1993, two different Lake Erie sediment data sets (one collected by faculty from the CWRU geological sciences department), data from the Bureau of Mines and traffic data from the Ohio Bureau of Motor Vehicles.

Because blood tests to determine lead levels were unreliable prior to the mid 1970s, the team used lead levels in the enamel of teeth removed from adults at Cleveland dental clinics to determine their childhood lead exposure.

James A. Lalumandier, chair of the Department of Community Dentistry at CWRU’s School of Dental Medicine, obtained teeth that had been extracted for dental reasons. Richard A. Shulze, a former dental student now in private practice, developed the method to extract lead samples from the enamel.

They trimmed the outer layers to reveal lead trapped within the enamel of developing first and second molars. Like trees, teeth grow in layers around the center, Lalumandier said. The enamel layers in first and second molars provide a permanent record of the lead to which the tooth’s owner was exposed, with mid-points of lead incorporation at about ages 3 and 7, respectively. The researchers obtained the birthplace, age, sex and race of the owners and wound back the clock.

Chemistry Professor Michael E. Ketterer began the lead analysis at John Carroll University in Cleveland and continued after moving to Northern Arizona University.

Lead levels in the teeth were compared to reliable blood levels taken in the 1980s and 1990s, Lake Erie sediment cores that reflect atmospheric lead levels of the past, as well as leaded gasoline use by year.

Sun and former PhD student Zhong-Fa Zhang, now at the Wistar Institute in Philadelphia, who joined the study in late 2003, developed and applied modern statistical methods to mine the information and compare data curves created from the tooth, blood, sediment and usage data. The new statistical technique motivated by this study has impact beyond this lead application; it applies generally to the simultaneous comparison of curves in other biomedical data sets.

The data show that leaded gasoline was the primary source of exposure: lead levels in teeth were comparatively low in 1936 and increased dramatically thereafter, mirroring the usage of leaded gas and atmospheric lead levels, which tripled from the 1930s to the mid-1960s. The time dependency of the lead isotope ratios in tooth enamel, measured in Ketterer’s laboratory, also closely matched that of atmospheric deposition from gasoline.

If the main source of lead in teeth had been lead paint and food-can solder, which were commonly used from the turn of the last century up through the 1960s, the data would have shown consistently high lead levels in teeth already in the 1930s, followed by only a modest rise as lead was introduced into gasoline from that decade until usage peaked in the mid-1960s.

Traffic data kept by the Ohio Department of Transportation reinforced the finding. The researchers found that children in neighborhood clusters with the highest number of cars on their roads also had the highest levels of lead in their teeth.

Cleveland is hardly unique in the nation’s history of lead usage and exposure, Robbins said. “What we found here we expect to be similar to urban areas in the rest of the country.”

The study was funded by the Mary Ann Swetland Endowment, Case Western Reserve University and by grants or programs of the National Science Foundation.

Teenage Boys Really Do Eat More

Researchers conducting a lunch-buffet study involving more than 200 kids between the ages of 8 and 17 found that boys routinely eat more than girls of the same age.

The researchers also found that boys in their mid-teens were the most ravenous, downing an average of 2,000 calories during the lunch hour.

Senior researcher Dr. Jack A. Yanovski, of the U.S. National Institute of Child Health and Human Development, says the pattern makes sense, given that boys usually hit their growth spurt in late puberty, putting on height and muscle mass.

Although teenage boys are known for packing away food, there has been little solid evidence showing that this is normal. “There’s a lot of folk wisdom that says boys can eat prodigious amounts, but we haven’t had much data,” Yanovski told Reuters Health.

For the study, Yanovski and his colleagues had 204 boys and girls 8 to 17 years old come to a lunch buffet on two separate days. The first day, researchers told the kids to eat as much as they normally would during lunch. On the second day, they were instructed to eat as much as they wanted.

The researchers found that boys ate more than girls at each stage of puberty. The same was true for prepubescent kids, with boys averaging nearly 1,300 lunchtime calories, compared to 900 among girls.

The biggest increase in appetite for girls came during early- to mid-puberty, between the ages of 10 and 13. Girls consumed an average of 1,300 lunchtime calories, but that figure was only slightly higher among girls in late puberty.

Yanovski said that pattern is in line with girls’ development, as they tend to have their most significant growth spurts in early- to mid-puberty.

Boys, however, tend to develop later, and their calorie needs appear to shoot up drastically in late puberty — between the ages of 14 and 17.

Boys showed little change in calorie intake between pre- and mid-puberty. But those going through late puberty had an intake of as much as 2,000 lunchtime calories. Even for active children, those 2,000 calories would be most of their daily energy needs.

Yanovski said that parents of teenage sons should not worry about a sudden surge in eating by their kids, as long as they are healthy and at normal weight.

However, he added, boys who are overweight should have more limits on how many calories they are taking in. Studies suggest that a majority of overweight children become overweight, or obese, adults.

Nations Push To Develop New Whale-based Products

Companies in Norway, Japan and Iceland are betting heavily on the lifting of a commercial whaling moratorium, and are working to develop new whale-based products ranging from golf balls to hair dye, according to a new report released Tuesday by the Whale and Dolphin Conservation Society (WDCS).

As members of the 88-nation International Whaling Commission (IWC) prepare for a meeting next week in Agadir, Morocco, debate on the use of hunted whales has focused on meat consumption, particularly in Japan.

However, as the three nations harvesting the marine mammals despite a global moratorium, Norway, Japan and Iceland also exploit whales in other ways, and are considering future commercial applications, the report said.

Indeed, investigators have found thousands of patents approved for a variety of different products with ingredients such as whale oil, cartilage, and spermaceti — a waxy liquid found in the head cavities of sperm whales.

“It is clear that whalers are planning to use whale oil and other whale derivatives to restore their hunts to long-term profitability,” said Sue Fisher, head of the WDCS’s whale campaign.

“Iceland, Japan and Norway are betting heavily that the commercial whaling moratorium will be lifted.”

The new products, which include goods as diverse as candy, “eco-friendly” detergent, health drinks and bio-diesel, could ultimately dwarf the value of whale meat, she added.

Profit-driven whale hunting has been banned since 1986, and international trade in whales or whale parts is prohibited under the Convention on International Trade in Endangered Species (CITES).  However, Japan, Iceland and Norway have all used loopholes in the moratorium to continue killing the whales.

The IWC has been ineffective in stopping the practice due to divisions between pro-conservation and pro-whaling interests.

But a new proposal to be considered during next week’s meeting in Agadir could pave the way to a compromise deal that all parties can accept, if reluctantly.

The 10-year plan calls for each whaling state to be granted annual kill quotas through 2020, amounting to some 12,000 whales, in exchange for relinquishing its right to invoke unilateral exemptions or to hunt for “scientific” purposes.

Conservationists would finally achieve their goal of seeing what they call rogue nations brought under the IWC umbrella, along with the formation of a DNA-based monitoring system. However, it is an achievement that would come at the cost of thousands of whales.

Although the compromise deal aims to create a pressure-free zone for reaching a permanent agreement, anti-whaling groups worry it will legitimize commercial hunting and result in the ban being overturned once the plan expires.

“We anticipate they will use these new pharmaceuticals, animal feed and personal care products to soften global opposition to whaling and challenge the ban on international trade,” said WDCS trade analyst Kate O’Connell.

Japan already uses whale cartilage to make chondroitin for an arthritis treatment, collagen for anti-inflammatory treatments and beauty products, and oligosaccharides, a common food additive, the report said.

As the world’s No. 1 exporter of fishmeal and oil for livestock and aquatic farming, Norway has already conducted research on how to incorporate whale products into the manufacturing process.

Norwegian researchers have studied the use of whale oil for pharmaceutical and health supplements, with at least one clinical trial underway to test its efficacy as a rheumatoid arthritis treatment.

Meanwhile, the government of Iceland has recently called for the creation of an industrial park in Hvalfjörður, where fin whales could be converted into meat, meal and oil.

How Bacteria Boost The Immune System

Scientists have long known that certain types of bacteria boost the immune system. Now, Loyola University Health System researchers have discovered how bacteria perform this essential task.

Senior author Katherine L. Knight, PhD, and colleagues report their discovery in a featured article in the June 15, 2010, issue of the Journal of Immunology, now available online. Knight is professor and chair of the Department of Microbiology and Immunology at Loyola University Chicago Stritch School of Medicine.

The human body is teeming with bacteria. In each person, there are about 10 times as many bacterial cells as human cells. Bacteria live on skin, in the respiratory tract and throughout the digestive tract. The digestive tract alone is home to between 500 and 1,000 bacterial species.

While some bacteria cause infections, most species are harmless or perform beneficial functions, such as aiding digestion. These beneficial bugs are called commensal bacteria. One of the most important functions of commensal bacteria is boosting the immune system. Studies by other researchers have found that mice raised in sterile, germ-free environments have poorly developed immune systems. But until now, scientists have not known the mechanism by which bacteria help the immune system.

Knight’s lab studied the spores from rod-shaped bacteria called Bacillus, found in the digestive tract. (A spore consists of the DNA of a bacterium, encased in a shell. Bacteria form spores during times of stress, and re-emerge when conditions improve.) Researchers found that when they exposed immune system cells called B lymphocytes to bacterial spores, the B cells began dividing and reproducing.

Researchers further found that molecules on the surfaces of the spores bound to molecules on the surfaces of B cells. This binding is what activated the B cells to divide and multiply. B cells are one of the key components of the immune system. They produce antibodies that fight harmful viruses and bacteria.

The findings suggest the possibility that some day, bacterial spores could be used to treat people with weakened or undeveloped immune systems, such as newborns, the elderly and patients undergoing bone marrow transplants. In cancer patients, bacterial spores perhaps could boost the immune system to fight tumors. However, Knight cautioned that it would take years of research and clinical trials to prove whether such treatments were safe and effective.

Study Examines Relationship Between Type Of Rice Consumption And Diabetes Risk

Consuming more white rice appears to be associated with a higher risk for developing type 2 diabetes, whereas consuming more brown rice may be associated with a lower risk for the disease, according to a report in the June 14 issue of Archives of Internal Medicine, one of the JAMA/Archives journals.

“Rice has been a staple food in Asian countries for centuries,” the authors write as background information in the article. “By the 20th century, the advance of grain-processing technology made large-scale production of refined grains possible. Through refining processes, the outer bran and germ portions of intact rice grains (i.e., brown rice) are removed to produce white rice that primarily consists of starchy endosperm.” U.S. rice consumption is lower than that in Asian countries but is increasing rapidly, and more than 70 percent of the rice consumed is white.

Qi Sun, M.D., Sc.D., of Harvard School of Public Health, Boston, and colleagues assessed rice consumption and diabetes risk among 39,765 men and 157,463 women in three large studies: the Health Professionals Follow-Up Study and the Nurses’ Health Study I and II.

After adjusting for age and other lifestyle and dietary risk factors, those who consumed five or more servings of white rice per week had a 17 percent increased risk of diabetes compared with those who consumed less than one serving per month. In contrast, eating two or more servings of brown rice per week was associated with an 11 percent reduced risk of developing type 2 diabetes compared with eating less than one serving per month.

Based on the results, the researchers estimated that replacing 50 grams (equivalent to one-third of a serving) of white rice per day with the same amount of brown rice would be associated with a 16 percent lower risk of type 2 diabetes. Replacing white rice with whole grains as a group could be associated with a risk reduction as great as 36 percent.
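
The percentages quoted here are simple transformations of relative risks. The short sketch below shows that arithmetic; note that the study’s actual estimates come from multivariable models adjusted for age, lifestyle and diet, which this plain conversion ignores.

```python
# Converting relative risks into the "percent higher/lower risk" figures quoted
# in the article. Plain arithmetic only; the study's estimates are adjusted.

def percent_change(relative_risk: float) -> float:
    """Percent increase (positive) or decrease (negative) in risk."""
    return (relative_risk - 1.0) * 100.0

print(percent_change(1.17))  # white rice, 5+ servings/week: +17%
print(percent_change(0.89))  # brown rice, 2+ servings/week: -11%
print(percent_change(0.84))  # replacing 50 g/day of white rice with brown: -16%
print(percent_change(0.64))  # replacing white rice with whole grains: up to -36%
```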

In general, white rice has a higher glycemic index – a measure of how much a food raises blood glucose levels compared with the same amount of glucose or white bread – than brown rice, the authors note. “The high glycemic index of white rice consumption is likely the consequence of disrupting the physical and botanical structure of rice grains during the refining process, in which almost all the bran and some of the germ are removed,” they write. “The other consequence of the refining process includes loss of fiber, vitamins, magnesium and other minerals, lignans, phytoestrogens and phytic acid, many of which may be protective factors for diabetes risk.”

The current Dietary Guidelines for Americans recommend that at least half of carbohydrate intake come from whole grains. “From a public health point of view, replacing refined grains such as white rice by whole grains, including brown rice, should be recommended to facilitate the prevention of type 2 diabetes,” the authors conclude.

Longer-lasting Morning After Pill Under FDA Review

U.S. health regulatory staff said in documents released on Tuesday that a new, longer-lasting “morning-after” pill to prevent unwanted pregnancy appears to work with no unexpected side effects.

The FDA said in its documents that data shows the one-pill treatment, called “ella” and made by French drugmaker HRA Pharma, is effective when taken as many as five days after unprotected sex.

The FDA’s panel of outside experts will decide whether to recommend that the agency approve the drug for the U.S. market. Watson Pharmaceuticals would sell the drug in the U.S. if it is approved.

The HRA Pharma drug has re-ignited debate over “morning-after” pills in the U.S., where reproductive issues are a constant political issue.

Women’s health advocates have welcomed the potential for another emergency contraceptive option, but some critics are concerned the drug is more akin to the abortion pill, known as RU-486 or mifepristone.

HRA Pharma said its drug works by preventing ovulation.

FDA staff scientists said in their review that the company’s studies showed no unexpected side effects in women, although reports of nausea, headache and abdominal pain were common.  They added that it was not clear what effect the drug had, if any, when a woman still became pregnant despite taking it.

“Data on pregnancy outcomes after EC (emergency contraceptive) failure with ulipristal were too limited to draw any definitive conclusions regarding the effect of ulipristal on an established pregnancy or fetal development,” they wrote.

Study Evaluates Association Of Genetic Factors And Brain Imaging Findings In Alzheimer’s Disease

By investigating the association between genetic loci related to Alzheimer’s disease and neuroimaging measures related to disease risk, researchers may have uncovered additional evidence that several previously studied genetic variants are associated with the development and progression of Alzheimer’s disease and also may have identified new genetic risk factors for further study, according to a report in the June issue of Archives of Neurology, one of the JAMA/Archives journals.

“The mechanisms underlying Alzheimer’s disease onset and progression remain largely unexplained,” the authors write as background information in the article. Twin studies have suggested that the condition is 60 percent to 80 percent heritable. Until recently, only one genetic variant – known as APOE – was shown to influence Alzheimer’s disease risk and age at onset. However, new findings from genome-wide association studies have identified three additional loci (specific locations of genetic variants on chromosomes) that confer risk of Alzheimer’s disease.

Neuroimaging measures – including the volumes of the hippocampus, amygdala and other brain structures – also correlate with the risk and progression of Alzheimer’s disease. “The demonstration that recently discovered genetic risk factors for Alzheimer’s disease also influence these neuroimaging traits would provide important confirmation of a role for these genetic variants and suggest mechanisms through which they might be acting,” the authors write.

Alessandro Biffi, M.D., and Christopher D. Anderson, M.D., of Massachusetts General Hospital, Boston, and Broad Institute, Cambridge, Mass., and colleagues studied the associations between genes and neuroimaging results among 168 individuals with probable Alzheimer’s disease, 357 with mild cognitive impairment (a precursor to Alzheimer’s disease) and 215 who were cognitively normal.

The four loci previously associated with Alzheimer’s disease were assessed, along with six neuroimaging traits linked to Alzheimer’s disease. The APOE gene had the strongest association with clinical Alzheimer’s disease, and was associated with all the neuroimaging traits except one. The other candidate genes showed a significant cumulative effect on the neuroimaging measures analyzed.

“Our results indicate that APOE and other previously validated loci for Alzheimer’s disease affect clinical diagnosis of Alzheimer’s disease and neuroimaging measures associated with disease,” the authors write. “These findings suggest that sequence variants that modulate Alzheimer’s disease risk in recent genome-wide association studies may act through their influence on neuroimaging measures.”

In addition, the genetic analysis of neuroimaging traits identified two new target gene locations – BIN1 and CNTN5 – of heightened interest for their relationship with Alzheimer’s disease. “Although our results for these loci can only be considered preliminary, they may help prioritize targets for future genetic studies and genome-wide association studies in Alzheimer’s disease, particularly given their association with neuroimaging correlates of Alzheimer’s disease and disease status,” the authors write. They add that independent evidence for an association between the BIN1 gene location and Alzheimer’s disease emerged in a recent meta-analysis.

(Arch Neurol. 2010;67[6]:677-685.)

Editorial: Findings Herald New Era of Genetic Studies

“While we have understood the bases for mendelian, early-onset Alzheimer’s disease for nearly two decades, elucidation of the genetic risks for late-onset disease beyond the apolipoprotein E locus, discovered in 1993, had been painfully slow until last year,” write John Hardy, Ph.D., of University College London Institute of Neurology, and Julie Williams, Ph.D., of the Medical Research Council Centre for Neuropsychiatric Genetics and Genomics, Cardiff, Wales, in an accompanying editorial.

“With the benefit of hindsight, we now have some indication of why no other risk loci were found during this period; simply, there are no other loci with similar effect sizes to apolipoprotein E to be found. Now, however, with the advent of whole-genome associations, we are beginning to find the weaker risk loci for the disease.”

“These findings, and the genome-wide studies that presaged them, mark a new period of optimism for those of us who study the etiologies of complex diseases of the nervous system,” Drs. Hardy and Williams write. “While the drought of genetic findings in Alzheimer’s disease has lasted a long time, the flood of new findings have been a reward worth waiting for.”

(Arch Neurol. 2010;67[6]:663-664.)

Cancer Drug Derived From Rainforest Plant In Development

Australian firm QBiotics Ltd said Monday that it has developed a potential cancer drug from a rainforest plant that has successfully helped fight off inoperable tumors in pets.

The firm said its EBC-46 drug is derived from seeds of a tropical rainforest shrub and is ready to be tested on humans after treating solid tumors in over 100 dogs, cats and horses.

“We’ve treated over 150 animals … with a variety of tumors and we’re prepared to move into human studies,” chief executive Victoria Gordon told AFP.

Gordon said the results indicate the drug could work to counter a range of malignant growths, like skin cancers, head and neck cancer, breast cancer and prostate cancer.

She said the drug works like a detonator inside tumors, prompting inactive beneficial white cells to begin to fight and destroy the cancer.

The company has spent the past six years developing the drug, following the discovery of the previously unknown molecule in the native Australian blushwood plant. The firm hopes to raise enough funds to begin human trials in 2011.

Gordon said the compound proves the value of retaining Australia’s tropical rainforests.

“The world’s rainforests are an amazing biological resource which we need to conserve and cherish,” she said in a statement.

“Not only may they hold the secret to many new drugs, they are the home of more than half of all other species with which we share the planet.”

Cancer Council Australia urged caution over the development, saying the firm has yet to publish its research.

“We have yet to see the results of this research published in a scientific journal, where they would be subject to independent scientific scrutiny, which is useful in determining the rigor of the research,” chief executive Ian Olver said in a statement.

“While it is encouraging to see success in animals, this has not been a good predictor of success in humans,” Professor Olver said. “So, it is far too early to be able to class this as a breakthrough.”

CoRoT Unveils Rich Assortment Of New Exoplanets

By detecting the faint dimming in the light emitted by stars during a transit event, CoRoT has detected six new exoplanets – each with its own peculiar characteristics – and one brown dwarf. One of these exoplanets, designated CoRoT-11b, has twice the mass of Jupiter and orbits a rapidly rotating star; this type of star is an extremely difficult target for exoplanet searches and its detection marks a significant achievement for the CoRoT team.

In order to detect planets orbiting other suns, the CoRoT satellite, which is operated by CNES (the French space agency), observes a large number of stars over a significant period of time, trying to spot a subtle decrease in their luminosity: this ‘dimming’ could be a signature that the star hosts a planet, which is transiting in front of it and partially obscuring its light. This transit technique is one of several methods used to search for exoplanets but is the only one that allows astronomers to determine the radius of the planet – by measuring the depth of the transit.

Other geometrical configurations of a stellar system – for instance, the presence of one or more companion stars – can, however, mimic the presence of a planet. For this reason, follow-up observations are needed to confirm the planetary nature of the transiting body. Alerted by CoRoT’s detection of a candidate planet-hosting star, some of the foremost ground-based observatories collect high-resolution images and spectra, yielding a wealth of additional information.

In particular, astronomers look for a Doppler shift in the stellar spectrum, highlighting the periodic ‘wobble’ of the star in the two-body system. From the amplitude of this wobble, it is possible to estimate the mass of the transiting body and, consequently, to determine whether or not it is indeed a planet. Once the mass and the radius are known, the mean density of the planet can be derived – a key factor in distinguishing between gaseous giant planets and rocky terrestrial ones. The discovery of these six new exoplanets adds variety to the large number of exoplanets that have been detected to date.
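
The two measurements combine very simply: the transit depth gives the planet-to-star radius ratio, the Doppler wobble gives the mass, and the two together give the mean density. The Python sketch below illustrates the arithmetic with placeholder values; they are not measurements of any particular CoRoT planet.

```python
import math

# Placeholder inputs -- illustrative only, not values for any specific CoRoT planet.
R_SUN = 6.957e8    # solar radius, m
R_JUP = 7.149e7    # Jupiter radius, m
M_JUP = 1.898e27   # Jupiter mass, kg

transit_depth = 0.01           # fractional dimming during transit (1%)
stellar_radius = 1.2 * R_SUN   # host-star radius, from its spectral type
planet_mass = 1.5 * M_JUP      # from the amplitude of the radial-velocity wobble

# The transit depth is approximately (R_planet / R_star)^2, so the planet's
# radius follows directly from the measured dimming.
planet_radius = stellar_radius * math.sqrt(transit_depth)

# Mean density separates gaseous giants (~1 g/cm^3) from rocky planets (~5 g/cm^3).
volume = 4.0 / 3.0 * math.pi * planet_radius**3
density_g_cm3 = planet_mass / volume / 1000.0  # kg/m^3 -> g/cm^3

print(f"Planet radius: {planet_radius / R_JUP:.2f} Jupiter radii")
print(f"Mean density:  {density_g_cm3:.2f} g/cm^3")
# With these placeholders: about 1.17 Jupiter radii and about 1.17 g/cm^3 -- a gas giant.
```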

“With the addition of this new batch, the number of exoplanets discovered by CoRoT has risen to 15,” says Magali Deleuil from Laboratoire d’Astrophysique de Marseille, Head of the CoRoT exoplanet program. “The increasing size of the census, which includes objects with very diverse characteristics, is of vital importance for a better understanding of planetary systems other than our own,” she adds.

The new discoveries exhibit a wide variety of physical properties, spanning a broad range of sizes and masses: the smallest of the sample, CoRoT-8b, is about 70% of Saturn’s size and mass, while CoRoT-10b, CoRoT-11b, CoRoT-12b, CoRoT-13b and CoRoT-14b are larger, belonging to the class known as ‘hot Jupiters’. CoRoT-15b, being 60 times as massive as Jupiter, is a brown dwarf, an intermediate object between a planet and a star. In addition, other peculiarities are exhibited in this very heterogeneous set of exoplanets: CoRoT-10b has an extremely eccentric orbit, resulting in large variations in its surface temperature over the course of its year, and CoRoT-11b’s parent star spins around its axis at an extraordinarily fast rate.

“The rich diversity emerging from this sample is a very interesting result, showing CoRoT’s ability to detect exoplanets which are rather different from each other”, comments Malcolm Fridlund, ESA’s Project Scientist for CoRoT. “Being able to study a wide variety of planets will provide important insights into the formation and evolution of planetary systems”, he adds.

One of the planets, CoRoT-11b, stands out from the set of six because of the rotation velocity of CoRoT-11, its parent star, which spins around its axis in less than 2 days – an exceptionally fast rate compared with the Sun’s rotation period of about 26 days.

“This is the third exoplanet discovered around such a rapidly rotating star”, notes Davide Gandolfi, the ESA Research Fellow who led the study of CoRoT-11b. “Because of the fast rotation of its host star, such a planet could only have been discovered because it transits in front of it, thus only a transit-hunter, such as CoRoT, could have spotted it”, he adds.

The search for Doppler shifts in the spectra of stars, which represents another prolific method for detecting exoplanets, is in fact biased against planets orbiting fast rotators, as the high rotational velocity of the star makes it extremely hard to achieve high-precision Doppler measurements and hence to detect the tiny signature of the presence of a planet. “If it had been included as a possible exoplanet candidate during such a campaign, CoRoT-11b would have been rejected because of the intensive observational effort needed to achieve the required accuracy”, explains Gandolfi.

Instead, the object was first noticed by CoRoT, and then became the subject of extensive photometric and spectroscopic follow-up observations across the world, using the Swiss Leonhard Euler 1.2 m telescope at ESO’s La Silla Observatory and the TEST 30 cm telescope at the Thüringer Landessternwarte Tautenburg, as well as a number of world-class spectrographs (HARPS at ESO’s La Silla Observatory, SOPHIE at the Haute-Provence Observatory, UVES at ESO’s Very Large Telescope and HIRES at the Keck Observatory), and the high- and low-resolution spectrograph also at Tautenburg, in Germany. Thanks to the combination of these exceptional data, it was possible to estimate the mass of CoRoT-11b, about twice that of Jupiter, and its radius, about 1.4 times that of Jupiter, thus confirming its planetary nature.
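
As a back-of-envelope check of those figures (this arithmetic is ours, not the paper’s): a planet with twice Jupiter’s mass and 1.4 times its radius has a mean density of 2/1.4³ ≈ 0.73 times Jupiter’s, and since Jupiter’s mean density is about 1.33 g/cm³, that works out to roughly 1 g/cm³ for CoRoT-11b – comfortably in the range expected for a gas giant, consistent with its classification as a ‘hot Jupiter’.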

“This result anticipates what may be achieved by future space-based missions searching for exoplanets”, says Fridlund. CoRoT is in fact a precursor for PLATO, a Cosmic Vision candidate mission that will seek planetary transits over a much larger sample of stars – the size of the sample is an important factor determining the number of planets that may be discovered. This significant increase in the sample size is possible because of PLATO’s very wide field of view, which in turn relies on the combined use of 34 small telescopes. In addition, PLATO will study brighter stars than those that can be observed with CoRoT, making it possible to determine the age of the planet-hosting stars through asteroseismology measurements. This, combined with the tremendous improvement in the accuracy on the estimate of exoplanet masses and sizes that is expected from PLATO, will provide an important step in the quest to understand the conditions that favor the formation of Earth-like planets.

Notes:

Since 1995, astronomers have discovered over 450 exoplanets by employing a number of techniques, including astrometry, radial velocity and the transit method. Of the 82 planets that have been discovered using the transit method, 15 were first spotted by CoRoT.

The CoRoT satellite has been developed and is exploited by the French national space agency, CNES, with significant participation from Austria, Belgium, the European Space Agency (ESA), Germany, Spain, and Brazil.

ESA has joined the mission by providing the optics and baffle for the telescope and testing of the payload. Through this collaboration a number of European scientists, from Denmark, Switzerland, the United Kingdom and Portugal, have been selected as Co-Investigators in open competition. As a result of ESA’s participation in CoRoT, scientists from ESA’s Member States also have access to the satellite’s data.

ESA’s Research and Scientific Support Department (RSSD) at ESTEC is a full partner in CoRoT by providing the on-board Data Processing Units (DPUs).

The ESA PRODEX program has supported the development of the CoRoT telescope baffle, and the software development and data processing of CoRoT light curves.

The ground stations used for CoRoT are located in Kiruna (Sweden), Aussaguel (France), Hartebeesthoek (South Africa), and Kourou (French Guiana), with mission-specific ground stations in Alcantara (Brazil) and Vienna (Austria).

A number of ground-based telescopes support CoRoT observations and contribute to the characterization of planets: the Canada France Hawaii Telescope, Hawaii, USA; the IAC80 and the ESA-OGS of Teide Observatory, Spain; the 1.2 m telescope at Observatoire Haute Provence, France; the Swiss Leonhard Euler 1.2 m telescope, Chile; the 46 cm and 1 m telescopes of the Wise Observatory, Israel; the 2 m and 30 cm TEST telescopes of Tautenburg Observatory, Germany; the BEST and BEST2 telescopes of the German Aerospace Center (DLR); the HARPS spectrograph on the ESO 3.6 m telescope, Chile; the UVES spectrograph on the ESO 8.2 m Very Large Telescope at Paranal Observatory, Chile; the HIRES spectrograph on the 10 m Keck telescope in Hawaii, USA; the SOPHIE spectrograph on the 1.93 m telescope at Haute Provence Observatory in France; the Sandiford spectrograph on the 2.1 m telescope at McDonald Observatory in Texas, USA; and the AAOmega multi-object spectrograph on the 3.9 m telescope at the Anglo-Australian Observatory.

Reference: Gandolfi, D., et al., “Transiting exoplanets from the CoRoT space mission XII. CoRoT-11b: a transiting massive ‘hot-Jupiter’ in a prograde orbit around a rapidly rotating F-type star”, submitted to Astronomy and Astrophysics.

Image 1: The CoRoT spacecraft. Credit: CNES/D.Ducros

Image 2: The CoRoT family of exoplanets. Credit: CNES

Image 3: The complete lightcurve of CoRoT-11 for the period 15 April to 7 September 2008. Credit:  D. Gandolfi

Image 4: The folded lightcurve, showing the amplitude of the transit. Credit:  D. Gandolfi

Astronomers Have Doubts About The ‘Dark Side’

New research by astronomers in the Physics Department at Durham University suggests that the conventional wisdom about the content of the Universe may be wrong. Graduate student Utane Sawangwit and Professor Tom Shanks looked at observations from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite to study the remnant heat from the Big Bang. The two scientists find evidence that the errors in its data may be much larger than previously thought, which in turn makes the standard model of the Universe open to question. The team published their results in a letter to the journal Monthly Notices of the Royal Astronomical Society.

Launched in 2001, WMAP measures differences in the Cosmic Microwave Background (CMB) radiation, the residual heat of the Big Bang that fills the Universe and appears over the whole of the sky. The angular size of the ripples in the CMB is thought to be connected to the composition of the Universe. The observations of WMAP showed that the ripples were about twice the size of the full Moon, or around a degree across.

With these results, scientists concluded that the cosmos is made up of 4% ‘normal’ matter, 22% ‘dark’ or invisible matter and 74% ‘dark energy’. Debate about the exact nature of the ‘dark side’ of the Universe – the dark matter and dark energy – continues to this day.

Sawangwit and Shanks used astronomical objects that appear as unresolved points in radio telescopes to test the way the WMAP telescope smoothes out its maps. They find that the smoothing is much larger than previously believed, suggesting that its measurement of the size of the CMB ripples is not as accurate as was thought. If true, this could mean that the ripples are significantly smaller, which could imply that dark matter and dark energy are not present after all.
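
To see why the amount of smoothing matters, note that for Gaussian profiles the width of the instrument beam and the width of the sky signal add in quadrature when a map is made. The short sketch below is purely illustrative (the numbers and the simple Gaussian-beam assumption are ours, not WMAP’s actual beam model): if the effective beam is wider than assumed, the intrinsic ripple size inferred from the same observed map comes out smaller.

    # Illustrative sketch only: for Gaussian profiles, convolution adds widths in
    # quadrature, so observed_width^2 = true_width^2 + beam_width^2. The beam values
    # below are hypothetical and do not come from the WMAP analysis.
    import math

    def inferred_ripple_width(observed_width_deg, assumed_beam_deg):
        """Intrinsic feature width left after removing an assumed Gaussian beam."""
        return math.sqrt(observed_width_deg ** 2 - assumed_beam_deg ** 2)

    observed = 1.0  # apparent ripple scale in the map, in degrees
    print(inferred_ripple_width(observed, assumed_beam_deg=0.2))  # ~0.98 degrees
    print(inferred_ripple_width(observed, assumed_beam_deg=0.5))  # ~0.87 degrees: wider beam, smaller intrinsic ripple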

Prof. Shanks comments, “CMB observations are a powerful tool for cosmology and it is vital to check for systematic effects. If our results prove correct then it will become less likely that dark energy and exotic dark matter particles dominate the Universe. So the evidence that the Universe has a ‘Dark Side’ will weaken!”

In addition, Durham astronomers recently collaborated in an international team whose research suggested that the structure of the CMB may not provide the robust independent check on the presence of dark energy that it was thought to provide.

If dark energy does exist, then it ultimately causes the expansion of the Universe to accelerate. On their journey from the CMB to telescopes like WMAP, photons (the basic particles of electromagnetic radiation including light and radio waves) travel through giant superclusters of galaxies. Normally a CMB photon is first blueshifted (its peak shifts towards the blue end of the spectrum) when it enters the supercluster and then redshifted as it leaves, so that the two effects cancel. However, if the supercluster galaxies are accelerating away from each other because of dark energy, the cancellation is not exact, so photons stay slightly blueshifted after their passage. Slightly higher temperatures should appear in the CMB where the photons have passed through superclusters.
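
The cancellation argument above is the integrated Sachs–Wolfe effect. One standard way to express it, added here for clarity rather than taken from the article, is as an integral of the time derivative of the gravitational potential Φ along the photon’s path:

    \[
    \frac{\Delta T}{T}(\hat{n}) \;=\; \frac{2}{c^{2}} \int_{t_{\mathrm{CMB}}}^{t_{0}} \dot{\Phi}\bigl(\hat{n},\, t\bigr)\, dt
    \]

If a supercluster’s potential does not change while a photon crosses it, the integrand vanishes and the blueshift and redshift cancel exactly; if dark energy causes the potential to decay, the integral is nonzero and the photon emerges slightly hotter, which is the small temperature excess the survey searched for.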

However, the new results, based on the Sloan Digital Sky Survey which surveyed 1 million luminous red galaxies, suggest that no such effect is seen, again threatening the standard model of the Universe.

Utane Sawangwit says, “If our result is repeated in new surveys of galaxies in the Southern Hemisphere then this could mean real problems for the existence of dark energy.”

If the Universe really has no ‘dark side’, it will come as a relief to some theoretical physicists. Having a model dependent on as yet undetected exotic particles that make up dark matter and the completely mysterious dark energy leaves many scientists feeling uncomfortable. It also throws up problems for the birth of stars in galaxies, with as much ‘feedback’ energy needed to prevent their creation as gravity provides to help them form.

Prof. Shanks concludes, “Odds are that the standard model with its enigmatic dark energy and dark matter will survive – but more tests are needed. The European PLANCK satellite, currently out there collecting more CMB data, will provide vital new information and help us answer these fundamental questions about the nature of the Universe we live in.”

Image Caption: The unresolved radio sources used by Sawangwit & Shanks to measure the effect of telescope smoothing are marked on the WMAP CMB map (open circles). Sawangwit and Shanks found that the radio sources implied stronger telescope smoothing than previously found, suggesting that the CMB ripple size may be smaller. (Credit: NASA/WMAP plus Durham University)


More Evidence Of An Extinct Sea On Mars

According to a study released Sunday, a potentially life-giving sea covered over a third of Mars about 3.5 billion years ago.

Scientists have argued for decades over whether Mars once harbored bodies of water big enough to help nourish a hydrological cycle marked by evaporation and rainfall.

Gaetano Di Achille and Brian Hynek of the University of Colorado in Boulder went through huge stores of data gathered by NASA’s Mars Orbiter Laser Altimeter (MOLA) in the late 1990s and by more recent European and US satellite-based monitoring systems.

The researchers were the first to link up all available information on Mars’ terrain into a single computer-driven model.

The study found 52 river-delta deposits scattered across the planet.

Over half occurred at about the same elevation, which marks the boundary of the once-massive sea.

The scientists calculated that the ancient sea covered 36 percent of the planet’s surface and contained about 30 million cubic miles of water.

The scientists wrote that Mars probably had an Earth-like water cycle 3.5 billion years ago, including precipitation, runoff, cloud formation, ice formation and ground water accumulation.

Hynek and colleagues wrote in a parallel study that about 40,000 river valleys existed, four times the number previously suspected.

“The abundance of these river valleys required a significant amount of precipitation,” Hynek wrote in the Journal of Geophysical Research (Planets).

“This effectively puts the nail in the coffin regarding the presence of past rainfall on Mars.”

However, many puzzles still remain.

“One of the main questions we would like to answer is where all of the water on Mars went,” said Di Achille.

The new studies provide critical leads on where to look for signs of early Martian life.

“On Earth, deltas and lakes are excellent collectors and preservers of signs of past life,” Di Achille told AFP news.

“If life ever arose on Mars, deltas may be the key to unlocking Mars’ biological past.”

Hynek wrote that long-lived oceans may have provided an environment for microbial life to take hold on Mars.

The European Space Agency (ESA) and NASA have separately planned a manned flight to Mars in about 30 years.

Mars is between 34 million and 248 million miles from Earth.

Image Caption: This is an illustration of what Mars might have looked like some 3.5 billion years ago when an ocean likely covered one-third of the planet’s surface, according to a new University of Colorado at Boulder study. (Illustration by University of Colorado)


MIT: A New Use For Gold

Engineers turn a drawback – the stickiness of gold nanoparticles – into an advantage.

Gold nanoparticles – tiny spheres of gold just a few billionths of a meter in diameter – have become useful tools in modern medicine. They’ve been incorporated into miniature drug-delivery systems to control blood clotting, and they’re also the main components of a device, now in clinical trials, that is designed to burn away malignant tumors.

However, one property of these particles stands in the way of many nanotechnological developments: They’re sticky. Gold nanoparticles can be engineered to attract specific biomolecules, but they also stick to many other unintended particles – often making them inefficient at their designated task.

MIT researchers have found a way to turn this drawback into an advantage. In a paper recently published in American Chemical Society Nano, Associate Professor Kimberly Hamad-Schifferli of the Departments of Biological Engineering and Mechanical Engineering and postdoc Sunho Park PhD ’09 of the Department of Mechanical Engineering reported that they could exploit nanoparticles’ stickiness to double the amount of protein produced during in vitro translation – an important tool that biologists use to safely produce a large quantity of protein for study outside of a living cell.

During translation, groups of biomolecules come together to produce proteins from molecular templates called mRNA. In vitro translation harnesses these same biological components in a test tube (as opposed to in vivo translation, which occurs in live cells), and a man-made mRNA can be added to guarantee the production of a desired protein. For example, if a researcher wanted to study a protein that a cell would not naturally produce, or a mutated protein that would be harmful to the cell in vivo, he might use in vitro translation to create large quantities of that protein for observation and testing. But there’s a downside to in vitro translation: It is not as efficient as it could be. “You might get some protein one day, and none for the next two,” explains Hamad-Schifferli.

With funding from the Institute of Biomedical Imaging and Bioengineering, Hamad-Schifferli and her co-workers initially set out to design a system that would prevent translation. This process, known as translation inhibition, can stop the production of harmful proteins or help a researcher determine protein function by observing cell behavior when the protein has been removed. To accomplish this, Hamad-Schifferli attached DNA to gold nanoparticles, expecting that the large nanoparticle-DNA (NP-DNA) aggregates would block translation.

She was discouraged, however, to find that the NP-DNA did not decrease protein production as expected. In fact, she had some unsettling data suggesting that instead of inhibiting translation, the NP-DNA were boosting it. “That’s when we put on our engineering caps,” recalls Hamad-Schifferli.

It turns out that the sticky nanoparticles bring the biomolecules needed for translation into close proximity, which helps speed the translation process. Additionally, the DNA part of the NP-DNA complex is designed to bind to a specific mRNA molecule, which will be translated into a specific protein. The binding must be tight enough to hold the mRNA in place for translation, but loose enough that the mRNA can also attach to the other molecules necessary for the process. Because the designed DNA molecule has a specific mRNA partner, that mRNA in a solution of many similar molecules can be enhanced without having to be isolated.

In addition to enhancing in vitro translation, Hamad-Schifferli’s NP-DNA complexes may have other applications. According to Ming Zheng, a research chemist with the National Institute of Standards and Technology, they could be combined with carbon nanotubes – tiny, hollow cylinders that are incredibly strong for their size. They may ultimately be the cornerstone of transport systems that ferry drugs into cells or between cells. The stickiness of the NP-DNA might enhance the speed and accuracy of such a drug-delivery system.

Although Hamad-Schifferli is confident that her discovery will make in vitro translation more reliable and efficient, she is not done. She hopes to tinker with her system to further enhance protein production in vitro, and see if the system can be applied to enhance translation in live cells. To help reach these goals, she must design and conduct experiments to determine which molecules are involved in the enhancement process, and how they interact. “The upside is that we’ve been lucky,” Hamad-Schifferli says, reflecting on her discovery. “The downside is that it will be difficult to tease out exactly how the system works.” 

Diana LaScala-Gruenewald, MIT News correspondent

Image Caption: An image of gold nanoparticles. Image courtesy Kimberly Hamad-Schifferli


Songbirds Learn Songs While They Sleep

When zebra finches learn their songs from their father early in life, their brain is active during sleep. That is what biologists at Utrecht University conclude in a paper published in the Proceedings of the Royal Society B. Their findings are a further demonstration that birdsong learning is very similar to the way that children learn how to speak.

This discovery has important consequences for our understanding of the brain processes involved in learning and memory. Human infants learning to speak show increased activation in a part of the brain that is comparable to that studied in young zebra finches. Furthermore, language learning in children is improved when they are allowed to take a nap. The Utrecht discovery will increase our understanding of the role of sleep in the formation of memory.

A model for speech learning

Previously the researchers, Sharon Gobes, Thijs Zandbergen en Johan Bolhuis, had demonstrated that the way in which zebra finches learn their songs is very similar to the way in which children learn to speak. In both cases learning takes place during early youth and involves considerable practice. Also, in children and songbirds alike, different brain regions are involved in learning and in speaking or singing. The new research shows that, just as in human infants, the brain of the young zebra finch is also active during sleep. This makes songbirds a good animal model to study the role of sleep in human speech acquisition.

The brain is active during sleep

It has been known that sleep plays an important role in learning in humans and other mammals. In songbirds it had been shown previously that during sleep the brain has the same pattern of activity as during singing the day before. The present findings show that the more young songbirds have learned from their father’s song, the more active their brain is during subsequent sleep.

Image Caption: Photograph of a zebra finch. Courtesy Maurice van Bruggen – Wikipedia


Should Peanuts Be Banned From Flights?

Federal regulators are considering banning the serving of peanuts on commercial airline flights.

Advocates say the move would ease fears and potential harm to about 1.8 million Americans who suffer from peanut allergies. However, peanut farmers and food packagers see it as overreaching and unfair to their precious legume.

“The peanut is such a great snack and such an American snack,” Martin Kanan, CEO of the King Nut Companies, an Ohio company that packages the peanuts served by most U.S. airlines, told The Associated Press (AP). “What’s next? Is it banning peanuts in ballparks?”

The U.S. Transportation Department gave notice last week, twelve years after Congress ordered it to back off peanuts, that it’s gathering feedback from allergy sufferers, medical experts, the food industry and the public on whether to ban or restrict in-flight peanuts.

The proposal is an 84-page document including several other proposed consumer protections for air travelers. Three options were given: banning the serving of peanuts on all planes; prohibiting peanuts on flights where an allergic passenger requests it in advance; or requiring an undefined “peanut-free zone” when a passenger asks for one.

The document also states “we are particularly interested in hearing views on how peanuts and peanut products brought on board aircraft by passengers should be handled.”

Spokesman Bill Mosely said the department is responding to concerns from travelers who either suffer from the allergy or have allergic children.

“We’re just asking for comment on whether we should do any of these three things,” Mosely told AP. “We may not do any of them.”

Dr. Scott Sicherer, who studies food allergies at Mount Sinai Hospital in New York, told AP that a few limited studies on airline passengers with peanut allergies have found a number of people reporting symptoms, but few were severe or life-threatening.

“But there’s discomfort,” Sicherer said. “It’s sort of like if you were allergic to dogs and all of a sudden they brought 50 dogs onto the plane.”

The Transportation Department previously weighed imposing peanut-free zones on airlines in 1998.  However, the agency retreated after getting a hostile response from Congress, which threatened to cut its budget.

Huge airline companies like Continental, United, U.S. Airways and JetBlue have voluntarily stopped serving packaged peanuts. Delta and Southwest still hand out goobers as in-flight snacks. American Airlines does not serve packaged peanuts, but it does offer trail mix and other snacks that may contain peanut ingredients.

Georgia, the nation’s top peanut producing state, is not in agreement with the idea of government regulation of peanuts on planes.

“The peanut industry feels like we’re being picked on,” Armond Morris, who grows peanuts on about 270 acres in rural Irwinville and serves as chairman of the Georgia Peanut Commission, told the news agency. “If we’re going to go targeting food products, maybe we just need to ban all food” on planes.

Advocates with the Food Allergy and Anaphylaxis Network say the answer is simple: planes are confined spaces where the air and dust particles get re-circulated, and there is no way to stop and get off the plane if a severe reaction occurs in flight.

“It’s a different environment when you’re basically 30,000 feet in the air,” said Chris Weiss, the group’s vice president of advocacy and government relations. “If you’re sitting around a bunch of people and all of a sudden they’re all handed packages of peanuts, that could release enough peanut dust into the air to trigger a reaction.”


BCM Surgeon Completes One Thousandth Liver Transplant

The Liver Center at Baylor College of Medicine, already renowned for having the highest liver transplant survival rate in Texas among high-volume programs, has something new to celebrate. Dr. John Goss, professor of surgery in the Michael E. DeBakey Department of Surgery at BCM, has performed his 1,000th liver transplant since joining the BCM faculty.

This milestone was reached on June 7 at St. Luke’s Episcopal Hospital.

Goss is director of liver transplant programs at St. Luke’s, Texas Children’s Hospital and the Michael E. DeBakey VA Medical Center. He performed these life-saving operations at these institutions and at The Methodist Hospital over the past 12 years with the support of his team of surgeons, nurses, hepatologists, medical specialists, and support staff.

“Liver transplantation provides the sole chance for survival for children and adults devastated by liver diseases. We are all grateful that the organ families have graciously donated to make this operation possible. The donors and our patients are our heroes,” said Goss, who is also the chief of the division of abdominal transplantation and hepatobiliary surgery at BCM.

The survival rate is more than 95 percent for the BCM liver transplant program. It is the No. 1 program in Texas among programs performing more than 35 transplants yearly.

“The Baylor liver transplant program is an outstanding success,” said Dr. Stephen Spann, senior vice president and dean of clinical affairs at BCM. “Dr. Goss, Dr. Stribling and their team have a commitment to excellence that is evident in their success and the life-saving care they have given to hundreds of patients. Liver transplant patients at Texas Children’s Hospital, the DeBakey VA and St. Luke’s have benefited from the comprehensive team approach that they have championed.”

Goss and Dr. Risë Stribling, associate professor of surgery at BCM and medical director of the St. Luke’s Liver Transplant and VA Medical Center Programs, co-founded The Liver Center at BCM to provide comprehensive care for all patients with liver diseases.

“Our team of hepatologists, radiologists, oncologists, pathologists and medical consultants collaborate with our surgeons to provide advanced medical and surgical care based on the latest research and technologies,” said Stribling.

Goss and Stribling both credit their UCLA mentor, Dr. Ronald Busuttil, with motivating them to pursue a collaborative approach to improving the care of all patients with liver diseases and the survival of those undergoing liver transplantation.

“I congratulate Drs. Goss and Stribling on this landmark achievement of performing 1,000 liver transplants. One cannot underestimate the skill, dedication and selfless sacrifices that are required to save so many lives. They and their team should feel a great sense of pride on this momentous occasion,” said Dr. Busuttil, professor and executive chairman of the department of surgery at the David Geffen School of Medicine at UCLA in Los Angeles.


Brain Stimulation With Ultrasound May Enhance Cognitive Function

The ability to diagnose and treat brain dysfunction without surgery may rely on a new method of noninvasive brain stimulation using pulsed ultrasound, developed by a team of scientists led by William “Jamie” Tyler, a neuroscientist at Arizona State University. The approach, published in the journal Neuron on June 9, shows that pulsed ultrasound not only stimulates action potentials in the intact motor cortex of mice but also “elicits motor responses comparable to those only previously achieved with implanted electrodes and related techniques,” says Yusuf Tufail, the lead author from ASU’s School of Life Sciences.

Other techniques such as transcranial magnetic and deep brain stimulation, electroconvulsive shock therapy and transcranial direct current stimulation are used to treat a range of brain dysfunctions, including epilepsy, Parkinson’s disease, chronic pain, coma, dystonia, psychoses and depression. However, most of these approaches suffer from “critical weaknesses,” Tyler says, including requirements for surgery, low spatial resolution or genetic manipulations. Optogenetics, for example, is one state-of-the-art technology that merges genes from plants and other organisms with the intact brains of animals to offer control of neural circuitry.

“Scientists have known for more than 80 years that ultrasound can influence nerve activity,” observes Tufail. “Pioneers in this field transmitted ultrasound into neural tissues prior to stimulation with traditional electrodes that required invasive procedures. Those studies demonstrated that ultrasound pre-treatments could make nerves more or less excitable in response to electrical stimulation.

“In our study, however, we used ultrasound alone to directly stimulate action potentials and drive intact brain activity without doing any kind of surgery,” Tufail says.

“It is fascinating to witness these effects firsthand,” he adds. Tufail is one of four doctoral students in ASU’s School of Life Sciences who worked with Tyler on the project. The team also included Alexei Matyushov, a physics undergraduate in ASU’s Barrett Honors College working with Tyler, as well as Nathan Baldwin, a doctoral student in bioengineering, and Stephen Helms Tillery, an assistant professor, both with ASU’s Ira A. Fulton Schools of Engineering.

“We knew from some of our previous work that ultrasound could directly stimulate action potentials in dishes containing slices of brain tissue,” says Tyler. “Moving to transmit ultrasound through the skin and skull to stimulate the intact brain inside a living animal posed a much greater challenge.”

Despite such challenges, the study shows how ultrasound can be used to stimulate brain circuits with millimeter spatial resolution. “We’ve come a long way from the observations of Scribonius Largus, a Roman physician in the 1st century A.D. who placed electric torpedo fish on headache sufferers’ foreheads to ease their pain,” Tyler quips. “Our method paves the way for using sound waves to study and manipulate brain function, as well as to diagnose and treat its dysfunction.”

In addition to advancing hope for noninvasive treatments of brain injury and disease, the group’s experiments in deeper subcortical brain circuits also revealed that ultrasound may be useful for modifying cognitive abilities.

“We were surprised to find that ultrasound activated brain waves in the hippocampus known as sharp-wave ripples,” Tufail says. “These brain activity patterns are known to underlie certain behavioral states and the formation of memories.”

The scientists also found that ultrasound stimulated the production of brain-derived neurotrophic factor (BDNF) in the hippocampus – one of the most potent regulators of brain plasticity.

Tyler says the fact that ultrasound can be used to stimulate action potentials, meaningful brain wave activity patterns, and BDNF leads him to believe that, in the future, ultrasound will be useful for enhancing cognitive performance; perhaps even in the treatment of cognitive disabilities such as mental retardation or Alzheimer’s disease.

Tyler’s students have also collected data that suggests that repeated exposure to low intensity ultrasound does not pose a health risk to rodents. “We examined many aspects of brain health following stimulation and found that low-intensity ultrasound is safe for repeatedly stimulating the brains of mice,” noted Anna Yoshihiro, a neuroscience doctoral student in ASU’s College of Liberal Arts and Sciences and co-author of the journal article. Yoshihiro works to treat Parkinsonian monkeys and has achieved some early success in treating epileptic seizures in mice using ultrasonic neuromodulation.

Monica Li Tauchmann, Yoshihiro’s contemporary and co-author on the article, recalls the first time the method worked: “I was helping with experiments. We were trying to stimulate the brain of a living mouse with ultrasound. Not a whole lot was happening at first. Then, Dr. Tyler changed some of the ultrasound waveform parameters and the mouse started moving. We spent the rest of that day repeating the stimulation and the mouse was perfectly fine. It recovered from anesthesia as if nothing had happened. I think we were all astonished.”

Tyler believes that there are a host of potential applications for ultrasound in brain manipulation. Besides basic science and medical uses, ultrasound represents a core platform around which future brain-machine interfaces can also be designed for gaming, entertainment and communication purposes because of its noninvasive nature.

“Space travel, hand-held computers, the Internet, and global positioning – not even a lifetime ago these things were mere science fiction. Today, they are commonplace,” Tyler says. “Maybe the next generation of social entertainment networks will involve downloading customized information or experiences from personalized computer clouds while encoding them into the brain using ultrasound. I see no reason to rule out that possibility.”

“To be honest,” he adds, “we simply don’t know yet how far we can push the envelope. That is why many refer to the brain as the last frontier – we still have a lot to learn.”

Image Caption: This is an artistic rendition of an ultrasound pulse being transmitted into intact brain circuits. Credit: ASU/Jamie Tyler


Scientists ‘Pop’ Bubble Mystery

Scientists reported on Wednesday that bubbles do not just disappear when they pop, but actually deflate in a rapid cascade of bubbles.

The physics behind this bursting effect seems to hold true whether the liquid is as thin as water or as thick as heavy oil, suggesting a universal theory of how bubbles behave when they break.

According to the study published in the British journal Nature, a host of practical applications could follow in areas ranging from health care to climate to glass manufacturing.

The study may also prove valuable for controlling processes in which bubble formation can be detrimental.

According to lead researcher James Bird, a graduate student at the Harvard School of Engineering and Applied Sciences, there was an element of serendipity in the discovery.

The team was working late one night investigating ways to spread bubbles on different surfaces when they noticed the rings that form when one bursts.

“After that, any time I was just walking around during a rainy day I’d look at the bubbles popping on puddles,” Bird told AFP news.

“When I went swimming in the ocean I’d watch the bubbles on the surface… And I soon realized that it was everywhere.”

He said in order to minimize surface area, a bubble forms an almost perfect hemisphere when it is in contact with a solid or liquid surface.  When it pops, it creates a ring of smaller bubbles.

The entire process was not understood until now.

In the first of the two-step process, the forces acting on the bubble cause the film to fold in on itself as it retracts, trapping a pocket of air in the shape of a donut.

He said that during the second stage, surface tension breaks this donut of air into a ring of smaller bubbles in the same way that surface tension transforms a thin stream of water flowing from a faucet into individual droplets.
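
The faucet analogy refers to the classical Plateau–Rayleigh instability, a name the article does not use but which describes the same mechanism: a thin fluid column of radius R is unstable to undulations whose wavelength exceeds its circumference,

    \[
    \lambda \;>\; 2\pi R ,
    \]

so surface tension amplifies those long-wavelength ripples until the column pinches off into a chain of droplets. Applied to the trapped torus of air, the same mechanism would carve it into the ring of smaller bubbles the team filmed.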

The cascade effect is very short lived and could not be seen with the naked eye.

The team filmed the collapse with high-speed cameras and then used the video to construct a mathematical model to test and replicate their experimental hypothesis.

Bird said he was anxious to study similar popping effects in more exotic materials like molten glass, lava and mud.

“What I love about this study is that the overall effect can be seen by anyone in their kitchen,” he said.


Cost Of Caring For Stroke Patients Double That Of Earlier Estimates

Care in the first six months post-stroke soars to more than $2.5 billion annually

Health-care costs for patients in just the first six months after they have a stroke total more than $2.5 billion a year in Canada, according to a study presented today at the Canadian Stroke Congress.

The Canadian Stroke Network’s Burden of Ischemic Stroke (BURST) study found that the direct and indirect health-care costs for new stroke patients average $50,000 per patient in the six-month period following a new stroke. There are about 50,000 new strokes in Canada each year.

Earlier and widely quoted estimates, based on the most recent data from Health Canada’s Economic Burden of Illness (1998), indicated that the total cost of stroke in Canada was $2.4 billion a year for both new stroke patients and long-term survivors. There are 300,000 people living with stroke in Canada.

“Our old estimates of how much stroke costs the economy are way off base,” says Dr. Mike Sharma, who together with Dr. Nicole Mittmann of Sunnybrook Health Sciences Centre, led the BURST study, which is the first prospective national economic analysis on stroke costs.

“The cost of stroke is far more than we expected – at least double previous estimates.”

BURST researchers examined the health-care costs of 232 hospitalized stroke patients in 12 sites across Canada at discharge, three months, six months, and one year. The study looked at both disabling and non-disabling stroke.

Hospitalization, medication, physician services, diagnostic imaging, homecare and rehabilitation all contribute to the bill. There are also indirect costs, including disability leave, lost wages, assisted devices, caregivers, and out-of-pocket expenses for families such as personal assistance products or changes to homes to accommodate disabilities.

While costs are much higher than expected, “the idea is to make initial investments in prevention and acute treatment to prevent these costs down the road,” says Dr. Sharma.

For example, health-care costs fall sharply when people get access to the clot-busting drug tPA, which can significantly reduce post-stroke disability, as well as treatment in a specialized stroke unit.

Prevention is the biggest factor in reducing health-care spending overall, Dr. Sharma says. If people maintain a healthy blood pressure, maintain a healthy weight, reduce sodium intake, and exercise, the impact on stroke costs would be dramatic.

While at least 80 per cent of costs during the first six months post-stroke are health-system costs, families take on a greater proportion of stroke-related expenses, including those associated with caregiving, transportation, and lost income, beginning at the seventh month post-stroke and beyond.

Costs rise dramatically as levels of disability increase. People with non-disabling strokes – about 25 per cent of patients – personally expended about $2,000 in costs during the first six months. The costs for families increased from there to as much as $200,000 for the most severely affected.

“The difference between merely having symptoms and requiring even minimal at-home assistance from others can mean a significant cost difference,” says Dr. Sharma. “The need to have someone drive you around and help with shopping can double personal costs – as well as costs incurred by the person helping you.”

Dr. Sharma, who is director of the Ottawa Hospital’s regional stroke program, says that personal costs for stroke survivors continue through their lifetimes. “It’s a burden on individuals, their families, and communities.”

“A stroke doesn’t just affect one person – it has a ripple effect,” says Heart and Stroke Foundation spokesperson Dr. Michael Hill. “It can challenge families, overburden caregivers, and have a tremendous toll on our healthcare system.”

Stroke is the third leading cause of death and a leading cause of disability. The situation may get worse, with aging baby boomers entering their at-risk years.

In 2011 the baby boom generation will enter the period of increasing risk for stroke. “After age 55, the stroke risk doubles every 10 years,” says Dr. Hill. “This will increase the strain on our healthcare services.”

Over the next two decades, the number of Canadians who are age 65 and over will grow from roughly 4.3 million today to 8 million. Their share of the population will rise from about 13 per cent today to more than 20 per cent, says Dr. Hill.
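
Taken at face value, the doubling rule Dr. Hill quotes implies a simple exponential relationship between age and relative risk. Here is a minimal sketch of that rule as stated (an illustration only; the article gives no baseline risk and this is not an epidemiological model):

    # Relative stroke risk implied by "after age 55, the stroke risk doubles every 10 years".
    # Illustration of the quoted rule only; not an epidemiological model.
    def relative_stroke_risk(age, baseline_age=55):
        """Risk relative to a 55-year-old under the stated doubling rule."""
        return 2 ** ((age - baseline_age) / 10)

    for age in (55, 65, 75, 85):
        print(age, relative_stroke_risk(age))  # 1, 2, 4 and 8 times the age-55 risk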

“We have to learn – and learn fast – how to respond to this situation,” says Dr. Antoine Hakim, Canadian Stroke Network spokesperson.

Coordination is key. “Our objective for the study was to identify the cost drivers so decision makers can make informed choices,” Dr. Sharma says.


DC-8 Flying Laboratory Heading To Australia For Hayabusa Re-entry

A planeload of scientists and specialized instruments aboard NASA’s DC-8 flying laboratory is scheduled to depart NASA’s Dryden Aircraft Operations Facility at Palmdale, Calif., for Australia Tuesday evening, June 8, to catch a glimpse of the fiery return of a Japanese spacecraft to Earth on June 13.

The group of astronomers from NASA, the Japan Aerospace Exploration Agency (JAXA) and other institutions are flying to Melbourne, Australia to make final preparations for the highly anticipated return of JAXA’s Hayabusa spacecraft, which may bring back to Earth a sample of the near-Earth asteroid Itokawa. Hayabusa is expected to re-enter the Earth’s atmosphere and land in the Woomera Test Range in southern Australia late Sunday night, June 13.

The team of 27 astronomers will have their instruments focused out the DC-8’s specialized windows as it cruises at an altitude of 39,000 feet, far above light pollution and clouds. Using their suite of spectroscopic and radiometric imaging instruments, they hope to get a clear reading on what happens during the fiery re-entry process when the spacecraft descends like an artificial meteor at more than 27,000 mph.

At the same time, ground-based observation teams will attempt to reconstruct the as-flown trajectory to correlate with the airborne imaging data.

Following its launch in 2003, Hayabusa arrived at Itokawa in September 2005 and observed the asteroid’s shape, terrain, mineral composition, gravity and other aspects over the next 2½ months. Hayabusa briefly touched down on Itokawa’s surface that November to sample surface material.

NASA’s primary goal during the airborne mission is to study the re-entry of Hayabusa’s 40-pound sample return capsule to help heat shield designers and engineers gain technological insight for the development of NASA’s future exploration vehicles.

Astronomers made similar airborne studies from NASA’s DC-8 flying laboratory for the September 2008 re-entry of the European Space Agency’s Automated Transfer Vehicle “Jules Verne,” as well as the January 2006 Stardust sample return re-entry over Utah. During those missions, scientists studied the levels of radiation, light and out-gassing of the descending spacecraft, to better understand meteor and heat shield radiation mechanisms.

NASA’s Science Mission Directorate is supporting the airborne observation of the Hayabusa SRC re-entry via the In-Space Propulsion Technology Project.


UFO Sighting In Australia Was Falcon 9

According to Australian media reports, the spiral that flew over Australia early Saturday and prompted a flood of UFO reports to local news stations was likely the remnants of SpaceX’s Falcon 9 rocket.

The Australian Broadcasting Corp. (ABC) said the bright sky spiral appeared before sunrise on Saturday over New South Wales, Queensland and the Australian Capital Territory (ACT).

James Butcher of Canberra told ABC that the spiral light appeared to have a yellow hue.

According to ABC, another skywatcher said the object looked like a “huge revolving moon.”

Despite the UFO claims, however, the phenomenon was created by the new Falcon 9 rocket launched by SpaceX. Space Exploration Technologies (SpaceX) is a California-based spaceflight company led by millionaire PayPal co-founder Elon Musk.

“I heard people in Australia thought UFOs were visiting :),” SpaceX’s millionaire founder Elon Musk told SPACE.com in an e-mail. “The venting of propellants, which is done to ensure that an overpressure event doesn’t produce orbital debris, created a temporary halo that caught the sun at just the right angle for a great view from Australia. I thought the pictures looked really cool.”

ABC reported that professional skywatchers quickly suggested that SpaceX’s Falcon 9 rocket might be the source of the sky spiral.

“The fact that you’ve got the rotation, the spiral effect, is very reminiscent of the much widely reported sightings from Norway and Russia last year, which both turned out to be a Bulava missile which was being adjusted in its orbit,” Geoffrey Whyatt of the Sydney Observatory told ABC. “So possibly a rocket, I would say, having some sort of gyroscopic stability rocket fired on its side.”

The Bulava missile spiral occurred in December 2009 and set off a flurry of UFO sighting reports.

The new Falcon 9 took off from Cape Canaveral Air Force Station on a successful test flight Friday afternoon.  SpaceX plans to use the rocket to bring unmanned cargo flights to the International Space Station for NASA.


ASCO Data Show Serum DNA Blood Tests Detect Cancers With High Sensitivity And Specificity

Growing body of data suggests Chronix’s serum DNA assays may represent a new approach to diagnostics and prognostics in cancer and other diseases

Chronix Biomedical today reported new data further demonstrating that its serum DNA blood tests have the potential to accurately detect early stage breast cancer and prostate cancer. Chronix’s proprietary technology identifies disease-specific genetic fingerprints based on circulating DNA fragments that are released into the bloodstream by damaged and dying (apoptotic) cells. In this new study of 575 individuals, Chronix’s proprietary assays detected and identified DNA fingerprints in the blood that indicated the presence of prostate cancer and breast cancer with 92% sensitivity and 100% specificity, significantly outperforming the published accuracy data for current diagnostic methods. The new study results were presented in an oral session today at the 2010 ASCO Annual Meeting in Chicago.

Breast cancer expert Steven Narod, M.D., F.R.C.P.C., noted, “These new data, although early, provide further evidence that Chronix’s proprietary serum DNA assays may represent a new diagnostic and prognostic platform that can identify cancer earlier and more accurately than is currently possible. I am pleased to be working with Chronix to further validate these promising findings.” Dr. Narod is Director of the Familial Breast Cancer Research Unit at Women’s College Research Institute, an affiliate of the University of Toronto.

The tests use proprietary algorithms developed by Chronix researchers to detect, analyze and identify cancer-related fragments of circulating DNA that are released into the bloodstream by apoptotic cells. Chronix researchers consistently find that this apoptotic DNA in the serum is coming from a limited number of regions or “hotspots” on the genome specific to each cancer. According to the data presented today, DNA fragments from any one of the 29 breast cancer “hotspots” and 32 prostate cancer “hotspots” identified by Chronix researchers indicate that cancer is present.

“By focusing on these blood-borne genomic ‘hotspots,’ we can reliably detect the presence of cancer without having to first isolate or analyze the tumor cells,” said Howard Urnovitz, Ph.D., Chief Executive Officer of Chronix and a co-author of the study. “If verified by further studies, our Chronix blood-based assays would make it possible to diagnose cancer at its earliest stages, track progress as patients undergo treatment and possibly even optimize treatment using patients’ disease-specific genomic fingerprints.”

The testing involved 575 individuals: 178 with early stage breast cancer, 197 with invasive prostate cancer and 200 healthy controls. The Chronix assays detected breast cancer with a 92% sensitivity and 100% specificity. Although not directly comparable, for reference it is noteworthy that data from a large study of U.S. mammography screening programs reported an overall sensitivity of just 75%, and specificity of 92.3%, with lower figures for some populations such as younger women. The Chronix assays also detected invasive prostate cancer with a 92% sensitivity and 100% specificity. In contrast, the widely used PSA (prostate specific antigen) test has previously demonstrated 85% sensitivity and a specificity of just 25% to 35%. If the Chronix data are confirmed in larger studies, they have the potential to reduce the current rate of false positive and false negative results that contribute to poorer patient outcomes and higher health care costs.
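
For readers unfamiliar with the two metrics, sensitivity is the fraction of true cancers a test flags and specificity is the fraction of healthy controls it correctly clears. A minimal sketch of the arithmetic follows, with counts back-calculated from the reported percentages and cohort sizes (illustrative only; these are not the study’s actual tabulated results):

    # Sensitivity and specificity from test counts. The counts are reconstructed from
    # the reported 92% sensitivity and 100% specificity and the cohort sizes; they are
    # not taken directly from the Chronix paper.
    def sensitivity(true_pos, false_neg):
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg, false_pos):
        return true_neg / (true_neg + false_pos)

    cases, controls = 375, 200          # 178 breast + 197 prostate cancers; 200 healthy
    tp = round(0.92 * cases)            # about 345 cancers flagged by the assay
    fn = cases - tp                     # about 30 cancers missed
    tn, fp = controls, 0                # 100% specificity means no false positives

    print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.92
    print(f"specificity = {specificity(tn, fp):.2f}")  # 1.00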

Previous published studies have demonstrated that the Chronix approach can identify the presence or absence of active disease in multiple sclerosis patients and that it can accurately detect early stage breast cancer with high diagnostic sensitivity and specificity. Commercial applications for veterinary use are already in development in conjunction with the University of Calgary, including tests for the early detection of BSE, or mad cow disease.

Dr. Urnovitz added, “With these encouraging findings, we are launching a ‘For Investigative Use Only’ testing service that for the first time will enable cancer researchers to monitor the status of patients in their clinical trials with a high level of sensitivity and specificity, potentially accelerating clinical trials and increasing their chances for success.”

Patient data collected from this new service for clinical researchers along with additional planned clinical studies are expected to expand the database needed to obtain regulatory approval for the use of Chronix assays in ongoing cancer patient care.


Daughters Caring For A Parent Recovering From Stroke More Prone To Depression Than Sons

Adult daughters caring for a parent recovering from stroke are more prone to depression than sons, Marina Bastawrous today told the Canadian Stroke Congress, co-hosted by the Canadian Stroke Network, the Heart and Stroke Foundation, and the Canadian Stroke Consortium.

Caring for a parent who has experienced a stroke results in a dramatic shift from the usual parent-child relationship. “Stroke can be particularly challenging for families,” says Bastawrous, a masters candidate at the University of Toronto. “Taking care of elderly parents can bring out family strengths and family weaknesses.”

The adult child-to-parent bond can result in excellent care when a senior has a stroke. But not always, she says.

The study found that close and secure relationships with parents predicted better mental health and greater satisfaction in adult child caregivers.

“But strained relationships before or following the stroke increase depression in daughters,” she says. “If the relationship between a parent and adult daughter is already strained, a stroke can make things even worse.”

The quality of relationships both before and after the stroke had an equally important influence on wellbeing.

The study found that adult daughters placed greater importance on family relationships than sons and, in turn, were more negatively impacted by poor relationships with their parent.

“When a parent has a stroke, adult children often become their primary caregivers,” says Heart and Stroke Foundation spokesperson Dr. Michael Hill. “It’s important that as part of the recovery process we examine their experiences because they are obviously vital to the ongoing care of the stroke patient.”

Sandwich generation spread too thin

Study co-author Dr. Jill Cameron says adult children providing stroke care for their parents need help and they need it now.

“Adult children are stroke care’s forgotten generation,” she says. “We can’t afford to leave them behind.”

Sixty-two percent of stroke caregivers are adult children. Yet stroke care interventions are overwhelmingly designed for spouses.

This imbalance must be addressed, says Dr. Cameron. “We need to make better use of financial resources to enhance the support provided to this growing population of caregivers.”

She notes that adult children caregivers need to balance the challenges of professional life, family life, and the added responsibility of taking on the care of somebody post-stroke. “Caregivers need more support,” she says. “They aren’t trained but their role is essential.”

To remove some of the strain – financial and emotional – innovative thinking is required.

“Our healthcare system is not sustainable in the face of rising costs,” says Dr. Cameron. “We need to plan.”

Here’s what Dr. Cameron envisions as part of this plan:

    * Create work environments that support family members caring for stroke survivors (e.g., caregiving leave).
    * Train family members for their caregiving role: they perform many caregiving duties after the stroke survivor returns home, yet currently receive little if any training from hospitals.
    * Recognize caregivers as members of the care team, so that post-hospital care plans incorporate the unique circumstances of the family.

“Family caregivers are critical to stroke recovery and typically assume major care roles that are frequently costly to their financial, social, and emotional well-being,” says Dr. Antoine Hakim, spokesperson for the Canadian Stroke Network. “Innovative new ideas to support their balance and quality of life is essential.”


Blood Cholesterol Regulated By The Brain

A US study conducted in mice suggests the amount of cholesterol that circulates through the bloodstream is partially regulated by the brain, which counters previous beliefs that it is solely controlled by what we eat and by cholesterol production in the liver.

The study, carried out by a research team at the University of Cincinnati, found that a hunger hormone called ghrelin in the brain of mice acts as the “remote control” for cholesterol traveling around the body.

The researchers said levels in the blood rise because signals from the brain prompt the liver to store less cholesterol. They found that ghrelin inhibits a receptor in the brain that plays a role in regulating food intake and energy use.

In a separate experiment, they found that blocking this receptor in mice also increased the levels of cholesterol circulating in the blood.

The findings need to be replicated in humans, the researchers said, in order to open up new ways of potentially treating high cholesterol.

“We have long thought that cholesterol is exclusively regulated through dietary absorption or synthesis and secretion by the liver,” study leader Professor Matthias Tschoep told BBC News.

“Our study shows for the first time that cholesterol is also under direct ‘remote control’ by specific neurocircuitry in the central nervous system,” he said.

“This interesting study on mice shows for the first time that blood cholesterol levels can be directly controlled by signals transmitted from the brain to the liver where cholesterol is formed,” Fotini Rozakeas, a cardiac nurse at the British Heart Foundation, told BBC News in an interview.

Rozakeas said the findings could potentially open up new ways of treating and controlling cholesterol levels in humans, which would be great news for people with heart and circulation problems.

Much more research is needed before the mechanisms at play are fully understood, she said.

“In the meantime, people should reduce the amount of saturated fat in their diet, take part in regular physical activity and, in some cases, take prescribed medicines such as statins, to keep their cholesterol levels under control,” said Rozakeas.

The study is published in Nature Neuroscience.


Second-line CML Drug Evokes Faster Response, Fewer Side Effects, Pivotal Study Finds

MD Anderson-led Phase III clinical study determines Sprycel superior to Gleevec as front-line therapy

Dasatinib, a medication currently approved as treatment for drug-resistant chronic myeloid leukemia (CML), provided patients with quicker, better responses as a first therapy than the existing front-line drug, according to researchers at The University of Texas MD Anderson Cancer Center.

The findings were presented at the 46th Annual Meeting of the American Society of Clinical Oncology June 5, and published in the New England Journal of Medicine. Hagop Kantarjian, M.D., professor and chair of MD Anderson’s Department of Leukemia, presented the findings and is the corresponding author on the published study.

Currently, imatinib, or Gleevec ®, is the approved initial therapy for CML and has increased the five-year survival rate for the disease from 50 percent to 90 percent, said Kantarjian. However, 30-40 percent of imatinib patients do not achieve confirmed cytogenetic complete response (CCyR), or the absence of the defective chromosome that causes the disease, within a year. This benchmark is clearly associated with improvements in long-term outcome, said Kantarjian.

“Previous research conducted at MD Anderson found that more patients taking dasatinib were achieving complete responses more quickly than they do on the current standard of care,” said Kantarjian. “In this pivotal Phase III study, we confirmed that dasatinib gets more patients to high-quality remission faster than imatinib, making it a superior front-line therapy. Dasatinib, on average, also has a more favorable side-effect profile.”

For the multinational Phase III study, known as DASISION (Dasatinib versus Imatinib Study In treatment-naïve CML patients), 519 newly diagnosed CML patients who had received no prior treatment were randomized to receive either dasatinib, also known as Sprycel ®, 100 milligrams once daily (259 patients), or imatinib, 400 milligrams once daily (260 patients). CCyR, confirmed on two assessments, was the study’s primary endpoint. Secondary endpoints included rate of and times to CCyR and major molecular response (MMR), defined as a level of 0.1 percent or lower of the defective chromosome, as well as safety.

After a minimum follow-up of 12 months, the researchers found that the rates of confirmed CCyR and MMR in those taking dasatinib were 77 percent and 46 percent, respectively, compared to 66 percent and 28 percent, on the imatinib arm.

Therapy failed in nine patients (3.5 percent) taking imatinib, compared to five patients (1.9 percent) taking dasatinib. The dasatinib arm reported fewer side effects – nausea, vomiting, muscle inflammation, rash, fluid retention – with most other toxicities being similar in both arms.

“We’ve learned that in cancer therapy, it’s important to use your big guns up front. We know that achieving complete cytogenetic response within a year of starting treatment is associated with more favorable long-term survival; therefore, using this second-generation drug first will likely improve outcomes for patients with chronic myeloid leukemia,” said Kantarjian.

CML is caused by an abnormality known as the Philadelphia chromosome that produces an aberrant protein, Bcr-Abl, which causes the overproduction of one type of white blood cell that drives the disease. Dasatinib, a tyrosine kinase inhibitor, blocks the action of Bcr-Abl; the drug is currently approved by the U.S. Food and Drug Administration for patients who can’t tolerate imatinib or whose CML resists the drug.

The study was supported by Bristol-Myers Squibb, makers of dasatinib. Kantarjian receives research funding from the company.

In addition to Kantarjian, other authors on the ASCO study include: Jorge Cortes, M.D., Department of Leukemia, MD Anderson; Neil Shah, M.D., Ph.D., San Francisco School of Medicine; Andreas Hochhaus, M.D., Universitaetsklinikum Jena; Sandip Shah, M.D., Vedanta Institute of Medical Sciences; Manuel Ayala, M.D., Centro Medico Nacional La Raza; Beatriz Moiraghi, M.D., Hospital General De Agudos J.M. Ramos Mejia; M. Brigid Bradley-Garelik, M.D., Bristol-Myers Squibb; Chao Zhu, Ph.D., Bristol-Myers Squibb; and Michele Baccarani, M.D., University of Bologna.


Mountain Bikers Risk Spinal Injuries: Study

A new study of spinal fractures and spinal cord injuries associated with mountain biking suggests the sport may be just as risky as diving, football and cheerleading.

“The medical, personal, and societal costs of these injuries are high,” wrote the authors of the study in a report published in the American Journal of Sports Medicine. 

Mountain biking, which involves high speeds and long vertical drops over extreme terrain, is growing in popularity.  But researchers warn the sport invites a risk of serious spinal injuries, with one of every six cases studied resulting in total paralysis.

Such accidents typically affect young, male, recreational riders, they said.

“People need to know that the activities they choose to engage in may carry with them unique and specific risks,” said Dr. Marcel Dvorak of the University of British Columbia in Canada during an interview with Reuters.

“Helmets will not protect you from these injuries, nor will wearing Ninja Turtle-like body armor.”

While prior studies have looked at the range of injuries sustained by mountain bikers, and spinal injuries in general across a broad variety of sports, none had yet examined the specific risks of spinal cord injury among mountain bikers.

Dvorak and his team identified 102 men and 5 women who were treated at British Columbia’s primary spine center between 1995 and 2007 after suffering a mountain biking accident.

On average, the patients were 33 years old, and all but two were recreational riders.

The researchers determined that over the 13-year study period, the annual rate of spinal injury among those that mountain biked was one in 500,000 British Columbia residents.  Furthermore, mountain bikers accounted for 4 percent of all spinal trauma admissions to the center.

Surgery was required for roughly two-thirds of the mountain bikers, but the most serious injuries were the 40 percent involving the spinal cord.  Of those, more than four in ten led to complete paralysis, the researchers found.

“Wrist fractures and facial fractures are common… but spine injuries are the most severe with the most profound long-term consequences,” Dvorak said.

The majority of mountain bikers were injured as a result of either being thrust over the handlebars (going “endo”) or falling from significant heights (“hucking”), he said. 

In both cases, the result was often a severe impact to the head that generated trauma down the neck and spine.

“The higher the jump or fall, the higher the risk,” Dvorak said.

In a counterintuitive finding, the researchers discovered no relationship between whether or not a rider wore a helmet and the severity of his or her injuries.

“Helmets are good in preventing head injuries, but they do not in any way protect your neck,” Dvorak explained.

Another unique aspect of mountain biking is its environment, which typically includes remote, mountainous and forested areas.

Some of Dvorak’s patients had fallen while riding solo, or at the back of a group, and were not discovered for an hour or longer after they were injured.    And even then, the remote environments often made it difficult for ambulances and rescue helicopters to reach an injured rider.

Researchers say injury prevention should be the primary goal.  Dvorak advises mountain bikers to be cautious about tricks or jumps, to know their terrain well, to ride in groups and stay together.

The study was published online in the May 20, 2010 American Journal of Sports Medicine.

On the Net:

Moving Repeatedly In Childhood Linked With Poorer Quality-Of-Life Years Later

Lack of quality long-term relationships related to poorer well-being

Moving to a new town or even a new neighborhood is stressful at any age, but a new study shows that frequent relocations in childhood are related to poorer well-being in adulthood, especially among people who are more introverted or neurotic.

The researchers tested the relation between the number of childhood moves and well-being in a sample of 7,108 American adults who were followed for 10 years. The findings are reported in the June issue of the Journal of Personality and Social Psychology, published by the American Psychological Association.

“We know that children who move frequently are more likely to perform poorly in school and have more behavioral problems,” said the study’s lead author, Shigehiro Oishi, PhD, of the University of Virginia. “However, the long-term effects of moving on well-being in adulthood have been overlooked by researchers.”

The study’s participants, who were between the ages of 20 and 75, were contacted as part of a nationally representative random sample survey in 1994 and 1995 and were surveyed again 10 years later. They were asked how many times they had moved as children, as well as about their psychological well-being, personality type and social relationships.

The researchers found that the more times people moved as children, the more likely they were to report lower life satisfaction and psychological well-being at the time they were surveyed, even when controlling for age, gender and education level. The research also showed that those who moved frequently as children had fewer quality social relationships as adults.

The researchers also looked to see if different personality types – extraversion, openness to experience, agreeableness, conscientiousness and neuroticism – affected frequent movers’ well-being. Among introverts, the more moves participants reported as children, the worse off they were as adults. This was in direct contrast to the findings among extraverts. “Moving a lot makes it difficult for people to maintain long-term close relationships,” said Oishi. “This might not be a serious problem for outgoing people who can make friends quickly and easily. Less outgoing people have a harder time making new friends.”

The findings showed neurotic people who moved frequently reported less life satisfaction and poorer psychological well-being than people who did not move as much and people who were not neurotic. Neuroticism was defined for this study as being moody, nervous and high strung. However, the number and quality of neurotic people’s relationships had no effect on their well-being, no matter how often they had moved as children. In the article, Oishi speculates this may be because neurotic people have more negative reactions to stressful life events in general.

The researchers also looked at mortality rates among the participants and found that people who moved often as children were more likely to die before the second wave of the study. They controlled for age, gender and race. “We can speculate that moving often creates more stress and stress has been shown to have an ill effect on people’s health,” Oishi said. “But we need more research on this link before we can conclude that moving often in childhood can, in fact, be dangerous to your health in the long-term.”

On the Net:

Laugh Your Way To Retirement

A sense of humor helps to keep people healthy and increases their chances of reaching retirement age. But after the age of 70, the health benefits of humor decrease, researchers at the Norwegian University of Science and Technology (NTNU) have found.

The study has just been published in the International Journal of Psychiatry in Medicine and examined records from 53,500 individuals who were followed up after seven years. It is based on a comprehensive database from the second Nord-Trøndelag Health Study, called HUNT 2, which comprises health histories and blood samples collected in 1995-1997 from more than 70,000 residents of a county in mid-Norway.

A positive effect

“There is reason to believe that sense of humor continues to have a positive effect on mental health and social life, even after people have become retirees, although the positive effect on life expectancy could not be shown after the age of 75. At that point, genetics and biological aging are of greater importance,” says project leader Professor Sven Svebak at NTNU’s Department of Neuroscience.

Svebak and his colleagues evaluated people’s sense of humor with three questions from a test designed to measure only friendly humor. The test is not sensitive to humor that creates conflicts, is insulting or that is a variation of bullying, explains Svebak.

The questions revealed a person’s ability to understand humor and to think in a humorous way, Svebak says. He believes there are many myths and misunderstandings about humor. For example, one myth is that happy people have a better sense of humor than people who are more serious.

“But it is not enough to be full of laughter, as we say in Trøndelag. Humor is all about ways of thinking and often occurs in a process or in dialogue with others. It does not need to be externalized,” he says. “What people think is fun is a different matter. Commonly, people with the same sense of humor tend to enjoy themselves together and can communicate humor without huge gestures. A twinkle in the eye can be more than enough.” He adds that a sense of humor can be learned and improved through practice.

Health and mood

One possible objection to the research findings is that people who have the best sense of humor may believe that they are in good health and are therefore always in the best mood. This would mean that a good sense of humor only reflected a subjective sense of health and well being.

To ensure that their findings were genuine, the researchers studied the effect of sense of humor in two separate groups. One group was composed of people who believed they were healthy, while the other was composed of people who felt they were in poor health. But researchers found the effect of a good sense of humor was the same in the two groups.

“This gives us reason to maintain that sense of humor has a real effect on health until people reach about 70 years old,” says Svebak.

Two groups

The report shows that the size of health effect was dependent on how researchers grouped people with different scores. One approach was to divide the participants into two groups, one group that scored highest in terms of a good sense of humor and one with a low score. In this comparison, mortality was reduced by about 20 percent in people with high scores compared to people with low scores.

Another approach compared individuals with the highest and lowest scores on a nine-level scale. In this comparison, people with the highest scores were twice as likely to survive the seven-year follow-up period as those with the lowest scores.
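
As a rough illustration of why the choice of grouping changes the size of the reported effect, the Python sketch below compares survival between the two halves of a sample and between the extreme ends of a nine-level scale. The scores, survival probabilities and sample size are entirely invented for illustration and are not HUNT 2 data.

import random

random.seed(0)
people = []
for _ in range(10_000):
    score = random.randint(1, 9)                       # sense-of-humor score, 1 (low) to 9 (high)
    survived = random.random() < 0.70 + 0.02 * score   # toy model: survival improves with score
    people.append((score, survived))

def survival_rate(group):
    return sum(alive for _, alive in group) / len(group)

low_half = [p for p in people if p[0] <= 5]            # approach 1: split the sample in two
high_half = [p for p in people if p[0] > 5]

lowest = [p for p in people if p[0] == 1]              # approach 2: compare only the extremes
highest = [p for p in people if p[0] == 9]

print("halves:  ", round(survival_rate(high_half), 3), "vs", round(survival_rate(low_half), 3))
print("extremes:", round(survival_rate(highest), 3), "vs", round(survival_rate(lowest), 3))

Comparing only the two extremes of the scale isolates the largest contrast, which is why that approach yields a larger apparent survival difference than splitting the sample in half.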

Confirms previous findings

The results from the Nord-Trondelag County population confirm findings from a patient group that was studied in Sør-Trøndelag County. These results were published in the International Journal of Psychiatry in Medicine four years ago.

That study, which was based on patients with chronic renal failure who were followed over two years, showed that survival was greatest among those with the best sense of humor. One objection to that research was that the findings could not be generalized to the population at large. The current study confirms those results for the first time in a large population.

Intelligence Test

Svebak said that it has been fifteen years since researchers first attempted to evaluate the effect of sense of humor on life expectancy. At that time, a group of American scientists published data on life expectancy in the journal Psychosomatic Medicine.

Their results were based on a personality survey of 10-year-olds conducted around 1920. The project was initiated by a pioneer in intelligence research, Lewis Terman, in California.

The children had to score above 135 on an intelligence test to participate. Over 1,200 children were involved. The results were surprising: Children with the least sense of humor were most likely to be alive 80 years later.

“But in this case, the children’s sense of humor had been rated by the children’s teachers and parents. They measured the social image of a sense of humor, while we measured self-image, and people’s perception of their own sense of humor. There are also several other differences between the two studies that may affect the results,” said Svebak.

The world’s first

“Nevertheless, the results from the HUNT 2 are the first in history that say something about a sense of humor and health in a large population,” Svebak notes.

The Humor Project has been conducted in collaboration with Solfrid Romundstad, PhD, now employed at Levanger Hospital, and Professor Jostein Holmen at the HUNT Research Centre.

On the Net:

How A Single Bacterium Gets The Message To Split

Regulator is distributed unevenly during cell division to make two functionally and structurally different cells

Some species of bacteria perform an amazing reproductive feat. When the single-celled organism splits in two, the daughter cell – the swarmer – inherits a propeller to swim freely. The mother cell builds a stalk to cling to surfaces.

University of Washington (UW) researchers and their colleague at Stanford University designed biosensors to observe how a bacterium gets the message to divide into these two functionally and structurally different cells. The biosensors can measure biochemical fluctuations inside a single bacteria cell, which is smaller than an animal or plant cell.

During cell division, a signaling chemical, found only in bacteria, helps determine the fate of the resulting two cells. The signal is a tiny circular molecule called cyclic diguanosine monophosphate or c-di-GMP.

By acting as an inside messenger responding to information about the environment outside the bacteria cell, c-di-GMP is implicated in several bacterial survival strategies. In harmless bacteria, some of these tactics keep them alive through harsh conditions. In disease-causing bacteria, c-di-GMP is thought to regulate antibiotic resistance, adhesiveness, biofilm formation, and cell motility.

In their study, the UW-led team of scientists looked at cell division in Pseudomonas aeruginosa, a disease-causing species that uses these defenses to fend off treatment and establish a stronghold. This is the rod-shaped pathogen that causes life-shortening, chronic lung infections in people with cystic fibrosis, burns, and suppressed immune systems associated with cancer. The researchers also examined cell division in a harmless lake- and stream-dwelling bacterium, Caulobacter crescentus.

The researchers’ findings will be published in the June 4 Science. The senior author is Dr. Samuel Miller, UW professor of medicine, microbiology, immunology, and genome science. Miller directs the Northwest Regional Center of Excellence for Biodefense and Emerging Infectious Diseases Research. The lead author is Dr. Matthias Christen, a UW postdoctoral fellow in immunology who has moved on to become a faculty member in the Biozentrum at the University of Basel, Switzerland.

To monitor the concentration of c-di-GMP within single living bacteria cells, the scientists developed a biosensor based on genetically encoded fluorescence resonance energy transfer.

C-di-GMP exerts control over several biological functions inside the cell by linking up with a diverse array of receptors. These include proteins required to build and drive the waving, hair-like structures that move the cell. They also include riboswitches (RNA molecules), as well as transcription factors and proteins that can alter gene activity.

Because C-di-GMP controls many different cell functions, the researchers believed it was highly likely that it manages its regulatory workload by appearing in the right amount, in the right place, at the right time in the cell cycle.

The researchers observed the living bacteria under a microscope that measures changes in fluorescent emissions from the biosensor. Emissions drop when the biosensor binds to c-di-GMP. Lower emissions reflected higher levels of c-di-GMP in the cell, and vice versa. In this way the researchers could record fluctuations in c-di-GMP levels during cell division.
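
That readout logic can be illustrated with a short sketch. The Python snippet below inverts a simple one-site binding curve to turn an observed emission ratio into a relative c-di-GMP level, with lower emission corresponding to higher messenger concentration. The calibration constants and example readings are invented for illustration; the study’s actual sensor calibration is not described here.

R_MAX = 1.0   # assumed emission ratio with no c-di-GMP bound (sensor empty)
R_MIN = 0.4   # assumed emission ratio at saturating c-di-GMP
KD_UM = 0.2   # assumed apparent dissociation constant of the sensor, in micromolar

def cdigmp_from_ratio(ratio_observed):
    """Estimate c-di-GMP (micromolar) from an emission ratio via one-site binding."""
    bound = (R_MAX - ratio_observed) / (R_MAX - R_MIN)   # fraction of sensor bound
    bound = min(max(bound, 0.0), 0.999)                   # clip to guard against noise
    # one-site binding: bound = c / (c + Kd)  =>  c = Kd * bound / (1 - bound)
    return KD_UM * bound / (1.0 - bound)

# Example readings: a flagellated swarmer cell (high emission, little c-di-GMP)
# versus a stalked cell (low emission, more c-di-GMP)
for label, ratio in [("swarmer cell", 0.85), ("stalked cell", 0.65)]:
    print(label, "-> roughly", round(cdigmp_from_ratio(ratio), 2), "micromolar c-di-GMP")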

The researchers found that, immediately after a thin partition formed creating two distinct cells, the levels of c-di-GMP were low in the cell propelled by the whipping flagella and five times higher in the non-motile stalk cell. This asymmetrical distribution of the regulatory messenger occurred in both species of bacteria and was not an isolated event.

“In both organisms,” the researchers noted, “c-di-GMP levels were always significantly lower in the flagellated cell than in the non-flagellated cell.”

Some of the enzymes that sense the c-di-GMP messages are place-bound in distinct locations of the cell. The researchers reasoned that the unequal distribution of the messenger c-di-GMP might be caused by the spatially restricted production or activation (or inactivation) of these enzymes. The researchers found that strains of bacteria that produce more of these enzymes in the swarmer cell also had higher concentrations of c-di-GMP in the swarmer cell, suggesting that a localized drop in the enzyme activity would likely result in a localized drop in c-di-GMP.

Impairing the cellular distribution of c-di-GMP, the researchers noted, has major consequences for the development and function of Caulobacter cells. Altering the balance of the sensing enzymes would lead to a swarmer cell that couldn’t swim or to a hypermotile swarmer cell, depending on how the balance of enzymes is tipped. The normal drop of c-di-GMP might also spur the rapid take-off of the swarmer as it swims away from its mother cell. Less than an hour later, the swarmer can no longer swim, and reverts to a stalk cell.

The researchers have also used the biosensor they developed to study the multi-flagellated Salmonella enterica, which causes food poisoning, as well as the non-flagellated Klebsiella pneumoniae, an air-borne lung pathogen. Both of these bacteria also have uneven distribution of a key internal messenger during cell division.

“This suggests that this phenomenon is not unique to Pseudomonas and Caulobacter,” the researchers surmised, “and that cell properties other than motility are likely to be regulated by asymmetrical second-messenger distribution during cell division.”

The asymmetrical distribution of c-di-GMP observed during cell division, the researchers added, may be an important regulatory step in making and powering nano-scale tools on the outside surface of the cell to carry out essential activities.

In addition to Miller and Matthias Christen, the other scientists on this project were Hemantha Kulasekara of the UW Department of Immunology; Beat Christen of the Department of Developmental Biology at Stanford University; Bridget Kulasekara of the UW Molecular Cell Biology Program; and Luke Hoffman of the UW Department of Pediatrics. Hoffman is also a pediatrician specializing in lung disease at Seattle Children’s.

The research was supported by grants from the National Institute of Allergy and Infectious Diseases of the National Institutes of Health, the Swiss National Foundation, the Novartis Foundation, the Cystic Fibrosis Foundation, and a graduate research fellowship from the National Science Foundation.

Image Caption: This image is the first visualization of a second messenger through a biosensor in bacteria, which are much smaller than animal or plant cells. Credit: Matthias Christen

On the Net:

Did Amelia Earhart Die As A Castaway?

Researchers scouring a remote, uninhabited South Pacific island believed to be the final resting place of Amelia Earhart have discovered clues that the aviatrix may have struggled to survive there after an emergency landing.

Three pieces of a pocket knife and parts of what may be a broken cosmetic glass jar provide new evidence that the legendary Earhart and her navigator Fred Noonan landed and ultimately died as castaways on the secluded island of Nikumaroro.

The tiny island, located in the southwestern Pacific republic of Kiribati, is roughly 300 miles southeast of Howland Island – the target destination of Earhart’s fatal flight on July 2, 1937.

A futile, large-scale search ensued after the Electra’s disappearance.

“These objects have the potential to yield DNA, specifically what is known as ‘touch DNA,’” said Ric Gillespie, executive director of The International Group for Historic Aircraft Recovery (TIGHAR), during an interview with Discovery News.

Gillespie and his team will be searching the island until June 14 for clues that the “Electra”, Earhart’s twin-engine plane, did not crash in the water and sink as many believe.

Earhart had been flying over the Pacific Ocean in an attempt to set a record by flying around the world at the equator.  In her last radio transmission, the slender, tall, blonde reported that her aircraft was running low on fuel.

Gillespie said recent advances in DNA extraction from touched objects might help solve the mystery surrounding Earhart’s crash.

“If DNA from the recovered objects matches the Earhart reference sample now held by the DNA lab we’ve been working with, we’ll have what most people would consider to be conclusive evidence that Amelia Earhart spent her last days on Nikumaroro,” he said.

The group uncovered a number of artifacts during previous TIGHAR excursions conducted since 1989.   In conjunction with archival research, these clues provide solid circumstantial evidence for the castaway theory.

Gillespie’s ongoing excavation is now centered on the remote, southeast end of the island in a densely vegetated area known as the Seven Site. 

The area seems to be where British Colonial Service Officer Gerald Gallagher discovered a partial skeleton of a castaway in 1940. A forensic report describing the remains said they were likely those of a white female of northern European extraction, roughly 5 feet 7 inches tall — a description matching Amelia Earhart.

Although the bones have been lost, parts of the skeleton not uncovered in 1940, including the ribs, spine, half of the pelvis, hands, feet, one arm, and one lower leg, may yet remain at the site.

Gillespie believes that large coconut crabs may have carried off many of the bones, indicating a tragic end for Earhart.  But it’s a hypothesis the researchers wanted to test.

“In 2007 we conducted a taphonomy experiment with a small pig carcass to see how quickly the crabs would eat the remains, and how far, if at all, the crabs dragged the bones. The primary answers were ‘pretty quickly’ and ‘all over the place,'” said TIGHAR’s president, Patricia Thrasher, during an interview with Discovery News.

“This trip, they went back to the site to look at the bones that were left. It’s now been three years that these mammal bones have been out in the weather on Nikumaroro. If Gallagher found Amelia Earhart’s bones, that’s how long they would have been lying out,” she said.

To be sure, the bones appeared much older than three years, in keeping with Gallagher’s report of “gray, pitted, dry remains,” Discovery News reported.

Gillespie placed the pig bones on the coral rubble, and they nearly disappeared.

In addition to searching the coral rubble for bones not seen by Gallagher, Gillespie and his team are investigating an area around a big Ren tree, where they identified a rough ring of fire remains that evoked many questions.

For instance, did the castaways build a ring of fire to keep the crabs at bay during the nighttime? Or was it an attempt to signal search and rescue aircraft?

Other questions arise from the discovery of the pocketknife and the glass jar pieces. 

“The finds are indeed important. In the case of the knife, we found part of it in 2007 and have now found more. The artifacts tell a story of an ordinary pocket knife that was beaten apart to detach the blades for some reason,” Thrasher said. 

The castaways may have been attempting to construct a fishing spear, or perhaps used the blades for prying clams.

Additional questions will likely arise in the days ahead.  Indeed, the researchers have just uncovered another fire feature and will soon excavate the area. 

Meanwhile, other team members are exploring a strip of coral reef at the island’s western end known as the Western Reef Slope.

Researchers plan to utilize a Remote Operated Vehicle (ROV) to conduct an underwater search near the reef for the wreckage of Electra.  However, the steepness of the reef slope means that any wreckage likely lies as many as 1,000 feet down, the researchers said.

Additional information on TIGHAR’s expedition can be viewed at the Earhart Project’s Web site at http://tighar.org/Projects/Earhart/NikuVI/Niku6dailies.html.

Autism Finding Could Lead To Simple Urine Test For The Condition

Children with autism have a different chemical fingerprint in their urine than non-autistic children, according to new research published tomorrow in the print edition of the Journal of Proteome Research.

The researchers behind the study, from Imperial College London and the University of South Australia, suggest that their findings could ultimately lead to a simple urine test to determine whether or not a young child has autism.

Autism affects an estimated one in every 100 people in the UK. People with autism have a range of different symptoms, but they commonly experience problems with communication and social skills, such as understanding other people’s emotions and making conversation and eye contact.

People with autism are also known to suffer from gastrointestinal disorders and they have a different makeup of bacteria in their guts from non-autistic people.

Today’s research shows that it is possible to distinguish between autistic and non-autistic children by looking at the by-products of gut bacteria and the body’s metabolic processes in the children’s urine. The exact biological significance of gastrointestinal disorders in the development of autism is unknown.

The distinctive urinary metabolic fingerprint for autism identified in today’s study could form the basis of a non-invasive test that might help diagnose autism earlier. This would enable autistic children to receive assistance, such as advanced behavioural therapy, earlier in their development than is currently possible.

At present, children are assessed for autism through a lengthy process involving a range of tests that explore the child’s social interaction, communication and imaginative skills.

Early intervention can greatly improve the progress of children with autism but it is currently difficult to establish a firm diagnosis when children are under 18 months of age, although it is likely that changes may occur much earlier than this.

The researchers suggest that their new understanding of the makeup of bacteria in autistic children’s guts could also help scientists to develop treatments to tackle autistic people’s gastrointestinal problems.

Professor Jeremy Nicholson, the corresponding author of the study, who is the Head of the Department of Surgery and Cancer at Imperial College London, said: “Autism is a condition that affects a person’s social skills, so at first it might seem strange that there’s a relationship between autism and what’s happening in someone’s gut. However, your metabolism and the makeup of your gut bacteria reflect all sorts of things, including your lifestyle and your genes. Autism affects many different parts of a person’s system and our study shows that you can see how it disrupts their system by looking at their metabolism and their gut bacteria.

“We hope our findings might be the first step towards creating a simple urine test to diagnose autism at a really young age, although this is a long way off – such a test could take many years to develop and we’re just beginning to explore the possibilities. We know that giving therapy to children with autism when they are very young can make a huge difference to their progress. A urine test might enable professionals to quickly identify children with autism and help them early on,” he added.

The researchers are now keen to investigate whether metabolic differences in people with autism are related to the causes of the condition or are a consequence of its progression.

The researchers reached their conclusions by using proton (1H) NMR spectroscopy to analyse the urine of three groups of children aged between 3 and 9: 39 children who had previously been diagnosed with autism, 28 non-autistic siblings of children with autism, and 34 non-autistic children who had no autistic siblings.

They found that each of the three groups had a distinct chemical fingerprint. Non-autistic children with autistic siblings had a different chemical fingerprint than those without any autistic siblings, and autistic children had a different chemical fingerprint than the other two groups.

On the Net:

Scientist Uses iPad To Communicate With Dolphin

The iPad might offer a new solution for scientists wishing to communicate with dolphins.

Dolphin researcher Jack Kassewitz is using an iPad to interact with a 2-year-old dolphin named Merlin.  Kassewitz says this could potentially not only allow humans and dolphins to interact, but also be easily used as a universal translator for humans.

“For several years, we’ve recognized that part of the problem in creating an artificial language between humans and dolphins has been the speed of acquisition of the human brain; it’s just not up to competing (with that of the dolphin),” Kassewitz, president of SpeakDolphin, a non-profit firm heading up the dolphin research, told reporters recently.

Kassewitz said the dolphin’s “acoustic range is so broad and ours is so limited, and our speed to react to their sound is so slow, I think we were just plain boring.”

He turned to computer hardware, special software for recording real-time data, and underwater microphones.

Kassewitz has whittled down potential human-dolphin interfaces to the iPad and the Panasonic Toughbook 19 over the past two years.  Trials with the iPad are underway, and the results are encouraging.  The tests are being conducted in Puerto Aventuras, Mexico at Dolphin Discovery, which has facilities for swimming with dolphins.

The ultimate goal is to develop a system of symbols and sounds that correspond to objects and concepts for dolphins and humans to communicate.

Kassewitz chose the iPad because it is lightweight and touch sensitive.  Other key advantages it offers are that it is fast and that it has apps like SignalScope, which turns the iPad into a high-tech oscilloscope for capturing recorded sound.

The iPad was encased in a waterproof bag called the Waterwear in order to make it dolphin-friendly.

Merlin is just performing simple interactions with the iPad so far.  Kassewitz will show the dolphin an image of an object on the iPad, and if Merlin recognizes the object then he will tap the touch screen with his nose and then proceed to touch the real 3-D object that someone is holding nearby.  The dolphin’s sounds are recorded underwater at the same time.

The iPad was simply a new gadget for Merlin.  The dolphin is used to interacting with real objects, so when Kassewitz held up the iPad he saw it as “something novel,” he said.  “For him, it was a new toy.”

Kassewitz has talked with computer programmers who are interested in creating more complex apps, possibly ones that respond with dolphin-like sounds.

On the Net:

Arctic Ice At Low Point Compared To Recent Geologic History

Less ice covers the Arctic today than at any time in recent geologic history.

That’s the conclusion of an international group of researchers, who have compiled the first comprehensive history of Arctic ice.

For decades, scientists have strived to collect sediment cores from the difficult-to-access Arctic Ocean floor, to discover what the Arctic was like in the past. Their most recent goal: to bring a long-term perspective to the ice loss we see today.

Now, in an upcoming issue of Quaternary Science Reviews, a team led by Ohio State University has re-examined the data from past and ongoing studies — nearly 300 in all — and combined them to form a big-picture view of the pole’s climate history stretching back millions of years.

“The ice loss that we see today — the ice loss that started in the early 20th Century and sped up during the last 30 years — appears to be unmatched over at least the last few thousand years,” said Leonid Polyak, a research scientist at Byrd Polar Research Center  at Ohio State University. Polyak is lead author of the paper and a preceding report that he and his coauthors prepared for the U.S. Climate Change Science Program.

Satellites can provide detailed measures of how much ice is covering the pole right now, but sediment cores are like fossils of the ocean’s history, he explained.

“Sediment cores are essentially a record of sediments that settled at the sea floor, layer by layer, and they record the conditions of the ocean system during the time they settled. When we look carefully at various chemical and biological components of the sediment, and how the sediment is distributed — then, with certain skills and luck, we can reconstruct the conditions at the time the sediment was deposited.”

For example, scientists can search for a biochemical marker that is tied to certain species of algae that live only in ice. If that marker is present in the sediment, then that location was likely covered in ice at the time. Scientists call such markers “proxies” for the thing they actually want to measure — in this case, the geographic extent of the ice in the past.

While knowing the loss of surface area of the ice is important, Polyak says that this work can’t yet reveal an even more important fact: how the total volume of ice — thickness as well as surface area — has changed over time.

“Underneath the surface, the ice can be thick or thin. The newest satellite techniques and field observations allow us to see that the volume of ice is shrinking much faster than its area today. The picture is very troubling. We are losing ice very fast,” he said.

“Maybe sometime down the road we’ll develop proxies for the ice thickness. Right now, just looking at ice extent is very difficult.”

To review and combine the data from hundreds of studies, he and his cohorts had to combine information on many different proxies as well as modern observations. They searched for patterns in the proxy data that fit together like pieces of a puzzle.

Their conclusion: the current extent of Arctic ice is at its lowest point for at least the last few thousand years.

As scientists pull more sediment cores from the Arctic, Polyak and his collaborators want to understand more details of the past ice extent and to push this knowledge further back in time.

During the summer of 2011, they hope to draw cores from beneath the Chukchi Sea, just north of the Bering Strait between Alaska and Siberia. The currents emanating from the northern Pacific Ocean bring heat that may play an important role in melting the ice across the Arctic, so Polyak expects that the history of this location will prove very important. He hopes to drill cores that date back thousands of years at the Chukchi Sea margin, providing a detailed history of interaction between oceanic currents and ice.

“Later on in this cruise, when we venture into the more central Arctic Ocean, we will aim at harvesting cores that go back even farther,” he said. “If we could go as far back as a million years, that would be perfect.”

Polyak’s coauthors on the report hailed from Penn State University, University of Colorado, University of Massachusetts, the U.S. Geological Survey, Old Dominion University, the Geological Survey of Canada, University of Copenhagen, the Cooperative Institute for Research in Environmental Sciences, Stockholm University, McGill University, James Madison University, and the British Antarctic Survey.

This research was funded by the US Geological Survey and the National Science Foundation.

By Pam Frost Gorder, Ohio State University

On the Net:

Peaches, Plums Could Help In Fight Against Breast Cancer

Breast cancer cells – even the most aggressive type – died after treatments with peach and plum extracts in lab tests at Texas AgriLife Research recently, and scientists say the results are deliciously promising. Not only did the cancerous cells keel over, but the normal cells were not harmed in the process.

AgriLife Research scientists say two phenolic compounds are responsible for the cancer cell deaths in the study, which was published in the Journal of Agricultural and Food Chemistry. The phenols are organic compounds that occur in fruits. They are slightly acidic and may be associated with traits such as aroma, taste or color.

“It was a differential effect which is what you’re looking for because in current cancer treatment with chemotherapy, the substance kills all cells, so it is really tough on the body,” said Dr. David Byrne, AgriLife Research plant breeder who studies stone fruit. “Here, there is a five-fold difference in the toxic intensity. You can put it at a level where it will kill the cancer cells – the very aggressive ones – and not the normal ones.”

Byrne and Dr. Luis Cisneros-Zevallos originally studied the antioxidants and phytonutrients in plums and found them to match or exceed the blueberry which had been considered superior to other fruits in those categories.

“The following step was to choose some of these high antioxidant commercial varieties and study their anticancer properties,” Cisneros-Zevallos said. “And we chose breast cancer as the target because it’s one of the cancers with highest incidence among women. So it is of big concern.”

According to the National Cancer Institute, there were 192,370 new cases of breast cancer in females and 1,910 cases in males in 2009. That year, 40,170 women and 440 men died from breast cancer. The World Health Organization reports that breast cancer accounts for 16 percent of the cancer deaths of women globally.

Cisneros-Zevallos, an AgriLife Research food scientist, said the team compared normal cells to two types of breast cancer, including the most aggressive type. The cells were treated with an extract from two commercial varieties, the “Rich Lady” peach and the “Black Splendor” plum.

“These extracts killed the cancer cells but not the normal cells,” Cisneros-Zevallos said.

A closer look at the extracts determined that two specific phenolic acid components – chlorogenic and neochlorogenic – were responsible for killing the cancer cells while not affecting the normal cells, Cisneros-Zevallos said.

The two compounds are very common in fruits, the researchers said, but the stone fruits such as plums and peaches have especially high levels.

“So this is very, very attractive from the point of view of being an alternative to typical chemotherapy which kills normal cells along with cancerous ones,” Byrne added.

The team said laboratory tests also confirmed that the compounds prevented cancer from growing in animals that received them.

Byrne plans to examine more fully the lines of the varieties that were tested to see how these compounds might be incorporated into his research of breeding plums and peaches. Cisneros-Zevallos will continue testing these extracts and compounds in different types of cancer and conduct further studies of the molecular mechanisms involved.

The work documenting the health benefits of stone fruit has been supported by the Vegetable and Fruit Improvement Center at Texas A&M University, the U.S. Department of Agriculture and the California Tree Fruit Agreement.

Image Caption: Breast cancer cells — even the most aggressive type — died after treatments with peach and plum extracts in lab tests at Texas AgriLife Research. Credit: (Photo courtesy of U.S. Department of Agriculture-Agriculture Research Service)

On the Net:

American Cigarettes Worse Than Others

U.S. researchers reported on Tuesday that American cigarettes contain more cancer-causing agents than those in Canada, Britain and Australia.

Their study also demonstrated that the amount of these carcinogens in a smoker’s cigarette butts directly correlated with tell-tale compounds in the smoker’s urine.

The study can help researchers try to trace the harmful effects of smoking.

“We know that cigarettes from around the world vary in their ingredients and the way they are produced,” said Dr. Jim Pirkle of the U.S. Centers for Disease Control and Prevention (CDC), who heads a lab using a mass spectrometer to measure levels of chemicals in people’s bodies.

“All of these cigarettes contain harmful levels of carcinogens, but these findings show that amounts of tobacco-specific nitrosamines differ from country to country, and U.S. brands are the highest in the study,” Pirkle said in a statement.

CDC’s David Ashley and colleagues did in-depth tests involving 126 smokers in the U.S., Canada, Britain and Australia.

“Seventeen eligible cigarette brands (between 3 and 5 brands from each country) were selected on the basis of national sales and nicotine yield to identify popular brands with a range of ventilation,” the researchers wrote in the June issue of Cancer Epidemiology.

Volunteers had their saliva and urine tested and also turned over their used cigarette butts to the researchers.

These were all tested for nicotine and the chemicals 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK for short) and the breakdown product of NNK in the body, known as NNAL.

These cancer-causing agents are known as tobacco-specific nitrosamines, or TSNAs.

“We have shown a direct association between the 24-hour mouth-level exposure of NNK resulting from cigarette smoking and the concentration of its primary metabolite, NNAL, in the urine of smokers,” the researchers wrote.

“Internal dose concentrations of urinary NNAL are significantly lower in smokers in countries that have lower TSNA levels in cigarettes such as Canada and Australia in contrast to countries that have high levels of these carcinogens in cigarettes, such as the United States.”

The popular U.S. cigarette brands studied contained “American blend” tobacco.  This tobacco is known to have higher TSNA levels than the “bright” tobacco used in the most popular Australian, Canadian, and British brands.

Australian and Canadian smokers got more nicotine than U.S. and British smokers, but not more TSNAs.

The World Health Organization says 5 million people die each year due to tobacco-related heart attacks, strokes and cancers.  Another 430,000 adults die annually from second-hand smoke.

On the Net:

Genome Of Bacteria Responsible For Tuberculosis Of Olive Tree Sequenced

Researchers at the Public University of Navarra, the Polytechnic University of Madrid (CBGP), the University of Malaga, the University of Wisconsin and the Valencian Institute of Agricultural Research have managed to sequence the genome of the bacterium responsible for tuberculosis in the olive tree. The study, included in the June issue of Environmental Microbiology, represents the first sequencing of the genome of a pathogenic bacterium undertaken in Spain, and the first genome known worldwide of a Pseudomonas that is pathogenic to woody plants.

The sequencing of the genome of this pathogen opens the doors to the identification of the genes responsible for the virulence of this bacterium and its survival on the phyllosphere (leaf surface), thus facilitating the design of specific strategies in the fight against the disease and enabling programmes for the genetic improvement of olive groves to be drawn up.

Pseudomonas savastanoi is the agent that gives rise to tuberculosis in the olive tree, a disease that causes significant losses in Spain’s olive crops. Affected trees present tumours (known as verrucas) that can grow to several centimetres in diameter on trunks, branches, stalks and buds. Diseased trees are less robust and grow less, to the point of becoming non-productive if the attack is very intense. To date, in the absence of effective methods of control, preventive strategies have been used, reducing bacterial populations with phytosanitary treatments.

New strategies

Plant diseases produced by pathogenic microorganisms not only reduce production but can also alter the quality of the food and drastically diminish the commercial value of the crops. The new strategies for disease control today involve the analysis of information contained in the genome of pathogenic organisms. Similar to what has happened with the human genome, this technology is generating a great amount of valuable information for the development of innovative technologies, that will enable identifying and controlling the pathogen as well as obtaining new varieties of the host plant that have greater resistance to the disease.

On the Net:

Only Five Percent Of Cancer Research Funds Are Spent On Metastases, Yet They Cause 90 Percent Of All Cancer Deaths

On average, about five percent of total cancer research funding is spent on investigating metastases (the spread of cancer cells around the body) in Europe, yet metastatic disease is the direct or indirect cause of 90 percent of all cancer deaths, according to an editorial in the European Journal of Cancer (EJC). [1]

The authors of the editorial, which introduces a special EJC issue on metastasis (“Stopping cancer in its tracks: metastasis as a therapeutic target”), highlight this discrepancy in funding and they believe that, although it is difficult to obtain accurate figures, the situation is probably similar in other countries such as the USA and Japan.

It has been known for some time that metastasis is the key problem in cancer and the main reason why people die from the disease. Until recently, the reasons why some people developed metastases and others did not had been unclear, but, as this special issue of the EJC shows, at last there are models and scientific hypotheses that have begun to unravel this process, and the EJC reviews the state of the art in this respect. However, research into metastasis has not necessarily attracted the recognition it deserves from funding organisations.

Professor Jonathan Sleeman, one of the two guest editors of the EJC special issue and head of microvascular pathobiology research at the University of Heidelberg (Germany), said: “Metastasis is a process in cancer that is very poorly understood; it kills patients and therefore we believe that it should be funded better. Yet at the European level and, indeed, worldwide, comparatively little emphasis is placed on tackling metastases and in providing appropriate levels of funding for research.”

He continued: “Given the clinical importance of metastasis for cancer patients, the limited treatment options for metastatic disease and the open question of how metastasis works, we need to know how much research funding is being directed at the problem and what proportion of funding for cancer research ends up focused on metastases? I have found it hard to obtain reliable figures, but although there is considerable variation between European countries, I estimate that the average spent on metastasis research is around 5% of total cancer research funding. Given that metastasis is of central importance to the prognosis and outcome of cancer patients, we could argue that in many countries more funding should be directed toward metastasis research.”

Metastasis is the process by which cancer cells split off from the original, primary tumour and travel to other parts of the body via the blood or lymph systems. This leads to the growth of secondary tumours in places such as the bones, brain, lungs and liver, and it is usually these that end up killing the patient.

“Metastatic disease, therefore, represents a major public health problem, affecting cancer patients and their families, as well as health care systems and the broader economy. Despite this, progress in developing treatments for metastatic disease remains slow,” write Prof Sleeman and the second guest editor, Professor Patricia Steeg (chief of the Women’s Cancers Section, Laboratory of Molecular Pharmacology at the National Cancer Institute, Bethesda, USA) in their editorial.

In addition to adequate funding, they call for:

    * effective translational research for metastatic disease, which will take discoveries made in the laboratory quickly into new and better treatments for cancer patients;
    * clinical trials to be designed so that they include information on metastases;
    * clinicians and scientists to work more closely together to design clinical trials that assess the development of new metastases.

“In summary, combating metastasis formation and growth is the key to successfully treating cancer,” they conclude. “Traditional growth control approaches are inadequate and can even be detrimental in the long term: new therapies built upon a solid understanding of the process of metastatic disease are urgently required. In turn, this demands an increased pre-clinical knowledge base that capitalises on major conceptual advances made in recent years, as well as a rational approach to the design of clinical trials with the inclusion of metastasis as an end-point. Together these observations speak for the necessity of increasingly close interactions between basic and clinical scientists, as well as the enhanced levels of research funding required to alleviate this major clinical problem.”

The EJC special issue on metastasis consists of a number of articles looking at the state of current knowledge about the disease and outlining promising areas of research. Prof Steeg said: “We hope that this special issue will highlight the fact that metastasis should be an important consideration during drug development. If more attention was paid to it, we could really improve treatments for cancer patients.”

The special issue will be available on the Elsevier stand at the American Society of Clinical Oncology (ASCO) meeting in Chicago (USA) from June 4-8.

On the Net:

Comparison Of Overall Survival For Non-Small Cell Lung Cancer Patients

There’s debate about the best treatment approach for patients with certain stages of non-small cell lung cancer (NSCLC), which accounts for about 80 percent of all lung cancers. Patients with early stages of NSCLC are typically treated with surgery, but those with stage IIIA present more of a challenge because they are such a diverse group. However, research from Fox Chase Cancer Center shows that patients with stage IIIA NSCLC who receive surgery, lobectomy in particular, have increased overall survival compared to those who receive chemoradiation alone; those receiving lobectomy plus chemoradiation also had survival rates that were higher than previously reported.

The research, led by Charu Aggarwal, M.D., M.P.H., Walter Scott, M.D., F.A.C.S., and George Simon, M.D., will be presented at the 46th Annual Meeting of the American Society of Clinical Oncology on Sunday June 6.

Stage IIIA presents challenges because patients classified at that level are such a diverse group. The common denominator is that cancer has spread to lymph nodes on the same side of the chest as the primary tumor, but beyond that, stage IIIA patients may have tumors of any size and may differ in the number and location of affected lymph nodes.

“In the past, patients with advanced stage lung cancer were treated either with surgery alone, chemotherapy alone, or radiation alone,” says Aggarwal. “Scientists and clinicians found that adding radiation to chemotherapy improved survival, so concurrent chemotherapy and radiation together became the standard of care.” Recent large Phase III clinical trials, which showed that patients treated with chemoradiation followed by surgery were more likely to be disease-free than counterparts who received chemoradiation without surgery, set the stage for Aggarwal’s study.

Looking at data from 249 patients treated at Fox Chase for stage IIIA non-small cell lung cancer from 2000 through 2008, the research team divided patients into three groups based on the treatments they received: chemoradiation only, chemoradiation plus surgical removal of a lung (pneumonectomy), or chemoradiation plus surgical removal of only one lobe of the lung (lobectomy).

Biostatistician Brian Egleston, Ph.D., used a technique called propensity score analysis to balance the three groups in terms of age, gender, smoking status, physical condition, and other variables that could bias the results.
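
Propensity score analysis of this kind models each patient’s probability of receiving a given treatment from the measured covariates, then weights (or matches) patients so the treatment groups resemble each other on those covariates before outcomes are compared. The Python sketch below illustrates the idea for a simplified two-group version of the comparison; the variable names, data and toy outcome model are invented and do not represent the Fox Chase dataset or Egleston’s actual analysis.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(65, 8, n),
    "smoker": rng.integers(0, 2, n),
    "performance_score": rng.integers(0, 3, n),
})
# Toy assignment: older patients and smokers are less likely to receive surgery
p_surgery = 1 / (1 + np.exp(0.08 * (df["age"] - 65) + 0.3 * df["smoker"]))
df["surgery"] = rng.random(n) < p_surgery
df["survived_5yr"] = rng.random(n) < (0.25 + 0.15 * df["surgery"])  # toy outcome

# 1. Model the probability of receiving surgery from the covariates
X = df[["age", "smoker", "performance_score"]]
ps = LogisticRegression(max_iter=1000).fit(X, df["surgery"]).predict_proba(X)[:, 1]

# 2. Inverse-probability weights balance the two groups on those covariates
weights = np.where(df["surgery"], 1 / ps, 1 / (1 - ps))

# 3. Compare weighted five-year survival between the balanced groups
for received_surgery in (True, False):
    mask = df["surgery"] == received_surgery
    rate = np.average(df.loc[mask, "survived_5yr"], weights=weights[mask])
    print("surgery" if received_surgery else "chemoradiation only", round(rate, 3))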

“After we adjusted for all those variables, we saw that patients who received surgery, lobectomy in particular, had increased overall survival compared to those who received chemoradiation alone,” says Aggarwal. “We saw overall survival of 40 percent at five years for chemoradiation plus lobectomy, higher than seen with pneumonectomy.”

Removing the entire lung, a more extensive procedure that poses a greater risk of post-operative complications, offered no additional survival benefit. “This tells us that you don’t have to put the patient through pneumonectomy; you might get superior outcomes with a smaller, safer operation,” says Aggarwal, who received an ASCO Foundation Merit Award for this research.

On the Net:

Breakthrough Research For Fighting The Ebola Virus

Scientists reported on Friday that Pentagon-backed research has yielded a breakthrough in the fight against the Ebola virus, a pathogen that is also feared as a future bioterror weapon.

They said monkeys injected with the deadliest strain of Ebola survived after receiving an experimental formula that uses tiny particles of genetic material to disrupt viral reproduction.

This is the first time a drug has protected non-human primates against Ebola, a virus that is sometimes referred to as a “slate wiper” for its ruthless culling of lives.

Ebola is one of a family of so-called filoviruses, which cause hemorrhagic fever, a rare but highly lethal disease in which the patient can bleed to death, sometimes from the mouth, ears and eyes.

There have been five Ebola strains identified since the first case came to light at the Ebola River in Zaire, now the Democratic Republic of Congo, in 1976.

The deadliest is the Zaire strain, which inflicts a death rate of 80 to 90 percent.

Thomas Geisbert of the National Emerging Infectious Diseases Laboratories Institute at Boston University School of Medicine led a team that tested the formula on two groups of rhesus macaque monkeys after getting promising results on guinea pigs.

The drug contains so-called small interfering RNAs, or siRNAs, designed to act as tiny wrenches that are thrown into the machinery of enzymes that enable the virus to replicate.

Three monkeys were given a potentially lethal dose of the Zaire strain (ZEBOV) in the first experiment and then inoculated with four doses of the drug on successive days.  One of the animals died.

The second group consisted of four monkeys that received seven successive doses and all survived.

In both experiments, a “control” monkey that was not inoculated died from the virus.

Geisbert said the results marked a major step forward, although he cautioned that a long road still lies ahead in other safety tests before the treatment could be licensed for humans.

“We believe this work justifies the immediate development of this treatment as an agent to treat EBOV-infected patients, either in outbreaks or accidental laboratory exposures,” he told AFP News.

The Defense Threat Reduction Agency, a branch of the Pentagon that works on strategies against weapons of mass destruction, funded the work.

According to the U.N.’s World Health Organization (WHO), about 1,850 cases of Ebola, with about 1,200 deaths, have occurred since 1976.

The virus has a natural reservoir in several species of African fruit bat.  Gorillas and other non-human primates are also susceptible to the disease.

The Lancet published the results.

On the Net:

Common Gene Found For Congenital Heart Disease

Although congenital heart disease represents the most common major birth defect, scientists have not previously identified the genes that give rise to it. Now genetics and cardiology researchers, two of them brothers, have discovered a genetic variant on chromosome 5 that strongly raises the risk of congenital heart disease.

“This gene, ISL1, plays a key role in regulating early cardiac development, so there is a compelling biological reason for investigating it as a genetic risk factor for CHD,” said study leader Peter J. Gruber, M.D., Ph.D., a cardiothoracic surgeon and developmental biologist at The Children’s Hospital of Philadelphia. Gruber collaborated with his brother, Stephen B. Gruber, M.D., Ph.D., a geneticist and epidemiologist at the University of Michigan Medical School.

The study appeared online May 26 in the journal PLoS ONE.

Congenital heart disease (CHD), said Peter Gruber, is the “Wild West” of genetics, largely unexplored when compared to diseases such as cancer. Researchers have identified genes involved in chromosomal abnormalities and rare genetic syndromes that include heart defects, but no common gene variant had previously been found for non-syndromic complex CHD.

CHD affects at least one in 100 live births. It ranges widely in severity, from tiny holes between heart chambers that close naturally, to life-threatening abnormal structures such as hypoplastic left-heart syndrome that require a series of complicated surgeries.

CHD can affect a variety of different structures in the heart, but the researchers decided to focus on the earliest period of the organ’s development. “Instead of assuming separate genes would govern each specific defect, we formed the hypothesis that a common gene variant operates early in the biological pathway of heart formation, thus affecting multiple subtypes of congenital heart disease,” said Peter Gruber.

In Peter Gruber’s previous research in human cardiac stem cells, he found that a gene called ISL1 was crucial in regulating the development of early cardiac progenitor cells. Suspecting that ISL1 was a likely candidate gene involved in human CHD, he designed a study in collaboration with two genetics teams, one in Philadelphia, the other in Michigan.

At the Children’s Hospital of Philadelphia, he worked with Hakon Hakonarson, M.D., Ph.D., director of the Center for Applied Genomics, one of the world’s largest centers for pediatric genotyping. Gruber collected DNA samples from 300 children with CHD at the hospital’s comprehensive Cardiac Center, and from 2,200 healthy children at the Center for Applied Genomics. Hakonarson’s team did the initial genotyping – looking for gene variants (mutations) in the DNA of genes in or near the ISL1 gene. When combined with results from the genetics team at the University of Michigan, the researchers found that eight of these alternative spellings in DNA bases (single-nucleotide polymorphisms, or SNPs) raised the risk of CHD.

Stephen Gruber and colleagues at the University of Michigan performed second-stage studies on the initial data, analyzing specific DNA sequences and performing “fine mapping” research – focusing in sharper detail on the gene regions of interest. “It was challenging to analyze how genetic variation contributes to complex congenital heart disease,” Stephen Gruber said. “We combined expertise in cardiology, epidemiology, genetics and developmental biology that led to an interesting discovery.”

Adding DNA from medical programs in Canada and the Netherlands to the U.S. samples, the researchers studied genes from a total of 1,344 children with CHD and 6,135 healthy controls, and confirmed in replication studies that variants in the ISL1 gene had strong associations with CHD. Within that gene, they found that one SNP raised the risk for white children, and a different SNP increased the risk for African American children.

While the gene findings do not directly affect treatment for children with CHD, Peter Gruber said that better knowledge of the molecular basis of heart disease may provide eventual benefits for the children he sees as a surgeon. “As future studies better define exactly how a mutation leads to a specific type of heart defect, we may be better able to predict how a gene variant affects other organ systems,” he added. “We may be better able to understand how a child will respond to surgery, and when or even perhaps how to best perform perioperative, intraoperative or postoperative care. A greater understanding of molecular events in early development brings us that much closer to personalized medicine.”

The Leducq Foundation provided funding support for this study. Co-authors with the Grubers and Hakonarson were Kristen N. Stevens, from the University of Michigan; Cecilia E. Kim, Jennifer Raue, Joseph T. Glessner and Anne Granger, of The Children’s Hospital of Philadelphia; and collaborators from the Netherlands, Canada and Spain. In addition to his post at Children’s Hospital, Peter Gruber is a member of the Penn Cardiovascular Institute and the Institute for Regenerative Medicine, both at the University of Pennsylvania School of Medicine.

Citation: Stevens KN, Hakonarson H, Kim CE, Doevendans PA, Koeleman BPC, et al. (2010) Common Variation in ISL1 Confers Genetic Susceptibility for Human Congenital Heart Disease. PLoS ONE 5(5): e10855. doi:10.1371/journal.pone.0010855

Funding: Fondation Leducq (PJG). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

Concern About Pandemic Flu Has Positive Impact On Personal Hygiene Behaviors

Fear of the H1N1 virus appears to be the driving factor behind the adoption of preventive behaviors, according to a study published in the June issue of AJIC: American Journal of Infection Control, the official publication of the Association for Professionals in Infection Control and Epidemiology (APIC). Researchers studying the public response during the recent H1N1 outbreak in Hong Kong concluded that fear about the pandemic prompted residents to wash their hands frequently and wear face-masks.

A team led by Joseph T.F. Lau, PhD, a professor at the Chinese University of Hong Kong, investigated the prevalence of self-reported preventive behaviors in response to the influenza A/H1N1 epidemic in Hong Kong, including wearing face-masks regularly in public areas, wearing face-masks after the onset of influenza-like illness (ILI) symptoms, and frequent hand-washing. Previous research shows that both frequent hand-washing and wearing face-masks can control the spread of influenza.

The study’s results showed that 47 percent of respondents washed their hands more than 10 times per day, 89 percent wore face-masks when they had ILI symptoms, and 21.5 percent wore face-masks regularly in public areas.
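The percentages above are point estimates from a survey. As a rough illustration of how such a prevalence is usually reported with a margin of error, the sketch below computes a Wilson score confidence interval; the sample size is a hypothetical placeholder, since the study’s actual sample is not given in this summary.

    # Minimal sketch: 95% Wilson score interval for a survey proportion.
    # The sample size n is a hypothetical placeholder.
    from math import sqrt

    def wilson_ci(p_hat, n, z=1.96):
        denom = 1 + z**2 / n
        centre = (p_hat + z**2 / (2 * n)) / denom
        half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
        return centre - half, centre + half

    low, high = wilson_ci(0.47, 1000)   # e.g., 47% hand-washing, n = 1,000
    print(f"47% (95% CI {low:.1%} to {high:.1%})")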

The authors note that pandemic outbreaks “have had a sustained impact on personal hygiene and protective behaviors. Our study showed that people with a higher level of mental distress due to A/H1N1 were more likely to adopt some of the three preventive measures.” They go on to say that emerging infectious diseases “provide a window of opportunity for health education to improve personal hygiene.”

According to the authors, these preventive behaviors can play an important role in controlling pandemic influenza, but they cautioned that there is a lack of data on their adoption by the public and see a need for more research.

Study Finds ‘Law-Like’ Patterns In Human Preference Behavior

Patterns consolidate features of older theories, may help diagnosis of psychiatric disorders

In a study appearing in the journal PLoS ONE, Massachusetts General Hospital (MGH) scientists describe finding mathematical patterns underlying the way individuals unconsciously distribute their preferences regarding approaching or avoiding objects in their environment. These patterns appear to meet the strict criteria used to determine whether something is a scientific law and, if confirmed in future studies, could potentially be used to guide diagnosis and treatment of psychiatric disorders.

“Law-like processes are important in science for their predictive value, and finding patterns in behavior that meet criteria for lawfulness is extremely rare,” explains Hans Breiter, MD, principal investigator of the MGH Phenotype Genotype Project in Addiction and Mood Disorder (http://pgp.mgh.harvard.edu), who led the study. “The patterns we observed appear to describe the unconscious range of preference an individual has with a specificity suggestive of that of a fingerprint. We look forward to learning what other scientists find about these patterns.” These patterns, which the authors group together under the name relative preference theory, incorporate features of several older theories of reward and aversion into a single theory.

The PLoS ONE study reports on the outcome of three sets of experiments. In each, healthy participants were presented with a series of images and could vary the amount of time they viewed each image by pressing the keys of a keypad. The first group of participants viewed a series of four human faces: average-looking male, average-looking female, attractive male and attractive female. The second group was presented with a series of photographs ranging from children, food, sports and musical instruments to war, disaster, and drug paraphernalia. The third group viewed four different images of food (two of normal-appearing meals, one in which the food was abnormally colored, and one of raw, unprepared food) on two different days. For one viewing, participants were hungry; for the other, they had just eaten. Responses were measured by whether participants increased, decreased, or did nothing to change how long they viewed particular images.

All three experiments showed the same patterns in both groups and individuals, patterns that contained a set of features that varied between people. These patterns describe how groups and individuals make tradeoffs between approaching and avoiding items; how value is attributed to objects, which is linked to assessments of other objects of the same type; and how individuals set limits on how strongly they will seek or avoid objects.

The authors note that these patterns incorporate aspects of three existing theories of reward and aversion: prospect theory, which includes the fact that people are more strongly motivated to avoid negative outcomes than to attain positive outcomes; the matching law, which describes how the rates of response to multiple stimuli are proportional to the amount of reward attributed to each stimulus; and alliesthesia, which notes that the value placed on something depends on whether it is perceived to be scarce (for example, hungry people place greater value on food than do those who have just eaten).
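Of the three, the matching law is the easiest to state concretely: responses to concurrent stimuli are allocated in proportion to the reward attributed to each. The sketch below is a minimal illustration of that proportionality, not code or data from the study; the reward values are hypothetical.

    # Minimal sketch of the strict matching law: response allocation is
    # proportional to attributed reward. Reward values are hypothetical.
    def matching_allocation(reward_values):
        total = sum(reward_values)
        return [r / total for r in reward_values]

    # Two images, one attributed twice the reward of the other:
    # the matching law predicts a 2:1 split of responses.
    print(matching_allocation([2.0, 1.0]))   # [0.667, 0.333]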

One of the key differences between relative preference theory (RPT) and earlier theories is that RPT evaluates preferences relating to the intrinsic value of items to an individual, rather than relating preferences to values set by external forces, such as how the overall economy sets the value of the dollar. The patterns observed in this study are similar at the individual and group levels, a relationship known as “scaling.”

“In order for behavioral patterns to be considered law-like, they need to be described mathematically, recur in response to many types of stimuli, remain stable in the face of statistical noise, and potentially show scaling across different levels of measurement,” explains Anne Blood, PhD, deputy principal investigator of the Phenotype Genotype Project, director of the MGH Mood and Motor Control Laboratory, and a co-author of the PLoS ONE report.

“Relative preference theory meets those requirements, but these observations need to be confirmed by other scientists,” she adds. “Other work by our team is examining how these RPT patterns are affected by depression and addiction, with the goal of developing RPT as an Internet tool for psychiatric diagnosis.” Earlier studies by the MGH investigators also connected aspects of RPT to reward circuits in the brain, using magnetic resonance imaging, and to measures of genetic variability.

2 Glasses Of Milk A Day Tones Muscles, Keeps Away Fat

Women who drink two large glasses of milk a day after their weight-lifting routine gained more muscle and lost more fat compared to women who drank sugar-based energy drinks, a McMaster study has found.

The study appears in the June issue of Medicine and Science in Sports and Exercise.

“Resistance training is not a typical choice of exercise for women,” says Stu Phillips, professor in the Department of Kinesiology at McMaster University. “But the health benefits of resistance training are enormous: It boosts strength, bone, muscular and metabolic health in a way that other types of exercise cannot.”

A previous study conducted by Phillips’ lab showed that milk increased muscle mass and fat loss in men. This new study, says Phillips, was more challenging because women not only steer clear of resistance training, they also tend to steer away from dairy products, based on the incorrect belief that dairy foods are fattening.

“We expected the gains in muscle mass to be greater, but the size of the fat loss surprised us,” says Phillips. “We’re still not sure what causes this, but we’re investigating that now. The combination of calcium, high-quality protein, and vitamin D may be the key, and, conveniently, all of these nutrients are in milk.”

Over a 12-week period, the study monitored young women who had not previously done resistance-training exercise. Every day, for the two hours before exercising, the women were required not to eat or drink anything except water. Immediately after their exercise routine, one group consumed 500 ml of fat-free white milk; the other group consumed a similar-looking but sugar-based energy drink. The same drinks were consumed by each group one hour after exercising.

The training consisted of three types of exercise: pushing (e.g. bench press, chest fly), pulling (e.g. seated lateral pull-down, abdominal exercises without weights), and leg exercises (e.g. leg press, seated two-leg hamstring curl). Training was monitored daily, one-on-one, by personal trainers to ensure proper technique.

“The women who drank milk gained barely any weight because what they gained in lean muscle they balanced out with a loss in fat,” said Phillips. “Our data show that simple things like regular weightlifting exercise and milk consumption work to substantially improve women’s body composition and health.” Phillips’ lab is now following up this study with a large clinical weight-loss trial in women.

Funding for the study was provided by McMaster University, CIHR, and the Dairy Farmers of Canada. McMaster University, one of four Canadian universities listed among the Top 100 universities in the world, is renowned for its innovation in both learning and discovery. It has a student population of 23,000, and more than 145,000 alumni in 128 countries.

AT&T U-Verse Suffers Digital Phone Outage

AT&T’s new digital home phone service failed around the country Tuesday, demonstrating ongoing reliability issues with Internet-based phone service.

Customers of AT&T’s U-Verse Voice complained that their landline phones had no dial tone starting Tuesday morning. Customers said that callers trying to reach them got a message that the line had been disconnected.

Customer support specialists with AT&T are telling customers that a server crash brought down the network in the company’s entire 22-state local-phone service area.

Mari Meguizo, a spokeswoman for AT&T, told the Associated Press (AP) the outage started about 10:30 a.m., and most customers had their service restored by 2:45 p.m. She said the extent of the outage was unknown.

AT&T’s U-Verse Voice has nearly 1.2 million customers. The service works in a similar fashion to independent phone services such as Vonage. The technology, known as Voice over Internet Protocol, has a blemished reliability record compared to standard phone services, though there has been some improvement. U-Verse Internet and TV services were not affected by Tuesday’s outage.

Charles Tillman, a bank employee working from his home in Jupiter, Fla., told AP the outage was “infuriating.”

“If you have a client calling and they get a message saying it’s a non-working number, they don’t know it’s a service outage,” he said, adding that he was thankful for his cell phone.

Kimberly Dotseth, a real-estate broker in San Diego, was also upset that clients were unable to reach her. She told AP that she has been a U-Verse customer for only a month and now wants to switch back to a regular landline.

U-Verse Voice is only available in areas where AT&T has upgraded phone networks to support TV services over phone lines.

Surprising New Evidence For Asymmetry Between Matter And Antimatter

UC Riverside physicists involved in the international research; new result brings us closer to understanding the universe and its origins

Why is there matter in the universe and not antimatter, its opposite?

Physicists at Fermi National Accelerator Laboratory, including John Ellison, a professor of physics at UC Riverside, have announced that they have found evidence for a significant violation of matter-antimatter symmetry in decays of B-mesons, which are exotic particles produced in high-energy particle collisions.

To arrive at their result, the research team, known as the DZero collaboration, analyzed billions of proton-antiproton collisions at Fermilab’s Tevatron particle collider, and found a 1 percent excess of pairs of muons over pairs of antimuons produced in the decays of B-mesons. Muons, which occur naturally in cosmic rays, are fundamental particles similar to electrons but 200 times heavier.
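As a rough illustration of what a “1 percent excess” means as an asymmetry measurement, the sketch below computes a like-sign dimuon charge asymmetry from event counts. The counts are hypothetical placeholders chosen to give roughly a 1 percent excess; the real DZero analysis involves background subtraction and systematic corrections not shown here.

    # Minimal sketch of a charge asymmetry: excess of like-sign muon pairs
    # over antimuon pairs. Event counts are hypothetical placeholders.
    from math import sqrt

    n_minus = 505_000   # hypothetical mu- mu- (muon) pairs
    n_plus  = 495_000   # hypothetical mu+ mu+ (antimuon) pairs

    asymmetry = (n_minus - n_plus) / (n_minus + n_plus)
    stat_error = 1 / sqrt(n_minus + n_plus)   # rough statistical uncertainty

    print(f"A = {asymmetry:.2%} +/- {stat_error:.2%}")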

Ellison said this result is exciting and surprising since it is not predicted in the Standard Model, the comprehensive theory that explains the interactions between all fundamental elementary particles.

He explained that the dominance of matter we observe in the universe is possible only if there are differences, called “CP violation,” in the behavior of particles and antiparticles.

“The reason this is important is that CP violation, the fact that physics does not look the same when particles and antiparticles are interchanged and all processes are mirror-reflected, is one of the three ingredients identified by Andrei Sakharov, the famous Soviet physicist and dissident, needed to explain the matter-antimatter asymmetry observed in our universe,” Ellison said. “That the universe is completely dominated by matter is a mystery because the Big Bang theory predicts that there should have been equal amounts of matter and antimatter.”

According to Ellison and his DZero peers, the explanation for the dominance of matter in the present-day universe is that CP violation treated matter and antimatter differently, allowing the early universe to evolve into one completely dominated by matter.

“CP violation as predicted in the Standard Model has been observed before but at a level many orders of magnitude too small to explain the asymmetry,” Ellison said. “This is the first evidence for anomalous CP violation. If confirmed by further measurements, this points to new physics phenomena in particle interactions that give rise to the matter-antimatter asymmetry, and may be another step forward in our understanding of why matter dominates over antimatter in the universe.”

The DZero result is based on data collected over the last eight years by the DZero experiment at Fermilab. Besides Ellison, the UC Riverside co-authors of the paper, submitted for publication in Physical Review D, are Ann Heinson, Liang Li, Mark Padilla, and Stephen Wimpenny.

DZero is an international experiment of about 500 physicists from 86 institutions in 19 countries. It is supported by the U.S. Department of Energy, the National Science Foundation and a number of international funding agencies.
