Bacteria Sent To ISS Survive For 500+ Days

Bacteria taken from a small fishing hamlet in the UK and placed on the outside of the International Space Station (ISS) for more than a year not only survived the exposure but continue to thrive in laboratory conditions, according to a recent BBC News report.

In an August 23 story by Science Correspondent Jonathan Amos, the microbes were taken from cliffs at the English village of Beer and placed on the exterior of the ISS “to see how they would cope in the hostile conditions that exist above the Earth’s atmosphere.”

Some of the bacteria survived for a reported 553 days in outer space as part of an experiment designed “to find microbes that could be useful to future astronauts who venture beyond low-Earth orbit to explore the rest of the Solar System,” Amos reported in his Monday article.

“It has been proposed that bacteria could be used in life-support systems to recycle everything,” Dr. Karen Olsson-Francis of the Open University, where the surviving microbes continue to be monitored, told the BBC. “There is also the concept that if we were to develop bases on the Moon or Mars, we could use bacteria for ‘bio-mining’ – using them to extract important minerals from rocks.”

The bacteria traveled from Beer on and in small chunks of limestone, which were mounted on the European Space Agency’s (ESA) Technology Exposure Facility on the exterior of the ISS. Once there, Amos says, they were exposed to cosmic rays, dramatic temperature shifts, and powerful UV rays, and the vacuum of space would have drained the rocks of their water.

Now, he says, Olsson-Francis and her colleagues are trying to determine exactly how the bacteria managed to survive such harsh conditions.

“Bacterial spores have been known to endure several years in orbit but this is the longest any cells of cyanobacteria, or photosynthesizing microbes, have been seen to survive in space,” Amos said, adding that the bacteria in question “resemble closely a group of cyanobacteria known as Gloeocapsa… They have a thick cell wall and this could be part of the reason they survived so long in space.”

Image 2: The cliffs at Beer in Devon, England. Courtesy Wikipedia

Girls vs. Boys: Social Butterflies vs. Wallflowers

(Ivanhoe Newswire) — Is your son or daughter the social butterfly or wallflower of the class? A new study by a Michigan State University psychologist suggests that their sex might be the reason why.

In one of the first studies to look at how girls’ and boys’ peer networks develop across grades, psychologists find that the two sexes aren’t as different as we think they are.

“Although we tend to think that girls’ and boys’ peer groups are structured differently, these differences disappear as children get older,” Jennifer Watling Neal, assistant professor of psychology, was quoted as saying.

This may be due to an increased interaction with the opposite sex.

“Younger boys and girls tend to play in same-sex peer groups, but every parent can relate to that moment when their son or daughter suddenly takes an interest, whether social or romantic, in the opposite sex,” Neal added.

The study examined peer relationships of third through eighth grade students at a school in Chicago. Neal found that girls in the younger grades tend to flock together in smaller, more intimate groups while boys’ groups were progressively larger.

Neal concluded that the difference disappeared by the eighth grade, though further research following a single group of children over time is needed to confirm the results.

SOURCE: Journal of Social and Personal Relationships, August 2010

Google Knows What It ‘Likes’, Buys It

Google announced that it has purchased Like.com, which offers a visual search engine for retail products.

The acquisition comes years after rumors emerged that Google was interested in the company, which was initially known as Riya when it was founded in 2004.

Like.com CEO and co-founder Munjal Shah said on the company’s home page about the deal Friday:  “We see joining Google as a way to supersize our vision and supercharge our passion.”

According to Like.com, the company provides a visual search engine focused on shoes, clothes, jewelry and decor.

“We’ve developed technology that lets us understand visually what terms like ‘red high-heeled pumps’ and ‘floral patterned sleeveless dress’ mean and created algorithms to understand whether those pumps complement or clash with that dress,” the company’s description said.

Like.com also owns Covet.com, which is described as an online personal shopper for fashion products, and virtual fashion studio Couturious.com.

Shah did not provide any specifics of the financial details in his note, although according to TechCrunch, the deal is worth somewhere around $100 million.

Aerospace Work Force Facing Retirement

U.S. aerospace companies are encouraging students to pursue technical careers to help replace an expected flood of worker retirements.

Companies are sponsoring student robotics competitions, forming partnerships with technical schools and calling for higher national education standards in an attempt to bring new urgency to an emerging U.S. shortage of workers in science, technology, engineering and mathematics (STEM).

“If we can work on retention and we can work on the excitement of STEM or engineering, then we can change the equation,” William Swanson, chief executive of Raytheon Co, said in an interview with Reuters.

A 2010 study by Aviation Week magazine found that 19 percent of the industry’s employees are now at retirement age. The publication said that figure will jump to over 30 percent in 2012 and nearly 40 percent by 2014.

According to a 2008 report from the Aerospace Industries Association trade group, only about 70,000 bachelor’s degrees in engineering are awarded in the U.S. each year.

The problem is a growing one for aerospace and defense companies because many engineering jobs in the field are only open to U.S. citizens due to security requirements.

“I have a lot of positions, but a lot of times I may not be able to fill them because I don’t have U.S. citizens,” Lisa Kollar, executive director of career services at Embry-Riddle Aeronautical University, one of the top U.S. schools for aerospace recruitment, told Reuters.

Swanson said the shortfall in engineering-trained talent could pose a national security danger because it limits the ability of the U.S. to be innovative.

“I have nothing against the service industry,” Swanson said. “I just don’t see our country being a great country if we’re flipping hamburgers and selling coffee.”

Raytheon is targeting middle-school students because that is the age when, research shows, children lose interest in science and math. The company created MathMovesU, a program that includes an interactive website, contests, live events, scholarships and tutoring to help send the message that math and science are cool and lead to interesting careers.

Aerospace companies are calling for better training and pay for math and science teachers.

“The gestation period for fixing this may be three, four, five, 10 years out before you start to see the curve change,” Swanson said.

Clay Jones, the CEO of avionics maker Rockwell Collins Inc., told Reuters that there is not enough U.S. technical talent to meet the need. Aerospace companies may eventually have no choice but to recruit more workers in places that produce STEM-trained personnel, such as India and China.

According to the Aerospace Industries Association report, five percent of U.S. bachelor’s degrees are in engineering, compared with 20 percent in Asia.

“It’s not so much that the source of supply is not there,” Jones told Reuters. “It’s that the source of supply in the United States may not be there.”

Bladder Cancer Outlook Good If Caught Early

Bladder cancer has a high cure rate when caught early, but knowing the warning signs and risk factors for the disease is especially important because routine screenings are not common, said a Baylor College of Medicine urologic oncologist.

“Bladder cancer is a highly treatable, highly curable disease particularly before it invades the muscle,” said Dr. Seth P. Lerner, professor in the Scott Department of Urology and the NCI-designated Dan L. Duncan Cancer Center at BCM and Beth and Dave Swalm Chair in Urologic Oncology.

Warning signs

While bladder cancer is typically a disease of people in their 60s and 70s, Lerner said symptoms can occur in much younger people. Painful urination and/or blood in the urine are the most common presenting symptoms.

“This disease is more common in men than women,” said Lerner. “However, women have a tendency to have more advanced forms of the cancer because early warning signs, such as blood in urine, are frequently dismissed as urinary tract infections or voiding dysfunction.”

Never ignore blood in the urine, Lerner emphasized. “Get it checked out every single time.”

Risk factors

Risk factors for the disease include working in certain occupations and cigarette smoking.

“Cigarette smoking is associated with 50 percent of all bladder cancers, while occupational exposure accounts for another 20 percent.”

Research has identified a list of occupations that may involve exposure to carcinogens linked to bladder cancer. These occupations include dry cleaning, the rubber and aluminum industries, magenta and auramine manufacturing, petroleum refining and petrochemical manufacturing, and barbers and hairdressers working with permanent dye.

Bladder cancer forms in the lining of the bladder after carcinogens in urine concentrate and sit in the bladder. Over a long period of time, this exposure to the bladder lining causes the changes in the cells that lead to cancer, Lerner said.

Management and treatment

“About 75 percent of cases when first diagnosed have a tumor that is non-muscle invasive and can be managed quite easily,” said Lerner. “We can remove the tumor with a cystoscopic procedure using a lighted instrument to scrape out the tumor.”

These patients remain at a high risk for developing tumors in the future, Lerner said. They require frequent follow-up cystoscopy tests and urine biomarker tests performed in the outpatient clinic.

“Bladder cancer requires long-term follow-up and surveillance to spot and treat a potential recurrence,” Lerner emphasized.

Highest-risk patients may be treated with chemotherapy or immunotherapy delivered into the bladder by a catheter passed through the urethra, a procedure called intravesical therapy.

“The other 25 percent of patients have muscle invasive tumors that may spread to other parts of the body,” said Lerner. “These patients typically have their bladders removed or reconstructed or undergo chemotherapy and radiation and are also followed very closely.”

Major challenge

Lerner said a major challenge in fighting bladder cancer is the lack of data to support screening for bladder cancer in a broader population. He is currently enrolling patients for a study to test screening in a high-risk population that includes male cigarette smokers 60 years or older with a 40 pack-year smoking history (packs of cigarettes smoked per day multiplied by the number of years smoked; one pack a day for 40 years, for example, equals 40 pack-years).

“In 2010, an estimated 70,530 adults (52,760 men and 17,770 women) will be diagnosed with bladder cancer in the United States,” said Lerner. “It is estimated that 14,680 people (10,410 men and 4,270 women) will die from this disease.”

Cosmic Accelerators Found In Our Galaxy

Physicists from UCLA and Japan have discovered evidence of “natural nuclear accelerators” at work in our Milky Way galaxy, based on an analysis of data from the world’s largest cosmic ray detector.

The research is published Aug. 20 in the journal Physical Review Letters.

Physicists had believed that cosmic rays of the highest energies come from remote galaxies containing enormous black holes capable of consuming stars and accelerating protons to energies comparable to that of a bullet shot from a rifle. These protons, referred to individually as “cosmic rays,” travel through space and eventually enter our galaxy.

But earlier this year, physicists using the Pierre Auger Observatory in Argentina, the world’s largest cosmic ray observatory, published a surprising discovery: Many of the energetic cosmic rays found in the Milky Way are not actually protons but nuclei, and the higher the energy, the greater the nuclei-to-proton ratio.

“This finding was totally unexpected because the nuclei, more fragile than protons, tend to disintegrate into protons on their long journey through space,” said Alexander Kusenko, UCLA professor of physics and astronomy and co-author of the Physical Review Letters research. “Moreover, it is very unlikely that a cosmic accelerator of any kind would accelerate nuclei better than protons at these high energies.”

The resolution to the paradox of the nuclei’s origin comes from an analysis by Kusenko; Antoine Calvez, a UCLA graduate student of physics who is part of Kusenko’s research group; and Shigehiro Nagataki, an associate professor of physics at Japan’s Kyoto University. They found that stellar explosions in our own galaxy can accelerate both protons and nuclei. But while the protons promptly leave the galaxy, the heavier and less mobile nuclei become  trapped in the turbulent magnetic field and linger longer.

“As a result, the local density of nuclei is increased, and they bombard Earth in greater numbers, as seen by the Pierre Auger Observatory,” said Kusenko, who is also a senior scientist at the University of Tokyo’s Institute for Physics and Mathematics of the Universe (IPMU).

These ultra-high-energy nuclei have been trapped in the web of galactic magnetic fields for millions of years, and their arrival directions as they enter the Earth’s atmosphere have been “completely randomized by numerous twists and turns in the tangled field,” he said.

“When the data came out, they were so unexpected that many people started questioning the applicability of known laws of physics at high energy,” Kusenko said. “The common lore has been that all ultra-high-energy cosmic rays must come from outside the galaxy. The lack of plausible sources and the arrival-direction anisotropy (an uneven distribution of arrival directions across the sky) have been used as arguments in favor of extragalactic sources.

“However, since the cosmic rays in question turned out to be nuclei, the galactic field can randomize their arrival directions, taking care of the anisotropy puzzle. As for the plausible sources, the enormous stellar explosions responsible for gamma ray bursts can accelerate nuclei to high energies. When we put these two together, we knew we were on the right track. Then we calculated the spectra and the asymmetries, and both agreed with the data very well.”

Kusenko hopes this research will enhance the understanding of “astrophysical archeology.”

“We can study the collective effects of gamma ray bursts that have taken place in the past of our own galaxy over millions of years,” he said.

Stellar explosions capable of accelerating particles to ultra-high energies have been seen in other galaxies, where they produce gamma-ray bursts. The new analysis provides evidence that such powerful explosions occur in our galaxy as well, at least a few times per million years, Kusenko said.

Kusenko and his colleagues predict that the protons escaping from other galaxies should still be seen at the highest energies and should point back to their sources, providing Pierre Auger Observatory with valuable data.

The Pierre Auger Observatory records cosmic ray showers through an array of 1,600 particle detectors placed about one mile apart in a grid spread across 1,200 square miles, complemented by specially designed telescopes. The observatory is named for the French physicist Pierre Victor Auger, who discovered extensive cosmic-ray air showers in the late 1930s.

Kusenko’s research was federally funded by the U.S. Department of Energy and NASA. Nagataki’s research was funded by the Japan Society for the Promotion of Science.

Image Credit NASA/JPL-Caltech/S. Stolovy (SSC/Caltech)

Eclipsing Pulsar Promises Clues To Crushed Matter

Astronomers using NASA’s Rossi X-ray Timing Explorer (RXTE) have found the first fast X-ray pulsar to be eclipsed by its companion star. Further studies of this unique stellar system will shed light on some of the most compressed matter in the universe and test a key prediction of Einstein’s relativity theory.

The pulsar is a rapidly spinning neutron star — the crushed core of a massive star that long ago exploded as a supernova. Neutron stars pack more than the sun’s mass into a ball nearly 60,000 times smaller. With estimated sizes between 10 and 15 miles across, a neutron star would just span Manhattan or the District of Columbia.
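
As a quick, back-of-the-envelope check of that size comparison (the solar diameter of roughly 864,000 miles used here is an assumed figure, not from the article):

$$ \frac{D_\odot}{D_{\text{neutron star}}} \approx \frac{864{,}000\ \text{miles}}{10\text{–}15\ \text{miles}} \approx 60{,}000\text{–}90{,}000, $$

so a 15-mile-wide neutron star is indeed close to 60,000 times smaller across than the sun.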

“It’s difficult to establish precise masses for neutron stars, especially toward the higher end of the mass range theory predicts,” said Craig Markwardt at NASA’s Goddard Space Flight Center in Greenbelt. “As a result, we don’t know their internal structure or sizes as well as we’d like. This system takes us a step closer to narrowing that down.”

Known as Swift J1749.4-2807 — J1749 for short — the system erupted with an X-ray outburst on April 10. During the event, RXTE observed three eclipses, detected X-ray pulses that identified the neutron star as a pulsar, and even recorded pulse variations that indicated the neutron star’s orbital motion.

J1749 was discovered in June 2006, when a smaller eruption brought it to the attention of NASA’s Swift satellite. Observations by Swift, RXTE and other spacecraft revealed that the source was a binary system located 22,000 light-years away in the constellation Sagittarius and that the neutron star was actively capturing, or accreting, gas from its stellar partner. This gas gathers into a disk around the neutron star.

“Like many accreting binary systems, J1749 undergoes outbursts when instabilities in the accretion disk allow some of the gas to crash onto the neutron star,” said Tod Strohmayer, RXTE’s project scientist at Goddard.

The pulsar’s powerful magnetic field directs infalling gas onto the star’s magnetic poles. This means that the energy release occurs in hot spots that rotate with the neutron star, producing fast X-ray pulses. How fast? J1749 is spinning 518 times a second — a city-sized sphere rotating as fast as the blades of a kitchen blender.

In addition, the pulsar’s orbital motion imparts small but regular changes in the frequency of the X-ray pulses. These changes indicate that the stars revolve around each other every 8.8 hours.

During the week-long outburst, RXTE observed three periods when J1749’s X-ray emission briefly disappeared. Each eclipse, which lasts 36 minutes, occurs whenever the neutron star passes behind the normal star in the system.

“This is the first time we’ve detected X-ray eclipses from a fast pulsar that is also accreting gas,” Markwardt said. “Using this information, we now know the size and mass of the companion star with unprecedented accuracy.”

By comparing RXTE observations across the theoretical mass range for neutron stars, the astronomers determined that J1749’s normal star weighs in with about 70 percent of the sun’s mass — but the eclipses indicate that the star is 20 percent larger than it should be for its mass and apparent age.

“We believe that the star’s surface is ‘puffed up’ by radiation from the pulsar, which is only about a million miles away from it,” Markwardt explained. “This additional heating probably also makes the star’s surface especially disturbed and stormy.”

Writing about their findings in the July 10 issue of The Astrophysical Journal Letters, Markwardt and Strohmayer note that they have all but one orbital variable needed to nail down the mass of the pulsar, which is estimated to be between about 1.4 and 2.2 times the sun’s mass.

“We need to detect the normal star in the system with optical or infrared telescopes,” Strohmayer said. “Then we can measure its motion and extract the same information about the pulsar that the pulsar’s motion told us about the star.”

However, a pioneering X-ray measurement well within the capability of RXTE may make a hunt for the star irrelevant.

One consequence of relativity is that a signal — such as a radio wave or an X-ray pulse — experiences a slight timing delay when it passes very close to a massive object. First proposed by Irwin Shapiro at the Massachusetts Institute of Technology (MIT) in Cambridge, Mass., in 1964 as a new test for predictions of Einstein’s relativity, the delay has been demonstrated repeatedly using radio signals bounced off of Mercury and Venus and experiments involving spacecraft communications.

“High-precision measurements of the X-ray pulses just before and after an eclipse would give us a detailed picture of the entire system,” Strohmayer said. For J1749, the predicted Shapiro delay is 21 microseconds, or 10,000 times faster than the blink of an eye. But RXTE’s superior timing resolution allows it to record changes 7 times faster.
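
For orientation, here is a rough sketch of the scale behind that 21-microsecond figure, assuming the standard circular-orbit form of the Shapiro delay (the formula and the illustrative numbers below are not given in the article):

$$ \Delta t_S(\phi) \;=\; -\,\frac{2 G m_c}{c^{3}}\,\ln\!\left(1 - \sin i\,\sin\phi\right), \qquad \frac{2 G m_c}{c^{3}} \approx 9.9\ \mu\text{s}\times\frac{m_c}{M_\odot} \approx 6.9\ \mu\text{s} \ \ \text{for}\ m_c \approx 0.7\,M_\odot, $$

where $m_c$ is the companion star’s mass, $i$ the orbital inclination and $\phi$ the orbital phase. Because J1749 is an eclipsing system, $\sin i$ is very nearly 1, and the logarithm amplifies the delay just outside eclipse into the tens of microseconds quoted above.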

With only three eclipses observed during the 2010 outburst, RXTE didn’t capture enough data to reveal a large delay. However, the measurements set a limit on how massive the normal star can be. The study shows that if the star’s mass was greater than 2.2 times the sun’s, RXTE would have seen the delay.

“We believe this is the first time anyone has set realistic limits for this effect at X-ray wavelengths outside of our solar system,” Markwardt noted. “The next time J1749 has an outburst, RXTE absolutely could measure its Shapiro delay.”

Launched in late 1995, RXTE is second only to Hubble as the longest serving of NASA’s currently operating astrophysics missions. RXTE discovered the first accreting millisecond pulsar — SAX J1808.4-3658 — in 1998 and continues to provide a unique observing window into the extreme environments of neutron stars and black holes.

By Francis Reddy, NASA’s Goddard Space Flight Center

Image Caption: J1749 is the first accreting millisecond pulsar to undergo eclipses. The pulsar and its companion star are separated by 1.22 million miles, or about five times the distance between Earth and the moon. Irradiated by the pulsar’s intense X-rays, the star’s outer layers puff up to make it about 20 percent larger than a star of its mass and age should be. This artist’s rendering includes additional data about the system. Credit: NASA/GSFC

New Mechanisms Of Tumor Resistance To Targeted Therapy In Lung Cancer Are Discovered

CSHL-led team demonstrates that increased IL-6 secretion can lead to decreased sensitivity to Tarceva

One of the most tantalizing developments in anti-cancer therapy over recent years has been the advent of targeted treatments, which have proven highly effective in holding aggressive cancers at bay in certain patients, although typically only for a limited period of time.

A team led by Raffaella Sordella, Ph.D., an Assistant Professor at Cold Spring Harbor Laboratory (CSHL), today published results of a study that suggests new ways in which tumor cells develop resistance to one of the most successful targeted therapies, the small-molecule drug Tarceva (erlotinib). Since its approval by the U.S. Food and Drug Administration in 2004, Tarceva has produced dramatic, albeit impermanent, remissions in a subset of patients with notoriously difficult-to-treat cancers including non-small cell lung cancer (NSCLC) and pancreatic cancer.

Tarceva homes in on a specific area, or domain, of a very common cell-membrane receptor called EGFR, the epidermal growth-factor receptor. The drug has been shown to be effective in some NSCLC patients with specific oncogenic (cancer-promoting) mutations of EGFR. The drug molecule physically occupies a tiny pocket in the receptor structure–within an area called the tyrosine kinase domain, located just beneath the surface of the cell. This prevents the receptor from initiating a cascade of internal signals that can cause cellular growth to careen out of control, something like a switch stuck in the “on” position.

Over recent years Sordella and colleagues have focused on the question of how NSCLC tumor cells develop resistance to Tarceva. Specifically, they wanted to move beyond known explanations accounting for about half of the observed resistance. “Our colleagues in the field have already shown that secondary mutations of EGFR or amplification of a gene called c-MET are responsible for about 50% of cases of Tarceva resistance. We wanted to explain the other cases, in which the mechanism of resistance simply was not known.”

Sordella’s team, which was joined by investigators from Weill Cornell Medical College of Cornell University and the Boltzmann Institute for Cancer Research in Vienna, found what they regard as “compelling evidence” of other modes of resistance. One pertains to a population of NSCLC cells within untreated tumors that they found to be “intrinsically resistant” to Tarceva. These are cells, accounting for about 3% of the tumor samples studied, that exhibit features suggestive of something scientists call EMT: the transition of normal epithelial cells to mesenchymal cells, which have an increased metastatic potential.

This subpopulation of tumor cells was observed to secrete elevated amounts of TGF-β, a type of growth factor that plays a role in cell differentiation, development, and regulation of the immune system. Up-regulation of TGF-β in these tumor cells resulted in increased secretion of a signaling molecule called IL-6, which among other things is also involved in immune responses. This subset of tumor cells with up-regulated TGF-β and increased IL-6 secretion was observed to resist treatment with Tarceva, independently of the EGFR pathway.

“This led us to the idea that inflammation might be one of the factors that could reduce a lung tumor cell’s sensitivity to Tarceva,” Sordella says. Since IL-6 and TGF-β are actively produced during the general inflammatory process, the team was led to explore whether inflammation mediated by non-cancer cells in the tumor microenvironment might also play a role in resistance to Tarceva. Using mouse models, the team was able to show precisely that.

“Why IL-6 seems to be required for the survival of the cancer cells is not yet clear,” Sordella notes. “We hypothesize that it plays a role in protecting cells from programmed cell-death, or apoptosis. We expect to investigate that possibility in future studies.”

Depressed Teen Boys Could Use A Little More Oily Fish

A new Japanese study suggests that eating more oily fish like sardines, salmon and yellowtail could help teenage boys feel less depressed.

However, the same does not apply for teenage girls.

Omega-3 fatty acids, including EPA and DHA, are found predominantly in oily fish.  Many researchers have wondered whether increased consumption of these foods could lower the risk of depression.  However, studies performed among young adults have yielded inconclusive results.

Investigators had yet to look for the potential link in adolescents, an age group prone to the debilitating problem. Kentaro Murakami of the University of Tokyo and colleagues therefore analyzed the diets and rates of depression in more than 6,500 Japanese junior high school students between the ages of 12 and 15.

They reported in the journal Pediatrics that overall, 23 percent of the boys and 31 percent of the girls suffered from symptoms of depression, including feelings of worthlessness, hopelessness and sleep disturbances.

The investigators found that boys who ate the most fish had 27 percent lower odds of being depressed compared with those in the bottom fifth of fish consumption.

Similar differences were seen when specifically looking at the EPA and DHA content of the fish consumed. 

Meanwhile, fish consumption had no apparent effect among the girls who took part in the study.

The researchers said the difference between girls and boys is difficult to explain, although they point to a few possibilities, such as a stronger genetic role for depression in women compared to men.

The investigators also said their findings do not provide enough evidence to determine whether fish intake actually lowers the risk of feeling blue. For example, it might be that those who are depressed simply eat less fish.

The researchers concluded that boosting the intake of fish, EPA and DHA “may be an important strategy for the prevention of depression.”

Drinking Beer Could Cause Psoriasis In Women

Women who drink certain types of beer may be more likely to develop psoriasis, according to a new study published Monday in the Archives of Dermatology, a JAMA/Archives journal.

In a study that lasted from 1991 through 2005, researcher and dermatologist Dr. Abrar A. Qureshi from Harvard Medical School and Brigham and Women’s Hospital in Boston looked at more than 1,000 cases of psoriasis among women ages 27 to 44. They discovered that the risk of developing the skin condition was 72-percent greater in women who averaged at least 2.3 alcoholic drinks each week.

Next, they looked at the type of alcohol consumed, and discovered that while light beer, red and white wine, and other forms of liquor were not associated with an increased risk for the disease, regular (non-light) beer was. In fact, women who consumed at least five regular beers each week were nearly twice as likely (1.8 times) to be diagnosed with psoriasis.

“Nonlight beer was the only alcoholic beverage that increased the risk for psoriasis, suggesting that certain nonalcoholic components of beer, which are not found in wine or liquor, may play an important role in new-onset psoriasis,” Dr. Qureshi wrote in his paper, entitled “Alcohol Intake and Risk of Incident Psoriasis in U.S. Women.”

“One of these components may be the starch source used in making beer. Beer is one of the few nondistilled alcoholic beverages that use a starch source for fermentation, which is commonly barley,” he added. “This differs from wine that uses a fruit source (grapes) for fermentation. Some types of liquors such as vodka may use a starch source for fermentation; however, these starches are physically separated from the liquor during distillation. Starch sources such as barley contain gluten, which has been shown to be associated with psoriasis.”

“Lower intake of nonlight beer and intake of other types of alcoholic beverages do not appear to influence the risk of developing psoriasis,” Dr. Qureshi concluded. “Women with a high risk of psoriasis may consider avoiding higher intake of nonlight beer. We suggest conducting further investigations into the potential mechanisms of nonlight beer inducing new-onset psoriasis.”

FDA Proposes Withdrawal Of Low Blood Pressure Drug

Companies failed to provide evidence of clinical benefit of midodrine hydrochloride

The U.S. Food and Drug Administration today proposed to withdraw approval of the drug midodrine hydrochloride, used to treat the low blood pressure condition orthostatic hypotension, because required post-approval studies that verify the clinical benefit of the drug have not been done.

Patients who currently take this medication should not stop taking it and should consult their health care professional about other treatment options.

The drug, marketed as ProAmatine by Shire Development Inc. and as a generic by others, was approved in 1996 under the FDA’s accelerated approval regulations for drugs that treat serious or life-threatening diseases. That approval required that the manufacturer verify clinical benefit to patients through post-approval studies.

To date, neither the original manufacturer nor any generic manufacturer has demonstrated the drug’s clinical benefit, for example, by showing that use of the drug improved a patient’s ability to perform life activities.

Orthostatic hypotension is a condition in which patients are unable to maintain blood pressure in the upright position and, therefore, become dizzy or faint when they stand up.

“We’ve worked continuously with the drug companies to obtain additional data showing the drug’s clinical benefits to patients,” said Norman Stockbridge, M.D., director of the Division of Cardiovascular and Renal Drugs in the FDA’s Center for Drug Evaluation and Research. “Since the companies have not been able to provide evidence to confirm the drug’s benefit, the FDA is pursuing a withdrawal of the product.”

The FDA today issued a Proposal to Withdraw Marketing Approval and Notice of Opportunity for a Hearing to the companies that manufacture midodrine. This is the first time the agency has issued such a notice for a drug approved under the FDA’s accelerated approval regulations. Shire, the maker of the brand name drug, must respond to the FDA in writing within 15 days to request a hearing. If the company fails to do so, the opportunity for a hearing will be waived. Sponsors of generic versions of midodrine will have 30 days to submit written comments on the notice. If, after considering any relevant submissions, the FDA continues to believe that withdrawal of approval is warranted, approval of all midodrine products, including generic versions, will be withdrawn.

Generic versions of the drug are made by Apotex Corp., Impax Laboratories Inc., Mylan Pharmaceuticals, Sandoz Inc., and Upsher-Smith Laboratories.

Under accelerated approval, a drug company may obtain approval of a drug used to treat a serious or life-threatening disease or condition based upon a surrogate endpoint. A surrogate endpoint is a clinical marker, such as a positive effect on blood pressure, believed to predict actual clinical benefits such as improved survival or decreased severity of the disease.

Drug companies that obtain approval under this program are required to conduct additional clinical trials after approval to confirm the drug’s benefit. If those trials fail to confirm clinical benefit to patients, or if the companies do not pursue the required confirmatory trials with due diligence, the FDA can withdraw approval of the drug using expedited procedures.

According to a database used by the FDA, about 100,000 patients in the United States filled prescriptions for brand or generic forms of midodrine in 2009.

The agency is working with the drug manufacturers to develop an expanded-access program to allow patients who currently receive the drug to continue to receive it. On a case-by-case basis, expanded-access programs allow the use of a drug outside of a clinical trial to treat patients with a serious or immediately life-threatening disease or a condition that has no comparable or satisfactory alternative treatment options.
 

Modern-day Space Race To The Moon

Russia and India are battling China in a modern-day space race to land an unmanned probe on the moon.

Russian and Indian engineers have started working together on a robotic mission to land on the moon in 2013.

China’s Chang’e-3 spacecraft is also racing to get to Earth’s celestial neighbor during the same time frame.

Whichever rover lands first would be the first human hardware to function on the lunar surface since the Soviet Luna-24 spacecraft returned to Earth with Moon soil samples in 1976.

The joint Russian and Indian mission will include an Indian-built lunar orbiter and a Russian-built landing platform, which would both be launched by a single Indian rocket.

The Russian-built four-legged platform will deliver about 77 pounds of scientific equipment to the Moon and release a 33-pound Indian-built robotic rover.

The tiny Indian electric vehicle is expected to provide scientific data, thanks to miniaturization of technology.

“We do understand that, first of all, it is a demonstration of the Indian presence on the surface of the Moon,” Aleksandr Zakharov, a leading scientist at the Space Research Institute (IKI) in Moscow told BBC News.

“However, it will have a TV camera onboard, and we also asked our Indian partners to include a miniature manipulator, so it could sample soil beyond the reach of the robotic arm of the (stationary Russian) lander.”

Zakharov told BBC that the rover and all of its scientific equipment is expected to be Indian-built, even though India is free to solicit foreign participation.

Russian space industry officials said that the country recently put the highest priority on the Luna-Resource project in order to fulfill the 2013 launch window.

Zakharov said the work on the lander was proceeding even more actively than on Russia’s own project of lunar exploration.

Russia is planning to finalize the selection of instruments, which will comprise the scientific payload aboard the stationary Luna-Resource lander.

Confirming the existence of lunar water became important for planetary scientists in the 1990s, after a U.S. probe found signs of water ice around the lunar poles.

If water is found on the Moon, it would be a major asset should humans ever attempt to establish a habitable base there.

According to Zakharov, a drill mechanism could penetrate as deep as 3 feet below the surface of the Moon, and with some luck achieve the pioneering feat of “touching” lunar water.

Russian and Indian scientists will work to carefully select landing sites for the mission in order to increase the chances of finding water on the Moon.

The lunar South Pole had already been singled out as a possible target for finding water ice close to the surface.

The selection process could be facilitated by data from India’s first lunar mission, Chandrayaan-1, which orbited the Moon in 2008.

Zakharov told BBC that landing at the poles of the Moon could be arranged so that it ensures the largely uninterrupted communications of the spacecraft with ground control. 

The Moon’s polar regions are largely an enigma to scientists, because previous lunar landings were limited to equatorial and middle latitudes.

The Luna Resource mission could improve the understanding of the Moon’s internal composition and its orbital movements with the help of a seismometer and a laser reflector.

A radio beacon, which could facilitate lunar landings for future missions, is on the short list of potential payloads. Zakharov said that up to 10 scientific instruments could be placed aboard the lander.

Luna-Resource is expected to make maximum use of scientific hardware that has already been developed for the exploration of the Martian moons.

The Russian space agency expects that many of its traditional partners would consider participating in the new mission.

“We do talk to our usual partners in France, Germany, Sweden and other countries and we are counting on that,” Zakharov told BBC.

Household Cleaners Are Deadly to Children

The very things we use to keep our house clean and smelling fresh are dangerous in the hands of young children.

Fermi Finds Nova Generating Gamma-rays

Astronomers using NASA’s Fermi Gamma-ray Space Telescope have detected gamma-rays from a nova for the first time, a finding that stunned observers and theorists alike. The discovery overturns the notion that novae explosions lack the power to emit such high-energy radiation.

A nova is a sudden, short-lived brightening of an otherwise inconspicuous star. The outburst occurs when a white dwarf in a binary system erupts in an enormous thermonuclear explosion.

“In human terms, this was an immensely powerful eruption, equivalent to about 1,000 times the energy emitted by the sun every year,” said Elizabeth Hays, a Fermi deputy project scientist at NASA’s Goddard Space Flight Center in Greenbelt, Md. “But compared to other cosmic events Fermi sees, it was quite modest. We’re amazed that Fermi detected it so strongly.”

Gamma rays are the most energetic form of light, and Fermi’s Large Area Telescope (LAT) detected the nova for 15 days. Scientists believe the emission arose as a million-mile-per-hour shock wave raced from the site of the explosion.

A paper detailing the discovery will appear in the Aug. 13 edition of the journal Science.

The story opened in Japan during the predawn hours of March 11, when amateur astronomers Koichi Nishiyama and Fujio Kabashima in Miyaki-cho, Saga Prefecture, imaged a dramatic change in the brightness of a star in the constellation Cygnus. They realized that the star, known as V407 Cyg, was 10 times brighter than in an image they had taken three days earlier.

The team relayed the nova discovery to Hiroyuki Maehara at Kyoto University, who notified astronomers around the world for follow-up observations. Before this notice became widely available, the outburst was independently reported by three other Japanese amateurs: Tadashi Kojima, Tsumagoi-mura Agatsuma-gun, Gunma prefecture; Kazuo Sakaniwa, Higashichikuma-gun, Nagano prefecture; and Akihiko Tago, Tsuyama-shi, Okayama prefecture.

On March 13, Goddard’s Davide Donato was on-duty as the LAT “flare advocate,” a scientist who monitors the daily data downloads for sources of potential interest, when he noticed a significant detection in Cygnus. But linking this source to the nova would take several days, in part because key members of the Fermi team were in Paris for a meeting of the LAT scientific collaboration.

“This region is close to the galactic plane, which packs together many types of gamma-ray sources — pulsars, supernova remnants, and others in our own galaxy, plus active galaxies beyond them,” Donato said. “If the nova had occurred elsewhere in the sky, figuring out the connection would have been easier.”

The LAT team began a concerted effort to identify the mystery source over the following days. On March 17, the researchers decided to obtain a “target-of-opportunity” observation using NASA’s Swift satellite — only to find that Swift was already observing the same spot.

“At that point, I knew Swift was targeting V407 Cyg, but I didn’t know why,” said Teddy Cheung, an astrophysicist at the Naval Research Laboratory (NRL) in Washington, D.C., and the lead author of the study. Examining the Swift data, Cheung saw no additional X-ray sources that could account for what Fermi’s LAT was seeing.

V407 Cyg had to be it.

Half an hour later, Cheung learned from other members of the LAT team that the system had undergone a nova outburst, which was the reason the Swift observations had been triggered. “When we looked closer, we found that the LAT had detected the first gamma rays at about the same time as the nova’s discovery,” he said.

V407 Cyg lies 9,000 light-years away. The system is a so-called symbiotic binary containing a compact white dwarf and a red giant star about 500 times the size of the sun.

“The red giant is so swollen that its outermost atmosphere is just leaking away into space,” said Adam Hill at Joseph Fourier University in Grenoble, France. The phenomenon is similar to the solar wind produced by the sun, but the flow is much stronger. “Each decade, the red giant sheds enough hydrogen gas to equal the mass of Earth,” he added.

The white dwarf intercepts and captures some of this gas, which accumulates on its surface. As the gas piles on for decades to centuries, it eventually becomes hot and dense enough to fuse into helium. This energy-producing process triggers a runaway reaction that explodes the accumulated gas.

The white dwarf itself, however, remains intact.

The blast created a hot, dense expanding shell called a shock front, composed of high-speed particles, ionized gas and magnetic fields. According to an early spectrum obtained by Christian Buil at Castanet Tolosan Observatory, France, the nova’s shock wave expanded at 7 million miles per hour — or nearly 1 percent the speed of light.

The magnetic fields trapped particles within the shell and whipped them up to tremendous energies. Before they could escape, the particles had reached velocities near the speed of light. Scientists say that the gamma rays likely resulted when these accelerated particles smashed into the red giant’s wind.

“We know that the remnants of much more powerful supernova explosions can trap and accelerate particles like this, but no one suspected that the magnetic fields in novae were strong enough to do it as well,” said NRL’s Soebur Razzaque.

Supernova remnants endure for 100,000 years and affect regions of space thousands of light-years across.

Kent Wood at NRL compares astronomical studies of supernova remnants to looking at static images in a photo album. “It takes thousands of years for supernova remnants to evolve, but with this nova we’ve watched the same kinds of changes over just a few days,” he said. “We’ve gone from a photo album to a time-lapse movie.”

Francis Reddy, NASA’s Goddard Space Flight Center

Image 1: Fermi’s Large Area Telescope saw no sign of a nova in 19 days of data prior to March 10 (left), but the eruption is obvious in data from the following 19 days (right). The images show the rate of gamma rays with energies greater than 100 million electron volts (100 MeV); brighter colors indicate higher rates. Credit: NASA/DOE/Fermi LAT Collaboration

Image 2: Japanese amateur astronomers discovered Nova Cygni 2010 in an image taken at 19:08 UT on March 10 (4:08 a.m. Japan Standard Time, March 11). The erupting star (circled) was 10 times brighter than in an image taken several days earlier. The nova reached a peak brightness of magnitude 6.9, just below the threshold of naked-eye visibility. Credit: K. Nishiyama and F. Kabashima/H. Maehara, Kyoto Univ.

Image 3: This image from Steve O’Connor in St. Georges, Bermuda, shows the nova (red star, center) on March 17, about a week into the eruption. Credit: Steve O’Connor

Link Between Brain Activity, Childhood Anxiety

By studying young monkeys, scientists from the University of Wisconsin-Madison School of Medicine and Public Health believe they have found the areas of the brain that cause childhood anxiety.

The study, which was published in the August 12 edition of Nature, could potentially help doctors develop new methods for detecting and treating kids at-risk for developing anxiety-related disorders, according to a press release published by the university on Wednesday.

Young primates who had increased activity in the amygdala and anterior hippocampus were more likely to demonstrate anxious behavior, discovered Dr. Ned H. Kalin, psychiatry chair at the University of Wisconsin School of Medicine and Public Health, and his colleagues. In previous research, Kalin discovered similarities between anxious young monkeys and their human counterparts.

“We believe that young children who have higher activity in these brain regions are more likely to develop anxiety and depression as adolescents and adults, and are also more likely to develop drug and alcohol problems in an attempt to treat their distress,” Kalin said in the press release, which was posted to the university’s official website.

Kalin and his team analyzed 238 young rhesus monkeys and used PET scans to monitor their brain functions. They discovered that the monkeys that displayed anxious temperaments had higher recorded activity in the central nucleus of the amygdala and the anterior hippocampus, and that they could use brain activity readings to successfully predict each monkey’s level of anxiety.

“We’re really excited about the findings because we think that they have the potential to have a direct impact on how we understand these illnesses in children and hopefully we can come up with better ways to treat kids based on this information,” Kalin told Jon Lentz of Reuters in a telephone interview on Wednesday.

“Basically the idea and the hope would be we could intervene in a way that we could, more or less permanently, change a young child’s brain such that they would not have to struggle with these problems,” he added.

The Mysterious Roving Rocks Of Racetrack Playa

In a particularly parched region of an extraordinary planet, rocks big and small glide across a mirror-flat landscape, leaving behind a tangle of trails. Some rocks travel in pairs, their two tracks so perfectly in synch along straight stretches and around curves that they seem to be made by a car. Others go freewheeling, wandering back and forth alone and sometimes traveling the length of several football fields. In many cases, the trails lead right to resting rocks, but in others, the joyriders have vanished.

This may sound like an alien world, but it’s actually Racetrack Playa in Death Valley, Calif. Since the 1940s, researchers have documented trails here and on several other playas in California and Nevada. Seventeen undergraduate and graduate students from the Lunar and Planetary Sciences Academy (LPSA) at NASA’s Goddard Space Flight Center in Greenbelt, Md., traveled to the Racetrack and nearby Bonnie Claire playas this summer to investigate how these rocks move across the nearly empty flats.

Some rocks are thought to have moved nearly as fast as a person walks. But nobody has actually seen a rock in motion, and scientists haven’t deduced exactly how it happens. The easy explanations (assistance from animals, gravity, or earthquakes) were quickly ruled out, leaving room for plenty of study and irresistible speculation over the years.

“When you see these amazing rocks and trails,” says Mindy Krzykowski, an intern from the University of Alaska in Fairbanks, “you really get into coming up with your own ideas about what’s going on.”

Like being an explorer

After the two-hour drive from Beatty, Nev. (the nearest town with guest lodging), through the rugged mountains of the Amargosa and Panamint Ranges, visitors find that the playa looks impossibly flat. This is the nature of a playa: a lake bed that is dry most of the time. A visitor can truly see for miles, because Racetrack rises only about an inch over its 4-1/2-mile length.

“Around you is hot white cracked clay in all directions (you’re spinning to take it in),” wrote intern Emma McKinney of the Massachusetts Institute of Technology in Cambridge, Mass., in the blog for the LPSA trip. The mountains that hug both sides of the playa are so distant they look “like the backdrop to an old western [movie]…. No one speaks because they are a what-seems-to-be-a-million-miles-away-distance from you.”

Sporting sunhats and carrying lots of water, the students arrived around 7 a.m. for their day of data collecting. They broke into five teams, each led by a Goddard scientist, and took out their maps. Then they packed their equipment and headed in different directions in search of rocks and trails. Justin Wilde from the University of Wyoming in Laramie says, “It felt like a treasure hunt.”

For each rock and trail, the students recorded GPS coordinates and snapped photos. They dug up small sensors called Hygrochrons that had been buried (with the required permission of the National Park Service) three months earlier by Gunther Kletetschka, one of the trip leaders. The interns captured the electronically stored temperature and humidity data. They marked the trail boundaries by slipping colored pushpins into cracks in the clay and measured each track’s length, depth, and width. They confirmed earlier observations that some of the big rocks have moved farther than the small ones.

The interns also found small mounds at the ends of some trails. People speculate these were formed when the rocks ploughed into the clay and came to rest. Quite puzzling were the mounds at the ends of trails that had no rocks.

The students checked for unusual or changing magnetic fields. (Nope, no evidence of that.) One student conducted radiation measurements. (Nothing strange there, either.) They pulled out small levels to determine if the rocks might be moving along trails tilted ever-so-slightly downhill. Instead, “the general trend is that they move uphill,” as reported by Andrew Ryan of Slippery Rock University in Slippery Rock, Penn., in a talk that the LPSA group gave later at Goddard. “But the slope is so insignificant that we don’t think it would influence this movement.”

Two interns, Kynan Rilee from Princeton University in Princeton, N.J., and Gregory Romine, a graduate student at San Francisco State University, got the special assignment of photographing the playa’s skyline and correlating these pictures with GPS coordinates. Rilee later fed this information into a model that can be used to determine where on the playa a photo was taken even if no GPS coordinates were documented. Soon, any visitor to Racetrack Playa will be able to upload photos for analysis at www.racetrackplaya.org.

Rilee estimates that the two of them walked 10 miles total, just two guys and their gear, the isolation occasionally broken by their walkie-talkie or by returns to the base to get more water. “Being out there was almost like being an explorer,” he says.

Pondering the playa

The smorgasbord of data is needed because “what’s happening on Racetrack Playa is subtle and complicated. It’s not obvious right away which data is going to be important,” says Brian Jackson, one of the trip leaders. He and a colleague have been studying Racetrack Playa since 2006 and recently published a paper comparing the site to a dry lake bed on Saturn’s moon Titan.

For a while, speculation was that the Racetrack Playa rocks have properties that help them move. But the rocks are just dark dolomite boulders that tumbled down from the mountain highlands. (The tumble is not what made the trails; those came after the rocks found a home on the playa.) “Dolomite is relatively common, and the rocks themselves are not unusual,” explains Jackson. “It’s where the rocks are located that makes them special.”

Some of the rocks that have moved weigh less than a pound, but many are 25 to 30 pounds. One of the largest sliders, named Karen, has been estimated at 700 pounds. A powerful force is required to move rocks that big, and the obvious candidate is the fierce playa wind. “It’s surprising when you see how big some of these boulders are,” says Ryan. “You think, ‘How can something that big get blown around?’”

Wind speeds of 150 miles per hour or more would probably be necessary to move most of the rocks. The wind speeds that graze the playa’s surface are very fast, but not that fast, so the newer studies tend to ask how the friction between the rocks and the clay might be reduced.
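
A rough order-of-magnitude version of that force balance, using illustrative values that are assumptions rather than figures from the study: a rock starts to slide when the aerodynamic drag of the wind matches the friction holding it to the playa floor,

$$ \tfrac{1}{2}\,\rho\,C_d\,A\,v^{2} \;\approx\; \mu\,m\,g \quad\Longrightarrow\quad v \;\approx\; \sqrt{\frac{2\,\mu\,m\,g}{\rho\,C_d\,A}}. $$

Plugging in a 30-pound (about 14-kilogram) rock, a friction coefficient $\mu \approx 0.6$ on dry clay, an exposed cross-section $A \approx 0.03\ \text{m}^2$, air density $\rho \approx 1.2\ \text{kg/m}^3$ and $C_d \approx 1$ gives $v \approx 68\ \text{m/s}$, roughly the 150 mph cited above. Cutting $\mu$ to about 0.1 on slick, wet clay lowers the required wind speed by more than half, which is why the friction question matters so much.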

The interns evaluated several hypotheses that have been offered over the years. “Within each group, we traded the data we collected and analyzed it together, though we also had our own ideas,” Romine says.

“Teamwork is a key part of the program,” notes Cynthia Cheung, the LPSA principal investigator. “The students come at this challenging problem with different strengths and from different directions. That’s the way science is done.”

Investigators have thought for years that the friction is somewhat reduced when the playa’s surface gets wet and the top layer of clay transforms into a slick film of mud. Algae may lie dormant in the dry clay and bloom when the surface wets, further reducing the amount of friction. The students performed water-absorption experiments at Bonnie Claire Playa and found that the clay does get slippery. Even so, the students concluded that most rocks could not move without other help.

The aid probably comes in the form of ice: in this high desert, winter brings snow to the mountains. The meltwater washes downhill and collects in huge, shallow pools that spread across the playa and freeze at night. Decades ago, researchers proposed that big sheets of ice might envelop clusters of rocks, then catch the wind and drag the rocks around together. This might explain the cases in which two tracks run perfectly alongside each other.

When an experiment ruled out the possibility that this happens in all cases, the concept was refined. Now it’s thought that collars of ice can form around the lower parts of the stones, probably because the mass of a rock retains the cold. When more water moves in, the collar helps the rock partially float, so even a heavy rock might slide when the wind blows. The presence of ice collars could explain why some trails start narrow and get wider: the rock gradually sinks into the wet clay as its icy lifejacket melts away.

The interns found that the Hygrochron sensors buried about three-fourths of an inch deep registered freezing temperatures in March, and those buried a little more than 3 inches down registered wet conditions almost continuously in March and April. This is evidence that conditions are right to form ice collars, reported Clint Naquin of Louisiana State University in Shreveport and Devon Miller of West Virginia Wesleyan College in Buckhannon, in a talk presented on behalf of all interns.

Kletetschka is coordinating a research paper by the group that will present Hygrochron and other data and will suggest a slightly different mechanism for the rock movement. The rocks are still thought to be collared by ice, but the group has identified a new parameter that is critical in explaining why it is so easy to move the rocks and create trails. The paper will give the details, but the finding means that the wind speed doesn’t have to be as great to move the rocks. “This idea would also explain the trails that don’t have rocks,” Kletetschka says. “The trails were made by rocks whose larger parts were made from ice.”

The students also held an online event, coordinated by Goddard’s Maggie McAdam, to talk about their NASA experiences with roughly 450 kids in Boys and Girls Clubs of America nationwide.

Problems still to solve

For some LPSA interns, the playa trip was their first exposure to scientific field work. “I learned that I prefer applied work,” says Emily Kopp of the University of Wisconsin in Eau Claire. “I liked working with my hands, gathering my own data for my own research.”

Students who had participated in previous field work appreciated their very large roles this time in collecting and analyzing key experimental data. “An intern can get stuck doing something like a cross between going to class and doing busywork,” says Romine. “I feel like what I’m doing here is important.”

Ryan, who is an environmental science major, plans to keep working on the Racetrack Playa mystery. He devised a test bed packed with mud from Bonnie Claire Playa that can be used to probe whether ice collars can form and make rocks buoyant under controlled laboratory conditions. His one obstacle: finding a freezer big enough to hold the setup. But he is not deterred. “I can take this home with me to finish the work if I need to,” he says.

Leva McIntire, from Seattle Pacific University, has another hypothesis to test. She postulated that the rocks are moving by regelation, a process usually associated with glaciers and mountains. Regelation is caused by a difference in pressure on the two sides of an object. Water on one side remains liquid and leaks around to the other side, trapping air bubbles on the second side, where ice forms. McIntire thinks this could happen on the playa. She notes that some students observed bubble-like formations in the clay next to certain rocks. “This theory might explain how the big rocks move,” she says, “because it does not require floatation of the rocks.”

In addition to the field work, LPSA interns worked closely with mentors on active research projects at Goddard, such as developing a “lunar dust buster” to remove sticky dust from astronauts’ spacesuits, running Monte Carlo simulations of the atmosphere of Mercury and testing neutron and gamma-ray equipment that is designed to gather subsurface information about a planet without sample collection. The interns’ results were presented in a campus-wide poster session.

“I came to the program liking engineering,” notes Wilde, a chemical engineering major who explored possible pathways for forming water on the moon. “But I found the planetary research very enjoyable because I had so much liberty in my work.”

“I’ve always been interested in the formation of stars and planets, and here, I’ve seen that there are still tremendous opportunities for research,” says Krzykowski. “Not all the problems in planetary science have been solved.”

By Elizabeth Zubritsky, NASA’s Earth Science News Team

Image Caption: The rocks are famous because they move, leaving tell-tale trails in the clay, like this one. This happens at several playas in California and Nevada. There’s no record of anybody seeing one of the rocks move, and scientists aren’t quite sure how it happens. But they know that it’s not the work of animals, gravity, or earthquakes. Photo credit: NASA/GSFC/Cynthia Cheung

On the Net:

Archaeologists Make Monumental Discovery At Caerleon

Archaeologists from Cardiff University have made a major new discovery that will change the way we think about how Britain was conquered and occupied by the Roman army almost 2,000 years ago.

A complex of monumental buildings has been located outside the Roman fortress at Caerleon in South Wales, which is likely to lead to a complete rethink of one of the country’s most important Roman sites.

The discovery was fortuitous – students from the School of History, Archaeology and Religion were learning how to use geophysical equipment in fields outside the fortress that were not thought to have been extensively occupied in the Roman period. Ten days later, the students and their tutors had revealed the outlines of a series of huge buildings squeezed into the ground between the amphitheatre and the River Usk.

Dr Peter Guest, Senior Lecturer in Roman Archaeology at the School said: “Caerleon is one of the best-known Roman sites in Britain, so it was a great surprise to realize that we had found something completely new and totally unexpected. We’ve discovered the remains of several very large buildings shown remarkably clearly on the geophysical surveys completed by our students.

“It is difficult to be certain about what we have found because nothing like this has been discovered in Roman Britain before. The buildings’ ground plans suggest that they were of some importance. We think that they could have included markets, administrative buildings like town halls, bath-houses, store buildings, or even possibly temples. The biggest is enormous and must be one of the largest buildings known from Roman Britain. We can only guess what it was for, but at the moment we’re working on the idea that it had something to do with a harbor on the river, although it does look uncannily like a residential villa building – if that’s the case it was built on a palatial scale.

“The layout and scale of the buildings look like they should be at the center of a town or city,” continued Dr Guest, “but here at Caerleon we seem to have the central public spaces without the surrounding city – where are the people who would have used these buildings? Perhaps they were intended for the legionaries of the Second Augustan, but it is also possible that this is the first evidence for Roman plans to develop the fortress at Caerleon into a major settlement in western Britain – plans that for some reason never came to fruition. That’s the great thing about an archaeological discovery like this – lots of new questions that we just don’t have definite answers to at the moment.”

Caerleon is one of only three permanent legionary fortresses in Britain. The other fortresses at Chester and York are much more difficult to excavate because their remains are mostly buried under cities. Caerleon provides the only opportunity to study the Roman legions in Britain.

The new discovery was made as part of the School’s ongoing excavations at the site. Over the last four years, staff and students have uncovered eight previously unknown barrack blocks, three large granaries, a monumental metal workshop and a very large store building. On occasions, members of the public have also helped with the excavations.

Between 9 August and 17 September 2010, the team of archaeologists, along with staff and students from University College London (UCL), will be at Caerleon for their final season of excavation. Digging near the site of the new discovery, the team hopes to uncover yet more information about the fortress and its inhabitants.

“We will be spending six weeks in Caerleon this summer, excavating within the fortress walls with colleagues from UCL. We hope to reveal yet more information about the fortress and its legion and I am sure that our work will produce some really exciting results,” said Dr Guest. “The dig is open to the public and we’d be delighted to see people coming along with family and friends to find out more about the work we are doing.”

Guided tours of the Priory Field excavation will be available twice daily (11 am and 2.30 pm, except Mondays) and the excavation is open throughout the Summer Bank Holiday weekend (28th to 30th August 2010). As well as tours, there will be displays of the latest finds, a ‘mini-dig’, and the chance to talk to archaeologists about how they excavate ancient sites.

Image Caption: Reconstruction of Caerleon in the Roman period, showing the newly discovered monumental suburb (© 7reasons)

On the Net:

Does Cosmetic Surgery Help Body Dysmorphic Disorder?

New study examines commonly requested procedures and the impact on BDD symptoms

A new study finds that while many who suffer from body dysmorphic disorder (BDD) seek cosmetic procedures, only two percent of procedures actually reduced the severity of BDD. Despite this poor long-term outcome, physicians continue to provide requested surgeries to people suffering from BDD. The study was recently published in Annals of Plastic Surgery.

Katharine A. Phillips, MD, is the director of the body image program at Rhode Island Hospital and a co-author of the paper. Phillips says, “BDD is a psychiatric disorder characterized by preoccupation with an imagined or slight defect in appearance which causes clinically significant distress or functional impairment. A majority of these individuals believe they have an actual deformity that can be corrected by cosmetic treatments to fix these perceived defects rather than seeking psychiatric intervention.”

Phillips and her co-author, Canice Crerand, PhD, of The Children’s Hospital of Philadelphia, reported in previous studies that BDD appears relatively common among individuals who receive cosmetic surgery, with reported rates of 7 to 8 percent in cosmetic surgery patients in the United States. Even with the high frequency of those with BDD seeking and receiving cosmetic procedures, few studies have more specifically investigated the clinical outcomes of surgical and minimally invasive cosmetic treatments (such as chemical peels, microdermabrasion, and injectable fillers).

In their new study, the researchers report that in a small retrospective study of 200 individuals with BDD, 31 percent sought and 21 percent received surgical or minimally invasive treatment for BDD symptoms. Nearly all of these individuals continued to have BDD symptoms, and some actually developed new appearance preoccupations. They also note that in a survey of 265 cosmetic surgeons, 178 (65 percent) reported treating patients with BDD, yet only one percent of the cases resulted in BDD symptom improvement. Phillips, who is also a professor of psychiatry and human behavior at The Warren Alpert Medical School of Brown University, says, “These findings, coupled with reports of lawsuits and occasionally violence perpetrated by persons with BDD towards physicians, have led some to believe that BDD is a contraindication for cosmetic treatment.”

The researchers found that the most common surgical procedures sought were rhinoplasty and breast augmentation, while the most common minimally invasive treatments were collagen injections and microdermabrasion. Three quarters of all the requested procedures involved facial features. The findings also indicate that more than a third of patients received multiple procedures.

In terms of long-term outcomes from procedures, only 25 percent of the patients showed an improvement in their appraisal of the treated body part and showed a longer-term decreased preoccupation. However, as noted by co-author Crerand, “Only two percent of surgical or minimally invasive procedures led to longer-term improvement in overall BDD symptoms.”

The researchers also found that when treatment was sought, 20 percent of the procedures were not received. Cost was the most common reason for not receiving the requested procedure (30 percent), followed by physician refusal to perform the procedure (26 percent). Their findings also indicate that physicians were significantly less likely to refuse a surgical or minimally invasive treatment than other procedures (dermatological, dental and others). Phillips says, “This suggests that many surgeons were not aware of the patient’s BDD or do not consider BDD a contraindication to treatment. In a survey of 265 cosmetic surgeons, only 30 percent believed that BDD was always a contraindication to surgery.”

The researchers conclude, “This study provides new and more detailed information about receipt and outcome of surgical/minimally invasive procedures, and the findings indicate that there is a clear need to further investigate this topic in prospective studies. In the meantime, physicians need to be aware that psychiatric treatments for BDD such as serotonin reuptake inhibitors and cognitive behavioral therapy appear to be effective for what can be a debilitating disorder.”

On the Net:

Criminals Beware – Innovation Could Bring Super-accurate Crime Forensics

A new technology enabling tiny machines called micro electromechanical systems to “self-calibrate” could make possible super-accurate and precise sensors for crime-scene forensics, environmental testing and medical diagnostics.

The innovation might enable researchers to create a “nose-on-a-chip” for tracking criminal suspects, sensors for identifying hazardous solid or gaseous substances, as well as a new class of laboratory tools for specialists working in nanotechnology and biotechnology.

“In the everyday macroscopic world, we can accurately measure distance and mass because we have well-known standards such as rulers or weights that we use to calibrate devices that measure distances or forces,” said Jason Vaughn Clark, an assistant professor of electrical and computer engineering and mechanical engineering. “But for the micro- or nanoscopic worlds, there have been no standards and no practical ways for measuring very small distances or forces.”

The micro electromechanical systems, or MEMS, are promising for an array of high-tech applications.

Researchers previously have used various techniques to gauge the force and movement of tiny objects containing components so small they have to be measured on the scale of micrometers or nanometers, millionths or billionths of a meter, respectively. However, the accuracy of conventional techniques is typically off by 10 percent or more because of their inherent uncertainties, Clark said.

“And due to process variations within fabrication, no two microstructures have the same geometric and material properties,” he said.

These small variations in microstructure geometry, stiffness and mass can significantly affect performance.

“A 10 percent change in width can cause a 100 percent change in a microstructure’s stiffness,” Clark said. “Process variations have made it difficult for researchers to accurately predict the performance of MEMS.”

The new technology created by Clark, called electro micro metrology – or EMM – is enabling engineers to account for process variations by determining the precise movement and force that’s being applied to, or sensed by, a MEMS device.

“For the first time, MEMS can now truly self-calibrate without any external references,” Clark said. “That is, our MEMS are able to determine their unique mechanical performance properties. And in doing so, they become very accurate sensors or actuators.”

Research findings were detailed in two papers presented in June during a meeting of the Society of Experimental Mechanics in Indianapolis and at the Nanotech 2010 Conference and Expo in Anaheim, Calif. The work is based at the Birck Nanotechnology Center in Purdue’s Discovery Park.

MEMS accelerometers and gyroscopes currently are being used in commercial products, including the Nintendo Wii video game, the iPhone, walking robots and automotive airbags.

“Those MEMS work well because they don’t need ultra-high precision or accuracy,” Clark said. “It is difficult for conventional technology to accurately measure very small forces, such as van der Waals forces between molecules or a phenomenon called the Casimir effect that is due to particles popping in and out of existence everywhere in the universe.”

These forces are measured in “piconewtons” – a piconewton is a trillionth of a newton, roughly a trillionth of the weight of a medium-size apple.

“If we are trying to investigate or exploit picoscale phenomena like Casimir forces, van der Waals forces, the hydrogen bond forces in DNA, high-density data storage or even nanoassembly, we need much higher precision and accuracy than conventional methods provide,” Clark said. “With conventional tools, we know we are sensing something, but without accurate measurements it is difficult to fully understand the phenomena, repeat the experiments and create predictive models.”

Self-calibration also is needed because microdevices might be exposed to harsh environments or remain dormant for long periods.

“Say you have a MEMS sensor in the environment or on a space probe,” Clark said. “You want it to be able to wake up and recalibrate itself to account for changes resulting from temperature differences, changes in the gas or liquid ambient, or other conditions that might affect its properties. That’s when self-calibration technology is needed.”

EMM defines mechanical properties solely in terms of electrical measurements, which is different from conventional methods, he said.

For example, by measuring changes in an electronic property called capacitance, or the storage of electrical charge, Clark is able to obtain the microstructure’s shape, stiffness, force or displacement with high accuracy and precision, he said.
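
As a generic illustration of that principle, the textbook parallel-plate relation C = eps0 * A / d shows how a measured change in capacitance maps to a change in gap. To be clear, this is not Clark’s EMM formulation, which the article does not spell out; the electrode area and gap below are invented for the example.

    # Generic parallel-plate illustration of capacitance-to-displacement sensing.
    # This is NOT the EMM method described in the article; the geometry is assumed.
    EPS0 = 8.854e-12   # F/m, vacuum permittivity

    def gap_from_capacitance(capacitance_f, plate_area_m2):
        """Gap d (m) implied by a measured parallel-plate capacitance C = EPS0 * A / d."""
        return EPS0 * plate_area_m2 / capacitance_f

    area = (100e-6) ** 2     # 100 um x 100 um electrode (assumed)
    d0 = 2e-6                # 2 um initial gap (assumed)
    c0 = EPS0 * area / d0    # about 44 femtofarads
    c1 = 1.05 * c0           # suppose a 5 percent rise in capacitance is measured
    d1 = gap_from_capacitance(c1, area)
    print(f"inferred displacement: {(d0 - d1) * 1e9:.0f} nm")   # roughly 95 nm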

“We can measure capacitance more precisely than we can measure any other quantity to date,” he said. “That means we could potentially measure certain mechanical phenomena more precisely by using MEMS than we could by using conventional macroscale measurement tools.”

The researcher will use the new approach to improve the accuracy of instruments called atomic force microscopes, which are used by nanotechnologists.

“The atomic force microscope, which jumpstarted the nanotechnology revolution, is often used to investigate small displacements and forces,” Clark said. “But the operator of the tool cannot precisely say what distance or force is being sensed beyond one or two significant digits. And the typical operator knows even less about the true accuracy of their measurements.”

Purdue operates about 30 atomic force microscopes, and Clark’s research group is planning to teach users how to calibrate their instruments using the self-calibrating MEMS.

He also plans to use his new approach to create a miniature self-calibrating “AFM-on-a-chip,” dramatically shrinking the size and cost of the laboratory instrument.

“Such an advent should open the door to the nanoworld to a much larger number of groups or individuals,” he said.

Clark’s research group has fabricated and tested the first generation of self-calibrating MEMS, and repeatable results have shown the presence of the Casimir and van der Waals forces.

The research is funded by the National Science Foundation.

Image Caption: This image depicts a device that enables tiny micro electromechanical systems to “self-calibrate,” an advance that could make possible super-accurate sensors, a “nose-on-a-chip” for law enforcement and a new class of laboratory tools for specialists working in nanotechnology and biotechnology.

On the Net:

Drug Coverage Under Medicare Leads To Increased Use Of Antibiotics

Improved drug coverage under Medicare Part D has led to an increase in the use of antibiotics by seniors, particularly of brand-name and more expensive drugs, according to a University of Pittsburgh Graduate School of Public Health study. Published in the Aug. 23 issue of Archives of Internal Medicine and the first to explore spending on antibiotics under Medicare Part D, the study suggests recent changes in drug coverage improved the use of antibiotics for pneumonia, but could lead to unnecessary spending on expensive broad-spectrum antibiotics and the overuse of inappropriate antibiotics.

“Overuse of antibiotics is a common and important problem that can lead to medical complications and drug resistance,” said the study’s lead author, Yuting Zhang, Ph.D., assistant professor of health economics at the University of Pittsburgh Graduate School of Public Health. “One of the key questions we sought to answer with our study is how improved prescription drug coverage under Part D affects the usage of these drugs.”

The study included more than 35,000 Medicare beneficiaries and compared their use of antibiotics two years before and after the implementation of Medicare Part D, which reduced out-of-pocket drug spending between 13 and 23 percent. They found that antibiotic use increased most among beneficiaries who lacked drug coverage prior to enrolling in Medicare Part D. Beneficiaries who previously had limited drug coverage also were more likely to fill prescriptions for antibiotics after enrolling in Part D. The largest increases were found in the use of broad-spectrum, newer and more expensive antibiotics.

Researchers also noted that the use of antibiotic treatment for pneumonia tripled among those who previously lacked drug coverage, which they say is encouraging given the high mortality associated with community-acquired pneumonia among the elderly. However, they also found increases in antibiotic use for other acute respiratory tract infections (sinusitis, pharyngitis, bronchitis and non-specific upper respiratory tract infection) for which antibiotics are generally not indicated.

“When drug coverage is generous, people are more likely to request and fill prescriptions for antibiotics, which may lead to misuse,” said Dr. Zhang. “Although many interventions have helped curb antibiotic prescribing for acute respiratory tract infections and other conditions, our study indicates there may still be substantial room for improvement through education and changes in reimbursement practices to reduce inappropriate use of these drugs.”

On the Net:

US Girls Hitting Puberty At Earlier Age

According to new research, U.S. girls may be hitting puberty at earlier ages.

Dr. Frank Biro of Cincinnati Children’s Hospital Medical Center, who led the study, said the study suggests earlier development than what was reported during a 1997 study. 

Biro’s team said that girls who hit puberty earlier are more likely to engage in risky behavior and might be at a higher risk for breast cancer.

“This could represent a real trend,” Dr. Joyce Lee, a pediatric endocrinologist at the University of Michigan who was not involved with the new research, told Reuters Health.

They say that doctors are still not sure what is causing girls to develop at earlier ages, but rising obesity rates may be the culprit.

The researchers examined about 1,200 girls at ages 7 and 8 in Cincinnati, New York and San Francisco.  The team used a standard measure of breast development to determine which girls had started puberty.

Compared to the 1997 findings, girls in the current study were more developed at a younger age.  There were also large differences in development based on race.

The researchers said about 10 percent of white girls and 23 percent of black girls started developing breasts at age 7, compared to 5 percent of white girls and 15 percent of black girls in 1997.

Eighteen percent of white girls and 43 percent of black girls were entering puberty at age eight; for white girls that is an increase from about 11 percent in 1997, while the rate for black girls was roughly the same as in that year.

The study, which was published in Pediatrics, suggests that being overweight makes girls more likely to enter puberty at an earlier age. 

The researchers found that girls with a higher body mass index (BMI) at ages 7 and 8 were more likely to have begun developing than their thinner peers.

The authors said that their study population does not necessarily represent what is happening in all U.S. girls.  However, they are continuing to follow the girls in the study to see when the rest of them hit puberty.

Biro said that rising rates of obesity could be a major reason why girls seem to be developing faster than they did 13 years ago.

“We’re on the opposite side of an increase in BMI that has been seen in this country and in other countries,” he told Reuters Health.

Lee, of the University of Michigan, said that researchers know that heavier girls are more likely to enter puberty early.  She said that could be because overweight people have more of a hormone known to be linked to development, but it could also be because of the actual nutrients girls get from their diet.

Lee and Biro said doctors are worried about the psychological and physical health of girls who hit puberty at a young age.

Women that spend more of their lives menstruating are at a higher risk for breast cancer.

Biro said that there are things families can do to minimize the possible risk of early puberty in young daughters, such as eating more fruits and vegetables and eating together as a family.

On the Net:

Corporal Punishment Of Children Remains Common Worldwide

Three studies led by UNC researchers find that spanking and other forms of corporal punishment of children are still common in the U.S. and worldwide, despite bans in 24 countries.

Spanking has declined in the U.S. since 1975 but nearly 80 percent of preschool children are still disciplined in this fashion. In addition, corporal punishment of children remains common worldwide, despite bans on corporal punishment that have been adopted in 24 countries since 1979.

These are some of the more thought-provoking findings reported in three separate, recently published studies of corporal punishment led by researchers at the University of North Carolina Injury Prevention Research Center.

“The findings are stark. Harsh treatment of children was epidemic in all communities. Our data support the conclusions that maltreatment occurs in all nations,” said Desmond Runyan, MD, DrPH, professor of social medicine at UNC and lead author of a study that conducted surveys in Egypt, India, Chile, the Philippines, Brazil and the U.S. to track international variations in corporal punishment.

Some findings of Runyan’s study, published online Aug. 2 by the journal Pediatrics, include:

• Rates of harsh physical discipline revealed by the surveys were “dramatically higher” in all communities “than published rates of official physical abuse in any country.”

• Mothers with fewer years of education more commonly used physical punishment.

• Rates of corporal punishment vary widely among communities within the same country. For example, both the highest and lowest rates of hitting a child on the buttocks with an object (such as a paddle) were found in different communities in India. (About one quarter of respondents in the U.S. sample used this form of punishment.)

• Harsh punishment of children by parents is not less common in countries other than the U.S. It may be more common, especially in low- and middle-income countries.

The other two studies were led by Adam J. Zolotor, MD, MPH, assistant professor of family medicine in the UNC School of Medicine. The first, published online in June by the journal Child Abuse Review, tracked corporal punishment and physical abuse trends for three-to-11-year-old children in the U.S. as demonstrated by four separate surveys conducted in 1975, 1985, 1995 (all national surveys) and 2002 (in North Carolina and South Carolina).

This study found that 18 percent fewer children were slapped or spanked by caregivers in 2002 compared to 1975. However, even after this decline, most preschool-aged children are spanked (79 percent), and nearly half of children ages eight and nine in the 2002 survey were hit with an object such as a paddle or switch.

“This study shows that the U.S., unlike most other high income countries, has seen little change in the use of corporal punishment, which remains commonplace,” Zolotor said. “Given the weight of evidence that spanking does more harm than good, it is important that parents understand the full range of options for helping to teach their children. A bit of good news is that the decline in the use of harsher forms of punishment is somewhat more impressive.”

The second study led by Zolotor was a systematic review of the laws and changes in attitudes and behaviors in countries that have adopted bans on corporal punishment since the passage of the Convention on the Rights of the Child in 1979. The United Nations adopted the convention in November 1989 and by September 1990, 20 nations had signed on to enforce the treaty. Currently, 193 nations have signed on to enforce it, but the U.S. and Somalia have not. A bill that opposes signing of the convention, sponsored by Republican Sen. Jim DeMint of South Carolina, is currently pending in the U.S. Senate and is supported by 30 senators, all Republicans.

Zolotor’s second study was published online in July by Child Abuse Review and appears in the July/August 2010 print issue of the journal.

Findings of that study include:

• Although 24 countries have banned corporal punishment, this is only 12 percent of the world’s countries.

• Of the 24 countries with corporal punishment bans, 19 are in Europe, including all of the Scandinavian and near-Scandinavian countries (Sweden, Norway, Denmark, Iceland and Finland). Three others are in Central or South America, one in the Middle East and one in Oceania (the region that includes Australia, New Zealand and Pacific Ocean island nations such as Malaysia and Indonesia).

• There are no national bans on corporal punishment anywhere in Asia or North America.

• National bans on corporal punishment are closely associated with declining popular support for corporal punishment and parent reports of spanking. However, this decline seems to begin before the passage of such laws. The association between such bans and child abuse is less clear, but studies suggest a decline in abuse following legal prohibition.

“This study shows us that, over 30 years after the passage of the Convention on the Rights of the Child by the United Nations and after ratification by 193 member countries, a small number have supported this convention by explicit prohibition of corporal punishment. It also underscores the important relationship between social change and legislative change,” Zolotor said. 

Image Caption: This photo, taken by police, shows an 8-year-old boy with welts on his back after his father beat him with a belt. Image source: www.corpun.com 

On the Net:

Scientists: Body Parts Tan Differently

According to researchers at Edinburgh University, people may be disappointed when trying to get tan by soaking up sunrays this summer.

The researchers said that different parts of the body go brown at different speeds, so achieving that idealized image of beauty is not going to happen.

The findings explain why certain people find it difficult to get an even, consistent tan.  The main problem is people’s bottoms, which take a lot longer to go brown than other parts of their anatomy.

Ninety-eight volunteers were given six doses of ultraviolet radiation on their backs and their bottoms from a tanning light in order to determine whether the effect was the same in both places. When the researchers examined the participants a week later, they found that their backs had turned significantly browner than their buttocks.

“The research shows that instead of thinking that you have got one skin, or one skin type, in fact each of us has lots of different skin regions, each of which responds differently to UV light and so take longer than others to go red and then tan,” said Jonathan Rees, a professor of dermatology, who led the study.

“If you shine sunshine on different parts of your body, the difference in how red they go varies by a factor of five for redness and two for tanning, depending on the body site.”

The bottom is not the only obstacle, though. Rees has also confirmed previous research showing – as sun worshippers already knew – that the upper back is much more likely to tan than the legs.

“What is burning for one body site is not for another. And the degree of UV protection that develops following ultraviolet radiation exposure may vary site by site,” Rees added.

The study also found that those who had no freckles on their skin tanned easier than those with freckles.

Rees and his team looked into why different types of skin cancer occur in different parts of the body. Usually they develop on ears, faces and the backs of hands and on the top of the head, rather than on the limbs. Rees said that women get melanoma on their calves but men get it on their shoulders.

He said the findings show that most advice about how long it is safe to spend in the sun is worthless because different body parts are more sensitive to the sun’s damaging effects than others.

Rise Of The Himalayas Traced Through Frog Evolution

Genetic analysis of 24 spiny frogs supports minority theory of India’s collision with Asia

The evolution of a group of muscled frogs scattered throughout Asia is telling geologists about the sequence of events that led to the rise of the Himalayas and the Tibetan plateau starting more than 55 million years ago.

Scientists from Kunming, China, and the University of California, Berkeley, conducted a genetic analysis of 24 species of spiny frogs from the tribe Paini that shows how these Asian frogs evolved along with the mountains’ uplift, developing hard, nubby spines and Popeye-like arms to hold onto their mates in the swift-running streams roaring down from the highest mountains in the world.

The sequence of evolutionary changes, in turn, tells geologists the sequence in which mountain ranges and river systems arose and isolated frog populations as a result of the Indian tectonic plate pushing northward into Asia.

“Geologists know a lot about that area, but what they haven’t been able to do is give a sequence to the timing of the rise of particular mountain masses and particular ridges and pieces,” said co-author David Wake, a herpetologist and evolutionary biologist and professor of integrative biology at UC Berkeley. “We use these frogs as a surrogate for a time machine.”

“What we have here,” he explained, “is a group of very old frogs that are so fixed to their habitats that they just stuck there, sitting on that mountain mass when it got raised up. They were separated by these uplifts and by the rivers between the mountains into different units, and these give us a fix on the timing of geological events.”

Wake and his colleagues, including first author Jing Che of the Kunming Institute of Zoology, who performed some of the genetic analyses while a visiting scholar at UC Berkeley from 2008-2009, published their research in this week’s print edition of the journal Proceedings of the National Academy of Sciences. A photo of the spiny frog Quasipaa boulengeri from the mountains of Sichuan, China, graces the issue’s cover.

According to An Yin, a professor of geology at UCLA who was not part of the analysis team, the sequence of spiny frog evolution supports a minority view of how the India/Asia collision played out between about 55 million years ago and 15-20 million years ago. Rather than merely pushing the Himalayas upward, as some geologists believe, others have argued that the indenting Indian plate also pushed Southeast Asia and China aside towards the Pacific ocean, a process referred to as extrusion or escape tectonics.

“David and colleagues very cleverly used the frogs and their habitats ... and came up with very interesting evolutionary stages that are correlated with what this model predicts,” Yin said. “It really is providing important support for a model proposed and based on completely different principles and reasoning and different observations.”

That theory, originally proposed by Paul Tapponnier of the University of Paris and Peter Molnar of the University of Colorado at Boulder, and subsequently championed by Tapponnier, says that the Indian plate’s push into Asia was not continuous, but occurred in a series of northward jumps, first pushing land aside to form Southeast Asia, then pushing South China to the east, and then pushing Central China northeastward.

“If India keeps moving,” Yin said, “then North China will move out of the way. So, you have stages that have very important implications for paleogeographic evolution in this area.”

Spiny frogs of the tribe Paini (a grouping within the family Dicroglossidae) are often called stone frogs in China because they cling to moss-covered rocks near rapidly flowing streams. When male frogs mate with females, they typically grab the female from behind in a strong grip known as amplexus. Though not all Paini have muscular forearms combined with keratinous spines, like coarse sandpaper, on their chests and forearms, those who live in fast-flowing streams do, evidently to prevent the slimy female from being whisked away during the sometimes hours-long amplexus required for the female to ovulate.

Using frogs collected by Che and others, including frogs collected 20 years ago in Tibet and the Himalayas by Theodore Papenfuss, a herpetologist in UC Berkeley’s Museum of Vertebrate Zoology, the researchers analyzed four genes from the nucleus in 29 individuals from 24 species of spiny frog. The analysis showed the Paini is composed of two major groups and five distinct lineages, which was a surprise, Wake said.

“The study has revealed at least five unnamed frogs in Indochina and adjacent Yunnan Province in China,” Papenfuss said.

According to the researchers’ genetic reconstruction, the tribe Paini arose in what is now Indochina and spread into Western China about 27 million years ago, diverging into two groups: Nanorana, now consisting primarily of high-elevation species up to 4,700 meters, in Western China; and Quasipaa, consisting of mostly low-elevation species in Indochina and South China.

As the Tibetan plateau was pushed higher, it became separated from the Himalayas, isolating populations in these regions some 19 million years ago. Those restricted to the Himalayas are now considered the Paa subgenus. The Nanorana subgenus isolated in Tibet began to diversify again about 9 million years ago, consistent with the period during which the Tibetan plateau rose above 3,000 meters. These new Nanorana species became well adapted to the cold, arid, low-oxygen conditions of Tibet, and as a result, some organs degenerated, Papenfuss said. For example, some frogs today have no or reduced structures in the ear, including the external tympanum, which transmits sound to the inner ear, and the columella in the middle ear.

The researchers found that in Indochina and South China, on the other hand, the Quasipaa frogs were split by the uplift of the Truong Son Mountain Range on the Laos-Vietnam border, along what is called the Ailao Shan-Red River shear zone. This uplift and the opening of the South China Sea probably occurred as the Indian landmass pushed Indochina southeastward, isolating the frogs of Indochina (now subgenus Eripaa) from those of South China (Quasipaa).

“Our study suggests that the geologically hypothesized uplift (thickening) and strike-slip extrusion must have occurred simultaneously to generate the observed biotic distribution pattern” some 23-24 million years ago, the authors wrote in their paper.

The movement of large land masses must also have been accompanied by climate change, the authors noted, which suggests that the forests of Central Vietnam may have served as a refuge for Paini frogs during climate oscillations. Today, Vietnam is a conservation hotspot.

“Basically, the frogs were rafting on top of the continents,” Yin said. “The tectonics control morphological evolution by transporting originally very closely related frogs so far apart they all diverge and develop very differently.”

Wake, whose interest is homoplasy – the independent origin of similar shapes and structures – noted that the study shows “that spines and hypertrophied forelimbs have evolved independently four different times as they coped with the ever steeper rising mountains and adapted to these increasingly rapidly flowing streams.”

Other authors of the PNAS paper are Ya-Ping Zhang, director of the State Key Laboratory of Genetic Resources and Evolution (SKLGRE) at the Kunming Institute of Zoology and of the Laboratory for Conservation and Utilization of Bioresources at Yunnan University in Kunming; and Wei-Wei Zhou, Jian-Sheng Hu and Fang Yan, also of SKLGRE.

The work was supported by the National Basic Research Program of China (973 Program) and the U.S. National Science Foundation AmphibiaTree of Life program.

Image Caption: Spiny frogs from the tribe Paini are a curious group characterized by spines and powerful forelimbs in males.

On the Net:

Can Vitamin D Help Fight Off A Cold?

Young men may have more sick-free days through the cold and flu season if they take a daily vitamin D supplement, a small study published in the Journal of Infectious Diseases suggests.

Vitamin D has been the center of attention in many research studies of late, with studies linking low vitamin D levels in the blood to elevated risks of type 1 diabetes and severe asthma attacks in children and heart disease, cancer and depression in adults.

But whether the vitamin is the reason for the risks — and whether taking the supplements can cut the risks — has yet to be proven.

The body produces vitamin D naturally when the skin is exposed to sunlight. Because vitamin D levels in the human body are generally insufficient during the winter months in many parts of the world, researchers have been interested in whether vitamin D supplements may play a role in people’s susceptibility to colds, flu and other respiratory ailments.

Some research has found in the past that people with lower vitamin D levels in their blood tend to have higher rates of respiratory infections than those with higher levels of the vitamin, said lead researcher Dr Ilkka Laaksi of the University of Tampere in Finland.

Recent lab research has shown that vitamin D may also play an “important role” in the body’s immune defenses against respiratory pathogens, Laaksi told Reuters Health in an email.

“However, there is a lack of clinical studies of the effect of vitamin D supplementation for preventing respiratory infections,” the researcher said.

In the current study, Laaksi and colleagues assigned 164 military recruits to take either 400 international units (IU) of vitamin D or inactive placebo pills every day for six months — October to March, covering the months when people’s vitamin D levels usually decline and when respiratory infections tend to peak.

At the end of the study, researchers found no clear difference between the two groups in the average number of days missed from duty due to a respiratory infection — which includes bronchitis, sinus infections, pneumonia, ear infections and sore throat.

The team found that, on average, men who took the vitamin D supplement missed about two days from duty due to respiratory infection, compared with three days in the placebo group. The difference was not significant in statistical terms.

However, the team determined that men in the vitamin D group were more likely to have no days missed from work due to respiratory illness.

Overall, 51 percent of the vitamin D group remained “healthy” throughout the six-month study, versus 36 percent of the placebo group, the team reports.

Laaksi said the findings offer “some evidence” of a benefit from vitamin D against respiratory infections.

The extent of these benefits is still not clear, however. While recruits in the vitamin D group were more likely to not miss days from duty, they were no less likely to report having cold-like symptoms at some point during the six-month period.

Some other recent studies have shown conflicting results on the usefulness of vitamin D for lowering the risk of respiratory ailments.

A Japanese study of schoolchildren published earlier this year found that those given 1,200 IU of vitamin D each day during the cold and flu season were less likely to contract influenza A. Of 167 children given the supplement, 18 developed the flu, compared to 31 of the 167 who were given placebos.

Another recent study of 162 adults found that those who received 2,000 IU of vitamin D every day for 12 weeks were no less likely to develop respiratory ailments than those who took placebos.

Larger clinical trials looking at different doses of vitamin D are still needed before the vitamin can be recommended for cutting the risk of respiratory infections, said Laaksi.

Health officials in the US recommend that adults up to the age of 50 get 200 IU of vitamin D daily, while older adults should get up to 600 IU. The upper limit is currently set at 2,000 IU per day. Higher amounts may raise the risk of side effects.

Some researchers say that people need more vitamin D than health officials recommend, and that intakes of more than 2,000 IU per day are safe. However, the exact optimal amount of vitamin D one should take remains under debate.

Vitamin D can be found in milk, cereals and orange juice fortified with vitamin D, as well as some fatty fish, like salmon. Experts typically recommend vitamin pills for people who do not get enough vitamins from their food.

On the Net:

Scientists Finally Reach Greenland Ice Sheet Bedrock

International team of climate researchers drills through a mile and a half of the Greenland ice sheet in search of climate change insights

After years of concentrated effort, scientists from the North Greenland Eemian Ice Drilling (NEEM) project hit bedrock more than 8,300 feet below the surface of the Greenland ice sheet last week. The project has yielded ice core samples that may offer valuable insights into how the world can change during periods of abrupt warming.

Led by Denmark and the United States, and comprised of scientists from 14 countries, the NEEM team has been working to get at the ice near bedrock level because that ice dates back to the Eemian interglacial period, about 115,000 to 130,000 years ago, when temperatures on Earth were warmer by as much as 5 degrees Fahrenheit than they are today. The Eemian period ice cores should yield a host of information about conditions on Earth during that time of abrupt climate change, giving climate scientists valuable data about future conditions as our own climate changes.

“Scientists from 14 countries have come together in a common effort to provide the science our leaders and policy makers need to plan for our collective future,” said Jim White, director of University of Colorado at Boulder’s Institute of Arctic and Alpine Research and an internationally known ice core expert. White was the lead U.S. investigator on the project, and his work there was supported primarily by the National Science Foundation’s Office of Polar Programs. Other U.S. institutions collaborating on the NEEM effort include Oregon State University, Penn State, the University of California, San Diego, and Dartmouth College.

Greenland is covered by an ice sheet thousands of feet thick that built up over millennia as layers of snow and ice formed. The layers contain information about atmospheric conditions that existed when they were originally formed, including how warm and moist the air was, and the concentrations of various greenhouse gases. While three previous Greenland ice cores drilled in the past 20 years covered the last ice age and the period of warming to the present, the deeper ice layers, representing the warm Eemian and the period of transition to the ice age were compressed and folded, making them difficult to interpret, said White.

After radar measurements taken through the ice sheet from above indicated that the Eemian ice layers below the NEEM site were thicker, more intact and likely contained more accurate and specific information, researchers began setting up an extensive state-of-the-art research facility there. Despite being located in one of the most remote and harsh places on Earth, the NEEM team constructed a large dome, the drilling rig for extracting three-inch-diameter ice cores, drilling trenches, laboratories and living quarters, and officially started drilling in June 2009.

According to Simon Stephenson, Director of the Arctic Sciences Division at NSF, the accomplishment at NEEM “is important because the ability to measure gases and dust trapped in the ice at high resolution is likely to provide new insight into how the global climate changes naturally, and will help us constrain climate models used to predict the future.” Stephenson added that the NEEM ice cores will allow scientists to measure conditions in the past with more specificity–down to single years.

“We are delighted that the NEEM project has completed the drilling through the ice-sheet,” Stephenson said.  “This has been a very successful international collaboration, and NSF is pleased to have supported the U.S. component.”

Accurate climate models based in part on the data collected at NEEM could play an important role in helping human civilization adapt to a changing climate. During the Eemian period, for example, the Greenland ice sheet was much smaller, and global sea levels were about 15 feet higher than they are today, a height that would swamp many major cities around the world.

Now that drilling is complete, scientists will continue to study the core samples and analyze other data they have collected. For his part, White hopes the NEEM project establishes a blueprint for future scientific collaborations.

“I hope that NEEM is a foretaste of the kind of cooperation we need for the future,” White said, “because we all share the world.”

Image 1: The team at NEEM celebrates the final core sample collected at bedrock level, or over 8,300 feet beneath the Greenland ice sheet. The multi-year drilling project was a collaboration of scientists from 14 different countries and sought to gather ice core samples from the Eemian period, about 130,000 to 115,000 years ago. The Eemian period ice cores should yield a host of information about conditions on Earth during that time of abrupt climate change, giving climate scientists valuable data about future conditions as our own climate changes. Credit: NEEM Project Office

Image 2: The very last ice core sample collected from the NEEM project. The layers of black contain bits of rock that were trapped in the ice over 100,000 years ago. Credit: NEEM Project Office

On the Net:

Tongue Piercing May Cause Gapped Teeth

Could cost thousands of dollars in orthodontic repairs

Mark this one down as a parental nightmare.

First, your child gets her tongue pierced. Then, as if you needed something else, she starts “playing” with the tiny barbell-shaped stud, pushing it against her upper front teeth. And before you know it, she forces a gap between those teeth — a fraction-of-an-inch gap that may cost thousands of dollars in orthodontic bills to straighten.

How and why this happens has been documented in a case study by University at Buffalo researchers published in the July issue of the Journal of Clinical Orthodontics.

“It is a basic tenet of orthodontics that force, over time, moves teeth,” explains the study’s primary investigator, Sawsan Tabbaa, DDS, MS, assistant professor of orthodontics at the UB School of Dental Medicine.

Tabbaa notes that a previous UB dental school survey study of Buffalo high school students revealed that the presence of a barbell implant/stud caused a damaging habit whereby subjects pushed the metal stud up against and between their upper front teeth, a habit commonly referred to among the students as “playing.”

“And it happened in a very high percent of the cases,” said Tabbaa.

That repeated “playing” with the stud may result in a gap, as demonstrated in Tabbaa’s current case study.

The study involved a 26-year-old female patient examined at UB’s orthodontic clinic who complained that a large space had developed between her upper central incisors, or upper front teeth. The patient also had a tongue piercing that held a barbell-shaped tongue stud.

The tongue was pierced seven years earlier and every day for seven years she had pushed the stud between her upper front teeth, creating the space between them and, subsequently, habitually placing it in the space. The patient did not have a space between her upper front teeth prior to the tongue piercing.

“The barbell is never removed because the tongue is so vascular that leaving the stud out can result in healing of the opening in the tongue,” said Tabbaa, “so it makes perfect sense that constant pushing of the stud against the teeth — every day with no break — will move them or drive them apart.”

The patient provided the research team with photos that demonstrated she had no diastema, or space, prior to having her tongue pierced. For the purposes of treating this patient’s space, it was assumed that positioning of the tongue stud between the maxillary central incisors or “playing” caused the midline space.

Her treatment involved a fixed braces appliance to push the front teeth back together.

Tongue piercing can result in serious injury not just to teeth but has also been associated with hemorrhage, infection, chipped and fractured teeth, trauma to the gums and, in the worst cases, brain abscess, said Tabbaa.

“The best way to protect your health, your teeth and your money is to avoid tongue piercing.”

Image 1: A UB orthodontist has documented that “playing” with a pierced-tongue stud can eventually result in a gap between the front teeth.

Image 2: Subjects frequently develop a habit of pushing the metal stud in their tongues up against and between the front teeth, creating a gap.

On the Net:

Seven Hours Of Sleep Best For Heart Health

Not getting enough sleep? Getting too much? You may be increasing your chances of cardiovascular disease, according to a study published in the August 1 edition of the scientific journal SLEEP.

In the study, Doctors Charumathi Sabanayagam and Anoop Shankar of the West Virginia University (WVU) School of Medicine determined that seven hours of sleep is the optimal amount for heart health, according to a statement released on Sunday by the American Academy of Sleep Medicine (AASM), publishers of SLEEP.

Analyzing data from more than 30,000 adults who participated in the 2005 National Health Interview Survey, Sabanayagam and Shankar discovered that those who slept five total hours per day (including naps) had more than twice the risk of suffering from angina, coronary heart disease, heart attack or stroke than those who slept a total of seven hours each day.

Those who slept nine hours or more each day also had an elevated risk of cardiovascular disease–they were one and a half times more susceptible, according to the report. They also discovered an “elevated but less dramatic risk” of cardiovascular health issues in those who slept for either six or eight hours each day, according to the press release.

“The most at-risk group was adults under 60 years of age who slept five hours or fewer a night. They increased their risk of developing cardiovascular disease more than threefold compared to people who sleep seven hours,” AFP reporter Karin Zeitvogel wrote on Sunday. “Women who skimped on sleep, getting five hours or fewer a day, including naps, were more than two-and-a-half times as likely to develop cardiovascular disease.”

“The authors of the WVU study were unable to determine the causal relationship between how long a person sleeps and cardiovascular disease,” Zeitvogel added. “But they pointed out that sleep duration affects endocrine and metabolic functions, and sleep deprivation can lead to impaired glucose tolerance, reduced insulin sensitivity and elevated blood pressure, all of which increase the risk of hardening the arteries.”

On the Net:

NASA’s Robonaut 2 To Tweet From Space

NASA’s Robonaut 2 has no voice but is ready to tell you its story — in 140 characters or less. The prototype robot will travel to space this fall to give NASA a deeper understanding of human-robotic interaction.

Called R2, the robot has started sending updates about its upcoming mission from its new Twitter account, @AstroRobonaut. With the help of its supporting team, R2 will document its preparations for launch and, eventually, its work aboard the International Space Station.

“Hello World! My name is Robonaut 2 — R2 for short,” R2 and the team tweeted this week. “Follow my adventures here as I prepare for space!”

Follow R2’s updates on Twitter at: http://www.twitter.com/AstroRobonaut

The public will get the first chance to interview the robot when R2 and its team answer questions submitted via Twitter at 10 a.m. on Aug. 4. Twitter followers can submit their questions to R2 in real time by including the hashtag #4R2 in their questions tweeted to @AstroRobonaut.

R2 will be shipped next month from NASA’s Johnson Space Center in Houston, where it was created, to the Kennedy Space Center in Florida for final testing and packing. It will launch aboard space shuttle Discovery as part of the STS-133 mission, targeted to lift off in November.

Robonaut 2 was created through a joint project between NASA and General Motors that began in 2007. R2 originally was intended to be an Earth-bound prototype, but engineers wanted to see how it fared in microgravity so the robot is being sent to space in Discovery’s cargo bay.

R2 is already the most advanced dexterous humanoid robot in existence. Once in space, it will become the first humanoid robot to reach orbit and the first American-built robot at the space station. Over time, as its creators learn more about operating R2 in space, upgrades and modifications could be made that would allow the robot to assist astronauts inside and outside of the station with routine tasks or those too dangerous for humans.

For more information about Robonaut 2, visit: http://www.nasa.gov/robonaut

For more information about the STS-133 mission, visit: http://www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts133

For more information about the space station, visit: http://www.nasa.gov/station

Robonaut is just one of many NASA missions using Twitter and other social media sites. Find them all at: http://www.nasa.gov/connect

World-renowned Astronomer Donald C. Backer Passes Away

Don Backer, a professor in the Department of Astronomy at the University of California, Berkeley, and a world leader in the field of radio astronomy, died on Sunday, July 25, after collapsing outside his home. He was 66.

Backer joined the UC Berkeley Astronomy Department in 1975; from 1989 he held positions as both a full professor of astronomy and a researcher in the department’s Radio Astronomy Laboratory (RAL). He served as chair of the department from 1998 to 1999 and from 2002 to 2008, and as RAL director from 2008 until his death.

An innovative and visionary scientist, instrumentalist and observer, Backer worked in many areas of astronomy and was involved in numerous ground-breaking projects over his 40-year career. His research focused on pulsars, high-energy astrophysics, the epoch of reionization and the exploration of these topics with the most imaginative and state-of-the-art instrumentation.

Equally an expert in radio astronomy techniques and engineering instrumentation, Backer insisted that his work be considered within the main body of astronomy research, and not just as radio astronomy.

“His work was characterized by a clear vision of fundamental physics, technical expertise and a passionate enthusiasm,” said UC Berkeley professor Carl Heiles, a longtime colleague of Backer’s.

Backer made seminal contributions to the study of pulsars. In the early 1980s, he and several collaborators discovered the first millisecond pulsar, a neutron star spinning close to its breakup speed. He also developed an important use for the millisecond pulsar as a probe of the gravitational wave background. Dozens of researchers around the world are in active pursuit of the discovery, characterization and use of millisecond pulsars, especially for detection of gravitational waves. Backer invented and developed digital systems for the detection and precise measurement of pulsars, and they have been adopted as standards in the field and are used at the major observatories worldwide.

Backer was a pioneer in Very Long Baseline Interferometry (VLBI), a technique of linking together distant radio telescopes to produce high-resolution images, allowing the investigation of astronomical structures with microarcsecond angular resolution. He linked the RAL’s 85-foot centimeter-wave telescope to a number of similar antennas distributed across the country and the world. He pursued ever-increasing resolution with the goal of imaging the environment of the black hole at the center of the Milky Way Galaxy. Researchers working in this area with Backer made the highest-resolution image ever of a black hole. Backer led the VLBI consortium for several years, helping to develop both accurate radio astronomy and the study of plate motions at the Earth’s surface.

In the past few years, Backer initiated a unique “telescope” that consists of an array of antennas spread over pastureland to detect, via their effect on intergalactic hydrogen, the first stars and galaxies that formed in the universe. This “Precision Array To Probe the Epoch of Reionization” (PAPER) led to deployment of these telescopes in Green Bank, W. Va., and in South Africa.

“Despite starting on a small scale and fabricating all of the equipment from scratch, this project is now recognized internationally as an important step toward understanding the history of the universe,” said Heiles.

As RAL director, Backer was deeply engaged in the laboratory’s two major facilities, the Allen Telescope Array (ATA) and the Combined Array for Millimeter Astronomy (CARMA). Fostering the growth of three unique radio telescopes — ATA and CARMA, as well as PAPER — he was leading efforts to cover a factor of 1000 in wavelength range, promising a bright future for the lab and the field of radio astronomy. Continuing Backer’s legacy, the lab will continue to move forward along this path of pioneering novel techniques, said Imke de Pater, professor and chair of UC Berkeley’s Astronomy Department.

Born in Plainfield, N.J., on Nov. 9, 1943, and raised in New Jersey, Backer went off to study at Cornell University, where he graduated with a B.S. in engineering physics in 1966. He then attended the University of Manchester, where he completed an M.Sc. in radio astronomy in 1968 before returning to Cornell for further graduate studies. He received his Ph.D. from Cornell in 1971.

His early career was ignited by the discovery of pulsars and the construction by Cornell of the world’s largest radio telescope, the Arecibo dish in Puerto Rico.

Among other honors during his long career, Backer was chosen for the prestigious Jansky Lectureship at the National Radio Astronomy Observatory in 2003. In addition, he served on countless national and international committees in astronomy.

At UC Berkeley, Backer’s colleagues and former students said he will be remembered not only for his valuable research accomplishments, but also for his relentless energy, deep passion for science, and as someone who cared greatly for the people around him.

‘Innately curious’

“Don was constantly challenging himself and his students, switching fields dramatically throughout his career, from pulsars to gravity waves, to the galactic center, and then to the origin of structure in the universe,” said Dan Werthimer, a former student of Backer’s who is chief scientist of SETI@home at UC Berkeley’s Space Sciences Laboratory. “Don made pioneering contributions to each of these fields, but he didn’t let it go to his head, and he always maintained his characteristic kind and gentle manner.”

“I don’t think I’ve met a finer man or a scientist that surpasses him in thoughtfulness,” said UC Berkeley astronomy professor Leo Blitz, who knew Backer since they were freshmen at Cornell 48 years ago.

Backer was determined to “pass the torch to the next generation” and trained his replacements in the field of astronomy every day, added colleague Geoffrey Bower, UC Berkeley assistant professor of astronomy and one of Backer’s former Ph.D. students. “That’s why we have some chance of moving ahead without him.”

Backer’s family remembers him fondly as someone who took great joy in travel, the simple and complex aspects of nature, hiking, swimming, skiing, going to the beach and camping.

“He was innately curious, loved to explore the earth as much as he did the cosmos, and had wonderful opportunities throughout his life to do both, close to home and around the world,” said his son, David Backer. “Every trip, personal and professional, was another chance, not to be missed, for adventure, discovery and appreciation.

“And lucky for us, he was unflappable, capable and dependable on these quests, always prepared to start a fire with only a couple of sticks, to siphon gas with a small tube, to carry someone injured on his back to get medical assistance. We and everyone who knew him will miss his spirit, integrity and bedrock.”

In addition to his wife, Susan Backer of Berkeley, Backer is survived by his son, David, of Rockville, Maryland; his mother, Lura Backer of Bradenton, Fla.; and a niece, a nephew and a granddaughter. His father, Phillip Backer, died in 1998 and his brother, Ken Backer, in 2007.

In lieu of flowers, donations can be sent to the Donald Backer Memorial Fund, to continue his legacy and to support the new frontiers of astronomy research and education. Checks can be made out to the UC Berkeley Foundation and mailed to Barbara Hoversten, Astronomy Department, MC3411, University of California, Berkeley, CA 94720-7450.

A private memorial service is scheduled.

Image Caption: Don Backer (UC Berkeley Department of Astronomy photo)

On the Net:

Mars Site Could Contain Proof Of Life

Scientists believe they have found rocks that could contain the fossilized remains of early life on Mars, according to a new article published in the latest edition of Earth and Planetary Science Letters.

According to the report, a team of scientists led by Dr. Adrian J. Brown of the Search for Extraterrestrial Intelligence (SETI) Institute in California discovered the rocks in the Nili Fossae region of the “Red Planet.” Their research on the hydrothermal formation of clay-carbonate rocks in that area apparently provides evidence that living organisms could have called Mars home about 4 billion years ago.

The study “suggests that carbonate bearing rocks found in the Nili Fossae region of Mars are made up of hydrothermally altered ultramafic (perhaps komatiitic) rocks,” claims a press release, dated July 29, that details the findings.

Furthermore, the media statement claims that the research “also shows that the carbonates at Nili Fossae are not pure Mg-carbonate. Moreover, the study explains that talc is present in close proximity to the carbonate locations–rather than previously suggested saponite–and talc-carbonate alteration of high-Mg precursor rocks has taken place.”

According to BBC News Science Reporter Victoria Gill, the mineral content of the rocks in the Nili Fossae region is very similar to those found in a northwestern Australian location known as the Pilbara, “where some of the earliest evidence of life on Earth has been buried and preserved in mineral form.”

That means that it is at least somewhat likely that the Martian location could be home to similar evidence, according to what Brown told Gill.

“If there was enough life to make layers, to make corals or some sort of microbial homes, and if it was buried on Mars, the same physics that took place on Earth could have happened there,” he said.

Joining Brown on the research team were experts from California’s Jet Propulsion Laboratory, Johns Hopkins University in Maryland, the Desert Research Institute in Nevada, and Universidade Estadual de Campinas in Brazil. Their findings were published in the paper entitled “Hydrothermal formation of Clay-Carbonate alteration assemblages in the Nili Fossae region of Mars.”

Image Caption: Nili Fossae Phyllosilicate Crater Ejecta. Credit: NASA/JPL/University of Arizona

On the Net:

Phytoplankton Decline Seen Over The Last Century

Research suggests that the amount of phytoplankton found in the top layers of the ocean has declined markedly over the last century.

Scientists wrote in the journal Nature that the decline appears to be linked to rising water temperatures.

The researchers looked at records of the transparency of sea water, which is affected by the plants.
 
The decline could be ecologically significant as plankton sit at the base of marine food chains.

This is the first study to attempt a comprehensive global look at plankton changes over such a long timescale.

“What we think is happening is that the oceans are becoming more stratified as the water warms,” said research leader Daniel Boyce from Dalhousie University in Halifax, Nova Scotia, Canada.

“The plants need sunlight from above and nutrients from below; and as it becomes more stratified, that limits the availability of nutrients,” he told BBC News.

Phytoplankton are eaten by zooplankton, which are prey for small fish and other animals.

The first reliable system for measuring the transparency of sea water was developed by astronomer and Jesuit priest Pietro Angelo Secchi.

Secchi was asked by the Pope in 1865 to measure the clarity of water in the Mediterranean Sea for the Papal navy.  He then developed the “Secchi disk,” which is lowered into the sea until its white color disappears from view.

A variety of substances in the water affect its transparency, but one of the main ones is the concentration of chlorophyll, the green pigment that is key to photosynthesis in plants at sea and on land.

Secchi disk measurements around the world have been augmented by shipboard analysis of water samples, and more recently by satellite measurements of ocean color.

The final tally included 445,237 data points from the Secchi disk spanning the period 1899-2008.

“This study took three years, and we spent lots of time going through the data checking that there wasn’t any ‘garbage’ in there,” Boyce told BBC.

“The data is good in the northern hemisphere and it gets better in recent times, but it’s more patchy in the southern hemisphere – the Southern Ocean, the southern Indian Ocean, and so on.”

The researchers have used higher quality data to calculate that since 1950, the world has seen a phytoplankton decline of about 40 percent.

The decline has been seen throughout most of the world, except the Indian Ocean.  There are also phytoplankton increases in coastal zones where fertilizer run-off from agricultural land is increasing nutrient supplies.

However the pattern is far from steady.  There are strong variations spanning a few years or a few decades.
 
Many of these variations are correlated with natural cycles of temperature seen in the oceans, such as the El Nino Southern Oscillation (ENSO), the North Atlantic Oscillation and the Arctic Oscillation.

The warmer ends of these cycles coincide with a reduction in plankton growth, while abundance is higher during the colder phases.

Carl-Gustaf Lundin, head of the marine program at the International Union for Conservation of Nature (IUCN), said that there could be other factors involved, such as the huge expansion in open-ocean fishing that has taken place over the century.

“Logically you would expect that as fishing has gone up, the amount of zooplankton would have risen – and that should have led to a decline in phytoplankton,” he told BBC News.

“So there’s something about fishing that hasn’t been factored into this analysis.”

He said the grid-based method the Dalhousie researchers used to divide up the oceans did not permit scrutiny of areas where this might be particularly important, such as the upwelling in the eastern Pacific that supports the Peruvian anchovy fishery.

The team said that if the trend is real, it could also accelerate warming.

Photosynthesis by phytoplankton removes carbon dioxide from the air and produces oxygen.

Scientists have already said that the waters appear to be absorbing less CO2 in several parts of the world, notably in the Southern Ocean.  However, this is principally thought to be due to changes in wind patterns.

“Phytoplankton… produce half of the oxygen we breathe, draw down surface CO2, and ultimately support all of our fisheries,” said Boris Worm, another member of the Dalhousie team.

“An ocean with less phytoplankton will function differently.”

The overall decline in phytoplankton might be expected to continue if the planet continues to warm in line with projections of computer models of climate.

However, Daniel Boyce said that it was not certain.

“It’s tempting to say there will be further declines, but on the other hand there could be other drivers of change, so I don’t think that saying ‘temperature rise brings a phytoplankton decline’ is the end of the picture,” he said.

Lundin told BBC that the implications could be significant.

“If in fact productivity is going down so much, the implication would be that less carbon capture and storage is happening in the open ocean,” he said.

“So that’s a service that humanity is getting for free that it will lose; and there would also be an impact on fish, with less fish in the oceans over time.”

Image Caption: Phytoplankton are the foundation of the oceanic food chain. Credit: NOAA MESA Project

On the Net:

Mouth-To-Mouth Resuscitation Unnecessary

Are you hesitant to help people who have collapsed because you don’t want to give them mouth-to-mouth resuscitation? You might be in luck, as two studies published in the New England Journal of Medicine on Thursday show that hands-only CPR could be just as effective.

In fact, according to Gene Emery of Reuters, “When someone collapses suddenly, mouth-to-mouth rescue breathing may not be necessary and could lower the chances of survival… The findings come at a time when less emphasis is being placed on mouth-to-mouth rescue breathing, which people often regard as unsanitary anyway, and more emphasis is focusing on properly pressing on the chest at a rate of 100 times a minute.”

Furthermore, “More bystanders are willing to attempt CPR if an emergency dispatcher gives them firm and direct instructions,” reports AP Medical Writer Mike Stobbe.

One of the studies, conducted by King County (Wash.) Emergency Medical Services program director Dr. Thomas Rea and colleagues, looked at 1,900-plus people in the Seattle and London areas who had witnessed a person in cardiac arrest and called emergency personnel. The second was conducted by officials at the Stockholm Prehospital Center in Sweden and focused on over 1,200 similar cases.

According to Emery, “Most of the victims died, but when bystanders did chest compressions alone it slightly increased a patient’s chance of leaving the hospital without brain damage; 11.5 percent escaped brain damage if rescue breathing was done while 14.4 percent escaped neurological problems with chest compressions alone, Rea’s team found… In the Swedish test the 30-day survival rate was 7 percent with rescue breathing and 8.7 percent without.”

“In both cases, the difference was small enough that it was not considered statistically significant,” CNN’s Caleb Hellerman said in a Wednesday article. “But the authors–and an accompanying editorial–all said the findings support the idea that bystanders should be encouraged to do steady chest compressions on victims of apparent cardiac arrest, without pausing to give breaths.”

On the Net:

Toilets Safer Than Some Mobile Phones

According to a new study, the average mobile phone carries 18 times more potentially harmful germs than the flush handle on a men’s toilet.

The analysis of handsets found that almost a quarter were so dirty that they had up to ten times the acceptable level of TVC (total viable count) bacteria.

One of the phones in the test had such high levels of bacteria that it could have given its owner a serious stomach upset.

Elevated levels of TVC indicate poor personal hygiene and act as a breeding ground for other bugs.

The findings from Which? suggest that 14.7 million of the 63 million mobile phones used in the U.K. today could potentially be a health hazard. The study consisted of testing 30 phones.

Hygiene expert Jim Francis, who carried out the research, said, “The levels of potentially harmful bacteria on one mobile were off the scale. That phone needs sterilizing.”

The phone that tested worst for hygiene had 39 times the safe level of enterobacteria, a group of bacteria that live in the lower intestines of humans and animals and include bugs like Salmonella.

The phone had 170 times the acceptable level of fecal coliforms, which are associated with human waste.

Other bacteria, including the food poisoning bugs E. coli and Staphylococcus aureus, were found on the phones, but at safe levels.

Ceri Stanaway, a Which? researcher, said, “Most phones didn’t have any immediately harmful bacteria that would make you sick straight away but they were grubbier than they could be.”

“The bugs can end up on your hands which is a breeding ground and be passed back to your phone. They can be transferred back and forth and eventually you could catch something nasty.”

“What this shows is how easy it is to come into contact with bacteria. People see toilet flushes as being something dirty to touch but they have less bacteria than phones.”

“People need to be mindful of that by observing good hygiene themselves and among others who they pass the phone to when looking at photos, for example.”

Which? has previously performed tests like this and found that some computer keyboards carry more harmful bacteria than a toilet seat.

On the Net:

Computer Scientists Break Terabyte Sort Barrier

Computer scientists from the University of California, San Diego broke “the terabyte barrier,” and a world record, when they sorted more than one terabyte of data (1,000 gigabytes or 1 million megabytes) in just 60 seconds. During the 2010 “Sort Benchmark” competition, the “World Cup of data sorting,” the computer scientists from the UC San Diego Jacobs School of Engineering also tied a world record for fastest data sorting rate. They sorted one trillion data records in 172 minutes, and did so using just a quarter of the computing resources of the other record holder.

Companies looking for trends, efficiencies and other competitive advantages have turned to the kind of heavy duty data sorting that requires the hardware muscle typical of data centers. The Internet has also created many scenarios where data sorting is critical. Advertisements on Facebook pages, custom recommendations on Amazon, and up-to-the-second search results on Google all result from sorting data sets as large as multiple petabytes. A petabyte is 1,000 terabytes.

“If a major corporation wants to run a query across all of their page views or products sold, that can require a sort across a multi-petabyte dataset and one that is growing by many gigabytes every day,” said UC San Diego computer science professor Amin Vahdat, who led the project. “Companies are pushing the limit on how much data they can sort, and how fast. This is data analytics in real time,” explained Vahdat. Better sort technologies are needed, however. In data centers, sorting is often the most pressing bottleneck in many higher-level activities, noted Vahdat who directs the Center for Networked Systems (CNS) at UC San Diego.

The two new world records from UC San Diego are among the 2010 results released recently on http://sortbenchmark.org, a site run by the volunteer computer scientists from industry and academia who manage the competitions. The competitions provide benchmarks for data sorting and an interactive forum for researchers working to improve data sorting techniques.

World Records

The Indy Minute Sort and the Indy Gray Sort are the two data sorting world records the UC San Diego computer scientists won in 2010, the first year they entered the Sort Benchmark competition.

In the Indy Minute Sort, the researchers sorted 1.014 terabytes in one minute, thus breaking the minute barrier for this terabyte sort for the first time.

“We’ve set our research agenda around how to make this better… and also on how to make it more general,” said UC San Diego computer science PhD student Alex Rasmussen, the lead graduate student on the team.

The team also tied the world record for the Indy Gray Sort which measures sort rate per minute per 100 terabytes of data.

“We used one fourth the number of computers as the previous record holder to achieve that same sort rate performance, and thus one fourth the energy, and one fourth the cooling and data center real estate,” said George Porter, a Research Scientist at the Center for Networked Systems at UC San Diego. The Center for Networked Systems is an affiliated Center of the California Institute for Telecommunications and Information Technology (Calit2).

Both world records are in the Indy category, meaning that the systems were designed around the specific parameters of the Sort Benchmark competition. The team is looking to generalize their results for the Daytona competition and for use in the real world.

“Sorting is also an interesting proxy for a whole bunch of other data processing problems. Generally, sorting is a great way to measure how fast you can read a lot of data off a set of disks, do some basic processing on it, shuffle it around a network and write it to another set of disks,” explained Rasmussen. “Sorting puts a lot of stress on the entire input/output subsystem, from the hard drives and the networking hardware to the operating system and application software.”
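
Rasmussen’s description maps onto the classic two-phase external sort: read the data in chunks, sort each chunk in memory, spill sorted runs to disk, then merge the runs back out. The minimal Python sketch below is only a single-machine illustration of that read-sort-write-merge pattern, not TritonSort’s actual distributed design; the file paths, chunk size and 100-byte record size are assumptions made for the example.

    import heapq
    import os
    import tempfile

    def external_sort(input_path, output_path, chunk_records=1_000_000, record_size=100):
        """Two-phase external sort for fixed-size records that do not fit in memory."""
        run_paths = []

        # Phase 1: read chunks, sort them in memory, and spill sorted runs to disk.
        with open(input_path, "rb") as f:
            while True:
                chunk = f.read(chunk_records * record_size)
                if not chunk:
                    break
                records = [chunk[i:i + record_size]
                           for i in range(0, len(chunk), record_size)]
                records.sort()  # byte-wise sort; the leading bytes act as the key
                fd, run_path = tempfile.mkstemp()
                with os.fdopen(fd, "wb") as run:
                    run.writelines(records)
                run_paths.append(run_path)

        # Phase 2: k-way merge the sorted runs into the final output file.
        def read_records(path):
            with open(path, "rb") as run:
                while True:
                    rec = run.read(record_size)
                    if not rec:
                        break
                    yield rec

        with open(output_path, "wb") as out:
            for rec in heapq.merge(*(read_records(p) for p in run_paths)):
                out.write(rec)

        for p in run_paths:
            os.remove(p)

A real benchmark entry spreads the same two phases across many nodes and disks, which is where the input/output pressure Rasmussen mentions comes from.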

Balanced Systems

The data sorting challenges the computer scientists took on are quite different from the modest sorting that anyone with off-the-shelf database software can do by comparing two tables. One of the big differences is that data in terabyte and petabyte sorts is well beyond the memory capacity of the computers doing the sorting.

In creating their heavy duty sorting system, the computer scientists designed for speed and balance. A balanced system is one in which computing resources like memory, storage and network bandwidth are fully utilized and as few resources as possible are wasted.

“Our system shows what’s possible if you pay attention to efficiency, and there is still plenty of room for improvement,” said Vahdat, holder of the SAIC Chair in Engineering in the Department of Computer Science and Engineering at UC San Diego. “We asked ourselves, ‘What does it mean to build a balanced system where we are not wasting any system resources in carrying out high end computation?’” said Vahdat. “If you are idling your processors or not using all your RAM, you’re burning energy and losing efficiency.” For example, memory often uses as much or more energy than processors, but the energy consumed by memory gets less attention.

To break the terabyte barrier for the Indy Minute Sort, the computer science researchers built a system made up of 52 computer nodes. Each node is a commodity server with two quad-core processors, 24 gigabytes (GB) of memory and sixteen 500 GB disks, all interconnected by a Cisco Nexus 5020 switch. Cisco donated the switches as part of its research engagement with the UC San Diego Center for Networked Systems. The compute cluster is hosted at Calit2.
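
For a rough sense of the cluster’s scale, the per-node figures quoted above can simply be totaled; the numbers below are arithmetic on the published specification, not measured capacities.

    nodes = 52
    cores_per_node = 2 * 4      # two quad-core processors
    ram_gb_per_node = 24
    disks_per_node = 16
    disk_gb = 500

    print("total cores:", nodes * cores_per_node)                            # 416 cores
    print("total RAM: %d GB" % (nodes * ram_gb_per_node))                    # 1,248 GB
    print("raw disk: %.0f TB" % (nodes * disks_per_node * disk_gb / 1000))   # 416 TB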

To win the Indy Gray Sort, the computer science researchers sorted one trillion records in 10,318 seconds (about 172 minutes), yielding their world-record tying data sorting rate of 0.582 terabytes per minute per 100 terabytes of data. The winning sort system is made up of 47 computer nodes similar to those used in the minute sort.
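
The quoted 0.582 terabytes per minute follows directly from the published figures, assuming the benchmark’s standard 100-byte records (so one trillion records amounts to 100 terabytes); the quick check below is illustrative arithmetic only.

    records = 1_000_000_000_000   # one trillion records
    record_bytes = 100            # assumed standard Sort Benchmark record size
    seconds = 10_318

    terabytes = records * record_bytes / 1e12    # 100 TB of data
    minutes = seconds / 60                       # about 172 minutes
    print(round(terabytes / minutes, 3), "TB per minute")  # ~0.582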

According to wolframalpha.com, 100 terabytes of data is roughly equivalent to 4,000 single-layer Blu-Ray discs, 21,000 single-layer DVDs, 12,000 dual-layer DVDs or 142,248 CDs (assuming CDs are 703 MB).

The roster for TritonSort, the world record breaking sort team:

Alex Rasmussen, Radhika Niranjan Mysore and Michael Conley are computer science graduate students at UC San Diego. Alexander Pucher is a visiting student from Vienna University of Technology. Harsha V. Madhyastha is a postdoctoral researcher in computer science at UC San Diego. George Porter is a Research Scientist at the Center for Networked Systems at UC San Diego. Amin Vahdat holds the SAIC Chair in Engineering in the Department of Computer Science and Engineering and directs the Center for Networked Systems (CNS) at UC San Diego. Learn more about Sort Benchmark at http://sortbenchmark.org/

Image Caption: To break the terabyte barrier for the Indy Minute Sort, the computer science researchers built a system made up of 52 computer nodes. Each node is a commodity server with two quad-core processors, 24 gigabytes (GB) of memory and sixteen 500 GB disks, all interconnected by a Cisco Nexus 5020 switch. Cisco donated the switches as part of its research engagement with the UC San Diego Center for Networked Systems. The compute cluster is hosted at Calit2.

On the Net:

Shade-coffee Farms Help Maintain Tropical Genetic Diversity

Shade-grown coffee farms support native bees that help maintain the health of some of the world’s most biodiverse tropical regions, according to a study by a University of Michigan biologist and a colleague at the University of California, Berkeley.

The study suggests that by pollinating native trees on shade-coffee farms and adjacent patches of forest, the bees help preserve the genetic diversity of remnant native-tree populations. The study was published online Monday in the Proceedings of the National Academy of Sciences.

“A concern in tropical agriculture areas is that increasingly fragmented landscapes isolate native plant populations, eventually leading to lower genetic diversity,” said Christopher Dick, a U-M assistant professor of ecology and evolutionary biology. “But this study shows that specialized native bees help enhance the fecundity and the genetic diversity of remnant native trees, which could serve as reservoirs for future forest regeneration.”

An estimated 32.1 million acres of tropical forest are destroyed each year by the expansion of cropland, pasture and logging. Often grown adjacent to remnant forest patches, coffee crops cover more than 27 million acres of land in many of the world’s most biodiverse regions.

Over the last three decades, many Latin American coffee farmers have abandoned traditional shade-growing techniques, in which plants are grown beneath a diverse canopy of trees. In an effort to increase production, much of the acreage has been converted to “sun coffee,” which involves thinning or removing the canopy.

Previous studies have demonstrated that shade-grown farms boost biodiversity by providing a haven for migratory birds, nonmigratory bats and other beneficial creatures. Shade-coffee farms also require far less synthetic fertilizer, pesticides and herbicides than sun-coffee plantations.

In the latest study, U-M’s Dick and UC-Berkeley’s Shalene Jha investigated the role of native bees that pollinate native trees in and around shade-grown coffee farms in the highlands of southern Chiapas, Mexico. In their study area, tropical forest now represents less than 10 percent of the land cover.

Jha and Dick wanted to determine the degree to which native bees, which forage for pollen and nectar and pollinate trees in the process, facilitate gene flow between the remnant forest and adjacent shade-coffee farms.

They focused on Miconia affinis, a small, native understory tree that many farmers allow to invade shade-coffee farms because the trees help control soil erosion.

M. affinis, commonly known as the saquiyac tree, is pollinated by an unusual method known as buzz pollination. In order to release pollen from the tree’s flowers, bees grab hold and vibrate their flight muscles, shaking the pollen free. Non-native Africanized honeybees don’t perform buzz pollination, but many native bees do.

“Our focus on a buzz-pollinated tree allowed us to exclude Africanized honeybees and highlight the role of native bees as both pollinators and vectors of gene flow in the shade-coffee landscape mosaic,” said Jha, a postdoctoral fellow at UC-Berkeley who conducted the research while earning her doctorate at U-M.

Jha and Dick combined field observations with seed-parentage genetic analysis of Miconia affinis. They found that trees growing on shade-coffee farms received bee-delivered pollen from twice as many donor trees as M. affinis trees growing in the adjacent remnant forest. The higher number of pollen donors translates into greater genetic diversity among the offspring of the shade-farm trees.

Seed parentage analysis revealed that pollen from forest trees sired 65.1 percent of the seeds sampled from M. affinis trees growing in a shade-coffee habitat. That finding demonstrates that native bees are promoting gene flow between the remnant forest and the coffee farms, bridging the two habitat types, and that the shade-farm trees serve as a repository of local M. affinis genetic diversity, according to the authors.

In addition, Jha and Dick found that native bees carried pollen twice as far in the shade-coffee habitat as they did in the forest. They documented shade-farm pollination trips of nearly a mile, which are among the longest precisely recorded pollination trips by native tropical bees.

Jha and Dick said their results likely apply to other buzz-pollinated plants, which represent about 8 percent of the world’s flowering plant species, as well as to other native plants whose limited pollen and nectar rewards don’t attract honeybees.

The enhanced genetic diversity of the shade-farm trees could provide a reservoir for future forest regeneration, as the coffee farms typically fall out of production in less than a century. Given that potential, along with the shade farm’s previously identified roles in connecting habitat patches and sheltering native wildlife, it is important to encourage this traditional style of agriculture, Jha and Dick said.

The project was supported by the Helen Olson Brower Fellowship at the University of Michigan and by the National Science Foundation.

Image Caption: Shade coffee vista in Chiapas, Mexico, with coffee bloom in foreground. Credit: Shalene Jha, University of California, Berkeley

On the Net:

Earth At Risk Of Asteroid Impact In 2182

Scientists have discovered that a potentially hazardous asteroid might collide with Earth in 2182.

The scientists used two mathematical models in order to determine the chance of impact.

“The total impact probability of asteroid ‘(101955) 1999 RQ36’ can be estimated at 0.00092, approximately a one-in-a-thousand chance, but what is most surprising is that over half of this chance (0.00054) corresponds to 2182,” María Eugenia Sansaturio, co-author of the study and researcher at the Universidad de Valladolid (UVA), explained to SINC.
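
The “over half” reading follows from simple arithmetic on the two quoted probabilities; the snippet below is purely illustrative.

    total_probability = 0.00092   # estimated total impact probability
    p_2182 = 0.00054              # portion of that probability attributed to 2182

    print(f"{p_2182 / total_probability:.0%} of the total risk falls in 2182")  # ~59%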

The research involved scientists from the University of Pisa, Italy, the Jet Propulsion Laboratory at NASA and INAF-IASF in Rome, Italy.

The asteroid is one of the Potentially Hazardous Asteroids (PHAs), a group of asteroids whose orbits are deemed to give them some possibility of impacting Earth. The asteroid was discovered in 1999 and is 1,837 feet in diameter.

The researchers performed 290 optical observations and 13 radar measurements in order to determine the asteroid’s orbit.  However, a significant “orbital uncertainty” still exists because its path is influenced by the Yarkovsky effect.

The Yarkovsky effect slightly modifies the orbits of the Solar System’s small objects because they absorb radiation from the Sun on one side and re-emit it from the other.

The scientists reported their findings in the journal Icarus.

“The consequence of this complex dynamic is not just the likelihood of a comparatively large impact, but also that a realistic deflection procedure (path deviation) could only be made before the impact in 2080, and more easily, before 2060,” said Sansaturio.

The scientists concluded: “If this object had been discovered after 2080, the deflection would require a technology that is not currently available. Therefore, this example suggests that impact monitoring, which up to date does not cover more than 80 or 100 years, may need to encompass more than one century. Thus, the efforts to deviate this type of objects could be conducted with moderate resources, from a technological and financial point of view.”

Image 2: These are asteroids and comets visited by spacecraft. Credit: ESA, NASA, JAXA, RAS, JHUAPL, UMD, OSIRIS

On the Net:

China Mulls Super Powerful Rocket Engine

A new super-powerful engine is being considered by Chinese engineers for the next generation of space rockets, according to officials.

Li Tongyu, general manager of marketing at the China Academy of Launch Vehicle Technology (CALT), told BBC News that engineers are currently analyzing a rocket engine with a thrust of 600 metric tons, burning an extremely potent liquid oxygen propellant.

If the development is a success, it would increase the country’s space capabilities exponentially.

The Long March-5, the most powerful rocket China is currently developing, will have engines with 120 metric tons of thrust.

“Rockets (with 600-metric-ton thrust engines) would only be justified for things like sending humans to the Moon, if such projects are approved,” Li told BBC News.

The official China Daily newspaper disclosed in March that CALT was studying a super-heavy launch vehicle, which could be used to launch lunar expeditions. CALT vice-president Liang Xiaohong was quoted by the paper as saying that the total lift-off thrust of the future launcher would be 3,000 metric tons.

To produce such thrust, the first stage of the proposed rocket would need five 600-metric-ton engines, most likely distributed between one central engine and four boosters. The rocket would be similar to the architecture adopted for the Long March-5 rocket, but on a much grander scale.

The development of the Long March-5 rocket is proceeding well toward its first test launch, currently expected in 2014, Li said.

The vehicle’s first stage engine had already amassed more than 10,000 seconds of firing during testing — an important marker on the way to its certification for real missions. A full-scale prototype of the Long March 5 rocket would be ready for testing by 2012 and a year later, test firing of fully assembled rocket stages would be conducted.

When operational, the Long March-5 is expected to deliver up to 25 metric tons of payload, including space station modules, to low Earth orbit, and up to 14 metric tons to the so-called geostationary transfer orbit, where most communications satellites are released after launch.

The rockets are expected to be built in new facilities near the Chinese capital, Beijing. The rocket stages would then be shipped to the launch site in southern China, where the vehicle would take advantage of the Earth’s rotation to maximize its cargo capabilities.

Along with Earth-orbiting satellites, the Long March-5 is expected to carry Chinese spacecraft into deep space, including unmanned missions to return soil samples from the Moon.

* Measurements used in this article are calculated in metric tons. US ton measurements are slightly greater. (1 metric ton = 1.1 US tons).

VA Clarifies Medical Marijuana Policies

According to new federal regulations, patients being treated at Veterans Affairs hospitals and clinics will be able to use medical marijuana in the 14 states where it has been legalized.

The Veterans Affairs Department directive, to be issued in the coming week, is intended to clarify current policy, which says veterans can be denied pain medication if they use illegal drugs. Veteran groups have continually complained that this could bar veterans from VA benefits if they were caught with medical marijuana.

The new guidance does not authorize VA doctors to start prescribing medical marijuana, which is considered an illegal drug under federal law. But, it will make clear that in the 14 states where federal and state law are in conflict, VA clinics will allow the use of medical marijuana for veterans already prescribed it from other clinicians.

“For years, there have been veterans coming back from the Iraq war who needed medical marijuana and had to decide whether they were willing to cut down on their VA medications,” said John Targowski, a legal adviser to the group Veterans for Medical Marijuana Access.

Targowski told The Associated Press (AP) in an interview Saturday that confusion over the government’s policy might have led some veterans to lose trust in their doctors or avoid the VA system altogether.

Dr. Robert A. Petzel, the VA’s undersecretary for health, sent a letter this month to Veterans for Medical Marijuana Access that spells out the department’s policy. The guidelines will be distributed to 900 VA facilities around the country next week.

Petzel made it clear that VA doctors could reserve the right to modify a veteran’s treatment if there were risks of a bad interaction with other prescription medications.

“If a veteran obtains and uses medical marijuana in a manner consistent with state law, testing positive for marijuana would not preclude the veteran from receiving opioids — narcotic painkillers which include morphine, oxycodone and methadone — for pain management,” Petzel wrote, adding “the discretion to prescribe, or not prescribe, opioids in conjunction with medical marijuana, should be determined on clinical grounds.”

Under the previous policy, local VA clinics in some of the 14 states had opted to allow the use of medical marijuana because there had been no rule explicitly barring them from doing so.

According to the National Conference of State Legislatures, there are 14 states and the District of Columbia with medical marijuana laws: Alaska, California, Colorado, Hawaii, Maine, Maryland, Michigan, Montana, Nevada, New Mexico, Oregon, Rhode Island, Vermont and Washington.

New Jersey also recently passed a medical marijuana law, which is scheduled to be put into place next January.

On the Net:

Diet Soda Linked To Risk Of Premature Birth

New research suggests that there may be a link between the consumption of artificially sweetened beverages and the increased risk of premature births.

Dr. Thorhallur I. Halldorsson of the Statens Serum Institute in Copenhagen, one of the researchers on the study, told Reuters Health that it may be “non-optimal for pregnant women to have high consumption of these types of products.”

“Diet” beverages have been widely promoted as a healthy alternative to sugary sodas and juices, but Halldorsson and his colleagues said that there has been little research on the safety of regular use of artificial sweeteners in humans.

Both artificially sweetened and sugar-sweetened soft drinks have recently been linked to high blood pressure, the researchers add, which increases the risk of premature delivery.

To find out whether there might be a direct link, the team looked at nearly 60,000 Danish women who reported on their diet, including how many soft drinks they consumed each day, at around 25 weeks of pregnancy.

About 5 percent of the women delivered their babies ahead of 37 weeks.

Women who reported having at least one serving of artificially sweetened soda a day while they were pregnant were 38 percent more likely to deliver prematurely than women who drank no diet soda at all, the researchers found.

Furthermore, women who consumed at least four artificially sweetened soft drinks a day were nearly 80 percent more likely to deliver preterm. The association was the same for normal-weight and overweight women.

The researchers did not report the actual risk of premature birth in each group. However, according to the March of Dimes, one in eight babies is born prematurely. This means that if drinking diet soda does increase the risk, a woman who drank one diet soda daily would have about a 17 percent risk of delivering prematurely, rising to around 22 percent if she drank four or more a day.
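
Those 17 and 22 percent figures appear to come from applying the reported relative risks to the roughly one-in-eight (12.5 percent) baseline; the back-of-the-envelope arithmetic below is illustrative only and is not taken from the study itself.

    baseline = 1 / 8        # March of Dimes: about one in eight births is preterm
    one_per_day = 1.38      # 38 percent higher risk reported for one serving a day
    four_per_day = 1.80     # roughly 80 percent higher risk for four or more a day

    print(f"one diet soda a day: {baseline * one_per_day:.1%}")    # ~17.3%
    print(f"four or more a day:  {baseline * four_per_day:.1%}")   # ~22.5%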

The Calorie Control Council, a lobbying group for companies that make and distribute low-calorie foods, said in a statement that the study was “misleading.”

“This study may unduly alarm pregnant women. While this study is counter to the weight of the scientific evidence demonstrating that low-calorie sweeteners are safe for use in pregnancy, research has shown that overweight and obesity can negatively affect pregnancy outcomes,” Beth Hubrich, a dietitian with the council, said in the statement.

“Further, low-calorie sweeteners can help pregnant women enjoy the taste of sweets without excess calories, leaving room for nutritious foods and beverages without excess weight gain – something that has been shown to be harmful to both the mother and developing baby,” she added.

Because only diet soda was linked to preterm delivery, the study findings suggested that artificial sweetener itself, not soda drinking, accounts for the link, said the researchers. They added, however, that other possible causes for the link shouldn’t be ruled out.

The researchers didn’t specifically look at all artificial sweeteners, and Halldorsson noted that there are a number of different sweeteners. However, the researchers say there is indirect evidence linking the sweetener aspartame to preterm delivery in animals.

Aspartame breaks down into methanol and other substances in the body, which can be converted to toxic substances such as formaldehyde and formic acid, the team explained. Studies in non-human primates have linked even very low exposure to methanol to shortened pregnancy and labor complications.

Halldorsson said the findings warrant further attention, but that pregnant women who consume soft drinks should not be alarmed by them.

According to the American College of Obstetricians and Gynecologists, women who normally use the artificial sweeteners can safely continue to do so “in moderation” during pregnancy.

The researchers reported their findings in the American Journal of Clinical Nutrition.

Artificial sweeteners include: saccharin (Sweet n’ Low), aspartame (NutraSweet), sucralose (Splenda) or acesulfame K (Sunett, Sweet One).

On the Net:

Fat Kid, Fat Feet?

Researchers have known that overweight children tend to have “flatter” feet than their normal-weight peers, but it has not been clear whether the problem is because of a potential problem in the structure of the foot bone or simply extra fat padding. A new study suggests it could be both.

Generally, people with flat feet have a lowered arch at the inside of the foot, which typically makes them leave a complete footprint on a flat surface. All babies and toddlers have flat feet, with the arch developing during childhood. Obese children are more likely to retain a flat foot. It has been assumed that this is because their extra weight creates a fallen arch.

Another possibility, though, is that most heavier children have more fat padding on the soles of their feet.

This is important because flat feet caused by lower arches can cause problems for some people. Some children and adults can have severe foot pain, and in the long-term, flat feet can contribute to ankle and back pain.

In the new study, published in the International Journal of Obesity, Australian researchers used ultrasound tests to examine the feet of 75 obese children and 75 normal-weight children between the ages of 6 and 10.

They found that obese children were in fact more likely to have fat padding on the soles of their feet. But they also found that the overweight children tended to have lower arches.

The researchers, led by Dr. Diane L. Riddiford-Harland of the University of Wollongong, said it remained unclear what that might mean for obese children’s foot function or risk of future musculoskeletal problems.

They say more research is needed that follows children over time, to see how obesity and possible weight loss might affect the structure and health of their feet.

When it comes to flat-footedness in children in general, recent studies have been reassuring. A study published in the journal Pediatrics last year found that among 11- to 15-year-olds, there was no link between arch height and performance on motor-skill tests, such as balance and jumping.

Flat feet that cause no pain generally do not need any special therapies. But if a child does have pain, a doctor may recommend arch supports for shoes or physical therapy.

On the Net:

Impact Crater Found In The Sahara Desert

Researchers sifting through Google Earth images have discovered what may be the world’s best-preserved small impact crater in a remote area of the Sahara desert in southwestern Egypt.  

The immaculate, 148-foot-wide crater was likely excavated by a fast-moving iron meteorite a few thousand years ago, scientists said Thursday.

Dubbed Kamil, the crater is remarkably preserved when compared with most of the other craters on the Earth’s surface, many of which are partially eroded.

The Kamil crater, by contrast, has retained much of its structure, including the rays of ejected debris that were dispersed from the center as the meteorite made impact.

“This crater is really a kind of beauty because it’s so well-preserved that it will tell us a lot about small-scale meteorite impacts on the Earth’s crust,” study leader Luigi Folco, meteorite curator at the Museo Nazionale dell’Antartide in Siena, Italy, told SPACE.com.

“It’s so nice. It’s so neat. There is something extraordinary about it,” he said, adding that craters as pristine as Kamil are typically found only on Mars or on the moon, where there are fewer environmental and atmospheric forces to degrade them.

Vincenzo de Michele, a former curator of the Civico Museo di Storia Naturale in Milan, Italy, first identified the bowl-shaped Kamil crater in Google Earth satellite photographs.

Based on its size and characteristics, researchers believe Kamil was formed by the impact of an iron meteorite roughly 4.3 feet in diameter traveling at nearly 8,000 mph.
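
For a rough sense of scale, the quoted size and speed can be turned into an energy estimate by assuming a solid spherical iron body (density about 7,870 kg per cubic meter); this is back-of-the-envelope arithmetic for illustration, not a figure reported by the researchers.

    import math

    diameter_m = 4.3 * 0.3048      # 4.3 feet in meters (~1.3 m)
    speed_ms = 8000 * 0.44704      # 8,000 mph in meters per second (~3,580 m/s)
    iron_density = 7870            # kg per cubic meter, typical for iron

    volume = (4 / 3) * math.pi * (diameter_m / 2) ** 3
    mass = iron_density * volume                  # roughly 9 metric tons
    energy_joules = 0.5 * mass * speed_ms ** 2    # about 6e10 joules
    print(f"mass ~{mass / 1000:.1f} t, energy ~{energy_joules / 4.184e9:.0f} t of TNT")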

A team of geophysicists, including Folco, visited the site in Egypt’s Sahara desert in February to confirm the discovery.

“The first real impression when we were in the field – we could see with our eyes that it was really well preserved and a potential source of detailed information about this kind of event,” Folco told SPACE.com.

Although researchers do not know the precise moment the meteorite hit the Earth, they estimate it was likely a few thousand years ago, which is fairly recent in geological terms.

There are approximately 175 confirmed impact craters on the Earth’s surface, most of which are worn away.

Space rocks the size of large television sets routinely reach the Earth’s atmosphere every month, but most burn up before they reach the surface.  Many of the resulting fireballs are not observable because they occur over remote land areas or over the oceans, which cover two-thirds of the Earth’s surface.

“This [the Kamil discovery] is important because small impacts are rather frequent on Earth – on the order of one event every 10 to 100 years,” Folco explained.

“Studying this crater is a good opportunity for scientists to get to a correct assessment of the hazard small impacts pose to the Earth and to devise mitigation strategies.”

Folco is lead author on a report about the findings published in the July 23 issue of the journal Science.

Image Caption: At only 16 meters deep and 45 meters wide, you wouldn’t call the Kamil Crater in the southwest corner of Egypt “Deep Impact”. Indeed, there are much more massive and impressive meteor craters scarring the Earth. Yet this small but perfectly preserved depression, discovered in 2009 during a Google Earth survey, is a rare beauty. There are currently 176 known craters on Earth’s surface, of which only 15 are less than 300 meters wide, but all of these small craters have rapidly eroded and lost most of their original features. The Kamil Crater, however, has been well preserved, with the radial streaks of ejecta thrown out during impact still visible. And that’s giving scientists a unique opportunity to study the characteristics of small-scale meteor impacts, researchers report online today in Science. The crater is in such pristine condition because it is geologically young (current estimates put the age at less than 5000 years) and because it escaped significant weathering, as it formed when extremely arid conditions were already prevalent in this remote part of Egypt. Craters like this one are typically found on planetary bodies in the Solar System that don’t have atmospheres and therefore no weather systems to erode them. Credit: Museo Nazionale dell’Antartide Università di Siena

On the Net:

Study Finds Structural Brain Alterations In Patients With Irritable Bowel Syndrome

Findings suggest IBS similar to other pain disorders

A large academic study has demonstrated structural changes in specific brain regions in female patients with irritable bowel syndrome (IBS), a condition that causes pain and discomfort in the abdomen, along with diarrhea, constipation or both.

A collaborative effort between UCLA and Canada’s McGill University, the study appears in the July issue of the journal Gastroenterology.

The findings show that IBS is associated with both decreases and increases in grey matter density in key areas of the brain involved in attention, emotion regulation, pain inhibition and the processing of visceral information.

IBS affects approximately 15 percent of the U.S. population, primarily women. Currently, the medical field considers the condition a “functional” syndrome, in which the digestive tract does not work properly, rather than an “organic” disorder with structural organ changes. Efforts to identify structural or biochemical alterations in the gut have largely been unsuccessful. Even though the pathophysiology is not completely understood, it is generally agreed that IBS represents an alteration in brain-gut interactions.

These study findings, however, show actual structural changes to the brain, which places IBS in the category of other pain disorders, such as lower back pain, temporomandibular joint disorder, migraines and hip pain, conditions in which some of the same anatomical brain changes have been observed, as well as other changes. A recent, smaller study suggested structural brain changes in IBS, but a larger definitive study hadn’t been completed until now.

“Discovering structural changes in the brain, whether they are primary or secondary to the gastrointestinal symptoms, demonstrates an ‘organic’ component to IBS and supports the concept of a brain-gut disorder,” said study author Dr. Emeran Mayer, professor of medicine, physiology and psychiatry at the David Geffen School of Medicine at UCLA. “Also, the finding removes the idea once and for all that IBS symptoms are not real and are ‘only psychological.’ The findings will give us more insight into better understanding IBS.”

Researchers employed imaging techniques to examine and analyze brain anatomical differences between 55 female IBS patients and 48 female control subjects. Patients had moderate IBS severity, with disease duration from one to 34 years (average 11 years). The average age of the participants was 31.

Investigators found both increases and decreases of brain grey matter in specific cortical brain regions.

Even after accounting for additional factors such as anxiety and depression, researchers still discovered differences between IBS patients and control subjects in areas of the brain involved in cognitive and evaluative functions, including the prefrontal and posterior parietal cortices, and in the posterior insula, which represents the primary viscerosensory cortex receiving sensory information from the gastrointestinal tract.

“The grey-matter changes in the posterior insula are particularly interesting since they may play a role in central pain amplification for IBS patients,” said study author David A. Seminowicz, Ph.D., of the Alan Edwards Centre for Research on Pain at McGill University. “This particular finding may point to a specific brain difference or abnormality that plays a role in heightening pain signals that reach the brain from the gut.”

Decreases in grey matter in IBS patients occurred in several regions involved in attentional brain processes, which decide what the body should pay attention to. The thalamus and midbrain also showed reductions, including a region, the periaqueductal grey, that plays a major role in suppressing pain.

“Reductions of grey matter in these key areas may demonstrate an inability of the brain to effectively inhibit pain responses,” Seminowicz said.

The observed decreases in brain grey matter were consistent across IBS patient sub-groups, such as those experiencing more diarrhea-like symptoms than constipation.

“We noticed that the structural brain changes varied between patients who characterized their symptoms primarily as pain, rather than non-painful discomfort,” said Mayer, director of the UCLA Center for Neurobiology of Stress. “In contrast, the length of time a patient has had IBS was not related to these structural brain changes.”

Mayer added that the next steps in the research will include exploring whether genes can be identified that are related to these structural brain changes. In addition, there is a need to increase the study sample size to address male-female differences and to determine if these brain changes are a cause or consequence of having IBS.

On the Net:

International Researchers Meet In Singapore To Discuss Research Integrity

First-ever global research integrity code to be finalized at the conference

Research plays a crucial role in economies that value knowledge creation and innovation. As more countries develop their research and development (R&D) capabilities and embark on research initiatives and partnerships at the international level, research integrity becomes all the more important. Increasing globalisation and international collaborations also highlight the importance of promoting integrity in cross-border research.

Some 350 senior education and research policymakers, leaders of research-funding agencies, university leaders and faculty, researchers, and academic publishers from 58 countries are meeting in Singapore to discuss the key issues of research misconduct policy, responsible conduct of research, education and the promotion of professional responsibility in research.

The World Conference on Research Integrity 2010 is hosted by Singapore’s leading research-performing organisations, namely NTU, NUS, SMU and A*STAR, with support from Singapore’s Ministry of Education and the Singapore Tourism Board. The theme of the conference, held at Pan Pacific Singapore from 21 to 24 July 2010, is ‘Leadership Challenges and Responses’.

During the conference, participants will exchange views and work together to develop a set of guidelines and recommendations for promoting integrity in research on a global scale. At the end of the conference, they hope to affirm the ‘Singapore Statement on Research Integrity’, which will serve as a benchmark statement for research integrity and a guide for professionally responsible research practices throughout the world.

The guest-of-honour, Singapore’s Minister for Education, Dr Ng Eng Hen, officiated at the opening of the conference on 22 July.

In his opening speech, Dr Ng highlighted the importance of research integrity in maintaining public trust. “High standards of integrity and ethics in research are crucial in Singapore’s quest to be a R&D node in this global network,” he said. “Knowledge without integrity can harm. The 2nd World Conference on Research Integrity is therefore timely and will assist us in reviewing our systems and procedures for our universities and research institutes.”

“To this end, I am glad that this conference has already set itself a discrete deliverable. It aims to crystallise one of the main recommendations from the 1st World Conference on Research Integrity, which is the need for consistent institutional and national policies. It will work towards developing a set of fundamental and basic principles to be agreed by consensus at this meeting, and which will be known collectively as the Singapore Statement.”

“It is my hope that this Statement will serve as a basic document for a global code of conduct and protocols, from which individual countries can adapt and use for their own institutions to address the issues pertaining to research integrity and good research practices,” said Dr Ng, who is also Singapore’s Second Minister for Defence.

Among those present at this morning’s conference opening were Mr Lim Chuan Poh, Chairman of Singapore’s Agency for Science, Technology and Research (A*STAR); Dr Su Guaning, President of Nanyang Technological University (NTU); Professor Howard Hunter, President of Singapore Management University (SMU); Professor Bertil Andersson, Provost of NTU; and Professor Seeram Ramakrishna, Vice President (Research Strategy) of National University of Singapore (NUS).

Mr Lim, Chairman of A*STAR, said: “Arriving at a common set of norms and standards for research integrity has assumed even greater importance, as stakes get higher in an increasingly competitive global R&D landscape, and as R&D pursuits increasingly cut across scientific fields, national borders and organisational cultures, and take on global dimensions with impact on areas such as climate change or managing pandemics. The conversations that will take place among the key stakeholders from academia, public research agencies, policymakers as well as the media, coming from 58 countries at the 2nd World Conference on Research Integrity, are meaningful steps for us to achieve a common set of global standards.”

“As an organisation with a vibrant community of 2,300 research scientists and engineers, and with extensive collaborations with other international organisations and the industry, A*STAR is fully committed to upholding the highest standards of research integrity. To this end, the A*STAR Code of Practice and Procedure relating to Integrity in Research, which was developed based on international best practices, serves as a guide for A*STAR researchers,” said Mr Lim.

“NTU is proud to have been appointed the lead organiser and host for this event, bringing together renowned experts in their respective fields to address the vital issue of research integrity,” said NTU President Dr Su in his welcome speech at the opening of the conference. “Cutting across all disciplines, research integrity has become increasingly important today, given that innovation and R&D are key drivers of economic growth worldwide.”

“The Singapore Statement shall serve as a fundamental and landmark document containing basic principles that can be used as a standard for research integrity throughout the world. Such a standard will be especially important for educators and educational institutions like NTU, growing rapidly as major research universities,” Dr Su said. “At NTU, we have taken steps to introduce a clear policy, based of course, with due acknowledgement, on the model provided by the US Office of Research Integrity. We have a zero-tolerance policy for anyone, whatever their status, who breaches the university’s research integrity norms.”

NTU Provost Professor Andersson was instrumental in launching the inaugural World Conference on Research Integrity, held in September 2007 in Lisbon, Portugal, when he was Chief Executive Officer of the European Science Foundation, and he successfully promoted Singapore’s bid to host the second conference. He said: “The importance of research integrity in a world that is increasingly globalised and where research is becoming more multidisciplinary cannot be over emphasised. That the conference is being held in Singapore reflects the increasing importance of the Asia Pacific region as a hub for R&D, and of Singapore as key node in this hub.”

SMU President, Professor Hunter, said: “The issue of research integrity is closely entwined with the SMU culture and the SMU approach to education. Integrity is one of the fundamental values of the university and it is infused into what we teach, how we teach and what students are exposed to. I look forward to the discussions during the conference and most importantly, the meaningful impact these will have on the sphere of research and beyond.”

NUS Vice President (Research Strategy), Professor Ramakrishna, said: “Nearly one-third of the world’s R&D spending comes from public funds, and naturally, there is greater expectation on the research community to deliver scientific breakthroughs. Hence, the emphasis on research integrity cannot be understated. NUS, which has an international community of faculty and students from over 100 countries, strongly values research integrity and actively promotes high standards in our research practices. We will continue to review our practices to ensure that they remain relevant to the evolving research landscape.”

Conference co-chair, Professor Nicholas Steneck of the University of Michigan, said: “Singapore has provided the ideal venue for the 2nd World Conference on Research Integrity and we appreciate the support we have received from Singapore’s major research organisations.”

Agreeing, the other conference co-chair, Mr Tony Mayer, NTU’s Europe Representative, who is also acting for the European Science Foundation, added: “With an international conference planning committee to assist us, we hope that we have created a stimulating programme which will, through the Singapore Statement, become a landmark event in good research practice throughout the world.”

On the Net:

A New Code Of Conduct For Researchers

Fostering research integrity in Europe

A new European Code of Conduct for Research Integrity will be presented by the European Science Foundation at the World Conference on Research Integrity. The code addresses good practice and bad conduct in science, offering a basis for trust and integrity across national borders.

This Europe-wide code offers a reference point for all researchers, complementing existing codes of ethics and complying with national and European legislative frameworks. It is not intended to replace existing national or academic guidelines, but represents agreement across 30 countries on a set of principles and priorities for self-regulation of the research community. It provides a possible model for a global code of conduct for all research.

“Science is an international enterprise with researchers continually working with colleagues in other countries. The scientists involved need to understand that they share a common set of standards. There can be no first-class research without integrity,” said Marja Makarow, Chief Executive of the European Science Foundation. “Researchers build on each other’s results so they must be honest with themselves, and with each other, and share the same standards of fairness, which makes the European Code of Conduct for Research Integrity a vital document.”

The code describes the proper conduct and principled practice of systematic research in the natural and social sciences and the humanities. Research misconduct is quite rare, but just one extraordinary case can endanger the reputation of a university, a research community or even the reputation of science itself. One well-publicised allegation of research dishonesty or malpractice can call into question the efforts of thousands of scientists and decades of research effort. Europe has experienced several well-publicised cases recently, for example at the University of East Anglia in the UK and at the Karolinska Institute in Sweden.

The term ‘research misconduct’ embraces many things, including insufficient care for the people, animals or objects that are the subjects of or participants in research; breaches of confidentiality; violation of protocols; carelessness of the kind that leads to gross error; and improprieties of publication involving conflicts of interest or the appropriation of ideas. Many of these unacceptable research practices are addressed in the European Code of Conduct for Research Integrity.

The code was developed from meetings and workshops involving the European Science Foundation’s (ESF) Member Organisations – 79 national funding bodies, research-performing agencies, academies and learned societies from 30 countries – working with the All European Academies (ALLEA). The next steps in implementing the code will be discussed in the autumn by ESF Member Organisations.

On the Net:

Hyperfast Star Was Kicked Out Of The Milky Way

A hundred million years ago, a triple-star system was traveling through the bustling center of our Milky Way galaxy when it made a life-changing misstep. The trio wandered too close to the galaxy’s giant black hole, which captured one of the stars and hurled the other two out of the Milky Way. Adding to the stellar game of musical chairs, the two outbound stars merged to form a super-hot, blue star.

This story may seem like science fiction, but astronomers using NASA’s Hubble Space Telescope say it is the most likely scenario for a so-called hypervelocity star, known as HE 0437-5439, one of the fastest ever detected. It is blazing across space at a speed of 1.6 million miles (2.5 million kilometers) an hour, three times faster than our Sun’s orbital velocity in the Milky Way. Hubble observations confirm that the stellar speedster hails from the Milky Way’s core, settling some confusion over where it originally called home.

Most of the roughly 16 known hypervelocity stars, all discovered since 2005, are thought to be exiles from the heart of our galaxy. But this Hubble result is the first direct observation linking a high-flying star to a galactic center origin.

“Using Hubble, we can for the first time trace back to where the star comes from by measuring the star’s direction of motion on the sky. Its motion points directly from the Milky Way center,” says astronomer Warren Brown of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., a member of the Hubble team that observed the star. “These exiled stars are rare in the Milky Way’s population of 100 billion stars. For every 100 million stars in the galaxy lurks one hypervelocity star.”

The movements of these unbound stars could reveal the shape of the dark matter distribution surrounding our galaxy. “Studying these stars could provide more clues about the nature of some of the universe’s unseen mass, and it could help astronomers better understand how galaxies form,” says team leader Oleg Gnedin of the University of Michigan in Ann Arbor. “Dark matter’s gravitational pull is measured by the shape of the hyperfast stars’ trajectories out of the Milky Way.”

The stellar outcast is already cruising in the Milky Way’s distant outskirts, high above the galaxy’s disk, about 200,000 light-years from the center. By comparison, the diameter of the Milky Way’s disk is approximately 100,000 light-years. Using Hubble to measure the runaway star’s direction of motion and determine the Milky Way’s core as its starting point, Brown and Gnedin’s team calculated how fast the star had to have been ejected to reach its current location.

“The star is traveling at an absurd velocity, twice as much as the star needs to escape the galaxy’s gravitational field,” explains Brown, a hypervelocity star hunter who found the first unbound star in 2005. “There is no star that travels that quickly under normal circumstances – something exotic has to happen.”

There’s another twist to this story. Based on the speed and position of HE 0437-5439, the star would have to be 100 million years old to have journeyed from the Milky Way’s core. Yet its mass – nine times that of our Sun – and blue color mean that it should have burned out after only 20 million years – far shorter than the transit time it took to get to its current location.
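The numbers behind that mismatch are easy to check. The sketch below redoes the arithmetic using only the round figures quoted in this article (a 200,000-light-year journey at 1.6 million miles an hour); the constants are standard conversion values, not numbers taken from the study itself.

```python
# Back-of-the-envelope check of the transit time quoted above, using only
# the article's round figures. Constants are standard conversion values.

LIGHT_YEAR_KM = 9.461e12            # kilometers in one light-year
MPH_TO_KMH = 1.609                  # miles per hour to kilometers per hour

distance_km = 200_000 * LIGHT_YEAR_KM   # ~200,000 light-years from the core
speed_kmh = 1.6e6 * MPH_TO_KMH          # 1.6 million mph

hours = distance_km / speed_kmh
years = hours / (24 * 365.25)

print(f"Transit time: about {years / 1e6:.0f} million years")
# Prints roughly 84 million years, on the order of the ~100-million-year
# journey described above and several times the ~20-million-year lifetime
# expected for a nine-solar-mass blue star.
```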

The most likely explanation for the star’s blue color and extreme speed is that it was part of a triple-star system that was involved in a gravitational billiard-ball game with the galaxy’s monster black hole. This concept for imparting an escape velocity on stars was first proposed in 1988. The theory predicted that the Milky Way’s black hole should eject a star about once every 100,000 years.

Brown suggests that the triple-star system contained a pair of closely orbiting stars and a third outer member also gravitationally tied to the group. The black hole pulled the outer star away from the tight binary system. The doomed star’s momentum was transferred to the stellar twosome, boosting the duo to escape velocity from the galaxy. As the pair rocketed away, they went on with normal stellar evolution. The more massive companion evolved more quickly, puffing up to become a red giant. It enveloped its partner, and the two stars spiraled together, merging into one superstar – a blue straggler.

“While the blue straggler story may seem odd, you do see them in the Milky Way, and most stars are in multiple systems,” Brown says.

This vagabond star has puzzled astronomers since its discovery in 2005 by the Hamburg/European Southern Observatory sky survey. Astronomers had proposed two possibilities to solve the age problem. The star either dipped into the Fountain of Youth by becoming a blue straggler, or it was flung out of the Large Magellanic Cloud, a neighboring galaxy.

In 2008 a team of astronomers thought they had solved the mystery. They found a match between the exiled star’s chemical makeup and the characteristics of stars in the Large Magellanic Cloud. The rogue star’s position also is close to the neighboring galaxy, only 65,000 light-years away. The new Hubble result settles the debate over the star’s birthplace.

Astronomers used the sharp vision of Hubble’s Advanced Camera for Surveys to make two separate observations of the wayward star 3 1/2 years apart. Team member Jay Anderson of the Space Telescope Science Institute in Baltimore, Md., developed a technique to measure the star’s position relative to each of 11 distant background galaxies, which form a reference frame.

Anderson then compared the star’s position in images taken in 2006 with those taken in 2009 to calculate how far the star moved against the background galaxies. The star appeared to move, but only by 0.04 of a pixel (picture element) against the sky background. “Hubble excels with this type of measurement,” Anderson says. “This observation would be challenging to do from the ground.”
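To put that 0.04-pixel shift in perspective, the short sketch below converts it into an angle and a yearly proper motion. The plate scale of roughly 0.05 arcseconds per pixel for Hubble’s ACS Wide Field Channel is an assumption added here for illustration; the article itself quotes the shift only in pixels.

```python
# Converts the measured 0.04-pixel shift into an angular proper motion.
# The ACS/WFC plate scale of ~0.05 arcsec per pixel is an assumed value;
# only the pixel shift and 3.5-year baseline come from the article.

pixel_shift = 0.04        # displacement between the 2006 and 2009 images, pixels
plate_scale = 0.05        # assumed arcseconds per pixel
baseline_years = 3.5      # time between the two Hubble observations

total_arcsec = pixel_shift * plate_scale
proper_motion_mas_per_year = total_arcsec / baseline_years * 1000

print(f"Total shift: {total_arcsec * 1000:.1f} milliarcseconds")
print(f"Proper motion: {proper_motion_mas_per_year:.2f} milliarcseconds per year")
# About 2 milliarcseconds in total, or ~0.6 mas per year. The total shift is
# roughly 500 times smaller than the ~1-arcsecond blurring of typical
# ground-based seeing, which is why the measurement is so hard from the ground.
```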

The team is trying to determine the homes of four other unbound stars, all located on the fringes of the Milky Way.

“We are targeting massive ‘B’ stars, like HE 0437-5439,” says Brown, who has discovered 14 of the 16 known hypervelocity stars. “These stars shouldn’t live long enough to reach the distant outskirts of the Milky Way, so we shouldn’t expect to find them there. The density of stars in the outer region is much less than in the core, so we have a better chance to find these unusual objects.”

The results were published online in The Astrophysical Journal Letters on July 20, 2010. Brown is the paper’s lead author.

Image Caption: In this illustration, the hot, blue star HE 0437-5439 has been tossed out of the center of our Milky Way galaxy with enough speed to escape the galaxy’s gravitational clutches. The stellar outcast is rocketing through the Milky Way’s distant outskirts at 1.6 million miles an hour, high above the galaxy’s disk, about 200,000 light-years from the center. The star is destined to roam intergalactic space. Illustration Credit: NASA, ESA, and G. Bacon (STScI) Science Credit: NASA, ESA, O. Gnedin (University of Michigan, Ann Arbor), and W. Brown (Harvard-Smithsonian Center for Astrophysics, Cambridge, Mass.)

On the Net:

Hunting Down Frugal Aliens

‘Benford beacons’ concept could refocus search for signs of intelligent extraterrestrial life

For 50 years, humans have scanned the skies with radio telescopes for distant electronic signals indicating the existence of intelligent alien life. The search – centered at the SETI Institute in Mountain View, Calif. – has tapped into our collective fascination with the concept that we may not be alone in the universe.

But the effort has so far proved fruitless, and the scientific community driving the SETI project has begun questioning its methodology, which entails listening to specific nearby stars for unusual blips or bleeps. Is there a better approach?

UC Irvine astrophysicist Gregory Benford and his twin, James – a fellow physicist specializing in high-powered microwave technology – believe there is, and their ideas are garnering attention.

In two studies appearing in the June issue of the journal Astrobiology, the Benford brothers, along with James’ son Dominic, a NASA scientist, examine the perspective of a civilization sending signals into space – or, as Gregory Benford puts it, “the point of view of the guys paying the bill.”

“Our grandfather used to say, ‘Talk is cheap, but whiskey costs money,’” the physics professor says. “Whatever the life form, evolution selects for economy of resources. Broadcasting is expensive, and transmitting signals across light-years would require considerable resources.”

Assuming that an alien civilization would strive to optimize costs, limit waste and make its signaling technology more efficient, the Benfords propose that these signals would not be continuously blasted out in all directions but rather would be pulsed, narrowly directed and broadband in the 1-to-10-gigahertz range.

“This approach is more like Twitter and less like War and Peace,” says James Benford, founder and president of Microwave Sciences Inc. in Lafayette, Calif.

Their concept of short, targeted blips – dubbed “Benford beacons” by the science press – has gotten extensive coverage in such publications as Astronomy Now. Well-known cosmologist Paul Davies, in his 2010 book The Eerie Silence: Renewing Our Search for Alien Intelligence, supports the theory.

This means that SETI – which focuses its receivers on narrow-band input – may be looking for the wrong kind of signals. The Benfords and a growing number of scientists involved in the hunt for extraterrestrial life advocate adjusting SETI receivers to maximize their ability to detect direct, broadband beacon blasts.

But where to look? The Benfords’ frugal-alien model points to our own Milky Way galaxy, especially the center, where 90 percent of its stars are clustered.

“The stars there are a billion years older than our sun, which suggests a greater possibility of contact with an advanced civilization than does pointing SETI receivers outward to the newer and less crowded edge of our galaxy,” Gregory Benford says.

“Will searching for distant messages work? Is there intelligent life out there? The SETI effort is worth continuing, but our common-sense beacons approach seems more likely to answer those questions.”

By Tom Vasich, UC Irvine

Image Caption: Astrophysicist Gregory Benford – standing before the UCI Observatory – believes an alien civilization would transmit “cost-optimized” signals rather than the kind sought for decades by the SETI Institute. Credit: Steve Zylius/UC Irvine

On the Net:

i-Dosing: Innocent Download Or Intoxicating Drug?

Parents who have worked hard to keep their kids off of substances like marijuana or narcotics now have a new addiction to worry about–digital music downloads that can reportedly induce feelings of extreme ecstasy and alter the listener’s brain patterns to simulate a drug-induced high.

The phenomenon, which is being referred to as “i-Dosing,” was first reported by Oklahoma News 9 last week and has since been reported on by several media outlets worldwide.

According to Ryan Single of Wired.com, i-dosing “involves finding an online dealer who can hook you up with ‘digital drugs’ that get you high through your headphones… I-dosing involves donning headphones and listening to ‘music’–largely a droning noise–which the sites peddling the sounds promise will get you high. Teens are listening to such tracks as ‘Gates of Hades,’ which is available on YouTube gratis (yes, the first one is always free).”

Daniel Bates of the Daily Mail Online notes that several videos have been posted on sites, including YouTube, which “show a young girl freaking out and leaping up in fear, a teenager shaking violently and a young boy in extreme distress… Those who come up with the ‘doses’ claim different tracks mimic different sensations you can feel by taking drugs such as Ecstasy or smoking cannabis.”

It may sound unusual, but the i-dosing phenomenon has captured the attention of the Oklahoma Bureau of Narcotics and Dangerous Drugs, which has released a statement warning kids not to partake of the supposedly mind-altering MP3s and videos.

“Kids are going to flock to these sites just to see what it is about and it can lead them to other places,” Mark Woodward, a spokesman with the group, told Bates on Wednesday. “We want parents to be aware of what sites their kids are visiting and not just dismiss this as something harmless on the computer… If you want to reach these kids, save these kids and keep these kids safe, parents have to be aware. They’ve got to take action.”

Not everyone is buying into the dangers of this current craze, however.

Dave Pell of Gizmodo is one of the skeptics. In a July 21 blog entry entitled, “Parents, Your Kids Aren’t Getting High i-Dosing MP3s,” Pell writes, “So let me get this straight. Kids are putting on some headphones, lying down and cranking some really monotonous music and that’s supposed to be the internet-era drug we should worry about?”

“That’s like worrying that a crack addict is drinking too much decaf,” he adds, noting that information and technology overload are more pressing concerns. “What we now call i-dosing are sounds previously known as binaural beats that have been used for research and sleep therapy. What’s amazing is that these beats are suddenly being viewed as something dangerous or even as an [illicit] drug… If i-dosing means putting on your headphones and being alone in your head for a few minutes at a time, then it sounds more like a cure than a disease.”
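For readers wondering what a binaural beat actually is, the minimal sketch below generates one: a pure tone in each stereo channel, offset by a small frequency difference that the listener perceives as a slow pulsing. The 200 Hz carrier and 10 Hz offset are illustrative choices, not values taken from any “i-dose” track.

```python
# Minimal sketch of a binaural beat: one sine tone per stereo channel,
# offset by 10 Hz, written out as a 16-bit WAV file. All values here are
# illustrative; nothing is taken from an actual "i-dose" recording.
import numpy as np
import wave

RATE = 44100          # samples per second
DURATION = 10.0       # seconds of audio
CARRIER = 200.0       # left-ear tone, Hz
OFFSET = 10.0         # right ear is 10 Hz higher; the perceived beat rate

t = np.arange(int(RATE * DURATION)) / RATE
left = np.sin(2 * np.pi * CARRIER * t)
right = np.sin(2 * np.pi * (CARRIER + OFFSET) * t)

# Interleave the two channels and scale to a comfortable 16-bit level.
stereo = np.empty(2 * t.size, dtype=np.int16)
stereo[0::2] = (left * 0.3 * 32767).astype(np.int16)
stereo[1::2] = (right * 0.3 * 32767).astype(np.int16)

with wave.open("binaural_beat.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)       # 2 bytes per sample = 16-bit audio
    f.setframerate(RATE)
    f.writeframes(stereo.tobytes())
```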

On the Net:

Quasar Acts As A Cosmic Lens

The EPFL’s Laboratory of Astrophysics has for the first time observed a quasar that is located between the earth and a more distant galaxy and acts as a gravitational lens

A quasar acting as a gravitational lens has now been observed for the first time. This discovery, made by the EPFL’s Laboratory of Astrophysics in cooperation with Caltech, represents an advance in the field, since it will allow scientists to weigh and measure a galaxy that contains a quasar. The news is published July 20 in the journal Astronomy & Astrophysics.

Gravitational lenses are common throughout the universe. They are caused by massive objects such as stars or galaxies that bend rays of light passing nearby. If such an object lies between the earth and a more distant light source, the source appears brighter and easier to observe, but also strongly distorted. If the alignment of the bodies involved is almost perfect, multiple images of the source appear.

The lens phenomenon is not only an interesting result of Einstein’s theory of general relativity; it has also been a valuable astrophysical tool with important applications in the search for extrasolar planets and the study of stars, galaxies, clusters of galaxies and quasars. For example, the nature of the distortion, the number of images of the most distant objects and their position in the sky provide essential information about the distribution of matter in the lens galaxy and allow a measurement of its total matter, including dark matter, to be made.
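The geometry behind that “weighing” can be summarized with the textbook Einstein-radius relation for a simple point-mass lens (a simplification quoted here for illustration; the study itself models an extended galaxy). The angular scale of the lensed images grows with the square root of the lens mass, so measuring the image positions constrains the mass:

$$
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{ls}}{D_{l}\,D_{s}}}
$$

where $M$ is the mass of the lens, $D_l$ and $D_s$ are the distances to the lens and to the source, and $D_{ls}$ is the distance between the lens and the source.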

A quasar is the extremely bright core of a galaxy, powered by a supermassive black hole. The small fraction of the galaxy’s matter that falls close enough to be swallowed by the black hole emits intense light before disappearing forever, giving rise to this extremely bright and transient phenomenon.

To date, about a hundred of these quasars emitting light that is concentrated by a lens galaxy located between them and the earth have been discovered. However, this is the first time that the opposite case has been observed, where the quasar is in the foreground and the galaxy behind it. The interest of this discovery lies in the fact that it provides an unprecedented opportunity to “weigh” a galaxy containing a quasar.

This advance was made thanks to the Sloan Digital Sky Survey database (www.sdss.org), which makes three-dimensional sky maps covering more than a quarter of the sky available to scientists and contains nearly a million galaxies and over 120,000 quasars. A sample of some 23,000 of these quasars in the northern hemisphere was selected by the Laboratory of Astrophysics team. In the end, only four of them appeared to act as gravitational lenses.

One of these was studied using the Keck telescope (Caltech) on Mauna Kea peak in Hawaii. These images will be supplemented in the coming months with very high-quality photographs from the Hubble Space Telescope, which will reveal more about the nature of this particular quasar.

Image Caption: This is an image of the first-ever foreground quasar (blue) lensing a background galaxy (red), taken with the Keck II telescope. Credit: Courbin, Meylan, Djorgovski, et al., EPFL/Caltech/WMKO

On the Net:

Russia Funding New Spaceport

Russian Prime Minister Vladimir Putin said Monday that the country would provide $800 million to help jump start the construction of a new cosmodrome to ease its dependence on a Soviet-era launch site in Kazakhstan.

“The government has made a decision to earmark $809 million over the next three years for the start of the full-blown construction of the Vostochny Cosmodrome,” Putin said in televised remarks at a government meeting.

Russia rents its main Soviet-era spaceport, Baikonur, from neighboring Kazakhstan. It said it plans to build a new one near the town of Uglegorsk in the far-eastern Amur region, with the facility expected to come online by 2015.

“I very much expect that Vostochny will become the first national cosmodrome for civilian use and guarantee Russia complete independence of space activities,” Putin said at the Energia Rocket and Space Corporation, the country’s main maker of spacecraft.

“It is important that the cosmodrome will effectively ensure the operation of all promising space projects,” including planned interplanetary flights, Putin said.

Putin said that Russia will put aside about $3.2 million for its space industry this year, including the development of GLONASS, which is its answer to U.S. Global Positioning System (GPS).

He said that he had ordered that foreign scientists from NASA, the European Space Agency, the Japan Aerospace Exploration Agency and Boeing, among others, be given access to the Energia rocket maker.

“Together with their Russian colleagues, they will ensure the work of the International Space Station,” Putin said in remarks released by the government.

“The same decision has been made in relation to our Ukrainian friends,” he added. “They will take part in the work to assemble and test the Soyuz and Progress manned spacecraft.”

Russia sent the first man into space in 1961 and launched Sputnik, the first artificial satellite, four years earlier.

Image Caption: Currently Russia rents its main Soviet-era spaceport Baikonur from neighboring Kazakhstan. Credit: NASA/Bill Ingalls

On the Net: