Long-Term Study Examines Western European Bonelli’s Eagle Populations

redOrbit Staff & Wire Reports – Your Universe Online

A 30-year study of Bonelli’s Eagle populations in Western Europe has revealed the birds are most at risk in northern Spain, researchers from the University of Barcelona announced on Wednesday.

Bonelli’s Eagle (Aquila fasciata) is one of the most common birds of prey in the Mediterranean area, as well as one of the region’s most endangered avian species, Joan Real and Antonio Hernández Matías of the university’s Biodiversity Research Institute (IRBio) explained in a statement.

Real and Matías, who also represent the university’s Department of Animal Biology, set out to analyze key vital rates in Bonelli’s Eagle populations between 1980 and 2009. They conducted long-term monitoring across the Iberian Peninsula and France to determine demographic relationships among populations and to understand population dynamics throughout the western part of the continent, where the birds are most at risk.

“Bonelli’s Eagle does not have a large distribution in Europe. Its northwestern edge is located in southern France, and the southernmost populations are in Portugal and Andalusia,” Real said. “We have been performing… an annual analysis of Bonelli’s Eagle populations in Catalonia for thirty years. This work, together with other analyses carried out by European research groups, has covered for the first time the whole eagle population.”

Their research has enabled Real and Matías to study the birds in a homogeneous manner and to track their demographic evolution for conservation purposes. They found the eagle populations are not isolated in the Iberian Peninsula (in the extreme southwest of Europe). Rather, Matías said, “our complex model confirms the presence of a large-scale, spatially structured population with source–sink dynamics in the Peninsula.”

In other words, as the authors explain in the Ecological Society of America journal Ecological Monographs, populations located throughout the region could be essential to the overall survival of the Bonelli’s Eagle in the Peninsula, and perhaps even throughout the continent. This is said to be the first study to provide an in-depth analysis of differences between vital rates in eagle populations in Europe.

“These differences are due to human activity and environmental, geographical and climatic conditions,” the university said. “Most populations in Northern Iberia, where rural areas are underpopulated and tree-covered, are at high risk (high adult mortality rate, decreased fertility, etc.). However, in Southern Iberia populations, which live in warmer areas where traditional activity continues, demographic parameters are better.”

“The species coexists well with traditional agriculture and farming managed in a sustainable way, as they facilitate prey presence,” Real added. However, if those activities cease, the result could be habitat change and the loss of prey, making it difficult for the eagles to survive. Furthermore, “excessive human activity” such as intensive farming or urbanization could disrupt breeding sites and otherwise harm the birds’ survival rates.

The investigators believe their findings emphasize the necessity of accounting for spatial structure, subpopulation heterogeneity, dispersal processes and the main sources of uncertainty in future work on the Bonelli’s Eagle. They added that it will be important to consider the potential effect of habitat fragmentation and other ecological factors in order to improve predictions about the potential impact of global change on the species.

Golden Rice

Golden Rice is a variety of Oryza sativa rice produced through genetic engineering to biosynthesize beta-carotene, a precursor of vitamin A, in the edible parts of the rice. The research was conducted with the goal of producing a fortified food to be grown and consumed in areas with a shortage of dietary vitamin A, a deficiency which is estimated to kill 670,000 children under five years old each year.

Golden rice differs from its parental strain by the addition of two beta-carotene biosynthesis genes. The scientific details of the rice were initially published in Science in 2000, the product of an eight-year project by Ingo Potrykus of the Swiss Federal Institute of Technology and Peter Beyer of the University of Freiburg. At the time of the publication, golden rice was considered a significant breakthrough in biotechnology, as the researchers had engineered an entire biosynthetic pathway.

In 2005, a new variety named Golden Rice 2, which produces up to 23 times more beta-carotene than the original golden rice, was revealed. Although Golden Rice was developed as a humanitarian tool, it has been met with substantial opposition from environmental and anti-globalization activists. Golden Rice has undergone two years of field testing in the Philippines.

Golden Rice was designed to generate beta-carotene, a precursor of vitamin A, in the edible portion of the rice, the endosperm. The rice plant naturally produces beta-carotene in its leaves, where it is involved in photosynthesis. However, the plant does not normally produce the pigment in the endosperm, where photosynthesis does not take place. A crucial breakthrough was the discovery that a single phytoene desaturase gene (bacterial CrtI) can be used to produce lycopene from phytoene in GM tomato, as opposed to having to introduce the multiple carotene desaturases normally used by higher plants. Lycopene is then cyclized to beta-carotene by the endogenous cyclase in Golden Rice.

Golden Rice was created by transforming rice with only two beta-carotene biosynthesis genes: psy (phytoene synthase) from daffodil (Narcissus pseudonarcissus) and crtI (carotene desaturase) from the soil bacterium Erwinia uredovora. The insertion of a lyc (lycopene cyclase) gene was thought to be required, but further research showed that the cyclase is already produced in wild-type rice endosperm.

The psy and crtI genes were transformed into the rice nuclear genome and placed under the control of an endosperm-specific promoter, so they are expressed only in the endosperm. The exogenous genes carry transit peptide sequences so their products are targeted to the plastid, where geranylgeranyl diphosphate formation takes place. The bacterial crtI gene was a significant inclusion for completing the pathway, since it can catalyze multiple steps in the synthesis of carotenoids up to lycopene, while these steps require more than one enzyme in plants. The end product of the engineered pathway is lycopene, but if the plant accumulated lycopene, the rice would be colored red. Recent analysis has shown that the plant’s endogenous enzymes process the lycopene to beta-carotene in the endosperm, giving the rice the distinctive yellow color for which it is named. The original Golden Rice was named SGR1, and under greenhouse conditions it generated 1.6 micrograms of carotenoids per gram of rice.
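To keep the steps straight, here is a minimal sketch that summarizes the engineered route exactly as the text above describes it; the data layout and labels are illustrative, not drawn from the original papers.

```python
# Pathway summary as described in the text; illustrative only.
# Each step: (substrate, product, enzyme, gene source).
pathway = [
    ("geranylgeranyl diphosphate", "phytoene",
     "phytoene synthase (psy)", "daffodil"),
    ("phytoene", "lycopene",
     "carotene desaturase (crtI)", "Erwinia uredovora"),
    ("lycopene", "beta-carotene",
     "endogenous cyclase", "wild-type rice"),
]

for substrate, product, enzyme, source in pathway:
    print(f"{substrate} -> {product}  [{enzyme}, from {source}]")
```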

Golden Rice has been bred with local rice cultivars in the Philippines and Taiwan, and with the American rice cultivar ‘Cocodrie’. The first field trials of these golden rice cultivars were conducted by the Louisiana State University Agricultural Center in 2004. Field testing provides a more accurate measurement of nutritional value and permits feeding tests to be performed. Initial results from the field tests showed that field-grown golden rice produces four to five times more beta-carotene than golden rice grown under greenhouse conditions.

In 2005, a team of researchers at the biotechnology company Syngenta created a variety of golden rice named “Golden Rice 2”. They combined the phytoene synthase gene from maize with the crtI gene from the original golden rice. Golden Rice 2 produces up to 23 times more carotenoids than the original golden rice, and preferentially accumulates beta-carotene. It is estimated that 144 grams of the highest-yielding strain would have to be consumed to meet the Recommended Dietary Allowance. The bioavailability of the carotene from golden rice has been confirmed, and golden rice has been found to be an effective source of vitamin A for humans.
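As a rough illustration of where serving-size estimates like the 144-gram figure come from, the sketch below works through the arithmetic with assumed round numbers; the beta-carotene content, conversion ratio and child RDA used here are illustrative assumptions, not values reported by the researchers.

```python
# Back-of-envelope estimate; all three constants are assumed values.
BETA_CAROTENE_UG_PER_G = 31   # assumed beta-carotene content of Golden Rice 2
CONVERSION_RATIO = 12         # assumed ug of beta-carotene per ug of retinol
CHILD_RDA_UG_RAE = 300        # assumed child RDA in retinol activity equivalents

beta_carotene_needed = CHILD_RDA_UG_RAE * CONVERSION_RATIO   # micrograms
grams_of_rice = beta_carotene_needed / BETA_CAROTENE_UG_PER_G
print(f"Daily serving: ~{grams_of_rice:.0f} g of rice")      # ~116 g
```

With a more favorable conversion ratio, as later bioavailability studies reported for golden rice, the required serving shrinks accordingly, which is how figures as low as 75 grams per day become plausible.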

In June 2005, researcher Peter Beyer received funding from the Bill and Melinda Gates Foundation to further improve golden rice by increasing the levels and bioavailability of pro-vitamin A, vitamin E, iron and zinc, and to improve protein quality through genetic modification.

The research that led to golden rice was performed with the objective of helping children who suffer from vitamin A deficiency (VAD). In 2005, an estimated 190 million children and 19 million pregnant women in 122 countries were affected by VAD. VAD is held responsible for one to two million deaths, 500,000 cases of irreversible blindness and millions of cases of xerophthalmia annually. Children and pregnant women are at the highest risk. Vitamin A is supplemented orally and by injection in areas where the diet is lacking in vitamin A. As of 1999, 43 countries had vitamin A supplementation programs for children under the age of five; in 10 of these countries, two high-dose supplements are available per year, which, according to UNICEF, could effectively eliminate VAD. However, UNICEF and numerous NGOs involved in supplementation note that more frequent low-dose supplementation should be a goal wherever possible.

Because many children in countries with a dietary deficiency in vitamin A depend on rice as a staple food, genetically modifying rice to produce the vitamin A precursor beta-carotene is seen as a straightforward and less expensive alternative to vitamin supplements or an increase in the consumption of green vegetables or animal products. It can be considered the genetically engineered equivalent of fluoridated water or iodized salt, in that it helps prevent disease, except that fluoride is not an essential nutrient for survival.

Initial analyses of the potential nutritional benefits of golden rice suggested that its consumption alone would not eliminate vitamin A deficiency, but should be seen as a complement to other methods of vitamin A supplementation. Since then, improved strains of golden rice have been developed that contain sufficient provitamin A to provide the total dietary requirement of this nutrient to people who eat about 75 grams of golden rice per day.

In particular, since carotenes are hydrophobic, an adequate amount of fat must be present in the diet for golden rice to be able to reduce vitamin A deficiency. In that respect, it is significant that vitamin A deficiency is rarely an isolated phenomenon; it is usually coupled with a general lack of a balanced diet. The RDA levels accepted in developed countries are far in excess of the amounts required to prevent blindness. Furthermore, this claim referred to an early cultivar of golden rice; one bowl of the latest version provides 60 percent of the RDA for healthy children.

Critics of genetically engineered crops have raised a variety of concerns. One of these is that golden rice originally did not produce sufficient provitamin A; this issue was solved by the development of new strains of rice. Nonetheless, doubts remain about the speed at which vitamin A degrades once the plant is harvested, and about how much remains after cooking. A 2009 study concluded that golden rice is successfully converted into vitamin A in humans, and a 2012 study that fed 68 children ages 6 to 8 concluded that golden rice was as good as vitamin A supplements and better than the natural beta-carotene in spinach.

Greenpeace opposes the release of any genetically modified organisms into the environment and is concerned that golden rice is a Pandora’s box that will open the door to more widespread use of GMOs.

Vandana Shiva, an Indian anti-GMO activist, argued that the issue was not that the crop had any particular deficiencies, but that there were potential issues with poverty and loss of biodiversity in food crops, issues aggravated by corporate control of agriculture through the control of genetically modified organisms. By concentrating on a narrow issue (vitamin A deficiency), Shiva argued, golden rice proponents were obscuring the larger issue of the limited availability of diverse and nutritionally adequate food sources. Other groups argued that a varied diet containing foods rich in beta-carotene, such as sweet potatoes, leafy green vegetables and fruit, would supply children with adequate vitamin A. However, Keith West of the Johns Hopkins Bloomberg School of Public Health has argued that foodstuffs containing vitamin A are either not available, available only in certain seasons, or too expensive for poor families in underdeveloped countries.

Due to a lack of real-world studies and uncertainty about how many people will use golden rice, WHO malnutrition expert Francesco Branca concludes that “giving out supplements, fortifying existing foods with vitamin A, and teaching people to grow carrots or certain leafy vegetables are, for now, more promising ways to fight the problem”. More recently, author Michael Pollan, who had attacked the product in 2001, expressed support for continuing the research, while remaining doubtful about the benefits.

An experimental plot of golden rice being grown in the Philippines was uprooted in a direct action on August 8, 2013. While the action was at first credited to 400 local farmers, it was later found to have been carried out by a group of 50 anti-GMO activists.

Potrykus has organized an effort to have golden rice distributed free to subsistence farmers. Free licenses for developing countries were arranged quickly thanks to the positive publicity that golden rice received, especially in Time magazine in July 2000. Golden Rice was said to be the first recombinant DNA technology crop that was unarguably beneficial. Monsanto Company was one of the first companies to grant free licenses. The cutoff between humanitarian and commercial use was set at US$10,000: as long as a farmer or subsequent user of Golden Rice genetics does not make more than $10,000 per year, no royalties need to be paid. In addition, farmers are permitted to keep and replant seed.

Image Caption: Golden Rice grain compared to white rice grain in screenhouse of Golden Rice plants. Credit: International Rice Research Institute (IRRI)/Wikipedia (CC BY 2.0)

Researchers Discover Ancient Martian Supervolcano

[ Watch the Video: Supervolcano Found On Mars ]

Lee Rannals for redOrbit.com – Your Universe Online

In a report that appeared in the journal Nature this week, scientists say they have discovered a supervolcano on Mars for the first time. The scientists determined that a vast circular basin on the face of the Red Planet is actually the remains of an ancient supervolcano eruption. To reach this conclusion, the team used images and topographic data from NASA’s Mars Odyssey, Mars Global Surveyor and Mars Reconnaissance Orbiter spacecraft.

“On Mars, young volcanoes have a very distinctive appearance that allows us to identify them,” stated Joseph R. Michalski, a Senior Scientist at the Planetary Science Institute, who led the study. “The long-standing question has been what ancient volcanoes on Mars look like. Perhaps they look like this one.”

The researchers said a large body of magma loaded with dissolved gas rose through a thin crust to the surface rapidly, like a bottle of soda that has been shaken. This supervolcano would have ejected its contents far and wide, spilling ash and material across vast swaths of the Red Planet.

“This highly explosive type of eruption is a game-changer, spewing many times more ash and other material than typical, younger Martian volcanoes,” said Jacob E. Bleacher of NASA Goddard Space Flight Center, who co-authored the paper. “During these types of eruptions on Earth, the debris may spread so far through the atmosphere and remain so long that it alters the global temperature for years.”

When the supervolcano expelled its contents during the eruption, the surrounding ground sank, like a balloon deprived of its air.

The supervolcano is located in the Arabia Terra region of Mars, a battered terrain loaded with impact craters. When Michalski examined this particular basin more closely, he noticed that it lacked the raised rim that an impact crater typically has. He was also unable to find a blanket of ejecta, the melted rock that splashes outside the crater when an object hits.

After noting this, Michalski contacted Bleacher, who identified features at Eden Patera that usually indicate volcanic activity. The scientists found that the outside of the basin is ringed by the kinds of faults and valleys that occur when the ground collapses because of activity below the surface. They also found a few more basins nearby that are volcano candidates.

“If just a handful of volcanoes like these were once active, they could have had a major impact on the evolution of Mars,” Bleacher said.

UCLA Study Finds Link Between High-Fat, High-Calorie Diet And Pancreas Cancer

Results support low-fat, low-calorie diet as preventive measure against disease

Researchers at UCLA’s Jonsson Comprehensive Cancer Center have found that mice made obese by high-calorie, high-fat diets develop abnormally high numbers of lesions known to be precursors to pancreas cancer.

This is the first study to show a direct causative link in an animal model between obesity and risk of this deadly cancer.

The study, published Sept. 30 in the journal Cancer Prevention Research, was led by Dr. Guido Eibl, a member of the Jonsson Cancer Center and a professor in the department of surgery at the David Geffen School of Medicine at UCLA.

Pancreatic ductal adenocarcinoma, or cancer of the pancreas, is one of the most deadly forms of cancer in humans. Overall five-year survival rates are approximately 3 to 5 percent, and the average survival period after diagnosis is just four to six months. It is a particularly aggressive disease, one that is often beyond the point of effective treatment by the time symptoms appear.

Since current treatments are few and of limited effectiveness, researchers are turning to prevention strategies to try to make headway against the disease before it reaches advanced stages.

Previous research in large populations has strongly supported a positive association between obesity and increased risk of pancreas cancer, but no studies had yet modeled human pancreas cancer in animals. The availability of genetically engineered model mice that carry the same mutation found in human pancreas cancer patients — the KRAS mutation — has made the study of possible causes more feasible, because the changes in mouse metabolism caused by obesity are similar to those in humans.

Eibl and his colleagues set out to model diet-induced obesity and the development of pancreas cancer in a set of mice and then compare them to genetically identical mice that had not been given a high-fat, high-calorie diet. Obesity in these mice resembles human obesity in a number of important clinical features, including weight gain and the disturbance of metabolism. The mouse model was ideal for unraveling any underlying biological mechanisms of pancreas cancer put in motion by obesity, the researchers said.

The research team also set parameters to assess the impact of the high-fat, high-calorie diet on mouse pancreas tissue, such as increased inflammation and other biological signs that indicate pancreas problems. These indicators were measured and used to create an overall “pancreatitis score” to indicate the negative effects on the pancreas. The researchers then conducted pathology tests on mouse pancreas tissue to determine how many precursor lesions — known as pancreatic intraepithelial neoplasias — had developed.

The mice that ate a normal diet gained an average of 7.2 grams (plus or minus approximately 2.8 grams) over 14 months. Mice that ate the high-fat, high-calorie diet gained an average of 15.9 grams (plus or minus 3.2 grams). Mice fed the normal diet had mostly normal pancreases with very few scattered lesions. Mice fed the high-fat, high-calorie diet had significantly more lesions and had fewer healthy pancreases.

The study showed that the mice fed a diet high in fats and calories gained significantly more weight, had abnormalities in their metabolism and increased insulin levels, and displayed marked pancreas tissue inflammation and development of pancreas intraepithelial neoplasias. These observations strongly suggest that such a diet leads to weight gain and metabolism disturbances, can cause pancreas inflammation, and promotes pancreas lesions that are precursors to cancer.

“The development of these lesions in mice is very similar to what happens in humans,” Eibl said. “These lesions take a long time to develop into cancer, so there is enough time for cancer-preventive strategies, such as changing to a lower-fat, lower-calorie diet, to have a positive effect.”

Herman The Bull

Herman the Bull was the first genetically modified or transgenic bovine in the world. The publication of Herman’s creation caused an ethical storm.

Herman was genetically engineered at the early embryo stage in a laboratory by Gen Pharm International of Mountain View, California. Scientists microinjected cells with the human gene coding for lactoferrin. The Dutch Parliament changed the law in December 1992 to allow Herman to reproduce. Eight calves were born in 1994 following a breeding program established at Gen Pharm’s European laboratory, Pharming Group N.V., in Leiden, the Netherlands. All of the calves inherited the lactoferrin production gene. Through subsequent sirings, Herman fathered a total of 55 calves.

Dutch law demanded he be slaughtered at the conclusion of his role in the experiment. However, after the public and scientists rallied to his defense, the Dutch Agriculture Minister at the time, Jozias van Aartsen, agreed to a pardon, provided Herman had no more offspring.

Together with the cloned cows Belle and Holly, he lived out his retirement at Naturalis, the National Museum of Natural History in Leiden. Herman the Bull was one of the oldest bulls ever in the Netherlands.

On April 2, 2004, Herman was euthanized by veterinarians from the University of Utrecht because he was suffering badly from osteoarthritis.

Herman the Bull’s hide has been preserved and mounted by taxidermists, and since February 15, 2008, he has been on permanent display at Naturalis. According to Naturalis, Herman’s symbolic value lies in representing the onset of a new era in the way man deals with nature: he is an icon of scientific progress and of the public discussion it provoked.

Image Caption: Bull Herman (Lelystad, 16 December 1990 – Leiden, 2 April 2004), the first non-human mammal with human DNA and the first genetically engineered animal on exhibit in the National Museum of Natural History ‘Naturalis’ in Leiden, the Netherlands. Credit: Peter Maas/Wikipedia (CC BY-SA 3.0)

Flowering Plants Evolved 100 Million Years Earlier Than Believed

[ Watch the Video: Flowering Plants Arose In The Early Triassic ]

redOrbit Staff & Wire Reports – Your Universe Online

Researchers from the University of Zurich in Switzerland have uncovered evidence suggesting flowering plants evolved 100 million years earlier than previously believed, according to new research appearing in the open-access journal Frontiers in Plant Science.

Flowering plants evolved from extinct plants related to conifers, cycads, ginkgos and seed ferns, and the oldest known fossils from these types of plants are pollen grains – small, robust spores which are numerous and fossilize more easily than flowers or leaves. Now, drilling cores have yielded well-preserved 240-million-year-old pollen grains, the oldest known fossils from flowering plants.

“An uninterrupted sequence of fossilized pollen from flowers begins in the Early Cretaceous, approximately 140 million years ago, and it is generally assumed that flowering plants first evolved around that time,” the university explained in a statement.

“But the present study documents flowering plant-like pollen that is 100 million years older, implying that flowering plants may have originated in the Early Triassic (between 252 and 247 million years ago) or even earlier,” it added. “Many studies have tried to estimate the age of flowering plants from molecular data, but so far no consensus has been reached. Depending on dataset and method, these estimates range from the Triassic to the Cretaceous.”

Typically, molecular estimates need to be anchored in fossil evidence, the researchers explained. For flowering plants, however, no exceptionally old fossils had been available, which is why the Zurich team’s discovery of flower-like pollen dating back to the Triassic is said to be so significant.

Study authors Peter Hochuli and Susanne Feist-Burkhardt analyzed a pair of drilling cores obtained from Weiach and Leuggern in northern Switzerland. They discovered pollen grains resembling fossil pollen from the earliest known flowering plants, and then used laser-scanning microscopy to obtain high-resolution 3D images of six different types of pollen.

Nine years ago, the duo conducted research in which they described different but related flowering plant-like pollen from the Middle Triassic in cores from the Barents Sea, south of Spitsbergen. That study, combined with the results of their current work, leads Hochuli to conclude “even highly cautious scientists will now be convinced that flowering plants evolved long before the Cretaceous.”

“What might these primitive flowering plants have looked like? In the Middle Triassic, both the Barents Sea and Switzerland lay in the subtropics, but the area of Switzerland was much drier than the region of the Barents Sea,” the university said. “This implies that these plants occurred over a broad ecological range. The pollen’s structure suggests that the plants were pollinated by insects: most likely beetles, as bees would not evolve for another 100 million years.”

Image Below: These are images of pollen grains. Credit: Hochuli, P.A., and Feist-Burkhardt, S., Frontiers in Plant Science, 2013

Bed Sharing With Infants On The Rise, Experts Disagree On Pros And Cons

Brett Smith for redOrbit.com – Your Universe Online

Despite a wealth of information advising against it, the number of infants sharing a bed with their parents or a sibling increased between 1993 and 2010, according to a new report in JAMA Pediatrics.

While parents and their babies sharing the same bed may seem like a benign or even loving habit, studies have shown that the practice is connected to an increased risk of sudden infant death syndrome (SIDS).

Based on a series of annual telephone surveys conducted between 1993 and 2010 in 48 states, the study found that the proportion of infants sharing a bed increased from 6.5 percent in 1993 to almost 14 percent in 2010. The increase was most significant among black and Hispanic families throughout the study period. Among white families, the practice increased from 1993 to 2000 but appeared to level off from 2001 to 2010.

“That’s a concern because we know that blacks are at increased risk for SIDS,” study author Marian Willinger, an administrator at the National Institute of Child Health and Human Development, told the Associated Press. “We want to eliminate as many risks as we can for everybody, particularly in that population where we’re seeing increasing disparities.”

Infant deaths are categorized as SIDS if they occur in the first year of life and remain inexplicable even after a thorough investigation. Federal officials started taking annual surveys of infant sleep habits in 1993, after the American Academy of Pediatrics recommended that parents place sleeping babies on their backs as a way of lowering SIDS risk.

The study authors also noted household income, region of the country, infant age and whether the child was born prematurely as determining factors for bed sharing. Over half the respondents surveyed since 2006 said medical professionals had never discussed the risks of bed sharing with them.

“That in and of itself is kind of shocking … because the recommendations have long been out,” SIDS expert Dr. Fern R. Hauck, a family medicine professor at the University of Virginia, told the AP.

In a contrasting editorial published alongside the study, Dr. Abraham B. Bergman of the Harborview Medical Center in Seattle disputed the notion that bed sharing is a bad idea.

“Colson and colleagues report that from 1993 through 2010, the overall trend for U.S. caregivers to share a bed (also known as cosleeping) with their infants has significantly increased, especially among black families,” he wrote. “Because of their belief that bed sharing increases infant mortality, the authors call for increased efforts by pediatricians to discourage the practice. I find the report disquieting because evidence linking bed sharing per se to the increased risk for infant death is lacking.”

“The campaign against bed sharing stems from a recommendation of the American Academy of Pediatrics (AAP),” Bergman continued. “Equal time in counseling should be given to the benefits of bed sharing, such as more sleep for the parent, easier breastfeeding when the infant is nearby, ease of pacifier reinsertion, and the intangible satisfaction of skin-to-skin contact.”

“In its admonition against bed sharing, the AAP has overreached,” Bergman concluded.

War On Illegal Drugs Is A Losing Battle For Authorities

Lee Rannals for redOrbit.com – Your Universe Online

Authorities are losing the global battle trying to control illegal drugs, according to a study published in the journal BMJ Open.

Researchers analyzed data from seven international government-funded drug surveillance systems containing 10 years of information on the price and purity of cannabis, cocaine and opiates such as heroin. They also reviewed the number of seizures of illegal drugs in production regions and rates of consumption in markets where demand for illegal drugs is high.

The team found that the purity of these illegal substances is growing, which suggests that the international authorities are losing the battle. Overall, they concluded that the global supply of illicit drugs has not likely been reduced in the past two decades.

“In particular, the data presented in this study suggest that the supply of opiates and cannabis have increased, given the increasing potency and decreasing prices of these illegal commodities,” the authors wrote in the journal. “These findings suggest that expanding efforts at controlling the global illegal drug market through law enforcement are failing.”

Scientists found that the purity and potency of illegal drugs either generally remained stable or increased between 1990 and 2010. They also found that the street price has dropped, indicating a jump in the supply, and seizures of drugs increased in countries of major supply and demand.

The average street price of heroin, cocaine and marijuana in the US dropped by over 80 percent over the past two decades, while the purity of these drugs increased by up to 161 percent. The average price of opiates and cocaine in Europe decreased by 74 percent and 51 percent, respectively; in Australia, the price of cannabis dropped by 49 percent while cocaine prices dropped by 14 percent.

In an accompanying podcast, Dr Evan Wood, scientific chair of the International Centre for Science in Drug Policy and research chair in Inner City Medicine at the University of British Columbia in Canada, stated, “These findings add to the growing body of evidence that the war on drugs has failed. We should look to implement policies that place community health and safety at the forefront of our efforts, and consider drug use a public health rather than a criminal justice issue.”

The researchers say they hope their study brings to light the need to improve drug strategies across the globe.

“It is hoped that this study highlights the need to re-examine the effectiveness of national and international drug strategies that place a disproportionate emphasis on supply reduction at the expense of evidence based prevention and treatment of problematic illegal drug use,” the authors said.

Study Shows Stress Leads To Dementia And Alzheimer’s In Women

[ Watch the Video: Mid-life Stress Can Bring On Dementia ]

Michael Harper for redOrbit.com – Your Universe Online

Women who experience stressful life events in their 30s, 40s and 50s are more likely to develop dementia later in life. An extensive study of Swedish women spanning almost 40 years found mid-life stress accounted for a 21 percent greater risk of developing Alzheimer’s disease.

Women who encountered stressful events more frequently were more likely to develop the neurodegenerative disease. According to the study authors from the Sahlgrenska Academy at the University of Gothenburg, hormones released during stressful events can trigger harmful alterations in the brain; these hormones also affect blood pressure and blood sugar control in the body.

The results of this new study appear in the most recent edition of the journal BMJ Open.

“This suggests that common psychosocial stressors may have severe and long-standing physiological and psychological consequences,” explain the authors in their paper. Though no medications have been shown to prevent Alzheimer’s disease, this study suggests simple stress management practices and behavioral therapy could potentially decrease the risk of the disease.

To conduct the study, the Swedish researchers analyzed results from a long-term mental health study, the Prospective Population Study of Women in Gothenburg, which began in 1968; as part of the study, the women were given a round of neuropsychiatric tests in the first year. Specifically, the researchers looked at middle-aged women in their mid-30s, 40s and 50s. The same tests were conducted at regular intervals over the following 40 years.

When the study began, one-fourth of the 800 women said they had experienced some sort of stressful life event up to that point. These events included losing a spouse or loved one, alcoholism or other illness in a loved one, or losing a job. Nearly the same number of women (about 23 percent) said they had experienced at least two of these stressful events in their life, while one in five of the sample group of women said they experienced at least three stressful events at that point in their life, and 16 percent of the women said they could relate to four or more of these circumstances.

The research showed the most common stressor to be mental illness in a close family member.

Over the course of the 40-year study, 425 of the women passed away, at an average age of 79. One in five of the women developed dementia between the study’s first year and 2006, and 104 of them developed Alzheimer’s disease in the same period. On average, those who developed Alzheimer’s were 78 years old. The researchers say it took, on average, 29 years after the stressful events for the women to develop the disease.

All told, the number of stressful events these women encountered increased their chances of showing long-term symptoms of cognitive decline. Those who experienced some sort of stress in middle age were 21 percent more likely to develop Alzheimer’s disease and 15 percent more likely to develop any type of dementia in their later years.

“We know that the risk factors for dementia are complex and our age, genetics and environment may all play a role,” Dr. Simon Ridley of Alzheimer’s Research UK, who was not associated with the study, told BBC News.

“Current evidence suggests the best ways to reduce the risk of dementia are to eat a balanced diet, take regular exercise, not smoke, and keep blood pressure and cholesterol in check.”

Dr. Ridley suggests that people should talk with a doctor when they are feeling stressed to help mitigate any further issues.

UCLA Engineers Develop New Metabolic Pathway To More Efficiently Convert Sugars Into Biofuels

UCLA chemical engineering researchers have created a new synthetic metabolic pathway for breaking down glucose that could lead to a 50 percent increase in the production of biofuels.

The new pathway is intended to replace the natural metabolic pathway known as glycolysis, a series of chemical reactions that nearly all organisms use to convert sugars into the molecular precursors that cells need. Glycolysis converts four of the six carbon atoms found in glucose into two-carbon molecules known as acetyl-CoA, a precursor to biofuels like ethanol and butanol, as well as fatty acids, amino acids and pharmaceuticals. However, the two remaining glucose carbons are lost as carbon dioxide.

Glycolysis is currently used in biorefineries to convert sugars derived from plant biomass into biofuels, but the loss of two carbon atoms for every six that are input is seen as a major gap in the efficiency of the process. The UCLA research team’s synthetic glycolytic pathway converts all six glucose carbon atoms into three molecules of acetyl-CoA without losing any as carbon dioxide.
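The claimed 50 percent gain follows directly from this carbon accounting; a minimal sketch of the arithmetic, using only the carbon counts given above:

```python
# Carbon accounting from the text: each acetyl-CoA carries two carbons.
CARBONS_PER_ACETYL_COA = 2

glycolysis_yield = 4 / CARBONS_PER_ACETYL_COA  # 4 of 6 carbons kept -> 2 acetyl-CoA
nog_yield = 6 / CARBONS_PER_ACETYL_COA         # all 6 carbons kept  -> 3 acetyl-CoA

improvement = (nog_yield - glycolysis_yield) / glycolysis_yield
print(f"Acetyl-CoA yield improvement: {improvement:.0%}")   # 50%
```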

The research is published online Sept. 29 in the peer-reviewed journal Nature.

The principal investigator on the research is James Liao, UCLA’s Ralph M. Parsons Foundation Professor of Chemical Engineering and chair of the chemical and biomolecular engineering department. Igor Bogorad, a graduate student in Liao’s laboratory, is the lead author.

“This pathway solved one of the most significant limitations in biofuel production and biorefining: losing one-third of carbon from carbohydrate raw materials,” Liao said. “This limitation was previously thought to be insurmountable because of the way glycolysis evolved.”

This synthetic pathway uses enzymes found in several distinct pathways in nature.

The team first tested and confirmed that the new pathway worked in vitro. Then, they genetically engineered E. coli bacteria to use the synthetic pathway and demonstrated complete carbon conservation. The resulting acetyl-CoA molecules can be used to produce a desired chemical with higher carbon efficiency. The researchers dubbed their new hybrid pathway non-oxidative glycolysis, or NOG.

“This is a fundamentally new cycle,” Bogorad said. “We rerouted the most central metabolic pathway and found a way to increase the production of acetyl-CoA. Instead of losing carbon atoms to CO2, you can now conserve them and improve your yields and produce even more product.”

The researchers also noted that this new synthetic pathway could be used with many kinds of sugars, which in each case have different numbers of carbon atoms per molecule, and no carbon would be wasted.

“For biorefining, a 50 percent improvement in yield would be a huge increase,” Bogorad said. “NOG can be a nice platform with different sugars for a 100 percent conversion to acetyl-CoA. We envision that NOG will have wide-reaching applications and will open up many new possibilities because of the way we can conserve carbon.”

The researchers also suggest this new pathway could be used in biofuel production using photosynthetic microbes.

The paper’s other author is Tzu-Shyang Lin, who recently received a bachelor’s degree from UCLA in chemical engineering.

Throbbing Pain Everywhere? It Could Be Fibromyalgia

Fibromyalgia

Constant pain in some part of the body can be so bothersome that your day-to-day activities suffer because of it. Another problem is that you do not really know why or what is happening, which leads to severe mood swings and anxiety.

The only thing you want is to be rid of this acute stinging all over the body. You rely on various painkillers simply to breathe a sigh of relief, and when those fail, many resort to unscientific remedies. By then, the excruciating pain has begun spreading to other parts of the body and causes mental exhaustion.

Take a look at this mysterious ailment

More and more people are suffering from this condition, in which doctors are unable to find a cause for the pain because all test results appear normal. Obviously, nobody fakes pain for that long. So what is this mysterious condition affecting such a large part of the world’s population? Scientific research on patients with the condition gave it the name fibromyalgia.

It has been observed that a majority of sufferers are women. The syndrome is not restricted to a particular age group; it can even affect children. Because awareness of the condition is low, treatment is often ineffective; in fact, there is no specific treatment available.

Uncover the signs of this pain

This condition is marked by severe pain in areas that are already prone to discomfort, which makes it easy to misread: neck pain is often confused with cervical spondylosis, and back pain with a slipped disc. The joints and muscles are the areas hit hardest by fibromyalgia.

Over time, or even from the beginning, you may suffer from migraines, dizziness and stomach disorders. In some cases, patients develop anxiety they cannot explain, with no apparent source of stress, even though they are otherwise fit and active.

Overall, the causes of this syndrome appear to be rooted in the past: people who have been through a traumatic experience, whether physical or mental, are more likely to develop the condition.

In some cases, an accident that injured the joints, or a history of back pain or other chronic pain, precedes the ailment. Anybody can develop this throbbing condition, without warning.

Because it intrudes on your comfort so suddenly, it can take time to understand the ailment and adjust your life accordingly. The inability to comprehend the changes your body is undergoing can lead to insomnia and depression.

It is difficult to cope with something you cannot combat despite trying every medication, especially when the people around you cannot see the problem either.

Not so simple to locate this consistent stinging

In order to implement a proper treatment method, the condition first has to be diagnosed. The main hurdles on the path to improvement are the lack of a reliable detection technique and a general lack of awareness; in terms of detection and discovery, it is still a newly recognized ailment.

Generally, specialists will order blood and urine tests and observe you over a period of three months. During this period, eighteen sensitive points on the body, including around the ears, neck, legs and knees, are checked; if at least eleven of them are tender, fibromyalgia is diagnosed.

The method is unreliable mainly because the symptoms may point to something else entirely, so it cannot be depended on completely; it is better to consult a specialist in this particular syndrome. The detection process is still necessary, as it gives at least a rough picture of the condition, and for now there is no alternative.

Soothe the pain by managing it

As with diagnosis, there is no specific treatment available for this syndrome; treatment is more a matter of managing and monitoring the condition. It is divided into two parts, the first of which focuses on mental health.

Because the threat of depression and anxiety constantly looms, the effort should go into keeping your distance from stressful things. The focus should be on stress management.

The doctor may prescribe antidepressants and sleeping medication. Rather than depending on these alone, you should exercise regularly for half an hour. You can also practice yoga and meditation; these activities will keep your mind calm.

It is essential to consult a neurologist in case of excessive headaches or nerve problems. You should participate in activities that make you happy and relieve anxiety.

There are therapy sessions where you can talk to a psychotherapist, who can help you deal with the condition, and there are also group sessions, which work much like therapy: you share your problems and meet many people facing the same syndrome. A healthy discussion can be good for your health.

Then comes the second part, which must be attended to alongside the first: the physical pain itself. It can feel as though the moment one problem is resolved, you are attacked by the next.

First a headache, then a backache: this is very common with fibromyalgia. Keep painkillers and other pain-relieving medication on hand, and consult your specialist regularly about your daily progress and problems. Do not take even minor pain lightly, as it can worsen.

Massage therapy and physiotherapy can bring relief, and regular hot baths with Epsom or lavender salts may help. Along with these, eat healthy food and stay connected with other people; isolating yourself during this time can lead to depression.

Vacuum Dust Contains Potentially Harmful Bacteria

redOrbit Staff & Wire Reports – Your Universe Online

The aerosolized dust generated by vacuums contains bacteria and mold that could have harmful effects for infants, people with allergies and those with compromised immunity, according to a new study published this month in the journal Applied and Environmental Microbiology.

The researchers from the University of Queensland and Laval University used a special clean air wind tunnel to measure vacuum emissions from 21 vacuums of varying quality and age.

The clean air wind tunnel allowed researchers to eliminate other sources of particles and bacteria, said study leader Dr. Luke Knibbs of the University of Queensland.

“That way, we could confidently attribute the things we measured purely to the vacuum cleaner.”

Among the study’s more troubling findings were resistance genes for five common antibiotics identified in the sampled bacteria, along with the Clostridium botulinum toxin gene.

This is of particular concern because previous studies have found that indoor dust could act as a vehicle for infant botulism infection, which can have severe consequences, including sudden infant death syndrome, the researchers said.

The current study reinforces previous research that found human skin and hair to be important sources of bacteria in floor dust and indoor air, which can be readily re-suspended and inhaled, said study co-author Caroline Duchaine at Laval University in Canada.

Vacuum cleaners are “underrepresented in indoor aerosol and bioaerosol assessment and should be considered, especially when assessing cases of allergy, asthma, or infectious diseases without known environmental reservoirs for the pathogenic or causative microbe,” she said.

Knibbs said he hopes other studies will follow this one, raising the profile of indoor sources as potential culprits in unsolved medical cases.

American Heartland At High Risk For Sizable Earthquakes

[ Watch the Video: Earthquake Risks In America’s Heartland ]

Lawrence LeBlond for redOrbit.com – Your Universe Online

An earthquake zone that extends from Marked Tree, Arkansas, to Paducah, Kentucky, and as far south as Memphis, Tennessee, has a higher earthquake risk than adjacent areas of the United States, according to new research from the US Geological Survey (USGS).

Using sophisticated technology, USGS scientists have developed new high-resolution images of the New Madrid Seismic Zone (NMSZ) that allow them to map the area in more detail than ever before. The images provide a greater understanding of the weak rocks in this region, which extend to much greater depths in the Earth’s mantle than those in surrounding zones.

The USGS-led research was published in the journal Earth and Planetary Science Letters.

Some of the largest earthquakes in the US have occurred in the NMSZ, including three earthquakes greater than magnitude 7 on the Richter scale in 1811 and 1812. Smaller temblors in the region since then have been significant in their own right.

“With the new high-resolution imagery, we can see in greater detail that the New Madrid Seismic Zone is mechanically weaker than surrounding areas and therefore concentrates movement and stress in a narrow area,” said USGS scientist Fred Pollitz, who is the lead author of this research. “The structure beneath this zone is unique when compared to adjacent areas in the central and eastern United States. A more in-depth understanding of such zones of weakness ultimately helps inform decisions such as the adoption of appropriate building codes to protect vulnerable communities, while also providing insight that could be applied to other regions across the world.”

The USGS has mapped the NMSZ before and had concluded that it is a region of high seismic hazard. However, previous assessments included earthquake records over a much shorter time span — 4,500 years.

This mapping project looked at a much larger area – the 500-million-year-old Reelfoot Rift – with the NMSZ located at its northernmost end. The USGS team imaged rocks deep beneath the Earth’s surface to get a good sense of their characteristics and an understanding of their mechanical behavior, especially their ability to withstand a constant stream of stress and pressure.

Surprisingly, the team found that the weak rocks under the Reelfoot Rift fault lines extend more than 100 miles down into the mantle, much farther than weak rocks found in other ancient rift zones in the central and eastern US. The weak mantle rocks in the Reelfoot Rift area are more susceptible to concentration of tectonic stress and more mobile due to their low seismic velocity.

For the mapping project, the team relied on data from USArray, a large network of seismometers that make up one part of the EarthScope program operated by the National Science Foundation (NSF). These seismometers provide USGS and other scientists with images of the crust and mantle as far down as 120 miles.

“Our results are unexpected and significant because they suggest that large earthquakes remain concentrated within the New Madrid Seismic Zone,” said USGS scientist Walter Mooney, the co-author of the report. “There are still many unknowns about this zone, and future research will aim to understand why the seismic zone is active now, why its earthquake history may be episodic over millions of years, and how often it produces large quakes.”

The USGS research team said they hope to map the seismic structure of the entire country using data from USArray. The effort, which started in California nearly 10 years ago, is now focusing on the East Coast and will later map zones under Alaska. The team noted that all of the USArray and EarthScope data will also help inform future USGS National Seismic Hazard Maps.

Beyond The Little Blue Pill: Scientists Develop Compound That May Treat Priapism

New research in The FASEB Journal suggests that C6′, a compound that releases nitric oxide in the body, prevented abnormalities associated with priapism in mice with and without sickle cell disease

It’s not the little blue pill famous for helping men get big results, but for those who need it, the outcome might be even more significant. A new research report published online in The FASEB Journal offers hope to men who experience priapism. This condition, which is often seen in men with sickle cell disease, causes erections lasting so long that they cause permanent damage to the penis. Specifically, a compound called “C6′” offered mice — with and without sickle cell disease — relief by normalizing nitric oxide levels in penile blood. In addition to helping men with priapism, the compound’s action provides insight for future research into vascular and circulatory disorders such as hypertension.

“This study has implications for quality of life by suggesting the possible role of a drug therapy for controlled, physiologic release of nitric oxide that may treat conditions of altered nitric oxide signaling or function,” said Gwen Lagoda, M.S., a researcher involved in the work from the Department of Urology at Johns Hopkins Medical Institutions, in Baltimore, Maryland. “Its application may extend beyond erection disorders and include other health conditions involving abnormal circulation and blood flow.”

Scientists analyzed two groups of experimental mice. The first group had both endothelial and neuronal nitric oxide synthase knocked out. The second group of mice had sickle cell disease. In both groups of mice, nitric oxide signaling was known to be abnormal and resulted in abnormal erections. When these mice were given C6′ treatment, their molecular abnormalities were reduced and erectile functioning returned to levels similar to normal mice.

“Thanks to massive advertising, when people think of ‘E.D.,’ they often think of an inability to achieve or maintain an erection,” said Gerald Weissmann, M.D., Editor-in-Chief of The FASEB Journal. “What they don’t realize is that there can be other problems as well. Priapism is a dangerous and painful form of erectile dysfunction that is overlooked. Hopefully this compound will be just as effective in people as it was in mice.”

Flesh-Eating Beach Bacteria Kills 9 In Florida

Lee Rannals for redOrbit.com – Your Universe Online

Deadly flesh-eating bacteria are emerging on the beaches of Florida, and health officials are warning people along the coastlines to be aware. The Vibrio vulnificus bacterium has killed nine people in Florida so far this year, according to a report from a local ABC affiliate. The bacterium occurs naturally in seawater, but it can enter a person’s bloodstream through an open wound or through the consumption of raw shellfish.

The Centers for Disease Control and Prevention (CDC) says Vibrio vulnificus can cause vomiting, diarrhea and abdominal pain. It can also cause a severe and life-threatening illness characterized by fever and chills, decreased blood pressure and blistering skin lesions. The CDC says that 50 percent of Vibrio vulnificus bloodstream infections are fatal.

“V. vulnificus can cause an infection of the skin when open wounds are exposed to warm seawater; these infections may lead to skin breakdown and ulceration. Persons who are immunocompromised are at higher risk for invasion of the organism into the bloodstream and potentially fatal complications,” reads the CDC’s website.

WESH news station of Orlando is reporting that 26 people have been infected with the deadly flesh-eating bacteria across the state. Health officials from the AAFP say that to reduce the likelihood of infection, people should avoid contact with raw seafood juices and use separate cutting boards and knives for seafood and non-seafood. They also advise against eating raw oysters or other raw seafood, especially for people with immuno-compromising conditions or chronic liver disease.

Henry Konietzky was one of the people who died from an infection caused by the bacteria. The 59-year-old married father accidentally stepped on some ants before getting into the water, and the ant bites may have created an open wound for the bacteria to enter. After being bitten, Konietzky waded knee-deep into the water to set up crab traps. He died just 28 hours after contracting the bacterial infection.

“We are still in shock. What’s really devastating is that he fished his whole life. For something like this to take him away from us so quickly, without warning, is really scary,” Konietzky’s daughter told the Daily Mail.

Flagler Health Department Administrator Patrick Johnson told The Daytona Beach News-Journal that the two most recent cases are linked to open-wound exposure to the bacteria in the Halifax River near High Bridge Road in Ormond Beach.

“This is an illness that generally happens when someone eats raw oysters, but that’s not the case here,” Johnson told the paper. “Because the two most recent cases are linked to the same area, we wanted to make the public aware.”

Combining Chinese And Western Medicine Could Lead To New Cancer Treatments

Combining traditional Chinese and Western medicine could offer hope for new treatments for liver, lung and colorectal cancers, as well as osteosarcoma of the bone.

Experts from Cardiff University’s School of Medicine have joined forces with Peking University in China to test the health benefits of a traditional Chinese medicine.

The team also set out to examine how combining it with established methods such as chemotherapy could improve patient outcomes and potentially lead to the development of new cancer treatments and therapies.

“Traditional Chinese medicine, where compounds are extracted from natural products or herbs, has been practiced for centuries in China, Korea, Japan and other countries in Asia,” said Professor Wen Jiang of Cardiff University’s School of Medicine, director of the Cardiff University-Peking University Joint Cancer Institute, who led the research.

“Although there have been a few successes, most traditional remedies lack a scientific explanation, which has inevitably led to scepticism – especially amongst traditionalists in the West.

“As a result, we set out to test the effectiveness of a Chinese medicine and then consider how combining it with conventional methods like chemotherapy could result in positive outcomes for patients,” he adds.

Yangzheng Xiaoji is a traditional Chinese formula consisting of 14 herbs. The formula has been shown to be beneficial to cancer patients – however, until now, how it works has remained unknown.

Since 2012, the team has investigated how the formula works, discovering that it blocks a pathway, which in turn stops the spread of cancer cells in the body.

“The formula has been shown to be beneficial to patients with certain solid tumors when used alone or in combination with conventional therapies, such as chemotherapy.

“It suggests that combining the formula with conventional as well as new therapies could hold the key to developing new treatments for cancer patients.

“We are already looking toward clinical trials for the treatment of lung and other cancers.”

The research was funded by Cancer Research Wales and the Albert Hung Foundation; the results will be presented at the European Cancer Congress 2013, which takes place in Amsterdam from September 27 to October 1.

The Threat Of Space Debris To Human Spacecraft

NASA

More than 500,000 pieces of debris, or “space junk,” are tracked as they orbit the Earth. They all travel at speeds up to 17,500 mph, fast enough for a relatively small piece of orbital debris to damage a satellite or a spacecraft.

The rising population of space debris increases the potential danger to all space vehicles, but especially to the International Space Station, space shuttles and other spacecraft with humans aboard.

NASA takes the threat of collisions with space debris seriously and has a long-standing set of guidelines on how to deal with each potential collision threat. These guidelines, part of a larger body of decision-making aids known as flight rules, specify when the expected proximity of a piece of debris increases the probability of a collision enough that evasive action or other precautions to ensure the safety of the crew are needed.

Orbital Debris

Space debris encompasses both natural (meteoroid) and artificial (man-made) particles. Meteoroids are in orbit about the sun, while most artificial debris is in orbit about the Earth. Hence, the latter is more commonly referred to as orbital debris.

Orbital debris is any man-made object in orbit about the Earth which no longer serves a useful function. Such debris includes nonfunctional spacecraft, abandoned launch vehicle stages, mission-related debris and fragmentation debris.

There are more than 20,000 pieces of debris larger than a softball orbiting the Earth. They travel at speeds up to 17,500 mph, fast enough for a relatively small piece of orbital debris to damage a satellite or a spacecraft. There are 500,000 pieces of debris the size of a marble or larger. There are many millions of pieces of debris that are so small they can’t be tracked.

Even tiny paint flecks can damage a spacecraft when traveling at these velocities. In fact a number of space shuttle windows have been replaced because of damage caused by material that was analyzed and shown to be paint flecks.
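
To get a sense of the energies involved, here is a rough back-of-the-envelope calculation in Python (the one-gram fleck and the rifle-bullet figures are assumed for illustration, not NASA data):

# Rough illustration of why even tiny debris is dangerous at orbital speed.
# The 1 g fleck and the 4 g, 1,000 m/s bullet are assumed example values.

def kinetic_energy_joules(mass_kg, speed_m_s):
    # Classical kinetic energy: E = 0.5 * m * v^2
    return 0.5 * mass_kg * speed_m_s ** 2

debris_speed = 17_500 * 0.44704               # 17,500 mph in m/s (~7,800 m/s)
fleck = kinetic_energy_joules(0.001, debris_speed)
bullet = kinetic_energy_joules(0.004, 1_000)

print(f"paint fleck: {fleck / 1000:.0f} kJ, rifle bullet: {bullet / 1000:.0f} kJ")
# A 1 g fleck at orbital speed carries roughly 15 times the bullet's energy.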

“The greatest risk to space missions comes from non-trackable debris,” said Nicholas Johnson, NASA chief scientist for orbital debris.

With so much orbital debris, there have been surprisingly few disastrous collisions.

In 1996, a French satellite was hit and damaged by debris from a French rocket that had exploded a decade earlier.

On Feb. 10, 2009, a defunct Russian satellite collided with and destroyed a functioning U.S. Iridium commercial satellite. The collision added more than 2,000 pieces of trackable debris to the inventory of space junk.

China’s 2007 anti-satellite test, which used a missile to destroy an old weather satellite, added more than 3,000 pieces to the debris problem.

Tracking Debris

The Department of Defense maintains a highly accurate catalog of objects in Earth orbit that are larger than a softball.

NASA and the DoD cooperate and share responsibilities for characterizing the satellite (including orbital debris) environment. DoD’s Space Surveillance Network tracks discrete objects as small as 2 inches (5 centimeters) in diameter in low Earth orbit and about 1 yard (1 meter) in geosynchronous orbit. Currently, about 15,000 officially cataloged objects are still in orbit. The total number of tracked objects exceeds 21,000. Using special ground-based sensors and inspections of returned satellite surfaces, NASA statistically determines the extent of the population for objects less than 4 inches (10 centimeters) in diameter.

Collision risks are divided into three categories depending upon the size of the threat. For objects 4 inches (10 centimeters) and larger, conjunction assessments and collision avoidance maneuvers are effective, because such objects can be tracked by the Space Surveillance Network. Objects smaller than this are usually too small to track but too large to shield against. Debris shields can be effective in withstanding impacts of particles smaller than about half an inch (1 centimeter).

Planning for and Reacting to Debris

NASA has a set of long-standing guidelines that are used to assess whether the threat of such a close pass is sufficient to warrant evasive action or other precautions to ensure the safety of the crew.

These guidelines essentially draw an imaginary box, known as the “pizza box” because of its flat, rectangular shape, around the space vehicle. This box is about a mile deep by 30 miles across by 30 miles long (1.5 x 50 x 50 kilometers), with the vehicle in the center. When predictions indicate that the debris will pass close enough for concern and the quality of the tracking data is deemed sufficiently accurate, Mission Control centers in Houston and Moscow work together to develop a prudent course of action.
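
As a rough sketch of that screening geometry (an illustration only; the function name and miss-distance coordinates are hypothetical, not NASA flight software), the test amounts to checking whether a predicted close approach falls inside the 1.5 x 50 x 50 kilometer volume centered on the vehicle:

# Illustrative "pizza box" check. Inputs are the debris's predicted miss
# distances from the vehicle, in kilometers: radial (the ~1-mile depth)
# and the two 30-mile (50 km) horizontal directions.
def inside_pizza_box(radial_km, along_track_km, cross_track_km):
    return (abs(radial_km) <= 1.5 / 2
            and abs(along_track_km) <= 50 / 2
            and abs(cross_track_km) <= 50 / 2)

print(inside_pizza_box(0.3, 12.0, -8.0))   # True: inside the box, worth a closer look
print(inside_pizza_box(5.0, 12.0, -8.0))   # False: passes well clear radially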

Sometimes these encounters are known well in advance and there is time to move the station slightly, known as a “debris avoidance maneuver” to keep the debris outside of the box. Other times, the tracking data isn’t precise enough to warrant such a maneuver or the close pass isn’t identified in time to make the maneuver. In those cases, the control centers may agree that the best course of action is to move the crew into the Soyuz spacecraft that are used to transport humans to and from the station. This allows enough time to isolate those spaceships from the station by closing hatches in the event of a damaging collision. The crew would be able to leave the station if the collision caused a loss of pressure in the life-supporting module or damaged critical components. The Soyuz act as lifeboats for crew members in the event of an emergency.

Mission Control also has the option of taking additional precautions, such as closing hatches between some of the station’s modules, if the likelihood of a collision is great enough.

Maneuvering Spacecraft to Avoid Orbital Debris

NASA has a set of long-standing guidelines that are used to assess whether the threat of a close approach of orbital debris to a spacecraft is sufficient to warrant evasive action or precautions to ensure the safety of the crew.

Debris avoidance maneuvers are planned when the probability of collision from a conjunction reaches limits set in the space shuttle and space station flight rules. If the probability of collision is greater than 1 in 100,000, a maneuver will be conducted if it will not result in significant impact to mission objectives. If it is greater than 1 in 10,000, a maneuver will be conducted unless it will result in additional risk to the crew.
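
Expressed as code, the two thresholds quoted above work out to something like the following sketch (a simplification for illustration; the function and argument names are hypothetical, and the actual flight rules weigh many more factors):

# Simplified sketch of the flight-rule thresholds described above.
def maneuver_needed(p_collision, impacts_mission, adds_crew_risk):
    if p_collision > 1 / 10_000:
        # Higher threshold: maneuver unless it adds risk to the crew.
        return not adds_crew_risk
    if p_collision > 1 / 100_000:
        # Lower threshold: maneuver only if mission objectives are unaffected.
        return not impacts_mission
    return False  # below both thresholds, no action is required

# Example: a 1-in-5,000 collision chance forces a maneuver even at some
# cost to mission objectives, as long as crew safety is not compromised.
print(maneuver_needed(1 / 5_000, impacts_mission=True, adds_crew_risk=False))  # True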

Debris avoidance maneuvers are usually small and occur from one to several hours before the time of the conjunction. Debris avoidance maneuvers with the shuttle can be planned and executed in a matter of hours. Such maneuvers with the space station require about 30 hours to plan and execute mainly due to the need to use the station’s Russian thrusters, or the propulsion systems on one of the docked Russian or European spacecraft.

Several collision avoidance maneuvers with the shuttle and the station have been conducted during the past 10 years.

NASA implemented the conjunction assessment and collision avoidance process for human spaceflight beginning with shuttle mission STS-26 in 1988. Before launch of the first element of the International Space Station in 1998, NASA and DoD jointly developed and implemented a more sophisticated and higher fidelity conjunction assessment process for human spaceflight missions.

In 2005, NASA implemented a similar process for selected robotic assets such as the Earth Observation System satellites in low Earth orbit and Tracking and Data Relay Satellite System in geosynchronous orbit.

In 2007, NASA extended the conjunction assessment process to all NASA maneuverable satellites within low Earth orbit and within 124 miles (200 kilometers) of geosynchronous orbit.

DoD’s Joint Space Operations Center (JSpOC) is responsible for performing conjunction assessments for all designated NASA space assets in accordance with an established schedule (every eight hours for human spaceflight vehicles and daily Monday through Friday for robotic vehicles). JSpOC notifies NASA (Johnson Space Center for human spaceflight and Goddard Space Flight Center for robotic missions) of conjunctions which meet established criteria.

JSpOC tasks the Space Surveillance Network to collect additional tracking data on a threat object to improve conjunction assessment accuracy. NASA computes the probability of collision, based upon miss distance and uncertainty provided by JSpOC.

Based upon specific flight rules and detailed risk analysis, NASA decides if a collision avoidance maneuver is necessary.

If a maneuver is required, NASA provides planned post-maneuver orbital data to JSpOC for screening of near-term conjunctions. This process can be repeated if the planned new orbit puts the NASA vehicle at risk of future collision with the same or another space object.

Colorectal Cancer Tests Reportedly More Effective Than Breast, Prostate Cancer Screenings

redOrbit Staff & Wire Reports – Your Universe Online

“Irrefutable” evidence that colorectal cancer (CRC) screening is effective at reducing the disease’s mortality rate should lead healthcare policymakers to shift resources currently devoted to breast and prostate cancer screenings to CRC testing, Belgian epidemiologist Philippe Autier will report this weekend.

Autier, the vice president of population studies at the International Prevention Research Institute in France, will present data about CRC screenings collected as part of the Survey of Health, Ageing, and Retirement in Europe (SHARE) project during the 2013 European Cancer Congress (ECC2013) on Sunday.

During his speech, the professor will argue that colon and bowel cancer screenings such as the fecal occult blood test, or FOBT (which checks stool samples for hidden blood), and endoscopy (in which a tiny camera is used to search for pre-cancerous polyps in the patient’s large bowel) have been proven effective, while the evidence that breast and prostate cancer screenings save lives is less convincing.

Combining SHARE data pertaining to screenings in both men and women over the age of 50 between 1989 and 2010 with information from the World Health Organization (WHO) cause-of-death database, Autier and his colleagues calculated CRC death rates in 11 different European countries.

The information was then correlated with how widespread screening procedures were, as well as how likely people were to take advantage of such services. In some cases, the screenings were part of national health programs (such as FOBT screening in France and the UK, or both FOBT and endoscopy in Germany and parts of Italy). In others, patients and/or their doctors made the decision to undergo one or both types of CRC screening.

“We saw quite clearly that the greater the proportion of men and women who were screened, the greater the reduction in mortality,” Autier is scheduled to report during his presentation. “Reduced death rates from CRC were not noticeable in countries where screening was low, even though healthcare services in those countries were similar to those in countries where screening was more widespread.”

According to the researchers, 61 percent of all Austrian residents reported having undergone an FOBT screening during the study period. Over that same time, the CRC mortality rate decreased by 39 percent among men and 47 percent among women. Conversely, only eight percent of Greek males underwent an endoscopic examination during the study period, and the country reported a 30 percent increase in CRC death rates among men.

Overall, in the 11 European nations studied, 73 percent of the decrease in CRC-related mortality in men and 82 percent in women over a period of 10 years could be attributed to undergoing at least one endoscopic bowel examination, Autier and his colleagues claim.

“The evidence could not be clearer, and it is therefore very disappointing that national differences in the availability of CRC screening programs are still so pronounced,” according to the professor. “There are signs that CRC screening can reduce the incidence of this cancer as well as mortality from it, in exactly the same way as is happening with cervical cancer screening. We would also like to investigate the cost-effectiveness of CRC screening, since we believe that it has the potential to bring about economic gains associated with averted CRC cases and deaths, and hence to more than pay for its initial cost.”

Autier’s team plans to gather additional information on CRC screening and is also looking to review data collected from Australia and North America. If two-thirds of eligible people in each country undergo testing, the researchers believe it could result in “a considerable reduction” in colon and bowel cancer deaths within a minimum of 10 years. They believe federal healthcare services need to redouble their efforts to make FOBT and endoscopy tests available to citizens, while educating people over the age of 50 about the availability of these screenings.

“There is a clear relationship between randomized trials showing the ability of any type of CRC screening to reduce the risk of death from the disease, data from cancer registries showing declines in the incidence of advanced CRC, and declines in CRC mortality over time,” Autier said. “In breast cancer, there is no such smooth logical sequence between randomized trials and these population statistics. It seems to us that there is now an irrefutable case for devoting some of the resources from breast and prostate cancer screening to the early detection of CRC.”

How Mucus Keeps Your Gut Healthy

Michael Harper for redOrbit.com – Your Universe Online

Researchers from the Icahn School of Medicine at Mount Sinai’s Immunology Institute say they’ve discovered an important role played by mucus in the gut. Though little was previously known about mucus, the Icahn doctors say the slimy, sticky stuff acts as an anti-inflammatory in the stomach and provides a protective self-regulating immune function.

The research also showed that mucus acts as a protective barrier against bacteria and toxins and may one day be used to treat inflammatory bowel disease (IBD), Crohn’s disease, and even cancer. A report of their research is published online in the journal Science.

“We asked ourselves whether dendritic cells in the gut could capture mucus, as well as bacteria and food antigens,” said senior author Andrea Cerutti, MD, PhD, in a statement. Cerutti, a professor of medicine at the Icahn school, says dendritic cells in the gut are responsible for triggering an immune response in the body.

“We found that whenever mucus was present, it was stimulating the production of anti-inflammatory cytokines [regulatory proteins released by the cells of the immune system that act to regulate an immune response],” explained Cerutti in a statement. In other words, mucus doesn’t just act as a barrier, it also acts as a turret to take out incoming dangers.

The human body is capable of producing upwards of a liter of mucus every day through mucosal tissues, yet little research has ever been done to better understand the stuff. In fact, many doctors and researchers assumed mucus was a bad thing, as more mucus is generally produced when a person is feeling ill. Yet while many doctors ignored mucus, they were continually puzzled by the body’s immunity to potentially dangerous bacteria living in the gut.

“Immunologists have always been interested in finding out why we do not develop an inflammatory reaction to the trillions of bacteria and large amounts of food antigens that come in contact with our intestinal mucosa,” explained PhD student Maurizio Gentile.

“Yet, these same agents cause dangerous inflammatory reactions and even death when other parts of our body are exposed to them. The discovery published in this study helps to explain this long-standing question.”

During their research the doctors found that a molecule called MUC2 is responsible for giving mucus its protective powers. It’s now understood that MUC2 both acts as a barrier and sends the anti-inflammatory signals to the dendritic cells. Because mucus is so common, the doctors involved in this research say others will be able to carry their work even further.

“By showing the beneficial anti-inflammatory activity of mucus, our work opens up a broad field of research,” said Dr. Linda Cassis, who collaborated with Dr. Cerutti on the study.

“The natural pharmacological properties of mucus might provide a promising complementary way to treat inflammatory bowel disease, including ulcerative colitis and Crohn’s disease.”

The researchers suggest that patients with diseases such as Crohn’s or ulcerative colitis could benefit from synthesized medicines based on the MUC2 compound in mucus. The same compounds may also be used in medicines to protect against cancerous cells.

Humans Are Main Cause Of Climate Change, Says UN Report

[ Watch The Video: Temperature And Precipitation In The 21st Century ]

Brett Smith for redOrbit.com – Your Universe Online

A new report by the UN-created Intergovernmental Panel on Climate Change (IPCC) stated that global warming is “unequivocal” and it is “extremely likely” that human activities are the main driver of this epic warming.

“Human influence has been detected in warming of the atmosphere and the ocean, in changes in the global water cycle, in reductions in snow and ice, in global mean sea level rise, and in changes in some climate extremes,” said the report, which was released Friday morning in Stockholm, Sweden.

The report added that “each of the last three decades has been successively warmer at the Earth’s surface than any preceding decade since 1850” and that in the Northern Hemisphere, “1983–2012 was likely the warmest 30-year period of the last 1400 years.”

Skeptics of global warming tend to cite a slowdown in surface warming that has occurred since 1998. However, the report suggested this ‘pause’ reflects natural variability and the sensitivity of short records, and called for further study.

“Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends,” the report explained.

Qin Dahe, co-chair of the IPCC working group that produced the report, said the panel’s assessments are based on “multiple lines of independent evidence.”

One of those lines of evidence was a series of climate data visualizations called the Coupled Model Intercomparison Project Phase 5 (CMIP5), which included climate models from NASA’s Scientific Visualization Studio at Goddard Space Flight Center in Greenbelt, Maryland.

“Our assessment of the science finds that the atmosphere and ocean have warmed, the amount of snow and ice has diminished, the global mean sea level has risen and the concentrations of greenhouse gases have increased,” Dahe said.

Thomas Stocker, another co-chair of the group, said that climate change “challenges the two primary resources of humans and ecosystems, land and water. In short, it threatens our planet, our only home.”

A report recently published in the journal Nature Climate Change put the potential impacts of climate change and carbon emissions in the context of costs. The report concluded that over 500,000 lives could be saved each year by 2030 if nations around the world took action to mitigate climate change. The economic value of this health benefit would exceed the cost of requiring nations to cut their fossil fuel emissions, the study added.

The study researchers noted that the benefit would be especially pronounced for China, where the gain would equal 10 to 70 times the cost of cutting greenhouse gas emissions. The team noted that their study attempts to quantify the benefits of taking action.

“Neglecting the air quality co-benefits misses an important component of the benefits of reducing greenhouse gas emissions,” study author Jason West, assistant professor of environmental sciences and engineering at the University of North Carolina, told National Geographic.

“We show those benefits are large enough that they should be part of the analysis, and it should give extra motivation for people to think about why we should be taking action to slow climate change.”

Researchers Find Brain Circuitry That Triggers Binge Eating

redOrbit Staff & Wire Reports – Your Universe Online

Researchers at the University of North Carolina have identified a part of the brain that may play a critical role in eating disorders such as anorexia and bulimia.

The scientists were able to pinpoint the precise cellular connections responsible for triggering binge eating, something that could give insight into a potential cause of obesity and perhaps lead to better treatments for anorexia, bulimia and binge eating disorder – the most common eating disorders in the United States. The study could also help dispel some of the stigmatizing explanations these disorders are often attributed to, such as a lack of willpower.

“The study underscores that obesity and other eating disorders have a neurological basis,” said senior study author Garret Stuber, PhD, assistant professor in the departments of psychiatry, cell biology and physiology at UNC and a member of the UNC Neuroscience Center.

“With further study, we could figure out how to regulate the activity of cells in a specific region of the brain and develop treatments.”

Sixty years ago, scientists found they could electrically stimulate a region of a mouse’s brain called the lateral hypothalamus and cause the mouse to eat, whether hungry or not. However, these stimulations were actually being applied to many different types of brain cells.

Stuber wanted to focus on one specific cell type – GABA neurons in the bed nucleus of the stria terminalis, or BNST. The BNST is an outcropping of the amygdala, the part of the brain associated with emotion. The BNST also forms a bridge between the amygdala and the lateral hypothalamus, the brain region that drives primal functions such as eating, sexual behavior and aggression.

The BNST GABA neurons have a cell body and a long strand with branched synapses that transmit electrical signals into the lateral hypothalamus. Stuber and colleagues wanted to stimulate those synapses using optogenetics, a technique that would let them activate BNST cells simply by shining light on their synapses.

Typically, brain cells don’t respond to light, so Stuber’s team used genetically engineered light-sensitive proteins from algae, delivering them into the brains of mice with genetically engineered viruses. The proteins are then expressed only in the BNST cells, including in the synapses that connect to the hypothalamus.

The researchers then implanted fiber optic cables in the brains of these specially-bred mice, allowing them to shine light through the cables and onto BNST synapses. They found that as soon as the light hit BNST synapses, the mice began to eat voraciously, even though they were well fed. Furthermore, the mice showed a strong preference for high-fat foods.

“They would essentially eat up to half their daily caloric intake in about 20 minutes,” Stuber said. “This suggests that this BNST pathway could play a role in food consumption and pathological conditions such as binge eating.”

Stimulating the BNST also led the mice to exhibit behaviors associated with reward, suggesting that shining light on BNST cells enhanced the pleasure of eating. The study also found that shutting down the BNST pathway caused mice to show little interest in eating, even if they had been deprived of food.

“We were able to really home in on the precise neural circuit connection that was causing this phenomenon that’s been observed for more than 50 years,” Stuber said.

The research suggests that faulty wiring in BNST cells could interfere with hunger or satiety cues and contribute to human eating disorders, leading people to eat even when they are full or to avoid food when they are hungry.

The scientists said additional research is needed to determine whether it would be possible to develop drugs that correct a malfunctioning BNST circuit.

“We want to actually observe the normal function of these cell types and how they fire electrical signals when the animals are feeding or hungry,” Stuber said.

“We want to understand their genetic characteristics – what genes are expressed. For example, if we find cells that become really activated after binge eating, can we look at the gene expression profile to find out what makes those cells unique from other neurons?”

Such research could lead to potential targets for drugs to treat certain populations of patients with eating disorders, he said.

A report of the team’s findings is published in the September 27 edition of the journal Science.

How Rare ‘Words’ In Bacterial Genes Boost Protein Production

Wyss Institute for Biologically Inspired Engineering at Harvard

Scientists routinely seek to reprogram bacteria to produce proteins for drugs, biofuels and more, but they have struggled to get those bugs to follow orders. But a hidden feature of the genetic code, it turns out, could get bugs with the program. The feature controls how much of the desired protein bacteria produce, a team from the Wyss Institute for Biologically Inspired Engineering at Harvard University reported in the September 26 online issue of Science.

The findings could be a boon for biotechnologists, and they could help synthetic biologists reprogram bacteria to make new drugs and biological devices.

By combining high-speed “next-generation” DNA sequencing and DNA synthesis technologies, Sri Kosuri, Ph.D., a Wyss Institute staff scientist, George Church, Ph.D., a core faculty member at the Wyss Institute and professor of genetics at Harvard Medical School, and Daniel Goodman, a Wyss Institute graduate research fellow, found that using more rare words, or codons, near the start of a gene removes roadblocks to protein production.

“Now that we understand how rare codons control gene expression, we can better predict how to synthesize genes that make enzymes, drugs, or whatever you want to make in a cell,” Kosuri said.

To produce a protein, a cell must first make working copies of the gene encoding it. These copies, called messenger RNA (mRNA), consist of a specific string of words, or codons. Each codon represents one of the 20 different amino acids that cells use to assemble proteins. But since the cell uses 61 codons to represent 20 amino acids, many codons have synonyms that represent the same amino acid.
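
The degeneracy described above is easy to see in code. The toy Python fragment below uses a small, accurate slice of the standard genetic code; the example mRNAs are made up for illustration:

# A small slice of the standard genetic code, showing its redundancy:
# leucine and serine each have six synonymous codons, while methionine
# and tryptophan have only one apiece.
CODON_TABLE = {
    "UUA": "Leu", "UUG": "Leu", "CUU": "Leu",
    "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
    "UCU": "Ser", "UCC": "Ser", "UCA": "Ser",
    "UCG": "Ser", "AGU": "Ser", "AGC": "Ser",
    "AUG": "Met",  # also serves as the start codon
    "UGG": "Trp",
}

def translate(mrna):
    # Read the mRNA three letters at a time and map each codon to an amino acid.
    return [CODON_TABLE.get(mrna[i:i + 3], "?") for i in range(0, len(mrna) - 2, 3)]

# Two mRNAs with different codon choices encode the same peptide:
print(translate("AUGCUGAGC"))  # ['Met', 'Leu', 'Ser']
print(translate("AUGUUAUCU"))  # ['Met', 'Leu', 'Ser'] - same protein, different "words"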

In bacteria, as in books, some words are used more often than others, and molecular biologists have noticed over the last few years that rare codons appear more frequently near the start of a gene. What’s more, genes whose opening sequences have more rare codons produce more protein than genes whose opening sequences do not.

No one knew for sure why rare codons had these effects, but many biologists suspected that they function as a highway on-ramp for ribosomes, the molecular machines that build proteins. According to this idea, called the codon ramp hypothesis, ribosomes wait on the on-ramp, then accelerate slowly along the mRNA highway, allowing the cell to make proteins with all deliberate speed. But without the on-ramp, the ribosomes gun it down the mRNA highway, then collide like bumper cars, causing traffic accidents that slow protein production. Other biologists suspected rare codons acted via different mechanisms. These include mRNA folding, which could create roadblocks for ribosomes that block the highway and slow protein production.

To see which ideas were correct, the three researchers used a high-speed, multiplexed method that they’d reported in August in The Proceedings of the National Academy of Sciences.

First, they tested how well rare codons activated genes by mass-producing 14,000 snippets of DNA with either common or rare codons, splicing them near the start of a gene that makes cells glow green, and inserting each of those hybrid genes into different bacteria. Then they grew those bugs, sorted them into bins based on how intensely they glowed, and sequenced the snippets to look for rare codons.

They found that genes that opened with rare codons consistently made more protein, and a single codon change could spur cells to make 60 times more protein.

“That’s a big deal for the cell, especially if you want to pump out a lot of the protein you’re making,” Goodman said.

The results were also consistent with the codon-ramp hypothesis, which predicts that rare codons themselves, rather than folded mRNA, slow protein production. But the researchers also found that the more mRNA folded, the less of the corresponding protein it produced — a result that undermined the hypothesis.

To put the hypothesis to a definitive test, the Wyss team made and tested more than 14,000 mRNAs – including some with rare codons that didn’t fold well, and others that folded well but had no rare codons. By quickly measuring protein production from each mRNA and analyzing the results statistically, they could separate the two effects.
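
A toy version of that statistical separation might look like the following sketch (simulated data only, not the team’s actual analysis): because codon rarity and mRNA folding tend to move together, the trick is to fit both predictors jointly and see which one carries the effect.

import numpy as np

rng = np.random.default_rng(0)
n = 14_000  # mirrors the number of mRNAs tested

# Simulate two correlated predictors: rare-codon usage and mRNA folding.
rare_codon_score = rng.normal(size=n)
folding_score = 0.8 * rare_codon_score + 0.6 * rng.normal(size=n)

# In this toy model, protein output depends only on folding, not on codons.
protein = 2.0 * folding_score + rng.normal(size=n)

# Fit both predictors at once with ordinary least squares.
X = np.column_stack([rare_codon_score, folding_score, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, protein, rcond=None)
print(f"codon effect: {coef[0]:+.2f}, folding effect: {coef[1]:+.2f}")
# The joint fit attributes the effect to folding (~+2.0) and essentially
# zero to codon usage, even though the two predictors are correlated.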

The results showed clearly that RNA folding, not rare codons, controlled protein production, and that scientists can increase protein production by altering folding, Goodman said.

The new method could help resolve other thorny debates in molecular biology. “The combination of high-throughput synthesis and next-gen sequencing allows us to answer big, complicated questions that were previously impossible to tease apart,” Church said.

“These findings on codon use could help scientists engineer bacteria more precisely than ever before, which is tremendous in itself, and they provide a way to greatly increase the efficiency of microbial manufacturing, which could have huge commercial value as well,” said Wyss Institute Founding Director Don Ingber, M.D., Ph.D. “They also underscore the incredible value of the new automated technologies that have emerged from the Synthetic Biology Platform that George leads, which enable us to synthesize and analyze genes more rapidly than ever before.”

Steroids Capable Of Regenerating Themselves In The Environment

redOrbit Staff & Wire Reports – Your Universe Online

A steroid currently used in the beef industry does not fully break down in water as previously thought, new research examining the impact of pharmaceutical substances on aquatic organisms has revealed.

The paper, which was published online Thursday in the journal Science, challenges the longstanding belief that these products become less ecologically harmful as they degrade. Experts are increasingly concerned that once these substances enter the environment, some of their bioactive organic compounds could be altered in a way that makes their behavior more uncertain.

David Cwiertny, assistant professor in engineering at the University of Iowa, and his colleagues set out to test this hypothesis using the anabolic steroid trenbolone acetate and two other drugs.

Cwiertny’s team conducted both lab tests and field experiments and found that the steroid does not completely break down in water as believed. Instead, it maintains enough of a chemical residue to regenerate itself in the environment under specific conditions, even to the extent that the drugs’ lifespans could be prolonged in trace amounts.

The study authors said that this is an important step towards better understanding “the environmental role and impact of steroids and pharmaceutical products, all of which have been approved by the federal government for various uses and that have been shown to improve food availability, environmental sustainability and human health.”

“We’re finding a chemical that is broadly utilized to behave in a way that is different from all our existing regulatory and risk-assessment paradigms,” explained Cwiertny, a co-corresponding author on the paper. “What our work hopefully will do is help us better understand and assess the environmental fate of emerging contaminant classes.”

“There are a variety of bioactive pharmaceuticals and personal-care products that we know are present in trace amounts in our water supply,” he added. “We should use what we’re learning about trenbolone to more closely scrutinize the fate and better mitigate the impact of these products in the environment.”

Similar results were reported for the other two substances tested: dienogest, a hormone used as an ingredient in the birth-control pill Natazia, and dienedone, an anabolic steroid that has been banned but is nonetheless marketed as a bodybuilding supplement. The research was funded by the US Department of Agriculture (USDA), the National Institutes of Health (NIH), and the National Science Foundation (NSF).

While the steroid has been considered safe due to its rapid degradation (research has suggested it has an environmental half-life of less than a day), there had been concern as to whether or not it and other types of synthetic drugs can be harmful to aquatic lifeforms and the environment in concentrated amounts. Studies have suggested that these substances can cause female fish to produce fewer eggs and skew the sex of some species.

“We rarely see fish kills anymore, and we probably aren’t discharging many carcinogens into surface waters anymore. But I don’t believe this necessarily means that our water is safe for aquatic organisms,” explained University of Nevada-Reno associate engineering professor and corresponding author Edward Kolodziej. “It just might be harder to characterize the adverse effects associated with contaminant exposures these days.”

Sunlight was found to be one catalyst for breaking down the compounds, but the researchers simulated the day-night cycle and found that trenbolone acetate never completely disappeared in daylight. Furthermore, they found that during a simulated night and under typical surface water conditions, some of the compounds managed to regenerate themselves – up to 60 percent of the metabolite’s initial mass over a 120-hour period.

“More of the drug’s mass was regenerated – up to 88 percent in one highly acidic state (pH 2) – when water temperature was higher and when it was more acidic or alkaline,” the University of Iowa said in a statement. They added that the lab results were later verified through a pair of field experiments, “one with water culled from the Iowa River in Iowa City, Iowa and the other from samples taken from a collection pond at a cattle rangeland and research operation run by the University of California.”

First US Cases Of Flesh-Eating Drug Krokodil Reported In Arizona

redOrbit Staff & Wire Reports – Your Universe Online

A flesh-eating drug that first surfaced in Russia over a decade ago has found its way to US shores, with poison control center officials confirming that two people have been treated for using the substance this week.

Dr. Frank LoVecchio, co-medical director at Banner Good Samaritan Poison and Drug Information Center in Arizona, told Lee Moran of the New York Daily News on Thursday that there were two patients who had used Krokodil, an injectable narcotic with potentially deadly side-effects.

“As far as I know, these are the first cases in the United States that are reported. So we’re extremely frightened,” LoVecchio told FoxNews.com. He noted that the cases appeared to be linked, and that there was concern that there will be more cases surfacing in the near future.

“This is really frightening,” added LoVecchio’s colleague Dr. Aaron Skolnik, a toxicologist. “This is something we hoped would never make it to the U.S. because it’s so detrimental to the people who use it.” Officials at the Banner Good Samaritan Poison and Drug Information Center did not provide an update on the conditions of the patients.

According to Moran, Krokodil, which is also known by the medical name desomorphine and is said to be three times cheaper to produce than heroin, is “a poisonous cocktail of codeine, gasoline, paint thinner, hydrochloric acid, iodine and red phosphorous.” The homemade substance, which has reportedly been dubbed “the drug that eats junkies,” causes “gangrenous sores that open all the way to the bone.”

Once the substance is injected, it ruptures blood vessels, causing the user’s tissue to die and his or her skin to rot. As a result, the skin hardens and can even fall off to expose the bone, Time’s Eliza Gray reported. In addition, Daily Mail reporter James Nye said that the drug is also capable of causing brain damage and speech impediments.

The average life span of a Krokodil addict is two to three years, Gray said – though Moran said that the narcotic has been known to kill users within 12 months of their first hit. Desomorphine use was first reported in Russia over 10 years ago, and currently there are an estimated three million people using the substance.

“Prevalent in Siberia and the Russian Far East, the explosion of users began in 2002, but over the past five years in Russia, usage has trebled. In 2011 alone, Russia’s Federal Drug Control Service confiscated 65 million doses,” said Nye.

“These people are the ultimate in self-destructive drug addiction,” Dr. Ellen Marmur, chief of dermatological and cosmetic surgery at Mount Sinai Medical Center in New York City, told Fox News. “Once you are an addict at this level, any rational thinking doesn’t apply.”

Peanut Butter And Nut Consumption During Adolescence May Help Improve Breast Health

April Flowers for redOrbit.com – Your Universe Online

Peanut butter is one of childhood’s greatest pleasures, and a new study from Washington University School of Medicine in St. Louis and Harvard Medical School shows that girls who eat more peanut butter could improve their breast health later in life.

The research, published in the journal Breast Cancer Research and Treatment, shows that girls aged 9 to 15 who regularly ate peanut butter or nuts were 39 percent less likely to develop benign breast disease by age 30. Although noncancerous, benign breast disease increases the risk of breast cancer later in life.

“These findings suggest that peanut butter could help reduce the risk of breast cancer in women,” said Graham Colditz, MD, DrPH, associate director for cancer prevention and control at Siteman Cancer Center at Barnes-Jewish Hospital and Washington University School of Medicine.

Colditz, who is also the Niess-Gain Professor in Medicine at Washington University School of Medicine, collaborated with Catherine Berkey, MA, ScD, a biostatistician at Harvard Medical School and Brigham and Women’s Hospital in Boston.

The study data was collected from the health histories of 9,039 girls in the US who were enrolled in the Growing Up Today Study from 1996 through 2001. Between 2005 and 2010, when the subjects were 18 to 30 years old, they self-reported whether they had been diagnosed with benign breast disease that had been confirmed by breast biopsy.

Participants who ate peanut butter or nuts two times each week were 39 percent less likely to have developed benign breast disease than those who never ate them, according to the study. Benign breast disease might also be prevented by beans, lentils, soybeans and corn, the study shows, but consumption of these foods was much lower in these girls, making the evidence weaker.

Prior research has linked peanut butter, nut and vegetable fat consumption to a lower risk for benign breast disease. Participants in those studies, however, were asked to recall their high school dietary habits years later. This current study is the first to use reports made during adolescence, with continued follow-up as cases of benign breast disease are diagnosed in young women.

Colditz has recommended that girls replace high-calorie junk foods and sugary drinks with peanut butter and nuts because of the obesity epidemic in this country.

Multi-Satellite Observations Help Uncover Origins Of Space Weather

[ Watch the Video: What Really Causes Space Weather? ]

redOrbit Staff & Wire Reports – Your Universe Online

Research published in Friday’s issue of the journal Science looks to shed new light on how changing environmental conditions in near-Earth space known as “space weather” can occur.

Space weather is caused by solar storms – powerful eruptions of solar material and magnetic fields into interplanetary space – and can interfere with wireless communication and GPS signals, cause extensive power blackouts, and even result in the complete failure of essential satellites.

Little had been known about the exact processes that caused these changing environmental conditions. However, in the new study, researchers from UCLA, NASA, the Austrian Space Research Institute (IWF Graz) and the Japan Aerospace Exploration Agency (JAXA) have helped to provide new insight into the phenomenon.

Some of the energy given off by the sun during solar storms becomes temporarily stored in Earth’s stretched, compressed magnetic field, the study authors explain. That solar energy is ultimately released in explosive fashion, powering the planet’s radiation belts and causing brilliant auroras to occur in the polar skies.

While experts have been able to observe solar storms using cameras, the process through which the stored magnetic energy is unleashed had previously gone unobserved. Now, however, the research team has managed to measure the release of that energy thanks to six Earth-orbiting spacecraft and NASA’s ARTEMIS dual lunar orbiters.

[ Watch the Video: Magnetospheric Substorm ]

“Space weather begins to develop inside Earth’s magnetosphere, the giant magnetic bubble that shields the planet from the supersonic flow of magnetized gas emitted by the sun,” UCLA explained Thursday in a press statement. “During solar storms, some solar energy enters the magnetosphere, stretching the bubble out into a long, teardrop-shaped tail that extends more than a million miles into space.”

The stored magnetic energy is then released through a process known as “magnetic reconnection” – an event which can only be detected when energized particles speed past a spacecraft fortuitously positioned at the right place at the right time. Such an instance occurred in 2008, when NASA’s five Earth-orbiting THEMIS satellites discovered that magnetic reconnection was the trigger for near-Earth substorms (the building blocks of space weather).

“However, there was still a piece of the space weather puzzle missing: There did not appear to be enough energy in the reconnection flows to account for the total amount of energy released for typical substorms,” the Los Angeles-based university said. “In 2011, in an attempt to survey a wider area of the Earth’s magnetosphere, the THEMIS team repositioned two of its five spacecraft into lunar orbits, creating a new mission dubbed ARTEMIS.”

“From afar, these two spacecraft provided a unique global perspective of energy storage and release near Earth,” they added. “Similar to a pebble creating expanding ripples in a pond, magnetic reconnection generates expanding fronts of electricity, converting the stored magnetic energy into particle energy. Previous spacecraft observations could detect these energy-converting reconnection fronts for a split second as the fronts went by, but they could not assess the fronts’ global effects because data were collected at only a single point.”

[ Watch the Video: Tracking Energy through Space ]

However, by last summer, the THEMIS and ARTEMIS satellites, JAXA’s Geotail satellite and the US National Oceanic and Atmospheric Administration (NOAA) GOES probe were in the proper alignment and managed to collect data accounting for the total amount of energy that drives near-Earth space weather. According to the study authors, energy equivalent to that of a magnitude 7.1 earthquake was released during the event.

The satellites watched as a pair of expanding energy fronts launched symmetrically on either side of the site where magnetic reconnection occurred. One moved toward Earth, while the other moved away from it and past the moon. During its 250,000-mile journey from its origin, the magnetic energy was converted into particle and wave energy within a narrow region just a few dozen miles across.

According to the study authors, this explains why past single-satellite measurements were unable to account for much of the energy release. Conversely, the multi-satellite fleet was able to show that the energy conversion process continued for as much as 30 minutes after the reconnection started.

“We have finally found what powers Earth’s aurora and radiation belts,” explained Vassilis Angelopoulos, a professor in the UCLA Department of Earth, Planetary and Space Sciences, principal investigator for the ARTEMIS and THEMIS missions, and lead author of the study. “It took many years of mission planning and patience to capture this phenomenon on multiple satellites, but it has certainly paid off. We were able to track the total energy and see where and when it is converted into different kinds of energy.”

Neutrons Show Accumulation Of Antidepressant In Brain

Lithium in the brain
Experiments with neutrons at the Technische Universität München (TUM) show that the antidepressant lithium accumulates more strongly in white matter of the brain than in grey matter. This leads to the conclusion that it works differently from synthetic psychotropic drugs. The tissue samples were examined at the Research Neutron Source Heinz Maier-Leibnitz (FRM II) with the aim of developing a better understanding of the effects this substance has on the human psyche.
At present, lithium is best known for its use in rechargeable batteries. But for decades, lithium has also been used to treat psychological illnesses such as depression, mania and bipolar disorder. Yet the exact biological mode of action in specific brain regions has hardly been understood. It is well known that lithium lightens moods and reduces aggression.
Because it is so hard to dose, doctors have been reluctant to prescribe this “universal drug”. Nonetheless, a number of international studies have shown that a higher natural lithium content in drinking water leads to a lower suicide rate in the general population. Lithium accumulates in the brains of untreated people, too. This means that lithium, which has so far been regarded as unimportant, could be an essential trace element for humans.
This is what Josef Lichtinger is studying in his doctoral thesis at the Chair for Hadron and Nuclear Physics (E12) at the Technische Universität München. From the Institute for Forensic Medicine at the Ludwig-Maximilians-Universität Munich (LMU), he received tissue samples taken from patients treated with lithium, untreated patients and healthy test subjects. The physicist exposed these to a focused, high-intensity cold neutron beam at the measuring station for prompt gamma activation analysis at FRM II.
Lithium reacts with neutrons in a very specific manner, splitting into a helium atom and a tritium atom. Using a special detector developed by Josef Lichtinger, traces as low as 0.45 nanograms of lithium per gram of tissue can be measured. “It is impossible to make measurements as precise as those using the neutrons with any other method,” says Jutta Schöpfer, the forensic scientist at the LMU in charge of several research projects on lithium distribution in the human body.
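To put that detection limit in perspective, a quick unit conversion in Python (illustrative arithmetic only, using standard constants) shows how many lithium atoms 0.45 nanograms per gram of tissue corresponds to:
# Convert the 0.45 ng/g detection limit into lithium atoms per gram of tissue.
AVOGADRO = 6.022e23      # atoms per mole
LI_MOLAR_MASS = 6.94     # g/mol for natural lithium (mostly Li-7, ~7.6% Li-6)
grams_of_lithium = 0.45e-9                   # 0.45 nanograms
atoms = grams_of_lithium / LI_MOLAR_MASS * AVOGADRO
print(f"{atoms:.1e} atoms")                  # ~3.9e13 atoms per gram of tissue
# Note: only the Li-6 isotope undergoes the neutron reaction described above.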
Lichtinger’s results are surprising: Only in the samples of a depressive patient treated with lithium did he observe a higher accumulation of lithium in the so-called white matter. This is the area in the brain where nerve tracts run. The lithium content in the neighboring grey matter was 3 to 4 times lower. Lithium accumulation in white matter was not observed in a number of untreated depressive patients. This points to the fact that lithium does not work in the space between nerve cells, like other psychotropic drugs, but within the nerve tracts themselves.
As a next step, Josef Lichtinger plans to examine further tissue samples at TUM’s Research Neutron Source in order to confirm and expand his results. The goal is a spatially resolved map showing lithium accumulation in the brains of healthy and depressive patients. This would allow the universal drug lithium to be prescribed for psychological disorders with greater precision and control. The project is funded by the German Research Foundation (DFG).

Creating Matter That Behaves Like Luke Skywalker’s Light Saber

redOrbit Staff & Wire Reports – Your Universe Online

Scientists from Harvard University and the Massachusetts Institute of Technology (MIT) have joined forces to create a never-before-seen form of matter that is said to behave like the legendary light sabers of Star Wars fame.

Harvard physics professor Mikhail Lukin, MIT physics professor Vladan Vuletic and their colleagues were able to coax photons into binding together to form what they describe as “photonic molecules.”

In a paper that appeared in Wednesday’s edition of the journal Nature, the authors explain how their findings run contrary to widely-accepted knowledge about the way in which light behaves.

Scientists have long believed that photons are massless particles that do not interact with each other, the researchers said, noting that if you shine two laser beams at one another, they simply pass through each other. Photonic molecules behave differently, however, Lukin added.

“What we have done is create a special type of medium in which photons interact with each other so strongly that they begin to act as though they have mass, and they bind together to form molecules,” he said. “This type of photonic bound state has been discussed theoretically for quite a while, but until now it hadn’t been observed.”

“It’s not an inapt analogy to compare this to light sabers,” the Harvard professor added. “When these photons interact with each other, they’re pushing against and deflecting each other. The physics of what’s happening in these molecules is similar to what we see in the movies.”

So how did Lukin, Vuletic and their colleagues manage to get the ordinarily massless photons to bind to one another? They started by pumping rubidium atoms into a vacuum chamber, and then used lasers to cool the cloud of atoms to only a few degrees above absolute zero.

Using extremely weak laser pulses, they fired single photons into the cloud of atoms. As each photon entered the cloud, its energy excited atoms along the way, causing the photon to slow dramatically. The energy is handed off from atom to atom, and eventually exits the cloud with the photon.

“When the photon exits the medium, its identity is preserved,” explained Lukin. “It’s the same effect we see with refraction of light in a water glass. The light enters the water, it hands off part of its energy to the medium, and inside it exists as light and matter coupled together, but when it exits, it’s still light.”

“The process that takes place is the same; it’s just a bit more extreme – the light is slowed considerably, and a lot more energy is given away than during refraction,” he added. Much to their surprise, when the researchers fired two photons into the cloud, the photons exited together as a single molecule due to an effect known as the Rydberg blockade.

The Rydberg blockade, Lukin explained, states that when an atom is excited, nearby atoms cannot be excited to the same degree. Essentially, this means that when two photons enter an atomic cloud together, the first one excites an atom but must move forward before the second can excite nearby atoms. As a result, the two photons push and pull each other through the cloud as their energy is passed from one atom to the next, the researchers explained.

“It’s a photonic interaction that’s mediated by the atomic interaction. That makes these two photons behave like a molecule, and when they exit the medium they’re much more likely to do so together than as single photons,” Lukin said, noting that the odd phenomenon has some practical applications, such as quantum computing.

“We do this for fun, and because we’re pushing the frontiers of science,” the professor added. “But it feeds into the bigger picture of what we’re doing because photons remain the best possible means to carry quantum information. The handicap, though, has been that photons don’t interact with each other.”

In order to build a quantum computer, Lukin said that developers first need to come up with a way to preserve quantum information and process it using quantum logic operations. However, quantum logic requires interactions between individual quanta so that these types of systems can successfully process information.

“What we demonstrate with this process allows us to do that,” he explained. “Before we make a useful, practical quantum switch or photonic logic gate we have to improve the performance, so it’s still at the proof-of-concept level, but this is an important step. The physical principles we’ve established here are important.”

Experts Predict Effects Of Deepwater Horizon Oil Spill Could Last Decades

redOrbit Staff & Wire Reports – Your Universe Online

The Deepwater Horizon disaster could have a lasting impact on the Gulf of Mexico, according to a new paper suggesting that the region’s deep-sea soft-sediment ecosystem could take decades to recover from the 2010 oil spill.

The authors claim that their study, which was printed last month by the online journal PLoS ONE, provides comprehensive results on the spill’s effect on deep-water communities at the base of the Gulf’s food chain for the first time.

The spill resulted from an explosion aboard the Deepwater Horizon oil rig on April 20, 2010, which released a total of 4.9 million barrels (205.8 million gallons) of crude in what became the largest offshore oil spill in US history.

In gauging the long-term effect of the disaster, the scientists conducted a 2011 cruise to collect additional data from sites that had been previously sampled in the fall of 2010. Specifically, they looked at the Gulf’s soft-bottom muddy habitats, examining biological composition and chemical composition at the same time and at the same location.

“This is not yet a complete picture,” said lead scientist Cynthia Cooksey of the National Oceanic and Atmospheric Administration (NOAA) National Centers for Coastal Ocean Science. “We are now in the process of analyzing data collected from a subsequent cruise in the spring of 2011. Those data will not be available for another year, but will also inform how we look at conditions over time.”

“As the principal investigators, we were tasked with determining what impacts might have occurred to the sea floor from the Deepwater Horizon oil spill,” Dr. Paul Montagna, Endowed Chair for Ecosystems and Modeling at the Harte Research Institute for Gulf of Mexico Studies, Texas A&M University-Corpus Christi, added. “We developed an innovative approach to combine tried and true classical statistical techniques with state of the art mapping technologies to create a map of the footprint of the oil spill.”

When researchers investigate offshore drilling sites, Montagna said, they typically find pollution within 300 to 600 yards of the site. During their most recent expedition, however, the team found it nearly two miles from the wellhead, with “identifiable impacts” of the pollution observed more than 10 miles from the actual site. Previously, experts had been unable to identify the vast underwater plume’s effect on the bottom, or the “devastating” impact the Deepwater Horizon spill had on the sea floor.

According to the researchers, the oil spill and plume covered nearly 360 square miles, with the most severe reduction of biological abundance and biodiversity occurring in a region of roughly nine square miles around the wellhead. Moderate effects were also observed across an area of 57 square miles around the wellhead, they added.

“The tremendous biodiversity of meiofauna in the deep-sea area of the Gulf of Mexico we studied has been reduced dramatically,” said Dr. Jeff Baguley, an expert on meiofauna (small invertebrates that live in both marine and fresh water) from the University of Nevada, Reno. “Nematode worms have become the dominant species at sites we sampled that were impacted by the oil. So though the overall number of meiofauna may not have changed much, it’s that we’ve lost the incredible biodiversity.”

Disaster Recovery Tool Tested By NASA And Homeland Security

NASA’s Jet Propulsion Laboratory

NASA and the U.S. Department of Homeland Security are collaborating on a first-of-its-kind portable radar device to detect the heartbeats and breathing patterns of victims trapped in large piles of rubble resulting from a disaster.

The prototype technology, called Finding Individuals for Disaster and Emergency Response (FINDER) can locate individuals buried as deep as 30 feet (about 9 meters) in crushed materials, hidden behind 20 feet (about 6 meters) of solid concrete, and from a distance of 100 feet (about 30 meters) in open spaces.

Developed in conjunction with Homeland Security’s Science and Technology Directorate, FINDER is based on remote-sensing radar technology developed by NASA’s Jet Propulsion Laboratory in Pasadena, Calif., to monitor the location of spacecraft JPL manages for NASA’s Science Mission Directorate in Washington.

“FINDER is bringing NASA technology that explores other planets to the effort to save lives on ours,” said Mason Peck, chief technologist for NASA and principal advisor on technology policy and programs. “This is a prime example of intergovernmental collaboration and expertise that has a direct benefit to the American taxpayer.”

The technology was demonstrated to the media today at the DHS’s Virginia Task Force 1 Training Facility in Lorton, Va. Media participated in demonstrations that featured the device locating volunteers hiding under heaps of debris. FINDER also will be tested further by the Federal Emergency Management Agency this year and next.

“The ultimate goal of FINDER is to help emergency responders efficiently rescue victims of disasters,” said John Price, program manager for the First Responders Group in Homeland Security’s Science and Technology Directorate in Washington. “The technology has the potential to quickly identify the presence of living victims, allowing rescue workers to more precisely deploy their limited resources.”

The technology works by beaming microwave radar signals into the piles of debris and analyzing the patterns of signals that bounce back. NASA’s Deep Space Network regularly uses similar radar technology to locate spacecraft. A light wave is sent to a spacecraft, and the time it takes for the signal to get back reveals how far away the spacecraft is. This technique is used for science research, too. For example, the Deep Space Network monitors the location of the Cassini mission’s orbit around Saturn to learn about the ringed planet’s internal structure.
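The ranging principle described here boils down to a time-of-flight calculation: the signal travels at the speed of light, and the one-way distance is half the round-trip path. A minimal sketch (the function name and example figures are ours, for illustration):

```python
# Time-of-flight ranging: distance from the round-trip delay of a signal.
C = 299_792_458.0  # speed of light, m/s

def one_way_distance(round_trip_seconds: float) -> float:
    """The signal travels out and back, so the one-way range is half the path."""
    return C * round_trip_seconds / 2.0

# Example: a target ~1.4e12 m away (roughly Saturn's distance) returns an
# echo after about 2.6 hours; the computed range recovers that distance.
delay = 2 * 1.4e12 / C
print(f"{one_way_distance(delay):.3e} m")  # -> 1.400e+12
```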

“Detecting small motions from the victim’s heartbeat and breathing from a distance uses the same kind of signal processing as detecting the small changes in motion of spacecraft like Cassini as it orbits Saturn,” said James Lux, task manager for FINDER at JPL.

In disaster scenarios, the use of radar signals can be particularly complex. Earthquakes and tornadoes produce twisted and shattered wreckage, such that any radar signals bouncing back from these piles are tangled and hard to decipher. JPL’s expertise in data processing helped with this challenge. Advanced algorithms isolate the tiny signals from a person’s moving chest by filtering out other signals, such as those from moving trees and animals.
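The article doesn’t disclose JPL’s actual algorithms, but the kind of filtering it describes can be illustrated generically: respiration sits roughly in the 0.1–0.5 Hz band and a resting heartbeat near 1–2 Hz, so a band-pass filter can pull those components out of a noisy return. A rough sketch with SciPy – the sample rate, bands and synthetic signal are illustrative assumptions, not FINDER’s parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                      # assumed sample rate of the processed return, Hz
t = np.arange(0, 30, 1 / fs)    # 30 seconds of data

# Synthetic return: 0.3 Hz breathing plus a faint 1.2 Hz heartbeat, buried in noise.
x = (1.0 * np.sin(2 * np.pi * 0.3 * t)
     + 0.1 * np.sin(2 * np.pi * 1.2 * t)
     + 0.5 * np.random.randn(t.size))

def bandpass(sig, lo, hi, fs, order=4):
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, sig)   # zero-phase filtering preserves timing

breathing = bandpass(x, 0.1, 0.5, fs)   # respiration band
heartbeat = bandpass(x, 0.8, 2.0, fs)   # resting cardiac band
```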

Similar technology has potential applications in NASA’s future human missions to space habitats. The astronauts’ vital signs could be monitored without the need for wires.

The Deep Space Network, managed by JPL, is an international network of antennas that supports interplanetary spacecraft missions and radio and radar astronomy observations for the exploration of the solar system and the universe. The network also supports selected Earth-orbiting missions.

Forensics May Rely On ‘Microbial Clock’ To Establish Time Of Death

[ Watch the Video: Time Of Death Determined By Microbial Clock ]

April Flowers for redOrbit.com – Your Universe Online

Forensic scientists already have an extensive toolbox of techniques for determining the time of death in cases involving human corpses, but an intriguing new study led by the University of Colorado may just give them a new one.

The findings, published in the new online science and biomedical journal eLife, describe a microbial clock that is essentially the lock-step succession of bacterial changes that occur postmortem as bodies move through the decay process. The current study used mice; however, previous research into the human microbiome – the estimated 100 trillion or so microbes that live on and inside each of us – suggests there is solid reason to believe such microbial clocks are ticking away on human corpses.

“While establishing time of death is a crucial piece of information for investigators in cases that involve bodies, existing techniques are not always reliable,” said Jessica Metcalf, a postdoctoral researcher at CU-Boulder’s BioFrontiers Institute. “Our results provide a detailed understanding of the bacterial changes that occur as mouse corpses decompose, and we believe this method has the potential to be a complementary forensic tool for estimating time of death.”

Forensic scientists currently use tools ranging from the timing of last text messages and corpse temperatures to insect infestations on bodies and “grave soil” analyses, with varying results, according to Metcalf. The longer the time lapse from death until the forensic scientist starts testing, the more difficult it becomes to determine the time of death with any accuracy.

The research team used high-throughput gene sequencing on both bacteria and microbial eukaryotic organisms like fungi, nematodes and amoebae to pinpoint the time of mouse death over a 48-day period to within roughly four days. Sampling at 34 days postmortem produced even more accurate results, correctly estimating the time of death to within about three days, said Metcalf.
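The article doesn’t spell out the statistical machinery, but the underlying idea – regressing time since death on microbial community composition – can be sketched with an off-the-shelf regressor. A toy version using scikit-learn on made-up abundance data (the real study used far richer sequence data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Toy data: 40 samples x 50 taxa relative abundances, plus a postmortem
# interval (days) that one "successional" taxon loosely tracks.
days = rng.uniform(0, 48, size=40)
taxa = rng.random((40, 50))
taxa[:, 0] = days / 48 + 0.1 * rng.random(40)

model = RandomForestRegressor(n_estimators=200, random_state=0)
predicted = cross_val_predict(model, taxa, days, cv=5)
print("mean absolute error, days:", round(np.abs(predicted - days).mean(), 1))
```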

Over the course of the 48-day study, the research team tracked microbial changes on the heads, torsos, body cavities and associated grave soil of 40 mice at eight different time points.

Chaminade University forensic scientist David Carter said the after-death stages include the “fresh” stage before decomposition, followed by “active decay” that includes bloating and subsequent body cavity rupture, followed by “advanced decay.”

“At each time point that we sampled, we saw similar microbiome patterns on the individual mice and similar biochemical changes in the grave soil,” said Laura Parfrey, a former CU-Boulder postdoctoral fellow and now a faculty member at the University of British Columbia who is a microbial and eukaryotic expert. “And although there were dramatic changes in the abundance and distribution of bacteria over the course of the study, we saw a surprising amount of consistency between individual mice microbes between the time points — something we were hoping for.”

During the decay period, the researchers charted “blooms” of a common soil-dwelling nematode – one well known for consuming bacterial biomass – that occurred at roughly the same time on individual mouse remains.

“The nematodes seem to be responding to increases in bacterial biomass during the early decomposition process, an interesting finding from a community ecology standpoint,” said Metcalf.

“This work shows that your microbiome is not just important while you’re alive,” said CU-Boulder Associate Professor Rob Knight, who runs the lab where the experiments took place. “It might also be important after you’re dead.”

The CU research team worked closely with assistant professors Sibyl Bucheli and Aaron Lynne of Sam Houston State University (SHSU). SHSU is the home of the Southeast Texas Applied Forensic Science Facility, an outdoor human decomposition facility known popularly as a “body farm.” The body farm research team is testing bacterial signatures of human cadavers over time to learn more about the process of human decomposition and how it is influenced by weather, seasons, animal scavenging and insect infestations.

The current study is the latest in more than a dozen conducted over the last several years by the CU-Boulder research team on human microbiomes. Another study, conducted by Professor Noah Fierer, revealed what could be another potential forensic tool: microbial signatures left on computer keys and computer mice – an idea enthralling enough that it was featured on an episode of “CSI: Crime Scene Investigation.”

“This study establishes that a body’s collection of microbial genomes provides a store of information about its history,” said Knight, also an associate professor of chemistry and biochemistry and a Howard Hughes Medical Institute Early Career Scientist. “Future studies will let us understand how much of this information, both about events before death — like diet, lifestyle and travel — and after death can be recovered.”

“There is no single forensic tool that is useful in all scenarios, as all have some degree of uncertainty,” said Metcalf. “But given our results and our experience with microbiomes, there is reason to believe we can get past some of this uncertainty and look toward this technique as a complementary method to better estimate time of death in humans.”

Astronomers Find Densest Galaxy Yet Discovered

[ Watch the Video: Hubble Locates Densest Galaxy Ever ]

Brett Smith for redOrbit.com – Your Universe Online

The distance from our sun to the next closest star, Alpha Centauri, is 25.6 trillion miles. While that may seem far, try to imagine 10,000 stars sitting between the two star systems – the stars would then be spaced roughly the distance from Earth to Pluto apart.
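That spacing claim is easy to check: dividing the Sun–Alpha Centauri distance into 10,000 steps gives about 2.6 billion miles per gap, which is indeed on the order of the Earth–Pluto distance (roughly 2.7 to 4.7 billion miles, depending on where Pluto is in its orbit):

```python
sun_to_alpha_cen = 25.6e12           # miles, per the article
spacing = sun_to_alpha_cen / 10_000  # miles between adjacent stars
print(f"{spacing:.2e} miles")        # -> 2.56e+09
print(f"{spacing / 93e6:.1f} AU")    # -> 27.5 (1 AU ~ 93 million miles)
```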

That’s exactly the scenario in M60-UCD1, a recently discovered ‘ultra-compact dwarf galaxy’ described in a study by a team of American and Australian astronomers in the latest edition of Astrophysical Journal Letters.

“This galaxy is more massive than any ultra-compact dwarfs of comparable size,” said study author Jay Strader, an astronomer at Michigan State University, “and is arguably the densest galaxy known in the local universe.”

“Traveling from one star to another would be a lot easier in M60-UCD1 than it is in our galaxy,” he added. “But it would still take hundreds of years using present technology.”

About half of the galaxy’s mass is located within a radius of around 80 light years. This compactness would make the concentration of stars approximately 15,000 times greater in M60-UCD1 than what is found in Earth’s corner of the Milky Way.

The team made their discovery using NASA’s Hubble Space Telescope, the space agency’s orbiting Chandra X-ray Observatory and the W. M. Keck Observatory on the summit of Mauna Kea in Hawaii. The 6.5-meter Multiple Mirror Telescope in Arizona was used to determine the amount of elements heavier than hydrogen and helium in the galaxy’s stars. The values were found to be close to those of our Sun.

“The abundance of heavy elements in this galaxy makes it a fertile environment for planets and, potentially, life to form,” said co-author Anil Seth, an astronomer at the University of Utah.

Study researchers also found that M60-UCD1 has a strong X-ray source in its center. One possible reason for this source could be a supermassive black hole weighing about 10 million times as much as our sun.

Chandra scientists said they are currently trying to determine whether M60-UCD1 and other similar galaxies were born that way or were once larger galaxies that had stars ripped away from their exteriors. Large black holes are not typically seen in star clusters, so if there is a large central black hole inside M60-UCD1, the galaxy was most likely produced through collisions between a bigger galaxy and one or more other galaxies. The galaxy’s mass and its Sun-like abundances of elements also indicate that it is the leftover of a much bigger galaxy.

“We think nearly all of the stars have been pulled away from the exterior of what once was a much bigger galaxy,” said co-author Duncan Forbes of Swinburne University in Australia. “This leaves behind just the very dense nucleus of the former galaxy, and an overly massive black hole.”

If this stripping of stars did take place, then M60-UCD1 was originally 50 to 200 times bigger than it is now, which would make it more like the Milky Way and other typical galaxies.

“Twenty years ago we couldn’t have done this,” Strader noted. “We didn’t have Hubble or Chandra. This is one of those projects where you bring together the full force of NASA’s great observatories, plus ground-based resources.”

It’s An Uphill Battle For The World’s Forests

Brett Smith for redOrbit.com – Your Universe Online

A relaxing stroll through the woods is increasingly becoming an intense cardiovascular workout, as a new report from researchers at Aarhus University in Denmark indicates that the world’s forests are slowly being relegated to steep, mountainous slopes.

According to study author Brody Sandel, the increasingly efficient removal of trees from flat areas around the world raises concerns about the biodiversity of the world’s forests in the future.

“The remaining forests on slopes are typically divided into smaller areas that are not continuous,” Sandel said. “For example, fragmentation reduces the availability of interior forest habitat that is preferred by many bird species. There are also a number of large predators, such as big cats like the tiger, which require extensive areas of continuous forest to be able to get enough food or avoid human persecution.”

According to the study, which was published Tuesday in the journal Nature Communications, developed countries are especially efficient at razing forests in flat areas of arable land. The study researchers identified a clear connection between a thriving economy, a more organized society and a more efficient restriction of forests to steep slopes, which have lower utility and value.

The team arrived at their findings through the analysis of images taken from satellites that monitor global forest ecosystems with a fine level of detail. High-resolution global satellite data revealed the distribution of global tree cover from 2000 to 2005 and its connection to terrain, climate, human activity, and a range of other factors.

Study researchers found that the relegation of forests to steep slopes has also recently accelerated in less well-developed countries, many of which have begun to clear forests in pursuit of greater agricultural capacity and urban development. In remote areas of the Amazon, Siberia and the Congo, there are still large, uninterrupted stretches of virgin forests, the Danish scientists said. However, as populations grow and human impacts increase, the researchers expect development to increasingly affect even these relatively desolate regions.

Some developed societies around the world have reforestation programs, and other forests are naturally regrowing as people move from hillsides to the highly populated regions below. Both trends reinforce the tendency for future forests to be pushed up onto slopes, the Danish scientists said.

In addition to concerns over biodiversity, small and fragmented forests are more exposed to wind, receive more intense sunlight on the forest floor, and suffer more disturbance. The result would be a hotter and drier microclimate, potentially promoting species that do not require a stable, dense forest and upsetting the balance of the ecosystem.

“On the other hand, species in steep mountainous areas can better track their preferred climate as it becomes warmer,” said co-author Jens-Christian Svenning, a professor in the Department of Bioscience at Aarhus University.

“Hence, considering future climate change, it’s fortunate that forests will especially occur on steep terrain in the future,” he added. “It’s thus a blessing in disguise that the general loss of forests has less effect on slopes.”

Scientists Push Closer To Understanding Mystery Of Deep Earthquakes

Scientists broke new ground in the study of deep earthquakes, a poorly understood phenomenon that occurs where the oceanic lithosphere, driven by tectonics, plunges under continental plates – examples are off the coasts of the western United States, Russia and Japan.

This research is a large step toward replicating the full power of these earthquakes to learn what sets them off and how they unleash their violence. It was made possible only by the construction of a one-of-a-kind X-ray facility that can replicate high-pressure, high-temperature conditions while allowing scientists to peer deep into material to trace the propagation of cracks and shock waves.

“We are capturing the physics of deep earthquakes,” said Yanbin Wang, a senior scientist at the University of Chicago who helps run the X-ray facility where the research occurred. “Our experiments show that, for the first time, laboratory-triggered brittle failures during the olivine-spinel (mineral) phase transformation have many similar features to deep earthquakes.”

Wang and a team of scientists from Illinois, California and France simulated deep earthquakes at the U.S. Department of Energy’s Argonne National Laboratory using a pressure of 5 gigapascals, more than double the 2 GPa of previous studies. For comparison, a pressure of 5 GPa is roughly 49,000 times the pressure at sea level.
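That comparison is simple arithmetic once sea-level pressure is taken as one standard atmosphere, about 101,325 pascals:

```python
# 5 GPa expressed in standard atmospheres.
experiment_pa = 5e9        # 5 gigapascals
sea_level_pa = 101_325     # one standard atmosphere, Pa
print(round(experiment_pa / sea_level_pa))  # -> 49346, roughly 49,000x
```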

At this pressure, rock should be squeezed too tight to rupture and erupt into violent earthquakes. But it does. And that has puzzled scientists since the phenomenon of deep earthquakes was discovered nearly 100 years ago. Interest spiked on May 24, when the world’s strongest deep earthquake – roughly five times the power of the great San Francisco quake of 1906 – struck in the waters near Russia.

These deep earthquakes occur in older and colder areas of the oceanic plate that gets pushed into the earth’s mantle. It has been speculated that the earthquakes are triggered when olivine, a mineral common in the upper mantle, undergoes a phase transformation that temporarily weakens the whole rock, causing it to fail.

“Our current goal is to understand why and how deep earthquakes happen. We are not at a stage to predict them yet; it is still a long way to go,” Wang said.

The work was conducted at the GeoSoilEnviroCARS beamline operated by the University of Chicago at Argonne’s Advanced Photon Source.

“GSECARS is the only beamline in the world that has the combined capabilities of in-situ X-ray diffraction and imaging, controlled deformation, in terms of stress, strain and strain rate, at high pressure and temperature, and acoustic emission detection,” Wang said. “It took us several years to reach this technical capability.”

This new technology is a dream come true for the paper’s coauthor, geologist Harry Green, a distinguished professor of the graduate division at the University of California, Riverside.

More than 20 years ago, he and colleagues discovered a high-pressure failure mechanism that they proposed then was the long-sought mechanism of very deep earthquakes (earthquakes occurring at more than 400 km depth). The result was controversial because seismologists could not find a seismic signal in the earth that could confirm the results.

Seismologists have now found the critical evidence. Indeed, beneath Japan, they have even imaged the tell-tale evidence and showed that it coincides with the locations of deep earthquakes.

In the Sept. 20 issue of the journal Science, Green and colleagues explained how to simulate these earthquakes in a paper titled “Deep-Focus Earthquake Analogs Recorded at High Pressure and Temperature in the Laboratory”.

“We confirmed essentially all aspects of our earlier experimental work and extended the conditions to significantly higher pressure,” Green said.  “What is crucial, however, is that these experiments are accomplished in a new type of apparatus that allows us to view and analyze specimens using synchrotron X-rays in the premier laboratory in the world for this kind of experiment — the Advanced Photon Source at Argonne National Laboratory.”

The ability to do such experiments has now allowed scientists like Green to simulate the appropriate conditions within the earth and record and analyze the “earthquakes” in their small samples in real time, thus providing the strongest evidence yet that this is the mechanism by which earthquakes happen at hundreds of kilometers depth.

The origin of deep earthquakes fundamentally differs from that of shallow earthquakes (earthquakes occurring at less than 50 km depth). In the case of shallow earthquakes, theories of rock fracture rely on the properties of coalescing cracks and friction.

“But as pressure and temperature increase with depth, intracrystalline plasticity dominates the deformation regime so that rocks yield by creep or flow rather than by the kind of brittle fracturing we see at smaller depths,” Green explained.  “Moreover, at depths of more than 400 kilometers, the mineral olivine is no longer stable and undergoes a transformation resulting in spinel, a mineral of higher density.”

The research team focused on the role that phase transformations of olivine might play in triggering deep earthquakes. They performed laboratory deformation experiments on olivine at high pressure and found the “earthquakes” only within a narrow temperature range that simulates conditions where the real earthquakes occur in the earth.

“Using synchrotron X-rays to aid our observations, we found that fractures nucleate at the onset of the olivine to spinel transition,” Green said. “Further, these fractures propagate dynamically so that intense acoustic emissions are generated. These phase transitions in olivine, we argue in our research paper, provide an attractive mechanism for how very deep earthquakes take place.”

“Our next goal is to study the ‘real’ material, the silicate olivine (Mg,Fe)2SiO4, which requires much higher pressures,” Wang said.

The research was funded by grants from the Institut National des Sciences de l’Univers, L’Agence Nationale de la Recherche and the National Science Foundation. Use of the Advanced Photon Source was funded by the U.S. Department of Energy Office of Science.

The authors of the study were Alexandre Schubnel of the Ecole Normale Supérieure, France; Fabrice Brunet of the Université de Grenoble, France; Nadège Hilairet, Julian Gasc and Wang of the University of Chicago; and Green of UC Riverside.

Implanted Device Cuts Central Sleep Apnea Episodes Significantly

[ Watch the Video: Small Implant Makes Big Difference In Sleep Apnea ]

Michael Harper for redOrbit.com – Your Universe Online

Unlike obstructive sleep apnea, central sleep apnea can be both difficult to diagnose and potentially more dangerous.

According to sleep medicine experts from Ohio State University Wexner Medical Center, more than one-third of heart failure patients are affected by central sleep apnea, which makes their cardiovascular condition even worse. These doctors now say they’ve tested an implantable device that alleviates the condition in heart patients.

In trials, doctors saw a 56 percent reduction in overall apnea events per hour and a more than 80 percent reduction in central sleep apnea events. The device could prove beneficial to many, as recent studies have shown prolonged bouts of sleep apnea can lead to a myriad of other health conditions.

“One of the concerning features of central sleep apnea is that these patients don’t fit the usual profile of obstructive sleep apnea,” explained Dr. Rami Khayat, one of the sleep medicine experts working on this study at Ohio State. “They generally don’t snore, so they’re tougher to diagnose, and the symptoms of sleepiness and fatigue overlap with symptoms associated with heart failure.”

As the name suggests, obstructive sleep apnea occurs when the airway is obstructed during sleep. Patients with either type of sleep apnea often stop breathing for longer than ten seconds at a time. Central sleep apnea, on the other hand, occurs when signals from the brain are interrupted: in these cases, the brain fails to tell the body to breathe for longer than ten seconds at a time.

Dr. Khayat, along with Dr. Ayesha Hasan, Dr. Ralph Augostini and consultant Dr. William Abraham, tested a device made by Respicardia Inc. of Minnesota, which also funded the study.

The device is implanted just below the collarbone, with a stimulation lead threaded through a vein to reach a phrenic nerve. The device also uses a pulse generator and a sensing lead to detect respiration. Similar to a pacemaker, it detects when a person stops breathing and can electrically signal the diaphragm to breathe again.
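Respicardia’s firmware is of course proprietary, but the pacemaker-like decision loop described above can be caricatured in a few lines: watch the respiration sensor, and if no breath has been detected for longer than the apnea threshold, pulse the stimulation lead. A deliberately simplified sketch – the threshold handling and function names are invented for illustration:

```python
APNEA_THRESHOLD_S = 10.0  # breathing pauses beyond ten seconds count as apnea

def should_stimulate(breath_timestamps, now):
    """Return True if the phrenic-nerve lead should be pulsed.

    breath_timestamps: times (s) at which the sensing lead detected a breath.
    """
    if not breath_timestamps:
        return False  # no baseline yet; a real device would handle this case
    return (now - breath_timestamps[-1]) > APNEA_THRESHOLD_S

# Example: last breath detected at t=100 s; by t=111 s the device would fire.
print(should_stimulate([92.0, 96.1, 100.0], now=111.0))  # -> True
```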

The Ohio State University doctors implanted the device in 47 test patients and gave them one month to heal completely. Once healed, the patients returned to the hospital to have their devices turned on and programmed to their sleep habits.

“The device normalized breathing during sleep, it reduced apnea episodes and, in association with that, we saw improvements in sleepiness symptoms and patients’ quality of life,” said Dr. Abraham, a consultant for Respicardia.

“We also noted a reduction in blood pressure in patients with hypertension,” he added.

The doctors will now set out to compare these results with current methods for treating central and obstructive sleep apnea in larger, controlled clinical trials. Participants will be separated into two groups: the first group will have their devices turned on shortly after surgery, while the second group will have their devices turned on six months after the operation.

“If these initial findings bear out in the larger studies, an implantable device could be a good option for central sleep apnea patients who cannot tolerate positive airway pressure therapy,” said Dr. Khayat in a statement.

Such a device could prove beneficial to many patients and prevent many other health issues. A June study found that even moderate cases of obstructive sleep apnea can significantly increase the risk of sudden cardiac death while asleep.

Sleep apnea has also been tied to asthma and Alzheimer’s disease.

Study Explains Risk Factors Associated With Video Game Addiction

[ Watch the Video: Can You Admit To Your Video Game Addiction? ]

redOrbit Staff & Wire Reports – Your Universe Online

Escapism, social interaction and virtual achievements are the primary risk factors that contribute to potential video game addiction, researchers from the University of Missouri claim in a new study.

“The biggest risk factor for pathological video game use seems to be playing games to escape from daily life,” study author Joseph Hilgard, a doctoral candidate in the Department of Psychological Sciences in the MU College of Arts and Science, explained Monday in a statement.

“Individuals who play games to get away from their lives or to pretend to be other people seem to be those most at-risk for becoming part of a vicious cycle,” he added. “These gamers avoid their problems by playing games, which in turn interferes with their lives because they’re so busy playing games.”

These factors spur on problematic habits among adults, whether they consider themselves casual gamers or hardcore devotees of interactive entertainment, the researchers wrote in a paper published earlier this month in the journal Frontiers in Psychology. They believe that understanding the motives that contribute to these behaviors could help counselors identify and treat video game addicts.

Becoming addicted to video games is more than just playing for inordinate amounts of time, Hilgard and his colleagues explained. True problematic gaming also includes other unhealthy behaviors, such as lying about the amount of time spent playing games and missing work or other obligations due to video gaming.

[ Watch the Video: Risk Factors For Addictive Video Game Use Among Adults ]

“People who play games to socialize with other players seem to have more problems as well,” Hilgard said. “It could be that games are imposing a sort of social obligation on these individuals so that they have to set aside time to play with other players.”

“For example, in games like World of Warcraft, most players join teams or guilds. If some teammates want to play for four hours on a Saturday night, the other players feel obligated to play or else they may be cut from the team. Those play obligations can mess with individuals’ real-life obligations,” he added.

The researchers noted that problematic video game use is not all that different from other addictive behaviors, such as alcoholism and drug abuse. All can result from poor coping strategies – in gaming’s case, from players who become obsessed with reaching the next level or collecting a certain number of in-game items.

“When people talk about games being ‘so addictive,’ usually they’re referring to games like Farmville or Diablo that give players rewards, such as better equipment or stronger characters, as they play,” Hilgard said. “People who are especially motivated by these rewards can find it hard to stop playing.”

He added that understanding the reasons why people play video games can help researchers, consumers and game developers better understand what makes certain types of software attractive to certain individuals. Furthermore, Hilgard said that his team found evidence supporting the notion that massively multiplayer online role-playing games (MMORPGs) like the aforementioned World of Warcraft are the most addictive video game genre.

“[MMORPGs] provide opportunities for players to advance levels, to join teams and to play with others,” he said. “In addition, the games provide enormous fantasy worlds that gamers can disappear into for hours at a time and forget about their problems. MMORPGs may be triple threats for encouraging pathological game use because they present all three risk factors to gamers.”

“Consistent with previous research, we did not find a perfect relationship between total time spent playing games and addictive video game behaviors,” added study co-author Christopher Engelhardt, a postdoctoral research fellow in the MU Department of Health Psychology. “Additionally, other variables, such as the proportion of free time spent playing video games, seem to better predict game addiction above and beyond the total amount of time spent playing video games.”

Heartbeat Passwords May Make Implanted Medical Devices Unhackable

[ Watch the Video: Securing Your Implanted Device With A Heartbeat Password ]

redOrbit Staff & Wire Reports – Your Universe Online

Researchers at Rice University have found a way to use the unique signature of a person’s heartbeat as a biometric security identifier to prevent implanted medical devices (IMDs) from being hacked.

Implantable devices such as defibrillators and insulin pumps typically come with wireless connectivity that allows doctors to update software or download data. However, this wireless capability also gives hackers an opportunity to remotely alter the device in potentially life-threatening ways.

Masoud Rostami, one of the Rice University researchers involved in the current study, said IMDs generally lack the kind of password security found on home Wi-Fi networks because emergency medical technicians often need quick access to the information the devices store to save a life.

The downside of this, however, is that it leaves the IMDs open to attack, he said.

“If you have a device inside your body, a person could walk by, push a button and violate your privacy, even give you a shock,” he said in a statement.

To address this vulnerability, the researchers developed a new security feature that uses the patient’s own heartbeat as a kind of password that could only be accessed through touch.

A hacker “could make (an insulin pump) inject insulin or update the software of your pacemaker. But our proposed solution forces anybody who wants to read the device to touch you,” Rostami said.

The new system, dubbed Heart-to-Heart, would require software in the IMD to talk to the “touch” device, called the programmer. When a medical technician touches the patient, the programmer would pick up an electrocardiogram (EKG) signature from the beating heart. The internal and external devices would compare minute details of the EKG and execute a “handshake.” If signals gathered by both at the same instant match, they become the password that grants external access to the device.
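The published protocol involves cryptographic detail beyond the article’s scope, but the gist of the handshake – both devices sample the same heartbeat at the same instant and agree only if their readings match – can be sketched as follows. Here inter-beat intervals are coarsely quantized and compared; the feature choice and tolerance are our illustrative assumptions, not the actual Heart-to-Heart scheme:

```python
def beats_to_buckets(beat_times, ms_per_bucket=20):
    """Quantize inter-beat intervals (seconds) into coarse noise-tolerant buckets."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return tuple(round(iv * 1000) // ms_per_bucket for iv in intervals)

def handshake(imd_beats, programmer_beats):
    """Grant access only if both devices observed the same heartbeat just now."""
    return beats_to_buckets(imd_beats) == beats_to_buckets(programmer_beats)

imd = [0.000, 0.810, 1.640, 2.430]          # beat times seen by the implant
programmer = [0.000, 0.812, 1.642, 2.433]   # same beats, small sensor noise
print(handshake(imd, programmer))           # -> True
```

In the real system, sensor noise is handled with proper cryptographic machinery rather than this naive bucketed equality, but the security intuition is the same: only a device touching the patient at that instant sees the right intervals.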

“The signal from your heartbeat is different every second, so the password is different each time,” Rostami said. “You can’t use it even a minute later.”

Rostami compared the EKG to a chart of a financial stock.

“We’re looking at the minutia. If you zoom in on a stock, it ticks up and it ticks down every microsecond. Those fine details are the byproduct of a very complex system and they can’t be predicted.”

A human heartbeat is the same, in that every beat has unique characteristics that can be read and matched, he said.

“We treat your heart as if it were a random number generator.”

Rice electrical and computer engineer Farinaz Koushanfar said the system could potentially be used with the millions of IMDs already in use.

“To our knowledge, this is the first fully secure solution that has small overhead and can work with legacy systems,” she said. “Like any device that has wireless access, we can simply update the software.”

Koushanfar said the software would require very little of an IMD’s power, unlike other security solutions that require computationally intensive – and battery draining – cryptography.

“We’re hopeful,” she said, adding that implementation would require cooperation with device manufacturers, and approval by the Food and Drug Administration (FDA). “We think everything here is a practical technology.”

Rostami said the need for technology such as Heart-to-Heart will only grow over time.

“People will have more implantable devices, not fewer,” he said.

Indeed, there are more than 300,000 wireless electronic medical devices implanted in people every year in the United States alone.

“We already have devices for the heart and insulin pumps, and now researchers are talking about putting neuron stimulators inside the brain. We should make sure all these things are secure.”

Koushanfar and Rostami worked with Ari Juels, former chief scientist at RSA, to develop the new technology. The researchers will present their findings at the Association for Computing Machinery’s Conference on Computer and Communications Security in Berlin in November.

A paper describing the Heart-to-Heart authentication system is available online.

New Evidence Indicates Moon Is 100 Million Years Younger Than Thought

[ Watch the Video: Moon May Be Much Younger Than Previously Believed ]

Lawrence LeBlond for redOrbit.com – Your Universe Online

Earth’s closest neighbor, the moon, has been studied intently by astronomers for centuries. In that time, we thought we had discovered nearly everything there is to know about the origins of our natural satellite.

However, new research by geochemist Richard Carlson of the Carnegie Institution of Washington has shaken things up a bit. According to Carlson’s studies, our orbiting partner is somewhat younger than previously thought – about 100 million years younger to be exact.

Previously, astronomers had pegged the moon at around 4.56 billion years old, with prevailing theories suggesting that a planet similar in size to Mars slammed into Earth at about that time. The dust and debris from this impact was thrown out into space and eventually amalgamated to create the moon.

While experts generally concur that the impact theory is the most likely scenario, Carlson is not so much investigating how this event happened, but rather when the moon formed as a result of this impact.

Carlson’s radioactive dating analysis of lunar rocks collected and returned during the Apollo missions suggests the moon formed between 4.4 and 4.45 billion years ago. If this new understanding holds water, then much of what we thought we knew about the history of the satellite can be thrown out the cosmic window.
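The article doesn’t say which isotopic system Carlson used, but radiometric dating in general rests on a single equation: for a parent isotope with decay constant λ, the age is t = ln(1 + D/P)/λ, where D/P is the ratio of radiogenic daughter to remaining parent. A generic worked example (illustrative numbers, not Carlson’s lunar data):

```python
import math

# Generic radiometric age: t = ln(1 + D/P) / lambda.
# Example system: Rb-87 -> Sr-87, half-life ~48.8 billion years.
half_life_yr = 48.8e9
lam = math.log(2) / half_life_yr        # decay constant, 1/yr

def age_years(daughter_over_parent):
    return math.log(1 + daughter_over_parent) / lam

# A radiogenic D/P ratio of 0.065 corresponds to ~4.4 billion years.
print(f"{age_years(0.065) / 1e9:.2f} billion years")
```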

Carlson also analyzed zircons in Earth rock taken from Western Australia, according to io9. Zircon, an extremely durable mineral, can give clues to the geologic events that occurred early in the planet’s history. These rocks do indicate that a “major differentiation event” occurred at around the same time as the hypothetical impact event that created the moon.

Carlson said previous studies in this area had a large margin of error, but he believes improved technology has allowed him to make a more accurate measurement of when the moon formed, greatly narrowing any margin of error.

“Back in the 1970s, you couldn’t distinguish between 4.45 and 4.55 billion years,” he told Deborah Netburn of the Los Angeles Times. “Today, we can, and everything we are seeing suggests the 4.4 billion number.”

Scientists know that the Solar System is 4.568 billion years old, and they can determine the age of smaller interplanetary bodies, such as asteroids, with a fair degree of accuracy – by analyzing the periods of extensive melting that typically occurred when those bodies collided with even smaller ones known as “planetesimals.”

Another leading theory suggests the moon had a global ocean of molten rock shortly after its formation. The lunar rocks that may have formed from that ocean have been dated to about 4.360 billion years, according to researchers.

Carlson noted that one of the most interesting aspects of his research is imagining what the Earth may have been like before the impact – before it had a moon.

He suggests that “the Earth had two phases of its life — one before the giant impact, and another one greatly modified by the impact.”

With the recent discoveries and improved dating methods, scientists may now be able to produce more accurate estimates of the moon’s – and perhaps the Earth’s – age. Estimating the age of planets is much more difficult than estimating the age of smaller bodies, but the new techniques could help paint a clearer picture.

Carlson’s research could lead to new questions about the origins and early history of our planet, including the possibility that the Earth’s early atmosphere was destroyed by the impact that led to the creation of our orbiting partner.

Carlson presented his “Age of the Lunar Crust: Implications for the Time of Moon Formation” research on Monday at the “Origins of the Moon” conference of the Royal Society.

The “Origins of the Moon” conference continues today, with an “Origins of the Moon – Challenges and Prospects” meeting scheduled for Wednesday and Thursday as well.

Mathematical Simulation Accurately Predicts Rise Of Complex Societies

[ Watch the Video: Math and History Collide ]

redOrbit Staff & Wire Reports – Your Universe Online

A unique marriage of mathematics and history has helped researchers solve the mysteries surrounding the evolution of human society from small groups to the larger, more complex societies of the modern era.

A trans-disciplinary team of experts from the University of Connecticut, the University of Exeter in England, and the National Institute for Mathematical and Biological Synthesis (NIMBioS) have completed a cultural evolutionary model that accurately predicts when and where the largest-scale complex societies arose in human history. Their research appears this week in the journal Proceedings of the National Academy of Sciences.

“Simulated within a realistic landscape of the Afro-Eurasian landmass during 1,500 BCE to 1,500 CE, the mathematical model was tested against the historical record,” NIMBioS, an organization dedicated to solving basic and applied problems in the life sciences, explained in a statement. “During the time period, horse-related military innovations, such as chariots and cavalry, dominated warfare within Afro-Eurasia.”

They also discovered that geography played an important role in such developments, as nomads residing in the Eurasian Steppe helped influence societies that depended on agriculture for support and sustenance. By doing so, the study authors said that the nomads helped spread forms of offensive warfare into those agrarian societies.

“The study focuses on the interaction of ecology and geography as well as the spread of military innovations and predicts that selection for ultra-social institutions that allow for cooperation in huge groups of genetically unrelated individuals and large-scale complex states is greater where warfare is more intense,” NIMBioS said.

“While existing theories on why there is so much variation in the ability of different human populations to construct viable states are usually formulated verbally, by contrast, the authors’ work leads to sharply defined quantitative predictions, which can be tested empirically,” they added.

The authors reported that the spread of larger societies predicted by their simulation was very similar to the actual, observed proliferation. In fact, they stated that their mathematical model was able to explain two-thirds of the variation in determining the rise of large-scale societies.
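The PNAS model itself is far more elaborate, but its core loop – military technology diffusing outward from the steppe, and exposure to intense warfare selecting for “ultrasocial” institutions that allow large states – can be caricatured in a toy grid simulation. Everything below (grid size, probabilities, update rule) is our invention for illustration, not the authors’ parameterization:

```python
import random

random.seed(1)
W, H, STEPS = 40, 20, 250

cells = [(x, y) for x in range(W) for y in range(H)]
steppe = {(x, y) for (x, y) in cells if 8 <= y < 12}  # invented "steppe" band

tech = {c: (c in steppe) for c in cells}   # horse-warfare technology present?
ultra = {c: False for c in cells}          # ultrasocial institutions present?

def neighbors(x, y):
    deltas = ((1, 0), (-1, 0), (0, 1), (0, -1))
    return [(x + dx, y + dy) for dx, dy in deltas
            if 0 <= x + dx < W and 0 <= y + dy < H]

for _ in range(STEPS):
    for (x, y) in cells:
        # Military innovations diffuse to neighboring cells.
        if not tech[(x, y)] and any(tech[n] for n in neighbors(x, y)):
            if random.random() < 0.05:
                tech[(x, y)] = True
        # Where warfare is intense (here: simple exposure to the tech),
        # selection favors institutions enabling large-scale cooperation.
        if tech[(x, y)] and not ultra[(x, y)] and random.random() < 0.01:
            ultra[(x, y)] = True

print(sum(ultra.values()), "of", len(cells), "cells evolved ultrasocial institutions")
```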

“What’s so exciting about this area of research is that instead of just telling stories or describing what occurred, we can now explain general historical patterns with quantitative accuracy,” said study co-author and NIMBioS director for scientific activities Sergey Gavrilets. “Explaining historical events helps us better understand the present, and ultimately may help us predict the future.”

In addition to Gavrilets, the authors of the paper include Peter Turchin of the University of Connecticut’s Department of Ecology and Evolutionary Biology, Thomas E. Currie of the University of Exeter’s Centre for Ecology and Conservation, and Edward A. L. Turner of South Woodham Ferrers, England. The study was edited by Charles S. Spencer of the American Museum of Natural History in New York.

Rainfall Redistribution Will Make Some Areas Warmer And Drier

April Flowers for redOrbit.com – Your Universe Online

A new study from Columbia University’s Lamont-Doherty Earth Observatory reveals that a northward shift of Earth’s wind and rain belts could make a broad swath of regions drier. The findings, published in Proceedings of the National Academy of Sciences, show that these drier regions include the Middle East, American West and Amazonia, while Monsoon Asia and equatorial Africa will become wetter as humans continue to heat the planet.

This new prediction is based on the warming that ended the last ice age around 15,000 years ago. During that warming, the North Atlantic Ocean began to churn more vigorously, melting Arctic sea ice, and setting up a temperature contrast with the southern hemisphere where sea ice was expanding around Antarctica. The tropical rain belt and mid-latitude jet stream were pushed north by the temperature gradient between the poles. This redistributed water in two bands around the planet.

Currently, the Arctic sea ice is retreating again and the northern hemisphere is heating up faster than the south. Because of this, history could repeat itself. “If the kinds of changes we saw during the deglaciation were to occur today that would have a very big impact,” said Wallace Broecker, a climate scientist at Columbia University’s Lamont-Doherty Earth Observatory.

Broecker and his colleague Aaron Putnam, a climate scientist at Lamont-Doherty, combined climate records from around the world – tree rings, polar ice cores, cave formations, and lake and ocean sediments – to build their theory that the wind and rain belts shifted north between about 14,600 and 12,700 years ago as the northern hemisphere heated up.

At this time, at the southern edge of the tropical rain belt in the Bolivian Andes, the great ancient Lake Tauca nearly dried up while rivers in eastern Brazil slowed to a trickle and rain-fed stalagmites in the same region stopped growing. The northward advance of the jet stream in the middle latitudes may have caused Lake Lisan, a precursor to the Dead Sea in Jordan’s Rift Valley, to shrink, along with several prehistoric lakes in the western U.S., including Lake Bonneville in present day Utah.

Changes continued to stack up, with a northward shift of the tropical rains that recharged rivers draining the Cariaco Basin in Venezuela and East Africa’s Lake Victoria and Lake Tanganyika, while the stalagmites in China’s Hulu cave grew bigger. Evidence of a strengthened Asian monsoon even shows up in the Greenland ice cores.

The study authors hypothesize that between 1300 and 1850, as northern Europe transitioned from the relatively warm medieval era to a colder period known as the Little Ice Age, the process worked in reverse. During this time, ocean circulation slowed and North Atlantic sea ice expanded, according to the climate record. The rainfall declined in Monsoon Asia, which led to a series of droughts that have been linked to the decline of Cambodia’s ancient Khmer civilization, China’s Ming dynasty and the collapse of kingdoms in present day Vietnam, Myanmar and Thailand.

The reconstruction of glacier extents in New Zealand’s Southern Alps in the southern hemisphere suggests that the mid-latitudes may have been colder during medieval times. Such evidence supports the idea of a temperature contrast between the hemispheres that altered rain and wind patterns.

A similar migration of the wind and rain belts occurs on Earth each year. The tropical rain belt and mid-latitude jet stream migrate north during the boreal summer, when the northern hemisphere – which has more continents to absorb the sun’s energy – heats up disproportionately relative to the south. In winter, the northern hemisphere cools off and the winds and rains revert south.

The wind and rain belts have also rearranged themselves for longer periods of time. For example, in the 1970s and 1980s a southward shift of the tropical rain belt is thought to have brought devastating drought to Africa’s Sahel region – a shift attributed to air pollution cooling the northern hemisphere. The tropical rain belt has since reverted and might now be moving north, as suggested by a number of recent droughts, including those in Syria, northern China, the western US, and northeastern Brazil, the research team says.

At least one climate model demonstrates the tropical rain belt moving north as carbon dioxide levels climb and temperatures warm. This is consistent with the study findings. “It’s really important to look at the paleo record,” said Dargan Frierson, an atmospheric scientist at University of Washington whose modeling work supports the authors’ hypothesis. “Those changes were huge, just like we’re expecting with global warming.”

The researchers admit that their theory has some challenges. Changes in sea ice cover in the past drove the temperature gradient between the two hemispheres. Today, rapidly rising industrial carbon emissions are responsible. Additionally, no clear evidence has been found so far that ocean circulation is increasing in the North Atlantic or that the monsoon rains over Asia are strengthening. There is, however, speculation that sulfate aerosols produced by burning fossil fuels may be masking this effect.

Temperatures may warm as the air pollution in the northern hemisphere declines. This may create the kind of temperature contrast that could move the winds and rains north again, said Jeff Severinghaus, a climate scientist at Scripps Institution of Oceanography who was not involved in the study.

“Sulfate aerosols will probably get cleaned up in the next few decades because of their effects on acid rain and health,” he said. “So Broecker and Putnam are probably on solid ground in predicting that northern warming will eventually greatly exceed southern warming.”

Declining Corals May Drastically Affect Crustacean Biodiversity

Brett Smith for redOrbit.com – Your Universe Online

With many scientists expecting climate change to have a devastating effect on the world’s coral reefs over the coming century, new research from the University of Florida indicates that crustacean populations living near rapidly declining reef habitats could be at risk.

Appearing in the November issue of the journal Geology, the new study is based on an analysis of the fossil record surrounding decapod crustaceans, a group that includes shrimp, crab and lobster.

“We estimate that Earth’s decapod crustacean species biodiversity plummeted by more than 50 percent during a sharp decline of reefs nearly 150 million years ago, which was marked by the extinction of 80 percent of crabs,” explained study author Adiël Klompmaker, a postdoctoral researcher at the Florida Museum of Natural History on the UF campus.

“If reefs continue to decline at the current rate during this century, then a few thousand species of decapods are in real danger. They may adapt to a new environment without reefs, migrate to entirely new environments or, more likely, go extinct.”

According to the researchers, their paper is the first comprehensive look at the rise of decapod crustaceans in the fossil record. The study is based on a worldwide specimen database of fossils from the Mesozoic Era, which spanned from just over 250 to 66 million years ago.

The researchers tracked patterns of diversity and found that an ancient increase in the number of decapod species was related to the abundance of reefs, which served as places both to find shelter and to forage. Dubbed the “Mesozoic decapod revolution,” this period in Earth’s history saw a 300-fold increase in species diversity over the previous period, as well as the rapid evolution of crabs.

The researchers noted the difficulty of compiling the data for the study, since most decapods possess a fragile exoskeleton that does not fossilize well.

“Only a scant fraction of decapod crustaceans is preserved in rocks, so their fossil record is limited,” said study co-author Michal Kowalewski, curator of invertebrate paleontology at the Florida Museum. “But, thanks to efforts of paleontologists many of those rare fossils have been documented all around the world, finally giving us a chance to look at their evolutionary history in a more rigorous, quantitative way.”

“This new work builds a good case for the role of reefs in promoting the evolutionary diversification of crustaceans,” added David Jablonski, a paleontologist in the department of geophysical sciences at the University of Chicago who was not directly involved in the study.

“We have to take their argument for the flip side of that story very seriously. The positive relation between reefs and crustaceans implies that the damage caused to reefs by human activities — from overfishing to ocean acidification — is likely to have cascading consequences for associated groups, including crustaceans.”

He pointed out that the new study also opens up several avenues for future research.

“It would be very interesting to extend this analysis into the Cenozoic Era, the 65 million years leading up to the present day,” Jablonski said. “And it would be valuable to look at the spatial structure of the crustacean diversification, for example how closely their diversification was tied to the extensive reefs in the western Pacific and was damped in the eastern Pacific with their much sparser contingent of reefs.”

Most Kids Have Casual Attitude About Sibling Bullying

[ Watch The Video: Sibling Bullying Could Lead To Mental Problems ]

Brett Smith for redOrbit.com – Your Universe Online

Many recent public awareness campaigns have focused on preventing bullying among peers, whether that bullying takes place at school or the workplace. However, some new studies have begun looking at the home as the place where the psychological mechanisms that give rise to bullying first take root.

In one such study recently published in the Journal of Interpersonal Violence, researchers found a relatively casual attitude toward bullying between siblings.

Study researchers said they set out to discover if siblings see sibling bullying as normal and to examine the victim-perpetrator differences in perceptions of sibling bullying. Volunteers included 27 sibling pairs who provided stories about personal experiences of sibling bullying, completed surveys regarding these experiences and responded to their sibling’s stories.

The researchers said 75 percent of the participants reported being bullied by a sibling and 85 percent said they had bullied a sibling.

“Normally in bullying research, percentages are significantly lower for perpetration than victimization,” said study author Robin Kowalski, a psychologist at Clemson University. “Notably, in this research on sibling bullying, percentages were higher for those willing to admit to perpetrating sibling bullying, suggesting that it wasn’t all that big a deal.”

The researchers supported their findings with additional data that showed there is a norm of acceptance about sibling bullying among sibling pairs. The study also showed that victims and perpetrators did not see specific instances of sibling bullying the same way. Victims saw instances of sibling bullying more negatively than their perpetrators did.

Kowalski said she hopes these findings will ultimately raise awareness of an understudied phenomenon.

“People tend to think that siblings are going to tease and bully one another; just goes with the territory,” Kowalski said. “Minimizing the behavior in this way, however, fails to examine the consequences that sibling bullying can have for the relationship between the siblings involved, something that most definitely needs additional research.”

She added that annual checkups at the pediatrician’s office could be used as a venue to increase awareness about sibling bullying.

“Annual checkups with a pediatrician would certainly assist with increasing awareness about and preventing sibling bullying,” said Kowalski. “It’s a great forum for professionals to educate and talk to parents about what is happening with their children regarding bullying.”

A related study published in June found that sibling aggression can be just as traumatic for a young child or adolescent as bullying from an unrelated peer. Published in the journal Pediatrics, the study was based on data from The National Survey of Children’s Exposure to Violence.

“For all types of sibling aggression, we found that being the victim was linked to lower well-being for both children and adolescents,” lead author Corinna Jenkins Tucker, an associate professor of family studies at the University of New Hampshire, told USA Today.

“Even kids who reported just one instance had more mental health distress,” she added.

According to Tucker, the research showed that parents should regard sibling aggression as a serious factor when considering their child’s mental health.

“If siblings hit each other, there’s a much different reaction than if that happened between peers,” she said. “It’s often dismissed, seen as something that’s normal or harmless. Some parents even think it’s beneficial, as good training for dealing with conflict and aggression in other relationships.”

Another Reason To Hate School – Desks Are Giving Kids Chronic Back Pain

[ Watch The Video: Backpacks Could Lead To Back Pain ]

Brett Smith for redOrbit.com – Your Universe Online

As adults across the United States watch with schadenfreude while the kids around them settle back into their school-day routines, it may concern them to know that the ergonomics of a child’s school environment are less than optimal. In fact, some kids are suffering physical consequences because of the amount of time they spend sitting at a desk or lifting heavy books.

According to a new study in the International Journal of Human Factors and Ergonomics, ill-fitting school chairs, low desks and heavy backpacks are adding up to chronic back pain in some adolescents.

Researchers at the Biomechanics and Functional Morphology Laboratory at the University of Lisbon performed a cross-sectional study of almost 140 12- to 15-year-olds of various levels of maturity to determine the physical impact of a mismatch between school furniture dimensions, the weight of a typical school bag and the students’ physiological characteristics.

The researchers discovered that more than half (80) of the students studied had some type of back pain and that the difference between desk height and elbow height was linked with a greater chance of the adolescents experiencing pain. Girls were more likely than boys to experience this discrepancy – 59 percent compared to 47 percent.

“Our results also showed that there was no association between backpack weight, body mass index (BMI) and back pain,” the study authors noted in their report. “These results highlight the importance of studying the school environment to establish preventive programs for back pain in youths,” they added.

The Portuguese team said that the number of schoolchildren and adolescents experiencing frequent episodes of ergonomics-related pain has increased in the last few decades. They added that people suffering from this kind of pain during childhood are at increased risk of similar pain in adulthood unless the problem is correctly treated.

Despite touting the results of their study, the researchers acknowledged that back pain can be caused by a myriad of factors, including age, family history, gender and lifestyle. They noted that ergonomic factors found in day-to-day life also play a significant role. The team said educational officials may want to reconsider the amount of time kids spend sitting at a desk when scheduling physical activity and sports programs.

“These results highlight how relevant it is to study the school environment in order to establish preventive programs for back pain in children and adolescents, not only health-wise, but also in terms of school education,” the researchers concluded. “These results show the importance of promoting healthy lifestyles in what concerns physical activity and a balanced nutrition.”

The conclusions of the Portuguese study echo the sentiments of first lady Michelle Obama’s Let’s Move! campaign. With less of a focus on ergonomics and more of an emphasis on reducing childhood obesity, the campaign calls for kids to spend less time sitting and more time being physically active.

“Combining comprehensive strategies with common sense, Let’s Move! is about putting children on the path to a healthy future during their earliest months and years,” reads a statement on the campaign’s website.

Chronic Aggressive Behavior In Boys: Epigenetic Sources?

Genes related to self-control could be ‘disabled’ by the prenatal environment

Chronic aggressive behavior exhibited by some boys from disadvantaged families may be due to epigenetic changes during pregnancy and early childhood. This is highlighted by two studies conducted by a team led by Richard E. Tremblay, professor emeritus at the University of Montreal, and Moshe Szyf, professor at McGill University, published in the journal PLOS ONE. The first author of the two papers, Nadine Provençal, was jointly supervised by professors Szyf and Tremblay.

Epigenetic changes possibly related to the prenatal environment

In the first study, published in July, the team found that among men who had exhibited chronic aggressive behavior during childhood and adolescence, blood levels of four biomarkers of inflammation were lower than in men who exhibited average levels of aggressive behavior in their youth, from 6 to 15 years of age. “This means that using four specific biomarkers of inflammation, called cytokines, we were able to distinguish men with chronic physical aggression histories from those without,” says Tremblay, a researcher specializing in developmental psychology. In the second study, the team observed that in the same men with aggressive pasts, the DNA encoding the cytokines showed methylation patterns different from those of the comparison group.

“Methylation is an epigenetic modification—hence reversible—of DNA, in relation to parental imprinting. It plays a role in regulating gene expression,” says Szyf, who specializes in epigenetics.

“The pre- and postnatal environment could cause these differences in biomarkers associated with chronic aggression,” Szyf added. Various studies conducted with animals show that hostile environments during pregnancy and early childhood have an impact on gene methylation and gene programming, leading to problems with brain development, particularly in regard to the control of aggressive behavior.

Previous work by Tremblay’s team suggests that men with aggressive pasts have one thing in common: the characteristics of their mothers. “They are usually young mothers at the birth of their first child, with low education, often suffering from mental health problems, and with substance use problems,” Tremblay explained. The significant difficulties these mothers experienced during pregnancy and their child’s early childhood may have an impact on the expression of genes related to brain development, the immune system, and many other biological systems critical for the development of their child.

A nearly 30-year follow-up

The blood samples used in the studies published this summer in PLOS ONE were collected from 32 participants who took part in one of two longitudinal studies begun nearly 30 years ago by Tremblay’s team. The first study followed young Quebecers from disadvantaged backgrounds, while the second involved a representative sample of children who were in kindergarten in Quebec in 1986-87.

It is important to note that in disadvantaged families, boys with chronic aggressive behavior represent only about 4% of the population. This greatly restricts the selection of potential participants. “Once they are adults, they are difficult to find because they have disorganized lifestyles,” Tremblay said.

A prevention perspective

This difficulty has not stopped him from pursuing his research further. “We are studying the impact of the socioeconomic environment on the third generation, now that these children are grown up and have children,” Tremblay noted. Although no study has yet been published on the subject, he anticipates “significant intergenerational ties, since we observed an association between parental criminality of the first generation and the behavior of their children.”

Nevertheless, the researcher, who has conducted his work for decades with a prevention perspective, is optimistic. “If our results show that behavioral problems originate from as far back as pregnancy, it means that we can reduce violence through preventive intervention from as early as pregnancy,” says Tremblay. “We have already shown that support given to the families of aggressive boys in kindergarten prevents school dropout and crime in adulthood.”

Why Do You Want To Eat The Baby?

‘Odor is a means of chemical communication between mother and child’ — Johannes Frasnelli, University of Montreal

What woman has not wanted to gobble up a baby placed in her arms, even if the baby is not hers? This reaction, which everyone has noticed or felt, could have biological underpinnings related to maternal functions. For the first time, an international team of researchers has found evidence of this phenomenon in the neural networks associated with reward. “The olfactory — thus non-verbal and non-visual — chemical signals for communication between mother and child are intense,” explains Johannes Frasnelli, a postdoctoral researcher and lecturer at the University of Montreal’s Department of Psychology. “What we have shown for the first time is that the odor of newborns, which is part of these signals, activates the neurological reward circuit in mothers. These circuits may especially be activated when you eat while being very hungry, but also in a craving addict receiving his drug. It is in fact the sating of desire.”

Reward circuit

For their experiment, the researchers presented two groups of 15 women with the odors of newborns who were not their own while the women were subjected to brain imaging tests. The first group was composed of women who had given birth 3-6 weeks prior to the experiment, and the other group consisted of women who had never given birth. All the women were non-smokers. The odors of the newborns were collected from their pajamas two days after birth.

Although the women in both groups perceived the odor of newborns with the same intensity, brain imaging showed greater activation in the dopaminergic system of the caudate nucleus of mothers compared to the women who had never given birth. Located in the center of the brain, the caudate nucleus is a double structure straddling the thalamus in both hemispheres. “This structure plays a role in reward learning,” explains Frasnelli. “And dopamine is the primary neurotransmitter in the neural reward circuit.”

This system reinforces the motivation to act in a certain way because of the pleasure associated with a given behavior. “This circuit makes us desire certain foods and causes addiction to tobacco and other drugs,” says the researcher. “Not all odors trigger this reaction. Only those associated with reward, such as food or satisfying a desire, cause this activation.”

Dopamine is also associated with sexual pleasure and other forms of gratification. Laboratory rats whose dopamine levels are stimulated by electrodes become so addicted that they stop eating.

For the research team, these results show that the odor of newborns undoubtedly plays a role in the development of motivational and emotional responses between mother and child by eliciting maternal care functions such as breastfeeding and protection. The mother-child bond that is part of the feeling of maternal love is a product of evolution through natural selection in an environment where such a bond is essential for the newborn’s survival.

Questions remain

The experiment, however, did not make it possible to determine whether the greater activation of the dopaminergic system in mothers is due to an organic response related to childbirth itself or whether it is a consequence of the olfactory experience mothers develop with their own babies. “It is possible that childbirth causes hormonal changes that alter the reward circuit in the caudate nucleus, but it is also possible that experience plays a role,” says Frasnelli.

It is also not known whether this reaction is specific to mothers, since men were not part of the experiment. “What we know now and what is new is that there is a neural response linked to the status of biological mother,” he says.

Cryonics: Exploring The Life-Or-Death Gamble Of Low-Temperature Preservation

redOrbit Staff & Wire Reports – Your Universe Online

Cryonics – the low-temperature preservation of humans hoping that cures or treatments for their life-threatening conditions will be found in the future – was once thought to be a pursuit reserved for the rich and eccentric, but scientists are now looking to bring it to the masses.

According to Rupert Jones of The Guardian, a US-based organization known as the Cryonics Institute currently has more than 100 men and women in “cryonic suspension” at a Michigan-based facility.

Cryonics involves placing a body in liquid nitrogen to preserve it indefinitely, Jones explains, with the hope that future technological or medical breakthroughs will be able to help the individual undergoing the process. The Cryonics Institute charges a minimum of $28,000 (plus fees for preparation and transportation) for its services.

A similar firm, the Arizona-based Alcor Life Extension Foundation, requires $80,000 for “neurocryopreservation” (preservation of the head only) or $200,000 for the entire body, while a Russian firm known as KrioRus offers its services for as little as $12,000 to domestic residents (foreign clients apparently face a higher bill for services rendered).

People opting to undergo the cryonics process are banking on medical research companies like Google’s recently announced offshoot Calico, said to be an independent healthcare company that will develop new technologies to fight age-related illnesses and extend the human lifespan.

“Illness and aging affect all our families… from the decreased mobility and mental agility that comes with age, to life-threatening diseases that exact a terrible physical and emotional toll on individuals and families,” explained Google co-founder and chief executive Larry Page.

“With some longer term, moonshot thinking around healthcare and biotechnology, I believe we can improve millions of lives,” he added during an announcement last Wednesday. “While this is clearly a longer-term bet, we believe we can make good progress within reasonable timescales with the right goals and the right people.”

Even if Calico or other medical research firms manage to find cures for ailments like cancer, heart disease, or other life-threatening conditions, there is no guarantee that cryonics customers will be able to capitalize on those advances. After all, there are issues with the process itself, theoretical physicist Dr. Michio Kaku explained in a 2012 YouTube video.

“If you suddenly freeze the human body, the problem is that ice crystals begin to form inside the cells,” said Dr. Kaku, a professor at City College of New York. “As the ice crystals expand, they rupture the cells. So, in other words, freezing the human body seems to work only superficially. But if you look at the human tissue under a microscope, you find massive tearing and disruption of cell walls.”

“It is a gamble; it’s not a certainty that this will work,” 38-year-old Victoria Stevens, a mother of two who lives in North Yorkshire and has signed up for cryonic preservation, admitted to Jones. So why take the chance? “I really enjoy being alive,” she said. “I think the prospect of death… it just seems like an awful waste after people spend their lives learning and progressing. I’d like to live longer and see more and experience more. We are happy to prolong our lives with heart transplants and so on – it’s just one step on from that.”

Soldiers Are Emotionally Attached To Their Robots

Lee Rannals for redOrbit.com – Your Universe Online

Anthropomorphism allows humans to form attachments to non-human objects, and new research indicates that it may affect soldiers on the battlefield.

Julie Carpenter of the University of Washington found that soldiers can become emotionally attached to the robots they use on the battlefield. She discovered that as robots continue to evolve, so does the attachment a soldier feels toward them.

The researcher interviewed Explosive Ordnance Disposal military personnel for the study. This group is made up of highly trained soldiers who know how to use robots to disarm explosives. Carpenter wanted to know how the troops would feel if they saw their mechanical buddy get blown up.

“They were very clear it was a tool, but at the same time, patterns in their responses indicated they sometimes interacted with the robots in ways similar to a human or pet,” Carpenter said in a press release.

She found that many of the 23 explosive ordnance personnel had named their robots, usually after a celebrity or a current wife or girlfriend. Some of the robots were even decorated with paint. Soldiers told Carpenter that their first reaction upon seeing their robot battlefield buddy get blown up was anger at losing an expensive piece of equipment. However, some of them also described a feeling of loss.

“They would say they were angry when a robot became disabled because it is an important tool, but then they would add ‘poor little guy,’ or they’d say they had a funeral for it,” Carpenter said. “These robots are critical tools they maintain, rely on, and use daily. They are also tools that happen to move around and act as a stand-in for a team member, keeping Explosive Ordnance Disposal personnel at a safer distance from harm.”

Although some soldiers might feel an attachment, the non-human companions are used to minimize the risk to human life. Robots have better endurance, don’t have to deal with emotions, and are impervious to chemical and biological weapons.

The Defense Advanced Research Projects Agency (DARPA) is developing humanoid robots to help troops out even more on the battlefield. The latest addition to DARPA’s line-up, ATLAS, is 6 feet 2 inches tall and is essentially an empty shell for programmers to get creative with. DARPA is using ATLAS in its Robotics Challenge to see who can make the best use of the humanoid.

Adding humanoid robots will only tug on soldiers’ heartstrings even more, thanks to anthropomorphism, which can lead a human to feel compassion for inanimate objects that have human-like characteristics. Carpenter wants to know how human- or animal-like robots would affect a soldier’s ability to make rational decisions, especially if the soldier begins treating the robot with affection.

“You don’t want someone to hesitate using one of these robots if they have feelings toward the robot that goes beyond a tool,” she said. “If you feel emotionally attached to something, it will affect your decision-making.”

NASA Highlights Asteroid Initiative At World Maker Faire

Lee Rannals for redOrbit.com – Your Universe Online

This weekend at the World Maker Faire, NASA reached out for the public’s help in tracking potentially hazardous asteroids.

The World Maker Faire took place September 21 and 22 at the New York Hall of Science. The event is a festival of invention, creativity and resourcefulness that has been held since 2006, and it has been known for some interesting booths in the past, including a human-sized Mouse Trap board game for its 65,000 attendees to play back in 2008.

NASA wanted to capitalize on the event by asking attendees for ideas on how to find and track potentially hazardous asteroids, and how to protect the planet from their impacts. The space agency invited attendees to program science hardware on their own computers and learn how small, do-it-yourself projects could be used to help track and understand these asteroids.

“Unlike traditional NASA missions of exploration and science, this grand challenge is driven by the idea that protecting our planet is an issue bigger than any one program, mission or country,” NASA Chief Technologist Mason Peck said in a press statement. “For the first time, NASA has reached out to industry, academia, stakeholder organizations and private citizens for ideas on how to find, track and deflect asteroids. These partnerships represent a new way of doing business for NASA and a call to action for Makers: join us to become a critical part of the future of space exploration.”

Asteroids are of increasing interest at NASA, with the space agency even planning a mission to land astronauts on a space rock in the near future. NASA has selected 96 asteroid initiative ideas addressing how to protect Earth from an impact and which space rock astronauts should explore first. Some of these ideas included how to decrease an asteroid’s spin, nudge it away from a path toward Earth and take samples to return to Earth.

The space agency revealed new imagery of a proposed asteroid redirect mission back in August. During this mission, a spacecraft would be sent to capture an asteroid; NASA then plans to have a manned mission visit the captured asteroid to gather samples to return to Earth.

“This mission represents an unprecedented technological feat and allows NASA to affordably pursue the Administration’s goal of visiting an asteroid by 2025,” NASA said. “It raises the bar for human exploration and discovery while taking advantage of the diverse talents at NASA.”

While the plans to head to an asteroid are still in development, NASA’s successful landing of Curiosity last year shows it has a proven track record of making innovative ideas come to life.

Scientists Discover New Rat Genus In Birthplace Of The Theory Of Evolution

Brett Smith for redOrbit.com – Your Universe Online

The story of evolution sometimes circles back to its roots as biologists announced they have recently discovered a new species in the very location that gave birth to what we now know as the Theory of Evolution.

Sporting a prominent tuft of spiny hair on its back, a white tail-tip and three pairs of teats, the unique-looking rat has also helped to solidify theories about a specific corner of the Pacific.

The new rodent was discovered in the Moluccan province of Indonesia, a place made famous by 19th century British naturalist Alfred Russel Wallace. Working at first independently of Charles Darwin, Wallace devised a theory of evolution based on his observations throughout Indonesia, and the two naturalists would go on to collaborate over their budding theories in the late 1850s.

The new species was discovered in Wallacea, an eastern Indonesian region named after the British naturalist. The team said they were surprised to discover the new rodent close to Boki Mekot, a mountainous area under intense ecological threat as a result of mining and deforestation.

“This new rodent highlights the large amount of unknown biodiversity in this Wallacean region and the importance of its conservation,” said Pierre-Henri Fabre from the Center for Macroecology, Evolution and Climate. “It constitutes a valuable addition to our knowledge of the Wallacean biodiversity and much remains to be learned about mammalian biodiversity across this region. Zoologists must continue to explore this area in order to discover and describe new species in this highly diverse, but also threatened region.”

Dubbed Halmaheramys bokimekot, the newly discovered rodent is medium-sized, with brownish grey fur on its back and a greyish white belly, according to a report in the Zoological Journal of the Linnean Society. Taken together with its other characteristics, these traits give the new species a unique set of features never before seen in the Moluccan province.

The unique plants and animals Wallace noted in this region, compared with those of neighboring Australia, inspired him to identify a zoogeographical line dividing the Indonesian archipelago into two separate parts: a western portion containing animals largely of Asian origin, and an eastern portion where the ecosystem tends to reflect Australasia. This divider is known as the Wallace Line.

“The Halmaheramys discovery supports Wallace’s idea of an important faunal breakup in this region,” said Pierre-Henri Fabre. “Most of the species on the island of Halmahera reflect eastern origins, but our genetic analysis revealed a western origin of the new rat genus. That reflects the unique transition zone found in the Indo-Pacific, and warrants much greater scientific investigation.”

The new discovery comes after the same team updated Wallace’s 1876 zoogeographical world map last year using DNA analysis and species records. The team essentially showed that Wallace’s zoogeographical boundaries were quite accurate, even though he drew them without these 21st-century tools.

“Such a remarkable island setting inspired one of the greatest biologists of all time, and if Sir Alfred Russell Wallace were alive today he would surely be excited by the prospect of further conservation and biodiversity study within the Moluccas,” Pierre-Henri Fabre said.

Online Time Is A Brain Buster

[ Watch the Video: Facebook Time Impacts Working Memory ]

Brett Smith for redOrbit.com – Your Universe Online

You may be losing the capacity to store valuable memories or take in information by spending too much time online – except when reading stories on redOrbit, of course.

All joking aside, the human brain can be overwhelmed by the flood of information pouring off our computer screens, according to Erik Fransén, a researcher from Stockholm’s KTH Royal Institute of Technology.

The Sweden-based researcher focuses on the brain’s short-term, or working, memory and on ways to treat diseased neurons. He said that a brain can easily become scrambled by information overload after a normal session of social media browsing. The result is that less information from working memory gets archived into long-term storage.

“Working memory enables us to filter out information and find what we need in the communication,” he said. “It enables us to work online and store what we find online, but it’s also a limited resource.”

Previous research has shown that working memory has limits. At any one time, the working memory can juggle up to three or four items, according to Fransén. When we try to keep even more information up in the air, to extend the juggling metaphor, our ability to process information crashes to the ground.

“When you are on Facebook, you are making it harder to keep the things that are ‘online’ in your brain that you need,” he said. “In fact, when you try to process sensory information like speech or video, you are going to need partly the same system of working memory, so you are reducing your own working memory capacity.”

“And when you try to store many things in your working memory, you get less good at processing information,” he added.

Fransén said the sensory shock-and-awe of the Internet also takes time away from the brain performing some necessary housekeeping, as our minds are built for both activity and relaxation.

“The brain is made to go into a less active state, which we might think is wasteful; but probably memory consolidation, and transferring information into memory takes place in this state,” he said. “Theories of how memory works explain why these two different states are needed.”

“When we max out our active states with technology equipment, just because we can, we remove from the brain part of the processing, and it can’t work,” Fransén added.

For those looking to reboot their brains, WebMD lists several helpful exercises designed to shift the mind into a lower gear. One of the simplest is a deep-breathing exercise.

“Give yourself a 5-minute break from whatever is bothering you and focus instead on your breathing,” the website advises. “Sit up straight, eyes closed, with a hand on your belly. Slowly inhale through your nose, feeling the breath start in your abdomen and work its way to the top of your head. Reverse the process as you exhale through your mouth.”

WebMD also recommends meditative and mindfulness exercises that many people with chronic health conditions use to help alleviate stress and enhance overall well-being.