Satellites Reveal Lost Cities Of Libya

Fall of Gaddafi lifts the veil on archaeological treasures

Satellite imagery has uncovered new evidence of a lost Saharan civilization in Libya’s south-western desert wastes, findings that will help rewrite the history of the country.

The fall of Gaddafi has opened the way for archaeologists to explore the country’s pre-Islamic heritage, so long ignored under his regime.

Using satellites and aerial photographs to identify the remains in one of the most inhospitable parts of the desert, a British team has discovered more than 100 fortified farms and villages with castle-like structures, and several towns, most dating from between AD 1 and 500.

These “lost cities” were built by a little-known ancient civilization called the Garamantes, whose lifestyle and culture were far more advanced and historically significant than the ancient sources suggested.

The team from the University of Leicester has identified the mud brick remains of the castle-like complexes, with walls still standing up to four meters high, along with traces of dwellings, cairn cemeteries, associated field systems, wells and sophisticated irrigation systems. Follow-up ground survey earlier this year confirmed the pre-Islamic date and remarkable preservation.

“It is like someone coming to England and suddenly discovering all the medieval castles. These settlements had been unremarked and unrecorded under the Gaddafi regime,” says the project leader David Mattingly FBA, Professor of Roman Archaeology at the University of Leicester.

“Satellite imagery has given us the ability to cover a large region. The evidence suggests that the climate has not changed over the years and we can see that this inhospitable landscape with zero rainfall was once very densely built up and cultivated. These are quite exceptional ancient landscapes, both in terms of the range of features and the quality of preservation,” says Dr Martin Sterry, also of the University of Leicester, who has been responsible for much of the image analysis and site interpretation.

The findings challenge a view dating back to Roman accounts that the Garamantes consisted of barbaric nomads and troublemakers on the edge of the Roman Empire.

“In fact, they were highly civilized, living in large-scale fortified settlements, predominantly as oasis farmers. It was an organized state with towns and villages, a written language and state-of-the-art technologies. The Garamantes were pioneers in establishing oases and opening up Trans-Saharan trade,” Professor Mattingly said.

The professor and his team were forced to evacuate Libya in February when the anti-Gaddafi revolt started, but hope to be able to return to the field as soon as security is fully restored. The Libyan antiquities department, badly under-resourced under Gaddafi, is closely involved in the project. Funding for the research has come from the European Research Council, which awarded Professor Mattingly an ERC Advanced Grant of nearly 2.5m euros, the Leverhulme Trust, the Society for Libyan Studies and the GeoEye Foundation.

“It is a new start for Libya’s antiquities service and a chance for the Libyan people to engage with their own long-suppressed history,” says Professor Mattingly.

“These represent the first towns in Libya that weren’t the colonial imposition of Mediterranean people such as the Greeks and Romans. The Garamantes should be central to what Libyan school children learn about their history and heritage.”

Image 1: This is a satellite image of an area of desert with archaeological interpretation of features: fortifications are outlined in black, areas of dwellings in red and oasis gardens in green. Credit: Copyright 2011 Google, image copyright 2011 DigitalGlobe

Image 2: This is a photo of a mudbrick village with a castle-like structure visible in the center of the image. Credit: Photo by Toby Savage

Image 3: This is a photo of mudbrick and stone castle-like structure. Credit: Photo by Toby Savage

Becoming A Father Can Lessen Bad Habits In Men

After men become fathers for the first time, they show significant decreases in crime, tobacco and alcohol use, according to a new, 19-year study.

Researchers assessed more than 200 at-risk boys annually from the age of 12 to 31, and examined how men’s crime, tobacco, alcohol, and marijuana use changed over time. While previous studies showed that marriage can change a man’s negative behavior, they had not isolated the additional effects of fatherhood.

“These decreases were in addition to the general tendency of boys to engage less in these types of behaviors as they approach and enter adulthood,” said David Kerr, assistant professor of psychology at Oregon State University and lead author of the study. “Controlling for the aging process, fatherhood was an independent factor in predicting decreases in crime, alcohol and tobacco use.”

The study was published in the current issue of the Journal of Marriage and Family. Collaborators included the Oregon Social Learning Center in Eugene, Ore., and the University of Houston.

The researchers also found that men who were well into their 20s and early 30s when they became fathers showed greater decreases in crime and alcohol use than those who had their first child in their teens or early 20s. Men who had children at a more developmentally expected time could have been more able or willing to embrace fatherhood and shed negative lifestyle choices, Kerr said.

“It is hopeful that for both older and younger men, tobacco use tended to decrease following the birth of a first child,” Kerr said. “This kind of change could have important health consequences for men and for their families.”

The study adds to a body of research pointing to key periods when men from disadvantaged backgrounds may be ripe for intervention, Kerr said.

“This research suggests that fatherhood can be a transformative experience, even for men engaging in high risk behavior,” he said. “This presents a unique window of opportunity for intervention, because new fathers might be especially willing and ready to hear a more positive message and make behavioral changes.”

Deborah Capaldi, Lee Owen and Katherine Pears with the Oregon Social Learning Center and Margit Wiesner with the University of Houston contributed to the study. The research was supported by awards to the Oregon Social Learning Center from the National Institute on Drug Abuse.

Stress Triggers Disease Flares In Patients With Vasculitis

Study shows psychological health important to controlling Wegener’s granulomatosis

In patients with a devastating form of vasculitis who are in remission, stress can be associated with a greater likelihood of the disease flaring, according to a new study by investigators at Hospital for Special Surgery (HSS).

This is the first study to suggest that mental health is a risk factor in patients with vasculitis, a group of autoimmune disorders characterized by the inflammatory destruction of blood vessels. The study, in a form of the disease known as Wegener’s granulomatosis (WG), will be presented on Nov. 8 at the American College of Rheumatology’s annual meeting.

“When this disease flares, people can be really sick. It often affects the lungs, kidneys, sinuses and nerves. It can cause fevers and rashes. People can die from this illness. It is a very robust, active, inflammatory disease when it is active,” said Robert Spiera, M.D., director of the Vasculitis and Scleroderma Program at HSS, who led the study. “When patients are in remission, however, they can do very, very well.”

He says that doctors caring for patients with this disease should be attentive to their psychological health. “This study points out that mental health should be part of your medical assessment,” said Dr. Spiera. “You should pay attention to the patient’s mental well being and be more aggressive about intervening if a patient is in a bad place. Make sure that patients take it seriously.”

Prior to this report, a few small studies had suggested that psychological stress can trigger flares of lupus, another autoimmune disease, and doctors have observed that WG patients often say that stress in their lives, caused perhaps by the death of someone close or the loss of a job, made their disease flare. To investigate this anecdotal evidence in a more quantifiable way, researchers at HSS conducted a retrospective analysis of data from the Wegener’s Granulomatosis Etanercept Trial (WGET). The primary objective of this randomized, placebo-controlled clinical trial was to evaluate the safety and efficacy of using etanercept (Enbrel; Immunex Corporation) to get patients with WG into remission and maintain that remission.

All patients in this multicenter trial had active disease at the beginning of the study and most patients went into remission. Checkups occurred every three months. “We assessed their disease activity at defined time intervals, in terms of how active their vasculitis was or whether they were in remission, and we also collected information at every visit regarding the patient’s physical and mental health,” Dr. Spiera said. Investigators measured disease activity using the Birmingham Vasculitis Activity Score for Wegener’s Granulomatosis, a validated tool. At every visit, patients also filled out the Short Form 36 Health Survey, which includes a physical and mental component. Summary scores for each component are measured on a scale of 0 to 100, with 100 being the healthiest.

For their retrospective analysis of WGET, HSS investigators reviewed records of all patients who had a sustained remission of at least six months (143 patients). They then reviewed data from all checkups after the time of sustained remission to assess the relationship between flare status and the physical and mental health scores from the previous visit. They found that patients were 19 percent more likely to experience a disease flare if their mental health score was five points lower (P<0.01) at the checkup immediately prior to the flare. The physical component score did not predict an activation of the disease.
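
To put that figure in perspective, here is a minimal sketch of how the reported association scales, assuming the 19 percent increase applies multiplicatively for each five-point drop, as it would in a logistic or proportional-hazards model; the function and numbers are illustrative, not taken from the study's analysis:

```python
# Illustrative only: treat "19% more likely per 5-point drop" as a
# multiplicative effect, the way a regression coefficient would act.
def flare_risk_multiplier(score_drop: float, per_5_points: float = 1.19) -> float:
    """Relative likelihood of a flare for a given drop in the SF-36 mental score."""
    return per_5_points ** (score_drop / 5.0)

for drop in (5, 10, 20):
    print(f"{drop:2d}-point drop -> {flare_risk_multiplier(drop):.2f}x baseline likelihood")
```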

“If you looked at patients who were in remission for six months or longer and assessed their mental health as captured by this mental health score, those with a lower mental health score at a given point in time would be more likely to be flaring at the next visit, within three months,” Dr. Spiera said. “This is the first time that as an independent variable, stress seemed to predict a greater likelihood of flaring.”

The study suggests that doctors need to be attentive to a patient’s psychological state and be proactive about interventions to help them manage stress. “There are a lot of things that can be done proactively in stress management on the patient’s side outside of seeing a psychiatrist,” Dr. Spiera said. For example, exercise and yoga have been shown to be effective stress relievers.

The HSS researchers next hope to prospectively examine the association between psychological state and flares in upcoming trials of vasculitis and other autoimmune diseases. Some investigators have hypothesized that stress-related hormones lead to immune dysregulation, but research is needed to tease out the mechanisms.

“Going forward, we can even think of trials where you would take patients who have declined in their mental component score and randomize half of them to receive some sort of stress management program and half of them not to receive it, to see if it changes their outcomes,” Dr. Spiera said.

Wegener’s granulomatosis was recently renamed granulomatosis with polyangiitis. The rare disease, in which inflamed blood vessels interfere with blood circulation, mainly affects vessels in the nose, sinuses, ears, lungs and kidneys, although other areas may be involved. It is most common in middle-aged adults.

Young Women With Rheumatoid Arthritis At More Risk For Broken Bones

Women under 50 with rheumatoid arthritis are at greater risk of breaking bones than women without the condition, according to a Mayo Clinic (http://www.mayoclinic.org) study being presented at the American College of Rheumatology (http://www.rheumatology.org/index.asp) annual scientific meeting in Chicago. Men with rheumatoid arthritis are also at greater risk of fractures, but that risk seems to surface when they are older, researchers found.
Rheumatoid arthritis (http://www.mayoclinic.com/health/rheumatoid-arthritis/DS00020) can lead to chronic, debilitating inflammation of the joints and other parts of the body. People over 50 with the condition are more likely to break a bone from a fall or sometimes even mild stress such as coughing. However, little has been known about the fracture risk among rheumatoid arthritis patients under 50.
Researchers studied two groups of 1,155 adults each, all from the same community: one set with a new diagnosis of rheumatoid arthritis, the other without the condition. Based on gender and birth year, each person was paired with someone from the other group, and the medical records of each pair were examined over time for new fractures unrelated to cancer or severe trauma. Women and men with rheumatoid arthritis were more likely to suffer new fractures than their counterparts, regardless of their age at diagnosis.
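
The matching step can be pictured with a short sketch; the data structures and greedy pairing rule below are hypothetical simplifications for illustration, not the study's actual record-linkage procedure:

```python
# Hypothetical sketch of 1:1 matching on sex and birth year, as in a
# matched-cohort design; real matching on medical records is more involved.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Person:
    sex: str
    birth_year: int
    first_fracture_age: Optional[int]  # None if no qualifying fracture observed

def match_cohorts(ra_patients: List[Person], controls: List[Person]) -> List[Tuple[Person, Person]]:
    """Pair each rheumatoid arthritis patient with an unused control
    of the same sex and birth year."""
    pairs, used = [], set()
    for patient in ra_patients:
        for i, control in enumerate(controls):
            if i not in used and (control.sex, control.birth_year) == (patient.sex, patient.birth_year):
                pairs.append((patient, control))
                used.add(i)
                break
    return pairs
```
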
Women under 50 when diagnosed with rheumatoid arthritis were more likely than their counterparts without the condition to have their first new fracture even before age 50. While men with rheumatoid arthritis were also more vulnerable to fractures, that danger didn’t grow until they got older.
“Understanding what contributes to the risk for fractures for all with rheumatoid arthritis, including young women, would help us better prevent them,” says lead researcher Shreyasee Amin, M.D., a rheumatologist (http://www.mayoclinic.org/bio/11251484.html) at Mayo Clinic in Rochester, Minn. Women under 50 with rheumatoid arthritis need to know that even though they are young, they need to take greater care to prevent fractures, she says.
Dr. Amin will discuss the study, ACR Presentation 1632, at 3:15 p.m. Monday, Nov. 7, in room W471b at the McCormick Place Convention Center (http://mccormickplace.com/). She will be available for media questions and a briefing at 1:30 p.m. Tuesday, Nov. 8, in the press conference room, W175C.

Biologically Inspired Tape Uses Some Of Nature’s Tricks To Stick

Insects can run up walls, hang from ceilings, and perform other amazing feats that have for centuries fascinated human observers. Now scientists from the Zoological Institute at the University of Kiel, in Germany, who have been studying these able acrobats, have borrowed some of the insects’ tricks to make a dry tape that can be repeatedly peeled off without losing its adhesive properties. The researchers presented their work at the AVS Symposium, held Oct. 30 — Nov. 4, in Nashville, Tenn.

The key to many insects’ wall-scaling ability lies in the thousands of tiny hairs that cover their feet and legs. The hairs have flattened tips that can splay out to maximize contact on even rough surfaces. “The main issue for good adhesion is intimate contact with the substrate,” explains Stanislav Gorb, a lead researcher on the project. “Due to multiple contact points (hairs), they can build proper contact with almost any surface.” Using the same idea, the researchers manufactured a silicone tape patterned with similar tiny hairs. They found the patterned tape was at least two times harder to pull off a surface than a flat tape of the same material. The insect-inspired tape can also work under water, leaves behind no sticky residues, and can be attached and detached for thousands of cycles without losing its ability to grip. One team member even succeeded in dangling himself from the ceiling using a 20 x 20 centimeter piece of the new tape.

Bioinspired adhesives have many potential commercial applications, from wall-climbing search robots to industrial pick-and-place machines. And the research group hasn’t stopped looking to nature for new inspirations. The team is currently investigating a number of other natural surfaces, including beetle coverwings, snake skin, and anti-adhesive plants. “From nature we can get rather unconventional ideas,” says Gorb. “Not all solutions from nature are doable and not all of them are cheap. But they are numerous.”

The AVS 58th International Symposium & Exhibition was held Oct. 30 — Nov. 4 at the Nashville Convention Center.

Image Caption: Achim Oesert, a member of the Functional Morphology and Biomechanics group at the University of Kiel, Germany, hangs from the ceiling using bioinspired polymer tape while surrounded by other team members. Credit: University of Kiel, Germany

EU Biofuels Are As Carbon Intensive As Petrol

University of Leicester research into greenhouse gas emissions from oil palm plantations provides robust measures now being used to inform international policies on greenhouse gas emissions

A new study on greenhouse gas emissions from oil palm plantations has calculated that CO2 emissions are more than 50% higher than previously thought — and warned that the demand for ‘green’ biofuels could be costing the earth.

The study from the University of Leicester was conducted for the International Council on Clean Transportation (ICCT), an international think tank that wished to assess the greenhouse gas emissions associated with biodiesel production. Biodiesel mandates can increase palm oil demand directly (the European Biodiesel Board recently reported big increases in biodiesel imported from Indonesia) and also indirectly, because palm oil is the world’s most important source of vegetable oil and will replace oil from rapeseed or soy in food if they are instead used to make biodiesel.

The University of Leicester researchers carried out the first comprehensive literature review of the scale of greenhouse gas emissions from oil palm plantations on tropical peatland in Southeast Asia. In contrast to previous work, this study also provides an assessment of the scientific methods used to derive emissions estimates.

They discovered that many previous studies were based on limited data without appropriate recognition of uncertainties and that these studies have been used to formulate current biofuel policies.

The Leicester team established that the scale of greenhouse gas emissions from oil palm plantations on peat is significantly higher than previously assumed. They concluded that a value of 86 tons of carbon dioxide (CO2) per hectare per year (annualized over 50 years) is the most robust currently available estimate; this compares with previous estimates of around 50 tons. Emissions are higher still if one considers specifically the short-term greenhouse gas implications of palm oil production: under the EU Renewable Energy Directive, which assesses emissions over 20 years, the corresponding emissions rate would be 106 tons of CO2 per hectare per year.
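
As a back-of-envelope illustration of why the accounting window changes the number: peat emissions are front-loaded, so averaging over a shorter period yields a higher per-year rate. The per-period rates below are hypothetical, chosen only so the averages reproduce the 106 and 86 ton figures quoted above; the white paper's actual emissions profile is more detailed.

```python
# Hypothetical front-loaded emissions profile (tons CO2 per hectare per year).
early_rate, early_years = 106.0, 20  # assumed average rate over years 1-20
late_rate, late_years = 72.7, 30     # assumed average rate over years 21-50

total = early_rate * early_years + late_rate * late_years
print(f"20-year average: {early_rate:.0f} t CO2/ha/yr")                          # 106
print(f"50-year average: {total / (early_years + late_years):.0f} t CO2/ha/yr")  # 86
```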

The findings have been published as an International White Paper from the ICCT.

Ross Morrison, of the University of Leicester Department of Geography, said: “Although the climate change impacts of palm oil production on tropical peatland are becoming more widely recognized, this research shows that estimates of emissions have been drawn from a very limited number of scientific studies, most of which have underestimated the actual scale of emissions from oil palm. These results show that biofuels causing any significant expansion of palm on tropical peat will actually increase emissions relative to petroleum fuels. When produced in this way, biofuels do not represent a sustainable fuel source”.

Dr Sue Page, Reader in Physical Geography at the University of Leicester, added: “Tropical peatlands in Southeast Asia are a globally important store of soil carbon — exceeding the amount stored in tropical forest vegetation. They are under enormous pressure from plantation development. Projections indicate an increase in oil palm plantations on peat to a total area of 2.5 million hectares (Mha) by the year 2020 in western Indonesia alone — an area equivalent in size to the land area of the United Kingdom.”

Growth in palm oil production has been a key component of meeting growing global demand for biodiesel over recent decades. This growth has been accompanied by mounting concern over the impact of the oil palm business on tropical forests and carbon dense peat swamp forests in particular. Tropical peatland is one of Earth’s largest and most efficient carbon sinks. Development of tropical peatland for agriculture and plantations removes the carbon sink capacity of the peatland system with large carbon losses arising particularly from enhanced peat degradation and the loss of any future carbon sequestration by the native peat swamp forest vegetation.

Although there have been a number of assessments on greenhouse gas emissions from palm oil production systems, estimates of greenhouse gas emissions from land use have all been based on the results of a limited number of scientific studies. A general consensus has emerged that emissions from peat degradation have not yet been adequately accounted for.

The results of the Leicester study are important because an increase in the greenhouse gas emissions associated with biodiesel from palm oil, even if expansion on peat only occurs indirectly, will negate any savings relative to the use of diesel derived from fossil fuel.

If these improved estimates are applied to recent International Food Policy Research Institute modeling of the European biofuel market, they imply that on average biofuels in Europe will be as carbon intensive as petrol, with all biodiesel from food crops worse than fossil diesel and the biggest impact being a 60% increase in the land use emissions resulting from palm oil biodiesel. Bioethanol or biodiesel from waste cooking oil, on the other hand, could still offer carbon savings.

This outcome has important implications for European Union policies on climate and renewable energy sources.

Dr Sue Page said: “It is important that the full greenhouse gas emissions ‘cost’ of biofuel production is made clear to the consumer, who may otherwise be misled into thinking that all biofuels have a positive environmental impact. In addition to the high greenhouse gas emissions associated with oil palm plantations on tropical peatlands, these agro-systems have also been implicated in loss of primary rainforest and associated biodiversity, including rare and endangered species such as the orangutan and Sumatran tiger.

“We are very excited by the outcomes of our research – our study has already been accepted and used by several scientists, NGOs, economists and policy advisors in Europe and the USA to better represent the scale of greenhouse gas emissions from palm oil biodiesel production and consumption.

“The findings of this research will be used by organizations such as the US Environmental Protection Agency, European Commission and California Air Resources Board to more fully account for greenhouse gas emissions and their uncertainties from biofuel produced from palm oil. This is essential in identifying the least environmentally damaging biofuel production pathways, and the formulation of national and international biofuel and transportation policies.”

Dr Chris Malins of the ICCT said, “Peat degradation under oil palm is a major source of emissions from biodiesel production. Recognizing that emissions are larger than previously thought will help regulators such as the US Environmental Protection Agency (EPA), European Commission (EC) and California Air Resources Board (CARB) identify which biofuel pathways are likely to lead to sustainable greenhouse gas emissions reductions”.

The research was funded by the International Council on Clean Transportation (ICCT), an international think-tank made up of representatives from the world’s leading vehicle manufacturing nations. The research was commissioned by Dr Chris Malins of the ICCT and led by Dr Susan Page and Ross Morrison, both of the Department of Geography, University of Leicester. Other contributors to the work were Professor Jack Rieley of the University of Nottingham and chair of the scientific advisory board of the International Peat Society (IPS), Dr Aljosja Hooijer of Deltares in the Netherlands, and Dr Jyrki Jauhiainen of the University of Helsinki. The research was conducted over a period of three months during spring of this year and has recently been published as an International White Paper by the ICCT.

Image 1: Oil palm plantations on peat: note the leaning trunks owing to low load-bearing capacity of peat soils (Image: J. Jauhiainen).

Image 2: Conditions at a mature oil palm plantation site, 18 years after conversion: (left image) open canopy (causing increased soil temperatures), limited ground cover (causing lowered soil moisture content), intensive fertilization (white patches around palm trunks), and (right image) a loose top soil structure (leaning oil palms, footprints). Credit: University of Leicester

Image 3: Subsidence pole inserted in peatland in Johor, peninsular Malaysia. The pole was inserted beside an oil palm plantation in 1978 and at the time of this photograph (2007), 2.3 m of subsidence had occurred (the human “measuring stick,” Dr. Chris Banks, is 2 meters tall). Credit: Image: J. Jauhiainen

Concurrent Chemo And Radiation Confers Survival Benefit In Nasopharyngeal Carcinoma Patients

The combination of chemotherapy and radiation significantly improved the 5-year overall survival of patients with stage II nasopharyngeal carcinoma (NPC), according to a phase III study published Nov. 4 in the Journal of the National Cancer Institute.

Nasopharyngeal carcinoma is endemic in Southern China and Southeast Asia, where radiotherapy (RT) has been the primary treatment. Although the National Comprehensive Cancer Network (NCCN) recommends concurrent chemo-radiotherapy (CCRT) for stage II disease, evidence regarding its efficacy is weak, and this has not been defined as a primary endpoint in phase III trials.

To determine whether or not combined chemotherapy and radiotherapy confers a survival benefit on stage II NPC patients, Qiu-Yan Chen, M.D., Ph.D., of the Sun Yat-sen University Cancer Center in the People’s Republic of China, and colleagues conducted a phase III trial of patients randomly assigned to receive either radiation therapy (114 patients) or combined chemotherapy and radiation (116 patients).

After a median follow-up of 60 months, 22.8% of patients in the radiation group had disease progression, compared with 11.2% in the concurrent therapy group. The researchers also found that 5-year overall survival, progression-free survival, and distant metastasis-free survival were statistically significantly higher in the concurrent therapy group than in the group receiving radiation alone.
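
Simple arithmetic on the quoted progression figures conveys the effect size; this is a naive sketch on crude proportions, not the trial's actual time-to-event analysis:

```python
# Naive arithmetic on the reported proportions with disease progression.
rt_progression = 0.228    # radiotherapy-alone arm
ccrt_progression = 0.112  # concurrent chemo-radiotherapy arm

print(f"Relative risk with CCRT: {ccrt_progression / rt_progression:.2f}")  # ~0.49
print(f"Absolute risk reduction: {rt_progression - ccrt_progression:.1%}")  # 11.6%
```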

The authors conclude that based on the results of this trial, which they believe to be the first phase III trial to compare CCRT and RT, the NCCN guidelines are reasonable. They hypothesize that early-stage disease may have a smaller distant tumor bulk, and thus CCRT may be more effective in eradicating distant micro-metastases. Although patients in the combined therapy group experienced more toxic side effects than patients who only received radiotherapy, the regimen was overall well tolerated when the chemotherapy drug dose was reduced.

Chen et al write, “In summary, we think that the optimal choice for early-stage NPC is cisplatin, at a weekly dose of 30 mg/m2, for both an optimal chemotherapy effect to eradicate small distant tumors and to ensure NPC patient compliance.”

More Young Adults Still Living With Their Parents

According to the U.S. Census Bureau, the proportion of young adults living at home with their parents increased between 2005 and 2011.

The report said that the percentage of men age 25 to 34 living in the home of their parents jumped from 14 percent in 2005 to 19 percent in 2011.

“The increase in 25 to 34 year olds living in their parents’ home began before the recent recession, and has continued beyond it,” Rose Kreider, a family demographer with the Fertility and Family Statistics Branch and author of the report, said in a press release.

The census found that 59 percent of men aged 18 to 24 and 50 percent of women in the same age bracket still resided in their parents’ home in 2011. These figures are up from 53 percent of men and 46 percent of women in 2005.

The Census Bureau also said that of the 74.6 million children younger than 18 in 2011, 69 percent lived with two parents, while another 27 percent lived with one parent.

Among the children who lived with just one parent in 2011, 87 percent of them lived with their mother.

The report also noted that in 2011, married couples with children made up 20 percent of all households, down from 40 percent in 1970.

The Annual Social and Economic Supplement to the Current Population Survey was conducted in February, March and April of 2011.

Inactivity Increases Risk Of Cancer

A major new study concurs with decades of research showing that being active means being healthy, and also reduces the risk of cancer.

But being active alone doesn’t mean you are fit for life. The new evidence suggests that people who have sedentary lifestyles, even when they do have exercise routines, are at an increased risk of cancer.

The study, published in the journal Cancer Prevention Research and presented at the American Institute for Cancer Research (AICR) annual conference in Washington, DC, has revealed a strong connection between inactivity and cancerous cell growth. Around 92,000 cases of breast and colon cancer each year can be attributed to lack of exercise, and researchers are now urging people to get and stay fit, adding in a few minutes of physical activity for every hour they spend being inactive.

“This gives us some idea of the cancers we could prevent by getting people to be more active,” lead study author Christine Friedenreich, an epidemiologist with Alberta Health Services in Calgary, Canada, told USA Today. “This is a conservative estimate. The more physical activity you do, the lower your risk of these cancers,” she said.

The numbers “seem like very reasonable estimates,” added Alpa Patel, an American Cancer Society epidemiologist who studied the data.

One study of post-menopausal women confirmed that taking brisk daily walks helps to reduce several key biological indicators of cancer risk, including sex hormone levels, insulin resistance, inflammation and body fatness.

“In breast and colon cancers, for example, we’re seeing overall risk reductions of about 25 to 30 percent associated with higher levels of physical activity,” Friedenreich said. Experts have known for years that physical activity decreases the risk of chronic disease, but Friedenreich said the new data gives estimates on the number of cases that could be prevented if people were more physically active.

“A brisk daily walk of at least 30 minutes could lower a person’s risk over time for breast cancer and colon cancer,” Alice Bender, a registered dietitian with AICR, told USA Today.

Friedenreich reviewed more than 200 cancer studies from around the world and found convincing evidence that physical activity on a regular basis reduces the risk of breast cancer, colon cancer and endometrial cancer by 25 to 30 percent. And there is some evidence that it also reduces the risk of lung, prostate and ovarian cancer, Friedenreich said.

Patel has also investigated the health risks of sitting too long without moving around, which is known as “sitting disease.”

In that study, Patel and colleagues looked at 123,000 people and found that the more people sat around, the higher their risk of dying early. “Even among individuals who were regularly active, the risk of dying prematurely was higher among those who spent more time sitting,” she said.

Even if you do 30 minutes of exercise a day, you need to make sure you are not just sitting around the rest of the day, said Patel. “You have to get up and take breaks from sitting.”

James Levine, a professor of medicine at Mayo Clinic in Rochester, Minnesota, said many people sit an average of seven to nine and a half hours a day. “If you’ve sat for an hour, you’ve probably sat too long,” he told USA Today.

Another study highlighted that even those who are physically active but sit for long periods are at greater risk of developing cancers. Researchers from Australia’s Baker IDI Heart and Diabetes Institute discovered that even breaks as short as one minute can help prevent health complications.

“Sitting time is emerging as a strong candidate for being a cancer risk factor in its own right,” said Neville Owen, who presented evidence at the Washington conference. “It seems highly likely that the longer you sit, the higher your risk. This phenomenon isn’t dependent on body weight or how much exercise people do.”

His study revealed that the majority of adults’ days are spent being inactive.

The study found that on average, 60 percent (9.3 hours) of a person’s waking day was spent sedentary. This includes mealtimes, commutes to and from work, computer and TV time. Another 35 percent (6.5 hours) was spent in light activity such as walking to a meeting.

Office workers on average spend up to 75 percent of their workday sitting, with about 30 minutes of light activity during that time.

“When you’re sitting, the big muscles, especially in lower part of body, are completely unloaded. They’re not doing their job,” Owen said. That inactivity prompts changes in the body’s metabolism, Owen said, and produces a number of biological signals, what scientists call biomarkers, which are linked to cancer.

“It’s been surprisingly consistent with what strong relationships there are between physical inactivity and these biomarkers of cancer risk,” Owen said.

Owen is hopeful that the findings will prompt practical recommendations on workplace health, such as removing office waste baskets, using standing desks, and meetings with standing breaks.

“Taken together, this research suggests that every day, we’re each given numerous opportunities to be active and protect ourselves from cancer, not one,” AICR spokesperson Alice Bender added. “We need to start thinking in terms of make time and break time.”

For the most part, cancer researchers have emphasized the importance of getting a certain amount of dedicated exercise to lower risk of disease, and experts say it is still a good idea to follow those guidelines.

The US Centers for Disease Control and Prevention recommends that adults get 150 minutes of moderate-intensity exercise or 75 minutes of vigorous exercise each week, along with weekly muscle-strengthening activities.

But now it seems that the health benefits of being active require more than a regular dosage of daily exercise.

“It’s scary to think that even if I am going to the gym 30 to 45 minutes every day, that might not be enough,” said Patel. “But the other important message here is for the two thirds of U.S. adults who don’t engage in regular physical activity, there’s benefit in just moving around.”

Joan Vernikos, the former director of life sciences at the National Aeronautics and Space Administration (NASA) and author of “Sitting Kills, Moving Heals,” said simply standing once every 30 minutes can help stimulate the body by fighting the forces of gravity.

“It’s not how long you stay standing, but how often you stand up, how often you challenge your body to respond,” Vernikos told ABC News. “What provides the baseline of physiological activity in the body is small to large movement, intermittently all day, every day.”

Latex Gloves Lead To Lax Hand Hygiene In Hospitals

Healthcare workers who wear gloves while treating patients are much less likely to clean their hands before and after patient contact, according to a study published in the December issue of Infection Control and Hospital Epidemiology, the journal of the Society for Healthcare Epidemiology of America. This failure of basic hand hygiene could be contributing to the spread of infection in healthcare settings, the researchers say.
Glove use is appropriate for situations when contact with body fluids is anticipated or when patients are to be managed with contact precautions. However, use of gloves should not be considered a substitute for effective hand hygiene practices taking place before and after patient contact. Although gloves can reduce the number of germs transmitted to the hands, germs can sometimes still get through latex. Hands can also be contaminated by “back spray” when gloves are removed after contact with body fluids.
The researchers, led by Dr. Sheldon Stone of the Royal Free Hospital NHS Trust, observed more than 7,000 patient contacts in 56 intensive care and acute care of the elderly wards in 15 United Kingdom hospitals, making this one of the largest and most detailed studies on gloves and their impact on hand hygiene. Overall, the study found that hand hygiene compliance was “disappointingly low,” at just 47.7 percent. Compliance was even lower in instances where gloves were worn, dipping to just over 41 percent.
“The chances of hands being cleaned before or after patient contact appear to be substantially lower if gloves were being worn,” said Dr. Stone, the principal investigator. “We call this the phenomenon of the ‘Dirty Hand in the Latex Glove.'”
Though troubling, the results also reveal an opportunity to reduce healthcare-associated infections by focusing improvement efforts on hand hygiene when gloves are worn. Doing so may prove the critical step in raising overall hand hygiene to the levels needed to prevent transmission of infection, the researchers say.
Dr. Stone and his colleagues suggest further study on the behavioral reasons behind why healthcare workers are less likely to wash their hands when wearing gloves. Regardless, the researchers recommend that campaigns such as the World Health Organization’s Clean Care is Safer Care program should emphasize better hand hygiene associated with gloving practices.
Christopher Fuller, Joanne Savage, Sarah Besser, Andrew Hayward, Barry Cookson, Ben Cooper, Sheldon Stone, “The Dirty Hand in the Latex Glove: A Study of Hand-Hygiene Compliance When Gloves Are Worn.” Infection Control and Hospital Epidemiology 32:12 (December 2011)

Controversial Treatment Makes Brown Eyes Blue

A California scientist claims he has developed a simple procedure that can turn brown eyes into blue eyes, but once the procedure is done it is irreversible, CBS News reports.

Dr Gregg Homer, of Stroma Medical in California, said his new Lumineyes technology uses a laser tuned to a specific frequency to permanently turn brown eyes blue by removing the brown pigment, or melanin, from the top layer of the iris, leaving the blue eye color to emerge over the course of a few weeks.

Homer says his Lumineyes treatment could become a permanent alternative to colored contact lenses. The process is permanent because melanin does not grow back and cannot be replaced. Brown eyes, the most common eye color in the world, appear brown because of the layer of melanin pigment at the front of the iris.

Homer said the permanent procedure takes only about 20 seconds. He told KTLA.com that he is convinced the procedure is safe and does not affect vision in any way.

The idea of using laser light to change eye color makes sense, said Dr Elmer Tu, associate professor of clinical ophthalmology at the University of Illinois at Chicago. “Theoretically, it’s possible if you go in and laser the eye to release the pigment that causes brown eyes,” he told CBS News.

Tu believes, however, that safety could be an issue. The released pigment “has to go somewhere,” he said. The procedure could cause a potentially blinding condition called pigmentary glaucoma, which is associated with the chronic seepage of melanin into the fluid within the eye.

So why not just wear colored contact lenses like so many people do?

Doug Daniels, CEO of Stroma Medical, told MSNBC.com that “Nineteen million people wear colored contact lenses, but light-colored contacts on dark eyes look unnatural and the wearer can’t see as well.” And colored contact lenses aren’t without risks of their own — they have been known to cause serious infections within the eye.

The Lumineyes treatment would cost around $4,800 and could be available in countries outside the US within 18 months, said Homer. But the Daily Mail reported that clinical trials have yet to be completed.

Homer has spent the better part of the last decade working on the technology and has recently begun some human testing, but is also seeking up to $800,000 to complete the trial phase. He filed a patent application for the laser eye-pigment changer in 2005.

Homer said he has already received thousands of responses from potential clients who are interested in having the procedure done on them.

Daniels admitted he wasn’t so sure about the concept when Homer first told him about it. “I was very skeptical frankly, but I learned a long time ago that all the great ideas start out as blasphemy.”

Eye color is inherited, although brown eyes are dominant across the world; blue eyes are a recessive trait. There is actually no blue pigment in the eye. Instead, people with blue eyes have brown pigment at the back of the irises and only low concentrations at the front. This means longer wavelengths of light are absorbed by the dark back of the eye, while shorter wavelengths are scattered back out, making the iris appear blue.

In 2008 scientists from the University of Copenhagen found that all people with blue eyes were descended from a single ancestor with a blue eye mutation who lived from six to ten thousand years ago. Before then, everyone had brown eyes, according to the study.

The mutation was not then, nor is it now, a negative one. It works much the same way as other mutations in the human body, such as those governing hair color, baldness and freckles: they neither increase nor reduce a human’s chance of survival.

“It simply shows that nature is constantly shuffling the human genome, creating a genetic cocktail of human chromosomes and trying out different changes as it does so,” the study authors said at the time.

Saber-toothed Fossil Sheds New Light On Ancient Mammals

A remarkable 94-million-year-old fossil found in South America is shedding new light on the ancient history of mammals.

The specimen, dubbed Cronopio dentiacutus, is one of the very few mammal fossils to come out of South America from the era when dinosaurs ruled the Earth.   

The mouse-sized creature had a long snout, dagger-like canines and a powerful set of muscles it used to chew its insect food.

The mammal is a dryolestoid, a member of an extinct group of animals distantly related to today’s marsupials and placentals.

“The new dryolestoid, Cronopio, is without a doubt one of the most unusual mammals that I have seen, extinct or living,” said John Wible, curator of mammals at the Carnegie Museum of Natural History.

University of Louisville paleontologist Guillermo Rougier and his team found the fossil, which breaks a roughly 60-million-year gap in what is known about South American mammals and their evolution.

“It looks somewhat like Scrat, the saber-toothed squirrel from ‘Ice Age,’” said Rougier, professor of anatomy and neurobiology at the University of Louisville.

But even before they knew what it might look like, the researchers realized the importance of the discovery when they found the two fossilized skulls in 2006.

The skulls were embedded in rock in a remote area of northern Patagonia, about 100 miles from the city of Allen in the Argentinian province of Rio Negro. It took several years of patient lab work to remove the specimens from the rocks.

“We knew it was important, based on the age of the rocks and because we found skulls,” Rougier said.

“Usually we find teeth or bone fragments of this age. Most of what we know of early mammals has been determined through teeth because enamel is the hardest substance in our bodies and survives well the passage of time; it is usually what we have left to study.”

“The skull, however, provides us with features of the biology of the animal, making it possible for us to determine this is the first of its kind dating to the early Late Cretaceous period in South America.”

“This time period in South America was somewhat of a blank slate to us. Now we have a mammal as a starting point for further study of the lineage of all mammals, humans included.”

The prospects for further investigation on the southern continents are exciting.

“Until now, all we have had are isolated teeth and a few jaw fragments … which don’t really help much in deciphering broader relationships,” said Rich Cifelli, presidential professor of zoology at the University of Oklahoma.

“The new fossils provide a sort of Rosetta Stone for understanding the genealogy of early South American mammals, and how they fit in with those known from northern landmasses,” said Cifelli, who has spent his career discovering and identifying mammal remains.

“Now,” he said, “the burden is on the rest of us to find similarly well preserved fossils from elsewhere, so that the broader significance of Rougier’s finds can be fully placed in context.”

The study was published online November 2 in the journal Nature. 

Image Caption: Artist impression of Cronopio dentiacutus. Credit: Jorge Gonzalez/Guillermo Rougier

Which Came First? Humans Or Malaria?

Malaria Revealed As Ancient, Adaptive And Persistent Foe

One of the most comprehensive analyses yet done of the ancient history of insect-borne disease concludes for the first time that malaria is not only native to the New World, but was present long before humans existed, having evolved through birds and monkeys.

The findings, presented in a recent issue of American Entomologist by researchers from Oregon State University, are based on the study of insect specimens preserved in amber.

The study outlines the evolution of several human diseases, including malaria, leishmaniasis and trypanosomiasis. It makes clear that these pathogens have existed for at least 100 million years, and suggests that efforts to conquer them will be an uphill battle against such formidable and adaptive foes.

“Amber tells us that these diseases have been here for many millions of years, have co-evolved with their hosts and move readily from one species to another,” said George Poinar, Jr., a professor of zoology at OSU and one of the world’s leading experts on the study of fossils in this semi-precious stone.

“Malaria is one of the greatest insect-borne killers in human history, and more than one million people a year are still dying from it,” Poinar said. “But the evolutionary record suggests it can easily change its protein coat in response to vertebrate immune reactions. That’s why it’s always becoming resistant to drugs, and efforts to create vaccines will be very difficult.”

Insects preserved for tens of millions of years are offering new clues to the ancient history of these diseases. Blood-feeding vectors trapped eons ago in oozing tree sap reveal in near-perfect detail stages of vertebrate pathogens they were carrying when they became entombed.

“Most people think of malaria as a tropical disease, which today it primarily is,” Poinar said. “But historically it occurred in many parts of the world, including temperate zones.”

“As recently as 1935 there were 900,000 cases of malaria in the United States,” he said. “Near Portland, Ore., malaria almost wiped out some local Indian tribes in the 1830s, and the mosquitoes that carried it are still prevalent there. In the 1600s it hindered colonization from Massachusetts to Georgia. And there are 137 million people right now living in areas of risk in the Americas.

“It’s possible epidemics could explode again, almost anywhere in the world,” he said.

Having traveled much of the world to pursue amber, Poinar knows first-hand the risks involved.

“I caught malaria in the 1970s in the Ivory Coast in Africa,” he said. “My arm had bumped up against some mosquito netting while I slept. The following day, I started shaking with cold, then sweating with a high fever, thinking I was going to die.”

Millions have died. Globally, about 300-500 million cases of malaria occur each year, with more than a million deaths in Africa alone.

Among the points made in this report:

– Discoveries in amber have helped to pin down the minimum ages, origins and early hosts of several insect-borne human diseases.

– An archaic and now extinct malarial parasite was found in 100 million-year-old amber.

– Mosquitoes carrying malaria of the genus Plasmodium, the type that causes human illness, were established in the New World at least 15 million years ago, long before modern humans existed. At that time, the disease infected various types of birds.

– Spaniards arriving in South America found that when native peoples acquired fevers, they drank infusions of cinchona bark, which was later found to contain quinine, an effective anti-malarial drug.

– Malaria apparently first went from birds to monkeys and eventually into humans.

Anatomically modern humans are only about 200,000 years old, experts say. These findings indicate they evolved with malaria for their entire existence.

Image 1: This culicine mosquito was discovered in amber from the Dominican Republic, and carried a type of Plasmodium malaria able to infect birds. It shows malaria was established in the New World at least 15 million years ago. (Photo courtesy of Oregon State University)

Image 2: These oocysts that carry malaria were found in amber from the Dominican Republic, preserved from millions of years ago. (Photo courtesy of Oregon State University)

Report Calls For Creation Of A Biomedical Research And Patient Data Network For More Accurate Classification Of Diseases, Move Toward ‘Precision Medicine’

A new data network that integrates emerging research on the molecular makeup of diseases with clinical data on individual patients could drive the development of a more accurate classification of disease and ultimately enhance diagnosis and treatment, says a new report from the National Research Council. The “new taxonomy” that emerges would define diseases by their underlying molecular causes and other factors in addition to their traditional physical signs and symptoms. The report adds that the new data network could also improve biomedical research by enabling scientists to access patients’ information during treatment while still protecting their rights. This would allow the marriage of molecular research and clinical data at the point of care, as opposed to research information continuing to reside primarily in academia.

“Currently, a disconnect exists between the wealth of scientific advances in research and the incorporation of this information into the clinic,” said Susan Desmond-Hellmann, co-chair of the committee that authored the report and chancellor of the University of California, San Francisco. “Often it can take years for biomedical research information to trickle to doctors and patients, and in the meantime wasteful health care expenditures are carried out for treatments that are only effective in specific subgroups. In addition, researchers don’t have access to comprehensive and timely information from the clinic. Overall, opportunities are being missed to understand, diagnose, and treat diseases more precisely, and to better inform health care decisions.”

“Developing this new network and the associated classification system will require a long-term perspective and parallels the challenges of building Europe’s great cathedrals — one generation will start building them, but they will ultimately be completed by another, with plans changing over time,” said committee co-chair Charles Sawyers, a Howard Hughes Medical Institute investigator and the inaugural director of the Human Oncology and Pathogenesis Program at Memorial Sloan-Kettering Cancer Center. “Dramatic advances in biology and technology have enabled rapid, comprehensive, and cost-efficient analysis of patients’ health information, which has resulted in an explosion of data that could dramatically alter disease classification. Health care costs have also steadily increased without translating into significantly improved clinical outcomes. These circumstances make it a perfect time to modernize disease classification.”

Typically, disease taxonomy refers to the International Classification of Diseases (ICD), a system established more than 100 years ago that is used to track and diagnose disease and determine reimbursement for care. Under ICD, which is in its 10th edition, disease classifications are primarily based on signs and symptoms and seldom incorporate rapidly emerging molecular data, incidental patient characteristics, or socio-environmental influences on disease.

This approach may have been adequate in an era when treatments were largely directed toward symptoms rather than underlying causes, but diagnosis based on traditional signs and symptoms alone carries the risk of missing or misclassifying diseases, the committee said. For instance, symptoms in patients are often nonspecific and rarely identify a disease unambiguously, and numerous diseases, such as cancer and HIV infection, are asymptomatic in the early stages. Moreover, subgroups of what is classified as a single disease often have diverse molecular causes, while, conversely, multiple diseases that share a common molecular cause are not categorized in the same disease classification.

The committee noted several areas where classification of diseases based on genetic makeup is already happening with new drug approvals. For example, in a set of trials on patients with non-small-cell lung cancer, a drug was shown to produce dramatic anti-tumor effects in approximately 10 percent of the patients while other patients did not respond at all. Aided by the dramatic tumor shrinkage, the drug was approved and used on a broad range of lung cancer patients but did nothing for most other than increase costs and side effects. Subsequently, it was discovered that patients who responded to the drug carried specific genetic mutations. This allowed doctors to predict which patients would respond and led to the design of more effective clinical trials, reduced treatment costs, and increased treatment effectiveness. Since then, other studies have further divided lung cancers into subsets that are defined by driver mutations.

Framework to Achieve a New Disease Taxonomy

The committee recommended a modernization and reorientation of the information systems used by researchers and health care providers to attain the new taxonomy and move toward precision medicine. It suggested a framework for creating a “knowledge network of disease” that integrates the rapidly expanding range of information on what causes diseases and allows researchers, health care providers, and the public to share and update this information. The first stage in developing the network would involve creating an “information commons” that links layers of molecular data, medical histories, including information on social and physical environments, and health outcomes to individual patients. The second stage would construct the network and require data mining of the information commons to highlight the data’s interconnectedness and integrate it with evolving research. Fundamentally, data would be continuously deposited by the research community and extracted directly from the medical records of participating patients.

To acquire information for the knowledge network, the committee recommended designing strategies to collect and integrate disease-relevant information; implementing pilot studies to assess the feasibility of integrating molecular parameters with medical histories in the ordinary course of care; and gradually eliminating institutional, cultural, and regulatory barriers to widespread sharing of individuals’ molecular profiles and health histories while still protecting patients’ rights. Much of the initial work necessary to develop the information commons should take the form of observational studies, which would collect molecular and other patient data during treatment. Having this access at point of care could reduce the cost of research, make scientific advances relevant to real-life medicine, and facilitate the use of electronic health records.

The committee noted that moving toward individualized medicine requires that researchers and health care providers have access to very large sets of health and disease-related data linked to individual patients. These data are also critical for developing the information commons, the knowledge network of disease, and ultimately the new taxonomy.


How Chromosomes Find Each Other

After more than a century of study, mysteries still remain about the process of meiosis–a special type of cell division that helps ensure genetic diversity in sexually reproducing organisms. Now, researchers at the Stowers Institute for Medical Research shed light on an early and critical step in meiosis.

The research, to be published in the Nov. 8, 2011 issue of Current Biology, clarifies the role of key chromosomal regions called centromeres in the formation of a structure known as the synaptonemal complex (SC). “Understanding this and other mechanisms involved in meiosis is important because of the crucial role meiosis plays in normal reproduction–and the dire consequences of meiosis gone awry,” says R. Scott Hawley, Ph.D., who led the research at Stowers.

“Failure of the meiotic division is probably the most common cause of spontaneous abortion and causes a number of birth defects, such as Down syndrome,” Hawley says.

Meiosis reduces the number of chromosomes carried by an individual’s regular cells by half, allocating precisely one copy of each chromosome to each egg or sperm cell and thus ensuring that the proper number of chromosomes is passed from parent to offspring. And because chromosomes come in pairs–23 pairs in humans–the chromosomes must be properly matched up before they can be divvied up.

“Chromosome 1 from your dad has to be paired with chromosome 1 from your mom, chromosome 2 from your dad with chromosome 2 from your mom, and so on,” Hawley explains, “and that’s a real trick. There’s no room for error; the first step of pairing is the most critical part of the meiotic process. You get that part wrong, and everything else is going to fail.”

The task is something like trying to find your mate in a big box store. It helps if you remember what they are wearing and what parts of the store they usually frequent (for example, movies or big-screen TVs). Similarly, chromosomes can pair up more easily if they’re able to recognize their partners and find them at a specific place.

“Once they’ve identified each other at some place, they’ll begin the process we call synapsis, which involves building this beautiful structure–the synaptonemal complex–and using it to form an intimate association that runs the entire length of each pair of chromosomes,” Hawley explains.

Some model organisms employed in the study of meiosis, such as yeast and the roundworm Caenorhabditis elegans, use the ends of their chromosomes to facilitate the process. “These organisms gather all the chromosome ends against the nuclear envelope into one big cluster called a bouquet or into a bunch of smaller clusters called aggregates, and this brings the chromosome ends into proximity with each other,” Hawley says. “This changes the problem of finding your homologue in this great big nucleus into one of finding your mate on just the surface of the inside of the nucleus.”

But the fruit fly Drosophila melanogaster–the model organism in which meiosis has been thoroughly studied for more than a century, and which Hawley has studied for almost 40 years — has unusual chromosome ends that don’t lend themselves to the same kind of clustering.

“So even though the study of meiosis began in Drosophila, we really haven’t had any idea how chromosomes initiate synapsis in Drosophila,” Hawley says. “Now, we show that instead of clustering their chromosome ends, flies cluster their centromeres–highly organized structures that chromosomes use to move during cell division. From there, the biology works pretty much as you would expect: synapsis is initiated at the centromeres, and it appears to spread out along the arms of the chromosomes.”

The ramifications of the findings extend beyond fruit flies, as there’s some evidence that synapsis starts at centromeres in other organisms. In addition, Hawley and coauthors found that centromere clustering may play a role later in meiosis, when chromosomes separate from their partners.

“There’s reason to believe that some parts of that process will be at least explorable and potentially applicable to humans,” Hawley said.

The work also is notable as an example of discovery-based science, Hawley said. “We didn’t actually set out to study the initiation of meiosis; we were simply interested in characterizing the basic biology of early meiosis.”

But postdoctoral researcher and first author Satomi Takeo, Ph.D., noticed that centromere clustering and synaptonemal complex initiation occurred in concert, and her continued observations revealed the role of centromeres in initiating synapsis.

“I was staring with tired eyes at the cells that I was analyzing,” Takeo recalls. “Somehow I started looking at the spots I had previously ignored–probably because I thought they were just background noise–until I saw the connection between centromere clustering and synapsis initiation. After going through many images, I wrote an email to Scott, saying, ‘This is really important, isn’t it??’ With that finding, everything else started to make sense.”

In addition to Hawley and Takeo, the paper’s authors include Cathleen M. Lake at the Stowers Institute for Medical Research and Eurico Morais-de-Sá and Cláudio D. Sunkel at Universidade do Porto in Porto, Portugal, who provided information on the earliest stages of the process.

Image Caption: In Drosophila females, sequential meiotic stages are observable in a string of developing egg chambers called the ovariole. Meiosis starts at the anterior region (top-right) and meiotic cells form the synaptonemal complex (shown in purple) to pair up homologous chromosomes. Centromeres are shown in orange and DNA is labeled green. Credit: Courtesy of Dr. Satomi Takeo, Stowers Institute for Medical Research


Architecture And Design Help The Brain To Recover

How does the hospital environment affect our rehabilitation? New research from the University of Gothenburg, Sweden, into how the space around us affects the brain reveals that well-planned architecture, design and sensory stimulation increase patients’ ability to recover both physically and mentally. Digital textiles and multisensory spaces can make rehabilitation more effective and reduce the amount of time spent in care.

In an interdisciplinary research project, Kristina Sahlqvist has used research into the recovery of the brain to examine how hospitals can create better environments for rehabilitation.

“We want to help patients to get involved in their rehabilitation, a side effect of which can be an improvement in self-confidence,” says Sahlqvist, interior architect and researcher at the University of Gothenburg’s School of Design and Crafts (HDK).

The project drew on all the expertise used on a ward, with input from neurologists, rehabilitation doctors, nurses, psychologists, occupational therapists and physiotherapists. The result is a conceptual solution for an optimal rehabilitation ward.

“Our concept gives the ward a spatial heart, for example, where patients and their families can prepare food and eat together, which allows for a more normal way of spending time together in a hospital environment,” says Sahlqvist.

In tandem with her research work, she has teamed up with a designer and researcher at the Swedish School of Textiles in Borås on an artistic development project where they redesigned furniture, developed easy-grip cups and cutlery and used smart textiles, in other words textiles with technology embedded in them. The concept includes a table and chairs, a rug and a muff with integral heating, a cardigan with speakers and a soft bracelet that is also a remote control.

In order to measure and test the research theories, Sahlgrenska University Hospital will be developing an intensive care room featuring multimodal stimulation, where all the senses are engaged. The work involves an architect, doctors, hospital staff, musicians, a designer, an acoustician and a cognition specialist. In a bid to see what kind of results the environment can produce in practice, the researchers will take account of the entire social situation of patients, family and staff.

There are other interesting techniques in the field of neuroarchitecture, where it is possible, for example, to use spatial design to improve learning. Although these are currently used predominantly in schools, they could also have potential for the elderly.

“It’s worth wondering why there are so many educational models for preschool children but so few for the elderly. Many old people need a far more stimulating environment than they have at the moment,” says Sahlqvist.


Regimen May Improve Cell Transplantation Outcomes For Older Adults With Blood, Bone Marrow Cancers

Older patients with advanced hematologic malignancies, such as leukemia and lymphoma, who received a conditioning regimen that included minimal-intensity radiation therapy before allogeneic (from a genetically different donor) hematopoietic cell transplantation (HCT; a transplant of bone marrow or stem cells) had overall and progression-free survival outcomes suggesting that this treatment approach may be a viable option for older patients with these malignancies, according to a study in the November 2 issue of JAMA.

“Increasing age has been historically implicated in higher mortality after high-dose allogeneic HCT for patients with hematologic malignancies [cancers of the blood or bone marrow]. Such transplants are preceded by intense, cytotoxic [toxic to cells] conditioning regimens aimed at reducing tumor burden. The risk of organ toxicities has limited the use of high-dose regimens to younger patients in good medical condition. Therefore, age cutoffs of 55 to 60 years have been in place for decades for high-dose HCT. This excluded the vast majority of patients from allogeneic HCT, given that median [midpoint] ages of patients at diagnoses of most hematologic malignancies range from 65 to 70 years,” according to background information in the article.

To address this limitation, a nonmyeloablative conditioning regimen (an approach using lower doses of chemotherapy and/or radiation that does not eradicate all bone marrow cells prior to stem cell transplant) was developed for allogeneic HCT. The regimen relies on graft-vs.-tumor effects to cure cancer and consists of the chemotherapy drug fludarabine and a low dose of total-body irradiation before HCT, followed by a course of immunosuppression. “This regimen has allowed extension of allogeneic HCT to a previously unserved population of older or medically infirm patients,” the authors write.

Mohamed L. Sorror, M.D., M.Sc., of the Fred Hutchinson Cancer Research Center, Seattle, and colleagues analyzed the outcomes of older patients with advanced hematologic malignancies who received minimally toxic nonmyeloablative allogeneic HCT. From 1998 to 2008, 372 patients ages 60 to 75 years (median age, 64 years) were enrolled in prospective clinical HCT trials at 18 collaborating institutions using conditioning with low-dose total body irradiation alone or combined with fludarabine, before related (n = 184) or unrelated (n = 188) donor transplants. They also received postgrafting immunosuppression therapy. The primary outcomes measured for the study were overall and progression-free survival.

As of June 23, 2010, 133 of the 372 patients were alive, with a median follow-up of 55 months. Overall, disease progression or relapse was the most common cause of death (n = 135). Nonrelapse deaths occurred among 104 patients, mainly due to infections, graft-vs.-host disease (GVHD), and multi-organ failure. Five-year rates of overall survival and progression-free survival were 35 percent and 32 percent, respectively. The overall 5-year cumulative incidence of relapse was 41 percent.

Cumulative incidences for nonrelapse mortality at 5 years were comparable among the 3 age groups (27 percent for patients ages 60-64 vs. 26 percent for those ages 65-69 vs. 31 percent for those 70 or older). Five-year rates of overall survival were 38 percent for patients ages 60 through 64, 33 percent for those ages 65 through 69, and 25 percent for those 70 years or older.

Also, comorbid conditions and risks for disease relapse, but not increasing age, were associated with worse outcomes. More than half of the older patients were never hospitalized, and two-thirds of survivors experienced eventual resolution of their chronic GVHD with return to normal or near-normal physical function.

“While there is much room for improvement, particularly with regard to relapse, these results are encouraging given the poor outcomes with nontransplantation treatments, especially for patients with high-risk acute myeloid leukemia, fludarabine-refractory chronic lymphocytic leukemia, or progressive lymphoma. The older population is increasing; demographic changes in the United States suggest that 20 percent of the population will be 65 years or older by 2030. Furthermore, increases of up to 77 percent in the number of newly diagnosed hematologic malignancies among the older population are expected to occur in the next 20 years. Greater age is also associated with increased medical comorbid conditions. Thus, establishing treatment options with curative outcomes and near-normal long-term physical function have become an important future goal for older patients with hematologic malignancies,” the authors write.

(JAMA. 2011;306[17]:1874-1883. Available pre-embargo to the media at www.jamamedia.org)

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Editorial: Overcoming the Age Barrier in Hematopoietic Stem Cell Transplantation

In an accompanying editorial, Shin Mineishi, M.D., of the University of Michigan, Ann Arbor, Mich., comments on the findings of this study.

“Development and refinement of reduced-intensity, nonmyeloablative allogeneic hematopoietic stem cell transplantation (HSCT) has been an important accomplishment in HSCT over the last 15 years. As reported by Sorror et al, even among patients aged 60 through 75 years, overall survival, progression-free survival, and other outcomes now appear almost comparable to those in younger patients. Although age alone should no longer be considered a limiting factor for allogeneic HSCT, more questions have been raised, and more problems need to be resolved for achieving optimal outcomes for older patients receiving allogeneic HSCT.”

(JAMA. 2011;306[17]:1918-1920. Available pre-embargo to the media at www.jamamedia.org)

Editor’s Note: Please see the article for additional information, including financial disclosures, funding and support, etc.


Not All Brain Regions Are Created Equal

Just as the Occupy Wall Street movement has brought more attention to financial disparities between the haves and have-nots in American society, researchers from Indiana University and the University Medical Center Utrecht in The Netherlands are highlighting the disproportionate influence of so-called “rich clubs” within the human brain.

Not all regions of the brain, they say, are created equal.

“We’ve known for a while that the brain has some regions that are ‘rich’ in the sense of being highly connected to many other parts of the brain,” said Olaf Sporns, professor in the Department of Psychological and Brain Sciences in IU’s College of Arts and Sciences. “It now turns out that these regions are not only individually rich, they are forming a ‘rich club.’ They are strongly linked to each other, exchanging information and collaborating.”

The study, “Rich-Club Organization of the Human Connectome,” is published in the Nov. 2 issue of the Journal of Neuroscience. The research is part of an ongoing intensive effort to map the intricate networks of the human brain, casting the brain as an integrated dynamic system rather than a set of individual regions.

Using diffusion imaging, which is a form of MRI, Martijn van den Heuvel, a professor at the Rudolf Magnus Institute of Neuroscience at University Medical Center Utrecht, and Sporns examined the brains of 21 healthy men and women and mapped their large-scale network connectivity. They found a group of 12 strongly interconnected bihemispheric hub regions, comprising the precuneus, superior frontal and superior parietal cortex, as well as the subcortical hippocampus, putamen and thalamus. Together, these regions form the brain’s “rich club”.
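The rich club is also a standard, quantifiable graph metric. As a rough illustration of the idea (not the authors' diffusion-MRI pipeline), the sketch below computes the rich-club coefficient of a synthetic hub-heavy network with NetworkX; values that rise with degree, especially relative to randomized control networks, indicate rich-club organization.

```python
# Illustration only: the rich-club coefficient on a toy scale-free graph,
# not the diffusion-MRI connectome used in the study.
import networkx as nx

G = nx.barabasi_albert_graph(200, 5, seed=1)  # synthetic network with hubs
# phi[k] = density of edges among nodes with degree > k; a "rich club"
# shows up as phi rising with k (the study further normalizes against
# degree-preserving random networks).
phi = nx.rich_club_coefficient(G, normalized=False)
for k in sorted(phi)[::10]:
    print(f"k={k:3d}  phi={phi[k]:.3f}")
```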

Most of these areas are engaged in a wide range of complex behavioral and cognitive tasks, rather than more specialized processing such as vision and motor control. If the brain network involving the rich club is disrupted or damaged, said Sporns, the negative impact would likely be disproportionate because of its central position in the network and the number of connections it contains. By contrast, damage to regions outside of the rich club would likely cause specific impairments but would likely have little influence on the global flow of information throughout the brain.

Sporns said the cohesive nature of the rich club’s interconnections was surprising and unexpected; it would have been entirely plausible for highly connected nodes not to interact or influence each other to such a degree.

“You sort of wonder what they’re talking about when they’re communicating with each other,” he said. “All these regions are getting all kinds of highly processed information, from virtually all parts of the brain.”

The rich club, said van den Heuvel, might be the “G8 summit of our brain.”

“It’s a group of highly influential regions that keep each other informed and likely collaborate on issues that concern whole brain functioning,” he said. “Figuring out what is discussed at this summit might be an important step in understanding how our brain works.”

Sporns said he and van den Heuvel hope the findings and subsequent research could shed light on the network basis of brain disorders affecting mental health. Van den Heuvel’s prior research has already shown characteristic disturbances of brain networks in schizophrenia. Whether these disturbances specifically affect the brain’s rich club is an open question.

Interest in creating a comprehensive map of the human brain’s neural connections, the connectome, has accelerated in the last few years. In the U.S., the National Institutes of Health are currently funding a project involving a consortium of more than 70 scientists, including Sporns, who are working together to create a first map of the human connectome. Similar projects are planned or already under way in Europe and Asia.

“People are coming around to the idea that mapping the connectome is not only technically feasible but also very important to do,” Sporns said. “It’s a fundamental step towards understanding the brain as a networked system. Networks are everywhere these days, found in technology, social media and economics, ecology and systems biology — they’re becoming more and more central in many areas of science. The human brain is perhaps the most challenging example to date.”

Image Caption: This image shows the group connectome, with the nodes and connections colored according to their rich-club participation. Green represents few connections. Red represents the most. Credit: Reprinted with permission: Van den Heuvel, et al. The Journal of Neuroscience 2011


Bisexual Men: When Sexual Health Requires Stealth

Bisexual men have unique health needs compared to exclusively homosexual and heterosexual men, but the stigma they face makes learning of their needs — and even reaching these men in their “hidden communities” — difficult for public health professionals, say Indiana University researchers.

The reported need for privacy, stemming from perceived stigma and a lack of acceptance in both homosexual and heterosexual communities, is so pervasive that bisexual men often do not feel comfortable accessing sexual health-related services, even those targeted toward “gay and bisexual men,” out of concern over what others would think of their bisexuality. The researchers concluded that a more general approach to providing services, framed as “men’s health” or “men’s sexual health,” is likely to be more effective.

“In terms of designing a specific program for behaviorally bisexual men, we’ve learned it will not be effective to openly advertise about it or put it on billboards; we have to be more discreet,” said Brian Dodge, associate director of the Center for Sexual Health Promotion at IU. Dodge has studied bisexual behavior and associated health needs for nearly 10 years, yet these findings from his recent study were “surprising.”

“The fear of disclosure, desire for privacy, and anticipation of stigma are even more problematic than we anticipated,” he said. “The reasons for these issues eventually need to be addressed not only with bisexual men but also at the societal level if we are to increase participation in effective health services without operating in stealth.”

This and three other studies discussed at the American Public Health Association’s annual meeting are part of a larger study by Dodge and his collaborators, who are looking at health issues specific to bisexual men. The approach is unusual in that most previous behavioral science research has grouped bisexual men together with gay men.

The IU research involved 75 men in the Indianapolis area who had sex with at least one man and one woman within the previous six months. The participants each underwent in-depth interviews, 15 of which were conducted in Spanish. Of the participants, 25 were black, 25 were white and 25 were Latino.

Dodge’s study “Administering Sexual Health-Related Services to Bisexual Men: Privacy, Trust and Appropriate Messaging” was the recipient of the annual Excellence in Abstract Submission award from the HIV/AIDS Section of the APHA. Dodge is delivering oral presentations about this, as well as “Community Based Research in ‘Hidden’ Communities: Understanding Individual and Social Health Concerns among Bisexual Men” and a poster presentation, “Sexual Behaviors and Experiences among Bisexual Men in the Midwestern United States.” Co-investigator Omar Martinez is also presenting on issues specific to Latino participants in his talk “Sexual Health and Access to Care: Voices from Bisexual Latino Men in the Midwestern United States.”

Dodge, an associate professor in the Department of Applied Health Science in IU’s School of Health, Physical Education and Recreation, will discuss “Administering Sexual Health-Related Services” Tuesday, Nov. 1, at 10:30 a.m. in the Washington Convention Center. Co-authors are Phillip Schnarrs, Gabriel Goncalves, Michael Reece and Omar Martinez, all with the IU School of HPER; David Malebranche of Emory University School of Medicine; Ryan Nix of Step Up Inc. in Indianapolis; Barbara Van Der Pol of IU School of HPER; and J. Dennis Fortenberry of the IU School of Medicine.


BRCA Family History Not Always A Greater Risk Factor For Breast Cancer

US researchers on Monday revealed that women do not automatically have a greater risk of developing breast cancer just because someone in their family has tested positive for a breast cancer gene mutation.

The findings are based on analysis of more than 3,000 families including women with breast cancer. Researchers found that close relatives of women who carry mutations in a BRCA gene — but do not have the mutation themselves — do not have an increased risk of developing breast cancer compared to relatives of women with breast cancer who do not have the mutations.

The results support most previous evidence regarding risks for non-carriers of BRCA mutations; however, they contradict a 2007 study showing that first-degree relatives of women with BRCA gene mutations are several times more likely than the general population to develop breast cancer — despite not having the mutation themselves.

The new finding suggests that women who test negative for the BRCA gene mutation may not need extra cancer screening and other preventative treatments.

“The results are encouraging and reassuring,” said Dr. Allison Kurian of Stanford University School of Medicine, whose study appears in the Journal of Clinical Oncology.

Women in the United States have, on average, a 12 to 13 percent chance of developing breast cancer in their lifetime. About 5 to 10 percent of breast cancers are hereditary, and most of those cases are caused by abnormalities in the BRCA1 and BRCA2 genes. Women with these mutations have a 5 to 20 times greater risk of developing breast or ovarian cancer and are advised to take precautions to reduce that risk. Many elect to have their breasts or ovaries removed to prevent cancer development.

Once these mutations show up in a woman, it is important for other female family members to be screened as well.

In the past, women who test negative have traditionally been told they have the same risk as women in the general population. But the 2007 study contradicted that advice, causing widespread alarm among physicians and patients alike. Kurian believed those study findings to be false-positives and wanted to ease women’s minds.

For the study, Kurian and colleagues studied women with breast cancer in 3,047 families in three population-based cancer registries in Northern California (1,214), Australia (799) and Canada (1,034) through a consortium called the Breast Cancer Family Registry. They found 292 families in which a woman had a BRCA mutation.

They compared the risk of breast cancer among first-degree relatives (mothers, sisters, daughters) of breast cancer patients who did and did not carry a BRCA mutation and found no significant difference, meaning that non-carriers of a familial BRCA mutation did not have a markedly elevated risk of developing breast cancer.

They also found that a small percentage (3.4%) of women in the general populations of Australia, Canada and the US who are at highest risk for reasons other than BRCA mutations account for nearly 32 percent of all breast cancer cases. That finding reflects the wide range of factors that can play a role in breast cancer development.
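That concentration of risk is easier to grasp with a little arithmetic. Under a simple two-group model (an assumption for illustration, not the study's method), one can back out the relative risk such a group would need:

```python
# Illustrative back-of-envelope model, not the study's analysis.
p, f = 0.034, 0.32  # share of women in the high-risk group, share of cases
# Two-group model: f = p*rr / (p*rr + (1 - p)), solved for rr.
rr = (f * (1 - p)) / (p * (1 - f))
print(round(rr, 1))  # ~13.4: the group would need ~13x the others' risk
```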

“Earlier reports of higher risk among non-carriers of family-specific BRCA mutations compared to risks in the general population may reflect a comparison of women with and without a family history of breast cancer,” noted study co-author Alice Whittemore, PhD, professor of epidemiology and biostatistics at Stanford University School of Medicine. “The control group we used – relatives of breast cancer patients in families without a BRCA mutation – was important.”

“The results suggest that women who test negative for their family’s BRCA mutation have no greater breast cancer risk than a woman who also has relatives with breast cancer but no family-specific mutation,” said Whittemore.

“First-degree relatives of breast cancer patients are themselves at higher risk than women in the general population. So some reports of higher risk among non-carriers, as compared to risk in the general population, may have been an inappropriate comparison of apples and oranges,” Whittemore added.

“One strength of the current study is the control women it used as a yardstick for comparing the breast cancer incidence in non-carriers of family-specific mutations. The control women were also relatives of breast cancer patients, but of patients without mutations. This is a more appropriate yardstick than average risk in the general population, since close relatives of all breast cancer patients have somewhat higher than average risks,” noted the researchers.

The findings should quell questions about risk based on a familial BRCA mutation. But, “it’s important for patients and clinicians to remember this doesn’t rule out other risk factors, which might increase a non-carrier’s probability of getting breast cancer,” Kurian concluded.


Happy Life Equals Long Life: Study

According to new research by a team from University College London, the happier someone is, the longer they will live.

Researchers studied 3,800 people from ages 52 to 79 and found that those who had the highest rating of happiness were significantly less likely to die in the following five years.

The team took the impact of age, disease and lifestyle factors into account and still found that the happiest group had a 35 percent lower risk of death than the least happy.

“The happiness could be a marker of some other aspect of people’s lives which is particularly important for health,” Professor Andrew Steptoe, who led the study, said in a statement. “For example, happiness is quite strongly linked to good social relationships, and maybe it is things like that that are accounting for the link between happiness and health.”

The team said their study took into account people’s moods at four points on a particular day.

They said this reduced the risk that inaccurate memories of how happy people had been would confound the results, according to The Telegraph.

Happiness was measured by participants answering questions about themselves in several categories on a scale from one to five.

Positive affect was taken as a combination of people’s self-reported scores for happiness, excitement and contentment.

“On a five point scale you could feel four points happy and two points worried at the same time. It is not a single dimension you are looking at, it is much more complicated than that,” Steptoe said in a statement.

The researchers said that five years on from their assessment, just 3.6 percent of the happiest participants had died.

About 4.6 percent of those in the middle happiness group, and 7.3 percent of the group with the lowest positive affect, died during the same period.

The researchers found, after accounting for other medical factors, that the happiest people were more than a third less likely to die.
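As a sanity check on those figures (illustrative arithmetic only, not the paper's survival analysis), the unadjusted gap between the extreme groups is even larger than the adjusted 35 percent:

```python
# Raw death rates reported in the article, happiest vs. least happy group.
happiest, least_happy = 0.036, 0.073
print(f"{1 - happiest / least_happy:.0%}")  # ~51% raw reduction;
# adjusting for age, disease and lifestyle shrinks this to about 35%.
```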

The research was published in the Proceedings of the National Academy of Sciences.


Faster Mapping Of Blood Vessels May Aid Cancer Research

Like normal tissue, tumors thrive on nutrients carried to them by the blood stream. The rapid growth of new blood vessels is a hallmark of cancer, and studies have shown that preventing blood vessel growth can keep tumors from growing, too. To better understand the relationship between cancer and the vascular system, researchers would like to make detailed maps of the complete network of blood vessels in organs. Unfortunately, the current mapping process is time-consuming: using conventional methods, mapping a one-centimeter block of tissue can take months. In a paper published in the October issue of the Optical Society’s (OSA) open-access journal Biomedical Optics Express, computational neuroscientists at Texas A&M University, along with collaborators at the University of Illinois and Kettering University, describe a new system, tested in mouse brain samples, that substantially reduces that time.

The method uses a technique called knife-edge scanning microscopy (KESM). First, blood vessels are filled with ink, and the whole brain sample is embedded in plastic. Next, the plastic block is placed onto an automated vertically moving stage. A diamond knife shaves a very thin slice — one micrometer or less — off the top of the block, imaging the sample line by line at the tip of the knife. Each tiny movement of the stage triggers the camera to take a picture. In this way, the researchers can get the full 3-D structure of the mouse brain’s vascular network — from arteries and veins down to the smallest capillaries — in less than two days at full production speed. In the future the team plans to augment the process with fluorescence imaging, which will allow researchers to link brain structure to function.
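In software terms, the reconstruction stage amounts to stacking thousands of line-scanned slice images into a single 3-D volume. The sketch below is an assumed workflow, not the authors' code, and the file path is hypothetical:

```python
# Assumed post-processing sketch, not the KESM team's software.
import glob
import numpy as np
from PIL import Image

paths = sorted(glob.glob("kesm_slices/*.png"))  # hypothetical: one image per cut
volume = np.stack([np.asarray(Image.open(p).convert("L")) for p in paths])
# Ink-filled vessels image dark against the plastic-embedded tissue,
# so a simple threshold yields a first-pass 3-D vessel mask.
vessels = volume < 50
print(volume.shape, f"vessel fraction: {vessels.mean():.3f}")
```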

Paper: “Fast macro-scale transmission imaging of microvascular networks using KESM,” Biomedical Optics Express, Mayerich et al., Vol. 2, Issue 10, pp. 2888-2896 (2011).

Image 1: This is a complex network of blood vessels in the mouse brain imaged by knife-edge scanning microscopy. The image represents an area about 2.9 millimeters across. Credit: Biomedical Optics Express

Image 2: Reconstruction of a small section from the previous image, showing the relative thickness of each blood vessel in the network (color-coded by thickness). The area depicted in the image is about 0.275 millimeters across. Credit: Biomedical Optics Express


Study: Unprecedented Marketing Of Sugary Drinks To Youth

US children and teenagers are becoming major targets of the soft drink industry, exposed to more advertising and online marketing than ever before, according to a new study from the Yale Rudd Center for Food Policy & Obesity cited in various media reports.

The study is the most comprehensive science-based assessment of sugary drink nutrition and marketing ever conducted.

Data from the study shows that soft drink companies are aggressively targeting young people, especially black and Hispanic youth. Researchers are presenting their findings today during the American Public Health Association’s annual meeting in Washington, DC.

Soft drink ads on TV geared toward children and teens doubled from 2008 to 2010, the report found, with increased marketing from Coca-Cola Co and Dr Pepper Snapple Group Inc.

On a positive note, children were exposed to 22 percent fewer ads from PepsiCo Inc., the study found.

Black children and teens saw 80 to 90 percent more ads than white children, including double the exposure to ads for the energy drink 5-Hour Energy and Coca-Cola’s vitaminwater and Sprite. Hispanic children also saw 49 percent more advertising for sugary and energy drinks on Spanish-language television, while Hispanic teens saw 99 percent more ads.

“Our children are being assaulted by these drinks that are high in sugar and low in nutrition,” Yale’s Kelly Brownell, co-author of the report, told Reuters. “The companies are marketing them in highly aggressive ways.”

The findings come from syndicated data from The Nielsen Company, comScore, Inc., and Arbitron Inc. on 14 beverage companies. They examined the nutritional quality of nearly 600 products, including full-calorie soda, energy drinks, fruit drinks, flavored water, sports drinks, and iced teas. They also examined diet energy drinks and diet children’s fruit drinks.

The Rudd Center also conducted its own independent studies, content analyses and store audits where some information was unavailable from syndicated sources.

“Beverage companies have pledged to improve child-directed advertising,” said lead researcher Jennifer Harris, director of marketing initiatives at the Rudd Center, in a recent statement.

“But we are not seeing a true decrease in marketing exposure,” she said. “Instead companies have shifted from traditional media to newer forms that engage youth through rewards for purchasing sugary drinks, community events, cause-related marketing, promotions, product placements, social media, and smartphones.”

Study co-author Marlene Schwartz, deputy director of the Rudd Center, said: “The beverage industry needs to clean up their youth-directed products: reduce the added sugar, take out the artificial sweeteners, and stop marketing products high in caffeine and sugar to young people. We also need the nutrition facts, including caffeine content, for all beverages, especially energy drinks.”

“Our results clearly show that the beverage industry’s self-regulatory pledges are not working,” said Brownell. “Children are seeing more, not less, marketing for drinks that increase the risk for serious diseases. If the beverage companies want to be considered public health partners, they need to do better.”

About 15 percent of children are overweight or obese, according to the US Centers for Disease Control and Prevention (CDC). Children today are likely to have shorter life spans than their parents, which will affect their ability to work and pay taxes, and potentially drive up healthcare costs.

Even though the American Academy of Pediatrics warns that highly caffeinated energy drinks are not appropriate for children and adolescents, makers of energy drinks such as Red Bull and Amp aggressively market their products to young people, the report said.

Teens saw 18 percent more TV ads and heard 46 percent more radio ads for energy drinks in 2010 than adults did.

Brownell said this new report is the first to combine data from several firms to measure the complete picture of youth exposure to marketing and advertising. He also said it was crucial to consider the online interaction children have with brands, especially since they spend more time online than watching TV.

For example, the report found that 21 sugary drink brands had YouTube channels in 2010 with more than 229 million views by June 2011. Coca-Cola is the most popular brand on Facebook, with more than 30 million fans, the study found.

They also found the most-visited soft drink websites were MyCokeRewards.com and Capri Sun.

The study also examined drinks themselves. They discovered that the average 8-ounce serving of fruit drink has 110 calories and seven teaspoons of sugar — the same amount found in an 8-ounce serving of soda or energy drink.
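The sugar and calorie figures are mutually consistent, as a quick back-of-envelope check shows (the conversion factors are our assumptions: roughly 4.2 grams of sugar per teaspoon and 4 calories per gram of carbohydrate):

```python
teaspoons = 7
grams = teaspoons * 4.2  # ~4.2 g of sugar per teaspoon (assumed)
calories = grams * 4     # ~4 kcal per gram of carbohydrate (assumed)
print(f"{grams:.0f} g, {calories:.0f} kcal")  # ~29 g, ~118 kcal vs. 110 reported
```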

The report was supported by grants from the Robert Wood Johnson Foundation and the Rudd Foundation.


Genetic Testing Could Result In Personalized Cancer Treatment

Advances in DNA testing could open the door for more expensive cancer treatments to be approved, as well as more personalized treatment options to emerge, one expert told Telegraph Medical Correspondent Stephen Adams on Saturday.

Matthew Seymour, director of the National Cancer Research Network (NCRN), told Adams that genetic testing techniques were advancing so rapidly that doctors could soon test patients to find out which types of medicine would work best for them. That, in turn, could encourage regulators to approve costly treatment options, even if they only prove most effective on a handful of patients.

“New drugs are often turned down by the National Institute for Health and Clinical Excellence (NICE), because trials show that — on average — they only enable patients with advanced cancer to live a few weeks longer,” Adams wrote, noting that NICE opted not to approve Avastin for bowel cancer because “trials showed it only prolonged life by six weeks on average.”

Some, however, “lived much longer” thanks to the drug, he added. If medical experts could pinpoint which people would benefit most from medications like Avastin, Seymour says, it could lead to what Adams calls “smart prescriptions” individualized to each patient’s genetic disposition.

“What we are finding is that most people get no benefit, or are even harmed, by new drugs. But we are finding that some receive great benefit,” Seymour, a professor of gastrointestinal cancer medicine at Leeds University, said. “We have to get clever about how to target drugs. Medications for cancer have to be personalized because no two cancers are identical.”

“Most drugs companies now understand that their world has changed, and if they don’t introduce personalized medicine, they are simply not going to get their drugs approved and purchased,” he added.

According to Adams, a study conducted by Seymour illustrates the point. A trial of a chemotherapy drug called panitumumab revealed that it did not “significantly benefit” patients. However, it was later learned that two out of every five people carried a specific gene mutation that kept them from benefitting from the drug, and further work pinpointed the group of cancer patients who benefitted most from the medication, many of whom lived “significantly longer” because of the drug.

The results of the study, which was sponsored by Cancer Research UK, will be presented during the National Cancer Research Institute conference next month, the Telegraph reported, and it marks the first time that genetic testing has been joined directly with a bowel cancer trial.

“This heralds a new generation of clinical research,” Seymour said. “When we have got drugs that only benefit a small number of patients, it is really important to find out who it is that is going to benefit… The idea of treating 100 patients and only a few benefiting is ludicrous. We shouldn’t be doing it anymore.”


Google Announces Plans For Revamped TV Service

The tech giant behind the world’s most popular online search engine is taking another crack at the interactive television market with a redesigned version of its Google TV service, Bloomberg Businessweek reported on Friday.

According to the Bloomberg report, the new version of the software, which displays Internet content on a user’s television screen, will feature an easier-to-use interface intended to encourage customers to try out more of the service’s features, Google VP of Product Management Mario Queiroz said.

“The new version, which also is designed to show the YouTube video-sharing service better, opens up the platform for Android developers to build applications for TV,” the business magazine said in its online report, adding that software upgrades will launch in Sony devices early next week and in Logitech units “soon after that.”

That announcement comes one day after Tim Stevens of Engadget reported seeing Logitech Revue boxes, which he said had “just hit the sales floor of a major electronics retailer” and were “prominently sporting” stickers touting “New & Improved: Google TV with Android 3.1 and Android Market.”

That had briefly led to speculation that the units might already contain the updated software, but Logitech representatives squashed those rumors, as Stevens reported in an update to his original story: “We’d like to clarify that these products do not include the next version of Google TV software. The boxes were prematurely updated with the stickers in anticipation of the next release of the Google TV software, which, once available, will be a free and automatic update pushed to all Logitech Revue boxes that are installed and connected to the Internet.”

When the upgrade does finally arrive, it will reportedly help users find programming more easily while browsing than by typing keywords into a box, Gartner Inc. analyst Van Baker told Businessweek. However, Baker doubted that this new version of Google TV would generate much interest from consumers, saying that it was “still a use model that most consumers don’t really understand.”

“This is one of the early miles of the marathon. We’re running hard, and this is another important step in bringing this functionality to TV,” Queiroz told Bloomberg. “We’re working with a lot of cable providers and with networks to bring whatever content they think is appropriate to TV. Our goal is really to bring content that adds value as opposed to replicate redundant content.”


Report: Lions, Tigers, Cheetahs Could Be Extinct In 10 Years

Big cats such as lions, tigers, leopards, and cheetahs could be facing extinction within the next two decades, leading conservation groups to call for increased efforts to save them, USA Today’s Dan Vergano reported in a Friday article.

“The populations of lions, leopards, cheetahs and especially tigers have been decimated in the past half-century,” Vergano said, adding that leading scientists report that tigers “have become so rare that lions have become their soup-bone substitutes, sought for Asian medicines and ‘tiger bone’ wine.”

“Do we want to live in a world without lions in the wild?” Luke Dollar, a biologist with Duke University and a member of the National Geographic-sponsored Big Cats Initiative (BCI), told USA Today. “That is the choice we are facing.”

According to estimated statistics from the International Union for Conservation of Nature (IUCN), over the past 50 years lions living in the African wild have decreased from 450,000 to 25,000.

During that same period, leopards have decreased from 750,000 to 50,000, cheetahs from 45,000 to 12,000, and tigers from 50,000 to 3,000 (of which only 1,200 are breeding-age females).

“Lions play a role in keeping migrations going, and keep populations in check,” naturalist Dereck Joubert, the co-founder of the Big Cats Initiative (BCI), told Vergano. “Big predators play a role in keeping prey species vital and alert.”

“The habitat doesn’t recover,” added his wife, BCI co-founder and photographer Beverly Joubert. “We’re left with just hyenas or their equivalent.”

Vergano reports that the Jouberts have spent a quarter of a century making nature documentaries, including “Eternal Enemies: Lions and Hyenas” for PBS. Concerns over the fate of creatures such as the lion and the cheetah led them to contact National Geographic and convince it to redouble its conservation efforts for these and other big cats.

“Over the course of 18 months, the initiative awarded 19 grants to conservation efforts across Africa,” USA Today reported, and their efforts will continue this Monday, as they are encouraging Halloweeners to collect donations while going door-to-door, trick-or-treating. They are also accepting donations via text-message and online through the National Geographic website.

“We are seeing the effects of 7 billion people on the planet,” Dereck Joubert told USA Today. “At present rates, we will lose the big cats in 10 to 15 years.”


Scientists To Revisit Speed Of Light Neutrino Experiment

The scientists who reported particles moving faster than light said on Friday that they would revisit their experiment.

The scientists said that they began a new test earlier this week because critics of the experiment say the results were “a statistical quirk,” according to the AFP news agency.

The team said on September 23 that they had measured neutrinos traveling about 3.75 miles per second faster than the velocity of light, challenging Einstein’s claim that the speed of light is the highest speed possible.
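
To put the reported excess in perspective, a quick calculation with the article's figures (illustrative arithmetic only, not OPERA's timing analysis) shows how small the claimed effect is:

```python
c = 299_792_458          # speed of light, m/s
excess = 3.75 * 1609.34  # reported excess speed, miles/s -> m/s
frac = excess / c
print(f"fractional excess: {frac:.1e}")  # ~2.0e-05, i.e. ~0.002 percent
baseline = 454 * 1609.34  # CERN-to-Gran-Sasso flight path, in meters
print(f"early arrival: {baseline / c * frac * 1e9:.0f} ns")  # tens of ns
```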

Einstein said that nothing should be able to travel faster than light, and evidence that neutrinos were capable of doing so would have a fundamental impact on understanding the universe.

The neutrinos were fired from the European Center for Nuclear Research (CERN) near Geneva to a detector at Italy’s Gran Sasso laboratory, some 454 miles away.

The scientists went over the results of the OPERA experiment for about six months before making the announcement.

They said they were bewildered by their findings and asked for other scientists to help explain what they discovered.

Some critics argue that the timing measurements may have been misread. Dr. Sergio Bertolucci, director of research at CERN, told BBC News that using shorter pulses could help solve this problem.

Professor Matt Strassler of Rutgers University, who identified possible flaws in the original experiment, said the new test would help clarify the data.

Strassler wrote on his blog: “It’s like sending a series of loud and isolated clicks instead of a long blast on a horn; in the latter case you have to figure out exactly when the horn starts and stops, but in the former you just hear each click and then it’s already over.”


Which Is Better? Juice Or Extract

Study shows cranberry juice is better than extract at fighting bacterial infections

With scientific evidence now supporting the age-old wisdom that cranberries, whether in sauce or as juice, prevent urinary tract infections, people have wondered if there was an element of the berry that, if extracted and condensed, perhaps in pill form, would be as effective as drinking the juice or eating cranberry sauce. A new study from researchers at Worcester Polytechnic Institute helps to answer that question.

The study tested proanthocyanidins or PACs, a group of flavonoids found in cranberries. Because they were thought to be the ingredient that gives the juice its infection-fighting properties, PACs have been considered a hopeful target for an effective extract. The new WPI report, however, shows that cranberry juice, itself, is far better at preventing biofilm formation, which is the precursor of infection, than PACs alone. The data is reported in the paper “Impact of Cranberry Juice and Proanthocyanidins on the Ability of Escherichia coli to Form Biofilms,” which will be published online, ahead of print, Oct. 31, 2011, by the journal Food Science and Biotechnology.

“What we have shown is that cranberry juice’s ability to prevent biofilms is more complex than we may have originally thought,” said Terri Camesano, professor of chemical engineering at WPI and senior author on the paper. “For a while, the field focused on these PACs, but the data shows that they aren’t the silver bullet.”

Camesano’s lab explores the mechanisms that the virulent form of E. coli bacteria, the primary cause of most urinary tract infections (UTIs) in people, uses to form biofilms. This strain of E. coli is covered with small hair-like projections known as fimbriae that act like hooks and latch onto cells that line the urinary tract. When enough of the virulent bacteria adhere to cells, they form a biofilm and cause an infection. Previous work by Camesano’s lab has shown that exposure to cranberry juice causes the fimbriae on E. coli to curl up, reducing their ability to attach to urinary tract cells.

In the new study, Camesano’s team, which included graduate student Paola Andrea Pinzón-Arango and intern Kerrie Holguin, incubated two different strains of E. coli in the presence of two different mixtures of commercially available cranberry juice cocktail. They also incubated the bacteria separately in the presence of PACs, but not juice. While the juice cultures completely prevented biofilm formation, the PACs showed only limited ability to reduce biofilm formation, and only after extended exposure to the E. coli.

“Cranberries have been recognized for their health benefits for a number of years, especially in the prevention of UTIs,” the authors write in the new paper. “While the mechanisms of action of cranberry products on bacterial adhesion and biofilm formation are not fully understood… this study shows that cranberry juice is better at inhibiting biofilm formation than isolated A-type cranberry flavonoids and PACs, although the reasons for this are not yet clear.”

The research detailed in the current study was supported by grants from the National Institutes of Health, the National Science Foundation, the Cranberry Institute, and the Wisconsin Cranberry Board.


Major Companies Hit By Massive Hack Attack

Several major companies have been found communicating with a hacker-run command and control server, indicating that their networks were compromised. The list of companies includes nearly 20 percent of the Fortune 100 and other massive corporations, according to CNN’s Money website.

A command and control server is a computer that hackers use to direct the fleets of compromised PCs they have gained control over. Periodically, the infected PCs check in with the command and control server, giving away access to secrets.

The attack was originally disclosed by the security company RSA in March after it discovered its network had been breached. The attack received worldwide attention and highlights the challenges faced in detecting and blocking such cyber attacks. The companies attacked cover a wide range of fields, from telecommunications companies to financial investment houses. The attacks appear to have started in November 2010, according to Krebs on Security.

Some of the companies, though, may not have been directly attacked. For instance, Google and Amazon are listed as victims, but they may have been implicated by insecure computers that connected through their Internet Domain Name Service (DNS) systems, which help people surf the web.

Other technology giants that were compromised include Intel, IBM, Facebook and Microsoft. The list of companies was discovered on the command and control server itself. Some entries, such as McAfee and other anti-virus and computer security companies, probably reflect researchers deliberately infecting their own computers in order to reverse engineer the malware used in these attacks.

Krebs on Security notes that of the 338 documented command and control servers, 299 are located in or around Beijing, China, while the next largest source, South Korea, accounts for a total of 16.


Graphene Grows Better On Certain Copper Crystals

University of Illinois engineers have made new observations that could improve industrial production of high-quality graphene, hastening the era of graphene-based consumer electronics.

By combining data from several imaging techniques, the team found that the quality of graphene depends on the crystal structure of the copper substrate it grows on. Led by electrical and computer engineering professors Joseph Lyding and Eric Pop, the researchers published their findings in the journal Nano Letters.

“Graphene is a very important material,” Lyding said. “The future of electronics may depend on it. The quality of its production is one of the key unsolved problems in nanotechnology. This is a step in the direction of solving that problem.”

To produce large sheets of graphene, methane gas is piped into a furnace containing a sheet of copper foil. When the methane strikes the copper, the carbon-hydrogen bonds crack. Hydrogen escapes as gas, while the carbon sticks to the copper surface. The carbon atoms move around until they find each other and bond to make graphene. Copper is an appealing substrate because it is relatively cheap and promotes single-layer graphene growth, which is important for electronics applications.

“It’s a very cost-effective, straightforward way to make graphene on a large scale,” said Joshua Wood, a graduate student and the lead author of the paper.

“However, this does not take into consideration the subtleties of growing graphene,” he said. “Understanding these subtleties is important for making high-quality, high-performance electronics.”

While graphene grown on copper tends to be better than graphene grown on other substrates, it remains riddled with defects and multi-layer sections, precluding high-performance applications. Researchers have speculated that the roughness of the copper surface may affect graphene growth, but the Illinois group found that the copper’s crystal structure is more important.

Copper foils are a patchwork of different crystal structures. As the methane falls onto the foil surface, the shapes of the copper crystals it encounters affect how well the carbon atoms form graphene.

Different crystal orientations are assigned index numbers (Miller indices). Using several advanced imaging techniques, the Illinois team found that patches of copper with higher index numbers tend to have lower-quality graphene growth. They also found that two common crystal structures, numbered (100) and (111), have the worst and the best growth, respectively. The (100) crystals have a cubic shape, with wide gaps between atoms. Meanwhile, (111) has a densely packed hexagonal structure.

“In the (100) configuration the carbon atoms are more likely to stick in the holes in the copper on the atomic level, and then they stack vertically rather than diffusing out and growing laterally,” Wood said. “The (111) surface is hexagonal, and graphene is also hexagonal. It’s not to say there’s a perfect match, but that there’s a preferred match between the surfaces.”

Researchers now are faced with balancing the cost of all (111) copper and the value of high-quality, defect-free graphene. It is possible to produce single-crystal copper, but it is difficult and prohibitively expensive.

The U. of I. team speculates that it may be possible to improve copper foil manufacturing so that it has a higher percentage of (111) crystals. Graphene grown on such foil would not be ideal, but may be “good enough” for most applications.

“The question is, how do you optimize it while still maintaining cost effectiveness for technological applications?” said Pop, a co-author of the paper. “As a community, we’re still writing the cookbook for graphene. We’re constantly refining our techniques, trying out new recipes. As with any technology in its infancy, we are still exploring what works and what doesn’t.”

Next, the researchers hope to use their methodology to study the growth of other two-dimensional materials, including insulators to improve graphene device performance. They also plan to follow up on their observations by growing graphene on single-crystal copper.

“There’s a lot of confusion in the graphene business right now,” Lyding said. “The fact that there is a clear observational difference between these different growth indices helps steer the research and will probably lead to more quantitative experiments as well as better modeling. This paper is funneling things in that direction.”

Lyding and Pop are affiliated with the Beckman Institute for Advanced Science and Technology at the U. of I. The Office of Naval Research, the Air Force Office of Scientific Research, and the Army Research Office supported this research.

Image 1: An illustration of rendered experimental data showing the polycrystalline copper surface and the differing graphene coverages. Graphene grows in a single layer on the (111) copper surface and in islands and multilayers elsewhere. Graphic by Joshua D. Wood

Image 2: Professors Joseph Lyding, left; and Eric Pop, center; and graduate student Josh Wood identified copper crystal structures that work best for growing high-quality graphene. Photo by L. Brian Stauffer

On the Net:

Why Do Some People Gain Weight Back After A Diet?

Obesity studies have shown for years that when overweight people lose weight, their metabolism slows and they experience hormonal changes that increase appetite. New Australian research backs that finding and shows that those changes persist far longer than previously thought.

Scientists theorized that these biological changes could explain why most obese dieters quickly regain much of the weight they had previously lost. They found that for at least a year, subjects who lost weight on a low-calorie diet were hungrier than when they started and had higher levels of hormones telling the body to eat more, conserve energy and store fuel as fat.

The study, published Wednesday in the New England Journal of Medicine, recruited 50 healthy people who were either overweight or obese and put them on a 10-week, highly restricted diet that led them to lose at least 10 percent of their body weight. The participants were then kept on a diet to maintain that weight loss.

On average, participants lost about 30 pounds during the study period, faster than the standard advice of losing 1 to 2 pounds per week. They took in 500 to 550 calories per day on the OptiFast-plus-vegetables diet for eight weeks. During the last two weeks of the diet, they were gradually reintroduced to ordinary foods.

However, only 34 people lost as much as the plan called for and stuck with the study long enough for researchers to analyze the data. After a year, researchers discovered that the participants’ metabolism and hormone levels had not returned to the levels before the study began.

And despite counseling and written advice about how to maintain their new weights, they regained an average of 12 pounds over the following year. Even so, they remained at lower weights than when they started.

The findings suggest that dieters who regained their weight are not only slipping back into old eating habits, but are struggling against a persistent biological urge.

“People who regain weight should not be harsh on themselves, as eating is our most basic instinct,” Joseph Proietto of the University of Melbourne in Australia, lead author of the study, told USA Today in an email.

While it is no surprise that hormone levels changed shortly after the participants lost weight, “what is impressive is that these changes don’t go away,” Dr. Rudolph Leibel, an obesity researcher at Columbia, told the New York Times.

“It is showing something I believe in deeply – it is very hard to lose weight,” Dr. Stephen Bloom, an obesity researcher at Hammersmith Hospital in London, told Gina Kolata of the New York Times. And the reason, he said, is that “your hormones work against you.”

The researchers said that more than one solution to the crisis of obesity will most likely be necessary: “a combination of medicines” that will have to be safe for long-term use. However, drug companies have had a rough time getting weight loss and diet medications approved for market. The US Food and Drug Administration has rejected four different weight-loss drugs over the past four years, and has ordered the withdrawal of one prescription medication (Meridia) that was already on the market.

The study gives us a “very comprehensive” and “really discouraging” look into the breadth of the body’s response to weight loss, said Dr. Daniel Bessesen, an endocrinologist and obesity researcher at University of Colorado’s Denver Health Medical Center. It captures just how many resources the body musters to ensure that pounds are put back on – a long list of hormones that regulate appetite, feelings of fullness after eating and how calories are used.

As part of the study, Proietto and his team checked the blood levels of nine hormones that influence appetite. The major finding came from comparing the hormone levels from before the weight-loss program to one year after it was over. At the end of the study period, researchers found six hormones were still acting in a way that would boost hunger.

One hormone, leptin, which tells the brain how much body fat is present, fell by two-thirds immediately after the participants lost their weight. When leptin falls, appetite increases and metabolism slows. The team found one year out that leptin levels were still a third lower than they were when the study began, and leptin levels increased as subjects regained their weight.

Other hormones, including ghrelin and peptide YY, also changed a year out in a way that made the subjects’ appetites stronger than at the start of the program. Ghrelin levels increased and peptide YY decreased.

The team also had participants rate their hunger levels after meals at the one-year mark, compared to what they reported before the diet program started.

Experts not affiliated with the Australian study said the persistent effect of hormone levels was not surprising, and that it probably had nothing to do with the speed of weight loss.

People who lose less than 10 percent of their body weight would probably show similar hormonal changes, though to a lesser degree, Dr. George Bray of the Pennington Biomedical Research Center in Baton Rouge, Louisiana, told USA Today.

A key message of the study, he added, is that “it’s better not to gain weight than try to lose it.”

The results show that losing weight “is not a neutral event,” and that it is no accident that more than 90 percent of people who lose a lot of weight gain it back, said Leibel. “You are putting your body into a circumstance it will resist,” he said. “You are, in a sense, more metabolically normal when you are at a higher body weight.”

One solution might be to restore hormones to normal levels by giving drugs after dieters lose their weight. But it is possible, said Dr. Jules Hirsch of Rockefeller University, that researchers just do not know enough about obesity to prescribe solutions.

One thing is clear, he told the New York Times: “A vast effort to persuade the public to change its habits just hasn’t prevented or cured obesity.” More knowledge is needed, he added. “Condemning the public for their uncontrollable hedonism and the food industry for its inequities just doesn’t seem to be turning the tide.”

People who lose significant weight not only develop a bigger appetite but also burn fewer calories than normal, creating “a perfect storm for weight regain,” Leibel said. Avoiding weight regain appears to be a fundamentally different problem from losing weight in the first place, he added, and one that researchers should pay more attention to.

The study was supported by the Australian government, medical professional groups and a private foundation. Proietto served on a medical advisory board of Nestle, maker of OptiFast, until last year.

Two-thirds of Americans are overweight or obese, and while obesity rates have begun to stabilize, there hasn’t been any real decline. Public health officials already fear that an entire generation of Americans will suffer poorer health and earlier deaths due to obesity.

On the Net:

Steps Taken Toward Early Diagnosis Of Cancer Of The Large Intestine

Itxaro Perez, a biochemist at the University of the Basque Country (UPV/EHU), has taken steps that could, in the long term, make early diagnosis of cancer of the large intestine feasible. Specifically, she has focused on certain enzymes known as peptidases and their activity (working rate), studying how that activity changes across tissue from different stages of the disease. If these fluctuations can be reliably distinguished, they could in the future help show how to detect this type of cancer early. The line of research has only just begun, but it could provide many keys. The researcher has defended these initial results in a thesis entitled Peptidasen aktibitatearen aldaketak heste lodiko neoplasietan (Changes in the activity of peptidases in the neoplasms of the large intestine).

“Cancer of the large intestine does not display any symptoms until it has reached a fairly advanced stage,” explains Perez. So the challenge for researchers in this discipline is to secure an early diagnosis. Fortunately, this specific disease has characteristics that lend themselves to research and comparisons: “It has an intermediate phase known as an adenoma. This can be regarded as a cancer since uncontrolled cell growth takes place, but it is benign. The fact that it has this intermediate phase is very good for comparison purposes: firstly we can extract healthy tissue, and then from the adenoma, and after that from the cancer itself. By contrast, in the case of other diseases, the cancer is malignant right away and can only be compared with healthy tissue.” This way she has had the chance to observe how the activity of the peptidases evolves when three types of samples are extracted from the intestine (from the colon) of each patient: specifically, from healthy tissue, from an adenoma and from an already developed malignant tumor. 

Blood samples, her greatest contribution

In addition to intestinal samples, Perez has also analyzed the plasma by comparing blood samples from patients suffering this cancer with those of healthy individuals. This is in fact the main contribution of her thesis: the taking of steps to be able to identify evidence of the disease in the blood itself. “Obtaining plasma from the patient is straightforward. If it could help to make an early diagnosis, it would be a very valuable method for clinical applications”, she explained.

Plasma has already been used with the same aim in other types of cancer. As proof of this, Perez has long been conducting research on renal cancer in collaboration with her colleagues at the Department of Physiology at the UPV/EHU.  But this is a field that has not been studied very much in cancer of the large intestine, and considerable differences have been found: “In the kidneys we saw that the activity changes in many of the peptidases, but this does not happen in the large intestine. Some change, others do not. We were not aware of that.”

So a plasma analysis of the peptidases that are susceptible to undergoing changes of activity could, in the long term, become a useful tool for diagnosing cancer of the large intestine. But that is not all: these changes also take place differently depending on the phase or condition that the cancer is in. This means that this analysis can also be used for prognosis purposes.

Although the results obtained do shed some light, more exhaustive research now needs to be undertaken to determine how relevant the peptidases are in the formation, evolution and causes of this type of cancer. For example, the conclusions need to be verified by other means, other characteristics of these enzymes (apart from their activity) need to be studied, etc. In connection with this, Perez and her colleagues will be taking another step forward from next year onwards: “The samples we studied date back to 2007. As five years have now passed, our next piece of research will be focusing on the survival of these patients (how many of them remain alive, among other things). We want to see what happens; above all, what the prognosis is.”

On the Net:

Heart Healthy Broccoli Developed In UK

Shoppers looking for heart-healthy vegetables are now able to choose a new variety of broccoli with increased levels of a key phytonutrient. The new broccoli, which will be known as Beneforté, was developed from publicly-funded research at two of the UK’s world-leading biological research institutes, the Institute of Food Research (IFR) and the John Innes Centre.

Scientists at the two institutes are working to develop our understanding of what it is about broccoli that makes it a particularly healthy food.

Beneforté was specially bred to help ward off heart disease: it contains two to three times the normal amount of glucoraphanin, a compound that breaks down fat in the body and keeps it from clogging the arteries. Glucoraphanin is found in significant amounts only in broccoli, reports the Associated Press (AP).

“Vegetables are a medicine cabinet already,” Richard Mithen told AP. Mithen led the team of scientists at the Institute for Food Research in Norwich, England, that developed the new broccoli. “When you eat this broccoli … you get a reduction in cholesterol in your blood stream.”

To create the “super broccoli,” Mithen and colleagues cross-bred a traditional British broccoli with a wild, bitter Sicilian variety that has no flowery head but a big dose of glucoraphanin. The enhanced hybrid was produced after 14 years of conventional cross-breeding; no genetic modification was used.

Food producers have been seeking ways to inject extra nutrients into foods for some time; calcium-enriched orange juice, fortified sugary cereals and milk with added omega-3 fatty acids are just some of the results.

In Britain, Beneforté is sold as part of a line of vegetables that includes mushrooms with extra vitamin D, and tomatoes and potatoes with added selenium.

Experts suggest that eating foods packed with extra nutrients would probably have only minimal impact compared with healthier lifestyle choices such as quitting smoking and increasing exercise.

Glenys Jones, a nutritionist at Britain’s Medical Research Council, told AP’s Maria Cheng: “Eating this new broccoli is not going to counteract your bad habits.” She also expressed doubt that adding the nutrients in broccoli to more popular foods would improve people’s overall health.

“If you added this to a burger, people might think it’s then a healthy food and eat more burgers, whereas this is not something they should be eating more of,” Jones said. She also thought the price might discourage penny-pinching customers: in the UK, Beneforté costs up to 30 percent more than ordinary broccoli.

On the Net:

Land Animals Suffered Catastrophic Losses After Permian Period

The cataclysmic events that marked the end of the Permian Period some 252 million years ago were a watershed moment in the history of life on Earth. As much as 90 percent of ocean organisms were extinguished, ushering in a new order of marine species, some of which we still see today. But while land dwellers certainly sustained major losses, the extent of extinction and the reshuffling afterward were less clear.

In a paper published in the journal Proceedings of the Royal Society B, researchers at Brown University and the University of Utah undertook an exhaustive specimen-by-specimen analysis to confirm that land-based vertebrates suffered catastrophic losses as the Permian drew to a close. From the ashes, the survivors, a handful of genera labeled “disaster taxa,” were free to roam more or less unimpeded, with few competitors in their respective ecological niches. This lack of competition, the researchers write, caused vicious boom-and-bust cycles in the ecosystems, as external forces wreaked magnified havoc on the tenuous links in the food web. As a result, the scientists conclude from the fossil record that terrestrial ecosystems took up to 8 million years to rebound fully from the mass extinction through incremental evolution and speciation.

“It means the (terrestrial ecosystems) were more subject to greater risk of collapse because there were fewer links” in the food web, said Jessica Whiteside, assistant professor of geological sciences at Brown and co-author on the paper.

The boom-and-bust cycles that marked land-based ecosystems’ erratic rebound were like “mini-extinction events and recoveries,” said Randall Irmis, a co-author on the paper, who is a curator of paleontology at the Natural History Museum of Utah and an assistant professor of geology and geophysics at Utah.

The hypothesis, in essence, places ecosystems’ recovery post-Permian squarely on the repopulation and diversification of species, rather than on an outside event, such as a smoothing out of climate. The analysis mirrors the conclusions reached by Whiteside in a paper published last year in Geology, in which she and a colleague argued that it took up to 10 million years after the end-Permian mass extinction for enough species to repopulate the ocean – restoring the food web – for the marine ecosystem to stabilize.

“It really is the same pattern” with land-based ecosystems as marine environments, Whiteside said. The same seems to hold true for plants, she added.

Some studies have argued that continued volcanism following the end-Permian extinction kept ecosystems’ recovery at bay, but Whiteside and Irmis say there’s no physical evidence of such activity.

The researchers examined nearly 8,600 specimens, from near the end of the Permian to the middle Triassic, roughly 260 million to 242 million years ago. The fossils came from sites in the southern Ural Mountains of Russia and from the Karoo Basin in South Africa. The specimen count and analysis indicated that approximately 78 percent of land-based vertebrate genera perished in the end-Permian mass extinction. Out of the rubble emerged just a few species, the disaster taxa. One of these was Lystrosaurus, a dicynodont synapsid (related to mammals) about the size of a German shepherd. This creature barely registered during the Permian but dominated the ecosystem following the end-Permian extinction, the fossil record showed. Why Lystrosaurus survived the cataclysm when most others did not is a mystery, perhaps a combination of luck and not being picky about what it ate or where it lived. Similarly, a reptilian group, the procolophonids, was mostly absent leading up to the end-Permian extinction, yet exploded onto the scene afterward.

“Comparison with previous food-web modeling studies suggests this low diversity and prevalence of just a few taxa meant that links in the food web were few, causing instability in the ecosystem and making it susceptible to boom-bust cycles and further extinction,” Whiteside said.

The ecosystems that emerged from the extinction had such low animal diversity that they were especially vulnerable to crashes spawned by environmental and other changes, the authors write. Only after species richness and evenness had been re-established, restoring enough population numbers and redundancy to the food web, did the terrestrial ecosystem fully recover. At that point, the carbon cycle, a broad indicator of life and death as well as the effect of outside influences, stabilized, the researchers note, using data from previous studies of carbon isotopes spanning the Permian and Triassic periods.

“These results are consistent with the idea that the fluctuating carbon cycle reflects the unstable ecosystems in the aftermath of the extinction event,” Whiteside said.

The National Science Foundation and the University of Utah funded the work. Reporters and the general public have free access to the manuscript through an award from the University of Utah J. Willard Marriott Library Open Access Publishing Fund.

Image 1: Lystrosaurus, a relative to mammals, was one of a handful of “disaster taxa” to escape from the rubble of the Permian Period, along with the meter-high spore-tree Pleuromeia. Low diversity of animals delayed the full recovery of land ecosystems by millions of years. Credit: Victor Leshyk

Image 2: Jessica Whiteside, collecting specimens at Ghost Ranch, N.M. A low diversity of species leads to boom-bust cycles in the food supply. Credit: Randall Irmis

On the Net:

New Smart Thermostat Cuts Energy Usage 20 To 30 Percent

Former Apple and Google engineers are launching a new company that will sell a new type of thermostat, the “Nest Leaf.”

Nest Labs’ new thermostat has smart technology that learns its owner’s settings after a few days of use.

The Nest Leaf is also expected to cut energy usage by 20 to 30 percent simply by keeping to a set schedule.

The new device can be networked over Wi-Fi, so people can program it from an iOS or Android smartphone.

A motion sensor on the device will allow the thermostat to “see” when people are usually in a room.

The device’s software uses artificial intelligence to improve home efficiency and utilizes a user friendly interface to allow easy communication with owners.

Nest Labs CEO and co-founder Tony Fadell led the team that created the first 18 generations of the iPod and the first three generations of Apple’s iPhone.

Yoky Matsuoka, Nest Labs Vice President of Technology, served as the Head of Innovation at Google and Professor of Computer Science and Engineering at the University of Washington.

“So, we created Nest Labs and began recruiting many of the amazing people we’ve gotten to know and work with in the Valley over the years,” Fadell said in a blog post. “They were as excited as we were to reinvent such an important yet unloved device that would make people’s lives easier and hopefully, make the world a little better too.”

On the Net:

What Makes Humans And Chimps Different?

For years, scientists believed the vast phenotypic differences between humans and chimpanzees would be easily explained — the two species must have significantly different genetic makeups. However, when their genomes were later sequenced, researchers were surprised to learn that the DNA sequences of human and chimpanzee genes are nearly identical. What then is responsible for the many morphological and behavioral differences between the two species? Researchers at the Georgia Institute of Technology have now determined that the insertion and deletion of large pieces of DNA near genes are highly variable between humans and chimpanzees and may account for major differences between the two species.

The research team led by Georgia Tech Professor of Biology John McDonald has verified that while the DNA sequence of genes between humans and chimpanzees is nearly identical, there are large genomic “gaps” in areas adjacent to genes that can affect the extent to which genes are “turned on” and “turned off.” The research shows that these genomic “gaps” between the two species are predominantly due to the insertion or deletion (INDEL) of viral-like sequences called retrotransposons that are known to comprise about half of the genomes of both species. The findings are reported in the most recent issue of the online, open-access journal Mobile DNA.

“These genetic gaps have primarily been caused by the activity of retroviral-like transposable element sequences,” said McDonald. “Transposable elements were once considered ‘junk DNA’ with little or no function. Now it appears that they may be one of the major reasons why we are so different from chimpanzees.”

McDonald’s research team, comprised of graduate students Nalini Polavarapu, Gaurav Arora and Vinay Mittal, examined the genomic gaps in both species and determined that they are significantly correlated with differences in gene expression reported previously by researchers at the Max Planck Institute for Evolutionary Anthropology in Germany.

“Our findings are generally consistent with the notion that the morphological and behavioral differences between humans and chimpanzees are predominately due to differences in the regulation of genes rather than to differences in the sequence of the genes themselves,” said McDonald.

The current analysis of the genetic differences between humans and chimpanzees was motivated by the group’s previously published findings (2009) that the higher propensity for cancer in humans vs. chimpanzees may have been a by-product of selection for increased brain size in humans.

On the Net:

Coffee Linked With Decreased Risk For Skin Cancer

New research announced on Monday offers evidence that drinking coffee is associated with a decreased risk of a common and slow-growing type of skin cancer known as basal cell carcinoma.

Scientists from Brigham and Women’s Hospital and Harvard Medical School in Boston examined the risks of basal cell carcinoma (BCC), squamous cell carcinoma (SCC) and melanoma in connection with coffee consumption and found the decreased risk was only seen in BCC.

The team presented their findings at the 10th American Association for Cancer Research International Conference on Frontiers in Cancer Prevention Research, held October 22 to 25.

They found that women in the study who drank more than three cups of coffee per day were 20 percent less likely to develop BCC than those who drank less than a cup per day. Men who consumed more than three cups per day showed a 9 percent reduction in risk of BCC.

Data for the study came from the Nurses’ Health Study, which followed 72,921 people between 1984 and 2008, and the Health Professionals Follow-Up Study, which followed 39,976 people between 1986 and 2008. The researchers found 25,480 skin cancer cases, of which BCC represented 22,786, or about 89 percent.

BCC rarely spreads to other parts of the body, and rarely returns if promptly removed. Still, any health benefit that comes from our diet is a huge bonus, the researchers said.

“Given the nearly 1 million new cases of BCC diagnosed each year in the United States, daily dietary factors with even small protective effects may have great public health impact,” said Fengju Song, Ph.D., from Brigham and Women’s Hospital and Harvard Medical School, and lead author of the study. “Our study indicates that coffee consumption may be an important option to help prevent BCC.”

Song said one limiting factor in the study, however, was that participants were in the healthcare field and may have had better habits than the average person.

The researchers noted that they found an association, but not a direct cause-effect link. Song said further research is needed to confirm the findings and probe how coffee may act to reduce the risk of BCC.

Song and colleagues were surprised by the inverse connection in BCC cases only. Previous studies in animals had suggested a link between caffeine consumption and a reduced risk of skin cancer, but studies in people had not been conclusive.

“Mouse studies have shown that oral or topical caffeine promotes elimination of UV-damaged keratinocytes via apoptosis (programmed cell death) and markedly reduces subsequent SCC development,” Song said. “However, in our cohort analysis, we did not find any inverse association between coffee consumption and the risk for SCC.”

Consumption of coffee has also been linked to a reduced risk of breast cancer and prostate cancer and cancer overall. “To the best of our knowledge, coffee consumption is a healthy habit,” Song said.

The biggest risk factor for skin cancer is exposure to UV radiation. For those who spend prolonged periods in the sun, the researchers suggest, applying sunscreen does far more to reduce skin cancer risk than drinking an extra cup of coffee.

BCC is the most common form of skin cancer in the United States. The Skin Cancer Foundation and the American Cancer Society offer more information about basal cell carcinomas.

The study has yet to be published.

On the Net:

Five Soft Drinks A Week Can Lead To More Teenage Violence

A new study suggests that teens who drink more than five cans of soft drinks a week are significantly more likely to behave aggressively.

The researchers studied 1,878 teens from 22 public schools in Boston, Massachusetts who were part of the Boston Youth Survey.

The teens were asked how many carbonated non-diet soft drinks they had drunk over the past seven days.

The team then divided the responses up into two groups, those drinking up to four cans over the preceding week and those drinking five or more. 

The team looked at potential links to violent behavior in the group by asking if they had been violent towards their peers, a sibling, or a partner.

Responses were assessed in the light of factors likely to influence the results, including age and gender, alcohol consumption and how much sleep they had the night before.

The study found that those who drank 5 or more cans every week were more likely to have drunk alcohol and smoked at least once in the previous month.

The results also showed that heavy use of carbonated non-diet soft drinks was associated with carrying a gun or knife and having a violent attitude towards peers, family members and partners.

The researchers found that 23 percent of those drinking one or no cans of soft drinks a week carried a gun/knife, while 43 percent of those drinking 14 or more cans carried a weapon. 

Twenty-seven percent of the participants who said they drink 14 or more soft drinks a week showed more violence towards a partner as well.

The team said the probability of aggressive behavior was 9 to 15 percentage points higher for teens who were heavy consumers of non-diet carbonated soft drinks.

“There may be a direct cause-and-effect-relationship, perhaps due to the sugar or caffeine content of soft drinks, or there may be other factors, unaccounted for in our analyses, that cause both high soft drink consumption and aggression,” the authors wrote in research published online in Injury Prevention.

On the Net:

Scientists Claim Earth’s Temperature Is Rising

A new analysis by a group of scientists claims that the Earth’s surface is getting warmer.

Scientists with the Berkeley Earth Project used new methods and new data and found the same warming trend seen by groups like the U.K. Met Office and NASA.

The project received funds from sources that back organizations lobbying against action on climate change.

The group said it has also found evidence that changing sea temperatures in the north Atlantic may be a major reason for the Earth’s average temperature varying globally from year to year.

University of California physics professor Richard Muller gathered a team of 10 scientists, including Saul Perlmutter, winner of this year’s Nobel Prize in Physics.

The group’s work examined claims from “skeptical” bloggers that temperature data from weather stations did not show a true global warming trend.

The claim was that many stations have registered warming because they are located in or near cities, and those cities have been growing.  This is known as the urban heat island effect.

The group located about 40,000 weather stations around the world whose output had been recorded and stored in digital form.

The Berkeley group then developed a new way of analyzing the data to plot the global temperature since 1800.
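
The article does not spell out that analysis, and Berkeley Earth’s published method is considerably more sophisticated, but the basic idea behind combining tens of thousands of station records can be sketched simply: convert each station’s readings into anomalies relative to that station’s own long-term average, so that warm-climate and cold-climate stations become comparable, then average the anomalies year by year. The sketch below is a minimal, hypothetical illustration of that general approach, not the Berkeley group’s actual algorithm; the station names and temperatures are invented.

```python
from collections import defaultdict

# Hypothetical station records: {station: {year: mean temperature in C}}.
records = {
    "station_a": {1990: 14.1, 1991: 14.3, 1992: 14.0, 1993: 14.5},
    "station_b": {1990: 3.2, 1991: 3.5, 1992: 3.1, 1993: 3.8},
}

def yearly_anomalies(records):
    """Average, per year, each station's deviation from its own mean."""
    sums, counts = defaultdict(float), defaultdict(int)
    for temps in records.values():
        baseline = sum(temps.values()) / len(temps)  # station's own mean
        for year, t in temps.items():
            sums[year] += t - baseline
            counts[year] += 1
    return {year: sums[year] / counts[year] for year in sorted(sums)}

print(yearly_anomalies(records))
# -> approximately {1990: -0.1625, 1991: 0.0875, 1992: -0.2625, 1993: 0.3375}
```

Working in anomalies rather than absolute temperatures is what lets a mountaintop station be merged with a desert one; a real analysis must also handle data gaps, station moves and uneven geographic coverage, challenges the group’s new statistical methods were built to address.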

“Our biggest surprise was that the new results agreed so closely with the warming values published previously by other teams in the US and the UK,” Muller said in a press release.

“This confirms that these studies were done carefully and that potential biases identified by climate change skeptics did not seriously affect their conclusions.”

The group found that since the 1950s, the average temperature over land increased by 1.8 degrees Fahrenheit.

The team also found that the urban heat island effect does not contribute significantly to average land temperature rises as a whole because urban areas make up less than 1 percent of the Earth’s land area.

On the Net:

Pig Organs Could Be Transplanted Into Humans Within 2 Years

Scientists said that organs grown in genetically modified pigs could be transplanted into humans within as little as two years.

University of Pittsburgh scientists say that a trial transplanting pig corneas into humans with eye problems could begin as early as 2013.

“With new genetically modified pigs becoming available that are likely to improve the outcome of cellular and corneal xenotransplantation further, we believe that clinical trials will be justified within the next two to three years,” the authors wrote in the journal The Lancet.

The researchers said transplants of larger organs such as lungs, hearts and kidneys are likely to take longer due to problems with clots forming.

“These problems mean that the longest survival time for pig organs in non-human primates to date ranges from a few days for lungs to around six to eight months for hearts, and trials of solid organ transplants of this nature in humans are likely to be several years away,” they wrote.

However, they said that in dire situations, a heart transplant from a pig might be feasible.

“Life-saving transplants of a pig liver or heart could be justified as a bridge until a human organ becomes available.”

On the Net:

University Of Iowa, NYU Biologists Describe Key Mechanism In Early Embryo Development

New York University and University of Iowa biologists have identified a key mechanism controlling early embryonic development that is critical in determining how structures such as appendages (arms and legs in humans) grow in the right place and at the right time.

In a paper published in the journal PLoS Genetics, John Manak, an assistant professor of biology in the UI College of Liberal Arts and Sciences, and Chris Rushlow, a professor in NYU’s Department of Biology, write that much research has focused on the spatial regulatory networks that control early developmental processes. However, they note, less attention has been paid to how such networks can be precisely coordinated over time.

Rushlow and Manak find that a protein called Zelda is responsible for turning on groups of genes essential to development in an exquisitely coordinated fashion.

“Zelda does more than initiate gene networks; it orchestrates their activities so that the embryo undergoes developmental processes in a robust manner at the proper time and in the correct order,” says Rushlow, part of NYU’s Center for Developmental Genetics.

“Our results demonstrate the significance of a timing mechanism in coordinating regulatory gene networks during early development, and bring a new perspective to classical concepts of how spatial regulation can be achieved,” says Manak, who is also assistant professor of pediatrics in the Roy J. and Lucille A. Carver College of Medicine and researcher in the UI Roy J. Carver Center for Genomics.

The researchers note that their findings break new ground.

“We discovered a key transcriptional regulator, Zelda, which is the long-sought-after factor that activates the early zygotic genome,” says Rushlow.

“Initially, the embryo relies on maternally deposited gene products to begin developing, and the transition to dependence on its own zygotic genome is called the maternal-to-zygotic transition,” she adds. “Two hallmark events that occur during this transition are zygotic gene transcription and maternal RNA degradation, and interestingly, Zelda appears to be involved in both processes.”

The research showed that when Zelda was absent, gene activation was delayed, interfering with the proper order of gene interactions and ultimately disrupting gene expression patterns. The consequence for the embryo of these altered patterns, the researchers noted, is a drastic change in the body plan, such that many tissues and organs are not formed properly, if at all.

The researchers used Drosophila, or fruit flies, to investigate these regulatory networks. The fruit fly has the advantage of being a tractable genetic model system with a rapid developmental time, and many of the genetic processes identified in flies are conserved in humans. Additionally, pioneering fly research has led to many of the key discoveries of the molecular mechanisms underlying developmental processes in complex animals.

The study brought together Rushlow, who discovered Zelda and is an expert in genetic regulatory networks in development, and Manak, a genomics expert whose laboratory focuses on how a genome is constructed and coordinately functions.

“I had always wanted to work with Chris, and this was a wonderful opportunity for us to combine our complementary areas of expertise in a truly synergistic fashion,” says Manak.

“Our collaboration is a marvelous example of how a problem can be viewed from two different perspectives, a systems view of early gene networks and an individualistic view of single genes and single embryos, and result in novel and significant discoveries,” says Rushlow.

On the Net:

Too Much Stress Can Lead To Higher Mortality Rates

[ Watch the Video ]
A new study concludes that men who experience persistently moderate or high levels of stressful life events over a number of years have a 50 percent higher mortality rate.
In general, the researchers found only a few protective factors against these higher levels of stress — people who self-reported that they had good health tended to live longer and married men also fared better. Moderate drinkers also lived longer than non-drinkers.
“Being a teetotaler and a smoker were risk factors for mortality,” said Carolyn Aldwin, lead author of the study and a professor of human development and family sciences at Oregon State University. “So perhaps trying to keep your major stress events to a minimum, being married and having a glass of wine every night is the secret to a long life.”
This is the first study to show a direct link between stress trajectories and mortality in an aging population. Unlike previous studies that were conducted in a relatively short term with smaller sample sizes, this study was modified to document major stressors — such as death of a spouse or putting a parent into a retirement home — that specifically affect middle-aged and older people.
“Most studies look at typical stress events that are geared at younger people, such as graduation, losing a job, having your first child,” Aldwin said. “I modified the stress measure to reflect the kinds of stress that we know impacts us more as we age, and even we were surprised at how strong the correlation between stress trajectories and mortality was.”
Aldwin said that previous studies examined stress only at one time point, while this study documented patterns of stress over a number of years.
The study, out now in the Journal of Aging Research, used longitudinal data surveying almost 1,000 middle-class and working-class men for an 18-year period, from 1985 to 2003. All the men in the study were picked because they had good health when they first signed up to be part of the Boston VA Normative Aging Study in the 1960s.
Those in the low-stress group experienced an average of two or fewer major life events in a year, compared with an average of three for the moderate group and up to six for the high stress group. One of the study’s most surprising findings was that the mortality risk was similar for the moderate versus high stress group.
“It seems there is a threshold; perhaps with anything more than two major life events a year, people just max out,” Aldwin said. “We were surprised the effect was not linear and that the moderate group had a similar risk of death to the high-risk group.”
While this study looked specifically at major life events and stress trajectories, Aldwin said the research group will next explore chronic daily stress as well as coping strategies.
“People are hardy, and they can deal with a few major stress events each year,” Aldwin said. “But our research suggests that long-term, even moderate stress can have lethal effects.”
Michael Levenson, Heidi Igarashi, Nuoo-Ting Molitor and John Molitor with Oregon State University and Avron Spiro III with Boston University all contributed to this study, which was funded by the National Institute on Aging as well as an award from the U.S. Department of Veterans Affairs.

On the Net:

IQ Can Change Significantly During Teen Years

New research funded by the Wellcome Trust suggests that IQ scores in teenagers can dramatically change in conjunction with changes that occur in the brain, according to various media reports.

IQ, the standard measure of intelligence, has been thought to remain stable across a person’s life, and childhood scores are often used to predict education outcome and job prospects as an adult. However, based on findings from the study, researchers caution against using the 11+ exam for grammar school entrance to predict academic ability.

“A testing industry has developed around the notion that IQ is relatively fixed and pretty well set in the early years of life. This study shows in a compelling way that meaningful changes can occur throughout the teenage years,” Robert Sternberg from the Oklahoma State University, who studies intelligence but was not involved in the research, told the Guardian.

“People who are mentally active and alert will likely benefit, and the couch potatoes who do not exercise themselves intellectually will pay a price,” he added.

The researchers found that mental ability of teenagers can improve or decline on a far greater scale than previously thought. Tests conducted on teenagers at an average age of 14 and then repeated at an average age of 18, showed improvements — and deterioration.

These results have implications for how pupils are assessed, and the age at which decisions about their futures are made.

The study, led by Professor Cathy Price from the Wellcome Trust Center for Neuroimaging at University College London, and published in the journal Nature, involved 19 boys and 14 girls, all undergoing a combination of brain scans and verbal and non-verbal IQ tests in 2004, and then again in 2008.

“We found a considerable amount of change in how our subjects performed on the IQ tests in 2008 compared to four years earlier,” explained UCL’s Sue Ramsden, co-author of the study. “Some subjects performed markedly better but some performed considerably worse. We found a clear correlation between this change in performance and changes in the structure of their brains and so can say with some certainty that these changes in IQ are real.”

Price noted that the average of all scores stayed the same across the 4-year period, but individual scores rose or fell by as many as 21 points, enough of a difference to either take a person of “average” intelligence to “gifted” status, or vice versa.

The results showed that a change in verbal IQ was found in 39 percent of the teens, with 21 percent showing a change in “performance IQ” — a test of spatial reasoning.

“On average it all washes out, but there are fluctuations from individual to individual,” Price told Guardian reporter Ed Yong.

The teens split evenly between those whose IQ improved and those whose IQ worsened. “It was not the case that young low performers got better, and the young high performers averaged out. Some highs got even better, and some lows got even worse,” Price noted.

The brain scans showed that the drifting IQ scores were mirrored by changes in the density of nerve cells and other tissue in particular parts of the brain, suggesting the drifts reflect real changes in ability rather than variations in concentration, mood or motivation.

The findings are seen to have greater validity because for the first time the variations in IQ correlated with changes in two particular areas of the brain.

Shifts in verbal IQ — including memory, vocabulary, arithmetic and general knowledge — were reflected in the left motor cortex, the area of the brain used for speech. Shifts in non-verbal IQ — problem solving and the ability to spot patterns — came with changes in the anterior cerebellum, which correlates to hand movements.

“We have a tendency to assess children and determine the course of their education relatively early in life,” said Price in a press release. “But here we have shown that their intelligence is likely to be still developing.”

“We have to be careful not to write off poorer performers at an early age when in fact their IQ may improve significantly given a few more years,” she added.

Price said it is not clear why IQ should have changed so much and why some people’s performance improved whilst others’ declined. It is possible that the differences are due to some of the subjects being early or late developers, but it is equally possible that education played a role in changing IQ, and this has implications for how schoolchildren are assessed.

The researchers did not seek to understand the causes of these changes.

“The question is, if our brain structure can change throughout our adult lives, can our IQ also change?” asks Price. “My guess is yes. There is plenty of evidence to suggest that our brains can adapt and their structure changes, even in adulthood.”

One of the Wellcome Trust’s biggest strategic challenges is ‘understanding the brain.’ It funds a portfolio of neuroscience and mental health research. Scientists at the Wellcome Trust Center for Neuroimaging study higher cognitive function to understand how thought and perception arise from brain activity, and how such processes break down in neurological and psychiatric disease.

“This interesting study highlights how ‘plastic’ the human brain is,” said Dr John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust. “It will be interesting to see whether structural changes as we grow and develop extend beyond IQ to other cognitive functions. This study challenges us to think about these observations and how they may be applied to gain insight into what might happen when individuals succumb to mental health disorders.”

Future work may focus on how adaptable the brain may be beyond teenage years, and the implications for tackling mental diseases and other neurological conditions.

The study contradicts a long-standing view of intelligence as fixed. Alfred Binet, father of modern intelligence tests, believed mental development ended at 16, while child psychologist Jean Piaget thought it ended even earlier.

On the Net:

Researchers Find Genes From Human-Primate Split Active In Brain

A new analysis has found that genes which appeared after the human lineage split from other primates are more likely to be expressed in the developing human brain.
Researchers believe that evolutionarily recent genes may be responsible for constructing the uniquely powerful human brain.
“We found that there is a correlation between new gene origination and the evolution of the brain,” senior author Manyuan Long, PhD, Professor of Ecology & Evolution at the University of Chicago, wrote in the journal PLoS Biology. “There are some 50 to 60 human-specific genes in the frontal cortex of the brain, the part that makes humans diverge with other non-human primates.”
Scientists have been trying to determine how the human brain evolved into its anatomy and functional capacity that currently separates us from our primate ancestors.
The researchers merged a database of gene age with transcription data from humans and mice to look for when and where young genes are expressed.
The team found that a higher percentage of primate-specific young genes were expressed in the brain than mouse-specific young genes.
They said human-specific young genes are more likely to be seen in fetal brain, when the organ is developing.
The authors stressed their finding is only a correlation between the appearance of young, human-specific genes and the evolutionary appearance of advanced brain structures.
They said future research will look at the function of these genes and the role they may have played in building today’s unique human brain.
“Traditionally, people don’t believe that a new protein or new gene can play any role in an important process. Most people only pay attention to the regulation of genes,” Long said in a press release. “But out of a total of about 1,300 new genes, only 13 percent were involved in new regulation. The rest, some 1,100 genes, are new genes that bring a whole new type of function.”

On the Net:

“ShakeOut” Earthquake Drills To Take Place This Week

“ShakeOuts” to be held in California, Nevada, Guam, Oregon, Idaho, British Columbia

[ Watch the Video ]

On Oct. 20, 2011, “Great ShakeOut” earthquake drills will be held in California, Nevada, Guam, Oregon, Idaho and British Columbia, and involve more than 8.7 million participants.

The ShakeOut will motivate people to be prepared to “Drop, Cover, and Hold On” to protect themselves during earthquakes at work, school and home.

To participate, register on the Great California ShakeOut website.

The ShakeOut began in southern California in 2008 as a way of involving the general public in a large-scale emergency management exercise.

It is based on a magnitude 7.8 earthquake along the San Andreas fault, and on the “ShakeOut Scenario” developed by a team of experts.

The Southern California Earthquake Center (SCEC) developed advanced simulations of this earthquake that were used to estimate potential losses and casualties, and to show the public how the shaking would be felt throughout the region.

SCEC is headquartered at the University of Southern California and funded primarily by the National Science Foundation (NSF) and United States Geological Survey (USGS).

“ShakeOut has been one of the great successes from long-term NSF and USGS support for SCEC’s research and education activities,” says Greg Anderson, program director in NSF’s Division of Earth Sciences.

“ShakeOut started in southern California and has grown to become the largest public preparedness exercise in the United States. It’s a great example of the broader impacts of NSF investments in basic science and education.”

SCEC is a community of more than 600 scientists, students and staff members from some 60 institutions, in partnership with other science, engineering, education and government organizations worldwide.

In addition to scientific contributions to the ShakeOut Scenario, SCEC also hosts the ShakeOut website and created a registration system where participants could be counted in the overall total.

More than 5.4 million people participated in California in 2008, with schools for the first time coordinating earthquake drills on the same day.

Part of the appeal of the ShakeOut is its simplicity. At a minimum, participants practice “Drop, Cover, and Hold On,” the recommended procedure for self-protection in an earthquake.

Many schools and other organizations also practice additional aspects of their preparedness plans.

While the 2008 California ShakeOut was initially conceived as a one-time event, participant demand convinced organizers to develop the ShakeOut into a statewide, annual event.

More than 6.9 million people participated in the 2009 California ShakeOut, and more than 7.9 million in 2010.

The 2011 Great California ShakeOut will be held on Oct. 20, 2011, at 10:20 a.m. PST, with more than 8 million people in businesses, government offices, neighborhoods, schools and as individuals currently registered.

“ShakeOut participation continues to grow, and the California drill now is larger than last year’s record 7.9 million participants,” said Mark Benthien, SCEC education and outreach director.

“Registering allows us to know what people are planning for their drills and how many people are involved.  Once registered, they will receive updates and preparedness information.”

Because of the success of the ShakeOut in California, SCEC has since worked with several other regions to create ShakeOut drills, with websites and registration systems all managed by SCEC.

In addition to the areas listed above that are holding drills on October 20th, eleven states in the Central United States participated in April 2011 and will again in February 2012.

Utah is holding its first ShakeOut on April 17, 2012, the first Tokyo Shakeout is planned for March 9, 2012, and New Zealand is planning a nationwide ShakeOut in September 2012.

Washington, Puerto Rico, Arizona, Alaska and several countries including New Zealand, Turkey, Chile, China and others also have expressed interest.

SCEC has been able to develop a common set of messages and resources, which has grown as each new region develops materials and concepts.

“It’s truly remarkable how collaborative the ShakeOut continues to be,” said Benthien.

“The motto is that ‘we are all in this together,’ and this spirit has spread to other regions in creating new alliances and working across state and regional borders.

“ShakeOut is changing the way people and organizations are approaching community-wide earthquake preparedness.”

Image Caption: Drop, Cover, and Hold On! ShakeOut participants will learn how to deal with a quake. Credit: SCEC

On the Net:

Cells Are Crawling All Over Our Bodies

Biologists at Florida State devise new way to watch how cells move

For better and for worse, human health depends on a cell’s motility, the ability to crawl from place to place. In every human body, millions of cells are crawling around doing mostly good deeds, though if any of those crawlers are cancerous, watch out.

“This is not some horrible sci-fi movie come true but, instead, normal cells carrying out their daily duties,” said Florida State University cell biologist Tom Roberts. For 35 years he has studied the mechanical and molecular means by which amorphous single cells purposefully propel themselves throughout the body in amoeboid-like fashion, absent muscles, bones or brains.

Human cells, however, don’t give up their secrets easily. In the body, they use the millions of tiny filaments found on their front ends to push the front of their cytoskeletons forward. In rapid succession the cells then retract their rears in a smooth, coordinated extension-contraction manner that puts inchworms to shame. Yet take them out of the body and put them under a microscope, and the crawling changes or stops.

But now Roberts and his research team have found a novel way around uncooperative human cells.

In a landmark study led by Roberts and conducted in large part by his then-FSU postdoctoral associate Katsuya Shimabukuro, researchers used worm sperm to replicate cell motility in vitro, in this case on a microscope slide.

Doing what no other scientists had ever successfully done before, Shimabukuro disassembled and reconstituted a worm sperm cell, then devised conditions to promote the cell’s natural pull-push crawling motions even in the unnatural conditions of a laboratory. Once launched, the reconstituted machinery moved just like regular worm sperm do in a natural setting, giving scientists an unprecedented opportunity to watch it move.

Roberts called his former postdoc’s signal achievement “careful, clever work,” and work it did, making possible new, revealing images of cell motility that should help to pinpoint with never-before-seen precision just how cells crawl.

“Understanding how cells crawl is a big deal,” Roberts said. “The first line of defense against invading microorganisms, the remodeling of bones, healing wounds in the skin and reconnecting of neuronal circuits during regeneration of the nervous system all depend on the capacity of specialized cells to crawl.

“On the downside, the ability of tumor cells to crawl around is a contributing factor in the metastasis of malignancies,” he said. “But we believe our achievements in this latest round of basic research could eventually aid in the development of therapies that target cell motility in order to interfere with or block the metastasis of cancer.”

Funding for Roberts’ worm-sperm study came from the National Institutes of Health. The findings are described in a paper (“Reconstitution of Amoeboid Motility In Vitro Identifies a Motor-Independent Mechanism for Cell Body Retraction”) published online in the journal Current Biology.

Why worm sperm?

For one thing, said Roberts, the worm sperm is different from most cells in that it doesn’t use molecular motor proteins to facilitate its contractions; it shimmies along strictly by putting together and tearing down its tiny filaments. And the simple worm sperm makes a good model because, while it is similar to a human cell, it has fewer moving parts, making it less complicated to take apart and reassemble than, say, brain or cancer cells.

Armed with the newfound ability to reconstitute amoeboid motility in vitro, cell biologists such as Roberts may be able to learn the answers to some major moving questions. Among them: How can some cells continue to crawl even after researchers have disabled their supply of myosin, the force-producing “mover protein” that functions like a motor to help power muscle and cell contraction?

For Roberts and his team, the next move will be to determine if what they’ve learned about worm sperm also applies to more conventional crawling cells, including tumor cells.

“As always, there will be more questions,” Roberts said. “Are there multiple mechanisms collaborating to drive cell body retraction? Is there redundancy built into the motility systems?”

Co-authors of the Current Biology paper include Roberts, a professor in the FSU Department of Biological Science; Shimabukuro, a former FSU postdoctoral associate in biology who now is a research scientist at the Japan Science and Technology Agency; Naoki Noda, of the Marine Biological Laboratory at Woods Hole, Mass.; and Murray Stewart, of the Medical Research Council’s Laboratory of Molecular Biology in Cambridge, England.

Image Caption: This is an electron microscope image of two crawling worm sperm magnified ~5,000X. Credit: Courtesy, Tom Roberts, FSU Dept. of Biological Science


Many Women Receive False-positives With Annual Mammogram

During a decade of receiving annual mammograms, more than half of cancer-free women will be summoned back at least once for more testing because of false-positive results, and about one in 12 will be referred for a biopsy.

Simply shifting screening to every other year lowers a woman’s probability of having one of these false-positive episodes by about a third — from 61 percent to 42 percent — over the course of a decade.
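
As a back-of-the-envelope check only (not the study’s actual method, which models real screening histories), those figures are roughly what one gets by assuming each exam independently carries about a 9 percent chance of a false-positive recall:

```python
# Rough illustration of how screening frequency drives the cumulative
# false-positive rate over a decade. The ~9% per-exam recall rate and
# the independence assumption are simplifications for the sketch; the
# study's own estimates come from observed screening histories.

PER_EXAM_RECALL = 0.09

def chance_of_at_least_one_recall(n_exams: int) -> float:
    return 1 - (1 - PER_EXAM_RECALL) ** n_exams

print(f"10 annual exams:  {chance_of_at_least_one_recall(10):.0%}")  # ~61%
print(f"5 biennial exams: {chance_of_at_least_one_recall(5):.0%}")   # ~38%
```

The biennial figure lands a few points below the study’s observed 42 percent, a reminder that real exams are not independent events.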

A new study delving into false-positives in mammography looked at nearly 170,000 women between the ages of 40 and 59 from seven regions around the United States, and almost 4,500 women with invasive breast cancer. Because of the added decade of testing alone, it found, women who start mammograms at 40 instead of 50 are more likely to have false-positive results that lead to more testing.

“This study provides accurate estimates of the risk of a false-positive mammography and breast biopsy for women undergoing repeat mammography in community practice, and so provides important information about the potential harms of undergoing regular mammography,” said co-author Karla Kerlikowske, a professor of medicine at the UCSF School of Medicine.

The study will be published in Annals of Internal Medicine. The research was led by Group Health Research Institute of Seattle for the Breast Cancer Surveillance Consortium.

“Recalls” for a second mammogram for what turn out to be non-cancer results, known as false positives, may cause inconvenience and anxiety. Recommendations for fine-needle aspiration or surgical biopsy are less common, but can lead to unnecessary pain and scarring. The additional testing also contributes to rising medical costs.

Kerlikowske is the lead author of an additional report — to be published in the same issue of Annals — that for the first time in the United States examines the accuracy of film mammography against digital, which has increasingly replaced older film screening.

That study looked at nearly 330,000 women between the ages of 40 and 79. The data was pooled from the Breast Cancer Surveillance Consortium, a collaborative network of mammography registries in the United States.

The researchers found that overall cancer detection rates were similar for both methods. However, digital screening may be better for women between the ages of 40 and 49 who are more likely to have extremely dense breasts associated with lower cancer detection. The study also found new evidence that digital mammography is better at detecting estrogen receptor-negative tumors, particularly in women aged 40 to 49 years.

Breast cancer may not be detected, the researchers caution, if a radiologist fails to identify a visible breast lesion or if a tumor is obscured by normal breast tissue. Additionally, an imperceptible tumor may grow quickly and be discovered through a clinical exam prior to the next mammogram.

Digital mammography was developed in part to improve the detection of breast cancer in dense breasts by improving the ability to distinguish normal dense breast tissue from isodense invasive cancer.

The authors note that for every 10,000 women 40 to 49 who are given digital mammograms, two more cases of cancer will be identified for every 170 additional false-positive examinations.

Healthy women will undergo 12 screening mammograms in their lifetimes if they follow U.S. Preventive Services Task Force guidelines that recommend biennial screening starting at age 50 and continuing until age 74. This is controversial, with many practitioners recommending annual mammograms.

If women start biennial screening at 40, they will undergo 17 exams; those who start annual screenings at age 40 will undergo 34 exams.

For the false-positive study, the researchers found that after a decade of annual screening, a majority of women will receive at least one false-positive result, and 7 to 9 percent will receive a false-positive biopsy recommendation.

“We conducted this study to help women know what to expect when they get regular screening mammograms over the course of many years,” said study leader Rebecca Hubbard, PhD, an assistant investigator at Group Health Research Institute. “We hope that if women know what to expect with screening, they’ll feel less anxiety if — or when — they are called back for more testing. In the vast majority of cases, this does not mean they have cancer.”

The researchers say that screening every other year would likely lessen the probability of false-positive results “but could also delay cancer diagnosis.” However, for those diagnosed with cancer, the authors found women screened every two years were not significantly more likely to be diagnosed with late-stage cancer compared to those screened at one-year intervals.

The study stresses the importance of radiologists being able to review a patient’s previous mammograms because it “may halve the odds of a false-positive recall.”

Co-authors of both studies are Diana L. Miglioretti, PhD, of Group Health Research Institute, and Bonnie C. Yankaskas, PhD, of the University of North Carolina at Chapel Hill.

The National Cancer Institute funded the studies.


Scientists Create Computing Building Blocks From Bacteria And DNA

Scientists have successfully demonstrated that they can build some of the basic components for digital devices out of bacteria and DNA

Scientists have successfully demonstrated that they can build some of the basic components for digital devices out of bacteria and DNA, which could pave the way for a new generation of biological computing devices, in research published today in the journal Nature Communications.

The researchers, from Imperial College London, have demonstrated that they can build logic gates, which are used for processing information in devices such as computers and microprocessors, out of harmless gut bacteria and DNA. These are the most advanced biological logic gates ever created by scientists.

Professor Richard Kitney, co-author of the paper from the Centre for Synthetic Biology and Innovation and the Department of Bioengineering at Imperial College London, says:

“Logic gates are the fundamental building blocks in silicon circuitry that our entire digital age is based on. Without them, we could not process digital information. Now that we have demonstrated that we can replicate these parts using bacteria and DNA, we hope that our work could lead to a new generation of biological processors, whose applications in information processing could be as important as their electronic equivalents.”

Although such applications are still a long way off, the team suggest that these biological logic gates could one day form the building blocks of microscopic biological computers. Devices may include sensors that swim inside arteries, detecting the buildup of harmful plaque and rapidly delivering medications to the affected zone. Other applications may include sensors that detect and destroy cancer cells inside the body and pollution monitors that can be deployed in the environment, detecting and neutralizing dangerous toxins such as arsenic.

Previous research had shown only that biological logic gates could be made. The team say that the advantage of their biological logic gates over previous attempts is that they behave more like their electronic counterparts. The new biological gates are also modular, which means that they can be fitted together to make different types of logic gates, paving the way for more complex biological processors to be built in the future.

In the new study, the researchers demonstrated how these biological logic gates work. In one experiment, they showed how the gates can replicate the way electronic logic gates process information, by switching between “on” and “off”.

The scientists constructed a type of logic gate called an “AND gate” from the bacterium Escherichia coli (E. coli), which is normally found in the lower intestine. The team altered the E. coli with modified DNA, which reprogrammed it to perform the same switching on and off process as its electronic equivalent when stimulated by chemicals.

The researchers were also able to demonstrate that the biological logic gates could be connected together to form more complex components in a similar way that electronic components are made. In another experiment, the researchers created a “NOT gate” and combined it with the AND gate to produce the more complex “NAND gate”.
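
That modularity mirrors the way digital designers compose gates in software. As a loose analogy only, with each gate reduced to a Boolean function rather than anything drawn from the paper’s chemistry, a NAND can be wired from an AND and a NOT like so:

```python
# Loose software analogy for the modular biological gates described
# above: each gate maps input signals (True = chemical present) to an
# on/off output, and gates compose just as their electronic
# counterparts do. Purely illustrative; not the paper's chemistry.

def and_gate(a: bool, b: bool) -> bool:
    return a and b          # output only when both inputs are present

def not_gate(a: bool) -> bool:
    return not a            # output inverts the input signal

def nand_gate(a: bool, b: bool) -> bool:
    # Modularity in action: a NAND is a NOT wired onto an AND.
    return not_gate(and_gate(a, b))

for a in (False, True):
    for b in (False, True):
        print(f"{a!s:5} {b!s:5} -> {nand_gate(a, b)}")
```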

The next stage of the research will see the team trying to develop more complex circuitry comprising multiple logic gates. One of the challenges faced by the team is finding a way to link multiple biological logic gates together, similar to the way in which electronic logic gates are linked, to enable complex processing to be carried out.


Relativity Corrections Could Explain Faster Than Light Neutrinos

When researchers at the European Center for Nuclear Research (CERN) observed what they believed to be sub-atomic particles moving faster than the speed of light, some believed that the discovery could challenge the very fundamental laws of the universe.

It appears that those concerns were unfounded, as new research has shed light on exactly why CERN officials believe they witnessed beams of neutrinos traveling 60 nanoseconds (60 billionths of a second) quicker than the speed of light from their laboratory near Geneva, Switzerland to the Gran Sasso facility in Italy, some 450 miles (730 km) away.

Furthermore, as Evan Ackerman of the website Dvice wrote Friday, “In an ironic twist, the very theory that these neutrinos would have disproved may explain exactly what happened.”

According to Ackerman, the rapid movement of neutrinos and the relatively short distance they had to travel as part of the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment meant that “in order to figure out exactly how long it takes a given neutrino to make the trip, you need to know two things very, very precisely: the distance between the two points, and the time the neutrino leaves the first point (the source) and arrives at the second point (the detector).”

During the original experiment, he said, CERN researchers used GPS to measure both distance and time for the OPERA experiment. They were able to determine the distance down to approximately 20 centimeters, Ackerman added, and they were able to use time signals from those same GPS satellites to clock the particles’ travel time. However, he points out that the scientists may have forgotten to take one variable into account: relativity.

Dutch researcher Ronald A.J. van Elburg has written a new paper explaining, in the words of Damon Poeter of PCMag, how “the effects of relativity as they pertain to the GPS satellite’s measurements require two corrections to the perceived time of travel.”

Those corrections alter the travel time of the neutrinos by 64 nanoseconds, enough to return “the apparent velocities of neutrinos back to a value not significantly different from the speed of light,” van Elburg claims, according to Poeter. This could explain the results, though the PCMag writer adds that CERN scientists are claiming that they did account for these specific factors in their original findings.
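
To see why a few tens of nanoseconds matter here, it helps to set the reported anomaly against the light-speed travel time over the baseline. The short sketch below just does that arithmetic, using the distance and figures reported above:

```python
# Back-of-the-envelope: the 60 ns anomaly versus the light-speed
# travel time over the roughly 730 km CERN-to-Gran Sasso baseline.

C = 299_792_458.0      # speed of light in vacuum, m/s
DISTANCE = 730_000.0   # baseline in meters (~730 km, as reported)

light_time = DISTANCE / C   # one-way light travel time, ~2.435 ms
anomaly = 60e-9             # neutrinos reportedly arrived 60 ns early
correction = 64e-9          # van Elburg's claimed relativistic GPS fix

print(f"light travel time:  {light_time * 1e3:.3f} ms")
print(f"anomaly fraction:   {anomaly / light_time:.1e}")  # ~2.5e-5
print(f"correction >= anomaly: {correction >= anomaly}")
```

On these numbers the effect CERN reported is a deviation of a few parts in a hundred thousand, small enough that a 64-nanosecond timing correction would account for it entirely.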
Van Elburg is not the only one working to debunk CERN’s findings, however.

According to an October 14 story by Wired’s Adam Mann, “In the three weeks after the announcement, more than 80 explanations have been posted to the preprint server arxiv.”

“While some suggest the possibility of new physics, such as neutrinos that are traveling through extra dimensions or neutrinos at particular energies traveling faster than light, many offer less revolutionary explanations for the OPERA experiment,” he added, with different researchers citing astrophysical observations, the Standard Model of physics, and other grounds in an attempt to explain the supposedly faster-than-light particles observed by CERN.

In September, shortly after the initial report, physicist and OPERA spokesman Antonio Ereditato called the discovery “a complete surprise,” and the team told AFP reporters that they had spent some six months “checking, testing, controlling and rechecking everything” before making a public announcement.

“We have high confidence in our results. We have checked and rechecked for anything that could have distorted our measurements but we found nothing,” Ereditato added, in a separate interview with the Telegraph. “We now want colleagues to check them independently.”


Questions Arise Over Cellphone Radiation Guidelines

Researchers said on Monday that measuring radiation exposure using current Federal Communications Commission (FCC) guidelines understates how much radiation most people receive from their mobile devices.

One reason, the researchers said, is that the current assessment method evaluates how much radiation people are exposed to from their phones using measurements taken on a large, liquid-filled plastic model of an adult human head known as the Specific Anthropomorphic Mannequin, or SAM.

The authors of the study, published in the journal Electromagnetic Biology and Medicine, said 97 percent of the population will receive proportionally more exposure than the standard assessment assumes.

The team said the current assessment uses a model of a person who is 6-foot-2 and 220 pounds, a body type that represents only about 3 percent of the population.

The researchers, several of them affiliated with the Environmental Health Trust, said children will receive twice as much microwave radiation to the head from phones as adults, and 10 times the amount to bone marrow.

The study said current assessments do not examine exposure to parts of the body other than the head.

The researchers said the cellphone industry should stop using the SAM-based system to certify phones for use.  Instead, they believe the industry should begin using the computer-based “virtual family” simulation approach. 

This approach assesses the radiation absorption for 10 different-sized people, including a 5-year-old girl, a 6-year-old boy, an 8-year-old girl, an 11-year-old girl, a 14-year-old boy, a 26-year-old woman, a 35-year-old man, an obese man and three women at different stages of pregnancy.

“The SAM-based certification process should be discontinued forthwith,” the authors wrote in the paper. “Because billions of young children and adults with heads smaller than SAM are now using cell phones extensively … it is essential and urgent that governments around the world revise approaches to setting standards for cell phone radiation.”
