Eating Processed Meat Linked To Increased Heart Failure Risk In Men

redOrbit Staff & Wire Reports – Your Universe Online
Consumption of processed red meats such as cold cuts, hot dogs and bacon has been linked to an increased risk of heart failure and corresponding death in men, researchers reported this week in the journal Circulation: Heart Failure.
In the study, researchers from the Warsaw University of Life Sciences and the Institute of Environmental Medicine at Sweden’s Karolinska Institutet investigated, for the first time, the effects of eating red meat products that have been smoked, cured, salted or treated with added preservatives separately from the effects of eating unprocessed red meat.
“Processed red meat commonly contains sodium, nitrates, phosphates and other food additives, and smoked and grilled meats also contain polycyclic aromatic hydrocarbons, all of which may contribute to the increased heart failure risk,” senior author Alicja Wolk, a professor at the Karolinska Institutet’s Division of Nutritional Epidemiology, explained in a recent statement.
“Unprocessed meat is free from food additives and usually has a lower amount of sodium,” she added.
As part of the study, Wolk and her colleagues recruited 37,035 men who were between the ages of 45 and 79 and had no history of heart failure, ischemic heart disease or cancer. Each study participant completed a 96-item questionnaire on their food intake, as well as other lifestyle factors.
The investigators followed them from 1998 until they were diagnosed with heart failure, until they died, or until the study ended in 2010. After nearly 12 years of study, a total of 2,891 men were diagnosed with heart failure, and 266 died as a result of the condition.
After adjusting for other variables, the study authors found men who ate at least 75 grams of processed red meat each day had a 28 percent higher risk of heart failure compared to men who ate no more than 25 grams per day. Furthermore, those who consumed the most processed red meat were found to have a twofold increase in the risk of death from heart failure compared to those in the lowest consumption category.
In fact, for every 50-gram increase in daily processed meat consumption (equal to approximately one or two slices of ham), the incidence of heart failure rose by eight percent and the risk of death from heart failure rose by 38 percent. No increase in either rate was observed in men who ate only unprocessed red meat.
The surveys completed at the beginning of the study included questions on the consumption of sausages, cold cuts such as ham or salami, blood pudding and liver pate during the past year. It also polled study participants on the amounts of unprocessed meat (such as pork, veal, hamburger or ground-minced meat) consumed over that time.
The authors said their findings are consistent with the Physicians’ Health Study, which reported men eating the most total red meat faced a 24 percent increase in heart failure incidence rate versus those who ate the least total amount. They also noted they expected to see similar results in an ongoing study involving women.
“To reduce your risk of heart failure and other cardiovascular diseases, we suggest avoiding processed red meat in your diet, and limiting the amount of unprocessed red meat to one to two servings per week or less,” said lead author Dr. Joanna Kaluza, an assistant professor in the Department of Human Nutrition at the Warsaw University of Life Sciences. “Instead, eat a diet rich in fruit, vegetables, whole grain products, nuts and increase your servings of fish.”

Email Invites From LinkedIn Criticized As Spam, Lawsuit Filed

Peter Suciu for redOrbit.com – Your Universe Online

Chances are we’ve all gotten one of those emails that says, “I’d like to add you to my professional network” from a friend, colleague or perhaps even a total stranger. These emails from “professional network” LinkedIn have become prevalent, but rather than seeing them as a helpful way to build connections, some users have argued that the tactics behind them cross the line into spam.

Some users have sued the company over its aggressive marketing practices. Gigaom reported that those users, who include publishing and movie executives, filed a complaint last year accusing LinkedIn of “breaking into” Gmail accounts as a way to send out invitations to everyone the users had ever emailed.

The complaint is based on the LinkedIn feature that invites new users to “Connect with people you know” and prompts existing users to “See who you already know.” The feature makes matches based on users’ email address books, sends those contacts automated invitation emails and, worse, follows up twice.

US District Judge Lucy Koh ruled on Thursday that the repeated emails risked harming individuals’ “rights of publicity” under California law. She also wrote that LinkedIn’s tactics could amount to an unfair business practice, one that “could injure users’ reputations by allowing contacts to think the users are the types of people who spam their contacts or are unable to take the hint that their contacts do not want to join their LinkedIn network.”

Koh further said customers may pursue claims that LinkedIn did, in fact, violate a user’s right of publicity, which protects them from unauthorized use of their name and likeness for commercial purposes.

She did dismiss other claims however, including one that claimed LinkedIn violated federal wiretap law – but said customers may file an amended lawsuit.

The current lawsuit against LinkedIn seeks class action status, a halt to the alleged improper email harvesting and marketing, and money damages.

LinkedIn, which has about 300 million users, did not respond to a request for comment.

“I’m definitely glad to see that LinkedIn is being called on its increasingly egregious ‘spamvitations,'” Charles King, principal analyst at Pund-IT, told redOrbit. “While I understand the need to pursue/capture additional revenue sources, doing so in ways that damage your reputation and brand seems delusional. I hope this wakes up the company’s management.”

LinkedIn, along with social networks such as Facebook and Twitter, has already changed the way some people communicate. While it is easy enough to email or phone a colleague, these companies have actively encouraged communication specifically through their own services.

“At what point do usually incompatible, often warring social networking services become impediments to the interactions they actually claim to promote?” pondered King. “I think we’re past that already, but aside from damaging companies’ relationships with their customers, these activities also run contrary to unification/integration efforts we’ve long seen in other areas of communication.

“Long ago, service providers realized that they stood to gain more by working together than by tearing one another apart. It’s too bad social networking players have yet to experience a similar ‘ah-ha’ moment.”

Biomarkers May Determine How Long Oil Lingers In The Environment

Brett Smith for redOrbit.com – Your Universe Online

While oil from the 2010 Deepwater Horizon oil spill isn’t visible on the surface of the ocean, it continues to wash ashore in the form of small “patties” of oil and sand.

In an attempt to analyze and learn about this oil, researchers from the Woods Hole Oceanographic Institution in Massachusetts have developed a system for determining the source and fate of petroleum hydrocarbons in the environment, according to a new report published in the journal Environmental Science & Technology.

“We were looking at two questions: how could we identify the oil on shore, now four years after the spill, and how the oil from the spill was weathering over time,” said study author Christoph Aeppli, who was a marine chemistry and geochemistry researcher at WHOI during the research.

The study team applied comprehensive two-dimensional gas chromatography (GCxGC) to assess levels of degradation in biomarkers, or molecular fossils, found within the oil. Each source of oil has specific quantities of distinct biomarkers, which can serve as identifiers, much like human fingerprints or DNA. These biomarkers are typically inert in reservoirs, but when exposed to the environment for long periods, some are altered by natural processes. Oil includes tens of thousands of compounds, and many of them can be broken down by bacteria or even sunlight. The new study focused on determining the durability of specific biomarkers and on how they held up when exposed to conditions on shore.

“We found that some biomarkers – homohopanes and triaromatic steroids (TAS), specifically – degraded within a few years following the Deepwater Horizon spill,” said study author Chris Reddy, a marine pollution expert at WHOI. “These biomarkers are not as resilient as once thought, and they may provide a future window into determining how much, and how quickly, these oil components may linger in the environment when exposed to air, sunlight, and the elements.”

The team also looked into the source of the biomarkers’ degradation by analyzing Gulf shore “patties” collected over a 28-month period. The researchers reported most biomarker compounds were robust and could be used to identify oil from the 2010 spill. Some biomarkers, however, degraded too quickly.

“This knowledge is helping us improve our oil spill forensics,” Aeppli said. “It is providing a foundation for better, longer-term identification techniques that account for exposure of oil to wind, waves, sunlight, and microbial degradation over long times.”

In April – on the fourth anniversary of the spill – BP announced it had ended its “active cleanup” of Louisiana’s coast, which follows last year’s end of active cleanup in Florida, Alabama and Mississippi.

The Coast Guard announced it would be transitioning from its active cleanup phase to a new response phase in which crews and equipment will be pre-positioned to respond to future cleanup needs if new reports of oil contamination surface.

“Our response posture has evolved to target re-oiling events on coastline segments that were previously cleaned,” said Coast Guard Capt. Thomas Sparks, the federal on-scene coordinator for the BP response to the spill. “But let me be absolutely clear: This response is not over – not by a long shot. The transition to the Middle Response process does not end clean-up operations, and we continue to hold the responsible party accountable for Deepwater Horizon cleanup costs.”

Image 2 (below): Years after the 2010 Deepwater Horizon Oil spill, oil continues to wash ashore as oil-soaked “sand patties.” Credit: Catherine Carmichael, Woods Hole Oceanographic Institution

Astronomers Discover Nearly 200 Previously Unknown ‘Red’ Galaxies

John P. Millis, Ph.D. for redOrbit.com – Your Universe Online
One of the greatest aspects of NASA’s astronomical research program is that the data accumulated from virtually all of its instruments – X-ray satellites, infrared detectors, gamma-ray satellites – are available to the public. This means professional and amateur astronomers alike have the ability to make breakthrough discoveries.
For instance, in 2007 Dutch school teacher Hanny van Arkel discovered a peculiar blob in an image of the spiral galaxy IC 2497 while participating in the crowdsourcing science project Galaxy Zoo. Now known as Hanny’s Voorwerp (Dutch for Hanny’s object), the source of the radiation is a hot topic in the astronomical community.
But beyond citizen science, professionals are getting in on the action as well. Astronomers Ivana Damjanov, Margaret Geller, Ho Seong Hwang, and Igor Chilingarian of the Harvard-Smithsonian Center for Astrophysics (CfA), combed through archival data from the Sloan Digital Sky Survey, looking for dense red galaxies known in the scientific community as “red nuggets”.
These galaxies are characterized by masses some 10 times greater than that of the Milky Way, but volumes 100 times smaller. They are also important to cosmology, because most models predict the early Universe would have been filled with these types of galaxies.
However, previous attempts to find examples of these galaxies in the nearby regions of the Universe yielded few results. This is puzzling because low-mass red stars – the types that would be abundant in these galaxies – are long-lived, with lifetimes even longer than the age of the Universe. So if these galaxies were truly absent nearby, theories of galaxy evolution would be called into question.
The challenge is that in optical images these galaxies appear as red stars, making them difficult to pick out. “These red nugget galaxies were hiding in plain view, masquerading as stars,” says Damjanov. But, using the archival survey data, the team was able to identify candidate objects for their study.
Using various other telescopes – such as the Hubble Space Telescope and the Canada-France-Hawaii Telescope – the team then turned their attention to spectroscopically analyzing these candidates, which would assist them in determining which objects were stars and which were red nuggets. The study revealed about 200 of the objects were, in fact, the target galaxies.
This was an important confirmation of the particular abundance of these galaxies in our neighborhood of the Universe. “Now we know that many of these amazingly small, dense, but massive galaxies survive. They are a fascinating test of our understanding of the way galaxies form and evolve,” explains Geller.
This result can now be used to refine models of how galaxies in the Universe evolved, and how the cosmos itself may have progressed over billions of years. “Many processes work together to shape the rich landscape of galaxies we see in the nearby universe,” says Damjanov.
The research was presented on Wednesday, June 11 at a meeting of the Canadian Astronomical Society (CASCA) in Quebec, QC.

Image 2 (below): This series of photos shows three “red nugget” galaxies at a distance of about 4 billion light-years, and therefore seen as they were 4 billion years ago. At left, a lonely one without companion galaxies. The one in the middle is alone as well, although it appears to be next to a larger spiral galaxy. That blue spiral is actually much closer to us, only one billion light-years away. Finally, the red nugget on the right might have some companion galaxies residing nearby. Credit: Ivana Damjanov & CFHT MegaCam Team

Multiple Telescopes Provide A Detailed View Of Near-Earth Asteroid Dubbed “The Beast”

Gerard LeBlond for redOrbit.com – Your Universe Online

An asteroid designated 2014 HQ124, at least 1,200 feet wide along its long axis, passed by Earth on June 8, 2014. It came within 776,000 miles, or approximately three times the distance to the moon. Images captured of the asteroid are some of the most detailed in NASA’s history.

The asteroid was first spotted on April 23, 2014 by NASA’s NEOWISE mission, which was adapted to capture infrared light emitted from comets and asteroids. When 2014 HQ124 was between 864,000 miles and 902,000 miles from Earth, NASA began closer observations of the space rock.


These observations were led by scientists Marina Brozovic and Lance Benner from NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. Researchers Michael Nolan, Patrick Taylor, Ellen Howell and Alessondra Springmann from Arecibo Observatory in Puerto Rico were also involved.

“There is zero chance of an impact,” said Don Yeomans, manager of NASA’s Near-Earth Object Program Office at NASA’s Jet Propulsion Laboratory in Pasadena, California. “In fact, it’s fairly common for asteroids to pass near Earth. You’d expect an object about the size of 2014 HQ124 to pass this close every few years.”

By using the 230-foot Goldstone Deep Space Network antenna together with two other radio telescopes, the team was able to capture detailed images of the asteroid as it passed by Earth. The Goldstone antenna sends a radar signal at the asteroid while the other antenna receives the reflection, a pairing that greatly improves the detail of the images captured by radar.

The first telescope used with the Goldstone antenna was the newly upgraded 1,000-foot Arecibo radio telescope in Puerto Rico. A 112-foot antenna located about 20 miles from the Goldstone dish was used next.

By linking the Goldstone antenna with these telescopes, the team could capture high-quality images showing details as small as 12 feet wide.

“By itself, the Goldstone antenna can obtain images that show features as small as the width of a traffic lane on the highway. With Arecibo now able to receive our highest-resolution Goldstone signals, we can create a single system that improves the overall quality of the images,” Benner said.

A total of 21 radar images were captured over a four-and-a-half-hour span. The asteroid rotated a few degrees per frame, suggesting it completes a full rotation in just under 24 hours. The first five images in the collage were collected by pairing Goldstone’s transmissions with Arecibo’s receiver, and they are 30 times brighter than what Goldstone can obtain alone.

The images detailed features including what appeared to be a hill near the middle of the asteroid. “This may be a double object, or ‘contact binary,’ consisting of two objects that form a single asteroid with a lobed shape,” Benner said.

Radar helps researchers determine an object’s size, shape, rotation, surface features and orbit, along with its distance and velocity. NASA’s Near-Earth Object Program, or “Spaceguard,” discovers these objects and determines whether they pose a potential threat of colliding with Earth.

Cigarette Smoking Down Amongst Teens, But Nearly Half Admit To Texting While Driving

redOrbit Staff & Wire Reports – Your Universe Online

The number of teenagers who smoke cigarettes is at a 22-year-low, but nearly half of all high school students report having texted or emailed while driving, according to a new US Centers for Disease Control and Prevention (CDC) study released Thursday.

The 2013 Youth Risk Behavior Surveillance System (YRBSS) report found the cigarette smoking rate among high school students has dropped to 15.7 percent. However, the CDC study, which drew on national data as well as data from 42 states, also said 41 percent of students who had operated a motor vehicle over the past 30 days admitted to having texted or emailed while doing so.

According to Reuters reporter David Beasley, the teenage smoking rate is the lowest recorded since the survey began in 1991. Health officials told Beasley the results were encouraging, but they expressed concern that any gains made by anti-smoking campaigns would be offset by the increasing popularity of electronic cigarettes.

“We’re particularly concerned about e-cigarettes re-glamorizing smoking traditional cigarettes and maybe making it more complicated to enforce smoke-free laws that protect all non-smokers,” CDC Director Tom Frieden told Reuters.

Likewise, Matthew Myers, president of the Campaign for Tobacco-Free Kids, said the YRBSS study findings “are a powerful reminder that the fight against tobacco is an entirely winnable battle, but the job is still far from done.”

This year marked the first year the CDC inquired about teen texting-and-driving habits, and in stark contrast to the smoking statistics, the results were not encouraging – especially among older students. In fact, Alex Wayne of Bloomberg News reports more than half of all high-school seniors confessed to having texted while behind the wheel.

“Texting and driving can lead to car crashes, the number 1 cause of deaths among adolescents, the Atlanta-based agency said in its report released today,” Wayne said. “The rate rises as students age, with 61 percent of 12th-grade boys and 59.5 percent of girls reporting they sent a message while driving.”

At the state level, 61.3 percent of students in South Dakota admitted to texting or emailing while driving, noted Mike Esterl of the Wall Street Journal. That was the highest reported incidence rate, while the lowest was observed in Massachusetts, where just 32.3 percent said they had engaged in this high-risk behavior.

More than one-third of students (34.9 percent) said they had consumed at least one alcoholic drink during the past 30 days, Esterl said. That’s down from 38.7 percent in 2011, and part of a steady decline that has been going on for more than a decade. At the state level, teen drinking rates ranged from a high of 39 percent in New Jersey to a low of just 11 percent in Utah, the Wall Street Journal writer added.

The study also found 23.4 percent of students said they had tried marijuana at least once over the past month, compared to 23.1 percent in 2011 and 25.3 percent in 1995. Use of other types of illicit drugs, including cocaine, heroin and ecstasy, had fallen since 2011 and were said to be lower than they were two decades ago.

The percentage of US high school students who had participated in at least one fight over the past 12 months was down from 42 percent in 1991 to 25 percent in 2013, and the number of fights occurring on school grounds dropped by half (from 16 percent to 8 percent) since 1993, the CDC report said.

The agency added that the percentage of high school students who are currently sexually active declined from 38 percent in 1991 to 34 percent in 2013. Those who do engage in intercourse were found to be less likely to use condoms than a decade ago: 63 percent said they used them in 2003, compared to just 59 percent in 2013.

What is Biodiversity?

Hi, I’m Emerald Robinson, and in this “What Is” video, we are going to examine the earth’s wide variety of organisms through a concept called Biodiversity.

Biodiversity, short for “biological diversity,” simply means that all life is different: the earth is home to organisms that have different genes, live in different habitats, and function in different ecosystems.

Scientists study three main types of biodiversity: species, genetic, and ecosystem.

Species biodiversity is what most people think of when they hear the term “biodiversity.” A species is a group of organisms, genetically distinct from other such groups, whose members can breed with one another and produce offspring. To date, over 1.7 million species have been named, and new kinds of plants, animals, fungi, and microorganisms are being discovered all of the time. Scientists have predicted that there are between 3 and 30 million species on earth!

Genetic biodiversity means that the genes within a species vary. For example, all dogs belong to the same species, but there are many different breeds that have genes for different traits such as size, color, and coat type.

Ecosystem biodiversity means that ecosystems, communities of living things, are different from one another. For example, some ecosystems are warm and wet, like tropical rain forests, and others are cold and dry, like arctic tundras.

Biodiversity is not consistent from place to place. Because warm areas are able to support so many different kinds of life, places near the equator tend to have a high amount of biodiversity, while areas near the poles have a much lower biodiversity.

Biodiversity is important because all species in an ecosystem have a role to play. When one or more of those species is eliminated, an ecosystem’s balance will suffer, and other species are forced to migrate or become extinct. A diverse ecosystem is strong and able to withstand natural disasters, disease, and drought. With an estimated one third of all species in danger of extinction, habitat conservation is more important than ever.

New Type of Cancer Discovered By Mayo Clinic Researchers

Rebekah Eliason for redOrbit.com – Your Universe Online

Researchers from Mayo Clinic have recently discovered a new type of cancer. When two normally harmless genes, PAX3 and MAML3, abnormally combine through a recurring chromosomal rearrangement, they can turn dangerous. As the two genes fuse, they form a chimera, a hybrid gene made up of part of each.

This unique phenomenon causes a cancer known as biphenotypic sinonasal sarcoma. The tumor usually forms in the nose and spreads across the rest of the face. Saving the patient often requires terribly disfiguring surgery.

Now that the Mayo Clinic pathology researchers have discovered and described the molecular makeup of this rare tumor, several existing cancer drugs can be used to target it. Mayo Clinic pathologists Andre Oliveira, MD, PhD, and Jean Lewis, MD, first noticed something strange about a tumor they were analyzing in 2004. They began collecting data, and by 2009 had seen the same pathology a number of times. By 2012, the team had defined a new class of tumor that had never been described before.

In this most recent study, the researchers describe the genetic structure and molecular signature of this rare and difficult-to-recognize type of cancer, which affects women in 75 percent of cases.

No one is sure exactly how rare this type of cancer is, since most of the cases the researchers examined had initially been misdiagnosed as other cancer types. Because Mayo Clinic is considered one of the world’s largest referral centers for sarcoma diagnosis and treatment, its researchers were able to identify and characterize this rare tumor.

Dr. Oliveira, who subspecializes in the molecular genetics of sarcomas, said “It’s unusual that a condition or disease is recognized, subsequently studied in numerous patients, and then genetically characterized all at one place. Usually these things happen over a longer period of time and involve separate investigators and institutions. Because of Mayo’s network of experts, patient referrals, electronic records, bio repositories, and research scientists, it all happened here. And this is only the tip of the iceberg. Who knows what is in our repositories waiting to be discovered.”

Although the cancer was not formally identified until 2009, a search of Mayo Clinic’s medical records revealed that a Mayo patient had this specific cancer in 1956. An identical description of the cancer was found in physician notes stored in Mayo’s computerized database.

Additionally, microscopic analysis of the tumor, found preserved in Mayo’s biorepositories, confirmed that the 1956 case was the same cancer, and Dr. Oliveira’s research identified the same genetic chimera in the old tumor.

Researchers are excited by this discovery because of its potential as a disease model.

“The PAX3-MAML3 chimera we identified in this cancer has some similarities to a unique protein found in alveolar rhabdomyosarcoma, a common cancer found in children,” says Mayo Clinic molecular biologist and co-author Jennifer Westendorf, PhD. “Our findings may also lead to a better understanding of this pediatric disease for which, unfortunately, there is no specific treatment.” This study was published in the journal Nature Genetics.

Eating White Bread Encourages Growth Of Beneficial Gut Bacteria

redOrbit Staff & Wire Reports – Your Universe Online

While many health experts advise against eating white bread due to a lack of nutritional value, new research appearing in Wednesday’s edition of the Journal of Agricultural and Food Chemistry suggests it could benefit our health by encouraging the growth of beneficial gut bacteria.

Furthermore, Sonia González of the University of Oviedo’s Department of Functional Biology and colleagues from the Instituto de Productos Lácteos de Asturias – Consejo Superior de Investigaciones Cientı́ficas (IPLA-CSIC) found it is essential to look at a person’s entire diet and not just individual parts of it when analyzing the impact of food on his or her microbiomes.

According to the American Chemical Society (ACS), which publishes the Journal of Agricultural and Food Chemistry, González and her team report that a person’s gut bacteria play a key role in his or her health. When there is a decrease in some populations of those bacteria, men and women become more prone to disease.

Eating right is one of the most effective ways to keep our microbiomes healthy and well-balanced, the organization explained. Experts have analyzed the impact of individual fibers and probiotics to determine what dietary ingredients promote helpful bacteria, but there has been little research into the role polyphenols play in this phenomenon.

Polyphenols are micronutrients that are common in many of the foods that we eat, including fruits, vegetables, spices and teas. González and her associates wanted to see what impact they had on our gut bacteria, both alone and in combination with fibers, so they recruited 38 healthy adults to take part in a pilot study.

Each participant answered questions about their diets and agreed to have the bacteria present in their stool samples analyzed by the research team. The study authors discovered that pectin, a compound typically found in citrus fruits, decreases the levels of some helpful bacteria when interacting with other substances contained in oranges.

“A negative association was found between the intake of pectins and flavanones from oranges and the levels of Blautia coccoides and Clostridium leptum,” the study authors wrote. “By contrast, white bread, providing hemicellulose and resistant starch, was directly associated with Lactobacillus.

“Because some effects on intestinal microbiota attributed to isolated fibers or polyphenols might be modified by other components present in the same food, future research should be focused on diet rather than individual compounds,” they added.

The new paper comes just days after research presented as part of the 21st annual European Congress on Obesity (ECO 2014), which reported that consuming white bread instead of whole-grain bread could increase a person’s chances of becoming overweight or obese.

In that study, nutritional expert and University of Navarra professor Miguel Martinez-Gonzalez analyzed the dining habits of more than 9,000 Spanish university graduates. In order to measure the impact of bread type in a culture where it is a dietary staple, he had each participant complete a food questionnaire and then monitored all of them over the next five years.

Martinez-Gonzalez discovered those who consumed a minimum of three slices of white bread each day were 40 percent more likely to gain weight than those who ate just one portion per week. Whole-grain bread consumption was not linked to obesity or weight gain, since it contains dietary fiber and complex carbohydrates that make people feel full longer, and mixing both bread types did not increase a person’s risk of becoming overweight.


Earth And Moon Are Older Than Previously Thought

John P. Millis, Ph.D. for redOrbit.com – Your Universe Online

Dating the formation of the Earth is a difficult process, because the usual geological dating methods do not work well for it. Early in its existence the Earth was more fluid, with its materials mixing together in a super-heated state. It took millions of years of differentiation and cooling to produce the layered Earth we live on today.

So traditional geological dating gives us more of a lower bound for the age of the Earth, while radiometric dating of other solar system rocks yields an upper limit. This is why we usually quote the Earth’s age as “less than 4.568 billion years old”. Our previous best estimate was that the Earth formed about 100 million years after that starting point – that is, about 4.468 billion years ago – based on the geologic data and estimates of how long the formation process took.

However, geochemists Guillaume Avice and Bernard Marty from University of Lorraine in Nancy, France, have found the Earth may be even older.

According to Avice, “It is not possible to give an exact date for the formation of the Earth. What this work does is to show that the Earth is older than we thought, by around 60 [million years],” with an error of about 20 million years.

Their conclusions came from studying xenon gas found in South African and Australian quartz samples, which had been dated to 3.4 and 2.7 billion years old, respectively. By comparing the relative xenon isotopic concentrations in those samples with those of today’s atmosphere, the team could refine estimates of when the Earth began to form.

As Avice explains, “The composition of the gases we are looking at changes according to the conditions they are found in, which of course depend on the major events in Earth’s history. The gas sealed in these quartz samples has been handed down to us in a sort of ‘time capsule’. We are using standard methods to compute the age of the Earth, but having access to these ancient samples gives us new data, and allows us to refine the measurement. The xenon gas signals allow us to calculate when the atmosphere was being formed, which was probably at the time the Earth collided with a planet-sized body, leading to the formation of the Moon. Our results mean that both the Earth and the Moon are older than we had thought”.

Of course, pushing the estimate of Earth’s formation back from about 4.468 billion years ago to roughly 4.528 billion years ago – 60 million years earlier than previously thought – may not seem significant, but there are implications in various areas of planetary formation theory.

“This might seem a small difference, but it is important. These differences set time boundaries on how the planets evolved, especially through the major collisions in deep time which shaped the solar system,” added Marty.

Results of this research were presented on June 10 at the Goldschmidt Geochemistry Conference in Sacramento, California.


Personal Breathalyzer Paints Troubling Picture Of Drinking Trends

Alan McStravick for www.redorbit.com – Your Universe Online

It has been said of the second largest and second most populous state in the Union that everything is bigger in Texas. Nowhere is this more true than in the state’s capitol dome (which reaches higher into the air than the Capitol dome in Washington, DC), its cowboy hats and, regrettably, its drunk driving deaths.

Statistics for the year 2012 for this grim figure, released seven months ago, show that a staggering 1,296 individuals lost their lives on Texas roads and highways as a result of driving under the influence. California, ranked as the most populous state in the nation, saw nearly 500 fewer of its citizens’ lives claimed by this reckless and preventable act. It should come as no surprise then that the US National Transportation Safety Board (NTSB) has recommended further lowering the legal definition of intoxication to .05, down from the current .08 blood alcohol content (BAC).

San Francisco-based BACtrack, the North American leader in both professional and personal breathalyzer products, released a comprehensive breakdown of data collected by its award-winning personal breathalyzer product, BACtrack Mobile. Recognized by Popular Science with a ‘Best of What’s New’ award for 2013 in health innovation, BACtrack Mobile is a Bluetooth-enabled device that sends BAC readings to a user’s smartphone. Data for this report were anonymously collected from users who had the geolocation functions of their phones turned on.

The findings were both interesting and revelatory of date-specific and location-specific drinking trends nationwide. For instance, with this year’s summer solstice fast approaching, it should be noted that last June 22 saw the heaviest average drinking nationwide, at .115 percent. BACtrack believes this data could help traffic and law enforcement agencies formulate more effective policies and tactics to combat drunk driving. However, BACtrack says its product is primarily meant to make users aware of when they may have passed the threshold of impairment.

Other dates and the average BACs associated with them were perhaps less surprising than the weekend that officially kicks off summer. For instance, New Year’s Eve revelers presented an average BAC of .095 percent, while those testing their intoxication on Super Bowl Sunday registered .087 percent. A day often heralded as America’s drinking holiday – St. Patrick’s Day – came in at one of the lower averages recorded, at just .057 percent among BACtrack Mobile users and their friends.

While the date-specific data is interesting, the geographic data painted a definitive picture of which states and cities were the hardest drinking in the country. Residents of Montana and South Dakota proved they frequently over-imbibe, registering state averages of .101 percent. New Hampshire’s state motto could easily be changed to ‘Live Alcohol Free or Die’ with the nation’s lowest average BAC of .012 percent. Not surprisingly, Utah, with the heavy influence of the Mormon Church, also registered one of the nation’s lowest BAC averages at .031 percent.

The geographic data collected by BACtrack matches law enforcement statistics for arrests due to drunk driving. Per capita, Montana had the highest number of arrests while New Hampshire had the lowest.

Individual cities were analyzed as well. Dallas came in with the highest average BAC of .091 percent. The Texas city was followed very closely by Oakland, Scottsdale and Indianapolis. Interestingly, another Texas city had the lowest BAC average in the country: Houston came in at just .034 percent.

The results of this data are definitely eye-opening and serve the BACtrack company mission, which is, according to Keith Nothacker, CEO and founder of BACtrack, “…to enlighten the general public on alcohol consumption habits so that they become more responsible drinkers.”

Evernote, Feedly Both Affected By Distributed Denial Of Service Attack

Peter Suciu for redOrbit.com – Your Universe Online
On Tuesday Evernote reported it was hit with a distributed denial of service (DDoS) attack, and it was unavailable for most of its 100 million users. The service announced the outage on Twitter on Tuesday evening.
“We’re actively working to neutralize a denial of service attack. You may experience problems accessing your Evernote while we resolve this,” read the company’s tweet at 4:38 pm PT.
An hour later, the popular online note-taking service confirmed it was the victim of a DDoS attack. Evernote spokeswoman Ronda Scott told Cnet on Tuesday evening that the attack began around 2:35 pm PT, but the company had the problem resolved by 6:15 pm PT.
“We expect that there may be a hiccup here and there in the coming hours, but Evernote is now accessible,” Scott told Cnet. “We do not know its specific origins and this is the first time Evernote has been impacted by a DDoS attack. I can confidently report as is the usual case with these types of attacks, no accounts were compromised and no data was lost.”
That could have been the end of the story, but on Wednesday morning the website Feedly was also hit by a DDoS attack. However, this was no mere attack: it was conducted to hold the site for ransom, with the attackers demanding payment to stop it.
“Criminals are attacking Feedly with a distributed denial of service attack (DDoS). The attacker is trying to extort money to make it stop. We refused to give in and are working with our network providers to mitigate the attack as best as we can,” Feedly CEO Edwin Khodabakchian posted at just after 2:00 am PT on Wednesday morning. “We are working in parallel with other victims of the same group and with law enforcement.”
By 6:30 am PT the company was able to confirm it was making changes to its infrastructure to bring the service back online.
“It appears as though we are seeing another example of a damaging DDoS attack, impacting revenue generating web properties,” said Dave Larson, CTO of Corero Network Security. “With over 100 million users worldwide, Evernote most likely manages significant infrastructure to support all of their customers. A DDoS attack that could take a network of this size and sophistication down had to have been significant and complex.
“The attack began yesterday afternoon, and within just a few hours after the attack had been identified, Evernote was able to successfully mitigate the attack, and resume services,” Larson told redOrbit via email. “Evernote most likely had an effective incident response plan in place to deal with such situations and were therefore able to ensure that service disruption was kept to a minimum.”
DDoS attacks do not typically cause breaches of user data; instead, they overwhelm a company’s servers with a massive amount of traffic, which in turn makes it impossible for legitimate users to access the site.
“The companies have suggested that the DDOS attacks are being perpetuated by an unknown entity demanding payment for [sic] to stop,” said Charles King, principal analyst at Pund-IT. “That makes this an example of online extortion with the perpetrator being little more than a bully asking for a smaller kid’s lunch money or a thug offering business owners ‘protection’ against damages to their places of business.”
Just as the thug can ruin a business, so too can this type of attack, warned King.
“Those are pretty conventional scenarios but where the Evernote and Feedly examples become concerning is in the competitive nature of online business,” King told redOrbit. “Service providers like Evernote and Feedly succeed by delivering easy, seamless access to simple, common services. If those services are interrupted for sustained periods, the companies’ customers will probably consider, try and perhaps stay with new services. That makes these attacks more serious than simple theft since, if unchecked, they could result in significant financial and competitive losses.”
Moreover, “Ransom attacks are a large threat to the growing digital economy,” added Chris Morales, practice manager for architecture and infrastructure with security firm NSS Labs. “While old in nature, the discovery of new ways of performing DDOS attacks with unsecure internet protocols (NTP) requiring small requests that lead to massive traffic spikes, a large number of infected devices on the Internet participating in botnets where the attacks emanate simultaneously, and now common usage of multi-vector DDOS attacks, make DDOS attacks very successful and difficult to defend against.
“It is worth noting that Evernote operates its own infrastructure as of 2011 and no longer uses cloud infrastructure,” Morales told redOrbit. “While not a statement of better or worse capabilities, it would be interesting to me to know what their total bandwidth capabilities are in relation to what a cloud computing provider could offer.”
This is not the first high-profile attack on Evernote. The website last year was the victim of a cyber-attack that compromised the company’s servers. In that attack the hackers were able to access usernames and email addresses in addition to encrypted passwords.

Statins Associated With Decreased Physical Activity In Older Men

redOrbit Staff & Wire Reports – Your Universe Online

Using a common type of medication to lower their cholesterol could cause older men to become less physically active, according to new research appearing in Monday’s edition of the journal JAMA Internal Medicine.

In the study, Oregon State University/Oregon Health and Science University College of Pharmacy assistant professor David Lee and his colleagues discovered a link between the use of statins, some of the most prescribed drugs in the world, and reduced physical activity levels in older men.

“Physical activity in older adults helps to maintain a proper weight, prevent cardiovascular disease and helps to maintain physical strength and function,” Lee said in a statement. “We’re trying to find ways to get older adults to exercise more, not less. It’s a fairly serious concern if use of statins is doing something that makes people less likely to exercise.”

Lee and his fellow researchers examined the relationship between self-reported physical activity and statin use over a seven-year period in 3,039 participants with an average age of approximately 73. Of those individuals, 24 percent were using statins at baseline, 25 percent reported using a statin for the first time during the follow-up period, and 48 percent claimed to have never used the medications during the follow-up period.

Overall, a self-reported questionnaire revealed that physical activity declined by an average of 2.5 points per year for non-statin users and 2.8 points per year for prevalent users. The study authors deemed that difference not statistically significant, though the decline occurred at a faster rate for new statin users than for nonusers.

The study participants were community-living men at least 65 years of age from six geographic regions throughout the US. Statin users averaged approximately 40 minutes less of moderate physical activity over a one-week period when compared to those who did not use the medication, Lee and his colleagues noted. That would equal the loss of approximately 150 minutes of slow-paced walking each week, according to the lead author.

“For an older population that’s already pretty sedentary, that’s a significant amount less exercise. Even moderate amounts of exercise can make a big difference,” Lee explained, noting that an increase in sedentary behavior had also been observed in some of the study participants.

“Given these results, we should be aware of a possible decrease in physical activity among people taking a statin,” he added. “This could decrease the benefit of the medication. If someone is already weak, frail, or sedentary, they may want to consider this issue, and consult with their doctor to determine if statin use is still appropriate.”

In addition to OSU and the Oregon Health & Science University, institutes involved in the research included the Department of Veterans Affairs Medical Center in Portland, the California Pacific Medical Center Research Institute in San Francisco, the Stanford Prevention Research Center and the Department of Medicine at the University of California. It was funded by the National Institutes of Health and the Medical Research Foundation of Oregon.

Physical Exercise, Higher Protein Linked To Healthier Gut Microbiota

Rebekah Eliason for redOrbit.com – Your Universe Online
A new study has discovered that exercise and high dietary levels of protein increase the diversity of gut bacteria. According to the authors, these findings have important implications for overall long term health. Obesity and other health problems have been linked to low variation of gut microbes (microbiota). In contrast, diverse microbiota is associated with a healthy metabolic profile and immune system response.
For this study, researchers analyzed the fecal and blood samples of 40 professional rugby players who were participating in an intensive physical training program, in order to assess the range of microbiota the players were hosting in their guts.
Since extreme exercise is often associated with extreme dieting, the researchers chose elite athletes for this study.
As a control, the rugby players’ samples were compared with samples taken from 46 healthy men who were similar in size and age to the rugby players but were not professional athletes.
Of the control group, half of the men had a normal body mass index (BMI) of 25 or less and the other half had a high BMI of 28 and above.
All of the men in the study were asked to complete a food frequency questionnaire that detailed the amount and frequency that 187 specific foods had been consumed over the previous four weeks. In addition, all of the participants were asked to describe their normal physical activity level.
Although the elite athletes had much higher levels of creatine kinase, an enzyme that indicates muscle and tissue damage, they had lower levels of inflammatory markers than any of the men from the control group. Additionally, they had a better metabolic profile than the men with a high BMI.
The rugby players had a much wider range of gut microbiota than any of the other men. There was particularly a noticeable difference between the athletes and the men with a high BMI.
The number of microbial types (taxa) present was significantly higher in the rugby players. The athletes had higher proportions of 48 taxa than the men with a high BMI, and higher proportions of 40 taxa than the men with a normal BMI.
Particularly noteworthy, the athletes had much higher proportions of Akkermansiaceae, a family of bacteria linked with lower rates of obesity and metabolic disorders.
When the dietary habits of the participants were compared, it was discovered that the rugby players consumed more of all the food groups. Protein accounted for 22% of their energy intake, while it accounted for only 15-16% of the non-athletes’ energy intake.
The athletes’ protein intake mostly consisted of meat and meat products, but they also consumed a large amount of protein supplements. Additionally, the athletes consumed much more fruit and vegetables and fewer snacks than the control group.
“Our findings indicate that exercise is another important factor in the relationship between the microbiota, host immunity and host metabolism, with diet playing an important role,” conclude the authors.
Dr. Georgina Hold of the Institute of Medical Sciences, Aberdeen University, pointed out in an accompanying editorial that our guts are colonized by trillions of bacteria, the composition of which has been implicated in many conditions and is known to determine how well we harvest the energy from the foods we eat.
“Understanding the complex relationship among what we choose to eat, activity levels and gut microbiota richness is essential,” she writes. “As life expectancy continues to increase, it is important that we understand how best to maintain good health. Never has this been more important than in respect of our resident microbiota,” she said.
This study was published in the journal Gut.

Winter Road Salt Is Wreaking Havoc On Summer Butterflies

April Flowers for redOrbit.com – Your Universe Online

In the winter months, road salt is just one of those things we take for granted. It makes driving easier in icy and snowy conditions.

We already know that road salt takes a toll on lakes and rivers, but what is it doing to the organisms that live and forage at the edge of our roadways? Very little is understood about how the use of road salt, by altering patterns of sodium availability, affects the development and evolution of wild animals.

Emilie Snell-Rood, a behavioral and evolutionary biologist with the University of Minnesota’s College of Biological Sciences, led a new study suggesting that the availability of the micronutrient sodium might alter selection on foraging behavior for butterflies and other invertebrates that develop along roadsides. All living things require only small amounts of micronutrients such as sodium and iron, but these nutrients can play a large role in development.

“Salt is normally limited in availability and sodium plays an important role in development,” says Snell-Rood. “After experiencing my first Minnesota winter, I began wondering how road salt might be affecting the development of organisms along roadsides.”

Milkweed, the main food of monarch butterflies, grows in abundance along the country roads near Cedar Creek Ecosystem Science Reserve in East Bethel, MN, where the team collected their samples. This made the monarch a perfect model organism for the study, which has been published in the Proceedings of the National Academy of Sciences.

“We compared monarchs reared on roadside-collected and prairie-collected host plants,” says Snell-Rood. “Monarchs were chosen because milkweed is a common roadside plant and investment in sodium-rich muscle should be important for a migratory species like monarchs.”

Sodium concentrations in the tissue of some roadside plants were found to be elevated by as much as 30 times normal levels, and butterflies feeding on these plants experienced a similar elevation. The effects of this rise in sodium differ according to the sex of the butterfly, the team found, and can even have opposing effects on the same tissue. For instance, females saw gains in brain size, while males did not. Likewise, males had an increased investment in flight muscle while females exhibited the opposite pattern.

Moderation is the key for health in most organisms, and the same is true for butterflies with sodium. The study found that a slight rise in levels showed some benefits, but too much became toxic to the insects. In the control group, excessive sodium was found to lead to a significant rise in mortality. Snell-Rood would like to see further research in urban areas where road salt concentration would be higher. She believes the results of this future research would underscore the downsides for roadside dwelling organisms.

Ketchup And Other Tomato Products May Help Stave Off Heart Disease

Lawrence LeBlond for redOrbit.com – Your Universe Online
A Mediterranean diet has long been touted as a surefire way to improve cardiac health, but the underlying mechanism by which the diet helps keep the heart healthy has been a mystery. Now, researchers from the University of Cambridge say tomatoes may be the answer.
The researchers, publishing a paper in the journal PLOS ONE, have found that a daily supplement of an extract found in tomatoes may improve blood vessel function in patients with cardiovascular disease (CVD).
The incidence of cardiovascular disease varies around the world, but it is noticeably lower in southern Europe, where people largely subsist on ‘Mediterranean diet’ foods such as fruit, vegetables and olive oil. Recent studies suggest that this type of diet reduces the incidence of events related to CVD, including heart attack and stroke, even in patients who are at high cardiovascular risk or who have previously had the disease.
One fruit that is widespread in Mediterranean dishes is the tomato. Tomatoes, as well as other fruits, contain a powerful antioxidant called lycopene, which has been previously thought to play a role in reducing cardiovascular disease risk. Lycopene is ten times more potent than vitamin E and its potency appears to be enhanced when it is consumed pureed, as is done in ketchup or in the presence of olive oil. While there is strong evidence that supports the role of lycopene in reducing cardiovascular risk, the mechanism by which it does so has remained unclear.
Dr Joseph Cheriyan, of Addenbrooke’s Hospital and Associate Lecturer at Cambridge, worked with researchers at the Cambridge University Hospitals National Health Service Foundation Trust to demonstrate one mechanism by which lycopene is believed to reduce cardiovascular risk.
“There’s a wealth of research that suggests that the Mediterranean diet – which includes lycopene found in tomatoes and other fruit as a component – is good for our cardiovascular health. But so far, it’s been a mystery what the underlying mechanisms could be,” Cheriyan said in a statement.
Cheriyan and colleagues carried out a randomized, placebo-controlled trial investigating the effects of lycopene using a gold standard method of measuring the function of blood vessels called forearm blood flow, which is predictive of future cardiovascular risk. The study included 36 CVD patients and 36 healthy volunteers. The participants were randomly given either Ateronon – an off-the-shelf supplement containing seven mg of lycopene – or a placebo. As part of the double-blind trial, neither the participants nor the researchers were aware of which treatment was being provided.
All patients who had CVD were already on cholesterol-lowering drugs known as statins. However, all patients still had a relatively impaired function of the endothelium (inner lining of blood vessels) compared to the healthy participants. This function is determined by the response of blood vessels in the forearm to a naturally occurring molecule called acetylcholine. Since endothelial function predicts future events, a healthy endothelium is key to preventing future heart disease.
The research team found that seven mg of oral lycopene supplementation improved and normalized endothelial function in the CVD patients, but not in the healthy volunteers. After correction for placebo effects, lycopene improved the responses of the blood vessels to acetylcholine by more than half (53 percent) compared to baseline in those taking the pill. The team determined, however, that the supplement had no effect on blood pressure, arterial stiffness or lipid levels.
“We’ve shown quite clearly that lycopene improves the function of blood vessels in cardiovascular disease patients,” added Dr Cheriyan. “It reinforces the need for a healthy diet in people at risk from heart disease and stroke.”
Despite the outcome of the trial, Dr Cheriyan cautions against relying on lycopene alone as a way to reduce the risk of CVD.
“A daily ‘tomato pill’ is not a substitute for other treatments, but may provide added benefits when taken alongside other medication. However, we cannot answer if this may reduce heart disease – this would need much larger trials to investigate outcomes more carefully,” noted Dr Cheriyan.
“Impaired endothelial function is a known predictor of increased risk of future heart disease. Further work is needed to understand whether the beneficial effects seen in this small study translate into clinical benefit for at-risk patients,” added Prof Jeremy Pearson, Associate Medical Director at the British Heart Foundation.
The “tomato pill” used for the study comes from CamNutra, a Cambridge University-based company that developed Ateronon as a way to help protect people from CVD. The study was funded and sponsored by Cambridge University Hospitals NHS Foundation Trust, with further support from the Wellcome Trust, the British Heart Foundation and the National Institute for Health Research Cambridge Comprehensive Biomedical Research Centre.

Angry Face Adds Weight To Negotiation Tactics

Brett Smith for redOrbit.com – Your Universe Online
Research has found that facial expressions can convey more information than verbal communication alone, and a new Harvard University study shows that an angry glare can make a negotiator’s demands more effective.
Published in Psychological Science, the study found that an angry glare adds additional gravity to a negotiator’s threat to walk away from the talks. The researchers also saw that the glared-at party tended to offer more money than they otherwise would have.
“Our facial expressions are relatively more difficult to control than our words,” said study author Lawrence Ian Reed, a psychologist from Harvard. He added that because facial expressions are harder to control, they are seen as a better indication of a person’s motivations.
“In this way, facial expressions can carry the weight of our words,” Reed said.
When two parties enter into a negotiation, each wants to walk away with its demands met. At the same time, each wants those demands to be seen as reasonable, so as not to scare off the negotiating partner.
The researchers said they went into their study with the theory that an angry expression would add credibility to a person’s demands – and make it more believable that they would walk away if their demands weren’t met. The team also speculated that an angry glare would have no effect if the offer being made seemed reasonable to the other party.
To test their theories, the study team recruited 870 participants online and told them they would be playing a negotiation game in which they would give an offer on how to split a $1 sum. If a “responder” agreed to the split, each party would walk away with the amount they had agreed to receive. However, if the two parties couldn’t come to an agreement – neither party would receive any money.
Before making their proposition, each participant was shown a video threat that was said to come from the responder. The responder was actually a female actor, who was told to produce specific facial expressions in the video clips. One clip exhibited her making a neutral expression, while a different one showed her making an angry expression.
The videos went along with a written call for either a 50-50 split or a larger share of 70 percent, which would leave just 30 percent for the participant.
The researchers found that the responder’s facial expression did have a bearing on the amount proposed by the participant, but only when the responder asked for the larger share.
Facial expression had no impact on participants’ offers when the responder asked for an equal share, as the researchers had expected. The study team also found that participants offered larger amounts in reaction to the angry expression even when told it belonged to a “typical responder” rather than to their specific negotiating partner.
The scientists said they were amazed at how robust the effect was, regardless of the experimental setting.
“We created our anger expression by filming a deliberately posed expression rather than a spontaneously emitted one,” Reed said. “We were surprised to find that the expression had an effect even though it was literally faked.”
He added that his team’s results are relevant to numerous everyday situations.
“The idea that bargaining offers are mediated in part by emotions and motivations speaks towards the importance of emotions and their expression in any bargaining situation,” Reed said. “These include not only the division of resources, but also in buying a car or house, and/or disciplining students or children.”

Lunar Farside Highlands Mystery Solved By Penn State Team

John P. Millis, Ph.D. for redOrbit.com – Your Universe Online

Despite being the nearest astronomical object to Earth, and the only extraterrestrial site of human visitors, the Moon still contains mysteries that have puzzled scientists for the better part of a century. Perhaps the greatest question surrounding lunar history is its two-faced nature.

Due to the mutual gravitational interaction of the Earth-Moon system, the giant satellite is tidally locked with our planet, so we always see the same side. The familiar Earth-facing surface is rife with features known as “maria” – placid regions created when subsurface lava oozed through the crust to fill in craters left by ancient impacts.

However, when probes first imaged the far side of the Moon some 55 years ago, astronomers quickly noticed a distinct lack of these maria. The only conclusion is that the far side of the Moon somehow was formed and evolved differently than the near side. But how is that possible?

“I remember the first time I saw a globe of the moon as a boy, being struck by how different the farside looks,” said Jason Wright, assistant professor of astrophysics at Penn State University, in a statement. “It was all mountains and craters. Where were the maria? It turns out it’s been a mystery since the fifties.”

Previous studies suggested that perhaps the Earth once had two Moons, but that early on the two collided and formed into one – the differences in the sides resulting from the different histories of the two Moons prior to their merger. Now, Dr. Wright and his colleagues Steinn Sigurdsson (professor of astrophysics) and Arpita Roy (graduate student in astronomy and astrophysics) believe they have settled the problem once and for all.

The general consensus is that the Moon was formed when a Mars-sized planetoid collided with the Earth shortly after it formed. The glancing blow would have vaporized some of the Earth’s material, as well as that of the planetoid, and ejected part of Earth’s outer layers into space. Our planet’s gravity, though, would have pulled much of that debris into orbit around us, where it eventually formed what we now know as the Moon.

But rather than looking as they do now, both the Moon and the Earth would have been much hotter, and the two would have been much closer together. In fact, the Moon would have been 10 to 20 times closer to Earth than it is now. “The moon and Earth loomed large in each other’s skies when they formed,” said Roy.

At this distance, the roughly 2,500 degree Celsius Earth would have warmed the side of the Moon closest to us, which would have quickly become tidally locked. As the far side of the Moon cooled, the near side would have remained warm, heated by the Earth, which held its temperature much longer because of its larger size.

As the Moon was subsequently bombarded, craters on the near side would have been filled in by the molten lava lurking beneath the still-heated surface, while on the far side the crust would have grown too thick for impactors to penetrate. The result is a far side of the Moon covered in valleys, craters and highlands, but with virtually no maria.

Results of this research are published in Astrophysical Journal Letters.


Research Into MicroRNA Molecule Could Improve Treatment Of Depression

redOrbit Staff & Wire Reports – Your Universe Online

Levels of a tiny molecule known as miR-1202 are lower in the brains of people suffering from depression, suggesting that it could serve as a marker for depression and help detect which patients are most likely to respond to antidepressant treatment, experts from McGill University and the Douglas Institute claim in a new study.

Their research, which has been published by the journal Nature Medicine, shows that miR-1202 could ultimately improve treatment options for those suffering from the psychological condition. The molecule is only found in humans and other primates, lead author Dr. Gustavo Turecki and his colleagues explained.

Depression is a common disorder, and while there are many types of drugs that can be used to treat it, it can be difficult to determine which medication is right for which patient. A lot of trial and error is involved, the study authors noted, but this new research could help eliminate some of that guesswork and make treatment more efficient.

“Using samples from the Douglas Bell-Canada Brain Bank, we examined brain tissues from individuals who were depressed and compared them with brain tissues from psychiatrically healthy individuals,” Dr. Turecki, a psychiatrist at the Douglas Institute and a professor at the McGill University Department of Psychiatry, said in a statement Sunday.

“We identified this molecule, a microRNA known as miR-1202, only found in humans and primates and discovered that it regulates an important receptor of the neurotransmitter glutamate,” he added. Their discovery could provide “a potential target for the development of new and more effective antidepressant treatments.”

Dr. Turecki and his colleagues conducted a series of experiments demonstrating that antidepressants alter the levels of miR-1202. In clinical trials, depressed patients, who had lower levels of the microRNA than their non-depressed counterparts, were treated with the commonly prescribed antidepressant citalopram. The researchers found that miR-1202 levels increased during treatment and that the individuals no longer felt depressed.

While antidepressant drugs, which are the most prescribed medications in North America, “are clearly effective,” Dr. Turecki explained that “there is variability in how individuals respond to antidepressant treatment.” During their clinical trial, he and his colleagues “found that miR-1202 is different in individuals with depression and particularly, among those patients who eventually will respond to antidepressant treatment,” he added.

In related research published earlier this year, University of Cambridge professor Joe Herbert detailed how a saliva-based test could determine if teenage boys experiencing minor bouts of depression would be more likely to experience major depression later in life.

In that study, Herbert and his colleagues found that teenage boys with mild depression symptoms and elevated levels of the stress hormone cortisol were up to 14 times more likely to experience clinical depression as they grew older compared to those possessing low or normal cortisol levels. The work was deemed noteworthy because it offered a method of physically measuring a condition that has its roots within the brain.

Experiments Performed At COSY Accelerator Confirm Exotic Particle

Forschungszentrum Jülich

For decades, physicists have searched in vain for exotic bound states comprising more than three quarks. Experiments performed at Jülich’s accelerator COSY have now shown that, in fact, such complex particles do exist in nature. This discovery by the WASA-at-COSY collaboration has been published in the journal Physical Review Letters. The measurements confirm results from 2011, when the more than 120 scientists from eight countries discovered for the first time strong indications for the existence of an exotic dibaryon made up of six quarks.

For a long time, physicists were only able to reliably verify two different classes of hadrons: volatile mesons comprising one quark and one antiquark, and baryons consisting of three quarks. Protons and neutrons, which make up atomic nuclei, are examples of the latter. In recent years, however, there has been growing evidence for the existence of additional types of hadrons, for example hybrids, glueballs and multiquarks. In 1964, the physicist Freeman Dyson was the first to predict such complex states, but reliable verification proved impossible for many years because almost no measurements could be reproduced.

Only recently, other research groups – independently of each other – found strong indications for short-lived, exotic particles comprising four quarks, so-called “tetraquarks”. The new bound state, which has now been verified at COSY, means that yet another class of exotic particles has been identified. “The new resonance that we observed confirms that quarks really do exist in six-packs. This discovery could open the door to new physical phenomena,” says group spokesman Prof. Heinz Clement from the University of Tübingen.

The structure that was first discovered in 2011 is extremely short-lived and could only be detected via its decay products. The transient intermediate state – technical term: resonance – exists for a mere hundred-sextillionth (10 to the power of -23) of a second before it decays. This time span is so short that, for example, light can travel just a distance equivalent to the diameter of a small atomic nucleus.
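As a quick back-of-the-envelope check of that comparison (our arithmetic, not a figure from the paper), the distance light covers in that lifetime is
\[ d = c\,t \approx \left(3\times10^{8}\ \mathrm{m/s}\right)\left(10^{-23}\ \mathrm{s}\right) = 3\times10^{-15}\ \mathrm{m} \approx 3\ \mathrm{fm}, \]
which is indeed on the order of the diameter of a small atomic nucleus.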

Whether all six quarks form a single compact entity or rather a “hadronic molecule” has yet to be clarified. The latter would be composed of several nuclear building blocks – for example of excited protons and neutrons bound to each other – yet much more strongly than inside an atomic nucleus.

“The measurements that we performed at COSY in 2011 were already very precise. But because the experiments could not be repeated at any other accelerator worldwide, we had to come up with another experiment to verify the results,” explains Prof. Hans Ströher, director at the Nuclear Physics Institute (IKP-2) in Jülich.

In order to gain further unequivocal evidence of the exotic resonance named d*(2380), the scientists scanned the relevant energy range in an elastic scattering experiment. They bombarded a proton target with polarized, heavy hydrogen nuclei known as deuterons. The exotic bound state formed during the collision influenced the angle at which the particles moved away from each other after the collision, thus allowing it to be identified.

“The findings are part of a bigger picture. If this particle exists, then theoretically a whole range of other exotic states can be expected,” says Prof. James Ritman, director at Jülich’s IKP-1. The nuclear physicist is in charge of Jülich’s contribution to the PANDA detector at the international accelerator complex FAIR in Darmstadt, where such exotic structures will be explored in more detail.

Asteroid Dubbed “The Beast” Expected To Approach Earth This Weekend

Brett Smith for redOrbit.com – Your Universe Online

Last year, a massive asteroid that had gone undetected exploded over Chelyabinsk, Russia, in a huge fireball. Now another, much bigger asteroid, spotted in April, is expected to buzz past Earth this weekend.

While the incoming HQ124 asteroid, aka “the Beast,” is not expected to make contact with the Earth’s atmosphere, professional skywatchers will be keeping close tabs on the massive space rock nonetheless.

“What’s disconcerting is that a rocky/metallic body this large, and coming so very close, should have only first been discovered this soon before its nearest approach,” Bob Berman, an astronomer with the online astronomy collective Slooh, told National Geographic. “HQ124 is at least 10 times bigger, and possibly 20 times, than the asteroid that injured a thousand people last year in Chelyabinsk.”

According to reports, the Beast is nearly 1,100 feet wide – about three-and-a-half football fields.

“If it were to impact us, the energy released would be measured not in kilotons like the atomic bombs that ended World War II, but in H-bomb type megatons,” he added.

Slooh will be providing a video feed of the flyby, with coverage that began on Thursday night. Broadcast from Australia, the feed will include time-lapse images from a Slooh robotic observatory in Chile. The video feed should show HQ124 passing about three lunar distances away from Earth at a speed of about 31,000 miles per hour.

In May, Slooh announced a partnership with NASA designed to inspire amateur astronomers to look for threatening asteroids. As a part of NASA’s Asteroid Grand Challenge, Slooh will encourage amateur astronomers to check its telescope data for near-Earth asteroids.

Sky surveys are tracking around 90 percent of the potentially dangerous asteroids that are 3,200 feet or more in width. Space rocks of that size could demolish continents on impact. Just 30 percent of the 460-foot rocks have been tracked, and less than 1 percent of the 98-foot Earth-orbit crossers have been spotted thus far.

While smaller rocks may not be devastating, they have the possibility to damage or level entire cities. The Chelyabinsk asteroid strike caused damage to buildings and blew out windows across the region.

According to a study published last month, the asteroid that exploded over Chelyabinsk may have had a collision with another asteroid that sent it hurtling toward Earth.

According to an analysis of a mineral called jadeite found embedded in fragments recovered after the explosion, the parent body of the meteor had collided with a larger asteroid more than 490 feet wide at a relative speed of about 3,000 mph.

“This impact might have separated the Chelyabinsk asteroid from its parent body and delivered it to the Earth,” wrote lead author Shin Ozawa, from Tohoku University in Japan.

Jadeite is only formed under extreme pressure and high temperatures. The jadeite found in the Chelyabinsk meteorite fragments was formed under pressures of at least 3 to 12 gigapascals during a shock that was longer than 70 milliseconds, according to the study.

Scientists are still analyzing the fragments of the Chelyabinsk meteor and calculating its path toward Earth.

Bromine Discovered To Be Essential For Life

Rebekah Eliason for redOrbit.com – Your Universe Online

Previously, scientists considered twenty-seven elements to be essential for human life. Now they have discovered that bromine is the twenty-eighth.

Researchers from Vanderbilt University have shown for the first time that, out of the 92 naturally occurring elements, bromine is essential for tissue development in all animals ranging from sea creatures to humans.

“Without bromine, there are no animals. That’s the discovery,” said Billy Hudson, Ph.D., the paper’s senior author and Elliott V. Newman Professor of Medicine.

Led by co-first authors Scott McCall, Christopher Cummings, Ph.D., and Gautam (Jay) Bhave, M.D., Ph.D., the research team discovered that fruit flies perish without bromine in their diet, but live when it is present.

This discovery is important for treating human disease. “Multiple patient groups … have been shown to be bromine deficient,” said McCall, an M.D./Ph.D. student. Patients on dialysis or total parenteral nutrition (TPN) may improve with bromine supplementation.

This study is part of a series of papers published by the Vanderbilt team that have improved the definition of collagen IV scaffolds, which undergird the basement membrane of all tissues, including the kidney’s filtration units.

Hudson noted that the foundation for this current discovery started 30 years ago when he was at the University of Kansas Medical School.

His interest in two rare kidney diseases led to the discovery of two new proteins that twist around each other to form the triple-helical collagen IV molecule. This unique structure acts much like the cables supporting a bridge; when these protein cables are defective or damaged, disease often follows.

In 2002 Hudson moved to Vanderbilt and continued his work.

In 2009, a study led by Roberto Vanacore, Ph.D., discovered a sulfilimine bond between sulfur and nitrogen atoms. This bond acts as a “fastener” connecting the collagen IV molecules that form scaffolds for cells.

It was also discovered that a defective bond may trigger Goodpasture’s syndrome, a rare autoimmune disorder. The bond discovery prompted researchers to ask how the bond is formed. In 2012, Bhave, an assistant professor of Medicine, together with Cummings and Vanacore, led a team of researchers that found the answer: the enzyme peroxidasin.

If this enzyme becomes overactive, it may play a role in disease by excessive deposition of collagen IV and thickening of the basement membrane, which can cause impaired kidney function.

In this current study, researchers showed the essential and unique role of ionic bromide to enable peroxidasin to form the sulfilimine bond.

The authors report that consequently, the element bromine is “essential for animal development and tissue architecture.”

This study was published in the journal Cell.

International Collaboration Completes First-Ever Sheep Genome Sequence

April Flowers for redOrbit.com – Your Universe Online

The first complete sequence of the sheep genome has been produced by an international team of scientists after eight years of collaboration.

The 73 team members from 26 institutions across eight countries included scientists from the Human Genome Sequencing Center at Baylor College of Medicine, CSIRO Australia, BGI Shenzhen in China and the University of Edinburgh, among many others. Their findings, published online in Science, shed new light on the species’ unique and specialized digestive and metabolic systems.

Sheep are very important to the agriculture industry, as they provide a major source of meat, milk and fiber in the form of wool. In Australia alone, there are more than 70 million sheep. Globally, the number reaches close to one billion. Because of such abundance, the results of this study could have a massive impact for the rural economy of many countries.

“We investigated the completed genome to determine which genes are present in a process called gene annotation, which resulted in an advanced understanding of the genes involved in making sheep the unique animals that they are,” said CSIRO project leader Dr Brian Dalrymple.

“Given the importance of wool production, we focused on which genes were likely to be involved in producing wool. We identified a new pathway for the metabolism of lipid in sheep skin, which may play a role in both the development of wool and in the efficient production of wool grease (lanolin).”

The researchers identified a previously unknown gene representing a subfamily of the late cornified envelope (LCE) genes. This gene, called LCE7A, is expressed in the skin of sheep, cattle and goats, but not in the rumen (the first chamber of the stomach, which helps digest plant material). They also found that LCE7A is under positive selection in sheep, and they believe the expansion of this gene family is associated with wool formation.

BGI researchers also examined the MOGAT pathway in sheep skin, suspecting it might facilitate wool production. They found that MOGAT2 and MOGAT3 had undergone tandem gene duplication in sheep and were expressed in the skin rather than the liver. In humans, MOGAT3 encodes an important liver enzyme. This suggests that the loss of MOGAT2 and MOGAT3 activity in the sheep liver may reduce the liver’s importance in metabolizing long-chain fatty acids in ruminants compared to non-ruminants.

The team, called the International Sheep Genomics Consortium, revealed another key finding involving the rumen, which is essential for the animals to convert hard-to-digest plant materials into animal protein. The rumen, thought to have evolved around 35-40 million years ago, has a tough, keratin-rich surface that is similar to skin. The researchers identified many genes normally expressed in the skin that were also highly expressed in the rumen, as well as a number of new genes that are rumen-specific. Among this latter group, one gene appears to be present in most mammals, but so far has only been identified as being expressed in ruminants.

For the study, the research group assembled the reference genome from two Texel sheep. They conducted RNA-Seq on 94 tissue samples collected from 40 tissues, including 83 from additional Texel sheep. The reference genome is approximately 2.61Gb, with ~99 percent anchored onto the 26 autosomes and the X chromosome. Many segmental duplications were found in the sheep genome. The team identified 4,850 single-copy orthologous genes when they compared the sheep genome to the sequences of goat, cattle, yak, pig, camel, horse, dog, mouse, opossum and human proteins. From these genes, they constructed a phylogenetic tree.

The study is the first to pinpoint the unique genetic qualities that differentiate sheep from other animals, and may someday lead to the development of DNA testing to speed-up selective breeding programs to improve stock. The results might also reveal new insights into diseases that affect sheep.

According to Professor Alan Archibald, Head of Genetics and Genomics at The Roslin Institute, “Sheep were one of the first animals to be domesticated for farming and are still an important part of the global agricultural economy. Understanding more about their genetic make-up will help us to breed healthier and more productive flocks.”

Sheep are also an important biomedical model, and the genome sequence provided by the research team will help build a strong foundation for the detailed exploration of the similarities and differences between sheep and humans at the molecular level. The research team hopes that this will lead to improved medical therapies for a number of conditions, such as sepsis and asthma.

Dad’s Alcohol Consumption Could Influence Sons’ Drinking

University of Pittsburgh Schools of the Health Sciences

Even before conception, a son’s vulnerability for alcohol use disorders could be shaped by a father who chronically drinks to excess, according to a new animal study from the University of Pittsburgh School of Medicine. The findings, published online Wednesday in PLOS ONE, show male mice that were chronically exposed to alcohol before breeding had male offspring that were less likely to consume alcohol and were more sensitive to its effects, providing new insight into inheritance and development of drinking behaviors.

Previous human studies indicate that alcoholism can run in families, particularly father to son, but to date only a few gene variants have been associated with Alcohol Use Disorder and they account for only a small fraction of the risk of inheriting the problem, said senior investigator Gregg E. Homanics, Ph.D., professor of anesthesiology and pharmacology & chemical biology, Pitt School of Medicine.

“We examined whether a father’s exposure to alcohol could alter expression of the genes he passed down to his children,” Dr. Homanics said. “Rather than mutation of the genetic sequence, environmental factors might lead to changes that modify the activity of a gene, which is called epigenetics. Our mouse study shows that it is possible for alcohol to modify the dad’s otherwise normal genes and influence consumption in his sons, but surprisingly not his daughters.”

In the study, he and lead author Andrey Finegersh, an M.D./Ph.D. student in the Department of Pharmacology & Chemical Biology graduate program, chronically exposed male mice to intermittent ethanol vapor over five weeks, producing blood alcohol levels slightly higher than the legal limit for human drivers. They then mated the males with females that had not been exposed to alcohol.

Compared to those of ethanol-free sires, adult male offspring of ethanol-exposed mice consumed less alcohol when it was made available and were less likely to choose to drink it over water. They were also more sensitive to alcohol’s effects on motor control and anxiety reduction.

“We suspected that the offspring of alcohol exposed sires would have an enhanced taste for alcohol, which seems to be the pattern for humans,” Mr. Finegersh said. “Whether the unexpected reduction in alcohol drinking that was observed is due to differences between species or the specific drinking model that was tested is unclear.”

The researchers plan to examine other drinking models such as binge drinking, identify how alcohol modifies the genes, and explore why female offspring appear unaffected.

Component From Artificial Sweetener Truvia Could Be Used As An Insecticide

Brett Smith for redOrbit.com – Your Universe Online
A study in the journal PLOS ONE that began as a sixth-grade science fair project could lead to the development of a powerful, yet safe, insecticide – erythritol, the main component of the sweetener Truvia®.
Erythritol is a naturally occurring sugar alcohol, which attracts flies even when other foods are available. The substance would make for a particularly useful insecticide because it is safe for human consumption.
Ninth grade student Simon D. Kaschock-Marenda said he was motivated to study sugar substitutes in order to understand why his parents stopped eating white sugar when trying to eat healthier.
“He asked if he could test the effects of different sugars and sugar substitutes on fly health and longevity for his science fair, and I said, ‘Sure!’” said Simon’s co-author and father Daniel Marenda, an assistant professor of biology at Drexel University, in a recent statement.
Using “baby” flies and growth medium from Marenda’s lab, Simon raised flies in several different types of sweeteners at home.
“After six days of testing these flies in our house, he came back to me and said, ‘Dad, all the flies in the Truvia® vials are dead…'” Marenda said. “To which I responded, ‘OK…we must have screwed up somehow. Let’s repeat the experiment!'”
Working in his own lab, Marenda found flies raised on food with erythritol survived for an average of 5.8 days, as opposed to 38.6 to 50.6 days for flies raised on foods without erythritol. Flies fed erythritol also exhibited noticeable motor impairments ahead of their deaths.
“Indeed what we found is that the main component of Truvia, the sugar erythritol, appears to have pretty potent insecticidal activity in our flies,” Marenda said.
“I feel like this is the simplest, most straightforward work I’ve ever done, but it’s potentially the most important thing I’ve ever worked on,” added Sean O’Donnell, a professor of biology and biodiversity, earth and environmental science at Drexel.
The team determined that the toxic effect did not come from the stevia plant extract, which is present in both Truvia® and the sweetener PureVia®; PureVia® was also part of the tests and had no toxic effect on the flies.
“We are not going to see the planet sprayed with erythritol and the chances for widespread crop application are slim,” O’Donnell said. “But on a small scale, in places where insects will come to a bait, consume it and die, this could be huge.”
The team noted that they cannot yet confirm which insects erythritol might kill besides fruit flies, or how its toxic effects work. Erythritol is produced naturally by some insects, which use it as an anti-freeze to protect their bodies against the cold. However, that tolerance may not matter much, as the study showed that certain doses can still be toxic.
The scientists plan on conducting additional trials with other insects such as termites, cockroaches, bed bugs and ants. They will also test erythritol’s toxicity as it moves up the food chain by experimenting on praying mantids and other organisms that eat fruit flies.

Disparities In Actual Sugar Content Found In Popular Soft Drinks

Rebekah Eliason for redOrbit.com – Your Universe Online

A new study by the Childhood Obesity Research Center (CORC) at the Keck School of Medicine of the University of Southern California has found that soda drinkers are consuming much more fructose than soda labels imply.

In this study, the chemical composition of 34 popular soda and juice drinks made with high fructose corn syrup (HFCS) was analyzed. The researchers discovered that drinks such as Coca-Cola, Pepsi, Dr Pepper, Mountain Dew and Sprite all contain roughly 50 percent more fructose than glucose. This ratio provides evidence against the claim that HFCS and sugar are effectively the same.

Michael Goran, PhD, director of the CORC and lead author of the study said, “We found what ends up being consumed in these beverages is neither natural sugar nor HFCS, but instead a fructose-intense concoction that could increase one’s risk for diabetes, cardiovascular disease and liver disease. The human body isn’t designed to process this form of sugar at such high levels. Unlike glucose, which serves as fuel for the body, fructose is processed almost entirely in the liver where it is converted to fat.”

A trade group representing HFCS producers, the Corn Refiners Association, has adamantly claimed that there is only a negligible difference between HFCS and natural sugar (sucrose), which is composed of equal amounts of fructose and glucose. According to Goran’s beverage analysis, however, the drinks made with HFCS had a 60-to-40 fructose-to-glucose ratio. This is a much different ratio than is found in natural sugar, which challenges the industry’s claim that “sugar is sugar.”
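As a simple arithmetic check of those figures (our calculation, not one taken from the study), a 60-to-40 mix means
\[ \frac{\text{fructose}}{\text{glucose}} = \frac{60}{40} = 1.5, \]
that is, roughly 50 percent more fructose than glucose, compared with the 1-to-1 ratio found in sucrose.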

The team also found that the product labels on some drinks do not accurately disclose the fructose content. For example, Pepsi Throwback is labeled as containing natural sugar, but the analysis by Goran’s team found that it contained more than 50 percent fructose. Other drinks, such as Sierra Mist, Gatorade and Mexican Coca-Cola, also contained higher fructose concentrations than their labels imply, indicating that HFCS was probably used in their production without being disclosed.

In this study, the team purchased drinks for analysis based on product popularity. The sugar composition of each beverage was analyzed in three different laboratories using three different methods. Across all of the methods, results consistently showed that the average sugar composition of drinks containing HFCS was 60 percent fructose and 40 percent glucose.

Over the last three decades, Americans have doubled their consumption of HFCS and now consume more of it per capita than any other nation. Over the same period, American diabetes rates have tripled. A great portion of this increase is attributed to the mass consumption of soda, sports drinks and energy drinks.

“Given that Americans drink 45 gallons of soda a year, it’s important for us to have a more accurate understanding of what we’re actually drinking, including specific label information on the types of sugars,” said Goran.

This study was published online in the journal Nutrition.

Free-Form Gestures Could Be Passwords Of The Future

Peter Suciu for redOrbit.com – Your Universe Online

Security experts have long recommended using a “strong” password on a PC, but complex, hard-to-crack passwords have become even more important with the rise of mobile devices such as phones and tablets. While hackers can try to break into a system remotely, mobile users also need to watch out for prying eyes.

“All it takes to steal a password is a quick eye,” said Janne Lindqvist, assistant professor in Rutgers School of Engineering’s Department of Electrical and Computer Engineering, in a statement. “With all the personal and transactional information we have on our phones today, improved mobile security is becoming increasingly critical.”

Studies have shown that traditional passwords based on alpha-numeric sequences can be difficult to remember, yet all too easy to observe as they are typed on a device. Clearly, there is a need for more robust password security.

The new Rutgers study, which was led by Lindqvist, found that free-form gestures – what we might think of as just “squiggly lines” – could be used to unlock phones and grant access to apps. These gestures could be made by sweeping one’s finger across the device’s screen to make a series of shapes.


According to the findings, these gestures could be harder to copy than traditional typed passwords, or even the newer “connect-the-dots” grid patterns, each of which can be observed and subsequently reproduced by so-called “shoulder surfers” who spy on users to gain access to a device or a website.

Lindqvist said this could be the first study of its kind to explore how free-form gestures could be used as passwords. The Rutgers researchers, along with collaborators from the Max Planck Institute for Informatics (including Antti Oulasvirta) and the University of Helsinki, studied the viability of free-form gestures for access authentication and found that users could create shapes that act as a form of password.

Since these shapes can be created without following a template, the researchers predicted that the gestures could have greater complexity than grid-based alternatives.

“You can create any shape, using any number of fingers, and in any size or location on the screen,” Lindqvist added. “We saw that this security protection option was clearly missing in the scientific literature and also in practice, so we decided to test its potential.”

During the study, the researchers tested the security of the gestures by having seven computer science and engineering students, each with considerable touchscreen experience, attempt to steal a free-form gesture password by shoulder surfing. None of the participants was able to replicate the gestures with enough accuracy, which the researchers said suggests the gestures are extremely resistant to such attacks.

The researchers also tested the ‘memorability’ of free-form gestures and developed a method to measure the complexity and accuracy of the gestures as passwords, although some questions remain.
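The article does not describe the team’s recognizer or complexity metric, but a common baseline for matching gesture traces is to resample each trace to a fixed number of points, normalize for position and scale, and compare the result to an enrolled template. The sketch below only illustrates that generic approach; the function names and the acceptance threshold are assumptions for illustration, not the Rutgers method.

```python
# Minimal, hypothetical sketch of template matching for free-form gesture
# authentication. Not the recognizer from the Rutgers study; the names and
# the threshold below are illustrative assumptions.

import math
from typing import List, Tuple

Point = Tuple[float, float]

def resample(points: List[Point], n: int = 64) -> List[Point]:
    """Resample a gesture trace to n evenly spaced points along its path."""
    dists = [math.dist(points[i - 1], points[i]) for i in range(1, len(points))]
    total = sum(dists)
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    out = [points[0]]
    acc = 0.0
    prev = points[0]
    i = 1
    while len(out) < n and i < len(points):
        d = math.dist(prev, points[i])
        if acc + d >= step and d > 0:
            t = (step - acc) / d
            q = (prev[0] + t * (points[i][0] - prev[0]),
                 prev[1] + t * (points[i][1] - prev[1]))
            out.append(q)          # insert an interpolated point on the path
            prev = q
            acc = 0.0
        else:
            acc += d
            prev = points[i]
            i += 1
    while len(out) < n:            # pad if rounding left us short
        out.append(points[-1])
    return out

def normalize(points: List[Point]) -> List[Point]:
    """Translate to the centroid and scale to a unit bounding box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    shifted = [(x - cx, y - cy) for x, y in points]
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def gesture_distance(a: List[Point], b: List[Point]) -> float:
    """Mean point-to-point distance between two preprocessed gestures."""
    ra, rb = normalize(resample(a)), normalize(resample(b))
    return sum(math.dist(p, q) for p, q in zip(ra, rb)) / len(ra)

def authenticate(template: List[Point], attempt: List[Point],
                 threshold: float = 0.15) -> bool:
    # The threshold is an arbitrary illustrative value; a real system would
    # tune it on enrollment data to balance false accepts and rejects.
    return gesture_distance(template, attempt) < threshold
```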

For one, “reliable ‘replicability’ would be a major challenge for most people,” said Charles King, principal analyst at Pund-IT. “In fact, I expect that the level of frustration would eventually be so high that extending the middle finger of one hand might become the gesture equivalent of ‘1,2,3,4’ in written passwords.

“It’s likely that the folks researching this have something more complex in mind,” King told redOrbit. “But if this succeeds and becomes commonplace, standing in line at the ATM could become the equivalent of visiting the Ministry of Silly Walks.”

The researchers will publish their findings later this month in the proceedings of MobiSys ’14, an international conference on mobile computing.

How Does High Blood Pressure In Middle Age Affect Memory In Old Age?

American Academy of Neurology
New research suggests that high blood pressure in middle age plays a critical role in whether blood pressure in old age may affect memory and thinking. The study is published in the June 4, 2014, online issue of Neurology, the medical journal of the American Academy of Neurology.
“Our findings bring new insight into the relationship between a history of high blood pressure, blood pressure in old age, the effects of blood pressure on brain structure, and memory and thinking,” said study author Lenore J. Launer, PhD, of the National Institute on Aging in Bethesda, Md., and a member of the American Academy of Neurology.
For the study, 4,057 older participants free of dementia had their blood pressure measured in middle age (average age of 50). In late life (average age of 76), their blood pressure was remeasured and participants underwent MRI scans that assessed brain structure and damage to the small vessels in the brain. They also took tests that measured their memory and thinking ability.
The study found that the association of blood pressure in old age to brain measures depended on a history of blood pressure in middle age. Higher systolic (the top number on the measure of blood pressure) and diastolic (the bottom number on the measure of blood pressure) blood pressure were associated with increased risk of brain lesions and tiny brain bleeds. This was most noticeable in people without a history of high blood pressure in middle age. For example, people with no history of high blood pressure in middle age who had high diastolic blood pressure in old age were 50 percent more likely to have severe brain lesions than people with low diastolic blood pressure in old age.
However, in people with a history of high blood pressure in middle age, lower diastolic blood pressure in older age was associated with smaller total brain and gray matter volumes. This finding was reflected in memory and thinking performance measures as well. In people with high blood pressure in middle age, lower diastolic blood pressure was associated with 10 percent lower memory scores.
“Older people without a history of high blood pressure but who currently have high blood pressure are at an increased risk for brain lesions, suggesting that lowering of blood pressure in these participants might be beneficial. On the other hand, older people with a history of high blood pressure but who currently have lower blood pressure might have more extensive organ damage and are at risk of brain shrinkage and memory and thinking problems,” said Launer.

Commercial Orbital Transportation Services Final Report Released By NASA

NASA

With two companies now providing commercial cargo launch services for the International Space Station, NASA is issuing its final report on the now-complete Commercial Orbital Transportation Services (COTS) program that laid the groundwork for those flights.

The report, titled “Commercial Orbital Transportation Services, A New Era in Spaceflight” (NASA/SP-2014-617), documents the work of NASA’s Commercial Crew & Cargo Program Office (C3PO) between 2005 and 2013 to partner with private industry to take over more routine operations in low-Earth orbit. This move toward more cooperative engagement with industry partners allowed NASA to focus more on scientific research, technology development and exploration goals.


The effort represented the culmination of years, even decades, of initiatives to encourage the growth of the private spaceflight sector in the US. Similar relationships have been used throughout US history, such as the 1925 Contract Air Mail Act that provided incentives for commercial aviation by allowing the US Post Office to contract with private companies.

Commercial companies have been involved in NASA programs as contractors since NASA’s founding in 1958. In the 1980s the agency began actively to study ways to turn over routine space operations to the private sector. The planned retirement of the space shuttle, announced in 2004, accelerated these efforts and led to establishment of the COTS program.

Under COTS partnerships, two companies – Space Exploration Technologies (SpaceX) of Hawthorne, California, and Orbital Sciences Corp. of Dulles, Virginia – developed and demonstrated capabilities to transport cargo to low-Earth orbit.

Because these were partnerships, not traditional contracts, NASA leveraged its $800M COTS program budget with partner funds. This resulted in two new US medium-class launch vehicles and two automated cargo spacecraft and demonstrated the efficiency of such partnerships.

In 2008, NASA began the transition from partner to customer by awarding two contracts — one to Orbital and one to SpaceX — for commercial resupply services to the space station. At the time of award, NASA ordered eight flights from Orbital valued at about $1.9 billion and 12 flights from SpaceX valued at about $1.6 billion.

SpaceX’s Dragon became the first commercial vehicle to fly cargo to the space station under a commercial cargo contract with NASA in October 2012 and was followed by Orbital’s Cygnus spacecraft in January 2014. So far, SpaceX has flown three successful cargo delivery missions, and Orbital is ready to launch its second this summer.

The final report chronicles the historical foundations of the COTS program, how NASA selected and supported its partners, the partners’ COTS development and demonstration efforts and the evolution of COTS into both the Commercial Cargo Service contracts and NASA’s Commercial Crew Program.

Making The Internet Of Things Safer With New Tool Developed By Computer Scientists

University of California, San Diego
Computer scientists at the University of California, San Diego, have developed a tool that allows hardware designers and system builders to test security, a first for the field. One of the tool’s potential uses is described in the May-June issue of IEEE Micro magazine.
“The stakes in hardware security are high,” said Ryan Kastner, a professor of computer science at the Jacobs School of Engineering at UC San Diego.
There is a big push to create the so-called Internet of Things, where all devices are connected and communicate with one another. As a result, embedded systems—small computer systems built around microcontrollers—are becoming more common. But they remain vulnerable to security breaches. Some examples of devices that may be hackable: medical devices, cars, cell phones and smart grid technology.
“Engineers traditionally design devices to be fast and use as little power as possible,” said Jonathan Valamehr, a postdoctoral researcher in the Department of Computer Science and Engineering at UC San Diego. “Oftentimes, they don’t design them with security in mind.”
The tool, based on the team’s research on Gate-level Information Flow Tracking, or GLIFT, tags critical pieces of data in a hardware design and tracks them as they move through the system. The tool leverages this technology to detect security-specific properties within a hardware system. For example, it can make sure that a cryptographic key does not leak outside a chip’s cryptographic core.
There are two main threats in hardware security. The first is confidentiality. In some types of hardware, one can determine a device’s cryptographic key based on the amount of time it takes to encrypt information. The tool can detect these so-called timing channels that can compromise a device’s security. The second threat is integrity, where a critical subsystem within a device can be affected by non-critical ones. For example, a car’s brakes can be affected by its CD player. The tool can detect these integrity violations as well.
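The article does not detail how GLIFT works internally, but the general idea in the published literature is to attach a “taint” bit to each signal and propagate it through every logic gate, taking into account whether a tainted input can actually influence the gate’s output. The Python sketch below is a toy illustration of that idea; the types and function names are invented for demonstration and are not part of the UC San Diego tool or Tortuga Logic’s product.

```python
# Toy illustration of gate-level taint tracking (the GLIFT idea).
# Not the UC San Diego / Tortuga Logic tool: names and API are invented.

from typing import NamedTuple

class Signal(NamedTuple):
    value: int  # logic value, 0 or 1
    taint: int  # 1 if the value depends on secret/critical data

def glift_not(a: Signal) -> Signal:
    # Inverting a value never hides its origin, so taint passes through.
    return Signal(value=1 - a.value, taint=a.taint)

def glift_and(a: Signal, b: Signal) -> Signal:
    value = a.value & b.value
    # A tainted input only taints the output if it can actually change it:
    # an untainted 0 on the other input forces the output to 0.
    taint = (a.taint & b.taint) | (a.taint & b.value) | (b.taint & a.value)
    return Signal(value=value, taint=taint)

def glift_or(a: Signal, b: Signal) -> Signal:
    value = a.value | b.value
    # Dually, an untainted 1 on one input forces the output to 1,
    # masking whatever the tainted input does.
    taint = (a.taint & b.taint) | (a.taint & (1 - b.value)) | (b.taint & (1 - a.value))
    return Signal(value=value, taint=taint)

if __name__ == "__main__":
    secret_key_bit = Signal(value=1, taint=1)  # critical data that must not leak
    enable_off = Signal(value=0, taint=0)      # untainted control signal

    # enable == 0 forces the AND output to 0, so no key information escapes.
    print(glift_and(secret_key_bit, enable_off))            # Signal(value=0, taint=0)

    # With the gate enabled, the key bit reaches the output and stays tainted.
    print(glift_and(secret_key_bit, Signal(value=1, taint=0)))  # Signal(value=1, taint=1)
```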
Valamehr, Kastner, and Ph.D. candidate Jason Oberg started a company named Tortuga Logic to commercialize this technology. The company is currently working with two of the top semiconductor companies in the world. Their next step is to focus on medical devices, computers in cars, and military applications.
The team was recently awarded a $150,000 grant from the National Science Foundation to grow the business and further its research.

Children See Improvement In Language When They Are Physically Fit

Gerard LeBlond for www.redorbit.com – Your Universe Online

Physically fit children are not only healthier, they have faster and more robust neuro-electrical brain responses while reading, according to a new study by researchers from the University of Illinois. The findings were published in the Brain and Cognition journal.

Although the research doesn’t prove that higher fitness directly causes the changes in the brain’s electrical activity, it does offer a mechanism to explain why physical fitness is so closely associated with improved cognitive performance on a variety of tasks, including language skills.

The difference between physically fit and unfit children is that the fit children showed better language-related responses. The study also revealed no difference in this effect whether the child was reading correct sentences or ones containing errors.

“All we know is there is something different about higher and lower fit kids,” said University of Illinois kinesiology and community health professor Charles Hillman, who led the research with graduate student Mark Scudder and psychology professor Kara Federmeier. “Now whether that difference is caused by fitness or maybe some third variable that (affects) both fitness and language processing, we don’t know yet.”

The research used electroencephalography (EEG): an electrode cap placed on the participant’s head captures the brain’s electrical impulses, and the data are converted to a readout that resembles the seismic traces recorded during an earthquake. These lines vary with the different tasks performed by the person.

These line patterns are called event-related potentials (ERPs), and they vary with each person being evaluated. The study focused on two brain waveforms: the N400, which is associated with processing the meaning of words while reading a sentence, and the P600, which is associated with the grammatical rules of a sentence.
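For readers unfamiliar with how waveforms such as the N400 and P600 are extracted: an ERP is obtained by averaging many short EEG segments time-locked to a stimulus (here, a word), so that activity unrelated to the stimulus averages out. The snippet below is a generic sketch of that averaging step, not the analysis pipeline used in the Illinois study; the function name and parameters are illustrative.

```python
# Generic ERP averaging sketch (illustrative only, not the study's pipeline).
import numpy as np

def compute_erp(eeg: np.ndarray, event_samples: np.ndarray,
                pre: int = 100, post: int = 600) -> np.ndarray:
    """Average EEG epochs time-locked to each event.

    eeg           : 1-D array of samples from one electrode
    event_samples : sample indices at which the word of interest appeared
    pre, post     : samples kept before and after each event
    """
    epochs = []
    for s in event_samples:
        if s - pre < 0 or s + post > len(eeg):
            continue  # skip events too close to the recording edges
        epoch = eeg[s - pre:s + post]
        epoch = epoch - epoch[:pre].mean()  # baseline-correct with pre-stimulus data
        epochs.append(epoch)
    return np.mean(epochs, axis=0)  # the event-related potential

# Example with synthetic data: events every 1000 samples.
rng = np.random.default_rng(0)
eeg = rng.normal(size=100_000)
events = np.arange(1000, 99_000, 1000)
erp = compute_erp(eeg, events)
print(erp.shape)  # (700,)
```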

“We focused on the N400 because it is associated with the processing of the meaning of a word. Previous reports have shown that greater N400 amplitude is seen in higher-ability readers,” Scudder said.

“And then we also looked at another ERP, the P600, which is associated with the grammatical rules of a sentence,” Federmeier added.

By evaluating these two waveforms, the researchers found that children who were more physically fit (those taking in more oxygen while exercising) had larger N400 and P600 responses than the less-fit children while reading normal sentences, signifying that they processed the information more quickly. This led the researchers to believe that fitness supports reading performance and language comprehension.

Compared to previous studies, “our study shows that the brain function of higher fit kids is different, in the sense that they appear to be able to better allocate resources in the brain towards aspects of cognition that support reading comprehension,” Hillman said.

More research needs to be done, according to Hillman, but the new study does show a link between fitness and healthy brain function.

Previous studies with children and adults, “have repeatedly demonstrated an effect of increases in either physical activity in one’s lifestyle or improvements in aerobic fitness, and the implications of those health behaviors for brain structure, brain function and cognitive performance,” Hillman concluded.

Specialized Leg Organs Help Spiders Glean Information From Their Silk

Brett Smith for redOrbit.com – Your Universe Online
A spider’s web is more than just a home or a trap for unsuspecting prey; it is also a communications network capable of telling a spider about prey, mates, and the web’s own structural integrity.
According to a new report in the journal Advanced Materials, spider silk can be tuned to a wide range of harmonics and these various frequencies provide a wealth of information to spiders, which sense the frequencies using leg organs called slit sensillae.
“Most spiders have poor eyesight and rely almost exclusively on the vibration of the silk in their web for sensory information,” said study author Beth Mortimer, a biologist at the Oxford Silk Group at Oxford University. “The sound of silk can tell them what type of meal is entangled in their net and about the intentions and quality of a prospective mate. By plucking the silk like a guitar string and listening to the ‘echoes’ the spider can also assess the condition of its web.”
Spiders exploit this by “tuning” their silk to pick up sensory information. To examine the sonic qualities of silk, the scientists used ultra-high-speed cameras to capture threads as they reacted to the impact of bullets, and specialized lasers to pick up the silk’s smallest vibrations.
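As a rough, textbook-level illustration of how such “tuning” could work (an idealization, not a model taken from the Advanced Materials paper), the fundamental transverse frequency of a taut fibre of length \(L\), tension \(T\) and mass per unit length \(\mu\) is
\[ f_1 = \frac{1}{2L}\sqrt{\frac{T}{\mu}}, \]
so tightening a thread or spinning a thinner, lighter one raises the frequencies the web carries.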
“The fact that spiders can receive these nanometer vibrations with organs on each of their legs, called slit sensillae, really exemplifies the impact of our research about silk properties found in our study,” said study author Shira Gordon, a researcher at the University of Strathclyde.
“These findings further demonstrate the outstanding properties of many spider silks that are able to combine exceptional toughness with the ability to transfer delicate information,” added study author Fritz Vollrath, an ecology professor in the Oxford Silk Group. “These are traits that would be very useful in light-weight engineering and might lead to novel, built-in ‘intelligent’ sensors and actuators.”
“Spider silks are well known for their impressive mechanical properties, but the vibrational properties have been relatively overlooked and now we find that they are also an awesome communication tool,” said Chris Holland, a research fellow at the University of Sheffield. “Yet again spiders continue to impress us in more ways than we can imagine.”
“It may even be that spiders set out to make a web that ‘sounds right’ as its sonic properties are intimately related to factors such as strength and flexibility,” Mortimer said.
Spider silk has inspired researchers for years and a team from the University of Akron revealed in May that it had developed a more efficient and stronger commercial and biomedical adhesive inspired by the material.
According to a report in the Journal of Polymer Physics, the researchers used a process called electrospinning to draw very fine fibers from liquid polyurethane using electrical charges. This allowed them to imitate the efficient staple-pin design by pinning down a nylon thread with the electrospun fibers.
“This adhesive architecture holds promise for potential applications in the area of adhesion science, particularly in the field of biomedicine where the cost of the materials is a significant constraint,” the study authors wrote.

Cholera In South Sudan, Ebola Threat In Sierra Leone, MERS Invades Algeria

Lawrence LeBlond for redOrbit.com – Your Universe Online

CHOLERA

The Ministry of Health of South Sudan declared a cholera outbreak in Juba on May 15, 2014 after four cases were laboratory confirmed by tests conducted at the African Medical Research Foundation in Nairobi, Kenya. The first identified case had an onset of illness on April 23.

As of May 25, a total of 586 cholera cases had been reported, including 22 deaths – 13 hospital and 9 community deaths. The majority of hospital deaths were those that occurred upon arrival. Cases had been reported from 15 sub-counties (payams) within Juba, with the most affected payam being Muniki, reporting 25 percent of all cases.

As of June 2, the total case count was at 1,106, noted UN Humanitarian Coordinator Toby Lanzer, as reported by Radio Tamazuj. He said that new cases have been confirmed in Kajo Keji in Central Equatoria and Kaka in Upper Nile State. Medical reports suggest the outbreak is being contained in some areas, while in others it is spreading unchecked.

The total number of deaths from cholera now stands at 27, as of May 31, according to the Ministry of Health. As well, the MOH reported that 896 patients were discharged after successful treatment.

In response to the cholera outbreak, MOH officials have developed a cholera response plan and established a Cholera Response Task Force which coordinates both health and Water, Sanitation and Hygiene (WASH) activities. The MOH also plans to establish a Cholera Command and Control Center (C4) in Juba. C4 will strengthen the coordination efforts of the outbreak response and support the emergency response task forces in all 10 states.

The World Health Organization and its partners are supporting the MOH in coordinating the cholera response. They are also working to conduct rapid assessments, alert and outbreak investigations and confirmation; establish Cholera Treatment Centres and infection prevention and control measures; carry out active surveillance; and supervise safe burial of the deceased.

The MOH, with the support of WHO, UNICEF, MSF and Medair, conducted oral cholera vaccination campaigns in February 2014, achieving more than 80 percent vaccine coverage among the 33,000 internally displaced persons (IDPs) in the Tomping and Juba camps, in an effort to prevent a possible cholera outbreak among those IDPs.

Despite efforts to corral the outbreak, the sharply rising number of cases in Juba paints a worrisome picture for health officials.

“There is a risk of the outbreak spreading to other surrounding counties and villages if community interventions are not rigorously conducted,” WHO said in a statement, as cited by Radio Tamazuj. “Plans and budgets for community level interventions have been developed, however their implementation is challenging due to financial constraints.”

Health officials warn that most cholera cases are being contracted by drinking from unsafe water sources, eating food from roadside markets, or practicing poor hygiene. The disease can also be contracted through contact with the body of someone who died from it.

EBOLA

According to WHO, new cases and deaths attributed to the Ebola virus disease (EVD) outbreak in Guinea continue to be reported from newly affected districts (Telimele and Boffa) as well as from already-affected districts (Conakry and Macenta) that had previously gone more than 42 days without reports. Since the last update on May 28, 10 new cases and seven new deaths have been reported.

As of May 28, a total of 291 clinical cases of EVD, including 193 deaths, had been reported. The classification of these cases is as follows: 172 confirmed cases and 108 deaths; 71 probable cases and 62 deaths; and 48 suspected cases and 23 deaths. Gueckedou has seen by far the most cases with 179, including 133 deaths, followed by Conakry, Macenta and others.

As of May 29, one suspected case was reported in Liberia. The case, reported from the Foya district, resulted in death. The case is currently under investigation by Liberia and Sierra Leone officials.

In Sierra Leone, a total of 34 new cases (seven confirmed, three probable, and 24 suspected) were reported as of May 29, 2014. One suspected death has also been reported from the five affected districts. The cumulative number of clinical cases of EVD in Sierra Leone is now at 50, including six deaths.

The rise in new EVD cases in Sierra Leone has concerned staff from British-owned and operated London Mining, prompting a number of “non-essential” employees at its Marampa mine to pack up and leave the country. The firm has restricted some travel to the area but said production at the mine was unaffected.

The company said it was working with local and international agencies to monitor the situation.

The 50 clinical cases in Sierra Leone are troubling, given that the disease is highly contagious and incurable, and that neighboring Guinea has seen more than 100 deaths since the outbreak began.

London Mining said essential staff would continue to travel in and out of the country and it would carefully continue to monitor the health of all its employees. So far, the company reported that eight employees have already departed the country.

“Following consultation with the relevant authorities, [London Mining] has imposed restrictions on travel in the region and continues to work with employees to promote awareness of the disease, including the provision of information on how it is transmitted and the signs and symptoms,” the mining firm said in a statement to the BBC. “A number of non-essential personnel have left the country due to voluntary restrictions on non-essential travel.”

“London Mining has also established proactive health monitoring of the workforce, including working with trained personnel to screen all staff and visitors entering our sites, and has ensured the Marampa facility has the appropriate medication and equipment to manage any potential occurrences of the disease,” it added. “Production at Marampa is not currently affected.”

Ebola is spread from person to person by contact with infected blood, body fluids or organs or through contact with contaminated environments. While Sierra Leone is doing what it can to limit the spread of the disease, families of several infected patients went to a rural clinic and forcibly removed their relatives, stating they wanted traditional African care for their families.

It is this contact and removal of infected patients that may have resulted in further spread of the disease, according to BBC international development correspondent Mark Doyle.

WHO and its partners have deployed experts to both Sierra Leone and Guinea to support the outbreak response. Tasks include coordination, disease outbreak investigation, risk assessment, establishment of treatment facilities, case management, infection prevention and control in the newly affected districts, and social mobilization targeting the resistant communities.

In Sierra Leone, WHO and partners have established a treatment center in Koindu and are coordinating lab testing of samples from Kailahun district, Sierra Leone to be tested in Gueckedou, Guinea.

MERS

Middle East respiratory syndrome coronavirus (MERS-CoV) continues its wrath in Saudi Arabia, with six new cases reported between May 31 and June 2, 2014. The new cases occurred in Jeddah, Al Jawf, Mecca, and Qunfudhah. Five of the patients were men between 31 and 57 years of age. The 31-year-old patient had contact with another MERS patient before getting sick himself, according to the Saudi Arabian Ministry of Health.

In one of the new cases, a 42-year-old man from Al Jawf died from the illness. The MOH also reported two deaths in previously reported cases, one in a 55-year-old man from Riyadh and another in a 45-year-old woman from Jeddah. In addition, a case reported from Jordan has resulted in the death of the patient – a 69-year-old man with diabetes who died on May 28, five days after hospitalization. His death raised Jordan’s death toll from MERS to six since the outbreak began in April 2012.

For the most part, MERS has been contained to the Middle East. However, some cases have been confirmed outside the region, with just a few cases each in countries such as France, Italy, Germany, and Tunisia, all confirmed in 2013.

In 2014, the first American case of MERS infection was reported one month ago. Since then, two other people have reportedly been infected with the SARS-like disease; only one of those cases was confirmed, and that patient made a full recovery after showing only limited symptoms.

Shortly thereafter the disease was discovered in the Netherlands, with at least two patients confirmed as having the deadly disease.

Now, the disease has shown up for the first time in another country in Africa. The first African case was confirmed in Tunisia last fall.

WHO reported on May 31 that two men from Algeria who had gone on an Umrah pilgrimage to Saudi Arabia have contracted MERS. WHO’s Regional Office for Africa said the two Algerian cases involved a 66-year-old man and a 59-year-old man who were both in Saudi Arabia but had not traveled together.

The 66-year-old sought care for fever and dyspnea after arriving in Algeria from Mecca on May 23. The younger man fell ill with a flu-like illness and diarrhea while in Saudi Arabia on May 23, and upon his return to Algeria he was hospitalized on May 29.

Both men were diagnosed with MERS-CoV on May 30, according to the WHO statement. No additional details were given about the men’s conditions, their possible exposures while in Saudi Arabia, or their contacts and whether contact monitoring was being conducted.

Algeria is the 21st country to report MERS. Iran also reported two cases last week in women who had also gone on a pilgrimage to Saudi Arabia, according to CIDRAP.

WHO RECOMMENDATIONS

The World Health Organization has maintained a presence in each of the countries affected by the three outbreaks. In response to all of them – cholera, Ebola and MERS-CoV – WHO does not currently recommend any trade or travel restrictions for any country or region affected by one or more of the outbreaks.

Exposure To Blue Light Before And During Your Evening Meal Linked To Increased Hunger

Alan McStravick for redOrbit.com – Your Universe Online

It cannot be denied that in the last decade we have been spending more time looking at our phones, tablets and computers than ever before. Back in 2010, Harvard Medical School began sounding the alarm about the potential hazards of blue light, the hue of light emitted by our electronic devices and energy-efficient light bulbs, and how it may be detrimental to our health, especially as the sun sets and we move from day into night.

Blue light, the publication contends, has the ability to affect our circadian rhythm causing poor and diminished sleep. Even worse, they say, blue light could be a significant contributor to ill health in the forms of heart disease, diabetes, cancer and obesity.

Cutting out all blue light is not necessarily recommended, however, as during daylight hours it aids in boosting our attention, reaction times and mood. At night, however, exposure to blue wavelengths can suppress the secretion of melatonin, a hormone that influences our circadian rhythm. As of 2010, there was at least some evidence that diminished levels of melatonin could explain the reported association with cancer diagnoses.

A separate Harvard study also examined the links between blue wavelengths of light and the onset of diabetes and obesity. A cohort of 10 participants was placed on a gradually changing schedule which shifted the timing of their circadian rhythms. The researchers noted an increase in blood sugar levels to such a degree that it placed the participants into a pre-diabetic state. Additionally, the participants’ levels of leptin, a hormone associated with the feeling of fullness, decreased.

All of the Harvard research preceded a new study that builds on the idea that hunger and metabolism can be manipulated by exposure to blue-enriched light. Researchers at Northwestern University in Chicago, Illinois have determined that with just 15 minutes of exposure to blue light prior to mealtime there is an increase in the sensation of hunger that remains for a full two hours after the completion of the meal. They also detected the same sleeplessness and increased insulin resistance noted in previous studies.

“It was very interesting to observe that a single three-hour exposure to blue-enriched light in the evening acutely impacted hunger and glucose metabolism,” said study co-author Ivy Cheung, a doctoral candidate in the Interdepartmental Neuroscience program at the university. “These results are important because they suggest that manipulating environmental light exposure for humans may represent a novel approach of influencing food intake patterns and metabolism.”

Like the Harvard study noted above, the Northwestern University study consisted of a cohort of 10 healthy adult participants with regular sleep and eating schedules. Over the course of the study they received identical carbohydrate-rich isocaloric meals. On days one and two of the four-day study, they were kept in dim-light conditions, meaning exposure to less than 20 lux during 16 hours awake and less than 3 lux during eight hours of sleep. Beginning on day three, the light exposure was increased to 260 lux of blue-enriched light for three hours, and the effects of this increased exposure were compared to the conditions experienced on day two.

According to Cheung, further research is needed to determine the mechanisms of action involved in the relationship between light exposure, hunger and metabolism.

With the growing prevalence of energy-efficient lighting, including compact fluorescent light bulbs, it may seem that we are technologically advancing ourselves into ill health. Experts recommend a few strategies for combating prolonged exposure to blue wavelengths:

– Use dim red lights in the evening. Red light has the least power to shift circadian rhythm and suppress melatonin.

– Avoid looking at bright screens beginning two to three hours before bed.

– Graveyard shifters are urged to invest in blue-blocking glasses (approx. $80)

– Seek prolonged exposure to bright light during the day to help facilitate sleep in the evening.

– Seek out software products for your devices that automatically shift your screen’s light toward less blue as the evening progresses. F.lux is one provider of blue-blocking software.

Results of the study, having already been published online in the journal SLEEP, will be formally presented at this year’s SLEEP 2014 in Minneapolis, Minnesota. This year’s conference marks the 28th annual meeting of the Associated Professional Sleep Societies, LLC.

Rare Chemical Phenomenon To Harvest Solar Energy Demonstrated By Scientists

National University of Singapore

Landmark study opens doors to further studies into chemical modification of materials for alternative energy conversion

A team of international scientists led by Professor Jagadese J Vittal of the Department of Chemistry at the National University of Singapore’s (NUS) Faculty of Science has successfully unraveled the chemical reaction responsible for propelling microscopic crystals to leap distances up to hundreds of times their own size when they are exposed to ultraviolet (UV) light.

This popping effect, akin to the bursting of popcorn kernels at high temperatures, demonstrates the conversion of light into mechanical motion. It is the first instance of a “photosalient effect” driven by a photochemical reaction in solids to be reported. The rare phenomenon provides a new way to transfer light energy into mechanical motion, and potentially offers a fresh approach to harness solar energy to power light-driven actuators and mechanical devices.

These novel findings were published as the cover story in the English version of German scientific journal Angewandte Chemie International Edition on 2 June 2014.

Popcorn-like explosion of tiny crystals demonstrated

The NUS team has been actively looking for ways to control the reactivity of solids. While studying metal complex polymerization in the solid state, Mr Raghavender Medishetty, a PhD candidate, and Ms Bai Zhaozhi, a third-year undergraduate student, of the Department of Chemistry at the NUS Faculty of Science, found that very tiny crystals leap violently when exposed to UV light. Interestingly, even when the crystals are irradiated with weak UV light, the single crystals burst violently and travel distances up to hundreds of times their own size. Such a distance is equivalent to a human jumping a few hundred meters.

To understand the reactions behind the self-actuation of the crystals, the NUS team worked with a research team from New York University Abu Dhabi, led by Associate Professor Panče Naumov, to capture the rapid motion of the crystals with an optical microscope coupled to a high-speed camera. They also collaborated with a research team from the Max Planck Institute for Solid State Research in Germany, led by Professor Robert E. Dinnebier, to model the kinetics using time-resolved powder X-ray diffraction methods.

Through the use of a variety of analytical methods, the researchers discovered that the popping and disintegration of these single crystals was caused by the strain generated during the photochemical reaction in the crystal, which leads to the formation of metal coordination polymers. A sudden expansion of volume during this reaction releases the stress in the form of ballistic events. The chemical reaction is very similar to the popping of corn kernels on a hot plate, which results from rapid expansion of the inner kernel relative to the outer shell.

Elaborating on the findings, Prof Vittal said, “Photoactuated movements are induced by the application of light to certain type of crystals, but they are observed to be less efficient than the biomechanical motions of plant and animal tissues. In our work, we observed that the conversion of energy in the crystals may be able to mimic the motility of biological systems and provide a new way to transfer light energy into mechanical motion.”

He added, “Our work validates that the so called “bad” UV light from sources such as the sun can be utilised to convert chemical reactions to drive mechanical motions with practical uses. Knowledge and application of such behaviour is very important towards addressing the global energy crisis.”

This study opens doors for further studies into materials for alternative energy conversion.

Further research

The NUS research team is examining a series of new compounds to better understand the mechanism and enhance the efficiency of the photosalient effect. They are also conducting systematic studies to look into the effects of chemical modification on the photosalient effect.

The team hopes to eventually develop new materials that could convert solar energy effectively into mechanical energy. In addition, the team hopes to leverage the principle of the photosalient effect to create a new source of reversible chemical energy by controlling the shape and size of the crystals used for energy conversion.

How to use fibromyalgia scales

With a better understanding of the syndrome and the willingness to combine treatments to get the right regimen, fibromyalgia treatment has never been better. However, one of the biggest challenges that fibromyalgia poses is the difficulty in diagnosing it in the first place.

Doctors need to be able to identify the syndrome before they can effectively treat it. Because of the condition’s long history of generating skepticism and the tendency to think it’s “all in your head,” it is especially important to have reliable methods for diagnosis.

However, the syndrome shares symptoms with various other conditions. Widespread pain, the major symptom of fibromyalgia, is also a symptom of many other illnesses. Since much of fibromyalgia is still a mystery, there is no simple test for the condition; we would not know what to test for, so we are stuck judging based on symptoms. And because fibromyalgia symptoms are so often symptoms of a wide variety of other conditions, doctors need to rule those other conditions out in order to make a confident diagnosis of fibromyalgia.

The most pertinent symptom of fibromyalgia is that the syndrome hurts all over. Unfortunately, one person’s “hurts all over” doesn’t necessarily mean the same thing as another person’s. Diagnosis has generally focused on the “tender points” that come with fibromyalgia. There are 18 tender points associated with the syndrome. If a patient has experienced widespread pain for three months and has pain in at least 11 of those 18 spots, then they may be diagnosed with the syndrome.

However, the counting of tender points has always been a bit of an obstacle. A person might have 11 points counted in one examination, but fewer might be counted on further examinations on subsequent days. This means that practitioners and patients are no closer than they were before to getting a diagnosis.

The tender point criteria were developed for research studies, and they are still used for selecting candidates for research. For clinical diagnostic purposes, however, the conditions can be too strict. There are at least some people who may be suffering from fibromyalgia but don’t meet all the criteria. Besides, most general practitioners have never been comfortable doing a tender point count. A more straightforward alternative has been needed for a while.

How to use fibromyalgia scales

In a newer approach to diagnosis, the focus on tender points has been abandoned in favour of an inventory of symptoms, and general pain. This approach uses a combination of two scores from different scales to make a diagnosis. It uses the Widespread Pain Index and the Symptom Severity Scale.

Widespread Pain Index

The pain index is a 19-point checklist of different areas of the body where a patient has experienced pain during the previous week. The WPI is scored simply as the total number of areas indicated by the patient, for a maximum possible score of 19 (a simple tally, illustrated in the example after the list below). To get an accurate score, it is recommended that the patient consider how he or she felt in the previous week, how they felt on their current treatments and medications, and what other conditions they have that might cause pain. The areas considered for the index are:

  • Shoulder girdle (between neck and shoulder), left and right
  • Upper arm, left and right
  • Lower arm, left and right
  • Hips, left and right
  • Upper leg, left and right
  • Lower leg, left and right
  • Jaw, left and right
  • Chest
  • Abdomen
  • Neck
  • Upper and Lower Back
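
To make the tally concrete, here is a minimal, purely illustrative Python sketch of how a WPI score could be computed from such a checklist; the area labels and the example input are assumptions made for this sketch, not part of any official instrument.

```python
# Illustrative tally of a Widespread Pain Index (WPI) score.
# The WPI is simply the number of body areas (out of 19) in which the
# patient reports having had pain during the previous week.

WPI_AREAS = [
    "shoulder girdle, left", "shoulder girdle, right",
    "upper arm, left", "upper arm, right",
    "lower arm, left", "lower arm, right",
    "hip, left", "hip, right",
    "upper leg, left", "upper leg, right",
    "lower leg, left", "lower leg, right",
    "jaw, left", "jaw, right",
    "chest", "abdomen", "neck", "upper back", "lower back",
]  # 19 areas in total

def wpi_score(painful_areas):
    """Return the WPI: the count of distinct checklist areas reported as painful (0-19)."""
    return len(set(painful_areas) & set(WPI_AREAS))

# Example: pain reported in five areas over the past week gives a WPI of 5.
print(wpi_score(["neck", "upper back", "hip, left", "hip, right", "chest"]))
```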

Symptom Severity Scale

The scale measures the patient’s experience of three core symptom areas – sleep that fails to refresh, chronic fatigue and problems with cognitive function – along with a fourth area covering other symptoms. The four areas scored are fatigue, waking unrefreshed, cognitive symptoms and, finally, other symptoms. Each area is assigned a ranking from 0 to 3, zero being “no problem” and three being “severe,” for a maximum possible total of twelve. Like the WPI, this takes into account how you’ve felt over the past week. The other symptoms that it considers are:

  • Muscle pain
  • Irritable Bowel syndrome
  • Fatigue or tiredness
  • Thinking or remembering problems
  • Muscle Weakness
  • Headache
  • Abdominal pain or cramps
  • Numbness or tingling
  • Dizziness
  • Insomnia
  • Depression
  • Constipation
  • Pain in upper abdomen
  • Nausea
  • Nervousness
  • Chest pain
  • Blurred Vision
  • Diarrhea
  • Dry mouth
  • Itching
  • Wheezing
  • Raynaud’s Syndrome
  • Hives/Welts
  • Ringing in Ears
  • Vomiting
  • Heartburn
  • Oral ulcers
  • Loss or changes in taste
  • Seizures
  • Dry eyes
  • Shortness of breath
  • Loss of Appetite
  • Rash
  • Sun sensitivity
  • Hearing difficulties
  • Easy bruising
  • Hair loss
  • Frequent urination
  • Painful urination
  • Bladder spasms

The new approach has delivered good results, and it is thought that it will identify more than 88 percent of cases. Nonetheless, there has been some reluctance from both practitioners and organizations to eliminate the tender point criteria. Tender points have been a defining feature of fibromyalgia, and removing them from the diagnostic criteria has naturally caused concern.

However, this kind of controversy often indicates that things are moving in the right direction, and in any case a healthy argument is good for clarifying matters. However the syndrome ends up being diagnosed, if you have any suspicion that you have fibromyalgia, it is worthwhile to begin a pain diary to keep track of the details of your pain. Record the location, severity and duration of your pain, along with any surrounding circumstances. All of these details will make it easier for your doctor to fully understand your symptoms.

Both the index and the scale are taken into consideration to reach a diagnosis of fibromyalgia. A patient can be diagnosed with the syndrome when he or she meets one of the following two combinations of scores and symptoms have been present for more than three months (a simple decision rule, sketched in the example after this list):

  • WPI score greater than 7 and SS score greater than 5
  • WPI score between 3 and 6 and SS score greater than 9
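
Purely as an illustration of the arithmetic (not a clinical tool), the decision rule above can be written as a small Python function. The thresholds follow the wording of the two bullet points, and the function and parameter names are invented for this sketch.

```python
def meets_fibromyalgia_criteria(wpi, ss, months_with_symptoms):
    """Apply the two WPI/SS combinations listed above, plus the requirement
    that symptoms have been present for more than three months.

    wpi -- Widespread Pain Index score, 0-19
    ss  -- Symptom Severity score, 0-12
    """
    duration_ok = months_with_symptoms > 3
    combination_1 = wpi > 7 and ss > 5
    combination_2 = 3 <= wpi <= 6 and ss > 9
    return duration_ok and (combination_1 or combination_2)

# Example: a WPI of 9 with an SS of 6 qualifies; a WPI of 5 with an SS of 6 does not.
print(meets_fibromyalgia_criteria(9, 6, months_with_symptoms=6))   # True
print(meets_fibromyalgia_criteria(5, 6, months_with_symptoms=6))   # False
```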

Fibromyalgia Impact Questionnaire (Revised)

Unlike the first two, this is not a diagnostic tool, but rather a tool used to gauge how severe your fibromyalgia is. It does this by assigning scores that reflect the difficulty that syndrome sufferers have conducting certain tasks. The questions are divided into three “domains”: the functional domain, the overall domain and the symptom domain. The functional domain focuses on nine everyday tasks and the patient’s ability to perform them. The overall domain asks two questions focused on whether the syndrome interfered with the patient’s goals for the week. Finally, the symptom domain asks 10 questions about the various symptoms the patient suffers.

Each question is assigned a value from zero to ten, and then the scores are added together for each domain, and then “normalized” so that the scores can be added together to get one score for the questionnaire. The functional domain is normalized by dividing by three, the symptom domain is divided by two, and the overall domain score is left alone. This makes the final score add up to a maximum of one hundred.

As an example, if your scores in the functional domain added up to 60, your symptom domain added up to 80 and your overall score was 15, then your total score would be 75: 60 divided by three (20), plus 80 divided by two (40), plus 15. The final score is out of one hundred which, despite the arithmetic involved in getting there, is a very straightforward number to end up with.
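
Here is a minimal sketch of that normalization in Python, reproducing the worked example above; the function and argument names are illustrative only.

```python
def fiqr_total(functional_sum, symptom_sum, overall_sum):
    """Combine the three FIQR domain sums into a single 0-100 score:
    the functional domain (max 90) is divided by 3, the symptom domain
    (max 100) is divided by 2, and the overall domain (max 20) is left as-is."""
    return functional_sum / 3 + symptom_sum / 2 + overall_sum

# The worked example from the text: 60/3 + 80/2 + 15 = 20 + 40 + 15 = 75.
print(fiqr_total(60, 80, 15))  # 75.0
```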

The original questionnaire was developed in the late ‘80s and published in 1991. It has since become commonly used in the evaluation of fibromyalgia. It was revised in 2009 to update it with further knowledge both of society in general and the disease specifically. The original questionnaire assumed a white, middle class woman as the subject, which the current revision has corrected. It was also even more complex to score properly, so, believe it or not, the current system of scoring is actually a simplified version of the original.

As you can see, all of these scales depend on your own accurate description of your experiences to be effective. Both the diagnostic tools and the severity questionnaire require you to bring good information to the health care practitioners who use them.

Further Reading

“Changes in the FIQR that address issues in the original FIQ.” The Fibromyalgia Information Foundation. http://fiqrinfo.ipage.com/FIQR%20changes.htm.

“A New Way of Diagnosing Fibromyalgia: Pain Index Plus Symptoms, Not Tender Points, Equals Fibromyalgia, New Study Says.” By Denise Mann. WebMD. http://www.webmd.com/fibromyalgia/news/20100526/a-new-way-of-diagnosing-fibromyalgia.

“New Clinical Fibromyalgia Diagnostic Criteria.” http://neuro.memorialhermann.org/uploadedFiles/_Library_Files/MNII/NewFibroCriteriaSurvey.pdf.

“Fibromyalgia Diagnosis.” by Healthline Editorial Team. Healthline.com http://www.healthline.com/health/fibromyalgia-diagnosis.

“Fibromyalgia symptoms or not? Understand the fibromyalgia diagnosis process.” By Mayo Clinic Staff. Mayoclinic.org. http://www.mayoclinic.org/diseases-conditions/fibromyalgia/in-depth/fibromyalgia-symptoms/art-20045401.

Knowing More Than One Language Helps Keep Aging Brain Healthy

Gerard LeBlond for www.redorbit.com – Your Universe Online
Researchers from the University of Edinburgh Centre for Cognitive Ageing and Cognitive Epidemiology have found that people who speak two or more languages have long-lasting benefits to their brain health.
As part of the Lifelong Health and Wellbeing Initiative, the study found that knowing more than one language helps brain health by slowing down age-related cognitive decline, even when the second language is acquired in adulthood. Bilingualism is also believed to delay dementia in older adults.
The findings of the study are published in the Annals of Neurology.
Previous research on the impact of learning more than one language has proven difficult to interpret. The key question is whether individuals who learn a new language improve their cognitive function, or whether those who start with a better cognitive baseline are simply more likely to become bilingual.
“Our study is the first to examine whether learning a second language impacts cognitive performance later in life while controlling for childhood intelligence,” said lead author Dr. Thomas Bak from the Centre for Cognitive Aging and Cognitive Epidemiology at the University of Edinburgh. “These findings are of considerable relevance,” Bak added.
The researchers used previously acquired data from the Lothian Birth Cohort 1936, comprising 835 native English speakers living in Edinburgh, Scotland. The participants were given an intelligence test at the age of 11 in 1947 and were retested between 2008 and 2010, while they were in their early 70s. Of them, 262 had learned one or more languages in addition to English – 195 learned a second language before they were 18 years of age; the other 65 participants learned the second language after they were 18. No negative effects were found in any bilingual group.
The results indicated that the individuals who were bilingual had significantly better cognitive function, especially in general intelligence and reading. It was also found that it didn’t matter when the second language was acquired – even those who learned their second language later in life showed the same cognitive benefit.
“The Lothian Birth Cohort offers a unique opportunity to study the interaction between bilingualism and cognitive aging, taking into account the cognitive abilities predating the acquisition of a second language. These findings are of considerable practical relevance. Millions of people around the world acquire their second language later in life. Our study shows that bilingualism, even when acquired in adulthood, may benefit the aging brain,” Bak concluded.
“The epidemiological study by Dr. Bak and colleagues provides an important first step in understanding the impact of learning a second language and the aging brain. This research paves the way for future causal studies of bilingualism and cognitive decline prevention,” Dr. Alvaro Pascual-Leone, an Associate Editor for Annals of Neurology and Professor of Medicine at Harvard Medical School in Boston, Massachusetts said after reviewing the study.

How Psychological Health Ties Into Fibromyalgia

What is Fibromyalgia?

Fibromyalgia is a disorder of the central nervous system that causes a number of problems that are usually interlinked. The most common signs of fibromyalgia are consistent, chronic pain along with mild to severe depression.

Other symptoms include disturbances of sleep patterns and mood, along with the loss or deterioration of cognitive function. Patients who suffer from fibromyalgia tend to have problems with short-term and long-term memory. They also have problems concentrating on tasks and performing complex duties. One of the most common problems seen in patients suffering from this disease is that they’re unable to multitask and prioritize their work correctly.

The most problematic issue for patients who suffer from this disease is the chronic pain, which can cause a number of secondary problems. The first of these is depression. Pain-related depression affects people even when they’re not in active pain, because patients tend to anticipate the pain that they know will be coming shortly. This can cause a lot of problems in daily life and can seriously affect the way a person functions both at work and in social relationships. The disease is a very complicated mix of psychological and physiological phenomena, and even with the best minds in medical science working on it continuously, no clear-cut primary cause has been identified.

What Causes Fibromyalgia?

Fibromyalgia can be seen in patients with very different characteristics, which means it is difficult to point out the exact cause of the disease in every single patient. Still, a number of similarities can be seen among patients who suffer from the disease all over the world. Few things are definitively known about the causes of fibromyalgia. Among the things that are clearly understood is that the disease has a hereditary, genetic component, which places people who have relatives with the disease at greater risk than those who do not.

How Psychological Health Ties Into Fibromyalgia

While the cause is unknown, the effects of this disease have been studied in great detail. The most common theory is that the pain associated with this disease is caused by abnormal functioning of the central nervous system in response to pain. One way to study this effect is to use functional magnetic resonance imaging to compare the responses of patients and healthy people when they’re exposed to pain under controlled conditions. The most significant difference between healthy people and patients afflicted with fibromyalgia can be seen in the neurons that are close to the spinal cord and are responsible for handling pain sensations. In healthy people, the number of neurons that fire in response to pain is significantly lower than in people suffering from fibromyalgia. In patients, the response to pain is greatly amplified, and the impulses that serve to control and reduce the pain are not very strong.

Where does psychology come in?

It has been consistently observed that people with psychological disorders make up a greater fraction of fibromyalgia patients than they do of the general population. This is because fibromyalgia is also thought to be triggered by psychological disorders such as depression and post-traumatic stress disorder. In addition, people who consistently suffer from chronic pain, and who are thus also chronically depressed, have a higher risk of developing the disease. This is why maintaining proper psychological health is of crucial importance in preventing conditions like fibromyalgia. It has in fact been hypothesized that depression might act as a trigger for the disease.

One sign that psychological problems can contribute to fibromyalgia is that the symptoms of the disease respond well to psychological treatment. Cognitive behavioral therapy and other psychological therapies have been demonstrated to produce a modest decrease in the pain and other symptoms associated with fibromyalgia. Having a positive and healthy outlook on life is one of the most important things anyone can do to maintain good psychological health; such an outlook helps protect people from depression and other psychological problems. The elimination or proper management of psychological issues can go a long way not only toward managing the symptoms and pain associated with fibromyalgia but also toward controlling the triggers of the disease. People who are psychologically sound have a much lower probability of developing fibromyalgia than people who have been exposed to conditions and situations that can cause psychological problems like stress and depression. This is also why the disease affects a large number of members of the armed forces.

Eating White Bread Instead Of Whole Grain May Increase Obesity Risk

redOrbit Staff & Wire Reports – Your Universe Online
Consuming white bread instead of whole-grain bread could increase a person’s chances of becoming overweight or obese, according to new research presented last week as part of the 21st annual European Congress on Obesity (ECO 2014) in Sofia, Bulgaria.
The study’s author, nutritional expert and University of Navarra professor Miguel Martinez-Gonzalez, studied the eating habits of over 9,000 Spanish university graduates to gauge the impact of bread type in a culture where it is a dietary staple. He had each participant complete a 136-item food questionnaire, and then continued monitoring them for a five-year period.
According to UPI reporter Alex Cukan, he found that those who consumed at least three slices of white bread per day were 40 percent more likely to pack on extra weight than those who only ate one portion each week, and that mixing white and whole-grain bread did not increase obesity risk.
Martinez-Gonzalez told Rebecca Smith, Medical Editor for The Telegraph, that whole-grain bread consumption was not associated with obesity; no link was found between it and weight gain, because whole-grain products contain dietary fiber and complex carbohydrates that make people feel full for longer periods of time.
“Refined grains such as white bread start to taste sweet in the mouth almost as soon as you eat it. That is the starch being broken down into sugar. It is this feeling that leaves you wanting more,” he told Smith. “When white bread is a staple food, eating at one or two main meals a day then this is a lot of extra calories on a daily basis.”
In an interview with The Independent’s Charlie Cooper, Martinez-Gonzalez added that eating white bread “is equivalent to a high consumption of sugar,” and that the process is similar to what occurs with soft drinks, as “their sugars are rapidly transformed into fat an organism [sic].”
White bread loses fiber during the refining process, which is part of the reason medical experts have been debating the nutritional value of white bread for several years. Cooper noted that scientists have suggested a link between diets high in white bread and an elevated risk of cardiovascular disease.
When asked about the study by the Daily Mail, Professor Jason Halford, chairman of the UK Association for the Study of Obesity, said: “I would say white bread is a concern because it is generally lower in useful nutrients such as fiber and it can contain added sugar and sometimes contains higher levels of salt.”
“The message is clear, go for whole grains instead of white bread when eating your meals,” Martinez-Gonzalez told The Telegraph. He added that the general trend suggested that consuming whole-grain bread could help combat weight gain, since it not only contains additional fiber and is broken down and absorbed more slowly, but also contains more vitamins and minerals than white bread.

Children More Likely To Eat Veggies With Early, Frequent Exposure

redOrbit Staff & Wire Reports – Your Universe Online

Giving infants and toddlers vegetables on a regular basis before they reach the age of two will help them develop a taste for greens and other types of produce, experts from the University of Leeds report in a new PLOS ONE study.

According to Sarah Knapton of The Telegraph, youngsters are more willing to try different types of vegetables during the first 24 months of their lives, and become more resistant to trying new foods as they grow older. The study also found that children grew fonder of different foods if they were offered them more frequently.

Lead researcher Professor Marion Hetherington of the university’s Institute of Psychological Sciences and her colleagues also found that it was unnecessary to sweeten vegetables to mask the taste and/or attempt to sneak them into an unsuspecting child’s meal.

“For parents who wish to encourage healthy eating in their children, our research offers some valuable guidance,” she said in a statement. “If you want to encourage your children to eat vegetables, make sure you start early and often. Even if your child is fussy or does not like veggies, our study shows that 5-10 exposures will do the trick.”

As part of their experiment, the researchers gave artichoke puree to 332 European children between the ages of four and 38 months. Each youngster was provided with between five and 10 servings of at least 100g of the puree in one of three varieties: basic, sweetened with sugar, or mixed with vegetable oil for additional energy.

Twenty percent of the children cleaned their plates, while 40 percent of them grew to like artichoke, according to BBC News. Overall, the researchers found that the younger children consumed more artichoke than the older study participants, reportedly because children become reluctant to try new foods and are more likely to even reject ones that they previously enjoyed after the age of two.

Hetherington’s team observed that four distinct groups emerged during their investigation. The largest group of children (40 percent) were dubbed “learners” and increased their artichoke consumption over time. Another 21 percent ate at least three-fourths of what they were offered each time and became known as the “plate-clearers.”

Those youngsters consuming less than 10 grams of the puree through the fifth helping were categorized as “non-eaters” and represented 16 percent of the group, and the remaining 23 percent were classified as “others” because their pattern of intake tended to vary over time.

“The age of child is key when introducing novel foods,” the authors wrote. “Age predicted initial intake of the novel vegetable both pre-intervention and during the initial exposure of the intervention period, with younger children consuming more compared to older children. Plate-clearers were also younger and less fussy whilst older children ate less consistently, were more likely to be non-eaters and were fussier compared to younger children.”

“Successful repeated exposure is dependent upon tasting even small amounts of the target food. Thus, repeated exposure is more likely to be effective at a time when most tastes are easily accepted, namely the weaning period,” they added. “The first year of life presents a window of opportunity before the onset of food neophobia, which then peaks around 2-6 years, thus introducing novel foods such as different vegetables is optimal earlier rather than later.”

Narcissists Are Capable Of Feeling Empathy, They Just Have To Be Shown The Way

Alan McStravick for redOrbit.com – Your Universe Online

If you know a narcissist or are one yourself, then you know how the condition leads to an inflated sense of self-importance combined with a deep need for admiration. At its root, narcissism is actually a cleverly constructed mask worn by those with a low or fragile self-esteem.

Like many other personality disorders, narcissistic personality disorder causes the affected to behave in a manner that limits their ability to function in relationships, work and school. The disorder is characterized by a feeling of superiority toward most everyone the person encounters, paired with little to no regard for the feelings of other people, and treatment has typically centered on psychotherapy.

A new study, however, has demonstrated a way to work past a narcissist’s apparent inability to feel empathy for those around them. Researchers from the University of Surrey and the University of Southampton in England have discovered that by teaching a narcissist a few simple exercises, they can begin to understand and relate to people around them who are in distressing situations. This, the researchers claim, may be important for the prevention of the violent and anti-social behaviors that are often linked to narcissistic personality disorder.

The first study conducted aimed to provide analysis of how sympathetic a narcissist might be to a person of the same gender who is suffering as a result of the break-up of an intimate relationship. Each subject was asked to rate, on a scale of 1-8, how empathetic they felt. Unsurprisingly, those individuals who were deemed to possess a high narcissism level lacked empathy for the distressed person.

In the second of three studies conducted, the situation the subjects were asked to react to involved a female who had been victimized in a domestic violence situation by a male perpetrator. One half of the participants were specifically tasked with imagining how the woman felt. This study showed those with high narcissism were, in fact, fully capable of showing empathy when they were made to put themselves in the shoes of the victim.

While self-reporting could be skewed by the participants choosing to give what they believed to be the desired answer, the third study tested the participants’ heartbeats when presented with another scenario. It has been well established that an increase in heart rate is indicative of an empathic response. Participants were made to listen to an audio blog of someone who had just suffered a relationship break-up. Those participants deemed to suffer from high narcissism presented a significantly lower heart rate than non-narcissistic participants.

Interestingly, when the research team instructed the participants to imagine they were the character who had just experienced this stressful scenario, the narcissistic participants’ heart rates increased to the same levels as those participants who possessed low narcissistic tendencies.

“Our results clearly show that if we encourage narcissists to consider the situation from their teammate or friend’s point-of-view, they are likely to respond in a much more considerate and sympathetic way,” said lead author, Dr. Erica Hepper from the University of Surrey. “This is not only good for the people around them, but also for their own wellbeing in the long-run as empathy helps to form and maintain close relationships.”

“Our research provides a crucial breakthrough, as other studies suggest narcissism is increasing across cultures. If narcissists have the physical capacity to feel empathy, interventions could be designed to help them do so in their everyday lives, with benefits to themselves, their family, friends and colleagues and for society as a whole,” said Hepper.

The research is published in the journal Personality and Social Psychology Bulletin.

WHO Calls For Higher Taxes On Tobacco Products To Save Lives

Lawrence LeBlond for redOrbit.com – Your Universe Online

Ahead of the WHO Framework Convention on Tobacco Control (FCTC) conference, which is slated for later this year, the UN’s health arm is using World No Tobacco Day to call on countries to raise taxes on tobacco products.

The tobacco epidemic results in six million deaths annually across the globe, of which more than 600,000 are non-smokers who die from breathing in the toxic second-hand smoke of cigarette users. Unless we act now, the epidemic will kill more than eight million people each year by 2030. Furthermore, more than 80 percent of these preventable deaths will occur in low- and middle-income countries, according to WHO data.

WHO’s FCTC maintains that countries should implement tax and price policies on tobacco products as a way to reduce tobacco use. Previous research has shown that higher taxes are effective in reducing tobacco use among lower-income groups and in preventing people from picking up the habit to begin with.

Increasing tobacco prices by just 10 percent through higher taxes decreases tobacco consumption by about four percent in high-income countries and by up to eight percent in low- and middle-income countries, according to WHO. Furthermore, increasing excise taxes on tobacco would be the most cost-effective tobacco control measure.

The World Health Report 2010 determined that a 50 percent increase in tobacco excise taxes would generate more than $1.4 billion (US) in additional funds in 22 low-income countries. If those funds were allocated to the health sector, government health spending in these countries could increase by as much as 50 percent.

WHO is not only looking to raise taxes on tobacco products, but is also championing the change of view on electronic cigarettes.

In an earlier report, WHO director-general Margaret Chan told an FCTC committee hearing that electronic cigarettes should be considered the same as regular tobacco products, stating they are no less dangerous than traditional cigarettes. The goal of that proposal is also to raise taxes on e-cigarettes and snus.

However, a leaked document from that hearing made its way into public view, and a group of 53 leading scientists from around the world subsequently sent a signed letter to Chan, asking her and her agency not to classify electronic cigarettes the same as regular ones, arguing that these products are part of the solution to reducing death and disease rather than part of the problem.

Currently, WHO and FCTC do not differentiate between the risks of different nicotine products. By applying FCTC measures to e-cigarettes, they would fall under the same category as tobacco products and likely result in advertising bans, smoke-free legislation and higher taxes and health warnings, reducing their use as a safer alternative to cigarettes.

“If the WHO gets its way and extinguishes e-cigarettes, it will not only have passed up what is clearly one of the biggest public health innovations of the last three decades that could potentially save millions of lives, but it will have abrogated its own responsibility under its own charter to empower consumers to take control of their own health, something which they are already doing themselves in their millions,” Professor Gerry Stimson, Emeritus Professor at the Imperial College in London, a signatory to the letter, and organizer of the upcoming Global Forum on Nicotine, said in statement.

The group of signatories said in that letter to Chan that electronic cigarettes could help prevent much of the cancer, heart and lung disease and strokes caused by the toxins found in traditional cigarettes. These e-cigarettes “could be among the most significant health innovations of the 21st century, perhaps saving hundreds of millions of lives,” the group said, according to AFP.

A recent study of nearly 6,000 people who tried to quit smoking in England between 2009 and 2014 found they were about 60 percent more likely to succeed using e-cigarettes than using nicotine patches or gum, or going cold turkey.

However, another study published in JAMA in March said that e-cigarettes “did not significantly predict quitting one year later.”

In the US, legislation has already begun to work against electronic cigarettes, with health warning labels and minimum age limits being added to the next-gen nicotine sticks. Even New York has banned them from restaurants, bars, parks, beaches, and other public places.

Perhaps causing another setback for e-cigarettes is a recent finding that e-cigarette users provide a visual cue to traditional smokers, giving them the urge to smoke as well.

That study found that in a controlled setting those who observe e-cigarette use have an increased urge to light up their own traditional cigarettes, revealing that the elevated desire to smoke is just as intense when observing e-cigarette use as when observing combustible cigarette use.

“E-cigarette use has increased dramatically over the past few years, so observations and passive exposure will no doubt increase as well,” said Andrea King, PhD, professor of psychiatry and behavioral neuroscience at the University of Chicago, in a recent statement. “It’s important to note that there could be effects of being in the company of an e-cigarette user, particularly for young smokers. For example, it’s possible that seeing e-cigarette use may promote more smoking behavior and less quitting.”

It will be studies like this one that may give electronic cigarettes a bad reputation, and spoil efforts by scientists, doctors and other health officials trying to tout their use as a life saver rather than a killer.

Such explanations are unlikely to gain traction as WHO and FCTC continue to champion the demise of e-cigarettes. And with World No Tobacco Day around the corner – May 31 – the UN health body will likely not waver from its stance anytime soon.

“The ultimate goal of World No Tobacco Day is to contribute to protecting present and future generations not only from the devastating health consequences due to tobacco, but also from the social, environmental and economic scourges of tobacco use and exposure to tobacco smoke,” WHO wrote in a statement.

The specific goals of the 2014 campaign are for governments to increase taxes on tobacco in order to reduce its consumption, and for individuals and civil society organizations to encourage governments to do so.

“Every year, on 31 May, WHO and partners everywhere mark World No Tobacco Day, highlighting the health risks associated with tobacco use and advocating for effective policies to reduce tobacco consumption. Tobacco use is the single most preventable cause of death globally and is currently responsible for 10 percent of adult deaths worldwide,” the WHO statement concluded.

New 3D Model Of King Richard III’s Spine Shows Prominent Scoliosis

Lawrence LeBlond for redOrbit.com – Your Universe Online
It’s been 21 months since an unmarked grave under a car park in Leicester, England revealed the remains of the Last Plantagenet King, Richard III, and since then a wealth of evidence has come to light about the short-lived king and his untimely demise.
The most recent analysis of Richard’s remains, which had been hastily buried without shroud or coffin, reveals that his spine shows strong evidence of scoliosis. Richard III was popularized in Shakespearean literature as a hunchback, and now everyone can explore the true shape of one of history’s most famous spinal columns, as University of Leicester scientists, with the help of multimedia experts, have created a 3D model of Richard III’s spine.
The results of their work have been published on May 30 in The Lancet, offering a complete picture of the king’s scoliosis for the first time. The University of Cambridge, Loughborough University and the University Hospitals of Leicester in the UK were also part of this groundbreaking research.
Internet users can now click their mouse on the interactive 3D model of Richard’s spine, rotating it 360 degrees to get a true feeling of what the king’s spine really looked like. The visualization reveals how the king’s spine curved to the right, as well as some twisting, revealing a somewhat spiral shape.
The visualization is based on research led by University of Leicester School of Archaeology and Ancient History osteoarchaeologist Dr Jo Appleby.
Many historical references have told of the physical deformities of Richard III, who ruled England from 1483 to 1485. However, debate has raged for centuries over the extent to which these descriptions were true. Shakespeare referred to him as a hunchback and others called him crook-backed, but until now it was unknown whether these descriptions were based on his physical appearance or were fabricated as a way to damage his reputation.
The earliest examinations of Richard III’s remains did in fact reveal some curvature of the spine. The latest analysis reveals the deformity would have had a noticeable, yet small, effect on his appearance. Also, the researchers believe it would not have affected his ability to exercise.
Still, based on the findings, the team noted that Richard’s spine would have had a pronounced right-sided curve that was spiral in nature; his right shoulder would have been higher than his left; and his torso would have been relatively short compared to his arms and legs. But, because the spine was a “well-balanced curve,” Richard’s head and neck would have been straight and not tilted to one side. As well, he would likely not have had a limp as “his leg bones were normal and symmetric,” said Dr Appleby.
Based on this information, the team believes his condition would not have been immediately visible to those he met, especially if he was wearing well-designed clothes or armor. The team also determined Richard would have stood about 5’ 8” tall without scoliosis – an average height for men in medieval times. However, due to the curvature, he would have appeared several inches shorter than this.
The team of researchers constructed physical and computer-generated models of the spinal column of Richard III using CT scans at the Leicester Royal Infirmary, and using 3D prints of the bones created by Loughborough University from the CT image data. This allowed the team to analyze the remains to accurately determine the nature of Richard III’s condition and the extent to which it affected his appearance.
The results show that his scoliosis was unlikely an inherited trait, and that it probably began to appear sometime after his tenth birthday. Today, the condition would be called “adolescent onset idiopathic scoliosis,” and is one of the most common forms of scoliosis.
“The major finding we have made is being able to reconstruct the three-dimensional nature of the scoliosis and understand what it would have looked like,” said Dr Appleby. “Obviously, the skeleton was flattened out when it was in the ground. We had a good idea of the sideways aspect of the curve, but we didn’t know the precise nature of the spiral aspect of the condition.”
“The arthritis in the spine meant it could only be reconstructed in a specific way, meaning that we can get a very accurate idea of the shape of the curve. It’s really good to be able to produce this 3D reconstruction rather than a 2D picture, as you get a good sense of how the spine would have actually appeared,” Dr Appleby added.
“Examination of Richard III’s remains shows that he had a scoliosis, thus confirming that the Shakespearean description of a ‘bunch-backed toad’ is a complete fabrication – yet more proof that, while the plays are splendid dramas, they are also most certainly fiction not fact,” explained Dr Phil Stone, Chairman of the Richard III Society.
“History tells us that Richard III was a great warrior. Clearly, he was little inconvenienced by his spinal problem and accounts of his appearance, written when he was alive, tell that he was ‘of person and bodily shape comely enough’ and that he ‘was the most handsome man in the room after his brother, Edward IV,’” Dr Stone added.
REINTERMENT OF THE KING
In other news, the High Court has upheld the Ministry of Justice license that was granted to the University of Leicester, permitting them to reinter Richard III’s remains at Leicester Cathedral.
The University pledged in 2012 that, should King Richard’s remains be discovered, they would be properly reinterred in Leicester. The Ministry of Justice, agreeing with Leicester’s commitment, granted the University an exhumation license to proceed with their plans. When an unmarked grave was discovered in September 2012, and the remains were later confirmed as those of Richard III, the University announced it would move forward with reinterment in a raised tomb at Leicester Cathedral.
However, those plans were challenged by living relatives of Richard III, acting under the name of the Plantagenet Alliance, who disputed the University of Leicester’s license to reinter the remains at Leicester Cathedral. The Alliance maintained in March 2013 that Richard III should instead be buried in York, which he had called home in life.
A judicial review was opened last August to determine who had rights to the remains of Richard III. Last week, the High Court decided that the University of Leicester holds jurisdiction over the remains and granted approval to reinter the king in Leicester.

How Is Your Mood Affected By Your Mode Of Transportation? And Who Are The Happiest Travelers?

April Flowers for redOrbit.com – Your Universe Online

Transportation, whatever the mode, is part of our daily lives. We ride our bikes for exercise, take the train or bus to work, or drive our cars on vacation. Researchers at Clemson University wanted to know how our different modes of transportation affect our moods, and which makes us the happiest.

The study, published in the journal Transportation, examined how emotions like happiness, pain, stress, sadness and fatigue vary during travel and by travel mode.

The research team analyzed data collected by the US Bureau of Labor Statistics as part of the American Time Use Survey to determine the average mood reported by people using different modes of travel.
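
The well-being questions in the American Time Use Survey record momentary affect ratings (happiness, stress, fatigue and so on) for individual activity episodes, so an analysis along these lines amounts to averaging those ratings by travel mode. The following Python sketch is an illustration only, not the authors’ code, and the column names “mode” and “happiness” are placeholders.

import pandas as pd

# Hypothetical extract of time-use survey data: one row per travel episode,
# with the mode used and the traveler's reported happiness rating.
episodes = pd.DataFrame({
    "mode": ["bicycle", "bicycle", "car_driver", "car_passenger", "bus", "train"],
    "happiness": [5.0, 4.5, 3.8, 4.2, 3.0, 3.1],
})

# Average reported happiness for each travel mode, highest first.
mood_by_mode = episodes.groupby("mode")["happiness"].mean().sort_values(ascending=False)
print(mood_by_mode)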

“We found that people are in the best mood while they are bicycling compared to any other mode of transportation,” said Eric Morris, assistant professor in Clemson’s planning, development and preservation department, in a statement.

Cyclists, on the whole, tend to be a self-selected group who are very enthusiastic about their mode of transportation, according to Morris, which helps explain why the study found bicycle users to be the happiest.

“Bicyclists are generally younger and physically healthy, which are traits that happier people usually possess,” he said.

Car passengers are the next happiest, according to the findings, followed by car drivers. The most negative emotions were expressed by bus and train riders. Morris cautions that part of this negativity can be attributed to the fact that mass transit is disproportionately used for commuting to and from work.

The researchers say their findings have positive implications for cycling beyond the health and transportation benefits typically cited. They also suggest that improving the emotional experience of transit riders could be as important as improving traditional service features, such as headways and travel speeds.

“Understanding the relationship between how we travel and how we feel offers insight into ways of improving existing transportation services, prioritizing investments and theorizing and modeling the costs and benefits of travel,” said Morris.

Eating Prunes Can Help With Weight Loss

University of Liverpool

Research by the University of Liverpool has found that eating prunes as part of a weight control diet can improve weight loss.

Consumption of dried fruit is not readily recommended during weight loss despite evidence it enhances feelings of fullness.

Low fiber consumers

However, a study of 100 overweight and obese low fiber consumers, carried out by the University’s Institute of Psychology, Health and Society, tested whether eating prunes as part of a weight loss diet helped or hindered weight control over a 12-week period.

It also examined if low fiber consumers could tolerate eating substantial numbers of prunes in their diet, and if eating prunes had a beneficial effect on appetite.

To assess the effects of prunes on weight and appetite, participants in the study were divided into two groups – those who ate prunes every day (140g a day for women and 171g a day for men) and those who were given advice on healthy snacks over the period of active weight loss.

The researchers found that the group which ate prunes as part of a healthy lifestyle diet lost 2kg in weight and 2.5cm off their waists, whereas the group given advice on healthy snacks lost only 1.5kg in weight and 1.7cm from their waists.

The study also found that the prune eaters experienced greater weight loss during the last four weeks of the study, and that from week eight onwards participants in the prune group reported increased feelings of fullness. Moreover, despite the high daily doses, prunes were well tolerated.

Useful and convenient addition

Liverpool psychologist Dr Jo Harrold, who led the research, said: “These are the first data to demonstrate both weight loss and no negative side effects when consuming prunes as part of a weight management diet. Indeed in the long term they may be beneficial to dieters by tackling hunger and satisfying appetite; a major challenge when you are trying to maintain weight loss.”

Professor Jason Halford, Professor of Experimental Psychology and Director of the University’s Human Ingestive Behaviour Laboratory, added: “Maintaining a healthy diet is challenging. Along with fresh fruit and vegetables, dried fruit can provide a useful and convenient addition to the diet, especially as controlling appetite during dieting can be tough.”

The research was presented at the European Congress on Obesity in Sofia, Bulgaria.

Researchers Create Initial Catalog Of Human Proteins

Johns Hopkins Medicine

Important resource for speeding research and diagnostic development

Striving for the protein equivalent of the Human Genome Project, an international team of researchers has created an initial catalog of the human “proteome,” or all of the proteins in the human body. In total, using 30 different human tissues, the team identified proteins encoded by 17,294 genes, which is about 84 percent of all of the genes in the human genome predicted to encode proteins.
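
As a quick sanity check on that coverage figure, the arithmetic below assumes roughly 20,600 predicted protein-coding genes in the human genome (an approximation on our part, not a number taken from the paper):

genes_with_protein_evidence = 17_294      # genes whose protein products were identified in the study
predicted_protein_coding_genes = 20_600   # assumed size of the predicted protein-coding gene set
coverage = genes_with_protein_evidence / predicted_protein_coding_genes
print(f"Coverage of the predicted proteome: {coverage:.0%}")   # prints roughly 84%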

In a summary of the effort, to be published May 29 in the journal Nature, the team also reports the identification of 193 novel proteins that came from regions of the genome not predicted to code for proteins, suggesting that the human genome is more complex than previously thought. The cataloging project, led by researchers at The Johns Hopkins University and the Institute of Bioinformatics in Bangalore, India, should prove an important resource for biological research and medical diagnostics, according to the team’s leaders.

“You can think of the human body as a huge library where each protein is a book,” says Akhilesh Pandey, M.D., Ph.D., a professor at the McKusick-Nathans Institute of Genetic Medicine and of biological chemistry, pathology and oncology at The Johns Hopkins University and the founder and director of the Institute of Bioinformatics. “The difficulty is that we don’t have a comprehensive catalog that gives us the titles of the available books and where to find them. We think we now have a good first draft of that comprehensive catalog.”

While genes determine many of the characteristics of an organism, they do so by providing instructions for making proteins, the building blocks and workhorses of cells, and therefore of tissues and organs. For this reason, many investigators consider a catalog of human proteins — and their location within the body — to be even more instructive and useful than the catalog of genes in the human genome.

Studying proteins is far more technically challenging than studying genes, Pandey notes, because the structures and functions of proteins are complex and diverse. And a mere list of existing proteins would not be very helpful without accompanying information about where in the body those proteins are found. Therefore, most protein studies to date have focused on individual tissues, often in the context of specific diseases, he adds.

To achieve a more comprehensive survey of the proteome, the research team began by taking samples of 30 tissues, extracting their proteins and using enzymes like chemical scissors to cut them into smaller pieces, called peptides. They then ran the peptides through a series of instruments designed to deduce their identity and measure their relative abundance.
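
To give a sense of the “chemical scissors” step, the short Python sketch below performs a simplified in-silico digest with trypsin, the enzyme most commonly used in this kind of proteomics work; trypsin cuts after lysine (K) or arginine (R) unless the next residue is proline (P). It illustrates the general approach rather than the team’s actual pipeline.

import re

def tryptic_digest(protein_sequence):
    """Split a protein sequence into peptides after K or R, except when followed by P."""
    marked = re.sub(r"([KR])(?!P)", r"\1|", protein_sequence)   # mark cleavage sites
    return [peptide for peptide in marked.split("|") if peptide]

# Toy example sequence, for illustration only.
print(tryptic_digest("MKWVTFISLLFLFSSAYSRGVFRRDAHKSEVAHRFKDLGE"))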

“By generating a comprehensive human protein dataset, we have made it easier for other researchers to identify the proteins in their experiments,” says Pandey. “We believe our data will become the gold standard in the field, especially because they were all generated using uniform methods and analysis, and state-of-the-art machines.”

Among the proteins whose data patterns have been characterized for the first time are many that were never predicted to exist. (Within the genome, in addition to the DNA sequences that encode proteins, there are stretches of DNA whose sequences do not follow a conventional protein-coding gene pattern and have therefore been labeled “noncoding.”) The team’s most unexpected finding was that 193 of the proteins they identified could be traced back to these supposedly noncoding regions of DNA.

“This was the most exciting part of this study, finding further complexities in the genome,” says Pandey. “The fact that 193 of the proteins came from DNA sequences predicted to be noncoding means that we don’t fully understand how cells read DNA, because clearly those sequences do code for proteins.”

Pandey believes that the human proteome is so extensive and complex that researchers’ catalog of it will never be fully complete, but this work provides a solid foundation that others can reliably build upon.

Researchers Create Tiny Artificial Lung

Fraunhofer Institute for Interfacial Engineering and Biotechnology IGB
What medications can be used to treat lung cancer, and how effective are they? Until now, drug companies have had to rely on animal testing to find out. But in the future, a new 3D model lung is set to achieve more precise results and ultimately minimize – or even completely replace – animal testing. From June 23-26, researchers will be presenting their new model at the BIO International Convention in San Diego, California (Germany Pavilion, Booth 4513-03).
Lung cancer is a serious condition. Once patients are diagnosed with it, chemotherapy is often their only hope. But nobody can accurately predict whether or not this treatment will help. To start with, not all patients respond to a course of chemotherapy in exactly the same way. And then there’s the fact that the systems drug companies use to test new medications leave a lot to be desired. “Animal models may be the best we have at the moment, but all the same, 75 percent of the drugs deemed beneficial when tested on animals fail when used to treat humans,” explains Prof. Dr. Heike Walles, head of the Würzburg-based “Regenerative Technologies for Oncology” project group belonging to the Fraunhofer Institute for Interfacial Engineering and Biotechnology IGB.
These tests are set to achieve better results in the future: “We’ve developed an innovative 3D test system that allows us to superbly simulate what happens in the human body. Our plan is for this system to replace animal tests in the future,” says Walles. Essentially, the researchers have recreated the human lung in miniature – with a volume of half a cubic centimeter, each model is no bigger than a sugar cube. In a parallel effort, scientists at the Department of Bioinformatics at the University of Würzburg are developing computer simulation models for different patient groups. These are necessary because patients may have genetic variations that prevent therapies from having the desired effect. Comparing the theoretical and biological models allows each research group to optimize their results.
The biological model is based on human lung cancer cells grown on tissue, creating an artificial lung. A bioreactor is used to make it breathe and to pump a nutrient medium through its blood vessels in the same way our bodies supply our lungs with blood. The reactor also makes it possible to regulate factors such as how fast and how deeply the model lung breathes.
Now that the scientists have managed to construct the lung tissue, Walles is delighted to report that “treatments that generate resistance in clinics do the same in our model.” The researchers are now planning to explore the extent to which their artificial lung can be used to test new therapeutic agents. Should resistance crop up during testing, doctors can opt to treat the patient with a combination therapy from the outset and thus sidestep the problem. Thinking long-term, there is even the possibility of creating an individual model lung for each patient, which would make it possible to accurately predict which of the various treatment options will work. The required lung cells are collected as part of the biopsy performed to allow doctors to analyze the patient’s tumor.
On the trail of metastases
Testing new medications is by no means the only thing the model lung can be used for. It is also designed to help researchers to better understand the formation of metastases; it is these that often make a cancer fatal. “As metastases can’t be examined in animals – or in 2D models where cells grow only on a flat surface – we’ve only ever had a rough understanding of how they form. Now for the first time, our 3D lung tissue makes it possible to perform metastases analysis,” explains Walles. “In the long term, this may enable us to protect patients from metastases altogether.” In order to travel through the body, tumor cells alter their surface markers – in other words, the molecules that bind them to a particular area of the body. Cancer cells are then free to spread throughout the body via the body’s circulatory system before taking up residence somewhere else by expressing their original surface markers. The scientists plan to use their model lung’s artificial circulatory system to research exactly how this transformation occurs. And in doing so, they may someday succeed in developing medication that will stop metastases from forming in the first place.

Human Evolution Traded Brawn For Brains

Alan McStravick for redOrbit.com – Your Universe Online

Have you ever wondered why monkeys, chimpanzees, apes and other primates are frighteningly strong compared to us humans? If so, you are not alone. A new study digs into why and how this difference arose over the course of evolution.

If we take the non-human primate as the most logical known reference point in human evolution, then describing primate strength and cognitive abilities as superhuman and subhuman, respectively, would be incorrect. In fact, human strength and cognition are better described as subprimate and superprimate, respectively.

Humans were able to walk out of the forests and slowly build civilizations over millennia, eventually mastering and manipulating our environments. As we have progressed, we have created the car and the airplane, landed men on the moon, and learned to surf the virtual landscape we call the World Wide Web. All of that brain power required more and more energy, and with only a finite amount of energy available from food, some human features had to suffer. Muscle strength, it turns out, was an excellent candidate for energy to be siphoned from.

This finding emerged from a study conducted by scientists from Shanghai’s CAS-MPG Partner Institute for Computational Biology and research teams based at the Max Planck Institutes in Germany. The teams investigated the evolution of metabolites – small molecules such as sugars, vitamins, amino acids and neurotransmitters that represent key elements of our physiological functions. Their investigation showed that metabolite concentrations evolved in humans at a staggeringly fast pace compared to our primate cousins, especially in two tissues: the brain and muscle.

The science world has been abuzz about the study of the human genome since the mid-1990s, and that field has yielded excellent research results. The genome, like the rings of a tree, records steady changes that occur slowly over time. However, this international team of scientists, led by Dr. Philipp Khaitovich of Shanghai, looked at an area regarded as far more responsible for the development of our distinctive human features: the metabolome, the compendium of metabolites present in human tissue. “Metabolites are more dynamic than the genome and they can give us more information about what makes us human,” stated Khaitovich. “It is also commonly known that the human brain consumes way more energy than the brains of other species; we were curious to see which metabolic processes this involves.”

The genome, it appears, is the tortoise to the metabolome’s hare – and the metabolome has taken few rest breaks in this race, with the metabolome of the human brain evolving some four times faster than that of the chimpanzee. If that figure is surprising, then learning that human muscle has undergone metabolic change at ten times the chimpanzee rate is mind-blowing.

As humans have moved towards civilization, our lifestyles have become far more sedentary than those of our evolutionary ancestors. To account for this, the researchers simulated a high-sugar, high-fat, low-activity environment for a group of macaque monkeys used as subjects in the study. “For a long time, we were confused by metabolic changes in human muscle,” commented Dr. Kasia Bozek, lead author of the study, “until we realized that what other primates have in common in contrast to humans is their enormous muscle strength.”

Echoing Bozek’s statement, Dr. Josep Call of the Wolfgang Kohler Primate Research Center in Leipzig, Germany said, “This is common knowledge to all the zoo keepers, but it was never tested systematically.” To test this scientifically, the team devised a pull test in which several primates’ abilities were matched against the strength of university students and professional athletes. In each pull test, the human participants were woefully outmatched by the animals. Of course, ask any of the human participants to explain the answer to a math problem and you will see that primates have nothing on humans.

It was this last point that opened up a hypothesis too tantalizing for the team of researchers to ignore: perhaps the metabolic roles of human brain and brawn are intertwined. “Our results suggest a special energy management in humans that allows us to spare energy for our extraordinary cognitive powers at a cost of weak muscle,” contends Bozek. “The world of human metabolomics is just starting to open up its secrets to us,” said Dr. Patrick Giavalisco, who led the metabolome measurement effort at the Max Planck Institute for Molecular Plant Physiology in Golm. “Such human-specific metabolic features we find could be related not only to physical or cognitive performance but also to common human metabolic diseases,” he concluded.

The full article detailing the study and the findings of this international team was published this week in the open-access journal PLOS Biology.

State Of Cyber Crime: Survey Finds Hackers Are Winning The Fight

Peter Suciu for redOrbit.com – Your Universe Online

The news for businesses of all sizes isn’t good: the hackers are winning, and online attackers are more determined than ever to break into computers and steal data. Most worrisome, cyber criminals are more technologically advanced than the businesses trying to stop them.

Those are the findings of a new survey of 500 executives from US businesses, law enforcement services and government agencies, released on Wednesday by PwC US and CSO magazine.

The 2014 US State of Cybercrime Survey, an annual study of cybercrime trends, revealed that the number of cybercrime-related incidents and the monetary losses associated with these attacks will continue to rise. Moreover, US organizations’ cyber security capabilities fall short of the persistence and technological skills of their attackers.

The report found that only 38 percent of companies have a methodology for prioritizing security investments based on risk and the potential impact on business strategy.

“Cyber criminals evolve their tactics very rapidly, and the repercussions of cybercrime are overwhelming for any single organization to combat alone,” David Burg, PwC’s Global and US Advisory Cybersecurity Leader, said in a statement. “It’s imperative that private and public organizations collaborate to combat cybercrime and gain intelligence about security threats and how to respond to them. A united response will prove to be an indispensable tool in advancing the state of cybersecurity.”

The report also noted that the United States Director of National Intelligence now ranks cybercrime as a top national security threat, placing it higher than terrorism, espionage and even weapons of mass destruction.

US business leaders are now far more worried about such cyber-attacks than their global counterparts. The survey found that 69 percent of US business respondents said they were concerned about how cybercrime could threaten growth potential, compared to just 49 percent of global CEOs.

The survey found an average of 135 cybercrime incidents per organization over the past year, but actual costs remain largely unknown because more than two-thirds of respondents were unable to estimate their financial losses. Among those who could provide an estimate, the average monetary loss was $415,000.
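
Because more than two-thirds of respondents could not put a figure on their losses, that $415,000 average covers only the minority who could. The toy calculation below uses made-up loss figures (chosen so the average happens to match the survey’s $415,000), not survey data, to show the distinction between respondents counted and respondents skipped:

# Hypothetical responses; None means the respondent could not estimate a loss.
reported_losses = [250_000, None, None, 900_000, None, 95_000, None, None]

estimates = [loss for loss in reported_losses if loss is not None]
average_known_loss = sum(estimates) / len(estimates)

print(f"{len(estimates)} of {len(reported_losses)} respondents could estimate their losses")
print(f"Average loss among those who could: ${average_known_loss:,.0f}")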

The survey also identified the following major cyber security deficiencies:

· Most organizations do not take a strategic approach to cyber security spending

· Organizations do not assess security capabilities of third-party providers

· Supply chain risks are not understood or adequately assessed

· Security for mobile devices is inadequate and has elevated risks

· Cyber risks are not sufficiently assessed

· Organizations do not collaborate to share intelligence on threats and responses

· Insider threats are not sufficiently addressed

PwC recommended that organizations address these security deficiencies by investing in people and processes as well as technologies, and by holding third parties to the same or higher standards. Companies should assess the risks associated with supply chain partners, ensure that mobile security practices keep pace with the adoption and use of mobile devices, perform cyber risk assessments regularly, and share information internally and externally to gain intelligence on fast-evolving cyber risks.

Moreover, companies should strive to develop threat-specific policies, enhance training, and create workforce messaging to boost cyber security awareness across the organization.

“Internal threats have long been a part of the security landscape for enterprises,” said Charles King, principal analyst at Pund-IT. “While some incidents are criminally related (theft of IP, etc.) others stem from simple mistakes, like people erroneously attempting to access documents or portions of the company Intranet for which they are not approved. These kinds of issues, along with challenges related to opening data to approved partners such as those in the supply chain highlight the importance of information governance frameworks and solutions for most enterprises. The situation is likely to become increasingly complex and fraught, especially in light of the increasing risks related to mobile device use and volumes of data resulting from IoT strategies.”

“Smaller businesses are often targeted by cyber criminals due to their relatively porous and simplistic security solutions,” King told redOrbit. “It’s a bit like a smash and grab robbery or knocking over a convenience store. The rewards are considerably lower that [sic] cracking an enterprise environment but its often easy money with lower risk.”

Barriers To HIV Testing In Older Children

PLOS
Concerns about guardianship and privacy can discourage clinics from testing children for HIV, according to new research from Zimbabwe published this week in PLOS Medicine. The results of the study, by Rashida A. Ferrand of the London School of Hygiene & Tropical Medicine and colleagues, provide much-needed information on how to improve care of this vulnerable population.
More than three million children globally are living with HIV (90% in sub-Saharan Africa) and in 2011 an estimated 1000 infant infections occurred every day. HIV acquired through mother-to-child transmission around the time of birth is often unsuspected in older children, and the benefits of treatment are diminished in children who develop symptoms of immune system failure before infection is discovered.
Provider-initiated HIV testing and counseling (PITC) involves health care providers routinely recommending HIV testing and counseling when people attend health care facilities. To investigate the provision and uptake of PITC among children between 6 and 15 years old, the researchers collected and analyzed data from staff at 6 clinics in Harare, Zimbabwe.
Among 2,831 eligible children, about three-quarters were offered PITC, and 1,534 (54.2% of all eligible children) consented to HIV testing. The researchers diagnosed HIV infection in about 1 in 20 (5.3%) of the children tested, highlighting the need for more effective PITC. HIV infection was also found in 1 out of 5 guardians who tested with a child.
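The percentages are easier to follow with the denominators spelled out; the quick arithmetic below, using the figures quoted above, shows that 54.2% is measured against all eligible children, while the 5.3% positivity rate applies only to the children actually tested.
eligible_children = 2831
consented_and_tested = 1534
positivity_rate = 0.053   # roughly 1 in 20 of those tested
print(f"Consent rate among eligible children: {consented_and_tested / eligible_children:.1%}")  # about 54.2%
print(f"Approximate number of children diagnosed: {consented_and_tested * positivity_rate:.0f}")  # about 81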
The main reasons that health-care workers gave for not offering PITC were perceived unsuitability of the accompanying guardian to provide consent for HIV testing on behalf of the child, and lack of availability of staff or HIV testing kits. Children who were asymptomatic, older, or attending with a male or a younger guardian were less likely to be offered HIV testing. Male guardians were less likely to consent to their child being tested.
In interviews, health-care workers raised concerns that a child might experience maltreatment if he or she tested positive, and showed uncertainty around whether testing of the guardian was mandatory and whether only a parent (if one was living) could legally provide consent. When parents were alive but not present, seeking consent from another adult raised ethical concerns that a positive HIV test in a child would disclose the HIV status of a parent who hadn’t provided consent.
The study, which was funded by the Wellcome Trust, did not explore the reasons for refusal of HIV testing by clients. Also, because the relationship of the child to the accompanying adult was not available, the appropriateness of the guardian could not be independently ascertained.
Lead author Dr. Rashida Ferrand from the London School of Hygiene & Tropical Medicine said: “The fear of the stigma faced by the child and their family seems to be discouraging caregivers from testing children for HIV. However, with improved clarity of guidelines, engagement with staff, and organisational adjustments within clinics, it should be possible to harness the commitment of health-care workers and properly implement HIV testing and counseling.”
In an accompanying Perspective, Mary-Ann Davies and Emma Kalk of the University of Cape Town, who were uninvolved in the study, point out that “The fact that >90% of infected children had a previous missed opportunity for testing indicates suboptimal pediatric PITC coverage in most routine settings,” and call for “clear HIV testing policies for children and guidance on guardianship, together with training of [health-care workers] on such policies.”