Researchers Uncover Clue To Baldness, Could Lead To Cure

Male baldness, long considered the bane of men all over the world, may become a thing of the past after researchers found a biological clue that could raise the prospect of stopping and perhaps even reversing hair loss.

The researchers, from the University of Pennsylvania’s Perelman School of Medicine, have identified abnormally high levels of a lipid compound called prostaglandin D2 (PGD2) in the bald scalp of men with male pattern baldness. Lab studies were conducted using mice and cultured human hair follicles.

In both human and animal models, the team found that PGD2 and its derivative, 15-dPGJ2, inhibit hair growth. The inhibition occurred through a receptor called GPR44, which could be a promising therapeutic target for androgenetic alopecia in both men and women with thinning hair.

Among men with androgenetic alopecia (AGA), bald scalp tissue had elevated levels of PGD2 compared with hair-covered tissue from the same individual, said George Cotsarelis, MD and colleagues.

The researchers reported yesterday (March 21) in the journal Science Translational Medicine that drugs are already being developed that target the pathways associated with hair loss. The hope is that the findings lead to a cure for baldness.

Most men begin to go bald in middle age, with nearly 80 percent of men experiencing at least some hair loss by age 70. In men, testosterone plays a major role in hair loss, as do genetic factors, causing hair follicles to shrink until they eventually become so small they are effectively invisible, leading to the appearance of baldness.

“Our findings should lead directly to new treatments for the most common cause of hair loss in men, androgenetic alopecia,” the team wrote.

However, it remains unclear if blocking the GPR44 receptor would allow hair to regrow after balding, or if it would just prevent further balding. It also remains unclear whether inhibiting the receptor would have any effect in humans at all.

To explore that possibility, Cotsarelis and his colleagues examined scalp tissue in 22 white men between 40 and 65 who underwent hair transplantation for male pattern baldness. None of the men were taking either of two approved medications for baldness — minoxidil and finasteride.

The researchers found through genetic analysis that levels of PGD2 were three times higher in bald scalp tissue than in hair-covered scalp.

To further test the possibilities and better understand the significance of their finding, they turned to experimental studies involving mice. In normal mice, they found an association between the increase in PGD2 and the regression of hair follicles during normal hair cycling.

Mice engineered to have elevated levels of PGD2 in the skin developed alopecia, miniaturization of follicles, and sebaceous gland hyperplasia — all characteristics associated with human baldness.

“The next step would be to screen for compounds that affect this receptor and to also find out whether blocking that receptor would reverse balding or just prevent balding – a question that would take a while to figure out,” Cotsarelis told Helen Briggs of BBC News.

“Although a different prostaglandin was known to increase hair growth, our findings were unexpected, as prostaglandins haven´t been thought about in relation to hair loss, yet it made sense that there was an inhibitor of hair growth, based on our earlier work looking at hair follicle stem cells,” he added.

Prostaglandins are well characterized for their role in many bodily functions, including controlling cell growth and dilating smooth muscle tissue. It was surprising to find that PGD2 inhibits hair growth, given that prostaglandin F2alpha is known to increase it.

“The question of whether similar changes in PGD2 levels are found in the affected scalp of women with androgenetic alopecia also needs to be addressed,” the team wrote. Future studies, potentially testing topical treatments that may target GPR44, can determine whether targeting prostaglandins will benefit women with AGA as well.

Cotsarelis´ study was funded by the National Institutes of Health, the Skin Disease Research Center, the Pennsylvania Department of Health, the Edwin and Fannie Gray Hall Center for Human Appearance at University of Pennsylvania Medical Center, the American Skin Association, the Dermatology Foundation, and L´Oreal.

Cotsarelis is also a co-inventor of a patent owned by UPenn describing the PGD2 pathway as a target for inhibiting hair loss, among other claims.

Brain Handles Odors In A Different Way

Researchers from the Stowers Institute for Medical Research have traced individual odor molecules in the brain to create a new model of how our sense of smell works. While the brain was once thought to cluster related smells together, researchers have now discovered that it reacts to smells in a much more distributed way.
Previous research has shown that the brain handles the senses in a very orderly way.
Things that we touch were thought to be mapped together in the somatosensory cortex, things that we hear in the auditory system, and things that we taste in the gustatory cortex. The olfactory cortex handles things that we smell, but new research suggests that this system may react to smells differently than previously thought.
Rather than grouping chemically related smells together, the olfactory system maps them across its entire extent.
“When we mapped the individual chemical features of different odorants, they mapped all over the olfactory bulb, which processes incoming olfactory information,” says Associate Investigator C. Ron Yu, PhD, who led the study published in this week´s online edition of the Proceedings of the National Academy of Sciences (PNAS).
“From the animal´s perspective that makes perfect sense. The chemical structure of an odor molecule is not what´s important to them. They really just want to learn about their environment and associate olfactory information with food or other relevant information.”
As we smell things, nasal receptors send an electrical signal up to glomeruli in the olfactory bulb. The pattern with which these glomeruli send and receive signals to the olfactory system was thought to represent specific odors.
This hypothesis on how the brain handles odor had been widely accepted, but it had never been accurately mapped. Recent research and available technologies have shown that this process breaks down at a very fine level, thus requiring a more fine-tuned and up close look at the olfactory map.
Yu and his team generated a line of super-sensitive transgenic mice and built equipment sophisticated enough to deliver hundreds of odor stimuli to a single mouse.
In their tests, the researchers discovered that certain odors did activate within a specific and distinct area of the olfactory bulb. Other odors, however, signaled glomeruli in various areas of the bulb. Some odors even intermingled in the olfactory bulb, suggesting that the glomeruli haven´t yet evolved to detect the specific chemical shapes of certain odors.
This finding did not surprise lead author of the study Limei Ma, PhD, as there are thousands of odors.
“Many of them could be really novel to the organism, something they never encountered before,” she says. “The system must have the capability to recognize and encode anything.”
The research suggests that the “chemotopic” hypothesis of how the brain responds to different odors may not be completely accurate. The team has now devised a “tunotopic” hypothesis of the olfactory system: individual glomeruli are “tuned” to respond to particular chemical features of odor molecules, so a single odor can activate glomeruli in different areas of the olfactory bulb. The team believes that this sort of hypothesis can be used to describe how the brain handles the other senses as well.
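As a purely illustrative sketch of that “tuned glomeruli” picture, the snippet below assigns each toy glomerulus a single chemical feature, so any odorant (treated here as a bag of features) activates glomeruli scattered across the bulb rather than one dedicated region. The labels, features, and odorants are invented for illustration and do not come from the study.
```python
# Toy illustration of feature-tuned glomeruli: each glomerulus responds to one
# chemical feature, so an odorant (a set of features) lights up glomeruli
# spread across the bulb. All names and features here are invented.
glomerulus_tuning = {
    "g01": "aldehyde", "g07": "ester", "g13": "long_carbon_chain",
    "g22": "aldehyde", "g31": "ring", "g40": "ester",
}

def activated_glomeruli(odorant_features):
    """Return the glomeruli whose tuned feature appears in the odorant."""
    return sorted(g for g, feature in glomerulus_tuning.items() if feature in odorant_features)

print(activated_glomeruli({"aldehyde", "long_carbon_chain"}))  # ['g01', 'g13', 'g22']
print(activated_glomeruli({"ester", "ring"}))                  # ['g07', 'g31', 'g40']
```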
“When you have a new chemical synthesized, like new perfumes and food flavors, you don´t have to create new brain regions to react to it,” Ma said in the press release. “What you do is use the existing receptors to sense all these chemicals and then tell your brain whether this is novel, whether it´s similar, or whether it´s something really strange.”

UK’s Internet Economy Tops Amongst G20 Nations

The Internet is responsible for a larger percentage of the UK’s economy than any other G20 industrial or developing nation, a new study from Boston Consulting Group (BCG) has discovered.

According to BBC News, BCG figures show that the UK’s “Internet economy” was worth £121 billion (approximately $192 billion) in 2010 — or in excess of £2,000 ($3,176) per person.

Furthermore, they discovered that the Internet contributed more to the UK economy than the education, construction, or healthcare industries, and that more online retail sales were conducted there than in any other “major” economy, the British news organization also reported.

“If it were formally categorized as a sector, the internet would be the fifth largest in the UK,” ZDNet UK reporter David Meyer said, noting that those dollar figures include such things as “e-commerce, online ads, cloud data storage and other internet-related spending.”

The trend is unlikely to change. The Wall Street Journal reported on Monday that the UK is experiencing a higher-than-average Internet economy growth rate of 10.9% (compared to 8.1% for other developed G20 countries) and that the sector will contribute £225 billion ($357 billion) by the year 2016. If that holds, the Internet would make up a whopping 12.4% of the UK’s total economy.
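
As a rough sanity check, the £225 billion projection is consistent with simply compounding the 2010 figure at the reported UK growth rate for six years. The short sketch below uses only the numbers quoted above; treating 10.9% as a constant annual compound rate through 2016 is a simplifying assumption on this writer's part, not something the report necessarily states.

```python
# Rough check of the compound-growth arithmetic for the UK's internet economy.
# The 2010 value (GBP 121bn) and the 10.9% annual growth rate come from the
# article; compounding annually through 2016 is an assumption.
value_2010_bn = 121.0        # GBP billions in 2010
annual_growth = 0.109        # 10.9% per year for the UK
years = 2016 - 2010

value_2016_bn = value_2010_bn * (1 + annual_growth) ** years
print(f"Projected 2016 value: ~GBP {value_2016_bn:.0f}bn")  # ~GBP 225bn, matching the article
```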

In a March 19 statement, BCG said that it projected the Internet economy to grow at an average of 8% annually and to contribute a combined total of $4.2 trillion to the gross domestic product (GDP) of the G20 in 2016.

“If it were a national economy, it would rank in the world´s top five, behind only the U.S., China, India, and Japan, and ahead of Germany,” BCG Senior Partner David Dean, a co-author of the report, said.

The Internet economy of South Korea was second to the UK in terms of total impact on GDP, Meyer said, followed by China, Japan, and the US. In terms of projected growth rates, the Wall Street Journal reports that BCG is expecting Italy’s Internet economy to increase by 11.5%, Germany’s by 7.8% and America’s by 6.5%.

“However, the Internet economies of developing markets in the G-20 are expected to grow at an average of 17.8% through 2016, led by Argentina at 24.3% and India at 23%,” the Journal added. “In 2010, developing markets contributed 24% of the G-20’s Internet economy; by 2016 that will rise to 34%, the report says.”

“The Internet economy offers one of the world´s few unfettered growth stories,” said Dean. “Policymakers often cite GDP growth rates of around 10 percent per year in the developing markets, but they look past similar, or even higher, rates close to home.”

Allen Donates $300 Million To ‘Brain Observatory’

Microsoft co-founder Paul Allen has donated $300 million in order to create a center that will study the brain and learn more about exactly how the human mind works, the Allen Institute for Brain Science announced Wednesday.

The latest donation brings the amount of money that the 59-year-old businessman, investor and philanthropist has invested in the institute, which is named in his honor, to $500 million, Forbes staff writer Matthew Herper said.

According to Herper and the Wall Street Journal‘s Robert Hotz, the additional money will allow for the funding of additional projects and will double the number of scientists and technicians to more than 350 people.

The Allen Institute announced his contribution during a press conference in Seattle on March 21, during which Allen said that the goal of the Institute’s research is “to one day understand the essence of what makes us human,” Herper said in his March 21 article.

“I've always been fascinated by the workings of the human brain. I'm awed by its enormous complexity,” he added in comments published by Forbes. “Our brains are many magnitudes more advanced in the way they work than any computer software ever invented. Think about this: We can teach students to program computers in a couple of years of school. But even with a lifetime of learning, at present we are far away from fully understanding the brain ... Thus, we have only begun to scratch the surface of the complex problems inherent in figuring out the deep, detailed knowledge of the brain's inner workings.”

KPLU-FM out of Seattle said that the Institute is dedicated to “unraveling mysteries of the brain” and “bringing some of America’s top scientists” to the facility, which they note has been called “a brain observatory” at which researchers “hope to answer big questions about how the mind works ... They’ll peer inside the brain, similar to how groups of astronomers gather at major observatories to peer into the stars for answers about the formation of the universe.”

The radio station added that the project itself has been inspired by the type of research conducted by physicists, in which a team of dozens of scientists collaborate in order to answer core questions of their subject. At the Allen Institute, KPLU-FM reports, the staff will attempt to determine exactly how the brain works on the cellular and molecular level, including the mechanics between sight, decision-making, and action-taking. The Microsoft co-founder’s funding has reportedly given them enough capital to complete four years’ worth of research.

Among the projects completed by researchers at the facility since it was founded by Allen in 2003 are an online, computerized atlas which brings together a variety of different imaging techniques to document the structure of the human brain, as well as genetic and biochemical data, all of which are analyzed using state-of-the-art technology, Hotz reported. More than 4,000 scientists are already using the tool, he added.

With Allen’s latest donation, researchers at the facility will now attempt to determine how the brain stores, encodes and processes information; discover the cellular building blocks involved in all brain function, as well as how and why they are sometimes targeted by diseases; and learn how cells develop and then collaborate in order to create the systems behind a person’s thoughts and behaviors, the Institute said in a press release.

“Paul Allen’s generosity and bold vision have allowed us to build a unique organization and advance brain research in ways that wouldn’t be possible otherwise,” Allen Institute CEO Dr. Allan Jones said in a statement Wednesday. “This new funding enables us to apply our structured, industrial-scale approach to science to tackle increasingly complex questions about how the brain works — questions that must be answered if we are to understand and treat autism, Alzheimer’s disease, depression, traumatic brain injury and the myriad other brain-related diseases and disorders that affect all of us either directly or indirectly.”

Samsung’s New Smart TVs: Dazzling Pros And A Few Surprising Cons

Jedidiah Becker for RedOrbit.com
Samsung Electronics announced earlier this month that it would start shipping out its much-hyped fifth-generation of Smart TVs.
Decked out with a panoply of hi-tech new features, the new sets reflect the ambition of the leading global innovator in consumer electronics to transform the old boob-tube into a multi-functional media center fit for George J. Jetson's living room.
“Our goal with this year´s models,” explained the company´s senior VP of Home Entertainment Joe Stinziano, “was to truly redefine what a TV can be while providing unprecedented choice to the consumer.”
“We have delivered the incredible picture quality and beautiful design that consumers have come to expect from Samsung, as well as seamless connectivity, several new ways to control the TV and exclusive services.”
While availability varies from one model to the next, a few of the most impressive new features include:
– A dual-core processor for launching apps and navigating tasks: Combined with a few hundred gigs of storage, this makes the Smart TV essentially as computer-esque as any smartphone on the market — although you´ll need the Smart Wireless Keyboard (sold separately) to take full advantage of their interfacing potential.
– A built-in camera complete with noise-canceling microphone: Tired of those thousand-button remotes that seem to require 12 hours of college courses just to operate? Well, now you can rifle through most of the TV's functions using voice and even gesture commands. Just say “Web Browser” or point at it on the screen, and you'll be online in seconds. And if you're sitting outside the range of the camera and built-in microphone, no worries — the new Smart Touch Remote also comes equipped with a microphone. ... Oh yeah, and did we mention that the camera is paired up with face-recognition software, allowing you to create and access separate user settings as well as Facebook, Twitter and other social media accounts simply by flashing your mug in front of the camera?
– A new function called AllShare Play: This lets users connect with their mobile devices and even store up to five gigs of media in the cloud. It also lets you use your smartphone or tablet to search the Web for what you want and then launch the site to the TV with a tap of the touch-screen.
While these are just a handful of the new Smart TV's innovative novelties, the most impressive feature might well be the dramatically improved picture quality on all of its Full HD LED and Plasma screens.
In addition to adding its state-of-the-art Micro Dimming contrast enhancement to even more of its LED models, the new Micro Dimming Pro and Micro Dimming Ultimate provide richly textured colors and mind-boggling sharpness that sets the standard for the industry.
All of its Plasma TVs will also now come equipped with its Real Black Filter, delivering the black levels and color contrast previously reserved only for high-end models. At the upper end of the price scale (including models E6500, E7000 and E8000), Real Black Pro will take its place, absorbing even more external light reflections than its predecessor and yielding one of the most life-like pictures ever seen.
And what about all those YouTube videos we love to watch on the big screen but get annoyed at because of the grainy, low-resolution picture? The new Smart TVs come with innovative De-Mosquito and De-Blocking image filters that can dramatically improve the sharpness and clarity of even low-quality videos.
Samsung has also opted to include 3D technology on a larger range of its LED and Plasma TVs. What's even cooler, they'll also be shipping at least two complimentary pairs of those expensive 3D glasses with every TV that's equipped for it — and you get four free pairs if you're ready to dish out the cash for the high-end models.
ROOM FOR IMPROVEMENT
The list of dazzling new features and improvements in quality goes on and on. But before readers start to suspect this writer of being 'on the take' from Samsung, it's worth pointing out that not everyone is simply staring in jaw-gaping wonder at what the electronics gods hath wrought.
The tech-news site Ars Technica, for instance, was allowed to take some of the new Smart TVs for a proverbial test drive a few weeks before their release. While generally impressed with the visionary direction in which Samsung seems to be taking its Smart TV technology, the site still saw considerable room for improvement.
Some of the smaller issues included the fact that the TVs' built-in cameras do not swivel from side to side but only up and down, meaning that if the user isn't standing or sitting directly in front of the TV, they won't be able to use the motion-sensing and face-recognition functions, at least not optimally.
Ars Technica also pointed out that the TV's voice recognition appears to suffer from some of the same frustrating annoyances familiar to users of Apple's Siri or its Android counterparts. However, this is more than likely a reflection of the current state of voice-recognition technology (which, alas, leaves much to be desired) rather than any particular deficiency in Samsung's products.
As with the voice controls, Ars Technica found that the TV's motion-sensing controls were far from flawless. Although a decided improvement over the likes of the Kinect and Wiimote technologies (from Xbox and Wii, respectively), the site still found Samsung's gesture controls to be “a little stuttery and unsure.”
The Smart TV's new Smart Touch Remote offers an elegant simplification and streamlining of its complex predecessors, adorned with only a few essential buttons (for volume and channel) and a convenient, highly functional touchpad. The buttons, however, had “little tactile feedback,” making them difficult to use. And on the whole, they just seemed incongruous with the whole hi-tech package.
“The channel up/down button seems wildly out of place on a TV like this; paging through channels sequentially just to see what's on seems like an antiquated action to include in such a modern device,” wrote Ars Technica's Casey Johnston.
A POTENTIAL PRIVACY THREAT?
While this handful of minor technical shortcomings did not escape Ars Technica's fastidious eye, other critiques of Samsung's new Smart TVs have been of a more Orwellian nature.
Gary Merson, a tech writer for MSNBC´s TechnoLog and self-proclaimed “HD Guru,” sees a potential security threat in the new Smart TVs.
Now that the integration of Internet and TV is complete, the television experience has become an interconnected, two-way exchange of information. And with sensitive, built-in cameras and microphones always connected, Merson remains apprehensive about whether the technology is able to safeguard the privacy of its users.
“While these features give you unprecedented control over an HDTV, the devices themselves, more similar than ever to a personal computer, may allow hackers or even Samsung to see and hear you and your family, and collect extremely personal data,” Merson wrote on Monday.
Though such concerns may at first glance seem like the delusions of a paranoid technophobe, recent incidents involving the popular California-based webcam maker Trendnet provide grounds for a bit of reflection.
In January, rumors began surfacing in blogs and chat rooms that a seemingly minor coding glitch in the software of some of Trendnet´s webcams had led to a number of disturbing security breaches.
Using a simple hacking trick, savvy Web surfers were able to obtain live video streams of hundreds of unsuspecting users without even entering a password.
Within days, numerous sites sprang up across the Web offering instructions on how to exploit the security blip. It wasn't until February that the company publicly acknowledged the embarrassing security breach and provided vulnerable users with software patches.
Yet even more troubling to Merson is the simple fact that Samsung has not yet issued a privacy policy explicitly delineating its data collection and sharing practices for the new Smart TVs.
According to Merson, the only thing Samsung has officially stated was that it “assumes no responsibility, and shall not be liable” if a product or service is not “appropriate” — whatever that means.
Owners of the new TVs need to be aware that there is no simple on-off switch or plug for the camera and microphone. While a Samsung representative explained that the voice-command feature can in fact be turned off, the user has to get into the TV´s software to actually do so.
And what´s more, as Merson points out, neither the camera nor microphones have indicators to let users know whether they´re on or off. Essentially, the only way to be sure the camera isn´t watching you is to physically turn it upwards towards the ceiling.
Thus, like every form of technology, from stone-tipped spears to satellites, it appears that Samsung's revolutionary new Smart TVs may — to put it dramatically — have the potential for both good and evil. Hopefully the forward-thinking company will quickly recognize that it's in both the consumer's interest and its own to iron out any kinks in its products' functionality and security.

Robotic Jellyfish Fuses Hi-Tech Materials With Nature’s Elegant Design

In an article published Wednesday, a team of Navy-sponsored researchers outlined an in-the-works project for a robotic jellyfish that is able to propel itself through water by harnessing the latent power of the ocean.

Humorously dubbed ‘Robojelly’, the mechanical cnidarian was conceived with a very serious purpose in mind: researchers say they hope the experimental gizmo will eventually be used to assist emergency teams in dangerous underwater rescue operations — like the recent Costa Concordia disaster that claimed the lives of at least 25 people when the cruise liner ran aground off the coast of Italy in January.

Particularly ingenious is the means the scientists have devised for powering the device. Drawing on and modifying existing hydrogen-power technology, Robojelly would derive its fuel from sea water, giving it access to an essentially unlimited power source.

“To our knowledge this is the first successful powering of an underwater robot using external hydrogen as a fuel source,” wrote lead author Yonas Tadesse, a mechanical engineer at Virginia Tech.

REPRODUCING NATURE´S MECHANICS

Equally impressive is how Tadesse's team plans to make the robot swim.

And if mimicry really is the highest form of flattery, then Mother Nature ought to be blushing.

Though still in the early phases of development, the scientists' model draws on the elegant simplicity of the mechanism used by real jellyfish to propel themselves through the water.

One of the world´s most ancient and primitive life forms, jellyfish are equipped with simple rows of circular muscle that line the inside of their mushroom-shaped cap. When these muscles relax, the cap becomes engorged with water and puffs out, giving the creature its characteristic bell-shaped form.

When the circular muscles contract, the water held in the cap is forcefully ejected, propelling the gelatinous invertebrate through the water in rhythmic bursts.

In an attempt to replicate this movement, the researchers are making use of a material known to engineers as shape-memory alloy. As the name indicates, the material is an amalgam of metals that is able to resume its original shape after being deformed or scrunched up.

Essentially, it's as malleable as tin foil and as resilient as rubber.

The researchers say the body of the robot consists of eight segments of this shape-memory alloy arranged in a form similar to the jellyfish's bell-like cap.

In order to mimic the contraction and relaxation of natural muscles, the researchers coated each of these eight segments with a layer of platinum black powder. With electrical help from additional “straws” of pure carbon, the platinum coating catalyzes a chemical reaction with oxygen and hydrogen molecules in the sea water, releasing heat that is then transferred to Robojelly´s artificial muscles.

When the heat reaches the rings of shape-memory alloy, it causes them to temporarily change their shape, or 'contract'. They are then immediately cooled off by the surrounding water and resume their original shape, ready to start the process again.

While the basic mechanism for allowing the robot to glide through water has already been worked out, the researchers say that they still have a bit of tinkering to do to figure out how to coordinate and control Robojelly´s movements.

In their current model, the eight segments that make up the bell-shaped cap all contract and relax simultaneously. In order to guide its movements, however, the team says it will have to work out a mechanism for controlling the contraction of each individual segment separately.

Funding for the project came from the US Office of Naval Research, which supports independent private research that is considered potentially beneficial to the US Navy and Marine Corps.

The team's report was published Wednesday in the journal Smart Materials and Structures, a publication of Britain's Institute of Physics.

CO2 Storage in Deep Saline Aquifers

[ Watch the Video ]

Brett Smith for RedOrbit.com

A group of researchers at MIT, led by Ruben Juanes, published a study this week showing that deep saline aquifers in the United States are capable of storing a century's worth of carbon dioxide produced by the nation's coal- and gas-fueled power plants.

The researchers were able to accurately model carbon dioxide storage in the aquifers, or layers of water-bearing permeable rock, that would be located well below those water sources used for human consumption or agriculture, according to the study published in the Proceedings of the National Academy of Sciences (PNAS) journal.

This development could be a huge windfall for the carbon capture and storage (CCS) industry's effort to reduce carbon emissions. Systems that merely capture carbon emissions have already proven to be very effective: a newly proposed clean coal plant to be built near Edinburgh, Scotland by the American company Summit Power Group would have systems that capture carbon emissions at 90 percent efficiency, according to Severin Carrell of The Guardian.

The bigger problem in reducing coal-fired power plant emissions is what to do with the carbon dioxide once it is captured. The proposed Scottish power plant has run into opposition because the current plan for these captured gases would be to use them in an effort to pump oil out of the North Sea, which is not considered an environmentally friendly use.

Deep underground locations are currently viewed as the most promising places in the U.S. to store the emissions. The process, known as geo-sequestration, would involve injecting carbon dioxide in its supercritical state into geological formations. The deep saline aquifers described in the study would be bounded above by a cap-rock formation that would prevent the flow of emissions up toward the surface.

In a video presentation, researcher Mike Szulczewski described how the MIT group modeled the sub-surface flow of supercritical fluid both during and after the injection process. He said the injection stage was modeled “to ensure that the injection pressure does not become too high and fracture the cap rock.” Such fracturing would cause significant CO2 leakage from the aquifer.

“We also modeled what happens to the CO2 after injection to ensure it does not travel to a potential leakage pathway like a large fracture,” said Szulczewski.

After injection, the carbon dioxide is potentially trapped in the aquifer by two different mechanisms: capillary and solubility trapping. Capillary trapping, also called residual gas trapping, occurs when the injected CO2 plume passes through porous rock and surface tension disconnects some of the supercritical fluid into the pores. Researchers simulated this through the use of tiny multi-colored glass beads that trapped liquid as it passed through them.

Solubility trapping happens because CO2-saturated water is about 1 percent denser than the aquifer's unsaturated water, which causes the dissolved CO2 to sink to the bottom of the aquifer over time. This type of trapping is less likely to leak, since the CO2 is no longer buoyant and cannot rise back toward the surface.

Using the two criteria of proper injection pressure and aquifer trapping potential, the researchers concluded that the nation's deep saline aquifers could store enough CO2 to stabilize U.S. emissions at the current rate for more than 100 years.

Ibuprofen May Help Cure Altitude Sickness

A new study by the Stanford University Medical Center reports that ibuprofen, an anti-inflammatory medication that is used often as a painkiller, may prove effective at curbing the symptoms of acute mountain sickness. Symptoms include headache, fatigue, dizziness, nausea, vomiting and poor appetite.
The researchers conducted a double-blind, placebo-controlled trial of 86 participants (58 men and 28 women) who traveled to the White Mountains northeast of Bishop, California. The participants stayed the night at 4,110 feet and were given 600 milligrams of either ibuprofen or a placebo at 8 a.m. Next they headed up to 11,700 feet, where they were given a second dose at 2 p.m. They then hiked 3 miles up to 12,570 feet, where a third dose was administered at 8 p.m., and spent the night on the mountain.
The researchers found, through a questionnaire, that of the 44 participants who took ibuprofen, 19 (43%) came down with symptoms of altitude sickness, while 29 of the 42 who received the placebo (69%) came down with symptoms. The conclusion was that ibuprofen reduced the incidence of altitude sickness by 26 percentage points.
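For readers who want to see where those percentages come from, here is a small worked example using only the counts reported above (the placebo group size of 42 is inferred from the 86-person total); describing the 26-point gap as an absolute reduction, and adding a relative reduction alongside it, is this writer's framing rather than the study's.
```python
# Reported counts: 19 of 44 ibuprofen takers and 29 of 42 placebo takers
# developed altitude sickness symptoms (42 is inferred from 86 - 44).
ibu_rate = 19 / 44                                   # ~43% of the ibuprofen group
pla_rate = 29 / 42                                   # ~69% of the placebo group

absolute_reduction = pla_rate - ibu_rate             # ~0.26, the 26-point difference
relative_reduction = absolute_reduction / pla_rate   # ~0.37, roughly a third fewer cases

print(f"ibuprofen {ibu_rate:.0%} vs placebo {pla_rate:.0%}")
print(f"absolute reduction {absolute_reduction:.0%}, relative reduction {relative_reduction:.0%}")
```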
The researchers also observed that those who took the drug suffered from less severe symptoms compared to the placebo group. But, according to the press release, the reduction in severity was not statistically significant, based on the self-reporting survey.
Researchers don´t know exactly what biological mechanisms cause altitude sickness, but some think the lack of oxygen causes the brain to swell with fluids. Ibuprofen is thought to reduce the inflammation, because of its anti-inflammatory properties.
Other medications, such as acetazolamide and dexamethasone, are available for the treatment of acute mountain sickness, but their side effects are more severe than those of ibuprofen. Dr. Grant Lipman, author of the study, says, “The safety profile of ibuprofen makes it more attractive than dexamethasone, which has been associated with hyperglycemia, adrenal suppression, delirium, depression, insomnia and mania.”
The researchers suggest a dosage of 600 milligrams. Although a higher dose might offer better prevention, there are risks of gastrointestinal and kidney problems if somebody becomes dehydrated.
The study is published online in the Annals of Emergency Medicine.

New Study Reveals That Butterflies Know Exactly Where To Go

New research provides scientists with details about the migratory patterns of monarch butterflies and their endangered habitats.

The Monarch butterfly (or Danaus plexippus) is a popular creature worldwide. Perhaps the most recognized and quintessential butterfly, the Monarch can be found as far south as Mexico and as far north as Canada. In fact, each year millions of these creatures begin their migration from Mexico to the great white north, breeding and laying eggs as they go. Most of these butterflies will stay in the southern and central areas of the United States, laying their eggs on milkweed plants. Some later generations of Monarchs will travel all the way to Canada to reproduce.

In recent years, the milkweed population of the United States and Canada has been damaged, creating problems for this amazing Monarch migration. In fact, the International Union for the Conservation of Nature (IUCN) has declared this migration “threatened” due to milkweed destruction, and Canada has had the Monarch butterfly on their list of species of “special concern” since 1997.

To find out more about this species and why it migrates the way it does, new research is being conducted at the University of Guelph, led by Prof. Ryan Norris of the Department of Integrative Biology, along with former graduate student Nathan Miller and Environment Canada. Their research shows how these Monarchs recolonize even the northernmost reaches of their breeding grounds. With this information, Norris' team hopes to preserve the species as well as protect its food, habitats, and breeding grounds.

This information is critical in helping researchers understand why the Monarchs migrate this way and, furthermore, how well they will be able to withstand environmental changes, such as habitat loss.

“It wasn´t clear where these individuals were born and how long they lived,” Norris said. “One possibility was that some monarchs that reach places like southern Ontario could have migrated all the way from Mexico.”

To unravel this mystery, Miller sampled Monarchs from 44 sites across Ontario, Canada and the northern US. He then analyzed chemical markers, or stable isotopes, as well as wing wear to determine the birthplace of each butterfly. This research showed the team that as much as ten percent of the test subjects in the northern breeding range had traveled directly from Mexico. This is no easy feat for any living creature, and the results of the finding were nothing less than extraordinary to the team.

“This is an incredible journey from an animal this size, especially if you consider that these butterflies are little more than eight months old and have travelled thousands of kilometers over their lifetime,” Miller said.

The other 90 percent of Monarchs studied by researchers were first-generation butterflies born that spring while their parents were en route to the north. While researchers had previously thought the most fertile area for Monarchs was the southern area of the United States, the new evidence suggests that many of the butterflies had been born further north, in the central US.

“Linking these periods of the breeding cycle provides us key information for conservation and identifies highly productive regions that fuel the migration further north,” said Norris.

Norris and Miller´s team published their results in the journal PLoS One.

Can Common Pain Relievers Cause Hypertension?

According to a Tel Aviv professor of medicine, common over-the-counter medications such as painkillers may be a hidden cause of high blood pressure and hypertension. These drugs may even conflict with current prescriptions, rendering them useless.

While it may be common knowledge by now that kidney failure and endocrine tumors cause high blood pressure, the effects of common pain medications and over-the-counter drugs remain dangerously unknown, according to new research.

Professor Ehud Grossman of Tel Aviv University´s Sackler Faculty of Medicine has recently conducted research on these common drugs and found them to be an underlying cause of hypertension.

This research was published in the American Journal of Medicine.

Hypertension is a major risk factor for strokes, heart attacks, and brain aneurysms. Even patients on anti-hypertensive medications are at risk, as the chemical makeup of these common drugs can raise blood pressure and interfere with other medications. Simple drug interference isn't the most dangerous part of this issue, according to Prof. Grossman. What is most troubling is how dangerously unaware doctors and patients are of the kinds of reactions these drugs can have.

Prof. Grossman suggests that one reason these over-the-counter medicines are not suspected to have dire effects is the relative ease with which they can be obtained.

“In diagnosing the causes of hypertension, over-the-counter drugs like ibuprofen are often overlooked,” Prof. Grossman said in the published paper. The new research studies all kinds of common medications related to high blood pressure. Examples include painkillers, birth control, anti-depressants, and antibiotics.

These common medications have been widely shown to increase blood pressure in patients taking the drugs. However, the new research conducted by Prof. Grossman found that, because these drugs are so common, many doctors fail to account for them in their prescribed medical treatments. Worse still, these doctors often failed to inform the patient about the effects of these medications and their potential interaction with the prescribed drugs. In the end, it´s the doctor´s responsibility to inform the patient as to what kind of reactions could arise.

According to Prof. Grossman, doctors would be advised to either decrease the amount of prescribed medication when common drugs are being used, or prescribe an anti-hypertensive medication for patients currently taking these common medications. “Many physicians don´t account for this, and some don´t even know about it. It´s their responsibility to be informed and make sure that their patients are aware that this is a possibility,” says Prof. Grossman.

While treatment can usually be altered to avoid the dangers of hypertension, this isn't always the case. For example, new anti-vascular endothelial growth factor (anti-VEGF) drugs block the formation of new blood vessels, preventing vessels and arteries from forming on solid tumors, but they raise blood pressure as a side effect. Because these drugs perform so effectively, doctors may not want to prescribe an additional anti-hypertensive medication. In the end, patients should be monitored closely in their specific treatment.

Patients should also be aware of hypertension and the types of drugs that may bring about this danger. “Once a patient has won a longer life with the use of these drugs, you don´t want to expose them to problems associated with blood pressure, such as stroke,” says Prof. Grossman. By simply being aware of the drugs taken, patients and doctors can adjust regimens to prevent hypertension and its effects.

How Does Shock Therapy Really Help Depressed Patients?

Electroconvulsive therapy (ECT) has been an effective yet controversial tool to treat severe depression for more than 70 years, and now Scottish researchers for the first time say they have discovered why the procedure often works the way it does.

ECT works by altering how different parts of the brain involved in depression communicate with each other. The University of Aberdeen and University of Dundee researchers, reporting in the journal Proceedings of the National Academy of Sciences (PNAS), said ECT involves anesthetizing patients with serious mood disorders, and then using an electric shock to induce a seizure. They said the method has been the most effective treatment available for seven decades.

ECT has the strongest supporting data among treatments for patients whose depression doesn't respond to medication, according to the American Psychiatric Association. Between 10 and 20 percent of depressed patients receive shock therapy, Paul Holtzheimer, an associate professor of psychiatry and surgery at Dartmouth Medical School who was not involved in the new study, told Bloomberg's Elizabeth Lopatto.

“This gives us a much more powerful view of the brain,” he told Lopatto in a telephone interview. “If this study holds up, it tells us this is a network problem.”

In the Scottish study, nine patients scheduled for ECT had their brains scanned using functional MRI both before and after treatment. The MRI detects blood flow to specific regions of the brain. The team analyzed the brain´s connectivity using a new mathematical model.

“ECT is a controversial treatment, and one prominent criticism has been that it is not understood how it works and what it does to the brain,” said study leader Professor Ian Reid. “However we believe we´ve solved a 70-year-old therapeutic riddle because our study reveals that ECT affects the way different parts of the brain involved in depression connect with one another.”

Despite all the controversy surrounding the use of shock therapy, the procedure has probably helped 75 to 85 percent of patients recover from their symptoms, said Reid.

Reid said all nine patients in the study were diagnosed with severe clinical depression and were successfully treated with ECT — two sessions per week, an average of 8 total treatments. None of the patients in the study had responded to chemical antidepressants.

Using the new mathematical model to analyze brain connectivity, “we were able to find out to what extent more than 25,000 different brain areas 'communicated' with each other and how the brain's internal communication patterns differed before and after ECT treatment in severely depressed patients,” said study co-author Professor Christian Schwarzbauer.
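
The article does not spell out the team's mathematical model, but the general idea behind this kind of functional-connectivity analysis can be sketched roughly: correlate the fMRI time series of every pair of regions and compare the average connection strength before and after treatment. Everything in the snippet below (the region count, the random data, the use of plain Pearson correlation) is an illustrative assumption, not the study's actual method.

```python
# Toy functional-connectivity sketch: correlate regional fMRI time series and
# compare mean connection strength before vs. after treatment. Random data and
# small sizes stand in for real scans; the study's own model was far richer.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 100, 240                   # toy sizes, not the study's >25,000 areas

pre_scan = rng.standard_normal((n_regions, n_timepoints))
post_scan = rng.standard_normal((n_regions, n_timepoints))

def mean_connectivity(timeseries):
    """Average off-diagonal Pearson correlation between region time series."""
    corr = np.corrcoef(timeseries)                   # regions x regions correlation matrix
    off_diagonal = corr[~np.eye(len(corr), dtype=bool)]
    return off_diagonal.mean()

print("mean connectivity before ECT:", mean_connectivity(pre_scan))
print("mean connectivity after ECT :", mean_connectivity(post_scan))
```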

The researchers said their findings suggest a “hyper-connection” between the areas of the brain involved in emotional processing and mood change and the parts of the brain involved in thinking and concentrating. Their key finding is that if you compare the connections in the brain before and after ECT, the treatment reduces the connection strength between these same areas – it reduces this hyperconnectivity.

For the first time, the researchers say, they can point to something that ECT does in the brain that makes sense in the context of what is thought to be wrong in people who are depressed.

“As far as we know no-one has extended that 'connectivity' idea about depression into an arena where you can show a treatment clearly treating depression, changing brain connectivity,” said Reid. “And the change that we see in the brain connections after ECT reflects the change that we see in the symptom profile of patients who generally see a big improvement.”

The team said they now hope to continue monitoring the patients to see if the depression and hyperconnectivity return.

“If we understand more about how ECT works, we will be in a better position to replace it with something less invasive and more acceptable,” said Reid. “At the moment only about 40 percent of people with depression get better with treatment from their GP.”

The findings may lead to new drug targets which match the effectiveness of ECT without an impact on memory, the researchers noted.

“These findings make a lot of sense,” Professor David Nutt, of Imperial College London, told BBC News. “Indeed, the disabling of connections between different areas of the brain is what I would have predicted from the depression literature.”

“This is why my research group is progressing psilocybin – which also disrupts this network, as we showed in PNAS recently – as a treatment for depression,” said Nutt, who was not involved in the Scottish study.

Schwarzbauer said their new method could be “applied to a wide range of other brain disorders such as schizophrenia, autism, or dementia, and may lead to a better understanding of the underlying disease mechanisms and the development of new diagnostic tools.”

He said more studies may lead to therapies that don't have the side effects of ECT. Confusion, memory loss and physical pain such as muscle spasms are among the most common side effects of shock therapy. Also, a better understanding of how the brain is affected in depression may enable doctors to better identify patients who will benefit from treatment, he added.

“This is the start of a longer process,” Schwarzbauer said. “This is a very novel finding.”

Synthetic Marijuana Use A Major Problem With Teens

Synthetic marijuana, with nicknames such as “Mr. Smiley,” “Blaze,” and “Spice,” has been blamed for the hospitalizations of at least three teens who smoked or ingested the fake pot trying to get a legal high, according to a new report.

The substance is created by spraying a mixture of plant and herbal materials with chemicals. Until recently, this relatively new drug had been sold legally in convenience stores around the country as potpourri or herbal incense. Perhaps more troubling, the drug is apparently still available online.

Joanna Cohen, an emergency medicine physician at Children´s National Medical Center in Washington, DC, told USA Today that there was very little information available on this substance in medical literature when it first started making waves.

With the cases that have been reported, namely three teen ER visits, a number of visible “telltale signs” are now known, including excessive sweating, agitation, speech trouble, aggression and restlessness, and “euphoric and psychoactive effects” commonly associated with traditional marijuana use.

Given the growing popularity of this drug with teens and young adults, “it´s important to share the information we have with other doctors and help parents and schools be on the lookout” for symptoms, which require immediate medical attention, said Cohen, lead author of the report, published today in the journal Pediatrics.

The National Institute on Drug Abuse filed a report in November stating it found that nearly one in nine high school seniors had tried synthetic marijuana in the past year, second only to the number of teens who had used marijuana.

The American Association of Poison Control Centers reported that synthetic marijuana started to become a problem in 2009 and quickly grew in popularity. It said it handled nearly 7,000 related calls in 2011, more than double the number received in 2010.

In the latest report, Cohen and colleagues presented three case studies of teenagers who visited the ER after ingesting the fake pot. Each teen suffered similar symptoms, such as rapid heartbeat and high blood pressure. All three were treated and eventually released from the hospital.

“We became concerned about it after seeing these teenagers, and when we researched the literature, we realized there is very little out there about the effects of these compounds,” said Cohen. “We wanted to publish these case reports mostly because we wanted to share the information we had gathered to let the medical community know what we were seeing.”

She said it is difficult to say whether the symptoms experienced by the three teens are typical of this drug because there is so much we don't know about it yet. “The big danger is that kids' brains are still developing and we don't know about the long-term effects. It can have serious consequences such as memory loss, [mental] deficits, and psychosis with long-term, repeated use.”

Federal lawmakers have yet to pass a bill banning the sales of fake marijuana, but at least 39 states in the union have adopted their own bans, according to the National Conference of State Legislatures. The Drug Enforcement Administration earlier this month extended its ban on five chemicals used to produce the synthetic drug. Its one-year ban, which is set to expire shortly, puts a Schedule I classification on those substances, meaning they are the most restricted under the Controlled Substances Act. Schedule I drugs are found to have a high potential for abuse and no accepted medical uses.

Cohen said urine drug screens are useless because the compounds don´t show up in them, “so comprehensive lab work is necessary to confirm use.” She added that if the teens hadn´t told medical staff which substances they had used, it is likely a diagnosis and cause of symptoms would go unknown.

“There is very little available to test for these substances. The tests aren´t routinely available and are costly,” Bruce Goldberger, professor and director of toxicology at the University of Florida College of Medicine in Gainesville, told Kim Carollo at ABC News.

“We sometimes have no idea what we´re dealing with,” added Dr. Corey Slovis, chair of emergency medicine at Vanderbilt University Medical Center in Nashville. “We may see a patient who is extremely agitated with symptoms that could be due to some other kind of drug.”

Slovis said the compounds found in synthetic marijuana are much stronger than real marijuana, and the bigger problem is that it is made with varying ingredients. Those ingredients make it difficult to determine specifically which agent or agents are responsible for symptoms.

David Rotenberg, vice president of treatment at Caron Treatment Centers in Wernersville, Pennsylvania, told Denise Mann of WebMD that he is concerned about the rise in use of these compounds and the number of kids who are finding themselves hospitalized as a result.

There are so many unknowns, he said. “You don´t know what you are taking, or what dose you are getting, and what the kid is predisposed to.”

These drugs are perhaps even more attractive to kids who are already abusing other drugs and alcohol. “Kids who have drug problems and are put on probation or are in an outpatient treatment program gravitate toward this stuff because it doesn´t show up in all urine screens,” he said.

“This stuff is bad news,” he added.


Attention-Seeking Children Learn Better Later On

Parents, before you dismiss those constant “look at me” demands from your child, you may want to know that children who crave attention from you are more likely to learn and collaborate when they are older.
A study published in the journal Child Development is the first to show that toddlers´ expectations of how their parent will respond to their needs and bids for attention relate to the acquisition of social rules and norms later in childhood.
Lead author of the study, Marie-Pierre Gosselin, a PhD candidate in the Department of Psychology at Concordia University, explains: “Toddlers whose parents have consistently responded positively to their attention-seeking expect interactions to be fulfilling. As a result, they're eager to collaborate with their parents' attempts to socialize them.”
By observing the quality of toddlers' attention-seeking, Gosselin and co-author David R. Forman, currently at the State University of New York at Geneseo, were able to quantify toddlers' expectations. Scientists and caregivers have long theorized that toddlers have expectations of their parents' behavior; however, no one had provided a reliable measure of those expectations.
In the study, parents and children were put in the same room and the parent was asked to fill out a long survey with questions that required attention and focus. This usually provoked attention-seeking behaviors in the child.
Some toddlers pointed at and shared objects with their parent, laughed and smiled while talking to the parent, and used phrases like, “excuse me mommy.” This constituted high-quality behavior in the researchers´ eyes.
Low-quality attention-seeking behavior was shown by toddlers who cried, screamed, or even took the parent´s pen and threw it across the room.
Gosselin says that they expected to find that parents who had been attentive, sensitive and responsive to their child in a variety of contexts would have children who showed more positive, high-quality attention-seeking behaviors than children of less responsive parents because these behaviors reflected the child´s expectations of a parent´s response.
For the second part of the study, the child had to watch his or her parent perform a series of actions (such as, how to retrieve a ball using three specific movements) and then try to imitate them.
Gosselin found that toddlers who showed positive attention-seeking behaviors collaborated more with the parent in this task than those who showed more negative attention-seeking behaviors when the parent was busy.
The results of the study show that it is important to encourage positive or high-quality attention-seeking in toddlers because it predicts their motivation to collaborate and participate in skill-building activities.
“For parents it´s important to know that it´s not the amount of attention seeking but really the quality of attention seeking that their toddler displays that matters for their development,” says Gosselin.
Gosselin is now in the process of analyzing data on what happens when the parent is busy on the phone. She says that with the spread of cell phones it is important to see what kinds of attention-seeking behaviors children resort to in this situation, how parents respond, and what the implications are for their development. She also plans to look into how toddlers seek attention from teachers and day-care workers.

Helium Shortage Leaves Scientists In No Mood To Celebrate

Michael Harper for RedOrbit.com

Helium is a remarkable gas with many roles. During the week it can be used in the laboratory, making sure that important pieces of equipment like telescopes and MRI machines run cool. On the weekends, helium likes to party, holding your balloons aloft and setting the mood. This kind of “burning the candle at both ends” is catching up with the gas, however, and may ultimately see the end of helium on this planet.

According to some, the world may run completely out of helium gas within 30 years. Such an outage could have major implications on space travel and exploration, scientific and nuclear research, and even medical advances and early detection of diseases.

Making the situation all the more frustrating is the way we are depleting this resource: selling the gas at unbelievably low prices for party balloons and other uses. The writing on the wall is clear: the world is running out of the precious gas at an alarming rate, and scientists worry that if current conditions continue, we may have to travel, quite literally, to the ends of the Earth to find more.

Helium is a natural byproduct of petrochemical extraction and is therefore a non-renewable resource. The gas is released during natural gas and oil drilling, and most of it is found in the mineral-rich south and southwest. If the gas is not captured, it is released into the air, making it impossible to recover.

From arc welders to MRI machines, helium is used to make machines run cooler, detect leaks, and pressurize tanks.

Sporting the lowest boiling point of any element (-452 degrees Fahrenheit), helium is used to cool infrared detectors, nuclear reactors, and MRI equipment. Since helium is used in so many ways by so many fields, it has been predicted that the Earth may run completely dry of the gas by the end of the 21st century.

In the 1920s, between the two World Wars, the United States decided helium could be incredibly beneficial to the war effort. Seeing the potential need for air power in future wars, the United States government decided to stockpile the gas in very large quantities. Fast forward to 1996, and the United States was left with all of this helium gas stuck in bottles and pipes within a 250-mile radius of Amarillo, Texas, now the helium capital of the world. The US government passed the Helium Privatization Act in 1996 to sell off the helium stockpiles at a price high enough to more than recover its initial investment in the gas. The downside, however, is that this price does not reflect market value, meaning we can buy helium for much less than it is worth.

The problem now, according to top scientists, is we´ve become accustomed to buying this precious resource for much less than a premium.

Knowing how important this gas is to science and future research makes Professor Robert Richardson of Cornell University, New York, quite unhappy.

In a report by the US National Research Council, an arm of the US National Academy of Sciences, Richardson had this to say: “In 1996, the US Congress decided to sell off the strategic reserve and the consequence was that the market was swelled with cheap helium because its price was not determined by the market. The motivation was to sell it all by 2015,” according to a report by The Guardian‘s Robin McKie.

“The basic problem is that helium is too cheap. The Earth is 4.7 billion years old and it has taken that long to accumulate our helium reserves, which we will dissipate in about 100 years. One generation does not have the right to determine availability for ever,” Richardson added.

In fact, Professor Richardson estimates that the gas inside a single party balloon is worth far more than you might think: as much as $100.

In order to make helium users more aware and subsequently more careful about how they use the gas, Richardson also suggests marking the price of helium up by 20-50%. Richardson hopes that such an increase would encourage users to find ways to recycle the gas.

NASA, for instance, uses up to 75 million cubic feet of helium annually, yet makes no attempt to recycle or recapture the helium lost as it pressurizes its rocket tanks.

Perhaps what makes such a shortage so painful is the fact that helium is the second most abundant element in our universe (hydrogen is the first). Even the solar wind carries helium, and yet, because of our atmosphere, we cannot tap into it directly.

We can, however, pull helium from lunar soil, reports McKie.

Some researchers estimate that a new gold rush era could emerge, breeding a new kind of futuristic prospector digging in lunar soil for helium-3, a gaseous sort of gold. Helium-3 is one of the two stable isotopes of helium and is very rare on the Earth's surface. It is plentiful in lunar soil, however, and scientists believe regions of the moon with higher exposure to the solar wind could be a veritable gold mine for helium-3.

Rock samples brought back by the Apollo missions and mineralogical maps produced by the Clementine spacecraft show that these areas of the moon are enriched in helium-3 and high in titanium dioxide.

Armed with this evidence and research, Drs. Jeffrey R. Johnson of the U.S. Geological Survey in Flagstaff, Arizona; Timothy S. Swindle of the University of Arizona´s Lunar and Planetary Laboratory in Tucson; and Paul G. Lucey of the University of Hawaii´s Institute of Geophysics and Planetology in Honolulu have developed a helium-3 map of the Moon.

Armed with this gas, researchers could begin work on helium-3 fusion, which is already seen as a viable, green and renewable energy resource. As such a light and powerful fuel, it could be used in any number of ways. For example, NASA could use the gas to propel its rockets to space with more power than it currently harnesses. Marshall Savage, an amateur futurist and author of “The Millennial Project: Colonizing the Galaxy in Eight Easy Steps,” says rockets “...could get to Mars in a weekend, instead of seven or eight months.”

Swindle and team’s mapping of helium-3 deposits on the moon’s surface is just the beginning of helium prospecting. In the end, Swindle suggests that we’ll be using helium-3 to transport ourselves as far as Neptune and Uranus to find even larger deposits of this precious gas. For now, scientists may have to put some of their tests and research on hold, as our stock of helium is just as available and affordable for a children’s birthday party as it is for scientists trying to unlock the mysteries of the universe.

Researchers Point To Specific Insecticide As Bee Killer

Lee Rannals for RedOrbit.com
A growing concern over the dwindling honeybee population has scientists working to try and pin down the culprits behind the decline in numbers, and new research suggests that an insecticide may be the cause.
Researchers wrote in the journal Environmental Science & Technology that neonicotinoid insecticides have played a key role in killing off the pollinators.
The University of Padua team said widespread deaths of honeybees have been reported since the neonicotinoid insecticides were first introduced in the late 1990s.
They suspect the population decrease may be due to particles of the insecticide made airborne by the drilling machines used for planting, which contaminates the air, paralyzing the nerves of honeybees.
The Italian researchers found during their study that the honeybees that flew through the emission cloud of the seeding machines used to plant corn were dying.
They tested different types of insecticide coatings and seeding methods in an attempt to make the pneumatic drilling method safer.
The team found that all of the variations in seed coatings using neonicotinoid insecticides killed honeybees that flew through the emission cloud.
Although this study is new, the results are nothing that scientists and beekeepers did not already speculate or know.
Eric Mussen, a bee expert at the University of California, Davis, told RedOrbit in an email that the insecticide in the study is just one of many man-made factors contributing to honeybee deaths.
“The neonicotinoids are simply one of a number of types of insecticides that can outright kill or, at sublethal doses, negatively impact honey bees and other bees, as well,” Mussen, Ph.D, an extension apiculturist at UC Davis who was not a part of the research, told RedOrbit.
He said exotic mites and diseases have presented greater challenges to the honeybee population than exposure to pesticides.
“However, as pesticides continue to be formulated to be more effective target pest eliminators, they tend to impact non-targets more, as well,” Mussen wrote.
According to Mussen, things may never return “to the good old days,” because so much of the bee habitat has been overtaken by airports, shopping centers, residential areas and other developments.
“Even undisturbed areas, that used to produce large expanses of blooms, inexplicably tend to have sparse numbers of plants on them, now,” he said.
He said our air and soil contain pollutants from decades of exposure, so even if everyone was to stop all new pollution, it would not clean things up immediately.
“We have a lot of research ahead of us to determine what synergisms are occurring in our beehives,” Mussen said. “Once we have that knowledge, then the commercial crop producers will do what they can to help protect our bees.”
In a paper titled “The Plight of the Bees,” published last year in the same journal as the new study, researchers wrote that individuals do not have to wait for research and policy to make their own difference.
They said individuals can modify their landscapes around their home to make them healthier for bees.
“Promoting the health of bee pollinators can begin as an individual or local endeavor, but collectively has the far-reaching potential to beautify and benefit our environment in vital and tangible ways,” the authors of The Plight of the Bees paper wrote in the journal.

Native American Tribe Granted Permission To Hunt Bald Eagles

The US government has for the first time granted a Native American tribe in Wyoming permission to kill two bald eagles for a religious ceremony, the tribe said, opening the door for future negotiations in which native tribes could claim more of the nationally honored birds for their long-suppressed religious practices.
The US Fish and Wildlife Service (FWS) granted the Northern Arapaho Tribe a permit on March 9 allowing it to either kill or capture and release two bald eagles this year. The permit application was first filed by the tribe in 2008, and after years of review, the FWS decided to allow the hunt to take place.
“They did make a case for why the take of a bird from the wild was necessary,” Matt Hogan, Denver regional director for the FWS, told CNN´s Eric Fiegel.
The tribe filed a lawsuit in federal court last year challenging the denial of the application by the government, saying it “unreasonably burdens the religious rights of tribal members,” according to court documents. Despite the FWS´s permit approval that case is still pending.
Hogan said the reasoning behind the approval had nothing to do with the lawsuit. He said it took some time to make sure all the criteria were met and that the permit was in accordance with the Bald and Golden Eagle Protection Act, which allows bald eagles to be used in Native American religious ceremonies.
While the religious sincerity of the Northern Arapaho tribe isn´t being called into question, there is concern among animal advocates, who believe there are other ways to honor spiritual traditions without having to kill the revered creatures. They say the tribe could raise captive birds, or accept eagle feathers or carcasses already available from a federal repository that collects birds killed by accidents or other causes.
But tribal leaders said that program was quite inadequate. Spiritual leader Nelson White said tribal members have in the past received badly decomposed eagles from the national repository. And one tribal member waited five years to receive an eagle from the program, only to open the box and find a goose.
Wayne Pacelle, president and CEO of the Humane Society of the United States, the country´s largest animal protection agency, said the FWS´s decision was alarming. “There is something unsettling about allowing the authorized killing of the bald eagle,” Pacelle told Reuters reporter Laura Zuckerman.
Tribal leaders said the permit, which can be renewed yearly, is a “start” but two eagles are not enough to meet the needs of 9,600 Northern Arapahos living in west-central Wyoming. “After further negotiations are pursued, we may be able to obtain even more eagles down the road,” William C’Hair, the tribe’s language and cultural commissioner, told Zuckerman.
The Bald Eagle was nearly extinct before the government banned the pesticide DDT in the early 70s and later adopted federal protections for the raptor. Legislation in the late 70s was instrumental in helping the bird bounce back to healthy numbers, with breeding pairs soaring from 400 in 1963 to more than 9,500 today.
The eagles were removed from the federal list of threatened and endangered species in 2007, but other federal laws still protect them from being hunted and killed. However, the FWS determined it was allowable for native people to claim bald eagles for their religious ceremonies as long as doing so did not endanger the preservation of eagle populations.
Diane Katzenberger, a spokeswoman for the Fish and Wildlife Service, told Zuckerman that the agency will use the criteria from this case and the Bald and Golden Eagle Protection Act when evaluating similar requests from other tribes. “However, at this time we do not have other pending permit applications,” she noted.
The Northern Arapaho say the eagle represents a powerful figure in its tribal lore and in their spiritual practices, many of which were historically outlawed by the Federal Bureau of Indian Affairs accompanied by a program of involuntary acculturation of native peoples.
Robert Holden, deputy director of the National Congress of American Indians, told CNN’s Fiegel that the eagle “flies higher than any other creature. It sees many things. It’s closer to the Creator.” He said he was disturbed by suggestions that giving tribes permission to claim eagles would lead to a mass killing of the creatures.
“How stupid can that be?” he said. “It´s a religion. It´s what we do. We´re more concerned about the eagle population than any culture in this Western Hemisphere. Why would we want to kill all the eagles?”
Hogan said the issuance of the permit will have little impact on the bird. Taking two eagles from the wild “will not in any way jeopardize the status of the eagle population, either in the state of Wyoming or nationwide,” he said.
The Northern Arapaho declined to say what they will do with the eagles once they kill them. But Harvey Spoonhunter, a tribal elder and former chairman of the Northern Arapaho Business Council, said the eagle has been with the tribe since the beginning of time, and that “we respectfully utilize the eagle in our ceremonies ... We get to utilize the eagle, which we consider a messenger to the Creator.”
Only a few tribes still practice ceremonies that require them to kill eagles, Suzan Shown Harjo, president of Washington, DC-based Indian rights group Morning Star Institute, told Ben Neary of the Associated Press (AP). From the 1880s through the 1930s, the government enforced so-called “Civilization Regulations” that criminalized many traditional ceremonies, including the Northern Arapaho´s Sun Dance.
The permit is good until February 2013, and Holden said he knows of no other applications being filed. The permit requires that tribe members must notify the FWS within 24 hours once a bald eagle is captured or killed.

‘Off Topic’ Thinking Linked To High Working Memory Capacity

Having a mind that wanders and drifts off into thoughts unrelated to the task at hand might not be such a bad thing after all, according to a new study published online last Wednesday in the journal Psychological Science.

In fact, according to PsychCentral Senior News Editor Rick Nauert, researchers at the University of Wisconsin-Madison and the Max Planck Institute for Human Cognitive and Brain Science have discovered that this phenomenon is actually associated with working memory capacity, a trait that has long been linked to reading comprehension, IQ score, and other generally accepted measures of mental aptitude.

“According to researchers, our minds are wandering half the time, drifting off to thoughts unrelated to what we´re doing,” Nauert said. “In the new study, researchers believe a wandering mind is linked to working memory–a mental workspace that allows you to juggle multiple thoughts simultaneously. Working memory allows an individual to multi-task and retain information while performing other activities.”

As part of their study, Daniel Levinson and Richard Davidson of the University of Wisconsin-Madison and Jonathan Smallwood of the Max Planck Institute for Human Cognitive and Brain Science asked volunteers to perform one of two basic tasks: pushing a button each time a specific letter appeared on a screen, or tapping in time with their own breathing. The researchers then measured how often the participants lost focus.

“Throughout the tasks, the researchers checked in periodically with the participants to ask if their minds were on task or wandering,” the US-based university said in a statement Thursday. “At the end, they measured each participant’s working memory capacity, scored by their ability to remember a series of letters given to them interspersed with easy math questions.”

Regardless of which task a person had been given to complete, the researchers discovered that individuals who had higher working memory capacity reported that their mind wandered more often during these simple tasks, though their performance on their respective tests did not suffer as a result.

According to the press release, this is the first time a study has found a positive correlation between a person’s working memory and mind wandering, and the researchers believe the discovery “suggests that working memory may actually enable off-topic thoughts.”

“What this study seems to suggest is that, when circumstances for the task aren´t very difficult, people who have additional working memory resources deploy them to think about things other than what they´re doing,” Dr Jonathan Smallwood of the Leipzig, Germany-based Institute, told The Telegraph on Friday.

“Our results suggest the sorts of planning that people do quite often in daily life–when they are on the bus, when they are cycling to work, when they are in the shower–are probably supported by working memory,” he added. “Their brains are trying to allocate resources to the most pressing problems.”

As ISPs Prepare to Police Web Piracy, Questions of Efficacy and Motive Remain

Jedidiah Becker for RedOrbit.com

Depending on whether or not you use peer-to-peer networks for file sharing, you may or may not remember a brief battery of blogosphere fireworks last summer after a handful of major internet service providers (ISPs) announced that they had reached an agreement with the Motion Picture Association of America (MPAA) and the Recording Industry Association of America (RIAA).

According to that deal – euphemistically dubbed the “Memorandum of Understanding” (MU) – the US’s largest ISPs, including Comcast, Cablevision, Verizon and Time Warner Cable, agreed to help the entertainment industry crack down on Web users who downloaded copyrighted material via peer-to-peer networks, the most common of which for the past several years has been BitTorrent.

Thus, already the gatekeepers to the World Wide Web, ISPs would now also serve as the Praetorian Guard to that motley conglomerate of record labels and film factories on which we’ll bestow the simple sobriquet “Big Entertainment.”

If you don’t remember any of this happening, don’t worry – you’re not the only one. For obvious reasons, ISPs were eager to elude the initial flare up of media attention that surrounded the deal, and within a month the issue had all but slipped down the memory hole à la collective public amnesia.

Moreover, while the ISPs initially announced that they would try to have formal piracy-policing mechanisms in place by the end of the year, 2011 came and went, and the cyber scaramouches continued to upload and download pirated media, thumbing their noses at the likes of Disney and Columbia Records.

Neither Big Entertainment nor ISPs have forgotten the Memorandum of Understanding, however, and this week a group of American media publishers met in New York to discuss a timeline for implementing its measures.

According to tech news site CNET, which broke the news of the deal last summer, RIAA CEO Cary Sherman says that the majority of participating ISPs should be ready to launch the program by July 1 of this year. Sherman also explained that the logistics of implementing the program required a lot of technical preparation on the part of the ISPs; hence the nearly half-year delay.

“Each ISP has to develop their infrastructure for automating the system,” Sherman said. This, he explained, is necessary “for establishing the database so they can keep track of repeat infringers, so they know that this is the first notice or the third notice. Every ISP has to do it differently depending on the architecture of its particular network. Some are nearing completion and others are a little further from completion.”

MU’S PROTOCOL FOR PIRACY-POLICING

“Repeat infringers?” “First notice or third notice?” So what exactly does the ISP-entertainment cabal have in store for hardened, repeat transgressors of copyright law?

The pact will allow the owners of copyrighted media – say, Sony, 20th Century Fox or Universal – to eavesdrop on digital exchanges taking place over peer-to-peer networks. When the content owners detect the illicit sharing of copyrighted media, they then turn over the user’s IP address to the internet service providers, who are then expected to initiate a “graduated response” procedure.

(Note: Don’t forget, if a subscriber has a router used by numerous devices, the ISP will only have the IP address of the router associated with the specific subscriber´s account, not the various devices that use it.)

This “graduated response” consists of a series of escalating notices aimed at deterring the subscriber´s piratic activity.

The first notices, called the ‘Initial Educational Steps,’ will simply inform the offending Internet subscribers that they are in violation of both their ISP´s terms of service agreement as well as federal copyright law. The customer may be sent more than one such educational notice and will also be informed that failure to desist will result in more severe measures.

The educational step is followed by the so-called ‘Acknowledgement Step,’ whereby users who continue to exchange copyrighted media are sent a letter requesting that they formally acknowledge that they are infringing on copyright laws as well as pledge to end the illicit activity.

If both the educational and acknowledgement notices fail to deter the offending account holder’s activity, ISPs are then supposed to send out a Mitigation Measure Copyright Alert, informing the subscriber that his or her account is now subject to Mitigation Measures (i.e. punishment). As in the previous step, this also requires the subscriber’s formal acknowledgement.

Like cyber-Torquemadas, the participating ISPs then have an arsenal of deterrents at their disposal, each intended to inflict various degrees of discomfort on their contumacious customers and thus inspire their contrition (or at least extreme annoyance). For instance, the ISP might repeatedly reroute the offending subscriber to educational pages that inform him about the evils of IP-apostasy.

But the ISPs also have the option of adopting more severe measures. Alternatively, they can opt to strangulate the user’s connection, bringing their browsing speed to a crawl and plunging them back into the Internet Stone Ages – you remember, when the simple task of checking emails or downloading a single picture provided you with enough idle time to grab a cup of coffee and thumb through Columbia’s music-club catalogue.

According to the 36-page Memorandum of Understanding, the ISP can even opt to simply suspend the subscriber’s account. Not surprisingly, however, none of the participating ISPs have agreed to implement this measure.

THE WHO’S AND WHY’S

With the program set for a July 1 roll out, there remain a number of questions surrounding the collaborative Web-policing program.

The first regards the potential efficacy of the program. According to the MU document, the program is intended to erect a sort of paralegal institution for both educating the public on copyright laws and dissuading it from transgressing them.

It’s reasonable to assume, however, that upwards of 95 percent of all media-pirating Web users are fully aware that they’re violating existing laws – and 95 percent is probably a conservative estimate.

This admittedly leaves only the ‘dissuading’ element of the program as potentially relevant, and that element relies on service providers threatening to attenuate their customers’ service and, more importantly, actually following through on those threats.

But lest we forget, in the eyes of ISPs, offending users are first and foremost ‘customers’ rather than mere ‘copyright infringers.’ Moreover, the MU agreement leaves those ISPs a great deal of latitude in how they deal with delinquent subscribers.

At the end of the day, what reason does an ISP – say, Comcast – have for badgering and threatening its customers and eventually driving them into the arms of its competitor? No doubt Verizon, for example, having perhaps adopted a more lenient approach to MU enforcement, would be more than happy to welcome the prodigal pirate-subscriber into its fold.

Thus, will ISPs really be willing to sacrifice customers (read: revenue) merely to stay in the good graces of Big Entertainment?

This leads inquiring minds to a second question; to wit, what possible dangling carrot could have lured ISPs into a voluntary deal that – at least to this author’s limited knowledge – does nothing but harm their business?

From the side of Big Entertainment, of course, there’s no question of motive. The ubiquity and increasing sophistication of the Internet has their archaic and moribund business model creaking and tottering under its own weight. Like a man in spasmodic death throes, they’re flailing about and grasping desperately for anything that might help them survive another day – and mulct another dollar from media-loving consumers. But this issue has been written about ad nauseam in recent years and we’ll waste no more words on it here.

Simultaneously desperate and flushed from a series of minor legal victories, the entertainment barons recently saw their efforts to install a draconian and rights-abusing legal regime (i.e. SOPA-PIPA) collapse amid cacophonous chants of “Internet freedom.”

Thus exposed to a public that finds their unscrupulous attempts to preserve their ailing cash cow increasingly repugnant, Big Entertainment has nothing to lose from the Memorandum of Understanding, however ineffective it may turn out to be.

But why the ISPs?

I’ll leave that question open for readers to deliberate, and in lieu of an answer I’ll point to an observation made by CNET which initially broke the story of the agreement last summer:

Negotiations over the main points of the Memorandum of Understanding dragged on for years between the ISPs and organizations representing the entertainment industry, and it wasn’t until members of the president’s administration stepped in to broker the deal that the reluctant ISPs began to acquiesce.

Curious.

Research Confirms Menopause Causes Memory Problems

According to research published in the journal Menopause, the journal of the North American Menopause Society, women really do suffer memory problems when going through menopause.
Millions of women going through menopause have complained about forgetfulness or described having “brain fog” in their late 40s and 50s.
In a new study, researchers gave women various cognitive tests to validate their experiences and provide some clues to what is happening in the brain as menopause hits.
“The most important thing to realize is that there really are some cognitive changes that occur during this phase in a woman´s life,” Dr. Miriam Weber, a neuropsychologist at the University of Rochester Medical Center who led the study, said in a recent statement.
“If a woman approaching menopause feels she is having memory problems, no one should brush it off or attribute it to a jam-packed schedule. She can find comfort in knowing that there are new research findings that support her experience. She can view her experience as normal.”
The researchers studied 75 women from age 40 to 60 who were approaching or beginning menopause.  The women underwent various cognitive tests that looked at several skills, including their abilities to learn and retain new information.
They were asked about menopause symptoms related to depression, anxiety, hot flashes, and sleep difficulties.
The team also checked their blood levels of the hormones estradiol and follicle-stimulating hormone.
The researchers found that the women’s complaints were linked to some types of memory deficits, but not others.
According to the study, those women who had memory complaints were more likely to do poorly in tests designed to measure “working memory”, which is the ability to take in new information and manipulate it in their heads.
Tasks in working memory include things like being able to calculate the amount of a tip, adding up a series of numbers, or adjusting an itinerary on the fly.
The team also found that women’s reports of memory difficulties were associated with a lessened ability to stay focused on a challenging task, like doing taxes or driving on a long road trip.
The women in the study were more highly educated and on average of higher intelligence than the general population, according to Weber.
The study said that anywhere from one-third to two-thirds of women during menopause report forgetfulness and other difficulties.
“If you speak with middle-aged women, many will say, yes, we´ve known this. We´ve experienced this,” Weber said in a press release. “But it hasn´t been investigated thoroughly in the scientific literature.
“Science is finally catching up to the reality that women don´t suddenly go from their reproductive prime to becoming infertile. There is this whole transition period that lasts years. It´s more complicated than people have realized.”

New Research Suggests That White Rice Increases Risk Of Type 2 Diabetes

According to a new study published in the British Medical Journal, regularly eating white rice significantly increases the risk of Type 2 diabetes.
The authors, from the Harvard School of Public Health, looked for evidence of an association between eating white rice and Type 2 diabetes in previous studies and research. The new study focuses on finding a direct link between the risk and the amount of rice eaten. It also seeks to determine whether the risk of Type 2 diabetes is greater in Asian countries, where diets include far more white rice than those of Westerners.
“What we’ve found is white rice is likely to increase the risk of Type 2 diabetes, especially at high consumption levels such as in Asian populations,” Qi Sun of the Harvard School of Public Health told AFP.
“But at the same time people should pay close attention to the other things they eat.
“It’s very important to address not just a single food but the whole pattern of consumption.”
Sun´s team first discovered the link in their analysis of four previous studies conducted in China, Japan, Australia, and the United States. These studies followed 350,000 people for anywhere from 4 to 22 years. Out of these groups of people, more than 13,000 (less than 5%) developed Type 2 diabetes.
The studies carried out in the Asian countries China and Japan found that those who ate white rice regularly were 55% more likely to develop the disease. Those who lived in Australia and the United States, where white rice isn’t consumed as often as in Asian countries, were 12% more likely to develop the disease.
The previous studies also found that the Asian participants ate three to four servings of white rice a day, compared to the Westerners’ one to two servings a week.
Sun´s team found that, on average, the risk of type 2 diabetes is increased by 10% with each increased serving of white rice.
White rice is the most common form of rice eaten worldwide. It gets its color from being machine-processed, which removes the hull, leaving behind a white grain which consists of mostly starch.
Brown rice, on the other hand, retains its natural fiber and vitamin content, as well as its brown color. Brown rice also has a lower glycemic index, a measure of how quickly a food raises blood sugar, than starchy white rice.
Sun´s team suggests that the high risk of Type 2 diabetes from high white rice consumption may be caused by the lack of the nutrients found in brown rice.
“I don’t think I can put forward a 100-percent confirmed case, given that this is a meta-analysis of four original studies,” Sun said.
“But I see a consistency across these studies, and there is biological plausibility that supports the association between white rice consumption and diabetes.”
In light of these studies and findings, the authors of this research recommend eating more whole grains instead of refined carbohydrates such as white rice. This sort of change in diet could slow down the global trend of diabetes.
According to the US Centers for Disease Control and Prevention (CDC), diabetes affects nearly 350 million adults worldwide.

Researchers Study Negative Effects Of Cell Phone Use During Pregnancy

Yale School of Medicine researchers have concluded that exposure to cell phones during pregnancy affects the brain development of the offspring and may cause hyperactivity.

The researchers are drawing their conclusions based on studies conducted on mice.

“This is the first experimental evidence that fetal exposure to radio frequency radiation from cellular telephones does in fact affect adult behavior,” said senior author Hugh S. Taylor, M.D., professor and chief of the Division of Reproductive Endocrinology and Infertility in the Department of Obstetrics, Gynecology & Reproductive Sciences.

To conduct the study, the researchers exposed two groups of pregnant mice to different levels of radiation. The first group of pregnant mice was exposed to radiation from a muted and silenced cell phone placed on top of their cages. A call was placed and left active for the duration of the test, which lasted 17 days. The second group of mice acted as a control and were left in the same conditions but with the cell phone deactivated.

According to the study, the mice were exposed to “800-1900 MHz frequency radiation.” What is less clear, however, is whether the mice were exposed to the entire spectrum between 800 and 1900 MHz or whether a specific part of this range was active during the 17-day test.

After the mice that had been exposed to radiation as fetuses had grown into adults, the researchers measured their brain electrical activity as well as conducted several psychological and behavioral tests. These adult mice showed signs of hyperactivity and reduced memory capacity. Taylor attributed the behavioral changes to an effect on the development of the prefrontal cortex region of the brain during pregnancy.

Attention deficit hyperactivity disorder, or ADHD, is a developmental disorder characterized by the same kind of hyperactivity and short attention span. The neuropathology of ADHD is localized in the prefrontal cortex region of the brain.

“We have shown that behavioral problems in mice that resemble ADHD are caused by cell phone exposure in the womb,” said Taylor. “The rise in behavioral disorders in human children may be in part due to fetal cellular telephone irradiation exposure.”

Taylor isn´t suggesting that all pregnant women put down their cell phones. Further research still needs to be conducted in order to better understand the mechanisms behind these results. Once these mechanisms are better understood, then exposure limits can be established for women who are pregnant. Taylor still suggests exercising care and limiting exposure of the fetus to cell phone radiation, however.

Tamir Aldad, the first author of this study, added that rodent pregnancies are much shorter than human pregnancies, lasting only 19 days. In addition, the rodent offspring are born with a much-less-developed brain than human babies. Therefore, further research needs to be conducted in order to determine if the same risks of exposure found in the tests on rodents can translate to humans.

“Cell phones were used in this study to mimic potential human exposure but future research will instead use standard electromagnetic field generators to more precisely define the level of exposure,” said Aldad.

The results of this study were published in the March 15th issue of Scientific Reports, a Nature publication.

Best Educational Apps For Students And Teachers

Derek Walter for RedOrbit.com

As the newest iPad launches today, odds are that it will become an even more popular tool for use in the classroom.

A wide range of powerful apps are distinguishing themselves in enhancing classroom instruction, managing student behavior, and organizing the unique chaos of planning for each school day.

Educreations (Free): This whiteboard app is one of the best for putting a whiteboard in your hand. The interface is much like the interactive whiteboard software found on a Smartboard or Promethean Board. Those who prefer a stylus can pair their iPad with one of the many third-party styluses available for a better writing experience.

Those who are tired of having their students ask, “what was that about again?” can record the lesson while teaching for later playback. These lessons can also be added to a gallery inside the app that other users have uploaded.

The app includes a gallery of pen colors and the ability to upload images saved on one’s iPad. Users who are really particular about their color choices can also customize the pen colors.

Class Dojo: While it does not have a native app for iPad, Class Dojo has become a strong alternative to traditional methods for improving student behavior.

What makes Class Dojo different from the usual, dry character-building program is the familiarity kids will feel with it. It uses icons, badges and pop-up notifications: all tech tools that school-age children know through their own smartphones or iPod touches.

Students are assigned a “dojo monster,” complete with a customizable icon. When paired with an interactive whiteboard or projector, teachers can initiate popup notifications to reward good behavior or chastise naughtiness.  The running tally remains on the board to encourage students to achieve higher levels of proper behavior.

Smartphone and tablet owners can install a web app onto their device from the Class Dojo web site.  By doing this, teachers can add or deduct points from the little monsters while away from a computer.

Class Dojo is the creation of entrepreneurs Sam Chaudhary and Liam Don. Chaudhary taught social science in the UK before joining with Don to launch Class Dojo.

The peculiar name Class Dojo comes from a brainstorming session when the team stumbled across the Japanese term “dojo,” which means “place of the way.” The concept was that the classroom could be a place of instruction for proper behavior and good character.

“We have 40 years of education research that shows behavior is a major indicator of academics, career success, or social ills like substance abuse and criminal prosecutions,” he said. “Self-control is the biggest predictor of lifetime success.”

Even better for teachers, Class Dojo accumulates the points into a document that is easy to print or e-mail to parents. Teachers can use the points for classroom rewards or goal setting. Those who sign up for a free account can also vote on future features they would like to see added. The developers said they are going to ramp up feature releases this year.

Star Walk ($4.99): Whether or not you are an astronomy teacher, Star Walk is an immersive app for allowing students to explore the universe. Star Walk fills the iPad screen with the location of the stars in real time.

For the best experience, take it outside at night and hold it over your head. Identify the stars and constellations overhead while moving it around to explore the sky. Or to explore the galaxies, swipe on the screen with your finger.

The app also includes a graphic of the moon and other planets with a variety of data. There is also some ominous music that plays in the background, though this can be muted if you wish.

Evernote (Free): Evernote has grown from a simple note-taking app into a powerful platform for educators. Users can write lesson plans, save files, and upload images and articles from across the web with its assorted range of clipping tools.

Evernote´s most compelling feature is its ability to sync across one´s iPad and other devices. There is an Evernote app for iPhone, Android, Mac, PC, and web browsers.

While Evernote is free, frequent users may consider opting for the $45-per-year premium account. This offers an unlimited amount of data upload, while the free accounts cap users at a specified amount.

Those that dig Evernote should also consider Skitch, an Evernote app that, like Educreations, allows users to ink up a page.

Math Board ($4.99): This app has the potential to be the math teacher’s best friend. It has a clever interface mixed with strong tools for enhancing math lessons. It is primarily geared toward elementary students who may be learning or strengthening skills in addition, subtraction, multiplication, or division.

It supports output to a projector or interactive whiteboard with a VGA connection. For the truly geeky, output it wirelessly to an Apple TV to roam the classroom while continuing to use the app.

Users can also download a free eBook manual written by the developers from the iBookstore.

FCC’s Chief Legal Advisor Steps Down

Federal Communications Commission (FCC) senior counsel and chief legal adviser Amy Levine is leaving the agency, the FCC announced on Wednesday.

Levine had been promoted to legal adviser to FCC Chairman Julius Genachowski in February 2011.

No specific reason was given for her departure beyond an FCC statement saying she is planning to “relocate from the Washington area.”

Genachowski praised Levine’s work on the spectrum incentive auction deal in Congress.

“Amy brought to my office a rare combination of legislative expertise, policy know-how, and consensus-building. Among Amy´s accomplishments was her tireless dedication to working with Congress to see the historic incentive auctions legislation through its recent signing into law. It goes without saying that we´ll miss Amy deeply,” he said.

In addition to spectrum policy, Levine was involved in wireless and homeland security issues, and played a major role in the FCC’s rejection last year of AT&T’s bid to acquire T-Mobile on the grounds it would thwart competition.

Charles Mathias in the FCC´s Wireless Telecommunications Bureau will assume Levine´s role on an interim basis as acting senior counsel and legal adviser.

Separately, Genachowski on Thursday praised the creation of the FCC’s new ‘Leading Education by Advancing Digital’ (LEAD) Commission, which will be tasked with creating a blueprint for advancing the digital transition of education.

The FCC and Obama administration have been pushing broadband as an educational imperative through subsidies and incentives designed to build out high-speed networks to schools and facilitate a move from traditional textbooks to digital versions.

The Commission will solicit input from teachers, parents, school officials, technology leaders and others, and will be co-chaired by Columbia University President Lee Bollinger; James Coulter of TPG Capital; former Secretary of Education Margaret Spellings; and James Steyer, CEO of Common Sense Media, where Genachowski is a founding board member.

Genachowski and U.S. Department of Education Secretary Arne Duncan said the new Commission would help students and teachers achieve their full potential.

“I´m pleased these leaders are rising to the challenge Secretary Duncan and I set out to harness technology to help our students reach their full potential.  I´m confident the LEAD Commission´s blueprint will chart a course to ensure that education technology will help prepare students to compete in the 21st century global economy,” Genachowski said.

“It´s no exaggeration to say that technology is the new platform for learning. Technology isn´t an option that schools may or may not choose for their kids. Technological competency is a requirement for entry into the global economy — and the faster we embrace it — the more we maintain and secure our economic leadership in the 21st century,” said Secretary Duncan.

The Commission´s main goals are to develop a fact base of current efforts, key trends, cost implications and obstacles to the adoption of critical existing technologies.  LEAD will also investigate the ways in which technology has been a catalyst for improvement in other sectors, and the implications for teaching and learning.

Finally, the Commission will recommend the types of policies and funding mechanisms that may be needed to ensure that school systems can best utilize technology.

Additional information about the Commission can be found at http://www.leadcommission.org/.

Summer Bugs Appearing Sooner Than Expected

Experts say the mild winter will cause summer bugs to appear earlier than usual.

Rising Sea Levels Could Boost Storm Surges

Sea level rise due to global warming has already doubled the annual risk of coastal flooding of historic proportions across widespread areas of the United States, according to a new report from Climate Central. By 2030, many locations are likely to see storm surges combining with sea level rise to raise waters at least 4 feet above the local high-tide line. Nearly 5 million U.S. residents live in 2.6 million homes on land below this level. More than 6 million people live on land below 5 feet; by 2050, the study projects that widespread areas will experience coastal floods exceeding this higher level.

Titled “Surging Seas,” the report is the first to analyze how sea level rise caused by global warming is compounding the risk from storm surges throughout the coastal contiguous U.S. It is also the first to generate local and national estimates of the land, housing and population in vulnerable low-lying areas, and to associate this information with flood risk timelines. The Surging Seas website includes a searchable, interactive online map that zooms down to neighborhood level, and shows risk zones and statistics for 3,000 coastal towns, cities, counties and states affected up to 10 feet above the high tide line.

In 285 municipalities, more than half the population lives below the 4-foot mark. One hundred and six of these places are in Florida, 65 are in Louisiana, and ten or more are in New York (13), New Jersey (22), Maryland (14), Virginia (10) and North Carolina (22). In 676 towns and cities spread across every coastal state in the lower 48 except Maine and Pennsylvania, more than 10% of the population lives below the 4-foot mark.

Tidal gauge records show that the sea has already risen 8 inches globally during the last century, and projections point to a steep acceleration. “Sea level rise is not some distant problem that we can just let our children deal with. The risks are imminent and serious,” said report lead author Dr. Ben Strauss of Climate Central. “Just a small amount of sea level rise, including what we may well see within the next 20 years, can turn yesterday´s manageable flood into tomorrow´s potential disaster. Global warming is already making coastal floods more common and damaging.”

In addition to the Surging Seas report and website, Climate Central is releasing fact sheets laying out the risks for each coastal state. Staff scientists (Ben Strauss, Claudia Tebaldi, Remik Ziemlinski) have also authored two peer-reviewed studies being published March 15th in the scientific journal Environmental Research Letters, with co-authors at the University of Arizona (Jeremy Weiss, Jonathan Overpeck) and the National Oceanic and Atmospheric Administration (Chris Zervas).  In addition to hosting the map tool, the national report, state fact sheets, and the peer-reviewed papers, the website, SurgingSeas.org, includes downloadable data for all the cities, counties and states studied; embeddable widgets; republishable graphics; and links to dozens of local, state and national planning documents for coping with rising seas.

The website also shows how the threat from climate change-driven sea level rise and storm surge is expected to increase over time at 55 tidal gauges around the U.S. and near most major coastal cities. At the majority of these gauges, floods severe enough that they would once have been called worse-than-once-a-century events have more than doubled in likelihood.

Land, housing and population vulnerability estimates are based on 2010 Census data and on land elevations relative to potential water levels, and do not take into account potential protections.  However, properties behind walls or levees may suffer enhanced damage when defenses are overtopped, or during rainstorms, because the same structures that normally keep waters out can keep floodwaters in once they arrive.

“Escalating floods from sea level rise will affect millions of people, and threaten countless billions of dollars of damage to buildings and infrastructure,” Strauss said. “To preserve our coastal towns, cities and treasures, the nation needs to confront greenhouse gas pollution today, while also preparing to address sea level rise that can no longer be avoided.”

Disproportionate Eyes Help Giant Squids Avoid Predators

Researchers from Swedish and American universities say that they have solved the mystery as to why giant and colossal squid have such enormous eyes, and perhaps unsurprisingly, the oversized ocular orbits are essentially a defense mechanism.

According to Rob Waugh of the Daily Mail, the giant squid can be upwards of 27 feet long and can weigh half a ton, or “as much as five adult men.” Even so, their eyes, which are roughly the size of a basketball, are still “far, far too big for their bodies.” Experts from Lund and Duke Universities set out to discover exactly why that is the case.

“It doesn’t make sense a giant squid and swordfish are similar in size but the squid’s eyes are proportionally much larger, three times the diameter and 27 times the volume,” Duke biologist Sönke Johnsen, one of the researchers involved on the study, said in a press release. “The question is why. Why do giant squid need such large eyes?”

In order to probe that question, Johnsen, lead scientist Dan-Eric Nilsson of Lund University, and colleagues obtained photographs of the eye of a giant squid and examined a colossal squid corpse from New Zealand, LiveScience Senior Writer Stephanie Pappas wrote on Thursday.

They studied the specimen and the pictures first in order to confirm the purported eyeball size of both types of squid, confirming that they could reach diameters of more than 10 inches. With that knowledge, Pappas said that the team then created a mathematical model showing how well the cephalopods can see in the ocean depths.

For the most part, they found that large eyes would not be especially beneficial at depths of 1 kilometer (0.6 miles), where moving objects are detected more often by the bioluminescence they trigger in minute aquatic creatures than by sight alone, BBC News Environmental Correspondent Richard Black said.

“For seeing in dim light, a large eye is better than a small eye, simply because it picks up more light. But for animals that live in the sea or in lakes, the optical properties of water will severely restrict how far away things can be seen,” Nilsson said in a March 15 statement.

“Through complex computations we have found that for animals living in water, it does not pay to make eyes much bigger than an orange. Making eyes larger than that will only marginally improve vision, but eyes are expensive to build and maintain,” he added.

There is one exception, though, according to the BBC: that of very large moving objects, such as the sperm whale, one of the squid’s primary predators. In these cases, larger eyes help creatures detect sources of bioluminescence more easily, essentially helping the squid discover the presence of the whales at distances of approximately 120 meters (394 feet) and giving them a greater chance to “take evasive action and avoid being eaten,” Black added.

“They’re most likely using their huge eyes to spot and escape their predators, sperm whales,” Johnsen, who along with his colleagues submitted the research to the journal Current Biology, said. “It’s the predation by large, toothed whales that has driven the evolution of gigantism in the eyes of these squid.”

Image 2: Two men inspect a nearly intact 9.2 meter giant squid. Credit: Photo: NTNU Museum of Natural History and Archaeology, via Wikimedia Commons

Image 3: It’s no surprise that giant and colossal squid are big, but it’s their eyes that are the real standouts when it comes to size, with diameters measuring two or three times that of any other animal. Now, researchers reporting online on March 15 in Current Biology, a Cell Press publication, have used complex computations to explain those massive peepers. Giant squids’ 10-inch eyes allow them to see very large and hungry sperm whales from a distance in the pitch darkness of their deep-sea home. According to the researchers’ calculations, animals living underwater would have no use for such large eyes if the goal were to see an average object, such as prey smaller than themselves. That’s why even the eyes of large whales aren’t much more than 3.5 inches across. Credit: Nilsson et al.: “A unique advantage for giant eyes in giant squid.” Current Biology

Increased Collaboration Between Nursing Home RN And LPN Staff Could Improve Patient Care

Researchers estimate nearly 800,000 preventable adverse drug events may occur in nursing homes each year. Many of these incidents could be prevented with safety practices such as medication reconciliation, a process in which health care professionals, such as physicians, pharmacists and nurses, review medication regimens to identify and resolve discrepancies when patients transfer between health care settings. In nursing homes, both registered nurses (RNs) and licensed practical nurses (LPNs) often are responsible for this safety practice. A recent study by a University of Missouri gerontological nursing expert found, when observed, these nurses often differed in how they identified discrepancies. Recognizing the distinct differences between RNs and LPNs could lead to fewer medication errors and better patient care.

Amy Vogelsmeier, assistant professor in the MU Sinclair School of Nursing, says because pharmacists and physicians often are unavailable, both RNs and LPNs equally are responsible for practices such as medication reconciliation and other activities to coordinate care once patients enter nursing homes.

Vogelsmeier said RNs often are underutilized in nursing homes, though their clinical education and experience give them a greater sense of the “bigger picture,” which leads to better outcomes.

“Right now in the industry, RNs and LPNs often are used interchangeably but inappropriately,” Vogelsmeier said. “The solution is not to replace LPNs with RNs but to create collaborative arrangements in which they work together to maximize the skill sets of each to provide the best possible care for patients.”

She says assigning RNs and LPNs complementary roles that maximize their unique abilities will improve patient care and satisfaction. Additionally, Vogelsmeier said offering LPNs enhanced training opportunities may help them build the cognitive skills necessary to work in the current nursing home environment.

“Nursing home care is more complex than it was 10 years ago,” Vogelsmeier said. “People used to move into nursing homes and stay there the rest of their lives, but now they´re using nursing homes to transition between hospitals and their homes. Patients in nursing homes are sicker, and their stays are shorter. That demands better nursing staff coordination of care.”

The study, “Medication Reconciliation in Nursing Homes: Thematic Differences Between RN and LPN Staff,” was published in the Journal of Gerontological Nursing and was funded by the John A. Hartford Foundation and the University of Iowa Gerontological Nursing Interventions Research Center. Vogelsmeier´s coauthors include Jill Scott-Cawiezell from the University of Iowa and Ginette Pepper from the University of Utah.

CDC Launches New Graphic Anti-Smoking Campaign

The Centers for Disease Control and Prevention (CDC) announced a new graphic anti-smoking ad campaign today featuring personal descriptions and photographs of people who have suffered effects from smoking.

The campaign, called “Tips from Former Smokers”, will include promotional materials that show up-close, voyeuristic looks at victims of disease.

The new anti-tobacco campaign will last 12 weeks and will feature prime-time television spots in which people describe how their lives were changed by smoking.

The CDC campaign will cost $54 million, roughly the amount the U.S. tobacco industry spends on promotion in an average two-day period.

“We estimate that this campaign will help about 50,000 smokers to quit smoking,” CDC Director Thomas R. Frieden told The New York Times. “And that will translate not only into thousands who will not die from smoking, but it will pay for itself in a few years in reduced health costs.”

The diseases suffered by the 14 people in the ad campaign include lung, head and neck cancer, Buerger’s disease, asthma, heart attack and stroke.

“I think all too often smokers think, ‘I’ll just die a few years early.’ And that’s true. But there’s often a lot of pain and disability that goes with that. The smokers who volunteered to come forward and be in these ads show that reality,” Frieden told the Washington Post.

About 8 million Americans have smoking-related illnesses, and as many as 443,000 Americans die every year from smoking-related causes.

The U.S. surgeon general warned last week that one in four high school seniors is a regular cigarette smoker.  The surgeon general also said that about 80 percent of those who smoke during high school will continue to smoke as adults.

The new campaign comes just two weeks after a federal judge struck down the Obama administration’s plan to put graphic images covering half of the front and back of each cigarette pack to warn smokers of the danger.

Ob-Gyns Can Prevent Negative Health Impacts Of Environmental Chemicals

UCSF-Led Analysis Calls for More Active Role of Reproductive Health Specialists

Ob-gyns are uniquely positioned to play a major role in reducing the effects of toxic chemicals on women and babies, according to an analysis led by University of California, San Francisco (UCSF) researchers.

The team recommends a multipronged approach that includes evaluating patients’ environmental exposures to chemicals and providing education, in addition to broader strategies to influence government policy.

Over the past 70 years, there has been a dramatic increase in the number of natural and synthetic chemicals to which every person is exposed. A recent UCSF study showed that the bodies of virtually all U.S. pregnant women carry multiple chemicals, including some banned since the 1970s and others used in common products such as non-stick cookware, processed foods and personal care products.

At the same time, rapidly accumulating evidence shows that exposure to environmental chemicals at levels common in daily life can adversely impact reproductive and developmental health.

“Exposures to environmental chemicals before and during pregnancy are of particular concern, since we now know that encountering certain chemicals during these developmental periods is linked to a number of health problems,” said senior author Linda C. Giudice, MD, PhD, professor and chair of the UCSF Department of Obstetrics, Gynecology and Reproductive Sciences.

“Obstetricians, gynecologists and other reproductive health providers can play a groundbreaking role by intervening in critical stages of human development to translate the new science into healthier future generations,” she said.

The study is available online in the American Journal of Obstetrics and Gynecology.

Patient History Should Include Exposure to Toxic Chemicals

Taking an exposure history is a key first step, according to the study’s lead author Patrice Sutton, MPH, a research scientist with the UCSF Program on Reproductive Health and the Environment.

“We recommend that clinicians always ask women of childbearing age about their occupational exposures to chemicals known to negatively impact health,” said Sutton.

“Many patients who are pregnant or thinking about becoming pregnant already are interested in their environmental exposures, and at the same time, other women of childbearing age are unaware of the risk of their exposures,” she said.

Clinicians should provide guidance about avoiding exposures at home, in the community and at work; such guidance can, for example, be incorporated into childbirth classes. At UCSF, tips to avoid toxic environmental exposures are distributed through brochures and the hospital’s electronic health record to women and their partners who see an ob-gyn.

The role of reproductive health professionals in preventing exposure to environmental toxicants extends beyond the clinic or office setting, according to the study authors.  They suggest that physicians can work through their professional organizations to bring about healthier environments through policy change on the federal, state or local level, and within their institutions, clinicians can encourage their hospitals to create healthy food service models.

Society-wide actions are essential for reducing toxic exposures to pregnant women, since individuals cannot control their exposure to many toxins, such as through air pollution, according to Sutton.

The human reproductive system is particularly vulnerable to biological changes caused by chemicals in the environment when exposures occur during critical or sensitive periods of development such as in utero, and during infancy, childhood and adolescence. This vulnerability is partly due to extensive changes, such as cellular growth or hormonal shifts. Yet the majority of chemicals used for commercial purposes have entered the marketplace without testing and standardized information about their potential health effects, including among those most vulnerable, according to the team.

“Reproductive health professionals will find research that assists in minimizing health effects from environmental chemicals valuable in managing care for patients,” said co-author Jeanne A. Conry, MD, PhD, president-elect nominee of the American Congress of Obstetricians and Gynecologists and an obstetrician-gynecologist and assistant physician in chief with Kaiser Permanente, Roseville, near Sacramento, Calif.

Pediatricians have long been attuned to their opportunity to counsel patients about preventing harm from hazardous environmental exposures. The American Academy of Pediatrics has had an environmental health committee for over half a century and publishes a clinicians’ handbook for the prevention of childhood diseases linked to environmental exposures.

“In light of what we know about the impact of environmental exposures to chemicals before and during pregnancy, clinicians who see patients during these critical periods can have just as important an impact as pediatricians,” Giudice said. Reproductive health professionals are poised to intervene even earlier in development than pediatricians, she added, so together the two professions can more strongly influence the long-term health of children, adolescents, and adults.

Contributing authors are Tracey J. Woodruff, PhD, MPH, and Joanne Perron, MD, with the UCSF Program on Reproductive Health and the Environment; Naomi Stotland, MD, with the UCSF Department of Obstetrics, Gynecology and Reproductive Sciences; and Mark D. Miller, MD, MPH, with the UCSF Department of Pediatrics.

Financial support for this paper was provided to UCSF by New York Community Trust, the National Institute for Environmental Health Sciences, and the Environmental Protection Agency.

Parents Of Obese Children Encouraged To Begin With Their Own Weight Loss Journey

A study published in the journal Obesity reveals that parents of obese children can lead by example when it comes to a child’s weight loss. Researchers at the University of California, San Diego School of Medicine and the University of Minnesota conducted the research and found that when parents lost weight, their children were more likely to follow suit.
“We looked at things such as parenting skills and styles, or changing the home food environment, and how they impacted a child’s weight,” said Kerri N. Boutelle, PhD, associate professor of pediatrics and psychiatry at UC San Diego and Rady Children’s Hospital-San Diego. “The number one way in which parents can help an obese child lose weight? Lose weight themselves. In this study, it was the most important predictor of child weight loss.”
The statistics are somewhat shocking. According to recent data, 31 percent of U.S. children are overweight or obese. This percentage equates to roughly 4 to 5 million children. Doctors define obesity as 20 percent over normal weight. This means that if a child’s normal weight is 100 pounds, then weighing 120 pounds would put them in the obese category.
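To make that definition concrete in code, the threshold can be expressed as a one-line check; the helper below is purely illustrative and is not part of the study:

    def is_obese(weight_lb, normal_weight_lb):
        # Illustrative check of the "20 percent over normal weight" rule described above
        return weight_lb >= 1.20 * normal_weight_lb

    # The article's example: a child whose normal weight is 100 pounds
    print(is_obese(120, 100))  # True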
Parents looking to encourage healthy eating habits are fighting an uphill battle. The Centers for Disease Control and Prevention has conducted studies that show popular fast food restaurants are often located in close proximity to schools. Of the 1,292 schools and 613 fast food restaurants included in the study, 35% of the schools were within a five minute walk of a fast food restaurant. More than 80% of schools had at least one fast food restaurant within a ten minute walk.
In order to combat childhood obesity, doctors recommend treatment programs that include both the parent and the child in order to combine nutrition education with exercise and behavior therapy techniques.
“Parents are the most significant people in a child’s environment, serving as the first and most important teachers,” said Boutelle. “They play a significant role in any weight-loss program for children, and this study confirms the importance of their example in establishing healthy eating and exercise behaviors for their kids.”
While conducting research for this study, the doctors looked at eighty parent-child groups with an obese child between the ages of 8 and 12. These groups participated for five months in either parent-only or parent-plus-child treatment programs.
The researchers then focused on evaluating the effectiveness of three types of parenting skills taught during family-based behavioral treatment for childhood obesity and the subsequent impact on the child’s weight. Parents either modeled behavior to promote their own weight loss, changed the food environment at home, or changed their parenting style and techniques, limiting when and what the child could eat and encouraging them to exercise and be active.
After looking at the data from these studies, the researchers found that the only consistent predictor of a child’s weight loss was the parent’s weight loss and drop in BMI. This new research confirms previously published findings.
Clinicians, doctors, and pediatricians are encouraged to suggest parental weight loss, in addition to family-based treatments, to help obese and overweight children lose weight.

Apple Projected To Sell 65 Million iPads This Year

Original predictions of new iPad sales have been increased as Apple quickly sold through its preorder stock. Analysts from Canaccord Genuity originally predicted Apple would sell 60 million units of its popular tablet this year. However, as excitement for the new iPad (simply named “iPad”) grows, the analysts have adjusted their estimates.
They now estimate Apple will sell 65.6 million iPads this year.
Since the original iPad was introduced and released 2 years ago, Apple has sold 55 million units of the device.
Apple’s CEO Tim Cook introduced the new iPad at a press event in San Francisco recently. Touting a new, high resolution “Retina” display, faster graphics processing and an improved camera, the new iPad has millions of people ready to buy. Apple began to take pre-orders on March 7, 2012, the day it was announced. Since then, Apple has sold through its stock of preorders. Now, Apple’s website lists a wait time of 2-3 weeks before shipping the iPad.
Those interested in buying the new iPad in stores have only to wait until tomorrow, March 16 at 8am when Apple officially launches the device in the US as well as Canada, Japan, Hong Kong, Singapore, the United Kingdom and other parts of Europe. iPad will ship to 25 more countries a week later on March 23, 2012.
The analysts are expecting record numbers this weekend as the iPad launches to millions of buyers. Apple has put a limit on buyers, allowing only 2 iPads per person in order to control inventory. Last year the iPad 2 sold out within days of its launch. Apple wants to sell as many as they can this weekend without turning people away.
So why the excitement for this new version of the iPad?
In addition to the higher resolution “Retina” display and faster graphics processing (thanks to the new A5X system-on-a-chip processor), the iPad will now be available with 4G LTE capabilities. This new generation of wireless technology boasts download speeds of around 20 Mbps. The iPad will ship in its usual variations: black or white, wifi or 4G LTE enabled, and in sizes of 16, 32, and 64 gigabytes. American customers will have the option of choosing either AT&T or Verizon Wireless for their LTE coverage. The basic wifi models start at $499 for the 16GB model. 4G LTE enabled iPads will start at $629.
Analysts also say that this weekend’s predicted numbers could cause Apple’s rivals to struggle to release competing products over the next few quarters. According to Reuters, analysts T. Michael Walkley and Matthew Ramsay of Canaccord Genuity said, “While Samsung appears to have the most competitive Android tablet offering, we view Windows 8 as a greater longer-term threat to Apple’s dominant iPad share than Android tablets.”
Helping to keep Apple in the dominant position will be last year’s version of the iPad, the iPad 2. Apple has decided to keep the older model available for purchase for $100 less than the newer models.
In anticipation of the launch, Apple’s stock has risen about 2%. Stock prices for Apple closed at $589.75 on Wednesday evening.

Study Finds Developing Countries Suffer From Poor Hospital Care

A new study finds that patient safety is lacking in the world’s developing countries.

According to a recent BBC News report, the study, published in the British Medical Journal, looked at the hospital records of over 15,000 patients in 26 hospitals in eight countries: Egypt, Jordan, Kenya, Morocco, South Africa, Tunisia, Sudan and Yemen.

The study found that 8.2 percent of patients suffered an adverse event, defined as an unintended injury resulting from healthcare management that led to permanent disability or death.

The most common adverse event, at 34.2 percent, was therapeutic error, in which the diagnosis was made but the proper therapeutic response was not ordered or not delivered. The next most common, at less than twenty percent, was diagnostic error, in which the diagnosis was not made properly or in a timely manner, or there was a failure to make the proper diagnosis from the information given.

Some hospitals reported almost one in five patients were affected by accidents or poor treatments. The researchers note that adverse events happen in developed nations also, but the results are not as tragic. Patients are more likely to survive an adverse event in a developed nation but will more likely die from the same event in a developing nation.

According to Dr. Itziar Larizgoitia, the coordinator of the World Health Organization (WHO) patient safety group, doctors and nurses are not to blame for the poor hospital care. He says, “It is not the intent of health professionals to fail on patients. Rather, the harm caused by health care is often the result of failing process and weak systems. Often doctors and nurses in developing countries have not received adequate training, are not adequately supervised, do not have protocols to follow nor the means to record patients’ information, or in some cases, do not even have running water with which to wash their hands.”

The study also found that age and length of stay have an effect on the survivability of an adverse event. Older patients have a higher chance of dying due to an adverse event than younger patients, and the longer the stay, the larger the risk of failing to survive.

Dr. Ross Wilson, chief medical officer of the New York City health and hospital corporation and author of the paper, told BBC News: “The older you are the more at risk you are and if you have that event the more at risk you are of significant consequences like permanent disability or death. In addition the longer you are in hospital the more at risk you are, but these are the same as in the developed world. People at the extremes of life are more at risk.”

The researchers agree that solving this issue will take more than just adding resources. Dr. Larizgoitia says culture change is also required.

He told BBC News reporter Matt McGrath: “Developing and adapting patient safety practices to the different cultural contexts are essential. Safety practices that work in one context may not work in another one. It is essential to understand which practices can work effectively in different contexts and it is also essential to facilitate and encourage the adoption of the culturally and context specific practices.”

Autism Link Found In Children With Bone Disorders

(Ivanhoe Newswire) – Children with multiple hereditary exostoses (MHE) suffer from multiple growths in their bones that cause pain and disfigurement. However, aside from the physical impairments, parents have also noticed that their children with MHE experience autism-like social problems.

With the encouragement of parents, researchers at Sanford-Burnham Medical Research Institute were able to uncover a link between autism and MHE by using a mouse model of MHE to investigate cognitive function. They found that mice with a genetic defect that models human MHE show symptoms that meet the three defining characteristics of autism.

“There is growing evidence that many autistic people have related genetic defects, or defects that are exacerbated by this one,” Yu Yamaguchi, M.D., Ph.D., professor in the Sanford Children’s Health Research Center at Sanford-Burnham, was quoted as saying. Yamaguchi led this study along with colleagues Fumitoshi Irie, Ph.D., and Hedieh Badie-Mahdavi, Ph.D.

In humans, MHE is caused by a mutation in one of two genes, Ext1 or Ext2. Together, these genes encode an enzyme necessary to produce heparan sulfate–a long sugar chain that helps bone cells grow and proliferate. In this study, Yamaguchi and his team used mice that lack the Ext1 gene in just a certain type of neuron to understand the mechanism of social problems in MHE patients.

The mice were tested for the three defining characteristics of autism: social impairment, language deficits, and repetitive behavior. The team found that the mutant mice were less social than normal mice. They also exhibited language deficiencies, as determined using ultrasound vocalization measurements, a well-characterized substitute for mouse language. Lastly, Yamaguchi’s team took a look at repetitive behaviors in these mice. Using a board covered with holes, they observed that normal mice will poke their noses in many holes at random, while the mutant mice poke their noses in the same hole again and again.

This information clearly demonstrates what the parents of children with MHE have always suspected–the disease affects more than just bones. The genetic defect that causes skeletal deformities also causes social and cognitive problems.

Not all autistic children have MHE, nor are all MHE children autistic. But, according to Yamaguchi, there is evidence that some people who are autistic might have similar defects in heparan sulfate. This is the sugar chain that’s defective in MHE, where it causes bone deformities and social deficits.

“There are a few studies that compared the genomes of healthy and autistic people and they revealed differences in some heparan sulfate-related genes,” Yamaguchi was quoted as saying.

There are most likely many different genetic abnormalities that can lead to autism in the general population. This study and others now indicate that for some, the condition could be caused by mutations in genes encoding enzymes and proteins involved in making heparan sulfate.

Yamaguchi’s team is now comparing DNA from autistic and non-autistic volunteers to look for mutations in heparan sulfate genes. So far the initial results have been encouraging.

“I can’t emphasize enough how much it helped that the parents of kids with MHE got involved and supported this research,” Yamaguchi said. “As parents, they noticed their kids had social problems that gave them challenges at school. School officials and other people didn’t take these observations seriously–they usually just waved off the problems, assuming that the kids’ bone deformities just make them shy. This latest research doesn’t solve any bone issues for MHE patients, but it does help support what the parents always knew–these children need special care.”

SOURCE: Proceedings of the National Academy of Sciences, March  2012.

A Fragrant New Candidate For Biofuel

A class of chemical compounds used for flavor and fragrance may one day become a clean, renewable resource with which to fuel our automobiles. U.S. Department of Energy researchers have modified the E. coli bacterium to create large quantities of methyl ketone from glucose. First tests of this methyl ketone show very high cetane numbers. Cetane is a fuel rating system for diesel fuel, similar to octane ratings for gasoline. This makes the methyl ketones a viable candidate for production of advanced biofuels, according to researchers.
“Our findings add to the list of naturally occurring chemical compounds that could serve as biofuels, which means more flexibility and options for the biofuels industry,” says Harry Beller, a Joint BioEnergy Institute microbiologist who led this study. “We’re especially encouraged by our finding that it is possible to increase the methyl ketone titer production of E. coli more than 4,000-fold with a relatively small number of genetic modifications.”
Beller is a corresponding author of a paper describing his work and findings. “Engineering of Bacterial Methyl Ketone Synthesis for Biofuels” was published in the journal Applied and Environmental Microbiology and was co-authored by first author Ee-Been Goh, Edward Baidoo, and Jay Keasling.
Scientists are looking very closely at advanced biofuels as a potential replacement for gasoline, diesel, and jet fuel. Advanced biofuels are derived from non-food plants and other forms of agricultural waste, therefore presenting a clean and renewable resource. Synthesizing these biofuels presents another promising candidate as a fuel replacement. In previous research, Beller and his team were able to use E. coli to synthesize diesel fuel.
“In those studies, we noticed that bacteria engineered to produce unnaturally high levels of fatty acids also produced some methyl ketones,” Beller says. “When we tested the cetane numbers of these ketones and saw that they were quite favorable, we were prompted to look more closely at developing methyl ketones as biofuels.”
Methyl ketones were discovered more than a century ago in rue, an aromatic evergreen herb. Today the ketones are widely used as a fragrance in essential oils and cheeses. Beller and his colleagues were able to create large enough quantities of these ketones using the same means of synthesis used to engineer fatty acid-producing E. coli.
“For methyl ketone production, we made two major modifications to E. coli,” Beller says. “First we modified specific steps in beta-oxidation, the metabolic pathway that E. coli uses to break down fatty acids, and then we increased the expression of a native E. coli protein called FadM. These two modifications combined to greatly enhance the production of methyl ketones.”
Beller and his team tested two types of methyl ketone to determine cetane numbers. In the United States, the minimum cetane number diesel fuel must have is 40. Using a combination of both types of ketones, Beller’s team was able to achieve a cetane number of 58.4, well above that minimum. While this number is quite impressive, the team still has concerns that the melting point of these ketones is too high for them to be a completely viable option as a cold-temperature fuel.
The next step for Beller and his team is to increase production of the ketones as well as improve their fuel properties.

The Dirty Secret Of Carbon Fiber

Today carbon fiber is heralded for its strength, its rigidity and its ability to be cast in shapes that would be virtually impossible with other materials. Carbon fiber is now commonly used in high performance automobiles, and in the world of sporting goods with everything from skis to bicycles to golf clubs. It is used as casing for laptops and has even been adopted by jewelry makers looking to go cutting edge.

Of course the biggest use of carbon fiber is in the aerospace industry, where the new Dreamliner airplane is built largely of carbon fiber. This material allows for weight savings while still providing that strength and rigidity. Composite materials thus offer a multitude of benefits today.

One irony is that composites are often universally referred to as “space age” when in fact composites are not really all that new. A composite is a material that combines multiple constituent materials, which remain physically and chemically separate and distinct within the finished structure. In other words, this is about layers of material. This is what gives finished products such as bicycle frames and jet aircraft, which are made of long fibers of carbon, their strength and rigidity.

Thus plywood is a type of composite, and all composites have a dirty secret — they are often very hard to recycle. In the case of carbon fiber it is a far bigger problem.

Carbon Fiber’s Carbon Footprint
Because it is a composite, carbon fiber is difficult to recycle, but it gets worse. By its very nature it is neither biodegradable nor photodegradable. It is simply a product designed to last and last. For the user that is really good news. That means that bicycle frame will last for many long rides, and flyers should take comfort that the plane will stand the test of time.

That is all good, until the product reaches its end of life. The good news here is that aircraft are flying longer and longer, and a carbon fiber body will endure years of changes in pressure, temperature and other extreme conditions. In the case of a large aircraft, small cracks, dings and scratches can be repaired and the structural integrity maintained.

The same cannot be said for bicycles, tennis rackets and other products that take advantage of carbon fiber. While there are companies that can repair a crack in a carbon fiber bicycle frame, the falling price of the materials makes it easier just to replace it instead. The result is that a product that doesn’t biodegrade could be going to a landfill — where it could take hundreds of years or longer to break down.

Recycling is beginning, and in the bicycle industry the largest makers including Giant, Trek and Specialized are beginning to address the problem. But there is another issue: carbon fiber isn’t easy to recycle.

To recycle carbon fiber it is cut down into one-inch strips. These cannot be reused in place of virgin carbon fiber, as they wouldn’t have the strength or rigidity necessary for most products. In fact, it isn’t even possible to cut down the strips from a used-up Dreamliner and turn them into a bike, at least not at present.

Instead, the carbon fiber is cut down into strips and melted so that it can be transformed into a thermoplastic. This has benefits, and some recycling efforts are turning the recycled carbon fiber into phone cases, laptop shells or even water bottle cages for bicycles. But that’s a lot of cases, shells and cages from one bike. Imagine how much a single automobile body would produce, not to mention a Dreamliner.

Not a Dream But a Nightmare
Recycling itself has a dirty secret. It really needs to be cost effective to make it worth the time, effort and, most importantly, the energy used. And with carbon fiber it hasn’t reached that point on any of those counts yet.

With all recycling it is worth considering the amount of energy that the recycling effort uses — or potentially saves. In the case of soda pop cans made from aluminum, recycling uses only five percent of the energy that would be needed to produce a can from virgin aluminum. This has in turn created a market whereby it is more cost effective and more energy effective to recycle. It is why there are deposits on cans. So far carbon fiber offers no similar savings in energy or money, and thus few outside the world of carbon fiber are spending the time looking at it.

However, the various industries involved are considering this very closely. As noted, bicycle manufacturers — who of course note the green alternative transport that bicycles provide — are leading the efforts. And fortunately companies such as Toray in Japan, currently one of the world’s largest producers of carbon fiber, are trying to address the issue of carbon fiber recycling before it becomes a serious problem.

Dirty Production and Cheaper Products
The other dirty secret of carbon fiber is that its clean lines, smooth texture and often inspired shapes suggest that it is the product of skilled craftsmen. This is only partially true. Whereas wood, steel and other materials required someone to actually shape them, carbon fiber allows for fantastic shapes dreamed up on a computer in a 3D CAD program and made reality through 3D printing techniques.

After a model is created, an actual product can be produced using carbon fiber. This has streamlined the design process, but the skill is mostly exercised in the computer program. The question to ask is whether this is the same level of artistic skill and “feel” as someone creating a model using older, traditional methods.

There is also the issue that carbon fiber factories are far from clean looking. The products may have clean lines, but the production line is one of cut fibers, resins and other materials. It is, in a word, dirty, with lots of leftover materials — which, like the finished products, are not easily recycled.

The final part of the equation is that carbon fiber has long been the body material of a Ferrari automobile, costing more than many houses. Now the material is making its way down market. The prices for bicycles, tennis rackets and golf clubs are falling as well. That means more people get to experience this “space age” material but it just adds to the other problems.

Whereas scrap metal is picked up when left on the curb, will we ever see a point when the junk man wants an old bike frame made of carbon fiber?

Image Caption: A Ferrari branded bicycle, built by Colnago at the Colnago Factory museum outside Milan, Italy. (Credit: Peter Suciu)

Pfizer Chief Says Some European Countries ‘Freeloading’ Off Others

Pfizer said Europe is undermining drug innovation by cutting prices, raising barriers to new medicines and “freeloading” off others in Asia and the U.S. who are willing to pay, according to a Reuters report.

Chief executive Ian Read told Reuters on Monday that European governments are sacrificing the long-term future of science in their countries for the sake of short-term budget cuts.

The report said the chief executive of the world’s largest drugmaker claims there is a disconnect in Europe between the marketplace for pharmaceuticals and the desire of European governments to have innovation and research.

Read said governments in Europe that are becoming increasingly reluctant to pay up for innovative therapies would eventually regret it.

He said the pharmaceutical industry is a high-risk business, and European leaders are sacrificing the long term for the short term.

He pointed to Germany as an example when speaking to Reuters, citing Berlin’s recent decision to extend drug price freezes from 2010 and to use a basket of countries like Poland and Greece as a benchmark for how much it will pay for drugs.

Read said they are saying that “investment in innovation is at a level that Greek prices can support.”

“That’s not a recipe to create an innovative industry that can compete on the world stage,” he told Reuters.

He said since Germany is one of Europe’s wealthiest countries, he questioned whether referencing its prices to Greek or Polish levels would offer drug makers a fair return.

“These are the questions I’d like politicians to look at in a fundamental way,” he said. “The risk of freeloading is so great in an industry with sunk costs.”

He told Reuters he would like to see governments taking a longer-term view and engaging on the issue of who should pay for the research and development costs of these new modern medicines.

Red Meat May Shorten Life Expectancy, Study Says

Researchers at Harvard Medical School said that a diet high in red meat may be shortening life expectancy.

The study suggests that red meat increased the risk of death from cancer and heart problems in its 120,000 participants.

The Harvard researchers said substituting red meat with fish, chicken and nuts lowered the risks.

The team analyzed data from 37,698 men between 1986 and 2008, and 83,644 women between 1980 and 2008.

They said that adding an extra portion of unprocessed red meat to someone’s daily diet would increase the risk of death by 13 percent.  They also found that it would increase the risk of fatal cardiovascular disease by 18 percent, and fatal cancer by 10 percent.

The researchers found that the figures for processed meat were higher, with a 20 percent overall mortality, 21 percent for death from heart problems and 16 percent for cancer mortality.

“We found that a higher intake of red meat was associated with a significantly elevated risk of total, cardiovascular disease, and cancer mortality,” the authors wrote in a press release.

“This association was observed for unprocessed and processed red meat with a relatively greater risk for processed red meat.”

The researchers said saturated fat from red meat may be behind the increased heart risk and the sodium used in processed meats may “increase cardiovascular disease risk through its effect on blood pressure.”

The results of the study were published in the journal Archives of Internal Medicine.

Conjunction Of Jupiter And Venus Impresses Skywatchers

If you have noticed an unusual conjunction of lights in the night sky lately, you are not seeing alien craft coming to whisk you away; you are seeing a conjunction of Venus and Jupiter, and it makes for an amazing view in the heavens.

The two planets are 450 million miles apart in space, but because they are aligned in the same direction from Earth they appear to be almost within touching distance of each other, reports The Telegraph.

They will appear very bright and relatively close together for the next few weeks, and if you live in a rural area where there is less light pollution, you will get the full benefit of the show.

Despite being much smaller, Venus appears brighter because of its relative closeness to Earth and because it gets more intense sunlight than the larger Jupiter.

After March 14, the gas giant Jupiter will appear to drop lower towards the horizon until it is eventually invisible after sunset by mid-April. However in July, early-risers will be treated to a similar spectacle, in the eastern sky at daybreak.

Robert Massey of the Royal Astronomical Society explained, “although conjunctions are not that rare, the interest in this one is a result of how spectacular it is.”

“Both planets are very bright in the night sky. If you know where to look, you can even see Venus in the day. The two being so close together will be beautiful. Last night they looked like two beacons.

“It is also interesting for people because it just happens to be something which you can see for yourself. In the northern hemisphere we should look for them in the south west this evening. The pair will appear to move to the west over the course of the night.

“While the pair will drift apart after a couple of days, Jupiter will be visible for at least another two weeks.”

Gut Cells Could Be Used To Produce Insulin For Diabetes Patients

Columbia researchers have conducted a study that suggests cells inside the intestines could be employed to make insulin for patients with type I diabetes. Previously, researchers considered stem cell transplants to be the only way to replace lost cells in patients with type I diabetes. Such a discovery could also mean that these patients would be free from daily insulin injections.

Researchers have been conducting their work on mice and published their results in the journal Nature Genetics.

Type I diabetes is an autoimmune disease that destroys pancreas cells used for producing insulin. Once these cells go missing in the pancreas, patients with the disease have to inject themselves with insulin to keep their blood glucose levels in balance.

In addition, patients must use a glucose meter for diabetes to keep their sugars at an acceptable level.

Scientists have long sought to combat the effects of type I diabetes by creating a cell that will do the work of the pancreas cells by releasing insulin into the blood stream when necessary. Researchers have been able to recreate these types of cells in the lab using stem cells. However, these cells are not yet appropriate for use in diabetes patients because they do not release insulin at the appropriate time. If glucose levels go unchecked and unbalanced, a patient could fall victim to hypoglycemia.

Chutima Talchai, PhD, and Domenico Accili, MD, a professor of medicine at Columbia University Medical Center, conducted the study on progenitor cells in mice. Their research shows that these cells were able to create insulin-producing cells.

Progenitor cells are like stem cells in that they can be used to recreate other cells. However, they cannot divide and replicate indefinitely. Drs. Talchai and Accili used progenitor cells from the gastrointestinal tract, as these have been found to give rise to cells that produce serotonin, gastric inhibitory peptide, and other hormones found in the bloodstream and GI tract.

The doctors obtained their results by manipulating Foxo1, a gene that helps determine what a cell will become. When this gene was switched off, the progenitor cells began to produce insulin on their own.

These cells could be dangerous if they did not release the right amount of insulin at the right time, but researchers found that these cells did just that.

The insulin-producing progenitor cells used in the mice effectively regulated glucose levels and produced insulin in sufficient quantity.

This research suggests that insulin-producing cells could be reproduced in the GI tracts of diabetes patients, both pediatric and adult alike.

In the press release for the new findings, Dr. Accili said, “Nobody would have predicted this result. Many things could have happened after we knocked out Foxo1. In the pancreas, when we knock out Foxo1, nothing happens. So why does something happen in the gut? Why don’t we get a cell that produces some other hormone? We don’t yet know.”

The next step in the research, according to Dr. Accili, is to find a drug that has the same effects on progenitor cells in humans as flipping off Foxo1 does in mice.

“It’s important to realize that a new treatment for type I diabetes needs to be just as safe as, and more effective than, insulin,” Dr. Accili says. “We can’t test treatments that are risky just to remove the burden of daily injections. Insulin is not simple or perfect, but it works and it is safe.”

Diabetics Should Lift Weights Before Cardio Workout

The results of a new study published in the journal Diabetes Care may help diabetics who exercise control their blood sugar better.

The study followed 12 people, ten men and two women with an average age of 32, who already exercised regularly and had been diagnosed with Type-1 diabetes. The participants met the researchers at a laboratory for two exercise sessions that were held five days apart.

The participants started one session at 5pm with 45 minutes of treadmill running followed by 45 minutes of weight training. The order of the training was switched in the second session.

The time was chosen in order to simulate the approximate time the participants would normally work out, typically after-work hours.

The researchers were very careful to monitor the participants’ blood sugar during the exercise sessions. Samples of blood were taken before, during, and after the sessions. A session was stopped and the participant given a snack if their blood sugar dropped below 4.5 mmol/L (millimoles per liter) of blood.
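For readers more familiar with US units, a reading in mmol/L converts to mg/dL by multiplying by roughly 18, since glucose has a molar mass of about 180 g/mol. A minimal sketch of that conversion (the helper name is ours, not the study’s):

    def mmol_l_to_mg_dl(mmol_l):
        # Blood glucose conversion: 1 mmol/L of glucose is roughly 18.016 mg/dL
        return mmol_l * 18.016

    # The study's 4.5 mmol/L snack threshold works out to roughly 81 mg/dL
    print(round(mmol_l_to_mg_dl(4.5)))  # 81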

The study found that when participants followed the aerobic-exercise-first plan, their blood sugar was more likely to fall below the 4.5 mmol/L threshold. But when they did weight training first, they suffered less severe drops in blood sugar, even hours after exercise.

Since the study was so small, the researchers were unsure whether other factors that were not monitored could have skewed the results.

According to Reuters, for example, they did not monitor the levels of a number of hormones that could lead to changes in blood glucose during exercise. Nor were the participants’ diets monitored and controlled, but the researchers wanted to simulate real-life conditions faced by people with Type-1 diabetes.

Also, the study followed diabetics who were in good physical shape; would the same results occur with somebody in less than optimum health?

Dr. Vivian Fonseca, chief of endocrinology at Tulane University Medical School, who was not involved in the study, told Reuters, “While the study findings are very intriguing, they may have limited practical value until more studies are done.”

The authors of the study still recommend that Type-1 diabetics train with weights before their cardiovascular workout in order to better control their blood sugar.

March/April 2012 Annals Of Family Medicine Tip Sheet

Four articles in the current issue draw attention to policy initiatives and implications of the rapidly changing U.S. health care environment. Collectively, they examine some of the challenges and opportunities facing the country following the 2010 passage of the Patient Protection and Affordable Care Act.
Researchers Project Cost of Family Health Insurance Premiums Will Surpass Household Income by 2033
Updating estimates of who will be able to afford health insurance in the future in light of the 2010 Patient Protection and Affordable Care Act that reformed health care payment in the United States, researchers now estimate that the cost of an average family insurance premium will surpass household income by 2033. This compares to a 2005 estimate that the cost of insurance premiums would surpass household income by 2025.
Analyzing data from the Medical Expenditure Panel Survey and the U.S. Census Bureau, researchers developed an updated model of insurance premium cost and household income projections. Projecting out to 2040, they found that if health insurance premiums and national wages continue to grow at recent rates and the U.S. health system makes no major structural changes, the average cost of a family health insurance premium will equal 50 percent of the household income by the year 2021 and surpass it by 2033. If out-of-pocket costs are added to the premium costs, they find the 50 percent threshold is crossed by 2018 and exceeds household income by 2030.
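The underlying projection amounts to running two compound-growth curves forward until they cross. The sketch below illustrates that logic with made-up starting values and growth rates, not the figures from the Annals study:

    # Hypothetical inputs for illustration only; these are not the study's figures.
    premium = 15_000       # assumed average family premium in the base year, in dollars
    income = 50_000        # assumed median household income in the base year, in dollars
    premium_growth = 0.08  # assumed annual growth rate of premiums
    income_growth = 0.02   # assumed annual growth rate of household income

    year = 2012
    while premium < income:
        premium *= 1 + premium_growth
        income *= 1 + income_growth
        year += 1

    # First year in which the premium exceeds household income under these assumptions
    print(year)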
While at first glance the change in the projection might be perceived as progress, in part due to a recent slowdown in the rate of premium increases, they point out that during that same period, employee contributions to insurance premiums and out-of-pocket expenses have grown faster than overall premium costs, suggesting that insurers are shifting costs onto patients in other ways. The authors assert the slowdown in the rate of premium increases has been offset by higher deductibles and copayments and fewer covered services. They conclude that continuing to make incremental changes in U.S. health policy will likely not bend the cost curve, which has eluded policy-makers for the past 50 years. Unless major changes are made to the U.S. health care system, private insurance will become increasingly unaffordable to low-to-middle income Americans.
Who Will Have Health Insurance in the Future? An Updated Projection
By Richard A. Young, MD
John Peter Smith Family Medicine Residency Program, Fort Worth Texas
Jennifer E. DeVoe, MD, DPhil
Oregon Health and Science University, Portland
Update on Primary Care Initiatives from the Innovation Center at the Centers for Medicare and Medicaid Services
Richard Baron, MD, MACP, Group Director, Seamless Care Models for the Center for Medicare and Medicaid Innovation, an entity created by the 2010 Patient Protection and Affordable Care Act to test new models of health care delivery to improve the quality of care while lowering costs, highlights some of the Center’s new primary care programs and initiatives. He notes that by changing delivery models and moving to a payment model that rewards physicians for quality of care instead of volume of care, the country may be able to achieve the kind of health care patients want to receive and primary care physicians want to provide. Collectively, he asserts, the Center’s programs communicate a vision for the future of primary care and have the power to change the national conversation.
New Pathways for Primary Care: An Update on Primary Care Programs From the Innovation Center at CMS
By Richard J. Baron, MD, MACP
Centers for Medicare and Medicaid Services, Baltimore, Md.
Call to Expand Federal Funding for Primary Care Training
To revitalize the national primary care workforce and ensure access to care following passage of the 2010 Patient Protection and Affordable Care Act, policy researchers at the Robert Graham Center in Washington call on policymakers to increase funding for Title VII, Section 747 of the Public Health Service Act, which is intended to increase the quality, quantity and diversity of the primary care workforce, but which has been severely cut over the past two decades. They contend that new and expanded Title VII initiatives are required to increase the production of primary care physicians; establish high-functioning academic, community-based training practices; increase the supply of well-trained primary care faculty; foster innovation and rigorous evaluation of these programs; and, ultimately to improve the responsiveness of teaching hospitals to community needs. They conclude that failure to launch a national primary care workforce revitalization program would put the health and economic viability of the United States at risk, and they call on Congress to act on the Council on Graduate Medical Education’s recommendation to increase funding for Title VII, Section 747 roughly 14-fold to $560 million annually – a small investment in light of the billions that Medicare currently spends to support graduate medical education.
The Next Phase of Title VII Funding for Training Primary Care Physicians for America’s Health Care Needs
By Robert L. Phillips, Jr., MD, MSPH
The Robert Graham Center, Washington D.C.
Barbara J. Turner, MD, MSED, MA, FACP
University of Texas Health Science Center, San Antonio
Replacing the Idyllic “Lone Physician” Myth with a New Paradigm That Reflects the Realities of Modern Practice
In a reflection piece, researchers present an alternative to the heroic figure of the mythical “lone physician” that acknowledges the current realities of primary care practice. This new, more collaborative alternative places the primary care physician within the context of a highly functioning health care team. They assert this new paradigm fulfills the collaborative, interprofessional, patient-centered needs of new models of care, and may help to ensure the work of primary care physicians remains compassionate, gratifying and meaningful.
The Myth of the Lone Physician: Toward a Collaborative Alternative
By George W. Saba, PhD, et al
University of California, San Francisco
With the March/April issue, Annals builds on last issue’s theme of multimorbidity, the coexistence of multiple chronic health conditions in a single individual, a phenomenon that is growing at an alarming rate and bankrupting the U.S. health care system. Because of the negative consequences and high cost associated with multimorbidity, it has received growing interest in the primary care literature and is now acknowledged as a research priority. With this in mind, three articles and an editorial in the current issue dive into the challenges of measuring multifaceted morbidity, offering tools to assess value and integrate patients’ complexity. An accompanying editorial from U.S. Department of Health and Human Services details a strategic framework developed in 2010 by the Department to ensure a more coordinated and comprehensive approach to improving the health of patients with multimorbidity.
Huge Variation in Studies Estimating the Prevalence of Multimorbidity, Researchers Cite Differences in Methods
Marked variation exists among studies looking at the prevalence of multimorbidity with respect to both methodology and findings. In a systematic review of 21 studies reporting on the prevalence of multimorbidity, researchers observed the largest difference at the age of 75 years in both primary care (with prevalence ranging from 4 percent to 99 percent across studies) and the general population (with prevalence ranging from 13 percent to 72 percent across studies). They conclude differences of this magnitude are unlikely to reflect differences between populations and more likely to be due to differences in methods. In addition to their differing geographic settings, the studies differed in recruitment method and sample size, data collection, and operational definition of multimorbidity, including the number of conditions and the conditions selected. All of these differences, they assert, affect prevalence estimates. The researchers call on investigators designing future studies to consider the number of diagnoses to be assessed (with 12 frequent diagnoses of chronic diseases appearing ideal) and to report results for differing definitions of multimorbidity (both 3 diseases and the classic 2 diseases). Use of more uniform methodology, they conclude, should permit more accurate estimation of the prevalence of multimorbidity and facilitate comparisons across settings and populations.
A Systematic Review of Prevalence Studies on Multimorbidity: Toward a More Uniform Methodology
By Martin Fortin, MD, MSc, CFPC, et al
Université de Sherbrooke, Quebec, Canada
Measuring Multimorbidity: A Review of Measures Suitable for Primary Care Research
To assess the impact of multimorbidity, it is necessary to measure it. In a systematic review of 194 articles studying different measures of multimorbidity and morbidity burden suitable for use in research in primary care, researchers identify 17 different measures, some of which are more established than others. The measures most commonly used in primary care and for which there is greatest evidence of validity are disease counts, the Charlson index, and the Adjusted Clinical Groups System. They conclude that different measures are most appropriate according to the outcome of interest and the type of data available. For example, they conclude that researchers interested in the relationship between multimorbidity and health care utilization will find most evidence for the validity of the Charlson Index, the ACG System and disease counts; but evidence is strongest for the ACG System in relation to costs, for Charlson index in relation to mortality and for disease counts or Charlson index in relation to quality of life. Other measures such as the Cumulative Index Illness Rating Scale and Duke Severity of Illness Checklist are more complex to administer and their advantages over easier methods have not been well established. The authors call for more research to directly compare the performance of different measures.
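To make the simplest of those measures concrete, a disease count just tallies how many conditions from a chosen list appear in a patient’s record. The condition list and patient below are invented for illustration; they are not the diagnosis sets used in the reviewed studies:

    # Hypothetical condition list and patient record, for illustration only.
    CHRONIC_CONDITIONS = {"diabetes", "hypertension", "copd", "depression", "arthritis"}

    def disease_count(diagnoses):
        # Tally how many of the listed chronic conditions appear in a patient's diagnoses
        return len(CHRONIC_CONDITIONS & set(diagnoses))

    patient = ["hypertension", "arthritis", "seasonal allergies"]
    print(disease_count(patient))  # 2, multimorbid under the classic two-condition definition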
Measures of Multimorbidity and Morbidity Burden for Use in Primary Care and Community Settings: A Systematic Review and Guide
By Alyson L. Huntley, BSc, PhD, et al
Bristol University, England
Objective and Subjective Measures Needed for Complete Assessment of Patient Morbidity
A comprehensive assessment of a patient’s morbidity requires both subjective and objective measurement of diseases and disease burden, as well as an assessment of emotional symptoms. Comparing two different approaches to measuring morbidity – 1) objective measurement using ICD-9 diagnosis codes and 2) subjective measurement using patient-reported disease burden and emotional symptoms – researchers conclude both are needed. Analyzing data on 961 older adults with three or more medical conditions, researchers found morbidity measured by diagnosis code was more strongly associated with health outcomes of higher utilization; whereas self-reported disease burden and emotional symptoms were more strongly associated with patient-reported outcomes. The authors conclude that accurate measurement strategies to account for morbidity burden will become increasingly important as we develop new methods for evaluating patient-centered care delivery for complex patients.
Association of Patient-Centered Outcomes With Patient-Reported and ICD-9-Based Morbidity Measures
By Elizabeth A. Bayliss, MD, MSPH, et al
Kaiser Permanente, Colorado
Health and Human Services’ Strategic Framework for Tackling the Enormous Health System Challenge of Multimorbidity
An accompanying editorial from the Office of the Assistant Secretary for Health, U.S. Department of Health and Human Services attempts to bring a greater sense of order to the vexing challenge of multimorbidity. The authors outline a strategic framework developed in 2010 by the U.S. Department of Health and Human Services to ensure a more coordinated and comprehensive approach to improving the health status of individuals with multiple chronic conditions. The framework, intended for use by clinical practitioners, policymakers, researchers and others, is organized into four major goal areas: 1) strengthening the health care and public health systems; 2) empowering the individual to use self-care management; 3) equipping care providers with tools, information and other interventions; and 4) supporting targeted research about individuals with multiple chronic conditions and effective interventions. The three articles on multimorbidity in the current issue, the authors contend, represent progress toward the framework’s fourth goal.
Toward a More Cogent Approach to the Challenges of Multimorbidity
By Richard A Goodman, MD, MPH, et al
Office of the Assistant Secretary for Health, U.S. Department of Health and Human Services
Primary Care Physicians Order Lung Cancer Screening Tests in Asymptomatic Patients Despite Lack of Evidence
Primary care physicians in the United States frequently order lung cancer screening tests for asymptomatic patients even though major expert groups do not recommend it. A nationally representative survey of 962 primary care physicians, which used clinical vignettes to assess screening practices, revealed 57 percent of respondents ordered at least one of three lung cancer screening tests (chest radiograph, low-radiation dose spiral computed tomography or sputum cytology) in the past 12 months for asymptomatic patients. Thirty-eight percent reported ordering no tests. Further analysis revealed physicians were more likely to have ordered screening tests if they believed expert groups recommend lung cancer screening or that screening tests are effective; if they graduated from medical school 20 to 29 years ago; if they would recommend screening for asymptomatic patients, including patients without substantial smoking exposure; and if their patients had asked them about screening. The authors conclude primary care physicians need more information about lung cancer screening’s evidence base, guidelines, potential harms and costs to avert inappropriate ordering.
Lung Cancer Screening Practices of Primary Care Physicians: Results from a National Survey
By Carrie N. Klabunde, PhD, et al
National Cancer Institute, Bethesda, Md.
How the Medical Culture Contributes to the Harassment and Abuse of Family Physicians in the Workplace in Canada
The current medical culture appears to contribute to harassment and abuse in the workplace of family physicians in Canada. Interviews with 23 female and 14 male practicing family physicians in Canada revealed four ways in which the medical culture intentionally or unintentionally contributes to the facilitation and perpetuation of abuse in the workplace of family physicians: 1) modeling of abusive behaviors, 2) status hierarchy within the medical community, 3) shortage of physicians, and 4) lack of transparent policies and follow-up procedures after abusive encounters. The authors discuss these findings using the criminology-based broken window theory that asserts when lesser criminal acts, such as broken windows, are tolerated, more vandalism and other types of crime will eventually occur in the community. They assert that effective elimination of abuse must start from efforts that begin on the first day of medical school and continue through residency training and into clinical practices.
How the Medical Culture Contributes to Coworker-Perpetrated Harassment and Abuse of Family Physicians
By Baukje Miedema, PhD, et al
Dalhousie University, Fredericton, Canada
Video Elicitation Interviews: Powerful Qualitative Method for Primary Care Research
Researchers at the University of Michigan describe the concept and method of video elicitation interviews and provide practical guidance for primary care researchers who want to use this qualitative method to investigate physician-patient interactions. During video elicitation interviews, researchers interview patients or physicians about a recent clinical interaction using a video recording of that interaction as an elicitation tool. Video elicitation is useful, they point out, because it allows researchers to integrate data about the content of physician-patient interactions gained from video recordings with data about participants’ associated thoughts, beliefs and emotions gained from elicitation interviews. This method also facilitates investigation of specific events or moments during interactions. They conclude that while video elicitation interviews are logistically demanding and require considerable time and resources, the detailed data they produce make the effort worthwhile for many important research questions in primary care.
Video Elicitation Interviews: A Qualitative Research Method for Investigating Physician-Patient Interactions
By Stephen G. Henry, MD, and Michael D. Fetters, MD, MPH, MA
University of Michigan, Ann Arbor

On the Net:

Child Injuries On Stairs Less Frequent, Still A Problem

Fewer kids are getting hurt on stairs today than a decade ago, but new research shows that a US child still goes to the ER with a stair-related injury once every six minutes, on average, and that roughly one in four injuries among children under 12 months happens while the child is being carried on the stairs.
One author of the study said it is important for parents to supervise their kids when they are on the stairs and discourage them from playing on them. But he added that changes in how staircases are designed could cut down stair-related injuries.
Dr. Gary Smith, head of the Center for Injury Research and Policy at Nationwide Children’s Hospital in Columbus, Ohio, told Reuters Health that the environments where children live and visit should be built so that they are safe for them.
That includes built-in gates at the top and bottom of stairs, as well as handrails that are easy for small children to reach and grip firmly, he added.
Smith said the study, published in the journal Pediatrics, is the first nationally representative study on stair injuries in young kids. He and his colleagues found that nearly 932,000 children under 5 were hurt in stair-related accidents in the US between 1999 and 2008. That’s more than 93,000 kids per year, more than 7,700 per month, or nearly 260 per day. Those numbers account for about 46.5 injuries for every 10,000 kids under the age of 5.
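Those per-year, per-month and per-day figures follow directly from the study’s totals; the rough back-of-the-envelope check below reproduces them, assuming a US under-5 population of roughly 20 million (a figure implied by the reported rate but not stated in the article).

```python
# Back-of-the-envelope check of the reported stair-injury figures.
total_injuries = 932_000      # children under 5 injured on stairs, 1999-2008
years = 10

per_year = total_injuries / years               # ~93,200 per year
per_month = per_year / 12                       # ~7,770 per month
per_day = per_year / 365                        # ~255 per day
minutes_between_er_visits = 24 * 60 / per_day   # ~5.6 minutes

# Assumed, not from the article: roughly 20 million US children under age 5.
under_5_population = 20_000_000
rate_per_10k = per_year / under_5_population * 10_000   # ~46.6 per 10,000

print(f"{per_year:,.0f} per year, {per_month:,.0f} per month, {per_day:,.0f} per day")
print(f"one ER visit every {minutes_between_er_visits:.1f} minutes")
print(f"about {rate_per_10k:.1f} injuries per 10,000 children under 5")
```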
The most common injuries were bruises, sprains or cuts, often around the head and neck. About one in ten of the injuries involved a broken bone, and less than 3 percent of the children had to be hospitalized.
Dr. Young-Jin Sue, an ER doctor at The Children’s Hospital at Montefiore in New York, said that was consistent with her own experience in the ER.
“Fortunately the vast majority of stair injuries are very mild,” she told Reuters Health. “They’re soft tissue injuries — bumps and bruises. I can’t remember the last time we had to hospitalize a child” who was injured on the stairs.
The number of injuries fell over the study period, the researchers said, dropping by 11.6 percent by 2008, mostly because of a sharp decline in stair injuries tied to baby walkers, which were once responsible for 25,000 child injuries per year.
Voluntary safety standards enacted in the 1990s and heightened awareness of the dangers of baby walkers helped drive those injuries down to about 1,300 per year, Smith said.
Despite the decline, a child arriving at the ER with a stair-related injury once every six minutes is a sobering and surprising statistic, Smith said.
It isn’t clear how many children may have died as a result of the injuries because the data obtained from the National Electronic Injury Surveillance System, or NEISS, don’t track deaths, Smith said.
The data showed that nearly all of the injuries, 95 percent, occurred at home, and about 88 percent were caused by simple falls. Children jumping down the stairs or riding toys on the stairs accounted for 2.6 percent of injuries, and another 2.7 percent were hurt using baby walkers.
Also, about 33,500 injuries were a result of children under the age of 1 who were being carried on the stairs by a parent or other caretaker. Those youngsters were three times more likely to be hospitalized than kids injured in other ways.
“We do live in a multi-tasking world,” Smith said. “If you have to take your child up or down the stairs, only the child should be in your arms.”
Stair gates, handrails and softer steps are good precautions against stair-related injuries, but increased awareness is perhaps the best protection of all, Smith said.
Sue pointed out the importance of keeping stairs clutter-free and of making sure little kids are always supervised. “I think the message for parents over and over and over again is: they’re growing human beings, and you think you’ve got them figured out, but then they’re always one step ahead of you,” she told Reuters Health.
Still, even the perfect parent can’t be watching a kid every second, Smith said.

On the Net:

Coral Reefs May Be Adapting To Global Warming

As global warming heats up the Earth’s oceans, one ecosystem stands to be severely threatened: coral reefs.

However, new research has given scientists reason to be hopeful about the fate of these coral reefs.

An international team of researchers studied a coral population in South-East Asian waters that survived a 2010 bleaching event. What was significant about this reef was that it had also survived another bleaching event 12 years earlier, in 1998.

The researchers published their findings in the journal PLoS ONE.

The researchers analyzed three different sites affected by the 2010 bleaching event and found interesting results. It has generally been understood that fast-growing corals are the most susceptible to bleaching, and in some locations, such as Indonesia, fast-growing corals (staghorn corals, for example) did die off in large numbers.

At the study sites in Malaysia and Singapore, however, the fast-growing corals were much more colorful and healthy than their bleached, slow-growing counterparts.
Dr James Guest, a joint research fellow at the UNSW Centre for Marine Bio-innovation and the Advanced Environmental Biotechnology Centre at Singapore’s Nanyang Technological University, is the lead author of the study.

Guest writes in the press release: “Mass coral-bleaching events, caused by a breakdown in the relationship between the coral animals and their symbiotic algae, are strongly correlated with unusually high sea temperatures and have led to widespread reef degradation in recent decades.”

According to Guest, these studies have shown that certain species of coral are more susceptible to bleaching events than others, and that the severity of a bleaching event affects each species differently. Guest and his team have data suggesting that the slower-growing, larger species of coral will replace the faster-growing, smaller species in the future.

The researchers noticed a trend when studying these locations. According to their data, the thermal history of each location could be a factor in how likely a reef is to adapt to its surroundings.

“…During the 2010 event the normal hierarchy of species susceptibility was reversed in some places. Corals at our Indonesian study site in Pulau Weh, Sumatra, followed the usual pattern, with around 90% of colonies of fast-growing species dying. But the pattern was the opposite at study sites in Singapore and Malaysia, even though sea-temperature data showed that the magnitude of thermal stress was similar at all sites,” Guest said.

“When we looked at archived sea-surface temperature data and past bleaching records we found that the locations that had a reversed hierarchy of susceptibility and less severe bleaching in 2010 also bleached during 1998. In contrast, the site that had a normal bleaching hierarchy and severe bleaching did not bleach in 1998.”

Guest warns that this new data, while encouraging, does not mean that reefs are immune to the effects of global warming. As this study shows, some reefs will not adapt to a changing climate as well as others. Furthermore, coral reefs continue to face other dangers, such as overfishing, disease and pollution.

On the Net:

HIV Cure Not Imminent But Progress Continues

The human immunodeficiency virus (HIV) is a complex creature that researchers have been trying to get the upper hand on for 30 years. It appears, however, that scientists are finally making real progress on several fronts in the search for a cure for HIV infection.

Early human trials of vaccines designed to prevent or treat infection with the difficult to target virus have proved disappointing. HIV is a “provirus” that is integrated into the DNA of a host cell, where it can remain latent or eventually reactivate.

“It has proven to be an incredibly formidable challenge to develop a vaccine,” said John Coffin, professor of molecular biology at Tufts University in Boston. “In recent years the pendulum is swinging back.”

Researchers are flushing hidden HIV from cells and changing out a person’s own immune system cells, making them resistant to HIV and then putting them back into the patient’s body, writes Deena Beasley for Reuters.

HIV, unfortunately, is especially resistant to treatments and procedures. It lies low in pools or reservoirs of latent infection that even powerful drugs cannot reach, scientists told the Conference on Retroviruses and Opportunistic Infections, one of the world’s largest scientific meetings on HIV/AIDS.

Dr. Kevin De Cock, director of the Center for Global Health at the US Centers for Disease Control and Prevention, says, “We need to get the virus to come out of the latent state, then rely on the immune system or some other treatment to kill the virus.”

The virus infects more than 33 million people worldwide and thanks to prevention measures and tests that detect HIV early, infection with the virus that causes AIDS is no longer the death sentence it used to be.

Antiviral drugs and treatments are expensive, and questions about side effects, drug resistance and ultimate lifespan make lifelong use of antiviral drugs a less-than-ideal solution. Scientific advances in molecular engineering, however, are allowing researchers to reconstruct the basic building blocks of HIV.

“Vaccines work by recognizing the surface of the virus and eliminating it,” said Dr. Dennis Burton, professor of immunology and microbial science at the Scripps Research Institute in La Jolla, California. “HIV is a highly evolved virus that has developed a surface incorporating features to avoid antibody responses,” including instability.

He presented research showing that “broadly neutralizing antibodies” can be designed to recognize and penetrate HIV, giving researchers new vaccine targets.

Phillip Gregory, chief scientific officer at Sangamo BioSciences, explained that because HIV is a “reverse transcriptase” virus, it is constantly mutating, making it very difficult for the body’s immune system to keep up.

Vaccines induce the production of antibodies that recognize and bind to very specific viral surface molecules, but the HIV molecules end up with a variety of subtle molecular differences on their surface.

“Eradication is a very tough theoretical sell,” Gregory continued. “What’s going to work is getting to the point that we could reasonably expect the immune system to get it totally under control.”

Sangamo is conducting two gene therapy trials in which infection-fighting white blood cells known as CD4 cells are removed, manipulated to knock out the CCR5 gene used by HIV to infect cells and then replaced.

“The change is permanent. Those cells and their progeny will go on to carry that genetic change,” said Geoffrey Nichol, head of research and development at Sangamo.

An earlier study of a single infusion of the gene therapy in six HIV-infected patients showed mixed results, eliminating the virus in one patient with a naturally occurring gene mutation.

On the Net:

Bees Seek Adventure, Studies Show

Humans aren’t the only species on Planet Earth to seek thrills and adventure. A new study published in the journal Science explains that honey bees are just as likely as human beings to seek an adrenaline high. Molecular pathways in the brain that are often associated with thrill-seeking were found in honey bees as well.

Honey bees are often thought to be diligent in the roles given to them by the hive, but this new study shows that they may have wants and desires other than serving the queen of the hive.

Gene Robinson, entomology professor and director of the Institute for Genomic Biology at the University of Illinois at Urbana-Champaign, led the study, which suggests that honey bees do not all take to the tasks given to them by the hive in the same way. The study shows that bees may have a personality, and therefore a preference in how they spend their lifetime.

In the study, Robinson says “In humans, differences in novelty-seeking are a component of personality. Could insects also have personalities?”

To test this hypothesis, Robinson and other researchers from Wellesley College and Cornell University studied two types of behavior that looked like thrill-seeking in honey bees: scouting for food and scouting for nests.

When a bee colony outgrows its nest, the hive divides and the bees begin to look for a new home. During this process, only 5 percent of the bees will begin hunting for a suitable home. These bees are called nest scouts, and research has shown that they are more than three times as likely as other bees to seek this kind of thrill again and become food scouts.

“There is a gold standard for personality research and that is if you show the same tendency in different contexts, then that can be called a personality trait,” Robinson said. Not only do certain bees exhibit signs of novelty-seeking, he said, but their willingness or eagerness to “go the extra mile” can be vital to the life of the hive.

Using microarray analysis, the research team compared the brains of the food- and nest-scouting bees with those of the non-scouting bees. This research showed thousands of differences in gene activity between the brains of the scouting bees and the non-scouting bees. The research team was surprised by the results.

“We expected to find some, but the magnitude of the differences was surprising given that both scouts and non-scouts are foragers,” Robinson said.

With the differences between the two types of bees identified, the researchers then went on to test their theories. By singling out the specific genes that distinguished scouting bees from non-scouts, the scientists were able to increase and decrease the activity of these genes in other bees. To test whether non-scouting bees could be persuaded to scout, the researchers treated the bees to increase or decrease specific brain chemicals linked to thrill-seeking.

The non-scouting bees began to exhibit scouting behavior, despite never having scouted before. Conversely, scouting bees given treatments acting in the opposite direction stopped scouting and behaved like non-scouting bees.

“Our results say that novelty-seeking in humans and other vertebrates has parallels in an insect,” Robinson said. “One can see the same sort of consistent behavioral differences and molecular underpinnings.”

These findings suggest the evolution of behavior relies on the different molecular pathways carved out by genetic coding.

“It looks like the same molecular pathways have been engaged repeatedly in evolution to give rise to individual differences in novelty-seeking,” Robinson said.

These researchers believe that the same sort of genetic coding is shared among humans, other vertebrates and insects.

On the Net:

Helmet Technology Adapts to Protect Athletes

Peter Suciu for RedOrbit

Today the helmet is as ubiquitous to football as the old pigskin ball (which actually isn’t pigskin these days). However, there was a time when helmets weren’t worn, nor were pads or other safety equipment. The game was at one point in the early 20th century considered so dangerous that there were calls to ban it. And it actually took President Theodore Roosevelt to step in and help make the game safer.

Of course now helmets are required in pro sports — including football, baseball, hockey, and cycling. The first two took a while to adopt them, with some competitors protesting the need for such protection. However, given what recent studies of head-related injuries have shown, it is surprising anyone would think of playing a contact sport or riding a bike at high speeds without a helmet.

Head protection has also made its way to the ski slopes, where skiers and snowboarders alike have begun to realize the need for this protection. This has in turn created a cottage industry for helmet makers, designers and innovators. The result is that the so-called brain buckets are probably doing a much better job of protecting the brains inside them.

Helmet design has thus come a long way. The earliest football helmets were made of leather and provided some protection, but the first plastic football helmet wasn’t introduced until 1939, by Riddell. It is also worth noting that Riddell’s headband and liner suspension system was so innovative that it found its way into the American M1 steel helmet used in World War II. This basic lining system was so successful that it was adopted by nations around the world, and only recently has it been replaced by more form-fitting and customizable systems.

Damaged Helmet Doesn’t Protect

One thing that Riddell has also learned is that a damaged helmet doesn’t do much good at protecting the wearer. Last fall the company, which is the official helmet maker of the National Football League, called for an in-season time out that would allow coaches, players and parents of youth athletes to inspect the equipment.

“Between regular practices and games, a player’s equipment experiences hours of use,” said Dan Arment, president of Riddell, in a statement. “Inspecting equipment throughout the season helps ensure it’s prepared to perform its job — protecting players on the field.”

What the eye can’t see and the hand can’t feel can still be a problem. To that end, the Brain Injury Association of Canada recently noted that the lifespan of a helmet can be vastly shortened by regular use. This applies to football, hockey and even alpine sport helmets. Testing revealed there is as much as a 30 percent increase in the risk of injury every time a helmet receives a significant impact.

Canadian-based Impact-Alert has stepped in to help the wearer — as well as coaches and trainers — determine how much potential damage a helmet may have taken. The company’s electronic sensor can be installed in a helmet to track the impacts it sustains and help the user determine if the helmet needs to be replaced.

Taking Hits Differently

In the old days of pro cycling, riders would sometimes say they didn’t need a helmet because “they knew how to fall.” If that sounds dubious, it is, and no one can truly know how they might hit the ground. But the truth is that most helmets have been designed with blunt impacts in mind.

Just as there is no way to know how to fall, not all impacts are the same. Research has shown that glancing and rotational impacts can cause injury as well.

One company, MIPS — or multi-directional impact protection system — has created a helmet that can protect the head from glancing blows or falls that aren’t exactly an outright impact. The solution here is to provide a helmet with a built-in system that allows the shell to shift under pressure or impact while the lining and internal pads remain in place, thus protecting against rotational forces.

The same concept is being utilized by the makers of the Vaco 12 helmet — which has earned the nickname “beanbag helmet” because the liner features tiny round beads that resemble those in a beanbag chair. The concept is based on the idea that each bead has a limited number of contact points, 12 in this case: each bead receives the kinetic energy of a hit and passes it to its 12 contact points, which pass it on further. The energy weakens accordingly, reducing both the impact forces and the rotational forces. As with other concepts in the works, this involves a solid outer shell and a snug internal liner.

Sport specific designs are also being considered.

Bauer, which also has a long history of helmet development, this year introduced a hockey-specific helmet that can manage multiple types of hits, including rotational-force impacts. It features a liner made of a light and pliable material that can dissipate extreme force.

Recycle When Reuse Isn´t Possible

One issue that remains with most athletic helmets is that the design essentially consists of composite materials, or at least a composite construction. This in turn means that the helmets are very difficult to recycle at the end of their use, a fact that is likely one reason helmets are seldom replaced.

Fortunately there are efforts under way to find methods to recycle helmets. Issues remain, notably that the foam materials used in cycling, snowboard, ski and other athletic helmets tend to be neither biodegradable nor photo-degradable. In other words, that used bike helmet could sit in a landfill for a long time without proper recycling efforts.

Some companies are considering this issue, and one solution from Pro-Tec is to create a helmet that could actually survive multiple impacts. This works because the helmet features multi-impact rebounding foam that can be reconstituted after a crash. This could allow the helmet to be used time and time again. So far the company has produced the Pro-Tec B2 Snow version, but as with the other solutions these could be rather hot for summer sports.

In the end it is probably safe to say that any helmet is better than no helmet at all, but it is important for the wearer not to expect too much from the helmet either. Ironically, a study conducted last year in Canada found that in some cases cyclists became more daring when wearing a helmet. It may be called a brain bucket, but common sense is still the final key.

On the Net:

Coca-Cola Responds To Recipe Change Reports

Lawrence LeBlond for RedOrbit.com

UPDATE: March 9, 2012 3:00 p.m. Eastern.

Reports from various media outlets stating that Coca-Cola was changing its formula to avoid adding cancer warning labels to its beverages are false.

In a statement posted on its website today, Coca-Cola said that it is in fact not changing its world-famous formula.

“The caramel color in all of our products has been, is and always will be safe, and The Coca-Cola Company is not changing the world-famous formula for our Coca-Cola beverages. Over the years, we have updated our manufacturing processes from time to time, but never altered our Secret Formula,” Coca-Cola said on its website.

The No. 1 soft drink maker said it has asked its caramel suppliers to modify their production process to reduce the amount of 4-MI in the caramel, but that this will not have any effect on the formula or the flavor of its products. “These modifications will not affect the color or taste of Coca-Cola,” it said.

The company added it is committed to the “highest quality and safety” of its products, and it will “continue to rely on sound, evidence-based science to ensure that our products are safe.”

———

A caramel coloring found in Pepsi, Coca-Cola and other popular soft drinks, which a consumer watchdog said contains high levels of a chemical linked to cancer in animals, has now been deemed safe by US regulators.

Despite this, PepsiCo and Coca-Cola both decided to adjust the formula of their caramel coloring across the US so they do not have to label their products with a cancer warning to comply with additional regulations enforced in California.

The recipe has already been changed for drinks sold in the Golden State and the companies said the changes will be expanded nationwide to streamline their manufacturing processes.

The Center for Science in the Public Interest (CSPI) reported earlier this week that it found unsafe levels of the chemical 4-methylimidazole (4-MI) — used to make caramel color — in cans of Coke, Pepsi, Dr. Pepper, and Whole Foods’ 365 Cola.

Coca-Cola confirmed that changes were being made at its facilities to keep within the law, but argued that the CSPI’s allegations about the dangers the ingredient posed to humans were false.

“The company has made the decision to ask its caramel suppliers to make the necessary manufacturing process modification, to meet the specific Californian legislation,” a spokesperson for Coca-Cola told Daily Mail Online. “Those modifications will not change our product.”

California added 4-MI to its list of carcinogens, after studies showed high levels of the chemical led to tumors in lab animals. However, the studies were inconclusive on whether the chemical was dangerous to humans or not.

“Caramel is a perfectly safe ingredient and this has been recognized by all European food safety authorities,” the spokesperson added. “The 4-MEI levels in our products pose no health or safety risks. Outside of California, no regulatory agency concerned with protecting the public´s health has stated that 4-MEI is a human carcinogen.”

“The caramel color in all of our ingredients has been, is and always will be safe. That is a fact,” the spokesperson said.

This was the CSPI’s second go-around with the Food and Drug Administration (FDA) over the dangers of 4-MI in soft drinks. It first petitioned the regulator last year, but the FDA has continually maintained that the claims were exaggerated.

“It is important to understand that a consumer would have to consume well over a thousand cans of soda a day to reach the doses administered in the studies that have shown links to cancer in rodents,” FDA spokesman Doug Karas told the Daily Mail’s Laura Pullman.

CSPI maintains that the regulator is allowing soft drink companies to needlessly expose millions of Americans to a chemical that is known to cause cancer.

“If companies can make brown food coloring that is carcinogen-free, the industry should use it,” CSPI’s executive director Michael Jacobson told Reuters.

The FDA said it will review the watchdog’s petition, but that the soft drinks in question were still safe.

CSPI took cans from stores in the Washington, DC, area and found that some had levels of 4-MI near 140 micrograms per 12-ounce can. California has a legal limit of 29 micrograms of 4-MI per 12 ounces, it noted.

The FDA’s limit for 4-MI in caramel coloring is 250 parts per million (ppm). Once the caramel is mixed into the soda it becomes diluted. According to calculations by Reuters, the highest levels of 4-MI found in the soft drinks were about 0.4 ppm, well within the safe zone.
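The Reuters estimate can be reproduced with simple unit conversion, assuming a 12-ounce can holds about 355 grams of liquid (an approximation, since the article does not give the can’s contents by mass):

```python
# Rough reproduction of the Reuters dilution estimate for 4-MI in soda.
micrograms_4mi = 140.0   # highest level CSPI reported per 12-ounce can
can_mass_grams = 355.0   # assumed: ~355 mL of soda at roughly 1 g/mL

can_mass_micrograms = can_mass_grams * 1_000_000
concentration_ppm = micrograms_4mi / can_mass_micrograms * 1_000_000

print(f"~{concentration_ppm:.2f} ppm 4-MI in the soda")  # ~0.39 ppm
print("FDA limit for 4-MI in the caramel coloring itself: 250 ppm")
```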

“This is nothing more than CSPI scare tactics,” the American Beverage Association (ABA) told Reuters in a statement. “In fact, findings of regulatory agencies worldwide … consider caramel coloring safe for use in foods and beverages.”

The ABA said its member companies will continue to use caramel coloring in certain products, but that adjustments were being made to meet California requirements. “Consumers will notice no difference in our products and have no reason at all for any health concerns,” the ABA said.

Diana Garza-Ciarlante, a representative for Coca-Cola, said its suppliers would modify their manufacturing process to reduce the levels of 4-MI, which is formed during the cooking process and as a result may be found in trace amounts in many foods.

“While we believe that there is no public health risk that justifies any such change, we did ask our caramel suppliers to take this step so that our products would not be subject to the requirement of a scientifically unfounded warning,” she said in an email to The Telegraph.

On the Net:

Experts Call For Long-Term Nuclear Meltdown Research Project

With the one-year anniversary of the accident at the Fukushima Daiichi nuclear power plant coming up this weekend, three US university professors are calling for a long-term study of how nuclear fuels respond to extreme environmental conditions.

University of Notre Dame Professor of Civil Engineering and Geological Sciences Peter C. Burns, University of Michigan Earth and Environmental Sciences Professor Rodney Ewing, and Alexandra Navrotsky of the University of California-Davis are calling for a long-term, national research program to study the issue, the Ann Arbor, Michigan-based university said in a Thursday press release.

On March 11, 2011, a magnitude 9.0 earthquake struck the island nation of Japan, triggering a tsunami that, in addition to causing widespread death and destruction, also cut off electricity to the nuclear power station, a second press release, this one from the South Bend, Indiana school, said. Those natural disasters acted as the catalyst for “a series of explosions which released large quantities of radioactive substances into the surrounding environment.”

Three of the plant’s six boiling-water reactors suffered partial core-melt events in the aftermath of the earthquake and tsunami, and several tons of seawater were needed to help cool the overheated reactors. That seawater wound up being directly discharged into the ocean and groundwater for nearly an entire month after the disaster, the University of Michigan said. Now, Burns, Ewing, and Navrotsky have penned a review article, published in this week’s edition of the journal Science, calling on US officials to be better prepared for similar events in the future.

“Reactors are designed to high safety standards, but on the anniversary of the accidents in Fukushima we are reminded that the forces of nature can produce unlikely events that can overcome the safety margins built into the reactor designs,” Burns said in a statement. “A reactor core meltdown releases radioactive material from the fuel. If containment systems fail, as they did at Fukushima, radioactive material is then released into the environment.”

“At Fukushima, a large amount of radioactive material was released when seawater was pumped onto the reactor cores that later leaked into the ocean and groundwater,” he added. “Little is known about how radioactive fuel in a reactor accident interacts with water and releases radioactive material. This paper examines what is known, points to serious shortcomings in our understanding, and proposes a course of research to address the problem.”

The Notre Dame professor and his colleagues believe that some of the information can be gathered by staging simulated core-melt events using fuel analogs containing nonradioactive isotopes. However, they also assert that some of the studies will require the use of actual radioactive materials, making them difficult, costly, and potentially dangerous but nonetheless “essential to reduce the risk associated with increasing reliance on nuclear energy.”

“Nuclear power reactors, of which there are currently 440 operating worldwide, provide about 16 percent of the world’s electricity. They also produce extremely radioactive used fuel,” Burns said. “A growing reliance on nuclear energy in the world over the coming decades will make serious reactor accidents more likely, although they will remain rare events. To better protect humanity when accidents do occur, we need a much improved understanding of how water interacts with damaged fuel, and how the radioactive material is released and transported in water.”

On the Net:

Maternal Obesity May Influence Brain Development Of Premature Infants

Maternal obesity may contribute to cognitive impairment in extremely premature babies, according to a new study by researchers at Wake Forest Baptist Medical Center.

“Although in the past decade medical advances have improved the survival rate of babies born at less than seven months, they are still at very high risk for mental developmental delays compared with full-term infants,” said Jennifer Helderman, M.D., assistant professor of pediatrics at Wake Forest Baptist and lead author of the study. “This study shows that obesity doesn’t just affect the mother’s health, but might also affect the development of the baby.”

Published in the March issue of the journal Pediatrics, the study looked at 921 infants born before 28 weeks of gestation between 2002 and 2004 at 14 participating institutions. The researchers assessed the babies’ placentas for infection and other abnormalities, interviewed the mothers and reviewed their medical records. At age 2, the children’s cognitive skills were evaluated using the Mental Development Index (MDI) portion of the Bayley Scales of Infant Development, a commonly used measure.

The scientists found that both maternal obesity and lack of high school education were associated with impaired early cognitive function, as was pre-term thrombosis (blood clot) in the placenta.

“We weren’t really surprised by the socioeconomic factors because it has been repeatedly shown that social disadvantage predicts worse infant outcomes,” Helderman said. “However, obesity is of particular interest because it is becoming more prevalent and it is potentially modifiable during the pre-conception period and pregnancy.”

Obesity has been linked to inflammation, and inflammation can damage the developing brain, Helderman said. What isn’t known is whether the obesity-related inflammation in the mother is transmitted to the fetus.

“Few studies have addressed prenatal risk factors of cognitive impairment for infants born this prematurely. The long-term goal is to use information from studies like ours to develop treatments that prevent cognitive impairment in extremely premature babies,” Helderman said.

Helderman’s colleague, Michael O’Shea, M.D., section head of neonatology at Wake Forest Baptist, is currently conducting a study that follows these same babies into mid-childhood to determine long-term cognitive problems.

More than 30,000 extremely premature babies are born each year in the United States.

On the Net:

Cannabinoid 2 Receptors Regulate Impulsive Behavior

A new study led by the Neuroscience Institute of Alicante reveals how manipulating the endocannabinoid system can modulate high levels of impulsivity. High impulsivity is a central problem in psychiatric illnesses such as schizophrenia, bipolar disorder and substance abuse.

Spanish researchers have for the first time proved that the CB2 receptor, which has modulating functions in the nervous system, is involved in regulating impulsive behavior.

“Such a result proves the relevance that manipulation of the endocannabinoid system can have in modulating the high levels of impulsivity present in a wide range of psychiatric and neurological illnesses,” Jorge Manzanares Robles, a scientist at the Alicante Neuroscience Institute and director of the study, explained to SINC.

Carried out in mice, the study suggests the possibility of undertaking future clinical trials using drugs that act selectively on CB2, thus avoiding the psychoactive effects that derive from manipulating the CB1 receptor, whose role in impulsivity has already been proven.

However, the authors of the study published in the British Journal of Pharmacology remain cautious. Francisco Navarrete, lead author of the study, states that “it is still very early to be able to put forward a reliable therapeutic tool.”

The study assessed the actions of two cannabinoid drugs that stimulate or block CB2 in a mouse strain showing high levels of impulsivity. The scientists then analyzed whether these drugs were capable of modulating impulsive behavior, as well as the cerebral modifications associated with this change in behavior.

The authors concluded that modulating CB2 receptor activity reduced impulsive behavior in the mice, depending on the administration pattern of each drug. Furthermore, the gene expression levels of CB2 tended to return to normal, approaching those of strains with low impulsivity.

The Endocannabinoid System

The Endocannabinoid System mainly comprises two receptors (CB1 and CB2), two endogenous ligands and two metabolism enzymes. It regulates many aspects of embryonic development and is involved in many homeostatic mechanisms.

Although it was thought that CB2 only regulated immune response at the peripheral level, a study published in the journal Science in 2005 showed that it is also found in the brain under normal conditions. Since then many authors have linked it to the regulation of emotional behavior and cognitive functions.

For example, the same group of Spanish researchers has contributed greatly to implicating this receptor in the regulation of anxiety and depression. Furthermore, other studies have demonstrated how its altered function is linked to increased chances of becoming depressed or anxious, or of taking drugs.

Virtue or defect?

Impulsivity is a personality trait characterized by behavioral actions that lack forethought or in which the subsequent consequences are not considered. The authors point out that this is "a normal behavior that allows us as human beings to adapt to our surroundings under certain circumstances that require an immediate reaction."

Nonetheless, such behavior can cause a disproportionate response and lead to a pathological state. A multitude of psychiatric illnesses are characterized by a high level of impulsivity. One of these is substance abuse, which is extremely problematic for society in general.

On the Net: