Human Stem Cells Lead To Corneal Regrowth, Improved Vision In Mice

Rebekah Eliason for redOrbit.com – Your Universe Online

In an exciting new study, researchers have discovered a way to collect cells for the regeneration of corneal tissue – the clear membrane covering the pupil that directs light into the back of the eye.

The research team from Boston reported that purified human stem cells were used to improve long-term vision in mice. Currently, the team is waiting for FDA-approval to begin patient clinical trials.

This collaborative research effort was led by Natasha Frank, MD, and Markus Frank, MD, using work done at Massachusetts Eye and Ear/Schepens Eye Research Institute, Boston Children’s Hospital, Brigham and Women’s Hospital, and the US Department of Veterans Affairs Boston Healthcare System.

In some people, blood vessels grow onto the cornea, clouding vision in a condition known as corneal blindness. The condition arises when limbal stem cells, located in the limbus at the edge of the cornea, are destroyed by injury, infection or autoimmune disease. Outcomes are inconsistent, but limbal stem cell transplants from an uninjured eye or a deceased organ donor have shown promising results.

“Previously published work on limbal epithelial cell grafts showed that when more than three percent of transplanted cells were stem cells, transplants were successful—less than three percent and the transplants were not,” said HSCI Affiliated Faculty member Natasha Frank.

“The question in the field then was whether we could enrich the limbal stem cells. But until this study there was no specific marker that could isolate these cells,” added Frank.

In this study, the researchers identified the protein ABCB5 as a biological marker located on the surface of limbal stem cells. To purify only the cells responsible for successful limbal transplants, the team developed an antibody against ABCB5 that could tag limbal stem cells within a general sample of human limbal cells.

“ABCB5 allows limbal stem cells to survive, protecting them from apoptosis [programmed cell death],” said Markus Frank. “The mouse model allowed us for the first time to understand the role of ABCB5 in normal development, and should be very important to the stem cell field in general,” added Natasha Frank.

Researchers successfully transplanted purified limbal stem cells from adult humans into mice with corneal blindness. Five weeks and again thirteen months after transplantation, the team checked whether the corneas had regrown, and found that the mouse corneas were normal, with the same thickness and protein expression as corneas in healthy mice.

“I think a very exciting part of the study is that even though there is a lot of evidence that adult stem cells contribute to tissue regeneration, what we see is basically the first evidence that you can take adult stem cells and regrow the organ that’s been damaged,” Frank said.

In future research, the team plans to search for a way to replicate limbal stem cells, which would allow a single donor eye to produce enough transplantable cells to help several different patients. In addition, they will partner with biopharmaceutical companies to produce commercial quantities of the ABCB5 antibody for humans, and they are planning to further collaborate with co-author Victor Perez, MD, a professor of ophthalmology at the Bascom Palmer Eye Institute in Miami, to move the techniques used in the current study into clinical trials.

“This finding will now make it much easier to restore the corneal surface. It’s a very good example of basic research moving quickly to translational application,” said Bruce Ksander, PhD, an associate scientist at Schepens Eye Research Institute.

This study was published in the journal Nature.

—–


Early Humans May Have Evolved Bigger Brains By Eating Insects

April Flowers for redOrbit.com – Your Universe Online
A new study, led by Washington University in St. Louis, suggests that seasonal diet changes may have played a role in the development of bigger brains and higher-level cognitive functions in human ancestors and other primates. The findings, published in the Journal of Human Evolution, show that figuring out how to survive on a lean-season diet of hard-to-reach ants, slugs and other bugs might have been the catalyst for early tool use.
“Challenges associated with finding food have long been recognized as important in shaping evolution of the brain and cognition in primates, including humans,” said Amanda D. Melin, PhD, assistant professor of anthropology in Arts & Sciences. “Our work suggests that digging for insects when food was scarce may have contributed to hominid cognitive evolution and set the stage for advanced tool use.”
Melin and her colleagues, biologist Hilary C. Young and anthropologists Krisztina N. Mosdossy and Linda M. Fedigan, all from the University of Calgary, studied Capuchin monkeys in Costa Rica for five years. The evidence from their findings supports the evolutionary theory that links the development of sensorimotor intelligence (SMI) skills to the creative challenges of foraging for insects and other buried, embedded or hard-to-obtain foods. SMI skills are those needed for increased manual dexterity, tool use, and innovative problem solving.
The team believes that their study is the first to provide detailed field evidence showing how seasonal food changes influence the foraging patterns of Capuchin monkeys in the wild. Many human populations consume embedded insects on a seasonal basis, leading the team to suggest that this practice played a vital role in human evolution.
“We find that capuchin monkeys eat embedded insects year-round but intensify their feeding seasonally, during the time that their preferred food – ripe fruit – is less abundant,” Melin said. “These results suggest embedded insects are an important fallback food.”
A 2009 study published in the American Journal of Physical Anthropology defined fallback foods as a term “to denote resources of relatively poor nutritional quality that become particularly important dietary components during periods when preferred foods are scarce.”
Other studies have shown that these foods help to shape the evolution of primate body forms. In primate species whose fallback foods are mainly vegetation, these forms include strong jaws, thick teeth and specialized digestive systems.
The current study presents evidence indicating that fallback foods can also play an important role in shaping brain evolution in primates that fall back on insect-based diets. The strongest influence of this sort can be observed in primates that have evolved in habitats with wide seasonal variations — the wet-dry cycles in some South American forests, for example.
“Capuchin monkeys are excellent models for examining evolution of brain size and intelligence. For their small body size, they have impressively large brains,” Melin said. “Accessing hidden and well-protected insects living in tree branches and under bark is a cognitively demanding task, but provides a high-quality reward: fat and protein, which is needed to fuel big brains.”
Not all Capuchin monkeys have the same abilities with tools, however, and Melin believes that her research reveals why.
There are two major groups of Capuchin monkeys: the gracile (untufted, genus Cebus) and the robust (tufted, genus Sapajus). Genetic analysis of mitochondrial chromosomes suggests that the split between the two groups occurred millions of years ago, during the late Miocene epoch. One of the most observable differences between the two lineages is their variation in tool use. While the Cebus lineage is known for clever food-foraging tricks, such as banging snails or fruits against branches, its members are not adept tool users like their Sapajus cousins.
Melin said that the explanation could be found in habitat differentiation. Cebus capuchins have historically and consistently occupied tropical rainforests. In contrast, the Sapajus capuchins spread from their original Atlantic rainforest habitat into drier, more temperate and seasonal habitats.
“Primates who extract foods in the most seasonal environments are expected to experience the strongest selection in the ‘sensorimotor intelligence’ domain, which includes cognition related to object handling,” Melin said. “This may explain the occurrence of tool use in some capuchin lineages, but not in others.”
“We predict that the last common ancestor of Cebus and Sapajus had a level of SMI more closely resembling extant Cebus monkeys, and that further expansion of SMI evolved in the robust lineage to facilitate increased access to varied embedded fallback foods, necessitated by more intense periods of fruit shortage,” she said.
Modern examples of this behavior exist, most notably the seasonal consumption of termites by chimpanzees. The chimps’ use of tools to extract the protein-rich termites is an important survival technique in a harsh environment.
How does this research affect our understanding of hominids?
Using the fossil record to decode the extent of seasonal dietary variation is challenging. However, stable isotope analysis has suggested seasonal variation in diet for at least one South African hominin (Paranthropus robustus). Such research has also suggested that early human diets may have included a wide range of extractable foods, including termites, plant roots and tubers.
Even today, humans frequently consume insects as a seasonally important food when other animal foods are scarce.
—–

Nuclear Transfer Proven An Effective Method In Stem Cell Production

redOrbit Staff & Wire Reports – Your Universe Online
A process known as “somatic cell nuclear transfer” is far more accurate than the standard reprogramming technique when it comes to coaxing embryonic-like stem cells out of human skin tissue, according to new research appearing in Tuesday’s edition of the journal Nature.
Scientists from Oregon Health & Science University (OHSU), the University of California-San Diego (UCSD) School of Medicine and the Salk Institute for Biological Studies created stem cells using two different methods: nuclear transfer, which involves moving genetic material from a skin cell into an empty egg cell, and a more traditional method in which activating a small number of genes reverts adult cells back to an embryonic state.
Experts believe that stem cell therapies could someday be used to replace human cells damaged through injury or illness, including spinal cord injuries, diabetes, Parkinson’s disease and multiple sclerosis. Human embryonic stem cells (ES cells), which are cells cultured from discarded embryos, are viewed by scientists as the “gold standard” of the field, and the new study reports that somatic cell nuclear transfer (SCNT) more closely resembled ES cells.
This marks the first time that researchers had directly compared the SCNT method with the induced pluripotent stem cell (iPS cell) technique, and in a statement, co-senior author and UCSD assistant professor in reproductive medicine Dr. Louise Laurent explained that the nuclear transfer ES cells were “more completely reprogrammed” and had “fewer alterations in gene expression and DNA methylation levels” than the iPS cells.
Access to actual human embryonic stem cells (hESCs) has been limited in the US due to ethical and logistical issues, forcing researchers to devise other methods to create stem cells, the study authors explained. Typically, that means creating iPS cells by taking adult cells and adding in a mixture of genes that regress those cells to a pluripotent stem-cell state. Those cells can then be coaxed into cells resembling those found in the heart or brain.
Over the past year, however, an OHSU-led team of researchers has built upon somatic cell nuclear transfer (the same technique used for cloning organisms) to transplant the DNA-containing nucleus of a skin cell into an empty human egg. Once completed, the combination naturally matures into a group of stem cells.
For the first time, the OHSU, UCSD and Salk Institute researchers conducted a direct, in-depth comparison of the two different methods. They created four nuclear transfer ES cell lines and seven iPS cell lines using the same skin cells as the donor genetic material source, and then compared them to a pair of standard human ES lines.
A battery of standard tests revealed that all 13 cell lines were pluripotent. However, when the researchers used powerful genomic techniques to take a closer look at DNA methylation (a biochemical process responsible for turning genes on or off) and the gene expression signatures of each cell line, they discovered that in both characteristics the nuclear transfer ES cells more closely resembled true ES cells than the iPS cells did.
“If you believe that gene expression is important, which we do, then the closer you get to the gene expression patterns of embryonic stem cells, the better. Right now, nuclear transfer cells look closer to the embryonic stem cells than do the iPS cells,” co-senior author Joseph R. Ecker, director of the Salk Institute’s Genomic Analysis Laboratory and co-director of the Center of Excellence for Stem Cell Genomics, said in a statement.
Despite the results, Ecker explained that he did not expect to see a large increase in the use of nuclear transfer protocols – in part because the method falls into a category that is restricted for federal funding purposes. However, he believed that their findings could be adapted to improve the protocols used in the production of iPS cells, provided scientists can determine exactly what component of an egg helps spur the growth of pluripotent stem cells.
—–

Even Brief Moments Of Mindfulness Meditation Can Help Alleviate Stress

April Flowers for redOrbit.com – Your Universe Online
As people become more aware of how stress affects their lives in almost every area, mindfulness meditation has risen in popularity as a way to improve both mental and physical health. Researchers have investigated the usefulness of mindfulness on everything from gene expression to cancer treatment, but most of these studies focus on lengthy, weeks-long training programs.
Carnegie Mellon University researchers took a different direction in their investigation into the benefits of mindfulness meditation practice. The researchers investigated the effects of brief mindfulness meditation on the ability to be resilient under stress. Their findings, published in Psychoneuroendocrinology, reveal that brief mindfulness meditation practice — 25 minutes a day for three consecutive days — alleviates psychological stress.
“More and more people report using meditation practices for stress reduction, but we know very little about how much you need to do for stress reduction and health benefits,” said J. David Creswell, associate professor of psychology in the Dietrich College of Humanities and Social Sciences.
The research team recruited 66 healthy individuals between the ages of 18 and 30 for the three-day experiment. The recruits were divided into two groups. The first group went through a brief, three-day mindfulness meditation program: for 25 minutes a day, participants were given breathing exercises to help them monitor their breath and learn to focus on the present moment. The remaining participants completed a matched three-day cognitive training program, during which they were asked to critically analyze poetry in an effort to enhance problem-solving skills.
After finishing their final exercises, both groups completed stressful math and speech tasks in front of stern-faced evaluators. The participants self-reported their stress levels during the tasks, then provided saliva samples for cortisol measurements. Cortisol is commonly called the “stress hormone.”
Participants who received the brief mindfulness meditation training reported reduced stress perceptions, suggesting that the practice fostered psychological resilience to stress. However, the mindfulness participants also showed greater cortisol reactivity than the critical-thinking participants.
“When you initially learn mindfulness meditation practices, you have to cognitively work at it – especially during a stressful task,” Creswell said in a recent statement. “And, these active cognitive efforts may result in the task feeling less stressful, but they may also have physiological costs with higher cortisol production.”
The researchers are focusing their investigations on the possibility that mindfulness can become more automatic and easy to use with long-term mindfulness training. They believe that this will result in reduced cortisol reactivity.
—–

Dangers Of Hyperthermia During The Summer For Older Adults

National Institutes Of Health
During the summer, it is important for everyone, especially older adults and people with chronic medical conditions, to be aware of the dangers of hyperthermia. The National Institute on Aging (NIA), part of the NIH, has some tips to help mitigate some of the dangers.
Hyperthermia is an abnormally high body temperature caused by a failure of the heat-regulating mechanisms in the body to deal with the heat coming from the environment. Heat stroke, heat syncope (sudden dizziness after prolonged exposure to the heat), heat cramps, heat exhaustion and heat fatigue are common forms of hyperthermia. People can be at increased risk for these conditions, depending on the combination of outside temperature, their general health and individual lifestyle.
Older people, particularly those with chronic medical conditions, should stay indoors, preferably with air conditioning or at least a fan and air circulation, on hot and humid days, especially when an air pollution alert is in effect. Living in housing without air conditioning, not drinking enough fluids, not understanding how to respond to the weather conditions, lack of mobility and access to transportation, overdressing and visiting overcrowded places are all lifestyle factors that can increase the risk for hyperthermia.
People without air conditioners should go to places that do have air conditioning, such as senior centers, shopping malls, movie theaters and libraries. Cooling centers, which may be set up by local public health agencies, religious groups and social service organizations in many communities, are another option.
The risk for hyperthermia may increase from:
  • Age-related changes to the skin such as poor blood circulation and inefficient sweat glands
  • Alcohol use
  • Being substantially overweight or underweight
  • Dehydration
  • Heart, lung and kidney diseases, as well as any illness that causes general weakness or fever
  • High blood pressure or other health conditions that require changes in diet. For example, people on salt-restricted diets may be at increased risk. However, salt pills should not be used without first consulting a physician.
  • Reduced perspiration, caused by medications such as diuretics, sedatives, tranquilizers and certain heart and blood pressure drugs
  • Use of multiple medications. It is important, however, to continue to take prescribed medication and discuss possible problems with a physician.

Heat stroke is a life-threatening form of hyperthermia. It occurs when the body is overwhelmed by heat and is unable to control its temperature. Heat stroke occurs when someone’s body temperature increases significantly (above 104 degrees Fahrenheit) and one or more of the following symptoms are present: strong rapid pulse, lack of sweating, dry flushed skin, mental status changes (like combativeness or confusion), staggering, faintness or coma. Seek immediate emergency medical attention for a person with any of these symptoms, especially an older adult.
If you suspect someone is suffering from a heat-related illness:

  • Get the person out of the heat and into a shady, air-conditioned or other cool place. Urge the person to lie down.
  • If you suspect heat stroke, call 911.
  • Apply a cold, wet cloth to the wrists, neck, armpits and/or groin. These are places where blood passes close to the surface of the skin, and the cold cloths can help cool the blood.
  • Help the individual to bathe or sponge off with cool water.
  • If the person can swallow safely, offer fluids such as water or fruit and vegetable juices, but avoid alcohol and caffeine.

The Blind Outperform Sighted People When Using Haptic Technology

redOrbit Staff & Wire Reports – Your Universe Online

New research in the burgeoning field of haptic technology at the University of California-Berkeley has found that people who are blind or visually impaired tend to outperform their sighted counterparts at navigating tactile displays – especially when they use both hands and multiple fingers to find their way around.

The reason that blind subjects outperformed people with normal-range eyesight when it comes to using haptic (or tactile) technology is that they’ve developed superior cognitive navigation strategies, claimed Valerie Morash, a doctoral student in psychology at the university and the lead author of a new paper published Tuesday in the online edition of the journal Perception.

“Most sighted people will explore these types of displays with a single finger. But our research shows that this is a bad decision. No matter what the task, people perform better using multiple fingers and hands,” Morash explained in a statement.

“We can learn from blind people how to effectively use multiple fingers, and then teach these strategies to sighted individuals who have recently lost vision or are using tactile displays in high-stakes applications like controlling surgical robots,” she added.

Scientists have been investigating for decades how receptors on a person’s fingertips communicate information to the brain, the university said. Now, researchers at several multimedia companies (including Disney) have begun developing different types of tactile interfaces, which use vibrations and either electrostatic or magnetic feedback to allow tablet computer and mobile device users to navigate or experience what things feel like.

Morash and colleagues from UC Berkeley and the Smith-Kettlewell Eye Research Institute in San Francisco recruited a total of 28 participants – 14 blind adults and 14 who were normally sighted but blindfolded for the course of the study – and had them complete a series of different tasks using a tactile map.

For instance, they were asked to use various hand and finger combinations to find landmarks, determine if a road looped around, or similar challenges. They found that both blind and sighted participants performed better when using both hands and several fingers, though the visually impaired subjects were on average 50 percent faster at completing the tasks (and even quicker when using both hands and all of their fingers).

Specifically, the study authors reported that tasks requiring line-tracing were faster when fingers were added to a hand that was already in use, and sometimes when added to the second hand, and that using both hands and multiple fingers allowed participants to complete local and global search tasks more quickly.

Tasks involving distance comparison were faster when multiple fingers were used, but not when two hands were used. Furthermore, the researchers found that participants were able to move faster in a straight line when using multiple fingers.

In all, their findings were found to support the notion that tactile systems perform best when they are capable of exploiting the independence of multiple fingers, and that blind participants benefitted more from two hands or multiple fingers than their sighted counterparts. This conclusion “indicates that the blind participants have learned, through experience or training, how to best take advantage of multiple fingers during haptic tasks,” the authors wrote.

“As we move forward with integrating tactile feedback into displays, these technologies absolutely need to support multiple fingers,” noted Morash. “This will promote the best tactile performance in applications such as the remote control of robotics used in space and high-risk situations, among other things.”

—–


Product Engineering May Draw Inspiration From Animal Pee Study

[ Watch the Video: Water Experiment 1: Empty In The Same Time Span ]

Gerard LeBlond for redOrbit.com – Your Universe Online

A recent study from the Georgia Institute of Technology on how quickly animals urinate reveals that previous information on urinary flow dynamics may not “hold water” after all.

David Hu, an assistant professor at Georgia Tech, led the study of how quickly 32 different animals urinate. The findings indicate that animals take about the same time to urinate no matter how large their bladders are. For instance, a cat’s bladder holds about 0.17 ounces of urine, while an elephant’s bladder holds about 609 ounces, yet both take about 20 seconds to empty. The research also revealed that all animals weighing more than 6.6 pounds take about the same time to urinate.

“It’s possible because larger animals have longer urethras. The weight of the fluid in the urethra is pushing the fluid out. And because the urethra is long, flow rate is increased,” Hu stated.

According to the researchers, gravity has an impact on how fast urine flows in animals. Larger animals have longer urethras, allowing them to empty their bladders in jets. An elephant’s urethra is about 3.5 feet long, which allows it to urinate at about 13 feet per second. In smaller animals, gravity has only a minimal effect on urination.

“If its urethra were shorter, the elephant would urinate for a longer time and be more susceptible to predators,” Hu explained.

Patricia Yang, a graduate student who was part of the research team, explained that smaller animals “urinate in small drops because of high viscous and capillary forces. It’s like peeing in space. Mice and rats go in less than two seconds. Bats are done in a fraction of a second.”

The research team observed 16 animals urinating at the zoo, then watched YouTube videos of cows, horses, dogs and other animals relieving themselves. As they watched, the team realized that their findings could help engineers design products.

“It turns out that you don’t need external pressure to get rid of fluids quickly. Nature has designed a way to use gravity instead of wasting the animal’s energy,” Hu said.

Products such as water tanks, hydration backpacks and fire hoses could be designed and manufactured more efficiently by drawing on this research. In one demonstration, the team showed that a teacup, a quart container and a gallon container of water could all be emptied in the same amount of time by fitting them with connected tubes of different lengths. In a second experiment, three cups filled with the same amount of water were fitted with tubes of different lengths; the cups with the longer tubes emptied faster.

“Nature has shown us that no matter how big the fire truck, water can still come out in the same time as a tiny truck,” Hu added.
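
The gravity-driven reasoning behind these demonstrations can be sanity-checked with rough, back-of-the-envelope arithmetic. The sketch below is purely illustrative and assumes simple Torricelli-style, gravity-only flow (it ignores bladder pressure, urethra diameter and viscosity, which the actual study modeled); the urethra length, bladder volumes and 20-second emptying time are the figures quoted above.

```python
import math

G = 9.81            # gravitational acceleration, m/s^2
FT_PER_M = 3.281    # feet per metre

def exit_speed_ft_per_s(column_length_ft):
    """Torricelli-style estimate: fluid speed driven only by the weight of
    a liquid column of the given length (v = sqrt(2 * g * L))."""
    length_m = column_length_ft / FT_PER_M
    return math.sqrt(2 * G * length_m) * FT_PER_M

# An elephant urethra of ~3.5 ft gives an exit speed in the same ballpark
# as the ~13 ft/s reported in the article.
print(f"elephant exit speed ~ {exit_speed_ft_per_s(3.5):.0f} ft/s")

# Average flow rates implied by the quoted bladder volumes and a shared
# ~20-second emptying time: the elephant's flow is thousands of times larger.
for animal, volume_oz in [("cat", 0.17), ("elephant", 609.0)]:
    print(f"{animal}: ~{volume_oz / 20:.2f} oz per second")
```

In both the animal data and the cup demonstration, a longer column of fluid yields a faster exit speed, which is why the cups fitted with longer tubes drained more quickly.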

—–


Mosquitoes Found To Have An Aromatic Attraction To Malaria Hosts

Brett Smith for redOrbit.com – Your Universe Online

A new study published in the Proceedings of the National Academy of Sciences has found that the parasite responsible for malaria alters the scent of its mammalian host – causing it to become more attractive to the mosquitoes that spread the disease from host to host.

The malaria parasite can be propagated only by mosquitoes. The flying insect ingests the parasite with infected blood, and the next generation of parasites develops within the mosquito’s gut. These burgeoning parasites then journey to the mosquito’s salivary glands and are transferred to a new host during its next blood meal.

The new study showed mosquitoes are particularly attracted to mice infected with malaria, even when the mice didn’t show symptoms.

“Malaria-infected mice are more attractive to mosquitos than uninfected mice,” said study author Mark Mescher, associate professor of entomology at Penn State. “They are the most attractive to these mosquito vectors when the disease is most transmissible.”

In the study, researchers from Penn State and the Swiss Institute of Technology in Zurich (ETH Zurich) examined chemicals emitted by infected mice. They discovered that the chemicals mosquitoes like the most were predominantly released during the highly infectious phase of the disease – 13 to 20 days after infection.

“There appears to be an overall elevation of several compounds that are attractive to mosquitoes,” said Consuelo De Moraes, a professor of biocommunication and entomology at ETH Zurich.

The international team was able to identify four specific chemical compounds that were highly attractive for the mosquitoes.

The study team noted that although afflicted individuals smell more attractive to mosquitoes, they generally do not develop a distinct new body odor; rather, existing odor cues appear to be amplified. They added that the malaria pathogen could also have negative effects on mosquitoes.

“Since mosquitoes probably don’t benefit from feeding on infected people, it may make sense for the pathogen to exaggerate existing odor cues that the insects are already using for host location,” Mescher said.

Surprisingly, the researchers also found that when afflicted mice no longer had symptoms, they still released the mosquito-attracting chemicals.

While the results of the mice-based study can’t be directly translated to humans, the researchers said they were interested in focusing future research on how to prevent the spread of malaria from asymptomatic individuals.

“We were most interested in individuals that are infected with the malaria parasite but are asymptomatic,” De Moraes said. “Asymptomatic people can still transmit the disease unless they are treated, so if we can identify them we may be able to better control the disease.”

“If this holds true in humans, we may be able to screen humans for the chemical scent profile using this biomarker to identify carriers,” Mescher added.

Another malaria study published in March found warming temperatures will allow mosquitoes spreading the disease to travel into higher altitudes. The study team said without improved monitoring and control efforts, malaria cases will increase significantly as the planet’s temperatures increase in the years ahead – spreading the disease to areas of Africa and South America that have traditionally faced a low risk of infection.

—–


The History Of Virtual Reality

Virtual reality (also known as immersive multimedia) is a computer-simulated environment that mimics or simulates real-world environments, as well as simulating physical presence in real or imagined worlds. VR technology has traditionally been a virtual sight- and perhaps sound-based experience, but has the potential to recreate other sensory experiences, such as virtual taste, smell, and even touch.

As for touch (haptic technology), some systems have already included such experiences in their designs, including vibrating controllers for video gaming, allowing players to get a feel of what is happening in a game. Some advanced haptic systems now include tactile information, generally referred to as force feedback in medical, military and gaming applications.

Today, VR technology is big business in many areas of the tech, medical and military communities. However, the technology is not a new one, and its roots can be traced back more than a century. In fact, the earliest precursor of virtual reality can be traced to the 1860s, when 360-degree art in the form of panoramic murals began to appear. While that may be a very archaic take on the term virtual reality, the technology has only improved from there.

Simulators, which today are almost everywhere, were introduced in the 1920s. Early vehicle simulators may have helped bring about more futuristic systems that now include flight simulators, golfing simulators, spacecraft simulators, as well as simulation-based video gaming. In fact, Thomas A. Furness III would develop the first visual flight simulator for the US Air Force in 1966.

In the 1930s, Stanley G. Weinbaum described the first goggle-based VR system in his story Pygmalion’s Spectacles. The fictional device played back holographic recordings of imagined experiences, including smell and touch. By the 1950s, Morton Heilig was describing an “Experience Theatre” that could encompass all senses, drawing the viewer into the activity taking place onscreen or onstage. He developed the Sensorama prototype in 1962 and produced five short films to be displayed on the device, engaging multiple senses (sight, sound, smell, and touch).

Ivan Sutherland, with the help of his student Bob Sproull, created what is considered the first virtual reality head-mounted display (HMD) system in 1968. This device was primitive both in terms of user interface and realism and the HMD was so heavy that it had to be suspended from the ceiling.

In 1977, MIT researchers developed the Aspen Movie Map, which was a crude virtual simulation of Aspen, Colorado. The primitive system allowed users to wander the streets of the city in one of three modes: summer, winter, and polygons. The seasonal modes were based on a multitude of photographs taken of virtually every aspect of the city in both summer and winter. The polygons mode was a simple 3D model of the city.

The virtual reality concept began to become mainstream in the 1980s, popularized by Jaron Lanier, one of the modern pioneers in the field. He founded the company VPL Research in 1985, which developed some of the influential “goggles and gloves” systems of the decade.

In 1991, Antonio Medina, an MIT graduate and NASA scientist, designed a VR system that would “drive” Mars rovers in apparent real time despite the substantial delay of Mars-Earth-Mars signals.

Virtual reality gaming started to take shape in the early 90s as well. Jonathan Waldern, a VR PhD researcher, launched “Virtuality” in 1992, the first mass-produced, networked, multiplayer VR location-based entertainment system. The system, developed primarily in the UK, logged more than 42 million plays in more than 17 countries. The Virtuality system featured headsets and exoskeleton gloves that gave users one of the first immersive VR experiences.

Nintendo jumped into the VR gaming scene in 1995 with the HMD gaming system Virtual Boy. Other head mounted displays for gaming released in the 90s include Virtual I-O’s iGlasses; Cybermaxx, developed by Victormaxx; and Forte Technologies’ VFX-1.

Virtual reality gaming was also big in the 90s in arcades around the world. These virtuality systems encompassed aspects of racing and shooting games, many of which are still thriving today. These early arcade games, however, were simplified and only simulated certain aspects of reality. Modern gaming has since brought VR technology to a whole new level with the inception of devices such as the Wii Remote, Microsoft’s Kinect and the PlayStation Move, all of which track and send motion input from players to the game console.

The latest craze in VR gaming comes from a new high-field-of-view headset designed specifically for gaming, the Oculus Rift. The headset provides roughly a 110-degree field of view, absolute head-orientation tracking and a USB interface, and is aimed at a resolution of 1920×1080 or greater. The company behind the Oculus Rift, Oculus VR, was purchased by Facebook in spring 2014 for $2 billion.

Sony announced at the Game Developers Conference in March 2014 that it was developing a rival system to the Oculus Rift – the prototype has been dubbed Project Morpheus.

Virtual reality has also been popularized in pop culture, being portrayed in books, music, TV and film. Some of the more notable films that touch on VR include: Tron (1982), Total Recall (1990, as well as the 2012 remake), The Lawnmower Man (1992), Virtuosity (1995), Strange Days (1995), The Matrix (1999, and its sequels), Vanilla Sky (2001), Inception (2010), and Ender’s Game (2013).

Today, VR technology is being widely implemented, helping medical and military personnel in training exercises and perhaps eventually in real-life scenarios. What will the future hold for virtual reality? Only time will tell.


Image Caption: Virtual Reality concept with a Head Mounted Display. Credit: dolgachov/Thinkstock.com

—–


Picking Up Healthy Habits After Age 30 Could Help Reverse Heart Disease Risk

redOrbit Staff & Wire Reports – Your Universe Online

Adopting healthy habits such as losing weight and kicking the cigarette habit in your 30s and 40s could potentially reverse the natural progression of coronary artery disease, according to new research appearing in the June 30 edition of the journal Circulation.

According to lead investigator Bonnie Spring, a professor of preventive medicine at the Northwestern University Feinberg School of Medicine, and her colleagues, deciding to embrace healthy lifestyle changes can control and perhaps even undo damage to a person’s heart. However, the opposite was also found to be true: picking up unhealthy habits as they grow older can have a measurable adverse impact on the coronary arteries.

“It’s not too late. You’re not doomed if you’ve hit young adulthood and acquired some bad habits. You can still make a change and it will have a benefit for your heart,” Spring said in a statement Tuesday. Likewise, if you fail to maintain a healthy lifestyle after the age of 30, “you’ll see the evidence in terms of your risk of heart disease,” she added.

The study authors looked at 5,000 patients who were enrolled in the Coronary Artery Risk Development in Young Adults (CARDIA) study. They examined five healthy behaviors – not being overweight or obese, consuming low amounts of alcohol, maintaining a healthy diet, being physically active and being a non-smoker – along with coronary artery calcification and intima-media thickness in each of the participants.

Each subject was assessed at baseline, when they were between the ages of 18 and 30, and again 20 years later. At the beginning of the study, less than 10 percent of CARDIA participants said they engaged in all five healthy lifestyle behaviors. Two decades later, about 25 percent of them had added at least one additional healthy behavior.

Every additional healthy lifestyle factor was linked to reduced odds of detectable coronary artery calcification and to lower intima-media thickness – two major markers that can predict future cardiovascular events. Spring said that the findings were important because they helped to debunk a pair of commonly held health-care myths.

“The first is that it’s nearly impossible to change patients’ behaviors. Yet, we found that 25 percent of adults made healthy lifestyle changes on their own,” she explained. “The second myth is that the damage has already been done – adulthood is too late for healthy lifestyle changes to reduce the risk of developing coronary artery disease. Clearly, that’s incorrect. Adulthood is not too late for healthy behavior changes to help the heart.”

On the flip side, 40 percent of the study participants lost healthy lifestyle factors, acquiring more bad habits as they grew older. Spring said that this had “a measurable negative impact on their coronary arteries,” increasing their risk of detectable coronary artery calcification and higher intima-media thickness.

The researchers said that their findings demonstrate that healthy changes made by people in their 30s and 40s, such as maintaining a healthy body weight, cutting back on sodium and regularly participating in moderate physical activity, must be sustained. As Spring explained, “Adulthood isn’t a ‘safe period’ when one can abandon healthy habits without doing damage to the heart. A healthy lifestyle requires upkeep to be maintained.”


Mars One Accepting Payload Proposals For 2018 Unmanned Mission

Gerard LeBlond for www.redorbit.com – Your Universe Online

Mars One is inviting universities, researchers and companies to contribute payloads for the 2018 unmanned Mars Lander mission. A panel of experts will determine the best ideas to accompany the Lander that will set the stage for a manned mission to Mars in 2025.

Of the four payloads, one will be Mars One’s own technology demonstration for the planned 2025 mission, one will come from a worldwide university competition, and two will be sold to the highest bidders and could carry scientific experiments, marketing techniques, or any other payload of value to the mission.

“We are opening our doors to the scientific community in order to source the best ideas from around the world. The ideas that are adopted will not only be used on the lander in 2018, but will quite possibly provide the foundation for the first human colony on Mars. For anyone motivated by human exploration, there can be no greater honor than contributing to a manned mission to Mars,” said Arno Wielders, co-founder and chief technical officer of Mars One.

The unmanned mission, launching in August 2018, will carry the selected payloads to the Red Planet aboard the Mars One lander. Lockheed Martin, which built NASA’s 2007 Phoenix spacecraft, has been contracted to develop the mission concept study for the 2018 Mars lander.

The four payloads will consist of the following:

A demonstration payload that includes four experiments demonstrating some of the technologies important for human survival and a permanent settlement on Mars.

Included in the payload will be a soil acquisition experiment to collect soil for water production; a water extraction experiment for removing water from the soil on Mars; a solar panel to demonstrate how the settlement will generate energy using only sunlight; and a camera system that will use a Mars-synchronous communications satellite to stream live video back to Earth in real time.

The university payload competition will select one payload from worldwide entries submitted by universities. Submissions can include scientific experiments, technology demonstrations or any other idea pertaining to a settlement on Mars. Mars One community members will serve as the judges and select the winning university entry. Notices of intent to submit can be filed on the Mars One university competition website.

“The brightest young minds of our planet are being invited to participate in Mars One’s first Mars lander. This is an opportunity for university teams to launch an experiment not just to space, but to the surface of Mars. We do this to inspire students to believe that anything is possible. We’re not only looking for scientific proposals but also for outreach or educational ones. Mars One’s community will determine which payload flies to Mars in an online vote,” said Bas Lansdorp, Co-founder & CEO of Mars One.

In addition to the two competition payloads, two other payloads will be for sale to the highest bidder. These can come as scientific experiments, technology demonstrations, marketing strategies, publicity campaigns or any other worthy suggestions.

“Previously, the only payloads that have landed on Mars are those which NASA has selected. We want to open up the opportunity to the entire world to participate in our mission to Mars by sending a certain payload to the surface of Mars,” Lansdorp added.

The time frame for submissions is tight because of the 2018 launch, giving participants a stringent schedule in which to submit their ideas and, if chosen, develop a payload for flight. Mars One and Lockheed Martin will evaluate the payload ideas, but will also seek outside expert insight to ensure that suitable payloads are selected.

The 2025 manned mission to Mars to establish a human colony originally picked 1058 individuals from a list of 200,000 candidates; as of May 5, 2014, the field of participants shrank to 705. The remaining applicants will be interviewed by a Mars One committee to pick the final colonists.

The next elimination phase will be based on the applicants’ “knowledge, intelligence, adaptability and personality,” according to Norbert Kraft, Mars One’s chief medical officer.

The candidates will now be placed in teams of four consisting of two men and two women. They will be put through vigorous training to prepare for the voyage. However, individuals as well as a whole team could still be eliminated if they show they are not up to the task, according to the Mars One team.

Research May Help Prevent Eye Injuries Among Soldiers

By K.C. Gonzalez, University of Texas at San Antonio
Researchers at UTSA are discovering that the current protective eyewear used by our US armed forces might not be adequate to protect soldiers exposed to explosive blasts.
According to a recent study, ocular injuries now account for 13 percent of all battlefield injuries and are the fourth most common military deployment-related injury.
With the support of the U.S. Department of Defense, UTSA biomedical engineering assistant professor Matthew Reilly and distinguished senior lecturer in geological sciences Walter Gray have been collaborating with researchers at the U.S. Army Institute of Surgical Research at Joint Base San Antonio Fort Sam Houston and the UT Health Science Center at San Antonio to understand the unseen effects that can occur as a result of a blast injury.
In a basement laboratory at Fort Sam Houston military base, the research team has spent the last two years simulating Improvised Explosive Device (IED) blasts on postmortem pig eyes using a high-powered shock tube.
So far, they have discovered that the shock wave alone created by an IED, even in the absence of shrapnel or other particles, can cause significant damage to the eyes that could lead to partial or total blindness.
Perhaps the most striking discovery is that these blasts can damage the optic nerve, which transmits information from the eye to the brain. Optic nerve injuries occur even at low pressures and could be the cause of many visual deficits, which until now have been associated with traumatic brain injuries.
“There has been considerable controversy surrounding whether primary blasts could damage the eye,” shared Reilly. “No one had shown conclusive evidence before, perhaps because they weren’t looking at the problem quite as closely as we have. We had some idea of what to look for based on results from computational models and now we have experimental data that supports this phenomenon.”
This groundbreaking research will not only help physicians know what type of injuries to screen for and treat following a blast injury, but also create a reliable model to test various protective eyewear solutions that might prevent or reduce blast damage to the eyes.
Reilly has several family members and friends who serve or have previously served in the military and who have sustained various injuries.
“I wasn’t in the military but I would like those who serve our country to be better protected in the field or give them better diagnostics when they are injured,” he said. “I want to make sure their quality of life is as high as possible after they have been deployed. I am just trying to give back.”
Moving forward, the research team plans to delve further into the link between the optic nerve and the brain in an effort to understand the causes and symptoms of traumatic brain injuries.

FDA Approves ReWalk Motorized Exoskeleton For Sale In The US

Alan McStravick for redOrbit.com – Your Universe Online

This year’s FIFA World Cup, being played in Brazil, is a quadrennial event that captures the attention of a staggering majority of the people who live on this planet. Excusing the pun, the event kicked off by celebrating a revolutionary advancement in paraplegic mobility technology made possible by work done by one of Brazil’s native sons, Dr. Miguel A. L. Nicolelis, MD, PhD.

Heading up the Walk Again Project, Nicolelis explained how the group’s goal was to have a fully functioning first-of-its-kind brain-controlled whole-body exoskeleton ready for demonstration just before the first match of the tournament was played. I have previously had the opportunity to document his and his team’s work leading up to that day.

Duke University-based Nicolelis and his Walk Again Project enabled 29-year-old paraplegic Juliano Pinto to launch the biggest sporting event on Earth by donning the exoskeleton and willing it to move with the power of his mind, kicking out the first ball to officially begin the World Cup almost three weeks ago.

While news of this amazing prototype was welcomed the world over, equally good news was announced this week for individuals living with paralysis in the United States. That is because the US Food and Drug Administration (FDA) has now cleared for marketing and sale a motorized exoskeleton called ReWalk.

This bionic suit first gained attention just over two years ago when Britain’s Claire Lomas became the first person to complete a marathon while wearing an assistive motorized device, reports CNET’s Dana Kerr. Lomas finished the 2012 London Marathon over the course of 16 days, an impressive feat for someone who had previously been unable to move due to a severe spinal cord injury.

ReWalk was designed to enable its wearer to stand, walk and navigate stairs with limited assistance. In a statement heralding the FDA announcement, Christy Foreman, director of the Office of Device Evaluation, stated, “Along with physical therapy, training, and assistance from a caregiver, these individuals may be able to use these devices to walk again in their homes and in their communities.”

ReWalk, developed by Argo Medical Technologies of Marlborough, MA, was subjected to a standard de novo classification process by the FDA. This process is used for all devices that are novel in design and intended use. The FDA also put ReWalk through the wringer, testing all aspects of its hardware and software as well as its battery systems and overall durability. Aiding in the final decision to approve the device for sale was clinical data obtained from 30 participants currently using the device.

ReWalk has already cleared the regulatory hurdles in Europe, where it has been available for a purchase price of about $70,000 US. That is the anticipated price of the device here in the US once it goes on sale as well. Once it is made available, Argo Medical Technologies will be required to provide the FDA with a post-market clinical study that specifically documents any adverse events resulting from use of the device.

As reported by USA Today, Marine Capt. Derek Herrera, a paraplegic who has trained on the device, will be one of the first Americans to own one.

“I see this as a milestone for people in my same situation,” Herrera said in an Argo Medical Technologies press statement. “It will be incredible for me to regain independence, to use the system to walk and stand on my own.”

Relationship Between Unhealthy Food Logos And Childhood Obesity

Alan McStravick for redOrbit.com – Your Universe Online

In a feature in this past April’s Ad Age magazine, a write-up on the re-branding of McDonald’s mascot Ronald McDonald detailed, among other things, the new wardrobe for the hamburger-hawking clown.

As Lorene Yue noted, “These days, the spokesclown for McDonald’s Corp. has been relegated mostly to the nonprofit Ronald McDonald House Charities, in part to make him a smaller target for activists who accuse the company of using him to market fast food to kids.” Continuing she explained, “But starting in June, he’ll take a bigger role in McDonald’s social media channels to promote its new “Fun makes great things happen” campaign.”

This re-emergence of the child-friendly clown seems particularly ill-timed given the release of a new study on childhood obesity and its tie to unhealthy food branding and marketing. The study, conducted by a research team from Michigan State University (MSU) and the University of Oregon (UO) and led by Anna McAlister, an assistant professor of advertising and public relations, found that the more familiar a child was with logos and other images associated with fast food restaurants, sodas and other unhealthy snack brands, the more likely the child was to be overweight or obese.

“We found the relationship between brand knowledge and BMI to be quite robust,” McAlister commented. “The kids who know the most about these brands have higher BMIs.” BMI, or Body Mass Index, is a number calculated from a child’s weight and height. This method is typically very reliable in determining body fatness in most children and teens. While not a direct measure of body fat, research has shown that BMI correlates with other measures, such as underwater weighing and dual energy x-ray absorptiometry. Doctors and researchers rely on BMI because it is an inexpensive and easy-to-perform method for screening for weight categories that can lead to later health problems.
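
The index itself is simple arithmetic (weight divided by the square of height), though for children the resulting value is interpreted against age- and sex-specific growth-chart percentiles rather than the fixed adult cut-offs. The sketch below only illustrates that calculation; the example figures are hypothetical and are not data from the study.

```python
def bmi_metric(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    """The same index from pounds and inches; 703 is the standard conversion factor."""
    return 703 * weight_lb / height_in ** 2

# Hypothetical preschooler, roughly 40 lb and 41 in (about 18.1 kg and 1.04 m):
print(round(bmi_metric(18.1, 1.04), 1))    # ~16.7
print(round(bmi_imperial(40.0, 41.0), 1))  # ~16.7, the same child in imperial units
```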

Previous studies have explicitly shown that children who struggle with being overweight at a young age tend to maintain unhealthy weight levels as they age and mature.

The researchers assembled a participant group of children between the ages of 3 and 5, whom they asked to identify certain logos, brands and images, such as golden arches, silly rabbits and the crown of a king. The children who were able to enthusiastically identify each picture tended to have a higher BMI than those who showed less recognition of the images.

“The results varied, which is a good thing,” McAlister noted. “Some kids knew very little about the brands while others knew them exceptionally well.”

To support their initial results, the team conducted the study twice with separate groups of children. In the first attempt they discovered that exercise habits among the group tended to offset the negative effects of too much familiarity with unhealthy food. Unfortunately, that same finding could not be duplicated in the second group.

“The inconsistency across studies tells us that physical activity should not be seen as a cure-all in fixing childhood obesity,” McAlister claimed. “Of course we want kids to be active, but the results from these studies suggest that physical activity is not the only answer. The consistent relationship between brand knowledge and BMI suggests that limiting advertising exposure might be a step in the right direction too.”

It’s this last point that should have parents on guard with respect to the McDonald’s Corporation pulling the pusher of Happy Meals out of mothballs and gussying him up in new duds. No marketing decision by McDonald’s, and to a lesser extent their competitors, is ever made lightly. In the same Ad Age article, David Zlotnik, McDonald’s director of global marketing perhaps showed his cards when he admitted, “We’ve been working on his new clothes for probably close to two years.”

McAlister seems careful not to lay the blame for the correlation between brand recognition and BMI squarely at the feet of the purveyors of the marketing and advertising. Recognizing that children get most of their food messages from television, one could argue that kids with higher BMIs suffer the condition due to a sedentary lifestyle that usually means spending too much time in front of the TV. It’s akin to a chicken-and-egg paradox: are they familiar with the images, brands and logos because a sedentary lifestyle has given them a higher BMI, or do they have a higher BMI because they are bombarded by the messages they see on TV?

However, McAlister did explain what they believe their study ultimately told them. “From our results,” she said, “it would suggest that it’s not the TV time itself, but rather what is learned about these brands. It’s probably the developing food knowledge, not the sedentary lifestyle.”

Serving as co-author on the paper, UO professor Bettina Cornwell believes the study results provide important insight into children’s relationship with food, or their “first language of food.” As she notes, it doesn’t take long for a child to figure out their likes and dislikes and that early lesson can stick with them their entire lives.

“What we’re trying to show here is just how young kids are when they develop their theory of food,” McAlister added. “As early as three years of age, kids are developing a sense of what food means to them.”

The study and its findings have been published in the most recent issue of the journal Appetite.


Potentially Habitable Exoplanet With Earth-Like Temperatures Found

redOrbit Staff & Wire Reports – Your Universe Online
Astronomers have discovered a new, potentially habitable Super-Earth believed to possess temperatures comparable to those found here, but with much larger seasonal shifts, provided the atmosphere is similar to our planet’s own.
Dr. Robert A. Wittenmyer, a researcher at UNSW Australia specializing in the detection and characterization of extrasolar planets, and his colleagues located the new Super-Earth in orbit around the nearby red dwarf star Gliese 832 some sixteen light years away.
In 2009, a cold Jupiter-like planet identified as Gliese 832b was discovered orbiting this star, and according to the researchers, the newly discovered Gliese 832c is believed to be the closest potentially habitable world of its quality discovered to date. It has been added to the Habitable Exoplanets Catalog, which now lists 23 objects of interest.

According to the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo, Gliese 832c has a mass at least five times that of Earth and an orbital period of 36 days. While it possesses Earth-like temperatures, the large seasonal shifts combined with a denser atmosphere could make it too hot to support life, essentially meaning that it is closer to being a Super-Venus than a Super-Earth.
“The Earth Similarity Index (ESI) of Gliese 832c (ESI = 0.81) is comparable to Gliese 667Cc (ESI = 0.84) and Kepler-62e (ESI = 0.83),” the laboratory explained. “This makes Gliese 832c one of the top three most Earth-like planets according to the ESI (i.e. with respect to Earth’s stellar flux and mass) and the closest one to Earth of all three, a prime object for follow-up observations. However, other unknowns such as the bulk composition and atmosphere of the planet could make this world quite different to Earth and non-habitable.”
Artistic representations of Gliese 832c depict it as a temperate world covered in clouds. Based on the relative size of the planet, it is assumed that it possesses a rocky composition, but if it is larger it could consist primarily of ice and gas, the researchers said. It orbits near the inner edge of the conservative habitable zone.
In terms of temperature, it possesses an average equilibrium temperature of 253K, similar to Earth’s average equilibrium temperature of 255K (both estimates assume an Earth-like albedo of 0.3). However, its eccentric orbit causes large seasonal temperature shifts of up to 25K.
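As a rough, back-of-the-envelope illustration of how an eccentric orbit produces such seasonal swings, the sketch below assumes the equilibrium temperature scales as one over the square root of the star-planet distance; the eccentricity value used is an assumption chosen purely for illustration, not a figure reported in this article.
```python
# Rough sketch (not from the study): seasonal temperature swings from an eccentric orbit,
# assuming equilibrium temperature scales as 1 / sqrt(star-planet distance).
import math

T_avg = 253.0  # average equilibrium temperature quoted for Gliese 832c, in kelvin
e = 0.18       # illustrative orbital eccentricity (an assumption, not from this article)

T_perihelion = T_avg / math.sqrt(1 - e)  # closest approach: warmer
T_aphelion = T_avg / math.sqrt(1 + e)    # farthest point: cooler

print(round(T_perihelion), round(T_aphelion))  # roughly 279 K and 233 K, i.e. swings
                                               # on the order of the ~25K shifts above
```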
A research paper detailing the findings of Dr. Wittenmyer’s team has been accepted for publication by The Astrophysical Journal.
At this point, the two planets orbiting Gliese 832 include an inner, Earth-like world and an outer Jupiter-like giant planet, making it appear as if it is a scaled-down version of our own Solar System. The outer giant could fill a dynamical role in the Gliese 832 system similar to the one Jupiter plays in our Solar System, the researchers said, noting that it will be interesting to discover whether or not there are any other objects that follow this particular configuration.
Image 2 (below left): Artistic representation of the potentially habitable exoplanet Gliese 832 c as compared with Earth. Gliese 832 c is represented here as a temperate world covered in clouds. The relative size of the planet in the figure assumes a rocky composition but could be larger for an ice/gas composition. Credit: PHL @ UPR Arecibo.
Image 3 (below right): The Habitable Exoplanets Catalog now has 23 objects of interest including Gliese 832 c, the closest to Earth of the top three most Earth-like worlds in the catalog. Credit: PHL @ UPR Arecibo.

Teething Babies Do Not Need Medicine On Their Gums

U.S. Food and Drug Administration

There are more theories about teething and “treating” a baby’s sore gums than there are teeth in a child’s mouth. One thing doctors and other health care professionals agree on is that teething is a normal part of childhood that can be treated without prescription or over-the-counter (OTC) medications.

Too often well-meaning parents, grandparents and caregivers want to soothe a teething baby by rubbing numbing medications on the tot’s gums, using potentially harmful drugs instead of safer, non-toxic alternatives.

That’s why the Food and Drug Administration (FDA) is warning parents that prescription drugs such as viscous lidocaine are not safe for treating teething in infants or young children, and that they have hurt some children who used those products.

FDA has previously recommended that parents and caregivers not use benzocaine products for children younger than 2 years, except under the advice and supervision of a health care professional. Benzocaine—which, like viscous lidocaine, is a local anesthetic—can be found in such OTC products as Anbesol, Hurricaine, Orajel, Baby Orajel, and Orabase.

The use of benzocaine gels and liquids for mouth and gum pain can lead to a rare but serious—and sometimes fatal—condition called methemoglobinemia, a disorder in which the amount of oxygen carried through the blood stream is greatly reduced. And children under 2 years old appear to be at particular risk.

Parents Have Safer Alternatives

On average, children get one new tooth every month from 6 months of age to about age 3, for a total of 20 “baby teeth.”

According to the American Academy of Pediatrics (AAP), occasional symptoms of teething include mild irritability, a low-level fever, drooling and an urge to chew on something hard.

Because teething happens during a time of much change in a baby’s life, it is often wrongly blamed for sleep disturbances, decreased appetite, congestion, coughing, vomiting and diarrhea.

If your child’s gums are swollen and tender, gently rub or massage the gums with your finger, and give your child a cool teething ring or a clean, wet, cool washcloth to chew on.

Chill the teething ring or washcloth in the refrigerator for a short time, making sure it’s cool—not cold like an ice cube. If the object is too cold, it can hurt the gums and your child. The coolness soothes the gums by dulling the nerves, which transmit pain.

“The cool object acts like a very mild local anesthetic,” says Hari Cheryl Sachs, M.D., a pediatrician at FDA. “This is a great relief for children for a short time.”

Parents should supervise their children so they don’t accidentally choke on the teething ring or washcloth.

Avoid Local Anesthetics

For teething, avoid local anesthetics such as viscous lidocaine or benzocaine-containing teething products except under the advice and supervision of a health care professional.

Viscous lidocaine is a prescription medication, a local anesthetic in a gel-like syrup. Doctors may prescribe it for chemotherapy patients (children and adults) who are unable to eat because of mouth ulcers that can occur with chemotherapy. Dentists may use it to reduce the gag reflex in children during dental X-rays and impressions.

Parents may have viscous lidocaine on hand if it has been prescribed to treat another family member for pain relief from conditions such as mouth or throat ulcers. But it should never be used to comfort a teething baby.

The Institute for Safe Medication Practices (ISMP)—a nonprofit organization dedicated to preventing medication errors—has received reports of teething babies suffering overdoses of viscous lidocaine. Symptoms include jitteriness, confusion, vision problems, vomiting, falling asleep too easily, shaking and seizures.

The drug also “can make swallowing difficult and can increase the risk of choking or breathing in food. It can lead to drug toxicity and affect the heart and nervous system,” says Michael R. Cohen, RPh, MS, ISMP president.

Parents have been known to repeatedly apply viscous lidocaine if a baby keeps fussing, says Cohen. They have also been known to put liquid gel forms of a topical anesthetic into a baby’s formula or even soak a pacifier or a cloth in it, then put that in their baby’s mouth. How much the baby gets is not measured, so it may be too much, he says. For all these reasons, FDA recommends viscous lidocaine not be used to treat the pain associated with teething.

“Teething is a normal phenomenon; all babies teethe,” says Ethan Hausman, M.D., a pediatrician and pathologist at FDA. “FDA does not recommend any sort of drug, herbal or homeopathic medication or therapy for teething in children.”

In the Hunt For Dark Matter, New Simulations Show Evolution Of “Local Universe”

Brett Smith for redOrbit.com – Your Universe Online

According to astrophysicists, dark matter is the key to understanding the universe, as it comprises 85 percent of all the mass in it and is thought to have seeded the growth of galaxies.

In a new study to be presented at the Royal Astronomical Society’s National Astronomy Meeting, scientists from the UK’s Durham University have found a possible explanation for why some dark matter didn’t form galaxies in the early Universe – gas that would have formed galaxies was sterilized by the heat from the first stars in the Universe.

“I’ve been losing sleep over this for the last 30 years,” said Carlos Frenk, Director of Durham University’s Institute for Computational Cosmology. “Dark matter is the key to everything we know about galaxies, but we still don’t know its exact nature. Understanding how galaxies formed holds the key to the dark matter mystery.”

According to theory, dark matter in the early Universe trapped interstellar gas in ‘halos’ that would eventually become galactic nurseries. However, some of these halos never lit up with new galaxies. The researchers pointed to our own ‘local’ neighborhood in the Universe and said there should be more galaxies near the Milky Way, based on this theory.

Using highly complex computer simulations, the study team determined that heat from the first few stars to form ruined the chances of further star formation in many halos. Perhaps as a nod to the ongoing FIFA World Cup, the researchers described this as a sort of “cosmic own goal.”

“We have learned that most dark matter halos are quite different from the ‘chosen few’ that are lit up by starlight,” Frenk said. “Thanks to our simulations we know that if our theories of dark matter are correct then the Universe around us should be full of halos that failed to make a galaxy. Perhaps astronomers will one day figure out a way to find them.”

“What we’ve seen in our simulations is a cosmic own goal,” said Till Sawala, an astrophysicist at Durham. “We already knew that the first generation of stars emitted intense radiation, heating intergalactic gas to temperatures hotter than the surface of the sun. After that, the gas is so hot that further star formation gets a lot more difficult, leaving halos with little chance to form galaxies.”

“We were able to show that the cosmic heating was not simply a lottery with a few lucky winners,” Sawala added. “Instead, it was a rigorous selection process and only halos that grew fast enough were fit for galaxy formation.”

In addition to shedding some new light on dark matter, the new study is also the first to simulate the evolution of our own ‘Local Group’ of galaxies, which includes the popular Andromeda Galaxy. These simulations are a part of Durham University’s larger EAGLE project, which will attempt to simulate from the beginning the formation of galaxies in a representative volume of the Universe.

Increased Early Death Risk Linked To Watching Too Much TV

Gerard LeBlond for www.redorbit.com – Your Universe Online

A study published in the Journal of the American Heart Association reveals a possible link between watching too much television and premature death in adults. Adults who watch three or more hours of TV per day double their risk of early death from any cause, according to the study.

Lead author Miguel Martinez-Gonzalez, from the Department of Public Health at the University of Navarra, Pamplona, Spain, along with several colleagues from CIBER-OBN — the official Spanish agency for funding biomedical research — participated in this study.

“Television viewing is a major sedentary behavior and there is an increasing trend toward all types of sedentary behaviors. Our findings are consistent with a range of previous studies where time spent watching television was linked to mortality,” Martinez-Gonzalez stated.

The study involved 13,284 young and healthy Spanish university graduates. The average age was 37, and 60 percent of them were women. Over an 8.2 year period, the researchers assessed three types of sedentary behavior — television viewing time, time spent on a computer and daily driving time — and their association with death from all causes.

Among the participants there were 97 deaths: 19 from cardiovascular causes, 46 from cancer and 32 from other causes. The researchers took into account other risk factors, and concluded that the risk of death was two times higher in the participants who watched three or more hours of TV per day.

Computer use and driving time showed no significant association with premature death. However, they also concluded that more studies were needed to confirm the association between computer use and driving with death rates.

“As the population ages, sedentary behaviors will become more prevalent, especially watching television, and this poses an additional burden on the increased health problems related to aging,” Martinez-Gonzalez said. “Our findings suggest adults may consider increasing their physical activity, avoid long sedentary periods, and reduce television watching to no longer than one to two hours each day.”

Previous research concluded that half of the US population is leading a sedentary life. The American Heart Association recommends that people get at least 150 minutes of moderate-intensity aerobic activity or at least 75 minutes of vigorous aerobic activity per week, plus moderate- to high-intensity strength training at least two days per week.

Friendly Competition Pits US Against German Astronauts On Space Station

[ Watch The Video: Out Of This World Cup ]

NASA

NASA’s Reid Wiseman and Steve Swanson and the European Space Agency’s Astronaut Alexander Gerst will be cheering on their home countries’ World Cup 2014 teams, but will the post-goal celebrations be as uplifting as those on Earth?

The astronauts have trained for years to work together as a unified crew, and they are now 260 miles above Earth aboard the International Space Station. But US astronauts Swanson and Wiseman and their German crewmate Gerst will feel the friendly competition more keenly on June 26, when their home countries play against each other for a chance to advance. Either or both could advance out of Group G under different scenarios possible in this year’s World Cup matches. USA and Germany face off at 11 am CDT (noon EDT) June 26 at Arena Pernambuco in Recife, Brazil.

Wiseman kidded his colleagues in orbit that the stronger US team spirit aboard the space station is a sign the US will be stronger on the field too. “I believe we will win. It’s two against one up here, so I think the U.S. chances are pretty good,” Wiseman said during an in-flight interview with ESPN on June 24.

Wiseman says the crew already is checking its busy schedule for Thursday to see how they can fit in watching the game during what will be afternoon time for them.

Gerst is optimistic that the two teams will meet again. “I hope we kick their butt a little bit, but I’m going to hope it’s going to be at the final game, not at this game on Thursday,” Gerst said.

It may be just a friendly competition among the crew members, but that doesn’t mean there aren’t stakes involved. “If the U.S. wins, these guys are going to draw a little U.S. flag on my head, but I think if Germany wins these guys should have to shave their heads. Either way I’m looking forward to the game. It’s going to be fun,” Gerst said.

Wiseman and Gerst arrived at the space station May 28 as part of the Expedition 40/41 crew and are scheduled to spend the next several months living and working in space until they return to Earth in November. Swanson arrived as part of the Expedition 39/40 crew on March 25 and is expected to return home in September.

NASA has more connections to the World Cup than you might think. For more information visit http://www.nasa.gov/worldcup


Curiosity Currently Traveling Through Ancient Glaciers On Mars

FECYT – Spanish Foundation for Science and Technology
Some 3,500 million years ago the Martian crater Gale, which the NASA rover Curiosity is currently traversing, was covered with glaciers, mainly over its central mound. Very cold liquid water also flowed through rivers and lakes in the lower-lying areas, forming landscapes similar to those found today in Iceland or Alaska. This is the conclusion of an analysis of images taken by the spacecraft orbiting the Red Planet.
NASA’s Mars Curiosity rover completed a Martian year – 687 Earth days – this week. The vehicle travels through an arid and reddish landscape that was home to glaciers in the past. Ancient Mars held large quantities of water, yet its global hydro-geological cycles were very cold, cold enough to sustain a giant, partially ice-covered ocean rimmed by glaciers on the lower plains of the northern hemisphere.
Now, an international team of researchers has confirmed this global picture locally, at the Martian site where Curiosity is roving: Gale crater. “This crater was covered by glaciers approximately 3,500 million years ago, which were particularly extensive on its central mound, Aeolis Mons,” the study’s lead investigator Alberto Fairén, from the Centro de Astrobiología (INTA-CSIC) in Spain and Cornell University in the USA, told SINC.
“However, at that time there were also rivers and lakes with very cold liquid water in the lower-lying areas within the crater,” adds the researcher, who highlights the fact that ancient Mars was capable of “maintaining large quantities of liquid water (an essential element for life) at the same time that giant ice sheets covered extensive regions of its surface”.
To carry out the study, the team has used images captured with the HiRISE and CTX cameras from NASA’s Mars Reconnaissance Orbiter, together with the HRSC onboard the Mars Express probe managed by the European Space Agency (ESA).
Analyses of the photographs have revealed the presence of concave basins, lobate structures, remains of moraines and fan-shaped deposits that point to the existence of ancient glaciers on Gale. In fact, they seem very similar to some glacial systems observed on present-day Earth.
“For example, there is a glacier on Iceland –known as Breiðamerkurjökull– which shows evident resemblances to what we see on Gale crater, and we suppose that is very similar to those which covered Gale’s central mound in the past,” says Fairén.
The article also shows images of other glacial systems on Earth which match those on Mars, such as the Malaspina glacier (named after the famous mariner in the service of the Spanish Navy) in Alaska, or others located in northern regions of Canada and the Antarctic.
“As part of NASA’s Mars Science Laboratory (MSL) mission, the Curiosity rover can still find evidence of past glacial activity on Gale on a very small scale – for example, accumulations of angular to sub-angular boulders, striated bedrock, striated boulder pavements and boulder chains,” points out Fairén.
The researcher emphasizes that the current study “provides strong local support for the global ‘cold and wet’ model of the ancient Martian environment, which reconciles the geological traces of liquid water in the past, which cover the entire planet, with the climatic models that have demonstrated that Mars was never a warm planet.”
In the specific case of the Gale crater, it is thought that it was excavated by the impact of a huge meteorite 3,600 million years ago and was covered by glaciers shortly afterwards.
“It is even possible that the area of impact was already covered by glaciers before the collision, and in that case the glaciers would have re-covered the recently formed crater in a very short time,” says Fairén, who concludes by pointing out an interesting aspect in terms of life on the planet: “The energy delivered after the impact, combined with the ice on the surface, could have generated very interesting environments from an astrobiological point of view, like hydrothermal areas for example.”

Consumers Want Less Sugar And Salt, Worry Less About Fat

Alan McStravick for www.redorbit.com – Your Universe online

Earlier this month, redOrbit reported about how food marketers were pulling some interesting tricks out of their bag to convince the casual consumer that a patently unhealthy item might actually be considered a healthful option. Many consumers, however, don’t need to be told that slapping the word ‘antioxidant’ on a bottle of Cherry 7-Up doesn’t make it a better drink option than, say, water or natural fruit or vegetable juice.

Despite packaging and advertising trickery, many consumers are using their common sense to try and eat more sensibly rather than trying to adopt a diet mentality that often ends with dissatisfaction and disappointment at the results.

At this year’s Institute of Food Technologists Annual Meeting and Food Expo, held in New Orleans this week, a panel discussion revealed that more than 50 percent of consumers have actively expressed an interest in products with reduced levels of salt and sugar. Unfortunately, food packaging continues to focus on advertising the low- or no-fat attributes of products.

The panel also cited recent research which shows that only ¼ of consumers claimed they were dieting, yet more than 70 percent of consumers stated they did actually want to lose weight.

“Consumers know they need to take care of their health,” said Lynn Dornblaser, director, innovation & insight, Mintel Group, Ltd. “They want to lose weight, but they don’t like the idea of dieting. They know that living a healthy lifestyle is all about moderation.”

As noted by the panel, a shift in consciousness by many consumers has been realized in the reduction in desire for low- and no-fat options and the increase in the desire for foods that contain less sodium and sugar. More than 50 percent of consumers surveyed identified a reduction in sodium and sugar as being more important than calorie and carbohydrate count and reduced fat.

“And yet in the US market, it’s all about low- or no-fat claims,” said Dornblaser. “Products that make a low-sugar, low-calorie or low-sodium claim are less prevalent.”

Dornblaser’s singling out of the US is important because in Europe and the rest of the world, food marketing focused on no- or low-fat claims is less common.

Despite US companies still extolling the fat content of their products, they have been quietly and slowly reducing sugar and salt levels in response to consumer demands. Food producers have recognized that consumers are keenly looking at their nutritional labels, especially with regard to the overall sodium and sugar content in the product.

Dornblaser attributes the slow adoption on the part of food producers to the fact that “most consumers know that less sodium means less taste.” For this reason, we are seeing products that promote healthier sodium levels but also tout the good taste of the product.

The panel discussion also took a look at some of the emerging consumer trends that may actually send mixed messages to food producers:

* Consumers consistently rank taste as the most important food attribute (88 percent), followed by appetite satisfaction or satiety (87 percent), and value (86 percent).

* Food that was grown or made locally was important to just 36 percent of consumers.

* “Artisan” food, a relatively undefined term to reflect non-processed food, or food made by hand or by a small firm, was deemed important by 36 percent of consumers.

* Thirty-three percent of consumers stated that “organic” was an important food attribute.

Also on the panel, Joanne L. Slavin, professor in the Department of Food Science and Nutrition at the University of Minnesota, stated that the US Dietary Guidelines for 2015 will likely again recommend lower levels of both sugar and salt in food products. Additionally, she also anticipates continued, “movement toward whole foods and away from nutrients,” and reference to trending topics “such as sustainability, gluten, vegan diets and food processing.”

“Consumers look to flavor first, health attributes second,” said Dornblaser. “Any (food producer) has to keep that in mind. Consumers aren’t afraid of sugar or salt, they’re afraid of too much sugar or salt. The way to do that overtly and covertly is reduce when you can. Consumers do look at the nutrition statement.”

Best Way To Avoid Ingredient-Based Food Fear Is To Increase Familiarity

[ Watch The Video: Food Fears: Increasing Familiarity Is The Best Way To Avoid Ingredient-Based Food Fear ]

By Brian Wansink, Cornell University

Daily headlines on internet pages and blogs claim: “New ingredient X is harmful to your health.” Such warnings can scare people into avoiding these ingredients without actually knowing the facts, leading some people to have food fears about ingredients such as sugar, fat, sodium, high-fructose corn syrup (HFCS), monosodium glutamate (MSG), and others. While some of these food fears are merited, others can be misleading.

A new Cornell University study published in Food Quality and Preference investigated who might be most prone to food fears, why, and what they can do to correct misperceptions. The phone survey of 1008 US mothers investigated what they thought about the food ingredient HFCS. When comparing those who avoided HFCS with those who did not, the study uncovered three key findings about avoiders: 1) They were more likely to receive their information from the internet rather than TV, 2) they had a desire to have their food related choices known by their friends or reference groups, and 3) they were not willing to pay more for foods that instead contained regular table sugar when compared to peers who did not avoid HFCS.

Researchers found that giving consumers more information about the ingredient such as its history can be effective in reducing ingredient fears. To arrive at this conclusion they asked participants to rate the healthfulness of Stevia, a natural sweetener. Half of the participants were given historical and contextual information to read about the product and the remaining participants were not given anything to read. Those who received information about an ingredient’s history rated the product as healthier than those who did not. Lead author Brian Wansink recommends, “To overcome food ingredient fears, learn the science, history, and the process of how the ingredient is made, and you’ll be a smarter, savvier consumer.”

Carnegie Mellon Researchers Demonstrate A Driverless Future

National Science Foundation

Carnegie Mellon researchers bring NSF-funded autonomous vehicle to D.C. to show promise of driverless cars

In the coming decades, we will likely commute to work and explore the countryside in autonomous, or driverless, cars capable of communicating with the roads they are traveling on. A convergence of technological innovations in embedded sensors, computer vision, artificial intelligence, control and automation, and computer processing power is making this feat a reality.

This week, researchers from Carnegie Mellon University (CMU) will mark a significant milestone, demonstrating one of the most advanced autonomous vehicles ever designed, capable of navigating on urban roads and highways without human intervention. The car was brought to Washington, D.C., at the request of Congressman Bill Shuster of Pennsylvania, who participated in a 33-mile drive in the autonomous vehicle between a Pittsburgh suburb and the city’s airport last September.

Developed with support from the National Science Foundation (NSF), the U.S. Department of Transportation, DARPA and General Motors, the car is the result of more than a decade of research and development by scientists and engineers at CMU and elsewhere. Their work has advanced the underlying technologies–sensors, software, wireless communications and network integration–required to make sure a vehicle on the road is as safe–and ultimately safer–without a driver than with one. (In the case of the Washington, D.C., demonstration, an engineer will be on hand to take the wheel if required.)

“This technology has been enabled by remarkable advances in the seamless blend of computation, networking and control into physical objects–a field known as cyber-physical systems,” said Cora Marrett, NSF deputy director. “The National Science Foundation has long supported fundamental research that has built a strong foundation to enable cyber-physical systems to become a reality–like Dr. Raj Rajkumar’s autonomous car.”

Raj Rajkumar, a professor of electrical and computer engineering and robotics at CMU, is a leader not just in autonomous vehicles, but in the broader field of cyber-physical systems, or CPS. Such systems are already in use in sectors such as agriculture, energy, healthcare and advanced manufacturing, and they are poised to make an impact in transportation as well.

“Federal funding has been critical to our work in dealing with the uncertainties of real-world operating conditions, making efficient real-time usage of on-board computers, enabling vehicular communications and ensuring safe driving behaviors,” Rajkumar said.

In 2007, Carnegie Mellon’s then state-of-the-art driverless car, BOSS, took home the $2 million grand prize in the DARPA Urban Challenge, which pitted the leading autonomous vehicles in the world against one another in a challenging, urban environment. The new vehicle that Rajkumar is demonstrating in Washington, D.C., is the successor to that vehicle.

Unlike BOSS, which was rigged with visible antennas and large sensors, CMU’s new car–a Cadillac SRX–doesn’t appear particularly “smart.” In fact, it looks much like any other car on the road. However, top-of-the-line radar, cameras, sensors and other technologies are built into the body of the vehicle. The car’s computers are tucked away under the floor.

The goal of CMU’s researchers is simple but important: To develop a driverless car that can decrease injuries and fatalities on roads. Automotive accidents result in 1.2 million fatalities annually around the world and cost citizens and governments $518 billion. It is estimated that 90 percent of those accidents are caused by human error.

“Because computers don’t get distracted, sleepy or angry, they can actually keep us much safer–that is the promise of this technology,” Rajkumar said. “Over time, the technology will augment automotive safety significantly.”

In addition to controlling the steering, speed and braking, the autonomous systems in the vehicle also detect and avoid obstacles in the road, including pedestrians and bicyclists.

In their demonstration in D.C., cameras in the vehicle will visually detect the status of traffic lights and respond appropriately. In collaboration with the D.C. Department of Transportation, the researchers have even added a technology that allows some of the traffic lights in the Capitol Hill neighborhood of Washington to wirelessly communicate with the car, telling it the status of the lights ahead.

NSF has supported Rajkumar’s work on autonomous vehicles since 2005, but it is not the only project of this kind that NSF supports. In addition to CMU’s driverless car, NSF supports Sentry, an autonomous underwater vehicle deployed at Woods Hole Oceanographic Institute, and several projects investigating unmanned aerial vehicles (UAVs) including those in use in search and rescue and disaster recovery operations. Moreover, NSF supports numerous projects that advance the fundamental theories and applications that underlie all autonomous vehicles and other cyber-physical systems.

In the last five years, NSF has invested over $200 million in CPS research and education, building a foundation for the smart systems of the future.

Are Kids Getting Too Much Of A Good Thing When It Comes To Fortified Breakfast Cereal?

Brett Smith for redOrbit.com – Your Universe Online
In reaction to widespread nutritional deficiency illnesses in the first part of the 20th century, cereal makers began adding essential vitamins and minerals to their products around 1940.
Now, a new report from the health research and advocacy organization Environmental Working Group has found that popular breakfast cereals are exposing developing and very young children to excessive levels of vitamin A, zinc and other nutrients.
“Heavily fortified foods may sound like a good thing, but when it comes to children and pregnant women, excessive exposure to high nutrient levels could actually cause short or long-term health problems,” said report author Renee Sharp, a research director at EWG. “Manufacturers use vitamin and mineral fortification to sell their products, adding amounts in excess of what people need and more than might be prudent for young children to consume.”
The advocacy group noted that high doses of vitamin A could have toxic effects and cause liver problems, skeletal irregularities and hair thinning. The EWG added that excessive amounts of zinc can impair copper absorption in the body, harm blood cells and disrupt the immune system. Pregnant women taking too much vitamin A can bring about developmental abnormalities in the fetus. Seniors with high vitamin A intake have been known to have problems with osteoporosis and hip fractures, the group said.
The newly-released report heavily criticized the federal Daily Values for most vitamins and minerals, which may date back to 1968. The group said these values were calculated for adults, not children – resulting in nutrient amounts in products that are much higher than necessary for children.
The report authors noted packaging that states a product has “added vitamins” only compounds the nutritional problem.
“In other words, when a parent picks up a box of cereal and sees that one serving provides 50 percent of the Daily Value for vitamin A, he or she may think that it provides 50 percent of a child’s recommended intake,” said report author Olga Naidenko, an EWG research consultant. “But he or she would most likely be wrong, since the Daily Values are based on an adult’s dietary needs.”
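To see why an adult-based Daily Value can mislead, a small illustrative calculation follows; the specific figures (a pre-2016 adult Daily Value of 5,000 IU for vitamin A, a tolerable upper intake level of roughly 600 micrograms per day for 1-to-3-year-olds, and a conversion of about 0.3 micrograms of retinol per IU) are background assumptions for the example, not numbers taken from the EWG report.
```python
# Illustrative arithmetic only; the figures below are background assumptions,
# not values taken from the EWG report.
adult_dv_iu = 5000            # pre-2016 adult Daily Value for vitamin A, in IU
label_claim = 0.50            # a cereal serving claiming "50% of the Daily Value"
ug_per_iu = 0.3               # approximate retinol conversion: 1 IU ~ 0.3 micrograms
toddler_upper_limit_ug = 600  # tolerable upper intake level, ages 1-3, micrograms/day

serving_ug = adult_dv_iu * label_claim * ug_per_iu
print(serving_ug)                           # 750.0 micrograms in a single serving
print(serving_ug > toddler_upper_limit_ug)  # True: one serving alone would exceed a
                                            # toddler's daily upper limit
```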
The report’s analysis of nearly 1,600 cereal labels and over 1,000 breakfast bars revealed that 114 cereals contained 30 percent or more of the recommended level of intake for vitamin A, zinc and/or niacin. The report also found 23 cereals with at least one nutrient in amounts “much greater” than the amounts considered safe for children age 8 and under by the National Academy of Sciences.
According to USA Today reporter Michelle Healy, Kellogg spokesperson Kris Charles responded to the report by categorizing it as incomplete.
“The report ignores a great deal of the nutrition science and consumption data showing that without fortification of foods such as ready-to-eat cereals, many children would not get enough vitamins & minerals in their diets,” Charles said in a statement. “Less than 2 percent of all cereals assessed by EWG made their ‘Top 23’ list and the vast majority of these are adult-oriented cereals not regularly consumed by children.”
The Food and Drug Administration is currently accepting feedback on proposed changes to the Nutrition Facts label, and the EWG said the FDA should require labels on products marketed to children to show percent Daily Values specific to each age group, such as 1-to-3-year-olds and 4-to-8-year-olds.

Harpoons May Be Used By ESA To Clear Earth’s Orbit Of Space Junk

Gerard LeBlond for www.redorbit.com – Your Universe Online
The European Space Agency (ESA) is turning to the age-old practice of harpooning, with the future goal of clearing Earth’s orbit of tumbling satellites and other hazardous space junk.
The Earth’s orbit is filled with decades’ worth of satellites that are no longer in use, along with other space debris that poses a collision threat to ongoing missions. There are more than 17,000 trackable objects floating in Earth’s orbit. While the majority of these objects are larger than a coffee cup, even pieces of debris as small as a nut can cause catastrophic damage if they collide with a working satellite.
Our satellites monitor the planet daily from low orbit, and the only way to protect them from collisions is to remove space debris such as spent upper rocket stages and out-of-service satellites from orbit. These large objects pose a substantial threat to the working satellites we as a society depend on in our everyday lives.
These large objects weigh tons, and if one collides, or explodes from leftover fuel or partially charged batteries heated by the sun, it would leave a dangerous debris cloud in orbit. Such a cloud could eventually strike a satellite or set off a chain reaction, potentially destroying multiple satellites.
ESA has come up with a plan for avoiding such a catastrophe. The Clean Space initiative (mission e.DeOrbit) is set to launch in 2021. It consists of sensors and automatic controls that will identify and locate potentially dangerous debris. The difficult element of the mission was how to secure the hazardous space junk. Many solutions were considered, such as using a throw net, clamping mechanisms, robotic arms and a tethered harpoon.
Airbus Defence and Space in Stevenage, UK, has previously considered the harpoon concept. This system requires a high-energy impact into the target which would be powerful enough to pierce the structure, then reel it in.
Tests have been done with a prototype harpoon to assess the penetration and strength needed to pull in the debris, as well as whether any additional fragments would be generated that could threaten the mission. The next step for the ESA is to build a prototype version of the harpoon and mechanisms for the mission.
Once the prototype is completed, there will be three stages using computer models, analysis and experiments, ending with a full demonstration.
The preliminary design of the harpoon has a penetrating tip to pierce the debris, a crushable cartridge to help embed the tip into the debris and barbs on the tip to keep the harpoon attached while the space junk is being reeled in. This is just one concept of clearing space debris ESA is exploring.

The History Of Wearable Technology

Wearable technology, which includes wearable devices, tech togs, and fashionable electronics, consists of articles of clothing and/or accessories that incorporate computer and advanced electronic technologies. Designs often incorporate practical functions and features, but may also have a purely critical or aesthetic agenda.

Today, wearable devices are exploding onto the market, with everything from smart glasses (Google Glass) to smart watches (Samsung Galaxy Gear) on the rise. As for smart watches, the technology isn’t exactly new, however, as it got its start back in the 1970s with the release of the first calculator watch.

The calculator watch, first released in 1975 under the Pulsar brand, became a widely popular tool for science geeks and math nerds everywhere. These early “smart” watches had their heyday through the mid-1980s and although their popularity went downhill, many companies still produce calculator watches to this day.

The calculator watch can be seen in pop culture, getting front cover prominence on The Police song Wrapped Around Your Finger (1983), worn by Sting. As well, calculator watches have appeared in film, notably in the Tom Hanks movie Cast Away (2000) and Back To The Future (1985), with Michael J. Fox’s character Marty McFly sporting one. In the hit game Grand Theft Auto V (2013), protagonist Trevor Philips can be seen wearing a calculator watch as well.

The dwindling popularity of the calculator watch can be traced to the introduction of PDAs, smartphones and other tech products.

While the 1970s saw the production of the first modern-era wearable computers, the history of wearable technology may go back even farther.

Due to the varied definitions of “wearable” and “computer,” the first wearable computer may have been introduced as early as the 1600s, when the first abacus necklace was unveiled. Other early wearable computers include a sixteenth-century abacus ring, the first wristwatch worn by the Queen of Naples in 1810, and the first cheating devices worn in shoes at roulette tables in the 1960s.

In 1961, mathematicians Edward O. Thorp and Claude Shannon built computerized timing devices to help them cheat at the gambling game roulette — one concealed the device in a shoe, while the other in a pack of cigarettes. Various versions of this apparatus were built in the 1960s and 1970s.

Thorp has referred to himself as the inventor of the first “wearable computer.” However, the device was rudimentary, and because it relied on several separately placed components to work properly, some have argued that it was not a true wearable computer. Others went on to refine the device and built similar wearables through the 1970s, all for the purpose of cheating the tables.

Along with the early wearable calculator watches of the 1970s came a wearable system for the blind. Described by C. C. Collins in 1977, this camera-to-tactile device converted images into a 1024-point, 10-inch square tactile grid on a person’s vest.

In the 1980s, wearable computers started becoming more general-purpose and better fit the modern definition of “computer” by incorporating task-specific hardware to more general-purpose devices. Steve Mann built a backpack-mounted multimedia computer in 1981. Remaining active in the wearable computer field through the 80s led Mann to create the first wearable wireless webcam in 1994, which became the first example of “lifelogging.”

Other wearable computer devices released in 1994 include the first “wrist computer,” developed by Edgar Matias and Mike Riucci of the University of Toronto, and the Forget-Me-Not device, developed by Mik Lamming and Mike Flynn at Xerox EuroPARC, which recorded interactions with people and devices and stored them in a database for later query. DARPA’s Smart Modules Program was started in 1994 to develop a “humionic” approach to wearable and carryable computers. In 1996, DARPA hosted the “Wearables in 2005” workshop, bringing industrial, university and military visionaries together for the purpose of delivering computing to the individual.

As the world moved into the 21st century, wearable technology started to take off.

In 2002, as part of Kevin Warwick’s Project Cyborg, his wife, Irena, wore a necklace that was electronically linked to Warwick’s nervous system via an implanted electrode array. The necklace changed color depending on signals from Warwick’s nervous system. Devices that supported augmented reality also got their start in the early 2000s.

In the late-2000s, various Chinese companies began producing mobile phones on wristwatches, the descendants of which as of 2013 include the i5 and i6, which are GSM phones with mini 1.8-inch displays.

Approaching the 2010s, wearable devices started moving toward incorporating IEEE, IETF and other industry standards, including Bluetooth technology, leading to a greater variety of interfaces under the wireless personal area network (WPAN) and wireless body area network (WBAN) categories.

Wearable computing really took off in 2011 when Google developed the first prototype of what it now calls its Google Glass Project (smart glasses). The technology is based on military research of head-mounted displays starting back in 1995.

In April 2013, Google Glass was released to a handful of Glass Explorers who were invited to try the technology. It was officially released to the general public for a starting price of $1500 in May 2014.

With the advent of Google Glass, numerous companies made a push into the smart wearable market, including Apple (iWatch), Samsung (Galaxy Gear), Sony (SmartWatch), and others.

Wearable technology has now evolved into numerous types of devices, including watches, glasses, headbands, wigs, rings, etc. As well, such devices are being implemented for numerous applications, including personal and business computing, practical everyday tasks, fitness tracking, healthcare monitoring, etc.

Of course, there have been concerns with such technology; the main issues being that such devices could be a huge threat to security and privacy, with some legislation already being passed to ban the use of smart wearables in some instances and locations.

Still, it will be interesting to see where wearable technology goes as we move forward.


Image Caption: Stylish wristbands with smart technology. Credit: Chesky_W/Thinkstock.com

Could A ‘Great Wall’ System Protect Tornado Alley In The Future?

Alan McStravick for redOrbit.com – Your Universe Online

Comprising a relatively narrow swath of land bordered on the east by the Appalachian Mountains and on the west by the Rockies, and extending from central Texas as far north as North Dakota, Tornado Alley sees its fair share of activity each year. And when you compare the number of tornadoes visited on Europe and China in 2013 (57 and 3, respectively) to the 811 that touched down in the US, it is easy to see why one enterprising researcher, Professor Rongjia Tao of Temple University’s Department of Physics, dared to ask whether there wasn’t a way to eliminate the destructive threat of tornadoes in this region.

In research recently published in the International Journal of Modern Physics B, Tao believes the answer to taming Mother Nature in this corridor is the construction of three east-west running ‘Great Walls’ that will act as a barrier between the meteorological components necessary for the formation of life- and property-threatening cyclones. Each wall would rise into the sky some 1000 feet with a width of just over 150 feet. The full length of each wall is projected to start at 1 mile, with future building to occur if local municipalities feel they could benefit from extending the walls further.

Using the cost of construction for the 974 foot Comcast Center in Philadelphia as a guide, Tao believes the construction of each wall could be completed for around $160 million – far less, he claims, than the costs associated with the most destructive tornadoes whose cost from damages and rebuilding have often soared into the realm of multiple billions of dollars.

Tao’s understanding of Tornado Alley centers around it being what he terms a ‘zone of mixing’. This area sees a northward flow of warm and moist air from the Gulf of Mexico that mixes with the colder, drier air coming south from Canada. The long and mostly flat corridor allows, according to Tao, both air masses to violently crash into one another creating vortex turbulence.

Speaking with redOrbit via telephone, Jacob Wycoff, meteorologist for EarthNetworks, the company behind the popular WeatherBug application, shared his thoughts on Tao’s latest research, which was originally pre-released in February of this year. Addressing what he termed the ‘fallacy of the researchers,’ Wycoff claimed they lacked a basic “understanding of how atmosphere behaves.” Continuing, he explained, “He dumbed down the clash of hot air and cold air as how tornadoes form. There is a lot more physics involved. A one-thousand foot wall doesn’t address high altitude air masses.”

Instead, Tao’s research seems only to focus on slowing the moving air masses as they approach one another. “If both cold wind and warm wind have speeds of 30 miles/h, the chance to develop tornadoes from the clash is very high,” he claims. “On the other hand, if both winds are moving below 15 miles/h, there is almost no chance for the clash to develop into tornadoes. Hence reducing the wind speed and eliminating the violent air mass clashes are the key to prevent tornado formation in Tornado Alley. We can learn from Nature how to do so.”

Citing similar latitudinal global placement, Tao compares the US’s Tornado Alley to China’s Eastern Plain. He contends the Eastern Plain would be subject to the same tornadic activity as Tornado Alley but for a few geographical features. The Eastern Plain is bordered on the north and south by east-west running mountain ranges as well as another east-west range right in the middle of the plain. Tao believes the mountains are effective at breaking the warm and cold air masses up so that they never gain enough speed to produce powerful cyclones.

The middle range, the Jiang-Huai Hills, while only 300 meters above sea level, are instrumental in protecting the area from tornadoes. Tao then cites anecdotal evidence as scientific fact when he explains that the Jiang-Huai Hills do not run all the way east to China’s Pacific coast. Ending before the shoreline, these hills therefore yield to a flatland area that is a mini-Tornado Alley. “The city of Gaoyou in this area has a nickname ‘Tornado Hometown’, which has tornado outbreaks once in two years on average,” he explained in a recent statement. “It is thus clear that Jiang-Huai Hills are extremely effective in eliminating tornado formation.”

Tao’s above conjecture is the entire basis for his hypothesis that creating man-made mountains on our central plain will magically end our annually occurring season of severe weather. “Climatologically speaking,” EarthNetwork’s Wycoff stated, “The US is the leader for tornadoes. Europe gets tornadoes. China doesn’t have the same moisture field [as the US, France and Italy] to go into their cold dry air.”

The research presented by Tao also highlights two mountainous (or hilly, at least) areas within Tornado Alley that he claims protect the surrounding areas from adverse weather and tornadoes. However, as Wycoff notes, the idea that tornadoes do not occur in mountainous areas is simply erroneous. West Virginia, a state we can all agree is particularly mountainous, has experienced over 120 individual tornado touchdowns between the years 1950 and 2012. “It’s a fallacy that tornadoes will not go through mountains,” Wycoff said. “They will.”

Even still, Tao calls for the initial building of three of his great walls to eliminate the threat present in Tornado Alley. “The first one should be close to the northern boundary of the Tornado Alley, maybe in North Dakota,” he explains. “The second one should be in the middle, maybe in the middle of Oklahoma and going east. The third one can be in the south of Texas and Louisiana.”

Even though Wycoff believes the walls will do little if anything to prevent tornadoes, both Wycoff and Tao understand that immediate local climates could be affected by these structures. “Such great walls may affect the weather,” Tao notes, “but their effect on the weather will be minor. In fact, with scientific design, we may also use these walls to improve the local climate.”

Taking the more measured approach, Wycoff stated, “You can’t just hop into something as massive as this without understanding the negatives. You will have local-scale issues with sun and wind.” Continuing, he asked, “Would the walls prevent wind from powering wind farms that have already been invested in?” Discussing the physics of natural land barriers and other obstructions, Wycoff explained how rain occurs as clouds are forced over structures. “Are you forcing the climate downwind to get rain and the side on the other side of the wall to get none,” he posited. “It could have drastic effects on people who rely on current rain models for farms or livestock.”

Wycoff believes the proposed $160 million price tag for Tao’s great walls would be better spent on investment in new technologies in weather prediction and safer construction projects and methods. “Again, I would reiterate that money would be better used in improving our predictions, disseminating warnings and building protective structures.” Tao’s idea, according to Wycoff, lacks a basic understanding of the physics of meteorology. “If his understanding is flawed, what does that say about his whole methodology? This would be a situation where the cure is worse than the disease.”


Researchers Discover New Type Of Dust In Mars’ Atmosphere

Brett Smith for redOrbit.com – Your Universe Online

Using ultraviolet and infrared imaging techniques, scientists have discovered that the Martian atmosphere contains particulate matter in two distinct sizes, according to a new report published in the journal Icarus.

In the study, a team of Russian and French scientists observed the transition of Mars’ northern hemisphere into summer. During this transitional phase, the Sun’s rays pass through the Martian atmosphere, and the spectrometer on board the European Space Agency’s Mars Express orbiter was able to capture just how solar radiation interacts with particulates high above Mars.

The European researchers discovered that the dust in the Martian atmosphere isn’t homogenous. Rather, there are two distinct types. The first type is rougher and includes both water-ice grains and slightly smaller airborne dust. The second type of particulate is finer, and is an aerosol consisting of much smaller particles.

The researchers noted that the number density of both types is not that high. Even in the densest layers of the planet’s atmosphere, at altitudes of 12 to 18 miles, there are approximately three particles of the finer mode per cubic meter, and fewer than two particles of the coarser mode per cubic meter.

The study team noted that when looking at what is considered normal on Earth, Martian air is fairly clean, adding that most rooms are dustier. However, aerosols are critical because they have an essential role in developing the planet’s climate.

Fine dust particles in the upper layers of the atmosphere help ice “embryos” form faster, which in turn influences how clouds build up. Those clouds, in their turn, affect both precipitation and temperature on the planet’s surface. Investigating how the dust is distributed in the atmosphere by altitude and geographic location is therefore essential for understanding what is happening on Mars, the scientists said.

The study also revealed that conditions on Mars allow for dust storms capable of lifting large volumes of particles from the planet’s surface. The scientists point out that finding fine dust at these heights is at odds with earlier data indicating supersaturated water vapor at the same altitudes.

With extra dust present, the supersaturated vapor would normally be expected to condense onto it and form clouds. The team speculated that the key to this apparent contradiction lies in the very low temperatures, which slow the growth of ice grains considerably.

Answers to the many questions surrounding Mars’ atmosphere may be answered later this year when NASA’s Mars Atmosphere and Volatile Evolution (MAVEN) mission arrives at the Red Planet.

After MAVEN enters into an orbit above Mars, its Solar Wind Ion Analyzer (SWIA) will begin sampling the small electrically charged particles in and above the planet’s wispy atmosphere. The unmanned craft will also sample ions from the solar wind, a stream of charged particles that flows outward from the Sun and carries its magnetic field toward the planets. Particles from the solar wind interact with ions in Mars’ upper atmosphere and energize them enough to escape from Mars’ gravitational pull.

Scientists have theorized that this process progressively strips Mars of its atmosphere and it has done so for billions of years.

What’s Better Than Two, Four Or Six Cores? A Computer Chip With 36

Peter Suciu for redOrbit.com – Your Universe Online

Move over dual-core, quad-core and even six-core; researchers have developed a new computer chip that offers a whopping 36 cores! More cores can mean greater system power. So why wouldn’t every system simply up the core count?

One reason is the amount of communication required between the cores, which are a computer chip’s processing units. The more cores there are, the bigger the communication problem becomes, and that overhead cuts into the efficiency the extra cores are supposed to provide.

In today’s chips, the cores – typically between two and six of them – are all connected by a single wire known as a bus. When two cores need to communicate, they are granted exclusive access to that bus. The approach becomes unwieldy, however, as the number of cores increases: cores often spend time waiting for the bus to free up rather than actually performing computations.

The bus also makes it easier to maintain cache coherence. Each core keeps frequently used data in a local cache, and as the chip performs computations those cached copies are updated; coherence means keeping every core’s copy of shared data consistent.

Li-Shiuan Peh, research professor of electrical engineering and computer science at the Massachusetts Institute of Technology (MIT), argued that the massively multi-core chips of the future could even resemble little Internets where each core has an associated router, while data travels between cores in packets of fixed size.

Now Peh’s group has unveiled just such a “network-on-chip” with 36 cores. It implements many of the group’s earlier ideas and also addresses one of the problems that has bedeviled previous attempts to design networks-on-chip: maintaining cache coherence, or ensuring that the cores’ locally stored copies of globally accessible data remain up to date.

The network-on-chip solution was unveiled at this month’s International Symposium on Computer Architecture, which was held in Minneapolis.

The network-on-chip solution allows each core to connect only to those immediately adjacent to it, and that could help resolve some of the bus issues.

“You can reach your neighbors really quickly,” Bhavya Daya, an MIT graduate student in electrical engineering and computer science, and first author on the new paper on the multi-core chips, told the MIT News Office. “You can also have multiple paths to your destination. So if you’re going way across, rather than having one congested path, you could have multiple ones.”
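
The benefit of nearest-neighbor links can be illustrated with a toy sketch of dimension-ordered (“XY”) routing on a 6-by-6 mesh. The routine below is a generic textbook scheme written for illustration only, not the MIT group’s actual router design:

```python
# Illustrative sketch only: dimension-ordered ("XY") routing on a square
# mesh of cores, where each core is linked only to its immediate neighbors.
# This is a generic textbook scheme, not the MIT group's actual design.

def xy_route(src, dst):
    """Return the list of (x, y) cores a packet visits: it first steps
    along the x axis, then along the y axis, one neighbor at a time."""
    x, y = src
    path = [src]
    while x != dst[0]:                      # move toward the destination column
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                      # then toward the destination row
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

if __name__ == "__main__":
    # On a 6x6 mesh (36 cores), a corner-to-corner packet needs only 10 hops,
    # and packets between different pairs of cores can use disjoint links in
    # parallel, unlike a single shared bus that serializes every transfer.
    path = xy_route((0, 0), (5, 5))
    print(len(path) - 1, "hops:", path)
```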

Caching is also handled differently under the network-on-chip approach. Most current chips maintain their caches with a protocol known as “snoopy,” because it involves snooping on the other cores’ communications: when a core needs a particular chunk of data, it broadcasts the request to all the other cores, and the data is sent back via the bus. With a network-on-chip there is no single shared bus, so requests and data travel as packets that can take different routes and arrive in different orders, and the chip imposes a hierarchical ordering on them to reconstruct a consistent sequence.

According to the researchers this hierarchical ordering simulates the chronological ordering of requests sent over a bus, so the snoopy protocol still works.
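
For readers unfamiliar with snooping, the toy sketch below shows the broadcast-and-invalidate idea in its simplest form. The classes and behavior are a generic, simplified illustration of a bus-based snoopy protocol, not the ordered network protocol built by the MIT team:

```python
# Minimal sketch of a "snoopy"-style protocol: every request is broadcast,
# and caches holding the data either supply it or invalidate their copies.
# A generic, simplified illustration, not the MIT chip's ordered network.

class Core:
    def __init__(self, name):
        self.name = name
        self.cache = {}                      # address -> locally cached value

class SnoopyBus:
    def __init__(self, cores, memory):
        self.cores, self.memory = cores, memory

    def read(self, core, addr):
        if addr in core.cache:               # local hit, no bus traffic
            return core.cache[addr]
        for other in self.cores:             # broadcast: snoop other caches
            if other is not core and addr in other.cache:
                core.cache[addr] = other.cache[addr]
                return core.cache[addr]
        core.cache[addr] = self.memory[addr] # fall back to main memory
        return core.cache[addr]

    def write(self, core, addr, value):
        for other in self.cores:             # broadcast an invalidation
            if other is not core:
                other.cache.pop(addr, None)
        core.cache[addr] = value
        self.memory[addr] = value

cores = [Core("c0"), Core("c1")]
bus = SnoopyBus(cores, memory={0x10: 7})
print(bus.read(cores[1], 0x10))              # 7, fetched from memory
bus.write(cores[0], 0x10, 42)                # stale copy in c1 is invalidated
print(bus.read(cores[1], 0x10))              # 42, re-fetched after the write
```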

Moreover, cache coherence in multi-core chips “is a big problem, and it’s one that gets larger all the time,” added Todd Austin, a professor of electrical engineering and computer science at the University of Michigan. “Their contribution is an interesting one: They’re saying, ‘Let’s get rid of a lot of the complexity that’s in existing networks. That will create more avenues for communication, and our clever communication protocol will sort out all the details.’ It’s a much simpler approach and a faster approach. It’s a really clever idea.”

“One of the challenges in academia is convincing industry that our ideas are practical and useful,” Austin added. “They’ve really taken the best approach to demonstrating that, in that they’ve built a working chip. I’d be surprised if these technologies didn’t find their way into commercial products.”

Sun-Gazing Spacecraft Capture Puffing Sun Giving Birth To A Reluctant Eruption

Royal Astronomical Society (RAS)

A suite of Sun-gazing spacecraft – SOHO, STEREO and the Solar Dynamics Observatory (SDO) – has spotted an unusual series of eruptions in which fast ‘puffs’ force the slow ejection of a massive burst of plasma from the Sun’s corona. The eruptions took place over a period of three days, starting on 17 January 2013. Images and animations of the phenomena were presented at the National Astronomy Meeting 2014 in Portsmouth by Nathalia Alzate on Monday 23 June.

The Sun’s outermost layer, the corona, is a magnetized plasma that has a temperature of millions of degrees and extends millions of kilometers into space. The LASCO C2 coronagraph aboard the SOHO spacecraft observed puffs emanating from the base of the corona and rapidly exploding outwards into interplanetary space. The puffs occurred approximately once every three hours; after about 12 hours, a much larger eruption of material began, apparently eased out by the smaller-scale explosions. By looking at high-resolution images taken by SDO and STEREO over the same time period and in different wavelengths, Alzate and her colleagues at the University of Aberystwyth were able to focus down on the cause of the puffs and the interaction between the small and large-scale eruptions.

“Looking at the corona in Extreme UltraViolet light we see the source of the puffs is a series of energetic jets and related flares,” explained Alzate. “The jets are localized, catastrophic releases of energy that spew material out from the Sun into space. These rapid changes in the magnetic field cause flares, which release a huge amount of energy in a very short time in the form of super-heated plasma, high-energy radiation and radio bursts. The big, slow structure is reluctant to erupt, and does not begin to smoothly propagate outwards until several jets have occurred.”

Because the events were observed by multiple spacecraft, each viewing the Sun from a different perspective, Alzate and her colleagues were able to resolve the 3D configuration of the eruptions. This allowed them to estimate the forces acting on the slow eruption and discuss possible mechanisms for the interaction between the slow and fast phenomena.

“We still need to understand whether there are shock waves, formed by the jets, passing through and driving the slow eruption, or whether magnetic reconfiguration is driving the jets allowing the larger, slow structure to slowly erupt. Thanks to recent advances in observation and in image processing techniques we can throw light on the way jets can lead to small and fast, and/or large and slow, eruptions from the Sun,” said Alzate.

Animation: The jet as viewed by the Extreme Ultra-Violet Imager (EUVI) onboard the STEREO mission. The combination of the two viewpoints enables a good idea of the 3D configuration of the events. Credit: STEREO/U. Aberystwyth.

Using Casual Voice Commands To Program Robots

Gerard LeBlond for www.redorbit.com – Your Universe Online

Robots of today can do a variety of tasks, but they still have to be programmed to do them. The technology that goes into producing these robots has progressed to the point of understanding voice commands. However, if the instructor leaves out any key instructions, the robot may not be able to complete the task effectively.

This could all change in the near future.

Ashutosh Saxena, an assistant professor of computer science at Cornell University, is teaching robots to understand natural language. Such robots are also programmed to account for missing information and to adapt in order to complete the task. Saxena, along with graduate students Dipendra K. Misra and Jaeyong Sung, will be describing the novel methods at the Robotics: Science and Systems conference at the University of California, Berkeley, July 12-16.

The robot’s programming language has typical commands such as find(pan); grasp(pan); carry(pan, water tap); fill(pan, water); and so on. The software translates a human sentence, such as “Fill a pan with water, put it on the stove, heat the water. When it’s boiling, add the noodles,” into that command language. Note that the instructor never gave the command to “turn on the stove”; the software lets the robot fill in the missing step and complete the task correctly.
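
A rough sense of how such a gap can be filled is sketched below. The command names, preconditions and the tiny planner are hypothetical, written only to illustrate the idea; they are not the Cornell group’s actual software:

```python
# Hypothetical sketch of filling in a missing step: if a command's
# precondition is not yet satisfied, insert the action that establishes it.
# Command names and preconditions here are invented for illustration only.

PRECONDITIONS = {
    "heat": ("stove_on", "turn_on_stove"),   # heating requires the stove on
}

def expand(plan, state):
    """Insert any action needed to satisfy a command's precondition."""
    full_plan = []
    for step in plan:
        verb = step.split("(")[0]
        if verb in PRECONDITIONS:
            condition, fix = PRECONDITIONS[verb]
            if condition not in state:
                full_plan.append(fix + "()") # the inferred missing step
                state.add(condition)
        full_plan.append(step)
    return full_plan

spoken = ["find(pan)", "fill(pan, water)", "carry(pan, stove)", "heat(pan)"]
print(expand(spoken, state=set()))
# -> ['find(pan)', 'fill(pan, water)', 'carry(pan, stove)',
#     'turn_on_stove()', 'heat(pan)']
```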

The robot is equipped with a 3D camera and, using the vision software developed in Saxena’s lab, it scans the environment and identifies the objects in its surroundings. The robot has also been programmed to associate various objects with their capabilities: a pan can be poured into or poured from, and a stove can heat objects placed on it. The robot can locate a pan, find a faucet, fill the pan and set it on the stove, and if the command is given to heat the water, it can find a stove or microwave to do so. Even if the pan or the robot is in a different location the next day, it can perform the same task.

Previous attempts to solve the missing-command problem relied on sets of templates for common actions, matched one word at a time. The new approach instead uses machine learning to train the robot’s computer to associate entire commands with flexibly defined actions. Animated video simulations created by humans, similar to playing a video game, are fed into the computer, along with recorded voice commands from several different speakers.

The computer takes that information and stores groups of similar commands, matching them to a variety of outcomes. So when the robot hears “Take the pot to the stove,” “Carry the pot to the stove,” “Put the pot on the stove,” “Go to the stove and heat the pot” and so on, it compares the command it was given to what it has heard before and settles on the most probable match. Once a match is made, the corresponding video supplies a plan of action, and the robot can find the sink and the stove and carry the pot from one to the other.
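
A toy version of that matching step might score a new sentence against previously heard paraphrases using a simple word-overlap measure. The stored commands and the similarity metric below are assumptions made purely for illustration, not the trained model the researchers actually used:

```python
# Toy illustration of matching a spoken command to the closest stored
# paraphrase using word overlap (Jaccard similarity). The real system uses
# a trained machine-learning model; this is only an illustrative sketch.

def jaccard(a, b):
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

STORED = {                                   # hypothetical paraphrase -> action
    "take the pot to the stove": "carry(pot, stove)",
    "put the pot on the stove": "place(pot, stove)",
    "go to the stove and heat the pot": "heat(pot)",
}

def best_match(command):
    return max(STORED.items(), key=lambda kv: jaccard(command, kv[0]))

phrase, action = best_match("carry the pot over to the stove")
print(phrase, "->", action)   # take the pot to the stove -> carry(pot, stove)
```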

The robot was tested by giving it the tasks of preparing ramen noodles and making affogato, an Italian dessert combining coffee and ice cream. The command for the dessert was: “Take some coffee in a cup. Add ice cream of your choice. Finally, add raspberry syrup to the mixture.”

The robot correctly performed the tasks 64 percent of the time, even when the commands were varied or the environment was different, and it was able to account for missing steps. The success rate was three to four times better than that of previous methods, but the researchers said, “There is still room for improvement.”

On the “Tell Me Dave” website, visitors can teach a simulated robot to perform kitchen tasks and add their own input, building a crowdsourced library of instructions for the university’s robots. “With crowdsourcing at such a scale, robots will learn at a much faster rate,” Saxena said.

Visiting researcher Aditya Jami is helping “Tell Me Dave” scale its library of commands up to millions of examples.


Medtronic Reveals They Were Targeted By Hackers, Lost Patient Records

redOrbit Staff & Wire Reports – Your Universe Online
The largest stand-alone medical device manufacturer in the world has revealed that hackers had successfully infiltrated its computers, and that it had lost some patient records in separate incidents last year.
In regulatory documents filed with the US Securities and Exchange Commission (SEC) on Friday, Minneapolis-based Medtronic confirmed that it and two other large medical technology companies had been victimized by cyberattacks originating from Asia, according to Reuters reporter Jim Finkle.
The hackers were unable to gain access to databases which stored patient information, the company said. However, Medtronic went on to confess that it had lost an undisclosed number of records from its diabetes business unit, which markets products such as insulin pumps. The exact nature of the information contained in those files is unknown.
“While we found no evidence of a breach or inadvertent disclosure of the patient records, we were unable to locate them for retrieval,” the company’s 10-K filing said, according to Finkle. Medtronic noted that the US Department of Health and Human Services had questioned company representatives about the loss of the records, and that the agency was provided with information on the problem and the firm’s overall data security measures.
The names of the other cyberattack victims were not disclosed, but according to Jim Spencer and Steve Alexander of the Minneapolis Star-Tribune, previously published reports citing an unidentified source said that Medtronic, Boston Scientific and St. Jude Medical had all been hacked during the first half of 2013. Boston Scientific declined a request to comment on the specifics of the incident, citing security reasons, while St. Jude Medical did not respond to a phone call from the reporters.
No other details about the scale of the attacks were disclosed in the filing, Spencer and Alexander noted. However, Medtronic did report in the filing that it had been contacted by some state attorneys general about whether or not patients needed to be notified about the missing patient records. The company said that it found no evidence of a breach, and that based on its review of the situation, it believed that the patient data had not been compromised.
“When and how to tell people that their personal information may have been compromised has long been a source of debate among corporations, consumers and regulators,” the StarTribune reporters explained. “The matter gained more public interest late last year, when hackers gained access to Target Corporation’s systems and retrieved card data and personal information of tens of millions of customers.”
The Target incident took place in the midst of the 2013 holiday season, one of the busiest and most lucrative times of the year for the retailer, and involved hackers gaining unauthorized access to payment card data. The breach reportedly only affected customers who shopped at one of the company’s 1,797 US stores between November 27 and December 15, and Target advised customers to monitor their accounts for suspicious or unusual activity.
“Target was criticized by some consumer advocates for not moving swiftly enough to inform the public that consumer information was stolen,” Spencer and Alexander said. That criticism likely played a role in Medtronic’s efforts to contact various government officials and agencies to make sure that they are in compliance with patient privacy regulations, Secure Digital Solutions CEO Chad Boeckmann told the reporters.

Researchers Develop Self-powered Artificial Cardiac Pacemaker

The Korea Advanced Institute of Science and Technology (KAIST)

As the number of pacemakers implanted each year reaches into the millions worldwide, improving the lifespan of pacemaker batteries has been of great concern for developers and manufacturers. Currently, pacemaker batteries last seven years on average, requiring periodic replacement surgeries that may expose patients to the risks involved in such medical procedures.

A research team from the Korea Advanced Institute of Science and Technology (KAIST), headed by Professor Keon Jae Lee of the Department of Materials Science and Engineering at KAIST and Professor Boyoung Joung, M.D. of the Division of Cardiology at Severance Hospital of Yonsei University, has developed a self-powered artificial cardiac pacemaker that is operated semi-permanently by a flexible piezoelectric nanogenerator.

The artificial cardiac pacemaker is widely used medical equipment that is implanted in the human body to regulate the heartbeat, delivering electrical stimulation that contracts the cardiac muscle of people who suffer from arrhythmia. However, repeated surgeries to replace pacemaker batteries have exposed elderly patients to health risks such as infections or severe bleeding during operations.

[ Watch The Video: Self-powered Pacemaker ]

The team’s newly designed flexible piezoelectric nanogenerator directly stimulated a living rat’s heart using electrical energy converted from the small body movements of the rat. This technology could facilitate the use of self-powered flexible energy harvesters, not only prolonging the lifetime of cardiac pacemakers but also realizing real-time heart monitoring.

The research team fabricated high-performance flexible nanogenerators utilizing a bulk single-crystal PMN-PT thin film (iBULe Photonics). The harvested output reached up to 8.2 V and 0.22 mA under bending and pushing motions, values high enough to directly stimulate the rat’s heart.
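
For a rough sense of scale, multiplying those two figures gives a peak output on the order of a couple of milliwatts; the calculation below simplistically assumes the quoted peak voltage and peak current occur at the same instant:

```python
# Rough upper bound on instantaneous power, assuming (simplistically) that
# the quoted peak voltage and peak current occur at the same moment.
peak_voltage = 8.2          # volts
peak_current = 0.22e-3      # amperes
print(f"{peak_voltage * peak_current * 1e3:.2f} mW peak")   # ~1.80 mW
```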

Professor Keon Jae Lee said:

“For clinical purposes, the current achievement will benefit the development of self-powered cardiac pacemakers as well as prevent heart attacks via the real-time diagnosis of heart arrhythmia. In addition, the flexible piezoelectric nanogenerator could also be utilized as an electrical source for various implantable medical devices.”

This research result was described in the April online issue of Advanced Materials (“Self-Powered Cardiac Pacemaker Enabled by Flexible Single Crystalline PMN-PT Piezoelectric Energy Harvester”).

Questions Surround Effectiveness Of Veteran PTSD Treatment

April Flowers for redOrbit.com – Your Universe Online
PTSD is one of the biggest problems facing our returning veterans. Time Magazine’s Battleland Blog asserts that 21 percent of the 500,000 post-9/11 troops treated by the VA are being treated for PTSD, not counting veterans of the Vietnam or Korean wars. With so many vets under treatment for PTSD, you would think the US Department of Defense (DOD) and the US Department of Veterans Affairs (VA) would have a firm grip on which of their treatments are working the best.
[ Watch the Video: PTSD Treatment Going Unmeasured By VA And DOD ]
A recent report from the Institute of Medicine (IOM), however, reveals that this is not the case. The report, the second in a two-phase assessment of PTSD services for the military, determined that the DOD and the VA do not measure the effectiveness of treatment for PTSD, nor have they kept pace with the growing demand for PTSD treatment. These findings call into question the millions of dollars spent to improve the mental health of returning servicemen.
“Both departments lack a coordinated, consistent, and well-developed evidence-based system of treatment for PTSD and need to do a better job tracking outcomes,” said Sandro Galea, MD, DrPH, chair of the IOM committee, and chair of the Department of Epidemiology at Columbia University’s Mailman School of Public Health. “Mental health is among the most important factors behind successful re-entry after military service, and we don’t know if treatments are working.”
This report follows a recent scandal at the VA that resulted in the resignation of VA Secretary Eric Shinseki on May 30. Federal investigators found that in a network of more than 1,700 healthcare facilities, veterans were being systematically denied timely care. The facilities, according to the investigation, suffered from inefficiency and bureaucracy.
The IOM report has more comprehensive tallies for the number of veterans being treated for PTSD. According to the results, an estimated five percent of all those being treated in the military health system suffer from PTSD, with a higher rate of eight percent seen among those who served in Iraq and Afghanistan. Between 2003 and 2013, the number of veterans from all eras seeking help for PTSD more than doubled, going from approximately 190,000 (4.3 percent of total VA users) to more than half a million (9.2 percent). Of all the PTSD sufferers treated by the VA in 2012, 23.6 percent were veterans of the Iraq and Afghanistan wars. The VA spent $294 million in 2012 alone to treat PTSD. That number is expected to reach $500 million by 2017 if treatment demands continue to rise at the current rate.
The VA and the DOD already have a host of programs and services in place to help. These range in intensity and are designed to prevent, screen for, diagnose, and treat PTSD in current and former service members who have the disorder or who are at risk for it. The report finds that the current efforts of the DOD are local, ad hoc, incremental, and crisis-driven, with little planning devoted to the development of a long-range approach to obtaining desired outcomes. The VA’s programs, in contrast, have a more unified organizational structure, ensuring more consistency of treatment. Without data analysis to determine which treatments are the most effective, however, neither department has any way of measuring whether the care it provides is effective, or whether the money it spends is resulting in high-quality healthcare.
“Given that the DOD and VA are responsible for serving millions of service members, families, and veterans, we found it surprising that no PTSD outcome measures are used consistently to know if these treatments are working or not,” said Galea. “They could be highly effective, but we won’t know unless outcomes are tracked and evaluated.”
The IOM committee noted an exception: the VA’s specialized intensive PTSD programs. These programs are collecting outcome data, but they only serve one percent of PTSD-suffering veterans. Moreover, the data collected suggests that these programs are only moderately successful in improving the symptoms of the patients.
The IOM report strongly suggests that both departments develop, coordinate and implement a measurement-based PTSD management system. This system should document patients’ progress over the course of treatment, regardless of where they receive treatment, and continue with long-term follow-up using standardized and validated instruments. One example of such validated instruments would be the PTSD checklist, which is one of several reliable and valid self-report measures that could be used to monitor patient progress and guide modification of individual treatment plans.
The report also found that neither department’s strategic efforts necessarily encourage the use of best practices for preventing, screening for, diagnosing, and treating PTSD. Leaders at all levels within the DOD and service branches were found to not be consistently held accountable for implementing policies and programs to manage PTSD effectively. The VA’s central office has established policies for minimum care and PTSD treatment, yet it is unclear if the VA leaders adhere to those policies, encourage staff to follow the guidance, or use the data available from its specialized PTSD programs to improve the way they manage the disorder, according to the IOM.
The committee suggests that both VA and DOD leaders should communicate a strong mandate through their chain of command regarding the high priority that PTSD management and using best practices should have. Holding those leaders, who are responsible for delivering high-quality care for their populations, accountable can also help ensure that information on PTSD programs and services is gathered, measured and reported.
The committee also recommends that both agencies maintain an adequate workforce of mental health professionals. Although both have substantially increased their workforce, they have not kept pace with demand for services. Such shortages of staff could result in clinicians not having adequate time to provide evidence-based psychotherapies.
“There is generally good will and spikes of excellence in both departments. Substantial effort has been made toward providing service members excellent PTSD care. However, there is tremendous variability in how care is implemented and an absence of data that tell us if programs are working or not,” Dr. Galea said.
“In many respects our findings that neither the DOD nor the VA has a system that documents patients’ progress and uses standardized instruments to chart long-term treatment are not surprising,” he added. “We are hopeful that the report will provide a blueprint for where we need to get to.”

New Tool Predicts The Painful ‘COST’ Of Cancer

Brett Smith for redOrbit.com – Your Universe Online

While cancer treatment is well-known for the physical and emotional stress it puts on a patient, a new study has revealed new details about the financial pain that treatment can inflict on a person.

Published in the journal Cancer, the new study resulted in the development of an 11-question tool called the COmprehensive Score for financial Toxicity (COST) that can accurately measure a patient’s risk for, and capacity to withstand, financial stress.

“Few physicians discuss this increasingly significant side effect with their patients,” said study author Dr. Jonas de Souza, a head-and-neck cancer specialist at the University of Chicago Medicine.

“Physicians aren’t trained to do this. It makes them, as well as patients, feel uncomfortable,” he added. “We aren’t good at it. We believe that a thoughtful, concise tool that could help predict a patient’s risk for financial toxicity might open the lines of communication. This gives us a way to launch that discussion.”

The team developed the COST survey by starting with a literature review and a number of extensive in-person interviews. The researchers chatted with 20 patients and six cancer specialists, along with nurses and social workers. These initial conversations resulted in a list of 147 questions. The scientists then whittled the list down to 58 questions. Next, they asked 35 patients which of the remaining questions were the most critical to their situation, resulting in a 30-item list.

“In the end, 155 patients led us, with some judicious editing, to a set of 11 statements,” de Souza said. “This was sufficiently brief to prevent annoying those responding to the questions but thorough enough to get us the information we need.”

The researchers described their 11 questions as short and easy to understand. Questions included asking patients the degree with which they agreed with statements like, “My out-of-pocket medical expenses are more than I thought they would be” or “I am able to meet my monthly expenses.”

Patients who helped devise COST had been in therapy for at least sixty days and had received bills for their care. Excluding the top 10 percent and the bottom 10 percent of earners, participants had incomes between $37,000 and $111,000. The median yearly income for these patients was approximately $63,000.

The researchers said they expected that financial toxicity would be connected with income.

“But in our small sample that did not hold up,” de Souza said. “People with less education seemed to have more financial distress, but variations in income did not make much difference. We need bigger studies to confirm that, but at least we now have a tool we can use to study this.”

The team said they are now expanding their study to confirm these findings and link the newly developed scale with quality of life and stress in cancer patients.

“We need to assess outcomes that are important for patients,” de Souza said. “The cost burden cancer patients experience is definitely one. Measuring this toxicity is the first step towards addressing this important issue.”

“At the end,” he added, “this is another important piece of information in the shared-decision-making process.”

High Hopes For New Hypoallergenic Peanuts

April Flowers for redOrbit.com – Your Universe Online

The fastest-growing allergy in the world right now is to peanuts. Reactions run the gamut from a slightly swollen tongue to severe, potentially fatal anaphylaxis. Food Allergy Research & Education (FARE) says that allergy to peanuts is on the rise in children, while the Centers for Disease Control and Prevention (CDC) estimates that nearly 4 million Americans have the allergy. There may be an answer on the horizon, however.

In a recent study, researchers from North Carolina Agricultural and Technical State University (NC A&T) describe a new, patented process for removing up to 98 percent of allergens from peanuts. Allergens are the substances in organic products that trigger allergic reactions such as swelling or hives. The new process, developed by NC A&T School of Agriculture and Environmental Sciences food and nutrition researcher Dr. Jianmei Yu and two former colleagues, involves soaking de-shelled and roasted peanuts in a solution of food-grade enzymes. Yu stresses that the “new” peanuts are not genetically modified.

“Treated peanuts can be used as whole peanuts, in pieces or as flour to make foods containing peanuts safer for many people who are allergic,” said Yu. “The treated peanuts could even be used in immunotherapy: under a doctor’s supervision, the hypoallergenic peanuts can build up a patient’s resistance to the allergens.”

Two key peanut allergens, Ara h 1 and Ara h 2, are reduced by the new process. Ara h 1 is reduced to undetectable levels, while Ara h 2 is reduced by 98 percent. Researchers used human skin-prick trials, conducted at the University of North Carolina at Chapel Hill, to measure the effectiveness of their method.

NC A&T has signed an exclusive licensing agreement with Xemerge, a Toronto-based firm that specializes in commercializing emerging technologies in food, agriculture, and a variety of other fields.

“This is one of the best technologies in the food and nutrition space we have seen,” said Johnny Rodrigues, Chief Commercialization Officer of Xemerge.

“It checks all the boxes: non-GMO, patented, human clinical data, does not change physical characteristics of the peanut along with maintaining the nutrition and functionality needed, ready for industry integration from processing and manufacturing to consumer products.”

Other attempts to reduce peanut allergens have involved chemicals and irradiation. The NC A&T process uses neither; rather, it employs commonly available food-processing equipment. Dr. Yu is working with Xemerge to further refine the process by testing other food-grade enzymes. Rodrigues says that there is no timetable yet for releasing the hypoallergenic peanuts to grocery stores and food manufacturers.

“We have the FDA, and food manufacturers’ product cycles to factor in,” Rodrigues added. “Remember that in many cases, the peanut processors will be selling to a third party who will be integrating the hypoallergenic peanuts into their branded products.”

Not everyone is convinced, however. Dr. Ruchi Gupta is a pediatrician at Northwestern University in Chicago, and the author of The Food Allergy Experience. She cautions allergy sufferers to be wary of new products.

“I love that people are working on products to improve the lives of people with peanut allergies, but do we need them and will people use them? I think more testing is needed,” she told Reuters Health. “Even a small amount of the allergenic proteins in peanuts can cause very severe allergic reactions.”

Comet Siding Spring Won’t Disrupt Martian Orbiters As It Makes Close Approach Of The Red Planet

April Flowers for redOrbit.com – Your Universe Online
Later this year, comet Siding Spring will have a very close brush with Mars. That brush will be so close, in fact, that it has led scientists to worry about the safety of the three spacecraft currently orbiting the planet. Experts from the University of Maryland (UMD) have used a satellite-mounted telescope to observe the “fresh” comet, however, and they find that it poses minimal danger to the Martian spacecraft.
Instead, the researchers believe that the NASA orbiters will have an unprecedented close-up view of changes occurring to the comet formerly known as C/2013 A1 as it approaches the sun, as well as any effects it might have on the Martian atmosphere.
Fresh comets are those that have never before approached the sun and therefore have not gone through the process of sublimation — or the transformation of frozen material from solid ice to gas. As such, fresh comets have some of the most ancient material scientists can study. The nucleus of the comet, or its solid center, is a lumpy, non-spherical clump of frozen gases mixed with dust, much like a dirty snowball.
“Comet Siding Spring is making its first passage through the inner solar system and is experiencing its first strong heating from the sun,” said UMD assistant research scientist Dennis Bodewits, lead researcher on the UMD astronomy team that used NASA’s Swift satellite to estimate the comet’s size and activity. “Comets like this one, which formed long ago and remained for billions of years in the icy regions beyond Pluto, still contain the primeval building materials of our solar system in their original state.”
When a comet gets too near the sun, sublimation begins and different gases are released. These gases carry large quantities of dust from the nucleus that reflect sunlight and brighten the comet. When the comet reaches a distance of around two and a half times Earth’s distance from the sun (2.5 astronomical units, or AU) it has become warm enough that the majority of gas being released is from water.
The researchers used Swift’s Ultraviolet/Optical Telescope (UVOT) to capture a sequence of images between May 27 and 29. These images depicted Siding Spring cruising through the constellation Eridanus at a distance of about 2.46 AU (229 million miles) from the sun. UVOT is unable to detect water molecules directly; however, it can detect light emitted by the fragments that form when ultraviolet sunlight breaks up water, specifically hydrogen atoms and hydroxyl (OH) molecules.
“Based on our observations, we calculate that at the time of the observations the comet was producing about 2 billion billion billion water molecules, equivalent to about 13 gallons or 49 liters, each second,” said team member Tony Farnham, a senior research scientist at University of Maryland College Park (UMCP). An Olympic-sized swimming pool could be filled in about 14 hours at this rate, but the scientists say this is a modest output compared to other comets observed by Swift.
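Those figures hang together, as a quick back-of-the-envelope check shows. The calculation below assumes the quoted 49 liters per second, the density of liquid water (one kilogram per liter), and a nominal 2,500,000-liter Olympic pool:
```python
# Back-of-the-envelope check of the quoted figures (nominal values assumed).
AVOGADRO = 6.022e23            # molecules per mole
MOLAR_MASS_WATER = 18.0        # grams per mole
OLYMPIC_POOL_LITERS = 2.5e6    # a nominal 50 m x 25 m x 2 m pool

rate_liters = 49.0                                   # quoted output, liters/s
molecules_per_second = rate_liters * 1000 / MOLAR_MASS_WATER * AVOGADRO
hours_to_fill_pool = OLYMPIC_POOL_LITERS / rate_liters / 3600

print(f"{molecules_per_second:.1e} molecules per second")   # ~1.6e27, i.e.
                                                             # roughly 2 billion billion billion
print(f"{hours_to_fill_pool:.0f} hours to fill an Olympic pool")   # ~14
```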
They have estimated the size of Siding Spring to be approximately 2,300 feet across. This places the comet at the lower end of an estimated size range created from earlier observations by other spacecraft.
On October 19, comet Siding Spring will make its closest approach to Mars. It will pass just 86,000 miles above the planet’s surface. This is close enough to allow the gas and dust in the outermost reaches of the comet’s atmosphere, or coma, to interact with the atmosphere of Mars.
The closest recorded comet approach to Earth occurred on July 1, 1770, when the now-defunct comet Lexell passed within 1.4 million miles from our surface. This distance is about six times farther out than the moon. The approach of Siding Spring to Mars will be 16 times closer than this.
The spacecraft orbiting Mars — the Mars Reconnaissance Orbiter (MRO), Mars Express, and the Mars Atmosphere and Volatile EvolutioN (MAVEN) — will be in no danger from Siding Spring, but they will be used to observe the comet during this unprecedented opportunity. The team hopes to learn more about the Martian atmosphere, which is thinner than Earth’s, from this event.
The Swift spacecraft is used to single out and observe comets at distances where they are emitting mostly gases other than water vapor. The researchers follow these new comets as they course through the inner solar system to learn how their activity changes during repeated orbits of the sun. The hope is that these observations will lead to a better understanding of the evolution of our solar system and of the comets that formed along with it roughly 4.6 billion years ago.

Image 2 (below): This composite of C/2013 A1 (Siding Spring) merges Swift UVOT images taken between May 27 and 29, 2014. Sunlight reflected from the comet’s dust, which produces most of the light in this image, appears yellow; violet shows ultraviolet light produced by hydroxyl (OH), a molecular fragment of water. Credit: NASA/Swift/D. Bodewits (UMD), DSS

Asil chicken

The Asil chicken, also known as the Aseel chicken, is a breed of domestic chicken that was developed in the South Punjab/Sindh region of India, where they were used for cock fighting. Because of this, the breed is naturally confrontational, with chicks fighting when they are only a few weeks old and roosters often fighting with each other until one is dead. However, they are kind and willing towards humans. Today, the breed is listed on “watchlist” status by the Livestock Conservancy. Although hens are not efficient at laying eggs, they are known to be good sitters.

There are many varieties of Asil chickens that vary in appearance based on the standards of the country in which they are bred.  Some members of this breed have feathered tufts on their head and beards under their beaks, but these are most often seen in India and Pakistan. Types of this breed include the Madras asil, a large breed with a long tail that was the first fighting chicken in history, the Sindhi Aseel, which is found in Pakistan and is typically red or blue in color, the Amroha, which is small to medium in size and very rare, and the bantam asil, which is a miniature version of the Amroha. Many of these types are red, blue, and white in color, although some can be black or green.

Image Caption: The Vaal Seval from Alanganallur, Madurai. Credit: Vyas16muthu/Wikipedia (CC BY-SA 3.0)

Law Enforcement May Soon Have A New Weapon In The Fight Against Drugged Drivers: Marijuana Breathalyzer

Alan McStravick for www.redorbit.com – Your Universe online
Stories of a marijuana breathalyzer have existed in the same realm as those pee-detecting pool tablets we all heard about as children but never saw. Oh sure, we had friends whose other friends’ parents supposedly used them in their pool, but for many of us, it was just the chance that we might create a cloud of color emanating from our trunks that caused us to think more than twice before letting go and letting it flow.
Imagining just the possibility that a marijuana breathalyzer could exist has the same cooling effect on those who might think about drugging and driving. Current methods of detection rely on the collection of saliva, blood or urine. However, because marijuana lingers in the system long after use, proving that an individual was impaired at the time of driving is very difficult, and conviction rates are therefore staggeringly low.
That could all be about to change.
Kal Malhi, a former member of the Royal Canadian Mounted Police (RCMP) who spent a good deal of his career as a drug enforcement officer, has, with the assistance of two physicians, developed the world’s first marijuana breathalyzer capable of detecting whether or not an individual ingested cannabis in the two hours before being tested. Malhi calls his new invention the Cannabix Breathalyzer.
Digital Journal reports that Malhi believes if it makes it into the hands of law enforcement it will further dissuade people from driving under the influence of marijuana. Partnering with Dr. Raj Attariwala of Vancouver, British Columbia and Florida physician Dr. Bruce Goldberger, Malhi claims he was inspired to develop the device after coming across a Swedish study about breath testing technology.
Law enforcement has stepped up enforcement of drunk-driving laws across much of North America. That enforcement, acting as a deterrent, has lowered DWI arrests and convictions in many localities. “People are becoming very afraid to drink and drive nowadays,” Malhi told CTV News, “because they feel that they will get caught and charged. But they’re not afraid to drug and drive because they don’t feel that law enforcement will do anything about it.”
By presenting law enforcement with a tool that could aid in deterrence, Malhi sees his Cannabix device as an invaluable resource for lowering the overall number of people who choose to get high and then get behind the wheel of an automobile. The Cannabix device, still unpatented, must now be subjected to a battery of field tests to determine its efficacy and its legality as a potential evidence-gathering device against defendants. In the meantime, marijuana advocates at NORML have pointed to the dearth of scientific evidence supporting the need to step up enforcement of drugged-driving arrests and convictions. While they concede there is a brief impairment of psychomotor skills, they contend it is short-lived and usually shows up as a lower vehicle speed and a slightly diminished response time to emergency situations.
In the blog entry on their site they state, “Nevertheless, this impairment does not appear to play a significant role in on-road traffic accidents. A 2002 review of seven separate studies involving 7,934 drivers reported, ‘Crash culpability studies have failed to demonstrate that drivers with cannabinoids in the blood are significantly more likely than drug-free drivers to be culpable in road crashes.’”
On the other side of the argument, the University of Washington’s Alcohol and Drug Abuse Institute highlights some alarming trends about perceptions of relative safety, when compared to alcohol use and driving, with regard to marijuana use by younger people.
Regardless of which side of the fence you are on in this debate, it is important to arm yourself with knowledge presented by both sides in forming your final opinion. While the Cannabix Breathalyzer may or may not be awarded a patent after its field testing, a product will likely be introduced soon to the tool bag used by law enforcement for the detection of recent cannabis use by drivers.

Investigating The Unusual Behavior Of Water In Extremely Cold Conditions

redOrbit Staff & Wire Reports – Your Universe Online

Despite the fact that it covers more than two-thirds of the planet’s surface, and that US households can use as much as 100 gallons of it per day, there are still mysteries to be uncovered about water – as evidenced by a pair of studies published in the June 18 edition of the journal Nature.

In the first paper, researchers from the Department of Energy’s SLAC National Accelerator Laboratory completed the first-ever structural observations of liquid water at temperatures down to negative 51 degrees Fahrenheit – within the so-called “no man’s land” in which some of the compound’s more unusual properties become extremely amplified.

While scientists have previously established that water can remain in its liquid form in the extreme cold, this study marks the first time that anyone has been able to analyze its molecular structures under such conditions. Using the SLAC’s Linac Coherent Light Source (LCLS) X-ray laser, the study authors have made it possible to study H2O under these exotic conditions, while also gaining new insight into how it behaves in its more natural states.

“Water is not only essential for life as we know it, but it also has very strange properties compared to most other liquids,” lead investigator Anders Nilsson, deputy director of the SUNCAT Center for Interface Science and Catalysis (a joint SLAC/Stanford facility), explained in a statement. “Now, thanks to LCLS, we have finally been able to enter this cold zone that should provide new information about the unique nature of water.”

Water, despite possessing a relatively simple molecular structure, is unusual in several ways, according to Nilsson and his colleagues. For example, its liquid form is more dense than its solid form (hence the reason that ice floats), and it is capable of absorbing tremendous amounts of heat. Furthermore, its density profile makes it so that lakes and oceans do not freeze all the way to the bottom, which prevents fish from dying out during the winter months.

When purified water is supercooled, there is nothing to seed the formation of ice crystals, which means that these traits are enhanced and it can remain liquid at far lower temperatures than is usually possible. This so-called “no man’s land” of temperatures ranges from roughly minus 42 to minus 172 degrees Fahrenheit. For decades, researchers have attempted to investigate what happens to water molecules at these temperatures without having to rely on simulations.

“Now the LCLS, with X-ray laser pulses just quadrillionths of a second long, allows researchers to capture rapid-fire snapshots showing the detailed molecular structure of water in this mysterious zone the instant before it freezes,” the SLAC explained.

Using this technology, Nilsson’s team discovered that the structure of a water molecule “transforms continuously” in this temperature range, and that additional cooling causes those changes to “accelerate more dramatically than theoretical models had predicted.”

In the second study, scientists from Princeton University used computer models to study freezing water in the hopes they could discover why ice floats when the majority of liquids crystallize into dense solids that wind up sinking. They discovered that water had a “split personality” of sorts when the temperature is cold enough and the pressure is high enough.

Under those specific conditions, Princeton chemical and biological engineering professor Pablo Debenedetti and his associates learned that water is capable of spontaneously splitting into two different liquid forms, each of which had different densities.

Those forms, the university noted in a statement, can co-exist in much the same way that oil and vinegar do in salad dressing, except that the water separates from itself instead of from a different molecule. If this newly discovered split personality can be replicated in future experiments, it could lead to a better understanding of the behavior of water in the cold temperatures found in high-altitude clouds, where it can exist in a supercooled state before forming snow or hail.

“The new finding serves as evidence for the ‘liquid-liquid transition’ hypothesis, first suggested in 1992 by Eugene Stanley and co-workers at Boston University and the subject of recent debate,” the university said. “The hypothesis states that the existence of two forms of water could explain many of water’s odd properties – not just floating ice but also water’s high capacity to absorb heat and the fact that water becomes more compressible as it gets colder.”

In most liquids, the molecules slow down immensely when they become colder, and eventually form a dense and orderly solid that will sink to the bottom if it is placed in liquid. As previously noted, however, ice floats in water due to the unorthodox behavior of its molecules. There are some areas of lower density (fewer molecules contained within a specific volume) and regions of higher density, and as the temperature falls, the lower-density regions become so prevalent that they seize control of the mixture and form a solid that is less dense than the original liquid.

“The work by the Princeton team suggests that these low-density and high-density regions are remnants of the two liquid phases that can coexist in a fragile, or ‘metastable’ state, at very low temperatures and high pressures,” the university said. Now that scientists have verified the two different forms of water, it could help them develop a theory that addresses how the liquid behaves at other temperatures, ranging from normal to supercooled.

“The research is a tour de force of computational physics and provides a splendid academic look at a very difficult problem and a scholarly controversy,” said Arizona State University professor C. Austen Angell, who was not involved in the research. “Using a particular computer model, the Debenedetti group has provided strong support for one of the theories that can explain the outstanding properties of real water in the supercooled region.”

International Space Station Being Used As A Technology Test Bed

NASA

The International Space Station is critically important to NASA’s future exploration missions. The orbiting outpost provides a platform to test technologies in a long-duration weightless environment – conditions that are impractical to replicate on Earth. NASA’s Space Technology Mission Directorate is utilizing the space station as a test bed for multiple game-changing technology demonstrations.

“The International Space Station is our national laboratory for foundational space technology development,” said Dr. Michael Gazarik, Associate Administrator for the Space Technology Mission Directorate. “The new technologies we fly and test on the station will help create the new capabilities needed for our Asteroid Initiative and our Evolvable Mars Campaign. The International Space Station is an innovation incubator for the advanced space technology that will get us to Mars, and beyond.”

Fluids in Weightlessness – Working in the Microgravity Environment

Fluids behave very differently in the absence of gravity. In the microgravity aboard the space station, NASA’s Space Technology Mission Directorate is planning to conduct two important fluids experiments this year.

The first will use soccer-ball-sized, free-flying satellites known as Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or “SPHERES,” in the SPHERES-Slosh experiment. During the test, two SPHERES robots will be attached to opposite ends of a metal frame holding a plastic tank with colored water. The robots will perform several maneuvers while the distribution of the water is precisely measured using a data acquisition system. The acquired data will improve our ability to predict the distribution of fluid for future exploration missions – information spacefarers can use to monitor fuel or other critical fluid levels and how the fluids interact with machines.

The second experiment is an evaluation of a critical heat transfer technology required for thermal control of future spacecraft. Future exploration missions will be performed in very challenging thermal environments. NASA’s Phase Change Material heat exchanger is designed to maintain a spacecraft temperature within an appropriate temperature range. Because these heat exchangers incorporate a multi-phase fluid, it is important to verify their performance in a microgravity environment. NASA’s Space Technology Mission Directorate is collaborating with the Human Exploration and Operations Mission Directorate to develop two heat exchangers for evaluation on space station. This experiment will help mitigate a critical risk associated with the use of this technology for the Orion Multi-Purpose Crew Vehicle’s future missions.

Enabling Exploration with Robotic Assistance

Forging a permanent human presence in deep space requires extra sets of “eyes” and “hands” that help and protect astronauts. NASA will use a variety of highly capable, versatile and sophisticated robots to investigate worlds beyond our own, complement the work of human astronauts and prepare the way for crewed missions to the furthest reaches of the solar system.

NASA’s Space Technology Mission Directorate has developed climbing legs for the space station’s robotic crew member, Robonaut 2 (R2). The new appendages will provide R2 with the mobility it needs to help with regular and repetitive tasks inside and outside the space station. The performance of these tasks will free up the crew for more critical work, including scientific research. Also in development, the technology directorate’s ISS IntraVehicular Activity free-flyer will autonomously perform several operations aboard the station, including health monitoring, inventory control and ground supervisory control. These robots will be enabled with six degrees-of-freedom navigation across the entire US segment of the station. They will be able to work seamlessly beside human crewmates. A 3D printer scheduled for launch to the space station aboard the Space-X 4 mission, developed and infused into flight testing by the technology directorate, will serve as a critical asset to the crew. The printer will demonstrate in-space manufacturing and the ability to print 3D parts in microgravity, producing on-demand replacement parts when needed.

An External Platform for Technology Demonstration

The International Space Station provides a uniquely continuous view to deep space, allowing NASA to demonstrate another promising technology required for future exploration missions. The Station Explorer for X-Ray Timing and Navigation (SEXTANT) will demonstrate use of X-ray emitting neutron stars as beacons for navigation, guiding spacecraft on their journey into deep space. These pulsars have pulse timing characteristics which enable GPS-like position determination anywhere within the solar system. This technology will lay the groundwork for long-range X-ray communications, and will be launched to the station late in 2016.
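
The navigation principle can be sketched with a toy calculation: each pulsar’s measured pulse-arrival delay, relative to a reference point, gives the spacecraft’s displacement projected along the direction to that pulsar, and three or more such measurements pin down a position in three dimensions. The pulsar directions and delays below are invented purely for illustration; the real SEXTANT experiment relies on precise timing models of actual millisecond pulsars:

```python
# Toy sketch of X-ray pulsar navigation: each pulsar's pulse-arrival delay,
# measured relative to a reference location, gives the spacecraft's position
# projected along the unit vector toward that pulsar:  c * delay_i = n_i . r
# The directions and delays below are invented purely for illustration.
import numpy as np

C = 299_792.458                     # speed of light, km/s

directions = np.array([             # unit vectors toward three fictional pulsars
    [1.0, 0.0, 0.0],
    [0.0, 0.8, 0.6],
    [0.6, 0.0, 0.8],
])

true_position = np.array([1200.0, -350.0, 800.0])    # km from the reference point
delays = directions @ true_position / C              # simulated measurements, seconds

# Recover the position from the measured delays by least squares.
estimate, *_ = np.linalg.lstsq(directions, delays * C, rcond=None)
print(estimate)                                      # ~ [1200. -350.  800.]
```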

Access to Space

Access to space is a critical challenge for demonstrating advanced technologies. Already a staple in the aerospace industry, small satellites called CubeSats can be reliably and affordably deployed from space station. The small size of CubeSats allows for a viable and affordable way to use small spacecraft as platforms for testing and demonstrating technologies that might have more general applications in larger-scale spacecraft and systems. NASA’s technology directorate also is investigating the use of the space station for an advanced entry, descent, and landing technology demonstration. The location and orientation of the station can be used to provide the necessary speed and altitude required to perform important testing in this technology area, critical for success on our path to Mars. NASA is considering the execution of a high-energy reentry flight test on a four-meter inflatable decelerator. This technology would be enabling for missions involving a planetary entry.

“NASA’s Space Technology Mission Directorate will continue to develop technologies for demonstration aboard space station, taking advantage of the amazing capabilities it has to offer,” said Ryan Stephan, program executive for the directorate’s Game Changing Development Program. “The space station is more than a tremendous engineering marvel; it represents an ideal test bed for demonstrating the promising technologies required to enable future exploration missions.”

Chimpanzees In Rwanda Are Imposing A “Natural Tax” On Farmers By Stealing Precious Crops

Trinity College Dublin

Light-fingered chimpanzees are changing the way subsistence farmers make a living in Africa by causing them to grow different crops and spend more time guarding their goods. This is according to work performed by researchers from Trinity College Dublin’s Department of Geography in the School of Natural Sciences, who say that communities near the edge of tropical forests are experiencing a lack of ‘dietary diversity’ and an increased exposure to disease-carrying insects as a result.

Through crop raiding, a form of human-wildlife conflict, hundreds of thousands of marginalized farmers are losing edible crops to damage from these troublesome animals each year. Farmers are reducing their cultivation of maize, beans and other staples, which are highly prized by raiding species. In addition, by guarding their existing crops during the night, farmers are increasingly exposed to malaria carried by mosquitos and soil-based worms which cause elephantiasis.

Despite the positive actions taken by affected farmers working around the Gishwati Forest fragment in western Rwanda, the shifts in farming practice are having a cumulative, negative effect on their communities. The damage might be minor on each occasion, but the losses soon add up, and an increased risk of disease is a major problem.

“Unsurprisingly, non-human primates are quite fond of the food crops we grow! The chimps are basically imposing a ‘natural tax’ on farmers growing crops near the nutrient-rich soils of the forest,” said Shane McGuinness, lead author on the research and PhD student in Geography at Trinity, who conducted the interview-based study with the help of the Great Apes Trust and local conservation workers.

Although their numbers are small in this forest, chimpanzees are an internationally protected species and have the potential to generate substantial amounts of tourism-driven revenue. Sylvain Nyandwi of the Great Apes Trust of Iowa (the organization currently charged with conserving the forest), said that 19 chimps had been identified but there were likely to be more elusive thieves out there that had yet to be accounted for.

Actions to reduce the impact of the chimps must be carefully measured to balance the conservation of the important habitat in which they live, while protecting the lives and livelihoods of local people. Farmers changed the crops they were growing to reduce the risk of crop raiding without needing to be prompted by conservation organizations. McGuinness added: “This is a great, positive step towards proper, community-led conservation. Using local knowledge and appropriate scientific know-how to solve these human-wildlife conflicts is imperative to implementing lasting and robust conflict mitigation.”

Work is now being finalized on a much larger project around the Volcanoes National Park in northern Rwanda, made famous by the film Gorillas in the Mist, where McGuinness is assessing the impacts of mountain gorilla, buffalo and golden monkey on the conservation of this park and the development of surrounding human communities.

A copy of the full journal paper, which was awarded an open-access waiver by the international peer-reviewed journal Human Dimensions of Wildlife to boost exposure, is available here.

A Link Found Between Musical Training And Executive Brain Function

Alan McStravick for www.redorbit.com – Your Universe Online

Musical training and performance programs like Suzuki Strings have long been thought to benefit young students, though little, if any, research has substantively backed up that assertion. With research recently conducted at Boston Children’s Hospital, evidence now exists showing just how beneficial early musical training can be for later academic success and for sustaining executive brain function throughout life.

In the controlled study, the team used functional magnetic resonance imaging (fMRI) to reveal a possible biological link between formal musical training and a boost in brain power. Published online yesterday in the journal PLOS One, the results show that brain areas known to be associated with executive function were more active on fMRI in musicians than in non-musicians. Because formal musical training can be cost-prohibitive, the study adjusted for socioeconomic factors among its participants.

Executive function was the focus of this study because it comprises the high-level cognitive processes that enable an individual to take in and retain information quickly. It also helps regulate behavior and supports good decision making, problem solving, planning and adjusting to changing mental demands.

“Since executive functioning is a strong predictor of academic achievement, even more than IQ, we think our findings have strong educational implications,” says study senior investigator Nadine Gaab, PhD, of the Laboratories of Cognitive Neuroscience at Boston Children’s. “While many schools are cutting music programs and spending more and more time on test preparation, our findings suggest that musical training may actually help to set up children for a better academic future.”

Previous studies on this subject drew a correlation between musical training and cognitive abilities. Far fewer, however, have examined the effect of formal musical training on executive function specifically. Those studies that did focus on executive function produced inconclusive or mixed results and were limited by a lack of objective brain measurements. Additional flaws of earlier work included examining only a few aspects of executive function, poorly defining what constitutes musical training and appropriate control groups, and failing to account for limiting socioeconomic factors.

Participants in the Boston Children’s Hospital study included 15 children who trained in music, all aged between 9 and 12. The definition of formal musical training for this study meant each participant had to have played their instrument for a minimum of two years while being concurrently enrolled in private music lessons. This participant group had, on average, played their instruments for 5.2 years and practiced 3.7 hours per week.

The control group for the musically trained children consisted of 12 untrained children of the same age. Being untrained simply meant they had no formal training beyond any basic introduction to music and instruments that may have been offered in a school setting.

An additional study cohort consisted of 15 professional adult musicians and 15 adult non-musicians.

The adult non-musicians, like the non-musician children, had no formal training in music other than possible basic music introduction that may have been presented in their early school setting.

To account for socioeconomic differences, the researchers matched the musician and non-musician groups on the education, job status and family income of the parents (in the children’s groups) or of the participants themselves (in the adult cohorts). The researchers also matched the groups on IQ by giving all prospective participants a battery of cognitive tests. The children’s brains were also observed via fMRI during testing.

The results showed that both the adult musicians and the musically trained children showed enhanced performance during cognitive testing related directly to several aspects of executive functioning. The fMRI results of the children also showed specific activation of areas in the prefrontal cortex during a cognitive test that required them to switch between different mental tasks. Activity was noted in the supplementary motor area, the pre-supplementary motor area and the right ventrolateral prefrontal cortex. Each of these regions is known to be linked to executive function.

“Our results may also have implications for children and adults who are struggling with executive functioning, such as children with ADHD or [the] elderly,” says Gaab. “Future studies have to determine whether music may be utilized as a therapeutic intervention tool for these children and adults.”

While increased executive function was recognized among the musician groups of both children and adults, the team conceded that it could not be determined if the musical training triggered an increase in executive function abilities or if those abilities, pre-existing in the individuals, attracted them to music and predisposed them to stick with their lessons. To answer this “chicken or the egg” question, the team hopes to conduct further long-term studies that will follow selected children over time, assigning them to musical training at random.

The study, supported by the Grammy Foundation, was primarily authored by Boston Children’s Hospital’s Jennifer Zuk, EdM. Co-investigators on the study were Christopher Benjamin, PhD and Arnold Kenyon of the Laboratories of Cognitive Neuroscience.

Prescription Pain Killer Deaths Outnumber Heroin, Cocaine Overdoses

Rebekah Eliason for redOrbit.com – Your Universe Online

Reports from a new study at McGill University indicate that deaths from commonly prescribed painkillers outnumber deaths from heroin and cocaine overdoses combined. This study, the first of its kind, highlights a huge public health issue.

Deaths from prescription painkillers have increased dramatically in recent years, reaching 16,000 US deaths in 2010. The United States and Canada rank first and second in per capita opioid consumption.

“Prescription painkiller overdoses have received a lot of attention in editorials and the popular press, but we wanted to find out what solid evidence is out there,” says Nicholas King, of the Biomedical Ethics Unit in the Faculty of Medicine.

For this study, King and his team set out to identify and summarize the available evidence by performing a systematic review of the existing literature on painkiller overdose. The team comprehensively surveyed the scientific literature and included only reports containing quantitative evidence.

“We also wanted to find out why thousands of people in the U.S. and Canada are dying from prescription painkillers every year, and why these rates have climbed steadily during the past two decades,” says King. “We found evidence for at least 17 different determinants of increasing opioid-related mortality, mainly, dramatically increased prescription and sales of opioids; increased use of strong, long-acting opioids like Oxycontin and methadone; combined use of opioids and other (licit and illicit) drugs and alcohol; and social and demographic factors.”

Professor King adds, “We found little evidence that Internet sales of pharmaceuticals and errors by doctors and patients – factors commonly cited in the media – have played a significant role.”

According to the researchers, this study suggests a complicated “epidemic” in which physicians, users, the health care system and the social environment all share responsibility.

“Our work provides a reliable summary of the possible causes of the epidemic of opioid overdoses, which should be useful for clinicians and policy makers in North America in figuring out what further research needs to be done, and what strategies might or might not be useful in reducing future mortality,” says King. “And as efforts are made to increase access to prescription opioids outside of North America, our findings might be useful in preventing other countries from following the same path as the U.S. and Canada.”

This study was published in the American Journal of Public Health.

Laser Turned Off By Strange Physics

By Steven Schultz, Princeton University

Inspired by anomalies that arise in certain mathematical equations, researchers have demonstrated a laser system that paradoxically turns off when more power is added rather than becoming continuously brighter.

The finding, by a team of researchers at Vienna University of Technology and Princeton University, could lead to new ways to manipulate the interaction of electronics and light, an important tool in modern communications networks and high-speed information processing.

The researchers published their results June 13 in the journal Nature Communications.

Their system involves two tiny lasers, each one-tenth of a millimeter in diameter, or about the width of a human hair. The two are nearly touching, separated by a distance 50 times smaller than the lasers themselves. One is pumped with electric current until it starts to emit light, as is normal for lasers. Power is then added slowly to the other, but instead of it also turning on and emitting even more light, the whole system shuts off.

“This is not the normal interference that we know,” said Hakan Türeci, assistant professor of electrical engineering at Princeton, referring to the common phenomenon of light waves or sound waves from two sources cancelling each other. Instead, he said, the cancellation arises from the careful distribution of energy loss within an overall system that is being amplified.

“Loss is something you normally are trying to avoid,” Türeci said. “In this case, we take advantage of it and it gives us a different dimension we can use – a new tool – in controlling optical systems.”

The research grows out of Türeci’s longstanding work on mathematical models that describe the behavior of lasers. In 2008 (Ref. 2), he established a mathematical framework for understanding the unique properties and complex interactions that are possible in extremely small lasers – devices with features measured in micrometers or nanometers. Unlike conventional desktop lasers, these devices fit on a computer chip.

That work opened the door to manipulating gain or loss (the amplification or loss of an energy input) within a laser system. In particular, it allowed researchers to judiciously control the spatial distribution of gain and loss within a single system, with one tiny sub-area amplifying light and an immediately adjacent area absorbing the generated light.

Türeci and his collaborators are now using similar ideas to pursue counterintuitive ideas for using distribution of gain and loss to make micro-lasers more efficient.

The researchers’ ideas for taking advantage of loss derive from their study of mathematical constructs called “non-Hermitian” matrices in which a normally symmetric table of values becomes asymmetric. Türeci said the work is related to certain ideas of quantum physics in which the fundamental symmetries of time and space in nature can break down even though the equations used to describe the system continue to maintain perfect symmetry.

Over the past several years, Türeci and his collaborators at Vienna worked to show how the mathematical anomalies at the heart of this work, called “exceptional points,” could be manifested in an actual system. In 2012 the team published a paper in the journal Physical Review Letters demonstrating computer simulations of a laser system that shuts off as energy is being added. In the current Nature Communications paper, the researchers created an experimental realization of their theory using a light source known as a quantum cascade laser.
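
For readers curious what an exceptional point looks like on paper, the two-mode sketch below is a generic, textbook-style illustration of a non-Hermitian coupled-laser model; it is a simplified example under stated assumptions, not the specific equations used by the Vienna and Princeton researchers. Here ω is the shared resonance frequency of the two cavities, κ their coupling, and γ1 and γ2 their individual gain or loss rates:

$$
H = \begin{pmatrix} \omega + i\gamma_1 & \kappa \\ \kappa & \omega + i\gamma_2 \end{pmatrix},
\qquad
\lambda_{\pm} = \omega + i\,\frac{\gamma_1 + \gamma_2}{2} \pm \sqrt{\kappa^2 - \left(\frac{\gamma_1 - \gamma_2}{2}\right)^2}.
$$

The matrix is non-Hermitian whenever either γ1 or γ2 is nonzero. When the gain-loss contrast |γ1 − γ2|/2 grows to equal the coupling κ, the square root vanishes and the two eigenvalues (and their eigenvectors) coalesce: an exceptional point. Near such a point, pumping one cavity harder can reduce the net amplification of the coupled system, consistent with the counterintuitive shut-off described above.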

The researchers report in the article that the results could be of particular value in creating “lab-on-a-chip” devices – instruments that pack tiny optical devices onto a single computer chip. Understanding how multiple optical devices interact could provide ways to manipulate their performance electronically in previously unforeseen ways. Taking advantage of the way loss and gain are distributed within tightly coupled laser systems could lead to new types of highly accurate sensors, the researchers said.

“Our approach provides a whole new set of levers to create unforeseen and useful behaviors,” Türeci said.

The work at Vienna, including creation and demonstration of the actual device, was led by Stefan Rotter at Vienna along with Martin Brandstetter, Matthias Liertzer, C. Deutsch, P. Klang, J. Schöberl, G. Strasser and K. Unterrainer. Türeci participated in the development of the mathematical models underlying the phenomena. The work on the 2012 computer simulation of the system also included Li Ge, who was a post-doctoral researcher at Princeton at the time and is now an assistant professor at City University of New York.

New Study Finds Depression Misdiagnoses In Type 2 Diabetics

Rebekah Eliason for redOrbit.com – Your Universe Online

People suffering from diabetes are known to often struggle with depression as well. According to new research, symptoms of depression among people with type 2 diabetes can be significantly reduced by interventions for “diabetes distress.”

This finding suggests that what doctors usually label as depression may not be a co-morbid psychiatric disorder. Instead, researchers believe the depression may be a reaction to the stress of living with a disease that is complex and difficult to manage.

In a second study, researchers emphasized the importance of treating type 1 diabetes patients for depression regardless of the cause. They discovered that the more signs of depression reported by the patient, the greater the person’s risk of death.

“Because depression is measured with scales that are symptom-based and not tied to cause, in many cases these symptoms may actually reflect the distress that people are having about their diabetes, and not a clinical diagnosis of depression,” said lead author Lawrence Fisher, PhD, ABPP, Professor of Family and Community Medicine at the University of California, San Francisco.

For this study, Fisher and his research team designed diabetes-specific measures of distress in order to reflect the patient’s level of worry regarding the disease. In addition, patients were asked to fill out the Patient Health Questionnaire, which provided a measure of depressive symptoms.

Patients who reported a high amount of distress regarding their disease and also exhibited high levels of depressive symptoms were given one of three treatments designed to reduce stress related to diabetes rather than directly addressing the depressive symptoms.

The first group participated in an online diabetes self-management program. A second group also participated in the online program but additionally received individual assistance to help problem solve the specific issues causing their diabetes distress. The third group of patients received personalized health risk information, which was followed by general educational material about diabetes sent through the mail. Each group also received personal phone calls throughout the study.

Over a twelve-month period, all three interventions significantly reduced distress about the disease and lowered depressive symptoms, and the reductions were maintained throughout the course of the study. Fisher said that overall, 84 percent of those scoring above 10 on the PHQ-8 (maximum 27, with 10 indicating moderate depression) reduced their scores to below 10 following the interventions. The reductions were evenly distributed across the three interventions.

“What’s important about this,” said Fisher, “is that many of the depressive symptoms reported by people with type 2 diabetes are really related to their diabetes, and don’t have to be considered psychopathology. So they can be addressed as part of the spectrum of the experience of diabetes and dealt with by their diabetes care team.”

This study was presented at the American Diabetes Association’s 74th Scientific Sessions.

Embryonic Stem Cells Offer Promising Treatment For Multiple Sclerosis

University of Connecticut

Scientists in the University of Connecticut’s Technology Incubation Program have identified a novel approach to treating multiple sclerosis (MS) using human embryonic stem cells, offering a promising new therapy for more than 2.3 million people suffering from the debilitating disease.

The researchers demonstrated that the embryonic stem cell therapy significantly reduced MS disease severity in animal models, and offered better treatment results than stem cells derived from human adult bone marrow.

The study was led by ImStem Biotechnology Inc. of Farmington, Conn., in conjunction with UConn Health Professor Joel Pachter, Assistant Professor Stephen Crocker, and Advanced Cell Technology (ACT) Inc. of Massachusetts. ImStem was founded in 2012 by UConn doctors Xiaofang Wang and Ren-He Xu, along with Yale University doctor Xinghua Pan and investor Michael Men.

“The cutting-edge work by ImStem, our first spinoff company, demonstrates the success of Connecticut’s Stem Cell and Regenerative Medicine funding program in moving stem cells from bench to bedside,” says Professor Marc Lalande, director of UConn’s Stem Cell Institute.

The research was supported by a $1.13 million group grant from the state of Connecticut’s Stem Cell Research Program that was awarded to ImStem and Professor Pachter’s lab.

“Connecticut’s investment in stem cells, especially human embryonic stem cells, continues to position our state as a leader in biomedical research,” says Gov. Dannel P. Malloy. “This new study moves us one step closer to a stem cell-based clinical product that could improve people’s lives.”

The researchers compared eight lines of adult bone marrow stem cells to four lines of human embryonic stem cells. All of the bone marrow-derived stem cell lines expressed high levels of a cytokine, a signaling protein that stimulates autoimmunity and can worsen the disease. All of the human embryonic stem cell-derived lines expressed little of this inflammatory cytokine.

Another advantage of human embryonic stem cells is that they can be propagated indefinitely in lab cultures and provide an unlimited source of high quality mesenchymal stem cells – the kind of stem cell needed for treatment of MS, the researchers say. This ability to reliably grow high quality mesenchymal stem cells from embryonic stem cells represents an advantage over adult bone marrow stem cells, which must be obtained from a limited supply of healthy donors and are of more variable quality.

“Groundbreaking research like this furthering opportunities for technology ventures demonstrates how the University acts as an economic engine for the state and regional economy,” says Jeff Seemann, UConn’s vice president for research.

The findings also offer potential therapy for other autoimmune diseases such as inflammatory bowel disease, rheumatoid arthritis, and type-1 diabetes, according to Xu, a corresponding author on the study and one of the few scientists in the world to have generated new human embryonic stem cell lines.

There is no cure for MS, a chronic neuroinflammatory disease in which the body’s immune system eats away at the protective sheath called myelin that covers the nerves. Damage to myelin interferes with communication between the brain, spinal cord, and other areas of the body. Current MS treatments only offer pain relief, and slow the progression of the disease by suppressing inflammation.

“The beauty of this new type of mesenchymal stem cells is their remarkably higher efficacy in the MS model,” says Wang, chief technology officer of ImStem.

The group’s findings appear in the current online edition of Stem Cell Reports, the official journal of the International Society for Stem Cell Research. ImStem is currently seeking FDA approval necessary to make this treatment available to patients.

Caffeine Affects Boys And Girls Differently After Puberty

University at Buffalo

Caffeine intake by children and adolescents has been rising for decades, due in large part to the popularity of caffeinated sodas and energy drinks, which now are marketed to children as young as four. Despite this, there is little research on the effects of caffeine on young people.

One researcher who is conducting such investigations is Jennifer Temple, PhD, associate professor in the Department of Exercise and Nutrition Sciences, University at Buffalo School of Public Health and Health Professions.

Her new study finds that after puberty, boys and girls experience different heart rate and blood pressure changes after consuming caffeine. Girls also experience some differences in caffeine effect during their menstrual cycles.

The study, “Cardiovascular Responses to Caffeine by Gender and Pubertal Stage,” will be published online June 16 in the July 2014 edition of the journal Pediatrics.

Past studies, including those by this research team, have shown that caffeine increases blood pressure and decreases heart rate in children, teens and adults, including pre-adolescent boys and girls. The purpose here was to learn whether gender differences in cardiovascular responses to caffeine emerge after puberty and if those responses differ across phases of the menstrual cycle.

Temple says, “We found an interaction between gender and caffeine dose, with boys having a greater response to caffeine than girls, as well as interactions between pubertal phase, gender and caffeine dose, with gender differences present in post-pubertal, but not in pre-pubertal, participants.

“Finally,” she says, “we found differences in responses to caffeine across the menstrual cycle in post-pubertal girls, with decreases in heart rate that were greater in the mid-luteal phase and blood pressure increases that were greater in the mid-follicular phase of the menstrual cycle.

“In this study, we were looking exclusively into the physical results of caffeine ingestion,” she says. Phases of the menstrual cycle, marked by changing levels of hormones, are the follicular phase, which begins on the first day of menstruation and ends with ovulation, and the luteal phase, which follows ovulation and is marked by significantly higher levels of progesterone than the previous phase.

Future research in this area will determine the extent to which gender differences are mediated by physiological factors such as steroid hormone level or by differences in patterns of caffeine use, caffeine use by peers or more autonomy and control over beverage purchases, Temple says.

This double-blind, placebo-controlled, dose-response study was funded by a grant from the National Institute on Drug Abuse of the National Institutes of Health.

It examined heart rate and blood pressure before and after administration of placebo and two doses of caffeine (1 and 2 mg/kg) in pre-pubertal (8- to 9-year-old; n = 52) and post-pubertal (15- to 17-year-old; n = 49) boys (n = 54) and girls (n = 47).
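
To put the weight-based doses in everyday terms, here is a rough, illustrative calculation; the body weights and beverage caffeine contents are typical reference figures chosen for the example, not values reported by the study:

$$
1\ \mathrm{mg/kg} \times 30\ \mathrm{kg} = 30\ \mathrm{mg},
\qquad
2\ \mathrm{mg/kg} \times 60\ \mathrm{kg} = 120\ \mathrm{mg}.
$$

For a roughly 30 kg pre-adolescent at the low dose, that is about the caffeine in a can of cola, while a roughly 60 kg adolescent at the high dose receives a bit more than the caffeine in a typical cup of brewed coffee.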

Detoxification Of Air Pollutants Enhanced By Broccoli Sprout Beverage: Clinical Trial In China

Johns Hopkins University

Findings Could Pave Way for Inexpensive Food-Based Preventive Strategies

A clinical trial involving nearly 300 Chinese men and women residing in one of China’s most polluted regions found that daily consumption of a half cup of broccoli sprout beverage produced rapid, significant and sustained increases in the excretion of benzene, a known human carcinogen, and of acrolein, a lung irritant. Researchers from the Johns Hopkins Bloomberg School of Public Health, working with colleagues at several US and Chinese institutions, used the broccoli sprout beverage to provide sulforaphane, a plant compound already demonstrated to have cancer-preventive properties in animal studies. The study was published in the June 9 online edition of the journal Cancer Prevention Research.

“Air pollution is a complex and pervasive public health problem,” notes John Groopman, PhD, Anna M. Baetjer Professor of Environmental Health at the Johns Hopkins Bloomberg School of Public Health and one of the study’s co-authors. “To address this problem comprehensively, in addition to the engineering solutions to reduce regional pollution emissions, we need to translate our basic science into strategies to protect individuals from these exposures. This study supports the development of food-based strategies as part of this overall prevention effort.”

Air pollution, an increasing global problem, causes as many as seven million deaths a year worldwide, according to the World Health Organization, and has in recent years reached perilous levels in many parts of China. Last year, the International Agency for Research on Cancer classified air pollution and particulate matter (PM) from air pollution as carcinogenic to humans. Diets rich in cruciferous vegetables, of which broccoli is one, have been found to reduce risk of chronic degenerative diseases, including cancer. Broccoli sprouts are a source of glucoraphanin, a compound that generates sulforaphane when the plant is chewed or the beverage swallowed. Sulforaphane acts by increasing enzymes that enhance the body’s capacity to expunge these types of pollutants.

The 12-week trial included 291 participants who live in a rural farming community in Jiangsu Province, China, approximately 50 miles north of Shanghai, one of China’s more heavily industrialized regions. Participants in the control group drank a beverage made of sterilized water, pineapple and lime juice while the beverage for the treatment group additionally contained a dissolved freeze-dried powder made from broccoli sprouts that contained glucoraphanin and sulforaphane. Sixty-two men (21%) and 229 women (79%) with a median age of 53 (ranging from 21 to 65) years were enrolled in the study. Urine and blood samples were taken over the course of the trial to measure the fate of the inhaled air pollutants.

The research team found that among participants receiving the broccoli sprout beverage, the rate of excretion of the carcinogen benzene increased 61% beginning the first day and continuing throughout the 12-week period. In addition, the rate of excretion of the irritant acrolein rapidly and durably increased 23% during the 12-week trial. Secondary analyses by the investigators indicated that the sulforaphane may be exerting its protective actions by activating a signaling molecule, NRF2, that elevates the capacity of cells to adapt to and survive a broad range of environmental toxins. This strategy may also be effective for some contaminants in water and food.

“This study points to a frugal, simple and safe means that can be taken by individuals to possibly reduce some of the long-term health risks associated with air pollution,” notes Thomas Kensler, PhD, professor at the Johns Hopkins Bloomberg School and one of the study’s co-authors. “This while government leaders and policy makers define and implement more effective regulatory policies to improve air quality.”

The clinical trial targeting prevention is notable in that it evaluated a possible means to reduce the body burden of toxins following unavoidable exposures to pollutants. The majority of clinical trials involve treatments of diseases that have already presented or advanced into later stages. Further clinical trials, to evaluate optimal dosage and frequency of the broccoli sprout beverage, are planned in the same general region of China.

“Rapid and Sustainable Detoxification of Airborne Pollutants by Broccoli Sprout Beverage: Results of a Randomized Clinical Trial in China” was written by Patricia A. Egner; Jian Guo Chen; Adam T Zarth; Derek Ng; Jinbing Wang; Kevin H Kensler; Lisa P Jacobson; Alvaro Munoz; Jamie L Johnson; John D Groopman; Jed W. Fahey; Paul Talalay; Jian Zhu; Tao-Yang Chen; Geng-Sun Qian; Steven G. Carmella; Stephen S. Hecht; and Thomas W. Kensler.

This work was supported by the National Institutes of Health (P01 ES006052 and P30 S003819). Safeway, Inc. donated the lime juice used in this study.

New Study Suggests Text Messages Could Help Patients Control Diabetes

redOrbit Staff & Wire Reports – Your Universe Online

Researchers from the Scripps Whittier Diabetes Institute have devised a new text message program which they claim could help patients control their diabetes by sending them information about proper nutritional habits and the benefits of physical activity, along with reminders to check blood sugar on a regular basis.

The program is part of the Dulce Digital study, and initial results suggest a text message-based self-management intervention improved the glycemic control of high-risk Latinos suffering from type 2 diabetes. The researchers reported their findings Friday during a presentation at the 74th Scientific Sessions of the American Diabetes Association.

“The use of mobile phones in health care is very promising, especially when it comes to low-income populations with chronic diseases,” Scripps Whittier Diabetes Institute corporate vice president Dr. Athena Philis-Tsimikas explained in a statement Friday.

“We found that by using text messages we were able to circumvent many of the barriers these patients face, such as lack of transportation or childcare, while still being able to expand the reach of diabetes care and education,” she added.

Dr. Philis-Tsimikas and her colleagues joined forces with a community clinic in the San Diego area known for serving a large number of Latino patients with type 2 diabetes. They recruited 128 participants and randomly placed them in one of two groups: those receiving only standard treatment for the condition (the control group) or those who received text messages in addition to regular care.

The standard treatment program consisted of regular visits with primary care physicians, as well as a brief computerized presentation including nutrition standards for those with diabetes; desired targets for blood sugar, blood pressure and cholesterol levels; and the various medications recommended to help keep the disease in check.

“For the text messaging group, the same standard care was provided but in addition, messages were sent to their mobile devices at random times throughout the week,” the institute said. “Two to three messages were sent each day at the beginning of study enrollment, and the frequency tapered off over a six-month period.”

The content of those text messages included reminders to check blood sugar both before and after physical activity, tips such as using smaller plates to help portions look larger and feel more filling after eating, and prompts to take medications at the same time each day.
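
As a rough illustration of how such a tapering reminder schedule could be generated in software, here is a minimal Python sketch. The week cutoffs, message counts and message texts are all assumptions made for the example; the study does not describe its exact scheduling algorithm.

import random
from datetime import date, timedelta

# Hypothetical message texts, modeled loosely on the examples described above.
MESSAGES = [
    "Reminder: check your blood sugar before and after physical activity.",
    "Tip: a smaller plate can make a healthy portion feel more filling.",
    "Reminder: take your medications at the same time every day.",
]

def weekly_message_count(week: int) -> int:
    """Assumed taper: about 2-3 messages per day at enrollment,
    easing down to a few per week by the end of six months."""
    if week < 4:
        return 21   # roughly 3 per day
    if week < 12:
        return 14   # roughly 2 per day
    if week < 20:
        return 7    # roughly 1 per day
    return 3        # a few per week

def build_schedule(start: date, weeks: int = 26) -> list[tuple[date, str]]:
    """Spread each week's messages across randomly chosen days, matching the
    description of texts arriving at unpredictable times during the week."""
    schedule = []
    for week in range(weeks):
        days = [start + timedelta(days=week * 7 + d) for d in range(7)]
        for _ in range(weekly_message_count(week)):
            schedule.append((random.choice(days), random.choice(MESSAGES)))
    return sorted(schedule)

if __name__ == "__main__":
    # Print the first few scheduled reminders as a sanity check.
    for day, text in build_schedule(date(2014, 6, 1))[:5]:
        print(day.isoformat(), text)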

At the end of the six-month period, Dr. Philis-Tsimikas and her fellow investigators – whose research was supported by the McKesson Foundation – discovered that those participating in the Dulce Digital program experienced a significantly larger decrease in hemoglobin A1c test levels than those receiving only regular care.

“Potential next steps include incorporating text messaging into conventional self-management education programs,” the institute said. Possibilities include having patients meet in one-on-one or group visits, and then having supplemental texts sent as ongoing reminders over the course of the next six months.