redOrbit Staff & Wire Reports – Your Universe Online
Newly-created maps of the material located between the stars in the Milky Way could help astronomers solve a nearly century-long mystery involving stardust, according to a new study published in the August 15 edition of the journal Science.
Researchers from the University of Ljubljana in Slovenia and Johns Hopkins University in Baltimore, along with an international team of colleagues, claim their work demonstrates a new method of uncovering the location – and ultimately, the composition – of the material found in the vast expanse between star systems in a galaxy.
This expanse is known as the interstellar medium, and the material located there includes dust and gas made up of atoms and molecules that remain following the death of a star. It also supplies the building blocks for new stars and planets, Johns Hopkins professor of physics and astronomy Rosemary Wyse explained.
“There’s an old saying that ‘We are all stardust,’ since all chemical elements heavier than helium are produced in stars,” Wyse said. “But we still don’t know why stars form where they do. This study is giving us new clues about the interstellar medium out of which the stars form.”
Wyse, along with lead investigators Janez Kos and Tomaz Zwitter of the University of Ljubljana and their fellow scientists, focused on mysterious features in the light emitted by stars known as diffuse interstellar bands (DIBs). DIBs were discovered in 1922 by a graduate student whose photographs showed dark lines indicating that some of the starlight was apparently being absorbed by something in the interstellar medium between Earth and that star.
While astronomers have gone on to identify over 400 diffuse interstellar bands, the materials that cause them to appear and their exact location remain unknown, the researchers explained. There has been some speculation that the absorption of starlight responsible for creating these dark bands indicates the presence of unusually large and complex molecules in the interstellar medium, but evidence to support these claims has been difficult to find.
The study authors believe it is important to determine the nature of this puzzling material because it could provide valuable information about the physical nature and chemical composition of these interstellar regions. Details such as these are essential components of theories pertaining to the formation of stars and galaxies.
“In a completely new approach to understanding DIBs, we combined information from nearly 500,000 stellar spectra obtained by the massive spectroscopic survey RAVE (Radial Velocity Experiment) to produce the first pseudo-three-dimensional map of the strength of the DIB at 8620 angstroms covering the nearest 3 kiloparsecs from the Sun,” the authors wrote. They found that the DIB 8620 carrier has “a significantly larger vertical scale height” than the dust.
[ Watch: RAVE survey animation ]
These new pseudo-3D maps could help solve the mystery. The maps were created by a team of 23 scientists who reviewed data on 500,000 stars collected by RAVE over a 10-year period. The project also required use of the UK Schmidt Telescope in Australia to collect spectroscopic information from the light of as many as 150 stars at once.
“The maps are described as ‘pseudo-3D’ because a specific mathematical form was assumed for the distribution in the vertical dimension that provides the distances from the plane of the Milky Way, with the maps presented in the remaining two dimensions,” the university said, adding that the sample size “enabled the mapmakers to determine the distances of the material that causes the DIBs and thus how the material is distributed” throughout the galaxy.
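The paper’s exact fitted form is not reproduced in the release, but a “scale height” conventionally describes an exponential fall-off of density with distance from the Galactic plane. Purely as an illustration of the kind of vertical profile such an analysis assumes (not the specific function used by Kos and colleagues):

```latex
% Illustrative only: a generic exponential disk profile of the sort a
% "scale height" h describes, not the exact form fitted in the paper.
\[
  \rho(z) \;\propto\; \exp\!\left(-\frac{|z|}{h}\right)
\]
% Here z is the distance from the Galactic plane; a larger h for the DIB
% 8620 carrier than for the dust means the carrier extends farther above
% and below the plane.
```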
Future research could use the techniques utilized by Wyse and her colleagues to create other maps, which should provide more insight into the mysteries surrounding DIB locations and the materials responsible for causing them. “To figure out what something is, you first have to figure out where it is, and that’s what this paper does,” Wyse said. “Larger surveys will provide more details in the future. This paper has demonstrated how to do that.”
Involuntary Eye Movement A Foolproof Indication For ADHD Diagnosis
American Friends of Tel Aviv University
TAU researchers develop diagnostic tool for the most commonly misdiagnosed disorder
Attention Deficit Hyperactivity Disorder (ADHD) is the most commonly diagnosed — and misdiagnosed — behavioral disorder in children in America, according to the Centers for Disease Control and Prevention. Unfortunately, there are currently no reliable physiological markers to diagnose ADHD. Doctors generally diagnose the disorder by recording a medical and social history of the patient and the family, discussing possible symptoms and observing the patient’s behavior. But an incorrect evaluation can lead to overmedication with Ritalin (methylphenidate), which has parents everywhere concerned.
Now a new study from Tel Aviv University researchers may provide the objective tool medical professionals need to accurately diagnose ADHD. According to the research, published in Vision Research, involuntary eye movements accurately reflect the presence of ADHD, as well as the benefits of medical stimulants that are used to treat the disorder.
Keeping an eye on the eyes
Dr. Moshe Fried, Dr. Anna Sterkin, and Prof. Uri Polat of TAU’s Sackler Faculty of Medicine, Dr. Tamara Wygnanski-Jaffe, Dr. Eteri Tsitsiashvili, Dr. Tamir Epstein of the Goldschleger Eye Research Institute at Sheba Medical Center, Tel Hashomer, and Dr. Yoram S. Bonneh of the University of Haifa used an eye-tracking system to monitor the involuntary eye movements of two groups of 22 adults taking an ADHD diagnostic computer test called the Test of Variables of Attention (TOVA). The exercise, which lasted 22 minutes, was taken twice by each participant. The first group of participants, diagnosed with ADHD, initially took the test un-medicated and then took it again under the influence of methylphenidate. A second group, not diagnosed with ADHD, constituted the control group.
“We had two objectives going into this research,” said Dr. Fried, who as an adult was himself diagnosed with ADHD. “The first was to provide a new diagnostic tool for ADHD, and the second was to test whether ADHD medication really works — and we found that it does. There was a significant difference between the two groups, and between the two sets of tests taken by ADHD participants un-medicated and later medicated.”
Foolproof, affordable, and accessible diagnosis
The researchers found a direct correlation between ADHD and the inability to suppress eye movement in the anticipation of visual stimuli. The research also reflected improved performance by participants taking methylphenidate, which normalized the suppression of involuntary eye movements to the average level of the control group.
“This test is affordable and accessible, rendering it a practical and foolproof tool for medical professionals,” said Dr. Fried. “With other tests, you can slip up, make ‘mistakes’ — intentionally or not. But our test cannot be fooled. Eye movements tracked in this test are involuntary, so they constitute a sound physiological marker of ADHD.
“Our study also reflected that methylphenidate does work. It is certainly not a placebo, as some have suggested.”
The researchers are currently conducting more extensive trials on larger control groups to further explore applications of the test.
RNA-Targeted Drug Candidate For Lou Gehrig’s Disease Found
By targeting RNA molecules that tangle and clump in the nervous systems of patients with the most common genetic form of amyotrophic lateral sclerosis (ALS or Lou Gehrig’s disease) and frontotemporal dementia (FTD), researchers have shown they can effectively limit those damaging elements in cells taken from patients. The results reported in the Cell Press journal Neuron on August 14th show that RNA is a viable drug target for the two overlapping and incurable neurodegenerative diseases. The abnormal proteins derived from that aberrant RNA might also serve as biomarkers in clinical trials to test the new ALS and FTD drug candidate and otherwise monitor the diseases, the new study finds.
ALS is caused by a progressive loss of motor neurons, leading to severe impairment of mobility, speech, swallowing, and respiratory function that is usually fatal within two to five years, the researchers explained. In FTD, brain regions that support higher cognitive function are affected instead, producing disabling changes in behavior, personality, and language. At present, no effective treatment is available for either condition, but a better understanding of the underlying disease processes should expedite the development of effective therapies, say Leonard Petrucelli of the Mayo Clinic and Matthew Disney of The Scripps Research Institute.
Indeed, Petrucelli, Disney, and their colleagues say that they were pleasantly surprised at how quickly they were able to identify a small molecule capable of interrupting the disease process in cells carrying the C9ORF72 mutation. The disease variant is characterized by an expansion of repetitive DNA, which leads to the production of lengthy RNA tangles and the translation of abnormal proteins, both of which can apparently “set in motion distinct toxic chains of events that ultimately lead to neuronal death.”
When the neurodegeneration targets motor neurons, the result is ALS. When it happens in brain regions that support higher cognitive function, then FTD develops. In some cases, affected people show signs of both conditions, which made it clear to Petrucelli and Disney that the aberrant RNA molecules were attractive targets for fighting the diseases.
In search of promising small molecules for the job, the research team first examined the shape of the culprit RNA. The goal then was to identify compounds capable of binding to it, and they got lucky.
“We found that the RNA repeat that causes c9FTD/ALS has some structural similarities to the RNA that causes fragile X-associated tremor ataxia syndrome, to which we already identified a small molecule that could affect dysfunction,” Disney said.
By testing that compound and others that were chemically similar, they rather quickly identified two that selectively target c9FTD/ALS repeat RNA in cells. One of the two compounds cut the accumulation of RNA tangles in cell lines derived from ALS patients in half, they report, with no apparent toxicity. Petrucelli and Disney are now working on ways to improve it even further.
Importantly, the researchers also found that they could detect aberrant proteins in the spinal fluid of ALS patients with the C9ORF72 mutation, which might be useful when it comes to testing their compound or others in clinical trials.
“A decrease in the levels of c9RAN proteins in patient blood or cerebrospinal fluid in response to treatment would demonstrate the drug is working,” Petrucelli said. “While additional studies must be done, this finding suggests that c9RAN proteins may provide a direct means to measure a patient’s response to experimental drugs that target abnormal repeat RNA.”
University Looks To Create Crowdsourced Atlas Using Nighttime ISS Imagery
redOrbit Staff & Wire Reports – Your Universe Online
NASA has compiled over a million pictures of Earth captured from the International Space Station, and now researchers are looking for your help in cataloging the nearly one-third of those images that were taken at night.
Those photos, which the US space agency calls the highest-resolution nighttime imagery available from orbit, are limited in their usefulness because it can be unclear exactly what they are depicting, explained NBC News reporter Alan Boyle.
To rectify the situation, Complutense University of Madrid (UCM) has established a crowdsourcing project to help catalog those ISS photographs. Researchers at the university have developed a website containing over 1,700 nocturnal images taken from locations all over the world since 2003, and have translated it into 13 languages so that a variety of citizen scientists can assist with the identification effort.
Boyle explained that the UCM initiative, known as Cities at Night, is divided into three parts. One part of the project asks Web surfers to categorize images into pictures of cities, stars and other objects. Another requires geographical knowledge to match bright points of light to different locations on a map, while a third challenge asks citizen scientists to identify cities in wide-angle images captured at night.
“Anyone can help” with the first part of the project, which has been dubbed Dark Skies of ISS, explained UCM doctoral student Alejandro Sanchez. “In fact, without the help of citizens, it is almost impossible to use these images scientifically. Algorithms cannot distinguish between stars, cities, and other objects, such as the moon. Humans are much more efficient for complex image analysis.”
The second part of Cities at Night, called Night Cities, is based on the notion that a person residing in a particular city would have an easier time identifying its features than someone living halfway across the world, Sanchez said. This data will be used to generate light maps of cities.
The third part, called Lost at Night, is looking for people to review images encompassing an area roughly 310 miles across, armed only with the knowledge of the space station’s location at the time when the picture was taken. “We don’t know which direction the astronaut pointed the camera,” Sanchez said, comparing it to “a puzzle with 300,000 pieces.”
To date, hundreds of volunteers have classified almost 20,000 images, NASA said. However, to ensure maximum accuracy, each image should be classified by multiple individuals. One of the project’s goals is to determine the optimum number of people required to analyze each image, but the primary goal is to create an open atlas of nighttime images that can be used by scientists, reporters and the general public at any time.
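The article does not say how those overlapping classifications are combined, but the basic idea of requiring several independent classifications per image can be illustrated with a simple consensus rule. The sketch below is purely hypothetical: the function name, the minimum of three votes, and the two-thirds agreement threshold are assumptions for illustration, not the project’s actual algorithm.

```python
from collections import Counter

def consensus_label(votes, min_votes=3, agreement=2/3):
    """Combine several volunteers' labels for one nighttime ISS image.

    votes: a list of labels such as "city", "stars" or "other".
    Returns the majority label once enough volunteers agree closely
    enough; otherwise None, meaning the image needs more classifications.
    """
    if len(votes) < min_votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= agreement else None

# Example: four volunteers classify the same frame.
print(consensus_label(["city", "city", "stars", "city"]))  # -> city
print(consensus_label(["city", "stars"]))                  # -> None (too few votes)
```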
“The project could open up new fields of research,” Boyle said. “Nighttime satellite readings already have been used to chart the rise and fall of political leaders – and there are few better illustrations of the economic disparity between North and South Korea than the space station’s picture of the peninsula’s dark patch. Pinpointing the locations in NASA’s nighttime pictures could help scientists track energy efficiency as well as the health and environmental effects of light pollution.”
Other potential applications for the atlas, which is a joint project of UCM, MediaLab-Prado, Spanish Light Pollution Research Network, European Cooperation in Science and Technology’s Action Loss of the Night Network, Crowdcrafting, Celfosc and AstroMadrid, could include “evaluating lighting for road and public safety and correlating light pollution with effects on human health and biodiversity,” added NASA.
Thousand Robot Flash Mob: Harvard Researchers Develop Swarming, Self-Organizing Machines
redOrbit Staff & Wire Reports – Your Universe Online
Researchers from the Harvard University School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering have created the first-ever thousand-robot flash mob, according to research appearing Friday in the journal Science.
Lead author Michael Rubenstein, a research associate at SEAS and the Wyss Institute, and his colleagues developed the self-organizing swarms of machines, which followed simple programmed rules delivered to them via infrared light to form complex shapes such as a five-pointed star or the letter ‘K’.
According to the research team, the robots, which are only a few centimeters long and move about on three pin-like legs, provide a simple platform for the enactment of complex behaviors. Known as Kilobots, they represent “a significant milestone in the development of collective artificial intelligence (AI),” the university explained in a statement.
Rubenstein, Alejandro Cornejo and Radhika Nagpal created the Kilobots after being inspired by natural swarms, in which simple creatures combine their efforts and cooperate to complete great tasks, explained Ed Yong of National Geographic. For example, thousands of fire ants can unite into living bridges or rafts, and billions of neurons can come together to create something as complex as the human brain.
[ Watch the robots in action ]
“Scientists have tried to make artificial swarms with similar abilities, but building and programming them is expensive and difficult. Most of these robot herds consist of a few dozen units, and only a few include more than a hundred. The Kilobots smash that record,” Yong added. “They’re still a far cry from the combiner robots of my childhood cartoons… But they’re already an impressive achievement.”
“This is a staggering work,” Iain Couzin, who studies collective animal behavior at Princeton University, told the National Geographic reporter on Thursday. “It offers a vision of the future where robot groups could form structures on demand as, for example, in search-and-rescue in dangerous environments, or even the formation of miniature swarms within the body to detect and treat disease.”
The Kilobots are powered by just two coin-cell batteries and require only an initial set of instructions to begin their work, said CNET’s Michelle Starr. Once they are told what shape to create, no additional human intervention is required – four of the 1,024 robots mark the origin of a shared coordinate system, while the remaining robots are given a two-dimensional image to form. They then assemble into that shape, guided solely by gauging the positions of their fellow Kilobots.
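The published control scheme combines several behaviors (gradient formation, edge-following and distributed localization); the snippet below is only a minimal sketch of the localization idea, in which a robot estimates its own coordinates from distance measurements to neighbors that are already localized, anchored ultimately by the four seed robots. All names and parameters here are invented for illustration and are not the team’s actual code.

```python
import math

def refine_position(guess, neighbors, iterations=200, step=0.1):
    """Crude distance-based self-localization sketch.

    guess:     (x, y) initial estimate of this robot's position.
    neighbors: list of ((x, y), measured_distance) pairs for robots that
               are already localized, ultimately anchored by the four
               seed robots that define the coordinate system.
    The estimate is repeatedly nudged so that its distances to the
    localized neighbors match the measured distances.
    """
    x, y = guess
    for _ in range(iterations):
        for (nx, ny), d_meas in neighbors:
            dx, dy = x - nx, y - ny
            d_est = math.hypot(dx, dy) or 1e-9
            err = d_est - d_meas           # positive means the estimate is too far away
            x -= step * err * dx / d_est   # move along the line to the neighbor
            y -= step * err * dy / d_est
    return x, y

# Example: three localized neighbors, all measured sqrt(2) away;
# the only consistent position is (1.0, 1.0).
neighbors = [((0.0, 0.0), math.sqrt(2)),
             ((2.0, 0.0), math.sqrt(2)),
             ((0.0, 2.0), math.sqrt(2))]
print(refine_position((0.1, 0.1), neighbors))  # converges close to (1.0, 1.0)
```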
“These robots are much simpler than many conventional robots, and as a result, their abilities are more variable and less reliable,” Rubenstein told Starr, noting that while the robots sometimes “have trouble moving in a straight line, and the accuracy of distance sensing can vary,” the strength of the swarm typically helps them overcome their individual flaws. The technology could be used in the construction, agriculture, medicine, and mining industries.
One limitation, Rubenstein told BBC News science reporter Jonathan Webb, is that the process takes a considerable amount of time. The Kilobots need between six and 12 hours to form the shape of a programmed image, and it takes another few hours to collect the robots once they are done. In fact, Rubenstein told Webb that “watching the experiment run is like watching paint dry.”
Image 2 (below): The Kilobots, a swarm of one thousand simple but collaborative robots. Credit: Photo courtesy of Mike Rubenstein and Science/AAAS
Newly Discovered Winged Reptile Had A Head Crest Like A Butterfly Wing
redOrbit Staff & Wire Reports – Your Universe Online
Researchers have discovered nearly 50 bones belonging to a new species of winged reptile with a bizarre head crest resembling a butterfly wing or a yacht sail, various media outlets reported earlier this week.
The new creature, identified as Caiuajara dobruskii, lived in southern Brazil during the Cretaceous period. The remains were discovered in a pterosaur bone bed from an interdunal lake deposit from an ancient desert, and it represents the southernmost occurrence of pterosaur bones ever discovered, the study authors wrote Wednesday in the journal PLOS ONE.
Following an initial analysis of the bones, paleontologist Alexander Kellner of the National Museum at the Federal University of Rio de Janeiro and his colleagues determined the bones represented a new pterosaur that differed from other members of its clade. For example, it possessed a bony expansion projecting inside the large opening in the skull in front of its eyes, as well as rounded depressions in the outer surface of its jaw.
The bones discovered by Kellner’s team belonged to both juveniles and adults, with wingspans ranging from two to eight feet, explained Deborah Netburn of the Los Angeles Times. The variety of remains will allow scientists to determine how the species fits into its clade, as well as how it developed as it matured.
For example, they found age-related variation in the size and angle of the creature’s bony head crest. In juveniles, it appears to have been small and inclined, but it grew larger and steeper during adulthood. The researchers said that the bone analysis suggests that the species was gregarious, lived in colonies and may have started flying at an exceptionally young age, which could have been an overall trend for many types of pterosaurs.
“This helps us to have a glimpse on the anatomical variation achieved by this species from young to old,” Kellner told Reuters reporter Will Dunham. He added that the unique triangular crest on its head was “bizarre” and resembled a “bony sail,” while Tia Ghose of LiveScience referred to the shape of its head as resembling “the wings of a butterfly.”
As the first flying vertebrates, pterosaurs roamed the Earth from approximately 220 million years ago to 65 million years ago, when they were obliterated by the same asteroid that also wiped out the dinosaurs, Dunham said. Caiuajara dobruskii itself is believed to have lived between 80 million and 90 million years ago, and Kellner said that it was a toothless species that sustained itself on a diet of fruit.
Since the fossils were found in what had once been an inland lake in a desert, Netburn said the study authors “think it is possible that these crested flying lizards may have lived in colonies around the lake, or perhaps, if they were migratory, stopped there for water after long flights.”
“As for what killed them, the researchers still don’t know,” she added. “Desert storms and drought-related mortality are both possibilities. However, it should be noted that it does not seem that all the fossilized Caiuajara died at once, which suggests this area had been home to these flying lizards for a long stretch of time.”
Analysis Of Stardust Mission Particles Reveals First Potential Interstellar Space Dust
redOrbit Staff & Wire Reports – Your Universe Online
Seven rare and microscopic particles of space dust collected by instruments onboard NASA’s Stardust mission could be the first samples of contemporary interstellar dust ever obtained by scientists, an international team of researchers report in Friday’s edition of the journal Science.
The samples, which were gathered by Stardust’s aerogel and aluminum foil dust collectors, likely originated from outside of our solar system. The authors believe the particles could have been created by a supernova explosion millions of years ago, and were then altered after being exposed to the extreme conditions of their environment.
Scientists have been analyzing the space dust particles since they first arrived on Earth in 2006, and have found that the particles have a far more complex structure and composition than previously thought possible. In addition to the new Science study, several other papers detailing the research are currently scheduled to appear next week in the journal Meteoritics & Planetary Science.
“These are the most challenging objects we will ever have in the lab for study, and it is a triumph that we have made as much progress in their analysis as we have,” explained Michael Zolensky, co-author of the Science paper and curator of the Stardust laboratory at the NASA Johnson Space Center in Houston.
“Fundamentally, the solar system and everything in it was ultimately derived from a cloud of interstellar gas and dust,” added lead author Andrew Westphal, physicist at the University of California, Berkeley’s Space Sciences Laboratory. “We’re looking at material that’s very similar to what made our solar system.”
Stardust was launched in 1999 and returned to Earth in January 2006. Once it arrived back on the planet, its sample return canister was transported to Zolensky’s facility at Johnson, where the samples were preserved and protected for noninvasive research. The analysis was conducted at multiple facilities, including the Lawrence Berkeley National Lab in California, and emphasized preserving the structural and chemical properties of the particles.
While the astronomers believe the dust samples originated from beyond our solar system, that cannot be confirmed until tests are conducted that will ultimately destroy some of the particles. Thus far, though, Westphal said that they have “limited the analyses on purpose” due to the “precious” nature of the samples. He added that they are taking the time “to think very carefully about what we do with each particle.”
The scientists report that the particles are far more diverse chemically and structurally than they had expected, as the smaller ones differ greatly from the larger ones and appear to have a different history. Many of the larger ones were described as being fluffy and resembling a snowflake, they noted.
Two of the particles, each about two microns (thousandths of a millimeter) in diameter, were isolated from the others after a team of citizen scientists discovered their tracks. Those individuals scanned over one million images as part of a volunteer project at the University of California, Berkeley. A third track was left behind by a particle that appeared to be moving more than 10 miles per second – so fast that it wound up vaporizing, according to NASA.
Four particles featured in the Science study were discovered in aluminum foils between tiles on the collector tray, the US space agency added. While those foils were not designed to serve as dust collection surfaces, Rhonda Stroud of the Naval Research Laboratory and her colleagues searched them, identifying a quartet of pits lined with material composed of elements said to fit the profile of interstellar dust particles.
Three of those four particles contained sulfur compounds, which some experts claim are not found in particles of interstellar dust. NASA said that a preliminary examination team plans to continue analyzing the remaining 95 percent of the foils, hoping to discover enough particles to understand the variety and origins of interstellar dust.
“Almost everything we’ve known about interstellar dust has previously come from astronomical observations – either ground-based or space-based telescopes,” said Westphal. “The analysis of these particles captured by Stardust is our first glimpse into the complexity of interstellar dust, and the surprise is that each of the particles are quite different from each other.”
Image 2 (below): The largest interstellar dust track found in the Stardust aerogel collectors was this 35 micron-long hole produced by a 3 picogram speck of dust that was probably traveling so fast that it vaporized upon impact. The other two likely interstellar dust grains were traveling more slowly and remained intact after a soft landing in the aerogel. Credit: Andrew Westphal, UC Berkeley
Origami Inspires Foldable Spacecraft Components And Space-Based Solar Power Arrays
Elizabeth Landau, Jet Propulsion Laboratory
As a high school student at a study program in Japan, Brian Trease would fold wrappers from fast-food cheeseburgers into cranes. He loved discovering different origami techniques in library books.
Today, Trease, a mechanical engineer at NASA’s Jet Propulsion Laboratory in Pasadena, California, thinks about how the principles of origami could be used for space-bound devices.
“This is a unique crossover of art and culture and technology,” he said.
Trease partnered with researchers at Brigham Young University in Provo, Utah, to pursue the idea that spacecraft components could be built effectively by implementing origami folds. Shannon Zirbel, a doctoral student at BYU, spent two summers at JPL working on these ideas, supported by the NASA Technology Research Fellowship, with Trease as her research collaborator.
Researchers say origami could be useful one day in utilizing space solar power for Earth-based purposes. Imagine an orbiting power plant that wirelessly beams power down to Earth using microwaves. Sending the solar arrays up to space would be easy, Trease said, because they could all be folded and packed into a single rocket launch, with “no astronaut assembly required.”
Panels used in space missions already incorporate simple folds, collapsing like a fan or an accordion. But Trease and colleagues are interested in using more intricate folds that simplify the overall mechanical structure and make for easier deployment.
Last year, Zirbel and Trease collaborated with origami expert Robert Lang and BYU professor Larry Howell to develop a solar array that folds up to be 8.9 feet (2.7 meters) in diameter. Unfold it, and you’ve got a structure 82 feet (25 meters) across. Their 1/20th-scale tabletop prototype expands to a deployed diameter of 4.1 feet (1.25 meters).
One technique that has been used for an origami-inspired solar array is called a Miura fold. This well-known origami fold was invented by Japanese astrophysicist Koryo Miura. When you open the structure, it appears to be divided evenly into a checkerboard of parallelograms.
With this particular fold, there’s only one way to open or close it: Pull on one corner and — voila — the whole thing is open with minimal effort. The mechanical structure of a device that folds this way is greatly simplified because only one input is required to deploy it.
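As an illustration of that checkerboard geometry, the flat Miura crease pattern can be generated from just three numbers: the two parallelogram side lengths and the angle between them. The short sketch below is hypothetical illustration code, not software from the JPL/BYU project; shifting alternate rows of vertices is what turns one family of creases into zigzags, and in the folded sheet a single fold input then deploys the whole pattern.

```python
import math

def miura_vertices(rows, cols, a=1.0, b=1.0, gamma=math.radians(60)):
    """Vertex grid of a flat Miura-ori crease pattern.

    Every face is a congruent parallelogram with side lengths a and b
    meeting at angle gamma. Alternate rows of vertices are shifted
    sideways by a*cos(gamma), which makes one family of creases zigzag
    while the other family stays straight.
    """
    dx, dy = a * math.cos(gamma), a * math.sin(gamma)
    return [[(j * b + (i % 2) * dx, i * dy) for j in range(cols + 1)]
            for i in range(rows + 1)]

# Creases: join horizontal neighbors within a row (the straight family)
# and vertical neighbors between rows (the zigzag family).
for row in miura_vertices(rows=4, cols=6):
    print(["(%.2f, %.2f)" % p for p in row])
```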
Miura intended this fold for solar arrays, and in 1995 a solar panel with this design was unfolded on the Space Flyer Unit, a Japanese satellite. Despite this test, the technology is still in its early stages. But now, with an emphasis on small satellites and large structures, Trease says arrays inspired by this fold could see renewed usefulness.
“The fact that we’re going both bigger and smaller may open up domains where it may be relevant again,” Trease said.
The fold that Trease and colleagues used is not a Miura fold, but rather a combination of different folds. Trease’s prototype looks like a blooming flower that expands into a large flat circular surface.
Trease envisions that foldable solar arrays could be used in conjunction with small satellites called CubeSats. And he says the origami concept could be used in antennas as well. It could be especially appropriate for spacecraft applications where it’s beneficial to deploy an object radially — that is, from the center, outward in all directions.
Origami was originally intended for folding paper, which has almost no thickness, so Trease and colleagues had to be creative when working with the bulkier materials needed for solar panels.
“You have to rethink a lot of that design in order to accommodate the thickness that starts to accumulate with each bend,” he said.
Origami has been the subject of serious mathematical analysis only within the last 40 years, Trease said. There is growing interest in integrating the concepts of origami with modern technologies.
“You think of it as ancient art, but people are still inventing new things, enabled by mathematical tools,” Trease said.
A short video clip of the origami-inspired prototype is online at: https://vimeo.com/103446030
The California Institute of Technology in Pasadena manages JPL for NASA.
NSAIDs Could Reduce Breast Cancer Recurrence Rates In Obese Women
redOrbit Staff & Wire Reports – Your Universe Online
Over-the-counter anti-inflammatory drugs such as aspirin or ibuprofen could significantly lower breast cancer recurrence rates in overweight or obese women, according to new research appearing in Friday’s edition of the journal Cancer Research.
In the study, Dr. Andrew Brenner of the Cancer Therapy & Research Center (CTRC) at the University of Texas Health Science Center at San Antonio and his colleagues found that women whose body mass index (BMI) was greater than 30 and who had the most common form of breast cancer had a 52 percent lower rate of recurrence and a 28-month delay in time to recurrence if they were taking nonsteroidal anti-inflammatory drugs (NSAIDs).
Dr. Brenner and colleagues from the University of Texas at Austin, the START Center for Cancer Care in San Antonio, and the University of Texas Health Science Center used a retrospective analysis of human subjects and cell cultures to determine that using aspirin or other types of NSAIDs reduced the recurrence rate of estrogen receptor alpha (ERα)-positive breast cancer – though they cautioned that the findings were only preliminary.
“Our studies suggest that limiting inflammatory signaling may be an effective, less toxic approach to altering the cancer-promoting effects of obesity and improving patient response to hormone therapy,” explained Dr. Linda A. deGraffenried, an associate professor of nutritional sciences at the University of Texas at Austin.
“These results suggest that NSAIDs may improve response to hormone therapy, thereby allowing more women to remain on hormone therapy rather than needing to change to chemotherapy and deal with the associated side effects and complications,” she added. “However, these results are preliminary and patients should never undertake any treatment without consulting with their physician.”
The study authors obtained blood samples from 440 CTRC breast cancer patients, 58.5 percent of whom were obese and 25.8 percent of whom were overweight. Approximately 81 percent of the subjects took aspirin while the remainder took a different type of NSAID, 42 percent took statins, and 25 percent took omega-3 fatty acids.
They compared the prognoses of those who took NSAIDs with those who did not, then conducted a second study to investigate how breast cancer cells behave in a person’s body. To do so, they conducted a series of experiments designed to simulate a tumor filled with cancer cells, fat cells and the immune cells that promote inflammation.
They discovered that the factors associated with obesity serve as the catalyst for signals that promote growth and resistance to treatment within the tumor environment. While the mechanism that causes breast cancer to be more aggressive and less responsive to treatment in obese women is not well understood, the study authors believe that inflammation plays a key role and suggest that it also makes some cancer drugs less effective.
“Overweight or obese women diagnosed with breast cancer are facing a worse prognosis than normal-weight women,” said Dr. deGraffenried in a separate statement. “We believe that obese women are facing a different disease. There are changes at the molecular level. We want to reduce the disease-promoting effects of obesity.”
Based on their findings, the CTRC and the University of Texas at Austin have launched a pilot anti-inflammatory trial, and are looking to secure funding for a larger study. The authors state that their goal is to determine which women would benefit the most from the addition of NSAIDs to their regular treatment programs.
Norwegian Study Investigates The Prevalence Of Workaholism Amongst Employees
redOrbit Staff & Wire Reports – Your Universe Online
Nearly one in 10 employees suffer from some form of workaholism, researchers from the University of Bergen (UiB) Department of Psychosocial Science claim in research published Wednesday in the online edition of the journal PLOS ONE.
In the study, postdoctoral fellow Cecilie Schou Andreassen and her colleagues recruited 2,160 participants between the ages of 18 and 70, randomly selected from a central registry of employers and employees throughout Norway. They found that an estimated 8.3 percent of Norwegian employees have some form of addiction to their work.
Schou Andreassen’s team developed seven criteria to measure work addiction:
• You think of how you can free up more time to work.
• You spend much more time working than initially intended.
• You work in order to reduce feelings of guilt, anxiety, helplessness and/or depression.
• You have been told by others to cut down on work without listening to them.
• You become stressed if you are prohibited from working.
• You deprioritize hobbies, leisure activities, and/or exercise because of your work.
• You work so much that it has negatively influenced your health.
“If you reply ‘often’ or ‘always’ to at least four of these seven criteria, there is some indication that you may be a workaholic,” she explained. “This is the first scale to use core symptoms of addiction found in other more traditional addictions.”
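Expressed as code, the cut-off Schou Andreassen describes is straightforward. The sketch below is only an illustration of that scoring rule, with the items paraphrased from the list above; it is not an official implementation of the Bergen Work Addiction Scale and is no substitute for clinical assessment.

```python
CRITERIA = [
    "think about how to free up more time to work",
    "spend much more time working than initially intended",
    "work to reduce guilt, anxiety, helplessness or depression",
    "have been told by others to cut down on work without listening",
    "become stressed if prohibited from working",
    "deprioritize hobbies, leisure or exercise because of work",
    "work so much that it has negatively influenced your health",
]

def workaholism_indicated(answers):
    """answers: one rating per criterion, e.g. "never", "sometimes", "often", "always".

    Per the rule quoted above, answering "often" or "always" to at least
    four of the seven items gives some indication (not a diagnosis) of
    workaholism.
    """
    assert len(answers) == len(CRITERIA), "one answer per criterion"
    hits = sum(1 for a in answers if a.lower() in ("often", "always"))
    return hits >= 4

# Example respondent: four items rated "often"/"always" -> True
print(workaholism_indicated(
    ["often", "always", "sometimes", "often", "rarely", "often", "never"]))
```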
Among those who took part in the study, 46.6 percent responded “often” or “always” in response to at least one of the criteria, while 27.7 percent did so for two of the criteria and 14.8 percent did so for three of them. Only 0.3 percent of study participants responded “often” or “always” to all of the criteria, while 1.7 percent did so for six of the seven.
The paper, which the authors are calling the first to assess workaholism in a nationally representative sample, found no gender-based differences – both men and women were equally likely to become addicted to their job. Participants were most likely to spend more time working than originally intended (30.5 percent) or deprioritize their hobbies, leisure-time activities or exercise (24.6 percent) due to their jobs, the UiB research team noted.
Schou Andreassen said their findings also suggest that younger adults tended to be affected by workaholism to a greater extent than their older counterparts, but that education level, marital status and part-time versus full-time employment were essentially non-factors. In addition, men and women who were raising children currently living at home were more likely to become addicted to work than those without youngsters to care for.
The researchers also pointed out that “workaholism may have contradictory psychological, physiological, and social outcomes,” according to the university. “As a significant group seemingly is affected, focus on this phenomenon is timely, especially among health professionals and researchers. But employers, politicians, legislators/lawyers, and journalists should also acknowledge the topic as well.”
“As workaholism is not a formal diagnosis the development of treatment models and real treatment offers has been lacking,” added Schou Andreassen. “The fact that more than eight percent of the general work population seems to suffer from workaholism underlines the need for proper treatment and other relevant interventions.”
Schou Andreassen is no stranger to this type of research. In April 2012, redOrbit reported on the new scale developed by her team to measure work addiction, which we have listed above. Called the Bergen Work Addiction Scale, the scale is based on “core symptoms found in more traditional drug addictions; i.e., salience, mood modification, tolerance, withdrawal symptoms, conflict, relapse, problems.”
As Connie K. Ho reported at the time, the researchers believe the scale can help in the development and facilitation of treatments for patients.
Babies Know The Difference Between Animate And Inanimate Objects: Study
By Marc Weisblott, Concordia University
Does a baby know that a dog can jump a fence while a school bus can’t? Can a toddler grasp that a cat can avoid colliding with a wall, while a table being pushed into a wall can’t?
A new study from Concordia shows that infants as young as 10 months old can tell the difference between the kinds of paths naturally taken by a walking animal, compared to a moving car or piece of furniture.
That’s important information because the ability to categorize things as animate beings or inanimate objects is a fundamental cognitive ability that allows toddlers to better understand the world around them.
The study, published in Infant Behavior & Development, looked at about 350 babies — who participated at 10, 12, 16 and 20 months — to find out when children clue in to the fact that animals and objects follow different motion paths.
Since the study subjects could not express much in words, the researchers used a technique called the “visual habituation paradigm,” which measures how long one looks at a given object.
“You can understand something about what babies know based on how long they look at something,” explains former doctoral student Rachel Baker, who collaborated on the study with fellow researcher Tamara Pettigrew and Diane Poulin-Dubois, a professor in Concordia’s Department of Psychology and member of the Centre for Research in Human Development. “Babies will look at something new longer than they will look at something that is already familiar to them.”
Since computer animations of a bus or a table jumping over a wall held the attention of infants for longer than a bus or table bumping into a wall, it indicated the former was newer to them than the latter. In contrast, infants’ attention was held just as well by a cat jumping over a wall as by a cat rebounding after running into a wall, indicating that infants think that cats can both jump and rebound.
This matches real life, says Baker, who obtained her PhD from Concordia and is now a research and statistical officer at the Cape Breton District Health Authority. “Animals do bump into objects — if I’m not paying attention to where I’m going, I’ve been known to bump into things. The bigger picture is that the motion of objects is more predictable than the motion of animals. This research shows that even 10-month-old babies have some understanding of this.”
For the researchers, the study reveals that even the youngest among us absorb more details than some might think, through eyes that are usually open wider than adult ones.
“Babies are really quite smart,” says Baker. “The secret to finding out what they know is to be creative and tap into behaviours they do naturally. By doing so, we’ve shown that babies understand something about animals and objects even though they can’t yet put that knowledge into words.”
Forty Percent Of Americans Expected To Develop Type 2 Diabetes During Their Lifetime
Two in five (40%) adults in the USA are expected to develop type 2 diabetes at some point during their lifetime, suggests a major study published in The Lancet Diabetes & Endocrinology. The future looks even worse for some ethnic minority groups, with one in two (> 50%) Hispanic men and women and non-Hispanic black women predicted to develop the disease.
A team of US researchers combined data from nationally representative US population interviews and death certificates for about 600,000 adults to estimate trends in the lifetime risk of diabetes and years of life lost to diabetes in the USA between 1985 and 2011.
Over the 26 years of study, the lifetime risk of developing type 2 diabetes for the average American 20-year-old rose from 20% for men and 27% for women in 1985, to 40% for men and 39% for women in 2000. The largest increases were in Hispanic men and women, and non-Hispanic black women, for whom lifetime risk now exceeds 50%.
Dr Edward Gregg, study leader and Chief of the Epidemiology and Statistics Branch, Division of Diabetes Translation at the Centers for Disease Control and Prevention said, “Soaring rates of diabetes since the late 1980s and longer overall life expectancy in the general population have been the main drivers of the striking increase in the lifetime risk of diabetes over the last 26 years. At the same time, a large reduction in death rates in the US population with diabetes has reduced the average number of years lost to the disease. However, the overwhelming increase in diabetes prevalence has resulted in an almost 50% increase in the cumulative number of years of life lost to diabetes for the population as a whole: years spent living with diabetes have increased by 156% in men and 70% in women.”
He concludes, “As the number of diabetes cases continue to increase and patients live longer there will be a growing demand for health services and extensive costs. More effective lifestyle interventions are urgently needed to reduce the number of new cases in the USA and other developed nations.”
Writing in a linked Comment, Dr Lorraine Lipscombe from Women’s College Hospital and the University of Toronto, Toronto, Canada says, “The trends reported by Gregg and colleagues are probably similar across the developed world, where large increases in diabetes prevalence in the past two decades have been reported…Primary prevention strategies are urgently needed. Excellent evidence has shown that diabetes can be prevented with lifestyle changes. However, provision of these interventions on an individual basis might not be sustainable. Only a population-based approach to prevention can address a problem of this magnitude. Prevention strategies should include optimisation of urban planning, food-marketing policies, and work and school environments that enable individuals to make healthier lifestyle choices. With an increased focus on interventions aimed at children and their families, there might still be time to change the fate of our future generations by lowering their risk of type 2 diabetes.”
High-Dose Flu Vaccine More Effective In Elderly, Vanderbilt-Led Study Shows
Craig Boerner, Vanderbilt University
High-dose influenza vaccine is 24 percent more effective than the standard-dose vaccine in protecting persons ages 65 and over against influenza illness and its complications, according to a Vanderbilt-led study published this week in the New England Journal of Medicine (NEJM).
The multi-center study enrolled 31,989 participants from 126 research centers in the U.S. and Canada during the 2011-2012 and 2012-2013 influenza seasons in the Northern Hemisphere in order to compare the high-dose trivalent vaccine versus the standard-dose trivalent vaccine in adults over 65 years of age.
“The study was done to see if using a high-dose vaccine protected older adults better than the usual vaccine. Until this trial came out we didn’t know if it was going to be clinically better or not and now we know it is better,” said lead author Keipp Talbot, M.D., assistant professor of Medicine, who served as coordinating investigator for the more than 100 study sites.
“Older adults are the most vulnerable to influenza; they become the sickest and have the most hospitalizations. This vaccine works better than the standard dose and hence I would tell my patients to get the high-dose vaccine every year. In the meantime, we will continue to work to find newer and better vaccines for older adults.”
Researchers concluded that the high-dose vaccine is safe, induces significantly higher antibody responses, and provides superior protection against laboratory-confirmed influenza illness compared to standard dose among persons over 65 years of age.
Study data also indicated that the high-dose vaccine may provide clinical benefit for the prevention of hospitalizations, pneumonia, cardio-respiratory conditions, non-routine medical visits, and medication use.
Between 1990 and 1999, seasonal influenza caused an average of 36,000 deaths and 226,000 hospitalizations per year in the U.S. Adults over 65 years old are particularly vulnerable to influenza complications, accounting for most seasonal influenza-related hospitalizations and deaths.
“Prevention of influenza should lower hospitalizations, deaths, heart attacks, and pneumonia,” Talbot said. “This vaccine does have some more arm soreness than the usual vaccine because it is a higher dose. With this increased soreness comes greater protection.”
Known as the Fluzone High-Dose vaccine, and made by Sanofi Pasteur, the inactivated influenza vaccine contains four times the amount of antigen that is contained in the standard-dose Fluzone vaccine.
“Fluzone High-Dose vaccine is the only influenza vaccine in the U.S. that is designed specifically to address the age-related decline of the immune system in older adults,” said David P. Greenberg, M.D., vice president, Scientific & Medical Affairs, and chief medical officer, Sanofi Pasteur U.S.
Study authors said about one-in-four breakthrough cases of influenza could be prevented if the high-dose vaccine were used instead of the standard-dose vaccine.
“I see older adults hospitalized every year with influenza and many of them come into the hospital with pneumonias and heart failure because they had influenza,” Talbot said. “But I have to say our seniors in Nashville are very good at getting vaccinated. Locally they are very good and they do much better than their counterparts who are less than 65 years old. About 76 percent of this community of older adults are vaccinated for influenza each year.”
Antarctica Could Raise Sea Level Faster Than Previously Thought
Potsdam Institute for Climate Impact Research
Ice discharge from Antarctica could contribute up to 37 centimeters to the global sea level rise within this century, a new study shows. For the first time, an international team of scientists provides a comprehensive estimate of the full range of Antarctica’s potential contribution to global sea level rise, based on physical computer simulations. Led by the Potsdam Institute for Climate Impact Research, the study combines a whole set of state-of-the-art climate models and observational data with various ice models. The results reproduce Antarctica’s recent contribution to sea level rise as observed by satellites in the last two decades and show that the ice continent could become the largest contributor to sea level rise much sooner than previously thought.
“If greenhouse gases continue to rise as before, ice discharge from Antarctica could raise the global ocean by an additional 1 to 37 centimeters in this century already,” says lead author Anders Levermann. “Now this is a big range – which is exactly why we call it a risk: Science needs to be clear about the uncertainty, so that decision makers at the coast and in coastal megacities like Shanghai or New York can consider the potential implications in their planning processes,” says Levermann.
Antarctica currently contributes less than 10 percent to global sea level rise
The scientists analyzed how rising global mean temperatures resulted in a warming of the ocean around Antarctica, thus influencing the melting of the Antarctic ice shelves. While Antarctica currently contributes less than 10 percent to global sea level rise and is a minor contributor compared to the thermal expansion of the warming oceans and melting mountain glaciers, it is Greenland and especially the Antarctic ice sheets with their huge volume of ice that are expected to be the major contributors to future long-term sea level rise. The marine ice sheets in West Antarctica alone have the potential to elevate sea level by several meters – over several centuries.
According to the study, the computed projections for this century’s sea level contribution are significantly higher than the latest IPCC projections on the upper end. Even in a scenario of strict climate policies limiting global warming in line with the 2°C target, the contribution of Antarctica to global sea level rise covers a range of 0 to 23 centimeters.
A critical input to future projections
“Rising sea level is widely regarded as a current and ongoing result of climate change that directly affects hundreds of millions of coastal dwellers around the world and indirectly affects billions more that share its financial costs,” says co-author Robert Bindschadler from the NASA Goddard Space Flight Center. “This paper is a critical input to projections of possible future contributions of diminishing ice sheets to sea level by a rigorous consideration of uncertainty of not only the results of ice sheet models themselves but also the climate and ocean forcing driving the ice sheet models. Billions of Dollars, Euros, Yuan etc. are at stake and wise and cost-effective decision makers require this type of useful information from the scientific experts.”
While the study signifies an important step towards a better understanding of Antarctica in a changing climate and its influence on sea level change within the 21st century, major modeling challenges still remain: Datasets of Antarctic bedrock topography, for instance, are still inadequate and some physical processes of interaction between ice and ocean cannot be sufficiently simulated yet.
Notably, the study’s results are limited to this century only, while all 19 of the comprehensive climate models used indicate that the impacts of atmospheric warming on Antarctic ice shelf cavities will hit with a time delay of several decades. “Earlier research indicated that Antarctica would become important in the long term,” says Levermann. “But pulling together all the evidence it seems that Antarctica could become the dominant cause of sea level rise much sooner.”
Article: Levermann, A., Winkelmann, R., Nowicki, S., Fastook, J.L., Frieler, K., Greve, R., Hellmer, H.H., Martin, M.A., Meinshausen, M., Mengel, M., Payne, A.J., Pollard, D., Sato, T., Timmermann, R., Wang, W.L., Bindschadler, R.A. (2014): Projecting Antarctic ice discharge using response functions from SeaRISE ice-sheet models. Earth System Dynamics, 5, 271-293 [DOI: 10.5194/esd-5-271-2014]
Virginia Tech Study Finds Common Household Chemicals Affect Reproduction In Mice
Michael Sutphin, Virginia Tech
Virginia Tech researchers who were using a disinfectant when handling mice have discovered that two active ingredients in it cause declines in mouse reproduction.
Although the chemicals responsible for the declines are common in household cleaning products and disinfectants used in medical and food preparation settings, including hand sanitizers, academic scientists have never published a rigorous study, until now, on their safety or toxicity.
“It is likely that you have these chemicals in your house,” said Dr. Terry Hrubec, a research assistant professor in the Department of Biomedical Sciences and Pathobiology at the Virginia-Maryland College of Veterinary Medicine. “The answer to the question, ‘Are these chemicals harmful to humans?’ is that we simply don’t know.”
Hrubec and her research team at the veterinary college saw a decline in the reproductive performance of their mice. Stumped by the initial findings, Hrubec noticed animal care staff in her laboratory wetting their hands with a disinfectant before touching the mice.
This observation led her to a letter published in Nature by co-author Patricia Hunt, a geneticist at Washington State University, who had made the same discovery. These two independent observations were the impetus for the study. When Hrubec tested whether the disinfectant might be causing reproductive decline, she came up with the unexpected finding.
“These chemicals have been around for 50 years,” said Hrubec, who is also an associate professor of anatomy at Blacksburg, Virginia’s Edward Via College of Osteopathic Medicine. “They are generally considered safe, but no one has done rigorous scientific research to confirm this.”
The two active ingredients in the disinfectant — alkyl dimethyl benzyl ammonium chloride and didecyl dimethyl ammonium chloride — are typically listed by their abbreviations, ADBAC and DDAC, on ingredient lists.
They are part of a larger class of chemicals called “quaternary ammonium compounds,” which are used for their antimicrobial and antistatic properties as well as their ability to lower surface tension between two liquids or a liquid and a solid. They are found in commercial and household cleaners, disinfectants, hand sanitizers, preservatives in makeup and other cosmetics, fabric softeners, and dryer sheets.
“We just tested the two active ingredients in the disinfectant, not the entire class of compounds,” Hrubec explained. “To be on the safe side, we need to do more research on these chemicals and find out how they could be affecting human health.”
The research team found that the female mice took longer to get pregnant and had fewer offspring when they did. Forty percent of the mothers exposed to ADBAC and DDAC died in late pregnancy or during delivery. Graduate students Vanessa Melin and Haritha Potineni in the veterinary college’s Department of Biomedical Sciences and Pathobiology assisted with the study.
Hrubec drew comparisons between her research team’s work and similar research on bisphenol A, commonly known as BPA. In 1998, Washington State’s Hunt discovered the toxic effects of BPA, which could be found in baby bottles, medical and dental devices, and coatings on beverage cans, among other uses. Hrubec and Hunt are co-authors on the Virginia Tech study, which will appear in an upcoming issue of Reproductive Toxicology, a leading journal on the effects of toxic substances on the reproductive system.
“If these chemicals are toxic to humans, they could also be contributing to the decline in human fertility seen in recent decades, as well as the increased need for assisted reproductive technologies such as in-vitro fertilization,” Hrubec said.
Quaternary ammonium compounds like the ones used for the disinfectant in Hrubec’s lab were introduced in the 1950s and 1960s. Although some toxicity testing took place during this period, it was conducted by chemical manufacturers and not published.
“These industry-sponsored studies took place before toxicity studies were standardized,” Hrubec said. “In the 1980s, toxicity researchers developed and implemented Good Laboratory Practices, or GLPs. These are guidelines and rules for conducting research so that it is reproducible and reliable. All of the research on these chemicals happened before that.”
Although these chemicals are harmful to mice, Hrubec explained that they might not be dangerous for humans. But considering the widespread human exposure to the compounds through cleaning products and disinfectants, more research is needed to verify human implications. Hrubec noted that an epidemiological study could determine whether people who have a high rate of exposure to the chemicals, such as healthcare workers or restaurant servers, have a harder time becoming pregnant.
The study received funding from the Virginia-Maryland College of Veterinary Medicine, the Edward Via College of Osteopathic Medicine, and the Passport Foundation, a San Francisco-based nonprofit that sponsors research on product safety.
View the journal article in the online edition of Reproductive Toxicology.
Internet Issues Arise As Older Routers Reach Arbitrary Data Limits
redOrbit Staff & Wire Reports – Your Universe Online
If you’ve had trouble getting online or accessing your favorite website recently, you’re not the only one – similar issues are affecting customers and companies all over the world, and the cause is an obscure part of the Internet’s infrastructure that is struggling to keep up with the explosive growth of the World Wide Web.
According to ArsTechnica writer Robert Lemos, online auction site eBay, password management service LastPass and hosting provider Liquid Web are among those who have been affected by the performance issues, which are being blamed on “a novel technical issue that impacts older Internet routers.”
The Internet has grown too large for some older routers to keep up with, resulting in connectivity issues, said Business Insider’s Lisa Eadicicco. Those routers, she said, can only process a limited number of the routes contained in the online roadmap known as the Border Gateway Protocol (BGP) routing table.
Due to the rapid growth of the Internet, these routing tables are growing too large for the memory of the routers, preventing those devices from properly managing online traffic. Those older routers can only manage routing tables with 512,000 routes, but the number of global routes has recently surpassed that threshold, Eadicicco added.
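To make the problem concrete, the following minimal Python sketch models a router whose forwarding table has a fixed hardware capacity of 512,000 entries, as described above. The route names and the number of excess routes are hypothetical, and no specific vendor’s behavior is implied.

```python
# Illustrative sketch only: a forwarding table with a fixed hardware capacity.
# Capacity matches the 512,000-route limit described above; everything else
# (route names, totals) is hypothetical.

class FixedCapacityRouteTable:
    def __init__(self, capacity=512_000):
        self.capacity = capacity
        self.routes = set()

    def install(self, prefix):
        """Try to install a route; return False if the table is already full."""
        if len(self.routes) >= self.capacity:
            return False  # overflow: traffic for this prefix may be dropped or slowed
        self.routes.add(prefix)
        return True

table = FixedCapacityRouteTable()
failed = 0
# Simulate the global IPv4 table creeping past the hardware limit.
for i in range(515_000):
    if not table.install(f"prefix-{i}"):
        failed += 1

print(f"Installed {len(table.routes):,} routes; {failed:,} routes could not fit.")
```

Once the table is full, every additional global route is one the device simply cannot hold, which is why the symptoms appear only after the threshold is crossed.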
“BGP is what tier-one ISPs, your last-mile ISP and various large networks use to route data from their own machines to others, and vice versa,” explained Matthew Sparkes, Deputy Head of Technology with The Telegraph. “When you visit a website, that data bounces all over the world, through machines belonging to all manner of companies and organizations. To make this work, machines called routers (large commercial versions of what you have at home) keep a table of known, trusted routes through the tangled web.”
“This routing table has been constantly growing in size as the internet expands and becomes more complex – more information needs to be stored in order to allow the router to bounce data to the correct destination along a logical route,” he added. “Until late 2001, the size of the table was growing exponentially, which was clearly unsustainable. A big effort to implement more efficient methods was made which temporarily slowed expansion.”
Those efforts were short-lived, however, and Sparkes said the amount of online traffic has now reached the point where these older routers simply do not have enough memory or processing power to keep up. The strict 512,000-route limit was instituted by programmers several years ago, trading a limited lifespan for reduced hardware costs, but now that the limit has been reached, the result has been connectivity problems, traffic slowdowns and website crashes.
“The issue appears to have surprised many network engineers. Network-hardware vendors did not give much warning as to the dangers of the default configuration of older routers, and corporate executives likely put off resolving the issue,” Lemos said. “Overall, Internet experts do not believe the issue will dramatically impact Internet operations. Only older routers and switches are affected, and most can be reconfigured to assign more memory to routing IPv4 traffic, but at the expense of supporting the next generation of networking, IPv6.”
Ultimately, engineers will have to either raise their routers’ memory caps and reboot them, or buy new gear, explained Drew Fitzgerald of the Wall Street Journal. However, some of those firms will have to reconfigure their devices one at a time, a process that resulted in some websites going offline earlier this week and is expected to cause similar issues in the days ahead.
“The situation echoes – if faintly – the hubbub over the feared Y2K computer glitch in the late 1990s, when experts warned that systems could fail because their dating functions hadn’t been designed to handle the turn of the century,” Fitzgerald said. “This time, Internet specialists are being careful to warn against a descent into that era’s hyperbole and shrill warnings of disasters that never materialized.”
So while the situation may not exactly be an apocalyptic, end-of-days scenario for the Internet, the issue has been “adding to many engineers’ real-life workload as recently as this week” and will likely persist until technicians can fully sort out the problem. Next-generation routers can handle millions of routes, but as the Wall Street Journal noted, there will come a day when even that capacity will not be enough to handle all of the world’s online traffic.
How Much Salt Is Too Much? And Is Too Little Also Bad? The Debate Continues In Three New Studies
redOrbit Staff & Wire Reports – Your Universe Online
Consuming too much salt has long been associated with risk of high blood pressure, but cutting back too much on sodium could also be hazardous to your health, according to research appearing in the latest edition of the New England Journal of Medicine (NEJM).
Lead author Dr. Martin O’Donnell of McMaster University in Hamilton, Ontario and his colleagues concluded in their study that consuming less than three grams of sodium per day increased the risk of death or major cardiovascular event by 27 percent versus those who consumed four to six grams each day, said Reuters reporter Gene Emery.
The study followed over 100,000 people from 17 different countries for an average of over three years, added Ron Winslow of the Wall Street Journal. The controversial findings “are the latest to challenge the benefit of aggressively low sodium targets – especially for generally healthy people.”
While the study “has shortcomings, and as an observational study it found only an association, not a causative effect, between very low sodium and cardiovascular risk,” Winslow said that it has “spurred calls to reconsider” recommendations from the American Heart Association and other health groups that most people consume between 1,500 and 2,300 milligrams of sodium per day (typical US intake is roughly 3,400 milligrams of sodium daily).
NBC News reporters Judy Silverman and Lisa Tolin noted the study suggests that efforts to reduce the salt content in foods in order to help prevent high blood pressure and the cardiovascular ailments associated with it could be misguided. American Heart Association president Dr. Elliott Antman, however, has taken issue with the fact that sodium was measured through urine samples and noted that it might have ignored other potential health issues.
“We don’t know the diet the subjects who gave a urine specimen were eating and for how long they ate it after. It was one point in time, and the researchers followed them for 3.7 years and try to draw a relationship between one-spot urine and events that occurred over the next 3.7 years,” he told NBC News. Dr. Antman also noted that the study was too short to draw long-term conclusions, since it can take decades for cardiovascular disease to surface.
University of Alabama at Birmingham professor Dr. Suzanne Oparil, the author of an editorial accompanying the study, told reporters that the study, while flawed, provides evidence supporting the notion that consuming too little salt could be dangerous. She added that low-sodium targets are “questionable health policy” without evidence from randomized trials to support such claims.
A second study, also published this week in the New England Journal of Medicine, examined the effect of sodium on blood pressure and found that people consuming a moderate amount of salt did not benefit as much from reducing their consumption as those consuming higher amounts did.
“Previously it was believed that the lower you go the better. What these studies show collectively is that there is an optimal level, and lower is not necessarily better,” McMaster University’s Dr. Andrew Mente, lead author of that study, told Reuters. “If people are eating a very high level of sodium and they reduce their intake, you get a large reduction in blood pressure. But if you’re eating a moderate level of sodium – about what most North Americans eat – and you reduce it to a lower level, you’re not really getting much in return as far as blood pressure reduction is concerned.”
A third NEJM paper, however, attributed roughly one-tenth of all cardiovascular deaths to sodium consumption above 2,000 milligrams per day. What that means, said lead author Dr. Dariush Mozaffarian of Tufts University, is that approximately 1.65 million people each year die as the result of heart disease and stroke directly caused by excessive sodium consumption.
Dr. Mozaffarian and his co-authors looked at data from 205 surveys of sodium intake representing approximately 75 percent of the global adult population, and discovered that the average sodium intake in 2010 was 3.95 grams per day – nearly double the daily recommendation set by the World Health Organization (WHO), according to Honor Whiteman of Medical News Today.
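Those figures can be sanity-checked with a little arithmetic. The sketch below uses only the numbers quoted above plus the standard approximation that one gram of sodium corresponds to about 2.5 grams of table salt.

```python
# Quick arithmetic check of the figures quoted above. The 2.5 sodium-to-salt
# conversion factor is a standard approximation, not a number from the study.

who_recommendation_g = 2.0      # WHO guideline: about 2 g of sodium per day
average_intake_2010_g = 3.95    # global average sodium intake reported for 2010

ratio = average_intake_2010_g / who_recommendation_g
salt_equivalent_g = average_intake_2010_g * 2.5  # 1 g sodium is roughly 2.5 g table salt

print(f"Average intake is {ratio:.2f}x the WHO recommendation")  # ~1.98x, i.e. nearly double
print(f"3.95 g of sodium corresponds to roughly {salt_equivalent_g:.1f} g of salt per day")
```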
“All worldwide regions had sodium intakes above the WHO recommendation. These ranged from 2.18 g per day in sub-Saharan Africa to 5.51 g per day in Central Asia,” Whiteman said. “The researchers found that 4 out of 5 global deaths attributable to excess sodium intake occurred in low- and middle-income countries.”
“The bulk of the available evidence to date suggests that reduced sodium intake is associated with reduced blood pressure, which itself is associated with a reduction in cardiovascular events,” Dr. Antman said in a statement. “Along with improving overall diet, controlling weight, and increasing physical activity, lowering sodium intake is key to lowering blood pressure in the general population and improving blood pressure control in those with hypertension.”
Higher BMI Found To Increase Risk Of Several Types Of Cancer
redOrbit Staff & Wire Reports – Your Universe Online
Obesity increases a person’s risk of developing 10 of the most common forms of cancer, and is believed to cause an estimated 12,000 additional cases of the disease each year, experts from the London School of Hygiene and Tropical Medicine (LSHTM) and the Farr Institute of Health Informatics claim in a new study.
The research, which was published online Thursday in the UK medical journal The Lancet, looked at the medical data of five million UK adults and discovered a link between higher body mass index (BMI) and increased risk of cancers affecting the uterus, gallbladder, kidney, liver, cervix, thyroid, and colon, as well as leukemia, ovarian cancer and breast cancer.
Weight-related increases in cancer risk varied by tumor type, with uterine cancer showing the largest increase (62 percent), followed by gallbladder (31 percent) and kidney (25 percent), according to BBC News. The researchers found that every 28 to 35 pounds (13 to 16 kg) of extra weight was clearly linked with an increased risk of these three diseases, as well as cervical cancer, thyroid cancer and leukemia.
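As a rough illustration of how such a per-increment risk would compound, the sketch below assumes a multiplicative (log-linear) dose-response, which is a common modeling simplification rather than a finding of the study itself; the 62 percent figure is the uterine cancer increase quoted above.

```python
# Minimal sketch: how a per-increment relative risk compounds, assuming a
# multiplicative (log-linear) dose-response. This is a simplification for
# illustration, not a claim from the Lancet study.

def relative_risk(per_increment_increase, increments):
    """Relative risk after a given number of weight increments."""
    return (1 + per_increment_increase) ** increments

for n in (1, 2, 3):
    rr = relative_risk(0.62, n)  # 62% increase per ~13-16 kg, per the article
    print(f"{n} increment(s) of roughly 13-16 kg: relative risk of about {rr:.2f}")
```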
“If we could magically remove excess weight from the population, we would have 12,000 fewer cancers,” lead investigator Dr. Krishnan Bhaskaran, National Institute for Health Research Postdoctoral Fellow at the LSHTM, told The Guardian on Wednesday, adding that even the study authors were surprised by how strong the relationship was.
“The number of people who are overweight or obese is rapidly increasing both in the UK and worldwide. It is well recognized that this is likely to cause more diabetes and cardiovascular disease,” Dr. Bhaskaran added in a statement. “Our results show that if these trends continue, we can also expect to see substantially more cancers as a result.”
The LSHTM and Farr Institute researchers reported that obesity increased the risk of liver cancer by 19 percent, cervical cancer and colon cancer by 10 percent each, thyroid and ovarian cancers by 9 percent each, leukemia by 9 percent and breast cancer by 5 percent, though they noted that those effects varied by underlying BMI and other individual-level risk factors such as sex and menopausal status.
While they found some evidence that higher-weight people faced a slightly reduced risk of prostate cancer and premenopausal breast cancer, the results of the National Institute for Health Research, Wellcome Trust, and Medical Research Council-funded study suggest that obesity could account for 41 percent of all uterine cancer cases, and at least one-tenth of all gallbladder, kidney, liver, and colon cancers in the UK.
If the obesity epidemic continues to spread, it could be responsible for as many as 3,500 new cancers every year, the researchers noted. The study’s findings have prompted some experts to call on regulators to do more to encourage people to eat healthier while punishing companies that sell high-calorie foods, noted Sarah Knapton, Science Correspondent for The Telegraph.
“We have to ditch this love-in with the food industry and start penalizing those who continue to make unhealthy food,” Tam Fry of the National Obesity Forum told Knapton. “We are now starting to reap the rewards of decades of inaction and unwillingness to do anything about obesity and its consequences.”
“The public is still not aware of the harm that being overweight or obese does and we have become too worried with making fat people thin rather than preventing people getting that way in the first place,” she added. “From the earliest age, children should be educated at school about the dangers, otherwise we are just mopping up the water without turning off the tap.”
Snowden: NSA Developing Cyber-Defense System Called MonsterMind
redOrbit Staff & Wire Reports – Your Universe Online
It might sound like something out of a science fiction movie, but according to Edward Snowden, it’s all too real: the US National Security Agency (NSA) is developing a cyber-defense system capable of not only automatically neutralizing foreign cyberattacks but also launching retaliatory strikes against those nations.
Snowden, the whistleblower who publicly exposed extensive details of global electronic surveillance and data collection at the NSA, revealed the existence of the program known as MonsterMind in an interview with Wired senior staff reporter Kim Zetter on Wednesday. The system would use algorithms to analyze metadata to differentiate between regular network traffic and potentially malicious traffic, and that knowledge would allow the agency to instantly detect and block foreign threats.
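Nothing beyond “algorithms to analyze metadata” is publicly known about how MonsterMind would work, but the general idea of flagging anomalous traffic can be illustrated with a simple, purely hypothetical baseline-and-threshold detector like the one sketched below; the addresses and packet rates are invented.

```python
# Purely illustrative: a generic baseline-and-threshold anomaly detector over
# traffic metadata (packets per second per source). This is NOT a description
# of MonsterMind; it only sketches the general idea of flagging unusual traffic.

from statistics import mean, stdev

def build_baseline(normal_rates):
    """Compute mean and standard deviation from historical 'normal' traffic."""
    return mean(normal_rates), stdev(normal_rates)

def flag_anomalies(current, baseline_mu, baseline_sigma, threshold_sigmas=3.0):
    """Flag sources whose current packet rate is far above the historical baseline."""
    return {src: rate for src, rate in current.items()
            if (rate - baseline_mu) > threshold_sigmas * baseline_sigma}

# Hypothetical historical rates (packets/second) and a current snapshot.
history = [110, 120, 125, 130, 135, 140, 118, 122]
mu, sigma = build_baseline(history)

snapshot = {"198.51.100.7": 120, "203.0.113.9": 135, "192.0.2.44": 9_800}
print(flag_anomalies(snapshot, mu, sigma))  # only the 9,800 pps source is flagged
```

Even this toy example hints at Snowden’s concern: a detector keyed to traffic patterns needs visibility into the traffic itself, and a spoofed source address would be flagged just as readily as a genuine one.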
Beyond that, Snowden does not know much about the program – after all, as Engadget’s Daniel Cooper pointed out, it has been more than a year since he was forced to flee the US. However, Wired pointed out that the only way the NSA would be able to detect these spikes in anomalous traffic would be to scan the entire Internet – intercepting private communications without a warrant or probable cause, a clear violation of the Fourth Amendment, according to Snowden.
“The government has used excessive secrecy to prevent real debate over the wisdom and legality of many of its most sweeping surveillance programs,” ACLU attorney Alex Abdo told Grant Gross of PC Magazine via email. “This newly described program is just another example of that secrecy. If the government truly is scanning all internet traffic coming into the United States for suspicious content, that would raise significant civil liberties questions.”
Furthermore, Cooper said that MonsterMind would also “be in direct contravention of… the recent oversight report that Judge John Bates carried out about the NSA’s respect of privacy and civil liberties.” He also pondered whether or not using the program to automatically launch an online attack against a foreign country would be “violating the rule that only Congress can make a formal declaration of war.”
Snowden also said that, while he was working as a contractor with the NSA, he was concerned that a program like MonsterMind could result in misdirected counterattacks launched in response to spoofed attacks. “You could have someone sitting in China, for example, making it appear that one of these attacks is originating in Russia,” he told Wired. “And then we end up shooting back at a Russian hospital. What happens next?”
Zetter compared MonsterMind to a digital version of the Star Wars anti-nuclear missile initiative proposed back in the 1980s. While that program would have kept the country safe from nuclear warfare, the new NSA cyber-initiative would ideally help protect the country’s technological infrastructure from malware. For example, it could prevent DDoS attacks against American banks or prevent malware from crippling airline or railway systems, she added.
The NSA declined to comment on Snowden’s allegations and would not confirm or deny the existence of MonsterMind, according to Zetter and Gross. However, in a statement, the agency called on the former NSA contractor to return to the US. “If Mr. Snowden wants to discuss his activities, that conversation should be held with the U.S. Department of Justice,” it said. “He needs to return to the United States to face the charges against him.”
New Reversible USB Design Finalized And Ready For Production, Development Group Confirms
redOrbit Staff & Wire Reports – Your Universe Online
The days of having to make sure your USB connector and cable are facing the right way when you try to plug it in will soon be over, as the USB 3.0 Promoter Group announced on Tuesday that the design for the new Type-C version of the standardized data transfer connections between electronic devices has been finalized.
According to BBC News, the designers of the new USB Type-C said that the cables would be small enough to work with smartphones but “robust enough for laptops and tablets.” While they will not work in the ports found on millions of current and legacy devices, the specifications allow for passive new-to-existing adapters that will allow consumers to switch to the new version of the USB cables.
“Interest in the USB Type-C connector has not only been global, but cross-industry as well,” Brad Saunders, USB 3.0 Promoter Group Chairman, said in a statement. “Representatives from the PC, mobile, automotive and IoT industries have been knocking down our door anticipating this new standard. This specification is the culmination of an extensive, cooperative effort among industry leaders to standardize the next generation USB connector as a long-lasting, robust solution.”
The new USB connector will be comparable in size to micro USB 2.0 Type-B connectors and will feature a port size of 8.4 by 2.6mm, explained CNET’s Michelle Starr. It will also be compatible with SuperSpeed USB at 10Gbps (USB 3.1) and will also support USB Power Delivery up to 100W, with additional support for scalable power charging and future performance needs, she added.
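For a rough sense of scale, the back-of-the-envelope sketch below assumes a hypothetical 25 GB file and an approximate 3 percent line-encoding overhead on the 10 Gbps signaling rate; real-world throughput also depends on the devices and protocol overhead involved.

```python
# Back-of-the-envelope figures for the specs quoted above. Real throughput will
# be lower than the raw signaling rate; the ~3% encoding overhead used here is
# an approximation, and the 25 GB file size is hypothetical.

signaling_rate_bps = 10e9      # SuperSpeed USB at 10 Gbps (USB 3.1)
usable_fraction = 128 / 132    # approximate line-encoding efficiency
file_size_bytes = 25e9         # hypothetical 25 GB file

effective_bps = signaling_rate_bps * usable_fraction
seconds = file_size_bytes * 8 / effective_bps
print(f"~{seconds:.0f} s to move a 25 GB file at the effective rate")  # roughly 21 seconds

# 100 W of USB Power Delivery at a 20 V profile implies a 5 A cable rating.
print(f"{100 / 20:.0f} A at 20 V")
```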
“USB has the luxury of consumer familiarity and trust, and as we adapt the technology for the future we are committed to ensuring the USB brand promise continues with this new USB Type-C connector and cable,” said Jeff Ravencraft, USB-IF President and COO. “The USB-IF is working to establish certification and compliance testing so that consumers can have the same confidence in the next generation of certified USB technology.”
“With the Type-C spec finalized, it now comes down to the USB-IF to actually implement the sockets, plugs, cables, adapters, and devices,” said Sebastian Anthony of Extreme Tech. “We could begin seeing Type-C USB devices over the next few months, but considering the lack of backwards compatibility (an adapter is required), and the fact that the existing Micro-USB connector is mandated as the standard mobile phone charging connector by several governments around the world, it may take a little while for Type-C to reach critical mass.”
The USB 3.0 Promoter Group behind the development of the new USB cables and connectors included representatives from some of the largest tech companies on the planet, including Hewlett-Packard, Intel, Microsoft, Renesas, STMicroelectronics and Texas Instruments, said BBC News. The UK news organization compared the new reversible USB to Apple’s Lightning connector, which is also reversible and has become standard for iOS devices.
“This next generation of USB technology opens the door for the invention of an entirely new, super thin class of devices that consumers haven’t even seen yet,” said Alex Peleg, vice president, Platform Engineering Group, Intel Corporation. “The USB Type-C connector, combined with high-performance data and power, is the ideal single-cable solution for all devices now and into the future.”
“Microsoft sees this new USB Type-C interface becoming the next generation industry standard for high speed wired local connectivity. It will offer more intuitive consumer experience by means of reversible plug orientation and cable direction designs. Further, this new USB Type-C allows for radically higher data speeds and power carrying capabilities compared to the existing methods,” added Microsoft Devices Group VP for technology and silicon Ilan Spillinger. “We… believe this is an important milestone in consumer electronics ecosystem development.”
Identifying Skin Creams That Contain Toxic Mercury
American Chemical Society
As countries try to rid themselves of toxic mercury pollution, some people are slathering and even injecting creams containing the metal onto or under their skin to lighten it, putting themselves and others at risk for serious health problems. To find those most at risk, scientists are reporting today that they can now identify these creams and intervene much faster than before. They’re speaking at the 248th National Meeting & Exposition of the American Chemical Society (ACS).
The meeting, organized by the world’s largest scientific society, features nearly 12,000 presentations on a wide range of science topics and is being held here through Thursday.
“In the U.S., the limit on mercury in products is 1 part per million,” says Gordon Vrdoljak, Ph.D., of the California Department of Public Health. “In some of these creams, we’ve been finding levels as high as 210,000 parts per million — really substantial amounts of mercury. If people are using the product quite regularly, their hands will exude it, it will get in their food, on their countertops, on the sheets their kids sleep on.”
Identifying the toxic products has been a slow process, however. So, Vrdoljak turned to an instrument that uses a technique called total reflection x-ray fluorescence. He found that the machine can screen product samples for mercury content far more efficiently, and just as accurately, as its well-established but time-consuming counterpart. That means the team he works with and others around the country will be able to identify the sources of mercury poisoning and help those affected much faster than before.
“Testing one product using the old technique could take days,” he said. “Using the new instrument, I can run through 20 or 30 samples in a day quite easily. By identifying those products that contain mercury, we can direct people to remove them and clean up their households.”
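The screening arithmetic is straightforward, as the minimal sketch below shows. The cream names and most concentrations are hypothetical; only the 1 part-per-million limit, the 210,000 ppm reading and the rough throughput figures come from the quotes above, and the “two days per sample” value is an approximation of “could take days.”

```python
# Sketch of the screening arithmetic described above. Sample names and most
# concentrations are hypothetical; the 1 ppm limit and 210,000 ppm reading are
# from the article, and 2 days/sample approximates "could take days".

US_LIMIT_PPM = 1.0

samples = {"cream A": 0.4, "cream B": 35_000, "cream C": 210_000}
for name, ppm in samples.items():
    if ppm > US_LIMIT_PPM:
        print(f"{name}: {ppm:,.0f} ppm ({ppm / US_LIMIT_PPM:,.0f}x over the limit)")
    else:
        print(f"{name}: {ppm} ppm (within the limit)")

# Throughput comparison: roughly 2 days per sample previously vs. about
# 25 samples per day with total reflection x-ray fluorescence.
old_days_per_sample, new_samples_per_day = 2, 25
print(f"Speed-up of roughly {old_days_per_sample * new_samples_per_day}x per working day")
```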
Although the metal does lighten skin, dark spots and even acne, research has shown that the silvery liquid can cause a number of health problems, including lower cognitive functioning, kidney damage, headaches, fatigue, hand tremors, depression and other symptoms. As a result, the US and many other countries have set low limits on or have banned mercury in consumer products.
But demand is high among certain populations for these skin-lightening products. People bring them into the US in their personal luggage from other regions where the creams are popular, including Asia, Central America, the Middle East and Africa, Vrdoljak explains. Then they distribute the creams to friends and families or sell them through small ethnic stores — off the regulatory radar.
When cream users start noticing hand shaking, headaches and other symptoms, they visit their doctors. Through a urine test, they can find out whether they have high levels of mercury. In these cases, Vrdoljak says his team can step in. They analyze dozens of bottles and containers from the patients’ homes to root out the products that contain mercury. Their work has led to two product recalls earlier this year, but often, they find the cosmetics are homemade and come in unmarked containers.
“In the U.S., it’s hard to gauge how much of these products are being used,” Vrdoljak says. “But at least with this new technique, we can identify them much faster and help more people than before.”
Contents In ‘Fracking’ Fluids Raise Red Flags: Study
As the oil and gas drilling technique called hydraulic fracturing (or “fracking”) proliferates, a new study on the contents of the fluids involved in the process raises concerns about several ingredients. The scientists presenting the work today at the 248th National Meeting & Exposition of the American Chemical Society (ACS) say that out of nearly 200 commonly used compounds, there’s very little known about the potential health risks of about one-third, and eight are toxic to mammals.
The meeting features nearly 12,000 presentations on a wide range of science topics and is being held here through Thursday by ACS, the world’s largest scientific society.
William Stringfellow, Ph.D., says he conducted the review of fracking contents to help resolve the public debate over the controversial drilling practice. Fracking involves injecting water with a mix of chemical additives into rock formations deep underground to promote the release of oil and gas. It has led to a natural gas boom in the US, but it has also stimulated major opposition and troubling reports of contaminated well water, as well as increased air pollution near drill sites.
“The industrial side was saying, ‘We’re just using food additives, basically making ice cream here,'” Stringfellow says. “On the other side, there’s talk about the injection of thousands of toxic chemicals. As scientists, we looked at the debate and asked, ‘What’s the real story?'”
To find out, Stringfellow’s team at Lawrence Berkeley National Laboratory and University of the Pacific scoured databases and reports to compile a list of substances commonly used in fracking. They include gelling agents to thicken the fluids, biocides to keep microbes from growing, sand to prop open tiny cracks in the rocks and compounds to prevent pipe corrosion.
What their analysis revealed was a little truth to both sides’ stories — with big caveats. Fracking fluids do contain many nontoxic and food-grade materials, as the industry asserts. But if something is edible or biodegradable, it doesn’t automatically mean it can be easily disposed of, Stringfellow notes.
“You can’t take a truckload of ice cream and dump it down the storm drain,” he says, building on the industry’s analogy. “Even ice cream manufacturers have to treat dairy wastes, which are natural and biodegradable. They must break them down rather than releasing them directly into the environment.”
His team found that most fracking compounds will require treatment before being released. And, although not in the thousands as some critics suggest, the scientists identified eight substances, including biocides, that raised red flags. These eight compounds were identified as being particularly toxic to mammals.
“There are a number of chemicals, like corrosion inhibitors and biocides in particular, that are being used in reasonably high concentrations that potentially could have adverse effects,” Stringfellow says. “Biocides, for example, are designed to kill bacteria — it’s not a benign material.”
They’re also looking at the environmental impact of the fracking fluids, and they are finding that some have toxic effects on aquatic life.
In addition, for about one-third of the approximately 190 compounds the scientists identified as ingredients in various fracking formulas, the scientists found very little information about toxicity and physical and chemical properties.
“It should be a priority to try to close that data gap,” Stringfellow says.
‘Trojan Horse’ Treatment Could Beat Brain Tumors
A smart technology which involves smuggling gold nanoparticles into brain cancer cells has proven highly effective in lab-based tests
A “Trojan horse” treatment for an aggressive form of brain cancer, which involves using tiny nanoparticles of gold to kill tumor cells, has been successfully tested by scientists.
The ground-breaking technique could eventually be used to treat glioblastoma multiforme, which is the most common and aggressive brain tumor in adults, and notoriously difficult to treat. Many sufferers die within a few months of diagnosis, and just six in every 100 patients with the condition are alive after five years.
The research involved engineering nanostructures containing both gold and cisplatin, a conventional chemotherapy drug. These were released into tumor cells that had been taken from glioblastoma patients and grown in the lab.
Once inside, these “nanospheres” were exposed to radiotherapy. This caused the gold to release electrons which damaged the cancer cell’s DNA and its overall structure, thereby enhancing the impact of the chemotherapy drug.
The process was so effective that 20 days later, the cell culture showed no evidence of any revival, suggesting that the tumor cells had been destroyed.
While further work needs to be done before the same technology can be used to treat people with glioblastoma, the results offer a highly promising foundation for future therapies. Importantly, the research was carried out on cell lines derived directly from glioblastoma patients, enabling the team to test the approach on evolving, drug-resistant tumors.
The study was led by Mark Welland, Professor of Nanotechnology and a Fellow of St John’s College, University of Cambridge, and Dr Colin Watts, a clinician scientist and honorary consultant neurosurgeon at the Department of Clinical Neurosciences. Their work is reported in the Royal Society of Chemistry journal, Nanoscale.
How Naltrexone May Help People With Fibromyalgia
Naltrexone is a medication used to help former narcotics users who have stopped taking narcotics remain drug free. People addicted to alcohol also take naltrexone to stay alcohol free.
Although it’s used to stifle addiction, naltrexone isn’t a cure for addiction nor is the medication a narcotic. Naltrexone actually works to block the effects from narcotics, particularly the characteristic high people get after taking narcotics or drinking alcohol.
Naltrexone does cause withdrawal symptoms in people who have a physical dependence on narcotics. People generally start taking naltrexone when they’re no longer dependent on narcotics or alcohol.
This medication is prescribed to help people stop using narcotics and/or drinking copious amounts of alcohol. Naltrexone has historically been used as part of addiction treatment and recovery programs, and it has not traditionally been used to treat other conditions.
That might be changing, though. Recent studies suggest naltrexone has some ‘benefit’ for people who may have fibromyalgia. In fact, sources are even going as far as to say that naltrexone may help ‘ease fibromyalgia symptoms.’
Naltrexone and fibromyalgia: the study
The naltrexone fibromyalgia relationship started gaining traction within the past decade. Back in 2009, a Stanford University research report revealed that naltrexone might be a suitable treatment for people with fibromyalgia.
The study described tests on 10 women who had had fibromyalgia for over 10 years on average. The women took a low dose of naltrexone during one phase of the study.
The women first spent two weeks recording the severity of the symptoms originating from their fibromyalgia. Their daily data was recorded using a handheld computer. They also underwent laboratory tests to measure their fibromyalgia pain threshold, as well as their sensitivities to cold and heat.
After the first phase of the test, the women took a placebo pill each day for another two weeks. The women weren’t informed that the pill they took was a placebo pill. After the end of that period, the women took a naltrexone pill once a day for a period of eight weeks. They spent the last two weeks of the study not taking either pill.
Throughout the duration of the study, the women continued to record their fibromyalgia symptoms each day. They also repeated lab tests every two weeks.
Naltrexone and fibromyalgia: the results and effects
The results came in at the end of the Stanford study. Interestingly enough, most of the results showed that naltrexone did play a role in easing the women’s fibromyalgia symptoms.
When the women took the placebo pill, they reported a ‘2.3 percent drop in severe fibromyalgia symptoms.’ Those results were compared against their rating taken at the beginning of the study.
After switching from the placebo pill to a naltrexone pill, the results got rather interesting. The women reported ‘another 30 percent drop in the severity of their fibromyalgia symptoms.’ They also found that they had developed a ‘greater tolerance for pain and hotter temperatures when taking naltrexone.’ They didn’t, however, develop a better tolerance for colder temperatures.
Out of the 10 women tested in the study, 6 women reported that they ‘responded to naltrexone.’ Any reported side effects were said to be ‘brief and mild.’ Though, when it came to milder side effects, two women reported having ‘increasingly vivid dreams’ during the study. Another woman reported having ‘nausea and insomnia’ during the first few nights of taking the pills.
The results gave Stanford researchers confidence that naltrexone could be used to successfully treat fibromyalgia. Since then, other studies have been performed to explore the naltrexone-fibromyalgia relationship.
The naltrexone-fibromyalgia relationship: more evidence
The years following the aforementioned study saw more medical researchers take up the ongoing investigation of the naltrexone-fibromyalgia relationship.
The continued study of that particular relationship is said to help researchers and doctors find ways to treat people who don’t respond well to other FDA approved medications.
Naltrexone is said to help treat fibromyalgia through ‘boosting the endogenous endorphin function while suppressing the central pro-inflammatory cytokines.’ As a result, it causes an effect that helps decrease pain and other associated symptoms.
In 2013, the American College of Rheumatology presented a new study that followed 25 patients with fibromyalgia. Most of the patients (24) were women diagnosed with the condition.
The subjects of the study were given a low dose of naltrexone (3 mg, 4.5 mg maximum) every night throughout the study’s three month period. The outcome was measured using the Revised Fibromyalgia Impact Questionnaire (FIQR) during month 3, while any adverse reactions were also recorded throughout the trial.
Patients who were taking any FDA-approved fibromyalgia medications were allowed to keep taking their medications. That accounted for 18 out of the 25 patients in the study. The remaining patients (7) took naltrexone as monotherapy throughout the duration of the study.
As a result, 22 patients successfully completed the entire study. Two people discontinued after finding naltrexone ineffective for treating their fibromyalgia, and one person dropped out of the study after experiencing diarrhea as a side effect.
The remaining study participants improved their FIQR scores by 19.5 percent on average. Half of those participants had a stronger response, showing a 41 percent improvement on average. Many of the patients also reported a ‘decrease in pain, sleep problems and anxiety’ during the study.
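For context on how such percentages are derived, the sketch below computes a percent improvement in an FIQR-style score. The baseline and follow-up values are hypothetical and chosen only to reproduce the roughly 19.5 percent average reported above.

```python
# Minimal sketch of how a percent improvement in an outcome score such as the
# FIQR is computed. The scores below are hypothetical; only the ~19.5% average
# improvement figure comes from the study summary above.

def percent_improvement(baseline, follow_up):
    """Percent reduction relative to baseline (a lower FIQR score is better)."""
    return (baseline - follow_up) / baseline * 100

baseline_fiqr = 60.0   # hypothetical score at the start of the trial
follow_up_fiqr = 48.3  # hypothetical score at month 3

print(f"Improvement: {percent_improvement(baseline_fiqr, follow_up_fiqr):.1f}%")  # ~19.5%
```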
Is naltrexone an effective fibromyalgia treatment?
Thanks to those results, it seems that medical researchers have solid evidence for the naltrexone-fibromyalgia relationship. Naltrexone is also an attractive candidate treatment because it is easy for patients to tolerate and relatively inexpensive. Even though these results point to a promising future for naltrexone as a fibromyalgia treatment option, the case is not yet settled.
Until medical researchers and doctors have concrete proof that naltrexone helps ease fibromyalgia symptoms, they cannot put it forward as a suitable candidate for FDA approval.
The aforementioned naltrexone fibromyalgia studies only represent possibilities for what naltrexone can offer people who have fibromyalgia. To successfully prove that the medication may help treat fibromyalgia, researchers and other medical professionals need to provide evidence for how naltrexone works in the context of suppressing the ‘addiction’ mechanisms invoked by the brain.
That way, they can see whether those mechanisms relate to how the brain influences fibromyalgia and finally determine whether naltrexone works as a treatment.
Emergency Gallbladder Surgery: Do You Need It, Or Can You Afford To Wait?
Sharon Theimer, Mayo Clinic
Study: younger, older people likelier to visit ER repeatedly with gallstone pain before surgery
Gallstone pain is one of the most common reasons patients visit emergency rooms. Figuring out who needs emergency gallbladder removal and who can go home and schedule surgery at their convenience is sometimes a tricky question, and it isn’t always answered correctly. A new Mayo Clinic study found that 1 in 5 patients who went to the emergency room with gallbladder pain and were sent home to schedule surgery returned to the ER within 30 days needing emergency gallbladder removal. The surgical complication rate rises with the time lag before surgery, the researchers say.
“It makes a big difference if you get the right treatment at the right time,” says co-lead author Juliane Bingener-Casey, M.D., a gastroenterologic surgeon at Mayo Clinic in Rochester. The study is published in the Journal of Surgical Research.
Often it’s obvious who needs emergency gallbladder removal, a procedure known as cholecystectomy, who can delay it and who doesn’t need surgery at all. But sometimes patients fall into a gray area. Mayo researchers are working to develop a reliable tool to help determine the best course of action in those cases, and the newly published study is a first step, Dr. Bingener-Casey says.
How to handle gallstone patients is a cost and quality issue in health care. In the United States, 1 in 10 women and 1 in 15 men have gallstones, and more than 1 million people a year are hospitalized for gallstone disease. The fatty food common in U.S. diets is a contributing factor, Dr. Bingener-Casey says.
ER visits and emergency surgery are typically more expensive than scheduled surgeries. In addition to cost issues, patients often prefer the convenience of scheduling surgery, so they can arrange child care and leave from work, for example. But delaying a needed gallbladder removal more than six days increases the surgical complication rate and may make patients likelier to need open-abdomen surgery rather than a minimally invasive laparoscopic procedure, the researchers noted.
“Gallbladder disease is very frequent and it’s one of the most expensive diseases for the nation as a whole. If we can get that right the first time, I think we can make things better for a lot of people,” Dr. Bingener-Casey says.
Researchers studied the billing records of 3,138 patients at Mayo in Rochester between 2000 and 2013 who went to the emergency department for abdominal pain within 30 days before gallbladder surgery. Of those, 1,625 were admitted for emergency gallbladder surgery, and 1,513 were allowed to go home and schedule surgery at a later date. Of the patients who went home, 20 percent came back to the emergency room within a month needing a cholecystectomy urgently, and of those, 55 percent were back in the ER within a week for emergency surgery.
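Working those percentages back into approximate patient counts gives a sense of scale; the sketch below uses only the group size and percentages reported above, with rounding.

```python
# Working the percentages above back into approximate patient counts (rounded);
# the article reports only the group sizes and the percentages.

discharged = 1_513
returned_within_30_days = round(discharged * 0.20)            # ~20% came back within a month
returned_within_week = round(returned_within_30_days * 0.55)  # 55% of those, within a week

print(f"~{returned_within_30_days} of {discharged} discharged patients returned within 30 days")
print(f"~{returned_within_week} of those were back within one week")
```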
[ Watch: Emergency Gallstone Surgery: Do You Need It, Or Can You Afford To Wait? ]
Among those discharged from the ER, younger patients who were otherwise healthy and older patients who did have other health problems were likelier than people in their 40s and 50s to return to the emergency room within a month and need gallbladder removal urgently, the study found. That suggests that younger patients, older patients and those with other serious medical conditions may benefit from a second look before they are discharged from the emergency room, the researchers say.
Researchers analyzed test results typically considered indicators of gallbladder disease, including white blood cell count, temperature and heart rate, and saw no difference between those who left the ER and didn’t make a repeat visit and those who left the emergency room only to come back within a month. Such metrics may be incorporated into a decision tool if they hold up during future research.
The study was funded in part by the National Institute of Diabetes and Digestive and Kidney Diseases award number K23DK93553.
The co-lead author is Elizabeth Habermann, Ph.D., scientific director of surgical outcomes research at Mayo Clinic’s Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery.
More Exercise Is Not Always Better, Especially For Heart Attack Survivors
April Flowers for redOrbit.com – Your Universe Online
Exercise is good for the body and mind. Many studies have shown that moderate, regular exercise, such as brisk walking or jogging, can help with weight loss, mood changes, the management and rehabilitation of cardiovascular disease, and even lowering the risk of death from diseases such as hypertension, stroke and Type 2 diabetes. Currently, the Physical Activity Guidelines for Americans suggest 150 minutes a week of moderate-intensity exercise, or 75 minutes of high-intensity exercise, to maintain optimum health. But is there such a thing as too much exercise?
According to a new study, published in the Mayo Clinic Proceedings, the answer is yes. The findings indicate a clear link between excessive exercise and cardiovascular deaths among heart attack survivors.
The research team, comprised of Paul T. Williams, PhD, of the Life Sciences Division, Lawrence Berkeley National Laboratory and Paul D. Thompson, MD, of the Department of Cardiology, Hartford Hospital, analyzed the association between exercise and cardiovascular disease-related deaths. Using the National Walkers’ and Runners’ Health Studies databases, the researchers conducted a prospective long-term study with about 2,400 physically active heart attack survivors. The National Walkers’ and Runners’ study confirmed prior results which demonstrated walking and running gave the same cardiovascular benefits, as long as the energy expenditures were the same. For example, running takes half as long to expend the same number of calories as walking.
Patients who ran less than 30 miles or walked less than 46 miles a week saw a remarkable 65 percent dose-dependent reduction in death from cardiovascular events. Much of the benefit was lost past this point, in what scientists call a reverse J-curve pattern.
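The dose-response thresholds described above can be summarized in a simple classifier like the hypothetical sketch below; the cutoffs are a simplification of the study’s findings for heart attack survivors and are not clinical guidance.

```python
# Illustrative only: a rough classification of weekly exercise dose using the
# thresholds quoted above for heart attack survivors (running < 30 miles/week
# or walking < 46 miles/week). A simplification of the article, not guidance.

def exercise_dose_category(run_miles_per_week=0.0, walk_miles_per_week=0.0):
    """Roughly classify weekly exercise dose per the thresholds quoted above."""
    if run_miles_per_week == 0 and walk_miles_per_week == 0:
        return "sedentary"
    if run_miles_per_week < 30 and walk_miles_per_week < 46:
        return "within the range associated with reduced cardiovascular risk"
    return "beyond the point where the study saw benefits diminish"

print(exercise_dose_category(run_miles_per_week=20))
print(exercise_dose_category(run_miles_per_week=40))
```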
“These analyses provide what is to our knowledge the first data in humans demonstrating a statistically significant increase in cardiovascular risk with the highest levels of exercise,” the researchers said in an Elsevier Health Sciences statement. “Results suggest that the benefits of running or walking do not accrue indefinitely and that above some level, perhaps 30 miles per week of running, there is a significant increase in risk. Competitive running events also appear to increase the risk of an acute event.” They caution, however, that “our study population consisted of heart attack survivors and so the findings cannot be readily generalized to the entire population of heavy exercisers.”
Another study in the same issue of the Mayo Clinic Proceedings describes a meta-analysis of ten cohort studies aimed at providing an accurate overview of mortality in elite athletes. Over 42,000 top athletes participating in football, baseball, track and field, and cycling were examined, including Olympic-level athletes and Tour de France participants. Of the total cohort, 707 athletes were women.
“What we found on the evidence available was that elite athletes (mostly men) live longer than the general population, which suggests that the beneficial health effects of exercise, particularly in decreasing cardiovascular disease and cancer risk, are not necessarily confined to moderate doses,” comments Alejandro Lucia, MD, PhD, of the European University Madrid, Spain. “More research is needed however, using more homogeneous cohorts and a more proportional representation of both sexes.”
“Extrapolation of the data from the current Williams and Thompson study to the general population would suggest that approximately one out of twenty people is overdoing exercise,” commented James H. O’Keefe, MD, from the Mid America Heart Institute in Kansas City, MO, and first author of an editorial entitled “Exercising for Health and Longevity versus Peak Performance: Different Regimens for Different Goals,” which appears in the same issue. O’Keefe and his colleagues explain that “we have suggested the term ‘cardiac overuse injury’ for this increasingly common consequence of the ‘more exercise is better’ strategy.” Of those same 20 people, the authors note that 10 are not getting the minimum recommended 150 minutes a week of physical activity.
O’Keefe and his co-authors, Carl “Chip” Lavie, MD, and Barry Franklin, PhD, caution that a cumulative dose of not more than five hours per week of vigorous exercise is the upper limit identified in several studies for long-term cardiovascular health and life expectancy. They also suggest taking one to two days a week off from vigorous exercise altogether, and refraining from high-intensity workouts on an everyday basis. Instead, the researchers suggest that people from either end of the exercise spectrum—over-exercisers and sedentary people alike—might reap more long-term health benefits by changing their physical activity levels to be in the moderate range.
“For patients with heart disease, almost all should be exercising, and generally most should be exercising 30-40 minutes most days, but from a health stand-point, there is no reason to exercise much longer than that and especially not more than 60 minutes on most days,” says Lavie, who is a cardiologist at the John Ochsner Heart and Vascular Institute, New Orleans, LA. “As Hippocrates said more than 2,000 years ago, ‘if we could give every individual the right amount of nourishment and exercise, not too little and not too much, we would have found the safest way to health.’ I and my co-authors believe this assessment continues to provide wise guidance,” he concluded.
> Read more about the Physical Activity Guidelines for Americans…
Smithsonian Turns To Crowdsourcing In An Effort To Digitize Historical Documents
redOrbit Staff & Wire Reports – Your Universe Online
The Smithsonian Institution is recruiting volunteers willing to help them digitize documents like handwritten Civil War journals, letters written by famous artists and century-old botany specimen labels so that these nuggets of American history can educate and entertain Web surfers throughout the world.
The Smithsonian’s Transcription Center website, launched to the public earlier this week, is an attempt to create an online archive of the Institute’s collection while also making it more accessible to the community and to facilitate research. To that end, the Center is working with digital volunteers to transcribe historical documents and collection records, since many of them are handwritten or cannot be deciphered by computers.
Millions of objects, specimens and documents have already been added to the collection, the museum and research complex said, but there is still much work to be done. So on Tuesday, the Smithsonian issued a public call for volunteers willing to help decipher a multitude of items, ranging from handwritten specimen tags to early US currency.
“We are thrilled to invite the public to be our partners in the creation of knowledge to help open our resources for professional and casual researchers to make new discoveries,” Smithsonian Secretary Wayne Clough said. “For years, the vast resources of the Smithsonian were powered by the pen; they can now be powered by the pixel.”
“Though many specimens and documents have been digitized, handwriting can be tricky,” added Helen Thompson of Smithsonian.com. “The goal is to crowdsource the transcription of material that a computer just can’t decipher. By opening the transcription process up to the public, they hope to make those images not only accessible, but searchable and indexable to researchers and anyone else who’s interested across the globe.”
A year-long period of beta testing started in June 2013, and according to Thompson, 1,000 volunteers were able to transcribe more than 13,000 pages of archived documents during that time. One of the items digitized during that time was the personal correspondence of members of the Monuments Men, which is held in the Smithsonian’s Archives of American Art collection. A team of 49 volunteers completed that 200-page project in just one week.
In fact, in some instances, the Institute reports that volunteers are successfully digitizing items or collections in a fraction of the time that would have been required without their assistance. However, since crowdsourcing can open the door for mistakes to be made, the Smithsonian is having several individuals work on and review each page in an attempt to avoid typos or discrepancies. Once the work is finished, the accuracy will be verified by an Institute expert.
For those interested, some of the ongoing digitization projects include Mary Smith’s Commonplace Book Concerning Science and Mathematics, a handwritten book that contains a summary of several scientific discoveries from the late 1700s; a multi-volume English-Alabama and Alabama-English dictionary compiled between 1906 and 1913; and a project involving the photographing and deciphering of the tags on 45,000 bee specimens.
“Once finished projects get the Smithsonian’s stamp of approval, users can download them through the collections website or the transcription center,” Thompson said. “As the Smithsonian digitizes more and more of its collections, the plan is to make them available online for volunteers to transcribe and historical scholars and enthusiasts to enjoy.”
Venom Peptides Show Promise In Halting The Growth And Spread Of Cancer
redOrbit Staff & Wire Reports – Your Universe Online
Bee stings and snake bites are not typically viewed as things that are beneficial to a person’s health, but new research out of the University of Illinois suggests that they could be powerful tools in the treatment of cancer.
[ Watch: Treating Cancer With Venom ]
In research presented at the 248th National Meeting of the American Chemical Society (ACS) in San Francisco, California on Monday, Dr. Dipanjan Pan described how venom from bees, snakes and scorpions could form the basis of next-generation cancer-fighting drugs.
Dr. Pan and his colleagues have developed a method which utilizes nanotechnology and makes it so the venom proteins specifically target malignant cells while keeping healthy ones safe from harm. Their work would reduce or eliminate cell damage and other painful side effects usually caused by the toxins, the study authors explained.
“We have safely used venom toxins in tiny nanometer-sized particles to treat breast cancer and melanoma cells in the laboratory. These particles, which are camouflaged from the immune system, take the toxin directly to the cancer cells, sparing normal tissue,” Dr. Pan said in a statement.
There are proteins and peptides in snake, bee and scorpion venom which can attach to cancer cell membranes when they are separated from other components of the toxins and tested individually, the researchers explained. The team then used nanoparticles to disguise the toxins, and found that the camouflaged particles bypassed healthy cells.
Previous studies have indicated this activity could potentially be used to block the growth and spread of cancer, and Pan’s team noted that some substances found in these venoms could be effective anti-tumor agents. However, simply injecting venom into a patient would have noticeable side effects, such as damage to heart muscle and nerve cells, unwanted clotting or bleeding beneath the skin.
During their research, the Illinois scientists identified a substance in honey bee venom known as melittin, which they claim kept cancer cells from multiplying. Since bees do not produce enough venom for it to be extracted on a regular basis, Dr. Pan and his colleagues opted to create synthetic melittin in the laboratory.
First, they conducted computational analysis to determine how the substance would work inside a nanoparticle. Next, they conducted a test in which they injected synthetic melittin into nanoparticles. They discovered that the toxins were so densely packed that they did not expand once they reached the bloodstream, and instead went straight for the tumor. Once there, they bound themselves to cancer stem cells, preventing them from growing and spreading.
Those stem cells are “what we are interested in,” Dr. Pan told Jen Christensen of CNN.com. “Those are the cells responsible for metastasizing and also responsible for having the cancer cells grow back. If we can target better using this technique, we potentially have a better cancer treatment.”
“Unlike chemotherapy, this more targeted technique would, in theory, only affect cancer cells,” Christensen added. “If it’s successful, this natural agent found in venom could become the basis for a whole legion of cancer-fighting drugs.”
Dr. Pan added that synthetic peptides mimicking components of other venoms, such as those from scorpions or snakes, have also shown promise as potential cancer therapies when delivered via the nanoparticle treatment.
The next step, he added, is to examine how effective the approach is in rats and pigs. Within the next three to five years, the Illinois team hopes to launch a study involving actual cancer patients.
Secrets Behind Gecko’s Amazing Adhesive Skills Revealed In New Study
redOrbit Staff & Wire Reports – Your Universe Online
The uncanny ability of geckos to adhere to nearly any surface and even walk on ceilings is the result of a biological mechanism in their toes that the lizards can instantly turn on or off, according to new research appearing in the latest edition of the Journal of Applied Physics.
Geckos, along with spiders and some insects, appear to defy gravity with the way they are able to scale walls and cling to high surfaces, the Oregon State University researchers explained. This unique skill, it turns out, is the result of tiny branched hairs on the bottom of their feet known as “seta” that can be activated or deactivated at a moment’s notice.
These setae allow geckos to run at great speeds, evade predators and keep themselves alive – all without expending any extra energy. The new study examines the mechanics behind this unique adhesion system, which evolved independently in geckos, spiders and insects and has been around for several million years, they added.
“Since the time of the ancient Greeks, people have wondered how geckos are able to stick to walls – even Archimedes is known to have pondered this problem,” study co-author and assistant professor of engineering Alex Greaney explained in a statement Tuesday. “It was only very recently, in 2000, that Kellar Autumn and colleagues proved unequivocally that geckos stick using van der Waals forces.”
According to Stefan Sirucek of National Geographic, van der Waals forces arise when the electrons in one atom form a fleeting electric dipole that induces and attracts a matching dipole in a second, neighboring atom. This phenomenon allows the gecko to use a system known as “dry adhesion” in order to attach to surfaces.
While van der Waals forces are the weakest type of interatomic forces there are, Greaney explained that geckos “are able to take advantage of them because of a remarkable system of branched hairs called ‘seta’ on their toes. These seta and their hierarchy can deform to make intimate contact with even very rough surfaces – resulting in millions of contact points that each are able to carry a small load.”
Greaney added that the angle and flexibility of the setae also play an important role in the adhesive process, which expends a minimal amount of energy. Geckos are capable of darting across a ceiling with the millions of setae on their feet becoming sticky as needed. This process allows them to run at speeds of 20 body lengths per second, and the forces provided by these foot-based hairs could support 50 times a gecko’s own body weight.
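To get a feel for the numbers involved, here is a rough back-of-the-envelope sketch. The seta count, per-seta load and body mass below are illustrative assumptions in the spirit of the figures described above, not measurements from the study.

```python
# Rough back-of-the-envelope check of the "millions of contact points, each
# carrying a small load" idea. All values below are illustrative assumptions,
# not numbers taken from the study.

NUM_SETAE = 6.5e6        # assumed number of setae across a gecko's feet
FORCE_PER_SETA = 20e-6   # assumed load per seta, in newtons (~20 micronewtons)
GECKO_MASS_KG = 0.07     # assumed gecko body mass, in kilograms
G = 9.81                 # gravitational acceleration, m/s^2

total_force = NUM_SETAE * FORCE_PER_SETA   # total force if every seta engages, in newtons
body_weight = GECKO_MASS_KG * G            # the gecko's weight, in newtons

print(f"Total adhesive force if all setae engage: {total_force:.0f} N")
print(f"Gecko body weight:                        {body_weight:.2f} N")
print(f"Force-to-weight ratio:                    {total_force / body_weight:.0f}x")
```

Even with modest per-seta loads, the total under these assumptions comfortably exceeds the 50-times-body-weight figure quoted above, which is why only a fraction of the setae need to engage at any moment.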
“Understanding the subtleties of the process for switching stickiness on and off is groundbreaking,” he said. “By using mathematical modeling, we’ve found a simple, but ingenious, mechanism allows the gecko to switch back and forth between being sticky or not. Geckos’ feet are by default nonsticky, and this stickiness is activated through application of a small shear force. Gecko adhesion can be thought of as the opposite of friction.”
“What’s amazing is just how finely balanced and finely tuned this whole system is. We understand it at one level, and as we learn more and more about it, it turns out there’s a really subtle interplay of things going on,” Greaney, who co-authored the paper along with Congcong Hu of the OSU Computational Materials Research Group, told Sirucek.
While the setae have already served as the inspiration for wall-climbing robots, space-age adhesive materials and other technologies, the researchers believe their work will further improve the design of tools based on the reliable adhesion produced by gecko feet, added Washington Post reporter Rachel Feltman.
Safe Driver Discount Devices Could Be Used To Track A Driver’s Location
redOrbit Staff & Wire Reports – Your Universe Online
Allowing insurance companies to monitor your driving habits in exchange for a discount on your premiums might sound like a tempting offer, but a team of Rutgers University computer engineers caution that entering into such an arrangement could grant those firms access to information you may not want them to have.
In research scheduled to be presented at the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2014) in Seattle next month, Janne Lindqvist, an assistant professor in the university’s Department of Electrical and Computer Engineering, and his colleagues report that drivers who agree to be monitored could inadvertently reveal where they are driving.
Lindqvist’s team demonstrated that a driver could reveal their travel destination, even without having a GPS device or other location-sensing technology installed in their vehicles. All an insurance company would need is a starting location such as a home address and a steady stream of data showing how fast the person is driving.
The study authors note that both insurance companies and their customers have incentive to monitor their driving speeds, since drivers who avoid sudden starts and stops tend to be lower-risk drivers, and that type of behavior tends to be rewarded. As such, some firms are offering reduced premiums to customers who are willing to use a device that constantly measures, records and reports how quickly they are driving.
“The companies claim this doesn’t compromise privacy, because all they are collecting is your speed, not your location,” Lindqvist, who is also a member of the Rutgers’ Wireless Information Network Laboratory (WINLAB), explained in a statement Monday. “But we’ve shown that speed data and a starting point are all we need to roughly identify where you have driven.”
It is difficult to determine a person’s exact driving path using this limited amount of basic information, and the results tend to be less precise than using a GPS or cellular signal to track movement, the researchers explained. However, using a technique known as “elastic pathing,” the Rutgers team stated that it is possible to accurately pinpoint a person’s destination to within one-third of a mile, sometimes with just one journey.
Elastic pathing predicts the route a person has taken by matching speed patterns with street layouts, the university explained. For example, if a person lives at the end of a cul-de-sac one-fourth of a mile from an intersection, their speed data would show a minute of driving at speeds of up to 30 mph to reach the intersection.
Then, if a left turn leads them to an expressway or boulevard and a right turn leads them to a narrow road filled with stop signs or traffic lights, it would be easy to determine which direction the driver went based on whether their speed data showed a long stretch of fast driving or a slower stretch of stop-and-go driving. By matching speed patterns with the most likely road patterns, it is possible to approximate the route a person travels and their destination.
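The description above can be made concrete with a small sketch. The code below is a toy illustration only: the route shapes, speeds and matching score are invented for this example, and the actual Rutgers elastic pathing algorithm is considerably more sophisticated. It shows the core idea of expanding each candidate route from a known start point into an expected speed profile, then scoring how well the recorded speed trace fits each one.

```python
# Toy illustration of the "elastic pathing" idea: given a recorded speed trace
# and candidate routes from a known start point, score each route by how well
# its expected speed profile matches the trace. Route shapes and speeds are
# invented; the real algorithm is far more sophisticated.

from typing import Dict, List

def expected_profile(segments: List[Dict], step_s: float = 1.0) -> List[float]:
    """Expand a route (segments with length_m and typical speed_mps)
    into a second-by-second expected speed profile."""
    profile = []
    for seg in segments:
        duration = seg["length_m"] / seg["speed_mps"]
        profile.extend([seg["speed_mps"]] * max(1, round(duration / step_s)))
    return profile

def mismatch(trace: List[float], profile: List[float]) -> float:
    """Mean squared difference between observed and expected speeds,
    compared over the shorter of the two sequences."""
    n = min(len(trace), len(profile))
    return sum((trace[i] - profile[i]) ** 2 for i in range(n)) / n

# Hypothetical candidate routes leaving the same cul-de-sac (speeds in m/s).
routes = {
    "left onto boulevard":  [{"length_m": 400, "speed_mps": 13}, {"length_m": 2000, "speed_mps": 25}],
    "right onto side road": [{"length_m": 400, "speed_mps": 13}, {"length_m": 2000, "speed_mps": 8}],
}

# A fake recorded speed trace: a slow start, then sustained fast driving.
observed = [13] * 30 + [25] * 80

best = min(routes, key=lambda name: mismatch(observed, expected_profile(routes[name])))
print("Most likely route:", best)   # -> "left onto boulevard"
```

With only a start location and the speed trace, the fast-road and slow-road hypotheses produce very different expected profiles, which is exactly the information an analyst would exploit.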
“Lindqvist doesn’t claim that insurance companies are actually processing the data to reveal locations,” the university said. “The techniques he and his colleagues are exploring are in their early stages and are not obvious to implement. Insurance companies likely wouldn’t benefit from knowing this information, especially if it is costly to obtain.”
However, he added that it was possible that law enforcement officials could subpoena this information and conduct these data analyses if they felt it necessary to learn where someone had driven. Lindqvist also noted that he was not stating that insurance companies should not monitor speed data, only that they “should not imply that their speed data collection is privacy preserving.”
Biomarker Could Reveal Why Some Develop Post-Traumatic Stress Disorder
The Mount Sinai Hospital / Mount Sinai School of Medicine
Blood expression levels of genes targeted by the stress hormones called glucocorticoids could be a physical measure, or biomarker, of risk for developing Post-Traumatic Stress Disorder (PTSD), according to a study conducted in rats by researchers at the Icahn School of Medicine at Mount Sinai and published August 11 in Proceedings of the National Academy of Sciences (PNAS). That also makes the steroid hormones’ receptor, the glucocorticoid receptor, a potential target for new drugs.
Post-Traumatic Stress Disorder (PTSD) is triggered by a terrifying event, either witnessed or experienced. Symptoms may include flashbacks, nightmares and severe anxiety, as well as uncontrollable thoughts about the event. Not everyone who experiences trauma develops PTSD, which is why the study aimed to identify biomarkers that could better measure each person’s vulnerability to the disorder.
“Our aim was to determine which genes are differentially expressed in relation to PTSD,” said lead investigator Rachel Yehuda, PhD, Professor of Psychiatry and Neuroscience and Director of the Traumatic Stress Studies Division at the Icahn School of Medicine at Mount Sinai. “We found that most of the genes and pathways that are different in PTSD-like animals compared to resilient animals are related to the glucocorticoid receptor, which suggests we might have identified a therapeutic target for treatment of PTSD,” said Dr. Yehuda, who also heads the Mental Health Patient Care Center and PTSD Research Program at the James J. Peters Veterans Affairs Medical Center in the Bronx.
The research team exposed a group of male and female rats to litter soiled by cat urine, a predatory scent that mimics a life-threatening situation. Most PTSD studies until now have used only male rats. Mount Sinai researchers included female rats in this study since women are more vulnerable than men to developing PTSD. The rats were then categorized based on their behavior one week after exposure to the scent. The authors also examined patterns of gene expression in the blood and in stress-responsive brain regions.
One week after being exposed to the soiled cat litter for 10 minutes, vulnerable rats exhibited higher anxiety and hyperarousal, and showed altered glucocorticoid receptor signaling in all tissues compared with resilient rats. Moreover, some rats were treated one hour after exposure to the cat urine scent with corticosterone, a hormone that activates the glucocorticoid receptor. These rats showed lower levels of anxiety and arousal one week later compared with untreated, trauma-exposed rats.
“PTSD is not just a disorder that affects the brain,” said co-investigator Nikolaos Daskalakis, MD, PhD, Associate Research Scientist in the Department of Psychiatry at the Icahn School of Medicine at Mount Sinai. “It involves the entire body, which is why identifying common regulators is key. The glucocorticoid receptor is the one common regulator that consistently stood out.”
Collaborators on the study include Joseph Buxbaum, PhD, Professor of Psychiatry, Neuroscience, and Genetics and Genomic Sciences and director of the Seaver Autism Center at the Icahn School of Medicine at Mount Sinai; Hagit Cohen, PhD, Professor in the Faculty of Health Sciences and director of the Anxiety & Stress Research Unit at Ben-Gurion University in Israel; and Guiqing Cai, Postdoctoral Fellow in the Department of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai.
New Cause Of Osteoarthritis Revealed
A new mechanism of joint destruction caused by a natural material that grinds away healthy cartilage and worsens osteoarthritis has been identified in human hip joints for the first time by University of Liverpool scientists.
The scientists, with Professor Alan Boyde and colleagues from Queen Mary University of London, were studying the hip of a man with the genetic condition alkaptonuria (AKU), a metabolic disease in which a substance called homogentisic acid accumulates in joint cartilage, causing changes to its physical properties.
The study revealed the presence of high density mineralized protrusions (HDMPs), which had previously been seen only in horses. These protrusions form as the body acts to fill in cracks in joint cartilage; they can snap off, leaving sharp, dense particles in the joint that grind against healthy tissue.
To confirm the findings, the team studied eight hips donated for research by people with osteoarthritis and found the same results as in the alkaptonuria patient.
Professor Jim Gallagher led the study in Liverpool. He said: “There is no cure for osteoarthritis, but it is one of the leading causes of disability, causing immense pain and difficulty of movement to sufferers.
“The discovery of HDMP in humans means that for the first time we are seeing an important mechanism in the process which causes the disease. In effect these small, sharp particles could act like an abrasive powder scouring the surfaces of the joint.”
The researchers studied the joint without removing the calcium – a step typically taken to make bones softer and easier to examine. That decalcification process, which involves using acid, would normally destroy the HDMPs, which explains why they had not been recognized in humans before.
Professor Gallagher, from the University’s Institute of Ageing and Chronic Disease concluded: “Studying a rare illness like alkaptonuria is a worthwhile project in itself, but it can also help with new insights into much more common diseases.
“This is a case in point, and because of our work on alkaptonuria, we are now able to add a new piece to the puzzle of an illness that affects millions.”
The study, published in the Journal of Anatomy, recommends that searching for these HDMPs should now be included in the study of patients with osteoarthritis.
Sensitive Acid Sensor Controls Insulin Production
ETH Zurich
ETH Zurich researchers from the Department of Biosystems Science and Engineering (D-BSSE) in Basel have developed an implantable device that precisely monitors acid build-up in the body for people with diabetes and produces insulin if acidosis becomes a risk.
Many human metabolic functions only run smoothly if the acid level in the body remains neutral and stable. For humans, normal blood pH values lie between 7.35 and 7.45. By way of comparison, an empty stomach is extremely acidic, with a pH value of 1.5.
The body constantly monitors this narrow pH band and quickly restores the ideal pH values in the event of any deviations. This is because many proteins cease to function properly if fluids in the body become even slightly more acidic. These proteins become unstable and alter their structure or interactions with other proteins, causing entire metabolic pathways to break down.
People with type 1 diabetes are particularly at risk of high acid levels. Their bodies produce no insulin, the hormone that regulates blood sugar levels, so their cells cannot absorb any glucose from the blood and have to tap into another energy source: fat reserves. In doing so, the liver produces beta-hydroxybutyrate, an acid which supplies the muscles and brain with energy via the bloodstream. If the body continues to use fat reserves for energy, however, this produces so much acid that the blood’s pH value plummets while the sugar molecules circulate in the blood unused. If the lack of insulin is not noticed or treated in time, people with type 1 diabetes can die from ketoacidosis – metabolic shock resulting from an excess of beta-hydroxybutyrate.
Sensor measures acidity
A team of bioengineers from ETH Zurich’s Department of Biosystems Science and Engineering (D-BSSE) in Basel have now developed a new implantable molecular device composed of two modules: a sensor that constantly measures blood pH and a gene feedback mechanism that produces the necessary amount of insulin. They constructed both modules from biological components, such as various genes and proteins, and incorporated them into cultivated renal cells. The researchers then embedded millions of these customized cells in capsules which can be used as implants in the body.
The heart of the implantable molecular device is the pH sensor, which measures the blood’s precise acidity and reacts sensitively to minor deviations from the ideal pH value. If the pH value falls below 7.35, the sensor transmits a signal to trigger the production of insulin. Such a low pH value is specific to type 1 diabetes: although blood pH also drops due to alcohol abuse or exercise on account of the overacidification of the muscles, it does not fall below 7.35. The hormone insulin ensures that the normal cells in the body absorb glucose again and switch from fat to sugar as their energy source for metabolism, and the pH value rises again as a result. Once blood pH returns to the ideal range, the sensor turns itself off and the reprogrammed cells stop producing insulin.
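The on/off behavior described above amounts to a simple threshold feedback loop. The sketch below illustrates that logic only; the rates at which acid accumulates and insulin restores pH are invented numbers with no physiological meaning.

```python
# Minimal sketch of the sensor's on/off feedback logic: insulin production is
# switched on when blood pH drops below 7.35 and switched off once pH climbs
# back into the normal band. The drift and recovery rates are invented purely
# for illustration.

PH_TRIGGER = 7.35       # threshold reported for the implant's pH sensor
ACID_DRIFT = 0.002      # assumed pH drop per time step from ketoacid build-up
INSULIN_EFFECT = 0.010  # assumed pH recovery per time step while insulin is made

ph = 7.40
producing_insulin = False

for step in range(150):
    ph -= ACID_DRIFT                    # untreated, acids accumulate and pH falls
    if producing_insulin:
        ph += INSULIN_EFFECT            # cells burn glucose again and acid is cleared

    if not producing_insulin and ph < PH_TRIGGER:
        producing_insulin = True        # sensor trips below 7.35
    elif producing_insulin and ph >= PH_TRIGGER:
        producing_insulin = False       # back in range: the cells shut down again

    if step % 25 == 0:
        print(f"step {step:3d}   pH {ph:.3f}   insulin {'on' if producing_insulin else 'off'}")
```

Run as written, the simulated pH drifts down to the trigger value and then hovers there as production switches on and off, which is the bang-bang behavior the press release describes.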
Insulin level back to normal
The researchers have already tested their invention on mice with type 1 diabetes and related acidosis. The results look promising: mice with the capsules implanted produced the amount of insulin appropriate to their individual acid measurements. The hormone level in the blood was comparable to that of healthy mice that regulated their insulin levels naturally. The implant also compensated for larger deviations in blood sugar.
“Applications for humans are conceivable based on this prototype, but they are yet to be developed,” says Martin Fussenegger. “We wanted to create a prototype first to see whether molecular prostheses could even be used for such fine adjustments to metabolic processes,” he says. Preparing a product like this for the market, however, is beyond the scope of his institute’s staff and financial resources, Fussenegger says, and would thus have to be pursued in collaboration with an industrial partner.
Extensive experience in metabolic diseases
Researchers in Fussenegger’s group have already made headlines several times with similar synthetic networks. For instance, they developed an implant with genes that could be activated by blue light to produce GLP-1, a hormone that regulates insulin production. They also put together a network that counteracts metabolic syndrome and is set in motion by an approved blood-pressure drug. All of these networks respond to a signal and produce a hormonally active substance. The special thing about the new feedback mechanism, however, is that the body itself produces the signal, which is then detected by a sensor that triggers a fine-tuned therapeutic reaction.
Three groups from the D-BSSE worked on the present project. Fussenegger’s group developed the genetic network; Professor of Biosystems Engineering Andreas Hierlemann and his team tested the acidity sensor with the aid of microfluidic platforms; and Jörg Stelling, a professor of computational systems biology, modelled it in order to estimate the dynamics of the insulin production.
Inequality — A Key Issue Of Economic Research
Lindau Meeting of Nobel Laureates
Nobel laureates and aspiring young economists from all over the world to discuss models and concepts in Lindau later this month
In the aftermath of the global financial crisis and the ensuing economic and political disruptions, inequality has re-emerged as a central focus of public debate. The drivers of rising inequality of income and wealth and the various scientific models for counteractive measures will be among the central topics debated among 17 Nobel Laureates in Economic Sciences and approximately 450 aspiring young economists from more than 80 countries in Lindau, Germany, next week. The 5th Lindau Meeting on Economic Sciences will bring them together for a unique dialogue across generations, cultures and scientific backgrounds. The meeting will open on 20 August with a keynote address by German Chancellor Angela Merkel, and will also feature “a panoramic view on the situation and prospects in Latin America” by Mario Vargas Llosa, the 2010 Nobel Laureate in Literature.
From 20 to 23 August, the participating laureates and the young economists will have plenty of opportunity for an intensive exchange of ideas. The numerous lectures, discussions, master classes and panel discussions of the program will address central fields of the discipline, ranging from econometrics, game theory, and neo-classical growth theory to mechanism design and systemic risk measurement. The overarching question “How useful is economics – how is economics useful?” will also be subject of the meeting’s closing panel debate on Mainau Island on Saturday, 23 August. As a special guest of honor, Queen Silvia of Sweden will attend the concluding events that day.
Among the sessions addressing the challenge of high and rising inequality will be plenary lectures by Joseph Stiglitz, Eric Maskin and James Mirrlees.
Joseph Stiglitz: “Inequality, wealth and growth: why capitalism is failing”
Recent data note large increases in both wealth and the wealth/income ratio. But there has not been the associated decline in interest rates or increases in wages that might have been expected. Indeed, in some countries, such as the United States, there has been wage stagnation.
If “wealth” is taken to be capital, this leads to a seeming paradox, a strong contradiction to the neoclassical model, because high levels of inequality are considered to be a natural aspect of capitalism – with the short period of the few decades before 1980 representing an exception.
In his lecture, Joseph Stiglitz will resolve the seeming paradox. He will describe the centrifugal and centripetal forces that lead to increased and diminished inequality. And he will show how the balance between these forces has been disturbed since 1980 to lead to a higher ‘equilibrium’ level of inequality.
Professor Stiglitz will explain that today’s higher level of inequality is not just the result of market forces, but of policies and politics, some of which have impeded the way that a well-functioning competitive market would behave. The final piece of his analysis will link growing inequality to the financial system and the process of credit creation.
Eric Maskin: “Why haven’t global markets reduced inequality in developing economies?”
The theory of comparative advantage predicts that globalization should cause income inequality in emerging economies to fall. But this has not happened in the current era of increasing international trade (although the prediction held up well for earlier episodes of globalization).
In his lecture, Eric Maskin will sketch an alternative theory – developed in collaboration with his Harvard colleague Michael Kremer – that seems to fit recent history well. The model conceives of globalization as an increase in international production. Computers are a good example: they may be designed in the United States, programmed in Europe and assembled in China.
Professor Maskin will show that when the barriers to international production come down, moderately skilled workers in emerging economies get new employment opportunities and unskilled workers don’t. It is this disparity, he will argue, that accounts for rising inequality in many developing countries.
Sir James Mirrlees: “Some Interesting Taxes and Subsidies”
There are situations where marginal tax rates of 100% or nearly 100% may be justified, according to analysis by Sir James Mirrlees. In his lecture, he will sketch three models: one, which makes unusual assumptions about preferences for labor, can justify income subsidies of low incomes with implicit marginal tax rates of 100%.
The second, assuming high substitutability between consumption and work, might justify marginal tax rates approaching 100% on the highest incomes. The last, with competition between skilled workers (such as sportsmen or inventors) for market share, might justify marginal rates of 100% on high incomes of a particular type.
Professor Mirrlees will conclude that the assumptions under which these conclusions follow may not hold in actual economies – but they might sometimes. In any case, extreme results, and the reasons for them, can help us understand how incentives work and their implications for taxation.
Scientists Develop Model To Explain Ancient Black Hole Formation
redOrbit Staff & Wire Reports – Your Universe Online
Several processes typically limit how quickly black holes can grow, so how did the black holes observed at the farthest reaches of the universe come to have masses equal to several billion suns? Researchers from the Weizmann Institute of Science and Yale University have proposed a potential solution in the latest edition of the journal Science.
These enormous black holes, the study authors noted in a statement, consume large quantities of interstellar gas on a constant basis. It is through the light that they emit while swallowing up this gas that we can detect them during their formational period, as they appear to us as they were less than one billion years after the Big Bang.
Typically, black holes form when a massive star runs out of nuclear fuel and explodes. Without this fuel at its core resisting gravity, the star collapses and ejects much of its material outward in a supernova. The remaining material falls inward, forming a black hole of approximately 10 solar masses. The discovery of these ancient quasars, however, has led scientists to wonder how the process could have happened so quickly.
“Several processes tend to limit how fast a black hole can grow,” the Weizmann Institute explained. “For example, the gas normally does not fall directly into the black hole, but gets sidetracked into a slowly spiraling flow, trickling in drop by drop. When the gas is finally swallowed by the black hole, the light it emits pushes out against the gas. That light counterbalances gravity, and it slows the flow that feeds the black hole.”
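The ceiling described in that quote, where the light emitted by infalling gas pushes back against gravity, is known in textbook form as the Eddington limit. Quoting the standard expression (this is the general formula, not one taken from the new paper):

```latex
\[
L_{\mathrm{Edd}} \;=\; \frac{4\pi G M m_p c}{\sigma_T}
\;\approx\; 1.3\times10^{38}\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\,s^{-1}}
\]
```

Here M is the black hole’s mass, m_p the proton mass and σ_T the Thomson scattering cross-section; accretion that would make the hole shine much brighter than this tends to blow away its own fuel supply, which is why steady feeding normally caps how fast a black hole can grow.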
In order to solve the mystery behind these ancient black holes, Professor Tal Alexander, Head of the Particle Physics and Astrophysics Department at the Weizmann Institute, and Professor Priyamvada Natarajan of Yale University developed a model starting with the formation of a tiny black hole in the earliest days of the universe.
“At that time, cosmologists believe, gas streams were cold, dense, and contained much larger amounts of material than the thin gas streams we see in today’s cosmos,” the Institute said. “The hungry, newborn black hole moved around, changing direction all the time as it was knocked about by other baby stars in its vicinity.”
“By quickly zigzagging, the black hole continually swept up more and more of the gas into its orbit, pulling the gas directly into it so fast, the gas could not settle into a slow, spiraling motion,” it added. “The bigger the black hole got, the faster it ate; this growth rate, explains Alexander, rises faster than exponentially.”
After a period of approximately 10 million years, the black hole would have expanded to roughly 10,000 solar masses, the study authors said. At that point, its growth rate would have slowed, but by this time the black hole would have been well on its way to an eventual mass of at least one billion solar masses.
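To see what “faster than exponentially” means in practice, the toy calculation below compares ordinary exponential growth with an accretion law in which the capture rate scales as the square of the mass (the scaling of simple Bondi-style accretion). The rate constants are arbitrary illustration values, not parameters of the Alexander and Natarajan model, which also builds in the slow-down mentioned above.

```python
import math

# Toy comparison of ordinary exponential growth with a "faster than exponential"
# accretion law in which dM/dt is proportional to M**2 (the scaling of simple
# Bondi-style accretion). Constants are arbitrary illustration values only.

M0 = 10.0     # starting mass, in solar masses
K = 2e-8      # arbitrary rate constant for the M^2 law, per year per solar mass
R = 2e-7      # arbitrary rate constant for the exponential law, per year

def mass_square_law(t_years: float) -> float:
    """Analytic solution of dM/dt = K*M^2; it runs away at t = 1/(K*M0)."""
    t_runaway = 1.0 / (K * M0)
    return math.inf if t_years >= t_runaway else M0 / (1.0 - K * M0 * t_years)

def mass_exponential(t_years: float) -> float:
    """Analytic solution of dM/dt = R*M: ordinary exponential growth."""
    return M0 * math.exp(R * t_years)

for t_myr in (1, 2, 3, 4, 4.9, 5, 10):
    t = t_myr * 1e6
    print(f"t = {t_myr:4.1f} Myr   M^2 law: {mass_square_law(t):>10.1f} Msun   "
          f"exponential: {mass_exponential(t):>8.1f} Msun")
```

The squared law blows up in finite time while the exponential curve is still creeping along, which captures the qualitative point of the quote; in the authors’ full model the runaway is eventually tamed, which is why the growth rate slows at around 10,000 solar masses.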
“Black holes don’t actively suck in matter – they are not like vacuum cleaners,” Alexander told Space.com on Sunday. “A star or a gas stream can be on a stable orbit around a black hole, exactly as the Earth revolves around the sun, without falling into it.”
“It is actually quite a challenge to think of efficient ways to drive gas into the black hole at a high enough rate that can lead to rapid growth,” he continued, adding that the “theoretical result” of their research “shows a plausible route to the formation of supermassive black holes very soon after the Big Bang.”
Dinosaurs May Have Reached North America Earlier Than Previously Believed
redOrbit Staff & Wire Reports – Your Universe Online
A team of researchers led by Jahan Ramezani of MIT’s Department of Earth, Atmospheric and Planetary Sciences has discovered evidence that dinosaurs lived in North America millions of years earlier than previously suggested.
Writing in the latest edition of the American Journal of Science, Ramezani and his colleagues state that precise dating of fossil-bearing rocks found in the southwestern US suggests dinosaurs appeared in this region as early as 223 million years ago.
This discovery is in stark contrast to popular theories which state that while fossil evidence suggests the first dinosaurs appeared in the South American portion of what was then Pangaea about 230 million years ago, they did not reach the North American region of the massive landmass until significantly later (approximately 212 million years ago).
The research also suggests that dinosaurs living in North America at this time coexisted with both close non-dinosaur relatives and significantly more evolved dinosaurs for more than 12 million years. Furthermore, Ramezani’s team identified a 16-million-year gap, predating the fossil-bearing rocks, in which there is no trace of any vertebrates, suggesting that the rocks which would have contained those remains have since eroded away.
“Right below that horizon where we find the earliest dinosaurs, there is a long gap in the fossil and rock records across the sedimentary basin,” Ramezani explained in a statement Tuesday. “If the record is not there, it doesn’t mean the dinosaurs didn’t exist. It means that either no fossils were preserved, or we haven’t found them. That tells us the theory that dinosaurs simply started in South America and spread all over the world has no firm basis.”
Ramezani, MIT geology professor Sam Bowring and University of Rhode Island geosciences professor David Fastovsky set out to complete a geochronological analysis of fossils discovered in layers of rock known as the Chinle Formation. Previous dating conducted at the Chinle Formation, which occupies portions of Arizona, New Mexico, Utah, and Colorado, determined the earliest dinosaur-like animals appeared around 212 million years ago.
However, compared to the more complete record of early dinosaur evolution present in Argentina, the North American dinosaur record is far less clear. In an attempt to bring some clarity to the situation, Ramezani and Bowring set out to more precisely date the Chinle Formation (including the portion from which previous dinosaur fossils were obtained) by taking samples from exposed sedimentary rock layers that were largely derived from volcanic debris.
They pulverized the rocks in the laboratory, isolating individual microscopic grains of the uranium-bearing mineral zircon. Zircon forms in magma shortly before volcanic eruptions, they explained, and from the moment it crystallizes, the uranium it contains begins decaying to lead. By measuring the ratio of uranium to lead isotopes, the team was able to determine the age of the zircon and of the rock in which it was found.
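The age calculation at the heart of that measurement is compact enough to sketch. The snippet below uses the standard 238U-to-206Pb decay-law arithmetic with the commonly used decay constant; real zircon geochronology combines both uranium decay chains and several corrections, and the example isotope ratio here is invented for illustration.

```python
import math

# Core uranium-lead age arithmetic: from the measured ratio of radiogenic 206Pb
# to 238U in a zircon grain, the age follows from the decay law
#   t = ln(1 + Pb/U) / lambda.
# Real zircon dating uses both uranium decay chains plus several corrections;
# the ratio below is an invented example.

LAMBDA_U238 = 1.55125e-10   # decay constant of 238U, per year (half-life ~4.47 billion years)

def u_pb_age_years(pb206_u238: float) -> float:
    """Age in years implied by a radiogenic 206Pb/238U atomic ratio."""
    return math.log(1.0 + pb206_u238) / LAMBDA_U238

# A ratio of roughly 0.0352 corresponds to about 223 million years,
# the age reported for the oldest Chinle dinosaur fossils.
ratio = 0.0352
print(f"206Pb/238U = {ratio}  ->  age = {u_pb_age_years(ratio) / 1e6:.0f} million years")
```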
After analyzing individual zircon grains, Ramezani and his colleagues were able to create a precise age map for each sedimentary interval of the Chinle Formation. The National Science Foundation-funded study discovered that the fossils previously discovered in New Mexico do not actually represent the earliest dinosaurs in North America – fossils discovered in Arizona predate them by approximately 11 million years.
But even that doesn’t tell the whole story, Ramezani explained: “The fact that our record starts with advanced forms tells us there was a prior history. It’s not just that advanced dinosaurs suddenly appeared 223 million years ago. There must have been prior evolution in North America – we just haven’t identified any earlier dinosaurs yet.”
The answer to the question as to when dinosaurs actually first appeared in North America could rest in the 16 million year gap where the Chinle Formation contains no fossil evidence of dinosaurs – or any other creature, for that matter. The study authors suggest that dinosaurs may have reached the region during this period of time, and that any fossil evidence they left behind might have been erased.
“This is the kind of careful work that needs to be done before evolutionary hypotheses that relate to the origination and diversification of the dinosaurs can be addressed,” said Raymond Rogers, a professor of geology at Macalester College in Minnesota who was not involved in the study.
“This gap in the Chinle fossil record makes comparing the North American and South American dinosaur records problematic,” he added. “Existing hypotheses that relate to the timing of dinosaur evolution in North and South America arguably need to be reconsidered in light of this new study.”
Experts Divided On What Impact Robots And AI Will Have On Human Employment
redOrbit Staff & Wire Reports – Your Universe Online
Nearly half of all industry experts believe that robots and artificial intelligence will displace a significant number of both blue-collar and white-collar workers by 2025, and many of them are concerned that this phenomenon will result in vast increases in income inequality and a large percentage of humans who are all but unemployable.
Forty-eight percent of the nearly 1,900 research scientists, business leaders, academic researchers and other technology experts polled by the Pew Research Internet Project predicted that this dire future will soon become a reality.
The other 52 percent of respondents do not believe that technology such as self-driving automobiles and intelligent digital agents will displace more jobs than it creates within the next 11 years. This group, Pew explained, believes that while some jobs currently performed by humans will be taken over by robots or AI applications, human ingenuity will create new industries and new ways to make a living.
“These two groups also share certain hopes and concerns about the impact of technology on employment,” Pew’s Aaron Smith and Janna Anderson said. “For instance, many are concerned that our existing social structures – and especially our educational institutions – are not adequately preparing people for the skills that will be needed in the job market of the future.”
“Conversely, others have hope that the coming changes will be an opportunity to reassess our society’s relationship to employment itself – by returning to a focus on small-scale or artisanal modes of production, or by giving people more time to spend on leisure, self-improvement, or time with loved ones,” they added.
The Pew report highlights several reasons to be uneasy about the future of employment and the impact that evolving technology will have on it. To date, automation has mostly affected blue-collar employment, but the experts caution that the next generation of innovation will likely eliminate white-collar positions as well.
While some highly-skilled workers will thrive in this environment, others will be forced into lower-paying positions or long-term unemployment, they noted. There is also concern the US educational system is not adequately preparing the work force of the future, and that our political and economic institutions are not equipped to handle the difficult choices that emerging robotics and AI technology will present.
On the other hand, there are some reasons to be hopeful as well. While technological advances are likely to displace some workers, they have historically been net creators of jobs, and the resilience of mankind will lead them to develop new types of work that require uniquely human capabilities. New technology will also give us more free time and allow us to have a more positive, socially beneficial relationship with work.
“Technology will continue to disrupt jobs, but more jobs seem likely to be created,” said Jonathan Grudin, principal researcher for Microsoft. “When the world population was a few hundred million people there were hundreds of millions of jobs. Although there have always been unemployed people, when we reached a few billion people there were billions of jobs. There is no shortage of things that need to be done and that will not change.”
“In a given context, automated devices like robots may displace more than they create,” added science and technology policy analyst Marjory Blumenthal. “But they also generate new categories of work, giving rise to second- and third-order effects. Also, there is likely to be more human-robot collaboration – a change in the kind of work opportunities available.”
Of course, none of these predictions could come to fruition at all, noted Jess Zimmerman of The Guardian. After all, she explained, experts have been making bold predictions about what impact technology would have on the human race since Isaac Asimov predicted five decades ago that the 2014 World’s Fair would feature hovering cars, moving sidewalks, robot housekeepers, fusion power plants and compressed air transit tubes.
“People like Asimov, people whose vision of the future is based on steady progress along the lines of existing technology, have had every reason to believe they would be right – and often have been wildly wrong,” she said. “We have no idea what we’re doing. We certainly have no idea what we’re going to do.”
“The future is desperately opaque, and there’s no better illustration than that Pew Research’s future-predicting experts can’t agree,” Zimmerman added. “The more we are clouded by human foibles and biases, the less obvious our future becomes. Asimov’s robot housemaids and flying cars are classic visions of the future less because they have plausible roots in existing technology, and more because they address the problems and needs and desires of middle-class dominant-culture Westerners.”
New 3D Tissue Model Could Improve Study Of Brain Trauma And Related Diseases
redOrbit Staff & Wire Reports – Your Universe Online
A team of bioengineers from Tufts University in Massachusetts have developed three-dimensional brain-like cortical tissue that is similar in structure and function to tissues found in the brain of a rat, exhibits biochemical and electrophysiological responses, and can be kept alive in the laboratory for more than eight weeks.
In research published in the August 11 early online edition of the journal Proceedings of the National Academy of Sciences (PNAS) and funded by the National Institute of Biomedical Imaging and Bioengineering (NIBIB), the study authors explain how they created the tissue using a composite of a silk protein-based scaffold and extracellular matrix (ECM), together with primary cortical neurons.
The researchers used the tissue to investigate the chemical and electrical changes that occur within the brain immediately following a traumatic injury, and conducted a separate experiment to observe how the tissue responded to a drug. They report that their creation could serve as a better model for the study of normal brain function, as well as brain injuries and diseases, and could lead to the development of new treatments for these issues.
“There are few good options for studying the physiology of the living brain, yet this is perhaps one of the biggest areas of unmet clinical need when you consider the need for new options to understand and treat a wide range of neurological disorders associated with the brain,” senior and corresponding author Dr. David Kaplan, chair of biomedical engineering at Tufts School of Engineering, said in a statement.
“To generate this system that has such great value is very exciting for our team,” he added.
Instead of recreating a whole-brain network from the ground up, Dr. Kaplan and colleagues from the Tufts biomedical engineering, physics and neuroscience departments opted instead to develop a modular design which replicated fundamental features that are most relevant to the physiological functions at the tissue-level of the brain.
“Each module combined two materials with different properties: a stiffer porous scaffold made of cast silk protein on which the cortical neurons, derived from rats, could anchor and a softer collagen gel matrix that allowed axons to penetrate and connect three dimensionally,” the university explained. “Circular modules of cast silk were punched into doughnuts, then assembled into concentric rings to simulate the laminal layers of the neocortex.”
Each of those layers was seeded with neurons independently before they were assembled – a process that did not require glue or adhesive. The doughnuts were then immersed in the collagen gel matrix, and the combination of silk and collagen gel proved to be an optimum microenvironment for the formation and function of neural networks.
“The stiffness of the silk biomaterial could be tuned to accommodate the cortical neurons and the different types of gels, maintaining both stability in culture and brain-like tissue elasticity,” said first author Dr. Min D. Tang-Schomer, a post-doctoral scholar in biomedical engineering at Tufts. “The tissue maintained viability for at least nine weeks – significantly longer than cultures made of collagen or hydrogel alone – and also offered structural support for network connectivity that is crucial for brain activity.”
“This work is an exceptional feat,” added Dr. Rosemarie Hunziker, program director of Tissue Engineering at NIBIB. “It combines a deep understanding of brain physiology with a large and growing suite of bioengineering tools to create an environment that is both necessary and sufficient to mimic brain function.”
Since the 3D brain-like tissue possessed physical properties similar to rodent brain tissue, the research team decided to test whether or not it could be used to study traumatic brain injury. They dropped a weight on the cortical tissue from various heights in order to simulate such an injury, and then recorded changes in the electrical and chemical activity in the neurons. The results were described as similar to those typically found in animal brain injury studies.
However, Kaplan pointed out that the new tissue model has advantages over animal studies, since the latter requires research to be delayed while scientists dissect the brain and prepare it for experiments. The new system, he noted, allows them to “essentially track the tissue response to traumatic brain injury in real time. Most importantly, you can also start to track repair and what happens over longer periods of time.”
He also said that the longevity of the brain-like tissue will be important in studying other brain disorders, since the fact that the tissue can be maintained in the laboratory for at least two months “means we can start to look at neurological diseases in ways that you can’t otherwise because you need long timeframes to study some of the key brain diseases.”
Kaplan and his colleagues are also looking for ways in which they can make their tissue model even more brain-like. As they reported in their new study, the cortical tissue can be modified so that the doughnut scaffold is comprised of six concentric rings, each able to be populated with different types of neurons. This set-up would mimic the six layers of the human brain cortex, each of which is home to different types of neurons, the study authors said.
Image 2 (below): Confocal microscope image of neurons (greenish yellow) attached to the silk-based scaffold (blue). The neurons formed functional networks throughout the scaffold pores (dark areas). Credit: Tufts University
Workplace Stress Is Risk Factor For Type 2 Diabetes
Job strain can significantly increase the risk of developing diabetes. This conclusion was based on prospective data from a population-based study conducted by scientists at the Helmholtz Zentrum München. The findings have been published in the scientific journal Psychosomatic Medicine.
Workplace stress can have a range of adverse effects on health, most notably an increased risk of cardiovascular disease. To date, however, convincing evidence of a strong association between work stress and incident type 2 diabetes mellitus has been lacking.
Risk of diabetes about 45 percent higher
The team of scientists headed by Dr. Cornelia Huth and Prof. Karl-Heinz Ladwig has now discovered that individuals who are under a high level of pressure at work and at the same time perceive little control over the activities they perform face an approximately 45 percent higher risk of developing type 2 diabetes than those who are subjected to less stress at their workplace.
The scientists from the Institute of Epidemiology II (EPI II) at the Helmholtz Zentrum München (HMGU), in collaboration with Prof. Johannes Kruse from the University Hospital of Giessen and Marburg, examined prospectively collected data from more than 5,300 employed individuals aged between 29 and 66 who took part in the population-based MONICA/KORA cohort study. At the beginning of the study, none of the participants had diabetes; in the post-observation period, which covered an average of 13 years, almost 300 of them were diagnosed with type 2 diabetes. The increase in risk associated with work-related stress was identified independently of classic risk factors such as obesity, age and gender.
Holistic prevention is important – also at the workplace
According to the data, roughly one in five people in employment is affected by high levels of mental stress at work. By that, the scientists do not mean ‘normal job stress’ but rather the situation in which the individuals concerned rate the demands made upon them as very high, and at the same time they have little scope for maneuver or for decision making. “We covered both these aspects in great detail in our surveys,” explains Prof. Ladwig, who led the study. “In view of the huge health implications of stress-related disorders, preventive measures against common diseases such as diabetes should therefore also begin at this point,” he added.
Environmental and lifestyle factors play a key role in the development of widespread diseases in Germany such as diabetes mellitus. The aim of the Helmholtz Zentrum München, a partner of the German Center for Diabetes Research (DZD), is to develop new approaches to the diagnosis, treatment and prevention of the most common diseases.
Worm Pill Could Ease Autoimmune Disease Symptoms
Experts believe a molecule in parasitic worms could help explain why worm infections can effectively treat a range of autoimmune diseases, including multiple sclerosis, psoriasis, rheumatoid arthritis and lupus.
The Monash University study, published in the FASEB Journal, successfully identified peptides from parasitic worms that suppress the body’s immune response. Researchers believe this could pave the way for a new drug containing the peptide to provide relief from the symptoms of autoimmune diseases.
Affecting as many as one in 20 Australians, autoimmune diseases occur when a person’s immune system has an abnormal response against its own cells, tissues or even entire organs, resulting in inflammation and damage.
Lead researcher Professor Ray Norton from Monash Institute of Pharmaceutical Sciences (MIPS) said experts around the world have yet to fully understand the causes of autoimmune diseases, which have risen significantly in parts of the world.
“There are more than eighty autoimmune diseases, ranging in severity from mild to life threatening in some cases. While some affect mainly one area or organ, others can affect many parts of the body,” he said.
“Many people believe there’s a link between the rise in autoimmune diseases and an increased focus on cleanliness in western societies, because the immune system is no longer exposed to the broad range of infections that previous generations had to deal with.
“There could be some truth to this because worm infection is virtually unheard of in developed countries, yet the incidence of autoimmune diseases is high. But in developing countries the opposite is true,” Professor Norton said.
The new line of research offers an alternative to helminthic therapy, where people deliberately infect themselves with parasitic worms, in an attempt to put their autoimmune disease into remission. It’s thought that the worms have a calming effect on their host’s immune systems in order to ensure their survival.
Rather than using worms, the research team searched for the active components responsible for the immunomodulatory effects of parasitic worms. By creating a cDNA library from the anterior secretory glands of the parasitic hookworm Ancylostoma caninum, they identified a peptide called AcK1 that dampens the immune system by inhibiting a potassium channel (Kv1.3).
Researchers found that AcK1 closely resembles ShK, a peptide from a sea anemone, which has been shown to suppress autoimmune diseases and is currently in clinical trials for the treatment of multiple sclerosis.
Dr Sandeep Chhabra, from the Monash Institute of Pharmaceutical Sciences, said the study will help in developing new drugs to treat autoimmune diseases.
“Our research shows that it is possible to identify individual molecules responsible for this beneficial effect,” he said.
“The next step will be to see if we can develop this into a pill that could dampen the immune system in people with an autoimmune disease. That’s a whole lot cleaner than putting a worm in your body,” Dr Chhabra said.
Fibromyalgia And Genetics: Is Fibromyalgia Genetic?
While it is true that there is no direct genetic link in fibromyalgia (the disorder is not passed from parent to child the way some other genetic disorders are), studies have shown that the odds of developing fibromyalgia are much higher in people who have family members with the disorder. This leads to the conclusion that heredity is a factor in causing fibromyalgia.
In fact, DNA studies of the family members of fibromyalgia and chronic pain patients have identified several genes that could explain why these disorders seem to run in families. Each of these genes is involved in the nervous system’s response to stimuli, such as pressure and heat, that cause pain. Some of the same genes are also associated with anxiety and depression, which could be why certain antidepressant medications reduce the symptoms of fibromyalgia.
So, according to current research, fibromyalgia is not hereditary in the classical sense of a single gene mutation being responsible for the condition (monogenic). However, research has shown that genes do predispose an individual to fibromyalgia, in a far more complex way that involves many different genes (polygenic).
Difference between Monogenic and Polygenic
In a classical, or monogenic, hereditary condition, the genes you get from your parents determine whether or not you will end up with a specific disease. For example, in the case of cystic fibrosis, if both of a child’s parents are carriers of the faulty gene, the child has a 25 percent chance of developing the disease.
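As a quick worked example of where that 25 percent comes from (this is the standard inheritance arithmetic, not anything specific to fibromyalgia): each carrier parent passes on the faulty copy of the gene with probability one half, and the child is affected only if it receives a faulty copy from both parents.

```latex
\[
P(\text{child affected})
  = P(\text{faulty copy from mother}) \times P(\text{faulty copy from father})
  = \tfrac{1}{2} \times \tfrac{1}{2}
  = \tfrac{1}{4} = 25\%.
\]
```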
When it comes to polygenic predispositions, it is not as simple or as certain as that. In this case, your genes only show that the particular disorder or illness is possible, given the right conditions. Other factors must come into play to trigger a polygenic disorder.
When it comes to fibromyalgia, the factors that could trigger the disorder include other sources of chronic pain, an autoimmune disease, sleep problems, chronic stress, infectious illnesses, and even abnormal brain chemistry. Some experts have also suggested that environmental factors, such as sensitivities to certain foods or exposure to certain toxins, could play a role in the development of fibromyalgia.
Fibromyalgia and Genetic Links
In 1989, researchers began looking at a possible genetic component of fibromyalgia, since it does tend to run in families. One study looked at the parents and siblings of individuals with fibromyalgia and found that around 52 percent of them had characteristics and other components of fibromyalgia but had not received an official diagnosis. Another 22 percent did not show symptoms, but there was evidence of abnormal consistency in their muscles. Later research confirmed this high rate of fibromyalgia within families and showed that a low pain threshold is also common in the relatives of individuals suffering from fibromyalgia.
Even now, researchers are only beginning to get a clearer picture of the specific genetic factors associated with fibromyalgia. So far, multiple studies have suggested connections with several different genes, but most of these studies have not been replicated.
The abnormalities suggested by these studies include genes related to neurotransmitters and hormones that have been linked to fibromyalgia, including norepinephrine, serotonin, glutamate, GABA, and dopamine. Others relate to general brain function, including genes that inhibit viral replication and genes for the brain’s cannabinoid and opioid receptors.
As more is learned regarding these genetic factors, researchers could find out which of them actually contribute to an increased risk of developing fibromyalgia. In addition, researchers could find out whether or not any of them can be used to diagnose and/or treat fibromyalgia.
Finally, fibromyalgia researchers are working to connect individuals’ pain thresholds to variations in specific genes. A variant of the gene that determines how efficiently nerve cells recycle serotonin has already been linked to heightened sensitivity to pain.
Is Fibromyalgia Genetic?
As you can see, the answer to this burning question is: yes and no. Though there are certain genes that do actually predispose you to developing the disorder, it is not directly related to your genes as some other disorders are.
Scientists are also increasingly studying the field of epigenetics in regard to fibromyalgia. Epigenetics is the science of how the environment, such as certain nutrients, acts on genes to contribute to the development of a disease or disorder. This could help to clear up the question of whether or not fibromyalgia is genetic.
So, while it is true that no direct genetic link has been found in fibromyalgia patients, studies have shown that individuals who have family members with the disorder are much more likely to develop it themselves. This brings us to the conclusion that heredity does play a role, even if a small one.
DNA studies of the family members of those who have fibromyalgia and chronic pain have shown that several genes could explain why chronic pain and fibromyalgia seem to run in families. These same genes are also associated with depression and anxiety, which is most likely the reason that certain antidepressant medications reduce fibromyalgia symptoms.
Regular Marijuana Use Found To Have A Negative Impact On Teen Brains
redOrbit Staff & Wire Reports – Your Universe Online
Psychologists investigating the public health consequences of marijuana legalization have found a link between frequent cannabis use and cognitive decline, inattentiveness, memory decay and decreased IQ in teenagers.
The findings, which were presented recently during the American Psychological Association’s 122nd Annual Convention, are based in part on brain imaging studies of regular marijuana users that revealed significant changes in brain structure, particularly among adolescents.
According to Dr. Krista Lisdahl, director of the brain imaging and neuropsychology lab at University of Wisconsin-Milwaukee, those scans revealed abnormalities in gray matter (the part of the brain associated with intelligence) in 16- to 19-year-old teenagers who had increased their cannabis use over the past year.
Those results remained even after the researchers controlled for variables such as serious medical conditions, prenatal exposure to drugs and learning disabilities, she added. That could be problematic, given that recent research indicates daily marijuana use among high school seniors climbed from 2.4 percent in 1993 to 6.5 percent in 2012, and that 31 percent of young adults between the ages of 18 and 25 reported using the drug in the past month.
“It needs to be emphasized that regular cannabis use, which we consider once a week, is not safe and may result in addiction and neurocognitive damage, especially in youth,” Dr. Lisdahl said in a statement Saturday. “When considering legalization, policymakers need to address ways to prevent easy access to marijuana and provide additional treatment funding for adolescent and young adult users.”
Citing a 2012 longitudinal study that followed over 1,000 participants from birth to age 38, Dr. Lisdahl noted that men and women who become addicted to marijuana can lose an average of six IQ points by adulthood.
Her presentation, “Neurocognitive Consequences of Chronic Marijuana Use: Preventing Early Onset Is Critical,” was one of three at the conference’s “Considering Cannabis? Potential Public Health Implications of Marijuana Legalization” symposium, held Saturday at the Walter E. Washington Convention Center in Washington DC.
Also at the conference, Dr. Alan Budney of Dartmouth College warned that some legalized forms of marijuana contain higher levels of THC – the substance responsible for most of the drug’s psychological effects – than the marijuana studied in earlier research. In his presentation, entitled “Clinical Epidemiology, Characteristics, Services and Outcomes for Youth With Cannabis-Use Disorders,” he explained that THC can increase the risk of depression, anxiety and psychosis in regular pot users.
“Recent studies suggest that this relationship between marijuana and mental illness may be moderated by how often marijuana is used and potency of the substance,” he explained. “Unfortunately, much of what we know from earlier research is based on smoking marijuana with much lower doses of THC than are commonly used today.”
People’s willingness to accept legalized medical marijuana has also had an impact on how risky adolescents perceive the substance to be, Dr. Bettina Friese of the Pacific Institute for Research and Evaluation in California explained in her seminar, “Is Legalization of Medical Marijuana Related to Youths’ Marijuana Beliefs and Behaviors?”
Dr. Friese presented results of a 2013 study of over 17,000 Montana teenagers that discovered marijuana use among teens was higher in counties where a greater percentage of people had voted to legalize the substance for medicinal purposes in 2004. Furthermore, teens living in those counties were less likely to view cannabis use as dangerous, suggesting that a more accepting attitude of medical marijuana could have an even stronger impact on teens than the actual number of medical marijuana licenses available, she added.
In February researchers reported that legalizing the use of marijuana resulted in a dramatic increase in the number of children requiring emergency medical attention for exposure to the drug. That study found that unintentional marijuana exposures to children under the age of 10 had increased 30.3 percent from 2005 to 2011 in states where its use had been legalized.
“Pediatricians, toxicologists and emergency physicians need to be willing to advocate for the safety of children to lawmakers as this burgeoning industry expands across the US,” said lead author Dr. George Sam Wang of the Rocky Mountain Poison and Drug Center in Colorado. “As more states decriminalize marijuana, lawmakers should consider requirements – such as child-resistant packaging, warning labels and public education – to reduce the likelihood of ingestion by young children.”
Study Indicates 2010 Chilean Earthquake Caused Tremors In Antarctic Ice Sheet
redOrbit Staff & Wire Reports – Your Universe Online
A massive earthquake that struck the Maule region of Chile in February 2010 also unleashed a series of smaller seismic events known as “icequakes” nearly 3,000 miles to the south in Antarctica, a team of researchers reports in a new Nature Geoscience study.
The Chilean earthquake, which killed over 500 people and caused an estimated $30 billion in damages, is believed to be the cause of small tremors detected by sensors in West Antarctica less than six hours later, according to the AFP news agency. It is said to be the first evidence that the ice sheet can be affected by powerful quakes occurring far away.
The team reported that 12 out of 42 monitoring stations in Antarctica showed evidence of a spike in high-frequency seismic signals. Those signals corresponded to signs of ice fractures occurring near the surface, suggesting that the ice cracked as the far-off seismic activity caused the Earth’s crust to shake, explained Science writer Carolyn Gramling.
“Earthquakes are already known to affect Antarctica’s ice shelves, thanks to the tsunamis they can spawn,” Gramling said. “But whether earthquake seismic waves, traveling through the ground, can chip away at Antarctica’s ice sheet – the ice piled on top of the continent – remained an unanswered question.”
Unanswered, that is, until Georgia Institute of Technology geophysicist Zhigang Peng and his colleagues happened to discover the answer while analyzing the impact of the Chile earthquake in South America. Peng’s team was looking for shallow seismic waves known as surface waves, which travel along the planet’s crust instead of reaching the mantle, when they came across data from some of the Antarctic monitoring stations during their research.
While reviewing that data in search of surface wave signals, they uncovered “tiny seismic signals” that they believed were “associated with ice cracking,” Peng told Gramling. It marked the first time scientists had found seismic evidence that an earthquake occurring so far away could register in Antarctica’s ice sheet.
“We interpret these events as small icequakes, most of which were triggered during or immediately after the passing of long-period Rayleigh waves generated from the Chilean mainshock,” Peng, an associate professor at the Georgia Tech School of Earth and Atmospheric Sciences, said in a statement. “This is somewhat different from the micro-earthquakes and tremor caused by both Love and Rayleigh-type surface waves that traditionally occur in other tectonically active regions thousands of miles from large earthquakes.”
Love waves and Rayleigh waves are the two basic types of surface waves, Gramling explained. Love waves shake the ground from side to side, while Rayleigh waves move in a rolling motion, compressing and expanding the ground as they move. Both types of surface waves can trigger micro-earthquakes, she added.
Some of the icequakes lasted less than one second, while others lasted ten times as long, the study authors said. They were detected in several different parts of the continent, including at seismic stations along the coast and near the South Pole, though the clearest indication of induced high-frequency signals occurred at station HOWD, located near the northwest corner of the Ellsworth Mountains.
The AFP noted that some of the signals were unclear or suggested that no seismic events had taken place. Nonetheless, Peng and his fellow researchers believe it is likely the tremors were the result of movement within the ice sheet itself, and not of any fault in the underlying bedrock.
“While we are not 100-percent sure, we think that those seismic signals come from ice cracking within the ice sheet, likely very close to the surface,” he told the news agency via email. “The main reason is that if those seismic signals were associated with faulting beneath the ice sheet, they would be similar to earthquakes at other tectonically active regions.”
Scientists Use Controlled Oil Spill To Analyze The Immediate Result Of Such Disasters
redOrbit Staff & Wire Reports – Your Universe Online
When an oil spill occurs, it usually takes scientists and clean-up crews several days to arrive on the scene, making it unclear exactly what happens to the petroleum-based product during the first 24 hours after it hits the water.
In a new study, however, corresponding author Samuel Arey of the Ecole Polytechnique Fédérale de Lausanne (EPFL) and the Swiss Federal Institute of Aquatic Science and Technology (Eawag) and his colleagues set out to close that knowledge gap by conducting a controlled spill in the North Sea. They claim the findings of their field experiment could help change emergency responses in the immediate aftermath of such disasters.
According to Kukil Bora of International Business Times, the researchers discovered that once the oil is spilled onto the surface of a body of water, some of it instantly begins to evaporate into the air while some of it starts dissolving into the seawater. The dissolved toxic hydrocarbons can be harmful to marine species, while the evaporated elements are a potential cause for concern among on-site rescue workers and those living downwind of an accident location.
Arey and an international team of experts from the Helmholtz Centre for Environmental Research – UFZ, the Royal Netherlands Institute for Sea Research (NIOZ), the University of Amsterdam’s Institute for Biodiversity and Ecosystem Dynamics, Woods Hole Oceanographic Institution (WHOI) and elsewhere, published their findings Friday in the journal Environmental Science & Technology.
They said their paper was “the first report on the broad-spectrum compositional changes in oil during the first day of a spill at the sea surface,” and explained that they analyzed the composition of the oil slick shortly after allowing 4.3 cubic meters of oil to be released into the North Sea. They reported witnessing “rapid mass transfers of volatile and soluble hydrocarbons,” with more than half of the C17 hydrocarbons disappearing within 25 hours.
“In its new environment, the oil immediately begins to change its composition, and much of that change happens on the first day,” Arey explained in a statement. While some of the compounds evaporate within hours, contaminating the atmosphere, others (such as toxic naphthalene) simultaneously dissolve into the seawater, posing a threat to aquatic life, he added.
In the wake of oil spills such as the 1989 Exxon Valdez disaster and the more recent Deepwater Horizon oil spill, scientists have been working to determine the extent to which marine species living in the area of a petroleum spill are exposed to toxic hydrocarbons, the researchers said. However, since many of those hydrocarbons are usually dispersed into either the air or the water before scientists arrive on the scene, the question has been difficult to answer.
The researchers, in collaboration with Dutch emergency response specialists, recreated a four cubic meter oil spill in an area of the North Sea that had already been exposed to pollution, approximately 200 km off the coast of the Netherlands. They said that by analyzing the behavior of this relatively small amount of oil, they were able to gain a much better idea of the risks that larger disasters pose to both marine life and emergency response team workers.
However, as the EPFL noted, “no two oil spills are alike. Aside from the sheer volume of oil released onto the sea surface, the environmental impact of an oil spill depends on external factors, such as the wind, waves, and the temperature of the air and the water. The North Sea experiment, for instance, was carried out on a summer day with two-meter high waves. Within just over a day, the surface oil slick had almost dissipated. On a cooler day with less wind and smaller waves, the slick would have likely persisted longer.”
Siri For OS X? New Patent Suggests It May Be On The Way
redOrbit Staff & Wire Reports – Your Universe Online
A new Apple patent filed late last week describes an “intelligent digital assistant in a desktop environment,” suggesting that the newest version of the company’s Mac operating system could include the currently iOS-exclusive digital assistant Siri.
Siri, which has been a feature of iPhones and iPads since 2011, has not yet made the jump to the realm of desktop computing, said Mikey Campbell of Apple Insider. However, the 92-page patent filing, which is dated August 7, suggests the Cupertino, California-based company could be planning to include a version of the virtual assistant in a future version of OS X.
The patent describes a digital assistant that could be “invoked on a user device by a gesture following a predetermined motion pattern on a touch-sensitive surface of the user device” and would then selectively invoke “a dictation mode or a command mode to process a speech input depending on whether an input focus of the user device is within a text input area displayed on the user device.”
“In some embodiments, a digital assistant performs various operations in response to one or more objects being dragged and dropped onto an iconic representation of the digital assistant displayed on a graphical user interface,” the patent continues. “In some embodiments, a digital assistant is invoked to cooperate with the user to complete a task that the user has already started on a user device.”
What that means, explained PC Magazine’s David Murphy, is that a Mac user could potentially activate Siri simply by speaking a specific activation phrase, by completing a particular gesture on a trackpad or keyboard, or even by clicking an icon for the desktop assistant. One example cited by Apple in the patent would have a MacBook user draw two circles on a trackpad to summon Siri, he added.
“Voice input is of specific importance to the patent’s disclosures, as the technology is looking to augment keyboard and mouse input, or in some cases replace the physical tools altogether,” Campbell said. “The usual answer/response method seen with Siri for iOS is applied to the desktop variant, though more advanced operations can be performed given the extra computing power afforded by a proper computer.”
“Siri would be able to field questions and respond to commands in the same way it can on iOS, but because the desktop is a different beast with more computing power and different interaction standards, it can also do a lot of other things per the patent,” added TechCrunch writer Darrell Etherington. “For example, it could use the cursor position to inform how it handles any commands it’s given by a user – so it’ll apply a copy command to a photo the mouse is hovering over, for instance.”
Etherington added that the OS X version of the digital assistant would also be able to sort and organize files in Finder, as well as drag and drop them from one app to another. The patent also suggests that desktop Siri could replace a mouse and keyboard as an input device, and that it could act as a “third hand” of sorts by allowing Mac users to interact with background apps while working on a different piece of software in the foreground.
“The patent could explain why it has taken so long for Apple to deliver Siri for the Mac; the system described is a significant departure from Siri on mobile, and would require a lot of additional engineering to make it a reality,” the TechCrunch reporter added. “There’s no Siri present in the OS X Yosemite beta preview, so it’s likely not coming this year, but this patent indicates that it’s something Apple has spent considerable time and resources developing, so hopefully we’ll see it arrive sooner rather than later.”
Likewise, Nate Swanner of SlashGear said there is currently “no detailed timeframe” for the OS X version of Siri, and that it is “unclear” when Apple could officially unveil this new feature.
Study Finds Galápagos Hawks Hand Down Lice Like Family Heirlooms
By Daniel Stolte, University of Arizona
Study provides some of the first evidence for the hypothesis of co-divergence between parasites and hosts acting as a major driver of biodiversity
Say what you will about the parasitic lifestyle, but in the evolution of life on Earth, it’s a winner.
Given that about half of all known species are parasites, biologists have long hypothesized that the strategy of leeching off other organisms is a major driver of biodiversity. Studying populations of Galápagos hawks (Buteo galapagoensis) and feather lice that live in their plumage (Degeeriella regalis), a group led by University of Arizona ecologists and evolutionary biologists has gathered some of the first field evidence suggesting that a phenomenon called co-divergence between parasites and hosts is indeed an important mechanism driving the evolution of biodiversity.
“The idea is really simple,” said the study’s lead author, Jennifer Koop, who is a postdoctoral fellow in the lab of Noah Whiteman in the UA’s Department of Ecology and Evolutionary Biology. “Each time a host population splits into separate populations that potentially become different species, we predict that their parasites could do the same thing.”
However, biologists have long struggled to test this hypothesis, as parasites are elusive.
“Often, the evolutionary trees of parasites and their hosts are congruent – they look like mirror images of one another,” said Whiteman, who is an assistant professor in EEB, a joint assistant professor in the Department of Entomology and the School of Plant Sciences, and a member of the UA’s BIO5 Institute. “But because parasites tend to be inside or attached to hosts, their distributions are difficult to study.”
“We found the lice are passed on from mother to babies during brooding, almost like genes,” Whiteman said. “They’re evolutionary heirlooms, like your family’s silverware or engagement ring diamond.”
Because the hawks pass the feather lice on from generation to generation, the researchers wanted to know whether the louse populations diverge between populations of hawks and between individual hawks, or whether the populations of the birds and the lice diverge independently of each other.
Remarkably, the findings, which are published in the journal Biology Letters of the Royal Society, revealed that the population structure of the lice matched that of the birds across the archipelago, even though the two are very different species.
“To the lice, each bird is an island, and their populations are very different from bird to bird,” Whiteman said. “The same pattern is repeated between bird populations on different islands. It’s like Russian dolls.”
In other words, the lice living on any one bird and its offspring are more closely related than the lice living on a different bird. As the birds diversify into distinct populations on each island, their parasites diversify with them. The findings help explain the rapid rate of parasite evolution, according to the research team.
“You have to be in the right spot at the right time to see this process happening,” Koop said. “Our study empirically demonstrates an important evolutionary process in which the hawks separate into different populations, and the lice living on them do the same.”
This process is hypothesized to lead to the formation of different species, in this case different species of hawks and lice, and may explain some of the extraordinary diversity seen among parasites, she said.
The team chose the Galápagos Islands, located 575 miles off the west coast of Ecuador, for the study because the species that colonized the geologically young group of islands have evolved in isolation, making the area an ideal natural laboratory.
“Of all the vertebrate species native to the Galápagos, the Galápagos hawk is the most recent arrival,” Whiteman said. “So whatever is happening in terms of evolution of the bird population and the parasite population is most likely in the earliest stages of that process.”
In four years of fieldwork on eight major islands, the team caught hundreds of Galápagos hawks – which later were released unharmed – and collected blood samples and feather lice for genomic analysis, in a partnership with the Galápagos National Park. Whiteman said the hawks’ lice are specialized on their host species and the feathers they consume, and unable to survive on any other species.
Co-authors Karen DeMatteo and Patricia Parker, both at the Department of Biology at the University of Missouri-St. Louis, then used the DNA from the samples to generate a genetic fingerprint of each population. Parker helped with the fieldwork.
A better understanding of how parasites and their hosts coevolve has implications for biomedical sciences, according to Whiteman. In addition, it can help researchers who study parasites as evolutionary tags of the host species.
“The fact that we were able to work with these birds, which are the top predators in their habitat, and reveal some answers to fundamental questions in biology shows why such places should continue to be preserved,” Whiteman said.
Poor Hearing Confines Older Adults To Their Homes
Vision and hearing problems reduce the active participation of older people in various events and activities. This was observed in two studies carried out by the Gerontology Research Center.
Impaired vision and hearing make it difficult to interact in social situations. However, social relationships and situations in which there is an opportunity to meet and interact with other people are important for older adults’ quality of life.
“Sensory impairments are common among older adults. About one third of Europeans aged 50 and older were found to have impairment in hearing, vision, or both sensory functions. Sensory problems are markedly more common amongst older age groups,” Anne Viljanen says.
“We found that older adults with hearing problems participate in group activities and meet their friends less often than those with good hearing,” Tuija Mikkola says. Group activities are challenging for older people with hearing problems, as they often have a great deal of difficulty conversing with several people in a noisy environment. The results also showed that people with hearing difficulties rated their ability to live their lives as they would like more poorly than did those with good hearing.
Tuija Mikkola’s study is part of a broader LISPE (Life-Space Mobility in Old Age) study. In the LISPE study, 848 community-dwelling persons aged 75 to 90 years were interviewed. Almost half of the subjects reported some difficulties and one in ten reported major difficulties when conversing with another person in the presence of noise.
Anne Viljanen’s study was carried out in collaboration with a research group from the University of Southern Denmark. The data gathered by the SHARE (Survey of Health, Ageing and Retirement in Europe) project covers 11 European countries. More than 27,000 persons aged 50 and older from the Nordic countries, Central Europe and the Mediterranean countries participated in SHARE. SHARE did not include Finnish participants.
“The study evaluated the prevalence of hearing and vision problems and whether these sensory impairments are linked to social activity. People with vision or hearing problems were less socially active than those without sensory problems, and those with both vision and hearing problems were least socially active,” Anne Viljanen says.
Rehabilitation is important
Anne Viljanen and Tuija Mikkola think that preventive and rehabilitative measures are important in order to support older people with sensory impairments in living socially active lives. It is possible to compensate for the impairment of one sense to some extent; for example, people with hearing problems rely more on the visual cues of speech. Thus, it is important to converse face-to-face with people with impaired hearing, as this helps facilitate lip-reading. Concomitant hearing and visual impairment also requires special skills from healthcare and rehabilitation personnel, as well as close collaboration between different healthcare specialists.
These studies were carried out by the Gerontology Research Center, which is a collaboration between the University of Jyväskylä and the University of Tampere. The studies are part of an international consortium called Hearing, Remembering and Living Well. The studies were funded by the Academy of Finland as a part of the ERA-AGE2 call.
The results have been published in international scientific journals.
Mikkola TM, Portegijs E, Rantakokko M, Gagné J-P, Rantanen T, Viljanen A. Association of self-reported hearing difficulty to objective and perceived participation outside the home in older community-dwelling adults. Journal of Aging and Health. In Press. DOI: 10.1177/0898264314538662
Viljanen A, Törmäkangas T, Vestergaard S, Andersen-Ranberg K. Dual sensory loss and social participation in older Europeans. European Journal of Ageing 2014; 11: 155-167. DOI: 10.1007/s10433-013-0291-7
Fibromyalgia is Actually Not an Autoimmune Disease
You may have seen some conflicting information regarding fibromyalgia and autoimmune diseases. Some people will tell you it is considered to be an autoimmune disease, others will tell you it’s not. However, you must know that physicians do not consider fibromyalgia to be an autoimmune disease.
The cause of this debilitating condition is not known, and individuals who have certain other diseases may be much more likely to be affected by it. Diseases that make an individual more susceptible to fibromyalgia include ankylosing spondylitis, lupus, and rheumatoid arthritis. Typically, the symptoms of fibromyalgia mirror those of these autoimmune diseases, which makes diagnosis much more difficult.
Why is Fibromyalgia not Considered an Autoimmune Disease?
At this point in time, fibromyalgia is not considered to be an autoimmune disease. Further research could change this, but that does not seem likely right now. Though some cases of fibromyalgia do involve a dysregulation of the immune system, this is very different from the dysregulation caused by an autoimmune disease. Researchers have not yet been able to fully understand the nature of the immune dysregulation seen in fibromyalgia.
Autoimmune Diseases Explained
As mentioned before, autoimmune diseases include ankylosing spondylitis, lupus, rheumatoid arthritis, and others. Nowhere on this list will you find fibromyalgia. So, though fibromyalgia does exhibit symptoms similar to those of an autoimmune disease or disorder, it is not considered to be one.
What are the Symptoms of an Autoimmune Disease?
There are many symptoms of an autoimmune disease- many of which are the same or almost the same in those who have fibromyalgia. Following are some of the symptoms that occur with autoimmune diseases that could also point to fibromyalgia.
- Extreme Fatigue- this is the level of fatigue that is not helped by getting some rest.
- Joint and Muscle Pain- this can be a range of pain, from general pain, to burning, to aching, general soreness in the muscles and aches/pains in the joints.
- Muscle Weakness- weak feeling in the muscles, as well as loss of hand/arm or leg/thigh strength.
- Swollen Glands- especially those in the throat, under the arms, and the tops of the legs around the groin.
- Greater Susceptibility to Infections- frequent bladder infections, colds, ear infections, yeast infections, sore throat and sinus problems are very common among both fibromyalgia patients and those patients with autoimmune disorders. Additionally, you will experience a much slower recovery time if you have an autoimmune disorder.
- Sleep Disturbances- problems with falling/staying asleep.
- Weight Gain/Loss- changes in weight, usually in a 10 to 15 pound range.
- Low Blood Sugar- this points to adrenal fatigue.
- Changes in Blood Pressure- you may have very high or very low blood pressure, in combination with feelings of vertigo or dizziness, palpitations/fluctuations in heart rate, and/or fainting.
- Candida Yeast Infections- this includes sinus infections, digestive problems, thrush, or even vaginal yeast infections.
- Allergies- allergic/sensitive to certain chemicals, foods, and things in the environment.
- Digestive Problems- this includes heartburn, constipation, cramping, bloating, pain in the abdomen, excessive gas, and even diarrhea is quite common.
- Depression/Anxiety- changes in both mood and emotions, excessively irritable, and even panic attacks.
- Memory Problems- this often manifests as what is called “brain fog” where you can only vaguely remember things.
- Thyroid Problems- typically, the problem is hypothyroidism, though sometimes can be hyperthyroidism, and typically does not show up on a thyroid test. This can manifest itself as excessive hair loss and a lowered body temperature.
- Headaches that keep occurring- this can be severe headaches or migraines.
- Low Grade Fevers- quite common, some experience this every single day.
- PMS- bloating, heavy bleeding, extreme cramps and an irregular cycle are quite common with both fibromyalgia and an autoimmune disease.
Common Fibromyalgia Symptoms
– Pain All Over the Body- this is aching, throbbing, stabbing/shooting, burning pain deep within the muscles.
– Fatigue- feeling completely drained of energy (can be one of the most debilitating of the symptoms).
– Difficulty Sleeping- this includes both falling and staying asleep. You will not be getting adequate sleep, so you feel quite deprived of sleep upon getting up.
– Brain Fog- problems with focusing/concentrating on things, retaining information recently learned, etc.
– Stiffness Upon Waking in the Mornings- muscles feel more sore in the early mornings and individuals feel more stiff than they usually are. Typically, gently stretching your muscles and taking a warm shower/bath helps to loosen them up.
– Knotting, Cramping, Weakness in Muscles- no matter how much you do to relax the muscles, they still feel very tense. The pain that is caused by fibromyalgia itself could be a source of muscle weakness.
– Digestive Disorders- abdominal pain, bloating, constipation, nausea, diarrhea, gas, IBS, and more are very common with fibromyalgia. Also, slow digestion and acid reflux are common.
– Migraines/Headaches- these are typically present at least twice per week and are rated as severe pain- usually with a migraine component. The pain is partially due to trigger points located in the head, neck, and shoulders.
– Problems with Balance- typically, individuals afflicted with fibromyalgia have trouble walking and their odds of falling down are increased.
– Burning/Itching Skin- you may have itchy/red bumps or your skin may be completely clear and it burns like when you have been sunburned.
Compare Fibromyalgia and Autoimmunity
Though some of the symptoms are quite similar- and even exactly the same in some cases- the research into fibromyalgia has not found a link to autoimmunity. There have been no inflammatory markers that have been elevated consistently, there have been no antibodies discovered, and researchers have not observed the damage that is typical of autoimmune activity within the body.
However, a significant overlap has been observed between specific autoimmune conditions and fibromyalgia, which raises the possibility that those who have autoimmunity are susceptible to developing fibromyalgia. These conditions are:
- Rheumatoid Arthritis
- Hashimoto’s Autoimmune Thyroiditis
- Lupus
Why Do People Get Confused?
The fact that people misunderstand the meaning of ‘autoimmunity’ greatly contributes to the confusion between the two. The similarities between the conditions add to it as well.
For example, both fibromyalgia and autoimmune disorders involve fatigue, pain, and several other very common symptoms; both can be quite difficult to diagnose and can take a long time to sort out; and both are poorly understood- even those in the medical community don’t always understand fibromyalgia and autoimmune disorders, so they lump them together because, on the surface, they appear the same.
However, you must understand the difference between the two because, though they appear to be the same, the treatments are very different. You don’t want to be treated for one if you have the other- the treatment would not be successful.
New Findings Could Lead To Good News For Cancer Research, Prevention And Treatment
Rayshell Clapper for redOrbit.com – Your Universe Online
Researchers released some pretty incredible findings about cancer last week. In a multi-institution study, researchers from the University of California at Santa Cruz (UCSC), the Buck Institute for Research on Aging, the University of California at San Francisco (UCSF), the University of North Carolina, Chapel Hill (UNC), and the Broad Institute of Harvard and MIT looked closely at how cancers are classified. The study is part of the Pan-Cancer Initiative of The Cancer Genome Atlas (TCGA).
According to a statement from the UCSC, at present “Cancers are classified primarily on the basis of where in the body the disease originates.” This means that cancers are classified based on the tissue: breast, lung, bladder, colon, et cetera. However, the researchers found what they believe will be a better way to classify cancers: based on cell type not just tissue type.
According to the Buck Institute for Research on Aging, “Scientists analyzed the DNA, RNA and protein from 12 different tumor types using six different TCGA “platform technologies” to see how the different tumor types compare to each other. The study showed that cancers are more likely to be molecularly and genetically similar based on their cell type of origin as opposed to their tissue type of origin (e.g. breast, kidney, bladder, etc.).”
Based on these findings, the researchers believe at least 10 percent of cancer patients would have their cancer reclassified. As UCSF points out, that means one out of every 10 cancer patients would have their cancers more accurately diagnosed.
Reclassification means that these patients would likely receive different and more accurate treatment, hopefully leading to a higher likelihood of survival and remission. According to news from UNC, these findings could also lead to future research on cancer treatments to find more accurate medications and procedures.
As UCSC explains, to find all this, “The research team used statistical analyses of the molecular data to divide the tumors into groups or “clusters,” first analyzing the data from each platform separately and then combining them in an integrated cross-platform analysis…All six platforms as well as the integrated analysis converged on the same divisions of the cancers into 11 major subtypes. Five of those subtypes were nearly identical to their tissue-of-origin counterparts. But some tissue-of-origin categories split into several different molecular subtypes, and some subtypes encompass tumors with several different tissues of origin.”
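To make that general workflow more concrete, the following is a minimal, purely illustrative Python sketch – not the Pan-Cancer team’s actual pipeline – using randomly generated stand-in data, made-up platform names and an arbitrary number of clusters. It only mimics the two-step idea described above: cluster each data platform separately, then cluster the combined per-platform assignments to produce one integrated, cross-platform grouping.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the molecular data: 120 tumors measured on
# three "platforms" (for example DNA, RNA and protein), each with its
# own set of features.
platforms = {
    "dna": rng.normal(size=(120, 50)),
    "rna": rng.normal(size=(120, 200)),
    "protein": rng.normal(size=(120, 30)),
}

# Step 1: cluster each platform separately.
per_platform_labels = np.column_stack([
    KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(data)
    for data in platforms.values()
])

# Step 2: integrate across platforms by clustering the matrix of
# per-platform cluster assignments (one-hot encoded so the label
# numbers are treated as categories, not quantities).
encoded = OneHotEncoder().fit_transform(per_platform_labels).toarray()
integrated = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(encoded)

print("Tumors per integrated subtype:", np.bincount(integrated))
```

With real molecular data, agreement between the per-platform groupings and the integrated grouping is what would signal, as the quote above describes, that the different platforms converge on the same divisions.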
Specifically, the study researchers identified breast and bladder cancers as two areas where the more cell-specific classification proved more appropriate and accurate than the tissue-specific classification. UNC explained that the breast is a very complex organ with many different cell types, which leads to a variety of breast cancer subtypes: luminal, HER2-enriched, and basal-like. The Mayo Clinic defines the differences among these three types of breast cancer. First, though, it is important to understand the hormone-receptor status of breast cancer.
In breast cancer, a tumor may be estrogen receptor (ER) positive, progesterone receptor (PR) positive, or hormone receptor (HR) negative. ER positive refers to a type of breast cancer that is sensitive to estrogen, whereas PR positive means the type is sensitive to progesterone. HR negative means that type of cancer does not have hormone receptors, so it will not be affected by treatments that focus on blocking hormones. Additionally, classification considers the genetic makeup of the cancer cells: tumors with too many copies of the HER-2 gene make too much of the growth-promoting protein HER-2, so treatments focus on slowing and killing those cancer cells.
To that end, luminal breast cancers can be ER positive, PR positive but HER-2 negative (called luminal A breast cancer) or ER positive, PR negative, and HER-2 positive (called luminal B breast cancer). HER2-enriched cancers are ER negative, PR negative, but HER-2 positive. And the basal-like cancers are ER, PR, and HER-2 negative. Basal-like cancers are also called triple-negative breast cancer. It is the basal-like breast cancers where the researchers found that a more accurate way of classifying them is via cell origin rather than tissue origin because the basal-like breast cancers looked more like ovarian cancer and cancers of the squamous-cell (skin) type.
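To show how those receptor-status combinations line up with the subtype names, here is a short, purely illustrative Python sketch. The helper function is hypothetical and only encodes the four categories named above; real clinical subtyping weighs additional markers and is performed by pathologists, not by a simple lookup.

```python
def breast_cancer_subtype(er_positive: bool, pr_positive: bool, her2_positive: bool) -> str:
    """Map ER/PR/HER-2 status to the subtype names used in the article (hypothetical helper)."""
    if er_positive and pr_positive and not her2_positive:
        return "luminal A"
    if er_positive and not pr_positive and her2_positive:
        return "luminal B"
    if not er_positive and not pr_positive and her2_positive:
        return "HER2-enriched"
    if not er_positive and not pr_positive and not her2_positive:
        return "basal-like (triple-negative)"
    return "not one of the article's four named categories"

# Example: a tumor that is ER positive, PR positive and HER-2 negative.
print(breast_cancer_subtype(True, True, False))  # -> luminal A
```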
Although breast cancer shows the clearest indicators supporting the idea that cancers can and should be classified by both tissue and cell type, other cancers supported this as well, namely bladder cancer. UCSC explains that bladder cancer split into three subtypes under the cell-of-origin classification: a cluster of bladder cancers only, bladder cancers that clustered with lung adenocarcinomas, and bladder cancers that clustered with squamous-like cancers.
In continuing research, these five institutes will broaden the analysis from the 12 tumor types used in this study to 21 tumor types. All expect that more cancers will be reclassified.
Not only does this study give a greater understanding of cancers, but it can and likely will lead to better treatment options. More hopeful still, this study and those that follow could also lead to better cancer prevention.
According to UCSC, the work was performed as part of the UCSC-Buck Institute Genome Data Analysis Center for the TCGA project led by Stuart, Benz, and David Haussler, director of the UC Santa Cruz Genomics Institute. The corresponding authors of the paper are Stuart, Benz, and Charles Perou of the University of North Carolina, Chapel Hill. The co-first authors are Katherine Hoadley of UNC; Christina Yau of the Buck Institute; Denise Wolf of UCSF; and Andrew Cherniack of the Broad Institute of Harvard and MIT. Stuart’s graduate students Sam Ng and Vladislav Uzunangelov also made significant contributions to the analysis.
Results of this research were published August 7 in the journal Cell.
Tortoises Trained To Use Touchscreen As Part Of Spatial Navigation Study
redOrbit Staff & Wire Reports – Your Universe Online
An international team of scientists has successfully trained four red-footed tortoises to use a touchscreen, according to new research appearing in the July edition of the journal Behavioral Processes.
The research, which was led by Dr. Anna Wilkinson of the University of Lincoln’s School of Life Sciences, was part of a study designed to teach the creatures navigational techniques. Previous research has demonstrated that red-footed tortoises are proficient in several types of spatial cognition tasks, including the radial arm maze.
For the new study, Dr. Wilkinson’s team attempted to determine whether the tortoises were capable of learning a spatial task in which they were required to touch a stimulus presented in a specific position on a touchscreen. They also looked at the relationship between this task and performance in a related spatial task requiring whole-body movement.
Red-footed tortoises were selected because their brain structure is vastly different from that of mammals. In mammals, the hippocampus is used for spatial navigation, and while the reptilian medial cortex is believed to have a similar function, little behavioral research has been conducted in this field. In order to examine how tortoises learn to navigate their environment, the study authors tested how they relied on cues to get around.
“Tortoises are perfect to study as they are considered largely unchanged from when they roamed the world millions of years ago. And this research is important so we can better understand the evolution of the brain and the evolution of cognition,” Dr. Wilkinson, who first started training the tortoises while at the University of Vienna, explained in a recent statement.
The study authors gave strawberries or other treats to the reptiles whenever they examined, approached and pecked blue circles on the screen. Two of the creatures went on to apply their knowledge to a real-life situation in which the research team placed them in an area with two empty food bowls that resembled the blue circles. Those tortoises went over to the bowl on the same side as the circles they had been trained to peck on the screen.
“Their task was to simply remember where they had been rewarded, learning a simple response pattern on the touchscreen,” Dr. Wilkinson said. “They then transferred what they had learned from the touchscreen into a real-world situation. This tells us that when navigating in real space they do not rely on simple motor feedback but learn about the position of stimuli within an environment.”
“The big problem is how to ask all animals a question that they are equally capable of answering,” she added. “The touchscreen is a brilliant solution as all animals can interact with it, whether it is with a paw, nose or beak. This allows us to compare the different cognitive capabilities.”
In addition to Dr. Wilkinson, other authors of the study included Julia Mueller-Paul and Ulrike Aust of the University of Vienna’s Department of Cognitive Biology, Michael Steurer of the University of Vienna’s Faculty of Physics, Geoffrey Hall of the University of York’s Department of Psychology and the University of New South Wales’ School of Psychology, and Ludwig Huber of the University of Veterinary Medicine Vienna’s Messerli Research Institute.