Researchers Study Owls To Learn About Stealth Technology

Alan McStravick for redOrbit.com — Your Universe Online

Researchers are learning about stealth technology from one of nature's most famous winged inhabitants. Would you like to know just who-who they are taking their lead from? If you guessed the owl, well, the “who” clue wasn't exactly the most difficult.

They are looking to the owl because of its ability to fly silently, a feat made possible by the natural design of its plumage, which aids in noise reduction and allows the owl to hunt in a relatively stealthy manner.

The study, performed by researchers at the University of Cambridge, England, focused on the wing structure of the owl in an attempt to better understand how it is able to dissipate any noise that might have been created through flapping during flight. The researchers believe they can apply their findings, ultimately, to the future design of conventional aircraft. They are presenting their findings this week at the American Physical Society's (APS) Division of Fluid Dynamics meeting in San Diego, California.

“Many owl species have developed specialized plumage to effectively eliminate the aerodynamic noise from their wings, which allows them to hunt and capture their prey using their ears alone,” said Justin Jaworski with the department of applied mathematics and theoretical physics at the University of Cambridge. “No one knows exactly how owls achieve this acoustic stealth, and the reasons for this feat are largely speculative based on comparisons of owl feathers and physiology to other not-so-quiet birds such as pigeons.”

One of the challenges for the researchers was learning how this wing design mitigates air turbulence as air flows over the wing. All wings, natural and human-designed alike, create turbulent eddies as air flows across them. Sound is produced when those eddies cross the trailing edge of the wing. With a conventional aircraft's wing, whose trailing edge is rigid, the sound produced is significant.

The researchers recognized, in their study of the owl's wing, three distinct physical attributes that they believe allow the bird to fly silently. The first is a comb of stiff feathers along the leading edge of the wing, where the air begins to flow across it. The second is a soft, downy material that covers the top of the wing. The third is a flexible fringe at the trailing edge, where the air flows off the wing. The researchers are trying to determine whether silent flight is produced by one of these attributes or by a combination of all of them.

To try to make this determination, the researchers worked to develop a theoretical basis for the owl's silent flight based upon the flexible fringe on the trailing edge of the wing. The trailing edge, in both natural and engineered wings, is typically the primary source of noise coming off a wing. The team built upon previous owl research, which had suggested that the noise from the wing was not dependent upon air speed, and which showed a significant reduction of high-frequency noise across the range to which human ears are sensitive.

The team employed mathematical models to demonstrate how elastic and porous properties of a trailing edge could be implemented on a wing so that aerodynamic noise would depend on the flight speed as if there were no trailing edge at all. “This implied that the dominant noise source for conventional wings could be eliminated,” said Nigel Peake, one of the researchers from the University of Cambridge. “The noise signature from the wing could then be dictated by otherwise minor noise mechanisms such as the roughness of the wing surface.”
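The phrase “as if there were no trailing edge at all” can be made precise with two textbook aeroacoustic scalings; the formulas below are standard results (Lighthill's quadrupole law and the Ffowcs Williams-Hall edge-scattering law), not figures taken from the talk itself:

```latex
% Radiated acoustic power of turbulence in free space (Lighthill)
% versus the same turbulence convecting past a rigid trailing edge
% (Ffowcs Williams & Hall). U = flow speed, c_0 = speed of sound.
P_{\text{free}} \sim \rho_0 \, \frac{U^8}{c_0^5} \, L^2,
\qquad
P_{\text{edge}} \sim \rho_0 \, \frac{U^5}{c_0^2} \, L^2
```

Because a bird flies far below the speed of sound (U/c₀ ≪ 1), the U⁵ edge term dominates by orders of magnitude; an elastic, porous edge that scatters no sound would leave only the much weaker U⁸-type contribution, which is the sense in which the dominant noise source “could be eliminated.”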

The team presented their findings in a public talk at the meeting on November 18, 2012.

Researchers Say Unemployment Increases Risk Of Heart Attack

Connie K. Ho for redOrbit.com — Your Universe Online

Researchers from Duke University recently revealed results from a study that showed that unemployment may increase the risk of having a heart attack.

In particular, the scientists studied the impact of losing jobs multiple times, suffering short spurts without work, and long-term unemployment in relation to the risk for acute myocardial infarction (AMI). Researchers looked at 13,451 U.S. adults between the ages of 51 and 75 and assessed their risk of having an AMI, commonly known as a heart attack. The participants were part of the long-term Health and Retirement Study and were given biennial follow-up interviews between 1992 and 2010. The findings of the research project were recently published in the Archives of Internal Medicine, a publication affiliated with the Journal of the American Medical Association (JAMA).

“Results demonstrated that several features of one’s past and present employment increased risks for a cardiovascular event. Although the risks for AMI were most significant in the first year after job loss, unemployment status, cumulative number of job losses and cumulative time unemployed were each independently associated with increased risk for AMI,” explained the authors in a prepared statement.

At the start of the study, 14 percent of the participants were unemployed, 69.7 percent had lost one or more jobs, and 35.1 percent had spent some time unemployed. For those who were unemployed, there was an elevated risk of having a heart attack compared to workers who had not lost their jobs. In addition, the risk of having a heart attack increased incrementally from one job loss (a hazard ratio of 1.22) to a total of four job losses (a hazard ratio of 1.63) compared to individuals who had experienced no job loss.

“We found that the elevated risks associated with multiple job losses were of the magnitude of other traditional risk factors, such as smoking, diabetes mellitus and hypertension,” continued the authors in the statement.

“In the context of the current U.S. economy and projected increases in job instability and unemployment among workers, additional studies should investigate the mechanisms contributing to work-related disparities in AMI to identify viable targets for successful interventions.”

Along with the publication of the study's findings, William T. Gallo, a researcher at City University of New York, wrote a commentary that accompanied the study in the Archives of Internal Medicine. Gallo argued that the study by the Duke University researchers should be one of the last to simply document the effects of unemployment, given the lack of research explaining the 'hows' and 'whys' of workers' “socioeconomic exposure.”

“Explorations of these questions, however limited, should mark the beginning of the next period of research,” wrote Gallo in the commentary. “Sufficient evidence exists of the negative influence of job loss on health. The next generation of studies should identify reasonable pathways from job separation to illness so that nonoccupational interventions may be developed and targeted to the most vulnerable individuals.”

For those interested in heart attack prevention, the Mayo Clinic provides a set of recommendations to reduce the risk of suffering an AMI. One piece of advice for individuals is the importance of making lifestyle changes. This can include participating in physical activity, consuming healthy foods, lowering and managing stress in a healthy way, as well as avoiding smoking.

How Much Do Plant Leaves Shrink When They Dry Out?

University of Arizona

Leaves shrink when they dry out. What sounds straightforward has far-reaching consequences for scientists studying how ecosystems work or reconstructing past climates, a team of 40 middle school students led by a UA graduate student has discovered.

A research team consisting of a University of Arizona graduate student, about 40 middle school students and a UA research lab has undertaken the first systematic study looking at how much plant leaves shrink when they dry out. The results are published in the November issue of the American Journal of Botany, one of the foremost publication venues in the botanical sciences.

“Our simple observation that leaves shrink when they dry out has very important consequences for our understanding of how ecosystems work,” said Benjamin Blonder, a graduate student in the UA's department of ecology and evolutionary biology who led the research. “Many studies in ecology, especially reconstructions of past climate, depend on knowing how big leaves are. By relying on measurements of dried leaves, a very large number of climate and ecology studies may have obtained biased conclusions.”

For example, when scientists reconstruct climate and precipitation in the past to figure out whether an area was subjected to droughts or whether it was wet, they often turn to fossilized leaves, Blonder explained.

According to Blonder, the specific area of a leaf in relation to its mass also is a very useful parameter in predicting how much carbon a plant can capture from the atmosphere.

If leaves undergo dramatic changes in size during fossilization, the conclusions are likely to be off. The same effect would be expected when researchers use dried leaves from museum collections for their calculations.

“You measure the area of a leaf, enter that into an equation, and it will calculate the estimated precipitation for that site. If you have the wrong estimate of leaf area, you'll have the wrong estimate of precipitation,” he said.
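To see how such an error propagates, consider a toy version of that workflow. The log-linear calibration and its coefficients below are invented for illustration; they are not the transfer function used in any actual paleoclimate study:

```python
# Toy illustration of how a biased leaf-area measurement propagates into a
# biased precipitation estimate. The log-linear calibration and its
# coefficients (a, b) are hypothetical, not from the study.
import math

def estimated_precip_mm(leaf_area_cm2, a=2.0, b=0.7):
    """Hypothetical transfer function: log10(precip) = a + b * log10(area)."""
    return 10 ** (a + b * math.log10(leaf_area_cm2))

fresh_area = 50.0                 # true (fresh) leaf area, cm^2
dried_area = fresh_area * 0.8     # dried leaf, ~20% shrinkage (the study's average)

print(estimated_precip_mm(fresh_area))   # estimate from the true area
print(estimated_precip_mm(dried_area))   # biased estimate from the dried leaf
# With these toy coefficients, the 20% area shrinkage translates into a
# roughly 14% underestimate of precipitation (0.8 ** 0.7 is about 0.86).
```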

“People already knew leaves shrink a lot when they're dried out, but we didn't know by how much,” Blonder said. “Also, I wanted to know if the shrinkage could be reversed.”

So he set out to collect and measure leaves from various areas, including Costa Rica, Hawaii and the Rocky Mountains in Colorado.

At the time, Blonder spent two days each week at Miles Exploratory Learning Center in the Tucson Unified School District, supported by BioME, a UA graduate training grant funded by the National Science Foundation's GK-12 Program.

“I realized I had more than 100 potential scientists right there in front of me,” he said.

“Ben and I had been discussing the possibility of some kind of hands-on research project,” said Rebecca Lipson, the middle school science teacher at Miles Exploratory Learning Center, who was partnered with Blonder as a BioME fellow to teach her students ranging from 6th through 8th grade.

Blonder's doctoral advisor Brian Enquist, a professor in the UA department of ecology and evolutionary biology, said he was enthusiastic about the chance for his lab to help out in Lipson's classroom.

“Ben's enthusiasm for sharing the love of science with young students was infectious,” Enquist said. “I am thrilled that we had this opportunity to share and teach science that we do at the UA with such bright and engaged kids.”

“I wanted to bring in someone who is a great role model for my students,” Lipson said. “Many of them tend to think of a scientist as a dull professor in a white lab coat who never leaves the lab, and I like to shatter that notion. Having a young, passionate grad student in my class helps my students to get an idea of the vast array of what science is really like.”

With Blonder as their “principal investigator,” or “PI” as they called him, all 105 students in Lipson's four science classes embarked on a quest to find out exactly how much leaves shrink when they dry out, what parameters determine the amount of shrinkage depending on the species, and whether leaves return to their original size once they are rehydrated.

Each class focused on a specific aspect of the research. Blonder had his student collaborators examine leaves from four plant species native to the area and tasked each class with determining the effect of a particular treatment.

“My students studied what happened to the leaves when they soaked them in water, immersed them in mud, let them dry out or rehydrated them afterward.”

When leaves dry out, they shrink about 20 percent on average, the team discovered. In the most extreme case, the leaves of the mountain meadow-rue (Thalictrum fendleri), an herb from the Rocky Mountains, shriveled down to one-fifth of their original size.

“Through the experiments in the classroom, we found that a leaf comes back to its original size when we soak it in water,” Blonder said, “which provides an easy and useful strategy for scientists doing studies that depend on accurate measurements of leaf area.”

Delving deeper into the project, the students tried to answer the question of what determines how much a leaf shrinks.

“At the beginning, we thought there would be a very simple explanation,” Blonder said. “But it turned out that we ended up with many variables that determine the amount of shrinkage in a leaf of a given species. We used data from hundreds of species, yet there is no simple answer.”

The group did find that the amount of structural investment a plant puts into its leaves is a crucial factor determining how much a leaf will shrink when it dries out.

“The more mass and tissue the plant invests into its leaves in terms of components that provide mechanical strength, the less shrinkage will occur,” Blonder said.

Almost half of the participating students completed the necessary prerequisites and assignments to qualify as co-authors on the scientific paper that resulted from the study.

“Our school has nearly 40 percent of students that qualify for special needs education services,” Lipson explained. “Our philosophy is to target those children who struggle in reading, writing or math and give them opportunities to really engage in their learning and understand concepts at a deeper level.”

The BioME program has been spearheaded by Judith Bronstein, a University Distinguished Professor in the UA's department of ecology and evolutionary biology, who served as its principal investigator.

“Projects like this one have been hugely beneficial to the Tucson community,” Bronstein said, adding that over the course of five years, 52 BioME graduate fellows have engaged tens of thousands of school children in actual research projects.

“The goal has been to expose them to real science, in the sense that you don't know what the answer is. This makes science much more compelling,” Bronstein said. “I am not aware of another such program anywhere in the country in which a grad fellow directly involved school children in writing a scientific paper.”

Financial cutbacks have led the National Science Foundation to eliminate the program that funded BioME, as well as two other highly successful training programs that place graduate students in Tucson classrooms.

Blonder wants to continue teaching at Miles Exploratory Center. Enquist has been actively applying for alternative funding sources to keep the partnership alive.

“This was such a fun and rewarding experience,” Enquist said. “It is important for developing science literacy in our schools that we keep such exchanges going.”

“We are very lucky that Dr. Enquist wanted us to continue our partnership in education,” Lipson said. “Now that the formal program no longer exists, it will be up to individual students, their advisors and their departments to keep this alive.”

“I think we have shown you can work in a school and get serious science published,” Lipson said.

Blonder added: “This is a nice example of where science and teaching really do come together to produce a study with real scientific value.”

The Descent of Man: Why One Scientist Thinks The Human Race May Be Getting Dumber

Jedidiah Becker for redOrbit.com — Your Universe Online
Take a glance at the arc of human civilization. As just a few notable achievements, you might start with the discovery of agriculture before moving on to survey the architectural marvels of the ancient world, the revolution of Gutenberg's printing press and finally landing on the modern ubiquity of rapidly evolving computer technology. This view tends to give a sense that the human intellect may have a nearly limitless potential to master nature — Hannah Arendt's Homo faber, “man the creator.” And that may well be the case. But going a step further, the furiously paced advances taking place in nearly every branch of science also incline most of us to suspect that we, as a species, just keep getting brighter with each new addition to our vast library of accumulated knowledge and technology.
However, a professor of pathology and developmental biology at Stanford University’s Crabtree Laboratory believes there's cause to suspect that humanity's intellectual prowess may actually be eroding, and at an astonishing rate. In a recent paper titled “Our Fragile Intellect,” Professor Gerald Crabtree opens his discussion with an odd statement: “I would be willing to wager that if an average citizen from Athens of 1000 BC were to appear suddenly among us, he or she would be among the brightest and most intellectually alive of our colleagues and companions. We would be surprised by our time-visitor's memory, broad range of ideas and clear-sighted view of important issues. I would also guess that he or she would be among the most emotionally stable of our friends and colleagues.”
And Crabtree clarifies that this speculative superiority doesn't just apply to the historically revered Greeks. In fact, he says, all of our ur-ancestors of roughly 3,000 to 6,000 years ago were likely smarter and more emotionally stable than us, whether they were in ancient Africa, Asia, India or the Americas. And the reason for their superiority, he explains, is not a matter of knowledge or culture, per se, but rather one of genetics.
In a provocative thesis that has already ruffled the feathers of the scientific community, Crabtree crafts a compelling argument using a back-of-the-envelope statistical analysis of genetic mutation frequencies, a dash of tried-and-true evolutionary theory and a pinch of anthropological speculation. The result is a persuasive case for why we might all be getting dumber.
At the bottom of his theory are two core ideas: The first — supported by modern neurobiology and genetics — is that the biological basis for human intelligence is made up of a strikingly large and surprisingly fragile constellation of genetic players. The second hinges on the idea that this intricate and unstable conglomerate of genes was forged together in the furnace of intense evolutionary pressures — pressures that we Homo sapiens have, ironically, largely managed to mitigate thanks to our unparalleled powers of creative intelligence.
THE FRAGILITY OF INTELLIGENCE
One of the hallmarks of Mother Nature is that she invariably picks winners. And though their victories may be short-lived, every species that ever occupied a branch in the tree of life managed to do so because it was able to outfox its foes. In evolution's Coliseum, only the robust, the adaptable and the lucky survive the perpetual competition with enemies, environment and fate. Thus it might seem counterintuitive that something as vital to our evolutionary survival as intelligence would rest upon a flimsy foundation. After all, our early human ancestors couldn't beat a chimp in an arm wrestling match or outrun a lion on the open plain, but they could fashion and accurately launch a spear at a target from a distance, organize and execute complex hunting strategies, and tame the mysterious power of the flame — all abilities that required an advanced, adaptable and abstract kind of intelligence. And it was largely this quality rather than the brute strength and speed of the beasts that allowed our ancestors to survive and thrive in the harshest environments that nature’s creative kitchen could cook up.
So if our unique intelligence was the linchpin of our evolutionary fate, how and why was it left to dangle from such a seemingly thin thread?
The Genetic Complexity of Intelligence
The first step in understanding our cognitive fragility, Crabtree explains, is to recognize the extreme genetic complexity of this thing we call intelligence. In the first half century or so following Watson and Crick's discovery of DNA as life's heritable playbook, there was a strong tendency among scientists towards a kind of genetic reductionism — the habit of thinking about any given characteristic of an organism as being the product of a single chunk of DNA; the notion that there exists a more or less one-to-one relationship between a gene and a given biological trait. While there are a few traits for which this kind of logic does hold true — such as for the single genes that give us a widow's peak or the ability to roll our tongues — most complex biological traits require the combined action of numerous genetic influences.
Although advances in research have largely disabused modern scientists of this overly simplified view of the relationship between genetics and biological traits, the ghost of reductionism still haunts the halls of academic institutions and research laboratories throughout the world. But when it comes to thinking about human intelligence, the notion that we derive our richly complex cognitive abilities from just one or a handful of genes isn't simply incorrect — it's likely incorrect by several orders of magnitude. So how many 'intelligence genes' does the average human need for his quotidian routines? “How many genes are required to carry out our everyday tasks, read a book, care for a loved one, conceive of a just law or compose a song?” Crabtree asks. The answer, it turns out, is probably in the thousands.
Using several techniques to approximate the number of genes required for “full intellectual and emotional function” — the most effective of which he believes to be the analysis and extrapolation of the number of so-called 'X-linked intellectual deficiency genes' that reside (as the name indicates) on the X chromosome — Crabtree puts the number squarely in the range of 2,000 to 5,000 genes, all of which are necessary to keep our brain's cognitive cogs optimally turning. That's a lot of genes to devote to one biological process — somewhere between one-tenth and one-fourth of the total number of genes in our entire genome, to put it in perspective. But given the complexity and importance of intelligence in our survival, this high number may not seem all that surprising.
And yet the question remains as to why our intelligence should be particularly vulnerable simply because it requires a large number of genes to function properly. After all, isn't the robustness of the human organism at least partly due to the fact that evolution has equipped most of our biological systems with overlapping redundancies, back-up programs and fail-safe mechanisms so that the whole machine doesn't break down when one of our biological widgets goes kaput? A corporation with thousands of employees typically doesn't experience so much as a hiccup if one worker is sick for a week. But a small business with just five people on the payroll could quickly tank if one team member isn't there to fulfill his duties. For many of our body's processes, this logic holds true: more genetic components translate into increased robustness. In the case of intelligence, however, Crabtree says there are two specific reasons why this multiplicity of genetic players may prove to be more of a liability than an asset.
For starters, he explains, the large number of genes that are required for full cognitive function creates a proverbial flock of sitting ducks for random genetic mutations. “The larger the number of genes required, the more susceptible we are as a species to random genetic events that reduce our intellectual and emotional fitness.” Thanks to our finely tuned biological machinery, the frequency of mutations in our genome is actually astoundingly low. In yeast cells, for instance, random mutations only occur at a rate of about 3.8 × 10⁻¹⁰ to 8.4 × 10⁻⁹ per base pair per generation (roughly one mistake per hundred million to a couple billion nucleotides). When it comes to duplicating and passing on life's blueprint, evolution didn't leave much room for mistakes, and millions and millions of years of merciless natural selection have ensured that the replication of our DNA takes place with mind-boggling accuracy.
However, while genetic mutations are statistically rare, every now and again one slips in through the backdoor. And the simple fact that there are so many genes associated with intelligence significantly increases the likelihood that one or more of these will be affected by a random mutation, in the same way that buying additional lottery tickets increases your chances of winning (except in this case, you're increasing your chances of losing … think of it as more of a Hunger Games sort of situation). If there were just one little gene for intelligence hiding amongst the other 20,000 or so in the human genome, then the probability that that single gene would get hit by a mutation would be quite low. However, the fact that there are likely 2,000 to 5,000 of these genes increases that probability by several orders of magnitude.
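A back-of-the-envelope sketch makes the lottery analogy concrete. All of the numbers below are assumed round values chosen for illustration; they are not Crabtree's published figures:

```python
# Back-of-the-envelope sketch of the "more targets, more hits" argument.
# All numbers here are illustrative assumptions, not figures from the paper.
per_bp_rate = 1e-8         # assumed human per-base-pair mutation rate, per generation
target_bp_per_gene = 5e4   # assumed mutable sequence associated with one gene (bp)
p_gene = per_bp_rate * target_bp_per_gene   # chance that a given gene is hit

for n_genes in (1, 2000, 5000):
    p_any_hit = 1 - (1 - p_gene) ** n_genes
    print(f"{n_genes:>5} genes -> P(at least one new mutation) = {p_any_hit:.3f}")
# With one gene, the per-generation risk is about 0.0005; with 2,000 to 5,000
# genes it climbs to roughly 0.6 to 0.9, the "several orders of magnitude"
# increase described in the text.
```

Compounding a per-generation risk of this size over roughly 120 generations (Crabtree's 3,000-year window, at about 25 years per generation) is how one arrives at an expectation of multiple accumulated hits.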
Intelligence Genes Work as a Chain, Not a Network
So, the fact that our intelligence requires so many genes to work properly actually increases our exposure to deleterious mutations. Bummer. But in his thesis, Crabtree points to yet another reason to suspect that our intelligence genes might be particularly fragile. In addition to the problem presented by the sheer number of these genes, he says that there´s also a systemic weakness related to how they work together to produce our cognitive abilities.
As just mentioned, many of our body´s systems have multiple built-in fail-safe mechanisms that serve to increase the robustness of the whole organism. Many vital biological systems tend to work as a sort of network in which different components have overlapping, parallel or even duplicate roles, ensuring that the overall function of the system doesn´t depend on any one part.
With intelligence, however, it seems that the picture may again be a bit different. Crabtree explains that the mutation of any single intelligence gene can significantly compromise the integrity of the whole intelligence apparatus. And for this reason, he says, “these genes do not operate as a robust network, but rather as links on a chain in which failure of any one of the links gives rise to deficiency.” Thus when it comes to intelligence, a single weak link in the genetic chain can (and probably does) lead to the impaired function of the entire system.
SO HOW DID WE GET SO SMART AND WHY ARE WE GETTING DUMBER?
By now the more observant reader may have spotted a small kink in Professor Crabtree's thesis: If the unique intelligence of Homo sapiens is such an extremely brittle biological phenomenon that it appears destined for genetic degeneration, then how in the world did our ancestors ever acquire it to start with?
Even putting aside the thesis about our cognitive fragility and decline, the rise of intelligence in humans is still one of the most intriguing riddles in modern anthropology and evolutionary biology. Crabtree buttresses his theory of declining intelligence by pointing to several things that we do know about the emergence of human intelligence and what they might tell us about its rise and possible descent.
Somewhere between 50,000 and 500,000 years ago our prehistoric African ancestors began to experience a rapid enlargement of both the volume of their skulls as well as the size of their frontal cortex — a part of the brain involved in complex problem solving, decision making, social interaction and other higher-level cognitive tasks. And this development wasn't a mere coincidence. It was part of a new kind of evolutionary survival strategy — a shift from relying predominantly on traits like strength and speed to one that depended on characteristics like cunning, prediction, abstract thought and a more sophisticated, intuitive grasp of the laws of physics.
This strategic adjustment of survival strategies was not an easy one, says Crabtree, and the casualties were probably extremely high. “In the transition to surviving by thinking, most people (our non-ancestors) probably died simply due to errors of judgment or a lack of an intuitive, non-verbal comprehension of things such as the aerodynamics and gyroscopic stabilization of a spear while hunting a large dangerous animal,” he explains in his paper. “This optimization [for survival by intelligence] probably occurred in a world where every individual was exposed to nature's raw selective mechanisms on a daily basis.”
Thus for early pre-human species attempting to make the switch to survival by thinking instead of survival by strength and speed, the pressures of natural selection were intense, and there was very little room for error. If you couldn't quite predict the trajectory of your spear within a thin margin of error, or weren't able to recalibrate a complex hunting strategy in a split second while charging through an open glade, chances are you weren't going to survive for long and you certainly weren't going to leave behind many offspring to propagate your genes.
For our early ancestors, roving the African plains in small, dispersed packs that would not develop language until much later, survival as a hunter-gatherer was not for the dull-witted. Life for those who made the strategic switch to surviving by their wits was, in Crabtree's words, “more intellectually demanding than we would commonly think.” In fact, he says, “life as a hunter-gather [sic] required at least as much abstract thought as operating successfully in our present society.” If you ask most people, they tend to view skills like the ability to write literature, operate complex machinery or design computer software as far more demanding than those needed to carry out tasks like crafting a primitive weapon out of stone and wood, or tracking a herd of animals across the plain. Yet as Crabtree points out, most of the abilities that we consider to be intellectually 'sophisticated' are actually nothing more than byproducts of the core intellectual traits that our ancestors developed to survive in a hostile world — traits which Mother Nature selected for us with unforgiving austerity. In this view of intelligence, abilities like writing a symphony or performing higher mathematics are merely the “collateral effects” of the fundamental survival abilities that we attained in evolution's refining fire.
Somewhere between 50,000 and 500,000 years ago, our 2,000 to 5,000 intelligence genes were being fine-tuned by life's master engineer, natural selection, to perform tasks of unprecedented intellectual complexity. All the skills that we've subsequently developed since the dawn of civilization are thus, in a sense, an epilogue to the story of the rise of human intelligence. Driving home this point, Crabtree reemphasizes that “it seems that if one is a good architect, mathematician or banker, these skills were an offshoot of the evolutionary perfection of skills leading to our ancestor's survival as nonverbal, dispersed hunter-gathers [sic].”
THE UNSEEN INFLUENCE OF CIVILIZATION
It is therefore interesting — and more than a little ironic — to note that if Crabtree's thesis is correct, some of these “offshoots” of our intelligence could potentially contain the seed of our intellectual destruction. To get a grasp on this twist of evolutionary fate, it's necessary to reiterate the fact that the stunningly complex and fragile constellation of genes that gave us our intelligence was only possible because Mother Nature was such a harsh and unforgiving tutor. The development of our brittle intellectual abilities actually required that natural selection cut us no slack. Any branches of pre-human prototypes that couldn't quite get the right combination of those 2,000 to 5,000 'smart genes' were simply pruned off the tree of life — and the score or more of extinct hominin species suggests that these casualties were extremely high.
By at least 50,000 years ago, however, evolution's perpetual tinkering had led to the development of a species that was capable of interacting with its environment at a level of sophistication and adaptability unprecedented in the history of life. In fact, so sophisticated was this new tool, the human brain, that it rapidly began branching out to perform new types of tasks that nature had not even intended in its original design. No sooner did the brain of Homo sapiens possess the abstract problem-solving skills needed to fashion weapons and coordinate hunts than it began to extend these abilities to other tasks like constructing a symbolic language for higher-level communication with its peers, taming animals for long-term use rather than immediate consumption, and unraveling the mysteries of how to grow and harvest seeding plants. In evolutionary time, the development of all of the features of human civilization appeared just a wink after we had learned to think.
According to Crabtree's theory, this ability to tame nature and therefore mitigate the harshness of our environment may paradoxically be the greatest blessing and curse that our unique intelligence has given us as a species. While the details of his argument are constructed around technical calculations of mutation frequencies and abstruse analyses of genetic dysfunction, the basic idea is fairly clear: As the amenities of civilization have made our lives easier, they have simultaneously weakened the genetic foundations of the intelligence which made that civilization possible in the first place. Our triumphant ancestors in northern Africa, he explains, “did not have organized agriculture that permitted life at high density in cities and societies. Thus, the selective pressures that gave us our capacity for abstract thought and human mental characteristics operated among hunter gathers [sic] living in dispersed bands nothing like our present day high-density, supportive societies.”
If the pressure of intense natural selection in the wild was necessary to bring together and maintain all those intelligence genes, then the civilizations that we have formed since the advent of human intelligence have significantly weakened those pressures. As we mastered the art of agriculture and urban living, life became softer, and the intense pressure to maintain intelligence went a little slack. “Community life would,” guesses Crabtree, “tend to reduce the selective pressure placed on every individual, every day of their life.” And as we became more urban, traits other than intellectual prowess grew relatively more significant. For instance, while the selective pressure for intelligence loosened up, other qualities like resistance to infectious diseases became increasingly vital to survival among larger, more stationary groups of individuals.
The result of softened selective pressures for intelligence, Crabtree postulates, is that more and more harmful mutations to our intelligence genes have probably managed to slip into our genome in the past few thousand years. Out in the prehistoric wild, a mutation that caused even a minor dulling of the intellect probably would have been enough to spell certain death for an ancestor who carried it. However, in the cozy berth of a large agrarian society, life became increasingly forgiving of these genetic mishaps, and rather than passing away into oblivion, these genetic mutations have been passed on to the next generation. And thus, per Crabtree's thesis, the genetic foundation of our intelligence has probably been eroding since the dawn of civilization.
Using a string of dizzying calculations rooted in current data about the frequency of various genetic mutations, Crabtree has even made an estimate of the number of deleterious mutations that the human intellect may have suffered in the last few millennia: “Within 3000 years or about 120 generations, we have all very likely sustained two or more mutations harmful to our intellectual or emotion stability.” And as if that weren't bad enough news, he also says there's a good chance that our intelligence is eroding exponentially as we accumulate these mutations.
SCIENCE AS SAVIOR
So is it all speculation or is there empirical substance to this gloomy thesis? Are we getting dumber or aren´t we?
Crabtree concludes his paper by suggesting that his hypothesis could be empirically tested with a relatively basic set of experiments that involve sequencing the genomes of individuals whose last common ancestors spanned the period from present day to about 5,000 years ago. This would allow researchers to estimate the speed with which mutations have been accumulating as well as whether and to what degree the selective pressures for these intelligence genes have diminished.
For his part, Crabtree says, he certainly hopes that his theory proves to be incorrect. After all, life in civilization is probably immeasurably more pleasant than the harsh existence that our first ancestors experienced in daily combat with the undiluted forces of nature. Community life has made living easier for everyone, says Crabtree: “Indeed, that's why I prefer to live in such a society.”
But even if his thesis turns out to be spot on and we are in fact losing our intellectual edge, Crabtree suggests that we need not get too worried about it. While the deterioration of our intelligence may be happening quickly on an evolutionary timescale, the astounding pace at which the fields of genetics, nanotechnology and biotech are advancing means that we will probably have a way to address and rectify this and many other genetic problems in just a few generations, if not sooner.
“One does not need to imagine a day when we might no longer be able to comprehend the problem or the means to do anything about the slow decay in the genes underlying our intellectual fitness. Nor do we need to have visions of the world's population docilely watching reruns on televisions that they can no longer understand or build.” The reason, explains Crabtree, is that our technological prowess is outpacing our genetic decline.
“It is exceedingly unlikely that one hundred or two hundred years will make any difference at the rate of change that might be occurring. … The sciences have come so far in the past hundred years that we can safely predict that the accelerating rate of knowledge accumulation within our intellectually robust society will lead to the solution of this potentially very difficult problem by socially and morally acceptable means.”
In other words, civilization's collective intellectual legacy may actually be able to save us from itself.

Week At The Spa Provides Tangible Health Benefits

redOrbit Staff & Wire Reports – Your Universe Online

Just one week at a health spa improves emotional and physical well-being, with measurable improvements in health, a new study finds.

Researchers from Thomas Jefferson University Hospital evaluated 15 participants before and after their visit to the We Care Spa, a health and wellness spa in Desert Hot Springs, California, and found appreciable improvements in health after just one week.

“Programs such as these have never before been formally evaluated for their safety and physiological effects,” said lead author Dr. Andrew Newberg, director of research at the Jefferson-Myrna Brind Center of Integrative Medicine.

The researchers' pilot study is one of the first to attach scientific data to the outcomes of a health and wellness spa stay.

The week-long program included diet modification, meditation and colonic hydrotherapy; voluntary participation in low-risk hatha and Vishnu flow-yoga programs; and a cleansing juice fast providing a very low calorie diet of approximately 800 calories per day.

Stress management was provided through daily structured meditation and yoga programs, as well as time for personal meditation, deep breathing and heightened awareness.

In preparation for their visit to the spa, participants were asked to modify their diet three to four days prior to arrival by replacing their normal meals with fruit, sprouts, raw and steamed vegetables, salads, herbal teas and morning prune juice, by taking laxative teas or herbal laxatives nightly, and by avoiding pasta, meat, cheese, caffeine, alcohol and processed foods.

The participants, who included 13 women and two men between the ages of 21 and 85, had no history of significant medical, neurological or psychological conditions. Each underwent a thorough physical evaluation before and after their week at the spa, including weight, Body Mass Index (BMI), blood pressure, complete blood count (CBC), liver function, EKG, and tests to determine their levels of cholesterol, triglycerides, thyroid hormone and the concentration of metals such as mercury and lead in their blood.

The researchers also evaluated various psychological and spiritual measures among the participants before and after their week at the spa.

The results showed the spa program resulted in a weight decline, on average, of 6.8 pounds, a 7.7 percent decrease in diastolic blood pressure, a decrease in mercury, sodium and chloride levels, a 5.2 percent decline in cholesterol level and lowered mean BMI.

Curiously, the decline in cholesterol levels seemed to be associated with a decline in HDLs, the good high-density lipoproteins, which is of some concern, though HDL levels remained within the range regarded as beneficial.

Hemoglobin levels rose by 5.9 percent, but no statistically significant changes in liver or thyroid function or EKG measurements were observed.

While no serious adverse effects were reported by any participant, the researchers noted changes in the participants’ sodium and chloride concentrations, suggesting that those interested in going to a spa program should check with their physician to make sure they do not have any medical problems or medications that could put them at risk for electrolyte disturbances.

Improvements in anger, tension, vigor, fatigue and confusion were also seen, as was a statistically significant improvement in anxiety and depression levels measured by the Spielberger Anxiety Scale and the Beck Depression Inventory.

Participants also reported significant changes in their feelings about spirituality and religiosity.

The researchers noted that it was not possible to differentiate the effects of each of the individual elements of the program to determine which components were responsible for the changes observed.

“This will require an evaluation of one or more elements–such as yoga, very low calorie diet or colonics–in isolation to determine which elements have the most significant effects,” said Newberg.

The researchers plan to study the effects of a spa stay among people with specific diseases, such as diabetes.

Their complete findings will be available in the December issue of Integrative Medicine: A Clinician’s Journal.

Meteorites May Help Find A Common Denominator Between Mars And Earth

April Flowers for redOrbit.com – Your Universe Online
A team of scientists led by the Carnegie Institution for Science recently studied the hydrogen in water from the interior of Mars and found that Mars was formed from building blocks similar to those of Earth, which implies that terrestrial planets such as Earth share a similar water source: chondritic meteorites. The two planets differ, however, in their later evolution.
Unlike Earth, however, rocks on Mars that contain atmospheric volatiles such as water do not get recycled into the planet’s deep interior.
The origin, abundance and history of water on Mars are subjects of much controversy. Although the sculpted channels of the Martian southern hemisphere speak loudly of flowing water, the terrain is ancient, leading planetary scientists to describe early Mars as “warm and wet” and current day Mars as “cold and dry.”
The focus of the debate is on how the interior and crust of Mars formed and how they differ from Earth. The team of scientists, including members from NASA’s Johnson Space Center and the Lunar and Planetary Institute, studied water concentrations and hydrogen isotopic compositions trapped inside crystals within two Martian meteorites to understand the history of Martian water and other volatiles. These meteorites are called shergottites. They are of the same primitive nature, but one is rich in elements such as hydrogen, and the other is not.
The two meteorites, which are pristine samples of various Martian volatile element environments, contain trapped basaltic liquids. One, which has a hydrogen isotopic composition similar to that of Earth, appears to have changed little on its way from the Martian mantle up to the surface of Mars. The second meteorite, however, appears to have sampled Martian crust that had been in contact with the Martian atmosphere. The two meteorites represent two very different sources of water: one sampled water from the deep interior — representing water that existed when Mars formed — and the other sampled the shallow crust and atmosphere.
“There are competing theories that account for the diverse compositions of Martian meteorites,” said Tomohiro Usui. “Until this study there was no direct evidence that primitive Martian lavas contained material from the surface of Mars.”
The team inferred that Martian surface water has had a different geologic history than Martian interior water because the two meteorites have such different hydrogen isotopic compositions. They claim that the difference is most likely because atmospheric water has preferentially lost the lighter hydrogen isotope to space, and has preferentially retained the heavier hydrogen isotope (deuterium).
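For reference, geochemists conventionally express such hydrogen isotopic compositions in delta notation relative to the VSMOW ocean-water standard; this is the standard convention rather than a detail given in the article:

```latex
% Delta-D notation: deuterium enrichment in parts per thousand,
% relative to Vienna Standard Mean Ocean Water (VSMOW).
\delta D = \left( \frac{(D/H)_{\text{sample}}}{(D/H)_{\text{VSMOW}}} - 1 \right) \times 1000\ \text{\textperthousand}
```

Preferential escape of the lighter isotope to space raises the D/H ratio, and hence the δD, of water that has equilibrated with the atmosphere, which matches the signature the team reports for the crust-sampling meteorite.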
An important mystery could be solved by the fact that the enriched meteorite has incorporated crustal and atmospheric water. Scientists have been wondering whether Martian meteorites that are enriched in components such as water acquired them from an enriched, deep mantle, or whether they have been overprinted by interaction with the Martian crust.
“The hydrogen isotopic composition of the water in the enriched meteorite clearly indicates that they have been overprinted, so this meteorite tells scientists more about the Martian crust than about the Martian mantle,” Alexander said. “Conversely, the other meteorite yields more information about the Martian interior.”
The water concentrations in the meteorites are very different as well. The non-enriched meteorite has a rather low concentration, meaning that the interior of Mars is rather dry. The enriched meteorite, however, has 10 times more water than the other one, suggesting that the surface of Mars could have been very wet at one time.
“To understand the geologic history of Mars, more information about both of these environments is needed,” Carnegie’s Conel Alexander said.
The team’s findings will be published in the December issue of Earth and Planetary Science Letters.

Apes Have Mid-Life Crises, Suggesting Biological Link

redOrbit Staff & Wire Reports – Your Universe Online
Apes, like humans, experience a decline in happiness during middle age, which rebounds as they approach old age, according to a new study that suggests the infamous mid-life crisis may have biological, rather than sociological, roots.
Humans across many cultures report a dip in happiness during their late-40s when compared with their life satisfaction during younger and older years.
In the current study, an international team of researchers studied 508 great apes in captivity, and found that the animals' sense of wellbeing bottomed out during their late 20s to mid-30s — the ape equivalent of middle age — before recovering in old age.
“There’s a common understanding that there’s a dip in wellbeing in middle age, and that’s been found in many datasets across human cultures,” said study leader Alex Weiss, a psychologist at Edinburgh University, during an interview with The Guardian.
“We took a step back and asked whether it’s possible that instead of the midlife crisis being human-specific, and driven only by social factors, it reflects some evolved tendency for middle-aged individuals to have lower wellbeing.”
The findings suggest that the midlife crisis may have its roots in the biology humans share with our nearest evolutionary cousins.
The researchers asked zookeepers, caregivers and others who worked with 508 male and female chimpanzees and orangutans of various ages to complete questionnaires about the animals.
The apes included two separate groups of chimpanzees, and a group of orangutans from Sumatra or Borneo. All resided in zoos, sanctuaries and research centers across the U.S., Australia, Japan, Canada and Singapore.
The survey included queries about each animal’s mood, their pleasure when socializing and their success at achieving certain goals.
The researchers also asked the caregivers to describe how they would feel about being the ape for a week.
The responses were scored from one to seven.
The researchers analyzed the caregivers' responses, and found that wellbeing in the apes dropped during middle age, but rose again as the animals moved into old age.
Wellbeing was lowest, on average, at 28.3 and 27.2 years old for the two groups of chimpanzees, and at 35.4 years old for the orangutans. Considering that great apes often live to 50 or more in captivity, these ages correspond to middle age.
“In all three groups we find evidence that wellbeing is lowest in chimpanzees and orangutans at an age that roughly corresponds to midlife in humans,” Weiss said.
“On average, wellbeing scores are lowest when animals are around 30 years old.”
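The “U-shape” described here is typically identified by fitting a quadratic in age to the wellbeing scores and locating its minimum. A minimal sketch of that kind of analysis, using fabricated data rather than the study's dataset, might look like this:

```python
# Minimal sketch of a U-shape analysis: fit wellbeing ~ age + age^2 and find
# the age at which the curve bottoms out. The data below are fabricated for
# illustration; the study's actual dataset and model are more involved.
import numpy as np

ages      = np.array([5, 10, 15, 20, 25, 30, 35, 40, 45, 50], dtype=float)
wellbeing = np.array([5.1, 4.8, 4.4, 4.1, 3.9, 3.8, 3.9, 4.2, 4.5, 4.9])

# Least-squares quadratic fit: wellbeing = c2*age^2 + c1*age + c0
c2, c1, c0 = np.polyfit(ages, wellbeing, deg=2)

# A parabola's vertex sits at -c1 / (2*c2); with c2 > 0 this is the minimum.
age_at_minimum = -c1 / (2 * c2)
print(f"wellbeing bottoms out near age {age_at_minimum:.1f}")
```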
The researchers said the temporary fall in ape wellbeing could be the result of depressed apes dying younger, or of age-related changes in the brain that are also present in humans.
Weiss noted that the similarities between humans, chimps and orangutans go beyond genetics and physiology, and said he believes the findings could provide a deeper understanding of the emotional crisis some humans experience during middle age.
“If we want to find the answer as to what’s going on with the midlife crisis, we should look at what is similar in middle-aged humans, chimps and orangutans,” he said.
Humans and apes face similar social pressures and stress factors, he added.
“You don’t have the chimpanzee hitting mid-life and suddenly they want a bright red sports car, but there may be other things that they want like mating with more females or gaining access to more resources,” he told BBC Nature.
But other scientists are skeptical about the findings.
“What can produce a sense of wellbeing or contentedness that varies across the lifespan like this? It’s hard to see anything in an ape’s life that would have that sort of pattern, that they would cogitate about,” said Robin Dunbar, professor of evolutionary psychology at Oxford University, during an interview with The Guardian.
“They’re not particularly good at seeing far ahead into the future, that’s one of the big differences between them and us.”
Psychology professor Alexandra Freund at the University of Zurich also found the study's findings dubious, saying that the concept of a midlife crisis was questionable even in humans.
“In my reading of the literature, there is no evidence for the midlife crisis. If there’s any indication of decline in emotional or subjective wellbeing it is very small and in many studies, it’s not there at all,” she said.
But study co-author Andrew Oswald, an economics professor at the University of Warwick who has researched human happiness for 20 years, says the phenomenon is authentic.
“The mid-life crisis is real and it exists in…our closest biological relatives, suggesting that it is probably explained by biology and physiology,” he said.
“One of the reasons we decided to look at ape data was that when you study humans, that U-shape is exactly the same when you adjust statistically for things like education, income and marriage,” he told BBC.
It was “quite mind-blowing… to find it in apes,” he said.
“Maybe evolution needed us to be at our most dissatisfied in midlife,” he said.
Weiss said the study provides interesting opportunities for future research, since the mid-life crisis has long been thought to be specific to human society.
“What [this study] says is that it may be a part of the picture, but it’s clearly not all of the picture.”
“We have to look deeper into our evolutionary past and that of the common ancestors that we share with chimpanzees, orangutans and other apes.”
The study was published online on Monday in the journal Proceedings of the National Academy of Sciences (PNAS).

Drinking Alcohol Reduces Risk Of Death When Injured, Hospitalized

Michael Harper for redOrbit.com — Your Universe Online

Alcohol can make a person act in an uncharacteristic manner, persuading them to do things they would never do with a sober mind. In fact, some people (depending on their poison) are even taken to feeling indestructible with the right amount of spirits in them. This is all very well known behavior, of course.

One new study, however, is suggesting that people who become injured and require hospitalization are more likely to live if they have alcohol in their system.

“This study is not encouraging people to drink,” states Lee Friedman, author of this study and University of Illinois at Chicago (UIC) injury epidemiologist. While this study has found that those patients with some alcohol in their system were more likely to survive their injuries, they were not able to conclude that drunk people are somehow invincible. In fact, Dr. Friedman's research found people are more likely to sustain some sort of injury after they've been drinking. However, once these patients become injured as a result of boozing, the alcohol may act as a sort of protection.

“After an injury, if you are intoxicated there seems to be a pretty substantial protective effect,” said Dr. Friedman today in a statement.

“The more alcohol you have in your system, the more the protective effect.”

To conduct his research, Dr. Friedman analyzed the data from the Illinois Trauma Registry for more than 190,000 patients between 1995 and 2009. Looking specifically at those trauma patients who had a blood alcohol content of zero to 0.5 percent, Dr. Friedman found 6,733 of these patients died in the hospital.

Dr. Friedman suggests the presence of alcohol in a patient's blood directly corresponds with their likelihood of surviving trauma. For instance, those who had been admitted to the hospital for fractures, internal injuries and open wounds were more likely to survive if they had a bit of alcohol in their system. There was one notable exception, however: patients with some booze in their blood who had been burned were not protected by the alcohol.

According to Dr. Friedman's study, those patients with a low blood alcohol concentration (below 0.1 percent) were less likely to survive their trauma.

“At the higher levels of blood alcohol concentration, there was a reduction of almost 50 percent in hospital mortality rates,” Dr. Friedman said.

“This protective benefit persists even after taking into account injury severity and other factors known to be strongly associated with mortality following an injury.”
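At its core, this kind of registry analysis is a stratified mortality tabulation. The sketch below uses hypothetical column names and a few fabricated rows, not the Illinois Trauma Registry's actual schema, and omits the injury-severity adjustments Friedman describes:

```python
# Sketch of a stratified in-hospital mortality tabulation by blood alcohol
# band. Column names and the example rows are hypothetical; the real
# registry analysis also adjusts for injury severity and other covariates.
import pandas as pd

patients = pd.DataFrame({
    "bac":  [0.00, 0.00, 0.04, 0.09, 0.12, 0.21, 0.00, 0.15],
    "died": [1,    0,    0,    1,    0,    0,    0,    0   ],
})

bands = pd.cut(patients["bac"], bins=[-0.001, 0.0, 0.1, 0.5],
               labels=["none", "low (<0.1)", "high (0.1-0.5)"])
print(patients.groupby(bands)["died"].mean())   # crude mortality per band
```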

Not many studies have been conducted to understand the correlation between alcohol and mortality rates in hospitals. Some studies have been conducted to measure the protective properties of alcohol on animals, but according to Dr. Friedman, these studies have mostly contradicted one another.

In conclusion, Dr. Friedman stressed how important it is for doctors and clinicians to recognize the signs of inebriation and to understand how alcohol can affect a patient's treatment. While further research still needs to be done to better understand why alcohol seems to protect the injured, Dr. Friedman suggests hospitals and first responders could one day begin treating these patients with drugs that mimic the effects of alcohol.

Mosquitoes May Fly Well In The Rain, But They Fail In The Fog

Lee Rannals for redOrbit.com — Your Universe Online

Researchers have determined that despite their amazing ability to fly in the rain, mosquitoes fail miserably while trying to fly in heavy fog.

Scientists reported at the 65th meeting of the American Physical Society’s (APS) Division of Fluid Dynamics that just like airplanes, the blood-sucking insects are grounded when fog thickens.

“Raindrop and fog impacts affect mosquitoes quite differently,” Georgia Tech researcher Andrew Dickerson said in a statement. “From a mosquito’s perspective, a falling raindrop is like us being struck by a small car. A fog particle — weighing 20 million times less than a mosquito — is like being struck by a crumb. Thus, fog is to a mosquito as rain is to a human.”

Mosquitoes get struck by a raindrop once every 20 seconds on average during a rainstorm, yet still manage to stay aloft. A cloud of fog, however, presents a different scenario.

Water droplets in a fog cloud are so small that, regardless of their abundance, they do not weigh down a mosquito enough to affect its ability to fly. So the team set out to determine why the insects nevertheless fail in the haze, turning to high-speed videography for help in making that determination.

They found that mosquitoes have a reduced wing-beat frequency in heavy fog, but retain the ability to generate sufficient force to lift their bodies, even after significant dew deposition. However, they are unable to maintain the upright position required for sustained flight.

Fog impacts a mosquito’s halteres, which are the insect’s primary flight control mechanism. Halteres are small knobbed structures that evolved from the hind wings and flap anti-phase with the wings, providing gyroscopic feedback through Coriolis forces.

These halteres are comparable in size to the fog droplets, and they flap about 400 times each second, striking thousands of drops per second. Although they can normally repel water, repeated collisions with 5-micron fog particles hinder flight control and lead to flight failure, according to the research.

“Thus the halteres cannot sense their position correctly and malfunction, similarly to how windshield wipers fail to work well when the rain is very heavy or if there is snow on the windshield,” Dickerson commented. “This study shows us that insect flight is similar to human flight in aircraft in that flight is not possible when the insects cannot sense their surroundings.”

He said that for humans it is visibility that hinders flight in fog, but for insects it is their gyroscopic flight sensors that suffer.

Canadian Researchers Find Correlation Between Childcare and Obesity

Brett Smith for redOrbit.com – Your Universe Online

With the snack company Hostess recently making headlines and Thanksgiving just days away, many people are taking the opportunity to focus on the problem of obesity, childhood obesity in particular.

A timely study was published recently in the Journal of Pediatrics that linked child care by an extended family member or daycare with a 50 percent increased risk for childhood obesity.

“We found that children whose primary care arrangement between 1.5 and 4 years was in daycare-center or with an extended family member were around 50 percent more likely to be overweight or obese between the ages of 4-10 years compared to those cared for at home by their parents,” said lead author Marie-Claude Geoffroy, a researcher affiliated with the University of Montreal at the time of the study.

“This difference cannot be explained by known risk factors such as socioeconomic status of the parents, breastfeeding, body mass index of the mother, or employment status of the mother,” she added.

The study included over 1,600 Québec families with children born in 1997 and 1998. Mothers were interviewed about the type of care for their children at 1.5 years, 2.5 years, 3.5 years, and 4 years. The children were categorized by the type of care they had received the most: ‘daycare center’ (30 percent), ‘family daycare’ (35 percent), ‘parents’ (19 percent), ‘extended family member’ (11 percent), or ‘nanny’ (5 percent).

Over the six-year course of the study, the researchers determined the children’s body mass index by recording their weight and height, then classified children as overweight or obese based on international standards (IOTF).

Although the scientists found a strong correlation between the type of care and the risk for obesity, they were unable to uncover a mechanism responsible for the link.

“Diet and physical activity are avenues to follow,” said co-author Sylvana Côté. “Parents don’t have to worry; however, I suggest to parents they ensure their children eat well and get enough physical activity, whether at home or at daycare.”

The scientists noted that sending young children to daycare doesn't necessarily condemn them to becoming overweight, and could even be an opportunity to establish healthy habits.

“The enormous potential of the impact of daycare on the nutritional health of children 2-5 years of age was also noted by the Extenso unit of the University of Montreal Nutrition Reference Centre, which has developed a Web portal specifically devoted to children in daycare,” said Jean Séguin, who also co-authored the study.

Last week, prominent member of Congress Rep. Dennis Kucinich, D-Ohio, spoke out against the childhood obesity epidemic and proposed a bill aimed at reducing the visibility of junk food marketed to this segment of the population.

“Everyone here knows the problem we have with childhood obesity in America,” he said standing on the House floor. “Childhood obesity is at an epidemic level. We all know young people who have consumed various types of food that has left them in a condition that is unhealthy. And yet did you know that we are actually giving tax deductions out to big companies that go ahead and advertise and market products that contribute to childhood obesity?”

“So what I’m doing is introducing a bill right now that would protect children’s health by denying any deduction for advertising and marketing that’s directed at children to promote the consumption of food at fast-food restaurants or of any kind of food that’s of poor nutritional quality,” he said.

Adolescents Looking To Bulk Up Turn To Controversial Steroid Use

Lawrence LeBlond for redOrbit.com – Your Universe Online

The new “in” when it comes to body image is large, lean and muscular. And to get that way, many teenagers are turning to diet and exercise, protein powders and, more worrisome, steroid use in the hopes of enhancing muscle development. Although these techniques have in the past been seen mainly among boys, in some cases they are nearly as widespread among girls, according to a new study published Monday in the journal Pediatrics.

The data come from a study of close to 2,800 kids and teens at 20 different middle and high schools in the Minneapolis/St. Paul area. The study, which took place during the 2009/10 school year, found that Asian students were three to four times more likely than white students to have used steroids in the past year. Marla Eisenberg from the University of Minnesota and her colleagues noted that most of the Asian participants were Hmong.

According to Eisenberg´s research, about five percent of middle and high school students have reported using anabolic steroids to put on muscle. In addition to steroid use, more than a third of boys and a fifth of girls in the study also reported using protein powder or shakes to bulk up, and between 5 and 10 percent used a non-steroid muscle-enhancing substance, such as creatine.

These findings suggest that “increasing muscle strength or mass or tone is an important piece of body image for both boys and girls,” said Eisenberg, professor of pediatrics at the University of Minnesota School of Medicine. “Kids really are seeing that as a goal.”

And it's not a behavior isolated to athletes, she said: even students who did not play sports reported muscle-enhancing efforts.

Eisenberg thinks the media may be one factor driving teens to do more to get their bodies toned; another could be pressure from athletic instructors.

Dr. Linn Goldberg, of Oregon Health & Science University in Portland, said the pressure for kids to start using steroids starts in high school. “You get the influence of older teens in high school, so when you’re a 14-year-old that comes in, you have 17-year-olds who are the seniors, and they can have great influence as you progress into the next stage of your athletic career.”

Eisenberg said teenage interest in building their physiques is nothing new. What is new, however, is a social and cultural emphasis “not just about having a healthy physique,” but about achieving the “perfect” muscular body, which ultimately is “just one more cultural ideal that young people find hard to achieve.”

As a result, the good reasons teens have to be physically active (skill development, having fun and general fitness) run the risk of being overshadowed by the goal of looking like someone in a magazine ad or an athlete in their favorite sport, she said.

And given greater awareness of performance-enhancing and muscle-building substances, teens know there are many different ways to bulk up, many of which “are not recommended and not safe, but may be quite effective,” Eisenberg explained.

This study is a reminder that parents need to be aware that these behaviors are going on and that they need to be discussed with their children, noted Joel Brenner, medical director of the Sports Medicine Program at Children’s Hospital of the King’s Daughters in Norfolk, Va., and chair of the American Academy of Pediatrics Council on Sports Medicine and Fitness (COSMF).

Steroid use is particularly dangerous and should be avoided, but inappropriate changes to diet or exercise can also be hazardous, he added. Parents need to stay aware of their child's goals and make sure their activities remain “part of an overall fitness program,” Brenner told USA Today's Michelle Healy.

Eisenberg said that among those surveyed, student-athletes were more likely than their peers to use most muscle-building methods, but steroid use was equally common among athletes and non-athletes.

The study also found higher adolescent use of steroids and other muscle-building substances than most previous research, which is “a cause for concern.” However, it isn't clear whether the findings would apply outside the Twin Cities area, or among wealthier students, as the research focused mainly on poor and middle-class populations.

Anabolic steroids are synthetic versions of testosterone, the male sex hormone. Steroids are generally prescribed to treat conditions involving hormone deficiency or muscle loss, but when they are used for non-medical purposes, they are often administered at much higher dosages, according to the National Institute on Drug Abuse.

Overuse of steroids can cause mood swings, can stunt growth and, in younger kids, can accelerate puberty. Steroid use has become a mainstay in professional sports, including baseball, football and boxing. Experts have worried that the drive to get ahead at any cost could trickle down to college athletes and also to high school and even middle school athletes.

Goldberg, co-developer of the ATLAS & ATHENA program to prevent steroid and other substance use on high school teams, said it’s important to give teens healthier alternatives to build muscle.

“I would stay away from all supplements, because you don’t know what’s in them,” Goldberg, who wasn’t involved in the new study, told Genevra Pittman of Reuters Health.

“What’s important is to teach kids how to eat correctly,” he said. Goldberg said getting enough protein through food, eating breakfast and avoiding muscle toxins like alcohol and marijuana can all help young athletes get stronger without shakes or supplements.

Changes To Facebook’s EdgeRank Draw The Ire Of Many

Michael Harper for redOrbit.com — Your Universe Online

Facebook is once again having to mend fences with the businesses that use the social networking site as an advertising tool.

Some companies, including the NBA's Dallas Mavericks, have recently complained that their posts are being seen by fewer and fewer users. Facebook recently retooled the algorithm used to distribute these posts to users, a move the companies say drastically affected the way users engaged with them, reports Donna Tam of CNET.

In a press conference held on Friday, Facebook explained the algorithm used to determine who sees what in their timeline.

Called EdgeRank, Facebook's algorithm is designed to keep unwanted and irrelevant content out of a user's feed. Will Cathcart, a product manager for the social giant, explained that if a user ever says they don't want to see updates from a particular company, EdgeRank works to make sure they don't.

In addition to explaining what doesn't show up in a user's news feed, Cathcart also explained what stories do appear there. In order to make it into a user's news feed, a post has to meet a few requirements, writes Brandon Bailey for Silicon Beat.

For instance, if a user has interacted with content from the source in the past, the post is more likely to appear in the news feed. If a user has commented on or liked a company's posts before, EdgeRank will push future posts through. If a user has ignored past posts, however, it's less likely further posts will show up in their feed.

Likewise, if a user has interacted with a similar post in the past, then these new posts are more likely to make an appearance in the feed.

EdgeRank also looks at interaction on a larger scale: if thousands of Facebook users are interacting with a post (commenting on it, sharing it, and so on), then it's more likely to make it into more users' feeds.

When Facebook reworked EdgeRank in September, it also began to factor in elements which may lead users to view certain posts as spam. As such, EdgeRank has been working to keep this potentially spammy material out of users' news feeds.
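
Facebook has never published EdgeRank's actual formula, but it has been publicly described as a per-story score built from affinity (the user's history with the source), edge weight (the value of the interaction type) and time decay, with spam signals pushing scores down. The sketch below is a hypothetical reconstruction along those lines; every name and weight in it is illustrative, not Facebook's code.

    # Hypothetical EdgeRank-style scoring; all names and weights are illustrative.
    def edge_score(affinity, edge_weight, age_hours, spam_penalty=1.0):
        """Score one interaction ("edge") on a story.

        affinity    -- how often this user engages with the source (0..1)
        edge_weight -- value of the interaction type (share > comment > like)
        age_hours   -- age of the interaction; older edges count for less
        """
        time_decay = 1.0 / (1.0 + age_hours)
        return affinity * edge_weight * time_decay * spam_penalty

    def story_score(edges):
        # A story's rank is the sum of its edge scores; stories clearing a
        # threshold would make it into the user's news feed.
        return sum(edge_score(**e) for e in edges)

    post_edges = [
        {"affinity": 0.8, "edge_weight": 3.0, "age_hours": 2.0},   # a recent share
        {"affinity": 0.5, "edge_weight": 1.0, "age_hours": 12.0},  # an older like
    ]
    print(story_score(post_edges))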

Naturally, if a business or organization wants to pay for the privilege of a promoted post, more users will see that post in their feeds. Promoted posts are relatively new and have been heavily pushed by Facebook in the months since its lackluster IPO as a way to bring in extra dollars. Companies large and small can set up free pages for their business and have their posts and stories show up organically in users' news feeds without paying anything extra.

With promoted posts, however, businesses can pay to have their posts appear front and center in more users' feeds, ensuring more people see and interact with them. The number of people reached still depends on how many fans the company has on the network, a number which closely corresponds with the size of the company or organization.

Matt Idema, a product marketing director for ads at Facebook, said the company created promoted posts to give all businesses, large and small, an equal opportunity when it comes to reaching their fans.

“We received significant adoption among small businesses. Several hundred thousand have used it,” said Idema, speaking with Forbes.

Yet for all the success Facebook says it's having with promoted posts, it's never good to have large, high-profile organizations, such as the Dallas Mavericks, decide to take their advertising dollars elsewhere.

Sunita Williams And Her Expedition 33 Crew Land In Kazakhstan Safely

[WATCH VIDEO: Expedition 33 Crew Returns]

Lawrence LeBlond for redOrbit.com – Your Universe Online

After spending 127 days in space, 125 of them aboard the International Space Station, the three members of Expedition 33 landed safely in their Soyuz TMA-05M spacecraft north of Arkalyk, Kazakhstan at 7:56 a.m. Kazakhstan time on November 19 (7:56 p.m. EST Nov. 18).

Commander Sunita Williams and Flight Engineers Akihiko Hoshide and Yuri Malenchenko wrapped up their four-month mission, which began on July 15, with a pre-dawn landing. It was the first dark landing for a station crew since April 6, 2006, when the Expedition 12 crew members returned.

Before departing the space station, Williams officially handed over the reins to Kevin Ford, who now becomes Commander of Expedition 34. Ford and his crew, Russian cosmonauts Oleg Novitskiy and Evgeny Tarelkin, arrived at the station on October 25 and will remain as a skeleton crew until a Soyuz brings three new members to the orbiting lab in December.

While aboard the Space Station, Williams and her team advanced the scope of research by conducting several experiments in physical science, Earth observation and technology demonstrations. Notable research included radiation level testing, assessing the effects of microgravity on the spinal cord, and investigating dynamic processes of Earth, such as glacial melt and ecosystem impacts.

The crew also participated in high-intensity, low-volume exercise training through an innovative new technique called the Integrated Resistance and Aerobic Training Study-Sprint. This experiment was designed to test how to prevent loss of muscle, bone and cardiovascular functions while in space.

The Expedition 33 crew also managed a number of resupply visits to the station, most notably the inaugural supply mission for SpaceX. In addition, the crew participated in several challenging spacewalks, including one lasting more than six hours in which Williams and Hoshide worked to repair a leaky radiator.

During their time aboard the space station, Williams and her crew orbited the Earth 2,032 times and traveled more than 54 million miles. Williams ranks sixth among Americans for the most cumulative days spent in space (322), and second among women. She also holds the record for the most cumulative spacewalk time by a female: a total of 50 hours and 40 minutes. Malenchenko has spent 642 days in space over five flights.
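
Those mileage figures hang together arithmetically: 2,032 orbits in roughly 127 days is about 16 orbits a day, or a 90-minute orbit, and 54 million miles spread over those orbits comes out close to the circumference of the station's orbit a few hundred miles up. A quick check (the 250-mile altitude is an assumed round number):

    import math

    days_in_space = 127
    orbits = 2032
    total_miles = 54e6

    print(orbits / days_in_space)             # ~16 orbits per day
    print(24 * 60 * days_in_space / orbits)   # ~90-minute orbital period
    print(total_miles / orbits)               # ~26,600 miles per orbit

    # Compare with the circumference of a circular orbit ~250 miles up.
    earth_radius_miles = 3959
    print(2 * math.pi * (earth_radius_miles + 250))   # ~26,400 miles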

Effect Of Trance-like States On The Brain Studied Using Brazilian Mediums

Alan McStravick for redOrbit.com – Your Universe Online

A new study in this week's edition of the online open access journal PLOS ONE details how engaging in psychography affects practitioners' brain activity. Psychography, or automatic writing, is a technique used by mediums in an effort to free-write messages from the deceased or from spirits.

Researchers from Thomas Jefferson University and the University of Sao Paulo in Brazil embarked on this study, analyzing the cerebral blood flow (CBF) of Brazilian mediums during this mystical practice. The team determined that the mediums' brain activity decreased significantly when they entered the mediumistic dissociative state.

To collect the data, the researchers selected 10 mediums and injected them with a radioactive tracer so their brain activity could be visualized during both normal writing and psychography. Of the 10 mediums observed, five were considered experienced while the other five were less expert. To observe the brain activity, the researchers employed SPECT (single photon emission computed tomography), allowing them to see which areas of the brain were active and inactive at different points during the experiment.

“Spiritual experiences affect cerebral activity, this is known. But, the cerebral response to mediumship, the practice of supposedly being in communication with, or under the control of the spirit of a deceased person, has received little scientific attention, and from now on new studies should be conducted,” says Andrew Newberg, MD, director of Research at the Jefferson-Myrna Brind Center of Integrative Medicine and a nationally-known expert on spirituality and the brain, who collaborated with Julio F. P. Peres, Clinical Psychologist, PhD in Neuroscience and Behavior, Institute of Psychology at the University of Sao Paulo in Brazil, and colleagues on the research.

The mediums had between 15 and 47 years of automatic writing experience, having performed the act as many as 18 times a month. Each was right-handed, found to be in satisfactory mental health and not currently using any type of psychiatric drugs. All reported that the trance-like state associated with psychography was achieved during the study, and that they were in a regular state of consciousness during the normal writing control task.

The data collected from the experienced and less-expert mediums showed two different outcomes. The SPECT scans focused on the left hippocampus (limbic system), the right superior temporal gyrus, and the frontal lobe regions of the left anterior cingulate and right precentral gyrus, comparing activity during psychography with activity during normal, non-trance writing. The experienced psychographers showed lowered activity in these areas, while the less-expert psychographers showed an increase in CBF in the same areas. These regions matter because of their association with reasoning, planning, generating language, movement and problem solving. For the experienced mediums, the lowered activity suggests, according to the researchers, an absence of focus, self-awareness and consciousness during psychography.

As mentioned, the less-expert psychographers showed the opposite pattern from their more experienced counterparts. Their increased CBF in the same frontal areas may reflect a more deliberate, effortful attempt at performing psychography.

The team pointed out that since none of the mediums had current mental disorders, the data support existing evidence that dissociative experiences are not uncommon in the general population and are not necessarily indicative of a mental disorder, especially when experienced in a religious or spiritual context. They believe additional research should be conducted to establish criteria for distinguishing between healthy and pathological dissociative expression as it relates to mediumship.

The team also performed a detailed analysis of the writing samples collected. The complexity scores for the trance-induced writing were much higher than those for the control writing, and the more experienced mediums showed the highest complexity scores, which would ordinarily be expected to require more CBF activity in the frontal and temporal lobes, not less. The writings composed during psychography typically involved ethical principles, the importance of spirituality, and bringing together the fields of science and spirituality.

The researchers have developed several hypotheses for why the data showed what they did. One is that as frontal lobe activity decreases, the areas of the brain that support mediumistic writing are further disinhibited, allowing the overall complexity of the writing to increase, a process similar to what is seen during alcohol and drug use. According to Newberg, “While the exact reason is at this point elusive, our study suggests there are neurophysiological correlates of this state.”

“This first-ever neuroscientific evaluation of mediumistic trance states reveals some exciting data to improve our understanding of the mind and its relationship with the brain. These findings deserve further investigation both in terms of replication and explanatory hypotheses,” states Newberg.

Lonesome George’s Species May Not Be Extinct After All

redOrbit Staff & Wire Reports – Your Universe Online

At the time of his death, it appeared that Lonesome George was the last member of his species, but new research suggests that his breed of giant tortoises might live on after all.

According to Jennifer Viegas of Discovery News, experts at Yale University have reportedly discovered DNA evidence suggesting that members of the species Chelonoidis abingdoni could still exist.

Those researchers collected genetic material from more than 1,600 giant tortoises and discovered 17 hybrids with recent ancestry from Lonesome George's species, some of which could have been sired by purebred C. abingdoni tortoises, Viegas added. Their findings have been published in the journal Biological Conservation.

The Yale researchers conducted genetic testing at an area known as Volcano Wolf on the island of Isabela, one of the Galapagos Islands (C. abingdoni's traditional home is nearby Pinta Island), explained Sasha Ingber of National Geographic News. There, they managed to identify three male, nine female and five juvenile tortoises (less than 20 years of age) that share DNA with Lonesome George's subspecies.

The presence of juvenile tortoises gives the authors hope that they will be able to help resurrect the species.

“Our goal is to go back this spring to look for surviving individuals of this species and to collect hybrids,” Adalgisa ‘Gisella’ Caccone, senior author of the study and a research scientist at the university’s Department of Ecology and Evolutionary Biology, told Viegas. “We hope that with a selective breeding program, we can reintroduce this tortoise species to its native home.”

“This isn’t the first time Chelonoidis nigra abingdoni has been revived: The massive reptiles were last seen in 1906 and considered extinct until the 1972 discovery of Lonesome George, then around 60 years old, on Pinta Island,” added Ingber. “The population had been wiped out by human settlers, who overharvested the tortoises for meat and introduced goats and pigs that destroyed the tortoises’ habitat and much of the island’s vegetation.”

In related research, scientists from the University of Florence are investigating baby Hermann's tortoises in order to determine the effect of sperm storage on the creatures' fertilization process, BBC Nature's Michelle Warwicker reported on Friday.

According to Warwicker, female members of the species tend to mate with multiple male partners and are capable of storing sperm within their bodies for multiple years. The researchers determined that when clutches sired by multiple fathers hatched, the mating order of those fathers did not affect the chances of successful fertilization.

While prior research had suggested that the last male to participate in the mating process would have been responsible for the majority of the offspring, Dr. Sara Fratini and her colleagues found evidence suggesting that the sperm tended to become mixed in the oviduct of the female Hermann’s tortoises, BBC Nature reported.

“Sperm storage has been frequently reported in reptiles and birds, and is associated with a promiscuous mating system,” Warwicker said. “Chelonian species are known for their long-term sperm storage, and females are capable of storing viable sperm for three or four years in specialized tubes within their oviduct.”

“To better understand this system the team set up a series of planned matings between Hermann’s tortoises (Testudo hermanni hermanni) and conducted paternity tests on tortoise hatchlings from 16 egg clutches,” she added. “They found that 46% of the clutches had been ‘multi-sired’: fertilized by two or three males.”

Furthermore, they determined that in egg clutches fertilized by at least three different males, a “significant” share of the sperm distribution came from earlier partners, Warwicker said. Dr. Fratini also believes that female tortoises might intentionally use older sperm before newer sperm, in order to make sure that it is still effective.

Fratini, along with Giulia Cutuli, Dr. Stefano Cannicci and Professor Marco Vannini, detail their findings in the latest edition of the journal Behavioral Ecology and Sociobiology.

NASA Sun Observer Captures Two Solar Eruptions Over Four Hours

[ Watch the Video: Double Prominence Eruptions ]

Lawrence LeBlond for redOrbit.com – Your Universe Online

On November 13, the Sun emitted an M6-class solar flare; M-class flares are among the weakest events still able to cause some disturbances on Earth. Now, just a few days later, the Sun is at it again.

NASA's Solar Dynamics Observatory (SDO) caught spectacular images and video of the Sun bursting with two prominence eruptions over a four-hour period on November 16, between the hours of 1 a.m. and 5 a.m. EST. The SDO captured the event in the 304 Angstrom wavelength of extreme ultraviolet light.

While some solar flares can potentially disrupt satellites and electrical systems around Earth, this latest “double trouble” eruption was aimed away from the third rock from the Sun, so we should be out of harm´s way.

Still, the event was nothing short of stunning: the short NASA video shows a red-glowing loop of plasma shooting out from the surface of the Sun, so massive that it extended beyond the SDO's field of view.

According to NASA, prominence plasma flows along a tangled and twisted structure of magnetic fields generated by the Sun's internal dynamo. When the structure becomes unstable, the prominence erupts and bursts outward, releasing plasma.

When these prominence eruptions occur in the direction of Earth, harmful radiation is sent hurtling across space. While the radiation cannot penetrate the planet's atmosphere to affect the surface, it does pose a threat to objects orbiting the Earth, such as GPS and scientific satellites, as well as the International Space Station.

The disruptions caused by the radiation can last anywhere from a few minutes to a few hours, depending on the size of the eruption the radiation hails from.

Genetic Variants Could Predict Time Of Death And Whether You Are A Night Owl Or Early Bird

redOrbit Staff & Wire Reports – Your Universe Online
Scientists have for the first time identified a common gene variant which not only helps determine whether a person is an early riser or a night owl, but could also indicate at what time of day an individual is likely to die.

The research, published in this month's edition of the Annals of Neurology, was conducted at the Department of Neurology at Beth Israel Deaconess Medical Center (BIDMC). It revealed variation in a lone nucleotide located near a gene known as “Period 1” between two study groups that exhibited different wake-sleep behavior patterns.

“Many of the body’s processes follow a natural daily rhythm or so-called circadian clock. There are certain times of the day when a person is most alert, when blood pressure is highest, and when the heart is most efficient,” the BIDMC explained in a Friday press statement. “Several rare gene mutations have been found that can adjust this clock in humans, responsible for entire families in which people wake up at 3 a.m. or 4 a.m. and cannot stay up much after 8 at night.”

The variant discovered by the researchers impacts “virtually the entire population” and “is responsible for up to an hour a day of your tendency to be an early riser or night owl,” they added. The discovery “could help with scheduling shift work and planning medical treatments, as well as in monitoring the conditions of vulnerable patients.”

First author Dr. Andrew Lim, an Assistant Professor in the Division of Neurology at the University of Toronto, who began the study about 15 years ago while he was a postdoctoral fellow at the BIDMC Department of Neurology, explained that the circadian clock (commonly referred to as the biological clock) is responsible for determining a person’s sleep habits, as well as their ideal mental and physiological performance times.

He also says that it can influence “the timing of acute medical events like stroke and heart attack.”

Lim and his colleagues began their work several years ago while attempting to discover why seniors were having difficulty sleeping. The BIDMC team and colleagues at Chicago’s Rush University had recruited 1,200 healthy, 65-year-old individuals, who agreed to undergo neurological and psychiatric evaluations once a year, the medical center explained.

“The cohort’s original intent was to determine if there were identifiable precursors to the development of Parkinson’s disease or Alzheimer’s disease,” they said. “As part of the research the subjects were undergoing various sleep-wake analyses using a wristband called an actigraph, which provides a reliable record of an individual’s pattern of activity. Additionally, in order to provide the scientists with information on sleep-wake patterns within a year of death, the participants had agreed to donate their brains after they died.”

“But the investigation took a new turn when Lim learned that the same group of subjects had also had their DNA genotyped,” BIDMC officials added. “Teaming up with investigators from Brigham and Women’s Hospital (BWH), Lim and his colleagues compared the wake-sleep behavior of these individuals with their genotypes. These findings were later verified in a group of young volunteers.”

It was shortly after this that they located the nucleotide variation in the “Period 1” region of the genome. There, they report, 60 percent of people have adenine (A) nucleotide bases and 40 percent have guanine (G) nucleotide bases. Because each individual carries two copies of the site, there is a roughly 36-percent chance of having a pair of A bases, a 16-percent chance of having two G bases, and a 48-percent chance of having one of each.
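
Those genotype percentages follow directly from the allele frequencies: with 60 percent A and 40 percent G, and two independently inherited copies per person, the expected proportions are the standard Hardy-Weinberg products:

    p_A, p_G = 0.60, 0.40   # reported allele frequencies near "Period 1"

    print(p_A * p_A)        # 0.36 -> ~36% chance of an A-A pair
    print(p_G * p_G)        # 0.16 -> ~16% chance of a G-G pair
    print(2 * p_A * p_G)    # 0.48 -> ~48% chance of one of each (A-G)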

“This particular genotype affects the sleep-wake pattern of virtually everyone walking around, and it is a fairly profound effect so that the people who have the A-A genotype wake up about an hour earlier than the people who have the G-G genotype, and the A-Gs wake up almost exactly in the middle,” said BIDMC Chief of Neurology Clifford Saper, who heads the laboratory where Lim began his work.

Additional research determined that the variant could affect the body's circadian rhythm in other ways, including the projected time of death. According to Saper, the average person is most likely to die in the late morning hours, with 11 a.m. being the average time of death.

The researchers went back to the 65-year-old subjects who had passed away since the beginning of the study, and learned that “this same genotype predicted six hours of the variation in the time of death: those with the AA or AG genotype died just before 11 a.m., like most of the population, but those with the GG genotype on average died at just before 6 p.m.”

Along with Lim and Saper, credited co-authors of the study include Drs. Anne-Marie Chang, Joshua M. Shulman, Towfique Raj, Lori B. Chibnik, Sean W. Cain, Katherine Rothamel, Christophe Benoist, Amanda J. Myers, Charles A. Czeisler, Aron S. Buchman, David A. Bennett, Jeanne F. Duffy, and Philip L. De Jager.

Study Finds Swimming Can Help Boost Young Kids’ Development

redOrbit Staff & Wire Reports – Your Universe Online

Learning how to swim at an early age could make youngsters smarter and help them reach developmental milestones faster than the average child, according to research conducted at one Australian university.

According to a Thursday press statement, Griffith University Institute for Educational Research professor Robyn Jorgensen and colleagues surveyed the parents of 7,000 children under the age of five from Australia, New Zealand, and the US over a three-year period.

“A further 180 children aged 3, 4 and 5 years have been involved in intensive testing, making it the world´s most comprehensive study into early-years swimming,” the institute said.

Jorgensen said the study demonstrates that kids who participate in swimming programs during their formative years tend to acquire a vast array of skills earlier than those who do not, and that some of those skills can help them in the transition into formal learning contexts such as pre-school or school.

“The research also found significant differences between the swimming cohort and non-swimmers regardless of socio-economic background,” she added. “While the two higher socio-economic groups performed better than the lower two in testing, the four SES groups all performed better than the normal population.”

The researchers, who hailed from Griffith University, Kids Alive Swim Program and Swim Australia, report that there were no gender differences between the study subjects and the rest of the population.

Those who learned how to swim at a young age tended to reach physical milestones earlier, score “significantly better” in visual-motor skills tasks such as drawing lines and cutting paper, and perform better when completing mathematical tasks, they reported. Furthermore, they were also found to have outperformed non-swimmers in both reading and numerical literacy, and did a better job of expressing themselves verbally as well.

“Many of these skills are highly valuable in other learning environments and will be of considerable benefit for young children as they transition into pre-schools and school,” the researchers said.

Instrument Package Delivered for NASA’s Upcoming MAVEN Mars Mission

redOrbit Staff & Wire Reports – Your Universe Online

The remote sensing instrument package that will be a key part of an upcoming NASA mission investigating how Mars could have lost its atmosphere is ready for integration into the spacecraft.

The $20 million package — which was built at the University of Colorado Boulder (CU-Boulder) Laboratory for Atmospheric and Space Physics (LASP) and includes the Imaging UltraViolet Spectrograph (IUVS) and its electronic control box, the Remote Sensing Data Processing Unit (RSDPU) — was delivered to Lockheed Martin on Friday for integration onto the Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft, the university announced in a prepared statement.

MAVEN, a $670 million mission currently scheduled for a November 2013 launch, will attempt to analyze and comprehend how the loss of atmospheric gases over multiple eons has affected the planet’s climate.

“With the delivery of this package, we are shifting from assembling the basic spacecraft to focusing on getting the science instruments onto the spacecraft,” CU-Boulder geological sciences professor, LASP associate director, and project head Bruce Jakosky said in a statement. “This is a major step toward getting us to launch and then getting the science return from the mission.”

The RSDPU will receive and execute commands, directing the IUVS in the performance of its tasks. The IUVS will then collect UV light and spread it out into a spectrum, which is recorded using imaging detectors. The IUVS will act as the “eyes” of the remote sensing package, explained Nick Schneider, lead MAVEN IUVS scientist and LASP research associate.

“The IUVS allows us to study Mars and its atmosphere at a distance by looking at the light it emits,” Schneider said. “Ultraviolet light is especially diagnostic of the state of the atmosphere, so our instrument provides the global context of the whole atmosphere for the local measurements made by the rest of the payload.”

Twenty-one days after launch, the remote sensing package will be activated for an initial operations test; later in the voyage from Earth to Mars it will be powered up on two additional occasions for in-flight calibration and to ensure that all facets of the machinery are operating properly, CU-Boulder explained.

“MAVEN will be the first mission devoted to understanding the Martian atmosphere, with a goal of determining the history of the loss of atmospheric gases to space through time, providing answers about Mars climate evolution,” the university said. “By measuring the current rate of gas escaping to space and gathering enough information about the relevant processes, scientists should be able to infer how the planet’s atmosphere evolved over time.”

“The MAVEN spacecraft will carry two other instrument suites,” they added. “The Particles and Fields Package, built by the University of California Berkeley Space Science Laboratory with support from LASP and NASA Goddard, contains six instruments that will characterize the solar wind and the ionosphere of the planet. The Neutral Gas and Ion Mass Spectrometer, provided by NASA Goddard, will measure the composition and isotopes of neutral gases and ions.”

Lunar ‘Water Rush’: Robots May Search For Water On The Moon

Brett Smith for redOrbit.com — Your Universe Online

The prospect of finding frozen water on the moon has several companies scrambling to stake a claim in “them thar lunar hills.”

“This is like the gold rush that led to the settlement of California,” said Phil Metzger, a physicist who leads the Granular Mechanics and Regolith Operations Lab, part of Kennedy Space Center’s Surface Systems Office. “This is the water rush.”

Water has already been found on asteroids, and its discovery on the moon represents a top prize for NASA's exploration plans because the resource has so many potential uses for wayfaring astronauts. Composed of two hydrogen atoms and one oxygen atom, water can be turned into everything from rocket fuel to breathable air and drinking water.

One of the companies leading the charge to mine the moon is the Pittsburgh-based Astrobotic Technology. The company is currently in the midst of developing a solar-powered rover designed to search and drill for the frozen water.

“Our intent is to land on the surface of the moon in October 2015 and find water,” said the president of Astrobotic, John Thornton, alluding to his company´s recent deal with SpaceX to launch a lander and rover on a Falcon 9 rocket.

Thornton added that a number of competitors have sprung up, which shows the potential for landing a robotic explorer is real.

“If we were doing something really big and no one else was trying to do it, then it might not be that big,” he said.

Human visitors to the lunar surface never found signs of frozen water as they walked along the moon's equator between 1969 and 1972, and water has never been found in any rock or soil samples collected from the moon. Within the past 15 years, however, several probes have found signs that frozen water not only exists on the moon, but is quite pervasive.

Scientists are also curious to find out whether any frozen water is in the form of a powder, like the type skiers plow through as they swish down a mountainside, or whether it's completely solid ice. Some scientists expect to find evidence of water having seeped down between granules of soil and frozen to create rocks as hard as granite.

“Our best guess is it’s going to be the ice,” Thornton said. “Probably small little pieces of ice mixed in with the regolith.”

According to an official statement on the NASA website, the agency is excited about the chances to use a new resource for deep space exploration.

For its part, Astrobotic said it wants to use the robotic prospector to map where the largest deposits of water and other helpful chemicals are located. The company could then use the information to efficiently extract the materials from the moon. According to Thornton, there are no plans to send water or other lunar samples back to the Earth.

“The beauty of sending a robot is they don’t demand a return ticket,” Thornton said. “Once we know where the water is and what form it is in, we can develop systems to produce it in useable quantities. Water is a critical resource because you can drink it, breathe it and use it for rocket fuel.”

Himalayan Glaciers Will Continue Shrinking Regardless Of Temperature

Michael Harper for redOrbit.com — Your Universe Online

The Himalayan glaciers are going to continue to shrink, no matter which way the temperatures go, according to new research published in the journal Geophysical Research Letters.

Summer Rupper, a geology professor at Brigham Young University, has just returned from research in the Bhutan region of the Himalayas, where her findings indicate that nearly 10 percent of these glaciers will disappear within the next 30 years. She also reported that as these glaciers shrink, they will release 30 percent less melt water back into the ecosystem. Even if temperatures stay the same, Rupper suggests, these glaciers will continue to dwindle away.

Though they are often blamed for the disappearance of glaciers, the increasing temperatures in this region aren't the main culprit, Rupper says. Other factors, such as evaporation, high winds and humidity, also play a key role in glacial melt. And given the massive size of these bodies of ice (some more than 13 miles across), it could take decades for them to fully respond to these imbalances.

“These particular glaciers have seen so much warming in the past few decades that they’re currently playing lots of catch up,” said Rupper in a prepared statement.

In fact, if temperatures were to climb by just 1 degree Celsius, the glaciers would shrink by 25 percent and the annual melt water flowing from them would drop by as much as 65 percent. Though Rupper's research suggests these glaciers will melt regardless of temperature, it's very likely the temperature will increase, greatly accelerating the speed at which these behemoths melt.

According to the geology professor, the only way for these Bhutan glaciers to avoid melting away is for snowfall levels to nearly double. This isn't likely, of course, as the warming temperatures have brought more rain than snow to the area. The rain also contributes to glacial melt, leading to flooding that could devastate the Bhutan region.

“Much of the world’s population is just downstream of the Himalayas,” said Rupper. “A lot of culture and history could be lost, not just for Bhutan but for neighboring nations facing the same risks.”

To conduct her research, Rupper teamed up with graduate students from Columbia University, researchers from the Lamont-Doherty Earth Observatory, NASA and Bhutan´s Department of Hydro-Meteorological Services.

The team hiked through the Himalayan region to reach some of the most secluded glacial hotspots to take their measurements. Once there, they placed a weather station and other monitoring equipment to actively watch the glacier for years to come.

“It took seven days just to get to the target glacier,” said Rupper, who returned from her trip in October. “For our pack animals, horsemen and guides, that terrain and elevation are a way of life, but I’ll admit the westerners in the group were a bit slower-moving.”

In September, a report from the National Research Council (NRC) concluded that glaciers in the Himalayas are, in fact, melting at an alarming rate. Glaciers in the Eastern and Central Himalayas in particular are retreating rapidly, while some glaciers in the western Himalayas are actually growing.

The NRC report differs slightly from Rupper's: it suggests the disappearing glaciers will melt and provide water to the surrounding areas, while Rupper's work suggests the glaciers will also lose mass to evaporation as they disappear, releasing less water back into the rivers and streams.

Autistic College Youth Gravitate Toward STEM Majors

Alan McStravick for redOrbit.com – Your Universe Online

Parents of children with an autism spectrum disorder (ASD) received some much-needed good news in the form of a study published online in the Journal of Autism and Developmental Disorders.

The study, co-authored by Paul Shattuck, PhD, assistant professor at the Brown School at Washington University in St. Louis, confirms a previously held belief that individuals with an ASD typically gravitate toward science, technology, engineering and mathematics (STEM) majors in college. However, the study also shows that young adults with an ASD have a significantly lower rate of college enrollment overall.

“STEM careers are touted as being important for increasing both national economic competitiveness and individual career earning power,” Shattuck says. “If popular stereotypes are accurate and college-bound youth with autism gravitate toward STEM majors, then this has the potential to be a silver lining story for a group where gloomy predictions about outcomes in adulthood are more the norm.”

With this study, Shattuck has broken new ground, as this is the first time a national picture of college enrollment and STEM participation for young adults with an ASD has been taken. The researchers compared ASDs to 10 other disability categories, including speech/language impairment, intellectual disabilities, emotional disturbances, hearing impairment, visual impairment, orthopedic impairment, other health impairment, traumatic brain injury and multiple disabilities.

What Shattuck and his team discovered was that 34.3 percent of students with an ASD gravitated toward STEM majors. This percentage is not only higher than for students in the other 10 disability categories, but also higher than the 22.8 percent of students in the general population who had declared a STEM major in college. The majors most likely to be chosen by an individual with an ASD were science and computer science, according to the study results.

The study's discouraging finding was that young adults with an ASD also have one of the lowest overall college enrollment rates among the disability categories compared. The researchers learned that several factors contributed to this, with gender, family income and the ability to carry on a conversation, among others, all playing a role in whether an individual with an ASD would enroll in college.

“Clearly, only a subset of youth with autism will head to college after high school,” Shattuck says. “A low family income puts these young people at a disadvantage even if they are cognitively capable. We need to get better at connecting students with financial aid to help them achieve their highest potential and be contributing members of society.”

Shattuck and the research team seem to think they are witnessing a shifting trend in this lowered enrollment, however. They believe that advances in early identification and treatment of individuals with an ASD are very likely to increase their college enrollment rates. By extension, they also see that increased college enrollment will lead to an increased participation in STEM majors.

“More and more children are being identified as having autism,” Shattuck says, “children who grow up to be adults. With the majority of a typical lifespan spent in adulthood, that phase of life is the one we know least about when it comes to autism spectrum disorders.”

“This study is the latest addition to a growing body of evidence we are building here at the Brown School about the needs, strengths and challenges facing this vulnerable population,” Shattuck concludes.

The study–entitled “STEM Participation Among College Students with an Autism Spectrum Disorder”–was funded by the National Science Foundation (NSF); the National Institute of Mental Health (NIMH); the Institute of Education Sciences (IES); and Autism Speaks.

Equator Ran Through Western North America 450 Million Years Ago

April Flowers for redOrbit.com – Your Universe Online

According to an international team of scientists led by the University of Durham, UK, storms like those that recently battered the east coast of America may have been much more frequent in the region 450 million years ago.

The findings of their study, published in the journal Geology, pinpoint the positions of the Equator and of the landmasses of the USA, Canada and Greenland during the Ordovician Period, 450 million years ago, indicating the Equator ran down the western side of North America with a hurricane belt to the east.

This hurricane belt affected an area that covers modern day New York State, New Jersey and most of the eastern seaboard. The team used the distribution of fossils and sediments to map the line of the Ordovician Equator down to southern California.

This new research is the first to accurately locate and map the ancient Equator and its adjacent tropical zones; previous studies had fueled controversy about the Equator's precise location. The team says these new results reveal how fossils and sediments can accurately track equatorial change and continental shifts over time.

Professor David Harper, Department of Earth Sciences, Durham University, UK, in a statement, said: “The equator, equatorial zones and hurricane belts were in quite different places in the Ordovician. It is likely that the weather forecast would have featured frequent hurricane-force storms in New York and other eastern states, and warmer, more tropical weather from Seattle to California.”

The scientists believe there would have been similar climate belts to those of today since polar regions existed 450 million years ago.

The team, which included scientists from Canada, Denmark and the USA, discovered a belt of undisturbed fossils and sediments more than 3,700 miles long. The belt, which lacks typical storm-related sedimentary features where the deposits are disturbed by bad weather, stretched from the southwestern United States to North Greenland. Like the equatorial zone today, the Late Ordovician equatorial zone had few hurricane-grade storms.

Sedimentary deposits recorded on either side of the belt provide evidence of disturbance by severe storms, however. Generally, hurricanes form in the areas immediately outside of equatorial zones where temperatures of at least 80 degrees Fahrenheit combine with the Earth’s rotation to create storms. The team believes the hurricane belts would have existed in the tropics on either side of the ancient equator.
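
The role of the Earth's rotation here is the Coriolis effect: the Coriolis parameter f = 2Ω sin(latitude) vanishes at the equator, so storms there cannot pick up the spin a hurricane needs, and that constraint applies to the ancient equator just as it does to the modern one. A quick illustration of this standard geophysics (not a computation from the study itself):

    import math

    omega = 7.2921e-5   # Earth's rotation rate, rad/s

    def coriolis_parameter(latitude_deg):
        # f = 2 * omega * sin(latitude); exactly zero at the equator
        return 2 * omega * math.sin(math.radians(latitude_deg))

    for lat in (0, 5, 10, 20):
        print(lat, coriolis_parameter(lat))   # grows with distance from the equator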

Undisturbed fossil accumulations and sediments defined the position of the equatorial belt, which is coincident with the Late Ordovician equator as interpreted from magnetic records that were taken from rocks of a similar age from the region. This provided the team with a precise equatorial location and confirmed the Earth’s magnetic field operated much in the same way as it does today.

Using the evidence of the disturbed and undisturbed sedimentary belts together with burrows and shells, the team pieced together the giant jigsaw map, enabling them to see that North America sat on either side of the Equator.

Christian Rasmussen, University of Copenhagen, said, “The layers of the earth build up over time and are commonly exposed by plate tectonics. We are able to use these ancient rocks and their fossils as evidence of the past to create an accurate map of the Ordovician globe.”

Professor Harper added, “The findings show that we had the same climate belts of today and we can see where North America was located 450 million years ago, essentially on the Equator.”

“While the Equator has remained in approximately the same place over time, the landmasses have shifted dramatically over time through tectonic movements. The undisturbed fossil belt helps to locate the exact position of the ancient Laurentian landmass, now known as North America.”

Pallasites Created By Dramatic Collision Of Asteroid And Protoplanet

April Flowers for redOrbit.com – Your Universe Online

Some meteorites found on Earth contain strikingly beautiful, translucent, olive-green crystals embedded in an iron-nickel matrix. These “space gems,” called pallasites, are only found in a tiny fraction of the total number of meteorites, but they have fascinated scientists since they were first identified as originating in outer space more than 200 years ago.

A new study, published in the journal Science and led by John Tarduno at the University of Rochester, reveals the origins of these space gems are more dramatic than originally thought. A team of geophysicists used a carbon dioxide laser, a magnetic field, and a sophisticated recording device to show that the pallasites were likely formed when a smaller asteroid crashed into a planet-like body approximately 30 times smaller than Earth. The impact created the mix of materials that makes up the distinctive meteorites.

“The findings by John Tarduno and his team turn the original pallasite formation model on its head,” said Joshua Feinberg, assistant professor of earth sciences at the University of Minnesota. “Their analysis of the pallasites has helped to significantly redefine our understanding of how these objects formed during the early history of our solar system.”

The composition of pallasites — iron-nickel and the translucent, gem-like mineral olivine — led many scientists to assume they were formed where those two materials typically come together: at the boundary of the iron core and rocky mantle in an asteroid or other planetary body. But the team discovered that tiny metal grains in the olivine were magnetized in a common direction, leading the scientists to conclude that the pallasites must have formed much farther from the core.

“We think the iron-nickel in the pallasites came from a collision with an asteroid,” said research team member Francis Nimmo, professor of earth and planetary sciences at the University of California Santa Cruz. “Molten iron from the core of the smaller asteroid was injected into the mantle of the larger body, creating the textures we see in the pallasites.”

“Previous thinking had been that iron was squeezed up from the core into olivine in the mantle,” said Tarduno. “The magnetic grains in the olivine showed that was not the case.”

For the metal grains in the olivine to become magnetized, a churning, molten iron core is required to create a magnetic field. Temperatures at the core-mantle boundary — which reach approximately 930 degrees Celsius — are too hot for any magnetization to be recorded, meaning that the pallasites must have formed at relatively shallow depths in the rocky mantle, where temperatures were much cooler.

Using a carbon dioxide laser, the scientists heated the metal grains past their individual Curie temperatures — the point at which a metal loses its magnetization. A highly sensitive measuring instrument called a SQUID (superconducting quantum interference device) was used to record the values as the grains were re-magnetized by cooling in the presence of a magnetic field. The team was then able to calculate the strength of the original magnetic field and, using prior published work on metal microstructures, determine the rate of cooling.
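
In broad strokes — this is the standard paleointensity logic, our gloss rather than necessarily the team's exact procedure — the estimate rests on a simple proportionality: the magnetization a grain acquires while cooling scales with the field it cools in, so B_ancient ≈ B_lab × (NRM / TRM), where NRM is the natural remanent magnetization the grains carried as found and TRM is the thermoremanent magnetization they acquire when re-cooled in the known laboratory field B_lab.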

“The larger the parent body was, the longer it would have taken for the samples to cool,” said Nimmo. “Our measurements, combined with a computer model we developed, told us that the parent body had a radius of about 200 km — some 30 times smaller than Earth.”
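
The arithmetic is easy to check: Earth has a mean radius of roughly 6,370 km, and 6,370 km / 200 km ≈ 32, consistent with the quoted factor of about 30.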

The measurements helped the scientists to classify the parent body of the pallasites as a protoplanet — a small celestial object with the potential of developing into a planet.

The study also clears up questions about the possibility of protoplanets having dynamo activity – a rotating, liquid iron core that can create a magnetic field.

“Our magnetic data join mounting evidence from meteorites that small bodies can, indeed, have dynamo action,” said Tarduno.

US Seat Belt Use Reaches Record High In 2012

redOrbit Staff & Wire Reports – Your Universe Online

A record 86% of Americans buckled up in 2012, up 2 percentage points from 2011, according to an annual survey by the National Highway Traffic Safety Administration (NHTSA).

“When it comes to driving safely, one of the most effective ways to protect yourself and your family is to use a seat belt,” said Transportation Secretary Ray LaHood in a statement released with the annual report.

The greatest improvements in seat belt use occurred in the South, where compliance rates rose from 80% in 2011 to 85% this year. The West has the highest percentage of seat belt use, at 94%.

According to the National Highway Traffic Safety Administration’s annual survey, “seat belt use has steadily increased since 1994, coinciding with a decline in the percentage of unrestrained daytime passenger vehicle fatalities.”

“Thanks to the ongoing work of our state and local partners and national efforts such as ‘Click it or Ticket,’ we’ve made steady gains in belt use in recent years,” said NHTSA Administrator David Strickland.

“Moving forward, it will be critical to build on this success using a multi-faceted approach that combines good laws, effective enforcement, and public education and awareness.”

The NHTSA noted that 32 states and the District of Columbia have “primary laws” requiring seat belt use, under which motorists can be pulled over solely for not wearing a seat belt. Another 17 states have weaker, “secondary laws” that allow motorists to be cited for not wearing seat belts only when they have been pulled over for another violation.

“New Hampshire is the only state that has not enacted either a primary or secondary seat belt law, though the state’s primary child passenger safety law applies to all drivers and passengers under the age of 18,” the NHTSA said.

The annual survey is the only nationwide probability-based observational survey of seat belt use in the United States. The NHTSA enlisted observers who watched for seat belt use as it actually occurred at randomly selected roadway sites, thus providing the best tracking of the extent to which passenger vehicle occupants are buckling up.

Researchers Successfully Transplant Neurons Made From Human Stem Cells

Connie K. Ho for redOrbit.com — Your Universe Online

Researchers from the Sanford-Burnham Medical Research Institute recently discovered that neurons developed from stem cells can boost brain activity following transplantation in a laboratory model. The findings show that the cells could possibly be used in the future to treat Alzheimer´s disease and other types of neurodegenerative illnesses.

Currently, scientists are able to develop neurons and other brain cells from stem cells. However, it is difficult to transplant the neurons properly. With the new findings, the researchers are able to move past this hurdle.

“We showed for the first time that embryonic stem cells that we´ve programmed to become neurons can integrate into existing brain circuits and fire patterns of electrical activity that are critical for consciousness and neural network activity,” explained the study´s senior author Stuart A. Lipton, a clinical neurologist, in a prepared statement.

In the study, the team of investigators transplanted neurons derived from human stem cells into a rodent hippocampus, the brain´s informational processing center. Next, the transplanted neurons were activated with optogenetic stimulation, a new technique that combines light and genetics to precisely control the cellular behavior of animals or living tissues.

The researchers then tracked high-frequency oscillations in pre-existing neurons located at a distance from the transplanted ones, to determine whether the newly transplanted human neurons were functioning. The light-stimulated transplanted neurons prompted the existing neurons to send out high-frequency oscillations. In other words, the transplanted human neurons successfully drove nearby neuronal networks to fire and to conduct electrical impulses at rates similar to those of a normally functioning hippocampus.

The research, funded by the California Institute for Regenerative Medicine and the U.S. National Institutes of Health (NIH), could prove widely useful in the future, the researchers believe.

“Based on these results, we might be able to restore brain activity–and thus restore motor and cognitive function–by transplanting easily manipulated neuronal cells derived from embryonic stem cells,” concluded Lipton, who also serves as the director of Sanford-Burnham´s Del E. Webb Neuroscience, Aging, and Stem Cell Research Center, in the statement.

The research also addresses Alzheimer´s disease, which has been a growing concern in the last few years. According to the National Institutes of Health, Alzheimer´s is a form of dementia that can impact a person´s behavior, memory and thinking. The U.S. Centers for Disease Control and Prevention (CDC) also reports that Alzheimer´s is the most common form of dementia in older adults, with almost five million Americans reported as having the disease. Factors like age, family history, high blood pressure, high cholesterol and diabetes can all play a role in the risk of Alzheimer’s. For those who believe that they have symptoms related to Alzheimer´s, it´s best to consult with a doctor about the causes of memory loss or similar symptoms. Support from family and friends is important as well.

The results of the study were recently published in the Journal of Neuroscience.

Researchers Develop Smaller, Cheaper Device To Monitor Medical Vital Signs

Connie K. Ho for redOrbit.com — Your Universe Online

First there was the record player. Then came the portable Walkman and CD player. Now there´s the mp3 player and iPod. As the evolution of music players makes clear, technology keeps making devices smaller and smaller without sacrificing their effectiveness. A new study from electrical engineers at Oregon State University (OSU) revealed the development of a new device that can track medical vital signs with the help of sensors as small as a postage stamp — and affordable as well.

In particular, the innovative technology costs less than a quarter and is the size of a bandage. A patent is pending for the monitoring system, and the researchers plan to put it through clinical trials. If approved, the monitoring system would be used as a disposable electrical sensor. The results of this new technology, along with its various benefits, were recently announced at the Custom Integrated Circuits Conference in San Jose, California.

“Current technology allows you to measure these body signals using bulky, power-consuming, costly instruments,” commented Patrick Chiang, an associate professor in the OSU School of Electrical Engineering and Computer Science, in a prepared statement.

One of the main purposes of the new monitoring system would be to gather data on heart activity, such as atrial fibrillation and pulse rate, while also tracking EEG brain signals. With these various uses, the monitoring system would be a good fit for nursing care patients with dementia, recording their daily physical activity, and could even prove beneficial to weight loss programs.

“What we´ve enabled is the integration of these large components onto a single microchip, achieving significant improvements in power consumption,” continued Chiang in the statement. “We can now make important biomedical measurements more portable, routine, convenient and affordable than ever before.”

Furthermore, compared to previous technology, the new monitoring system is smaller in size, weight and power usage. The new electronic device could simply be attached over the heart or another part of the body to measure vital signs. With no need for a battery, the device is smaller still and draws its power from radio-frequency energy. Lastly, the device costs about one-tenth as much as current systems that measure the same vital signs.

“The entire field of wearable body monitors is pretty exciting,” remarked Chiang in the statement. “By being able to dramatically reduce the size, weight and cost of these devices, it opens new possibilities in medical treatment, health care, disease prevention, weight management and other fields.”

In moving forward with the study of this new technology, the team of investigators plans to collaborate with members of the private industry. They believe that the system could be used along with cell phones or other devices that have high radio-frequency. The device would also be able to run on alternative energy-harvest power resources, like heat or movement from the body.

Detector To Help In The Search For Elusive Dark Matter Material

April Flowers for redOrbit.com – Your Universe Online

Nearly a mile underground beneath the Black Hills of South Dakota, scientists from Lawrence Livermore National Laboratory (LLNL) are using a tank to make key contributions to a physics experiment that will look for one of nature’s most elusive particles, “dark matter.”

The Large Underground Xenon (LUX) experiment located at the Sanford Underground Research Facility in Lead, SD is the most sensitive detector of its kind to look for dark matter, which is thought to comprise more than 80 percent of the mass of the Universe. Scientists believe dark matter could hold the key to answering some of the most challenging questions facing physicists in the 21st century. However, dark matter has eluded detection so far.

The researchers from LLNL have been involved with the LUX project since 2008.

“We at LLNL initially got involved in LUX because of the natural technological overlap with our own nonproliferation detector development programs,” said Adam Bernstein, who leads the Advanced Detectors Group in LLNL’s Physics Division. “It’s very exciting to reflect that as a result, we are now part of a world-class team that stands an excellent chance of being the first to directly and unambiguously measure cosmological dark matter particle interactions in an earthly detector.”

LUX is at the cutting-edge of the science and technology of rare event detection, and as such is of direct interest for LLNL and U.S. nonproliferation, arms control and nuclear security missions. According to LLNL, “In particular, cryogenic noble liquid detectors of this kind may allow for improved, smaller footprint reactor antineutrino monitoring systems, with application to the International Atomic Energy Agency reactor safeguards regime.”

Detectors of very similar design, using xenon or argon, have excellent neutron and gamma ray detection and discrimination properties. They may also assist with missions related to the timely discovery and characterization of fissile materials in arms control and search contexts.

Scientists and technicians from LLNL have made important contributions to LUX.

Peter Sorensen, lab staff physicist, has directed the LUX Analysis Working Group, and spent months at the site helping to install the detector. Sorensen has also written numerous peer-reviewed articles on how to perform searches for a range of dark matter candidates using LUX and related detectors.

Another staff physicist, Kareem Kazkaz, is the author of the LUX detector simulation package known as LUXSIM. Kazkaz has directed the Simulations Working Group for the project. The LUXSIM simulation software is uniquely well suited for low background detectors of this kind. Other users in the dark matter and nonproliferation communities have picked up the software to use in their projects.

John Bower and Dennis Carr (now retired) are LLNL technicians who have played key roles in the manufacture and installation of elements of the LUX detector, including building the precision-machined copper photomultiplier tube mounting apparatus.

Gerry Mok, lab safety engineer, has performed detailed calculations demonstrating the safety of the LUX pressurized and cryogenic systems under a range of possible accident scenarios, which was important to the successful safety review of the LUX detector.

Located in the famous former Homestake gold mine, the Sanford Underground Research Facility (Sanford Lab) is owned and operated by the South Dakota Science and Technology Authority. It is supported by the Department of Energy and the DOE’s Lawrence Berkeley National Laboratory.

The LUX detector took three years to build in a surface facility at Sanford Lab. This past July, it was installed in an excavated cavern 4,850 feet underground, with nearly a mile of solid rock protecting the sensitive equipment from the cosmic radiation that constantly bombards the surface of the Earth. If the detector were on the surface, cosmic radiation would drown out the faint dark matter signals.

The surrounding rock also emits small amounts of natural radiation, which LUX must be protected from as well. The detector was lowered into a very large stainless steel tank — 20 feet tall by 24 feet in diameter. The tank was then filled with more than 70,000 gallons of ultra-pure de-ionized water that will shield the detector from gamma radiation and stray neutrons.

Gender Differences Impact Diagnosis of Depression

Connie K. Ho for redOrbit.com — Your Universe Online

“Men Are From Mars, Women Are From Venus” is the title of a book by John Gray, a relationship counselor and author from the United States. Gray highlighted how relationship problems between males and females could be caused by gender differences, especially in how each handles stress.

The idea of male and female differences can be seen in a new study from the University of Westminster in the United Kingdom, which showed that symptoms of depression are more readily identified in women than in men. The group of investigators found that identifying symptoms of depression depended on the gender of both the identifier and the person who was depressed. The researchers believe that gender stereotypes could possibly impact the public’s view of people who are suffering from depression.

The findings of the study were featured in the journal PLoS ONE.

“Poor mental health literacy and negative attitudes toward individuals with mental health disorders may impede optimal help-seeking for symptoms of mental ill-health. The present study examined the ability to recognize cases of depression as a function of respondent and target gender, as well as individual psychological differences in attitudes toward persons with depression,” wrote the researchers in the paper.

In the research project, the scientists presented two fictitious subjects, Jack and Kate. In non-clinical terms, the two were described as having exactly the same feelings related to depression; the only difference was that one subject was male and the other female. Sample language from the test included: “For the past two weeks, Kate/Jack has been feeling really down. S/he wakes up in the morning with a flat, heavy feeling that sticks with her/him all day. S/he isn’t enjoying things the way s/he normally would. S/he finds it hard to concentrate on anything.”

The participants of the study were then asked to determine whether each individual suffered from a mental health disorder and how they would go about resolving the issue. The results showed that both females and males would identify Kate as having a mental health disorder. Furthermore, more females than males would identify Jack as suffering from depression.

The participants also differed in their take on how to offer mental health counseling. Males had a greater likelihood of encouraging Kate to find professional help than their female counterparts. In addition, both females and males had the same likelihood of suggesting that Jack seek professional help.

Lastly, the team of investigators discovered that perspectives on depression were related to anti-scientific attitudes and skepticism about psychiatry. They believe that it is necessary to take into account gender stereotypes and biases about mental health. As such, the researchers believe that these results can be used to improve mental health literacy.

“The present results underscore the role of individual differences in mental health literacy. Initiatives that consider the impact of gender stereotypes as well as individual differences may enhance mental health literacy, which in turn is associated with improved help-seeking behaviors for symptoms of mental ill-health,” wrote the scientists in the paper. “In the future, it will be important to more carefully assess inter-individual differences in mental health literacy as a function of both the target and the respondent.”

Ask An Expert – Is Thorium Power The Future Of Energy?

John P. Millis, Ph.D. for redOrbit.com — Your Universe Online

This article is the first in a new series where redOrbit´s in-house experts will answer questions submitted by you, the reader. Got a science or space question that´s stumping you? Each week we´ll select a handful of the wiliest questions you can whip up to tease the brains of our resident gurus (we call them ‘geeks’).

Question:

I’ve been reading about thorium and how it can be used in liquid sodium reactors to produce electricity; in the process it would also be able to consume spent nuclear waste from our nuclear plants. I have come to find that India and China may be also researching this technology but the US is not. Why is that? I see it as a more stable output of electricity versus solar and wind which have fluctuating output reliant on weather.

Answer:

As the world stands at the edge of an energy crisis, the search for alternative fuels is accelerating. It seems that nuclear power plants, specifically fission reactors, are the best immediate solution to our energy needs. (Fusion reactors are still decades away from viability, while solar, wind, and other solutions do not have the needed efficiency to meet the power needs of high-consuming countries like the US.)

However, Uranium — the most popular constituent fuel for fission reactors — is an exhaustible resource. So, as we seek the next evolution in reactor design it seems prudent to explore other fuel possibilities that are more abundant.

What is Thorium Power?

It has long been assumed that Thorium would supplant Uranium as the primary fission fuel. It is far more plentiful than Uranium and can be used in its naturally occurring isotopic form, 232Th.

However, Thorium itself is not fissile. It must therefore be transmuted into the artificial Uranium isotope 233U, using additional fissile material — such as Uranium — to ignite the reaction.
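
For reference, the breeding chain itself is textbook nuclear physics (added here for clarity, not spelled out in the original question or answer): a 232Th nucleus captures a neutron and then undergoes two beta decays,

    232Th + n → 233Th
    233Th → 233Pa + β−   (half-life of about 22 minutes)
    233Pa → 233U + β−    (half-life of about 27 days)

leaving fissile 233U to sustain the reaction.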

In general Thorium fission would proceed in a similar manner to Uranium based reactions. However, there are differences in how the material is handled and “ignited” that require unique considerations.

Is Thorium Power Viable?

Thorium has been pushed as a nuclear fuel because, in addition to being more plentiful than other fissile elements, there are substantially fewer radioactive byproducts (waste) than in typical Uranium reactors.

Additionally, during the reaction process 232U and 233U are inevitably interspersed and cannot be chemically separated. As a result, enrichment of the uranium to weapons-grade levels is not possible. (However, 233U has been tested in at least one nuclear warhead, though the yield was significantly lower. It is unclear how the contamination from 232U would affect this.)

But, while it may seem that the use of Thorium is a foregone conclusion, there are roadblocks that stand in the way. One problem is that some reactor designs would create significant quantities of 231Pa, a long-lived radioactive waste product. Also, while the reactions work well in lab tests, achieving high fuel utilization — a measure of the amount of energy extracted from the fuel source — would present a challenge for the majority of reactors currently in operation, which would therefore require significant modification.

Also, the various reaction chains that could be pursued have inherent challenges. Namely, they either produce intermediate products that can corrupt the yield of 233U, or they require uranium recycling technology that is unproven and in its infancy.

Is Thorium The Future?

Maybe. There are clear advantages to increased use of Thorium in nuclear reactor designs; however, the technology is still unproven, and there are certainly disadvantages as well.

Proponents of Thorium power argue for Molten-Salt reactors, like those pioneered in the 1960s. On paper, such reactors have all the benefits of traditional Thorium-driven reactions with none of the downsides. However, the technology is underdeveloped, and the specifics of up-scaling — taking a proof of concept to a full-scale reactor — are unknown, so these claims are unverified. And in this age of tight regulatory control in the United States, Molten-Salt reactors face a long, expensive road to viability.

In light of this, firms in the United States are partnering with our colleagues in Russia and China to explore the use of Thorium as a primary nuclear fuel source. But a 2011 study argued that Thorium reactors, because of the need for significant up-front capital investment, are not expected to enter the mainstream in the near-term, despite the potential benefits.

However, the use of Thorium in mixed oxide (MOX) fuels is a possibility right now. As an alternative to enriched Uranium, the addition of Thorium to the fuel mix would lower the radioactive waste and maximize the destruction of hazardous Plutonium.

The likelihood of Thorium reactors completely replacing Uranium designs is very low, but the expanded use of Thorium in fuel mixtures is a reasonable stop-gap until Fusion reactors finally come online 20 to 30 years down the road.

Stopping The Spread Of Melanoma By Removing Protein Affecting Metastasis

Connie K. Ho for redOrbit.com — Your Universe Online

Researchers from Virginia Commonwealth University recently revealed that, through lab experiments, they have been able to stop melanoma from spreading from a tumor to other parts of the body, a process known as metastasis. Using the findings from the study, the researchers can look into developing new targeted therapies to help stop metastasis in melanoma and possibly other types of cancer.

With this breakthrough, the scientists believe that they have the ability to eliminate melanoma differentiation associated gene-9 (mda-9)/syntenin, a specific protein. In the experiment, the researchers discovered that Raf kinase inhibitor protein (RKIP) was able to interact with and suppress mda-9/syntenin. The mda-9/syntenin protein was originally cloned in a laboratory, and past studies showed how it interacted with c-Src, another protein, to produce a set of chemical reactions that boosted metastasis.

“Prior research suggests that RKIP plays a seminal role in inhibiting cancer metastasis, but, until now, the mechanisms underlying this activity were not clear,” explained Paul Fisher, the program co-leader of Cancer Molecular Genetics at Virginia Commonwealth University Massey Cancer Center, in a prepared statement. “In addition to providing a new target for future therapies, there is potential for using these two genes as biomarkers for monitoring melanoma development and progression.”

The team of investigators discovered that RKIP becomes attached to mda-9/syntenin, which limits the expression of mda-9/syntenin. With the finding of this physical interaction, the scientists believe that they could create small molecules similar to RKIP, and those molecules could be used as drugs to treat metastasis in cancers like melanoma.

There was also a difference in the levels of mda-9/syntenin and RKIP. While malignant and metastatic melanoma cells had higher levels of mda-9/syntenin than of RKIP, the healthy melanocyte cells that create pigment in eyes, hair and skin had higher levels of RKIP than of mda-9/syntenin. The researchers believe that the differing protein levels could be used in diagnosis, particularly in following the progression of a disease or tracking a patient´s response to a particular treatment.

“Our findings represent a major breakthrough in understanding the genetic mechanisms that lead to metastasis in melanoma. Prior studies have shown that levels of mda-9/syntenin are elevated in a majority of cancers, including melanoma, suggesting that our findings could be applicable for a wide range of diseases,” continued Fisher, who also serves as chairman of VCU´s Department of Human and Molecular Genetics and director of the VCU Institutes of Molecular Medicine, in the statement.

Moving forward, the scientists plan to determine how they can develop small molecules that mimic RKIP. These molecules could potentially be utilized in new treatments for melanoma. The research comes at a particularly crucial time, as over one million cases of skin cancer are diagnosed each year in the U.S. According to the researchers, melanoma is the deadliest form of skin cancer. The National Institutes of Health, the National Foundation for Cancer Research and the American Cancer Society, among other organizations, provided funding for the study.

The findings were recently published online in the journal Cancer Research.

Fear Of Relaxation Is A Real Phobia

Alan McStravick for redOrbit.com – Your Universe Online
Phobias are irrational fears. Whether it´s fear of flying, public speaking, snakes or the dark, these phobias usually manifest after a traumatic experience that plants itself deep in the subconscious of the sufferer. A phobia can continue to hold sway over an individual even when that person understands its irrationality. And avoiding the source of a phobia typically only increases the sense of worry and fear.

An important technique for allaying a phobia involves relaxation. The individual maintains a strict focus on staying relaxed while slowly and gradually being introduced to the object or situation they fear. In so doing, the individual is provided with reassuring evidence that the situation doesn´t control their feelings. Additionally, it gives that person the confidence to realize they have more control than they had previously thought. But what if your irrational phobia is that you are afraid to relax? Christina Luberto, a doctoral student in the University of Cincinnati´s Department of Psychology, has developed a way to identify these phobic individuals.

While most of us look forward to a leisurely Saturday, catching up on reading or our favorite TV show, or a fantastic getaway vacation, there are some among us who experience the same level of anxiety when faced with relaxation as we would having to speak before the American Association for Nude Recreation.

Luberto has developed a questionnaire, which she calls the Relaxation Sensitivity Index (RSI), designed to examine this phenomenon. Her preliminary findings on the RSI are to be presented next week at the 46th annual convention of the Association of Behavioral and Cognitive Therapies (ABCT), to be held in National Harbor, Maryland.

“Relaxation-induced anxiety, or the paradoxical increase in anxiety as a result of relaxation, is a relatively common occurrence,” explains Luberto. “We wanted to develop a test to examine why certain individuals fear relaxation events or sensations associated with taking a time-out just to relax.”

Comprised of 21 items, the RSI is a questionnaire designed specifically to explore fears associated with relaxation anxiety. It focuses on three key categories: physical, cognitive and social issues. The physical issues address sensations, like slowed breathing or touch, that might spark a phobic response. Cognitive issues relate to an increase in anxiety due to thought processes associated with relaxation. And finally, social issues refer to how others might perceive you during a state of relaxation.

In the study, 300 undergraduate college students — on average 21 years of age, and mostly female and Caucasian — were asked to rate, on a 0 to 5 scale, how a series of statements applied to them.
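
To make the tallying concrete, here is a minimal sketch, in Python, of how a 21-item questionnaire rated 0 to 5 with physical, cognitive and social subscales might be scored. The even item split and the function name are hypothetical illustrations, not the actual RSI items or scoring key.

    # Hypothetical sketch: score a 21-item questionnaire rated 0-5 across
    # three subscales. The even 7-7-7 item split below is an assumption
    # for illustration, not the published RSI scoring key.
    SUBSCALES = {
        "physical": range(0, 7),
        "cognitive": range(7, 14),
        "social": range(14, 21),
    }

    def score_rsi(responses):
        """Return (total, per-subscale sums) for 21 ratings in 0..5."""
        if len(responses) != 21 or any(not 0 <= r <= 5 for r in responses):
            raise ValueError("expected 21 ratings, each between 0 and 5")
        subscores = {name: sum(responses[i] for i in items)
                     for name, items in SUBSCALES.items()}
        return sum(responses), subscores

    total, by_scale = score_rsi([3, 1, 4, 0, 2, 5, 1] * 3)
    print(total, by_scale)  # a higher score suggests greater relaxation sensitivity
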
The idea to explore relaxation sensitivity stemmed from previous studies on the related concept of anxiety sensitivity. Anxiety sensitivity, unlike relaxation sensitivity, is the fear of arousal. However, the initial results of Luberto´s RSI study find that participants who rate high in relaxation sensitivity also rate high in anxiety sensitivity. “This suggests that for some people, any deviation from normal functioning, whether it is arousal or relaxation, is stressful,” states Luberto. Additionally, the results show the RSI to be a reliable and valid measure of relaxation-related fears. The questionnaire is able to accurately identify individuals who have experienced an increase in anxiety when relaxing in the past.

Despite these results, Luberto believes additional research should be conducted to examine the overall effectiveness of the RSI across a more diverse population, including participants beyond college age. The RSI should also be administered to individuals with psychiatric disorders, according to Luberto. The hope is that the RSI will eventually help identify patients who would not respond to treatment through relaxation therapies, which, as discussed above, are a common method of treating anxiety disorders.

The study was supported by the UC Department of Psychology´s Frakes Foundation Endowment Fund and William Seeman Psychology Fund.

The ABCT is a multidisciplinary organization committed to the advancement of scientific approaches to understanding and improving human functioning, through the investigation and application of behavioral, cognitive and other evidence-based principles to the assessment, prevention and treatment of human problems and the enhancement of health and well-being.

Cyber Threats Forecast For 2013 By Georgia Tech

Peter Suciu for redOrbit.com — Your Universe Online

Every cloud may have a silver lining, as the saying goes, but there will likely be no such silver lining for the future of cloud computing, which tops the list of serious computer security threats for 2013. On Wednesday, the Georgia Tech Information Security Center (GTISC) and the Georgia Tech Research Institute (GTRI) released the Georgia Tech Emerging Cyber Threats Report for 2013 at the Georgia Tech Cyber Security Summit, a gathering of industry and academic leaders in the field of cyber security.

According to the findings of the report, there are several specific threats to follow over the next year. Among the most ominous is the use of cloud computing for malicious purposes. The same flexible provisioning capabilities that allow legitimate businesses to quickly add or subtract computing power could also be used to instantly create a powerful network of so-called zombie machines for nefarious purposes.

“If I’m a bad guy, and I have a zero-day exploit and the cloud provider is not up on their toes in terms of patching, the ability to exploit such a big capacity means I can do all sorts of things,” said Yousef Khalidi, of the Microsoft Windows Azure team in the report.

The cloud also opens the possibility of cloud-based botnets, which could provide vast, virtual computing resources. This prospect could entice cyber criminals to look for ways to co-opt cloud-based infrastructure, such as using cloud computing resources to create clusters of temporary virtual attack systems.

Other worrisome trends for 2013 include dangers within globalized supply chains, search poisoning, mobile threats including browser and wallet vulnerabilities, and malware counteroffensives.

Georgia Tech researchers noted serious security problems with globalized supply chains, including security flaws in products manufactured by some Chinese companies, notably Huawei and ZTE. The concern is that such products could offer built-in backdoors for cyber espionage, leaving the systems that use them vulnerable to compromise. And it isn´t just American companies that could be at risk — or believe that there is a risk. According to the report, the Chinese have the same concerns about U.S.-made products.

Cyber criminals may also continue to look at ways to manipulate search engine algorithms and other automated mechanisms that control how information is presented to Internet users during a search. This threat, known as search history poisoning, could move beyond typical search engine results, as researchers fear that cyber criminals may find ways to manipulate users’ search histories and use legitimate resources for illegitimate gains.

This threat could be especially worrisome when coupled with cloud-based data. If an individual machine is compromised, a user is generally “safe” when moving to a “clean” machine; but if the user´s search history and online profile are compromised, then the malicious search follows the user to any machine!

And these threats could increasingly make a jump from the desktop to mobile in 2013. The good news, according to the findings in the report, is that the threat is not as serious as previously thought. The app store model through which most mobile software is distributed remains a fairly stalwart first line of defense against the bulk of smartphone-based malware. However, researchers noted that aggressive patching policies and updates from the OEMs and carriers would only increase the security of the devices.

The bigger threat in mobile devices will come from the explosive proliferation of devices that will only serve to tempt attackers. The biggest threat thus won´t be in apps, but could be through the mobile web and increased use of the mobile wallet.

The final notable concern in the report was malware counteroffensive techniques, as the developers of malicious software work to employ various methods to hinder malware detection. This includes efforts to harden their software with techniques similar to those already employed in Digital Rights Management (DRM), as well as looking for exploits in new interfaces and features on mobile devices.

If this sounds bleak, there is hope besides simply unplugging and giving up on technology. The key is knowing the threats exist, which is the goal of such reports in the first place.

“Every year, security researchers and experts see new evolutions in cyber threats to people, businesses and governments,” said Wenke Lee, director of GTISC. “In 2013, we expect the continued movement of business and consumer data onto mobile devices and into the cloud will lure cyber criminals into attacking these relatively secure, but extremely tempting, technology platforms. Along with growing security vulnerabilities within our national supply chain and healthcare industry, the security community must remain proactive, and users must maintain vigilance, over the year ahead.”

Rediscovering The Most-Legged Animal On Earth

Lee Rannals for redOrbit.com — Your Universe Online

[ Watch the Video: Leggiest Animal On Earth ]

Several years ago, scientists in California rediscovered the leggiest animal on Earth living just outside Silicon Valley.

Paul Marek and colleagues provided details of the millipede Illacme plenipes’ complex anatomy and its rarity in the journal ZooKeys.

Female Illacme plenipes have up to 750 legs, compared to the males, which have a maximum of only 562 legs.

The scientists said the proliferation of legs may be an adaptation to a lifestyle spent burrowing underground, or may enable the millipede to cling to the sandstone boulders found near the species’ habitat.

“This relict species is the only representative of its family in the Western Hemisphere. Its closest presumed relative, Nematozonium filum, lives in South Africa and this early relationship was established more than 200 million years ago when the continents coalesced in the landmass Pangaea,” the lead author Dr. Paul Marek, from the University of Arizona, said in a statement.

This species not only has an extraordinary number of legs, but it also has body hairs that produce silk, a jagged and scaly translucent exoskeleton, and comparatively massive antennae that are used to help it find its way through the dirt.

Illacme plenipes’ mouth is unlike those of other millipedes, which chew with well-developed grinding mouthparts. Its mouthparts are rudimentary, fused into structures the scientists believe are used for piercing and sucking plant or fungal tissues.

The species lives in a tiny area near San Juan Bautista, just east of the San Andreas Fault. Analysis indicates that other suitable habitat is limited to the terrestrial areas on the edge of Monterey Bay, eastward to San Juan Bautista, and throughout the Salinas Valley.

The area is unique because it has a thick layer of fog that accumulates like soup in a deep bowl. This feature, alongside the unique set of species found in the habitat, makes the area a special place deserving of attention as the home of this rare millipede.

Illacme plenipes had gone unseen for 80 years and was thought to be extinct, and Marek said that rediscovering it “was wonderful.”

“To tell you the truth, and this is the experience every time I find a species I´ve never seen before, it was an exhilarating experience,” Marek told Scientific American. “Even when reading about other entomological discoveries (whether it be the Lord Howe Island stick insect or bioluminescent cockroaches) it´s exciting to think about all the fantastic and diverse life forms that live with us on the planet.”

The team was able to distinguish it from other millipede species by taking a close look at its genitalia.

“For this relict species in particular, and especially since there´s nothing like it in the Western Hemisphere, [distinguishing the species] is pretty straightforward. It´s so completely different, anatomically, from anything else in the area,” he told Scientific American. “For questions about species that are more closely related to one another, millipede taxonomists use the anatomical differences in the genitalia under the lock-and-key hypothesis — in a crude way, ‘the idea that a lock from one species cannot be opened with a key from another.’”

Due to its low population numbers and limited habitat, the future of the millipede species is uncertain. Marek said over-collecting is a form of exploitation the team took care to avoid while gathering information about the millipedes.

“The idea with conservation is to preserve as much biological diversity as possible, and not only is this species so different than anything else in the area, the habitat that it lives in is filled with so many different and unique species — things that we know so little about,” he told Scientific American.

He said many species found in Illacme plenipes’ habitat are unlike any other found on Earth, including everything from beetles and salamanders to oak trees and mosses.

Happily Married Couples Live Longer, Study Says

Lee Rannals for redOrbit.com — Your Universe Online

Married couples live longer and are able to adapt better to health setbacks, according to researchers from Michigan State University and the University of Cincinnati.

The number of couples living together outside of marriage has risen steadily, from 400,000 in 1960 to 7.6 million in 2011, according to U.S. Census data. However, new research indicates being single and willing to mingle forever and always isn’t necessarily the best option for your end-goal of longevity.

Researchers reported that, after examining national health survey data from nearly 200,000 people, they found the rate of mortality among men in cohabiting relationships dropped by 80 percent, while the rate dropped 59 percent for women.

Karen Sherman, author of “Marriage Magic! Find It, Keep It, & Make It Last,” told CBS News in Cleveland married individuals undergoing heart bypass surgery are three times more likely to stay alive 15 years later than their single peers.

Census Bureau statistics show the number of American marriages has dropped from 2.45 million in 1990 to 2.08 million in 2010.

“This helps us to understand the implications of this relatively new rise in cohabitation,” MSU sociologist Hui Liu, the study´s lead researcher, told the CBS news station. “Many assume marriage and cohabitation are wholly the same, but our research showed that cohabitation, generally, led to a shorter lifespan.”

However, being married isn’t always the solution. The study, published in the Journal of Marriage and Family, also found a bad marriage could be lethal to your health. The authors wrote high-conflict marriages have been shown to cause more stress.

A study reported by redOrbit back in August found marriage can increase middle-aged women’s drinking rate, and could also lower men’s drinking rate.

“Marriage and divorce have different consequences for men´s and women´s alcohol use,” study author Corinne Reczek, an assistant professor at the University of Cincinnati, told Health Day. “For men, it´s tempered by being married and exacerbated by being divorced.”

The researchers suggest men appear to be slowing their drinking rate after getting married, but that a compromise is essentially made as women drink more. While a man may not drink as much once married, he still drinks more than the woman did before she tied the knot, according to the study.

Perhaps each study supports the other, as one claims married people are more adaptable, and the other shows adaptation is made to each other’s alcohol habits.

Nano-manufacturing Improved With Nanometer-scale Diamond Tips

University of Illinois at Urbana-Champaign

One of the most promising innovations of nanotechnology has been the ability to perform rapid nanofabrication using nanometer-scale tips. The fabrication speed can be dramatically increased by using heat. High speed and high temperature have been known to degrade the tip… until now.

“Thermal processing is widely used in manufacturing,” according to William King, the College of Engineering Bliss Professor at Illinois. “We have been working to shrink thermal processing to the nanometer scale, where we can use a nanometer-scale heat source to add or remove material, or induce a physical or chemical reaction.”

One of the key challenges has been the reliability of the nanometer-scale tips, especially with performing nano-writing on hard, semiconductor surfaces. Now, researchers at the University of Illinois, University of Pennsylvania, and Advanced Diamond Technologies Inc., have created a new type of nano-tip for thermal processing, which is made entirely out of diamond.

“The end of the diamond tip is 10 nm in size,” King explained. “Not only can the tip be used for nanometer-scale thermal processing, but it is extremely resistant to wear.”

The research findings are reported in the article, “Ultrananocrystalline diamond tip integrated onto a heated atomic force microscope (AFM) cantilever,” which appears in the journal Nanotechnology. The study shows how the 10 nm diamond tip scans in contact with a surface for a distance of more than 1.2 meters, and experiences essentially no wear over that distance.

“The scan distance is equal to 100 million times the size of the tip,” said King. “That´s the equivalent of a person walking around the circumference of the Earth four times, and doing so with no measurable wear.”
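
Both comparisons hold up as back-of-the-envelope arithmetic: 1.2 m divided by the 10 nm (10^-8 m) tip size gives 1.2 × 10^8, about 100 million tip-lengths; and four laps of Earth´s roughly 40,000-km circumference come to about 1.6 × 10^8 m, on the order of 100 million times the height of a person.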

“The robustness of these diamond-based probes under such harsh conditions–high temperatures and stresses in an oxidizing environment–is quite remarkable and exceeds anything I’ve seen with other AFM probes,” said Robert Carpick, professor of mechanical engineering and applied mechanics at University of Pennsylvania and co-author on the study. “This level of durability combined with the multifunctionality of a thermal probe really opens up new applications for the AFM.”

“We are pleased with the results since they prove once again the superiority of diamond tips to any other types of probe tips when it comes to low wear and resistance to harsh environment,” said Nicolaie Moldovan, a scientist at Advanced Diamond Technologies and co-author on the study.

The authors on the study are Hoe-Joon Kim, Suhas Somnath, Jonathan Felts, and William King, University of Illinois; Tevis Jacobs and Robert Carpick, University of Pennsylvania; and Nicolaie Moldovan and John Carlisle, Advanced Diamond Technologies Inc.

Foot Massage Eases Symptoms Of Breast Cancer

Michael Harper for redOrbit.com — Your Universe Online

No matter which side of the debate you stand on, one cannot refute the fact that foot massage carries with it some benefits. These benefits may be as simple and small as general relaxation, but they are benefits all the same. Some believe that foot massage, when administered by someone with the proper training, can even relieve other ailments, such as poor circulation and fatigue, as well as lend a certain sensual benefit to the recipient.

Now, one Michigan State University researcher suggests that foot massage is so powerful it may even be able to ease the symptoms of breast cancer in women. This MSU study was the first large-scale, randomized study of reflexology as a complement to standard cancer treatments, and was published in the latest issue of Oncology Nursing Forum.

“It´s always been assumed that it´s a nice comfort measure, but to this point we really have not, in a rigorous way, documented the benefits,” Gwen Wyatt, the lead author in the research, said in a statement. “This is the first step toward moving a complementary therapy from fringe care to mainstream care.”

All puns aside, reflexology is based on the idea that different points on our feet can trigger different responses in the rest of the body and even improve the function of some of its parts.

To conduct this study, Wyatt and team found 385 women with advanced-stage breast cancer who were currently undergoing chemotherapy or hormonal therapy. These women were then split into three groups. The first group received a specialized foot massage from a certified reflexologist. The second group received a basic foot rub, which was meant to act as a sort of placebo, while the third group received no sort of foot love and was given only standard medical care.

Wyatt and team asked these groups of women about their symptoms when they received the massages, then again at 5 and 11 weeks after foot treatment.

The group of women who received the specialized massage reported “significantly less” shortness of breath, a symptom which Wyatt says is common among breast cancer patients. Wyatt also suspects that because these women could breathe more easily, they were also able to perform daily tasks without losing their breath.

Wyatt expressed her surprise that the benefits from this specialized foot massage showed themselves more in the physical sense than the psychological sense.

“We didn´t get the change we might have expected with the emotional symptoms like anxiety and depression,” Wyatt said. “The most significant changes were documented with the physical symptoms.”

Another surprise from the results: the women who received only placebo foot massages reported significantly less fatigue after receiving the rub down, while the women who received the specialized massage didn´t experience the same effect. Based on these results, Wyatt suggests that a foot rub from anyone could serve as an effective and inexpensive form of therapy for lessening a cancer patient´s fatigue.

Wyatt also mentions that the reflexology technique didn´t reduce pain or nausea, though the women who received this treatment were already taking medication for these symptoms. Therefore, this part of the study was essentially rendered moot.

Scientists Develop Gel-Based Sponge To Effectively Deliver Cells And Drugs

Connie K. Ho for redOrbit.com — Your Universe Online

Bioengineers from Harvard recently revealed that they have been able to develop injectable gel-based sponges that can deliver cells and drugs and can be molded into virtually any size or shape.

In particular, the sponge is able to revert to its original shape once inside the body, where it releases its cargo of drugs or stem cells. Researchers believe that the material could be used for therapeutic purposes as a prefabricated healing kit.

“What we´ve created is a three-dimensional structure that you could use to influence the cells in the tissue surrounding it and perhaps promote tissue formation,” noted principal investigator David J. Mooney, a professor of bioengineering at the Harvard School of Engineering and Applied Sciences (SEAS) and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard, in a prepared statement.

Using the injectable sponge, the researchers were able to show that live cells can be delivered intact through a small-bore needle. The sponge, made of alginate, a seaweed-based jelly, can also carry large and small proteins or drugs, to be released later inside the body. Usefully, the sponge does not need to be surgically implanted.

“The simplest application is when you want bulking,” continued Mooney in the statement. “If you want to introduce some material into the body to replace tissue that´s been lost or that is deficient, this would be ideal. In other situations, you could use it to transplant stem cells if you´re trying to promote tissue regeneration, or you might want to transplant immune cells, if you´re looking at immunotherapy.”

In order to develop the sponge, the researchers used cryogelation, a process in which the water in an alginate solution is frozen, forming pure ice crystals. When the ice crystals melt, they leave behind a network of pores and, if done correctly, a strong and compressible gel. With these various facets, the “cryogel” is considered unique in the biomedical engineering field.

“These injectable cryogels will be especially useful for a number of clinical applications including cell therapy, tissue engineering, dermal filler in cosmetics, drug delivery, and scaffold-based immunotherapy,” commented the study´s lead author Sidi Bencherif, a postdoctoral research associate in Mooney´s lab at SEAS and at the Wyss Institute, in the statement. “Furthermore, the ability of these materials to reassume specific, pre-defined shapes after injection is likely to be useful in applications such as tissue patches where one desires a patch of a specific size and shape, and when one desires to fill a large defect site with multiple smaller objects. These could pack in such a manner to leave voids that enhance diffusion transport to and from the objects and the host, and promote vascularization around each object.”

In moving forward with the project, the researchers plan to experiment with the degradation rate of the bioscaffold; they are interested in having the sponge break down at the same rate that new tissue grows to replace it. A patent application has been filed for the device, and Harvard´s Office of Technology is looking into commercialization and licensing options.

The findings on this new biocompatible technology were recently published in the Proceedings of the National Academy of Sciences (PNAS).

Image 2 (below): Left: A fully collapsed square-shaped cryogel rapidly regains its original memorized shape, size, and volume upon hydration. Right: Photos show the placement of a cryogel inside a 1-mL syringe, and the recovery of a square gel after injection through a normal 16-gauge needle. Images courtesy of Sidi Bencherif.

Some Apparently Vegetative Patients Are Aware And Can Follow Commands

April Flowers for redOrbit.com – Your Universe Online

Alex Seaman is 20 years old, and for the last year and a half, he’s been a patient at the Royal Hospital for Neuro-disability. He is awake and even has his eyes open at times, but Alex has no apparent awareness and is unable to talk or respond with his body.

Alex suffered a severe head injury last April. He had been out celebrating a friend’s 18th birthday and missed his bus stop. The bus was still moving when he jumped off and hit his head.

“What I´d like to know is, if he recognizes us, if he is aware and if he´s happy,” said his mother, Sally Seaman, to UK reporter Fergus Walsh.

Alex has had several bouts of critical illness in the 18 months he’s been in the hospital, including pneumonia and infection, making it impossible for caregivers to make a full assessment of his injuries. They believe he could be vegetative — which means he is awake, but with no awareness of himself or the outside world.

Alex and his family have been given a rare opportunity for Alex to be part of a research project at Addenbrooke’s Hospital, Cambridge. The study will discover if Alex has a functioning mind trapped in an unresponsive body. Using functional magnetic resonance imaging (fMRI), a team of Cambridge University neuroscientists at the Wolfson Brain Imaging Centre has been looking for hidden awareness in otherwise unresponsive patients.

Increased blood flow reveals which parts of the brain are active when someone is thinking, and this increased blood flow is measurable using fMRI.

BBC’s Panorama program has followed Alex and a number of other young men with severe brain injuries in Britain and Canada for the past year. The patients were asked to imagine playing tennis or walking around their house. Such imaginings, in healthy patients, produce two distinct patterns of brain activity which the scanner can pinpoint.
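
As a rough illustration of how such a readout can work — a simplified sketch, not the actual analysis pipeline used by the researchers — deciding which task a patient imagined can be reduced to comparing average signal change in two regions of interest. The region roles and the threshold below are assumptions for illustration.

    # Simplified sketch: classify imagined tennis vs. spatial navigation
    # by comparing mean fMRI signal change in two regions of interest.
    # The ROI roles and the 0.1 threshold are illustrative assumptions.
    import numpy as np

    def decode_imagery(motor_roi, spatial_roi, threshold=0.1):
        """Each argument: array of percent signal change, one value per scan."""
        motor, spatial = np.mean(motor_roi), np.mean(spatial_roi)
        if abs(motor - spatial) < threshold:
            return "no clear response"
        return "tennis imagery" if motor > spatial else "navigation imagery"

    # Example with simulated data in which motor-area activity dominates.
    rng = np.random.default_rng(0)
    motor = 0.5 + 0.1 * rng.standard_normal(30)
    spatial = 0.1 + 0.1 * rng.standard_normal(30)
    print(decode_imagery(motor, spatial))  # expected: "tennis imagery"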

Professor Adrian Owen published research two years ago showing nearly one in five apparently vegetative patients were aware and able to follow commands in the scanner.

Alex responds and can perform both the tennis and house tasks, against all odds. Alex was also shown photographs of faces and buildings. These images were flashed on a screen, and the scan results suggest Alex may be able to recognize the faces of his family and his girlfriend, Jess.

His mother says, “It gives us something to work with, and shows you should never give up, no matter how bleak it looks.”

Another patient in the study, Scott Routley, is in Canada where Professor Owen is currently based. Scott, 39, was injured in a car crash 12 years ago, and has been unresponsive ever since. Scott’s confirmation, under the scanner, that he feels no pain is a breakthrough event. It is the first time any brain-injured patient has been asked to answer something clinically relevant to their care.

“Scott has been able to show he has a conscious, thinking mind. We have scanned him several times and his pattern of brain activity shows he is clearly choosing to answer our questions. We believe he knows who and where he is,” Professor Owen, who led the study at the Brain and Mind Institute of Western Ontario, told Walsh. Owen says that medical textbooks might need rewriting based on Scott and his ability to answer questions.

Prof Bryan Young at University Hospital, London, has been Scott’s neurologist for a decade. Young said the scan results overturned all the behavioral assessments that had been made over the years.

“I was impressed and amazed that he was able to show these cognitive responses. He had the clinical picture of a typical vegetative patient and showed no spontaneous movements that looked meaningful,” he told BBC News.

Even after Scott responded in the scanner, observational assessments have continued to suggest that he is vegetative.

Scott’s responsiveness is not really a surprise to his parents, Anne and Jim, though. They were already convinced that Scott sometimes responded by lifting his thumb or moving his eyes. But the sure knowledge that he can answer questions raises fears.

“In the back of your mind you’re always wondering is he happy?” says Anne. “Does he want to go on with his life? Not that we’d do anything to stop that. We’ll always be there for him.”

Whether the Routleys would do anything or not, the question has a great deal of relevance for patients in Britain. More than 40 vegetative patients have been allowed to die over the last two decades through the withdrawal of their feeding tubes. Such actions have to be approved by the courts. The question is whether this technology could change such decisions in the future.

Professor Owen does not see his study as a pathway for allowing patients to decide their fate.

“Just because a patient can answer yes and no questions doesn’t necessarily mean they have the cognitive faculties, the understanding, to decide whether to live or die.”

The families involved in Owen’s study have no intention of giving up on their sons. Miraculous events do happen, sometimes.

Stewart Newman was released earlier this year from RHN. Newman was injured in a car accident five years ago. Today, he communicates using a letter-board and by giving a thumbs-up, accompanied by a big smile. When asked what it was like to be trapped in his body, Newman replied, “I would scream at a wall.”

Another patient, Steven Graham, demonstrated that he had created new memories since his brain injury. Asked if his sister had a daughter, Graham answered yes. His niece was born after his car crash five years previously.

Not every patient was able to respond, however. At least one patient scanned with fMRI showed no sign of awareness at all.

This study might give a voice to these patients, a voice they have been denied by conventional medicine.

As Alex’s mother, Sally, said to him: “The tests show you are in there and we’ve just got to fight to get you back out again.”

Prof Owen said, “Asking a patient something important to them has been our aim for many years. In future, we could ask what we could do to improve their quality of life. It could be simple things like the entertainment we provide or the times of day they are washed and fed.”

Researchers: Bacteria Becoming More Resistant To Antibiotics

Lee Rannals for redOrbit.com — Your Universe Online

New research is showing that a certain type of bacteria is becoming more resistant to antibiotic treatments.

Extending the Cure (ETC) reported that urinary tract infection (UTI), the second most common infection in the US, is becoming harder to treat with antibiotics.

ETC, a project of the Center for Disease Dynamics, Economics & Policy, found the available arsenal of drugs used to treat UTIs is losing its effectiveness, with the overall share of resistant bacteria increasing by over 30 percent between 1999 and 2010.

UTIs account for about 8.6 million visits to health care providers each year, according to the Centers for Disease Control and Prevention (CDC). Over half of all US women will get a UTI in their lifetime.

“Without proper antibiotic treatment, UTIs can turn into bloodstream infections, which are much more serious and can be life-threatening,” Ramanan Laxminarayan, director of Extending the Cure, said in a statement. “These findings are especially disturbing because there are few new antibiotics to replace the ones that are becoming less effective. New drug development needs to target the types of drug-resistant bacteria that cause these infections,” he said.

Researchers found negative trends that suggest high levels of antibiotic overuse in the Southeastern region of the US between 1999 and 2010. Unnecessary use makes antibiotics less effective in fighting off infections because microbes become more adept at surviving treatment.

ETC found the burden of antibiotic resistance for urinary tract infections was highest in the Southeast, particularly in the East South Central and South Atlantic states. States in New England and the Pacific regions of the country had lower levels of resistance.

Previous work by ETC showed these regions are among the most intensive users of antibiotics, which speeds up the development of resistant strains of the bacteria causing these infections.

The team of researchers found significant regional variation and alarming regional trends in the use of antibiotics between 1999 and 2010. Since 1999, the rate of antibiotic prescriptions filled nationwide has dropped by 17 percent. However, high-consumption states are lagging in this trend, showing the smallest decreases in prescriptions.

ETC said that in 2010, the five states with the highest rates of antibiotic use in the nation were Kentucky, West Virginia, Tennessee, Mississippi, and Louisiana. The project’s maps also show higher-than-average use of antibiotics in other regions of the country.

It also found the five states in 2010 with the lowest antibiotic use in the nation were Alaska, Hawaii, California, Oregon, and Washington.

“While nationally, people are starting to use antibiotics more judiciously, the new findings also show the message might not be reaching everyone. People continue to consume antibiotics at much higher rates in certain parts of the country, and the problem appears to be getting worse,” Laxminarayan said in the statement.

He said they hope public health officials and health care leaders will be able to use ResistanceMap and the Drug Resistance Index to better target their education efforts to reduce inappropriate use.

Patients who live in remote areas may want antibiotics for a cold or the flu, which are viral infections that antibiotics cannot treat. These people have infrequent access to their doctor and want to ensure they get a “cure” on their visit, according to ETC.

ETC used its Drug Resistance Index tool to conduct its analysis of drug resistance. This tool aggregates information about resistance trends and antibiotic use into a single measure. They populated the index with data from The Surveillance Network, which is a database that contains samples of millions of bacterial cultures pulled from laboratories nationwide.
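
The index has been described as working much like a price index. As a loose, hypothetical sketch (the drug names, resistance rates, and use shares below are invented, and the real methodology is more involved), the core idea is a use-weighted average of resistance rates:

    # Hypothetical sketch of a use-weighted resistance index. Each drug's
    # resistance rate is weighted by its share of total prescribing, so a
    # heavily used drug with rising resistance moves the index the most.
    # Drug names and numbers are invented for illustration.

    resistance = {"drug_a": 0.24, "drug_b": 0.21, "drug_c": 0.02}
    use_share = {"drug_a": 0.45, "drug_b": 0.35, "drug_c": 0.20}

    def drug_resistance_index(resistance, use_share):
        """Weighted average of resistance rates, weighted by drug use."""
        return sum(resistance[d] * use_share[d] for d in resistance)

    print(drug_resistance_index(resistance, use_share))  # about 0.19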

Researchers analyzed the number of prescriptions filled in US retail pharmacies using data from IMS Health during their study.

Human Eye Gives Visionary Design For New, More Natural Lens Technology

Optical Society of America

Drawing heavily upon nature for inspiration, a team of researchers has created a new artificial lens that is nearly identical to the natural lens of the human eye. This innovative lens, which is made up of thousands of nanoscale polymer layers, may one day provide a more natural performance in implantable lenses to replace damaged or diseased human eye lenses, as well as consumer vision products; it also may lead to superior ground and aerial surveillance technology.

This work, which the Case Western Reserve University, Rose-Hulman Institute of Technology, U.S. Naval Research Laboratory, and PolymerPlus team describes in the Optical Society’s (OSA) open-access journal Optics Express, also provides a new material approach for fabricating synthetic polymer lenses.

The fundamental technology behind this new lens is called “GRIN” or gradient refractive index optics. In a GRIN lens, light gets bent, or refracted, by varying degrees as it passes through the material. This is in contrast to traditional lenses, like those found in optical telescopes and microscopes, which use their surface shape and a single index of refraction to bend light one way or another.

“The human eye is a GRIN lens,” said Michael Ponting, polymer scientist and president of PolymerPlus, an Ohio-based Case Western Reserve spinoff launched in 2010. “As light passes from the front of the human eye lens to the back, light rays are refracted by varying degrees. It’s a very efficient means of controlling the pathway of light without relying on complicated optics, and one that we attempted to mimic.”

The first steps along this line were taken by other researchers[1, 2] and resulted in a lens design for an aging human eye, but at the time the technology did not exist to replicate the eye’s gradual variation in refractive index.

The research team’s new approach was to follow nature’s example and build a lens by stacking thousands and thousands of nanoscale layers, each with slightly different optical properties, producing a lens whose refractive index varies gradually across its thickness.
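
One way to see why thousands of thin layers can stand in for a smooth gradient is to trace a ray with Snell’s law applied at every interface: each step bends the ray only slightly, and the stack as a whole approximates the continuous bending a true GRIN medium would produce. A minimal sketch, with the layer count and index values chosen purely for illustration (they are not the team’s design parameters):

    import math

    # A ray crossing a stack of thin layers with a gradually rising
    # refractive index bends slightly at each interface, following
    # Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out).
    # 5,000 layers spanning an illustrative index range of 1.40-1.50.

    n_layers = 5000
    indices = [1.40 + 0.10 * i / (n_layers - 1) for i in range(n_layers)]

    theta = math.radians(30.0)  # entry angle from the layer normal
    n_in = 1.0                  # ray enters from air
    for n_out in indices:
        s = max(-1.0, min(1.0, n_in * math.sin(theta) / n_out))
        theta = math.asin(s)
        n_in = n_out

    print(f"angle in the final layer: {math.degrees(theta):.2f} degrees")  # ~19.47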

“Applying naturally occurring material architectures, similar to those found in the layers of butterfly wing scales, human tendons, and even in the human eye, to multilayered plastic systems has enabled discoveries and products with enhanced mechanical strength, novel reflective properties, and optics with enhanced power,” explains Ponting.

To make the layers for the lens, the team used a multilayer-film coextrusion technique (a common method used to produce multilayer structures). This fabrication technique allows each layer to have a unique refractive index that can then be laminated and shaped into GRIN optics.

It also provides the freedom to stack any combination of the unique refractive-index nanolayered films. This flexibility is significant: it enabled the fabrication of GRIN optics previously unattainable through other fabrication techniques.

GRIN optics may find use in miniaturized medical imaging devices or implantable lenses. “A copy of the human eye lens is a first step toward demonstrating the capabilities, eventual biocompatible and possibly deformable material systems necessary to improve the current technology used in optical implants,” Ponting says.

Current generation intraocular replacement lenses, like those used to treat cataracts, use their shape to focus light to a precise prescription, much like contacts or eyeglasses. Unfortunately, intraocular lenses never achieve the same performance as natural lenses because they lack the ability to incrementally change the refraction of light. Such single-refraction replacement lenses can create aberrations and other unwanted optical effects.

The added power of GRIN optics also enables optical systems with fewer components, which is important for consumer vision products and for ground- and aerial-based military surveillance products.

This technology has already moved from the research labs of Case Western Reserve to PolymerPlus for commercialization. “Prototype and small batch fabrication facilities exist and we’re working toward selecting early adoption applications for nanolayered GRIN technology in commercial devices,” notes Ponting.

RedOrbit Exclusive Interview: Dr. Caspar Addyman, Birkbeck, University of London, Centre for Brain and Cognitive Development

Jedidiah Becker for redOrbit.com — Your Universe Online

While our brains seem programmed to find baby laughter to be one of the most pleasant sounds imaginable, scientists at Birkbeck, University of London believe there’s also a lot that those adorable little coos and giggles can teach us about the early development of the human brain.

Led by Dr. Caspar Addyman of the university’s world-renowned BabyLab, a team of researchers is preparing to embark on a study dubbed The Baby Laughter Project. The team plans to study the laughter of hundreds of babies under the age of two and a half in hopes of getting a closer look at how the infant brain works as well as how its cognitive abilities change over time.

Dr. Addyman recently spoke with redOrbit about the importance of baby laughter in brain development and what his team hopes to learn from studying it.

Read the original article “Baby Laughter Project Aims To Understand Cognitive Development” first.

RO: In very broad strokes, can you describe what we already know about baby laughter and what it can tell us about a child’s cognitive development?

Addyman: The short answer is that we don’t know much. This is largely because laughter is a tricky subject to study in the laboratory. People have been speculating about the causes and purposes of early laughter for a long time, but there has been very little systematic work. Surprisingly, one of the earliest researchers to take this topic seriously was Charles Darwin. He published a paper on his careful observations of his infant son Doddy. Darwin drew parallels between infant laughter and the playfulness of puppies and kittens.

In the 1940s Jean Piaget, the father of developmental psychology, took up this idea. Like Darwin, Piaget observed his own children and thought laughter and playfulness were signs of cognitive mastery. According to his theory, learning in early childhood is a continual cycle of accommodation and assimilation. Accommodation is the serious and effortful adaptation of the child’s mental model to new facts about the world. Assimilation is the pleasurable experimentation and exploration that takes place in light of this new knowledge. A baby in a phase of assimilation will laugh and smile at his or her newfound skill. In the process, they are likely to discover something new and provoking, prompting a further round of accommodation.

By Piaget’s theory, laughter should track cognitive development. But we don’t know if this theory is true. There was some work in the 1970s, but it was limited and inconclusive. By conducting a large global survey of what makes babies laugh, we first hope to establish that babies do in fact laugh as they are learning, and then use this as a new window onto what we already know about early cognitive development.

RO: What are some of the specific aspects of cognitive development that you and your team intend to study in the Baby Laughter Project?

Addyman: We will be looking at baby laughter from both a social and a cognitive perspective. Can it tell us anything new about infants’ social interactions, and can we find evidence that laughter tracks cognitive development? Importantly, we will be looking at laughter across the whole first few years of life to see what differences there are between babies of different ages.

There are a lot of cognitive landmarks in the first two years of life. Babies must understand object permanence and basic physical principles like gravity and the solidity of (most) objects. They start to learn the meaning of social cues like eye gaze or pointing. They start to work out the meaning of nouns and verbs, and of more abstract concepts like ‘no’ or ‘all gone’. Developmental psychologists have lots of theories about how this development takes place and have conducted lots of experiments to demonstrate the approximate ages at which landmarks are reached. We hope that our research will provide evidence to corroborate these other findings. For instance, is knocking over blocks funniest for babies who are just establishing a naïve theory of gravity?

Obviously, we mustn’t overlook the fact that laughing is a highly social experience. Research on adult laughter finds that the majority of our laughter is provoked by social interaction rather than events that are inherently funny or amusing. Laughter acts like social glue. Babies can smile and laugh long before they can talk, and this is clearly important to their bonding with their caregivers. We will be looking at where, when and with whom babies laugh the most (and the least), and seeing if there are differences between babies. Does a baby’s laughter relate to other aspects of their temperament? Are there cultural differences?

RO: Can you give a preview of some of the specific types of experiments that will be included in the Baby Laughter Project?

Addyman: Initially, the project is collecting survey data from families with babies under two and a half. We have a detailed questionnaire that we would like parents to fill in. We are also encouraging friends and family to fill in short ‘field reports’ or send us videos documenting particular incidents that make babies laugh. The more data we get the better, so we’d encourage your readers to take part or tell their friends.

Depending on what we find in the survey, we may try to reproduce some laughter in the laboratory. For example, it might be instructive to examine the game of peek-a-boo. It is very popular with babies of all ages, but the chances are it is a very different game at different ages. For young babies, it is all about object permanence — they are pleasantly surprised when a parent reappears. As they get older and their memory develops, it becomes more an exercise in anticipation. Later still, the important and entertaining aspects of peek-a-boo might be related to turn-taking and social interaction. Sadly, it is very expensive to run experiments with babies, so bringing laughter to the laboratory would depend on getting the necessary funding.

RO: While happy babies are obviously more pleasant to work with than screaming babies, is there something about these developmental processes that makes laughter a particularly robust tool for studying them?

Addyman: If Piaget is right, then laughing babies are likely to be highly engaged with whatever is in front of them. This would be useful in the laboratory, as many experiments with babies are a difficult balancing act between fun and boredom. In the classic ‘habituation’ paradigm in infancy research, we show a baby the same type of thing over and over again (for example, a picture of a dog, then a picture of another dog, etc.) until he or she gets bored, then we change something and see if the baby perks up (if we show them a picture of a cat, for instance). The trouble is that babies might not perk up because they can’t tell the difference (to them the cat is just another furry blob). Alternatively, they could be bored by the whole situation, ignoring the screen to concentrate on pulling off their own socks. The more complex the question you are investigating, the harder it is to keep the baby engaged long enough to discover what we want them to attend to.
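
For readers curious how the “perking up” is scored, a common convention in infancy research is to call a baby habituated once looking time falls below some fraction, often half, of its starting level, and then to test for recovery with the novel stimulus. A toy sketch with invented looking times:

    # Toy sketch of a habituation criterion. Looking times (seconds per
    # trial) are invented. The infant sees the same category repeatedly;
    # once looking time drops below half the initial level, we switch to
    # a novel stimulus and check whether looking recovers ("perks up").

    looking_times = [9.0, 7.5, 6.1, 4.8, 3.9]  # same-category trials
    novel_look = 8.2                           # looking at the novel item

    baseline = sum(looking_times[:2]) / 2
    habituated = looking_times[-1] < 0.5 * baseline

    if habituated and novel_look > looking_times[-1]:
        print("Recovery: the baby noticed the change")
    elif habituated:
        print("No recovery: boredom, or no discrimination")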

Parental reports and funny YouTube videos are no substitute for controlled laboratory experiments. But they can give us more immediate and more convincing evidence that a baby understands a particular concept.

RO: You mention that laughter has been a “strangely neglected” feature of child behavior in previous studies on cognitive development. Do you have any idea as to why this might be?

Addyman: I think there are three reasons. Firstly, laughter is incredibly tricky to study in a scientific setting. Laughter is spontaneous, capricious and highly personal. Getting babies to laugh is relatively easy, but getting them to laugh on demand is just as hard as stand-up comedy. Secondly, it is very difficult to untangle the social and the cognitive aspects of laughter. Is this baby laughing at me or with me? Finally, I suspect that a lot of scientists would think that laughter isn’t an appropriate topic for ‘serious’ investigation. I disagree strongly, and I hope to prove that laughing babies really do get the joke.

Dr. Addyman, thanks very much for taking the time to have a chat with us. On behalf of the redOrbit team and our readership, we wish you the best of luck on the Baby Laughter Project and look forward to reading about your research once it’s complete.

Biography

Caspar Addyman is a Research Fellow at the Centre for Brain and Cognitive Development at Birkbeck, University of London (aka the Birkbeck Babylab). He received a BA in Mathematics from the University of Cambridge in 1996 and then spent nine years working in finance and banking. Fed up with corporate life, he completed a night-school BSc in psychology at Birkbeck, followed by a PhD in psychology, also from Birkbeck. Since graduating in 2009, he has been a postdoctoral researcher with appointments at Birkbeck and the University of Burgundy, France.

His research focuses on learning and development in the first year or two of life and combines connectionist models of behavior with empirical work with babies. His publications have looked at early abstract thought, statistical learning, language acquisition and time perception. His current main research interests are the development of our sense of time and the meaning of baby laughter. In a separate strand of work, he has been developing smartphone apps to track the cognitive and emotional effects of drug use. He runs the YourBrainonDrugs blog and developed the Boozerlyzer, a smartphone app that tracks the effects of alcohol. He is currently working to adapt the technology to track the effects of medication on Parkinson’s disease.

Marriage Threatened By Personality Traits

Alan McStravick for redOrbit.com – Your Universe Online

Nick Frye-Cox of the University of Missouri has recently published a study that details a particular threat to marriages and other intimate relationships. Frye-Cox is a doctoral student in the Department of Human Development and Family Studies. He and his fellow researchers contend that people who possess a certain dissociative personality trait unwittingly harm their personal relationships.

While communication can be challenging for any individuals who are intimate with one another, the trait in question, called alexithymia, is a further impediment to open communication between spouses or loved ones. The research team has concluded that when one partner in a relationship suffers from alexithymia, both can experience profound loneliness and a lack of intimate communication that can lead to poor marital quality.

A person with alexithymia very often appears to others as someone who is super-adjusted to reality, because they tend to think in very concrete, realistic and logical terms. Unfortunately, this dry thinking too often leads to the exclusion of emotional responses to problems.

Deficiencies can include problems identifying, describing and working with one’s own feelings, often accompanied by a lack of understanding of the feelings of others. Sufferers may also have difficulty distinguishing between feelings and the bodily sensations of emotional arousal, such as sweaty palms with nervousness or a hot flush with anger.

Another noteworthy characteristic of the condition is an inability to dream or fantasize, due to what researchers describe as a restricted imagination. Dreams that have been reported by sufferers tend to be very logical and realistic. Clinical experience suggests that it is the structural features of dreams, rather than the ability to recall them, that is an important signifier of alexithymia.

Many sufferers seem to present a contradiction to the above-mentioned characteristics because they can experience chronic dysphoria or break out into fits of crying or rage. However, when questioned by experienced professionals, it is usually revealed that they are incapable of describing their feelings, or that they are somewhat confused by questions about specific feelings.

According to the late Peter Sifneos, MD, it is a common misconception that individuals afflicted with alexithymia are totally unable to express emotions verbally or that they fail to recognize that they experience emotions at all. He noted that patients would often mention that they felt anxiety or depression. The distinguishing feature of the condition is their inability to elaborate beyond a few limited adjectives, like ‘happy’ or ‘unhappy’, when describing their feelings. It is this inability that contributes to an emotional detachment from themselves and creates difficulty when trying to connect with others.

“People with alexithymia have trouble relating to others and tend to become uncomfortable during conversations,” Frye-Cox said. “The typical alexithymic person is incredibly stoic. They like to avoid emotional topics and focus more on concrete, objective statements.”

Despite this seeming disconnect, sufferers of alexithymia often marry because they still feel the basic human need to belong. This need, like eating or sleeping, is simply fundamental, according to Frye-Cox.

“Once they are married, alexithymic people are likely to feel lonely and have difficulty communicating intimately, which appears to be related to lower marital quality,” Frye-Cox said. “People with alexithymia are always weighing the costs and benefits, so they can easily enter and exit relationships. They don’t think others can meet their needs, nor do they try to meet the needs of others.”

For this study, data was collected from both spouses in 155 heterosexual couples. Frye-Cox states that the proportion of alexithymic individuals in this sample, 7.5 percent of men and 6.5 percent of women, is consistent with the general population. Alexithymia often co-occurs with conditions on the autism spectrum. It is also found with post-traumatic stress disorder, eating and panic disorders, substance abuse and depression.

The study, entitled “Alexithymia and marital quality: The mediating roles of loneliness and intimate communication,” is to be published in the Journal of Family Psychology.

Colin Hesse, a co-author of the study and an assistant professor in the Department of Communication, has pointed out that his previous research suggests affectionate communication, such as hugging or touching, could benefit those who have high levels of alexithymia, allowing them to lead more fulfilling lives.

How The Brain Affects Quick Judgments In Social Settings

April Flowers for redOrbit.com – Your Universe Online

We make snap judgments about other people all the time, whether we like to admit it or not. In speed dating, this is especially true because we are deciding someone’s romantic potential in relatively few seconds. How we make those fast decisions is not very well understood, however.

A research team from the California Institute of Technology (Caltech) and Trinity College Dublin has found that people make such speed-dating decisions based on two factors that are related to activity in two distinct parts of the brain.

It’s not very surprising that the first factor affecting the number of dates a person gets is physical attractiveness. The second factor is a bit less obvious: people’s personal preferences, such as how compatible a potential partner may be.

The findings, published in the November 7 issue of the Journal of Neuroscience, are among the first to look at what happens in the brain when people make rapid-judgment decisions that carry real social consequences.

“Psychologists have known for some time that people can often make very rapid judgments about others based on limited information, such as appearance,” says John O’Doherty, professor of psychology and one of the paper’s coauthors. “However, very little has been known about how this might work in real social interactions with real consequences — such as when making decisions about whether to date someone or not. And almost nothing is known about how this type of rapid judgment is made by the brain.”

The study recruited 30 heterosexual males and females who were placed into a functional magnetic resonance imaging (fMRI) machine. They were shown pictures of potential dates of the opposite sex and given four seconds to rate, on a scale from 1 to 4, how much they would want to date that person. They were shown as many as 90 faces. Afterward, outside the fMRI, they were asked to rate the same faces again — this time on attractiveness and likeability, on a scale from 1 to 9.

The participants later took part in an actual speed dating event where they spent five minutes talking to some of the potential dates they had initially rated in the fMRI machine. They listed the ones they wanted to see again, and just like a real speed dating event, they were given each other’s contact information if there was a match.

Unsurprisingly, the team found that people who were highly rated on attractiveness were the ones who got the most date requests. Seeing an attractive face is associated with activity in a region of the brain known as the paracingulate cortex, a part of the dorsomedial prefrontal cortex (DMPFC). The DMPFC is an important area for cognitive control and decision making. In particular, the paracingulate cortex has been shown to be active when the brain is comparing options, like which face is more attractive.

Jeff Cooper, a former postdoctoral scholar in O’Doherty’s lab, said that the phenomenon was fairly consistent across all participants. This shows that nearly everyone considers physical attraction when judging a potential romantic partner, and that this judgment is correlated with activity in the paracingulate cortex.

Cooper says that’s not the only thing happening, though. More activation was shown in the rostromedial prefrontal cortex (RMPFC) when participants saw a person they wanted to date, but who was not rated as very desirable by everyone else. The RMPFC is part of the DMPFC, but sits farther in front than the paracingulate cortex and has been associated with consideration of people’s thoughts, comparisons of oneself to others, and perceptions of similarities with others. In addition to physical attractiveness, the study finds, people consider individual compatibility.

Good looks remained the most important factor in determining date requests; however, a person’s likeability — as perceived by other people — was also important. Likeability served as a tie-breaker: if two people were rated equally attractive, the more likeable one was more likely to be asked for a date.

“Our work shows for the first time that activity in two parts of the DMPFC may be very important for driving the snapshot judgments that we make all the time about other people,” O’Doherty says.

The researchers have not followed up on the dating couples, although Cooper says at least a few were still together after six weeks. The study focused solely on the neural mechanisms behind rapid judgments; the researchers were not concerned with the long-term romantic success of the dating event.

List Of Diseases Spread By Deer Tick Grows, Along With Their Range

From malaria-like problems to potentially fatal encephalitis, newly emerging tick-borne afflictions challenge scientists to improve diagnosis and surveillance

An emerging tick-borne disease that causes symptoms similar to malaria is expanding its range in areas of the northeast where it has become well-established, according to new research presented today at the annual meeting of the American Society of Tropical Medicine and Hygiene (ASTMH).

Researchers from the Yale School of Public Health reported that from 2000 to 2008, cases of babesiosis — which invades red blood cells and is carried by the same tick that transmits Lyme disease — expanded from 30 to 85 towns in Connecticut. Cases of the disease in Connecticut, where it was first reported in 1991, also have risen from 3 to about 100 per year.

The findings on babesiosis presented at the ASTMH annual meeting were accompanied by discussions of a range of other investigations into newly emerging tick-borne diseases, which include afflictions that can cause fatal encephalitis, an inflammation of the brain.

“Today’s findings underscore the shifting landscape of tick-borne diseases, whose rapid emergence can challenge the best efforts of science and medicine to diagnose, treat, and prevent their occurrence,” said Peter Krause, MD, a researcher at the Yale School of Public Health in New Haven, Connecticut.

ASTMH President James W. Kazura, MD, FASTMH, said: “This is a real-time illustration of the inter-connectedness of human and animal health that many people don’t often think about. Ticks are a major carrier for many human diseases and efforts like this offer timely information that is of regional and clinical importance.”

Lyme disease — with 20,000-30,000 cases reported each year in the United States — is still the best known example of a recently emerged tick-borne disease. But research points to a growing number of pathogens carried by the deer tick, all of which are expanding their range.

Malaria look-alike in United States

A prime example is babesiosis, which is caused by the parasite Babesia microti. It has similarities to malaria in that it invades and destroys red blood cells. In the United States, this parasite is the most common pathogen transmitted through blood transfusions.

Acute cases are commonly associated with fever, fatigue, chills, headache, sweats and muscle pain. Infection can be asymptomatic or severe, causing death in about 6 to 9 percent of patients hospitalized with the illness. If transmitted through a blood transfusion, the mortality rate is about 20 percent. However, if properly diagnosed, babesiosis generally is promptly cured with antibiotics.

Its range is expanding:

Krause’s colleague at Yale, Maria Diuk-Wasser, PhD, said that as Babesia has expanded its range, the rate of deer tick infection in some northern Connecticut towns has become similar to, or even higher than, the rates in coastal Connecticut or on highly endemic Nantucket Island, where about 10 percent of deer ticks carry the B. microti parasite.

The expansion of Babesia’s range in Connecticut follows a similar explosion of the parasite in New York’s Lower Hudson Valley, where the number of cases diagnosed in residents increased 20-fold from 2001 to 2008, from 6 cases to 119 cases per year.

Babesiosis is now considered endemic in Connecticut, Massachusetts, Minnesota, New Jersey, New York, Rhode Island, and Wisconsin. And cases have turned up in at least 8 other states, from Washington to northern California in the West and from Maine to Maryland in the East.

In a separate study, Krause, Diuk-Wasser, Durland Fish, MD, and colleagues found evidence that co-infection of mice with the Lyme disease and babesiosis pathogens appears to increase the transmission of Babesia microti and enhance its ability to become established in new areas.

They studied mice that had been deliberately infected with either one of the pathogens that cause the diseases — B. microti in the case of babesiosis and the bacterium B. burgdorferi for Lyme — or both at the same time. They allowed ticks to feed on the mice, and then each week over a six-week period they measured the percentage of ticks infected with each pathogen. They found ticks that fed on the mice infected with both the Lyme and babesiosis pathogens were more likely to be carrying Babesia — and at higher concentrations — than ticks that fed on the mice infected only with the babesiosis parasite.

“This suggests that Lyme disease is somehow intensifying transmission of babesiosis,” Krause said.

Encephalitis-causing Ticks Emerging in Northeast

Marc El Khoury, MD, with New York Medical College in Valhalla, New York, reported on two related diseases: deer tick virus, which, as its name suggests, is carried by the hard-bodied deer tick, and Powassan virus (POWV), which is carried by a soft-bodied tick that feeds on groundhogs. The two viruses share a common ancestor and are difficult to tell apart in standard antibody tests.

Until recently, however, deer tick virus was not considered a threat to human health. The first clue that deer tick virus could cause human disease came in 2001 when deer tick virus RNA, taken from the brain of a man who died in 1997 shortly after a presumed Powassan encephalitis infection, was sequenced.

Now, El Khoury reports that, in Lyme-endemic areas, many, if not all, cases previously diagnosed as POWV are likely deer tick virus. Furthermore, the number of cases appears to be rising rapidly. Between 1958 and 2003 — a span of 45 years — only about 40 cases of POWV were reported in the United States and Canada. Then, from 2008 to 2012, 21 cases were reported from Wisconsin and Minnesota, and 12 cases from New York State.

“Almost all of these cases are in Lyme country, where humans are much more likely to be preyed upon by deer ticks carrying deer tick virus than ticks carrying Powassan virus,” El Khoury said. “Now it appears that in Lyme-endemic areas, people can not only get Lyme disease or babesiosis, but also a deer tick virus-related meningoencephalitis.”

Many infections are probably mild or asymptomatic. But more severe infections can progress to encephalitis, which can have a case fatality rate of up to 15 percent and cause permanent nerve or brain damage in about 50 percent of diagnosed cases. Powassan virus infections (that may in fact be deer tick virus) have been reported in Pennsylvania, New Jersey, Massachusetts, New York, Connecticut, Maine, Vermont, Minnesota, and Wisconsin.

There’s Nothing Like Family

And that’s not all. Deer ticks are also known to transmit a bacterial disease known as HGA (human granulocytic anaplasmosis). Also known as ehrlichiosis, HGA has become the third most frequent vector-borne disease in North America and Europe, and is now emerging in Asia, according to J. Stephen Dumler, MD, at Johns Hopkins University School of Medicine in Baltimore, Maryland.

HGA attacks white blood cells, and while milder forms cause fever and muscle pain, it can also cause serious disease and immune system malfunction that can lead to opportunistic infections. It is related to Rocky Mountain spotted fever (transmitted by another tick species) and typhus (transmitted by lice).

HGA’s rapid spread has been abetted by an expanded family of deer tick relatives, with different, closely related tick species carrying the disease in the Western United States, Europe and Asia, Dumler said. But as in the case of POWV and deer tick virus, limited information can sometimes lead to incorrect conclusions when it comes to the growing menagerie of tick-borne pathogens.

Dumler reported on an unusual outbreak of life-threatening HGA in China between 2007 and 2010 that affected hundreds of patients. But when scientists looked more closely, scrutinizing patients’ blood for foreign DNA and sequencing whatever they found, the culprit was identified not as HGA but as a novel tick-borne virus — one with a 30 percent case fatality rate. And just this summer, a novel, closely related and dangerous tick-borne virus infected two Missouri men.

Sam Telford, SD, MS, of Tufts University in Massachusetts noted that one of the biggest challenges posed by newly emerging tick-borne diseases is matching surveillance capabilities with the pace at which new diseases are discovered.

“We increasingly need to apply the most sophisticated genetic tools to identify the numerous new tick-borne microbes that have the theoretical capacity to infect humans,” Telford said. “Only by raising awareness among health professionals of what to look for, publishing case reports with good laboratory details, and doing good epidemiology will we be able to truly understand and appropriately respond to emerging disease threats.”

New Study Demonstrates Lasting Emotional Benefits of Meditation

April Flowers for redOrbit.com – Your Universe Online

Meditation has been part of the human experience for at least 5,000 years. Our first written records of the ancient art are found in Indian scriptures called tantras. Around 2,500 years ago, Siddhartha Gautama, commonly called Buddha, began teaching meditation as a road to enlightenment. However, it wasn’t until the 1960s that Western professors and researchers began studying the effects of meditation in earnest.

Participating in an 8-week meditation training program can have measurable effects on how the brain functions even when someone is not actively meditating, according to a new study led by researchers from the Massachusetts General Hospital (MGH) and Boston University (BU). The findings, published in the latest issue of Frontiers in Human Neuroscience, show differences in those effects based on the specific type of meditation practiced.

“The two different types of meditation training our study participants completed yielded some differences in the response of the amygdala — a part of the brain known for decades to be important for emotion — to images with emotional content,” says Gaëlle Desbordes, a PhD research fellow at the Athinoula A. Martinos Center for Biomedical Imaging at MGH and at the BU Center for Computational Neuroscience and Neural Technology.

“This is the first time that meditation training has been shown to affect emotional processing in the brain outside of a meditative state.”

According to previous studies, meditation training improves practitioners’ emotional regulation by reducing activity in the amygdala — a structure at the base of the brain that is known to have a role in processing memory and emotion. Those neuroimaging studies, however, observed such changes only while study participants were meditating. The current study tests a new hypothesis — that meditation training could also produce a generalized reduction in amygdala response to emotional stimuli. The team measured this response using functional magnetic resonance imaging (fMRI).

The participants in this study were enrolled in a larger investigation at Emory University into the effects of two forms of meditation. Healthy adults who had never meditated before participated in 8-week courses in one of the two forms — mindful attention meditation and compassion meditation. Mindful attention, the form most commonly studied, focuses on developing attention and awareness of breathing, thoughts and emotions. Compassion meditation, on the other hand, includes methods designed to develop loving kindness and compassion for oneself and others. A third group, serving as a control, participated in an 8-week health education course.

Three weeks before and three weeks after the training, 12 participants from each group underwent fMRI in Boston at the Martinos Center’s imaging facilities. Volunteers viewed a series of 216 images — 108 per session — of people in situations with either positive, negative, or neutral emotional content as their brains were scanned. The researchers did not mention meditation before the sessions, and confirmed afterwards that the participants had not meditated in the scanner. Assessments of depression and anxiety symptoms were completed before and after the training as well.

The results varied by group. Post-training brain scans of the mindful attention group showed a decrease in activation of the right amygdala in response to all images, supporting the hypothesis that meditation can improve emotional stability and response to stress. The compassion meditation group also showed a decrease in right amygdala activity, but only in response to positive or neutral images. Subjects who reported practicing compassion meditation outside of the training sessions showed an increase in right amygdala activity when viewing negative images — all of which depicted some form of human suffering. The left amygdala showed no significant changes in any group.

“We think these two forms of meditation cultivate different aspects of mind,” Desbordes explains. “Since compassion meditation is designed to enhance compassionate feelings, it makes sense that it could increase amygdala response to seeing people suffer. Increased amygdala activation was also correlated with decreased depression scores in the compassion meditation group, which suggests that having more compassion towards others may also be beneficial for oneself. Overall, these results are consistent with the overarching hypothesis that meditation may result in enduring, beneficial changes in brain function, especially in the area of emotional processing.”

For Thousands of Women, Costs Of Divorce Include Health Coverage

Alan McStravick for redOrbit.com – Your Universe Online

Each year, more than a million couples go through the Big D, and I don’t mean Dallas. While that old country song refers to divorce, for 115,000 women each year it might as well stand for ‘disruption of insurance coverage,’ according to a new study out of the University of Michigan.

The study found this loss of coverage is not temporary either. Following a divorce, the overall rate of women´s health insurance coverage can remain depressed for more than two years.

“Given that approximately one million divorces occur each year in the U.S., and that many women get health coverage through their husbands, the impact is quite substantial,” according to Bridget Lavelle, a University of Michigan PhD candidate in public policy and sociology. Lavelle is also the lead author of this current study, which appears in the December issue of the Journal of Health and Social Behavior.

Lavelle conducted the study along with University of Michigan sociologist Pamela Smock. In it, they analyzed nationally representative longitudinal data from 1996 to 2007 on women between the ages of 26 and 64. Their research was supported by the University of Michigan’s National Poverty Center.

Their research has shown that approximately 65,000 divorced women will lose all health insurance coverage in the months following the divorce. This is because they no longer qualify as dependents under their husbands´ policies or they have difficulty paying the premiums for other sources of private insurance. Despite the financial hardship incurred from divorce, many of these women do not qualify for Medicaid or other public insurance options.

While women who have their own employer-based coverage are less likely than other women to lose coverage, they are not completely immune from this possibility. This is, in no small part, due to financial losses that are associated with the divorce. These losses severely inhibit their ability to meet their ordinary expenses, much less expenses related to their share of an employer-sponsored health insurance plan.

“Women in moderate-income families face the greatest loss of insurance coverage,” says Lavelle. “They are more likely than higher-income women to lose private coverage and they have less access than lower-income women to public safety-net insurance programs.”

In the course of their research, Lavelle and Smock were able to determine that both full-time work and education could act as important buffers in protecting women from losing health insurance after divorce. However, as many women work only part-time or in jobs that simply don’t provide health insurance coverage, they found that the protective effects of employment are not universal.

“The current health care and insurance system in the U.S. is inadequate for a population in which multiple marital and job changes over the life course are not uncommon,” Lavelle and Smock conclude. “It remains to be seen how effective the Affordable Care Act will be in remedying the problem of insurance loss after divorce, but the law has provisions that may help substantially.”

Until the full implementation of the ACA, however, tens of thousands of women will lose their private health insurance. This detrimental blow is in addition to all of the other economic losses that typically accompany divorce.

The researchers, Lavelle and Smock, are also affiliated with the University of Michigan Institute for Social Research (ISR) and the College of Literature, Science, and the Arts (LSA).

Carbs At Night May Reduce Risk Of Diabetes And Cardiovascular Disease

Connie K. Ho for redOrbit.com — Your Universe Online

A new study found that limiting consumption of carbs to the dinner meal could help decrease the risk of diabetes and cardiovascular disease while boosting satiety.

The research was conducted by a group of investigators from the Hebrew University of Jerusalem. The scientists were interested in studying an experimental diet in which carbohydrates were consumed mostly at dinnertime. In particular, they wanted to see how this diet could impact individuals who were severely or morbidly obese. People are considered obese when their weight is greater than what is thought to be healthy for their height, and obesity is linked to an elevated risk of particular diseases and other health issues. For adults, obesity ranges are calculated with the body mass index (BMI), which factors in an individual’s height and weight.
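
For reference, BMI is weight divided by the square of height. A quick sketch of the standard calculation with conventional adult cutoffs (the example numbers are arbitrary):

    def bmi(weight_kg, height_m):
        """Body mass index: weight (kg) divided by height (m) squared."""
        return weight_kg / height_m ** 2

    def category(value):
        """Conventional adult BMI cutoffs; 40 and over is often termed
        morbid obesity, matching the population described above."""
        if value < 18.5:
            return "underweight"
        if value < 25:
            return "normal weight"
        if value < 30:
            return "overweight"
        if value < 40:
            return "obese"
        return "morbidly obese"

    print(category(bmi(120, 1.75)))  # BMI of about 39.2 -> "obese"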

“The idea came about from studies on Muslims during Ramadan, when they fast during the day and eat high-carbohydrate meals in the evening, that showed the secretion curve of leptin was changed,” noted Zecharia Madar, a professor at the Institute of Biochemistry, Food Science and Nutrition at Hebrew University of Jerusalem, in a prepared statement.

In the study, the researchers randomly placed 78 police officers in either an experimental group, which consumed carbohydrates only at dinnertime, or a control weight-loss group, in which carbohydrates were consumed throughout the day. Of these, 63 participants successfully completed the program, which spanned a six-month period. During the study, the scientists tracked the impact of the experimental diet on the hormones leptin, ghrelin, and adiponectin. Leptin is a satiety hormone whose blood levels are high at night but low during the day. Ghrelin, on the other hand, is the hunger hormone, with high levels during the day but low levels at night. Lastly, adiponectin is related to insulin resistance, the metabolic syndrome, and obesity; the curve of this hormone is especially low and flat in people considered obese.

Based on the results of the study, the researchers believe that there are benefits to consuming carbohydrates only during evening meals. These benefits can particularly be helpful for people who are at risk for developing cardiovascular disease or diabetes. The findings of the study were published in the journal Obesity as well as Nutrition, Metabolism & Cardiovascular Diseases.

“The findings lay the basis for a more appropriate dietary alternative for those people who have difficulty persisting in diets over time,” explained Madar in the statement. “The next step is to understand the mechanisms that led to the results obtained.”

The study comes at a particularly crucial time, as rates of obesity are rising throughout the world. According to the Centers for Disease Control and Prevention (CDC), over one-third of adults in the United States are considered obese. Obesity is also pricey for all involved — in 2008, medical costs related to obesity were estimated at $147 billion. To combat this issue, the CDC recommends that people eat more fruits and vegetables, consume fewer foods that are high in fat and sugar, drink more water and limit sugary drinks, and participate in physical activities such as a 10-minute walk three times a day, five days a week.

Girls With Stressed Mothers Grow Up To Be Stressed Teenagers

Connie K. Ho for redOrbit.com — Your Universe Online

Researchers from the University of Wisconsin-Madison recently discovered that early family stress during infancy could be related to changes that occur in the daily brain function and anxiety of teenage girls.

The data was pooled from a population study that looked at the relationship between stress and the developmental pathway of the brain. Female infants who lived in homes with stressed-out mothers were more likely than other children of the same age to have higher levels of cortisol, a stress hormone, in preschool. The researchers believe that these elevated cortisol levels and associated changes in brain activity could be preliminary factors for increased levels of anxiety at 18 years of age. While these results were seen in the girls in the study, the researchers did not see the same patterns in the male participants. The research findings were recently published in Nature Neuroscience.

“We wanted to understand how stress early in life impacts patterns of brain development which might lead to anxiety and depression,” explained the study’s first author Cory Burghy, a researcher at the Waisman Laboratory for Brain Imaging and Behavior at the University of Wisconsin-Madison, in a prepared statement. “Young girls who, as preschoolers, had heightened cortisol levels, go on to show lower brain connectivity in important neural pathways for emotion regulation — and that predicts symptoms of anxiety during adolescence.”

In the study, the team of investigators used brain scans to show that teenage girls whose mothers were stressed during the girls’ infancy displayed higher levels of stress themselves. The scans demonstrated weaker connections between the amygdala, a structure deep within the brain, and the ventromedial prefrontal cortex, the part of the brain that manages emotions.

“Merging field research and home observation with the latest laboratory measures really makes this study novel,” remarked Dr. Richard Davidson, a professor of psychology and psychiatry at the University of Wisconsin-Madison, in the statement. “This will pave the way to better understanding of how the brain develops, and could give us insight into ways to intervene when children are young.”

The current study was based on a previous project that looked at 570 children and families enrolled in the Wisconsin Study of Families and Work (WSFW), which examined the impact of day care, maternity leave, and other factors related to family stress. For the current study, the researchers scanned the brains of 57 subjects (28 females and 29 males) to better understand the connection between the amygdala and the prefrontal cortex. Consistent with the earlier results, the scientists found weaker connections in the females whose mothers had reported more stress, including symptoms such as depression, frustration, and marital conflict.

The team of investigators then questioned the teens about anxiety symptoms and the stress they were currently facing. They discovered a link with childhood stress: elevated levels of cortisol in childhood appeared to affect the development of the girls’ brains, producing weaker connections between the amygdala and the prefrontal cortex.

“Our findings raise questions on how boys and girls differ in the life impact of early stress,” continued Davidson, who also serves as a lab director at the university, in the statement. “We do know that women report higher levels of mood and anxiety disorders, and these sex-based differences are very pronounced, especially in adolescence.”

Based on the findings, the researchers believe the project helps explain some of the changes observed between the original study and the current one.

“Now that we are showing that early life stress and cortisol affect brain development,” concluded Marilyn Essex, a University of Wisconsin-Madison professor of psychiatry and co-director of the WSFW, in the statement, “it raises important questions about what we can do to better support young parents and families.”

Down Syndrome Researchers Remove Extra Copy Of Chromosome 21

Brett Smith for redOrbit.com – Your Universe Online

Geneticists from the University of Washington won a key victory in the battle against genetic diseases by successfully removing the extra chromosome 21 from cells derived from a person with Down syndrome, according to the team’s report in the journal Cell Stem Cell.

Down syndrome, also known as trisomy 21, is caused by an extra copy of chromosome 21. In addition to the tell-tale facial features such as an abnormally small chin and almond-shaped eyes, other common features of the genetic illness include heart defects, impaired intellect, premature aging and an increased risk of developing certain types of leukemia.

“We are certainly not proposing that the method we describe would lead to a treatment for Down syndrome,” said study co-author Dr. David Russell, from the University of Washington’s Department of Medicine. “What we are looking at is the possibility that medical scientists could create cell therapies for some of the blood-forming disorders that accompany Down syndrome.”

Such therapies could include treating leukemia in Down syndrome patients with genetically modified stem cells derived from their own cells, but without the extra chromosome. Stem cells could be taken from the patients’ bone marrow, doctors could remove the extra chromosome, and the healthy cells could then be grown and transplanted back into the patient.

Russell added that his team’s findings could not only lead to new treatments for symptoms of the disease, but could also help geneticists better understand the underlying causes of the disease and any potential treatments or preventions they might be able to pursue.

And beyond its potential relevance for Down syndrome research, the study could also have implications for stem cell research. Stem cell researchers often have difficulty dealing with trisomies — the presence of an extra, third copy of a chromosome — in their cell cultures, and the ability to prevent or efficiently remove them could prove invaluable.

To achieve the removal of the extra chromosome, the geneticists used a virus as a vehicle to deliver an engineered gene called TKNEO to a targeted spot on chromosome 21. The researchers chose the TKNEO gene because of its predicted response to specific growth conditions: when the modified stem cells were grown in conditions that selected against TKNEO, the cells most often survived by spontaneously losing the extra chromosome 21. The researchers also observed other survival mechanisms, including point mutations, gene silencing, and deletion of the TKNEO gene, but chromosome loss was the most common cause of survival.

Russell noted there were many challenges facing this groundbreaking research, but the hard work of his colleague, Dr. Li B. Li of the UW’s Department of Medicine, enabled the team to overcome them.

“Dr. Li’s achievement was a tour de force,” Russell said.

Russell also said the results of the study were important because they resulted in a clean removal of the extra chromosome that did not alter the rest of the genetic code in any way.

“Gene therapy researchers have to be careful that their approaches do not cause gene toxicity,” he said. “This means, for example, that removal of a chromosome must not break or rearrange the remaining genetic code. This method shouldn’t do that.”