Chuck Bednar for redOrbit.com – Your Universe Online
2014 Arctic sea ice coverage is the sixth lowest recorded since 1978, researchers from the NASA-supported National Snow and Ice Data Center (NSIDC) at the University of Colorado, Boulder revealed on Monday.
The region’s sea ice, which declined to its annual minimum on September 17, melted back from its maximum extent in March to a coverage area of 1.94 million square miles (5.02 million square kilometers). This year’s minimum was similar to last year’s and below the 1981-2010 average of 2.40 million square miles (6.22 million square km).
“The summer started off relatively cool, and lacked the big storms or persistent winds that can break up ice and increase melting,” Walter Meier, a research scientist at NASA’s Goddard Space Flight Center in Maryland, said in a statement. “Even with a relatively cool year, the ice is so much thinner than it used to be. It’s more susceptible to melting.”
During the summer, the Northwest Passage above Canada and Alaska remained ice-bound while a sliver of open water reaching beyond 85 degrees north latitude in the Laptev Sea near Siberia marked the farthest north open ocean had reached in over three decades, Meier and his research colleagues reported.
While Arctic ice coverage has increased in the two years since 2012’s record low summer, the researchers are quick to point out that this is not a sign the region’s conditions are returning to normal. This year’s minimum extent remains in line with the long-term downward trend, Meier said.
Overall, the Arctic Ocean is losing about 13 percent of its sea ice per decade. Extent measurements include areas that are at least 15 percent ice-covered, and the NASA-developed computer analysis used in the NSIDC report is based on data provided by NASA’s Nimbus 7 satellite (which was operational between 1978 and 1987) and the US Department of Defense’s Defense Meteorological Satellite Program (which has been active since 1987).
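For readers curious how that 15 percent threshold figures into “extent”: extent counts the full area of every grid cell whose ice concentration is at or above the cutoff, rather than weighting cells by their actual ice cover. Here is a minimal sketch of that calculation in Python, using invented grid values (the real NSIDC processing chain, data formats and polar grids are far more involved):

```python
import numpy as np

# Invented passive-microwave grid for illustration: fractional ice
# concentration per cell, and each cell's area in square kilometers.
concentration = np.array([[0.95, 0.40, 0.10],
                          [0.80, 0.14, 0.00],
                          [0.55, 0.20, 0.05]])
cell_area_km2 = np.full(concentration.shape, 625.0)  # e.g. 25 km x 25 km cells

# "Extent" counts the whole area of every cell at or above the 15% cutoff.
ice_mask = concentration >= 0.15
extent_km2 = cell_area_km2[ice_mask].sum()

# "Ice area," by contrast, weights each qualifying cell by its concentration.
area_km2 = (concentration * cell_area_km2)[ice_mask].sum()

print(f"extent: {extent_km2:.0f} km^2, area: {area_km2:.0f} km^2")
```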
The NSIDC emphasized that all Arctic sea ice extent figures were “preliminary,” and that “changing winds could still push the ice extent lower.” The organization said that it would issue a formal announcement in early October that would provide a complete analysis of “the possible causes behind this year’s ice conditions, particularly interesting aspects of the melt season, the set up going into the winter growth season ahead, and graphics comparing this year to the long-term record.”
[ Watch the Video: Arctic Sea Ice, Summer 2014 ]
Satellite observations of sea ice are just one of the methods used by NASA and the NSIDC to track changes in the Arctic region and their impact on climate, the US space agency said. For the past several years, NASA has also used Operation IceBridge flights to measure Arctic sea ice and ice sheets during the spring, and this month marked the launch of the Arctic Radiation – IceBridge Sea and Ice Experiment (ARISE) field campaign.
ARISE analyzes the relationship between retreating sea ice and the Arctic climate, while NASA said that “Earth’s vital signs” are regularly monitored using a fleet of satellites, airborne equipment and ground-based observatories. The goal is to combine long-term data and computer analysis tools “to better see how our planet is changing,” the agency added, and the information is shared with various institutions both in the US and all over the world.
BICEP Detection Of Gravitational Waves Called Into Question As Researchers Discover Potential New Way To Observe Them
Chuck Bednar for redOrbit.com – Your Universe Online
As one team of experts has published a study claiming to have discovered hints of gravitational waves in stars, another has released a new galactic dust map that casts doubts on previous research claiming to have detected such waves earlier this year.
In the first study, the authors demonstrated how these invisible ripples in the fabric of space and time could be observed by looking at the stars. Their new model proposes that a star oscillating at the same frequency as a gravitational wave will absorb energy from the wave and become brighter – an overlooked prediction of Einstein’s 1916 theory of general relativity which contradicts previous assumptions about the behavior of the waves.
They explained that gravitational waves are similar in nature to the sound waves emitted after an earthquake, except that the sources of these so-called space tremors are events like supernovae, binary neutron stars, or mergers of black holes and neutron stars. While scientists have long been confident that gravitational waves exist, the researchers noted that the waves have never been directly observed (though several experiments are currently attempting to do so).
One of the reasons that detecting these waves is so difficult is because they interact so weakly with matter, the research team explained. However, their new model suggests that these gravitational waves could actually have more of an impact on matter than previously realized, and that stars with oscillations (vibrations) matching their frequency can resonate and absorb a large amount of energy from the ripples.
“It’s like if you have a spring that’s vibrating at a particular frequency and you hit it at the same frequency, you’ll make the oscillation stronger. The same thing applies with gravitational waves,” Barry McKernan, a research associate in the Museum’s Department of Astrophysics and a professor at the Borough of Manhattan Community College, said in a statement.
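McKernan’s spring analogy maps onto the textbook driven, damped oscillator – a hedged illustration using standard physics, not the team’s actual stellar model. A mode with natural frequency $\omega_0$, damping $\gamma$ and driving amplitude $F_0/m$ settles into a steady-state oscillation of amplitude

\[
A(\omega) = \frac{F_0/m}{\sqrt{\left(\omega_0^2 - \omega^2\right)^2 + (\gamma\omega)^2}},
\]

which peaks sharply as the driving frequency $\omega$ (here, the passing gravitational wave) approaches the natural frequency $\omega_0$ (the star’s oscillation mode) – the resonance condition under which the star would absorb the most energy.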
If a star absorbs a tremendous pulse of energy this way, it can become temporarily energized, shining brighter than usual while it slowly discharges that energy. McKernan and his colleagues report that this effect could give scientists a new way to detect gravitational waves indirectly – by searching for a drop in the measured intensity of gravitational waves when a star oscillating at the right frequency passes in front of an energetic source.
In related research, a galactic dust map released on Monday by the European Planck space telescope team, and submitted to the journal Astronomy & Astrophysics (A&A), shows a significant amount of interstellar dust in the region of sky previously studied by the BICEP2 telescope when it reportedly detected gravitational waves earlier this year.
That dust, according to Ian O’Neill of Discovery News, may have obscured the primordial light in which the South Pole-based telescope purportedly detected the signal of gravitational waves. The discovery means that BICEP2’s detection of gravitational waves may have been nothing more than a false alarm.
“In a nutshell, last March, astrophysicists… announced the potentially historic discovery that their experiment had, for the first time, detected the signal of gravitational waves etched into the ancient ‘glow’ of the Big Bang – a ubiquitous radiation seen at the outermost reaches of the observable Universe known as the cosmic microwave background, or simply CMB,” explained O’Neill.
“The discovery of gravitational waves would be historic in itself, but the ramifications of seeing gravitational waves in the CMB would be far-ranging. These gravitational waves would have their origins just after the Big Bang during a rapid period of expansion known as ‘inflation,’” he added. “This would therefore provide captivating evidence for one of the leading theories of cosmic birth… In short, the discovery of ancient gravitational waves could tie up some of the most fundamental questions of the quantum and cosmological nature of our Universe.”
However, as BBC News science correspondent Jonathan Amos points out, the new Planck report describes the properties of dust polarization across a large fraction of the sky at intermediate and high galactic latitudes, including a portion of the field relevant to BICEP2.
Amos said the findings were “not encouraging” because there was “significantly more dust” in the so-called southern hole than the BICEP2 team had expected – indeed, most of the signal, if not all of it, could have been attributed to dust. Planck scientist Dr. Cécile Renault admitted to Amos that it was “possible,” but said their measurements had already accounted for a high degree of error.
“Even if the American and European approaches turn out to be unsuccessful this time, these groups will have pointed the way for future observations that are planned with superior technology,” the BBC News writer added. “Planck has actually now identified parts of the sky that have less dust than the area probed by BICEP.”
—–
Blood Test Could Help Identify Those At Risk For Psychosis
University of North Carolina Health Care
A study led by University of North Carolina at Chapel Hill researchers represents an important step forward in the accurate diagnosis of people who are experiencing the earliest stages of psychosis.
Psychosis includes hallucinations or delusions that define the development of severe mental disorders such as schizophrenia. Schizophrenia emerges in late adolescence and early adulthood and affects about 1 in every 100 people. In severe cases, the impact on a young person can be a life compromised, and the burden on family members can be almost as severe.
The study, published in the journal Schizophrenia Bulletin, reports preliminary results showing that a blood test, administered to psychiatric patients experiencing symptoms considered indicators of high psychosis risk, identified those who later went on to develop psychosis.
“The blood test included a selection of 15 measures of immune and hormonal system imbalances as well as evidence of oxidative stress,” said Diana O. Perkins, MD, MPH, professor of psychiatry in the UNC School of Medicine and corresponding author of the study. She is also medical director of UNC’s Outreach and Support Intervention Services (OASIS) program for schizophrenia.
“While further research is required before this blood test could be clinically available, these results provide evidence regarding the fundamental nature of schizophrenia, and point towards novel pathways that could be targets for preventative interventions,” Perkins said.
Clark D. Jeffries, PhD, bioinformatics scientist at the UNC-based Renaissance Computing Institute (RENCI), is a co-author of the study, which was conducted as part of the North American Prodrome Longitudinal Study (NAPLS), an international effort to understand risk factors and mechanisms for development of psychotic disorders.
“Modern, computer-based methods can readily discover seemingly clear patterns from nonsensical data,” said Jeffries. “Added to that, scientific results from studies of complex disorders like schizophrenia can be confounded by many hidden dependencies. Thus, stringent testing is necessary to build a useful classifier. We did that.”
The study concludes that the multiplex blood assay, if independently replicated and if integrated with studies of other classes of biomarkers, has the potential to be of high value in the clinical setting.
Classroom Intervention Can Help Shy Children Learn
A program that helps teachers modify their interactions with students based on an individual’s temperament helps shy children to become more engaged in their class work, and in turn, improves their math and critical thinking skills.
Led by NYU’s Steinhardt School of Culture, Education, and Human Development, the study offers an evidence-based intervention to help shy children, who are often at risk for poor academic achievement. The findings appear in the School Psychology Review.
Shy children are described as anxious, fearful, socially withdrawn, and isolated. In the classroom, they are less likely to seek attention from teachers and to be engaged with their peers. As a result, research shows that they may have difficulty in school, and teachers may perceive them as being lower in academic skills and intelligence than their more outgoing classmates.
“The needs of shy kids are important but often overlooked because they’re sitting quietly, while children with behavioral problems get more attention from teachers,” says Sandee McClowry, a professor in NYU Steinhardt’s Department of Applied Psychology and the study’s senior author. “It is important to get shy children engaged without overwhelming them.”
Shyness is one of four temperaments identified in INSIGHTS into Children’s Temperament, an intervention designed to help teachers and parents match environmental demands with an individual’s personality. The program provides a framework for appreciating and supporting differences in the personalities of children, rather than trying to change them. Participants in the program learn to recognize four temperaments: shy, social and eager to try, industrious, and high maintenance.
In the current study, the researchers evaluated whether INSIGHTS supports the academic development – specifically critical thinking, math and language skills – of children in urban, low-income schools. Nearly 350 children and their parents across 22 elementary schools were followed during kindergarten and across the transition into first grade. Half of the schools participating were randomized to INSIGHTS, while the other half, which served as the control group, participated in a supplemental after-school reading program.
“Kindergarten and first grade are big shifts for children, regardless of temperament. For example, teacher-student ratios are higher and classes are more structured. For shy kids, this transition is a particular challenge,” McClowry says.
The researchers were especially interested in what happens after summer break, as studies have shown that high-risk children’s skills decline over the summer while they are out of school. By providing children with extra support in the last part of kindergarten, the researchers hoped to sustain the students’ skills over the summer.
Over 10 weeks, teachers and parents in the INSIGHTS program learned how to recognize differences in children and support them in ways that are specific to their individual temperaments. During the same time period, children participated in INSIGHTS classroom activities, using puppets, flashcards, workbooks, and videotapes to help them solve daily dilemmas – for instance, having a substitute teacher or a play date at an unfamiliar house – and understand how individuals differ.
While all children enrolled in INSIGHTS showed improvements in academic skills, the effects were substantially greater for shy children. Shy children who participated in INSIGHTS had significant growth in critical thinking skills and stability in math skills over the transition from kindergarten to first grade, compared to their shy peers in the control group who declined in both areas.
The researchers observed no gains in language arts skills among shy kids from the INSIGHTS intervention compared to the control group, perhaps due to the benefits the children in the control group gained from the supplemental reading program.
“Our study supports creating an environment that makes shy children feel safe and respected in order to support their development,” said Erin O’Connor, an associate professor in the Department of Teaching and Learning at NYU Steinhardt and the study’s lead author. “We need to reframe our understanding of these children, because for the most part, shy children are not just going to ‘come out of their shell.'”
LEGO-Inspired Components Could Make Microfluidic System Development Cheaper, Easier
Chuck Bednar for redOrbit.com – Your Universe Online
Drawing inspiration from LEGO® building blocks, researchers have developed a new type of component that makes it possible to construct a 3D microfluidic system by simply snapping together small modules by hand.
According to the USC Viterbi School of Engineering team behind the breakthrough, these so-called “labs on a chip” are used by experts working in biotechnology, chemistry and other scientific fields to precisely manipulate small volumes of fluids for use in DNA analysis, pathogen detection, clinical diagnostic testing and synthetic chemistry.
These systems are typically built in a cleanroom on a two-dimensional surface using the same technology developed to produce integrated circuits for the electronics industry, they explained. This can be an expensive and time-consuming process, as arriving at a working design often requires researchers to design, assemble and test multiple versions of a device – a process that can take up to two weeks and cost thousands of dollars.
“You test your device and it never works the first time,” explained USC graduate student Krisna Bhargava. “If you’ve grown up to be an engineer or scientist, you’ve probably been influenced by LEGO® at some point in your childhood. I think every scientist has a secret fantasy that whatever they’re building will be as simple to assemble.”
Along with USC chemical engineering and materials science professor Noah Malmstadt and biomedical engineering graduate student Bryant Thompson, Bhargava, of the university’s Mork Family Department of Materials Science, set out to find a way to make the construction process simpler, less expensive and less time-consuming.
The study authors, whose work appears in Monday’s edition of the Proceedings of the National Academy of Sciences (PNAS), started by identifying the basic elements typically used in microfluidic systems. However, after spending some time separating the functions of the devices into standardized modular components, similar to how electrical engineers break down circuitry, they decided to consider a new approach.
“The founders of the microfluidics field took the same approach as the semiconductor industry: to try to pack in as much integrated structure as possible into a single chip,” Bhargava explained. “In electronics, this is important because a high density of transistors has many direct and indirect benefits for computation and signal processing.”
“In microfluidics, our concerns are not with bits and symbolic representations, but rather with the way fluids are routed, combined, mixed, and analyzed; there’s no need to stick with integrating more and more complex devices,” he added. So he and his colleagues instead borrowed a different idea from the electronics industry: building systems out of discrete, standardized components.
Bhargava’s team came up with the idea of 3D modular components that encapsulated the common elements of microfluidic systems, as well as a connector capable of attaching those individual components together. They devised computer models for eight modular fluidic and instrumentation components (MFICs), each of which would perform a simple task and would be about one cubic centimeter in size, or slightly smaller than a traditional six-sided die.
The study authors said that their work in developing these MFICs marks the first time that a microfluidic device has been broken down into individual components that can be assembled, disassembled and re-assembled repeatedly. They attribute their success to recent breakthroughs in high-resolution, micron-scale 3D printing technology.
“We got the parts back from our contract manufacturer and on the first try they worked out better than I could have dreamed. We were able to build a working microfluidic system that day, as simple as clicking LEGO® blocks together,” said Bhargava, whose work was funded in part by the National Institutes of Health (NIH).
“You pull out everything you think is going to work, you stick it together and you test it,” he continued. “If it doesn’t work, you pull part of it out, swap out some pieces and within a day you’ve probably come to a final design, and then you can seal the system together and make it permanent. You have a massive productivity gain and a huge cost advantage.”
“MFICs will vastly increase the productivity of a single grad student, postdoc, or lab tech by enabling them to build their own instruments right in the lab and automate their workflow, saving time and money,” added Malmstadt. “People have done great things with microfluidics technology, but these modular components require a lot less expertise to design and build a system. A move toward standardization will mean more people will use it, and the more you increase the size of the community, the better the tools will become.”
Benefits Of Telecommuting Are Greater For Some Workers, Study Finds
Phil Ciciora, Business & Law Editor, University of Illinois
Even in a hyperconnected world where laptops, phones, tablets and now even wristwatches are tethered to the Internet 24/7, employers are still wary about the performance and social costs imposed by employees who work remotely.
But a new study by a University of Illinois business professor says telecommuting yields positive effects for two important measures of employee performance, and it can even produce very strong positive effects under certain circumstances for some employees.
According to Ravi S. Gajendran, a professor of business administration at Illinois, telecommuting is positively associated with improvements in task- and context-based performance, which refers to an employee’s organizational citizenship behavior, including their contributions toward creating a positive, cooperative and friendly work environment.
“After Yahoo changed its telecommuting policy, this question of, ‘Is telecommuting good for performance?’ came to the fore,” he said. “At the time, there was a lot of debate about it, but there was very little evidence available. Well, now we have some evidence that says telecommuters are good performers as well as good co-workers on the job.”
To perform the study, Gajendran and co-authors David A. Harrison of the University of Texas at Austin and Kelly Delaney-Klinger of the University of Wisconsin at Whitewater developed a theoretical framework linking telecommuting to employee performance. They analyzed field data from 323 employees and 143 matched supervisors across a variety of organizations.
Their findings should quell any concerns from upper-level management about the coveted work arrangement, Gajendran said.
“Although we found that telecommuting’s positive effect was modest, even a small positive effect is a big deal, because a lot of employers assume the worst with working remotely,” Gajendran said. “Even if there were no effect at all – if the study found that telecommuting essentially did no harm, that it’s no different than being in the office – that in and of itself would be a finding.”
According to the study, telecommuters want to be seen as “good citizens” of the company in order to justify their flexible work arrangements.
“They feel compelled to go above and beyond to make their work presence more visible, to make themselves known as assets,” Gajendran said. “In fact, they almost overcompensate by being extra helpful, because they know in the back of their minds that their special arrangement could easily go away. So they give a little extra back to the organization.”
The extra effort could also be a genuine show of appreciation, Gajendran said. “Their thinking could be, ‘My boss is giving me something special, I’ve got to reciprocate and give a little back,’ ” he said. “Our data doesn’t tease that apart, but I imagine it’s possible. If you’re working remotely, you don’t want your co-workers to resent that arrangement. You want them to continue to think you’re helpful. You don’t want to be ‘out of sight, out of mind.’”
The study also found that allowing a worker who has a good relationship with their boss to telecommute doesn’t necessarily move the needle much in job performance.
“It doesn’t hurt performance; it remains the same,” Gajendran said. “It’s essentially flat. For those workers, it’s status quo.”
But if a worker doesn’t have a great relationship with their boss, it turns out that telecommuting actually works to improve their performance.
“When the employee-employer relationship is strained, and then the boss says, ‘OK, I’m going to allow you to work from home,’ it improves the employee’s performance, possibly because they feel more beholden toward their boss,” he said.
By contrast, if an employee has a great relationship with their boss, and their boss then gives them the option to telecommute, “it’s just one more perquisite for a star employee,” Gajendran said.
“But for someone who doesn’t have the greatest relationship with their supervisor, getting this special work arrangement is significant,” he said. “The employee is motivated to give back and work harder to ensure that arrangement doesn’t get taken away. So their performance actually gets better.”
Gajendran has previously studied the employer-employee relationship through the lens of “leader-member exchange,” which involves cultivating trust, loyalty, developmental feedback and support between a team leader and a team member.
Although it is more likely that managers would extend telecommuting privileges only to subordinates who rank high on the “leader-member exchange” (LMX) scale, telecommuting is likely to enhance the task and contextual performance of subordinates who rank low on the scale.
“It seems like a no-brainer that supervisors should grant telecommuting privileges to high LMX employees, to those who managers and supervisors trust and believe worthy of receiving special privileges,” he said. “But in light of evidence from our study, which suggests that telecommuting has an even greater positive effect on employees who don’t have the greatest relationship with their supervisors, eligibility policies may need to be rethought to ensure that low LMX employees also have the opportunity to access virtual work arrangements.”
The researchers also considered what happens to an employee’s “contextual performance” when workers are allowed to telecommute. Also known as organizational citizenship behavior, it encompasses everything from being a cooperative, helpful and considerate colleague to being a dedicated employee who works hard, takes initiative and follows organizational rules.
The researchers found that, under some circumstances, telecommuting can actually enhance that aspect of work.
“Apart from doing your job well, citizenship behavior is, ‘Are you helpful to others? Are you a dedicated member of the organization? Are you committed?’ ” Gajendran said. “All of those things are more difficult to demonstrate if you’re a telecommuter. But our research shows that telecommuting has positive effects not only for an employee’s task-based performance, but also for their contextual performance in the work environment itself.”
Although relatively widespread, telecommuting isn’t the norm in most workplaces, nor is it a perk that’s automatically granted to standout employees.
“An employee not only has to ask for it, they also have to be approved for it, so that whole process makes it seem special,” Gajendran said. “And when an employee is allowed to telecommute, they feel a debt of gratitude to the organization.”
But does it matter if everyone is getting the special treatment or if only a select few are allowed to telecommute?
“It turns out that if everyone is getting it, then it’s seen as less special, and enthusiasm about it wanes,” Gajendran said. “The employee sees it as a normal part of work life, so they don’t think it’s necessary to go above and beyond to justify it. But if it’s a perk that’s only given to a select group of people, then they think, ‘Hey, this is a big deal.’ The freedom and autonomy that comes with it becomes valued, and that’s more motivating, which drives up performance and thereby makes the employee a better organizational citizen.”
Gajendran cautions that the research does not weigh in on who and what type of business should allow telecommuting.
“We’re merely trying to say it all depends on the context in which it unfolds, and certain circumstances more than others dictate when it would be beneficial,” he said.
The study will appear in the journal Personnel Psychology.
Study Pinpoints Part Of The Brain Responsible For Slow Wave Sleep
Chuck Bednar for redOrbit.com – Your Universe Online
Researchers from Harvard Medical School and the University at Buffalo School of Medicine and Biomedical Sciences have discovered a region of the brain responsible for causing people to fall into a deep sleep.
This slumber-promoting circuit, which is located deep in the primitive brainstem, is only the second such “sleep node” ever discovered in the brains of mammals, the study authors said. In research published online last month in Nature Neuroscience, they explain how this region is not only capable of but also necessary for producing what is known as slow wave sleep (SWS) in humans.
By using genetically targeted activation and optogenetically based mapping to examine the brain’s circuitry, the researchers found that half of all sleep-promoting activity originates from a region of the brainstem known as the parafacial zone (PZ). The brainstem is a primordial part of the brain and is responsible for regulating the basic functions necessary for survival, including breathing, body temperature, blood pressure and heart rate.
“The close association of a sleep center with other regions that are critical for life highlights the evolutionary importance of sleep in the brain,” said Caroline E. Bass, assistant professor of Pharmacology and Toxicology in the University at Buffalo School of Medicine and Biomedical Sciences and a co-author on the recently published paper.
She and her colleagues found that a specific type of neuron in the PZ which produces the neurotransmitter gamma-aminobutyric acid (GABA) is responsible for producing SWS. Furthermore, using a set of innovative tools, they were able to precisely control those neurons remotely, essentially allowing them to turn the neurons on and off at will.
“These new molecular approaches allow unprecedented control over brain function at the cellular level,” said Christelle Ancelet of Harvard Medical School. “Before these tools were developed, we often used ‘electrical stimulation’ to activate a region, but the problem is that doing so stimulates everything the electrode touches and even surrounding areas it didn’t. It was a sledgehammer approach, when what we needed was a scalpel.”
“To get the precision required for these experiments, we introduced a virus into the PZ that expressed a ‘designer’ receptor on GABA neurons only but didn’t otherwise alter brain function,” added Patrick Fuller, assistant professor at Harvard and senior author on the Nature Neuroscience paper. “When we turned on the GABA neurons in the PZ, the animals quickly fell into a deep sleep without the use of sedatives or sleep aids.”
The research team, whose work was funded by the National Institutes of Health (NIH), said that the exact interactions between these neurons and other sleep and wake-promoting regions of the brain still need to be analyzed. However, they believe their findings could ultimately lead to the invention of new medications to treat insomnia and other sleep disorders, as well as the development of safer and more effective anesthetics.
“We are at a truly transformative point in neuroscience, where the use of designer genes gives us unprecedented ability to control the brain,” said Bass. “We can now answer fundamental questions of brain function, which have traditionally been beyond our reach, including the ‘why’ of sleep, one of the more enduring mysteries in the neurosciences.”
New Research Predicts 2014 CO2 Emissions In Excess Of 40 Billion Tons
Chuck Bednar for redOrbit.com – Your Universe Online
Carbon dioxide emissions, which are one of the main contributors to global warming, are expected to reach a record high of 40 billion tons in 2014, according to new Global Carbon Project (GCP) data released this weekend.
The GCP, which is co-led in the UK by researchers from the Tyndall Centre for Climate Change Research at the University of East Anglia (UEA) and the College of Engineering, Mathematics and Physical Sciences at the University of Exeter, said that those figures reflect a projected 2.5 percent increase in the burning of fossil fuels.
Based on these new statistics, future CO2 emissions cannot exceed 1,200 billion tons if the world is to retain a likely (66 percent) chance of keeping average global warming under 2 degrees Celsius, the researchers said. At the current rate of emissions, this maximum carbon dioxide quota would be used up in approximately three decades’ time, meaning there is just one generation before the 2 degree Celsius limit could be exceeded.
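The “approximately three decades” figure follows directly from dividing the remaining quota by the current annual emission rate (holding emissions flat for simplicity; the projected 2.5 percent annual growth would shorten the window):

\[
t \approx \frac{1200\ \text{Gt CO}_2}{40\ \text{Gt CO}_2/\text{yr}} = 30\ \text{years}.
\]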
In order to avoid this, climate scientists warn that more than half of all remaining fossil fuel reserves may need to be left unused. The GCP’s Global Carbon Budget report was released just two days before the start of the 2014 UN Climate Summit in New York, where world leaders will meet in an attempt to catalyze action on climate change, reduce emissions and work towards a new global agreement in 2015.
“The human influence on climate change is clear,” Tyndall Centre director Corinne Le Quéré said in a statement on Sunday. “Politicians meeting in New York need to think very carefully about their diminishing choices exposed by climate science.”
“We need substantial and sustained reductions in CO2 emissions from burning fossil fuels if we are to limit global climate change,” she added. “We are nowhere near the commitments necessary to stay below 2°C of climate change, a level that will be already challenging to manage for most countries around the world, even for rich nations.”
The Global Carbon Budget, which included figures for 2013 as well as projections for 2014, found that China, the USA, the EU and India are the largest emitters, accounting for a combined 58 percent of all global emissions. China’s carbon emissions increased by 4.2 percent in 2013, while the USA’s grew by 2.9 percent and India’s by 5.1 percent.
The EU, on the other hand, decreased total emissions by 1.8 percent – although it continues to export a third of its emissions to China and other producers through imported goods and services, according to the GCP. For the first time, China’s per-capita emissions overtook the EU’s in 2013, and the Asian nation’s total CO2 emissions are now larger than those of the US and the EU combined, according to the report.
Furthermore, the report found that CO2 emissions resulting from the burning of fossil fuels are 65 percent above 1990 levels. The findings served as the basis of a series of research papers appearing in the journals Nature Climate Change, Nature Geoscience and Earth System Science Data Discussions (ESSD).
“The time for a quiet evolution in our attitudes towards climate change is now over. Delaying action is not an option – we need to act together, and act quickly, if we are to stand a chance of avoiding climate change not long into the future, but within many of our own lifetimes,” said University of Exeter professor Pierre Friedlingstein, lead author of the Nature Geoscience paper.
“We have already used two-thirds of the total amount of carbon we can burn, in order to keep warming below the crucial 2°C level. If we carry on at the current rate we will reach our limit in as little as 30 years’ time – and that is without any continued growth in emission levels,” he added. “The implication of no immediate action is worryingly clear – either we take a collective responsibility to make a difference, and soon, or it will be too late.”
Oculus VR Unveils Upgraded Crescent Bay Prototype VR Headset
Chuck Bednar for redOrbit.com – Your Universe Online
Irvine, California-based VR technology firm Oculus VR has announced an enhanced prototype of its virtual reality headset that features built-in audio and higher resolution, Reuters reporter Lisa Richwine and other media outlets reported this weekend.
The new device, which was unveiled by CEO Brendan Iribe Saturday during the company’s Oculus Connect developer conference, is also said to be lighter than the previous prototype of the Oculus Rift headset. However, according to Richwine, Iribe noted that while the device is “not the consumer product,” this most recent prototype is “much, much closer” to what the headset will be.
According to AP reporter Derrik J. Lang, the new device has been nicknamed Crescent Bay, and it also features a higher refresh rate than the original 2012 Oculus Rift prototype and 360-degree head tracking technology. The new model also boasts “improved ergonomics” over previous models, added Emanuel Maiberg of Gamespot.
The company, which was acquired earlier this year by Facebook for $2 billion, has already shipped approximately 100,000 development kits to film and video game makers in 130 countries, Iribe said. The Oculus chief executive went on to state that Crescent Bay “has the presence we need for consumer VR,” and that the difference between it and Oculus Rift Developer Kit 2 was as pronounced as that between the first and second developer kits.
In a recent blog entry, the Oculus Team said that while the hardware is still “incredibly early” and there are “plenty of technical challenges left to solve,” they called Crescent Bay the best virtual reality headset the company has ever built.
They also announced original demo content, entitled Crescent Bay Experiences, which was developed in-house and “designed to demonstrate the power of presence and give you a glimpse into the level of VR experience you can expect to see come to life in gaming, film, and beyond.”
Earlier this month, Oculus teamed up with Samsung to unveil their $200 Gear VR mobile headset, which includes a slot that allows the Galaxy Note 4 smartphone to be used as a VR display, said Lang and Adi Robertson of The Verge. Robertson said that Oculus had created its first actual in-headset user interface for the Gear VR, and that the company went on to announce plans to release a mobile app for VR games in the near future.
“Oculus product VP Nate Mitchell says that its current online games catalog (not yet a real store) has seen 699,000 downloads since launch,” Robertson noted, adding that Mitchell also announced “a new demo from longtime partner Epic Games… and a partnership with development company Unity, which will support the Rift as an official platform on its free and paid versions, removing a major barrier to making VR games.”
Finally, the Oculus Team also revealed they had signed a licensing agreement to use RealSpace3D audio technology, a software stack they said was developed over the course of a decade and was based on technology originally created at the University of Maryland. RealSpace3D’s tech enables high-fidelity VR audio with a combination of HRTF spatialization and integrated reverberation algorithms.
NASA’s MAVEN Spacecraft Successfully Completes Orbit-Insertion Maneuver
Chuck Bednar for redOrbit.com – Your Universe Online
The NASA spacecraft that will explore the climate history of Mars by studying its upper atmosphere, the Mars Atmosphere and Volatile EvolutioN (MAVEN), successfully entered orbit around the planet Sunday night, the US space agency has confirmed.
At 10:26 pm EDT, after the vehicle turned to point its main engines in the right direction and conducted a planned slow-down burn lasting a little over half an hour, NASA tweeted via the official MAVEN Mission Twitter account that it had received “initial confirmation” the spacecraft had indeed completed its orbital insertion maneuver.
[ Watch the Video: ScienceCasts: Colliding Atmospheres – Mars Vs Comet Siding Spring ]
“This is such an incredible night,” John Grunsfeld, NASA’s chief for science missions, told AP Aerospace Writer Marcia Dunn shortly after the entry maneuver was complete. Deputy director for science at Goddard Space Flight Center (GSFC) Colleen Hartman added, “I don’t have any fingernails any more, but we’ve made it. It’s incredible.”
MAVEN was reportedly traveling at speeds in excess of 10,000 mph when it fired its engines, reducing its speed enough to enter orbit. The communications lag across the 138 million miles separating Earth and Mars meant confirmation arrived after a 12-minute delay, Dunn added.
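That 12-minute figure is simply the one-way light travel time across the Earth-Mars separation at the time:

\[
t = \frac{d}{c} \approx \frac{138 \times 10^{6}\ \text{mi}}{186{,}282\ \text{mi/s}} \approx 741\ \text{s} \approx 12.3\ \text{minutes}.
\]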
The successful completion of the orbital insertion maneuver marks the end of a journey that took MAVEN approximately 10 months and covered 442 million miles (711 million kilometers), according to the US space agency. It also marks the end of 11 years of concept and development for the project, which will soon enter its science phase.
Image Above: Members of the mission team at the Lockheed Martin Mission Support Area in Littleton, Colorado, celebrate after successfully inserting NASA’s Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft into orbit around Mars at 10:24 p.m. EDT Sunday, Sept. 21. Credit: Lockheed Martin
MAVEN is now set to begin a six-week commissioning phase, which will include maneuvering the spacecraft into its final orbit and testing both instruments and science-mapping commands, according to NASA. Afterwards, its one-Earth-year primary mission will begin, and the spacecraft will start taking measurements of the composition and structure of the planet’s atmospheric gases and studying how they interact with the sun and solar wind.
The spacecraft will be investigating how Mars’ climate has changed over time due to the loss of atmospheric gases, and its instruments will be able to detect trace amounts of chemicals in the air high above the planet’s surface. Those chemicals will allow scientists to test theories that the sun’s energy caused nitrogen, carbon dioxide and water from the atmosphere to eventually erode, turning the planet into the dry, desolate land mass it is today.
“The MAVEN science mission focuses on answering questions about where did the water that was present on early Mars go, about where did the carbon dioxide go,” explained Bruce Jakosky, MAVEN principal investigator from the University of Colorado, Boulder’s Laboratory for Atmospheric and Space Physics (LASP). “These are important questions for understanding the history of Mars, its climate, and its potential to support at least microbial life.”
[ Watch the Video: Investigating The Martian Atmosphere ]
MAVEN originally launched on November 18, 2013 from Cape Canaveral, Florida, carrying with it a trio of instrument packages that will be responsible for providing the measurements necessary to better understand the evolution of the planet’s atmosphere. Those three instrument packages include the Particles and Fields Package, the Remote Sensing Package and the Neutral Gas and Ion Mass Spectrometer (NGIMS).
The Particles and Fields Package contains a total of six instruments – the Solar Wind Electron Analyzer (SWEA), the Solar Wind Ion Analyzer (SWIA), the Suprathermal and Thermal Ion Composition (STATIC) instrument, the Solar Energetic Particle (SEP) instrument, the Langmuir Probe and Waves (LPW) instrument with its Extreme Ultraviolet Monitor (EUV), and the Magnetometer (MAG) – and will characterize the solar wind and the planet’s ionosphere.
The Remote Sensing Package, which includes the Imaging Ultraviolet Spectrograph (IUVS), will be used to determine the global characteristics of the upper atmosphere and ionosphere via remote sensing, while the NGIMS will measure the composition and isotopes of neutral gases and ions, according to LASP’s MAVEN mission website.
To learn more about the MAVEN mission, visit: http://www.nasa.gov/maven and http://mars.nasa.gov/maven/
—–
Narrow Focus On Physical Activity Could Be Ruining Kids’ Playtime
William Raillant-Clark, University of Montreal
While public health authorities focus on the physical activity benefits of active play, a new study from the University of Montreal reveals that for children, playing has no goal – it is an end in itself, an activity that is fun, done alone or with friends, and it represents “an opportunity to experience excitement or pleasure, but also to combat boredom, sadness, fear, or loneliness.”
“By focusing on the physical activity aspect of play, authorities put aside several aspects of play that are beneficial to young people’s emotional and social health,” explains Professor Katherine Frohlich of the university’s Department of Social and Preventive Medicine, who supervised the study. “Play is a way to achieve various objectives, including the improvement of physical health and the development of cognitive and social aptitudes. Obviously, we must ensure children’s development and combat obesity. But to get there, must we distort play?”
The study involved a photography and interview project with 25 Montreal-area children, aged 7 to 11 years, who photographed and talked about their favorite ways to play. One 10-year-old girl, for example, loved climbing on a modern art sculpture near her home. “Play is an activity that brings pleasure and is purposeless,” explained the study’s first author, Dr. Stephanie Alexander, also of the university’s Department of Social and Preventive Medicine. Children’s photographs of their leisure activities show that sports are well represented – balls, bicycles, hockey, and baseball – but so are many sedentary activities, such as puzzles, knitting, reading, movies, and video games. Many children also photographed animals and pets.
The semi-structured interviews allowed Alexander to better understand the meaning of play for the children. “Play reframed as a way for improving physical health removes the spontaneity, fun, and freedom in children’s play, which is also important for their well-being,” Alexander said. “Active play alone does not make up many children’s preferences.” It is also clear that risk-taking is an integral part of children’s play preferences. “Allowing children to take acceptable risks while remaining vigilant is indeed beneficial to their development,” Alexander added. “An overemphasis on safety may contribute to the emergence of a generation of young people that is less and less able to cope with the unpredictable.”
In summary, the researchers identified four dimensions of play particularly important to children: play as an end in itself (children play for fun, not for exercise or to develop their mental and social skills); play isn’t necessarily active (many children also enjoy more sedentary games); children feel ambivalent about scheduled play activities (which leave little time for free play); and risk is considered a pleasurable component of their play. “Despite the abundance of messages targeting children and play and health, children’s perspectives are rarely taken into account within public health, although they have social and scientific value,” Frohlich said. “We hope that our findings will inform and improve the way authorities and indeed parents approach playtime.”
About this study: Stephanie A. Alexander, Katherine L. Frohlich and Caroline Fusco published “Problematizing ‘Play-for-Health’ Discourses Through Children’s Photo-Elicited Narratives” in Qualitative Health Research on August 21, 2014. The researchers received funding from the Canadian Institutes of Health Research (CIHR) and the Social Sciences and Humanities Research Council of Canada (SSHRC).
Global Warming May Cause Fall Leaf Coloration To Start Later, Last Longer
Chuck Bednar for redOrbit.com – Your Universe Online
The fall foliage transformations that so colorfully mark the start of the autumn season could soon start arriving later and lasting longer due to climate change, researchers from Princeton University report in the latest edition of the journal Global Ecology and Biogeography.
In the study, senior author David Medvigy, an assistant professor of geosciences and associated faculty member at the Princeton Environmental Institute, and his colleagues explain that global warming could cause summer temperatures to linger later into the year, thus delaying fall leaf peeping in some areas of the US.
For example, Medvigy’s team said that by the end of the century, the paper birch (New Hampshire’s state tree) could change colors one to three weeks later than usual. While some trees will be less susceptible to the ongoing heat, the more southern the region, the more likely there will be a greater overall delay in leaf coloration, they noted.
For trees to produce colored leaves, the researchers said that daily temperatures need to be low enough and daylight hours must be short enough. Their work reveals that not only can daily temperature and daylight hours be used to predict the timing of leaf coloration, but the influence of those factors also depends largely on the individual species of the tree and the specific geographical region where it is located.
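As a rough sketch of how such a two-factor trigger could be expressed in code (a deliberately simplified toy with invented thresholds, not the authors’ actual model), coloration onset can be treated as the first day on which both the daily temperature and the day length fall below species-specific cutoffs:

```python
from dataclasses import dataclass

@dataclass
class SpeciesThresholds:
    max_temp_c: float       # daily mean temperature must fall below this
    max_daylight_hr: float  # day length must fall below this

# Invented illustrative values; the study fits the influence of each
# factor per species and region rather than using fixed cutoffs.
PAPER_BIRCH = SpeciesThresholds(max_temp_c=11.0, max_daylight_hr=11.5)

def coloration_onset(days, species):
    """Return the index of the first day meeting both conditions.

    `days` is a sequence of (daily_mean_temp_c, daylight_hours) tuples
    ordered from late summer onward; returns None if never triggered.
    """
    for i, (temp_c, daylight_hr) in enumerate(days):
        if temp_c < species.max_temp_c and daylight_hr < species.max_daylight_hr:
            return i
    return None

# Toy late-season series: cooling temperatures, shortening days.
season = [(16.0, 13.0), (13.5, 12.4), (11.8, 11.9), (10.2, 11.3), (8.9, 10.8)]
print(coloration_onset(season, PAPER_BIRCH))  # -> 3
```

In a warmer autumn, the temperature condition is met later in the series, which is the basic mechanism behind the delayed-coloration predictions discussed below.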
“We’re really interested in understanding how these systems will change as we experience global warming or climate change,” Medvigy explained in a statement Thursday. “What these results are suggesting is that different locations will change in different ways, and that these differences are actually going to be quite interesting.”
In addition to the aesthetic appeal of colorful fall foliage and its economic importance to some regions of the country, the study authors said their findings have important implications for predicting growing seasons, as well as agricultural and ecosystem productivity. In particular, delays in when leaves change their hues could affect how much carbon ecosystems remove from the atmosphere.
“When plants have green leaves, they’re doing photosynthesis and taking carbon out of the atmosphere,” explained Medvigy. “The longer you have green leaves, the more carbon dioxide you can take out of the atmosphere. At least, that’s how the current thinking goes. So, figuring this out could potentially be important for understanding the impacts of climate change.”
Video Above: It’s the first day of autumn, and the telltale signs are here: crisp weather, pumpkin spice lattes and, most importantly, the leaves are changing colors. Ever wonder why some leaves turn red, others yellow and some just turn brown? We’ll tell you all about the chemistry behind this seasonal spectacle in the latest American Chemical Society Reactions episode.
According to University of Wisconsin-Milwaukee geography professor Mark D. Schwartz, fall leaf coloration typically signifies the end of the growing season in temperate climates, so it is essential to fully understand current and future coloration cycles to better understand what lies ahead for agriculture, water supplies and animal behavior.
The types of crops that people plant, the kinds of pests that could damage those crops, and the feeding and reproductive habits of animals are all influenced by the length of the growing season, said Schwartz, who was not involved in the research. This is especially true in the western US, where plants affect the availability of water.
While spring, the time during which the growing season begins, has been well studied, fall has been more difficult for experts to characterize, Schwartz explained. The season is more complex and more dependent on geography, and existing models are usually based on highly localized data, he said. Furthermore, the professor said that most of those models do not take into account how plants respond to regional autumn conditions.
“When you get at the growing season you can relate this to a huge number of things. In order to understand how it might change in the future we have to understand how it functions now,” said Schwartz. “This research is a useful addition to what we’re trying to do in terms of improving the way that we model plants. A lot of models that we use in terms of global change are fairly simplistic.”
Medvigy came up with the idea for the study after observing that many of those models struggled to explain the timing of when leaves should change color. He teamed up with researchers at NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL), and together they analyzed data on leaf-change dates for several different types of trees in Alaska and Massachusetts obtained through the USA National Phenology Network online database and Harvard Forest, a 3,500-acre research property managed by Harvard University.
“The species examined were American beech, aspen, black oak, northern red oak, paper birch, red maple, sugar maple and sweet birch,” the university explained. “They grouped the tree species into three categories based on their tolerance of shade. For example, birches need a great deal of sunlight; beeches can survive in a shaded environment; and oaks are somewhere in the middle. The nearly 20 species the study reviewed fell neatly into one of these three categories.”
The researchers found that US prediction modeling improved dramatically when the analyses included data from multiple sites spread throughout a large area (also known as macro-scale observations). They also reported that temperature and duration of sunlight are both significant factors in determining when tree leaves change color in the fall; most previous work in the field has relied on one factor or the other, but not both, said Medvigy.
“Predictions based on those studies were less effective over broader regions,” the university said. “The researchers also found that the timing of leaf change is more sensitive to temperature in warmer areas than in colder regions.” What that means is that, if fall temperatures increase, tree species in Massachusetts “will respond to a greater degree” than those in Alaska.
While Alaska’s foliage season “is in September and is unlikely to change in the next 100 years,” Massachusetts foliage season is expected to eventually be pushed back a month, from October to November, Medvigy said. In southern states, the change will take place even later. Now that he has a better grasp on the information needed to predict future changes to leaf coloration, he and his colleagues now plan to use the findings to generate more sophisticated models.
“We now have a much better understanding of how temperature, day-length and leaf color are related,” said Medvigy. “This understanding will help us make better forecasts for climate, as well as for the basic dynamics of forests. My group is now investigating these issues together with researchers from GFDL.”
—–
Behind the Pathophysiology of Fibromyalgia
Every disease and physical condition involves its own characteristic changes and physiological processes, which is why doctors and medical researchers study the pathophysiology of these conditions.
Pathophysiology is the term for the physiological processes linked to injury or disease – in other words, it describes the functional changes associated with them.
The term more or less combines these other widely used medical terms:
- Pathology, a medical discipline describing conditions that are observed as a disease state
- Physiology, the biological discipline describing mechanisms or processes that operate within an organism.
As a singular term, pathophysiology is used to explain the physiological processes associated with the development of a condition or disease. Doctors and medical researchers use pathophysiology to study how injuries and diseases affect people, especially when they study a single disease or injury.
Fibromyalgia is a common subject of pathophysiology studies, since the condition itself is shrouded in mystery regarding its origin or, rather, cause.
Much of what makes fibromyalgia complex is that people with the disorder often show hypersensitivity to both painful and non-painful stimuli, while also exhibiting an altered physiological response to painful stimulation via the spinal cord.
While there are various studies of fibromyalgia, researchers don’t yet understand how the mechanisms behind the condition actually affect the body. What they do understand is that the pathophysiology of fibromyalgia can help them organize the relevant factors and recognize how the condition manifests in people. In this article, we review the known pathophysiology of fibromyalgia.
The pathophysiology of fibromyalgia: about fibromyalgia
Fibromyalgia is a condition commonly characterized by its widespread bodily pain, fatigue, sleep problems, cognitive dysfunction and depression symptoms.
Fibromyalgia affects millions around the world – reportedly as many as 1 in 20 people worldwide. In the United States alone, as much as 10 percent of the general population is said to be affected. The condition strikes both men and women, but women are reportedly seven times more likely to develop it than men.
This condition commonly affects people between the ages of 30 and 60, though it can also develop in children and the elderly. The exact cause of fibromyalgia isn’t known. Interestingly, immune system disturbances, hormonal changes and impairment of the body’s pain pathways have all been found to play a role in the condition.
Even though fibromyalgia seems like a purely physical disorder, dysfunction of the brain’s chemical messengers (neurotransmitters) also appears to play a large role in its development. This suggests a multifaceted cause, though medical researchers and doctors are still learning why that is the case.
Fibromyalgia is considered part of a family of disorders known as affective spectrum disorders, or ASD. This family includes major depressive disorder, post-traumatic stress disorder, generalized anxiety disorder, irritable bowel syndrome, attention-deficit/hyperactivity disorder and migraines.
The pathophysiology of fibromyalgia reveals that the condition shares characteristics, pathologies and manifestations with other affective spectrum disorders, including several aspects such as:
- Environmental triggers
- Psychosocial factors
- Genetic factors
- Neuro-endocrine issues
- Problems with the autonomic nervous system
In the following section, we’re going to take a look at several aspects associated with the pathophysiology of fibromyalgia and other conditions.
The pathophysiology of fibromyalgia: the aspects
Behind every condition or disease, there are several aspects that influence its development. That’s something that the pathophysiology of fibromyalgia has revealed to researchers over the years.
Environmental causes
Several environmental causes might influence the development of fibromyalgia, including physical trauma, injury, psychosocial stress, abuse and emotional trauma.
The pathophysiology of fibromyalgia has also revealed that it shares common symptoms with conditions that often develop alongside it; major depressive disorder and anxiety disorders are common in people with fibromyalgia.
Genetic factors
Several genetic factors play a role in the pathophysiology of fibromyalgia, in addition to conditions associated with the disorder. One study found that the frequency of fibromyalgia among first-degree relatives of people with the condition was about 6.4 percent, and the average number of ‘tender points’ among those relatives was 17 of the 18 points tested. The serotonin transporter gene (5-HTT) is also thought to play a role in enhanced pain sensitivity.
Problems with the autonomic nervous system
People with fibromyalgia often experience dysfunction of the autonomic nervous system, which increases pain and significantly impairs the body’s ability to manage stress. This dysfunction may also cause drops in blood pressure and decreased pain inhibition, since it keeps the body from producing normal levels of growth hormone and growth factor.
Neuro-endocrine issues and sleep problems
Psychological stress and fibromyalgia share a lot of symptoms. Fibromyalgia is also associated with the body’s inability to suppress cortisol, the body’s stress hormone.
People with fibromyalgia also experience problems with sleep. Even though studies haven’t established whether sleep problems cause the condition, there’s a clear association between the two.
People with the condition often report cases of insomnia, early morning awakening and poor sleeping habits.
Problems with pain sensitivity
People with fibromyalgia often report increased sensitivity to all types of pain. They are characteristically more sensitive to hot and cold sensations and may experience pain from bodily pressure or reduced blood flow.
Abnormal levels of norepinephrine and serotonin (neurotransmitters of the brain) are also thought to influence this increased pain sensitivity; disrupted signaling within the brain may contribute to the characteristically heightened pain response in patients with this condition.
Closing thoughts
The pathophysiology of fibromyalgia helps both doctors and medical researchers learn more about the chronic condition. Since various conditions share factors that relate to the development of fibromyalgia, it’s important to understand the relationship between fibromyalgia and other conditions that often co-exist with it.
Understanding the pathophysiology of fibromyalgia will ultimately help doctors and researchers develop new treatment options for fibromyalgia, too.
BYU-Developed E1 Streamliner Tops 200 MPH In Setting New Land Speed Record
Chuck Bednar for redOrbit.com – Your Universe Online
An ultralight electric car built by engineering students at Brigham Young University (BYU) in Utah has shattered its own world land speed record, passing the 200 mph threshold and besting the old mark by nearly 50 mph.
According to the UPI news agency, the vehicle – an E1 streamliner named Electric Blue – achieved speeds of 204.9 mph during two qualifying runs earlier this month at the Bonneville Salt Flats. The same vehicle had set the previous land speed record of 155.8 mph in 2011.
“When we set the record three years ago we felt like we left a lot on the table,” BYU student and team captain, Kelly Hales, said in a statement. “On paper we thought we could get 200 mph but we never had the conditions just right – until now.”
Electric Blue, which is the result of more than 10 years of design work by over 130 BYU engineering students, accomplished the feat earlier this month in front of roughly 180 teams and their cars. The vehicle was driven by Utah Salt Flats Racing Association president Jim Burkdoll, and the record was certified by the Southern California Timing Association, according to the university.
Electric Blue is known as a streamliner because it features a long, slender design that encloses the wheels inside the body to improve aerodynamics. It competes in the E1 division, which includes cars weighing under 1,100 pounds. Other streamliners, including one built by Ohio State University students, have achieved higher speeds than Electric Blue, but in far heavier vehicles competing in different weight classes.
Now that it has shattered the 200 mph barrier, the design team has said that Electric Blue will be retired, according to Macrina Cooper-White of The Huffington Post. Dr. Mike Miles, a manufacturing professor at the university who worked as an advisor on the project, said that this was “kind of the last hurrah” for the car.
“We were going to retire the car last year when head faculty advisor, Dr. Perry Carter, left for an LDS [Church of Jesus Christ of Latter-day Saints, which owns and operates BYU] mission, but we petitioned for one more year,” Hales added. “Now the car will officially retire with a record we think will be unbeatable for a while.”
The vehicle was built out of lightweight carbon fiber by university students over a six-year span, assisted by computer programs that model wind-tunnel aerodynamics. Improved aerodynamic performance and lithium iron phosphate batteries were largely responsible for the car’s record-breaking speeds over the past four years.
BYU said that approximately half of the students who worked on the streamliner over the past 10 years were manufacturing engineering technology majors, while 40 percent were mechanical engineering majors and the rest were from a variety of other programs, including electrical engineering. Many were unpaid volunteers.
Miles took over for Dr. Carter as the project’s faculty adviser, and congratulated the former advisor and all the students who were or had been involved in the project for what he called “an amazing achievement.” Electric Blue’s fate remains unclear, according to the university. It could wind up on display in an auto racing museum, or at the university’s engineering building, but it will not be dismantled, the school noted.
—–
Supplies, ISS-RapidScat, 3D Printer En Route To ISS Following Sunday Morning Launch
Chuck Bednar for redOrbit.com – Your Universe Online
After weather forced the postponement of Saturday’s originally scheduled liftoff, the SpaceX Dragon resupply craft successfully launched on time early Sunday morning and is now en route to the International Space Station (ISS), NASA officials have confirmed.
[ Watch the Video: Liftoff Of SpaceX-4 ]
The Dragon craft, which is carrying 2.5 tons of supplies and science experiments to the orbiting laboratory, separated from its Falcon 9 booster following a successful climb to orbit that began at 1:52 am EDT at Cape Canaveral Air Force Station’s Launch Complex 40. It is expected to reach the ISS at 7:04 am EDT on Tuesday, September 23.
As the Falcon 9 and Dragon flew along a path mostly parallel to the East Coast of the US, the nine Merlin 1D engines of the first stage shut down as planned less than three minutes into the flight, allowing the lone engine of the second stage to ignite and propel the spacecraft the rest of the way into orbit before Dragon separated to continue on its own.
“From what I can tell, everything went perfectly,” Hans Koenigsman, vice president of Mission Assurance for SpaceX, said in a statement. Once Dragon arrives at the ISS, ESA astronaut Alexander Gerst and NASA astronaut Reid Wiseman will reach out to the uncrewed Dragon with the station’s robot arm and maneuver the capsule to latch onto a port of the station.
Later, the crew will unload the equipment and supplies inside the Dragon, as well as materials which NASA said are essential for 255 science and research investigations during the station’s Expeditions 41 and 42. The spacecraft also contains a small habitat that holds 20 mice which will be used for microgravity research into bone density.
Also on board is the ISS-Rapid Scatterometer (ISS-RapidScat), a new Earth science instrument that NASA said will be used to monitor ocean winds from the high vantage point of the space station. ISS-RapidScat is a remote sensing instrument that will calculate surface wind speed and direction using radar pulses reflected from the surface of the ocean at different angles – information that will improve weather forecasting and hurricane monitoring.
“We’ll be able to see how wind speed changes with the time of day,” explained Ernesto Rodríguez, principal investigator for ISS-RapidScat at NASA’s California-based Jet Propulsion Laboratory (JPL). “ISS-RapidScat will link together all previous and current scatterometer missions, providing us with a more complete picture of how ocean winds change. Combined with data from the European ASCAT scatterometer mission, we’ll be able to observe 90 percent of Earth’s surface at least once a day, and in many places, several times a day.”
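The article doesn’t spell out the retrieval algorithm, but scatterometers in general work by comparing backscatter measured at several azimuth angles against a geophysical model function (GMF) and searching for the wind speed and direction that best explain the measurements. The tiny GMF below is a made-up stand-in for the empirical functions real missions use; the sketch only shows the shape of that inversion.

```python
import math

def toy_gmf(wind_speed, relative_azimuth_deg):
    """Made-up geophysical model function: backscatter grows with wind
    speed and varies with the angle between the wind and the radar look."""
    chi = math.radians(relative_azimuth_deg)
    return (0.01 * wind_speed ** 1.5) * (1.0 + 0.4 * math.cos(2 * chi))

def retrieve_wind(measurements):
    """Grid-search the (speed, direction) pair whose predicted backscatter
    best matches measurements taken at several azimuths. `measurements`
    is a list of (look_azimuth_deg, sigma0) pairs."""
    best, best_err = None, float("inf")
    for speed in [s * 0.5 for s in range(1, 61)]:  # 0.5 to 30 m/s
        for wind_dir in range(0, 360, 5):
            err = sum((sigma0 - toy_gmf(speed, look - wind_dir)) ** 2
                      for look, sigma0 in measurements)
            if err < best_err:
                best, best_err = (speed, wind_dir), err
    return best

# Simulate looks at three azimuths for a 12 m/s wind toward 40 degrees,
# then recover it (up to the usual directional ambiguities).
obs = [(az, toy_gmf(12.0, az - 40)) for az in (0, 45, 120)]
print(retrieve_wind(obs))  # -> (12.0, 40)
```

Because backscatter varies roughly with the cosine of twice the relative azimuth, retrievals of this kind return a small set of ambiguous directions that real processors resolve with additional information.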
The Dragon’s cargo also includes the first ever 3D printer to be sent to outer space, known as 3D Printing In Zero-G Technology Demonstration (3D Printing In Zero-G). The Made In Space-developed technology could allow ISS crew members to use additive manufacturing to quickly and cheaply fabricate parts themselves instead of waiting for the arrival of the next cargo resupply mission, NASA officials explained.
“Testing this on the station is the first step toward creating a working ‘machine shop’ in space,” Jessica Eagan of the International Space Station Program Science Office at Marshall Space Flight Center, where the technology was tested and certified, said earlier this month. “This capability may decrease cost and risk on the station, will be critical when space explorers venture far from Earth and will create an on-demand supply chain for needed tools and parts.”
“If the printer is successful, it will not only serve as the first demonstration of additive manufacturing in microgravity, but it also will bring NASA… a big step closer to evolving in-space manufacturing for future missions to destinations such as an asteroid and Mars,” she added. Likewise, Ken Cooper, principal investigator for 3D printing at Marshall, called the project “the first step in sustaining longer missions beyond low-Earth orbit.”
In addition, the Dragon cargo includes a plant study designed to analyze the growth and development of a small flowering, cabbage-like seedling in a microgravity environment, and a 22-inch satellite that will test how small probes move and position themselves in space using advanced thruster technology that relies on a new class of non-pyrotechnic materials (Electrically-Controlled Solid Propellants) ignited only by electric current.
—–
If Someone in My Family has had Fibromyalgia, Will I too?
Fibromyalgia isn’t passed directly from parents to children the way some other medical conditions are. Still, even though it isn’t directly hereditary, scientific studies have shown that people have a higher risk of developing fibromyalgia if a family member has had it before or currently has it.
Likewise, if someone comes from a family with no history of fibromyalgia, the chances of developing it are relatively slim. So while heredity is at least one factor in why people develop fibromyalgia, it isn’t a direct one (a child will not get fibromyalgia just because a parent had it).
Fibromyalgia has become such a hot medical topic that scientific and medical research into it has gone much deeper than before. Scientists have even taken DNA samples from people with fibromyalgia to examine their genes and see whether they can explain why people are more likely to develop the condition if someone in their family already has it.
Our genes each play a major role in the body and its response to these medical issues, and while no conclusive evidence has been found yet, researchers have found a link between fibromyalgia and depression, which would help explain why antidepressant medication can reduce the symptoms of fibromyalgia.
The Heredity of Fibromyalgia
The evidence is clear that there is a genetic factor to fibromyalgia. As we’ve already discussed, if someone else in your family like a sibling, parent or grandparent has developed fibromyalgia in the past or already has it, then you are far more likely to develop it than a person who doesn’t have a family history of fibromyalgia.
A family history of fibromyalgia is definitely a risk factor in whether or not you develop fibromyalgia yourself. But it’s not the only risk factor. Let’s say you come from a family with no history of fibromyalgia. In that case, you may think you’re immune to developing it yourself. The chances are considerably lower, but you are definitely not immune.
Other factors include whether or not you’ve had arthritis, lupus or other autoimmune diseases. The autoimmune disease itself may trigger the development of fibromyalgia in your body. Other but less risky factors include trauma to the spine, emotional stress, and infections. Scientists also believe that sleep disorders, toxins and food sensitivities could also influence the development of fibromyalgia.
So when all is said and done, fibromyalgia is not hereditary in the way we might think, like inheriting green eyes through a simple genetic trait, but the evidence does suggest that your genes can at least influence whether fibromyalgia develops in your body.
What Does That Mean?
Consider a classic hereditary condition such as cystic fibrosis: if both of your parents carry the cystic fibrosis mutation, you have a 25 percent chance of developing the disease yourself. With fibromyalgia, however, it’s not that simple. There is no fixed probability of developing fibromyalgia if one or both of your parents had it; your genes only make it more likely that you will develop fibromyalgia under the right circumstances, and other factors will definitely play a role.
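The cystic fibrosis comparison can be made concrete. For a classic recessive condition, a child of two carrier parents inherits the disease allele from each parent independently, and a quick enumeration of the Punnett square (standard genetics, not specific to fibromyalgia research) shows where the 25 percent figure comes from:

```python
from itertools import product

# Each carrier parent has one normal allele 'N' and one disease allele 'd'.
parent1 = ["N", "d"]
parent2 = ["N", "d"]

# Enumerate the four equally likely allele combinations (the Punnett square).
offspring = list(product(parent1, parent2))
affected = [pair for pair in offspring if pair == ("d", "d")]

print(len(affected) / len(offspring))  # 0.25 -- the classic 25% chance
```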
The Genetics of Fibromyalgia
Scientific studies have put numbers on the genetics of fibromyalgia. Among children of parents with fibromyalgia, studies found that a slight majority had at least some symptoms of the condition, though not enough to be officially diagnosed with it, while roughly a quarter of the children had no symptoms whatsoever. After expanding their research, the scientists found that a roughly equal percentage of relatives in the same families also had at least some symptoms of fibromyalgia, even when they could not be formally diagnosed. So the studies do point to a genetic connection in fibromyalgia.
So what kind of genes are we talking about? The same studies have revealed that as well: the genetic abnormalities most strongly linked to fibromyalgia involve genes that regulate hormones and neurotransmitters. Eventually, we may learn exactly what contributes to developing fibromyalgia, and whether there is anything we can do to treat the condition based on that information.
Unfortunately, we don’t know as much about fibromyalgia as we should. All we really know so far is that there is a genetic link, and if your parents, siblings or other relatives have developed it, the chances of you developing it as well are increased.
Scientists and medical researchers have also discovered that environmental factors may play a role in developing fibromyalgia, or that the condition may arise from a combination of environmental and genetic factors. These environmental factors can include physical trauma, mental stress, or some form of illness or other medical condition. Studies have also shown that women are more likely to develop fibromyalgia than men, so daughters in an affected family are probably at greater risk than sons.
However, just because someone in the family has fibromyalgia does not mean that everyone else, or even anyone, in the family is going to develop it. It only means that you would have a higher chance. If you begin to feel any symptoms of fibromyalgia, then you should speak to your doctor immediately.
Paleontologists Sniff Out A New, Large-Nosed Species Of Hadrosaur
Chuck Bednar for redOrbit.com – Your Universe Online
The month of September has already brought us several amazing new dinosaur species, including the massive Dreadnoughtus and the first known semi-aquatic dinosaur, Spinosaurus, but the latest fossil find edges out both of those discoveries by a nose.
In research published online Wednesday in the peer-reviewed British publication the Journal of Systematic Palaeontology, researchers from North Carolina State University and Brigham Young University in Utah unveiled a new 30-foot-long, 8,500-pound plant-eating hadrosaur with a striking profile: a prominent feature that earned it the name Rhinorex condrupus, which roughly translates to “King Nose.”
According to Rachel Feltman of the Washington Post, Rhinorex is definitely worthy of its moniker. She proclaimed it “sovereign of the schnozes, baron of the beaks, and prince of the proboscises” thanks to the fact that it had a massive nose rather than a bony or fleshy crest atop its head, as is typically the case with its cousins (which include the Parasaurolophus and the Edmontosaurus).
The researchers report that this new species lived in what is now Utah approximately 75 million years ago, during the Late Cretaceous period. Its fossils were found in storage at BYU by Terry Gates of NC State and the North Carolina Museum of Natural Sciences, and Rodney Sheetz of the Brigham Young Museum of Paleontology.
It had originally been excavated from Utah’s Neslen formation during the 1990s and had been studied mostly for its well-preserved skin impressions, the study authors said. It wasn’t until Gates and Sheetz attempted to reconstruct the skull that they discovered it belonged to a brand-new species of hadrosaur. They had nearly the entire skull, the researchers said, but it took two years to dig the fossils out of the sandstone encasing them.
Gates believes the find will help fill in some gaps about habitat segregation during the Late Cretaceous. While paleontologists had discovered other hadrosaurs from the same time period, he said, those were located about 200 miles farther south and adapted to a different environment. “This discovery gives us a geographic snapshot of the Cretaceous, and helps us place contemporary species in their correct time and place.”
When asked how having such a large nose might have benefitted the Rhinorex, Gates added, “The purpose of such a big nose is still a mystery. If this dinosaur is anything like its relatives then it likely did not have a super sense of smell; but maybe the nose was used as a means of attracting mates, recognizing members of its species, or even as a large attachment for a plant-smashing beak. We are already sniffing out answers to these questions.”
Earlier this month, Dr. Kenneth Lacovara, an associate professor in the Drexel University College of Arts and Sciences, and his colleagues reported the discovery of Dreadnoughtus schrani – an 85-foot-long, 65-ton dinosaur that was the largest land animal for which a body mass can be accurately calculated. Dreadnoughtus would have been nearly impervious to attack, the authors said, and weighed as much as a dozen African elephants.
A few days later, an international team of researchers revealed that they had unearthed Spinosaurus aegyptiacus, an enormous Cretaceous-era predator with a number of adaptations that would have made it well suited to spending considerable time in the water. In addition to being the first known semi-aquatic dinosaur, Spinosaurus also traveled on solid ground, making it the largest predator ever to walk the Earth.
The Milky Way Will Eventually Be Consumed By Nearby Andromeda Galaxy
Chuck Bednar for redOrbit.com – Your Universe Online
When galaxies grow too massive to continue making their own stars, they begin cannibalizing other nearby galaxies, experts from the University of Western Australia and an international team of colleagues reported this week in the journal Monthly Notices of the Royal Astronomical Society.
Dr. Aaron Robotham, a postdoctoral researcher at the University of Western Australia node of the International Centre for Radio Astronomy Research (ICRAR), and his associates looked at over 22,000 galaxies and found that the most massive galaxies were far less efficient at forming stars than their smaller counterparts.
Instead of making their own new stars, these galaxies grew by consuming other galaxies. This occurs because as galaxies grow, their greater gravity makes it easier to pull in neighboring galaxies. The reason star formation slows down in these massive galaxies, Dr. Robotham noted, is believed to be extreme feedback events in the active galactic nucleus, a bright region at a galaxy’s center.
“The topic is much debated, but a popular mechanism is where the active galactic nucleus basically cooks the gas and prevents it from cooling down to form stars,” he explained in a statement from the Royal Astronomical Society, adding that gravity is ultimately expected to cause all galaxies to merge into a handful of super-giant galaxies.
However, that process will take several billion years to take place. “If you waited a really, really, really long time that would eventually happen,” Dr. Robotham continued, “but by really long I mean many times the age of the Universe so far.”
What’s more, Washington Post reporter Rachel Feltman explained that our own Milky Way has already consumed other galaxies, and will ultimately be on the other end of this phenomenon. In approximately five billion years, the nearby galaxy Andromeda is expected to become large enough to consume the Milky Way, according to the study authors.
[ Watch the Animation: Milky Way Collides With Andromeda Galaxy ]
“The Milky Way hasn’t merged with another large galaxy for a long time but you can still see remnants of all the old galaxies we’ve cannibalized,” Dr. Robotham said in an ICRAR statement. “We’re also going to eat two nearby dwarf galaxies, the Large and Small Magellanic Clouds, in about four billion years,” before it winds up being consumed by Andromeda approximately one billion years later.
The overwhelming majority of the research data was collected using the Anglo-Australian Telescope in New South Wales as part of the Galaxy And Mass Assembly (GAMA) survey. The GAMA survey, led by Professor Simon Driver at ICRAR, involved more than 90 scientists and took roughly seven years to complete. This new paper is one of more than 60 publications to have resulted from the project, with another 180 currently in progress.
In addition to Dr. Robotham, researchers from the University of St. Andrews, the Australian Astronomical Observatory, the University of Central Lancashire, the Sydney Institute for Astronomy at the University of Sydney, Monash University, the University of Cape Town, the University of Queensland, the European Southern Observatory, the University of Melbourne and elsewhere were involved in the study.
Image 2 (below): Some of the many thousands of merging galaxies identified within the GAMA survey. Credit: Professor Simon Driver and Dr. Aaron Robotham, ICRAR
—–
How does Blood Testing Play into Fibromyalgia?
A recently developed blood test for fibromyalgia has been found to be more accurate than previously believed, as it will not confuse fibromyalgia with other chronic diseases.
But before we go into what this new form of blood testing is, it’s important to understand why diagnosing fibromyalgia is so difficult, so we can appreciate why developing this new blood test was equally hard and why it’s so groundbreaking in the medical field.
Diagnosing Fibromyalgia
Fibromyalgia is a condition involving multiple symptoms of pain, aching, fatigue, anxiety and depression throughout the body. The reason it is so difficult to diagnose is that it has so many symptoms that, taken individually, are also symptoms of other diseases and conditions.
Doctors have to take each symptom into consideration, and run through all of the other conditions and diseases as well, before officially diagnosing a person with fibromyalgia or not.
Further complicating the diagnostic process is that we have no clear way to test for fibromyalgia. In addition, many doctors do not know enough about fibromyalgia to properly diagnose it, and many people might have to go through several doctors before getting a final diagnosis (during which time the symptoms only become worse and more painful).
Blood Testing
The truth is that not many laboratory tests are effective at diagnosing fibromyalgia. But recently a new blood test was developed by EpicGenetics, a lab in California, that is supposedly very accurate in diagnosing the condition. The test is said to identify markers in the blood cells that would only be present in people who have fibromyalgia.
The blood test is expensive, costing just under $1,000, and most insurance companies will not cover it. Before simply walking in and taking the blood test, you should always consult with your doctor first.
Before this blood test came about, doctors attempted to (and still do) diagnose fibromyalgia using physical tests, such as identifying tender, sore points on your body. The doctor will then run a few more tests to make sure that the condition you have is not more serious than fibromyalgia.
There were also blood tests that doctors would run, such as ordering a complete blood count to measure red and white cells, along with liver and kidney function tests. However, none of these tests were definitive in diagnosing fibromyalgia.
The new blood test by EpicGenetics is being hailed as the first definitive blood test for diagnosing fibromyalgia, and it can deliver complete results in just under a week. The test looks for protein molecules in your bloodstream that are produced by white blood cells.
People who have fibromyalgia will typically have fewer of these protein molecules in their blood stream (and subsequently have a weaker immune system as well). These immune system biomarkers can be found in people with fibromyalgia, but they can also be found in people with other diseases as well.
But usually only fibromyalgia patients will have a low level of these protein molecules, which is what the EpicGenetics blood test looks for. People who have some of the symptoms of fibromyalgia but still show high levels of these protein molecules probably don’t have the condition. Today, this blood test, which detects the levels of these protein molecules in the bloodstream, is believed to be over ninety percent accurate.
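EpicGenetics hasn’t published its exact markers or cutoffs in this article, but the decision logic described, flagging a patient only when immune-protein levels come back low, amounts to a simple threshold rule. The marker names, values and cutoff below are purely hypothetical placeholders:

```python
# Hypothetical sketch of the logic described above: low levels of certain
# white-blood-cell protein markers suggest fibromyalgia. Marker names and
# the cutoff are invented for illustration only.
LOW_MARKER_CUTOFF = 30.0  # hypothetical units

def suggests_fibromyalgia(marker_levels):
    """Return True when the average measured marker level falls below the
    cutoff, mirroring 'fewer of these protein molecules' in the text."""
    average = sum(marker_levels.values()) / len(marker_levels)
    return average < LOW_MARKER_CUTOFF

patient = {"chemokine_A": 12.0, "cytokine_B": 18.5, "cytokine_C": 25.0}
print(suggests_fibromyalgia(patient))  # True for this hypothetical patient
```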
Fibromyalgia is a big deal: an estimated five million Americans suffer from it (not to mention tens of millions around the world), yet only a few thousand people have actually taken this test. Part of the reason is the high cost, which is not covered by insurance; another part is that the test is relatively new and will take time to gain traction and a reputation in the medical world.
Many doctors find this puzzling, since previously it could cost over ten thousand dollars in medical bills to diagnose a patient through the various tests and visits, though much of that was covered by insurance.
Most doctors working on the EpicGenetics blood test believe it is effective and want to increase its appeal to potential fibromyalgia patients. One way they believe they can do this is to lower the price by licensing the test to other labs.
Time will tell, but chances are this blood test will take off. Many people who display symptoms of fibromyalgia will be glad that one simple test can finally confirm whether or not they have the condition, rather than spending months or even years waiting to be diagnosed (or finding out they never had it at all).
Fibromyalgia has long been a difficult medical subject: hard to diagnose, with very limited knowledge about it. So the EpicGenetics blood test is truly a breakthrough in fibromyalgia research, and it is a victory not so much for the medical professionals who put it together as for fibromyalgia patients.
Research into fibromyalgia is likely to continue uncovering more about the condition, and eventually we may even find out which specific gene or genes cause people to develop the disorder in the first place. As more EpicGenetics blood tests are conducted, giving researchers and scientists thousands more blood specimens to examine, we will come closer and closer to finally identifying the true cause of fibromyalgia.
"Angelina Effect" Spurs Increased Breast Cancer Screenings In The UK
Chuck Bednar for redOrbit.com – Your Universe Online
Angelina Jolie’s decision to have a double mastectomy after being told she would likely develop breast cancer had a tremendous impact on testing for the disease in the UK, the authors of a new study in the journal Breast Cancer Research reported on Friday.
According to BBC News online health editor Helen Briggs, referrals to breast cancer clinics more than doubled after Jolie announced last May that she had undergone the surgery after doctors informed her that she had an 87 percent chance of developing breast cancer due to a high-risk gene.
University of Manchester professor Gareth Evans and his colleagues examined referrals to over 20 genetic centers and clinics in the UK following the press coverage of her admission, and found that those reports “encouraged women with genuine concerns about their family history to get advice,” said Briggs.
“The Angelina Jolie effect has been long-lasting and global, and appears to have increased referrals to centers appropriately,” Evans told BBC News, adding that her decision to undergo a double mastectomy likely had “a bigger impact than other celebrity announcements, possibly due to her image as a glamorous and strong woman.”
“This may have lessened patients’ fears about a loss of sexual identity post-preventative surgery and encouraged those who had not previously engaged with health services to consider genetic testing,” he added, telling the Huffington Post that high-profile cases such as Jolie’s “often mean that more women are inclined to… take the necessary steps to prevent themselves from developing the disease.”
Jolie was found to have a BRCA1 mutation, which is inherited from a parent and is responsible for at least 10 percent of all instances of breast cancer, the study authors explained. Women who have the BRCA1 gene mutation face a 45 to 90 percent lifetime risk of developing breast cancer, and those who have a strong family history of breast cancer and/or a living relative with the disease can be tested for the mutation.
Following Jolie’s announcement last year, new clinical guidance was published in the UK recommending that only women who faced an increased risk of developing breast cancer should be referred for genetic testing at a family history clinic or a regional genetics center. While news stories like this often have a short-term effect on health-related behaviors, Evans and his fellow investigators wanted to see if Jolie’s announcement had a lasting impact.
They reviewed data from 12 family history clinics and nine regional genetic centers throughout the UK, and found that there was a two and a half-fold increase in referrals by general practitioners during the two months immediately following Jolie’s announcement (versus June and July 2012).
That increase continued from August to October, with referrals doubling over the same period the previous year, and the researchers found no increase in inappropriate or unnecessary referrals during this time. Evans said he had expected the increase in visits to be the result of concerned but healthy women returning for early repeat screenings, but the opposite was actually true: most of the women were actually late for their screenings.
“Defective versions of BRCA1 and its sister gene BRCA2 are together responsible for about a fifth of breast cancers. Women who inherit BRCA1 have a 60 percent to 90 percent risk of developing breast cancer. BRCA2 increases the risk by 45 percent to 85 percent. Both gene mutations also raise the risk of ovarian cancer,” The Guardian explained.
MIT Researchers Developing Flexible, Form-Fitting Next-Generation Spacesuit
Chuck Bednar for redOrbit.com – Your Universe Online
A next-gen, skintight spacesuit that would improve astronaut mobility, provide support and reduce mass in comparison to current gas-pressurized models is in the works at MIT, officials at the Cambridge-based research university revealed on Thursday.
Dava Newman, a professor of aeronautics and astronautics and engineering systems at MIT, and her colleagues have engineered active compression garments that include small, spring-like coils that contract when exposed to heat. The coils are made from a type of material known as a shape-memory alloy (SMA), which can recall an engineered shape: even when bent or deformed, it springs back to its pre-programmed form.
Newman and her colleagues incorporated the coils in a tourniquet-like cuff, and then applied a current to generate heat. Once the temperatures reached a pre-determined level, the coils contracted and returned to their desired form, tightening the cuff in the process. During the course of their tests, the researchers said they discovered that the pressure produced by those coils was equal to that required to fully support an astronaut while in space.
“With conventional spacesuits, you’re essentially in a balloon of gas that’s providing you with the necessary one-third of an atmosphere [of pressure] to keep you alive in the vacuum of space,” explained Newman, who has spent more than 10 years designing a new type of flexible, form-fitting spacesuit.
“We want to achieve that same pressurization, but through mechanical counterpressure – applying the pressure directly to the skin, thus avoiding the gas pressure altogether,” she added. “We combine passive elastics with active materials… Ultimately, the big advantage is mobility, and a very lightweight suit for planetary exploration.”
The coil design, which was conceived by Bradley Holschuh, a postdoc in Newman’s lab, is one step towards the ultimate goal of replacing the conventional gas-pressurized suit with one that is less bulky, more form-fitting and able to plug into a spacecraft’s power supply to essentially cause the material to “shrink-wrap” around an astronaut’s body. Holschuh, Newman and their colleagues detail their efforts in the journal IEEE/ASME: Transactions on Mechatronics.
While this is not the first attempt to develop a skin-tight spacesuit, one obstacle previous designs have been unable to overcome is making an extremely snug pressurized suit that astronauts can still easily get into and out of. The MIT team turned to shape-memory alloys as a possible way to solve this problem, since the materials contract only when heated and easily revert to a looser shape once they cool down.
Holschuh’s team looked at more than a dozen different types of shape-changing materials to find the one most suitable for space. They finally opted for nickel-titanium shape-memory alloys, which Jennifer Chu of the MIT News Office said was “ideal for use in a lightweight compression garment” because when it is “trained as tightly packed, small-diameter springs,” the material “contracts when heated to produce a significant amount of force.”
Since nickel-titanium shape-memory alloys are typically produced in reels of thin, straight fibers, Holschuh adapted a technique another team of MIT researchers had devised to engineer a heat-activated robotic worm. The researchers first trained the material to return to its original shape by winding raw SMA fiber into extremely tight, millimeter-diameter coils, then heated the coils to 450 degrees Celsius to set that as their original shape.
“At room temperature, the coils may be stretched or bent, much like a paper clip. However, at a certain ‘trigger’ temperature (in this case, as low as 60 C), the fiber will begin to spring back to its trained, tightly coiled state,” Chu explained. “The researchers rigged an array of coils to an elastic cuff, attaching each coil to a small thread linked to the cuff. They then attached leads to the coils’ opposite ends and applied a voltage, generating heat.”
Between 60 degrees and 160 degrees Celsius, the coils contracted, pulling on the attached threads and causing the cuff to tighten. Holschuh described them as “basically self-closing buckles. Once you put the suit on, you can run a current through all these little features, and the suit will shrink-wrap you, and pull closed.”
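The trigger-temperature behavior Chu describes can be caricatured in a few lines: below roughly 60 C the coil holds whatever stretched length it was given, and between about 60 C and 160 C it recovers toward its trained length, tightening the cuff. The linear ramp below is an illustrative simplification; a real shape-memory alloy follows a hysteretic, nonlinear transformation curve.

```python
def coil_length(stretched_mm, trained_mm, temp_c,
                trigger_c=60.0, full_recovery_c=160.0):
    """Toy shape-memory-alloy response: below the trigger temperature the
    coil stays stretched; between the trigger and full-recovery
    temperatures it contracts linearly back toward its trained length."""
    if temp_c <= trigger_c:
        return stretched_mm
    if temp_c >= full_recovery_c:
        return trained_mm
    fraction = (temp_c - trigger_c) / (full_recovery_c - trigger_c)
    return stretched_mm + fraction * (trained_mm - stretched_mm)

for t in (25, 60, 110, 160):  # coil trained at 20 mm, stretched to 50 mm
    print(t, "C ->", coil_length(stretched_mm=50.0, trained_mm=20.0, temp_c=t), "mm")
```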
The research team now needs to find a way to keep the suit tight. They are said to be considering two options: either maintaining a constant temperature that is warm enough, or including some kind of locking mechanism that prevents the coils from loosening. The first option would require astronauts to carry around heavy battery packs, which would impede mobility, so Holschuh and Newman are currently exploring the latter option.
“As for where the coils may be threaded within a spacesuit, Holschuh is contemplating several designs,” Chu said. One would feature a coil array at the suit’s center that is connected to each of the limbs; when activated, the coils would pull on attached threads to tighten and pressurize the suit. Alternatively, smaller arrays could be placed in multiple strategic locations to produce localized tension and pressure, she added.
—–
ISS Particle Detector Findings Could Shed New Light On Origins Of Dark Matter
Chuck Bednar for redOrbit.com – Your Universe Online
Thanks to a particle detector module mounted to the exterior of the International Space Station (ISS), researchers from the MIT Laboratory for Nuclear Science and their colleagues have collected new measurements which could help scientists learn more about the origin and characteristics of dark matter.
The unit, known as the Alpha Magnetic Spectrometer (AMS), captures incoming cosmic rays from throughout the galaxy. Out of about 41 billion cosmic particles entering the detector, the researchers were able to identify 10 million electrons and positrons (the stable antiparticles of electrons, which exist in small numbers within the cosmic ray flux). These positrons provide hints about the origin of dark matter, the MIT researchers said.
Previous experiments have observed an excess of the particles, which suggests they could not originate from the cosmic rays but from a new and different source, they added. Last year, scientists using the AMS were able to accurately measure the onset of this excess for the first time, and those findings could ultimately help reveal new information about the dark matter whose collisions could be responsible for creating those positrons.
The team reported the ratio of the number of positrons to the combined number of positrons and electrons (the observed positron fraction) over a wider energy range than before. From this data they found that the positron fraction increases quickly at low energies, then slows and ultimately levels off at higher energies. This is said to be the first experimental observation of the positron fraction maximum (243 to 307 gigaelectronvolts).
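The positron fraction itself is a simple ratio: in each energy bin, the number of positrons divided by the combined number of positrons and electrons. A minimal computation, using the reported bin edges but entirely invented counts, looks like this:

```python
# Positron fraction per energy bin: N(e+) / (N(e+) + N(e-)).
# Bin edges echo the ranges quoted in the article; the counts are invented.
bins_gev = [(0.5, 8), (8, 50), (50, 243), (243, 307), (307, 500)]
positrons = [1200, 5400, 2100, 350, 90]
electrons = [16800, 49000, 15500, 2150, 600]

for (low, high), n_pos, n_ele in zip(bins_gev, positrons, electrons):
    fraction = n_pos / (n_pos + n_ele)
    print(f"{low}-{high} GeV: positron fraction = {fraction:.3f}")
```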
[ Listen to the Podcast: The Future Of Dark Matter Research – With Guest Dr. Matthew Walker ]
“The AMS results announced today are tremendously provocative, and will drive scientists around the world to continue pursuing one of the biggest mysteries in the cosmos: dark matter,” NASA chief scientist Ellen Stofan said in a statement. “The clear and definitive data from AMS represent the caliber of scientific discovery enabled by our unique laboratory in space, the International Space Station. Today we are one step closer to answering the fundamental questions about how our universe works, and we look forward to many more exciting twists in this developing story.”
“The new AMS results show unambiguously that a new source of positrons is active in the galaxy,” added Paolo Zuccon, an assistant professor of physics at MIT. “We do not know yet if these positrons are coming from dark matter collisions, or from astrophysical sources such as pulsars. But measurements are underway by AMS that may discriminate between the two hypotheses.”
The research was funded by the US Department of Energy, and the findings were published Thursday in the journal Physical Review Letters in two separate studies – “Electron and Positron Fluxes in Primary Cosmic Rays Measured with the Alpha Magnetic Spectrometer on the International Space Station” and “High Statistics Measurement of the Positron Fraction in Primary Cosmic Rays of 0.5–500 GeV with the Alpha Magnetic Spectrometer on the International Space Station.”
According to the MIT researchers, almost 85 percent of the matter in the universe is dark matter, which is essentially invisible to modern telescopes because it does not emit or reflect light. Astronomers have been limited to observing its effects in the form of unusual gravitational forces that appear to bind together galaxy clusters that otherwise would have come apart, leading them to theorize the existence of this unseen source of gravitational mass.
The AMS project was designed to try to identify the source of this dark matter by collecting a constant flux of cosmic rays. Those rays are believed to include leftover material from collisions between dark matter particles, which release a specific amount of energy dependent upon the mass of the original particles. When those particles annihilate, they create particles that eventually become electrons, protons, antiprotons and positrons.
[ Listen to the Podcast: Hunting For Cosmic Rays – With Special Guest Dr. Stephan Funk ]
“As the visible matter in the universe consists of protons and electrons, the researchers reasoned that the contribution of these same particles from dark matter collisions would be negligible,” the MIT researchers explained. “However, positrons and antiprotons are much rarer in the universe; any detection of these particles above the very small expected background would likely come from a new source.”
The onset, maximum position, offset and other features of this excess will help scientists figure out if positrons arise from pulsars and other astrophysical sources, or from dark matter. After continuously collecting data since 2011, the AMS team analyzed 41 billion incoming particles, and identified 10 million positrons and electrons with energies ranging from 0.5 to 500 gigaelectronvolts (GeV) – a wider energy range than they had previously measured.
“The researchers studied the positron fraction versus energy, and found an excess of positrons starting at lower energies (8 GeV), suggesting a source for the particles other than the cosmic rays themselves,” MIT said. The positron fraction slowed, then peaked at 275 GeV, which indicates that the data could be compatible with a dark matter source of positrons. The research could indicate that dark matter is a new kind of particle.
“The new phenomena could be evidence for the long-sought dark matter in the universe, or it could be due to some other equally exciting new science,” said Barry Barish, a professor emeritus of physics and high-energy physics at the California Institute of Technology who was not involved in the experiments. “In either case, the observation in itself is what is exciting; the scientific explanation will come with further experimentation.”
Report Suggests Global Population Could Top 12 Billion By 2100
Chuck Bednar for redOrbit.com – Your Universe Online
The global population could soar well beyond expectations in the years ahead, exceeding by more than two billion the current projection of nine billion men and women living on planet Earth by 2100, researchers from the United Nations Population Division and several universities claim in a new study.
According to National Geographic’s Robert Kunzig, the study authors used a “probabilistic” statistical method to determine that there is an 80 percent chance the number of living people by the start of the next century will be somewhere between 9.6 billion and 12.3 billion (up from 7.2 billion currently).
Their work, said Damian Carrington of The Guardian, contradicts two decades of research suggesting the number of people living on Earth would peak at about nine billion people by the year 2050. The analysis shows that the challenges facing the world’s food supply, healthcare, and society as a whole may actually worsen in the second half of the century.
“The consensus over the past 20 years or so was that world population, which is currently around 7 billion, would go up to 9 billion and level off or probably decline,” corresponding author Adrian Raftery, a professor at the University of Washington, said in a statement. “We found there’s a 70 percent probability the world population will not stabilize this century. Population, which had sort of fallen off the world’s agenda, remains a very important issue.”
Raftery, who teaches statistics and sociology at the university, and his colleagues used the most recent UN population data that was released in July as the basis of their research, which was published online Thursday by the journal Science. They are said to be the first team of researchers to generate a population report using modern statistics, known as Bayesian statistics, to create better predictions by combining all available information into their projections.
Most of that expected growth will occur in Africa, where the current population of one billion is expected to quadruple by the end of the century due to a slower-than-anticipated decrease in birth rates in sub-Saharan Africa, the study authors said. Other regions are expected to experience less change.
The report suggests there is an 80 percent chance the population of that continent will be between 3.5 billion and 5.1 billion people by the end of the century, while the population of Asia is expected to increase from 4.4 billion now to five billion in 2050 before beginning to decline. North America, Europe, and Latin America and the Caribbean are all projected to stay below one billion each, the researchers said.
Most predictions of global population growth were based primarily on future life expectancy and fertility rates, and previous techniques relied largely on what changes experts believed would take place in those trends, they noted. On the other hand, the new forecast technique uses statistical methods to combine government data and expert forecasts for such things as mortality rates, fertility rates and international migration.
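The probabilistic approach can be illustrated with a toy Monte Carlo: rather than following one deterministic growth path, sample many plausible growth-rate trajectories and read off the 10th and 90th percentiles of the 2100 outcomes, which is the kind of 80 percent interval the authors quote. The starting rate, drift and volatility below are invented for illustration; the actual study fits a Bayesian hierarchical model to fertility and mortality data.

```python
import random
import statistics

def simulate_population_2100(start_billion=7.2, years=86, n_paths=10000):
    """Toy probabilistic projection: each path draws a slowly drifting,
    noisy annual growth rate and compounds it to 2100. All parameters
    are illustrative, not the UN's fitted model."""
    outcomes = []
    for _ in range(n_paths):
        pop, rate = start_billion, 0.010  # ~1% initial annual growth
        for _ in range(years):
            rate += random.gauss(-0.0001, 0.0003)  # gradual, uncertain decline
            pop *= 1.0 + max(rate, -0.01)
        outcomes.append(pop)
    outcomes.sort()
    interval = (outcomes[int(0.10 * n_paths)], outcomes[int(0.90 * n_paths)])
    return statistics.median(outcomes), interval

median, (low, high) = simulate_population_2100()
print(f"median = {median:.1f}B, 80% interval = {low:.1f}B to {high:.1f}B")
```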
“Population policy has been abandoned in recent decades. It is barely mentioned in discussions on sustainability or development such as the UN-led sustainable development goals,” Simon Ross, chief executive of the group Population Matters, told The Guardian. “The significance of the new work is that it provides greater certainty. Specifically, it is highly likely that, given current policies, the world population will be between 40-75 percent larger than today in the lifetime of many of today’s children and will still be growing at that point.”
A Little Plastic In Your Toothpaste?
Rayshell Clapper for redOrbit.com – Your Universe Online
If you were to search for the definition of polyethylene in the Oxford Dictionary, this is what would come up: “A tough, light, flexible synthetic resin made by polymerizing ethylene, chiefly used for plastic bags, food containers, and other packaging.” In other words, polyethylene is plastic.
Most people are comfortable with plastic in their food containers, their trashcans or even their hair brushes, but not many would want plastic in their toothpaste, right? Well, Crest, a product of Procter & Gamble, puts polyethylene in popular toothpastes including Crest MultiCare Whitening. However, Procter & Gamble recently announced that within six months all of its toothpastes containing the plastic beads will be off the shelves, as reported by UPI.
Now, if these plastic microbeads served a purpose, many users would probably not be quite as concerned, but UPI explains that the polyethylene is in the toothpaste purely for decoration. On top of that, DentalBuzz points out that the box does not list the ingredients of the toothpaste. That’s pretty curious, right? Trish Walraven, the dental hygienist who wrote the DentalBuzz article, agreed, so she tested the plastic microbeads and found that the little blue specks of plastic that are overwhelmingly present in the toothpaste do not dissolve in acetone, alcohol, or the mouth. Why is this a problem?
According to Walraven, “Around our teeth we have these little channels in our gums, sort of like the cuticles around our fingernails. The gum channel is called a sulcus, and it’s where diseases like gingivitis get their start. A healthy sulcus is no deeper than about 3 millimeters, so when you have hundreds of pieces of plastic being scrubbed into your gums each day that are even smaller than a millimeter, many of them are getting trapped.” This means that the sulcus is more vulnerable to disease, bacteria, and other dental health issues.
All this just for decorative purposes?
On top of the health concerns that come with polyethylene in our toothpaste, the plastic is a major environmental concern because it is not biodegradable and never breaks down; it just gets smaller and smaller and lasts practically forever, according to WCPO Cincinnati. So the plastic microbeads, included just for decorative purposes, could, and likely do, damage gums and teeth as well as the environment. These reasons explain why dental hygienists and consumers alike are concerned about the plastic in their toothpaste.
DentalBuzz and WCPO Cincinnati listed some of the Crest brands that definitely have polyethylene in them:
• Crest 3D White Radiant Mint
• Crest Pro-Health For Me
• Crest 3D White Arctic Fresh
• Crest 3D White Enamel Renewal
• Crest 3D White Luxe Glamorous White
• Crest Sensitivity Treatment and Protection
• Crest Complete Multi-Benefit Whitening Plus Deep Clean
• Crest 3D White Luxe Lustrous Shine
• Crest Extra White Plus Scope Outlast
• Crest SensiRelief Maximum Strength Whitening Plus Scope
• Crest Pro-Health Sensitive + Enamel Shield
• Crest Pro-Health Clinical Gum Protection
• Crest Pro-Health For Life for ages 50+
• Crest Complete Multi-Benefit Extra White+ Crystal Clean Anti-Bac
• Crest Be Adventurous Mint Chocolate Trek
• Crest Be Dynamic Lime Spearmint Zest
• Crest Be Inspired Vanilla Mint Spark
• Crest Pro-Health Healthy Fresh
• Crest Pro-Health Smooth Mint
That totals 19 different Crest toothpaste brands containing the decorative plastic microbeads. On top of that, some smaller companies and brands of toothpaste also use polyethylene in their pastes. UPI acknowledges that the FDA has approved the use of polyethylene in foods and healthcare products, but dentists and dental hygienists are definitely concerned about their presence. Just because the FDA approves their use does not mean that their use is a good idea. Currently, the American Dental Association (ADA) has no plans to rescind its seal of approval from Crest products, although it does acknowledge that it will “continue to monitor and evaluate new scientific information on this issue as it becomes available.”
WCPO Cincinnati reporter Maxim Alter was provided a statement by a Procter & Gamble spokesperson saying that Crest currently offers toothpastes without the plastic microbeads. “The majority of our product volume will be microbead-free within six months,” the spokesperson said. “We will complete our removal process by March of 2016.”
As of now, the ADA recommends that consumers follow the FDA’s recommendations on the use of dental health care products. Many people will likely go a step further and avoid the plastic decorations in their toothpastes.
—–
The Most Painful Spots of Fibromyalgia
If you have been diagnosed with fibromyalgia, then you know it is a very painful chronic illness, but also that it has strict diagnostic guidelines. Ranging from blood tests to an examination of symptoms, doctors will run whatever tests they can to determine if you have fibromyalgia, but no test is more effective than identifying the pressure points on the body.
You may think that you have pain all throughout your body, and indeed, you might. But there are actually very specific pressure points on the body where doctors will test for pain. Many medical professionals and researchers believe these pressure points could point to a central source of the pain you suffer from fibromyalgia.
Fibromyalgia Pressure Points
As part of their test to determine if you have fibromyalgia, doctors will put pressure on these points to see if you react with a high intensity of pain there. So while you may have pain all around your body, doctors will not be able to diagnose you with fibromyalgia one way or the other unless they base their decision on these pressure points. The pressure points include the back of your neck, the shoulders, the upper chest, just above the knees, the hips, the elbows, and the upper part of the buttocks.
Simple lab tests alone cannot identify whether you have pain at these pressure points. A physical exam of the points is required, and it shouldn’t take any longer than ten or fifteen minutes. As pressure is applied to each point, you tell your doctor whether or not you feel pain, and the doctor records which points are painful.
To be officially diagnosed with fibromyalgia, you must have widespread pain in all four quadrants of your body for at least three months, and respond with high levels of pain when pressure is applied to at least 11 of the 18 pressure points. This method has proven highly accurate: doctors make a correct diagnosis with it about ninety percent of the time, which is better than blood tests and other forms of testing for fibromyalgia.
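For readers who like to see the rule spelled out, here is a minimal sketch of the diagnostic logic described above. The 18-point list and the thresholds are the ones given in this article; the function is purely illustrative, not a clinical tool.

```python
TOTAL_TENDER_POINTS = 18  # the designated sites listed above

def tender_point_rule(painful_points: int, months_of_widespread_pain: int) -> bool:
    """Illustrative only: the rule described above requires pain at 11 or
    more of the 18 sites, plus widespread pain in all four body quadrants
    for at least three months."""
    if not 0 <= painful_points <= TOTAL_TENDER_POINTS:
        raise ValueError("point count must be between 0 and 18")
    return painful_points >= 11 and months_of_widespread_pain >= 3

print(tender_point_rule(12, 4))  # True: meets both conditions
print(tender_point_rule(9, 6))   # False: fewer than 11 painful points
```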
Your doctor won’t press hard enough on these areas to cause pain in just anyone. Usually, they’ll apply only enough pressure with a finger to make the fingernail blanch. You may even want to try applying pressure to the points in this manner yourself.
Trigger Points
Another important aspect of this subject is the body’s trigger points. Ninety percent of the body’s pressure points are also trigger points, so what’s the real difference? Trigger points are nodules that you can feel in tight muscles. If you apply pressure to a trigger point, it will not only hurt where you are pressing but also radiate pain elsewhere in your body.
In comparison, applying pressure to a pressure point will not send pain throughout your body and only cause pain in the area that is having pressure applied to it. Another difference is that trigger points can restrict the motion that you can make, while pressure points do not.
As noted above, ninety percent of your pressure points are also trigger points, and this is actually beneficial. It means a wider range of treatments and therapies is available for relieving your pain, and relieving the pain of just one trigger point can reduce the pain of other trigger and pressure points as well.
Steps you can take to relieve the pain of your trigger and pressure points include taking hot showers or baths, getting a massage, or doing anything else that relieves tension in the muscles, which reduces the pain of the trigger points when pressure is applied to them.
What If You Don’t Pass the Test?
Remember, you need a pressure point count of at least 11 out of 18 to be officially diagnosed with fibromyalgia. But let’s say your count comes in anywhere under eleven.
The good news is that you most likely don’t have Fibromyalgia, but the bad news is that you still don’t know what illness you have, and you’re still feeling pain without knowing how you should treat it!
Many medical professionals believe you can still have fibromyalgia even if fewer than 11 of your pressure points are painful. This is why many doctors and medical professionals have abandoned the pressure point approach to diagnosing fibromyalgia, although no alternative diagnostic process has yet been developed that is as effective as the pressure point exam was.
If you agree with the doctors who consider the pressure point process an ineffective means of diagnosing fibromyalgia, you’ll want to stay in touch with the latest fibromyalgia news and research to see if anything better comes along. Currently, however, a majority of doctors and medical professionals do trust the pressure point process, so talk to your own doctor to get their opinion on how to effectively diagnose and then treat fibromyalgia.
Something else you could have besides fibromyalgia is chronic fatigue syndrome. If you have pain at fewer than 11 of the pressure points, and other tests support the conclusion that you officially don’t have fibromyalgia, your doctor might test you for chronic fatigue syndrome instead.
Vitiligo Treatment Holds Promise For Restoring Skin Pigmentation
David Olejarz, Henry Ford Health System
A treatment regimen is safe and effective for restoring skin pigmentation in vitiligo patients, according to a Henry Ford Hospital study.
“Our findings offer patients with vitiligo worldwide a renewed hope for a bright future in the treatment of this disfiguring disease,” says Henry Lim, M.D., chair of Dermatology at Henry Ford and the study’s lead author. “Patients with lesions on their face and arms could have a more rapid response to the combination treatment.”
Patients were randomly divided into two study groups: Group A received the combination therapy; Group B received only NB UVB treatment.
Henry Ford dermatologists described the repigmentation results as “superior,” and said the treatment combination holds promise as a future therapy for the more than 50 million people worldwide living with vitiligo. It affects about one in every 100 people in the United States.
The study was published online Wednesday in the Journal of the American Medical Association-Dermatology.
In a multi-center study led by Henry Ford, dermatologists sought to evaluate the safety and effectiveness of a treatment combination of afamelanotide, a drug that induces skin pigmentation, and phototherapy using narrowband ultraviolet-B rays (NB UVB). Phototherapy, or ultraviolet light therapy, is the treatment of choice for many patients with widespread vitiligo. It has been shown to be effective, though the degree of repigmentation varies.
Dr. Lim, an international vitiligo expert, says afamelanotide “enhances the ability of the UVB to induce repigmentation of the skin.”
Key findings:
– Repigmentation occurred faster in patients who received the combination treatment compared to patients who received NB UVB.
– Patients who received the combination treatment achieved appearance of pigment on their face and arms after 40 days compared to 60 days for patients who received NB UVB.
– In dark-skinned patients, repigmentation occurred faster in the combination group compared to the NB UVB group.
Afamelanotide is being submitted to the U.S. Food and Drug Administration for approval for use in treating vitiligo.
Vitiligo is a skin disease that causes the skin to lose color and develop white patches that vary in size and location. It develops when cells called melanocytes are killed by the body’s immune system, causing the area of skin to turn white because the cells no longer make pigment. Vitiligo is more noticeable in individuals with darker skin tones, but it affects all races and ethnicities.
While vitiligo is neither contagious nor life-threatening, there is no cure, and it can cause low self-esteem and depression in those living with the disease.
The Henry Ford study represents the hospital’s latest research into new treatment options for vitiligo. In a 2012 study published in the Journal of the American Academy of Dermatology, Henry Ford dermatologists showed the benefits of a skin cell transplant surgery called melanocyte-keratinocyte transplantation, or MKTP. Henry Ford has since performed more than 190 MKTP procedures on patients from Michigan, 23 other U.S. states and Canada.
For this new study, 55 patients were enrolled at four sites: Henry Ford; the Icahn School of Medicine at Mount Sinai in New York; the Vitiligo and Pigmentation Institute of Southern California; and the University of California, Davis, Department of Dermatology.
In the two study groups, 28 patients were enrolled in Group A and 27 patients in Group B. Both groups received phototherapy two to three times a week for six months, for a total of 72 treatments. In addition to phototherapy, patients in Group A received a 16 mg dose of afamelanotide in four monthly treatments; the implant, about the size of a grain of rice, was placed just under the skin.
Two common vitiligo assessment scoring systems – Vitiligo Area Scoring Index and Vitiligo European Task Force – were used to evaluate the repigmentation response.
While patients in both groups showed repigmentation, the response in Group A was superior to Group B by the 56th day of treatment and even better by the 168th day of treatment. The most common side effect was redness of the skin.
The study was funded by Clinuvel Pharmaceuticals.
New Deluxe Kindle E-Reader, Kid-Friendly Fire Tablet Among New Products Announced By Amazon
Chuck Bednar for redOrbit.com – Your Universe Online
Amazon has unveiled multiple new entries into its Kindle product line, including a kid-friendly version of its Kindle Fire, a tablet computer that will cost less than $100 and what it calls the most advanced e-reader ever.
The online retailer has also announced upgrades to its Fire OS, a new version of its basic Kindle e-reader that will include a touch screen, as well as upgraded 7-inch and 8.9-inch Fire tablets, Reuters reporters Jennifer Saba and Deepa Seetharaman wrote on Thursday. The upgraded hardware and new products are expected to begin shipping in October.
According to Amazon, the new Kindle Voyage will be 7.6 mm thin and weigh less than 6.4 ounces, making it the thinnest device ever produced by the company. It will use a new, higher-resolution Paperwhite display capable of producing 300 pixels per inch, and the device will feature a front light that will automatically adjust brightness.
Kindle Voyage will also feature a new force sensor which allows users to turn pages by lightly pressing it, and doing so will activate a haptic actuator to deliver a slight vibration, the company said. It will also come with free, built-in 3G to allow readers to download books without having to seek out Wi-Fi hotspots. Kindle Voyage will cost $199 and will be released alongside a touch-screen enabled, $79 Kindle with a faster processor and twice the storage.
“Our mission with Kindle is to make the device disappear, so you can lose yourself in the author’s world,” said Jeff Bezos, Amazon.com Founder and CEO. “Kindle Voyage is the next big step in this mission. With the thinnest design, highest resolution and highest contrast display, reimagined page turns, and all of the features that readers love about Kindle – books in seconds, no eyestrain or glare, readability in bright sunlight, and battery life measured in weeks, not hours – Kindle Voyage is crafted from the ground up for readers.”
David Limp, Amazon’s senior vice president in charge of Kindle devices, told the Wall Street Journal that Amazon expects the audience for the new Kindle Voyage to be smaller than that for the cheaper versions of the e-reader. He said it was designed to be the best single-purpose device of its kind, and that when the company showed it to hardcore readers, it was “difficult to get it out of their hands.”
Bezos’s company also announced a new version of the Fire HD tablet featuring a quad-core processor running at speeds of up to 1.5 GHz, a front-facing camera, and a rear camera said to be capable of taking photos or capturing video in 1080p full HD. It will also feature a high-definition display with more than one million pixels, and it will come in two sizes, six-inch and seven-inch (costing $99 and $139 respectively), and five different color schemes.
Amazon also is prepping a higher-performance tablet known as the Fire HDX 8.9 that comes with a 2.5 GHz quad-core processor and a 339 ppi HDX display which can automatically adjust display color to make book pages more closely resemble paper in any light. The company also said that its graphics engine is 70 percent faster than before, and that it will be the first tablet computer to feature Dolby Atmos audio.
“For the low prices, the Fire HD tablets have strong hardware features,” said Agam Shah of IDG News Service, adding that Amazon claims the devices “provide three times the graphics performance than Samsung’s entry-level Galaxy Tab 4 tablets,” and that the tablets also feature “Gorilla Glass displays, which reduces the chances of screens cracking or getting scratches. The rugged screen is typically available on tablets above $200.”
Both the Fire HD and the Fire HDX will run on Amazon’s new operating system, Fire OS 4 “Sangria”, which is based on Android OS 4.4 KitKat. In addition to updating the visual design of the user interface and adding individual profiles for devices used by multiple people, Amazon said Fire OS 4 will be faster and easier to use, will be able to predict movies and television shows that users want to watch, and will include power-saving features that can increase battery life by 25 percent.
“Fire OS is not just a device OS – it deeply integrates with the Amazon cloud to further improve performance and ease-of-use, to enable cross-platform syncing, and to power services that require more processing than is possible on a mobile device,” the company said, adding that it would also include a family-sharing feature for apps, games, audiobooks and video content. Fire OS 4 will be available on all 4th generation Fire devices and will also be available for all 3rd generation Amazon tablets as a free, over-the-air software update.
Finally, Amazon also revealed a new kid-friendly entry in the Kindle Fire product line, the Fire HD Kids Edition. The company claims this is the first tablet actually built from the ground up for children; it features a quad-core processor, an HD display, and both front- and rear-facing cameras. The Fire HD Kids Edition also comes with a two-year, no-questions-asked replacement policy and one year of unlimited access to 5,000 books, movies, TV shows, educational apps and games at no additional cost.
All items are available for pre-order now and will start shipping in October.
Humans Likely Not The Reason That Chimps Attack And Kill Each Other
Chuck Bednar for redOrbit.com – Your Universe Online
Chimpanzee-on-chimpanzee violence is not the result of increased aggression resulting from exposure to human activities, researchers from the University of Minnesota and an international team of colleagues report in the latest edition of the journal Nature.
Rather, an in-depth analysis of lethal aggression among different groups of mankind’s closest relatives revealed that the behavior is an adaptive strategy which evolved so the creatures can eliminate rivals and gain better access to resources or territory, lead author Michael L. Wilson and his associates explain in the study.
Wilson and his colleagues spent five decades studying the behavior of chimpanzees in Africa, attempting to solve the mystery as to why they are the only creatures other than humans to engage in coordinated attacks on other members of the same species – behavior first observed by British anthropologist Jane Goodall in the 1970s.
“Observations that chimpanzees kill members of their own species have influenced efforts to understand the evolution of human violence,” said University of Michigan anthropologist John Mitani, who helped conceive the study and was one of more than 30 experts from the US, Germany and elsewhere involved in the research.
According to the researchers, the study provides “compelling evidence” that this type of killing is an evolved tactic, not an incidental result of aggression made worse by human activities such as deforestation. The findings indicate that human interference and encroachment is not actually an influential predictor of chimp-on-chimp aggression.
Wilson, Mitani and their colleagues compiled and analyzed roughly 50 years’ worth of research data pertaining to 18 different chimpanzee communities and four groups of bonobos, the chimps’ kinder, gentler cousins. The authors found that chimps were no more likely to attack each other where human interference such as provisioned feeding or habitat destruction occurred, and that bonobos would not kill one another even when exposed to those same manmade disturbances, said Rachel Feltman of the Washington Post.
The study authors reported that approximately 150 chimpanzees were confirmed or suspected to have been killed by other members of their own species during the course of the study, noted USA Today’s Traci Watson. Richard Wrangham, a primatologist at Harvard University and co-author of the study, told Watson that chimps patrol their territories in large groups and that their social structure leads to “a tendency… to kill neighbors.”
The chimpanzees responsible for the attacks were invariably males acting together in groups, the researchers said. Their victims tended to be males and nursing infants from other communities, and were unlikely to be closely related to the aggressors. When young chimps were killed, the attackers sometimes took them from their mothers without harming her, even in situations where they could have killed her as well.
“Humans have long impacted African tropical forests and chimpanzees, and one of the long-standing questions is if human disturbance is an underlying factor causing the lethal aggression observed,” said co-author Dr. David Morgan, a research fellow with the Lester E. Fisher Center for the Study and Conservation of Apes at Chicago’s Lincoln Park Zoo.
“A key take-away from this research is that human influence does not spur increased aggression within or between chimpanzee communities,” he added. “The more we learn about chimpanzee aggression and factors that trigger lethal attacks among chimpanzees, the more prepared park managers and government officials will be in addressing and mitigating risks to populations particularly with changing land use by humans in chimpanzee habitat.”
DNA Of Modern Europeans Can Be Traced Back To Three Ancient Tribes
Chuck Bednar for redOrbit.com – Your Universe Online
A comparison of nine ancient genomes to those of modern humans has revealed that present-day Europeans descended from at least three distinct groups of ancient humans, not two as previously believed.
Researchers from the Howard Hughes Medical Institute (HHMI), Harvard Medical School, the University of Tübingen in Germany and an international team of colleagues found that modern Europeans derived from three highly differentiated populations: west European hunter-gatherers, ancient north Eurasians related to Upper Paleolithic Siberians, and early European farmers who were primarily of Near Eastern origin.
As the researchers explained in the September 18 edition of the journal Nature, the ancient north Eurasians are the new group which has been added into the mix, as they apparently contributed DNA to both present-day Europeans and the people that journeyed across the Bering Strait into the Americas over 15,000 years ago.
“Prior to this paper, the models we had for European ancestry were two-way mixtures. We show that there are three groups,” co-senior author and HMS genetics professor David Reich said in a statement. “This also explains the recently discovered genetic connection between Europeans and Native Americans. The same Ancient North Eurasian group contributed to both of them.”
According to BBC News online science editor Paul Rincon, Reich and his associates reached their conclusion after analyzing the genomes of seven hunter-gatherers from Scandinavia, one hunter whose remains were discovered in a cave in Luxembourg, and an early farmer from Stuttgart, Germany. They also found that the ancestry of both ancient Near Eastern farmers and their European descendants can be traced further back to a previously unknown lineage called the Basal Eurasians.
“Our study does indeed show that European origins were more complex than previously imagined,” Iosif Lazaridis, a postdoctoral research fellow at HMS, told Reuters. “It seems that Europeans – who are often considered one group today – actually have a complex history with at least three groups admixing in different proportions in their history.”
Nearly all Europeans were found to have ancestry from all three of these ancient groups, Reuters reporter Will Dunham explained. The ancient north Eurasians contributed as much as 20 percent of their genetics, and the researchers said that this group connects all modern Europeans and Native Americans.
The study also revealed that people in northern Europe (particularly the Baltic states) had the highest proportion of western European hunter-gatherer ancestry, with up to half of the DNA of Lithuanians coming from this group, Dunham said. Southern Europeans obtained the bulk of their genetic ancestry from the ancient farmers, with as much as 90 percent of the DNA of Sardinians tracing back to these early European immigrants.
In addition to reviewing data from those nine ancient skeletons, Reich and a team of over 100 experts worldwide also looked at 203 present-day populations living all over the world, and compared the ancestral genomes with those of 2,345 people in their contemporary populations. Doing so required them to develop new computational methods of genetic analysis, the study authors explained.
“Figuring out how these populations are related is extremely hard,” Reich said in a statement. “There’s a lot that happened in Europe in the last 8,000 years, and this history acts like a veil, making it difficult to discern what happened at the beginning of this period. We had to find statistics that were able to tell us what happened deep in the past without getting confused by 8,000 years of intervening history, when massive and important events occurred.”
“What we find is unambiguous evidence that people in Europe today have all three of these ancestries: early European farmers who brought agriculture to Europe, the indigenous hunter-gatherers who were in Europe prior to 8,000 years ago, and these ancient north Eurasians,” he added.
Fitbit Or FitFail? Are Wearable Activity Monitors Really Worth It?
Rayshell Clapper for redOrbit.com – Your Universe Online
Wearable lifestyle activity monitors are all the rage in the health community right now. Committed users love to record their stats on devices like the Fitbit and then sync the data to their laptops or tablets. These devices are visible everywhere as Americans grow more and more interested in their health and in living a healthy lifestyle. While this is a natural step in the right direction, do these electronic activity monitors really work? New research suggests that, while they do provide some benefits, they may not be as great as we think. At the very least, they may need more development.
According to a statement from the University of Texas Medical Branch (UTMB), activity monitors from Fitbit, Jawbone, Nike, Basis, BodyMedia, Misfit, Fitbug, Ibitz, Polar and Withings show great promise yet still may need some development. While they do much more than pedometers, which solely count the steps one takes and calculate on average how far one walked in a day, these wearable health devices are missing some key functions and applications.
Just what do these devices do? As the UTMB article explains, the most useful functions wearable activity monitors perform include measuring and providing feedback on several fitness and health categories, including calories burned, type of exercise undertaken and sleep quality, along with measurements of heart rate, skin sweat and body temperature. Some devices – such as those from Jawbone, Fitbit and Nike – take their data collection a bit further by including goal-setting and progress feedback, social support, and a variety of easy-to-read charts and progress trackers based on the user’s goals. Specifically, “The researchers found that most of the interactive tools in these devices’ apps for goal setting, self monitoring and feedback were in line with what health care professionals recommend for their patients. The number of available app tools was similar to the amount of techniques used by health care professionals to increase their patients’ physical activity.”
While all of this is a definite step up from the pedometer, the current wearable activity monitors have some shortcomings, or at least underdeveloped areas. Most of the 13 commercially available devices included in the research had few or none of the tactics associated with successfully increasing physical activity, such as planning action steps, instruction on how to perform a behavior or exercise, commitment, and problem solving.
The authors of the study – senior author Elizabeth Lyons, Zakkoyya Lewis and Jennifer Rowland of UTMB, and Brian Mayrsohn of the University of Central Florida – concluded that while these devices show great potential, they are not as beneficial for health and fitness as they could be. In fact, the devices with the most features seemed less effective than those with fewer but more effective tools. With current devices, individual success seemed to correlate more with individual preferences and needs, such as water resistance, a heart rate monitor or food logs, to name a few. The research team concluded that more research is needed into the feasibility and benefit of wearable electronic activity monitors. As Lyons stated, “This content analysis provides preliminary information as to what these devices are capable of, laying a foundation for clinical, public health and rehabilitation applications. Future studies are needed to further investigate new types of electronic activity monitors and to test their feasibility, acceptability and ultimately their public health impact.”
Though these devices already offer many benefits, more research should lead to better, more beneficial health and fitness activity monitors, and in turn to healthier, more fit individuals.
For the full study, see the Journal of Medical Internet Research.
Supermassive Black Hole Found In Ultracompact Dwarf Galaxy For The First Time
Chuck Bednar for redOrbit.com – Your Universe Online
Using the Hubble Space Telescope and other instruments, astronomers from the University of Utah and an international team of colleagues have discovered a supermassive black hole in the ultracompact dwarf galaxy M60-UCD1, making it the smallest galaxy ever found to host one of these enormous light-sucking objects.
[ Watch the Video: Artist’s Impression Of Dwarf Galaxy M60-UCD1’s Formation ]
Anil Seth, an assistant professor of physics and astronomy at the university, and his colleagues report in the September 18 edition of the journal Nature that the galaxy is located approximately 50 million light-years away and roughly 1/500th the diameter of the Milky Way. In addition, it is home to about 140 million stars, making it the densest dwarf galaxy ever observed.
M60-UCD1 was also found to be home to a black hole with a mass of more than 20 million suns, and this discovery suggests that several other ultracompact dwarf galaxies (UCDs) also contain supermassive black holes, the study authors said. Furthermore, their research indicates these dwarf galaxies could be the remnants of larger galaxies that were torn apart during collisions with other galaxies.
Image Above: Hubble image of Messier 60 and M60-UCD1. Credit: NASA, ESA and A. Seth (University of Utah, USA)
[ Watch the Video: Zoom Into Galaxy Pair Arp 116 ]
“It is the smallest and lightest object that we know of that has a supermassive black hole. It’s also one of the most black hole-dominated galaxies known,” Seth explained in a statement. “There are a lot of similar ultracompact dwarf galaxies, and together they may contain as many supermassive black holes as there are at the centers of normal galaxies.”
Using images captured by Hubble and observations conducted with the Gemini North 8-meter optical-and-infrared telescope in Hawaii, the astronomers found that the black hole at the heart of the galaxy comprises 15 percent of its total mass, and weighs five times more than the one located at the center of the Milky Way.
“That is pretty amazing, given that the Milky Way is 500 times larger and more than 1000 times heavier than M60-UCD1,” explained Seth. “In fact, even though the black hole at the center of our Milky Way galaxy has the mass of 4 million Suns, it is still less than 0.01 percent of the Milky Way’s total mass, which makes you realize how significant M60-UCD1’s black hole really is.”
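Those proportions are easy to sanity-check against the rounded figures quoted above. The back-of-the-envelope sketch below uses only those figures; it is not the study’s actual calculation.

```python
# Rounded figures quoted in this article, in solar masses
bh_ucd1 = 2.0e7        # M60-UCD1's black hole: "more than 20 million suns"
bh_fraction = 0.15     # said to be about 15 percent of the galaxy's mass
bh_milky_way = 4.0e6   # the Milky Way's central black hole

# Implied total mass of M60-UCD1, consistent with "about 140 million stars"
total_ucd1 = bh_ucd1 / bh_fraction        # ~1.3e8 solar masses

# If the Milky Way is more than 1000 times heavier than M60-UCD1...
total_milky_way = 1000 * total_ucd1       # ~1.3e11 solar masses
print(bh_milky_way / total_milky_way)     # ~3e-5, well under 0.01 percent
```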
[ Watch the Video: Pan Across Galaxy Pair Arp 116 ]
The Hubble images provided information about the diameter and stellar density of the galaxy, while Gemini’s observations helped the research team measure the movement of the stars as they are impacted by the gravitational pull of the black hole. They then combined the data to calculate the mass of the unseen black hole.
According to Irene Klotz of Discovery News, the findings could help resolve a long-standing mystery surrounding UCDs, as scientists have long suspected that they had previously been far larger, but had been stripped of their stars by neighboring galaxies, ultimately leaving behind just the dense center and supermassive black hole.
If they are correct about other UCDs having supermassive black holes at the center, Seth told the Washington Post that their conclusions “could actually double the number of black holes in the universe. There are lots of ultra compact galaxies like this one, and it’s possible that many of them have black holes as well.”
Waistlines Of US Adults Continue To Increase
Brittany Behm, The JAMA Network Journals
The prevalence of abdominal obesity and average waist circumference increased among U.S. adults from 1999 to 2012, according to a study in the September 17 issue of JAMA.
Waist circumference is a simple measure of total and intra-abdominal body fat. Although the prevalence of abdominal obesity has increased in the United States through 2008, its trend in recent years has not been known, according to background information in the article.
Earl S. Ford, M.D., M.P.H., of the U.S. Centers for Disease Control and Prevention, Atlanta, and colleagues used data from seven 2-year cycles of the National Health and Nutrition Examination Survey (NHANES) starting with 1999-2000 and concluding with 2011-2012 to determine trends in average waist circumference and prevalence of abdominal obesity among adults in the United States. Abdominal obesity was defined as a waist circumference greater than 40.2 inches (102 cm) in men and greater than 34.6 inches (88 cm) in women.
Data from 32,816 men and nonpregnant women ages 20 years or older were analyzed. The overall age-adjusted average waist circumference increased progressively and significantly, from 37.6 inches in 1999-2000 to 38.8 inches in 2011-2012. Significant increases occurred in men (0.8 inches), women (1.5 inches), non-Hispanic whites (1.2 inches), non-Hispanic blacks (1.6 inches), and Mexican Americans (1.8 inches).
The overall age-adjusted prevalence of abdominal obesity increased significantly from 46.4 percent in 1999-2000 to 54.2 percent in 2011-2012. Significant increases were present in men (37.1 percent to 43.5 percent), women (55.4 percent to 64.7 percent), non-Hispanic whites (45.8 percent to 53.8 percent), non-Hispanic blacks (52.4 percent to 60.9 percent), and Mexican Americans (48.1 percent to 57.4 percent).
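Expressed as code, the study’s cutoff is a one-line rule. The sketch below simply restates the thresholds defined above for illustration; it is not part of the NHANES analysis.

```python
def abdominally_obese(waist_cm: float, sex: str) -> bool:
    """Cutoffs as defined in the study: waist circumference greater than
    102 cm (40.2 in) for men, or greater than 88 cm (34.6 in) for women."""
    threshold_cm = 102.0 if sex == "male" else 88.0
    return waist_cm > threshold_cm

# Example: the 2011-2012 average waist of 38.8 inches (~98.6 cm)
avg_waist_cm = 38.8 * 2.54
print(abdominally_obese(avg_waist_cm, "female"))  # True: 98.6 cm > 88 cm
print(abdominally_obese(avg_waist_cm, "male"))    # False: 98.6 cm < 102 cm
```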
The authors write that previous analyses of data from NHANES show that the prevalence of obesity calculated from body mass index (BMI) did not change significantly from 2003-2004 to 2011-2012. “In contrast, our analyses using data from the same surveys indicate that the prevalence of abdominal obesity is still increasing. The reasons for increases in waist circumference in excess of what would be expected from changes in BMI remain speculative, but several factors, including sleep deprivation, endocrine disruptors, and certain medications, have been proposed as potential explanations.”
“Our results support the routine measurement of waist circumference in clinical care consistent with current recommendations as a key step in initiating the prevention, control, and management of obesity among patients.”
Mental Health In The Workplace – We Must Also Consider Home Life Influences
Cléa Desjardins, University of Montreal
New research from the University of Montreal and Concordia confirms there are other factors at play
Impossible deadlines, demanding bosses, abusive colleagues, unpaid overtime — all these factors can lead to a burnout. When it comes to mental health in the workplace, we often forget to consider the influence of home life.
That’s about to change, thanks to new research from Concordia University and the University of Montreal showing that having an understanding partner is just as important as having a supportive boss.
The study, published in the journal Social Psychiatry and Psychiatric Epidemiology, surveyed 1,954 employees from 63 different organizations and shows that a multitude of issues contribute to mental health problems in the workforce.
The research team polled participants to measure factors like parental status, household income, social network, gender, age, physical health and levels of self-esteem. They studied these elements alongside stressors typically seen in the workplace, such as emotional exhaustion, poor use of skills, high psychological demands, job insecurity and lack of authority.
Turns out mental health in the workplace doesn’t exist in a vacuum; it’s deeply affected by the rest of a person’s day-to-day life, and vice versa.
The study shows that fewer mental health problems are experienced by those living with a partner, those in households with young children, those with higher household incomes, those with fewer work-family conflicts, and those with greater access to the support of a social network outside the workplace.
Of course, factors within the workplace are still important. Fewer mental health problems are reported when employees are supported at work, when expectations of job recognition are met and when people feel secure in their jobs. A higher level of skill use is also associated with lower levels of depression, pointing to the importance of designing tasks that motivate and challenge workers.
“This is a call to action,” says senior author Steve Harvey, professor of management and dean of Concordia’s John Molson School of Business. “Researchers need to expand their perspective so that they get a full picture of the complexity of factors that determine individuals’ mental health.”
For lead author Alain Marchand, professor at the University of Montreal’s School of Industrial Relations, it’s all about adopting a holistic view. “To maintain a truly healthy workforce, we need to look outside the office or home in simple terms to combat mental health issues in the workplace.”
Partners in research: This study was supported by the Canadian Institutes of Health Research and the Fonds de recherche du Québec – Santé. The study was also conducted in partnership with Standard Life Canada, which continuously helped to select companies and to promote their participation.
* Note: The University of Montreal is officially known as Université de Montréal.
Energy Captured By Jaw-Powered Chinstrap
Eric Hopton for redOrbit.com – Your Universe Online
Canadian scientists have developed a new method of harvesting energy from chin and jaw movements using a strap made from smart materials. The captured energy can be turned into electricity which is then used to recharge wearable or implanted devices such as hearing aids.
Whenever we eat, talk, chew or even yawn, we use energy that until now has simply been wasted. Generating electrical charge from human body movements has clear attractions for commercial applications, and this latest research brings that goal a little closer. The researchers, from Montreal’s Sonomax-ÉTS Industrial Research Chair in In-ear Technologies, part of the École de technologie supérieure (ÉTS), have just published their findings in the Institute of Physics journal Smart Materials and Structures.
The Sonomax team made a chinstrap using commercially available materials known as piezoelectric fiber composites (PFC). This is a type of smart material made from a matrix of adhesive polymers with integrated electrodes which can produce an electrical charge when placed under mechanical stress or stretched – precisely the kind of movements that occur as a result of chin movements. The chinstrap contains a single layer of PFC material and is attached tightly but comfortably round the chin and connected to a set of earmuffs by means of elastic side straps.
Image Above: This is the experimental set up of an energy harvesting chin strap. Credit: Smart Materials and Structures/IOP Publishing
Dr. Aidin Delnavaz, a mechanical engineer at ÉTS, was the guinea pig for the chinstrap tests. Delnavaz and his colleague Dr. Jeremie Voix specialize in auditory technology research, including cochlear implants and powered earmuffs. In initial trials of the prototype, a series of measurements was taken while Delnavaz chewed for 60 seconds wearing the smart chinstrap. The device was able to produce up to 18 microwatts of electrical power. While this is too small a charge to be immediately viable for powering the kind of devices it is aimed at, the amount of electricity produced could, say the authors, be easily increased by bundling together additional layers of PFC material. Delnavaz estimates that around 20 layers of PFC material would be needed to power a small hearing aid. A 20-layer strap would still be a relatively thin 6 mm deep but would be powerful enough to drive a “200 microwatt intelligent hearing protector.” Delnavaz wore the prototype chinstrap for many hours during the tests and believes that even the much thicker 20-layer version would be perfectly comfortable, even in prolonged use.
Although jaw movements may be capable of producing as much as 580 joules of energy in a 24-hour period, the technology is suited only to applications with modest power requirements. In its current form it may never be powerful enough to drive even a mobile phone. Nevertheless, in the right situation, the strap could be highly beneficial, reducing the cost of battery replacement and the environmental damage caused by incorrect disposal of used batteries.
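For scale, here is a rough back-of-the-envelope check of the figures above. It assumes output scales roughly linearly with the number of PFC layers, which the researchers imply but do not state outright.

```python
# Figures quoted above
single_layer_uw = 18.0       # microwatts from the single-layer prototype
layers = 20                  # Delnavaz's estimate for powering a hearing aid
daily_jaw_energy_j = 580.0   # upper estimate of jaw energy over 24 hours

# Assumed: roughly linear scaling with layer count
stack_output_uw = layers * single_layer_uw
print(stack_output_uw)       # 360.0, above the 200 microwatt target

# Theoretical ceiling if every joule of jaw movement were captured
avg_power_mw = daily_jaw_energy_j / (24 * 3600) * 1000
print(round(avg_power_mw, 1))  # ~6.7 mW: modest, far below a phone's needs
```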
The developers see initial uses of the device being taken up mainly in situations where some kind of head-strap is already being worn and where a powered gadget like a Bluetooth dongle could be fitted. People wearing helmets or ear-sets as part of their job, such as the military, those working with noisy machinery, and even cyclists are good examples. This work is still at the “proof of concept” stage, but commercial interest is already there and BBC News reports that the team has already been approached by companies looking at new charging methods for Bluetooth devices.
Behind The Diagnosis Criteria of Fibromyalgia
Fibromyalgia is a disorder best characterized by widespread musculoskeletal pain accompanied by sleep problems, memory issues and fatigue.
Many medical researchers believe the condition causes the brain to change how it processes pain signals, which in turn magnifies painful sensations around the body.
The case with fibromyalgia diagnoses
No cure for fibromyalgia exists at this time. Instead, doctors treat fibromyalgia by providing each patient with care appropriate to their individual case.
No two people really have the same case of fibromyalgia, though, and because the condition can’t be definitively diagnosed at this time, one patient’s diagnosis may look quite different from another’s.
Fibromyalgia diagnosis criteria have changed significantly over the years. The criteria mainly help doctors assign patients a formal diagnosis of the condition. But since the condition itself has ‘mysterious’ origins, determining an accurate diagnosis has been an issue for the medical community for years.
In this article, we’re going to take a look at the history behind fibromyalgia diagnosis criteria and the current set of fibromyalgia diagnosis criteria today.
Behind fibromyalgia diagnosis criteria
Most patients with fibromyalgia need multiple tests to determine whether their symptoms originate from the condition. No single test can settle a fibromyalgia diagnosis, both because of the varied ways fibromyalgia affects people and because of the lack of solid diagnostic criteria.
How fibromyalgia gets diagnosed
Patients with fibromyalgia may need multiple tests and evaluations before their doctor can settle on a diagnosis. Many patients with the condition often have laboratory and other test results that don’t show any abnormalities. Some symptoms even mimic other rheumatic conditions like arthritis. Due to that, many cases of fibromyalgia may be diagnosed by using what’s known as a differential diagnosis.
In a differential diagnosis, doctors narrow down all the possible issues associated with a patient based on their symptoms, age, gender, location, medical history, family history and other factors. The complexity of fibromyalgia makes it a condition that’s often diagnosed through that process.
The criteria of fibromyalgia diagnosis
Despite the difficulty of diagnosing fibromyalgia, diagnostic criteria do exist. The most widely used set was first established in 1990 as research criteria by the American College of Rheumatology’s Multicenter Criteria Committee, which is why the set is informally known as the ‘ACR 1990’ criteria.
The ACR 1990 criteria are defined as:
– Having a history of ‘widespread pain’ lasting more than three consecutive months. The pain is defined as having affected all quadrants of the body, meaning both sides as well as above and below the waist.
– Feeling pain at the designated tender points. The criteria appointed 18 designated tender points where pain from fibromyalgia may radiate, though people with fibromyalgia may feel pain in other areas as well.
An interesting thing to note is that this criteria set was originally created for research purposes. It wasn’t intended for use as clinical diagnosis criteria, but wide adoption by medical practitioners quickly made it the de facto clinical standard.
The original 1990 criteria also characterized the pain of fibromyalgia as ‘occurring in 11 or more of the 18 specific tender point sites where fibromyalgia pain radiates.’ The widespread pain requirement was defined as pain on both the left and right sides of the body and both above and below the waist.
The 1990 fibromyalgia diagnosis criteria required a tender point site examination, which was found to be a ‘barrier’ in clinical settings. The tender point exam also reinforced the assumption that fibromyalgia is a peripheral disease of the musculoskeletal system, as if its main characteristics originated from those tender points.
Due to that, medical practitioners and researchers began developing new fibromyalgia diagnosis criteria to suit the true conditions of fibromyalgia as a disorder.
Redefining fibromyalgia diagnosis criteria
The first steps toward redefining fibromyalgia diagnosis criteria came in 2010, when the American College of Rheumatology made its provisional revised criteria public following their approval.
The new 2010 criteria eliminated the 1990 criteria’s requirement for a tender point exam and redefined how medical practitioners grade fibromyalgia in patients. Under the ACR 2010 criteria, fibromyalgia is diagnosed as long as the patient meets the following conditions:
– A widespread pain index (WPI) score greater than or equal to 7.
– A symptom severity (SS) scale score greater than or equal to 5.
OR
– A WPI score ranging from 3 to 6 and an SS scale score greater than or equal to 9.
AND
– Symptoms that have been present for at least three months.
– No underlying disorder that could be the actual cause of the pain.
The main reason why the American College of Rheumatology changed the fibromyalgia diagnosis criteria has a lot to do with how fibromyalgia diagnosis is handled in a clinical setting.
Most diagnoses of fibromyalgia are handled in a clinical setting, and under the 1990 criteria most people needed a tender point examination. The problem was that tender point examinations were rarely performed correctly, if they were performed at all.
The 2010 criteria essentially simplified the diagnosis process, making the fibromyalgia diagnosis criteria much more useful in a clinical setting by removing the tender point examination entirely. The revised criteria also placed a larger emphasis on recognizing the non-physical symptoms of fibromyalgia, including fatigue, sleep problems and cognitive impairment.
Further modifications were made to the new fibromyalgia diagnosis criteria in 2011. The revisions aimed to ‘make fibromyalgia diagnosis criteria more patient driven’ by eliminating doctor-sanctioned estimates of fibromyalgia symptoms and allowing patients to more or less ‘self report’ symptoms.
Besides those changes, the 2011 fibromyalgia diagnosis criteria added what’s known as the fibromyalgia symptom (FS) scale. The FS score is the sum of the WPI and the (now modified) SS scale score, measuring the severity of fibromyalgia in a patient. This scale was found to be the ‘best predictor of fibromyalgia’ in patients who were diagnosed under the new criteria.
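Put together, the 2010/2011 logic reduces to a simple decision rule. The sketch below restates the thresholds listed above for illustration only; the actual WPI and SS questionnaires, and the clinical judgment involved, are of course omitted.

```python
def meets_acr_2010_criteria(wpi: int, ss: int, months_of_symptoms: int,
                            other_disorder_explains_pain: bool) -> bool:
    """Illustrative restatement of the ACR 2010/2011 thresholds above.

    wpi: widespread pain index score (0-19)
    ss:  symptom severity scale score (0-12)
    """
    scores_ok = (wpi >= 7 and ss >= 5) or (3 <= wpi <= 6 and ss >= 9)
    return scores_ok and months_of_symptoms >= 3 and not other_disorder_explains_pain

def fs_score(wpi: int, ss: int) -> int:
    """The 2011 fibromyalgia symptom (FS) scale: the sum of the WPI and
    the modified SS score, used as a single severity measure."""
    return wpi + ss

# Hypothetical patient: WPI of 8, SS of 6, symptoms for four months
print(meets_acr_2010_criteria(8, 6, 4, False))  # True
print(fs_score(8, 6))                           # 14
```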
Yelp Announces Settlement With FTC Over Child Privacy Rights Violations
Chuck Bednar for redOrbit.com – Your Universe Online
San Francisco-based online customer rating and review website Yelp has agreed to pay the US Federal Trade Commission (FTC) the sum of $450,000 for illegally collecting the personal information of youngsters under the age of 13 without proper consent.
According to VentureBeat’s Richard Byrne Reilly, Yelp violated the Children’s Online Privacy Protection Act (COPPA) by collecting the names, email addresses and other identifying information from children, some of whom were nine years old or younger, without their permission or that of their parents.
The service, which was launched in 2004, was reportedly collecting the data without permission between 2009 and 2013. In a blog entry posted Tuesday, Yelp VP of Communications and Public Affairs Vince Sollitto said that the data collection was the result of “a bug in our mobile registration process.”
That bug, Sollitto explained, “allowed certain users to register with any birth date when it was supposed to disallow registrations from individuals under 13. The good news is that only about 0.02 percent of users who actually completed Yelp’s registration process during this time period provided an underage birth date, and we have good reason to believe that many of them were actually adults.”
He also said that birthdays are not even required for users to register, and that the process can be completed by anyone (including minors) without inputting that data. However, Sollitto noted that, once the company was made aware of the issue, “we fixed it immediately and closed the affected users’ accounts.”
Prior to 2009, users were only able to register through Yelp’s website, which contained a screening mechanism that blocked those under the age of 13 from signing up, explained John Ribeiro of PCWorld. However, when Yelp rolled out a registration feature in its mobile app later that year, it neglected to implement a functional age-screening mechanism.
For that reason, both the Android and iOS versions of the app accepted registrations and collected data from users who entered birthdates that indicated that they were underage, according to an FTC complaint filed in the US District Court for the Northern District of California. That activity continued until April 2013, Ribeiro said.
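The kind of check the mobile flow lacked is conceptually simple, which is what made the omission notable. The sketch below is purely illustrative of a COPPA-style age gate; it is in no way Yelp’s actual code.

```python
from datetime import date

COPPA_MIN_AGE = 13

def registration_allowed(birth_date: date, today: date) -> bool:
    """Illustrative COPPA-style gate: reject self-reported birth dates
    indicating the user is under 13, absent verifiable parental consent."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    return age >= COPPA_MIN_AGE

# A nine-year-old's birth date should be rejected at sign-up
print(registration_allowed(date(2005, 6, 1), date(2014, 9, 17)))  # False
print(registration_allowed(date(1990, 6, 1), date(2014, 9, 17)))  # True
```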
“The FTC charged Yelp with violating the COPPA Rule by failing to provide notice to parents of its information practices, and to obtain verifiable parental consent before collecting, using, or disclosing personal information from children,” he added. “Under the proposed settlement, Yelp has to destroy the personal information of children under 13 who registered with the service within 30 days of the entry of the order, in most cases.”
“Yelp doesn’t promote itself as a place for children, and we certainly don’t expect or encourage them to write reviews about their plumbers, dentists, or latest gastronomic discoveries,” added Sollitto. “We’re glad to have been able to cooperate with the FTC to get to a quick resolution and look forward to continuing our efforts to protect our users.”
Yelp is not the only online company to feel the FTC’s wrath over COPPA regulations this year, according to Reilly. Apple, Google and Amazon have all been hit with fines after the trade commission accused them of billing minors for unauthorized purchases in their respective app marketplaces. Apple and Google settled for a combined sum of more than $60 million, while Amazon is fighting the FTC’s ruling, he added.
Blame Your Brain When You Cave To The Craving
Eric Hopton for redOrbit.com – Your Universe Online
If you have ever succumbed to a craving for high-calorie snacks, and most of us surely have, you may not feel quite so bad after reading a study by the School of Public Health and Health Systems and the Department of Kinesiology at Canada’s University of Waterloo. What you may have thought was a personal weakness could have been a simple evolutionary neurological response.
The researchers discovered that such over-indulgence may be due to lapses in a small and very specific area of your brain known as the dorsolateral prefrontal cortex, or DLPFC. The DLPFC is known to play a big part in the brain’s “executive functions,” helping the individual exercise control or restraint over otherwise automatic or “knee-jerk” reactions.
The report, published in the September edition of Psychosomatic Medicine: Journal of Biobehavioral Medicine, describes the results of the study in detail. The aim of the research was to establish whether there is any causal relationship between DLPFC function and “dietary self-control” as manifested in both visceral cravings and actual consumption.
The study used a sample of 21 young women aged between 19 and 26 years old, all students on undergraduate psychology courses who had admitted “strong and frequent cravings” for high-calorie foods like chocolate and potato chips. The students were either paid $40 or were given the chance to win a 16GB iPad in a drawing and, of course, got to try lots of their favorite foods in the interest of science.
The subjects were shown pictures of high-calorie foods in order to stimulate cravings. Then magnetic stimulation was applied to the DLPFC area of the brain using “continuous theta-burst stimulation” to suppress DLPFC activity. After the magnetic stimulation, the women exhibited increased cravings, in particular for the more appetizing choices like milk chocolate and potato chips. In subsequent taste tests they ate more of the appetizing foods than other foods like dark chocolate and soda crackers which were deemed to be less “appetitive.”
In effect, what these tests demonstrated was that suppressing DLPFC activity not only inhibits self-control but also increases “reward sensitivity.” In other words, the brain is subject to a double assault as high-calorie foods become more attractive and the pleasure response is heightened.
In what the study authors refer to as “the modern obesogenic environment,” this work may have important implications in understanding the neurological basis for “dietary self-restraint.” Human preference for calorie-dense foods is deep seated and, say the Waterloo scientists, “potentially driven by evolutionary pressures to optimize investment return per unit of energy spent foraging.” This preference would have been an evolutionary advantage when our species had unreliable food resources and needed to maximize opportunities when they arose. But in the modern developed world high-calorie choices are abundant and pushed at us relentlessly in what the authors call “ubiquitous environmental cuing of such foods through media advertising.” These changes have helped to bring about a rapid and serious increase in global levels of obesity and other chronic food-related illnesses.
“Interventions aimed at enhancing or preserving dorsolateral cortex function in healthy populations may reduce the likelihood of obesity and other chronic conditions,” says Peter Hall, a senior author of the study. According to Hall, regular aerobic exercise, controlling alcohol intake, and getting plenty of sleep can all keep the brain in good shape and help us fight the demons of temptation.
Meteorite That Doomed The Dinosaurs Also Remade Earth’s Forests
April Flowers for redOrbit.com – Your Universe Online
Approximately 66 million years ago, a relatively small chunk of rock changed the entire face of the planet. A meteorite approximately six miles in diameter struck the Earth near the present-day site of the town of Chicxulub, in the Yucatan. With a force nearing that of 100 teratons of TNT, the impact left a crater nearly 100 miles wide and set off a megatsunami, wildfires, global earthquakes and volcanism that, as far as we know, wiped out the dinosaurs and made way for the rise of mammals as the dominant land animals.
The question that hadn’t been answered until now is what happened to the plants. A new study led by the University of Arizona reveals that the same meteorite strike that decimated the dinosaurs also decimated the evergreen flowering plants, making way for deciduous flowering plants. The results, published recently in the journal PLOS Biology, suggest that deciduous plants have properties that make them better able to respond rapidly to chaotically varying climate conditions.
The team applied biomechanical formulae to thousands of fossilized leaves of angiosperms – flowering plants, a group that excludes conifers – to reconstruct the ecology of a diverse plant community that thrived during a 2.2-million-year period. This period spanned the impact event, which is believed to have killed off half the plant species of the time.
The evidence suggests that, for the most part, fast-growing deciduous angiosperms replaced the slower-growing evergreens after the event. Modern evergreen angiosperms – holly and ivy, for example – are dark-leaved, slow-growing plants.
Image Above: This post-extinction landscape is lush from warm weather and ample rain along the Front Range, but there are only a few types of trees. Extinct relatives of sycamores, walnut trees, and palm trees are the most common. Credit: Donna Braginetz/courtesy of Denver Museum of Nature & Science
“When you look at forests around the world today, you don’t see many forests dominated by evergreen flowering plants,” said Benjamin Blonder, who graduated last year from the lab of UA Professor Brian Enquist with a Ph.D. from the UA’s Department of Ecology and Evolutionary Biology and is now the science coordinator at the UA SkySchool. “Instead, they are dominated by deciduous species, plants that lose their leaves at some point during the year.”
According to Blonder, the results provide much needed evidence of how the extinction level event affected plant communities. Before this study, scientists knew that plant species existed before the impact that were decidedly different from those existing after. What wasn’t understood, however, was whether the shift in plant assemblages was coincidental, or a direct result of the event.
“If you think about a mass extinction caused by a catastrophic event such as a meteorite impacting Earth, you might imagine all species are equally likely to die,” Blonder said. “Survival of the fittest doesn’t apply — the impact is like a reset button. The alternative hypothesis, however, is that some species had properties that enabled them to survive.”
“Our study provides evidence of a dramatic shift from slow-growing plants to fast-growing species,” he said. “This tells us that the extinction was not random, and the way in which a plant acquires resources predicts how it can respond to a major disturbance. And potentially this also tells us why we find that modern forests are generally deciduous and not evergreen.”
Previous studies found evidence of an “impact winter,” a temperature shift so dramatic that it left plants struggling to harvest enough sunlight to maintain metabolism and growth.
“The hypothesis is that the impact winter introduced a very variable climate,” Blonder said. “That would have favored plants that grew quickly and could take advantage of changing conditions, such as deciduous plants.”
Image Above: Seen here is a Late Cretaceous specimen from the Hell Creek Formation, morphotype HC62, taxon “Rhamnus” cleburni. Specimens are housed at the Denver Museum of Nature and Science in Denver, Colorado. Credit: Benjamin Blonder
The team – which included scientists from Wesleyan University, the Smithsonian National Museum of Natural History and the Denver Museum of Nature and Science – examined nearly 1,000 fossilized plant leaves collected from North Dakota. The samples were found embedded in rock layers known as the Hell Creek Formation and are currently housed at the Denver Museum of Nature and Science.
Analyzing the leaves in this way was a new approach for reconstructing how plant species used carbon and water in the ancient world. The technique shed light on the ecological strategies of plant communities of long ago.
“We measured the mass of a given leaf in relation to its area, which tells us whether the leaf was a chunky, expensive one to make for the plant, or whether it was a more flimsy, cheap one,” Blonder explained. “In other words, how much carbon the plant had invested in the leaf.”
“There is a spectrum between fast- and slow-growing species,” said Blonder. “There is the ‘live fast, die young’ strategy and there is the ‘slow but steady’ strategy. You could compare it to financial strategies: investing in stocks versus bonds.” The team’s results revealed that while slow-growing evergreens dominated the plant assemblages before the extinction event, fast-growing flowering species had taken their places afterward.
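Blonder’s “chunky versus flimsy” measurement reduces to a simple ratio, leaf mass per area. The rough illustration below uses invented masses, areas and a cutoff value, none of which come from the paper, just to show the idea:

```python
# Illustrative sketch of leaf mass per area (LMA) as a proxy for the
# carbon a plant invests in a leaf. The sample values and the
# 0.01 g/cm^2 cutoff are invented for illustration, not from the study.
def lma(dry_mass_g: float, area_cm2: float) -> float:
    """Leaf mass per area: heavier per unit area means a costlier leaf."""
    return dry_mass_g / area_cm2

samples = {"fossil_leaf_A": (0.12, 30.0), "fossil_leaf_B": (0.45, 25.0)}
for name, (mass_g, area_cm2) in samples.items():
    value = lma(mass_g, area_cm2)
    strategy = "slow-growing, 'expensive'" if value > 0.01 else "fast-growing, 'cheap'"
    print(f"{name}: LMA = {value:.4f} g/cm^2 -> {strategy}")
```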
Knowing What Drugs Are Approved To Fight Fibromyalgia
If you are struggling with the symptoms of fibromyalgia, such as widespread pain and soreness in the muscles and joints, fatigue, and trouble sleeping, then you’ll want to consult a doctor. The problem is that you may have to undergo months or even years of doctor visits and physical tests before being diagnosed with fibromyalgia.
Very few doctors are experts in fibromyalgia, or even particularly experienced with it. Just under five percent of the American population suffers from fibromyalgia, the majority of them middle-aged women; it also occurs in men and children, but far less often. Fibromyalgia is also not a clear-cut illness, as many of its symptoms could also indicate another medical condition or disease. It is very difficult to diagnose fibromyalgia, let alone treat it.
Drugs for Treating Fibromyalgia
If you are finally diagnosed with fibromyalgia, you will likely be treated with a combination of pain medicines, antidepressants, and muscle relaxants. While these medicines will usually decrease the pain and other symptoms that you feel, they usually cannot stop fibromyalgia altogether.
Some common drugs that doctors will prescribe are Cymbalta, Lyrica, and Savella. These drugs reduce the pain caused by fibromyalgia, but it remains something of a mystery exactly why they work against the condition the way they do.
Cymbalta is usually used to treat depression and anxiety, and it has also proven useful as a treatment for fibromyalgia. However, as with any drug, it comes with a number of side effects, including sweating, dry mouth, and decreased appetite. You should also consult with your doctor to see whether you might have an allergic reaction to Cymbalta.
Lyrica is a drug primarily used to treat seizures and nerve pain, and it is approved to treat fibromyalgia as well. The most common side effects of Lyrica are sleepiness, dizziness, weight gain, difficulty concentrating, swelling of the hands, and dry mouth. In addition, some people may have an allergic reaction to Lyrica, so talk to your doctor before use.
Savella may be a more effective choice than either Cymbalta or Lyrica, although that will vary by person. Unlike the other two, Savella was not first used to fight another disease before being approved for fibromyalgia; it was the first drug developed specifically to treat fibromyalgia (at least in the United States). The primary side effects of Savella are nausea, sweating, vomiting and/or diarrhea, dry mouth, an increased heart rate, and elevated blood pressure.
Now that we have gone over the most common drugs used to treat fibromyalgia, it is worth weighing the downsides of taking them. Every drug has its pros and cons. All of the drugs discussed above have been shown to lower the pain and symptoms of fibromyalgia, despite the side effects that can occur. Fibromyalgia symptoms alone can turn people’s lives completely upside down, with many sufferers forced to spend most of their days in bed because of the pain, so these approved drugs can meaningfully reduce those symptoms for patients willing to accept the risk of side effects.
The symptoms of fibromyalgia can be felt anywhere in the body, but especially in the muscles and joints of the neck, back, shoulders, hips, legs and arms. It is not just the pain and fatigue that hurt people; it is the difficulty of doing anything at all. Powerful pain and excessive sleepiness are enough to keep you in bed, and you may also feel numbness in sensitive parts of your body and have difficulty remembering things and thinking clearly, all of which may eventually lead to depression. So while taking approved drugs for fibromyalgia may be a good way to bring down these symptoms, you should still consult with your doctor to decide on the right one to take.
What Is the Root Cause of Fibromyalgia?
The unfortunate truth is that we still do not know exactly what the real cause, or causes, of fibromyalgia are, which is one reason why approved drugs cannot completely cure the condition. Scientists believe that developing fibromyalgia has something to do with our genes, but they do not yet know which specific genes are involved.
Even getting diagnosed with fibromyalgia is a challenge in itself. You may endure months or even years of pain and other symptoms before learning whether or not you have the condition, because many doctors are not experts in the field of fibromyalgia and cannot properly diagnose or treat it, while many others have only limited experience with it.
You may feel that no one truly cares about you as you go from doctor to doctor without the right answers. But if you are eventually diagnosed with fibromyalgia (hopefully by an experienced doctor), you may start to feel better as you and your doctor work together to treat your symptoms. The approved drugs discussed above may be a good place to start.
Fibromyalgia is usually treated by primary care physicians and rheumatologists, even though there is not yet a specific diagnostic test for the condition. Doctors will typically perform physical tests, evaluate your symptoms, and gradually rule out other possibilities to narrow down the diagnosis. Eventually, they may determine whether or not you have fibromyalgia, but again, this can be a very long and expensive process.
Recent Studies Investigate The Health Benefits Of Eating Dairy Products
Chuck Bednar for redOrbit.com – Your Universe Online
Dairy products have long been known to benefit bone health, but a series of recent studies suggest they could also reduce the risk of cardiovascular disease, stroke, diabetes, obesity and other metabolic conditions.
In one study, presented this week at the Milk and Dairy Products in Human Health session of the 2014 Euro Fed Lipid Congress in Montpellier, France and published earlier this year in the American Journal of Clinical Nutrition, the researchers conducted a meta-analysis of nine different studies and found a link between increased consumption of milk and a reduced incidence of hypertension.
Specifically, Dr. Sabita S. Soedamah-Muthu of Wageningen University in the Netherlands reviewed research involving more than 57,000 people, over 15,000 of whom had been diagnosed with hypertension, and found that as total dairy, low-fat dairy and milk consumption increased, the risk of high blood pressure decreased, though no statistically significant link was found with coronary heart disease, stroke or total mortality in this study.
In a related study, appearing in the Journal of the American College of Nutrition, Professor Mark Wahlqvist of Monash University’s Department of Epidemiology and Preventive Medicine and his colleagues conducted a study of 4,000 Taiwanese people. They found that increased consumption of dairy products could reduce the risk of heart disease and stroke – even in communities where those foods are not typically part of the diet.
“In a dominantly Chinese food culture, unaccustomed to dairy foods, consuming them up to seven times a week does not increase mortality and may have favorable effects on stroke,” he explained. “We observed that increased dairy consumption meant lower risks of mortality from cardiovascular disease, especially stroke, but found no significant association with the risk of cancer.”
While milk and other dairy foods have been demonstrated to provide several nutrients that are important for our overall health and wellbeing, Wahlqvist said that people only need to consume small amounts to gain the benefits. For the best results, he and his colleagues suggest about five servings per week, each equal to about eight grams of protein (one cup of milk or 45 grams of cheese).
“A little is beneficial and a lot is unnecessary,” Professor Wahlqvist said. “Those who ate no dairy had higher blood pressure, higher body mass index and greater body fatness… than other groups. But Taiwanese who included dairy food in their diet only three to seven times a week were more likely to survive than those who ate none.”
Dairy products could also help combat type 2 diabetes, according to research presented at the annual meeting of the European Association for the Study of Diabetes (EASD), which found that people consuming eight or more portions of high-fat dairy products each day were less likely to develop the disease than those having one or fewer servings.
That study, conducted by Dr. Ulrika Ericson of the Lund University Diabetes Center in Sweden and her colleagues, looked at nearly 27,000 individuals between the ages of 45 and 74, and found that higher intake of high-fat dairy foods was linked with a 23 percent lower incidence of type 2 diabetes (T2D). In contrast, Dr. Ericson’s team found no association between low-fat dairy product intake and the risk of developing diabetes.
“Our observations may contribute to clarifying previous findings regarding dietary fats and their food sources in relation to T2D. The decreased risk at high intakes of high-fat dairy products, but not of low-fat dairy products, indicates that dairy fat, at least partly, explains observed protective associations between dairy intake and T2D,” she said. “Our findings suggest… fats specific to dairy products may have a role in prevention of type 2 diabetes.”
Likewise, researchers from CHU de Québec Research Center’s Endocrinology and Nephrology Department and Laval University reported in the September 16 online edition of the journal Applied Physiology, Nutrition, and Metabolism that dairy consumption could reduce a person’s risk of developing diabetes and other metabolic diseases such as obesity.
The study authors recruited 233 participants with healthy metabolic profiles from the greater Quebec City metropolitan area, and found that the average participant consumed approximately 2.5 servings of dairy per day (plus or minus 1.4 servings). The goal of the study was to examine the link between dairy intake and specific metabolic risk factors, including plasma glucose, plasma lipid profile, inflammatory markers and blood pressure.
The data suggested that trans-palmitoleic acid found in plasma could be used as a biomarker to evaluate dairy consumption. Trans-palmitoleic acid is naturally present in milk, cheese, yogurt, butter, and meat fat but cannot be synthesized by the body, and the fatty acid has recently been shown to be beneficial to a person’s health.
In the study, the authors found that trans-palmitoleic acid levels were related to lower blood pressure in men and women, and to lower body weight in men. Dairy intake was also associated with reduced blood glucose levels and blood pressure in the population studied, though the cross-sectional design made it impossible to draw causal conclusions. The study also indicates that higher dairy intake is not associated with adverse health effects.
New Research Investigates Genetic Factors Responsible For Speech In Humans
Chuck Bednar for redOrbit.com – Your Universe Online
Two separate, recently published studies are shedding new light on how humans developed the ability to produce and understand speech, and what factors contribute to the development of language during infancy.
In the first study, researchers from MIT and several European universities report that the human version of a gene known as Foxp2 makes it easier to transform new experiences into routine procedures, and that engineering mice to express humanized Foxp2 allowed the rodents to run through a maze far more quickly than normal.
Their research indicates the gene could help with one of the key components of learning language: transforming the experience of hearing the word “glass” while viewing a glass of water into an almost automatic association of that term with other objects that resemble and function like glasses.
“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” MIT professor Ann Graybiel said in a statement Monday.
Graybiel is a member of MIT’s McGovern Institute for Brain Research and one of the senior authors of a paper published in this week’s edition of the Proceedings of the National Academy of Sciences. The other senior author is Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University. Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany are the lead authors of the study.
While all animal species are capable of communicating with one another, humans are the only ones with the ability to generate and comprehend language, the researchers said. Foxp2 is one of several genes believed to have played a role in the development of these linguistic talents, and the study authors said that it was originally identified in a group of family members suffering from severe difficulties in both speaking and understanding speech.
Those individuals were found to have been carrying a mutated version of the Foxp2 gene, and in 2009, researchers from the Max Planck Institute engineered mice to express the human form of the Foxp2 gene, which encodes a protein that is different from the mouse version by just two amino acids.
Those scientists discovered that the mice had longer dendrites (which are thin extensions used by neurons to communicate with one another) in the striatum (which is a part of the brain associated with habit formation). Those rodents were also said to be better at forming new synapses or connections between neurons.
In the new study, Graybiel and her colleagues looked at the behavioral effects of replacing Foxp2. They reported that those mice with a humanized form of the gene were better at learning how to complete a maze in which the creatures had to decide whether to turn left or right at a T-shaped junction to earn a food reward.
This type of learning requires the use of both declarative memory (memory for events and places) and procedural memory (memory required for routine tasks), and based on their performance in both T-shaped and cross maze trials, the researchers believe that the humanized version of the Foxp2 gene made it easier for the rodents to convert declarative memories into habitual routines.
“In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures,” Anne Trafton of the MIT News Office explained. “Together, these changes help to ‘tune’ the brain differently to adapt it to speech and language acquisition, the researchers believe.”
In related research, scientists from the Medical Research Council (MRC) Integrative Epidemiology Unit at the University of Bristol and an international team of colleagues reported in Tuesday’s edition of Nature Communications that they had found a link between variations near the ROBO2 gene and the number of words spoken by kids who are learning how to talk.
According to the authors of this study, children begin producing words between the ages of 10 and 15 months, and their vocabulary expands as they grow, from approximately 50 words at 15 to 18 months, to 200 words at 18 to 30 months, to over 14,000 words by the age of six and 50,000 before entering high school.
The researchers discovered the genetic link during the 15 to 18 month range, when toddlers are typically using single words to communicate and before their linguistic skills mature to two-word combinations and more complex grammar. The genetic region is on chromosome 3, which had previously been implicated in dyslexia and speech-related disorders, and involves a protein that directs chemicals in brain cells that could help infants develop language.
“This research helps us to better understand the genetic factors which may be involved in the early language development in healthy children, particularly at a time when children speak with single words only, and strengthens the link between ROBO proteins and a variety of linguistic skills in humans,” said co-lead investigator Dr. Beate St Pourcain of the University of Bristol’s Medical Research Council Integrative Epidemiology Unit.
Certain Form Of Baldness At Age 45 Linked To Higher Risk Of Aggressive Prostate Cancer
Kate Blackburn, American Society of Clinical Oncology
A new, large cohort analysis from the prospective Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial indicates that men who had moderate baldness affecting both the front and the crown of their head at age 45 were at a 40% increased risk of developing aggressive prostate cancer (which usually indicates a faster-growing tumor and a poorer prognosis than non-aggressive prostate cancer) later in life, compared to men with no baldness. There was no significant link between other patterns of baldness and prostate cancer risk. The study, published September 15 in the Journal of Clinical Oncology, supports earlier research suggesting that male pattern baldness and prostate cancer may be linked.
“Our study found an increased risk for aggressive prostate cancer only in men with a very specific pattern of hair loss, baldness at the front and moderate hair-thinning on the crown of the head, at the age of 45. But we saw no increased risk for any form of prostate cancer in men with other hair-loss patterns,” said senior study author Michael B. Cook, PhD, an investigator in the Division of Cancer Epidemiology and Genetics at the National Cancer Institute in Bethesda, MD. “While our data show a strong possibility for a link between the development of baldness and aggressive prostate cancer, it’s too soon to apply these findings to patient care.”
Prostate cancer is the second most common cancer among men. Emerging evidence suggests that prostate cancer and male pattern baldness—progressive scalp hair-loss in a distinct pattern—are both connected to increased levels of male sex hormones (androgens) and androgen receptors, supporting the idea of a biological link between baldness and prostate cancer development and progression.
Researchers analyzed male pattern baldness in relation to prostate cancer risk in a cohort of 39,070 men from the U.S. PLCO Cancer Screening Trial, aged 55-74 years at enrollment. The men received a questionnaire that asked them to recall what their hair-loss patterns were at age 45 using a pictorial tool.
During follow-up, 1,138 prostate cancer cases were diagnosed, 51% of which were aggressive (Gleason score equal to or greater than 7, stage III or IV, or prostate cancer as the cause of death). The mean age at the time of prostate cancer diagnosis was 72.
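The study’s definition of an aggressive case is a straightforward disjunction of the three criteria just quoted. As a minimal sketch, using hypothetical field names rather than anything from the PLCO data dictionary, it could be coded like this:

```python
# Minimal sketch of the study's stated aggressiveness criteria:
# Gleason score >= 7, stage III or IV, or prostate cancer as cause of death.
# Field names are hypothetical, not from the PLCO dataset.
def is_aggressive(gleason: int, stage: str, fatal: bool) -> bool:
    return gleason >= 7 or stage in ("III", "IV") or fatal

print(is_aggressive(gleason=7, stage="II", fatal=False))  # True (Gleason >= 7)
print(is_aggressive(gleason=6, stage="I", fatal=False))   # False
```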
Men who had a specific pattern of baldness, frontal and moderate crown (vertex), were 40% more likely to develop aggressive prostate cancer, compared to men who had no baldness. There was no association between male pattern baldness and risk of non-aggressive prostate cancer.
Dr. Cook stated that if these findings are confirmed by further studies, medical assessment of baldness could possibly be used to help identify men who may be at increased risk of aggressive prostate cancer. His research team is currently conducting two additional cohort analyses exploring the relationship between male pattern baldness and risk of developing and dying from prostate cancer. One of the studies includes a baseline dermatologic assessment of male pattern baldness, which may be more reliable than the recall method, which was used in the present study.
This research was supported by the intramural program of the U.S. National Cancer Institute, National Institutes of Health.
Slow To Mature, Quick To Distract: ADHD Study Finds Slower Development Of Connections
Kara Gavin, University of Michigan Health System
Brain networks that handle internal and external tasks mature more slowly in ADHD
A peek inside the brains of more than 750 children and teens reveals a key difference in brain architecture between those with attention deficit hyperactivity disorder (ADHD) and those without.
Kids and teens with ADHD, a new study finds, lag behind others of the same age in how quickly their brains form connections within, and between, key brain networks.
The result: less-mature connections between a brain network that controls internally-directed thought (such as daydreaming) and networks that allow a person to focus on externally-directed tasks. That lag in connection development may help explain why people with ADHD get easily distracted or struggle to stay focused.
What’s more, the new findings, and the methods used to make them, may one day allow doctors to use brain scans to diagnose ADHD — and track how well someone responds to treatment. This kind of neuroimaging “biomarker” doesn’t yet exist for ADHD, or any psychiatric condition for that matter.
The new findings come from a team in the University of Michigan Medical School’s Department of Psychiatry. They used highly advanced computing techniques to analyze a large pool of detailed brain scans that were publicly shared for scientists to study. Their results are published in the Proceedings of the National Academy of Sciences.
Lead author Chandra Sripada, M.D., Ph.D., and colleagues looked at the brain scans of 275 kids and teens with ADHD, and 481 others without it, using “connectomic” methods that can map interconnectivity between networks in the brain.
The scans, made using functional magnetic resonance imaging (fMRI) scanners, show brain activity during a resting state. This allows researchers to see how a number of different brain networks, each specialized for certain types of functions, “talk” within and amongst themselves.
The researchers found lags in development of connection within the internally-focused network, called the default mode network or DMN, and in development of connections between DMN and two networks that process externally-focused tasks, often called task-positive networks, or TPNs. They could even see that the lags in connection development with the two task-related networks — the frontoparietal and ventral attention networks — were located primarily in two specific areas of the brain.
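Resting-state analyses of this kind typically start from correlations between regional fMRI time series, averaged within and between networks. The sketch below uses synthetic data and hypothetical network labels to show the basic within-network versus between-network computation; it illustrates the general approach, not the team’s connectomic pipeline.

```python
# Simplified sketch of within- and between-network connectivity from
# resting-state time series. Synthetic data; not the study's pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_regions = 200, 20
ts = rng.standard_normal((n_timepoints, n_regions))  # fMRI time series
labels = np.array(["DMN"] * 10 + ["TPN"] * 10)       # region -> network label

corr = np.corrcoef(ts.T)                             # region-by-region correlations
dmn, tpn = labels == "DMN", labels == "TPN"
within_dmn = corr[np.ix_(dmn, dmn)][np.triu_indices(dmn.sum(), k=1)].mean()
between = corr[np.ix_(dmn, tpn)].mean()
print(f"within-DMN: {within_dmn:.3f}, DMN-TPN: {between:.3f}")
```

In a maturation analysis, values like these would then be related to participants’ ages to estimate how quickly each connection develops.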
Image Above: These figures show lagged maturation of connections in ADHD between the default mode network, involved in internally-directed thought (i.e., daydreaming) and shown on the left of each figure, and two brain networks involved in externally-focused attention, shown on the right of each figure. The width of each arc represents the number of lagged connections between two regions within each network. Connections that normally increase with age and that are hypoconnected in ADHD are shown in blue; connections that normally decrease with age and that are hyperconnected in ADHD are shown in red. Credit: Sripada lab, University of Michigan
The new findings mesh well with what other researchers have found by examining the physical structure of the brains of people with and without ADHD in other ways.
Such research has already shown alterations in regions within DMN and TPNs. So, the new findings build on that understanding and add to it.
The findings are also relevant to thinking about the longitudinal course of ADHD from childhood to adulthood. For instance, some children and teens “grow out” of the disorder, while for others the disorder persists throughout adulthood. Future studies of brain network maturation in ADHD could shed light on the neural basis for this difference.
“We and others are interested in understanding the neural mechanisms of ADHD in hopes that we can contribute to better diagnosis and treatment,” says Sripada, an assistant professor and psychiatrist who holds a joint appointment in the U-M Philosophy department and is a member of the U-M Center for Computational Medicine and Bioinformatics. “But without the database of fMRI images, and the spirit of collaboration that allowed them to be compiled and shared, we would never have reached this point.”
Sripada explains that in the last decade, functional medical imaging has revealed that the human brain is functionally organized into large-scale connectivity networks. These networks, and the connections between them, mature throughout early childhood all the way to young adulthood. “It is particularly noteworthy that the networks we found to have lagging maturation in ADHD are linked to the very behaviors that are the symptoms of ADHD,” he says.
Studying the vast array of connections in the brain, a field called connectomics, requires scientists to be able to parse through not just the one-to-one communications between two specific brain regions, but the patterns of communication among thousands of nodes within the brain. This requires major computing power and access to massive amounts of data – which makes the open sharing of fMRI images so important.
“The results of this study set the stage for the next phase of this research, which is to examine individual components of the networks that have the maturational lag,” he says. “This study provides a coarse-grained understanding, and now we want to examine this phenomenon in a more fine-grained way that might lead us to a true biological marker, or neuromarker, for ADHD.”
Sripada also notes that connectomics could be used to examine other disorders with roots in brain connectivity – including autism, which some evidence has suggested stems from over-maturation of some brain networks, and schizophrenia, which may arise from abnormal connections. Pooling more fMRI data from people with these conditions, and depression, anxiety, bipolar disorder and more could boost connectomics studies in those fields.
Genetic Research Reveals Eight Distinct Types Of Schizophrenia
Chuck Bednar for redOrbit.com – Your Universe Online
Schizophrenia is not a single disease, but a group of eight genetically distinct disorders – each with its own unique set of symptoms, according to new research published online Monday in The American Journal of Psychiatry.
Dr. C. Robert Cloninger, one of the senior investigators of the study as well as a professor of psychiatry and genetics at the Washington University School of Medicine in St. Louis, and his colleagues believe their findings could be the first step towards improving how the condition is diagnosed and treated.
According to the researchers, approximately 80 percent of schizophrenia risk is inherited, but scientists have struggled to identify the exact genes responsible for the debilitating psychiatric illness. Now, after conducting detailed analysis of genetic influences on over 4,000 people with schizophrenia, the study authors have identified distinct gene clusters which they said contribute to eight different classes of the disorder.
“Genes don’t operate by themselves. They function in concert much like an orchestra, and to understand how they’re working, you have to know not just who the members of the orchestra are but how they interact,” Dr. Cloninger, whose team matched precise DNA variations in people with and without schizophrenia to symptoms in individual patients, said in a statement.
The investigators looked at nearly 700,000 sites within the genome where a single DNA unit is changed, also known as a single nucleotide polymorphism (SNP), in 4,200 people with schizophrenia and 3,800 healthy controls. The goal was to discover how individual genetic variations interacted with one another in order to produce the illness.
For example, in some patients suffering from delusions or hallucinations, the researchers matched distinct genetic features to those symptoms, demonstrating with 95 percent certainty that the interacting genetic variations would produce that type of schizophrenia. In a second group, they discovered a link between disorganized speech and behavior and a unique set of DNA variations that carried a 100 percent risk of schizophrenia.
“What we’ve done here, after a decade of frustration in the field of psychiatric genetics, is identify the way genes interact with each other, how the ‘orchestra’ is either harmonious and leads to health, or disorganized in ways that lead to distinct classes of schizophrenia,” explained Dr. Cloninger.
While individual genes only have weak and somewhat inconsistent associations with the disease, groups of gene clusters that interact with each other can result in a 70 percent to 100 percent risk of developing schizophrenia. The study authors said that this makes it nearly impossible for people with those specific variations to avoid the condition. In all, they identified 42 clusters of genetic variations which dramatically increase schizophrenia risk.
“In the past, scientists had been looking for associations between individual genes and schizophrenia,” explained Dr. Dragan Svrakic, a study co-author and Washington University psychiatry professor. “When one study would identify an association, no one else could replicate it. What was missing was the idea that these genes don’t act independently. They work in concert to disrupt the brain’s structure and function, and that results in the illness.”
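Dr. Svrakic’s point about the older approach can be made concrete: a classic single-SNP analysis tests each variant in isolation for a case-control difference in allele frequency. The toy example below, on synthetic genotypes, shows such a test; it deliberately ignores the gene-gene interactions the new study was designed to capture.

```python
# Toy illustration of the older single-SNP approach: a chi-square test on
# allele counts in cases versus controls, using synthetic genotypes.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
cases = rng.binomial(2, 0.35, size=4200)     # minor-allele counts per person
controls = rng.binomial(2, 0.30, size=3800)

def allele_counts(genotypes: np.ndarray) -> list:
    minor = int(genotypes.sum())
    return [minor, 2 * len(genotypes) - minor]  # [minor, major] allele totals

chi2, p, _, _ = chi2_contingency([allele_counts(cases), allele_counts(controls)])
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```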
Dr. Svrakic noted that he and his colleagues were only able to see how these specific clusters of DNA variations acted together to cause specific types of symptoms once they had organized the genetic variations and patients’ symptoms into groups. They divided patients into groups based on the type and severity of their symptoms, including hallucinations and delusions, lack of initiative, and disconnect between thoughts and emotions.
In all, they developed symptom profiles that described eight qualitatively distinct disorders based on underlying genetic conditions. They also were able to replicate their findings using two additional DNA databases of schizophrenia patients, indicating that identifying the gene variations that are working together is a valid avenue for research and a possible way to improve diagnosis and treatment in the future.
“People have been looking at genes to get a better handle on heart disease, hypertension and diabetes, and it’s been a real disappointment,” Dr. Cloninger said. “Most of the variability in the severity of disease has not been explained, but we were able to find that different sets of genetic variations were leading to distinct clinical syndromes. So I think this really could change the way people approach understanding the causes of complex diseases.”
World Ozone Day 2014: Progress, But The Mission Has Not Yet Been Accomplished
Chuck Bednar for redOrbit.com – Your Universe Online
The layer of gas that protects us and all life on Earth from the Sun’s harmful ultraviolet radiation may be on the road to recovery, but as the world commemorates World Ozone Day on Tuesday, officials emphasize there is still much work to be done.
In 1994, the UN General Assembly proclaimed September 16 the International Day for the Preservation of the Ozone Layer (or, alternatively, World Ozone Day). The date was chosen because the Montreal Protocol on Substances that Deplete the Ozone Layer was signed on September 16, 1987.
According to the UN, the theme for this year’s World Ozone Day is “Ozone Layer Protection: The Mission Goes On,” since even though the Montreal Protocol has been somewhat successful to this point, the organization emphasized that there are “some remaining challenges” to overcome.
Earlier this month, scientists from the UN Environment Programme (UNEP) and the World Meteorological Organization (WMO) revealed in a new report that the ozone layer in the stratosphere was starting to thicken, and the hole that appears annually over Antarctica has finally stopped growing larger.
The UNEP and WMO explained it would take decades before the hole begins to shrink. Without the Montreal Protocol and the “concerted international action against ozone depleting substances” it has encouraged, they said, the atmospheric levels of ozone-depleting substances might have increased tenfold by 2050.
“There are positive indications that the ozone layer is on track to recovery towards the middle of the century,” UN Undersecretary General and UNEP Executive Director Achim Steiner said, according to Lydia Smith of International Business Times. “The challenges that we face are still huge. The success of the Montreal Protocol should encourage further action not only on the protection and recovery of the ozone layer but also on climate.”
World Ozone Day also provides a unique opportunity to learn more about this substance, which according to William Hartson of The Express is a pungent-smelling blue gas composed of three oxygen atoms. It was first discovered in 1840 by Christian Friedrich Schönbein, a German-Swiss chemist who named it after the Greek word “ozein,” which means “to smell.”
Also, as NASA points out, not all ozone is good ozone. While the ozone in the stratosphere (12-20 miles above the ground) helps protect humans, animals and all life on Earth from those dangerous UV rays, ozone located closer to the planet’s surface in the troposphere can be hazardous to our health. In fact, according to the EPA, it is harmful to breathe and is also one of the primary ingredients of urban smog.
The US space agency has also published a list of educational activities that parents and teachers can use with children of all ages to commemorate the occasion. NASA is also encouraging teachers to contact the organization and discuss how they were able to use the information in their classroom, promising an educational poster to the first 25 instructors they hear from.
At its core, World Ozone Day 2014 is a time to mark what WMO Secretary-General Michel Jarraud called “a major environmental success story” last week following the release of the joint WMO-UNEP report. He went on to tell reporters that the findings of that study “should encourage us to display the same level of urgency and unity to tackle the even greater challenge of tackling climate change.”
Image 2 (below): Miles above the surface of the Earth, a thin layer of ozone gas acts as a shield that protects us from harmful ultraviolet light. Credit: NASA
Where Do The Rudest Drivers In America Live? The Answer May Surprise You
Chuck Bednar for redOrbit.com – Your Universe Online
When you think of places where the drivers are rude, impatient and likely to flash you an obscene gesture, Idaho might not be the first location that comes to mind – but according to a new poll, the state is home to the highest concentration of unpleasant motor vehicle operators in the United States.
The website Insure.com polled 2,000 drivers from all 50 states, equally divided between men and women, and those respondents listed the Gem State as the home of the rudest drivers in America, followed by Washington DC, New York, Wyoming and Massachusetts. Delaware, Vermont, New Jersey, Nevada and Utah rounded out the top 10.
The survey also found that North Dakota was the state with the fewest rude drivers, followed by Maine, New Hampshire, Montana, Minnesota, Oregon and Wisconsin. The state rankings were calculated as a ratio: the nationwide votes naming a state’s drivers rude, divided by the number of poll respondents from that state, the website said.
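Expressed as arithmetic, that ranking formula is simply votes divided by respondents. The counts below are invented for illustration (Insure.com did not publish its raw numbers), but they show why a sparsely populated state can top the list:

```python
# Sketch of the stated ranking formula: nationwide "rudest drivers" votes
# for a state divided by the number of poll respondents from that state.
# Vote and respondent counts are invented for illustration.
votes = {"Idaho": 60, "New York": 150, "North Dakota": 5}
respondents = {"Idaho": 20, "New York": 180, "North Dakota": 25}

scores = {state: votes[state] / respondents[state] for state in votes}
for state, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{state}: {score:.2f}")  # Idaho scores highest despite few residents
```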
As Chris Woodyard of USA Today noted, “Washington and New York have dense urban environments that make them breeding grounds for rudeness.” Idaho, on the other hand, “is a fairly laid-back state, home to wide open spaces. Relatively speaking, there aren’t many people to be rude to.”
So what makes the drivers in the seventh least-densely populated state in the US so rude?
“The roadways of Idaho present a dichotomy of drivers: Those who are moving so slowly that they’re judged to be rude, and the aggressive drivers who speed around them and flip them off,” Insure.com’s Jeffrey Steele explained in a blog post Monday. “Together, with their opposite yet equally vexing styles of driving, they push Idaho to the top of the rankings.”
Steele interviewed a man by the name of Matt Stubbs, who had recently moved to Idaho from Utah. Stubbs described being amazed by the number of drivers who regularly traveled between five and 10 miles per hour under the speed limit. Slower drivers tend to hold up those traveling behind them, causing those motorists to become impatient.
Second-ranked Washington DC was described by former Los Angeles resident Sam Russell as “self-serving, abrasive and unsafe,” and research also indicates that the nation’s capital is home to the most speeding tickets per capita in the US. As for New York, 41-year-old Steven Lowell shared some interesting stories about the Big Apple.
“I’m trying to figure out if that woman talking on her cell and smoking a cigarette is going to run a stop sign. Good thing she did 75 miles an hour up to the stop sign and then flipped me off for not letting her go,” Lowell told Insure.com, adding that the pedestrians were also rude. “I was just told to [expletive] off by a woman pushing her baby carriage through an intersection against the light because I interrupted her texting and emailing.”
Since the poll asked drivers to name the states where they believed the rudest drivers resided, Woodyard said, it was also able to reveal which states most dislike one another: California drivers were the most likely to name other states’ drivers as rude, while Arizona was the state that most hated those slow-paced drivers in Idaho, Steele added.
1 In 5 Men Reports Violence Toward Intimate Partners
Beata Mostafavi, University of Michigan
Physical symptoms like irritable bowel syndrome and insomnia also associated with higher risks of intimate partner violence
One in five men in the U.S. reports violence towards their spouse or significant other, says a new nationally-representative study by the University of Michigan.
The analysis also found that male aggression toward a partner is associated with warning signs that could come up during routine health care visits, including irritable bowel syndrome (IBS) and insomnia, in addition to better known risks like substance abuse and a history of either experiencing or witnessing violence as a child.
The findings appear in the Journal of the American Board of Family Medicine and are based on the most recent data available from the National Comorbidity Survey-Replication from 2001-2003, which assessed intimate partner violence and the characteristics of male perpetrators.
“When people think of men who abuse their partners, they often think of violent people who they have never come across, or people they have only heard about in the news,” says lead author Vijay Singh, M.D., MPH, MS, a clinical lecturer in the Departments of Emergency Medicine and Family Medicine at the University of Michigan Medical School.
“However, our study showed one out of every five men in the U.S. reported physical violence toward an intimate partner. It’s likely that we’ve all met these men in our daily environment. This is an issue that cuts across all communities, regardless of race, income, or any other demographics.”
Domestic violence has become a growing health concern. In the U.S. each year, roughly 320,000 outpatient health visits and 1,200 deaths among women are due to intimate partner violence, and $8.3 billion is spent in related medical and mental health services alone.
The subject has also recently been in the headlines, with the case of NFL running back Ray Rice. The Baltimore Ravens released Rice after a video of him hitting his wife in a casino elevator surfaced in the news.
The U-M study found that more than half of the men who reported violence toward a partner had at least one routine health visit over the previous year, and nearly one third noted at least one emergency room visit over the same period.
“Most of our efforts to prevent intimate partner violence have focused on screening and improving outcomes for women who are victims, because their health and well-being is our priority. Very little work, however, has been done on how to identify male perpetrators,” says Singh, who is also a member of the University of Michigan Injury Center and Institute for Healthcare Policy and Innovation.
“Our research shows that male perpetrators of intimate partner violence seek routine medical services, and they have physical symptoms that are common reasons patients seek medical care. This suggests that we may be missing an important opportunity in the primary care setting to identify their aggressive behavior and potentially intervene.”
Singh says further work needs to be done on developing identification and intervention programs focused on male aggression toward a partner.
The nationally-representative sample included 530 men with an average age of 42. Roughly 78 percent were non-Hispanic white, 56 percent were educated beyond high school and 84 percent were employed.
Intimate partner violence was defined as pushing, grabbing, shoving, throwing something, slapping or hitting, kicking, biting, beating up, choking, burning or scalding, or threatening a partner with a knife or gun.
Study Finds Warming Atlantic Temperatures Could Increase Range Of Invasive Species
Ben Sherman, NOAA
Warming water temperatures due to climate change could expand the range of many native species of tropical fish, as well as that of the invasive and venomous lionfish, according to a study of 40 species along rocky and artificial reefs off North Carolina by researchers from NOAA and the University of North Carolina-Wilmington.
The findings, reported for the first time, were published in the September issue of Marine Ecology Progress Series.
“The results will allow us to better understand how the fish communities might shift under different climate change scenarios and provide the type of environmental data to inform future decisions relating to the management and siting of protected areas,” said Paula Whitfield, a research ecologist at NOAA’s National Centers for Coastal Ocean Science (NCCOS) and lead author of the study.
The North Carolina reefs lie within the temperate-tropical transition zone, where historically both temperate and tropical species have lived at their respective range limits. However, water temperatures in the zone are becoming more tropical, making it an important place to detect climate change and its impacts.
The researchers first made these discoveries during an ecological study of the marine communities on the North Carolina reefs; findings from that earlier study showed similar climate-change-induced shifts in algal populations.
Researchers combined year-round bottom water temperature data with 2006-2010 fish community surveys in water depths from 15 to 150 feet off the coast of North Carolina. The study revealed that the fish community was primarily tropical in the deeper areas surveyed, from 122 to 150 feet, with a winter mean temperature of 21 °C (69.8 °F). However, many of these native tropical fishes, usually abundant in shallow, somewhat cooler reefs, tended to remain in the deeper, warmer water, suggesting that temperature is a main factor in controlling their distribution.
“Globally, fish communities are becoming more tropical as a result of warming temperatures, as fish move to follow their optimal temperature range,” said Whitfield. “Along the North Carolina coast, warming water temperatures may allow the expansion of tropical fish species, such as lionfish, into areas that were previously uninhabitable due to cold winter temperatures. The temperature thresholds collected in this study will allow us to detect and to estimate fish community changes related to water temperature.”
“This kind of monitoring data set is quite rare because it combines multi-year quantitative fish density data with continuous bottom water temperature data from the same location,” said Jonathan A. Hare, NOAA Fisheries research oceanographer and a co-author on the study.
Similarly, the distribution of the venomous Indo-Pacific lionfish (Pterois volitans), present in North Carolina waters only since 2000, was restricted to depths greater than 87 feet, where the average water temperature was higher than 15.2°C (approximately 59.4°F). As shallower waters warm, lionfish may expand their range, since they appear to be restricted to areas with a warmer minimum temperature. Despite arriving in North Carolina only in 2000, lionfish were the most common species observed at depths of 122 to 150 feet in this study.
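The threshold logic described here amounts to comparing each depth band’s winter mean bottom temperature against a species’ cold limit. In the sketch below, only the 15.2°C lionfish threshold and the roughly 21°C deep-band mean come from the study; the shallower temperatures are invented for illustration.

```python
# Sketch of the threshold logic: given winter mean bottom temperatures by
# depth band, flag the bands warm enough for lionfish. Only the 15.2 C
# threshold and the ~21 C deep-band mean come from the study; the
# shallower values are invented.
LIONFISH_MIN_WINTER_C = 15.2

winter_mean_c = {
    "15-50 ft": 12.8,
    "50-87 ft": 14.5,
    "87-122 ft": 16.0,
    "122-150 ft": 21.0,
}
for band, temp_c in winter_mean_c.items():
    status = "habitable" if temp_c >= LIONFISH_MIN_WINTER_C else "too cold in winter"
    print(f"{band}: {temp_c} C -> {status}")
```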
Since their first sighting off the Florida east coast in the late 1980s, lionfish have spread throughout the western North Atlantic, including the Gulf of Mexico and Caribbean. They are considered a major threat to Atlantic reefs because they reduce reef fish recruitment and biomass, and they have been implicated in cascading impacts such as decreased coral cover on coral reefs. To date, cold winter bottom temperatures are the only factor found to control their distribution on a large scale.
Some People Use Marijuana To Feel Better Even Though They May Feel Worse Afterward
Erin Tornatore, Journal of Studies on Alcohol and Drugs
Adolescents and young adults who smoke marijuana frequently may attempt to manage negative moods by using the drug, according to a study in September’s Journal of Studies on Alcohol and Drugs.
“Young people who use marijuana frequently experience an increase in negative affect in the 24 hours leading up to a use event, which lends strong support to an affect-regulation model in this population,” says the study’s lead author Lydia A. Shrier, M.D., M.P.H., of the division of adolescent and young adult medicine at Boston Children’s Hospital.
She notes that using marijuana as a coping technique for negative affect may make it harder for people to stop using the drug.
“One of the challenges is that people often may use marijuana to feel better but may feel worse afterward,” she says. “Marijuana use can be associated with anxiety and other negative states. People feel bad, they use, and they might momentarily feel better, but then they feel worse. They don’t necessarily link feeling bad after using with the use itself, so it can become a vicious circle.”
For the study, Shrier and colleagues recruited 40 people, ages 15 to 24, who used marijuana at least twice a week, although their average was 9.7 times per week. They were trained to use a handheld computer that signaled them at a random time within three-hour intervals (four to six times per day) for two weeks. At each signal, participants were asked about their mood, companionship, perceived availability of marijuana, and recent marijuana use. Participants were also asked to report just before and just after any marijuana use. They completed more than 3,600 reports.
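The signaling scheme Shrier’s team describes, a prompt at a random time within each three-hour block, four to six times per day, is easy to sketch. The block boundaries and start time below are assumptions for illustration, not details from the paper.

```python
# Sketch of the described signal schedule: one random prompt in each
# consecutive three-hour block, four to six blocks per day, for two weeks.
# Exact block boundaries and start times are assumptions.
import random
from datetime import datetime, timedelta

def daily_signals(day_start: datetime, n_blocks: int = 5) -> list:
    """Return one random signal time inside each 3-hour block."""
    return [day_start + timedelta(hours=3 * i, minutes=random.randrange(180))
            for i in range(n_blocks)]

for t in daily_signals(datetime(2014, 9, 22, 9, 0)):  # day starting 9:00 a.m.
    print(t.strftime("%H:%M"))
```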
The researchers found that negative affect was significantly increased during the 24 hours before marijuana use compared with other periods. However, positive affect did not vary in the period before marijuana use compared with other times.
Also, neither the availability of marijuana nor the presence of friends modified the likelihood that chronic users would use marijuana following a period of negative affect.
The study is unique in that it collected data in real time to assess mood and marijuana use events. The study thus was able to identify mood in the 24 hours before marijuana use and compare it with mood at other times, Shrier reports.
“There are a host of limitations with retrospective assessments, such as asking people ‘the last time you used marijuana, why did you use it?'” according to Shrier. “We weren’t asking people to predict anything or to recall anything—we were just asking them to give us reports about how they were feeling right now. We were able to put under a microscope the association between those feelings and subsequent marijuana use.”
Shrier says it could be beneficial for clinicians and counselors to help their patients identify patterns of negative affect and to implement alternative mood-regulation strategies to replace marijuana use.
References:
Shrier, L. A., Ross, C. S., & Blood, E. A. (September 2014). Momentary positive and negative affect preceding marijuana use events in youth. Journal of Studies on Alcohol and Drugs, 75(5), 781.
The Journal of Studies on Alcohol and Drugs is published by the Center of Alcohol Studies at Rutgers, The State University of New Jersey. It is the oldest substance-related journal published in the United States.
To learn about education and training opportunities for addiction counselors and others at the Rutgers Center of Alcohol Studies, please visit AlcoholStudiesEd.rutgers.edu.
When Is Vitamin E Intake Most Crucial?
David Stauth, Oregon State University
Amid conflicting reports about the need for vitamin E and how much is enough, a new analysis published September 15 suggests that adequate levels of this essential micronutrient are especially critical for the very young, the elderly, and women who are or may become pregnant.
A lifelong proper intake of vitamin E is also important, researchers said, but often complicated by the fact that this nutrient is one of the most difficult to obtain through diet alone. Only a tiny fraction of Americans consume enough dietary vitamin E to meet the estimated average requirement.
Meanwhile, some critics have raised unnecessary alarms about excessive vitamin E intake while in fact the diet of most people is insufficient, said Maret Traber, a professor in the College of Public Health and Human Sciences at Oregon State University, principal investigator with the Linus Pauling Institute and national expert on vitamin E.
“Many people believe that vitamin E deficiency never happens,” Traber said. “That isn’t true. It happens with an alarming frequency both in the United States and around the world. But some of the results of inadequate intake are less obvious, such as its impact on the nervous system and brain development, or general resistance to infection.”
Some of the best dietary sources of vitamin E – nuts, seeds, spinach, wheat germ and sunflower oil – don’t generally make the highlight list of an average American diet. One study found that people who are highly motivated to eat a proper diet consume almost enough vitamin E, but broader surveys show that 90 percent of men and 96 percent of women don’t consume the amount currently recommended, 15 milligrams per day for adults.
In a review of multiple studies, published in Advances in Nutrition, Traber outlined some of the recent findings about vitamin E. Among the most important are the significance of vitamin E during fetal development and in the first years of life; the correlation between adequate intake and dementia later in life; and the difficulty of evaluating vitamin E adequacy through measurement of blood levels alone.
Findings include:
– Inadequate vitamin E is associated with increased infection, anemia, stunting of growth and poor outcomes during pregnancy for both the infant and mother.
– Overt deficiency, especially in children, can cause neurological disorders, muscle deterioration, and even cardiomyopathy.
– Studies with experimental animals indicate that vitamin E is critically important to the early development of the nervous system in embryos, in part because it protects the function of omega-3 fatty acids, especially DHA, which is important for brain health. The most sensitive organs include the head, eye and brain.
– One study showed that higher vitamin E concentrations at birth were associated with improved cognitive function in two-year-old children.
– Findings about diseases that are increasing in the developed world, such as non-alcoholic fatty liver disease and diabetes, suggest that obesity does not necessarily reflect adequate micronutrient intake.
– Measures of circulating vitamin E in the blood often rise with age as lipid levels also increase, but this does not prove adequate delivery of vitamin E to tissues and organs.
– Vitamin E supplements do not seem to prevent Alzheimer’s disease occurrence, but have shown benefit in slowing its progression.
– A study in elderly people showed that a lifelong dietary pattern resulting in higher levels of vitamins B, C, D and E was associated with a larger brain size and higher cognitive function.
– Vitamin E protects critical fatty acids such as DHA throughout life, and one study showed that people in the top quartile of DHA concentrations had a 47 percent reduction in the risk of developing all-cause dementia.
“It’s important all of your life, but the most compelling evidence about vitamin E is about a 1000-day window that begins at conception,” Traber said. “Vitamin E is critical to neurologic and brain development that can only happen during that period. It’s not something you can make up for later.”
Traber said she recommends that everyone take a supplement providing at least the estimated average requirement of vitamin E, and that doing so is particularly important for children through about age two; for women who are pregnant, nursing or may become pregnant; and for the elderly.
This research was supported in part by the National Institutes of Health.