New Microbes, Toxins Discovered In NIH, FDA Lab Facilities

Chuck Bednar for redOrbit.com – Your Universe Online
Containers filled with several types of potentially deadly substances, including a nearly 100-year-old vial of the toxin ricin and samples of the pathogens that cause botulism and the plague, have reportedly been discovered at US National Institutes of Health (NIH) and Food and Drug Administration (FDA) laboratories.
The substances were discovered as part of a search launched in response to the accidental discovery of smallpox at an NIH facility in Bethesda, Maryland earlier this summer, according to BBC News. In that incident, six freeze-dried and sealed vials of the virus were found, marking the first time that unaccounted-for samples of smallpox had been discovered in the US.
This latest incident took place at laboratories that the NIH said were permitted to use poisonous substances, but involved materials from historical collections that had been stored without adhering to any specific safety regulations. Officials told the BBC that the toxins had been improperly stored, but were in sealed containers. No employees were in any danger of exposure, and the samples have since been destroyed.
According to Washington Post reporters Brady Dennis and Lena H. Sun, a total of five misplaced biological materials were discovered by NIH officials at the Bethesda campus over the past few weeks. Alfred Johnson, director of the agency’s office of research services, told the writers that the items were found in locations where they should not have been stored. Johnson’s office is in the midst of a “clean sweep” of NIH labs, Dennis and Sun added.
Three of the five materials were found at the NIH Clinical Center’s Department of Medicine, which is home to thousands of microbial samples dating back to the 1950s, Johnson told the Washington Post. Those finds included two vials of the bacteria responsible for causing plague, two vials of a rare bacterium that causes a tropical illness known as melioidosis, and three vials of the bacterium that causes a potentially fatal disease known as tularemia.
The NIH search, which took place between July 29 and August 27, also revealed a vial of ricin in a chemical lab, and two vials of the nerve toxin that causes the muscle-paralyzing disease botulism in a lab of the National Institute of Child Health and Human Development (NICHD). While Johnson said that scientists are allowed to have individual quantities of this substance if they are below half a milligram, the total found in the two vials exceeded the limit.
In a memo, the NIH said that it “takes this matter very seriously,” and that “the finding of these agents highlights the need for constant vigilance in monitoring laboratory materials in compliance with federal regulations on biosafety,” according to The Telegraph.
In a separate but related incident, the FDA reported on Friday that it, too, had discovered an improperly stored agent – staphylococcal enterotoxin, which can cause food poisoning – in one of its laboratories, the UK newspaper reported. The vials were stored in a locked freezer, the agency said, but not in a lab registered to work with such agents. They were relocated to a registered facility, where they were later destroyed.

Do 3D Movies Elicit More Of An Emotional Response Than 2D Films?

Chuck Bednar for redOrbit.com – Your Universe Online
While the common perception is that watching movies in 3D adds an extra level of excitement and makes the theatrical experience more thrilling and lifelike, new research appearing in a recent edition of the journal PLOS ONE suggests otherwise.
Psychologists from the University of Utah devised the study to investigate whether older 2D film clips could produce the desired emotional response in people who are regularly exposed to high-definition 3D video. They recruited 480 participants, showed them both types of film clips, and then gauged their emotional reactions to what they watched.
Four five-minute film clips were selected, each of which prompted one discrete emotion intensely and in context without the need to watch the entire film, the researchers explained. The participants watched both 2D and 3D versions of each film, which included clips from “My Bloody Valentine” (which is associated with fear), “Despicable Me” (amusement), “Tangled” (sadness) and “The Polar Express” (thrill or excitement).
All of the participants were assigned at random to view the films in a design that balanced the pairs of films watched, the format in which they were viewed, and the order of presentation. These configurations allowed the study authors to compare not just the emotional responses, but the effects of format and viewing order on the results as well. Overall, they found few significant differences between physiological reactions to the clips.
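To make that balancing concrete, here is a minimal Python sketch of one way such a fully counterbalanced assignment could be generated; the exact pairing scheme is an assumption for illustration, not the study's actual procedure.

    import itertools
    import random

    # Illustrative counterbalancing sketch, not the study's actual code.
    films = ["My Bloody Valentine", "Despicable Me",
             "Tangled", "The Polar Express"]
    film_pairs = list(itertools.combinations(films, 2))  # which two films
    format_orders = [("2D", "3D"), ("3D", "2D")]         # which format first
    film_orders = [0, 1]                                 # which film of the pair first

    # Every combination of pair, format order and film order appears equally
    # often: 6 x 2 x 2 = 24 conditions, i.e. 20 of 480 participants per cell.
    conditions = list(itertools.product(film_pairs, format_orders, film_orders))

    def assign(n_participants):
        """Cycle through the balanced conditions, shuffled once per cycle."""
        pool, schedule = [], []
        for _ in range(n_participants):
            if not pool:
                pool = conditions[:]
                random.shuffle(pool)
            schedule.append(pool.pop())
        return schedule

    print(len(conditions))   # 24 balanced conditions
    print(assign(480)[0])    # one participant's assigned condition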
The researchers used common measures to gauge emotional responses, including palm sweat, breathing and cardiovascular responses such as heart rate. Taking into account the sizable number of tests, the research team said that there was only one primary difference detected between the formats – the number of electrodermal responses (palm sweat) experienced during a thrilling moment in the 3D clip of “The Polar Express.”
The authors believe that this is due to the overall quality, the amount and the variety of 3D effects used in this particular film. Overall, however, the researchers found that the individual differences in anxiety, control of emotional responses or “thrill seeking” did not change the participants’ psychological or physiological responses to 3D viewing.
Thus, differences in personality did not change the fact that 2D and 3D are equally effective when it comes to eliciting emotions. In a statement, study author and assistant professor of psychology Sheila Crowell said that the findings “could be good news for people who would rather not wear 3D glasses or pay the extra money to see these types of films.”
“We set out to learn whether technological advances like 3D enhance the study of emotion, especially for young patients who are routinely exposed to high-tech devices and mediums in their daily lives,” she added. “Both 2D and 3D are equally effective at eliciting emotional responses, which also may mean that the expense involved in producing 3D films is not creating much more than novelty.”
While Crowell and her colleagues note that further research is needed to confirm the findings, they said that researchers who cannot afford 3D technologies should be encouraged by the results. In addition to Crowell, Daniel L. Bride, Brian R. Baucom, Erin A. Kaufman, Caitlin G. O’Connor, Chloe R. Skidmore, and Mona Yaptangco – all of the University of Utah’s Department of Psychology – were credited as authors on the study.

Scientists Achieve First Successful Brain-To-Brain Communication In Humans

Chuck Bednar for redOrbit.com – Your Universe Online
Researchers have taken telepathic communication out of the realm of science fiction and into reality, successfully demonstrating that a simple message can be sent directly from the brain of one person to the mind of another.
In research published recently in PLOS ONE, Dr. Alvaro Pascual-Leone, a neurologist at Beth Israel Deaconess Medical Center in Boston and a professor at Harvard Medical School, and his colleagues explain how they were able to use non-invasive techniques to transmit the information through the Internet to and from the scalps of people 5,000 miles apart.
“It is kind of technological realization of the dream of telepathy, but it is definitely not magical. We are using technology to interact electromagnetically with the brain,” co-author Giulio Ruffini, a theoretical physicist with Starlab Barcelona, told AFP in a telephone interview Friday. “We hope that in the longer term this could radically change the way we communicate with each other.”
According to Macrina Cooper-White of The Huffington Post, Dr. Pascual-Leone’s team performed the feat by first attaching electrodes to the scalps of one person living in India and three others residing in France. The first person was then asked to transmit a mental message to the others – a message that was detected using an electroencephalogram (EEG), which records electrical activity in a person’s brain.
Next, the message was translated into binary code by a computer and then emailed to France, where it was converted back into electrical pulses. Those pulses were applied to the brains of the receivers through a process known as transcranial magnetic stimulation (TMS), causing the subjects to perceive flashes of light in their peripheral vision that could then be decoded to reveal the original messages – in this case, “hola” and “ciao.”
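As a rough illustration of that pipeline, the Python sketch below shows a message being reduced to bits, mapped to phosphene/no-phosphene events, and decoded back to text; the bit-level details are assumptions for illustration, not the team's actual encoding scheme.

    # Illustrative sketch only; the study's actual encoding may differ.
    def text_to_bits(message):
        return "".join(format(byte, "08b") for byte in message.encode("ascii"))

    def bits_to_text(bits):
        chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
        return bytes(int(chunk, 2) for chunk in chunks).decode("ascii")

    bits = text_to_bits("hola")          # sender's message as binary
    # Each bit becomes a stimulation decision at the receiving end:
    # 1 -> a TMS pulse that evokes a phosphene (flash), 0 -> no flash.
    flashes = [bit == "1" for bit in bits]
    received_bits = "".join("1" if flash else "0" for flash in flashes)
    print(bits_to_text(received_bits))   # -> "hola"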
“We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways,” Dr. Pascual-Leone said in a statement.
“One such pathway,” he added, “is, of course, the internet, so our question became, ‘Could we develop an experiment that would bypass the talking or typing part of Internet and establish direct brain-to-brain communication between subjects located far away from each other in India and France?'”
The researchers compared their experiment to a neuroscientific equivalent of instant messaging. While previous research on EEG-based brain-computer interaction (BCI) typically involved communication between a human mind and a computer, this study set out to use the computer as an intermediary, passing the electrical currents recorded from the sender’s brain along to the receiver and then verifying that the receiver got and understood the messages.
The computer-brain interface (CBI) transmitted the message to the receiver’s brain through noninvasive brain stimulation in the form of phosphenes – the aforementioned flashes of light in their peripheral vision. That light appeared in numerical sequences that allowed the recipient to decode the information contained in the message, and while the subjects did not report feeling anything, they were able to correctly receive the messages.
“By using advanced precision neuro-technologies including wireless EEG and robotized TMS, we were able to directly and noninvasively transmit a thought from one person to another, without them having to speak or write,” Dr. Pascual-Leone said. “This in itself is a remarkable step in human communication.”
He added that “being able to do so across a distance of thousands of miles” was “a critically important proof-of-principle for the development of brain-to-brain communications.” These experiments, which also involved experts from Axilum Robotics in France, represent “an important first step in exploring the feasibility of complementing or bypassing traditional language-based or motor-based communication,” the Harvard professor concluded.

No Link Found Between Wearing A Bra And Breast Cancer

April Flowers for redOrbit.com – Your Universe Online
Have you ever wondered if your bra might cause cancer? Apparently, you are not alone. A group of scientists from the University of Washington recently studied the relationship between bra wearing and increased breast cancer risk among postmenopausal women. The findings of the population-based case-control study, published in Cancer Epidemiology, Biomarkers & Prevention, found no association.
“There have been some concerns that one of the reasons why breast cancer may be more common in developed countries compared with developing countries is differences in bra-wearing patterns,” Lu Chen, MPH, a researcher in the Public Health Sciences Division at Fred Hutchinson Cancer Research Center and a doctoral student in the Department of Epidemiology at the University of Washington School of Public Health, said in a recent statement. “Given how common bra wearing is, we thought this was an important question to address.”
“Our study found no evidence that wearing a bra increases a woman’s risk for breast cancer. The risk was similar no matter how many hours per day women wore a bra, whether they wore a bra with an underwire, or at what age they first began wearing a bra,” Chen added in a separate statement.
Chen continued, “There has been some suggestion in the lay media that bra wearing may be a risk factor for breast cancer. Some have hypothesized that drainage of waste products in and around the breast may be hampered by bra wearing. Given very limited biological evidence supporting such a link between bra wearing and breast cancer risk, our results were not surprising.”
The study used a strict epidemiological design to look at various bra-wearing habits in relation to breast cancer risk. The researchers note that the results should provide reassurance that bra wearing does not increase the risk of the most common histological types of postmenopausal breast cancer.
The research team recruited 454 women with invasive ductal carcinoma (IDC), 590 women with invasive lobular carcinoma (ILC) — the two most common subtypes of breast cancer — and 469 women without breast cancer to serve as controls. All of the participants were postmenopausal women between the ages of 55 and 74 recruited from the Seattle-Puget Sound metropolitan area.
In-person interviews were conducted to gather data on demographics, family history, and reproductive history. The research team also asked a series of structured questions to assess lifetime patterns of bra wearing—including the age at which the participant began wearing a bra, whether she wore a bra with an underwire, her bra cup size and band size, the number of hours per day and number of days per week she wore a bra, and if her bra-wearing patterns ever changed at different times in her life.
According to their analysis, no aspect of bra wearing was associated with an increased risk of either type of breast cancer.

North Pacific Blue Whale Population Rebounds To Near-Historic Levels

Chuck Bednar for redOrbit.com – Your Universe Online
California blue whales have become the first population of the endangered species to rebound, demonstrating the animals’ ability to recover when carefully managed, according to new research appearing in the journal Marine Mammal Science.
Blue whales are the largest animals on Earth, reaching nearly 100 feet in length and weighing nearly 200 tons in adulthood. They are also the heaviest creatures ever to have lived, weighing twice as much as the largest known dinosaur, yet the study authors report they were hunted to the brink of extinction.
According to Rachel Feltman of the Washington Post, the species was hit hard by commercial whalers seeking them out for their meat and oil. While the practice of hunting blue whales for commercial purposes has been prohibited by the International Whaling Commission since 1966, they still face threats from illegal whaling, as well as incidental fatalities related to other types of fishing, shipping, and pollution, she added.
Now, the authors of the new study report that the North Pacific blue whale population is the largest in the world, having reached about 2,200 individuals. Scientists previously believed that the pre-whaling population was much higher than that, but the new study suggests otherwise. The authors reviewed catch records from 1905 to 1971 to estimate the number of whales taken from each population, and now believe the current population stands at 97 percent of its historical high.
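A quick back-of-the-envelope check, using only the figures above, shows what that 97 percent estimate implies about the pre-whaling population:

    current_population = 2200       # estimated North Pacific blue whales today
    fraction_of_high = 0.97         # study's estimate of recovery level
    historical_high = current_population / fraction_of_high
    print(round(historical_high))   # ~2268 whales before commercial whaling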
“If California has always had a relatively small blue whale population, it explains why the area’s population growth has slowed in recent years: It may be almost back to normal,” Feltman said. “The researchers believe that our nasty habit of running into whales with our ships (at least 11 were struck along the west coast last year) isn’t actually a major concern. They believe that the population can maintain its stability regardless.”
“My impression is that they are fairly robust,” lead author Cole Monnahan, a doctoral student in quantitative ecology and resource management at the University of Washington, told BBC News environmental correspondent Matt McGrath on Thursday. “If you can whale them pretty extensively for 50-70 years and they are able to recover I think that says a lot about moving forward.”
Monnahan was joined on the study by Trevor Branch and Andre Punt, both of the UW School of Aquatic and Fishery Sciences, and based on their findings, they conclude that the North Pacific blue whale population most likely never dropped below 460 individuals. The research was funded by the Joint Institute for the Study of the Atmosphere and Ocean, a collaboration between UW and the US National Oceanic and Atmospheric Administration (NOAA).
“Our findings aren’t meant to deprive California blue whales of protections that they need going forward,” said Monnahan. “California blue whales are recovering because we took actions to stop catches and start monitoring. If we hadn’t, the population might have been pushed to near extinction – an unfortunate fate suffered by other blue whale populations. It’s a conservation success story.”

Sequencing Of Coffee Genome Reveals Secrets Of Caffeine Development

Chuck Bednar for redOrbit.com – Your Universe Online
By sequencing the genome of the coffee plant, an international team of researchers has discovered genetic secrets that could enable them to create new varieties of coffee that taste better, have varied levels of caffeine, or are better able to survive drought conditions and diseases.
In addition, Philippe Lashermes, a researcher at the French Institute of Research for Development (IRD), and his colleagues discovered that the coffee plant developed caffeine-linked genes independently and did not inherit them from a common ancestor. Their findings are detailed in Thursday’s online edition of the journal Science.
According to the researchers, they decided to sequence the coffee genome because it is “one of the most important crops on Earth,” with 8.7 million tons of coffee produced last year and over 2.25 billion cups of the beverage consumed on a daily basis. They selected the species Coffea canephora as it displayed a conserved chromosomal gene order among asterid angiosperms, and because they were able to generate a high-quality draft genome of the plant.
“Coffee is as important to everyday early risers as it is to the global economy. Accordingly, a genome sequence could be a significant step toward improving coffee,” Lashermes, one of the principal authors of the study, said in a statement. “By looking at the coffee genome and genes specific to coffee, we were able to draw some conclusions about what makes coffee special.”
After sequencing the Coffea canephora genome, the study authors examined how its genetic composition differed from other types of plants. In comparison to several other species, including grapes and tomatoes, they discovered larger families of genes associated with the production of alkaloid and flavonoid compounds in coffee plants – compounds which contribute to traits such as the aroma of the coffee and the bitterness of the beans.
Furthermore, they discovered that coffee has an expanded group of enzymes known as N-methyltransferases, which are involved in caffeine production. After examining these enzymes more closely, the researchers learned that they were more closely related to other genes in the coffee plant than to the caffeine enzymes found in tea and chocolate. This suggests caffeine production developed independently in coffee plants, since the enzymes would have been more similar between species had they been inherited from a common ancestor.
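The logic of that inference can be illustrated with a toy clustering example in Python; the distance values below are invented for illustration, not real sequence data or the study's actual analysis.

    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import squareform

    # If coffee's caffeine enzymes group with coffee's other
    # N-methyltransferases rather than with the tea/cacao caffeine enzymes,
    # caffeine production likely arose independently in coffee.
    labels = ["coffee_caffeine_NMT", "coffee_other_NMT",
              "tea_caffeine", "cacao_caffeine"]
    distances = np.array([
        [0.0, 0.2, 0.7, 0.8],   # coffee caffeine enzyme is closest to...
        [0.2, 0.0, 0.7, 0.8],   # ...coffee's own related enzymes
        [0.7, 0.7, 0.0, 0.4],
        [0.8, 0.8, 0.4, 0.0],
    ])
    tree = linkage(squareform(distances), method="average")
    print(labels)
    print(tree)  # first merge joins the two coffee enzymes, then tea + cacao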
“The coffee genome helps us understand what’s exciting about coffee – other than that it wakes me up in the morning,” said Victor Albert, professor of biological sciences at the University at Buffalo and co-principal author. “By looking at which families of genes expanded in the plant, and the relationship between the genome structure of coffee and other species, we were able to learn about coffee’s independent pathway in evolution.”
Albert told Reuters reporter Will Dunham that the coffee genome was about as large as the average plant genome, and had approximately 25,500 genes responsible for various proteins. He also suggested that coffee plants might have started producing caffeine in order to entice pollinators to return, or to prevent herbivorous insects from eating their leaves.
The study was funded by the French National Research Agency; the Australian Research Council; the Natural Sciences and Engineering Research Council of Canada; the CNR-ENEA Agrifood Project of Italy; the Funding Authority for Studies and Projects (FINEP Qualicafe) of Brazil; the National Institutes of Science and Technology (INCT Cafe) of Brazil; the US National Science Foundation (NSF); the College of Arts and Science at the University at Buffalo; and in-kind support by scientists at Nestle’s research and development center in France.

Automotive Giants To Help University Of Michigan Study V2V Communication Technology

Chuck Bednar for redOrbit.com – Your Universe Online
Researchers at the University of Michigan and several automotive industry partners are looking to have a wireless vehicle-to-vehicle communication system operational within seven years, the Ann Arbor-based school announced on Friday.
According to Reuters reporter Ben Klayman, the university’s new Mobility Transformation Center (MTC) will be helping to develop and implement technology that will enable cars and trucks to communicate with each other, and to use stoplights and other infrastructure items to help reduce traffic congestion and prevent accidents from occurring.
Joining the MTC’s Leadership Circle will be companies from manufacturing, supply, insurance, telecommunications, data management and mobility services, including Denso Corp, Econolite Group, Ford, General Motors (GM), Honda, Iteris Inc., Nissan, Robert Bosch, State Farm, Toyota, Verizon and Xerox. The MTC is currently testing a pilot program in Ann Arbor, with the hope that its automated car system will be functional by 2021.
Klayman said that GM, Ford, Toyota, Honda and Nissan have each pledged $1 million over three years to establish the new facility, which a spokesperson told Reuters is expected to raise up to $100 million through 2021. The university, along with officials from the US Department of Transportation, launched a pilot program in 2012 with the goal of equipping nearly 3,000 motor vehicles with technology capable of tracking the speed and location of other cars and trucks, warning drivers when they approach congested areas, or changing traffic lights to green.
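To give a flavor of what such vehicle-to-vehicle broadcasts involve, here is a deliberately simplified Python sketch; the message fields, coordinates and warning threshold are hypothetical, not the actual SAE/NHTSA message format.

    from dataclasses import dataclass
    import math

    # Hypothetical, simplified V2V "heartbeat" message loosely modeled on the
    # speed/position broadcasts described above.
    @dataclass
    class V2VMessage:
        vehicle_id: str
        lat: float        # degrees
        lon: float        # degrees
        speed_mps: float  # meters per second

    def too_close(a: V2VMessage, b: V2VMessage, warn_meters: float = 50.0) -> bool:
        """Crude flat-earth distance check used to trigger a driver warning."""
        meters_per_deg = 111_000  # rough conversion near mid-latitudes
        dx = (a.lon - b.lon) * meters_per_deg * math.cos(math.radians(a.lat))
        dy = (a.lat - b.lat) * meters_per_deg
        return math.hypot(dx, dy) < warn_meters

    me = V2VMessage("car-1", 42.2808, -83.7430, 13.0)     # Ann Arbor-ish coords
    other = V2VMessage("car-2", 42.2809, -83.7432, 17.0)
    print(too_close(me, other))   # True -> issue a proximity warning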
“We are on the threshold of a transformation in mobility that the world hasn’t seen since the introduction of the automobile a century ago,” said MTC Director Peter Sweatman. “Only by bringing together partners from these sectors, as well as from government, will we be able to address the full complexity of the challenges ahead as we all work to realize the opportunities presented by this emerging technology.”
“This is the next big thing for the state that put the world on wheels,” added Michigan Department of Transportation (MDOT) Director Kirk Steudle, who is also working with the MTC on the project. “We are thrilled to join our partners in private industry and the University of Michigan in supporting groundbreaking research to keep our state in the lead in building the safest and most efficient vehicles in the world.”
In August, the US National Highway Traffic Safety Administration (NHTSA) announced that it had started drafting rules that would require vehicle-to-vehicle (V2V) technology in most types of new cars and light trucks. That announcement came after the agency issued research claiming that V2V units could ultimately prevent an estimated 592,000 left-turn and intersection crashes and save over 1,000 lives each year.
“Safety is our top priority, and V2V technology represents the next great advance in saving lives,” US Transportation Secretary Anthony Foxx said at the time. “This technology could move us from helping people survive crashes to helping them avoid crashes altogether – saving lives, saving money and even saving fuel thanks to the widespread benefits it offers.”
“By warning drivers of imminent danger, V2V technology has the potential to dramatically improve highway safety,” added NHTSA Deputy Administrator David Friedman. “V2V technology is ready to move toward implementation and this report highlights the work NHTSA and [the Department of Transportation] are doing to bring this technology and its great safety benefits into the nation’s light vehicle fleet.”

Death Of The Traditional Family: "Different Is The New Normal," Report Author Claims

Chuck Bednar for redOrbit.com – Your Universe Online
The days of the traditional 1950s family, where Dad goes off to be the breadwinner and Mom stays home to take care of the kids, are long gone, replaced by a “peacock’s tail” of various family unit structures, University of Maryland sociologist Philip Cohen claims in a new report prepared for the Council on Contemporary Families (CCF).
In his paper “Family Diversity is the New Normal for America’s Children,” Cohen reports that only 22 percent of children currently live in a married male-breadwinner family, while 23 percent are cared for by a single mother. Seven out of every 100 live with a parent cohabiting with an unmarried partner, while six live with either a single father or grandparents.
The single largest group of children (34 percent) lives with dual-earner married parents, the study said, but that group represents just slightly over one-third of the whole. That is a far cry from six decades ago, when 65 percent of all kids under the age of 15 were living with a family of married parents where only the father was part of the workforce.
“Different is the new normal. There hasn’t been the collapse of one dominant family structure and the rise of another. It’s really a fanning out into all kinds of family structures,” Cohen told Brigid Schulte of The Washington Post on Thursday. “The big story, really, is the decline of marriage. That’s what’s really changed.”
Between the 1950s and 2010, married couple families decreased from two-thirds of all households to just 45 percent, Cohen told Schulte. He said that he was most troubled by the rapid increase in single parenthood, which is often linked with higher poverty levels. A separate study of 11 wealthy nations found that the largest gap in poverty rates between married couples and unmarried mothers was in the US, the Washington Post added.
Cohen, who used data from the US Census and national surveys on family life to compile his report, told Live Science Contributor Stephanie Pappas that the American family demonstrated a “peak conformity” in 1960, the year in which the marriage rate was at its highest and the number of multigenerational families living under the same roof was at its lowest.
“That year, 65 percent of children under age 15 lived in a family with married parents in which the father was the breadwinner,” Pappas said. “Another 18 percent had married parents who were both employed. Only one child in 350 lived with a mother who had never been married. The vast majority of the single mothers were divorced, widowed or separated, and about 7 percent of children were growing up in those households.”
As of two years ago, however, no single family type held a majority, she added. Cohen’s report revealed there was an 80 percent chance that two children selected at random in 1960 would be growing up in the exact same type of family structure. By 2012, however, that chance had dipped all the way down to 50 percent, he noted.
“Not only is there no dominant family form, but children experience more transitions in and out of different family arrangements than in the past, and do so through more varied pathways than ever before,” the CCF said in a statement, adding that Cohen argues in his report that “market forces and social reforms following the Great Depression and World War II” were “driving forces” of the phenomenon, “though in surprising ways.”
“People are really sort of on their own figuring out how to make their family life work, and that’s one reason why you have the huge advice and parenting industry,” Cohen told Pappas, adding that the search for role models could help explain why Americans are so interested in celebrity marriages and families. “People get reassurance from conformity. Not that everybody wants to conform with everyone else, but they like a model.”

US Obesity Rates "Unacceptably High," According To Trust For America’s Health Report

Chuck Bednar for redOrbit.com – Your Universe Online
For the first time ever, adult obesity rates have exceeded 35 percent in two US states, and no state in the country had obesity rates of less than 21 percent, according to the results of a new report released by the Trust for America’s Health (TFAH) and the Robert Wood Johnson Foundation (RWJF) on Thursday.
The study, entitled “The State of Obesity: Better Policies for a Healthier America,” found that Mississippi and West Virginia were tied with the highest adult obesity rate in America (35.1 percent), while the lowest was in Colorado (21.3 percent), TFAH officials said in a statement.
The report also found that adult obesity rates increased in six states (Alaska, Delaware, Idaho, New Jersey, Tennessee and Wyoming) and fell in none. Based on the study’s findings, obesity rates among American adults are now at or above 30 percent in 20 of the 50 US states.
Those findings are based on federal government data, Reuters explained, and suggest that the problem is growing worse in spite of the widespread publicity the issue has received in recent years from First Lady Michelle Obama and countless others.
From 2011 to 2012, the rate of obesity increased in just one state, the news organization added. As of 2013, 42 US states had adult obesity rates exceeding 25 percent, and at the national level, the obesity rate remained at approximately one-third of the entire adult population.
According to the US Centers for Disease Control and Prevention (CDC), the highest prevalence of obesity regionally was in the South (30.2 percent), followed closely by the Midwest (30.1 percent). Third on the list was the Northeast (26.5 percent), with the West (24.9 percent) ranking fourth.
Childhood obesity rates remained steady with approximately one-third of children between the ages of 2 and 19 being overweight or obese in 2012, Reuters said. Higher obesity rates were also found in areas of poverty, which is associated with a decreased availability of healthy foods and a reduced number of safe places to use for exercise, and more African Americans (75 percent) were found to be overweight or obese than whites (67.2 percent).
“That pattern affects children, too. In 2012, just over 8 percent of African American children ages 2 to 19 were severely obese, with a BMI above 40, compared with 3.9 percent of white children. About 38 percent of African American children live below the poverty line, while 12 percent of white children do,” it added. “One-third of adults who earn less than $15,000 per year are obese, compared with one-quarter who earn at least $50,000.”
Among African Americans, adult obesity rates were at or above 40 percent in 11 states and 35 percent in 29 states, the report said. Rates of adult obesity among Latinos exceeded 35 percent in five states and 30 percent in 23 states, while adult obesity rates among whites topped 30 percent in 10 states.
Baby Boomers (those between the ages of 45 and 64) had the highest obesity rate of any age group, topping 35 percent in 17 states and 30 percent in 41 states, the study discovered. Furthermore, the percentage of severely obese adults in the US has quadrupled in the past three decades, with over six percent now meeting the criteria.
“Obesity in America is at a critical juncture,” said Dr. Jeffrey Levi, executive director of TFAH. “Obesity rates are unacceptably high, and the disparities in rates are profoundly troubling. We need to intensify prevention efforts starting in early childhood, and do a better job of implementing effective policies and programs in all communities – so every American has the greatest opportunity to have a healthy weight and live a healthy life.”
“While adult rates are stabilizing in many states, these data suggest that our overall progress in reversing America’s obesity epidemic is uneven and fragile,” added RWJF president and CEO Dr. Risa Lavizzo-Mourey. “A growing number of cities and states have reported decreases in obesity among children, showing that when we make comprehensive changes to policies and community environments, we can build a Culture of Health that makes healthy choices the easy and obvious choices for kids and adults alike.”

Nocturnal Behavior May Predate The Earliest Mammals By 100 Million Years

Chuck Bednar for redOrbit.com – Your Universe Online
The transition to nocturnal behavior, originally believed to have occurred around the same time that mammals evolved some 200 million years ago, actually took place more than 100 million years earlier, researchers from the Field Museum of Natural History in Chicago now report.
Previous theories regarding nocturnal behavior were based on the large brains of mammals, which allow them to better process information from senses such as hearing, touch and smell, and the details of light-sensitive chemicals in the eyes of mammals, the researchers explained.
Now, however, Field Museum curator Kenneth Angielczyk and Claremont McKenna, Pitzer, and Scripps Colleges biology professor Lars Schmitz report in Wednesday’s early edition of the journal Proceedings of the Royal Society B that nocturnal activity might have actually originated within ancient mammal relatives known as synapsids.
“Synapsids are most common in the fossil record between about 315 million years ago and 200 million years ago. The conventional wisdom has always been that they were active during the day (or diurnal), but we never had hard evidence to say that this was definitely the case,” Angielczyk, lead author of the study, said in a statement.
Angielczyk and Schmitz based their work on the analysis of scleral ossicles, tiny bones found in the eyes of birds, lizards and many other types of backboned animals. Modern living mammals lack these bones, the researchers explained, but they were present in many of their ancient synapsid relatives.
“The scleral ossicles tell us about the size and shape of different parts of the eyeball. In turn, this information allows us to make predictions about the light sensitivity of the eye, which usually reflects the time of day an animal is active,” Schmitz said.
The bones are extremely delicate, and as such are not typically preserved in synapsid fossils. However, Angielczyk and Schmitz were able to locate data on scleral ossicles from two dozen species, representing most major groups of synapsids, through a review of museum collections and by recruiting other paleontologists to assist on the project.
The information they collected was then compared to similar measurements for living lizards and birds with known daily activity patterns. Using a statistical technique developed by Schmitz, they found that the eyes of ancient synapsid species likely spanned a wide range of light sensitivities, with some consistent with activity under bright daytime conditions and others better suited to low-light nighttime conditions.
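The general approach – training a classifier on living species whose activity patterns are known, then applying it to fossil eye measurements – can be sketched in Python as follows; the numbers are invented, and the study's actual statistical technique differed.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Toy sketch, not the study's method or data: features are, say,
    # eye aperture diameter vs. eye length (mm) for living species.
    X_living = np.array([
        [8.0, 10.0], [9.0, 11.0], [7.5, 9.5],   # large relative apertures
        [4.0, 10.0], [4.5, 11.0], [3.8, 9.8],   # small relative apertures
    ])
    y_living = ["nocturnal", "nocturnal", "nocturnal",
                "diurnal", "diurnal", "diurnal"]

    model = LinearDiscriminantAnalysis().fit(X_living, y_living)
    fossil_eye = np.array([[7.8, 10.2]])   # reconstructed from a scleral ring
    print(model.predict(fossil_eye))       # -> ['nocturnal']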
“The oldest synapsids in the dataset, including the famous sail-backed carnivore Dimetrodon, were found to have eye dimensions consistent with activity at night,” the Field Museum explained. “Based on the ages of the rocks in which these fossils are found, the results indicate that nocturnality had evolved in at least some synapsids by about 300 million years ago or 100 million years earlier than the age of the first mammals.”
The results raise the possibility that the common ancestors of all synapsids were active at night, and the researchers said their findings could help scientists studying the visual systems and behaviors of living mammals. It will also require experts to rethink some long-standing viewpoints, particularly the suggestion that mammals became nocturnal in order to avoid competition with dinosaurs.
Furthermore, Angielczyk explained that their research “shows how little we really know about the daily lives of some of our oldest relatives,” and Schmitz added that as he and his colleagues discover more fossils, “we can continue to test these predictions and start to address questions such as how many times nocturnality evolved in synapsids and whether the synapsids most closely related to mammals were also nocturnal.”
Image 2 (below): This is the skeleton of Dimetrodon, an ancient relative of mammals. New research suggests that at least some species of Dimetrodon were active at night (nocturnal). Credit: The Field Museum

The Latest Fibromyalgia Research…And Why It’s Important

Fibromyalgia is without a doubt one of the most mysterious and most painful diseases out there.  For decades, medical researchers and professionals have been searching for the root cause of the condition, which is characterized by widespread pain throughout the body, fatigue throughout the day, difficulty sleeping at night, anxiety, and in some cases even depression.

The latest research on fibromyalgia has yielded some surprising and unexpected results, primarily dealing with abnormalities in the nervous system.  It has been found that people with fibromyalgia often have peripheral neuropathy, which seems to support the idea that fibromyalgia is a neuropathic pain disease.

Damage to Peripheral Nerves

This research was conducted by taking skin samples from fibromyalgia patients and comparing them to skin samples from people not diagnosed with the condition.  It was discovered that the fibromyalgia group showed damaged peripheral sensory nerves.  More studies have since concluded with the same results.

This latest research raises the question of whether all people who have fibromyalgia also have impaired peripheral sensory nerves.  It does reveal that there is an important link between the two, and it is likely that someone who has impaired peripheral sensory nerves may be diagnosed with fibromyalgia as well.

However, there are other symptoms of fibromyalgia that are not symptoms of impaired peripheral sensory nerves.  While people with fibromyalgia will feel the burning nerve pain of impaired peripheral sensory nerves, those with impaired sensory nerves may not feel the deep tissue pain experienced by those with fibromyalgia.

This has led many medical professionals and researchers to draw the conclusion that nerve damage might not be the cause of the pain and symptoms of fibromyalgia, and that fibromyalgia might instead result from problems in the fibers of the muscles and tendons.

People with impaired peripheral sensory nerves will have widespread pain throughout their body, and research has also shown that people officially diagnosed with fibromyalgia report small fiber pain that is likewise widespread throughout the body.

Therefore, diagnosing impaired peripheral sensory nerves in people who have fibromyalgia led medical and scientific researchers to look even harder for a different underlying cause, which would in turn lead to a series of different treatments.  These other candidate causes include diabetes and immune disorders.  The results showed a strong correlation between nerve fiber damage and fibromyalgia.

Fibromyalgia Latest Research

As a result, the question arises whether identifying the cause of impaired peripheral nerves would lead to finding the cause in fibromyalgia patients, and subsequently to finding the cause of fibromyalgia and any treatments that would be needed to fix that problem.  Many medical researchers think that we can improve the pain and symptoms of fibromyalgia if we find the cause of impaired peripheral nerves as well.

Other medical professionals and researchers take a strong stance on the opposing side of this argument.  Even if the science in the new research makes sense, does that really mean we could also find different treatments for people with fibromyalgia?  Chances are that the nerve damage follows the fibromyalgia rather than the fibromyalgia following the nerve damage.  So if the causes of impaired peripheral sensory nerves are not the answer to finding the cause of fibromyalgia, then what is?

The answer could lie in the fibers of the skin that link the sensory nerves to the blood vessels.  Much of our blood is stored in our hands and feet when the rest of the body doesn’t need it, and when we perform physical activities or exercise, the brain signals which parts of the body need blood.  The problem occurs when the blood vessels are blocked from sending blood to the muscles, resulting in widespread muscle pain throughout the body and immense fatigue.

Many people who have impaired peripheral nerves do not feel this pain, even though people with fibromyalgia certainly do, and the research has established a strong link between the nerves and fibromyalgia.  Fibromyalgia can be triggered by a number of different factors, including trauma, stress or injury, but the evidence revealed by research seems to indicate that fiber neuropathy is one of the causes.

Fiber Neuropathy

Research conducted by Albany Medical College in New York seems to support the fiber neuropathy link as well.  Blood vessels are important for regulating body temperature and blood flow to the muscles during physical activity and exercise.  But when these fibers constrict, they inhibit the flow of blood, and that is ultimately what causes the widespread pain in the muscles.  What’s even more notable about the link between fibromyalgia and fiber neuropathy?

Skin samples from patients with fibromyalgia have been shown to contain significantly altered fibers compared with samples from people who don’t have fibromyalgia.  Fiber neuropathy can also affect the way people react to different temperatures, which further supports the link between fiber neuropathy and fibromyalgia, as people with fibromyalgia usually don’t react well to extreme hot or cold temperatures either.

The evidence gathered by research has only made the strong correlation between altered fiber neuropathy and fibromyalgia clearer.  It is also quite likely that the immense fatigue felt by patients with fibromyalgia is due to the muscles not getting enough blood.  Medical researchers are optimistic about the new information they have discovered, but also admit that more tests will need to be conducted before they can draw any final conclusions.

But for now, we can conclude that changes in the nervous system for the worse do contribute to the development of fibromyalgia in at least some shape or form.  And hopefully, we will be able to find the cause of both conditions and, subsequently, the appropriate treatment.

Massive New Species Of Titanosaur Discovered In Argentina

Chuck Bednar for redOrbit.com – Your Universe Online
A gigantic and remarkably complete dinosaur skeleton belongs to a new species that was 85 feet long and weighed a reported 65 tons during its lifetime, according to new research appearing in the September 4 online edition of the journal Scientific Reports.
The creature, which has been named Dreadnoughtus schrani, is the largest land animal for which a body mass can be accurately calculated, the US National Science Foundation (NSF) explained on Thursday. Furthermore, its skeleton is exceptionally complete, with more than 70 percent of bone types (excluding the head) represented, the NSF added.
“Dreadnoughtus schrani was astoundingly huge,” paleontologist Dr. Kenneth Lacovara, an associate professor in the Drexel University College of Arts and Sciences and a member of the team that discovered the fossils at a site in southern Patagonia, said in a statement.
“It weighed as much as a dozen African elephants or more than seven T. rex,” he added. “Shockingly, skeletal evidence shows that when this 65-ton specimen died, it was not yet full grown. It is by far the best example we have of any of the most giant creatures to ever walk the planet.”
In fact, Brian Switek of National Geographic said the dinosaur would have been heavier than a Chieftain FV4201 tank, and the study authors claim it would have been “nearly impervious to attack” from predators. Dreadnoughtus was a titanosaur, meaning that it was a large herbivore with a small head and long neck, and it most likely lived between 84 million and 66 million years ago, he added.
“What makes Dreadnoughtus a remarkable new addition to this prehistoric family is the amount of material recovered from the dinosaur,” Switek explained. “The remains, representing two individual animals, include both the humerus and femur of Dreadnoughtus.” Lacovara and a team of experts from the US and Argentina measured the circumferences of those bones and used them to estimate the weight of the new species.
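One widely used version of that bone-circumference approach is the Campione and Evans (2012) scaling equation for quadrupeds, sketched below in Python; whether the Dreadnoughtus team applied exactly this form is an assumption here, and the circumference inputs are illustrative rather than the published measurements.

    import math

    def mass_from_limb_circumference(c_humerus_mm, c_femur_mm):
        """Campione & Evans (2012): body mass in grams from the combined
        humerus + femur circumference in millimeters."""
        log_mass_g = 2.754 * math.log10(c_humerus_mm + c_femur_mm) - 1.097
        return 10 ** log_mass_g / 1_000_000   # convert grams to metric tons

    # Illustrative inputs only:
    print(round(mass_from_limb_circumference(745, 910), 1))  # ~58.6 metric tons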
The fossils used in the identification of the new species were excavated over the course of four field seasons from 2005 through 2009 by Lacovara, Lucio M. Ibiricu of the Centro Nacional Patagonico in Chubut, Argentina, Matthew Lamanna of the Carnegie Museum of Natural History, and Jason Poole of the Academy of Natural Sciences of Drexel University, as well as a team of other former and current Drexel students and other collaborators, the NSF said.
Among the bones unearthed during those sessions were a neck vertebra more than three feet long, a thigh bone roughly as tall as a human male, and ribs the size of planks, said Ian Sample, science editor with The Guardian. The bones represent the most complete skeleton of a titanosaur ever recovered, and the massive size of Dreadnoughtus schrani inspired its name, which honors the dreadnought battleships.
Drexel University reports that more than 100 different elements of the creature’s skeleton are represented in the sample, including most of the vertebrae from its 30-foot-long tail, the neck vertebra, a scapula, multiple ribs, toes, a claw, a small section of jaw and a single tooth, as well as most of the forelimb and hind limb bones, including a humerus.
The loss of the skull is fairly common in large plant-eaters, Sample explained, since the skull bones tended to be relatively small and light enough to allow the dinosaur to lift its head. Along with the Dreadnoughtus schrani fossils, the team also unearthed the less-complete remains of a second, smaller titanosaur at the site, the university added.

Electricity And Light Sent Along Same Super-thin Wire

By David Barnstone, University of Rochester

A new combination of materials can efficiently guide electricity and light along the same tiny wire, a finding that could be a step towards building computer chips capable of transporting digital information at the speed of light.

Reporting today in The Optical Society’s (OSA) high-impact journal Optica, optical and material scientists at the University of Rochester and Swiss Federal Institute of Technology in Zurich describe a basic model circuit consisting of a silver nanowire and a single-layer flake of molybdenum disulfide (MoS2).

Using a laser to excite electromagnetic waves called plasmons at the surface of the wire, the researchers found that the MoS2 flake at the far end of the wire generated strong light emission. Going in the other direction, as the excited electrons relaxed, they were collected by the wire and converted back into plasmons, which emitted light of the same wavelength.

“We have found that there is pronounced nanoscale light-matter interaction between plasmons and atomically thin material that can be exploited for nanophotonic integrated circuits,” said Nick Vamivakas, assistant professor of quantum optics and quantum physics at the University of Rochester and senior author of the paper.

Typically about a third of the remaining energy would be lost for every few microns (millionths of a meter) the plasmons traveled along the wire, explained Kenneth Goodfellow, a graduate student at Rochester’s Institute of Optics and lead author of the Optica paper.
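That loss rate compounds geometrically with distance, as the small Python illustration below makes clear; the wire length and per-segment loss are made-up numbers chosen to match the rough description above, not the paper's data.

    # Illustrative attenuation arithmetic: if roughly a third of the remaining
    # energy is lost every few microns, the surviving fraction decays
    # geometrically with distance along the wire.
    def surviving_fraction(distance_um, loss_per_segment=1/3, segment_um=3.0):
        segments = distance_um / segment_um
        return (1 - loss_per_segment) ** segments

    round_trip_um = 2 * 10.0   # out and back along a hypothetical 10-micron wire
    print(f"{surviving_fraction(round_trip_um):.1%}")   # ~6.7% of the energy left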

“It was surprising to see that enough energy was left after the round-trip,” said Goodfellow.

Photonic devices can be much faster than electronic ones, but they are bulkier because devices that focus light cannot be miniaturized nearly as well as electronic circuits, said Goodfellow. The new results hold promise for guiding the transmission of light, and maintaining the intensity of the signal, in very small dimensions.

Ever since the discovery of graphene, a single layer of carbon that can be extracted from graphite with adhesive tape, scientists have been rapidly exploring the world of two-dimensional materials. These materials have unique properties not seen in their bulk form.

Like graphene, MoS2 is made up of layers that are weakly bonded to each other, so they can be easily separated. In bulk MoS2, electrons and photons interact as they would in traditional semiconductors like silicon and gallium arsenide. As MoS2 is reduced to thinner and thinner layers, the transfer of energy between electrons and photons becomes more efficient.

The key to MoS2’s desirable photonic properties is in the structure of its energy band gap. As the material’s layer count decreases, it transitions from an indirect to direct band gap, which allows electrons to easily move between energy bands by releasing photons. Graphene is inefficient at light emission because it has no band gap.

Combining electronics and photonics on the same integrated circuits could drastically improve the performance and efficiency of mobile technology. The researchers say the next step is to demonstrate their primitive circuit with light emitting diodes.

Take Control of Fibromyalgia Knee Pain

More than six million people in the United States are affected by a condition known as fibromyalgia. Those affected experience pain of varying intensity throughout various areas of the body.  Women are more likely than men to suffer with the condition, and the greatest percentage of those women are between the ages of 20 and 50.

There are many signs and symptoms that indicate that fibromyalgia is present, and one of those is knee pain. A great percentage of people affected by fibromyalgia will also suffer with knee pain.

Knee Pain & Fibromyalgia

Knee pain is a symptom that almost all people with fibromyalgia will experience. Doctors think that knee pain from fibromyalgia is due to other pain that you are experiencing, particularly pain in the back, hips and thighs.

The pain is triggered when a knot in one group of muscles pulls on another group of knee muscles, placing pressure on the connective tissues. Knots can be very small or very large, and this can vary from day to day or even week to week.

In most circumstances, the knots are deep down within the muscle, and are oftentimes accompanied by joint pain that can be downright excruciating.

Most people who have knee pain associated with fibromyalgia experience it in one knee first, but in many cases the pain quickly sets in to the second knee as well.

Again, the level of pain experienced within the knees is not the same for any two people, and it can even vary over time for the same person.

One day you might feel fine, while the next day the pain is so excruciating it is difficult to maintain any part of your normal routine. There is no way to know whether you will be affected in just one knee or in both knees.


A Look at Fibromyalgia

Fibromyalgia is one of the most puzzling conditions around. It is oftentimes misdiagnosed because there are so many different problems that a person can experience. No matter what the symptoms, most people with the condition experience life-altering changes that make it hard for them to maintain their daily lifestyle, including work and relationships with other people. Many people who have fibromyalgia also experience depression and anxiety as a result of the inability to do so many different things.

Patients with the condition who experience knee pain may also have difficulty doing things like squatting, bending, or even sitting or standing for long periods of time. The knee pain is most often accompanied by stiffness and pain in the joints and the muscles. There is no known cause of fibromyalgia, but doctors believe there are a number of different triggers that can cause the condition to develop and worsen.

What to do about Fibromyalgia Knee Pain

If you are one of those people bothered by knee pain, the best thing you can do is talk to your doctor, who can evaluate your pain and recommend the best remedies to end it, so that there is one less thing for you to worry over.

The doctor is likely to recommend exercise as one of the remedies for this ailment. But do not start any kind of exercise regimen without first speaking to your doctor and learning the best kinds of exercises to perform, as well as the easiest ways to remedy the problem. The more you keep your body in motion, the less likely you are to suffer with knee pain, or at least the better you can limit its severity. When there are so many things hurting you, being able to minimize the pain is always something to look forward to.

Over-the-counter pain medications can also be used to ease the knee pain that you are experiencing. There are many medications that can be purchased for this, and you might want to purchase the strongest dosage to find the best results. If this does not help – and many times fibromyalgia patients find that these medicines are not strong enough – ask your doctor to prescribe a prescription pain medication. He or she will evaluate your condition and determine which is best for your needs.

Massage is also something that many fibromyalgia patients use to help relieve the pain in their muscles and joints. This may very well be something that helps you too, as it often provides substantial and quick relief.

Hot and cold packs can be used in addition to the massage and other treatment tips listed above. A rotation of cold for 15 minutes, followed by 15 minutes of heat can decrease inflammation while also dulling the pain tremendously. A number of patients with fibromyalgia also have used this to treat their pain.

Finally, make sure that you get plenty of rest and allow the knee (or knees) sufficient time to rest. While it is important to stay active and work the knees to keep them in the best condition, it is just as important to allow them time to rest. You can keep your legs elevated to reduce swelling and inflammation, which in turn also eliminates some of the stress on the knees.

Final Thoughts

Knee pain is something that you are likely to experience with fibromyalgia.  It is just one of the many ill effects of this difficult condition. While it is painful and can cause a number of difficulties in your everyday life, there is hope and help available. Do not hesitate to speak with your doctor to learn more about treating fibromyalgia knee pain, and be sure to use the tips above as well.

In the end, you can manage your knee pain effectively and get back to as normal a life as possible. You have a fighting chance against knee pain and fibromyalgia if you are willing to put forth the effort. Win that battle!

Spinach Extract Decreases Cravings, Aids Weight Loss

Charlotte Erlanson-Albertsson, Lund University

A spinach extract containing green leaf membranes called thylakoids decreases hedonic hunger by up to 95% and increases weight loss by 43%. This has been shown in a recently published long-term human study at Lund University in Sweden.


Hedonic hunger is another term for the cravings many people experience for unhealthy foods such as sweets or fast food, a common cause of obesity and unhealthy eating habits. The study shows that taking thylakoids reinforces the body’s production of satiety hormones and suppresses hedonic hunger, which leads to better appetite control, healthier eating habits and increased weight loss.

“Our analyses show that having a drink containing thylakoids before breakfast reduces cravings and keeps you feeling more satisfied all day”, says Charlotte Erlanson-Albertsson, Professor of Medicine and Physiological Chemistry at Lund University.

The study involved 38 overweight women and ran for three months. Every morning before breakfast the participants had a green drink. Half of the women were given 5 grams of spinach extract and the other half, the control group, were given a placebo. The participants did not know which group they belonged to – the only instructions they received were to eat a balanced diet including three meals a day and not to go on any other diet.

“In the study, the control group lost an average of 3.5 kg while the group that was given thylakoids lost 5 kg. The thylakoid group also found that it was easier to stick to three meals a day – and they did not experience any cravings”, said Charlotte Erlanson-Albertsson.
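
Those two averages are the basis of the 43 percent figure cited above; a quick check of the arithmetic (ours, not additional study data):

$$\frac{5\ \text{kg} - 3.5\ \text{kg}}{3.5\ \text{kg}} \approx 0.43,$$

meaning the thylakoid group lost about 43 percent more weight than the control group.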

The key is the feeling of satiety and the suppression of hedonic hunger, as opposed to homeostatic hunger, which deals with our basic energy needs. Modern processed food is broken down so quickly that the hormones in the intestines that send satiety signals to the brain and suppress cravings cannot keep up. The green leaf membranes slow down the digestion process, giving the intestinal hormones time to be released and communicate to the brain that we are satisfied.

“It is about making use of the time it takes to digest our food. There is nothing wrong with our digestive system, but it doesn’t work well with the modern ‘pre-chewed’ food. The thylakoids extend digestion, producing a feeling of satiety. This means that we are able to stick to the diet we are meant for without snacks and unnecessary foods like sweets, crisps and such”, says Charlotte Erlanson-Albertsson.

Publication: ‘Body weight loss, reduced urge for palatable food and increased release of GLP-1 through daily supplementation with green-plant membranes for three months in overweight women’


Home-Cooked Meals Can Actually Be A Source Of Stress For Some Families

Chuck Bednar for redOrbit.com – Your Universe Online

Despite mainstream media portrayals to the contrary, preparing family meals can be a significant source of stress and conflict for many mothers, researchers from North Carolina State University’s sociology and anthropology department report in the Summer 2014 edition of the journal Contexts.

In the study, NC State associate professors of sociology Dr. Sarah Bowen and Dr. Sinikka Elliott, along with Dr. Joslyn Brenton, an assistant professor of sociology at Ithaca College and a former Ph.D. student at NC State, examined the notion that reforming the food system requires families to eat painstakingly prepared home-cooked meals as a unit.

“Magazines, television and other popular media increasingly urge families to return to the kitchen, stressing the importance of home-cooked meals and family dinners to physical health and family well-being,” the university said in a statement. However, the authors report that preparing such meals can “place significant stresses on many families” and is “simply impossible” for others.

“We wanted to understand the relationship between this ideal that is presented in popular culture and the realities that people live with when it comes to feeding their children,” added Dr. Bowen. To that end, she and her co-authors interviewed 150 female caregivers in families with children between the ages of two and eight, and also conducted 250 hours’ worth of in-depth observations in a dozen of those families.

They found that middle-class, working-class and poor families faced some of the same challenges when it came to preparing family meals, said Dr. Elliott. For instance, mothers of all backgrounds said that it was hard finding enough time to make meals that everyone in the family would be willing to eat.

Furthermore, the study – which was supported by a grant from the USDA National Institute of Food and Agriculture – found that middle-class mothers were torn between their desire to spend quality time with their children and the expectation that they needed to provide those youngsters with home-cooked dinners.

The new study comes just days after McGill University Institute for Health and Social Policy professor Frank Elgar and his colleagues from the US and Canada published research suggesting that regularly eating meals together as a family could provide the social support necessary to help reduce the negative effects of cyberbullying on youngsters.

In that study, Elgar’s team explained that the exchanges which occur during family meal times can benefit the well-being of adolescents, and that this communication and interpersonal contact can ease some of the distressing effects of being bullied online. Their research found that family meal time could moderate the link between cyberbullying and the resulting mental health and substance use problems, including anxiety.

However, while that study found that family meals could potentially reduce stress in victims of online bullying, the new NC State study has found that home-cooked meals can actually be a source of stress for some families – and that financial constraints are one of the main reasons why, though they play out differently depending on socioeconomic status.

As the study authors explain, middle-class mothers expressed concern that they were unable to give their kids the best possible meals because they could not afford to purchase all organic foods, while poor families found that a lack of money made it difficult for them to afford fresh produce, find transportation to grocery stores, or secure the kitchen tools (knives, stoves, pots and pans) required to prepare a home-cooked meal.

“Poor mothers also skipped meals and stood in long lines at non-profit food pantries to provide food for their children,” said Bowen. “This idea of a home-cooked meal is appealing, but it’s unrealistic for a lot of families. We as a society need to develop creative solutions to support families and help share the work of providing kids with healthy meals.”

“There are a lot of ways we could do this, from community kitchens where families work together to arranging to-go meals from schools,” added Elliott. “There is no one answer. But we hope this work inspires people to start thinking outside the family kitchen about broader things we as a society can do when it comes to food and health.”


Fibromyalgia and Dizziness

The symptoms of fibromyalgia alone are enough to make life extremely difficult. Going to work or completing simple household chores can become a challenge, and many people with fibromyalgia are forced to stay in bed for much of the day. Many also lose their jobs or are forced to work from home.

There are many different symptoms of fibromyalgia, which together make it very difficult to diagnose and treat. They include muscle and joint pain and soreness throughout the body, severe headaches, the inability to think or remember clearly, and dizziness. This article will discuss dizziness as a symptom, and as we will see, it does not just make you dizzy: it can bring on even worse headaches, nausea and vomiting, and can last for weeks or even longer.

What exactly is Dizziness?

We’ve all experienced some type of dizziness at points in our lives. Sometimes we just randomly feel dizzy, while other times we’ve had the flu, been sick, or experienced blurred vision. Dizziness is, generally speaking, a feeling of lightheadedness, and the term covers nearly everything that fits that description. The causes of feeling dizzy can range from illness to low blood pressure.

It may not seem like a very big issue, but dizziness is one of the most common medical complaints in America today. Roughly four out of ten Americans report having suffered extensively from dizziness at some point in their lives, and millions of people visit the doctor’s office each year for feeling dizzy.

While dizziness is easy enough to cope with when it occurs only now and then and does not last long, with fibromyalgia it occurs nearly every day and lasts for extended periods. An estimated seventy percent of people who suffer from fibromyalgia also suffer from dizziness on a regular basis.


Why Does Dizziness Happen?

Our body relies on a balance system to keep us, well, balanced. Our brain combines messages received from various parts of the body, and from them it can work out our position and orientation. If this system did not exist, it would be very difficult even to stand!

So think about this: when exactly do you feel dizzy? Is it when you’re just walking or standing around at home or at work, while you’re driving a vehicle and have motion sickness, or does it just hit you randomly? Our body’s balance system draws on different senses during different activities. If you have motion sickness, chances are you are going to feel dizzier while driving or on the road than you would at home.

So why do we feel dizzy sometimes? It largely comes down to how well the body circulates blood. When we feel dizzy, it is because the body is not circulating enough blood, causing weakness, sweating and lightheadedness, which combine to make us feel dizzy. If you suffer from dizziness often, and especially if you also have fibromyalgia, you should seek treatment immediately.

The condition of not getting enough blood flow through the body is called neurally mediated hypotension, and it also makes it much harder for the body to regulate blood flow. When you stand, blood rushes down to your legs. When you run, your heart rate increases and your blood vessels tighten, meaning your heart has to pump more blood through your body. In people with neurally mediated hypotension, however, the heart rate struggles to increase and may actually drop while running or doing similar exercise. This prevents the body from getting the blood it needs and, in turn, leads to dizziness.

What Kinds of Dizziness Are There?

When you feel dizzy, you just feel dizzy, right?  Actually, it’s not all that simple.  There are many different types of dizziness, meaning that you can experience different symptoms.  It’s when these symptoms persist that you should go to your doctor.

Lightheadedness is the type of dizziness that makes your head feel light. It can lead to nausea, which in turn can lead to vomiting and diarrhea. This is the most common form of dizziness.

Near fainting is the kind of dizziness that makes you feel you could pass out. You usually feel it when you stand up after sitting for a long period of time; your vision may blur or black out altogether, a sign that not enough blood has been flowing to your brain.

Unsteadiness is the type of dizziness that makes you feel off-balance. When you feel this kind of dizziness, you may feel that you could simply topple over and fall down. Unsteadiness is most often caused by poor vision or arthritis in the joints, and it affects older people far more often than the young.

Vertigo is the type of dizziness that makes you feel like you are spinning or floating. It can be especially distressing, as it can go on for entire days without end and often comes with nausea, so you might suffer from vomiting and diarrhea at the same time. If you believe you may be suffering from vertigo, you need to get to the doctor’s office as soon as you can (although you might want someone else to drive).

There are other common symptoms that accompany dizziness, too. These include headaches, excessive sweating, blurred or distorted vision, difficulty hearing, and fainting. If you are experiencing any of these symptoms in addition to your fibromyalgia, it is important that you consult your doctor or another medical professional immediately.

Secure Smartphone Maker Issues Warning About Fake Cell Towers

Chuck Bednar for redOrbit.com – Your Universe Online
Fake cell towers could be attacking your smartphone between 80 and 90 times per hour, according to various media outlets, and new reports suggesting they do not appear to be instruments of the National Security Agency (NSA) have raised the question: exactly who is responsible for installing and operating these devices?
Last week, the existence of these pseudo-cell towers was revealed to Popular Science writer Andrew Rosenblum by ESD America CEO Les Goldsmith, whose company builds and markets the ultra-secure CryptoPhone 500. While demonstrating his product’s capabilities, Goldsmith showed Rosenblum a map showing the location of phony cell towers that he and his phone’s users found spread throughout the US during the month of July alone.
These fake cell towers are known as “interceptors,” and to a typical smartphone, they look like an ordinary tower. However, once the mobile device connects with the interceptor, it can be targeted by a variety of “over-the-air” attacks, ranging from spyware attacks to eavesdropping on calls, the Popular Science reporter noted.
According to Lucian Armasu of Tom’s Hardware, “The CryptoPhone is a security-‘hardened’ Galaxy S3 device, and the company has removed 468 vulnerabilities from the stock operating system.” Ordinary cell phones can be targeted due to “vulnerabilities in the baseband software of the device and poor encryption algorithms,” he added.
“Most of the attacks from these fake cell towers happen against the baseband processor of the phones,” Armasu explained in a report Tuesday. “The software for these baseband processors is usually just a proprietary black box that doesn’t allow anyone to see what’s happening inside other than the company making the baseband processor or hackers who have found vulnerabilities in it.”
“Interceptor use in the U.S. is much higher than people had anticipated. One of our customers took a road trip from Florida to North Carolina and he found 8 different interceptors on that trip,” Goldsmith told Rosenblum. When asked who might be operating the fake cell towers, and for what purpose, he said that it was “suspicious… that a lot of these interceptors are right on top of US military bases… [but] we really don’t know whose they are.”
VentureBeat writer Barry Levine spoke with cloud security expert Andrew Jaquith on Tuesday, and the SilverSky CTO/SVP told him it was unlikely the towers were NSA projects. “The NSA doesn’t need a fake tower,” he told Levine. “They can just go to the carrier” to gain access to a user’s communications. Goldsmith agreed, suggesting that they could belong to the military, based on their locations, or to law enforcement agencies.
Stephen Ellis, manager of cyber threat intelligence at security firm iSIGHT Partners, told Levine that the discovery of the towers “appears to confirm real-world use of techniques that have been highlighted by researchers for years.” While he said his company could not confirm the report’s accuracy without additional research, Ellis noted that he is “highly confident” iSIGHT has “observed real-world use of this technique” by cybercriminals.
“We have observed and reported on cases in other parts of the world where actors are known to have set up fake base stations to send spoofed SMS messages, possibly to send spam or to direct unsuspecting victims to malicious websites,” he added. Levine noted that the US Federal Communications Commission (FCC) had recently launched a probe into the use of cell network interceptors by both criminal gangs and foreign intelligence agencies.
One warning sign that might alert users of regular, less secure smartphones is a sudden dip in network quality, explained Computerworld’s Darlene Storm. Goldsmith said that tests conducted on various devices showed that the CryptoPhone “lit up like a Christmas tree.”
An Apple iPhone reportedly had no reaction to the interceptor, he added, but a regular Samsung Galaxy S4 bounced between 4G and 3G networks, Storm said. Dipping down to 3G or even 2G could be a sign that the phone is being affected by a fake cell tower, but she cautioned that some interceptor devices claim to be “undetectable,” meaning this technique might not always work.
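
As a purely illustrative sketch of that warning sign (a toy heuristic constructed for this article, not code from ESD America or any real detector), flagging a sudden drop in network generation might look like this:

```python
# Toy heuristic: flag a sudden, large downgrade in network generation
# (e.g. 4G straight down to 2G) as a possible sign of an interceptor.
# Network readings are assumed to be sampled periodically by the phone.
GENERATION_RANK = {"2G": 2, "3G": 3, "4G": 4}

def suspicious_downgrades(samples, min_drop=2):
    """Return indices where the generation drops by min_drop or more
    compared with the previous sample."""
    flags = []
    for i in range(1, len(samples)):
        drop = GENERATION_RANK[samples[i - 1]] - GENERATION_RANK[samples[i]]
        if drop >= min_drop:
            flags.append(i)
    return flags

# Example: a phone that falls from 4G straight to 2G near a fake tower.
readings = ["4G", "4G", "4G", "2G", "2G", "3G", "4G"]
print(suspicious_downgrades(readings))  # -> [3]
```

As Storm notes, this alone is not reliable: an interceptor that successfully advertises itself as a normal 4G tower would never trip such a check.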

Researchers Map And Name The Region Of The Universe Containing The Milky Way

Chuck Bednar for redOrbit.com – Your Universe Online
Using a new mapping technique that takes into account the motions of nearby galaxies, and not just their distances, researchers from the University of Hawaii have discovered that the Milky Way resides on the outer edge of a massive, previously undetected supercluster of galaxies that they have dubbed Laniakea.

Laniakea, a moniker created from the words meaning “immeasurable heaven” in Hawaiian, spans 520 million light-years in diameter, explained Irene Klotz of Discovery News.
Based on its boundaries, which were identified by charting the flow of over 8,000 surrounding galaxies, Laniakea is more than five times larger than the cluster previously believed to have been home to the Milky Way.
“This discovery clarifies the boundaries of our galactic neighborhood and establishes previously unrecognized linkages among various galaxy clusters in the local Universe,” the National Radio Astronomy Observatory (NRAO) said in a statement Wednesday.
“We have finally established the contours that define the supercluster of galaxies we can call home,” added lead researcher R. Brent Tully, an astronomer at the University of Hawaii at Manoa. “This is not unlike finding out for the first time that your hometown is actually part of a much larger country that borders other nations.”
A paper detailing the researchers’ work, which will be featured as the cover story in the September 4 edition of the journal Nature, also reveals that the Laniakea supercluster contains the mass of one hundred million billion suns spread across 100,000 galaxies.
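For a sense of scale, a back-of-the-envelope division (ours, not the paper’s) spreads that mass evenly over the member galaxies:

$$\frac{10^{17}\ M_\odot}{10^{5}\ \text{galaxies}} = 10^{12}\ M_\odot\ \text{per galaxy},$$

roughly the total mass, dark matter included, commonly estimated for a large galaxy like the Milky Way.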
The largest structures in the known universe, superclusters are made up of groups containing dozens of galaxies, as well as massive clusters containing hundreds of galaxies. Even though all of these galaxies are interconnected in a web of filaments, the researchers pointed out that they tend to have poorly defined boundaries.
“To better refine cosmic mapmaking, the researchers are proposing a new way to evaluate these large-scale galaxy structures by examining their impact on the motions of galaxies,” the NRAO said. Galaxies between structures are caught in “a gravitational tug-of-war” in which the balance of those forces will determine the galaxy’s motion.
By using the National Science Foundation’s Green Bank Telescope (GBT) and similar instruments, the team mapped the velocities of galaxies throughout our local universe and was able to define the region of space where each supercluster was dominant. Thanks to these techniques, Tully and colleagues were able to carefully map the extent of the Laniakea supercluster for the first time.
According to Reuters, the new maps showed that, in addition to the Milky Way, the Virgo cluster and roughly 100,000 other galaxies are part of Laniakea. Furthermore, this supercluster is bordered by the Shapley, Hercules, Coma and Perseus-Pisces superclusters, though the far edges of the neighboring galaxy complexes have not yet been fully determined.
“We don’t have the distance information to see the far sides of… our (super-cluster) neighbors and we haven’t seen far enough to understand what’s causing this full motion of our galaxy,” Tully told Klotz. “That’s really the goal, to look out far enough – probably three times farther than we are right now, probably requiring many thousands of more distance measurements, to map this larger region.”
Astronomer Elmo Tempel of the Tartu Observatory in Estonia added that since Laniakea “is the biggest structure in the local universe,” he was surprised that the supercluster had not been discovered sooner. He added that the discovery “is definitely interesting and hopefully will initiate studies that will map the local universe in more detail,” and that the study “will give us new perspective (on) how to analyze these problems in observations.”

‘Drink Responsibly’ Messages In Alcohol Ads Promote Products, Not Public Health

Andrea Maruniak, Johns Hopkins University Bloomberg School of Public Health

Nine out of 10 encourage responsibility; none provide real information about what that means

Alcohol industry magazine ads reminding consumers to “drink responsibly” or “enjoy in moderation” fail to convey basic public health information, according to a new study from the Johns Hopkins Bloomberg School of Public Health.

A report on the research, published in the September issue of Drug and Alcohol Dependence, analyzed all alcohol ads that appeared in U.S. magazines from 2008 to 2010 to determine whether messages about responsibility define responsible drinking or provide clear warnings about the risks associated with alcohol consumption.

According to the study, most of the ads analyzed (87 percent) incorporated a responsibility message, but none actually defined responsible drinking or promoted abstinence at particular times or in certain situations. When responsibility messages were accompanied by a product tagline or slogan, the messages were displayed in smaller font than the company’s tagline or slogan 95 percent of the time.

Analysis of the responsibility messages found that 88 percent served to reinforce promotion of the advertised product, and many directly contradicted scenes depicted in the ads. For example, a vodka ad displayed a photograph of an open pour of alcohol with a tagline that implied the drinker had been partying all night. In small lettering, the same ad advised the audience to enjoy the product responsibly.

“While responsibility messages were present in almost nine out of ten ads, none of them provided any information about what it means to drink responsibly,” says study leader Katherine Clegg Smith, PhD, an associate professor in the Department of Health, Behavior and Society at the Johns Hopkins Bloomberg School of Public Health. “Instead, we found that the vast majority of responsibility messages were used to convey promotional information, such as appealing product qualities or how the product should be consumed.”

Federal regulations do not require “responsibility” statements in alcohol advertising, and while the alcohol industry’s voluntary codes for marketing and promotion emphasize responsibility, they provide no definition for “responsible drinking.”

“The contradiction between appearing to promote responsible drinking and the actual use of ‘drink responsibly’ messages to reinforce product promotion suggests that these messages can be deceptive and misleading,” said David Jernigan, PhD, director of the Center on Alcohol Marketing and Youth at the Johns Hopkins Bloomberg School of Public Health.

A better option for promoting responsible drinking in advertising would be to replace or supplement unregulated messages with prominently placed, tested warning messages that directly address behaviors presented in the ad and that do not reinforce marketing messages, Smith says.

“We know from experience with tobacco that warning messages on product containers and in advertising can affect consumption of potentially dangerous products,” she says. “We should apply that knowledge to alcohol ads and provide real warnings about the negative effects of excessive alcohol use.”

The research was funded under a cooperative agreement from the Centers for Disease Control and Prevention.

“Defining strategies for promoting product through ‘drink responsibly’ messages in magazine ads for beer, spirits and alcopops” was written by Katherine Clegg Smith, Samantha Cukier and David H. Jernigan.

Researchers Observe The Phenomenon Of “Lithium Plating” During The Charging Process

Technische Universität München

Lithium-ion batteries are seen as a solution for energy storage of the future and have become indispensable, especially in electromobility. Their key advantage is that they are able to store large amounts of energy but are still comparatively light and compact. However, when metallic lithium forms and deposits during charging it can lead to a reduced battery lifespan and even short-circuits. Scientists at the Technische Universität München (TUM) have now managed to peer into the inner workings of a battery without destroying it. In the process, they have resolved the so-called lithium plating mystery.

Mobile phones, digital cameras, camcorders, notebooks: They all run on lithium-ion batteries. These are characterized by high energy densities while remaining small and light enough to be used in portable devices. “A lithium-ion battery can store three to four times the energy of a comparably sized nickel-cadmium battery,” explains Dr. habil. Ralph Gilles, scientist at the Neutron Source Heinz Maier-Leibnitz (FRM II). Even temperature fluctuations and longer-term storage do not pose problems for lithium-ion batteries.

These advantages make lithium-ion batteries a key technology for electromobility. In the not too distant future, electric vehicles will be able to hold their own against liquid fuel-driven vehicles, including in terms of driving range. This will require powerful, safe and fast-charging batteries.

Lithium plating can cause short-circuits

However, one previously known, yet poorly understood phenomenon stands in the way of this goal: metallic lithium deposition or lithium plating, as it is called.

Simply put, energy storage in a lithium-ion battery works by the following principle: Both the positive electrode (cathode) and the negative electrode (anode) can bind lithium ions. During the charging process, the induced electrical field forces the ions to move from the cathode to the anode. When the battery is discharged, the lithium ions move back to the cathode, releasing energy in the process.

The cathode in lithium-ion batteries comprises a lithium metal oxide while the standard material for battery anodes is graphite (carbon) with a layered structure. During the charging process, the lithium ions are stored in these layers.

However, occasionally lithium ions form metallic lithium instead of intercalating into the anode, as desired. The lithium deposits onto the anode and is no longer fully available for the described process. The result is a drop in battery performance. In extreme cases this can even lead to short-circuits. In addition, metallic lithium is highly inflammable.
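
In standard electrochemical notation (a textbook summary, not drawn from the TUM publication itself), the desired reaction at the graphite anode during charging and the competing plating reaction are:

$$\mathrm{Li^{+} + e^{-} + C_{6} \longrightarrow LiC_{6}} \quad \text{(intercalation)}$$
$$\mathrm{Li^{+} + e^{-} \longrightarrow Li_{(s)}} \quad \text{(metallic plating)}$$

Every lithium ion that takes the second path ends up deposited on the anode surface rather than stored between the graphite layers.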

Non-destructive investigation using neutrons as a probe

Hitherto, observing the precise mechanism at work during lithium plating has not been possible. When a battery is opened, explains Ralph Gilles, you only get a snapshot of its present state. Yet, the amount of metallic lithium changes permanently. Using neutron beams, the scientists Dr. Veronika Zinth at the Neutron Source Heinz Maier-Leibnitz (FRM II) and Christian von Lüders at the Department of Electrical Energy Storage Technology were able to observe the processes inside of batteries without cutting them open.

“In contrast to other methods, with neutron diffraction we can make more precise statements about when and how strongly lithium plating takes place,” explains Veronika Zinth.

Using the materials research diffractometer STRESS-SPEC at FRM II, the researchers placed a battery into the neutron beam and observed it in both charging and discharging states. The incident neutron beam is diffracted according to Bragg’s law and collected in a detector. From these signals, the researchers were able to indirectly deduce how much metallic lithium had formed.
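
Bragg’s law, the relation referred to here, ties the neutron wavelength $\lambda$ to the spacing $d$ between crystal lattice planes and the scattering angle $\theta$:

$$n\lambda = 2d\sin\theta, \qquad n = 1, 2, 3, \ldots$$

Since the lattice spacing of graphite changes as lithium intercalates between its layers, shifts in the measured diffraction peaks reveal, in effect, how much lithium entered the graphite rather than plating out as metal.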

Faster charging means more metallic lithium

Initial results of the study:

– The faster the charging process, the more metallic lithium is formed. Up to 19 % of the lithium ions normally involved in the charging and discharging process take on the metallic form. (The measurements were made at -20 degrees Celsius).

– During a 20-hour resting phase following a fast recharge, some of the metallic lithium reacts with the graphite, intercalating between the graphite layers as lithium ions. It is effectively a delayed, slow charging process. However, only part of the lithium plating is reversible.

– Low temperatures encourage the formation of metallic lithium.

The scientists are planning further experiments to shed more light on the lithium plating mechanism. The results may help answer the question of how the phenomenon might be averted altogether. This will also involve answering the question of how quickly batteries can be charged before lithium plating sets in.

The study is part of the German Federal Ministry of Education and Research (BMBF) ExZellTUM (Excellence Center for Battery Cells) project. The ExZellTUM project is geared towards the development of new energy storage systems, as well as new manufacturing processes, forming strategies and test technologies for storage systems production. The project comprises four partners: the Department of Electrical Energy Storage Systems, the Department of Machine Tools and Industrial Management, the Department of Technical Electrochemistry and the Neutron Source Heinz Maier-Leibnitz.

Publication: ‘Lithium plating in lithium-ion batteries at sub-ambient temperatures investigated by in situ neutron diffraction’, Veronika Zinth, Christian von Lüders, Michael Hofmann, Johannes Hattendorff, Irmgard Buchberger, Simon Erhard, Joana Rebelo-Kornmeier, Andreas Jossen, Ralph Gilles, Journal of Power Sources, DOI: 10.1016/j.jpowsour.2014.07.168


Researchers Find Low-Carb Diets Are Superior To Low-Fat For Weight Loss, Heart Health

Chuck Bednar for redOrbit.com – Your Universe Online
Low-fat or low-carb – which type of diet is better when it comes to battling obesity and maintaining heart health? Researchers from the Tulane University School of Public Health and Tropical Medicine are weighing in on the longstanding debate.
In the latest edition of the Annals of Internal Medicine, lead author Dr. Lydia Bazzano, a professor of nutrition research at the New Orleans-based university, along with colleagues from Kaiser Permanente Southern California and the Johns Hopkins Bloomberg School of Public Health in Baltimore, explain that restricting carbohydrates was more effective for weight loss and cardiovascular risk factor reduction.
The study authors recruited 148 obese participants and assigned them to one of two specialized diets. One group consumed less than 40 grams of digestible carbs per day, while the other consumed less than 30 percent of their daily calories from fats. While both groups were given dietary advice, neither had strict calorie or exercise goals.
Twelve months later, the low-carb group had lost an average of 7.7 pounds more than the low-fat group, the researchers revealed. Furthermore, blood levels of certain fats that are predictors of heart disease risk also improved more in the low-carb group. Low-density lipoprotein cholesterol was about the same in both groups, but the low-carb group saw an increase in high-density lipoprotein cholesterol and a decline in the ratio of bad to good cholesterol.
“Over the years, the message has always been to go low-fat,” Dr. Bazzano said in a statement Tuesday. “Yet we found those on a low-carb diet had significantly greater decreases in estimated 10-year risk for heart disease after six and 12 months than the low-fat group.”
However, as she told Lauren Raab of the Los Angeles Times, “This isn’t a license to hit the butter and meat fats.” Dr. Bazzano explained that both groups were given instructions to continue being as physically active as was normal for them, and that pre-study counseling covered topics such as meal planning, portion size and reading nutrition labels.
The study participants were also taught about the different types of fats, she told Raab. They were instructed that monounsaturated fats (like canola and olive oils) and polyunsaturated fats (those found in nuts and fish) were “recommended,” and that saturated fats (those solid at room temperature) were “not recommended.”
As Dr. Bazzano told USA Today’s Kim Painter, the low-carb diet participants “were not eating butter and burgers at every meal.” They got 41 percent of their calories from fat, but only 13 percent from saturated fats, meaning that their diets were rich in things like olive oil, canola oil, nuts and avocados, the Tulane University professor added.
“It’s not a license to go back to the butter, but it does show that even high-fat diets – if they are high in the right fats – can be healthy and help you lose weight,” she added. However, as University of Colorado School of Medicine obesity researcher James Hill told Painter, the research did not address the issue of long-term weight maintenance, and increased exercise (which was discouraged in the study) could require higher levels of carbohydrates.
“The study isn’t the last word on what kind of diet is best,” Raab added. “An analysis published Tuesday in the Journal of the American Medical Association looking at 48 studies involving overweight or obese participants found significant weight loss with any low-carbohydrate or low-fat diet.”

Ocean Connectivity Map Sheds New Light On Garbage Patch Formation

Chuck Bednar for redOrbit.com – Your Universe Online
Scientists from the University of New South Wales in Australia have developed a new model which divides the world’s oceans into seven primary regions that experience little intermingling of water, but their research has also revealed the existence of flotillas of garbage located in large, circular ocean currents known as gyres.
One of those trash flotillas, the Great Pacific Garbage Patch, is a region of environmental concern located between California and Hawaii, the researchers explained. In this region, pieces of plastic are scattered all over the ocean surface, outnumbering plankton in that area of the ocean and posing risks to fish, turtles and birds if consumed.
It is believed to be one of five such locations, and according to the UNSW team, the new model could help determine which nations are to blame for the creation of each garbage patch. Their discovery is part of a larger study, published Tuesday in the journal Chaos, which investigates how well the ocean’s surface waters mix.
“In some cases, you can have a country far away from a garbage patch that’s unexpectedly contributing directly to the patch,” UNSW mathematician Gary Froyland said in a statement. For instance, even though Madagascar and Mozambique border the Indian Ocean, debris from those countries would likely flow into the south Atlantic.
Erik van Sebille, an oceanographer who worked with Froyland on the study, added that the new model could also help experts determine how quickly refuse leaks from one patch into another. As he explained, scientists “can use the new model to explore, for example, how quickly trash from Australia ends up in the north Pacific.”
Winds, differences in water temperatures, global salinity gradients and forces caused by the rotation of the Earth all play a role in forming fast-moving ocean currents, the study authors explained. Currents stir ocean waters, but they also act as barriers limiting how much water in different ocean regions can mix. They compare it to the blast of air at the entrance of an air-conditioned building, which keeps the cold air inside and the warm air outside from mixing.
Along with UNSW colleague Robyn Stuart, Froyland and van Sebille divided the world’s waters into seven primary regions marked by extremely little mixing of waters. The team borrowed mathematical concepts from a field known as ergodic theory (which has also been used to partition interconnected systems such as computer chips and the Internet) to complete their work without needing to conduct complex simulations.
“Instead of using a supercomputer to move zillions of water particles around on the ocean surface, we have built a compact network model that captures the essentials of how the different parts of the ocean are connected,” Froyland explained. Based on their research, some portions of the Pacific and Indian oceans are actually more closely associated to the south Atlantic, while another section of the Indian Ocean is actually linked to the south Pacific.
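
To make the idea concrete, here is a minimal sketch of such a network model (the box layout, toy trajectories and clustering step are all invented for illustration; this is not the authors’ code or data). The eigenvector for the second-largest eigenvalue of the box-to-box transition matrix splits the domain into weakly communicating regions, the almost-invariant-sets idea ergodic theory provides:

```python
import numpy as np

def transition_matrix(start_boxes, end_boxes, n_boxes):
    """Row-stochastic matrix: P[i, j] = fraction of particles that move
    from box i to box j over one time step."""
    P = np.zeros((n_boxes, n_boxes))
    for i, j in zip(start_boxes, end_boxes):
        P[i, j] += 1.0
    row_sums = P.sum(axis=1, keepdims=True)
    return np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

# Toy data: boxes 0-2 mix among themselves, boxes 3-5 likewise,
# with a single weak link from box 2 into box 3.
starts = [0, 0, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5]
ends   = [1, 2, 0, 2, 0, 1, 3, 4, 5, 3, 5, 3, 4]
P = transition_matrix(starts, ends, 6)

# The eigenvector belonging to the second-largest eigenvalue of P
# separates the two weakly connected regions by its sign pattern.
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
v = eigvecs[:, order[1]].real
v = v * np.sign(v[np.argmax(np.abs(v))])   # eigenvector sign is arbitrary; fix it
print((v > 1e-9).astype(int))              # e.g. [1 1 1 0 0 0]: two basins
```

A full model would use vastly more trajectory segments and boxes, but the principle, a compact matrix capturing how the parts of the ocean are connected, is the same.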
“The take-home message from our work is that we have redefined the borders of the ocean basins according to how the water moves,” added van Sebille. He and his colleague believe that the geography of these new basins could help scientists learn more about ocean ecology, and make it easier to monitor aquatic debris. They added that they believe their technique could also be used to model smaller-scale bodies of water such as the Great Lakes, or to predict how an oil spill could spread through the Gulf of Mexico.

Google To Partner With Award-Winning Quantum Computer Researchers

Chuck Bednar for redOrbit.com – Your Universe Online
One of the world’s largest consumer technology companies is entering into the quantum computing market, as Google announced this week that it plans to team with researchers at UC Santa Barbara to build processors based on superconducting electronics.
The Quantum Artificial Intelligence Lab, which was launched by Google in May, is operated out of NASA’s Ames Research Center in Moffett Field, California and uses a quantum computer from D-Wave Systems to study the application of quantum optimization to difficult problems in artificial intelligence. The Universities Space Research Association (USRA) is also a project partner.
On Tuesday, Google Director of Engineering Hartmut Neven confirmed that John Martinis and his team at UC Santa Barbara were also joining the research project. Martinis, who was recently presented with the London Prize for his work in quantum control and quantum information processing, and his colleagues “have made great strides in building superconducting quantum electronic components of very high fidelity,” Neven said.
“With an integrated hardware group the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights as well as our learnings from the D-Wave quantum annealing architecture,” he added, noting that they would continue to work with D-Wave scientists and planned to upgrade their “Vesuvius” machine to “a 1000 qubit ‘Washington’ processor.”
According to Reuters reporters Subrat Patnaik and Arnab Sen, while Google is best known for its work on search engines, mobile device technology, self-driving cars and robotics projects, the Mountain View, California-based firm has also become increasingly interested in the field of artificial intelligence – even going as far as acquiring AI startup DeepMind Technologies Ltd in January to gain an edge in the burgeoning field.
GigaOM’s Derrick Harris explained that even though Google is not yet severing ties with D-Wave, it ultimately plans to develop its own quantum computing hardware. After all, he explains, “the company has long designed its own servers and switches, and is pushing an artificial intelligence agenda that includes smartphones, robots and driverless cars. If Google, or anyone, is going to solve the very hard AI problems these technologies present, they probably can’t sit around and wait for someone else to build the right systems for them.”
“Both the UCSB and D-Wave systems require cooling to nearly absolute zero, or minus 459 degrees Fahrenheit. But there are some technical differences,” added Don Clark of the Wall Street Journal. Earlier this year, Martinis and his associates published research featuring “a five-qubit array that showed advances in correcting certain errors that can occur during the fragile conditions that create quantum effects.”
Martinis told Clark he is hopeful the new project will produce technology that “will not lose its memory” as quickly as earlier hardware, and that he expected his team would actually benefit from Google’s affiliation with D-Wave. “We view this as a complementary approach to what D-Wave is doing,” he explained.
The ultimate goal in quantum computing is to develop machines capable of performing calculations millions of times faster than today’s computers by using qubits (quantum bits) instead of electrical transistors to represent the ones and zeros of binary computing, according to AFP reports. The news agency added that there are reports suggesting Microsoft is also exploring quantum computing technology.
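
The standard textbook picture behind that goal (general quantum-computing notation, not something from the article): a single qubit can occupy a superposition of both classical values,

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1,$$

so a register of $n$ qubits is described by $2^{n}$ complex amplitudes at once, which is the property certain quantum algorithms exploit for their potential speedups.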

NASA Planning To Send 3D Printer Technology To ISS Later This Year

Chuck Bednar for redOrbit.com – Your Universe Online
International Space Station crew members currently forced to wait for resupply vehicles to arrive with essential items could soon benefit from the arrival of a new 3D printer later this year, NASA officials announced on Tuesday.
The device, which was constructed by Made In Space Inc. and passed flight certification and acceptance testing at NASA’s Marshall Space Flight Center in Huntsville, Alabama back in April, is expected to make its way to the ISS later this year aboard the SpaceX-4 resupply mission, the US space agency said.
The 3D printer will be the first to ever leave the Earth’s atmosphere, and NASA is banking on it being a game-changer. They hope that it will demonstrate that the technology can work normally in the orbital laboratory’s microgravity environment, and that it will be able to produce parts equal in quality to those made on the ground.
“It works by extruding heated plastic, which then builds layer upon layer to create three-dimensional objects,” explained Jessica Eagan of the International Space Station Program Science Office at Marshall Space Flight Center. “Testing this on the station is the first step toward creating a working ‘machine shop’ in space.”
“This capability may decrease cost and risk on the station, will be critical when space explorers venture far from Earth and will create an on-demand supply chain for needed tools and parts,” she added. “If the printer is successful, it will not only serve as the first demonstration of additive manufacturing in microgravity, but it also will bring NASA… a big step closer to evolving in-space manufacturing for future missions to destinations such as an asteroid and Mars.”
Made In Space received a Small Business Innovation Research (SBIR) contract through Marshall’s 3-D Printing In Zero-G Technology Demonstration program to build the device. The project is supported by the International Space Station Technology Development Office in Houston, as well as the Advanced Human Exploration and Operations Mission Directorate and the Game Changing Development Program at NASA HQ in Washington.
If proven to be successful, the technology would greatly benefit long-term space missions thanks to the onboard manufacturing capabilities it would provide, explained NASA. The data and knowledge gained during this demonstration will improve future 3D manufacturing technology and equipment for use by the space program, while allowing astronauts to have a greater degree of autonomy and flexibility during missions, the agency added.
“I remember when the tip broke off a tool during a mission,” said NASA astronaut TJ Creamer, a member of the Expedition 22/23 crew from December 2009 to June 2010. “I had to wait for the next shuttle to come up to bring me a new one. Now, rather than wait for a resupply ship to bring me a new tool, in the future, I could just print it.”
The time required to print a new tool or instrument would depend on the size and complexity of the part, ranging anywhere from 15 minutes to one hour on average, the US space agency said. Instructions can be pre-loaded onto the printer in the form of a computer-aided design model, or uplinked from the ground to the station printer, which can be operated primarily from Marshall’s Operations Support Center.
“This means that we could go from having a part designed on the ground to printed in orbit within an hour or two from start to finish,” said NASA’s 3-D print project manager Niki Werkheiser. “The on-demand capability can revolutionize the constrained supply chain model we are limited to today and will be critical for exploration missions.”
The printer will decrease both cost and risk, while also increasing efficiency, NASA said. Ken Cooper, principal investigator for 3D printing at Marshall, called the project “the first step in sustaining longer missions beyond low-Earth orbit.”
“NASA is great at planning for component failures and contingencies; however, there’s always the potential for unknown scenarios that you couldn’t possibly think of ahead of time,” he added. “That’s where a 3-D printer in space can pay off.”

Too Much Sex? Roscosmos Confirms Geckos Died In Space

Chuck Bednar for redOrbit.com – Your Universe Online
Five geckos sent into space so that researchers could analyze their reproductive habits in microgravity have died in orbit, officials from the Russian Federal Space Agency (Roscosmos) confirmed on Monday.
The four male and one female lizards were sent into space on a Photon-M4 satellite on July 19 and returned to Earth 44 days later, explained Howard Amos of The Telegraph. However, when Roscosmos scientists traveled to the probe’s landing site (a field in the Orenburg Region of southern Russia), they found that none of the geckos had survived.
BBC News said that Russian media outlets are claiming that the geckos might have frozen to death after a heating system failure in the Photon-M4 satellite. Despite those reports, however, Russian space officials have only said that the demise of the creatures was being investigated.
According to The Guardian reporter Alan Yuhas, a representative at the Institute of Biomedical Problems, which was involved in the experiment, told Russian news agency Itar-Tass that it was “too early to talk about the geckos’ cause of death.”
However, an unidentified source in the scientific commission later told the Interfax wire service that it was “clear” that the geckos froze, probably “due to a failure of the equipment meant to ensure the temperature of the box with the animals,” Yuhas added. That source also said that the creatures “could have died at any stage of the flight, and it’s impossible to judge when based on the animals’ mummified remains.”
Not all of the news was bad, however – Drosophila fruit flies that had also been traveling on the satellite not only survived, but successfully reproduced in outer space as well. Mushrooms and plant seeds were also being monitored as part of the experiment, according to BBC News, and a special vacuum furnace was being used to analyze the solidification and melting of metal alloys in extremely low-gravity conditions.
“While scientists were unable to follow exactly what the geckos were up to in real time, cameras installed inside the satellite means they will now examine the footage to see whether the geckoes managed to have sex, and when and how they met their demise,” Amos said. “One unnamed Russian academic involved in the project… told Russian news agencies that the lizards had died fairly early on in the flight.”
“We can say with confidence that they died at least a week before the landing because their bodies were partly mummified,” that individual said, according to The Telegraph. “Hypothermia is not the main possible cause but only one of the options. Others include a possible malfunction of the on-board equipment and life-support system.”
Less than a week after the launch of the Photon-M4 satellite, Roscosmos officials revealed that the probe was no longer responding to commands, leading to fears that the experiments could have been jeopardized. Fortunately, however, they were able to re-establish contact with the probe on July 29, allowing the mission to continue as scheduled.

German Court Issues Temporary Injunction Banning Uber Ride-Sharing Service

Chuck Bednar for redOrbit.com – Your Universe Online
Uber, the rapidly-growing US-based online ride-sharing service, has been hit with a temporary injunction by a Frankfurt court prohibiting it from operating throughout Germany, various media outlets have reported.
According to Ben Knight of The Guardian, the court ruled that the mobile app and chauffeuring service violated the country’s Passenger Transportation Act. The injunction, which was issued following expedited proceedings, will remain in effect until a full hearing takes place and threatens Uber with a €250,000 (approximately $328,000) fine per ride.
The case against the ride-sharing service was brought by Taxi Deutschland Servicegesellschaft (TDS), which operates an app that links smartphone users to registered taxi drivers, Knight added. The company claimed that Uber was not a legitimate service since its drivers lacked the proper permits, were not adequately insured, and were not subject to any oversight. A TDS representative said the company was “very happy” with the ruling.
Arne Hasse, a spokesman with Frankfurt state court, told the Associated Press (AP) that the decision means that Uber cannot offer its services without a specific permit required by German transportation regulations. The ruling also comes after authorities in Berlin banned the service from operating in the nation’s capital due to safety concerns, the wire service added.
In a statement, San Francisco-based Uber said that it would use “all legal means” to fight the case, and according to Christopher Williams of The Telegraph, the company vowed that it would continue servicing German customers in spite of the temporary nationwide ban and the threat of the hefty fines attached to that injunction.
A company spokesperson told BBC News reporter Kevin Rawlinson that the ban was not enforceable while the appeals process was ongoing. Rawlinson noted that a check of the company’s app verified that drivers were still offering to pick up customers in several German cities, including Munich, Berlin, Hamburg, Frankfurt and Dusseldorf.
“Germany is one of the fastest growing markets for Uber in Europe. We will continue to operate in Germany and will appeal the…lawsuit,” that spokesperson explained. “You cannot put the brakes on progress. Uber will continue its operations and will offer UberPop ridesharing services via its app throughout Germany.”
“The law says there are safety regulations for drivers and safety regulations for users, and these also apply to neo-liberal firms like Uber,” TDS spokeswoman Floetenmeyer told the Guardian, adding that since Uber had said it planned to ignore the verdict, TDS would formally petition the court to enforce the relevant fines.
“If you get into a car, you are legally in the hands of the driver with your life and your personal health and safety, and the driver has to play by the German rules,” she continued, noting that the company could also seek to have additional fines imposed on drivers using the Uber app. “If they don’t play by the rules, this is worth up to €25,000 (about $32,800) per drive, per driver.”
This is a high-stakes legal battle for Uber, whose German userbase has grown fivefold this year alone, explained Reuters reporter Eric Auchard. The firm, which is currently valued at $18.2 billion, has expanded into nearly 150 cities worldwide since launching in March 2009 and has been hit with regulatory challenges since day one, largely pertaining to whether or not its drivers should be specially licensed and fully insured to carry passengers, he added.
“Even in its home city of San Francisco, Uber has had to overcome legal and regulatory hurdles from city authorities concerned its services sidestep rules governing commercial transport and by taxi companies hoping to keep competition out,” Auchard said.
“Taxi drivers across Europe caused chaos in June by protesting against the service but Uber services have continued to grow in popularity,” he added. “Uber last week said it was experiencing ‘huge demand’ for its services in German cities… and that it planned to expand into Cologne and Stuttgart by the end of this year.”

Apple Probing iCloud Vulnerabilities Following Celebrity Nude Photo Leak

Chuck Bednar for redOrbit.com – Your Universe Online
In the wake of the well-publicized publication of nude photographs of some of Hollywood’s biggest stars earlier this week, Apple is reportedly working to correct the iCloud vulnerabilities that allowed hackers to gain access to those images.
In fact, according to Kevin McCoy of USA Today, the issue – which involved flaws in the Cupertino, California-based company’s Find My iPhone software – may have already been patched, although some other media reports published Monday suggested that Apple’s investigation into the matter was still ongoing.
According to Wall Street Journal reporter Daisuke Wakabayashi, a message posted on online-code sharing website GitHub claimed that a user had located a bug in the Find My iPhone app, which monitors the location of missing Apple smartphones and disables the device if it is stolen. The vulnerability allowed hackers to continue trying passwords until identifying the correct one, instead of locking out users after multiple incorrect attempts.
That post was later updated to note that Apple had patched the vulnerability, Wakabayashi added. However, Rich Mogull of security research firm Securosis told the Wall Street Journal that it was “possible” that this particular flaw was exploited by the hackers who stole the photos, but that the two issues might have been unrelated. He added that it was more likely it was the individual accounts of the celebrities, and not Apple itself, that were compromised.
The hack posted on GitHub, identified by Mashable’s Lance Ulanoff and Pete Pachal as iBrute, was shared roughly 36 hours before the first photographs were leaked. Andrey Belenko, senior security engineer for mobile security firm viaForensics, told them that might not have been enough time for a brute force attack on the Find My iPhone software to work.
Belenko, along with Alexey Troshichev of HackApp, discussed iOS7 and iCloud security at the Russian Defcon Group DCG#7812 over the weekend, Ulanoff and Pachal said. In their presentation, they reported discovering two potential weak spots in iCloud security: the lack of a lock-out mechanism on the Find My iPhone app, and the fact that a user’s iCloud security code defaults to just four digits (although users can make it more complex if they want) and could be vulnerable to brute force attacks.
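
To see why those two weaknesses combine so badly, consider this hypothetical sketch (ours, not the iBrute tool): a four-digit code has only $10^4$ possibilities, so an attacker who is never locked out succeeds in at most 10,000 guesses, while even a modest lock-out threshold stops the attack almost immediately.

```python
# Hypothetical illustration of brute-forcing a 4-digit code, and why a
# lock-out threshold defeats it. Not the actual iBrute tool.
import itertools

SECRET = "7294"       # stands in for a user's four-digit security code
LOCKOUT_AFTER = 10    # the kind of limit the researchers found missing

def brute_force(secret, lockout=None):
    """Try every four-digit code in order; stop if a lockout threshold is hit."""
    for attempts, digits in enumerate(itertools.product("0123456789", repeat=4), 1):
        if lockout is not None and attempts > lockout:
            return None, lockout          # account locked: attack fails
        if "".join(digits) == secret:
            return secret, attempts       # code recovered
    return None, 10 ** 4

print(brute_force(SECRET))                         # ('7294', 7295): always succeeds
print(brute_force(SECRET, lockout=LOCKOUT_AFTER))  # (None, 10): attack blocked
```

Longer, user-chosen passcodes raise the guess count exponentially, which is why the researchers flagged the four-digit default as a weak spot.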
Forbes contributor Dave Lewis said the researchers were upset that their research may have played a role in the theft and publication of the private images, but said that the incident was a reminder that “data from ‘smart’ devices could be accessible from [the] Internet,” which they said can be a “place of anarchy” and the “source of undesirable and unfriendly activity.”
Lewis called it “the law of unintended consequences at its finest,” adding, “while this incident has unfortunate ramifications for the victims it has been a great wake up call for others thanks to the huge amount of press coverage. This is an excellent opportunity for people to clean up their password practices and improve their personal security posture.” He suggests using strong passwords and implementing two-factor authentication for iCloud accounts.
The incident that led to Apple’s investigation into iCloud security saw currently unidentified hackers post photos allegedly depicting Jennifer Lawrence, Kate Upton, Victoria Justice, Mary Elizabeth Winstead, Ariana Grande and other singers and actresses in various states of undress, said McCoy. While some of the victims, including Lawrence, Upton and Winstead, acknowledged the theft of their private photos, others (including Grande and Justice) have claimed that the pictures were fakes, the USA Today reporter added.

People Are More Likely To Overeat When Watching Action Movies

Chuck Bednar for redOrbit.com – Your Universe Online
Watching too much television has long been associated with obesity, but new research published Monday in JAMA Internal Medicine suggests that what you watch might be every bit as important as how much you watch when it comes to packing on the pounds.
As part of the study, researchers at the Cornell Food and Brand Lab recruited 94 participants and had them watch 20 minutes of television programming while snacking on M&Ms, cookies, carrots and grapes. One-third of the participants watched a portion of the action movie The Island with sound, one-third watched the same movie without sound, and the remaining study participants watched a segment of the Charlie Rose talk show.
According to the study authors, viewers who watched The Island with sound ate 98 percent more grams of food (206.5 grams vs. 104.3 grams) and 65 percent more calories (354.1 calories vs. 214.6 calories) than those who watched the Charlie Rose Show. Even those watching the silent version of the action movie ate 36 percent more grams of food (142.1 grams vs. 104.3 grams) and 46 percent more calories (314.5 calories vs. 214.6 calories).
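Those percentages follow directly from the reported gram and calorie totals; a quick check using only the figures quoted above reproduces them:

```python
# Recompute the reported percent increases from the quoted intake figures.
figures = [
    ("action with sound, grams",    206.5, 104.3),
    ("action with sound, calories", 354.1, 214.6),
    ("action silent, grams",        142.1, 104.3),
    ("action silent, calories",     314.5, 214.6),
]
for label, treatment, control in figures:
    increase = 100 * (treatment - control) / control
    print(f"{label}: +{increase:.1f}%")
# Prints +98.0%, +65.0%, +36.2% and +46.6%, matching the study's
# rounded figures of 98, 65, 36 and 46 percent.
```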
“We find that if you’re watching an action movie while snacking your mouth will see more action too! In other words, the more distracting the program is the more you will eat,” lead author Dr. Aner Tal, a post-doctoral researcher at the Food and Brand Lab, said in a statement.
“More stimulating programs that are fast paced, include many camera cuts, really draw you in and distract you from what you are eating. They can make you eat more because you’re paying less attention to how much you are putting in your mouth,” he added. As a result, the more a program engages a viewer, the worse it might be for them.
So what can a person do to avoid overeating junk food while watching the hero battle the evil empire? Dr. Tal and his colleagues recommend portioning out television snacks before sitting down to watch the show, rather than bringing out an entire bag of chips or box of cookies.

The best option, the researchers said, is to bring out healthier snack foods such as carrots, since action movie viewers tend to munch on whatever happens to be available. After all, they explain, action and sound variation in a program tend to cause people to pay less attention to what they happen to be eating.
Dr. Tal is no stranger to this phenomenon, as he told Deborah Netburn of the Los Angeles Times: “It’s something I noticed in myself. When I go to the cinema and watch a movie I’m really engrossed in, my popcorn will go from full to empty without me realizing it. But if it is a movie I’m less into, I pay more attention to what I’m eating.”
He said that he and his colleagues plan to conduct a follow-up study to determine exactly what factors are responsible for mindless eating during action movies, but that they suspect the pacing of the programming and the level of engagement play key roles. The fact that people consumed more food while watching The Island even when they could not hear the sound suggests that the effect is primarily visual.
“One thing we noticed is people eating without paying attention will eat anything. If you don’t really like broccoli but you don’t hate it, this could be a good way for you to get your daily dose of vegetables,” he continued, noting that the results also suggest that people should start tuning in to shows like Charlie Rose. “You’ll eat less and you will get more intelligent television.”

Regular Family Meals Could Benefit The Mental Health Of Cyberbullying Victims

Chuck Bednar for redOrbit.com – Your Universe Online

Regularly eating meals together as a family can provide the social support necessary to help reduce the negative impact that cyberbullying can have on a youngster, according to new research published online Monday in the journal JAMA Pediatrics.

Lead author Frank Elgar, a professor in the McGill University Institute for Health and Social Policy, and his colleagues explained that the exchanges which occur during family meal times can benefit the well-being of adolescents, and this communication and interpersonal contact can reduce some of the distressing effects of cyberbullying.

The researchers examined the link between online bullying and both mental health and substance abuse issues, as well as the effect that the family contact and communication which take place during dinner could have in mitigating those problems. The study looked at data from more than 18,800 students between the ages of 12 and 18, and found that nearly one-fifth of them (18.6 percent) reported having experienced cyberbullying over the past year.

Elgar’s team also measured five internalizing problems (anxiety, depression, self-harm, suicide ideation and suicide attempt), two externalizing problems (fighting and vandalism) and four substance use problems (frequent alcohol use, frequent binge drinking, prescription drug misuse and over-the-counter drug misuse). Cyberbullying was found to be associated with all 11 of those internalizing, externalizing and substance use problems.

However, family dinners appeared to moderate the link between acts of online bullying and the resulting mental health and substance use issues, the study authors said. For instance, there was a four-fold difference in the rates of total problems between no cyberbullying victimization and frequent victimization when there were at least four family dinners per week, but the difference was more than seven-fold when there were no family dinners.

“One in five adolescents experience cyberbullying,” Elgar said in a statement. “Many adolescents use social media, and online harassment and abuse are difficult for parents and educators to monitor, so it is critical to identify protective factors for youths who are exposed to cyberbullying.”

“We found that emotional, behavioral, and substance use problems are 2.6 to 4.5 times more common among victims of cyberbullying, and these impacts are not due to face-to-face bullying; they are specific to cyberbullying,” he added. “The results are promising, but we do not want to oversimplify what we observed. Many adolescents do not have regular family meals but receive support in other ways, like shared breakfasts, or the morning school run.”

Elgar noted the results of the study, which was funded by the Social Sciences and Humanities Research Council and Canada Research Chairs, also suggest that parents who are more involved in the lives of their children could go a long way to helping victims of cyberbullying. Touching base with teens or adolescents about their online lives could give them the tools necessary to manage online harassment or cyberbullying that can otherwise go undetected.

In May, researchers from Michigan State University found that socioeconomic status was not a factor in cyberbullying, and that online harassment was just as likely to occur to teenagers living in poor, higher-crime neighborhoods as it was to middle class high-school students living in more affluent areas.

“We found neighborhood conditions that are indicative of poverty and crime are a significant predictor for bullying – not only for physical and verbal bullying, but cyberbullying as well,” said lead researcher Thomas J. Holt, an associate professor of criminal justice at MSU. “This is a very unique and somewhat surprising finding.”


E-Cigarettes May Be Safer, But They’re Still Harmful

Rayshell Clapper for redOrbit.com – Your Universe Online
According to the Centers for Disease Control and Prevention (CDC), tobacco use is the single most easily preventable cause of death and disease. Smoking can lead to many health issues, including respiratory problems, cancer, heart disease and blood disorders. Over 480,000 Americans die each year due to cigarette smoking, and 41,000 of those deaths are due to secondhand exposure to smoke.
With the ever-growing popularity of the e-cigarette, it is possible that those 41,000 secondhand smoke-related illnesses and deaths could decrease; however, e-cigs are so new that there is not much research about them.
Researchers at the University of Southern California recently published a study on secondhand e-cigarette smoke, finding that e-cig vapor delivers a marked decrease in exposure to carcinogenic particles but is still not harmless.
The study had volunteers smoke traditional cigarettes and e-cigarettes in offices and rooms, since these are the places where people are most likely to be exposed to secondhand smoke. The researchers then collected particles from the indoor air to study the chemical content and sources of the samples. What they found indicates that e-cigs are safer than traditional cigarettes but still carry their own set of dangers.
E-cigs produced a 10-fold decrease in exposure to carcinogenic particles, with close to zero exposure to organic carcinogens. This is good news, and it further supports the idea that e-cigs are healthier for everyone involved than traditional cigarettes. But they are not without their problems, in the form of toxic elements like chromium, nickel, lead and zinc. These metals are inhaled by the smoker but also exhaled in the vapor, making them dangerous to others. Though the concentrations are lower in e-cigarettes than in traditional cigarettes, e-cigs still produce these toxins for the smoker and those close by.
The good news for e-cigarette smokers to come out of this is that the toxic metals most likely come from the cartridges of the e-cig devices more so than the liquid that is vaporized. This means that through better manufacturing standards, these devices could lessen the impact of toxins even further.
Certainly the use of e-cigarettes has helped many people stop smoking the far more dangerous traditional cigarettes. Naturally, the best choice is to quit smoking altogether. Once smokers quit, they immediately lower their chances of cancer, heart disease, blood issues, eye problems and breathing troubles. E-cigarettes are definitely safer and healthier than regular cigarettes, but they still aren't as good as quitting completely.
Though the USC findings are a step in the right direction, there is still much that people do not know about the impact of e-cigarettes. It is good to know that e-cigs are safer than traditional smoking and secondhand smoke, but until we know more about just how much safer they are and what other dangers they may expose us to, we should remember that e-cigs remain understudied.
Any foreign agent in our bodies exposes us to danger, and though the dangers of e-cigs are smaller, we should still work to eliminate what we can in order to prevent illness and disease. Tobacco-related illness remains the most preventable category of disease, and e-cigs still contain nicotine, which means they still affect our health. This USC study marks the beginning of a greater understanding of that impact.
The research was published online August 22 in the journal Environmental Science: Processes & Impacts.

Cave Engravings Discovered In Gibraltar May Have Been Created By Neanderthals

Chuck Bednar for redOrbit.com – Your Universe Online
Neanderthals have a reputation for being unintelligent brutes, but the discovery of a series of lines scratched into a rock wall in southwestern Europe suggests that the predecessors of modern humans might have had the intelligence and creativity to produce cave art.
In research published Monday in the Proceedings of the National Academy of Sciences, experts from 11 different European institutions reported discovering cross-hatched engravings similar in appearance to a hashtag. It is the first known Neanderthal-created artwork ever discovered, according to Sharon Begley of Reuters.
The marks were discovered deep within Gorham’s Cave in Gibraltar, which overlooks the Mediterranean Sea, Begley noted. She described it as “eight partially crisscrossing lines with three shorter lines on the right and two on the left, incised on a shelf of bedrock jutting out from the wall about 16 inches (40 cm) above the cave floor.”
“The engraving is covered by undisturbed sediment that contains 294 previously discovered stone tools,” the Reuters reporter added. “They are in a style long known as the signature of Neanderthals, who had reached Europe from Africa some 300,000 years ago. Standard techniques had dated the tools at 39,000 years old, about when Neanderthals went extinct, meaning the art below it must be older.”
Modern humans had yet to reach the region where Gorham’s Cave is located by that time, and the study authors eliminated the possibility that the engravings were made accidentally while the Neanderthals were cutting meat or animal skins. Rather, Begley said that the research team is confident that the engravings were made intentionally using a sharp stone tool, which would have required at least 54 strokes per line and over 300 for the entire pattern.
“It is the last nail in the coffin for the hypothesis that Neanderthals were cognitively inferior to modern humans,” rock art expert Paul Tacon from Griffith University in Australia, who was not involved in the study, told the Associated Press (AP). He added that the research suggests they were likely made for ritual purposes and/or to communicate with others.
“We will never know the meaning the design held for the maker or the Neanderthals who inhabited the cave but the fact that they were marking their territory in this way before modern humans arrived in the region has huge implications for debates about what it is to be human and the origin of art,” he added.
According to the Daily Mail, the researchers conducted a chemical analysis on the mineral coating on the grooves in the engraving, which led them to their conclusion that the art had been produced before the overlying sediment had been deposited.
The study authors then examined photographs of the engraving at microscopic scale in order to see the tool marks within it, comparing them with experimental marks that had been created using a variety of instruments. Ultimately, they concluded that the artwork had been made by repeatedly passing a robust cutting tip over the rock surface in the same direction. Not everyone is convinced that Neanderthals created it, however.
“Any discovery that helps improve the public image of Neanderthals is welcome. We know they spoke, lived in large social groups, looked after the sick, buried their dead and were highly successful in the ice age environments of northern latitudes. As a result rock engraving should be entirely within their grasp,” Clive Gamble, an archaeologist at the University of Southampton, told AP reporter Frank Jordans. “What is critical, however, is the dating. While I want Neanderthals to be painting, carving and engraving, I’m reserving judgment.”
However, Clive Finlayson, one of the study authors and the director of the heritage division at the Gibraltar Museum, said that he is convinced the engravings were created by Neanderthals. As he told Jordans via email, “All European Neanderthal fossil sites from this period, including Devil’s Tower Rock Shelter just one mile from Gorham’s Cave, have this technology associated. In contrast no modern human site in Europe has this type of technology. So we are confident that the tools were made by Neanderthals.”

Neutrinos Created By Energy-Generating Fusion Process Detected In Sun’s Core

Chuck Bednar for redOrbit.com – Your Universe Online
Thanks to one of the most sensitive neutrino detectors on Earth, physicists have for the first time confirmed the existence of low-energy neutrinos created by the “keystone” proton-proton (pp) fusion process taking place in the core of the sun, ending a search that has been going on in the scientific community for decades.
The pp reaction, the researchers explain, is the first step of a reaction sequence responsible for roughly 99 percent of the sun’s power. Solar neutrinos are produced in nuclear processes and radioactive decays of various elements during fusion reactions occurring at the sun’s core, and these particles wind up shooting out of the star at roughly the speed of light, with up to 420 billion of them pelting every square inch of the Earth’s surface each second.
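That per-square-inch figure is consistent with the value usually quoted in textbooks, roughly 6.5 × 10^10 neutrinos per square centimeter per second, as a quick unit conversion (ours, for illustration) shows:

```python
# Convert the quoted solar neutrino flux from per-square-inch to the
# per-square-centimeter figure usually given in textbooks.
flux_per_sq_inch = 420e9       # neutrinos per square inch per second, as quoted
cm2_per_sq_inch = 2.54 ** 2    # one inch is 2.54 cm, so ~6.45 cm^2 per in^2
print(flux_per_sq_inch / cm2_per_sq_inch)  # ~6.5e10 per cm^2 per second
```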
Since they interact only through the weak nuclear force, the particles can pass through matter virtually unaffected, which has made it difficult to detect them and to tell them apart from trace nuclear decays of other materials. Now, however, an international team of more than 100 scientists reports in the latest edition of the journal Nature that it has completed spectral observations of pp neutrinos.
According to Nature News writer Ron Cowen, the study is the first to confirm the existence of these low-energy neutrinos.
“While the detection validates well-established stellar fusion theory, future, more sensitive versions of the experiment could look for deviations from the theory that would reveal new physics,” he added.
The researchers used the Borexino detector, a particle physics unit designed to study low energy solar neutrinos that is housed deep beneath Italy’s Apennine Mountains at the Gran Sasso National Laboratory, Cowen said. The research helps ease some doubts about the multistep process through which the sun converts hydrogen into helium.
That process, he explained, is the source of 99 percent of the sun’s energy. It begins when the star’s hot, dense core squeezes two protons together to form the hydrogen isotope deuterium. One of those protons then transforms into a neutron and releases both a neutrino and a positron (the electron’s antimatter counterpart).
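In standard notation, that first step of the pp chain reads as follows (a textbook expression, not a formula quoted in the Nature paper); the emitted neutrinos carry at most about 0.42 MeV, which is part of why they are so hard to detect above background radioactivity:

```latex
% First step of the proton-proton (pp) chain: two protons fuse into a
% deuteron as one proton converts into a neutron, releasing a positron
% and a low-energy electron neutrino (E_nu <~ 0.42 MeV).
p + p \longrightarrow {}^{2}\mathrm{H} + e^{+} + \nu_{e}
```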
While physicists had a general understanding of the process, there were fears that they might have been mistaken about the precise reactions that take place and the relative importance of each, Cowen said. However, this study removes those doubts, and for this reason, University of California, Irvine neutrino physicist Michael Smy told Nature News that the Borexino collaboration's direct detection of the neutrinos was "a landmark achievement."
“With these latest neutrino data, we are directly looking at the originator of the sun’s biggest energy producing process, or chain of reactions, going on in its extremely hot, dense core,” University of Massachusetts Amherst physicist Andrea Pocar said in a statement. “While the light we see from the Sun in our daily life reaches us in about eight minutes, it takes tens of thousands of years for energy radiating from the sun’s center to be emitted as light.”
“By comparing the two different types of solar energy radiated, as neutrinos and as surface light, we obtain experimental information about the Sun’s thermodynamic equilibrium over about a 100,000-year timescale,” Pocar added. “As far as we know, neutrinos are the only way we have of looking into the Sun’s interior. These pp neutrinos, emitted when two protons fuse forming a deuteron, are particularly hard to study. This is because they are low energy, in the range where natural radioactivity is very abundant and masks the signal from their interaction.”

Consuming Fruit Cuts Risk Of Cardiovascular Disease By Up To 40 Percent

European Society of Cardiology
Daily fruit consumption cuts the risk of cardiovascular disease (CVD) by up to 40%, according to research presented at ESC Congress today by Dr Huaidong Du from Oxford, UK. The findings from the seven-year follow-up study of nearly 0.5 million people in the China Kadoorie Biobank showed that the more fruit people ate, the more their risk of CVD declined.
Dr Du said: “CVD, including ischaemic heart disease (IHD) and stroke, is the leading cause of death worldwide. Improving diet and lifestyle is critical for CVD risk reduction in the general population but the large majority of this evidence has come from western countries and hardly any from China.”
She added: “China has a different pattern of CVD, with stroke as the main cause compared to western countries where IHD is more prevalent. Previous studies have combined ischaemic and haemorrhagic stroke probably due to the limited number of stroke cases in their datasets. Given their different physiology and risk factors, we have conducted the first large prospective study on the association of fruit with subtypes of stroke in Chinese adults from both rural and urban areas.”
The current study included 451 681 participants with no history of CVD and not on anti-hypertensive treatment at baseline from the China Kadoorie Biobank conducted in 10 different areas of China, 5 rural and 5 urban. Habitual consumption of fruit was recorded at baseline according to five categories: never, monthly, 1-3 days per week, 4-6 days per week, daily.
Over the seven year follow up period there were 19 300 cases of IHD and 19 689 strokes (14 688 ischaemic and 3562 haemorrhagic). Some 18% of participants consumed fruit daily and 6.3% never consumed fruit. The average amount of fruit eaten by the daily consumers was 1.5 portions (~150g).
The researchers found that compared to people who never ate fruit, those who ate fruit daily cut their CVD risks by 25-40% (around 15% for IHD, around 25% for ischaemic stroke and 40% for haemorrhagic stroke). There was a dose response relationship between the frequency of fruit consumption and the risk of CVD.
Dr Du said: “Our data clearly shows that eating fresh fruit can reduce the risk of cardiovascular disease, including ischaemic heart disease and stroke (particularly haemorrhagic stroke). And not only that, the more fruit you eat the more your CVD risk goes down. It does suggest that eating more fruit is beneficial compared to less or no fruit.”
The researchers also found that people who consumed fruit more often had significantly lower blood pressure (BP). Eating fruit daily was associated with 3.4/4.1 mmHg lower systolic/diastolic BP compared to those who never ate fruit. Dr Du said: “Our data shows that eating fresh fruit was associated with lower baseline BP. We also found that the beneficial effect of fruit on the risk of CVD was independent of its impact on baseline BP.”
In a separate analysis, the researchers examined the association of fruit consumption with total mortality and CV mortality in more than 61 000 patients from the China Kadoorie Biobank who had CVD or hypertension at baseline. They found that compared to those who never ate fruit, daily consumers of fruit cut their overall risk of death by 32%. They also reduced their risks of dying from IHD by 27% and from stroke by around 40%.
Professor Zhengming Chen, the principal investigator of the China Kadoorie Biobank, said: “Patients with CVD and hypertension should also be encouraged to consume more fresh fruit. Many western populations have experienced a rapid decrease in CVD mortality during the past several decades, especially stroke mortality since the early 1950s, for reasons that are not yet fully explained. Improved access to fresh fruit may well have contributed importantly to that decline.”
The researchers concluded: “Our results show the benefit of eating fruit in the healthy general population and in patients with CVD and hypertension. Fruit consumption is an effective way to cut CVD risk and should not only be regarded as ‘might be useful’. Policies are needed to promote the availability, affordability and acceptability of fresh fruit through educational and regulatory measures.”

Not All Marine Phytoplankton Need To Take Their Vitamins

Canadian Institute for Advanced Research
Some species of marine phytoplankton, such as the prolific bloomer Emiliania huxleyi, can grow without consuming vitamin B1 (thiamine), researchers have discovered. The finding contradicts the common view that E. huxleyi and many other eukaryotic microbes depend on scarce supplies of thiamine in the ocean to survive.
“It’s a really different way to think about the ocean,” says CIFAR Senior Fellow Alexandra Worden, co-author on The ISME Journal paper with CIFAR fellows John Archibald (Dalhousie University), Adrián Reyes-Prieto (University of New Brunswick) and three lead authors from Worden’s lab at the Monterey Bay Aquarium Research Institute, Darcy McRose, Jian Guo and Adam Monier.
All living creatures need thiamine, along with other vitamins, to live. Organisms may produce some of their own vitamins, the way human cells create vitamin D with help from sunlight, but sometimes they rely on other organisms to produce the vitamins they need and then consume them. For example, oranges and other fruits produce vitamin C, which humans need in their diets.
Until now, many marine microbes with cells that have a nucleus — eukaryotes — were thought to depend on other organisms to produce thiamine. If this were the case, B1 would be a major factor in controlling the growth of algae such as E. huxleyi, whose blooms are sometimes so large you can detect them from space. But the researchers found that E. huxleyi grows equally well in water that contains a precursor chemical to thiamine, known as HMP, as it does in an environment rich with thiamine. In fact, it could grow without any thiamine at all.
“If we added thiamine or we added the intermediate, there was absolutely no difference in the growth rate. They were growing equally well,” Worden says.
It was the discovery of a surprising biological mechanism that led the researchers toward this new understanding of thiamine. Genetic analysis had revealed 31 new eukaryotic riboswitches, which are segments of RNA that operate like mechanical switches to turn genes on or off. The researchers then found, unexpectedly, that the riboswitches were tied to genes of unknown function, not genes known to be connected with the production of thiamine. Further testing revealed these organisms didn’t only have a taste for thiamine — they liked HMP too.
“Our study shows that conclusions regarding the importance of vitamin B1 in regulating algal communities need to be re-evaluated,” Worden says.
This is the second recent study to find that vitamin B1 is less important than previously thought. Another paper in The ISME Journal published this August by Stephen Giovannoni’s lab at Oregon State University found that the most abundant strain of bacteria in the ocean, SAR11, grows well in an environment with HMP but not with thiamine alone.
Biochemistry suggests HMP should be more stable than thiamine in the environment, but researchers must next investigate how plentiful the molecule is in the open ocean. Understanding how phytoplankton survive is crucial for predicting how climate change could alter the Earth’s marine ecosystem; for example, as the supply of vitamins in the ocean depletes. Phytoplankton take up carbon dioxide and eventually sink to the bottom of the ocean, which makes their growth a major factor in how much carbon remains in the atmosphere.
“If you want to model the global carbon cycle and you’re putting into the equation that external sources of this vitamin are needed and critical for certain algae (based on prior reports), that its availability shapes which phytoplankton will grow, your predictions will be incorrect,” Worden says.
She says this study shows that marine researchers need to reconsider the methods they have relied on to understand the genetic processes by which ocean microbes adapt, evolve and survive. In the past these methods have been based largely on analogy to biochemical pathways as characterized in medically, industrially or agriculturally relevant organisms, i.e., “model organisms.”
“What we need to recognize is that there might be some other piece to the puzzle, that is different from that in the characterized model organisms, especially when most, but not all of the parts known from model organisms are present,” Worden says.

Reducing Red Meat Consumption Key To Keeping Greenhouse Gas Emissions Manageable

Chuck Bednar for redOrbit.com – Your Universe Online
Unless global red meat and dairy product consumption is reduced, greenhouse gases resulting from food production will increase by 80 percent in the years to come, a team of researchers from the UK reported Sunday in the journal Nature Climate Change.
This dire warning comes as an increasing number of people all over the world are “adopting American-style diets, leading to a sizeable increase in meat and dairy consumption,” said BBC News environmental analyst Roger Harrabin. If this continues, the authors warn that an increasing amount of cropland will have to be converted for use by livestock to keep up with the demand.
As a result, deforestation would increase carbon emissions, increased livestock production would cause a spike in methane emissions, and more widespread fertilizer use would accelerate climate change, Harrabin said. Conversely, the authors wrote that a scenario in which all countries achieved healthier diets (marked by reduced consumption levels of sugars, fats and meat products) significantly reduced the impact on the environment.
“There are basic laws of biophysics that we cannot evade,” lead investigator Bojana Bajzelj, a research associate in the University of Cambridge’s Department of Engineering, told BBC News. “The average efficiency of livestock converting plant feed to meat is less than 3 percent, and as we eat more meat, more arable cultivation is turned over to producing feedstock for animals that provide meat for humans.”
“The losses at each stage are large, and as humans globally eat more and more meat, conversion from plants to food becomes less and less efficient, driving agricultural expansion and releasing more greenhouse gases,” she continued. “Agricultural practices are not necessarily at fault here – but our choice of food is.”
“Unless we make some serious changes in food consumption trends, we would have to completely de-carbonize the energy and industry sectors to stay within emissions budgets that avoid dangerous climate change. That is practically impossible – so, as well as encouraging sustainable agriculture, we need to re-think what we eat,” co-author Professor Pete Smith of the University of Aberdeen’s Institute of Biological and Environmental Sciences, added in a statement.
The study authors wrote that, based on current trends, food production alone will cause total greenhouse gas emissions to exceed their global targets by 2050, and that current agricultural yields will not meet the projected food demands of the 9.6 billion people expected to make up the world's population by that point.
Unless our dietary habits change, Bajzelj, Smith and their colleagues said that by 2050, cropland would have expanded by 42 percent and fertilizer use would spike by 45 percent since 2009. Furthermore, 10 percent of the world’s tropical forests would be destroyed over the next 35 years, and deforestation, fertilizer use and livestock methane emissions would cause food production-related greenhouse gases to increase by nearly 80 percent.
“The report says the situation can be radically improved if farmers in developing countries are helped to achieve the best possible yields from their land,” Harrabin said. “Another big improvement will come if the world’s population learns to stop wasting food. The researchers say if people could also be persuaded to eat healthier diets, those three measures alone could halve agricultural greenhouse gas levels from their 2009 level.”
“It is imperative to find ways to achieve global food security without expanding crop or pastureland. Food production is a main driver of biodiversity loss and a large contributor to climate change and pollution, so our food choices matter,” said Bajzelj.
The researchers are recommending a diet that includes just two 85 gram (three ounce) portions of red meat and five eggs per week, as well as one serving of poultry each day.
Co-author Keith Richards from the Cambridge Department of Geography explained, “This is not a radical vegetarian argument; it is an argument about eating meat in sensible amounts as part of healthy, balanced diets. Managing the demand better, for example by focusing on health education, would bring double benefits – maintaining healthy populations, and greatly reducing critical pressures on the environment.”

Antarctic Sea Level Rising Faster Than The Global Average, Claims Satellite Data

Chuck Bednar for redOrbit.com – Your Universe Online
The sea level around the coast of Antarctica is expected to rise faster than the projected global rate, experts from the University of Southampton report in research appearing Sunday in the advanced online edition of the journal Nature Geoscience.
In the paper, lead author Craig Rye and colleagues from the National Oceanography Centre, the British Antarctic Survey and the Scottish Association for Marine Science explain that satellite data from the past 19 years revealed that melting glaciers have caused the sea level there to rise by 2 cm more than the global average of 6 cm.
The researchers said they detected this rapid increase in sea level after studying satellite scans of an area spanning more than one million square kilometers. They added that melting of the Antarctic ice sheet and thinning of floating ice shelves has added an estimated 350 gigatons of additional freshwater to the surrounding ocean.
As a result, the salinity of the ocean water has decreased (a fact that the study authors said has been corroborated by ship-based studies of the water), and Rye explained that since freshwater is less dense than salt water, regions that have accumulated an excess of the former are expected to experience a localized increase in sea level.
The authors said that most of this increase in freshwater has been found in the region around the Antarctic Peninsula and the Amundsen Sea, and by using the satellite measurements in combination with computer simulations of ocean circulation, they found that the region was experiencing sea level increases greater than the regional mean.
“On the basis of the model simulations, we conclude that this sea-level rise is almost entirely related to steric adjustment, rather than changes in local ocean mass, with a halosteric rise in the upper ocean and thermosteric contributions at depth,” they wrote. “We conclude that accelerating discharge from the Antarctic Ice Sheet has had a pronounced and widespread impact on the adjacent subpolar seas over the past two decades.”
In other words, the sea level rise in the Antarctic is primarily halosteric in the upper ocean (driven by the influx of freshwater), while deeper waters are being affected by increases in water temperature, or thermosteric changes. The results of the computer simulation matched closely with the real-world data obtained by the satellites, the authors said.
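For reference, the steric contribution to sea level is the height change produced by density changes within the water column; a standard oceanographic expression (included here for context, not a formula quoted in the paper) is:

```latex
% Steric sea-level change: the vertical integral of the fractional
% density anomaly from the sea floor (z = -H) to the surface (z = 0).
% Freshening (halosteric) and warming (thermosteric) both reduce the
% density, so \Delta\rho < 0 and the height change is positive.
\Delta h_{\mathrm{steric}} = -\int_{-H}^{0} \frac{\Delta\rho(z)}{\rho_{0}}\,\mathrm{d}z
```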
“The computer model supports our theory that the sea-level rise we see in our satellite data is almost entirely caused by freshening (a reduction in the salinity of the water) from the melting of the ice sheet and its fringing ice shelves,” said Rye, who oversaw the data analysis and was the corresponding author on the study.
“The interaction between air, sea and ice in these seas is central to the stability of the Antarctic Ice Sheet and global sea levels, as well as other environmental processes, such as the generation of Antarctic bottom water, which cools and ventilates much of the global ocean abyss,” he added.
Last month, researchers from the Alfred Wegener Institute (AWI) in Germany used data from the European Space Agency's (ESA) CryoSat-2 spacecraft to accurately map elevation changes in Antarctica, and found that the ice sheet there was losing volume at a rate of approximately 125 cubic kilometers per year.

Researcher Uploads Over Two Million Historic Book Images To Flickr Commons

Chuck Bednar for redOrbit.com – Your Universe Online
Millions of photos and illustrations from the pages of public domain books originally digitized by the US Internet Archive have been uploaded to Flickr by a research fellow at Georgetown University in Washington DC.
Kalev Leetaru, the Yahoo! Fellow in Residence of International Values, Communications Technology & the Global Internet at the Institute for the Study of Diplomacy in Georgetown's Edmund A. Walsh School of Foreign Service, has already uploaded over two million images drawn from more than 600 million pages of the archive's books, BBC News technology desk editor Leo Kelion reported on Friday.
Leetaru’s database of Internet Archive Book Images is 100 percent searchable (thanks to tags that are automatically added) and downloadable, Kelion and Megan Geuss of Ars Technica explained. When the library books were originally scanned, the Optical Character Recognition (OCR) software used automatically discarded sections of the text that it recognized as images, they noted.
In order to correct that issue, Leetaru wrote a new program that took advantage of the OCR software's output. His program went back and rediscovered those discarded image regions, automatically converted them to JPEG format, and uploaded them to the photo-sharing website. In addition, the software copied the caption for each image, along with the text from the paragraphs that immediately preceded and followed the image, Kelion and Geuss explained.
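The article does not reproduce Leetaru's code, but the pipeline it describes is straightforward to picture. Below is a minimal sketch under stated assumptions: the inputs (a page scan plus the bounding boxes and caption text an OCR pass recorded for each discarded image region) are hypothetical, and the Flickr upload step is omitted:

```python
# Hypothetical reconstruction of the described pipeline, not Leetaru's code:
# crop the regions an OCR pass flagged as images, save each one as a JPEG,
# and keep the caption and surrounding paragraphs as searchable metadata.
from PIL import Image  # pip install pillow

def extract_page_images(scan_path, regions, out_prefix):
    """regions: dicts with a pixel 'box' (left, top, right, bottom) plus the
    'caption' and surrounding 'context' text found near each image."""
    page = Image.open(scan_path)
    records = []
    for i, region in enumerate(regions):
        crop = page.crop(region["box"])
        jpeg_path = f"{out_prefix}_{i}.jpg"
        crop.convert("RGB").save(jpeg_path, "JPEG")
        records.append({
            "file": jpeg_path,
            "caption": region.get("caption", ""),
            "context": region.get("context", ""),
        })
    return records

# Hypothetical usage, with made-up file names and coordinates:
# extract_page_images("book_page_042.png",
#                     [{"box": (120, 340, 980, 1210),
#                       "caption": "Fig. 3. A steam carriage, 1827."}],
#                     "book_page_042_img")
```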
To date, 2.6 of the 14 million total images have been uploaded to Flickr Commons, Robert Miller, Global Director of Books for the Internet Archive, said in a blog post. He added that the organization would soon be able to continuously add to the collection from the more than 1,000 new ebooks that are being scanned on a daily basis.
“This way of discovering and reading a book will help transform our medical heritage collection as it goes up online. This is a big step forward and will bring digitized book collections to new audiences,” Dr. Simon Chaplin, Head of the Wellcome Library, told Miller. The Internet Archive added that they planned to continue working with Flickr to introduce new sub-collections and new ways to use image recognition tools for educational purposes.
Furthermore, anyone interested in learning more about the books from which each image came can access the full text from a link in each picture’s caption, added Josh Ong of The Next Web. The images are from 1500 to 1922, which is when copyright restrictions began in the US, and most of them have been difficult to access until now.
“For all these years all the libraries have been digitizing their books, but they have been putting them up as PDFs or text searchable works. They have been focusing on the books as a collection of words. This inverts that,” Leetaru told Kelion. “It’s amazing to see the total range of images and how the portrayals of things have changed over time.”
“Most of the images that are in the books are not in any of the art galleries of the world – the original copies have long ago been lost,” he continued. “I think one of the greatest things people will do is time travel through the images.” Leetaru also said that he hoped that other libraries throughout the world would follow his lead, running this process through their collection of digitized books in order to “constantly expand this universe of images.”

Chemical Fingerprint Of Sibling Stars Due To Mixing Of Gas During Formation

Chuck Bednar for redOrbit.com – Your Universe Online
Astrophysicists and computational astronomers from the University of California, Santa Cruz have discovered why sibling stars look alike – those formed from a single cloud share the same chemical fingerprint due to early, fast and turbulent mixing of gas in the giant molecular clouds where star formation occurs.
Stars are made primarily of hydrogen and helium, but they also contain trace amounts of elements such as carbon, oxygen and iron, study authors Mark Krumholz and Yi Feng explained. Scientists can determine how abundant each of those trace elements is by carefully measuring the wavelength of light coming from those stars.
When two stars are selected at random, the abundance of their trace elements will differ slightly, with one possessing more iron or carbon than another, they noted. However, two stars selected from the same gravitationally-bound star cluster always share the same abundances, much like how family members share the same basic genes.
“The pattern of abundances is like a DNA fingerprint, where all the members of a family share a common set of genes,” Krumholz, an associate professor of astronomy and astrophysics at UCSC, said in a statement Sunday. He added that it was important to measure this so-called fingerprint because most of the time, stellar families drift apart and migrate to different parts of the galaxy.
Since those abundances are set at birth, astronomers have often wondered if it would be possible to determine if two stars that are now located on opposite ends of the galaxy actually came from the same giant molecular cloud when they formed billions of years ago – and, if so, could they even be able to track down our Sun’s long-lost siblings?
As explained in the latest edition of the journal Nature, Krumholz and Feng, a graduate student at the university, developed supercomputer simulations of interstellar gas coming together to form a cloud which eventually collapses under its own gravity to form a star cluster. Since studies of interstellar gas show far greater variation in chemical abundances than typically observed in stars within the same open star cluster, the researchers added tracer dyes to the simulation’s two gas streams.
They placed red dye in one stream and blue in another, and their results showed extreme turbulence as the two streams came together. This turbulence effectively mixed the tracer dyes together, and by the time the cloud began collapsing and forming stars, the material that formed the stars had turned purple. As a result, the stars that formed were also of that hue, which Krumholz said explains why stellar siblings have the same abundances.
“The simulation revealed exactly why stars that are born together end up having the same trace element abundances: as the cloud that forms them is assembled, it gets thoroughly mixed very fast,” he said. “This was actually a surprise: I didn’t expect the turbulence to be as violent as it was, and so I didn’t expect the mixing to be as rapid or efficient. I thought we’d get some blue stars and some red stars, instead of getting all purple stars.”
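The paper's hydrodynamic simulations are of course far more sophisticated, but the tracer-dye logic can be illustrated with a toy model of our own devising: label one stream 1.0 ("red") and the other 0.0 ("blue"), let randomly chosen neighboring parcels blend, and watch the variance of the color field collapse toward a uniform "purple":

```python
# Toy illustration of tracer mixing, not the authors' simulation code:
# two labeled streams blend through repeated random local averaging, and
# the variance of the "color" field decays toward zero as every parcel
# approaches the same shade of purple (0.5).
import random

def mixed(cells, steps, seed=42):
    rng = random.Random(seed)
    cells = list(cells)
    for _ in range(steps):
        i = rng.randrange(len(cells) - 1)
        blend = 0.5 * (cells[i] + cells[i + 1])  # neighboring parcels mix
        cells[i] = cells[i + 1] = blend
    return cells

def variance(cells):
    mean = sum(cells) / len(cells)
    return sum((c - mean) ** 2 for c in cells) / len(cells)

tracer = [1.0] * 50 + [0.0] * 50  # a "red" stream meets a "blue" stream
for steps in (0, 100, 1000, 10000):
    print(steps, round(variance(mixed(tracer, steps)), 4))
# The variance falls from 0.25 toward 0, the numerical analogue of
# "all the stars come out purple."
```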
Their research, which was supported by NASA and the National Science Foundation (NSF), also demonstrated that the mixing occurs very quickly, before much of the gas becomes a star. This is good news when it comes to the search for the sun’s siblings, since it indicates that the chemical uniformity of star clusters is commonplace, and that even those stars resulting from clouds that produce few of them have extremely similar chemical signatures.
“The idea of finding the siblings of the sun through chemical tagging is not new, but no one had any idea if it would work. The underlying problem was that we didn’t really know why stars in clusters are chemically homogeneous, and so we couldn’t make any sensible predictions about what would happen in the environment where the Sun formed,” Krumholz said. “This study puts the idea on much firmer footing and will hopefully spur greater efforts toward making use of this technique.”
Image 2 (below): This image from a computer simulation shows a collision of two streams of interstellar gas, leading to gravitational collapse of the gas and the formation of a star cluster at the center. The gas streams were labeled with blue and red "tracer dyes," and the purple color indicates thorough mixing of the two streams during the collapse. Credit: Y. Feng and M. Krumholz

Need For Enhanced Cues Of Strength Led To Evolution Of Universal Angry Face

Chuck Bednar for redOrbit.com – Your Universe Online
Regardless of age, race, gender or nationality, all people make the same facial expression when they’re angry, experts from the University of California, Santa Barbara (UCSB) and Australia’s Griffith University report in the latest online edition of Evolution and Human Behavior.
The study authors call it the universal “anger face,” noting that it is characterized by a lowered brow, a thinning of the lips and a flaring of the nostrils. In their research, they identified the functional advantages that caused this particular expression to evolve and become what they call “part of our basic biology as humans.”
“Each element is designed to help intimidate others by making the angry individual appear more capable of delivering harm if not appeased,” said co-author Dr. Aaron Sell, a lecturer at Griffith University who was previously a postdoctoral scholar at UCSB’s Center for Evolutionary Psychology, according to the Huffington Post. “The expression is cross-culturally universal, and even congenitally blind children make this same face without ever having seen one.”
The research, which is part of a larger analysis of the evolutionary function of anger, discovered that the expression uses seven distinct muscle groups that contract in a highly stereotyped manner. Dr. Sell’s team set out to discover exactly why evolution selected these specific muscular contractions to depict the emotion.
He and his colleagues showed 141 men and women different computer images of a male face, some of which had been altered to include one of the key facial features linked to anger, said Huffington Post reporter Jacqueline Howard. The manipulated photos were shown next to the original untouched version, and study participants were asked to select which image made the man depicted appear to be physically stronger.
With just one slight change, such as a lowered brow, neither of the faces appeared to be angry, the researchers said, according to Daily Mail reporter Mark Prigg. However, when both faces were shown to a subject, that person indicated that the face with the lowered brow looked like it belonged to a physically stronger man.
“The experiment was repeated one-by-one with each of the other major components of the classic anger face – raised cheekbones (as in a snarl), lips thinned and pushed out, the mouth raised (as in defiance), the nose flared and the chin pushed out and up,” Prigg explained. “As predicted, the presence by itself of any one of these muscle contractions led observers to judge that the person making the face was physically stronger.”
“Our previous research showed that humans are exceptionally good at assessing fighting ability just by looking at someone’s face,” Dr. Sell said in a recent statement. “Since people who are judged to be stronger tend to get their way more often, other things being equal, we concluded that the explanation for evolution of the form of the human anger face is surprisingly simple – it is a threat display.”
These threat displays are similar to those used by other animals for self-preservation and protection purposes and consist of cues to exaggerate fighting ability, the researchers said. Since study participants consistently rated faces containing even one of these muscle movements as belonging to a physically stronger person, the findings support the notion that this universal anger face evolved in order to enhance cues of strength.
“This makes sense of why evolution selected this particular facial display to co-occur with the onset of anger,” said co-author and UCSB professor of anthropology John Tooby. “Anger is triggered by the refusal to accept the situation, and the face immediately organizes itself to advertise to the other party the costs of not making the situation more acceptable. What is most pleasing about these results is that no feature of the anger face appears to be arbitrary; they all deliver the same message.”

New DNA Study Reveals Lost History Of The Paleo-Eskimo People

Chuck Bednar for redOrbit.com – Your Universe Online
The Paleo-Eskimo people, who lived in the Arctic from roughly 5,000 years ago to about 700 years ago, were the first humans to live in the region, and they survived there without outside contact for more than 4,000 years, researchers reported Friday in the journal Science.
In addition, lead investigator Eske Willerslev of the Centre for GeoGenetics at the Natural History Museum of Denmark and his colleagues report that the Paleo-Eskimos represented a distinct wave of migration, separate from both the Native Americans (who crossed the Bering Strait far earlier than the Paleo-Eskimos) and the Inuit (who traveled from Siberia to the Arctic several thousand years later).
“The North American Arctic was one of the last major regions to be settled by modern humans,” the museum explained in a recent statement. “This happened when people crossed the Bering Strait from Siberia and wandered into a new world. While the area has long been well researched by archaeologists, little is known of its genetic prehistory.”
According to BBC News, much of our understanding of this culture's history was based on artifacts acquired by archaeologists. In order to develop a more complete picture of the Paleo-Eskimos, however, Willerslev and more than 50 experts from institutions all over the world conducted a new genetic analysis, which confirmed that the Paleo-Eskimos and modern-day Native Americans arrived in separate migrations.
Furthermore, their research revealed that the founding population included very few women, said Heather Pringle of National Geographic. In fact, an analysis of the diversity of maternally inherited DNA in the genetic samples suggested that there might have been just one woman traveling with the Paleo-Eskimo migrants, leading Willerslev to tell Pringle that he could not recall "any other group having such low diversity."
Jennifer Raff, a geneticist and anthropologist at the University of Texas, Austin, who was not one of the authors on the Science study, called it a large step forward in Arctic studies. She told National Geographic, “This research has answered several important questions about North American Arctic prehistory,” demonstrating, for example, that Paleo-Eskimos are genetically distinct and arrived separately from the ancestors of the Inuit.
The research team obtained bone, teeth, or hair samples from 169 ancient human remains from Arctic Siberia, Alaska, Canada, and Greenland. However, as Pringle noted, it was difficult to find enough ancient DNA, as few of those samples contained well-preserved genetic material. The reason, the researchers explained, is that the ancient cultures often buried their dead on the surface instead of digging graves, which would have protected the remains from the repeated freezing and thawing that ultimately damaged or destroyed the DNA.
As a result, Willerslev and his colleagues were only able to acquire whole-genome data from 26 of the samples. None of the samples covered more than 30 percent of the genome, and most of them yielded 10 percent or less, the National Geographic reporter said. However, the researchers were able to adjust, taking the damaged DNA and the missing data into account and "extracting the most information possible out of difficult samples," Raff said.
One thing the study was unable to determine, the authors said, is why the Paleo-Eskimos disappeared around the same time that the ancestors of the Inuit began to colonize the Arctic. However, they did conclude that there is little doubt that the Inuit ancestors, who reached Greenland around 700 years ago, were technologically superior to their predecessors.

Ralph Lauren And Athos Enter Wearable Tech Market With New Products

Chuck Bednar for redOrbit.com – Your Universe Online
One of the biggest names in fashion is entering the wearable tech industry, as Ralph Lauren recently announced that it had developed a new polo shirt featuring biometric sensors that had been knitted directly into the fabric of the product.
The product is known as the Polo Tech shirt, and it was unveiled on the opening day of the US Open tennis tournament (for which Ralph Lauren is the official clothing provider). Company officials explained that the shirt is designed to combine biometrics and active lifestyle apparel in order to improve fitness and overall wellness.
Polo Tech features technology developed by the Canadian sports science and engineering firm OMsignal, and its sensors capture an array of biological and physiological data from the individual wearing the shirt. Ralph Lauren also claims that the compression shirt has a "sleek look" and a "second-skin fit" that "enhances comfort and agility."
According to San Francisco Chronicle reporters Michelle Devera and Carolyne Zinko, the Polo Tech shirt, which the company planned to test at the US Open on tennis player Marcus Giron, can detect data pertaining to movement and direction, as well as measure a person’s heartbeat, breathing, stress levels and energy output. The information is then transmitted from a type of black box to the cloud, where it is analyzed and then passed on to a smartphone app.
“We just wanted you to be able to put on a shirt and go,” David Lauren, senior vice president of advertising at Ralph Lauren, told Julianne Pepitone of NBC News during an interview at the company’s New York offices this week. He added that he was “really surprised” that other big-name retailers had yet to launch similar garments, telling Pepitone, “I kept reading the newspaper, waiting for someone else to get out ahead of us.”
Now that they have unveiled the Polo Tech, however, industry experts expect other companies to follow. As JP Gownder, a principal analyst at tech research firm Forrester, told NBC News, “For [wearables] to reach mass adoption, it’s an exercise in cultural engineering. People need to want to wear it on their own merits… If someone doesn’t know wearable-embedded clothing exists, or what it looks like, they aren’t going to covet it.”

Like Ralph Lauren, a Redwood City, California-based company known as Athos is entering the wearable clothing market, with a line of shirts and shorts that use electromyography to detect heart rate, respiration and muscle exertion, Devera and Zinko reported. The garments cost $99 each and transmit data to a smartphone app via a thumb-size metal Core device ($199) that fits into a port on the clothes.
Using that app, individuals can obtain instant and detailed feedback on their body’s performance, such as how much harder one part of a leg is working than another, as well as advice on how to improve their form and performance levels. The idea was first developed when cash-strapped co-founders Dhananja Jayalath and Chris Wiebe needed a more economical alternative to personal trainers while they were attending college.
“They had time for the gym but no money for personal trainers and started developing crude sensors that they welded onto their clothing,” the Chronicle reporters said, adding that the technology has “since been refined” and will even be used this season by the Golden State Warriors of the National Basketball Association (NBA).
Only time will tell whether this type of wearable tech can find a foothold with the general public – after all, as Pepitone explained, a recent poll asking men and women what type of wearables would interest them found that 42 percent would only wear a wrist device, while just 19 percent said they would be interested in garments. However, industry experts said this should not be a cause for concern for Athos or Ralph Lauren.
“I think this is low-hanging fruit for them, because people want wearables that look good – and we really don’t have much of that yet,” said Ramon Llamas, research manager at tech research firm IDC. “There are a number of customers out there who will say, ‘Yes, I’ll pay a little bit more for something that looks high-end.’”

Microsoft Officially Pulls The Plug On Windows Live Messenger

Chuck Bednar for redOrbit.com – Your Universe Online
More than two years after acquiring Skype, Microsoft is officially pulling the plug on Windows Live Messenger, announcing Thursday that the venerable instant messaging client would no longer be supported starting in November.
Windows Live Messenger, originally launched in 1999 as MSN Messenger, was switched off for most users in 2013, according to BBC News. However, users in China continued to use the service, which had as many as 330 million users as recently as 2009. Those individuals will have their service transferred to the recently-acquired Internet telephony provider by October 31, the media outlet added.
MSN Messenger first arrived in China in 2005, the year it was rebranded Windows Live Messenger, and anyone still using the service will have their contacts automatically moved to Skype, HotHardware’s Sean Knight said. Users attempting to download the software through Microsoft’s official website are greeted with a message notifying them of the service’s impending termination and advising them to download Skype instead.
When Messenger launched, it was a real-time texting service designed to rival the likes of ICQ and AOL’s AIM, and according to Tom Warren of The Verge, Microsoft raised AOL’s ire by reverse-engineering AIM’s chat protocol so that its own client could sign into AOL’s service. In the years that followed, Microsoft added custom emoticons, the ability to play games against friends, and a series of other features intended to enhance the IM service.
“Though the messaging platform currently has relatively few users, its official closure marks the end of an era, of sorts, for many millennials who came of age while chatting on MSN,” said Mashable’s Karissa Bell. She noted that former users took to social media “to eulogize the instant messaging client that once ruled dial-up Internet,” and that Microsoft is offering users free Skype credits to help ease the pain.
In an obituary to the service, the BBC’s David Lee said that it “touched the lives of millions of teenagers who, in an age before real social networking, were just getting accustomed to what it was like to live on the internet. MSN Messenger heralded a new era: a time when chatting up a classmate no longer meant the terrifying prospect of actually having to say something to them.”
“Other sites, smarter and better looking, would see Messenger cast aside. In an age of exciting digital discovery, Messenger became the web’s wooden toy,” he added. “After a long career, it spent its final year enjoying a comfortable retirement in China. Its less well-regarded relative, Windows Messenger, still battles on on work computers the world over.”
In October 2011, Microsoft announced that its $8.5 billion acquisition of Skype had been finalized. As part of the deal, the Internet telephony provider was established as a new business unit within Microsoft, with Skype CEO Tony Bates named president of the newly created Skype division.
“By bringing together the best of Microsoft and the best of Skype, we are committed to empowering consumers and businesses around the globe to connect in new ways,” Bates said in a statement at the time. “Together, we will be able to accelerate Skype’s goal to reach 1 billion users daily.”
“Skype is a phenomenal product and brand that is loved by hundreds of millions of people around the world,” added then-Microsoft CEO Steve Ballmer. “We look forward to working with the Skype team to create new ways for people to stay connected to family, friends, clients and colleagues – anytime, anywhere.”

Study: Female College Students Spend Up To 10 Hours Per Day Using Their Phones

Chuck Bednar for redOrbit.com – Your Universe Online
Roughly 60 percent of college students admit that they could be addicted to using their cell phones, researchers from Baylor University claim in a new study published online Tuesday in the Journal of Behavioral Addictions.
Dr. James Roberts, the Ben H. Williams Professor of Marketing in Baylor’s Hankamer School of Business, and colleagues from Universitat Internacional de Catalunya in Barcelona, Spain and Xavier University in Cincinnati, Ohio also reported that female college students spent an average of 10 hours per day using their mobile devices.
In comparison, male college students spent nearly eight hours per day using their cell phones. Roberts calls those figures “astounding,” and he and co-authors Luc Honore Petnji Yaya and the late Dr. Chris Manolis explain that the type of excessive use exhibited by both men and women poses a potential threat to their academic performance.
“As cellphone functions increase, addictions to this seemingly indispensable piece of technology become an increasingly realistic possibility,” Dr. Roberts said in a statement Wednesday, noting that some of the men and women his team interviewed even confessed that they felt agitated when the device was out of sight.
As part of their research, the study authors conducted an online survey of 164 college students, focusing on 24 different cellphone activities. They found that the amount of time spent on 11 of those activities varied significantly by gender. Some of those functions (including Pinterest and Instagram) are clearly associated with cell phone addiction, while others (Internet use, playing video games) are not typically associated with addiction, they said.
The survey found that respondents reported spending the greatest amount of time sending and receiving text messages (an average of 94.6 minutes a day) and using email applications (48.5 minutes). Checking Facebook was third (38.6 minutes), followed by surfing the Internet (34.4 minutes) and listening to MP3 players (26.9 minutes).
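Notably, those top five categories together account for only about four hours a day; the remaining 19 of the 24 tracked activities make up the balance of the eight-to-ten-hour totals cited above. A quick back-of-the-envelope check in Python – a minimal sketch using only the averages quoted in this article – illustrates the gap:

    # Daily averages (in minutes) for the five most time-consuming activities,
    # as reported by the Baylor survey. The other 19 tracked activities are not
    # itemized in this article, which is why these five alone fall well short
    # of the 8-10 hour daily totals cited earlier.
    top_activities = {
        "texting": 94.6,
        "email": 48.5,
        "Facebook": 38.6,
        "web surfing": 34.4,
        "MP3 player": 26.9,
    }
    total = sum(top_activities.values())
    print(f"{total:.1f} minutes/day, roughly {total / 60:.2f} hours")
    # -> 243.0 minutes/day, roughly 4.05 hours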
While men send approximately the same number of emails as women, they spend less time on each, which Dr. Roberts said suggests that men send “shorter, more utilitarian messages than their female counterparts.” Overall, women were found to spend more time on their cell phones and were more likely to use them for social reasons such as texting or sending emails, while men tended to use theirs for utilitarian or entertainment purposes.
However, Dr. Roberts noted that male college students “are not immune to the allure of social media,” and also spent time visiting social media websites such as Facebook, Instagram and Twitter. Among the reasons guys used Twitter were to follow athletes, catch up on the latest news, or “waste time,” as one male student told the study author.
Excessive cell phone use poses several possible risks for college students, the authors explained. The devices could wind up being a distraction in the classroom, or even a way to cheat on exams. Furthermore, excessive cell phone use could also cause conflict with family members, professors or employers, and some individuals could even pretend that they are taking a call or sending a text to avoid having to deal with a difficult situation, they added.
Dr. Roberts said that this latest survey is more extensive than previous studies when it comes to examining the number and types of cell phone activities, and is also the first study to examine which specific activities are significantly associated with addictions to mobile devices and which ones are not.
He added that it was important to pinpoint which activities transform these devices “from being a helpful tool to one that undermines our well-being and that of others.” The activities examined by the study included calling, texting, emailing, Web surfing, banking, taking pictures, playing games, reading books, using a calendar or a clock, and using various apps, including Google Maps, Facebook, Twitter, Pinterest, Instagram, YouTube and iTunes.

Scientists Discover Mite Species Living, Eating, Sleeping And Fornicating On Human Faces

Chuck Bednar for redOrbit.com – Your Universe Online
There are mites crawling all over your face right now, and no matter what you do, how hard you wash or how much soap you use, you can’t get rid of them – and you can thank North Carolina State University graduate student Megan S. Thoemmes and her colleagues for that unsettling bit of knowledge.
Thoemmes, who is currently studying in the NC State Department of Biological Sciences and the W.M. Keck Center for Behavioral Biology, is co-author of a study appearing in the latest edition of the journal PLOS ONE which describes Demodex mites, a group of hair follicle and sebaceous gland-dwelling species that eat, sleep and even fornicate on human faces.
Ed Yong of National Geographic puts the research into perspective: “Think of all the adults you know. Think of your parents and grandparents. Think of the teachers you had at school, your doctors and dentists… and the actors you see on TV. All of these people probably have little mites crawling, eating, sleeping, and having sex on their faces.”
Yong notes that there are over 48,000 different species of mites in the world, and most of them resemble “lozenges on spindly legs.” The two types of Demodex mites that live on people’s faces, however, look more like “wall plugs – long cones with stubby legs at one end. They don’t look like much, and most of us have never looked at one at all. But these weird creatures are almost certainly the animals we spend the most time with.”

Image Caption: This is a Demodex folliculorum. It lives on your face. Credit: USDA, Confocal and Electron Microscopy Unit
Demodex mites, NC State adjunct assistant professor of entomology Michelle Trautwein explained in a statement Wednesday, are microscopic arachnids (relatives of ticks and spiders) that live in the pores of mammal skin. In humans, they tend to reside in the general vicinity of the nose, and they have been found in all mammal species except the platypus and its egg-laying kin.
According to Medical Daily’s Samantha Olson, one of the two species, Demodex brevis, is related to the mites known to cause mange in dogs. While scientists have known about these miniature arachnids, which consume the oil secreted by our skin, for more than a century, this marks the first-ever in-depth scientific analysis of the mites, she added.
Thoemmes, NC State colleagues Robert R. Dunn and Daniel J. Fergus, and experts from the North Carolina Museum of Natural Sciences and the California Academy of Sciences in San Francisco recruited men and women at a science event in Raleigh, North Carolina and scraped the sides of their noses, Olson added. They found DNA belonging to Demodex mites on the skin of every single individual who was over the age of 18.
“The first time I found one on my face I didn’t sleep for four nights,” Thoemmes told NPR. “They’re actually pretty cute. With their eight little legs, they look like they’re almost swimming through the oil. It’s like having friends with you all the time. Realizing that everyone has them and they’re likely not causing any problems, it’s pretty reassuring.”
The researchers explained to Medical Daily that they are uncertain how the mites are spread, but one theory suggests that since children are less likely to have them, they are originally passed from mother to infant during breastfeeding. Given that they were found on 100 percent of adults, however, the study authors said that they plan to delve deeper into the mites’ background, especially to see if there are any health concerns associated with them.
“Considering how common these creatures are, there’s still so much we don’t know about them. We don’t know where our two face-mite species came from, or what their closest relatives are. We also don’t know how many other face-mites exist,” Yong said.
Given that each Demodex species tends to stick to one mammal host, that there are over 5,000 species of mammals, and that many have more than one type of mite, he added that this means that there “could potentially be 10,000 species of Demodex left to discover.”
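Yong’s 10,000 figure is simple multiplication, and a minimal sketch makes the assumption explicit – the two-mites-per-host average is an illustrative reading of “many have more than one type of mite,” not a number from the study:

    # Back-of-the-envelope arithmetic behind the "10,000 species" estimate above.
    # Assumes each mammal species hosts its own host-specific Demodex lineages,
    # averaging about two per host -- an illustrative figure, not one from the study.
    mammal_species = 5_000
    demodex_per_host = 2
    print(mammal_species * demodex_per_host)  # -> 10000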

Bee-utiful Research: Experts Working To Improve Health Of Key Agricultural Pollinator

Chuck Bednar for redOrbit.com – Your Universe Online
While honeybees may be best known as producers of honey and beeswax useful for candles and seals, experts from the University of Arizona want to remind you that the insects play an important role in the agriculture industry.
“Honeybees are responsible for pollinating agricultural crops that make up one-third of our diet, including fruits and vegetables. They’re the cornerstones of heart-healthy and cancer prevention diets,” adjunct professor Gloria DeGrandi-Hoffman, who is also a research leader at the USDA’s Carl Hayden Bee Research Center (CHBRC), explained in a statement Thursday.
In light of the essential role the insects play in pollinating crops, DeGrandi-Hoffman and her colleagues are working to optimize the health of honeybee colonies by improving their nutrition and finding better ways to control Varroa destructor – a bloodsucking parasitic mite that can transmit Deformed Wing Virus (DWV) to honeybees.
“We’re the honeybee nutrition lab,” DeGrandi-Hoffman said. “Humans are healthier when we have good nutrition and so are bees. We study the effects of malnutrition on bees, including the effects of fungicides and pesticides and how they alter the ability of bees to acquire nutrients from flower nectar.”
DeGrandi-Hoffman noted that the lab also examines the role that microbes play in helping bees digest their food and acquire nutrients from it, and added that she and her colleagues are also working on the honeybee microbiome project – an initiative inspired by the Human Microbiome Project that hopes to understand the role of and interactions between various microbes that live on or inside of honeybees.
“Just like in humans, microbes play an important role in digestion and overall health and immunity in bees. Honeybee colonies are healthier if they have a diverse microbiome,” the professor said, explaining that honeybees play such a key role in agricultural pollination because they can be housed in colonies, and can then be transported to fields and released once the time is right to pollinate flowers.
“We can manage them and bring pollinating populations into key agricultural systems,” she said. “For example, in February bees are brought from all over the country to pollinate almond trees. There aren’t enough native pollinators to pollinate all the almond crops, but we can bring honeybees into the orchards, open up the colonies and instantly have thousands and thousands of pollinators working with those trees.”
Each winter, beekeepers estimate that they lose nearly one-third of their colonies, said DeGrandi-Hoffman. Droughts can also be devastating to them, since plants that serve as essential sources of food to the honeybees fail to bloom in such conditions. Fortunately, the Arizona-based researcher said that the region around her university has a healthy population of honeybees and native pollinators, which in turn is beneficial to human health.
Families looking to promote the pollinator population in and around their homes can do so by growing plants such as sunflowers and asters, she added. Lists of plants that serve as good food sources for honeybees and other pollinating insects can be found online through the Arizona-Sonora Desert Museum website, and she urged caution when using pesticides.
“Homeowners sometimes use pesticides and fungicides without thinking of their possible effects on non-target organisms like bees. These products can cause bee kills,” the Arizona professor explained. She added that it is not dangerous to encourage bees to hang around your home or garden in most cases. “Unless you step on one, the only time bees are defensive and could possibly sting you is if you get near their nest,” she concluded.

Google Reveals Secret Drone Project, Kept Hidden For Two Years

John Hopton for redOrbit.com – Your Universe Online
Google has revealed “Project Wing,” a program kept under wraps for two years that has produced drones intended to deliver emergency aid and commercial goods.
A video released by Google shows prototypes being tested in Australia, delivering dog food to a remote farm in Queensland. Although the project is said to be “years away from a product,” Astro Teller of Google X, the company’s secretive tech research branch, says that the drones “aspire to take another big chunk of the remaining friction of moving things around in the world,” following in the footsteps of history’s other great transportation innovations.
Google’s primary objective for their flying vehicles is to provide help to people in emergency situations. Along with the obvious advantages of being able to reach remote places affected by natural disasters, the drones could, for example, deliver defibrillator kits to people feared to be having a heart attack, moving more quickly than ambulances could. Dave Voss, the lead on Project Wing, told BBC News, “When you have a tool like this you can really allow the operators of those emergency services to add an entirely new dimension to the set of tools and solutions that they can think of.”
As the dog food test demonstrates, there is also the potential for all manner of commercial products to be delivered by Google’s drones too. This brings inevitable comparisons to Amazon’s Prime Air project, the goal of which is to “get packages into customers’ hands in 30 minutes or less using unmanned aerial vehicles.”
Reuters reports that the FAA allows limited use of drones within the United States for surveillance, law enforcement, atmospheric research and other applications. Google chose Australia for testing, reasoning that the country’s approach to the use of drones was more “progressive” than in some other places. Somewhere like the remote Queensland outback also no doubt helps to demonstrate the benefits of airborne delivery vehicles. But the US market would naturally be a significant target once a delivery network exists.
The small, sleek, white aircraft would be set at the start of their journey to follow pre-programmed routes, and then fly autonomously to their destination. This is in contrast to military drones, which are often under the constant control of a pilot on the ground. Google says that the drones have more in common with its much-discussed self-driving car “than the remote-controlled airplanes people fly in parks on weekends.”
The next step for Google is to focus on making sure the vehicles, which have four electrically-driven propellers and a wingspan of approximately 1.5 meters, are able to navigate around each other and anything else that may be in the sky. They also intend to reduce the noise levels the vehicles produce, and to enhance delivery capability so that small targets, such as a doorstep, can be pinpointed.
The prototypes used in Australia flew at an altitude of about 40 meters, combining rotors that enable vertical takeoff and landing with a fixed wing that lets them fly like a plane. This hybrid of a plane and a helicopter is referred to as a tail sitter: after taking off vertically, the Project Wing aircraft rotate to a horizontal position for flight, and deliver their packages to the ground on a long tether.

Antilles Pinktoe Tarantula, Avicularia versicolor

The Antilles Pinktoe Tarantula (Avicularia versicolor), known also as the Martinique Red Tree Spider or the Martinique Pinktoe, is native to Guadeloupe and Martinique in the Caribbean, and is a popular pet spider because of its docile nature and unique coloration.

These tarantulas are arboreal (tree-dwelling) and spin intricate funnel webs in which they spend the majority of their time. In captivity, cage height is therefore much more important than floor space, and décor typically consists of tree branches or cork pieces to which the spider can anchor its web.

The spiderlings of this species are bright blue with a black tree-trunk pattern on the abdomen. As they grow, they gradually lose the blue coloration: the carapace turns green, the abdomen red, and the legs green with purple hairs and pink tarsi. They are a more colorful version of their cousin, the Pinktoe Tarantula. On average, males are slightly brighter in coloration than females, though, like most tarantulas, they remain much smaller, especially in the abdomen.

The species is an aggressive feeder and will consume anything from crickets, worms, grasshoppers, cockroaches, beetles, moths and other flying insects to anole lizards. It will also take mealworms and moth larvae, but these should be offered sparingly because of their high fat content and unbalanced calcium-to-phosphorus ratio.

Image Caption: Antilles Pinktoe Tarantula. Credit: Wikipedia CC BY-SA 3.0

Menopausal Hot Flashes Cost Millions In Lost Wages

April Flowers for redOrbit.com – Your Universe Online
One of the inescapable facts of growing older for women is menopause. As our hormonal systems change, so do the needs and functions of our bodies. In our grandmothers’ day, menopause was treated with hormone therapy, but that practice has fallen by the wayside in recent years — with consequences. Millions of women suffering in silence with hot flashes is a preventable side effect of the decline in hormone therapy, according to a new study led by the Yale School of Medicine.
The researchers, led by Philip Sarrel, M.D., emeritus professor in the Departments of Obstetrics, Gynecology & Reproductive Sciences, and Psychiatry, found that, for most women, moderate to severe hot flashes, known as vasomotor symptoms (VMS), are not treated. Women suffering from VMS have other symptoms as well, including fatigue, sleep disturbance, depression, anxiety and impaired short-term memory.
“Not treating these common symptoms causes many women to drop out of the labor force at a time when their careers are on the upswing,” said Sarrel. “This also places demands on health care and drives up insurance costs.”
The researchers used data from health insurance claims, comparing 500,000 women – 50 percent with and 50 percent without hot flashes – all insured by Fortune 500 companies. From this data, they calculated the costs of health care and work loss over a 12-month period.
Their findings, published in the journal Menopause, revealed that women who suffered from hot flashes had 1.5 million more health care visits than those without VMS. This carried costly side effects: the additional health care came to $339,559,458, and the associated work loss represented another $27,668,410 during the study period.
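To put those totals in per-person terms, here is a rough back-of-the-envelope sketch assuming, as described above, that the 500,000-woman sample split evenly; the per-visit and combined figures are illustrative derivations, not numbers reported by the study itself:

    # Rough per-woman arithmetic from the figures quoted above, assuming the
    # 500,000-person insurance sample split evenly (250,000 women with VMS).
    women_with_vms = 250_000
    extra_visits = 1_500_000
    extra_care_cost = 339_559_458  # additional health care over 12 months
    work_loss_cost = 27_668_410    # lost wages over 12 months

    print(f"Extra visits per woman: {extra_visits / women_with_vms:.0f}")     # 6
    print(f"Cost per extra visit:   ${extra_care_cost / extra_visits:,.2f}")  # $226.37
    print(f"Combined annual cost:   ${extra_care_cost + work_loss_cost:,}")   # $367,227,868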
The loss of ovarian hormones in the years just before and after natural menopause is the main cause of hot flashes. The symptoms can occur almost immediately following a hysterectomy, and for these women, they are usually more severe and longer lasting. VMS affects the daily function of more than 70 percent of all menopausal women and more than 90 percent of those with hysterectomies.
Until recently, hot flashes were treated with either hormone therapy or alternative approaches. The 2002 Women’s Health Initiative Study findings, however, caused a sharp drop in the use of hormone therapy because of an unfounded fear of cancer risks.
“Women are not mentioning it to their healthcare providers, and providers aren’t bringing it up,” said Sarrel. “The symptoms can be easily treated in a variety of ways, such as with low-dose hormone patches, non-hormonal medications, and simple environmental adjustments such as cooling the workplace.”

Junk Food Persuades Us To Reject A Balanced Diet, Test On Rats Suggests

John Hopton for redOrbit.com – Your Universe Online
Testing on rats has shown that too much junk food may teach our brains to disregard the need for a balanced diet. Reduced self-control from eating bad food could chart a course to overeating and obesity that becomes increasingly difficult to deviate from.
Research has shown that when rats are given access to a junk food diet, the result is not only weight gain but also a reduced appetite for varied foods. The natural inclination toward a diverse and balanced diet, widespread among animals including humans, is impeded – and remains so for some time after junk food consumption stops. The natural tendency to avoid overeating is also weakened.
A team led by Professor Margaret Morris, Head of Pharmacology at the School of Medical Sciences, UNSW Australia, observed that rats given a two-week diet that included daily access to cafeteria foods such as pie, dumplings, cookies and cake gained ten percent in body weight and displayed significant behavioral changes. They became indifferent in their food choices and lost their natural preference for novelty.
The tests had young male rats associate different sound cues with different flavors of sugar water: one cue for cherry and one for grape. Healthy rats accustomed to a good diet would ignore cues related to a flavor they had recently overindulged in. After two weeks on the junk food diet, however, this built-in dietary control was less apparent, and the rats stopped avoiding the sound that advertised the over-familiar taste – a habit that persisted even after a healthy diet was resumed.
It is thought that a diet of junk food causes changes in the reward circuitry of the rats’ brains, such as the orbitofrontal cortex, an area responsible for decision-making. The brain’s reward responses are similar in all mammals, so the assumption is that a regular diet of junk food in humans would likewise impair our self-control and make it increasingly difficult to say no to harmful foods.
“The interesting thing about this finding is that if the same thing happens in humans, eating junk food may change our responses to signals associated with food rewards,” says Professor Morris. “It’s like you’ve just had ice cream for lunch, yet you still go and eat more when you hear the ice cream van come by.”
These findings bring additional concern to the well-publicized problems of obesity, which results in the deaths of at least 2.8 million people worldwide every year and contributes to a series of chronic diseases.
In addition, Dr. Amy Reichelt, UNSW postdoctoral associate and lead author of the paper, published in the open-access journal Frontiers in Psychology, suggests that, “As the global obesity epidemic intensifies, advertisements may have a greater effect on people who are overweight and make snacks like chocolate bars harder to resist.”