Mammals almost went extinct with the dinosaurs, study finds
Written By: Chuck Bednar
Brian Galloway
The same asteroid responsible for killing the dinosaurs 66 million years ago also very nearly wiped out Cretaceous period mammals, killing more than 90 percent of the warm-blooded creatures, according to new research published in the Journal of Evolutionary Biology.
In the study, lead author Dr. Nick Longrich of the Milner Centre for Evolution at the University of Bath’s Department of Biology & Biochemistry and his colleagues found that at least 93% of all known mammal species became extinct across the Cretaceous-Paleogene (K-Pg) boundary – the time marking the end of the Mesozoic Era and the start of the Cenozoic Era.
Specifically, they found that only four of 59 known species survived the impact event, far fewer than experts previously believed. However, their analysis also revealed that mammals rebounded far more quickly than experts had originally thought, and that local diversity levels in North America experienced a “remarkably rapid” recovery less than 300,000 years later.
“Because mammals did so well after the extinction, we have tended to assume that it didn’t hit them as hard,” Dr. Longrich explained in a press release. “However our analysis shows that the mammals were hit harder than most groups of animals, such as lizards, turtles, crocodilians, but they proved to be far more adaptable in the aftermath.”
The meteor that killed the dinosaurs almost destroyed mammal life on Earth. Credit: Thinkstock
Fossil record biased towards surviving species, authors note
As part of their research, the study authors reviewed the published fossil record from western North America, from two million years before the K-Pg boundary to 300,000 years after the asteroid impact, comparing species diversity before and after the event.
They found a correlation between the survival rates of the mammals and their geographic range, size, and abundance. In other words, species that were common and covered larger areas wound up being far more likely to avoid extinction as the result of the asteroid impact. This resulted in a sampling artifact in which rare creatures wound up being both more vulnerable to extinction and less likely to be recovered, they explained in their paper.
“The species that are most vulnerable to extinction are the rare ones, and because they are rare, their fossils are less likely to be found. The species that tend to survive are more common, so we tend to find them,” Dr. Longrich said. “The fossil record is biased in favor of the species that survived. As bad as things looked before, including more data shows the extinction was more severe than previously believed.”
Since the asteroid would have wiped out most types of plants and animals, those lucky few that managed to survive would have likely fed on insects that ate dead plants and animals. With the food supply so limited, only the smaller creatures would have survived, making it likely that no species larger than a cat would have escaped extinction in the wake of the impact.
‘Explosion of diversity’ witnessed in years following asteroid impact
Yet mammals rebounded far more quickly than experts had previously believed: they not only regained lost species diversity in a relatively short time but exceeded their earlier numbers, doubling the count of pre-extinction species in just 300,000 years.
This recovery, Dr. Longrich said, occurred in different ways in different places. For instance, species that emerged in Montana ended up being distinct from those in other, nearby states, he and his colleagues said. This rapid regional increase in both diversity and disparity is unusual in geographically restricted studies, they added.
“You might expect to see the same few survivors all across the continent,” Dr. Longrich said. “But that’s not what we found. After this extinction event, there was an explosion of diversity, and it was driven by having different evolutionary experiments going on simultaneously in different locations. This may have helped drive the recovery.”
“With so many different species evolving in different directions in different parts of the world, evolution was more likely to stumble across new evolutionary paths,” he continued, adding that “it wasn’t low extinction rates, but the ability to recover and adapt in the aftermath that led the mammals to take over.”
Researchers develop new ultra-thin, flexible solar cells
Written By: Chuck Bednar
Brian Galloway
A team of engineers at the Gwangju Institute of Science and Technology in South Korea has reportedly developed a new kind of ultra-thin, bendable solar cell that could be used to power fitness trackers, smart glasses, and other kinds of wearable technology.
In fact, their photovoltaic technology is said to be flexible enough to wrap around the average pencil, developer Jongho Lee and his colleagues explained in a statement. Their breakthrough, which is reported in detail in Monday’s issue of the journal Applied Physics Letters, utilizes a special technique that allows it to be thin and flexible while requiring fewer materials.
The solar cell is approximately one micrometer thick, thinner than a typical human hair. In contrast, standard photovoltaics are usually several hundred times thicker, and even other kinds of thin cells are at least twice as thick. The thinness makes the material easier to flex because there is less material far from the central (neutral) plane, where bending strain is greatest, than in thicker photovoltaic sheets.
Lee’s team constructed the solar cells from a semiconducting material called gallium arsenide, and stamped them directly onto a flexible substrate without using an adhesive, as doing so would have increased the thickness of the material.
Photovoltaics could be used to power next-gen wearable technology
These cells were then “cold welded” to the electrode on the substrate by applying pressure at a temperature of 338 degrees Fahrenheit (170 degrees Celsius) and melting a temporary adhesive known as photoresist onto the top of the newly-assembled unit.
Eventually, the layer of photoresist was peeled away, leaving a direct metal-to-metal bond that also served to reflect stray photons back onto the solar cells. Efficiency tests revealed that the cells were comparable to other, thicker photovoltaics, while also being able to wrap around an object with a radius as small as 1.4 millimeters (0.055 inches).
In addition, Lee’s team analyzed the cells and found that they experience just one-quarter the strain of similar solar cells that are 3.5 micrometers thick. These new, thin photovoltaics are “less fragile under bending, but perform similarly or even slightly better,” Lee said. While other teams of researchers have reported success with solar cells roughly one micrometer thick, the new cells are assembled using a novel method.
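The strain figures quoted above are consistent with the standard thin-sheet bending approximation, in which peak surface strain is roughly thickness divided by twice the bending radius. The formula and numbers below are illustrative back-of-the-envelope checks, not calculations from the paper itself:

```python
# Peak surface strain of a thin sheet bent to radius R: eps ~= t / (2 * R).
# This is the generic thin-film bending approximation, used here only to
# sanity-check the figures reported in the article.

def bending_strain(thickness_m, radius_m):
    """Approximate peak surface strain of a sheet bent to a given radius."""
    return thickness_m / (2 * radius_m)

R = 1.4e-3  # the 1.4 mm bending radius reported in the article

thin = bending_strain(1.0e-6, R)   # ~1 micrometer cell
thick = bending_strain(3.5e-6, R)  # 3.5 micrometer comparison cell

print(f"1.0 um cell strain: {thin:.5f}")
print(f"3.5 um cell strain: {thick:.5f}")
print(f"ratio: {thin / thick:.2f}")  # ~0.29, close to the ~25% figure
```

Because strain scales linearly with thickness at a fixed radius, a cell roughly 3.5 times thinner sees roughly 3.5 times less strain, which matches the quarter-strain comparison in the article.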
Rather than using the etching process to remove the entire substrate, the GIST-led team instead uses transfer printing to develop extremely flexible photovoltaics while using a smaller quantity of materials, the researchers explained. The new cells are so thin and flexible that they could be integrated into the frames of smart glasses or the fabrics used to make wearable technology, they added.
Blind Mexican catfish species spotted in the US for the first time
Written By: Chuck Bednar
Brian Galloway
For the first time, a rare type of eyeless catfish native to Mexico has been spotted in the US, as a team of researchers from the University of Texas at Austin identified the creature swimming in a limestone cave at the Amistad National Recreation Area near the city of Del Rio.
Known as the Mexican blindcat (Prietella phreatophila), these endangered fish are typically less than three inches long and live in areas supported by the Edwards-Trinity Aquifer underlying the Rio Grande basin in Texas and Coahuila, UT-Austin ichthyology curator Dean Hendrickson and his colleagues explained Friday in a statement.
In May, Hendrickson’s team found two of the catfish in the limestone cave, and their discovery supports the belief that the Texas and Mexico portions of the aquifer are connected by water-filled caves located under the Rio Grande. While there have been rumored sightings of the species in Texas for decades, this is the first time that such observations have been confirmed.
The two catfish, which have since been relocated to the San Antonio Zoo, “look just like the ones from Mexico,” the ichthyologist said. It is the third species of blind catfish to be identified in the US, joining the toothless blindcat (Trogloglanis pattersoni) and the widemouth blindcat (Satan eurystomus). All three species have only been spotted in Texas.
Elusive creature captured after decades of reported sightings
First described in 1954 when it was found in wells and springs in northern Mexico, the Mexican blindcat was listed both as an endangered species by the Mexican government and as a foreign endangered species by the US Fish and Wildlife Service. For years, Hendrickson and his fellow researchers have worked to find additional populations on both sides of the border.
Jack Johnson, a National Park Service resource manager at Amistad, reported seeing some of the pinkish-white colored, slow-moving creatures during the spring of 2015. After several months of searching, he, Zara Environmental LLC biologist Peter Sprouse and a team of researchers finally once again spotted the blind catfish last month.
“Cave-dwelling animals are fascinating in that they have lost many of the characteristics we are familiar with in surface animals, such as eyes, pigmentation for camouflage, and speed,” Sprouse said. “They have found an ecological niche where none of those things are needed, and in there they have evolved extra-sensory abilities to succeed in total darkness.”
“Aquifer systems like the one that supports this rare fish are also the lifeblood of human populations and face threats from contamination and over-pumping of groundwater,” added Johnson. “The health of rare and endangered species like this fish at Amistad can help indicate the overall health of the aquifer and water resources upon which many people depend.”
While the fish have been transferred to the zoo, they are not yet on display to the public, said vice president of conservation and research Danté Fenolio. For now, they will be housed in a facility specially designed for cave and aquifer species with the goal of keeping them “safe and healthy,” Fenolio said. “The fact that the zoo can participate now and house these very special catfish demonstrates the zoo’s commitment to the conservation of creatures that live in groundwater.”
New research shows the ‘Gospel of Jesus’s Wife’ is a fake
Written By: Chuck Bednar
Brian Galloway
A papyrus unveiled in 2012 suggesting that Jesus Christ had been married is most likely fake, the Harvard University professor of divinity who first unveiled the manuscript and who defended the authenticity of the document for years conceded in an interview earlier this week.
According to LiveScience, the document was given to Dr. Karen King, the first woman to be appointed Hollis Professor of Divinity at Harvard as well as the author of several books on Christianity and Gnosticism, by an unidentified source. The manuscript, which had been written in ancient Coptic script, contained a passage apparently referring to Jesus’ wife.
The papyrus almost immediately received much media attention and scrutiny from scholars and members of the religious laity alike, many of whom disputed its authenticity. In April 2014, a series of test results published in the Harvard Theological Review supported its authenticity, but later research, published in 2015 in the Cambridge University journal New Testament Studies, concluded that the text had been partially copied from an online translation of the Gospel of Thomas.
Despite the dispute, King had long maintained that the document was authentic. That changed this week, when The Atlantic identified the source of the papyrus as an Egyptologist with a background in Coptic who also dabbled in making pornographic films starring his own wife; in an interview with the magazine, King admitted that the disclosure “tips the balance towards forgery.”
The long, strange story of the supposedly ancient papyrus
The so-called Gospel of Jesus’s Wife was acquired by King from the man who has since been identified as Walter Fritz, along with a photocopy of a signed sales contract, in December 2011. However, as the Harvard professor explained to reporters, she eventually realized that she knew little about the then-anonymous source, with whom she had exchanged emails and met only once.
Fritz, as it turned out, was a resident of North Port, Florida who studied at the Free University’s Egyptology institute and had formally studied the Coptic language. He told King that he was a devoted family man who was independently wealthy and enjoyed trips to Disney World, but he actually made pornographic films starring his own wife, a woman who according to The Atlantic had authored a book of “universal truths” and claimed to channel the voices of angels.
According to the Daily Mail, a note written and signed by Fritz has surfaced in which he said that he was “the sole owner of a papyrus fragment… which was named ‘Gospel of Jesus’s Wife’” and that he guaranteed that neither he nor any third party had “forged, altered, or manipulated the fragment and/or its inscription in any way since it was acquired by me.”
Fritz claimed to have purchased the papyrus along with other documents from a man known as Hans-Ulrich Laukamp, the owner of ACMB-American Corporation for Milling and Boreworks in Venice, Florida, in 1999. While it is true that the two men were co-workers at the company, Laukamp’s stepson, René Ernest, told LiveScience in 2014 that Laukamp did not sell the manuscript to Fritz and was not even interested in antiquities.
The website also said that it found evidence of forgery in a Greek text offered by an art company founded by Fritz in 1995, and a letter he provided to King claiming that the Gospel papyrus had been analyzed by Free University of Berlin professor Peter Munro and his colleagues was also a fake. In light of this and other evidence, King has admitted that Fritz lied to her, but said that she would not be convinced that the document itself is fake without additional testing.
New lizard species found in the Dominican Republic
Written By: Chuck Bednar
Brian Galloway
Hidden in plain sight in the middle of the most highly trafficked island in the Caribbean, a new type of chameleon-like lizard has become the first new species of anole found in the Dominican Republic in decades, researchers from the University of Toronto revealed on Friday.
The new species, which was dubbed Anolis landestoyi in honor of the naturalist who first saw and snapped a photograph of this particular Greater Antillean anole, could help explain why different islands are home to distinct but similar-looking groups of lizards, the researchers explained.
Many types of Greater Antillean anoles have counterparts on other islands, Luke Mahler of the university’s Department of Ecology & Evolutionary Biology and the lead author of a new study (published in the journal The American Naturalist) detailing the findings, said in a press release.
This phenomenon is known as replicated adaptive radiation, and occurs when related types of creatures evolving on different islands diversify into similar sets of species occupying the same ecological niches, the researchers said. Most Greater Antillean anoles have close matches living on other Caribbean islands, but as many as one-fifth of them do not.
One group believed to have been an exception to this rule was the Cuban anoles of the Chamaeleolis clade. Scientists had thought that these large, slow-moving creatures, which more closely resemble chameleons than typical anoles, were unique to Cuba. The discovery of Anolis landestoyi suggests otherwise.
Credit: University of Toronto
Discovery could shine a light on conservation issues in the region
Despite living in the Dominican Republic, the Anolis landestoyi is “ecomorphologically similar” to the Cuban anole, the study authors wrote. Both types of lizards have short limbs and tails, and both appear to favor relatively narrow perches, strengthening the longstanding theory that lizard communities can evolve to be nearly identical, despite living on different islands.
The new species was originally spotted by local Dominican naturalist Miguel Landestoy, who contacted Mahler on multiple occasions with images of increasing quality showing the species. After receiving one particular batch, Mahler said, “I thought, ‘I need to buy a plane ticket.’”
“Our immediate thought was that this looks like something that’s supposed to be in Cuba, not in Hispaniola — the island that Haiti and the Dominican Republic share. We haven’t really seen any completely new species here since the early 1980s,” he added. “Like the discovery of a missing puzzle piece, Anolis landestoyi clarifies our view of replicated adaptive radiation in anoles.”
Mahler said that the new discovery supports the belief that island-based ecosystems evolve in a surprisingly predictable way, but unfortunately, the news isn’t all good. The Anolis landestoyi is already an at-risk species, as illegal deforestation has confined the creature to a tiny habitat in one small area of the western Dominican Republic. Mahler is hopeful that the discovery of the species will help shed new light on the conservation issues facing this part of the world.
In May, the probe sent back compositional data confirming that Hydra’s surface was dominated by nearly pristine water ice, confirming hints detected by scientists in photographs showing that the satellite had a highly reflective surface. On Thursday, NASA announced that newly obtained spectral observations of Nix showed that its surface has a similar ice-rich composition.
The new data was obtained by New Horizons’ Linear Etalon Imaging Spectral Array (LEISA) instrument, a near-infrared imaging spectrometer, and could provide new insight into how the distant dwarf planet’s satellite system originally formed. Furthermore, the agency added that it will help mission scientists piece together details of Pluto’s quartet of smaller outer moons (a group which also includes the moons Styx and Kerberos).
“Pluto’s small satellites probably all formed out of the cloud of debris created by the impact of a small planet onto a young Pluto,” explained New Horizons Project Scientist Hal Weaver with the Johns Hopkins University Applied Physics Laboratory (APL) in Maryland. “So we would expect them all to be made of similar material.”
Nix is a tiny moon, but it could still contain water ice. Credit: NASA
Observations shed new light on similarities, differences of Pluto’s moons
Weaver and his colleagues have collected spectral data from three of Pluto’s moons thus far (Nix, Hydra, and its largest moon, Charon) and compared the results to pure water ice. They found that the surface of Nix displayed the deepest water-ice features among the three satellites.
The deeper features discovered on Nix are indicative of water that is relatively pure and coarse-grained, because the shape and depth of water-ice absorption depend on the size and purity of the icy grains on the surface, NASA explained. Scattering from smaller, less pure icy grains is likely to cause spectral absorption features to become shallower and washed out.
“The strong signature of water-ice absorption on the surfaces of all three satellites adds weight to this scenario,” Weaver said, adding that while he and his colleagues did not collect spectra of Styx or Kerberos – Pluto’s two smallest moons – “their high reflectivity argues that they are also likely to have water-ice surfaces.”
Captured on July 14, 2015, the Nix observations may have answered some questions, but they raised others, such as why Nix and Hydra appear to have different surface ice textures despite being similar in size. In addition, researchers are puzzled as to why Hydra’s reflectivity at visible wavelengths is higher than Nix’s, even though Nix’s surface appears to be icier and should theoretically be more reflective at those wavelengths.
Scientists find oldest Homo erectus footprints ever discovered
Written By: Chuck Bednar
Brian Galloway
Newly discovered fossilized footprints left behind by Homo erectus, the extinct ancestor of modern humans, are believed to be approximately 800,000 years old and are potentially the oldest such remains ever discovered by researchers, according to published reports.
Discovered in the deserts of southeastern Eritrea by a team of local and Italian paleontologists, the prints were left behind in the sands of what was then an ancient lake, The Local and the ANSA news agency reported this week. The footprints have been described as virtually indistinguishable from those of a modern man.
“Their age is yet to be confirmed with certainty,” Alfredo Coppa, an archaeologist from Rome’s Sapienza University and the leader of the expedition, told The Local. He added that footprints such as these are “extremely rare” and that they would “reveal a lot about the evolution of man, because they provide vital information about our ancestors’ gait and locomotion.”
Coppa and his colleagues found the fossils in a 26 square meter stone slab, and reported that the shape indicates that the prints had been filled with water after formation but before they dried out and became buried beneath the sands of what is now an extremely arid desert region.
Findings provide insight into human ancestors, surrounding ecosystem
Working with researchers from the National Museum of Eritrea, the Sapienza University team discovered the footprints at the Aalad-Amo site in eastern Eritrea. As the paleontologists pointed out to ANSA, the toes and the sole of the foot indicate that Homo erectus was an efficient runner and walker.
In addition, the prints run in a north-south direction, matching those left behind by now-extinct antelopes. That discovery, combined with the fact that the prints were preserved in a sediment of hardened sand, suggests that the area was once a lake surrounded by grasslands. It is the first time that footprints from the mid-Pleistocene era have ever been discovered, the news agency said.
Homo erectus lived from 1.9 million to 70,000 years ago and is believed to have originated in Africa before migrating throughout Europe and Asia. While the newfound remains could well be the oldest footprints of this species ever discovered, they are far from the oldest hominid prints to be unearthed, according to The Local. That honor belongs to Australopithecus footprints found in Tanzania in 1976, as those remains were dated to be roughly 3.8 million years old.
Coppa told the newspaper that his team planned to “carry out more digs in the area, which has so far turned up the fossilized remains of five or six different Homo erectus specimens.”
What’s in the future of water treatment? Flint’s crisis sparks an investigation
Written By: Abbey Hull
Brian Galloway
After months of news coverage of Flint, Michigan’s devastating state of emergency over lead in its drinking water, USA Today covered President Obama’s trip to Flint with this bit of relief: “if you’re using a filter, if you’re installing it, then Flint water at this point is drinkable.”
Flint’s switch to the Flint River as the city’s main water source made the water corrosive enough to leach lead into the water system. Scientists have been hard at work figuring out how to address the medical issues arising in Flint and improve water treatment systems around the world.
Where has this crisis taken us? We looked into the reality of the ever-evolving process of water treatment, and the science behind its past, present, and future.
The quality of our water is a major part of our country’s future. (Credit: Thinkstock)
How is our water treated?
Water treatment depends on the source—things change depending on whether the water is taken from lakes, rivers, or groundwater.
“For a river, the main thing would be to remove the solids from the water, or what we call turbidity, that have run off the surface,” said Dr. David Sabatini, David Ross Boyd Professor at the University of Oklahoma, Associate Director of the Institute for Applied Surfactant Research, and Director of Water Technologies for Emerging Regions (WaTER) Center.
“We need to destabilize those particles so that they can clump together and be big enough to settle out of the water, so systems use coagulation and flocculation, which are processes to make those particles clump together more easily and settle out more readily.”
Water treatment goes through multiple stages of filtration before it’s cleared to be used again. (Credit: Thinkstock)
Next, the water goes through sedimentation, which removes larger particles from the water, and later a sand filter, which removes the majority of what’s left behind.
“Those particles can make the water look cloudy or murky, which makes it undesirable, but they also may harbor pathogens or microorganisms which make us sick,” Sabatini explained, noting cholera and typhoid fever as just a few of the water-borne pathogens these particles could be carrying.
For those pathogens, Sabatini described the next treatment phase as a disinfection process using chemicals such as chlorine to remove pathogens. “Chlorine, similar to what we put into a swimming pool to make sure the pool water doesn’t harbor pathogens of concern, or different forms of chlorine together with other molecules like ammonia, would be traditional disinfectants,” he explained.
When treating ground water or river water, scientists must also address localized contaminants dissolving into the source water from the geology surrounding it. “There may be iron or manganese that needs to be removed for aesthetic reasons, and there may be things like arsenic, chromium, or fluoride that are in the water which may cause health concerns,” he said.
As a current resident of the Oklahoma City area, Sabatini used his home as an example: “in Central Oklahoma we have naturally occurring arsenic in our groundwater, and since arsenic is a health concern, it needs to be removed from the ground water for health reasons more so than aesthetic reasons.”
So how do we know when it’s safe to drink?
Never fear: the Environmental Protection Agency (EPA) sets standards and health-based requirements that treated water must meet before it can be considered public drinking water.
“The EPA has two levels of standards—primary and secondary,” Sabatini explained. “Primary is based on health concerns (pathogens, arsenic, etc). The water is required to have no more than a certain level of those compounds in it for health-based reasons, so it needs to be treated.”
“Secondary standards for things like iron are not a health-based concern, but are standards for what the general populace would find acceptable and desirable. Since it’s not health-based, secondary standards are not enforced to the same level and degree as the primary standards are,” he continued.
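The two-tier system can be pictured as a simple screening check. In the sketch below, the limit values are EPA’s published figures for a few well-known contaminants (arsenic MCL, lead action level, iron and manganese secondary standards), but the function and the sample readings are purely illustrative:

```python
# Primary standards are enforceable health limits; secondary standards are
# non-enforceable aesthetic guidelines. All limits in mg/L.
PRIMARY = {"arsenic": 0.010, "lead": 0.015}    # EPA MCL / action level
SECONDARY = {"iron": 0.3, "manganese": 0.05}   # EPA secondary standards

def screen(sample):
    """Split a {contaminant: mg/L} reading into health violations
    and aesthetic exceedances. Illustrative helper, not EPA software."""
    violations = [c for c, v in sample.items()
                  if c in PRIMARY and v > PRIMARY[c]]
    aesthetic = [c for c, v in sample.items()
                 if c in SECONDARY and v > SECONDARY[c]]
    return violations, aesthetic

# Made-up sample readings for illustration.
sample = {"arsenic": 0.004, "lead": 0.020, "iron": 0.5}
violations, aesthetic = screen(sample)
print(violations)  # lead exceeds its enforceable action level
print(aesthetic)   # iron exceeds only the aesthetic guideline
```

The point of the split is visible in the output: the same sample can simultaneously trigger an enforceable health violation (lead) and a merely cosmetic exceedance (iron).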
So for Flint, how exactly did the lead get into their drinking water?
The issue surrounding Flint’s water crisis is twofold. “Part of the issue was that the water went from being what we called depositing to being corrosive,” Sabatini continued. “The water was laying down a layer of hardness on the pipe, and over time the pipe was getting slightly smaller and smaller because of the deposits coming out of the water and the hardness sticking to the pipes. Corrosive water attacks the pipe, slowly dissolving the deposits.”
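The depositing-versus-corrosive balance Sabatini describes is commonly quantified in the water industry with the Langelier Saturation Index (LSI), which compares the water’s actual pH with the pH at which calcium carbonate would neither deposit nor dissolve: positive values indicate scale-depositing water, negative values corrosive water. The index is not mentioned in the article; the sketch below uses one common simplified empirical correlation, and the input values are illustrative assumptions, not Flint measurements:

```python
import math

def langelier_index(ph, temp_c, tds_mg_l, ca_hardness, alkalinity):
    """Simplified Langelier Saturation Index.

    Positive -> scale-depositing water; negative -> corrosive water.
    Hardness and alkalinity are in mg/L as CaCO3. Uses a common
    empirical correlation; illustrative only.
    """
    a = (math.log10(tds_mg_l) - 1) / 10
    b = -13.12 * math.log10(temp_c + 273.15) + 34.55
    c = math.log10(ca_hardness) - 0.4
    d = math.log10(alkalinity)
    ph_saturation = (9.3 + a + b) - (c + d)
    return ph - ph_saturation

# Illustrative values only -- not measurements from Flint.
lsi = langelier_index(ph=7.5, temp_c=20, tds_mg_l=300,
                      ca_hardness=150, alkalinity=120)
print(round(lsi, 2))  # negative here: mildly corrosive water
```

A utility switching sources, as Flint did, would watch an index like this flip sign: water that once laid down a protective mineral scale on pipe walls instead begins dissolving it, exposing the lead beneath.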
Beyond the water’s corrosiveness, the second part of the issue derives from the origin of the lead. The lead was not a function of the source water, Sabatini explained; rather, it was a function of the pipes, either in the pipe material itself or in the lead solder used to join the pipes.
Lead in the pipes started to seep into Flint’s water supply, causing much of the damage seen throughout the community. (Credit: Thinkstock)
“There was a time before we realized the lead, either the lead in the pipe or the lead soldering used to join the pipes, was a health concern,” he said. “When lead paint was first made, we didn’t realize the health risks, but over time those health risks became apparent. Over time we’ve encountered and addressed the lead paint issue, and, in the same way, pipes with lead that previously were used for water systems turned out to create an apparent health issue.”
Many older cities in America and abroad used lead pipes when they were first installed. In 2004 The Washington Post found evidence of high lead levels in the DC water supply. It’s very possible that there are many “ticking time bombs” under our cities that could be the next Flint.
Does this mean water pollutants have caused issues in the past?
Before scientists realized the importance of treating water, there were many outbreaks of water-borne disease. In mid-1800s London and Chicago, historians record large portions of each city’s population dying of water-borne diseases like cholera and typhoid fever, which were only discovered decades later to be transmitted through water.
“The number one reason we treat water is to prevent deaths,” Sabatini explained. “In London and Chicago in the 1800s we didn’t even know water was a cause for the outbreak, so we weren’t treating the water for those things at the time.”
London’s growth in the 1800s caused many health issues, including water safety concerns. (Credit: Museum of London)
Modern water supplies aren’t free from health issues. “In Milwaukee, WI there was an outbreak of cryptosporidium, another pathogen that was discovered more recently,” he explained. “As opposed to 150 years ago, we’re talking about 25 years ago. Cryptosporidium was just an emerging pathogen of concern that hadn’t been widespread or prevalent, and all of a sudden it was making its appearance, causing scientists to realize that while chlorine is a good disinfectant, we need to use a UV light or ozone to tackle some of these newer pathogens of concern.”
And outside of the United States, the issue is still prevalent in many parts of the world. In countries with infant mortality rates of 8-15 percent, where one in seven children dies before the age of five due to water-borne diseases, the threat of untreated water is still very much a part of everyday life.
“We don’t realize from which we’ve come,” Sabatini said when speaking about his work with the WaTER Center. “There’s people living in that very situation today, and we have a moral obligation to help those less fortunate than ourselves.”
So what’s next for domestic and foreign water treatment?
Flint was not the only city with a water problem—either in the United States or in the world—which proves one thing: water treatment is an ever-evolving process.
Nanobots could be part of the future of water treatment. (Credit: Thinkstock)
At the WaTER Center, students, professors, and scientists are researching the future of water treatment for a world in which water is becoming a limited resource through both advanced technologies and nature itself.
“With the new field of nanotechnology on the rise globally, we’re looking into ways where we can take advantage of what we learned in nanotechnology as it applies to drinking water treatment,” Sabatini explained. “At the same time, one of the things we’re doing at the WaTER Center is working in Africa and Southeast Asia where people are living at a dollar-per-day without the resources we have and looking for low-cost, in-country materials we might be able to use that might provide a more cost-effective manner of water treatment more accessible to their income level.
“Our hope is that as we’re looking at these cheaper solutions, nature may teach us some things that could help us treat our water more effectively. We’re operating at both ends of the spectrum: let’s take advantage of the most recent advanced technologies, but let’s also, in resource-constrained settings, look at less expensive techniques that might help guide our high-tech approaches.”
While the Flint water crisis appears to be coming to a close, scientists understand the need to continue learning about and adapting water treatment to avoid these issues in the future.
In one such case, researchers are looking into water reuse as a more economically-priced solution to the world’s current water needs.
Developing new water access and treatment techniques is critical for progress in developing countries. (Credit: Thinkstock)
“In addition to withdrawing water from one source and using it, treating it, and discharging it back into the environment, the idea is to capture some of that water we treated and treat it to an even higher level to reuse it,” he concluded.
“In cities looking to supplement their current water source with a new water source, rather than moving water from 20, 30, 50, or 100 miles away, which can be quite expensive, they can look closer to home and say, ‘well, here’s this water supply right here already in our grasp. Instead of waving goodbye to it, maybe we should look into using it.’”
With a resource this vital to our survival and existence at stake, the need for both water treatment and solutions to our water-sourcing concerns is becoming an increasingly relevant conversation among scientists.
“We tend to take water for granted because it’s so readily available,” Sabatini continued. “As water becomes a more limited resource, we’re going to have to resort to improved technologies and increased treatment costs. I think we need to prepare ourselves to pay more and more for water, but if you really put it into context, as vital as water is to our very survival and existence, we should be willing to pay more for it than we are.”
But even as Flint has gained nationwide support for its crisis and now has the ability to drink its water once more (President Obama himself took a sip of it during his speech there), domestic and foreign scientists are reminded of how much more research is needed to continue providing the world with one of its most valuable resources.
“It’s very exciting at the WaTER Center to be able to help prepare students and partner with students in addressing those issues and creating a better future for those less fortunate and for all of us,” Sabatini concluded.
“Rising water lifts all ships, so if America can help other countries improve, bring stability, and increase peace around the world, we are fulfilling our global responsibilities.”
—–
Image credit: Thinkstock
Oxygen found in one of the universe’s most distant galaxies
Written By: Chuck Bednar
Brian Galloway
Scientists have discovered gases containing oxygen in one of the most distant galaxies in the universe, and their discovery – reported this week in the journal Science – could provide new insight into the nature of the first stars and the birth of the first-ever galaxies.
Their target was SXDF-NB1006-2, a galaxy discovered in 2012 and confirmed at the time to be the most distant galaxy yet detected. Their work began following a series of large-scale numerical simulations of galaxy formation, which indicated that the ALMA telescope would be able to detect light from ionized oxygen there.
A color composite image of a portion of the Subaru XMM-Newton Deep Survey Field. The red galaxy at the center of the image is the most distant galaxy, SXDF-NB1006-2. Credit: NAOJ
While the modern universe has an abundance of different chemical elements, during the earliest stages of the universe, only hot ionized gas filled with electrons and ions of hydrogen and helium existed until the universe started to cool approximately 400,000 years after the Big Bang. At that point, electrons and hydrogen ions combined, forming neutral hydrogen atoms.
Several hundred million years later, the first generation of stars formed, emitting radiation that was strong enough to once again cause hydrogen to be ionized and synthesizing heavier elements such as oxygen and carbon, the study authors explained. Analyzing heavy elements from this era can provide researchers with insight into what caused reionization and led to the formation of the first stars and galaxies, but historically, they have been extremely difficult to study.
Findings could open the door to find the cause of cosmic reionization
Analysis of very young heavy elements, the researchers explained in a statement, requires that astronomers discover objects as far away from Earth as possible. Only the most powerful types of telescopes are capable of such feats; fortunately, ALMA happens to be one such telescope.
In 2014, prior to the official start of their analysis, Inoue, Tamura, Matsuo and their colleagues conducted a series of simulations confirming that ALMA would be able to detect light from ionized oxygen in SXDF-NB1006-2, which is 13.1 billion light years from Earth. The subsequent observations showed that the telescope array was indeed detecting light from doubly-ionized oxygen, and enabled the team to calculate that the galaxy contained just a fraction of the oxygen found in our sun.
“Our results showed this galaxy contains one tenth of oxygen found in our Sun,” co-author and Kavli Institute for the Physics and Mathematics of the Universe professor Naoki Yoshida said in a statement. “But the small abundance is expected because the universe was still young and had a short history of star formation at that time.”
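In the logarithmic notation astronomers conventionally use for abundances (our illustration here; the study is quoted above only as "one tenth"), a tenth of the solar oxygen abundance corresponds to

```latex
[\mathrm{O}/\mathrm{H}] = \log_{10}\frac{(\mathrm{O}/\mathrm{H})_{\mathrm{gal}}}{(\mathrm{O}/\mathrm{H})_{\odot}} = \log_{10}(0.1) = -1.
```

In other words, the galaxy sits a full order of magnitude below the sun in oxygen enrichment, consistent with its very short prior history of star formation.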
The lack of dust discovered around SXDF-NB1006-2 suggests that the overwhelming majority of the gas located there is highly ionized, and that the galaxy could be “a prototype of the light sources responsible for the cosmic reionization,” said Inoue. Tamura added that the findings are “the first step to understanding what kind of objects caused cosmic reionization.”
The first of the ExoMars program is on the way to the red planet. Credit: ESA
The first part of the joint ESA-Russian Federal Space Agency (Roscosmos) program, the ExoMars 2016 mission that includes both the Trace Gas Orbiter (TGO) and the Schiaparelli lander, has already launched successfully and is expected to arrive at Mars this October.
However, it was determined in May that the second part of the mission, which will involve a Russian launch vehicle and lander and a European carrier module and rover, would not meet its intended 2018 launch target. The launch window had to be pushed back until July 2020, a move which left many to speculate that the rover may never actually reach the Red Planet.
During a meeting in Paris this week, however, the principals involved with the ExoMars mission decided to draw up a new project schedule, according to BBC News. They also pledged an extra $83 million (€77m/£59m) to continue development of the ExoMars vehicles until they can find a long-term solution to the program’s ongoing funding issues.
Revised schedule will put ExoMars on target for a 2020 launch
The goal, ESA’s director of human spaceflight and robotic exploration Dr. David Parker told the UK news outlet, is to have all of the scheduling and funding issues resolved prior to a December meeting of ministers, and to work to ensure that they reach their new 2020 target.
“The challenges were set out to member states, and… they were asked the fundamental questions: how important is this project; do you want to continue?” he noted. “The very, very clear message came back that this remains a high priority for scientific and technological reasons.”
The rover, which was initially approved as a concept more than a decade ago, will be designed to drill up to two meters (around 6.5 feet) into the surface of Mars to hunt for evidence of biological life. Teams have repeatedly missed deadlines, however, most recently because of a hardware delay that forced this latest postponement.
“The first critical step has been to re-establish a realistic technical schedule and contingency, both on the European side and with Russian colleagues at Roscosmos [Russia’s space agency] and their contractor Lavochkin. That’s very good news,” Dr. Parker said.
ExoMars’ first stage launched in March of this year. It recently returned this image of Mars. Credit: ESA
Cash flow injection will help, but funding issues remain
ExoMars is also over budget, with the rover and satellite now expected to cost about €1.56 billion ($1.76b/£1.24b) instead of the €1.25 billion ($1.41b/£0.99b) originally budgeted back in 2012. ESA member states France, Germany, Italy and the UK have committed additional funds to ensure that scientists and engineers can continue their work.
However, the BBC explained that this is only a temporary solution while the overall financial uncertainty over the project is addressed. Dr. Parker declined the media outlet’s request to go into detail about the funding shortfall, citing sensitive and ongoing negotiations over the final manufacturing prices with unidentified parties.
“There will now be a set of steps to demonstrate progress on the rover mission, and we will propose to member states thereafter a complete financial package to finish the job for when they meet in Lucerne (at the Council of Ministers) in December,” he told BBC News.
“These will be subscriptions within the exploration program,” Dr. Parker added. “In the meantime, it’s full speed ahead on the technical side and I have to say, from what I’ve seen, the teams are working very hard; they’re very dedicated. All the trips they make back and forth to Russia and to speak with their suppliers – it’s an impressive machine.”
Researchers develop new way to ‘see’ inside black holes
Written By: Chuck Bednar
Brian Galloway
The nature of black holes, astrophysical objects so dense and gravitationally strong that they swallow up all matter and light that ventures too close to them, prevents scientists from seeing their interiors and forces them to use mathematics to “observe” their depths.
Now, researchers from Towson University and Johns Hopkins University have come up with a breakthrough that will provide scientists with a new way to sneak a peek through a black hole’s event horizon and see what lies beneath. While their new approach doesn’t actually let us see all that goes on inside a black hole, it could shed new light on their internal structures.
Currently, scientists use special coordinate systems to illustrate the structure of a rotating black hole, the authors explained, but this can distort the results based on the particular set of coordinates selected by the observer. The most accurate way to depict a black hole’s properties, they noted, is by using a series of mathematical quantities known as invariants, which have the same value regardless of which coordinates are used by the researchers.
By plotting and computing all of the independent curvature invariants of rotating, charged black holes for the first time, the authors said that they were able to discover a more complex, intricate and beautiful structure within black holes than previously thought. They presented their findings this week at the 228th meeting of the American Astronomical Society in San Diego.
Black holes are some of the strangest objects in astronomy.
Technique could help explain why not all black holes have jets
As Kielan Wilcomb, who presented the findings, and colleagues James Overduin and Richard C. Henry also reported in a paper currently available online, their method also revealed tremendous variations in curvature between different regions of the internal structure of a black hole.
In a statement, the trio explained that the findings are timely in light of the recent detections of the very first gravitational waves by the LIGO and Virgo observatories. Those spacetime ripples, produced by a pair of distant black holes colliding, are now known to exist, but as Overduin and his colleagues said, since information cannot escape a black hole, we can’t look inside them.
Since scientists can only explore these phenomena mathematically, the authors of this newly published study set out to find the best way to visualize black hole interiors using such methods. In general, they said that most black holes (those with mass, spin and electric charge) possess a total of 17 curvature invariants, but due to mathematical relationships with one another, only five of them are truly independent.
The simplest of those quantities, the Ricci scalar, is at the core of general relativity theory, while a second, the Weyl invariant, plays a comparable role in an alternative theory called conformal gravity. This invariant is equivalent to a third, the Kretschmann scalar, in black holes that have no electric charge (which is expected to be the case most of the time).
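For reference, the two most familiar of these invariants have standard general-relativity definitions (textbook formulas, not drawn from the paper itself): the Ricci scalar is the full contraction of the Ricci tensor, and the Kretschmann scalar is the full contraction of the Riemann tensor with itself,

```latex
R = g^{ab} R_{ab}, \qquad K = R_{abcd} R^{abcd}.
```

For an uncharged, non-rotating (Schwarzschild) black hole of mass $M$, the Kretschmann scalar reduces to $K = 48\,G^2 M^2 / (c^4 r^6)$: it stays finite at the event horizon but diverges at the central singularity, which is why coordinate-independent quantities like these can distinguish genuine curvature from mere artifacts of the chosen coordinate system.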
Fluctuations in this quantity’s value near the singularity inside a spinning black hole include regions of negative curvature that are usually associated with gravitomagnetism, a phenomenon involving the gravitational analog of ordinary magnetism, the authors noted. Gravitomagnetic fields, fed by rotational energy, are believed to generate the jets emanating from the polar areas of some supermassive black holes, and improved curvature mapping inside the event horizon may help scientists understand why some galaxies have these jets and others do not.
—–
Image credit: NASA, ESA, and D. Coe, J. Anderson, and R. van der Marel
Night frog species uses a never-before-seen mating method
Written By: Chuck Bednar
Brian Galloway
An elusive, nocturnal species of frog native to the Western Ghats region of India has a few never-before-seen tricks up its sleeve when it comes time to get intimate, as scientists have found that the creatures use a previously unknown mating position when actively reproducing.
Sathyabhama Das Biju, an amphibian expert from the University of Delhi, and an international team of colleagues were studying Bombay night frogs (Nyctibatrachus humayuni) in the forests of the Western Ghats for weeks before finding out that the amphibians utilized a sexual position previously unknown to science, adding to the six already known to be used by frogs.
According to Science News, video footage shows the male frog positioning himself loosely on the back of the female, with his limbs resting on the ground, tree branches or leaves, in a position known as a dorsal straddle. He then releases his sperm onto her back and leaves; only then does the female lay her eggs, and the sperm trickles down her back and legs onto them.
To date, Bombay night frogs are the only one of the more than 6,600 known frog species to use this technique, The Guardian noted. In most cases, the male directly grasps the waist, armpits or head of his mate; in this case, however, the male steadied himself using nearby branches or leaves before releasing his sperm directly onto the female’s back.
‘A new chapter in the frog Kama Sutra’
When asked why the frog might have started using this unusual mating technique, Biju told The Guardian, “We have no idea.” Nonetheless, he and his colleagues have detailed their findings in the June 14 edition of the journal PeerJ, calling the discovery “striking” and “unique.”
However, the unusual reproductive approach is not the only noteworthy thing that the research team discovered about the Bombay night frogs. They also found that the females produce calls – making this only the 25th frog species in which females are known to call – that males fight over territory from time to time, and that their eggs are preyed upon by snakes, a first among Indian frogs.
Of course, their sexual behaviors are the most noteworthy discovery, though the breakthrough did not come easily for Biju. He originally spotted two Bombay night frogs getting intimate in 2002, but since the species is so elusive, for years he was only able to catch a quick glimpse every so often. It wasn’t until eight years later that he launched a concerted effort to observe their mating habits, and even then, it took more than 40 days for his team to obtain the footage they sought.
“It has been a wonderful experience to observe the entire breeding sequence of this unique frog. It’s like watching a scripted event,” Biju told National Geographic via email. “So far, this mating position is known only in Bombay night frogs,” he added. Noah Gordon, a herpetologist at the University of Evansville who was not involved in the study, stated that the discovery “creates a new chapter in the frog Kama Sutra.”
Gravitational waves detected for a second (and maybe third) time
Written By: Chuck Bednar
Brian Galloway
Three months after the detection of the first ever gravitational waves, scientists were once again able to record the signal of a space-time distortion, according to new research presented Wednesday during the annual meeting of the American Astronomical Society.
The discovery, which was also detailed in a paper published in the latest edition of the journal Physical Review Letters, once again involved detecting a minute gravitational wave signal given off by a pair of black holes close to the point of merging – a phenomenon known as coalescence, the French National Centre for Scientific Research (CNRS) revealed in a statement.
The signal was detected on December 26 by scientists from the US-based LIGO and the Italian-based Virgo collaborations, confirming that these phenomena are more frequent than experts previously believed and increasing the odds that additional gravitational waves will be detected once the two now-upgraded observatories resume operations later this year.
The signal from the second set of gravitational waves is said to have been weaker than the original one, but it has nonetheless been confirmed with a confidence level exceeding 99.99999%, the CNRS said. Furthermore, additional analysis of the LIGO data has revealed a possible third detection of coalescing black holes on October 12, 2015, though scientists note that this event carries with it a lower degree of certainty than the other two.
Gravitational waves carry information about their origins and about the nature of gravity that cannot otherwise be obtained. Credit: NASA
Findings will help scientists determine the origins of binary black holes
The discovery will help scientists better understand pairs of black holes – objects so dense that neither matter nor light can escape them, the research center explained. Black holes are the final evolutionary stage of the most massive stars, and in some cases pairs of them form, orbiting one another while releasing energy in the form of gravitational waves.
Eventually, the process suddenly accelerates and the two black holes merge into one, which was the case with the black holes that were the source of gravitational waves detected last December. Observations of these two objects enabled researchers to determine that they had masses between 8 and 14 times that of the sun, while the black holes involved in the original observations back in September 2015 had solar masses of 29 and 36, according to the CNRS.
Because the two black holes were lighter than those in the first detection, they moved towards each other at slower speeds, meaning that the signal detected by LIGO and Virgo lasted much longer than the original one – several seconds compared with less than half a second. This enabled scientists to conduct a different and complementary series of tests than were performed for the September event.
December’s detection, which took place 1.4 billion light years from Earth, and other events like it could ultimately make it possible for researchers to determine the origin of binary black holes – whether they were twin stars that both happened to turn into black holes, or whether the gravitational pull of one captured the other. To determine this, experts will need to collect additional samples, and they are hopeful that the aforementioned upgrades to LIGO and Virgo will make that possible after the instruments go back online in the fall of 2016.
—–
Image credit: Ossokine and A. Buonanno, Max Planck Institute for Gravitational Physics, and the Simulating eXtreme Spacetime (SXS) project
First mammal species driven to extinction by climate change
Written By: Chuck Bednar
Brian Galloway
An isolated rodent that lived on a single island located off the coast of Australia has become the first mammal to go extinct as a direct result of global climate change, a team of researchers from the University of Queensland has confirmed in a report released this week.
In their new study, researchers at the university and colleagues from Queensland’s Department of Environment and Heritage Protection jointly reported that Bramble Cay melomys (Melomys rubicola) had indeed vanished from their home in the eastern Torres Strait of the Great Barrier Reef, making it the first mammalian species to succumb to the Earth’s warming climate.
According to National Geographic, the melomys had last been spotted by a fisherman back in 2009, and attempts to trap one five years later proved unsuccessful, prompting speculation among scientists that the creature had become extinct. The long-tailed, whiskered rodent, which was the only mammal endemic to the Great Barrier Reef, was purportedly wiped out because of rising sea levels linked to manmade climate change, the New York Times added.
“The key factor responsible for the death of the Bramble Cay melomys is almost certainly high tides and surging seawater, which has traveled inland across the island,” Queensland researcher and study co-author Luke Leung told the newspaper via telephone. “The seawater has destroyed the animal’s habitat and food source.”
“We knew something had to be first, but this is still stunning news,” added Lee Hannah, a senior scientist for climate change biology with Conservation International who told Nat Geo that up to one-fifth of all species could be at risk of extinction due to climate change-related loss of habitat. “Certainly some species will benefit from climate change, but most will see reduced ranges.”
While gone from Bramble Cay, the creature may live on elsewhere
Also called the mosaic-tailed rat, the melomys species was named after the island which it called home, a small atoll that was at most 10 feet (3 meters) above sea level and was named, what else, Bramble Cay, Nat Geo noted. They were first spotted in 1845, and as of 1978, several hundred of the creatures lived on the island, according to the website.
Since 1998, however, rising sea levels have slowly started swallowing up the land, reducing the amount of land sitting above high tide from 9.8 acres (4 hectares) to 6.2 acres (2.5 hectares). As a result, less vegetation has been growing there, leaving the rodents with just three percent of their original habitat. Now, scientists have recommended changing the species’ status from endangered to extinct, which would make it the first mammalian casualty of global warming.
University of California, Berkeley professor and climate change expert Anthony D. Barnosky called the loss of the melomys “a cogent example of how climate change provides the coup de grâce to already critically endangered species,” telling the Times that it “is significant because it illustrates how the human-caused extinction process works in real time.”
There is a glimmer of hope for the creature, though: according to the CBC, while the report does indicate that the species is most likely extinct, as it was believed to have existed only on Bramble Cay, the authors do indicate that there is a slight chance that the species – or a close relative – are still alive and well in the Fly River delta of Papua New Guinea, which is thought to be the region from which the ancestors of the Bramble Cay melomys originated.
People who are suffering from fibromyalgia could find lots of relief in a day at the spa. Many different services have the potential to help lessen fibromyalgia symptoms, so the spa could be much more than just a relaxing place. It could help ease your symptoms with regular visits! Massages, facials and saunas all have different benefits that are worth trying out if you suffer. Here are just a few services that have shown to help some fibromyalgia sufferers.
Massage therapy
With massage therapy, communication with your therapist is key, considering that each person suffering from fibromyalgia experiences different pain levels. Some people may benefit from a light massage, while others can tolerate much more pressure and kneading. Also, some people may find relief from hot stimuli, others cold. Massage increases blood flow and flexibility in muscles that are otherwise restricted in people with fibromyalgia. In addition, aromatherapy along with a massage can help the individual relax inside and out, and playing music during the massage can have a similar effect. Before getting a massage, make sure to speak with the professional about your specific symptoms so they can best help relieve your pain most effectively.
Hydrotherapy
Hydrotherapy can prove to be helpful to some fibromyalgia sufferers, but it is noted that clients should be sure to stay hydrated through the duration of the session. Also, services such as saunas and steam rooms can be beneficial, as they increase immunity and relax the muscles. Natural substances such as water can be utilized to treat painful symptoms, depending on the severity.
Facial
Clients can also benefit from receiving facials. Those who take medication might notice differences in their skin, so a facial treatment can be helpful with those negative side effects from the medication. Also, muscles in the face might be the source of some pain, so a gentle massage may provide relief. In general, facials are a source of rejuvenation and they just feel good. A little confidence boost never hurt anyone!
Whoever said a trip to the spa is selfish? If any of these services sound like they could help with your symptoms, make an appointment at your spa today!
Atmospheric CO2 levels set to surpass 400 ppm in 2016
Written By: Chuck Bednar
Brian Galloway
A spike in atmospheric carbon dioxide due to the El Niño climate phenomenon will likely cause levels of the greenhouse gas to surpass a significant threshold this year, averaging more than 400 parts per million for the entire year for the first time ever, a new study has revealed.
Researchers from the UK Met Office, who published their findings online Monday in the journal Nature Climate Change, reported that, based on emissions data, sea surface temperature data, and climate models, the recent El Niño event resulted in a spike in CO2 concentrations this year.
As a result, carbon dioxide concentrations as measured at a monitoring station on the Mauna Loa volcano in Hawaii are likely to remain above 400 ppm for all of 2016, and for the foreseeable future as well, according to reports published this week by BBC News and The Guardian.
The last time that CO2 levels were regularly above 400 ppm was between three and five million years ago, before the first modern humans set foot on the Earth, the study authors noted. In an average year, carbon dioxide concentrations increase by about 2 ppm, but this year a record-setting increase of 3.15 ppm (±0.53 ppm) is anticipated by the Met Office.
Credit: OCO-2 /JPL-Caltech/NASA
So this would be a good time to panic, right?
However, as Richard Betts at the Met Office’s Hadley Centre in Exeter told BBC News, there is no reason to be overly concerned by the findings. “There’s nothing magical about this number,” he explained to the UK news outlet. “We don’t expect anything suddenly to happen. It’s just an interesting milestone that reminds us of our ongoing influence on the climate system.”
Using a seasonal climate model to predict sea-surface temperatures in the Eastern Pacific, the region typically most affected by El Niño, Betts and his fellow researchers determined that the mean CO2 level for 2016 would be 404.45 ppm, with a May high of 407.7 ppm and a September low of 401.48 ppm. The CO2 impact of El Niño has increased by 25 percent since 1997-98, when the phenomenon last hit, due to a corresponding increase in manmade emissions.
Betts also warned that we should not expect to see sub-400 ppm levels again anytime soon. As he explained to The Guardian, “Once you have passed that barrier, it takes a long time for CO2 to be removed from the atmosphere by natural processes. Even if we cut emissions, we wouldn’t see concentrations coming down for a long time.”
“We have said goodbye to measurements below 400 ppm at Mauna Loa,” he added. “We could be passing above 450 ppm in roughly 20 years. If we start to reduce our global emissions now, we could delay that moment but it is still looking like a challenge to stay below 450 ppm. If we carry on as we are going, we could pass 450 ppm even sooner than 20 years, according to the IPCC [Intergovernmental Panel on Climate Change] scenarios.”
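Betts’ 20-year figure squares with simple arithmetic (a rough check of ours, not a Met Office calculation): if concentrations keep climbing at roughly 2.5 ppm per year,

```latex
\frac{450\ \mathrm{ppm} - 400\ \mathrm{ppm}}{2.5\ \mathrm{ppm\,yr^{-1}}} = 20\ \mathrm{years},
```

so sustained emissions growth would shorten that window, while cuts would extend it.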
UK researcher discovers new 200 million year old marine reptile
Written By: Chuck Bednar
Brian Galloway
A fossil previously discovered at a quarry in Nottinghamshire, England has been identified as a new species of ichthyosaur – an extinct marine reptile which dates back some 200 million years to the earliest part of the Jurassic Period – according to new research published Monday.
The fossil was identified by Dean Lomax, a paleontologist at the University of Manchester who explained that the creature is one of just a few ichthyosaur species dating back to that era, which makes the discovery of this dolphin-like creature very significant to the scientific community.
It also marks the first time that a species this ancient has been located in the UK in a place other than Dorset and Somerset, he noted. Lomax examined the specimen while visiting the New Walk Museum in Leicester, which acquired the partially-complete skeleton of the creature in 1951.
“When I first saw this specimen, I knew it was unusual,” he said in a statement. “It displays features in the bones – especially in the coracoid (part of the pectoral girdle) – that I had not seen before in Jurassic ichthyosaurs anywhere in the world. The specimen had never been published, so this rather unusual individual had been awaiting detailed examination.”
UK’s first Early Jurassic ichthyosaur specimen in 30 years
The creature was represented by a skull, pectoral bones, pelvic bones, limbs, ribs, and vertebrae, the university said. While relatively complete, the remains were described as “disorderly,” as it appeared as though the carcass had settled into the seabed prior to becoming fossilized.
“Parts of the skeleton had previously been on long-term loan to ichthyosaur specialist and former museum curator Dr. Robert Appleby, and had only returned to the museum in 2004 after he sadly passed away,” said Dr. Mark Evans, paleontologist and curator at New Walk Museum. “He was clearly intrigued by the specimen, and although he worked on it for many years, he had identified it as a previously known species but never published his findings.”
Lomax has dubbed the new species Wahlisaurus massarae in honor of paleontologists Bill Wahl and Judy Massare, whom he credits with first inspiring him to study ichthyosaurs. The creature is said to be the first new genus of ichthyosaur from the British Early Jurassic to be described since 1986, and a paper detailing it appears this week in the Journal of Systematic Palaeontology.
While thousands of ichthyosaur specimens from this period are known and have been studied extensively over the years, the new creature identified by Lomax is from a location where it is practically unheard of for researchers to find ichthyosaurs, meaning that any new discovery may be scientifically significant, the university said. It could also shed new light on the diversity and geological distribution of the creatures, particularly during the Early Jurassic period.
Dark matter could be made of primordial black holes, study finds
Written By: Chuck Bednar
Brian Galloway
Scientists are becoming increasingly convinced that dark matter, the elusive, nearly invisible material that makes up most of the matter in the universe, could be made up of black holes that were formed during the very first moments after the Big Bang, according to new reports.
Last month, research published in the Astrophysical Journal Letters proposed that these black holes could help explain the gravitational waves detected last year by the Laser Interferometer Gravitational-Wave Observatory (LIGO), as well as other observations of the early universe.
If that study, which was authored by Alexander Kashlinsky, an astrophysicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, is correct, it would suggest that dark matter may be made up of black holes formed within the first second of the universe’s existence, and that things may have evolved quite differently in those earliest moments than experts initially thought.
Investigating if the LIGO black holes were primordial
On Monday, Kashlinsky, who used the Spitzer Space Telescope to examine the background glow of infrared light in the universe (known as the cosmic infrared background, or CIB) explained his findings to Space.com, telling the website that he and his colleagues wanted to focus on the early universe, at a point beyond that which telescopes can detect individual galaxies.
“Suppose you look at New York from afar. You cannot see individual lampposts or buildings, but you can see this cumulative diffuse light that they produce,” he said. By removing the light from the known galaxies in the universe, his team was still able to detect light originating from the background glow of light sources from more than 13 billion years ago.
That was in 2007. Six years later, Kashlinsky and his colleagues used NASA’s Chandra X-ray Observatory to re-examine this background glow in the X-ray portion of the spectrum, and found that the patterns within the two different sets of observations matched perfectly. Only one source would be capable of producing this kind of light in both the infrared and X-ray bands, he noted: black holes. At the time, however, he did not realize that they could be very ancient.
LIGO’s detection of the first ever gravitational waves – cosmic ripples in the very fabric of space-time – in September 2015 changed that, Space.com said. Those waves were produced by two black holes colliding, and while that alone was significant because it marked the first time scientists were able to directly detect black holes, Johns Hopkins University astronomer Simeon Bird suggested that the black holes observed by LIGO might be primordial.
Primordial black hole proposal would explain excess background radiation
Unlike most black holes, primordial ones are not formed when a dead star collapses, the website explained. Rather, they formed shortly after the Big Bang, during a time when sound waves were prevalent throughout the universe. Regions where those sound waves were at their densest could have collapsed to form these black holes, which are currently still just hypothetical.
Inspired by Bird’s suggestion regarding the LIGO black holes, Kashlinsky investigated how these primordial black holes would influence the evolution of the universe. During the first 500 million years after the Big Bang, dark matter collapsed into masses known as halos, which then went on to act as the gravitational seeds that allowed matter to accumulate and form into stars and galaxies.
If that dark matter was made up of primordial black holes, however, there would have been far more halos, which Kashlinsky told Space.com would help explain the excess cosmic infrared background and the excess cosmic X-ray background his team observed during their research. The first stars that formed within the halos would give off the infrared glow, while gas heated as it fell onto the black holes would have emitted the X-rays.
“Everything fits together remarkably well,” Kashlinsky told the website on Monday. Previously, in a statement, he said, “Depending on the mechanism at work, primordial black holes could have properties very similar to what LIGO detected. If we assume this is the case, that LIGO caught a merger of black holes formed in the early universe, we can look at the consequences this has on our understanding of how the cosmos ultimately evolved.”
NASA scientists have been trying for decades to uncover seasonal patterns in the dust storms on Mars, and their efforts have at last proven fruitful – but the new data that helped pave the way for their success came from a somewhat unexpected source: atmospheric temperature readings.
As the US space agency announced late last week, the researchers recently abandoned efforts to find the patterns from images showing dust grains, and instead reviewed six recent Martian years of temperature records collected by the Mars Reconnaissance Orbiter and other probes.
They discovered that there was a pattern of three types of large, regional dust storms occurring at approximately the same time during spring and summer in the planet’s southern hemisphere each year. Their findings have been published online by the journal Geophysical Research Letters.
“When we look at the temperature structure instead of the visible dust, we finally see some regularity in the large dust storms,” said David Kass from the NASA Jet Propulsion Laboratory (JPL) in Pasadena, California, instrument scientist for the MRO’s Mars Climate Sounder and the lead author of the new study.
“Recognizing a pattern in the occurrence of regional dust storms is a step toward understanding the fundamental atmospheric properties controlling them,” he continued. “We still have much to learn, but this gives us a valuable opening.”
The three types of storms (and how they were discovered)
As Kass and his colleagues explained, there is a direct link between atmospheric temperature and the dust content of Martian winds. Since dust absorbs sunlight, air filled with dust grains tends to be hotter than clear air. In some instances, the differences are extremely pronounced, with a more than 63-degree Fahrenheit (35-degree Celsius) difference in air temperature.
Additionally, this heating was found to have an impact on the global wind distribution, which is capable of producing a downward motion that warms the air beyond the dust-heated regions, the researchers noted. This enables NASA to use temperature observations to monitor both the direct and indirect effects of the dust storms on the atmosphere, which can improve the safety of future Mars missions and help predict how localized events impact global weather on the planet.
The MRO has been observing the climate on Mars since 2006, and before that, the data had been collected by the Thermal Emission Spectrometer on the Mars Global Surveyor, which studied the atmospheric temperature from 1997 through 2006. By analyzing temperature data representative of a broad layer located 16 miles (25 km) above the planet’s surface – high enough to be affected by regional storms more than by local storms – Kass and his colleagues were able to detect three different types of large regional storms, which they dubbed types A, B and C.
Type A storms, they explained, move from the north into the southern hemisphere during that part of the planet’s spring. When that happens, the sunlight on the dust warms the atmosphere, causing the winds to become stronger, lifting up more dust and further expanding the area and the vertical reach of the storm.
Conversely, Type B storms begin near the south pole shortly before the start of summer in the southern part of the planet. These storms often originate from winds generated at the edge of the retreating south-polar carbon dioxide ice cap, and may contribute to a regional haze, the authors said. Finally, Type C storms begin after Type B storms end, originating in the north during the winter season there and moving into the south similar to Type A storms. These storms are more varied in strength, peak temperature and duration than the other two, according to NASA.
SpaceX chief details ‘mind blowing’ plans for a cargo route to Mars
Written By: Chuck Bednar
Brian Galloway
SpaceX founder and CEO Elon Musk provided some details regarding plans to send his company’s Dragon capsule to Mars as early as 2018, telling reporters last week that he wants to establish “cargo routes” similar to those used by the explorers of old.
According to AFP and Washington Post reports, Musk compared the journey to Mars to the long, arduous voyages undertaken by the likes of da Gama and Magellan in centuries past, noting that the trip to the Red Planet would undoubtedly be “hard, risky, dangerous, and difficult.”
That said, Musk added that he was confident that people would be eager to sign up because “as with the establishment of the English colonies, there are people who love that. They want to be the pioneers.” Before that becomes possible, however, he said that the first step will be to set up a supply chain so that those individuals will always have enough food and equipment.
“Essentially what we’re saying is we’re establishing a cargo route to Mars,” he told the Post last Friday. “It’s a regular cargo route. You can count on it. It’s going to happen every 26 months.”
“Like a train leaving the station. And if scientists around the world know that they can count on that, and it’s going to be inexpensive, relatively speaking compared to anything in the past, then they will plan accordingly and come up with a lot of great experiments,” Musk added.
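The 26-month cadence Musk describes matches the Earth-Mars synodic period – how often the two planets return to the same relative alignment, opening a favorable launch window. A quick sketch of that arithmetic (a generic orbital-mechanics estimate, not a SpaceX calculation):

```python
# Illustrative check of the ~26-month launch cadence Musk describes.
# The synodic period is how often Earth and Mars return to the same
# relative alignment; the period values below are standard approximations.
T_EARTH = 365.25   # Earth's orbital period, days
T_MARS = 687.0     # Mars's orbital period, days

# 1/T_syn = 1/T_earth - 1/T_mars  (the inner planet laps the outer one)
synodic_days = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
synodic_months = synodic_days / 30.44  # average month length, days

print(f"Synodic period: {synodic_days:.0f} days = {synodic_months:.1f} months")
```

The result is roughly 780 days, or about 26 months, matching the cadence quoted above.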
‘Mind-blowing’ Mars colony project details coming in September
Musk announced on Twitter a few months ago his desire to send the unmanned Dragon capsule to Mars within the next two years or so, and his company is in the midst of a privately-funded program that hopes to send experiments and people to the Red Planet by the early 2020s. While NASA is not funding SpaceX’s program, they will provide “technical support” for the 2018 mission.
While the US space agency is working towards manned missions in the 2030s, SpaceX hopes to send two payloads worth of experiments to Mars during a 2020 launch window. By that time, Musk said, several firms should be interested in conducting experiments on the Red Planet. In 2022, he hopes to launch what will be the first mission towards establishing a Mars colony.
Beyond that, Musk declined to provide specific details on his proposed missions, telling the Post that he would do so at a September conference. However, the newspaper pointed out that he was “clearly excited about the prospect and could barely contain himself.” The SpaceX chief himself said the project would be “mind blowing… It’s going to be really great.”
“I do want to emphasize this is not about sending a few people to Mars. It’s about having an architecture that would enable the creation of a self-sustaining city on Mars with the objective of being a multi-planet species and a true space-faring civilization and one day being out there among the stars,” he added. “It’s dangerous and probably people will die – and they’ll know that… [but] they’ll pave the way, and ultimately it will be very safe to go to Mars, and it will be very comfortable. But that will be many years in the future.”
Archaeologists discover 2,500-year-old naval base near Athens
Written By: Chuck Bednar
Brian Galloway
A team of archaeologists from Greece and Denmark has discovered a 2,500-year-old naval base at the port of Piraeus, near Athens, which was one of the largest defense structures of its kind in the ancient world, according to a press release issued late last week.
The base was established in 493 BCE, and was difficult to find because it had been buried under the waters of the Mounichia fishing and yachting harbor for thousands of years, the authors said. It had massive harbor fortifications, they added, and room for hundreds of triremes – galleys that had three rows of oars and were used as warships by the ancient Greeks.
“Some days, underwater visibility in the harbor was as low as 20 centimeters so we have had extremely poor working conditions,” said University of Copenhagen archaeologist Bjørn Lovén, who led the expedition as part of the Zea Harbour Project, an extensive excavation of Athenian naval facilities in the Piraeus area from 2001 through 2012.
“However, we did finally locate the remains and excavated six ship-sheds that were used to protect the Greek ships from shipworm and from drying when they were not needed on the sea,” he added. “And the sheds were monumental: the foundations under the columns were 1.4 by 1.4 meters, and the sheds themselves were 7-8 meters tall and 50 meters long.”
Newfound base played a key role in the Battle of Salamis
The newfound naval base would have played a key role in defending ancient Greece, the study authors explained. For example, it likely provided about two-thirds of the ships that participated in the Battle of Salamis, a naval conflict which pitted an alliance of Greek city-states against the Persian Empire in 480 BCE and which resulted in a decisive win for the Greeks.
“Based on pottery and carbon-14 dating from a worked piece of wood found inside the foundations of a colonnade, we dated the ship-sheds to around 520-480 BCE, or shortly thereafter,” Lovén said. “This means that these sheds probably housed the ships which were deployed to fight the Persian invasion forces during the famous Battle of Salamis in 480 BCE.”
“This naval battle was a pivotal event in Greek history; it is difficult to predict what would have happened if the Greek fleet had lost at Salamis, but it is clear that a Persian victory would have had immense consequences for subsequent cultural and social developments in Europe,” he added. “The victory at Salamis rightly echoes through history and awakens awe and inspiration around the world today.”
Which superhero is the strongest? Study tries to answer heated debate
Written By: Chuck Bednar
Brian Galloway
Batman vs. Superman. Captain America vs. Iron Man. This year’s cinema is packed with superhero showdowns, and moviegoers are eagerly watching to find out which of their favorite caped crusaders would have the grit and gadgets to come out on top.
Now, students at the University of Leicester have taken a more analytical approach to the whole discussion, performing a series of simple calculations to determine just how feasible the powers of our favorite heroes are – just in time for Superman Day on Sunday, June 12, 2016.
As they reported in a series of papers appearing in the Journal of Physics Special Topics and the Journal of Interdisciplinary Science Topics, the researchers found that DC’s iconic Man of Steel, Superman, is the best-equipped superhero of all, followed closely by Marvel’s Wolverine, Mystique, and Thor, based on an evaluation of each individual’s set of special powers.
While not the most destructive comic book character – the Leicester team gave that honor to Black Bolt of the Inhumans – Superman was found to have the widest array of superpowers in his arsenal and the fewest limitations, making him the most likely victor if all of the DC and Marvel characters analyzed were to throw down in a battle royal, they said in a statement.
Why Superman? And what about Batman?
Clearly, the decision was not made lightly, as the researchers spent seven years, from 2009 to 2016, analyzing various superheroes, and found that Superman not only benefited from having high-density muscle tissue on Earth, but could also use a powerful Super Flare attack capable of a calculated stored solar energy output of 7.07×10⁵ joules per second.
Following closely behind are a pair of X-Men, Wolverine and Mystique. The former benefitted greatly from his regenerative abilities and the latter scored highly for her use of genetic alteration in order to disguise herself. Thor, the superhero version of the Norse thunder god, finished fourth thanks to his high energy efficiency and explosive powers, they explained.
Fans of Batman may be distressed to learn that the Gotham Knight scored last, earning the title of the least well-equipped superhero. As the authors explained, while his cape has proven to be a tremendous asset when gliding in the movies and in comic books, in reality, Batman would reach velocities of nearly 80 km/h while doing so, which would likely prove fatal when he attempted to land – meaning that his efforts to save the day would probably end quite tragically.
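The fatal-landing conclusion is easy to sanity-check with basic kinematics. The sketch below computes the kinetic energy a glider would carry at the quoted speed of roughly 80 km/h; the 90 kg mass is an assumed figure for Batman plus suit, not a number taken from the Leicester papers:

```python
# Rough sanity check on the Batman glide finding: kinetic energy at the
# quoted ~80 km/h landing speed. The 90 kg mass is an assumed value
# (Batman plus suit), not a figure from the Leicester study.
speed_kmh = 80.0
mass_kg = 90.0

speed_ms = speed_kmh / 3.6                    # convert km/h to m/s
kinetic_energy = 0.5 * mass_kg * speed_ms**2  # KE = (1/2) m v^2, joules

print(f"Landing speed: {speed_ms:.1f} m/s")
print(f"Kinetic energy on impact: {kinetic_energy / 1000:.1f} kJ")
```

That works out to roughly 22 kJ arriving all at once on landing, which supports the students’ grim assessment.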
As Dr. Cheryl Hurkett from the University of Leicester’s Centre for Interdisciplinary Science explained, the research is in good fun, and designed to improve the students’ scientific aptitude: “An important part of being a professional scientist… is the ability to make connections between the vast quantity of information students have at their command, and being able to utilize the knowledge and techniques they have previously mastered in a new or novel context.”
“I encourage them to be as creative as possible with their subject choices,” Dr. Hurkett added, “as long as they can back it up with hard scientific facts, theories and calculations!”
Evidence supports theory that humans evolved in grasslands
Written By: Chuck Bednar
Brian Galloway
Researchers from the Columbia University Lamont-Doherty Earth Observatory have discovered new evidence to support the theory that key human traits, including large brains and the ability to walk on two legs, evolved as our ancestors adapted to living in open grasslands.
Writing in a special human-evolution issue of the journal Proceedings of the National Academy of Sciences, postdoctoral research scientist Kevin Uno and his colleagues found a 24-million-year-old vegetation record buried deep within seabed sediments off the coast of eastern Africa.
This vegetation record is the longest and most complete discovered to date in the purported birthplace of humanity, modern-day Kenya and Ethiopia, the researchers noted in a statement. It also indicates that between 24 million and 10 million years ago, well before the first human ancestors arose, the region was dominated by woodlands with few grasses.
That all changed due to a dramatic shift in climate, and within a few million years’ time, grasses became dominant – a trend that continued throughout the entire course of human evolution, Uno and his colleagues said. As our ancestors adapted to these changes, they evolved physically, their diets became more flexible, and their social structures grew in complexity.
Life on the ancient grasslands made humans into what we are today.
Plant remains obtained through core drilling key to new findings
Genetic evidence suggests that early hominids first split from other apes between six and seven million years ago, and many scientists believe that it was the shift from dense forests to savannas in eastern Africa that served as the catalyst for their eventual development into modern humans. The new study indicates that the rise of grasslands had a tremendous impact on hominins.
“The entire evolution of our lineage has involved us living and working in or near grasslands. This now gives us a timeline for the development of those grasses, and tells us they were part of our evolution from the very beginning,” Uno said, adding that those savannas likely popped up in small patches at first, and were only one of several factors – including the ability to hunt in a more open landscape – that resulted in the physical and social advancement of our species.
Unlike previous studies, which collected scattered evidence in the form of pollen and chemical isotopes that were at most four million years old, the new study analyzed a series of sediments obtained through core drilling by a research ship working in the waters near northeastern Africa. These sediment cores contained tens of millions of years’ worth of chemicals from plants which grew on land but were later washed out to sea, where they collected and were preserved.
Filling in the details of grassland evolution in eastern Africa
By analyzing carbon-based chemicals called alkanes, which comprise the waxy outer parts of leaves and contain the fingerprints of different types of plants, the study authors determined that grasses started to arise roughly 10 million years ago, and their coverage area seemed to increase by seven to eight percent every one million years. By two to three million years ago, grasses had become the dominant form of vegetation in eastern Africa, and they remain so today.
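The reported expansion rate lends itself to a quick back-of-envelope extrapolation. The sketch below assumes roughly zero grass cover 10 million years ago and linear growth at the midpoint rate of 7.5 percentage points per million years; it is an illustration of the paper’s numbers, not a calculation from the study itself:

```python
# Back-of-envelope extrapolation of the reported grass expansion:
# near-zero cover ~10 million years ago, growing by ~7-8 percentage
# points of cover per million years (midpoint of 7.5 assumed here).
RATE_PER_MYR = 7.5   # percentage points gained per million years
START_MYA = 10.0     # grasses begin spreading, millions of years ago

for mya in (10, 7, 5, 3, 0):
    cover = (START_MYA - mya) * RATE_PER_MYR
    print(f"{mya} Mya: ~{cover:.0f}% grass cover")
```

Under these assumptions, cover crosses the 50% mark around three million years ago, consistent with the finding that grasses had become the dominant vegetation by two to three million years ago.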
The findings match previous chemical analyses from ancient herbivore teeth, which showed that the creatures started to switch to a more grass-rich diet approximately 10 million years ago, said Uno. Several million years later, the first hominins appeared, and by 3.8 million years ago, their tooth enamel indicates that they had developed a flexible diet that included food based on grasses – that is, the meat of creatures that ate grass, not grass itself.
“Lots of people have conjectured that grasslands had a central role in human evolution,” study co-author Peter deMenocal, a climate scientist at Lamont-Doherty, explained. “But everyone has been waffling about when those grasslands emerged and how widespread they were. This really helps answer the question.”
Smithsonian Institution anthropologist Richard Potts called the study “the very best examination and most compelling demonstration” of long-term grassland expansion, adding that “bipedality emerged as a way of combining walking on the ground and climbing trees; toolmaking expanded the adjustments to a much wider range of foods; brains are the quintessential organ of flexibility. Geographic expansion requires adaptability to change.”
New algorithm improves fluid interface dynamics calculations
Written By: Chuck Bednar
Brian Galloway
Scientists at the Lawrence Berkeley National Laboratory have devised a new framework that can more accurately resolve the Navier-Stokes equations, a system of equations used to predict how fluids will flow, according to a study published Friday in Science Advances.
The Navier-Stokes equations are based on the application of Newton’s second law to the motion of fluids, along with the assumption that the stress in fluid is the combination of a diffusing viscous term and a pressure term. Currently, they are used in a vast array of different fields, such as special effects for movies, industrial research, and engineering, lab officials explained.
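For reference, the incompressible form of the equations matches the description above: Newton’s second law applied to a fluid parcel, with the stress split into a viscous term and a pressure term. This is the standard textbook form, not notation taken from the Berkeley Lab paper:

```latex
% Momentum balance (Newton's second law per unit volume):
%   density x acceleration = pressure force + viscous force + body force
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
        + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}

% Incompressibility (mass conservation):
\nabla\cdot\mathbf{u} = 0
```

Here \(\mathbf{u}\) is the velocity field, \(p\) the pressure, \(\rho\) the density, \(\mu\) the viscosity, and \(\mathbf{f}\) any body force such as gravity; \(-\nabla p\) and \(\mu\,\nabla^{2}\mathbf{u}\) are the pressure term and the diffusing viscous term the lab describes.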
However, as they noted in a statement, some computational methods used to solve these complex mathematical problems are unable to accurately resolve intricate fluid dynamics occurring beside moving boundaries or surfaces, or how tiny structures influence the motion of those surfaces and the surrounding environment. Hence, the need to improve those computational methods.
Research makes it easier to determine speed, pressure of fluids
Enter Robert Saye, a research fellow in the Berkeley Lab mathematics group. He has come up with a new formulation of the Navier-Stokes equations that makes them easier to solve using numerical computation methods. Saye’s algorithms are capable of capturing both tiny features near evolving interfaces and the influence those structures have on distant dynamics.
“These algorithms can accurately resolve the intricate structures near the surfaces attached to the fluid motion,” he explained. “As a result, you can learn all sorts of interesting things about how the motion of the interface affects the global dynamics, which ultimately allows you to design better materials or optimize geometry for better efficiency.”
“For example, in a glass of champagne, the motion of the little gas bubbles depends crucially on boundary layers surrounding the bubbles,” Saye added. “These boundary layers need to be accurately resolved, otherwise you won’t see the slight zig-zag pattern that real bubbles take as they float to the top of the glass. This particular phenomenon is important in bubble aeration, a process used widely in industry to oxygenate liquids and transport materials in liquid chambers.”
His work will make it easier for researchers to use the Navier-Stokes equations to determine how quickly a fluid is moving in its environment, the amount of pressure it is under and what forces it exerts on its surroundings, the lab noted. Saye’s algorithm will also enable experts to more easily and accurately gain new insight into how each of these traits influence one another.
What is it about this method that makes it better?
As the lab explains, researchers have in the past attempted to come up with several different ways to simplify Navier-Stokes equations and their solutions, including one method in which liquids (and sometimes gases) are modeled as incompressible. Most of these methods are so-called low-order methods, Saye said, while his new technique is a high-order one.
“High-order methods are in some sense more accurate,” he explained. “One interpretation is that, for fixed computing resources, a high-order method results in more digits of accuracy compared to a low-order method. On the other hand, it is often the case that you only need a handful of digits of accuracy in your simulation. In this case, a high-order method requires less computing power, sometimes significantly less.”
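Saye’s point about accuracy per unit of computing can be illustrated with the simplest possible case: approximating a derivative with a first-order (low-order) versus a second-order (high-order) finite difference. This is a generic numerical-analysis demo, not code from the Berkeley Lab study:

```python
# Low-order vs high-order accuracy, illustrated on the derivative of
# sin(x) at x = 1. The forward difference converges like O(h); the
# central difference converges like O(h^2), so it gains digits of
# accuracy much faster as the step size h shrinks.
import math

def forward_diff(f, x, h):   # first-order accurate: error ~ O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):   # second-order accurate: error ~ O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)        # true derivative of sin at x = 1
for h in (1e-1, 1e-2, 1e-3):
    err_low = abs(forward_diff(math.sin, 1.0, h) - exact)
    err_high = abs(central_diff(math.sin, 1.0, h) - exact)
    print(f"h={h:g}  low-order error={err_low:.2e}  high-order error={err_high:.2e}")
```

Each tenfold reduction in h cuts the low-order error by about a factor of ten, but the high-order error by about a factor of one hundred – the same trade-off Saye describes: for a target accuracy, the high-order method needs far less computing.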
Also, low-order methods for fluid interface dynamics tend to introduce something known as a numerical boundary layer into the calculated results. These can lead to imperfections, limiting a scientist’s ability to closely examine and analyze the fluid dynamics next to the interface. When intricate dynamics are involved, when things move very quickly, or when the interface contains small features, high-order methods are needed, Saye said.
“I wanted to make these numerical algorithms significantly more accurate. When I thought about it that way, I realized that I needed a whole new technique to solve the equations,” he said. To do so, he applied gauge methods to the equations. “Gauge methods are about the freedom one has in choosing variables in the equations. So I essentially used these ideas to rewrite the Navier-Stokes equations in a way that is more amenable to developing very accurate simulation algorithms.”
Hawking-led team addresses black hole paradox in new study
Written By: Chuck Bednar
Brian Galloway
Does information completely disappear after entering a black hole or doesn’t it? That question has spurred much spirited debate amongst scientists, and now, renowned physicist Stephen Hawking has once again ventured into the debate to offer some clarification.
In a paper published online earlier this week by the journal Physical Review Letters, Hawking and colleagues Malcolm Perry and Andrew Strominger have weighed in on the so-called black hole information paradox, the notion that – contrary to the tenets of physics which state that no physical data can ever completely disappear – these gravitationally dense regions of spacetime consume everything around them, causing information to be lost forever.
According to Phys.org, Hawking first weighed in on the issue in the 1970s, when he discovered that some radiation (now known as Hawking radiation) can escape, but that this radiation does not adequately describe everything that is swallowed up by any given black hole. That raises the question: what happens to the rest of the information once the black hole dies? Solving this puzzle has proven to be most challenging to physicists over the last four decades.
In January, Hawking, Perry and Strominger proposed one possible solution to the issue centered around a series of quantum excitations they referred to as soft hairs. The trio proposed that these soft hairs formed a halo around the black hole and held the data pertaining to the things that had been consumed. Critics of the theory pointed out, however, that they had not explained how this soft hair and the black hole were able to exchange information.
Paradox is one step closer to being completely resolved
Based on Hawking’s findings, when a black hole consumes a part of a particle-antiparticle pair, half of that pair might escape, carrying a minute portion of the black hole’s energy along with it in the form of Hawking radiation, according to the New York Times and Business Insider.
That energy would gradually leak out of the black hole until the black hole itself eventually disappeared, leaving behind no remnants except said radiation. So what would happen to all of the information it had consumed during its lifespan when it died? Hawking’s team initially believed that the radiation coming out of the black hole after it fell apart would be random, and that the information about the material that it had consumed would be lost for good.
That violated one of the main laws of modern physics: that it should always be theoretically possible to reverse time and reconstruct events of the past, as the universe is supposed to keep track of the characteristics of different types of matter and antimatter, even if those objects end up being destroyed, the Times said. As such, their physical attributes should live forever.
Having reworked their calculations and found stronger evidence that black holes are surrounded by soft hair halos, Hawking, Perry and Strominger have published their new study, asserting that this halo is where the information of everything that fell into the black hole during its life span is recorded, similar to “the pixels on your iPhone or the wavy grooves in a vinyl record,” the Times said. That information is not necessarily preserved in its proper order, but it is there.
The trio “still has not addressed [these problems] completely,” Phys.org added, “but they have reworked the math and have found stronger evidence for the existence of soft hairs – if they can do the same for gravity, and show that all of the information is held in the soft hairs, rather than just some, it should greatly increase the chances that one day the paradox will be solved once and for all, offering relief for those who feared that the paradox might one day lead to having to toss out some of the most cherished theories in physics.”
The brain isn’t only source of autism symptoms, study finds
Written By: Chuck Bednar
John
For years, autism and neurodevelopmental disorders like it have been treated as though they were caused exclusively by issues with brain development, but new research published in the latest edition of the journal Cell suggests that there may be other factors at play.
According to senior author David Ginty, a professor of neurobiology at Harvard Medical School and a Howard Hughes Medical Institute investigator, and his colleagues, experiments conducted in mice revealed that some aspects of the disorder, including anxiety, touch perception, and some of the social abnormalities, are associated with issues elsewhere in the nervous system.
In fact, his team’s research has demonstrated that peripheral nerves found throughout the limbs, digits, and other parts of the body, which communicate sensory information to the brain, are also significant contributors to the symptoms associated with autism spectrum disorders (ASDs).
“An underlying assumption has been that ASD is solely a disease of the brain, but we’ve found that may not always be the case,” Ginty explained in a statement. “Advances in mouse genetics have made it possible for us to study genes linked to ASD by altering them only in certain types of nerve cells and studying the effects.”
Elevated sense of touch may explain some ASD-related behaviors
As part of their work, the study authors examined the effects of genetic mutations known to be linked to ASD in humans, including Mecp2 – mutations in which cause Rett syndrome, a disorder often associated with ASD – and Gabrb3. These genes are believed to be vital for nerve cells to function normally, and mutations in them have been linked to issues with synaptic function.
“Although we know about several genes associated with ASD, a challenge and a major goal has been to find where in the nervous system the problems occur,” explained Ginty. “By engineering mice that have these mutations only in their peripheral sensory neurons, which detect light touch stimuli acting on the skin, we’ve shown that mutations there are both necessary and sufficient for creating mice with an abnormal hypersensitivity to touch.”
He and his colleagues measured how mice reacted to touch stimuli, as well as their ability to tell the difference between the textures of different objects. Mice that had ASD gene mutations only in the sensory neurons were more sensitive to touch and could not discriminate between textures. They also had abnormalities in the transmission of neural impulses from touch-sensitive neurons in the skin and in the spinal cord neurons that relay touch-related signals to the brain.
Having established that mice with ASD-associated gene mutations had tactile perception issues, the scientists went on to evaluate their anxiety and social interaction levels by testing the mice to see how vigorously they would avoid being out in the open and how well (or poorly) they would interact with other mice. They found that the creatures that had autism-related mutations only in their peripheral sensory neurons had higher anxiety levels and interacted less with other mice.
“Mice with these ASD-associated gene mutations have a major defect in the ‘volume switch’ in their peripheral sensory neurons,” causing them to feel touch at a highly elevated level, explained first author Lauren Orefice. “An abnormal sense of touch is only one aspect of ASD, and while we don’t claim this explains all the pathologies seen in people, defects in touch processing may help to explain some of the behaviors observed in patients with ASD.”
New process fights global warming by turning CO2 into stone
Written By: Chuck Bednar
Brian Galloway
In what could be a major breakthrough for dealing with the planet’s climate change problem, a team of scientists captured a greenhouse gas and chemically changed it into a solid, and they did it far more quickly and efficiently than experts anticipated.
Working alongside engineers at the Hellisheidi power plant in Iceland, earth scientists from the US, UK, and elsewhere demonstrated the ability to convert carbon dioxide emissions into stone by pumping them into the Earth, mineralizing the majority of it in less than 24 months.
Their work could help alleviate concerns that captured CO2 stored underground might seep back into the atmosphere or, worse, explode. The technique they used is detailed in research published in the Friday, June 10 edition of the journal Science.
“We can pump down large amounts of CO2 and store it in a very safe way over a very short period of time,” study coauthor Martin Stute, a hydrologist at Columbia University’s Lamont-Doherty Earth Observatory, said in a statement. “In the future, we could think of using this for power plants in places where there’s a lot of basalt – and there are many such places.”
The rush of carbon dioxide allowed a green slime mold to grow that potentially played a part in the reaction. Credit: Columbia University
So how does this process work?
According to the study authors, the Hellisheidi power plant is the largest geothermal facility on Earth, and provides power for nearby areas by pumping up volcanically heated water to run turbines. Ordinarily, this process also gives off carbon dioxide and other volcanic gases, but a new pilot project launched four years ago aims to address that problem.
This project is known as Carbfix, and it involves mixing those gases with water pumped up from below, then injecting that solution back into the volcanic basalt. The result is a chemical reaction in which the carbon precipitates out into a chalky, white mineral. Previous research has estimated that this process could take hundreds of years or more. At the Hellisheidi plant, however, Stute’s team was able to solidify 95 percent of the carbon in only two years.
Basalt is the key to this reaction. Credit: Columbia University
Since nearly all of the seafloors on the planet and 10 percent of continental rocks are made of basalt, there is no shortage of places where this carbon capture and solidification procedure can be used, the researchers explained. They also reported that the carbonate minerals they created have been stable, meaning that there should be little risk of carbon leakage with the technique.
Since 2014, Reykjavik Energy, the company that operates the plant, has been injecting CO2 at a rate of 5,000 tons per year while keeping pace with the mineralization, and engineers said that there are plans to double the injection rate this summer. Other power plants have reportedly expressed interest in the technology, but the authors note some obstacles, such as the amount of water needed (approximately 25 tons per ton of CO2) and the cost (in most plants, the separation and injection process would run about $130 per ton, though it is considerably less at the Hellisheidi facility, which uses existing infrastructure and does not purify the carbon).
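At the stated injection rate, the figures quoted above imply some rough annual totals. The sketch below is an illustrative back-of-envelope calculation only, using the typical-plant cost rather than Hellisheidi's lower actual cost:

```python
# Back-of-envelope check of the figures quoted in the article (illustrative only).
injection_rate_tons = 5_000   # tons of CO2 injected per year at Hellisheidi
water_per_ton = 25            # tons of water needed per ton of CO2
cost_per_ton = 130            # USD per ton at a typical plant (Hellisheidi's is lower)

water_needed = injection_rate_tons * water_per_ton   # annual water demand
annual_cost = injection_rate_tons * cost_per_ton     # indicative annual cost

print(f"Water required: {water_needed:,} tons/year")  # 125,000 tons/year
print(f"Indicative cost: ${annual_cost:,}/year")      # $650,000/year
```

Doubling the injection rate, as planned, would double both totals, which is why the water requirement is flagged as an obstacle for plants without abundant water nearby.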
In addition, earlier this year researchers identified a type of subterranean microbe that seems to feed on carbonate minerals, releasing methane, a more potent greenhouse gas, in the process, so that could be a problem the engineers need to address. Even with these roadblocks, however, lead author Juerg Matter of the University of Southampton is confident he and his colleagues have found a new weapon in the fight against global warming. “We need to deal with rising carbon emissions,” he said. “This is the ultimate permanent storage – turn them back to stone.”
—–
Image credit: Kevin Krajick/Lamont-Doherty Earth Observatory
Mysterious Siberian crater grew to 15 times its original size
Written By: Chuck Bednar
Brian Galloway
A crater that mysteriously appeared on the Taimyr peninsula of Siberia about three years ago has grown to 15 times its original size, and recent reports indicate that locals recall hearing an explosion at the time it formed.
According to the Siberian Times and the Daily Mail, the 330-foot deep crater was just 13 feet in width when a helicopter pilot and his passengers first reported seeing it in 2013, but measured an incredible 230 feet when it was last surveyed. In addition, researcher Dr. Vladimir Epifanov has told reporters that residents living up to 100 km away reported hearing a loud blast.
Eyewitnesses told Dr. Epifanov they had observed a clear glow in the sky around that same time, which would have been roughly one month after the Chelyabinsk meteorite incident in February. Russian media reports also indicate that dozens of other craters have since popped up in the region, leading Siberian residents to refer to the region as “the end of the world.”
The mysterious growing crater is also now home to a lake, which the Daily Mail said formed as permafrost in the region melted and the walls surrounding the hole caved in. That all happened in the first 18 months following the crater’s discovery, the Siberian Times said. Experts believe that it may even be larger now, but no recent surveys have been conducted.
When the crater first emerged it swallowed up a group of reindeer herders. Credit: Yamalo-nenets Autonomous region governor’s press-service
So what caused this mysterious chasm, anyway?
The discovery follows initial skepticism that reports of the crater were a hoax, according to CBS News; now that they have been verified, Russian scientists are working to determine what caused the gaping chasm to form in the first place, and then to grow larger.
Some believe the crater’s origin is not of this world. While theories about aliens creating the crater are easy to dismiss, some hypothesize that it was caused by a meteorite. However, as the Daily Mail pointed out, researchers have all but eliminated the possibility due to the fact that the hole does not resemble a normal impact crater.
One theory claims the Yamal craters were created by climate change. Credit: Vladimir Eplfanov
One possibility, the UK newspaper said, is that it was caused by a pingo – a phenomenon that occurs when land covers a subsurface accumulation of ice and that ice melts, leaving a gigantic hole in its wake. Another possibility is an underground explosion of methane, since the region is said to be rich in natural gas that, when mixed with salt and water, could result in such an explosion. The official cause has yet to be determined, however.
Anna Kurchatova of the Sub-Arctic Scientific Research Centre favors the methane explanation and believes that global warming likely played a role. She told CBS News that gas had accumulated in ice mixed with sand beneath the ground, and that this mixed with salt, as the region had been a sea at one point some 10,000 years ago. Warmer temperatures melted the permafrost, releasing the gas and causing the explosion.
—–
Image credit: Yamalo-nenets Autonomous region governor’s press-service
Life could have started on carbon-based planets, study finds
Written By: Chuck Bednar
Brian Galloway
Rather than forming on Earth-like planets made of silicate rocks and iron cores, extraterrestrial life may have originated on carbon-based planets comprised of diamond or graphite, researchers from the Harvard-Smithsonian Center for Astrophysics (CfA) reported in a new study.
Writing in the Monthly Notices of the Royal Astronomical Society, lead author Natalie Mashian, a graduate physics student at Harvard, and her colleagues went on to explain that these life-supporting worlds could be discovered by searching for an extremely rare class of stars.
“This work shows that even stars with a tiny fraction of the carbon in our solar system can host planets,” Mashian, who worked on the study along with PhD thesis advisor, Avi Loeb, explained in a statement. “We have good reason to believe that alien life will be carbon-based, like life on Earth, so this also bodes well for the possibility of life in the early universe.”
The findings could strengthen the notion that alien life exists somewhere beyond our planet and our solar system, according to Wired UK, and that astronomers might be able to find it simply by searching for an ancient type of star known as a carbon-enhanced metal-poor (CEMP) star.
Life could have begun on a planet made of diamond.
Studying CEMP stars may shed new insight on planetary formation
As the CfA researchers explain, the primordial universe consisted mainly of hydrogen and helium and lacked chemical elements such as carbon and oxygen, both of which are essential for life as we know it. It wasn’t until the first stars went supernova, seeding space with the ingredients required for planet formation, that life became possible.
CEMP stars, however, contain just one hundred-thousandth as much iron as is found in our sun, meaning that they formed before interstellar space became seeded with these heavy elements. Despite this, they tend to contain more carbon than one might expect given their age, and this relative abundance would influence planet formation, the authors said.
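Astronomers conventionally express such iron deficits on the logarithmic [Fe/H] scale, where the Sun is 0 by definition; a star with one hundred-thousandth the Sun's iron sits at [Fe/H] = -5. A minimal sketch of that standard convention (the function name is ours, for illustration):

```python
import math

def fe_h(iron_ratio_vs_sun):
    """[Fe/H] metallicity index: log10 of a star's iron-to-hydrogen
    abundance ratio relative to the Sun's (Sun = 0 by definition)."""
    return math.log10(iron_ratio_vs_sun)

# A CEMP star with 1/100,000 of the Sun's iron abundance:
print(fe_h(1e-5))   # -5.0
```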
These carbon-rich worlds, should they exist, would be difficult to tell apart from planets that are more like the Earth when it comes to their mass and physical size, the CfA astronomers said. The only way to tell them apart would be to analyze their atmospheres in search of gases like carbon monoxide and methane. If they exist, however, they could provide new insight into how planets (and possibly biological life) originally formed in the aftermath of the Big Bang.
“These stars are fossils from the young universe. By studying them, we can look at how planets, and possibly life in the universe, got started,” Loeb explained. Searching for these stars by using the transit method could be a “practical” way to discover “how early planets may have formed in the infant universe,” he added.
—–
Image credit: Christine Pulliam (CfA). Sun image: NASA/SDO
By analyzing high-resolution satellite images and using aerial drones to snap photographs of the area, a pair of archaeologists have discovered an enormous ceremonial monument hiding in plain sight at the Petra World Heritage site in what is now southern Jordan.
Parcak and Tuttle explained that they used Google Earth, WorldView-1 and WorldView-2, along with unmanned aerial vehicles (UAVs), to find and map the monument, which is almost as long as an Olympic swimming pool and twice as wide. Believed to be a ceremonial platform, it is located just one-half mile (800 meters) south of the ancient city’s center.
The discovery comes as a bit of a surprise, the authors wrote, considering that Petra is one of the most well-known and well-surveyed archaeological sites on Earth, and yet the structure indicates that there are still features yet to be discovered in the 102 square mile (264 square km) park.
Another view of the platform. Credit: I. Labianca
A new addition to the ‘precious’ cultural heritage site
From the middle of the second century BC, when it was the capital of the Arab tribe called the Nabateans, until its abandonment at the end of the Byzantine period in the seventh century AD, Petra was a bustling center of caravan trade, according to National Geographic.
As for the newly discovered structure, reports indicate that it comprises a 184-foot by 161-foot (roughly 56-meter by 49-meter) platform enclosing a slightly smaller platform, the latter of which had been paved with flagstones. A small building was built atop the interior platform, facing east, where a row of columns once crowned a staircase.
According to National Geographic, the monument is unlike any other structure in Petra and might have been built during the early years of the caravan city. It was likely used for some kind of public ceremony, which the authors noted would make it the second-largest elevated, dedicated display area yet discovered in Petra, behind only its Monastery. While the site has yet to be excavated, the discovery of pottery dating to the mid-second century BC suggests that construction of the platform was started early on by the Nabataeans.
Tuttle praised the effectiveness of satellite imagery and drones in locating the never-before-seen monument, telling Nat Geo, “I’m sure that over the course of two centuries of research [in Petra], someone had to know [this monument] was there, but it’s never been systematically studied or written up. I’ve worked in Petra for 20 years,” he added, “and I knew that something was there, but it’s certainly legitimate to call this a discovery.”
First-ever ancient oracle to Apollo discovered in Athens
Written By: Chuck Bednar
Brian Galloway
Archaeologists working on behalf of the German Archaeological Institute at Athens have found an ancient well, believed to be at least 1,800 years old, which could be the first oracle devoted to the Greek god Apollo ever discovered in that country’s capital and largest city.
According to Haaretz and Ancient Origins, the well is the first ancient oracular edifice to Apollo, the Greek god of music, art, poetry, archery and the sun (among other things), and the well itself likely would have been used for hydromancy, a divination technique that involved water.
The oracle well was located in the Kerameikos region of Athens, which was in the central part of the capital just northwest of the Acropolis. It was discovered in the Temple of Artemis Soteira, in a region that still receives water from the Eridanos River, said Dr. Jutta Stroszeck, a cultural and art history expert who led the expedition on behalf of the Institute.
“Water, and in particular drinking water, was sacred,” Dr. Stroszeck told Haaretz. “In Greek religion, it was protected by nymphs, who could become very mischievous when their water was treated badly.” People would present miniature, liquid-filled vessels and other offerings in order to appease them in such instances.
New discovery comes in an area rich in ancient Greek history
Reports indicate that the well was surrounded by a wall of clay cylinders, and researchers found more than 20 Greek inscriptions, all of which included the same phrase: “Come to me, O Paean, and bring with you the true oracle.” The word, “Paean,” Ancient Origins explained, was used by ancient Greeks as a title or descriptive term for Apollo.
While oracles are typically associated with fortune telling and divination, they were also used by ancient cultures to find answers to simple, everyday matters, to find romance, and to seek cures for various ailments, the website added. Furthermore, they were consulted before the start of a journey or before applying for asylum in the sanctuary, Haaretz added.
In addition to the well, Dr. Stroszeck and her colleagues found a 2,500-year-old bathhouse that would have been used by both residents of Athens and travelers visiting the city. The name of the site where it was located, Kerameikos, comes from the Greek word for ceramics or pottery, as the area was home to many potters, vase painters and other artisans.
Located nearby was the ancient agora and the famous Academy of Plato, and the region was also home to an ancient cemetery with tombs dating back to the Early Bronze Age (2700-2000 BC), according to Ancient Origins. The cemetery was expanded sometime during the sub-Mycenaean period (1100-1000 BC) and remained in use through the Early Christian period, up to 600 AD.
—– Image credit: Jutta Stroszeck
Officially known as element 117, tennessine was discovered by Joseph Hamilton, a professor of physics at Vanderbilt University, his colleague A.V. Ramayya, and researchers at Oak Ridge National Laboratory (ORNL), the University of Tennessee in Knoxville, the Lawrence Livermore National Laboratory (LLNL) in California, and the Flerov Laboratory of Nuclear Reactions (FLNR) in Russia.
“Vanderbilt worked with Oak Ridge, and we wanted to come up with a name that honored the accomplishments of the state. We actually decided on the name before experiments even started,” Dr. Hamilton stated in an interview with redOrbit.
In order to create tennessine, Hamilton’s team needed to take the element berkelium, which has 97 protons, purify it, and place it in a powerful heavy ion accelerator. They bombarded it with calcium-48, an isotope containing 20 protons and 28 neutrons, over a 150-day span in what they called a series of “hot fusion” reactions. They produced six atoms of the new element, which at the time made it the 26th new element added to the periodic table since 1940.
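The proton arithmetic behind this reaction is simple: the new element's atomic number is the sum of the two reactants' proton counts. A minimal sketch of that bookkeeping (the neutron-evaporation detail in the comment comes from the published discovery reports, not from this article):

```python
# Nucleon bookkeeping for the "hot fusion" reaction described above:
# berkelium-249 + calcium-48 -> a compound nucleus of element 117.
bk = {"Z": 97, "A": 249}   # berkelium-249: 97 protons, 152 neutrons
ca = {"Z": 20, "A": 48}    # calcium-48: 20 protons, 28 neutrons

compound_Z = bk["Z"] + ca["Z"]   # proton count fixes the new atomic number
compound_A = bk["A"] + ca["A"]   # total nucleons before any neutrons boil off

print(compound_Z)   # 117 -> tennessine
print(compound_A)   # 297; shedding a few neutrons leaves isotopes such as Ts-294
```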
Now, tennessine (Ts), along with element 115, called moscovium (Mc) after the Russian capital, and element 118, dubbed oganesson (Og) in honor of FLNR researcher Yuri Oganessian, have been given their provisional names by the International Union of Pure and Applied Chemistry (IUPAC), the Zurich, Switzerland-based organization that represents chemists globally.
Keith Wood, Vanderbilt University
Discovery provides new evidence of the theoretical “island of stability”
The scientists had initially submitted their findings to the IUPAC in 2012, but the organization stated that the evidence for element 117 was not conclusive and requested that additional data be submitted. Now tennessine will become just the second element named after a US state, joining element 98, californium, which was discovered in the 1950s.
“Formal certification of these provisional names is expected in five months,” Hamilton, who also serves as the director of the ORNL’s Joint Institute for Heavy Ion Research, said in a statement Wednesday. “After this occurs, the name of the State of Tennessee will be in the periodic table in textbooks of physics and chemistry worldwide forever.”
Since element 117 is a member of the halogen chemical family, which also includes fluorine, chlorine and bromine, it is expected to have similar chemical properties and was given a name with an “-ine” ending, the researchers explained. The experiments that led to the element’s discovery were conducted under Oganessian’s supervision, they added.
The three new elements were created through “hot fusion” reactions in which americium-243, berkelium-249 and californium-249 were bombarded with calcium-48, and they completed the seventh row of the periodic table. More importantly, the researchers noted, their existence provides evidence for the long-sought, theoretically predicted “island of stability,” a concept predicting that superheavy elements with far more neutrons and protons than previously known ones would decay more slowly.
“The new nuclei produced in this research have substantially increased lifetimes consistent with landing on the shores of the island,” Hamilton explained. “These discoveries – evidence for the island’s existence and the new elements themselves – represent a major advance in our understanding of the behavior of nuclear matter under the extreme stress of the ultra-large electrical forces that exist between the high numbers of protons that are packed into these new nuclei.”
“It’s an honor that Tennessee will be immortalized in every physics textbook printed once the name is made official. We’re proud of that,” said Hamilton.
Why are radiation levels at Bikini Island still so high?
Written By: Chuck Bednar
Brian Galloway
Radiation from the nuclear weapon tests conducted on Bikini Atoll during the 1940s and 1950s was supposed to have cleared by now, but a new study has found that the island is still uninhabitable due to elevated levels of gamma rays produced by elements such as cesium-137.
According to Autumn S. Bordner, a research fellow working on the K1 Project at the Columbia University Center for Nuclear Studies in New York, and her colleagues, recent estimates of the radioactive fallout levels at Bikini Atoll and the Marshall Islands were based on estimates that relied upon data collected decades ago, not recent on-site measurements.
Scientists had predicted that radiation levels would have dropped to between 16 and 24 millirems per year by this time, according to Science News and Phys.org, which would have made it safe for people to live near the testing sites. However, Bordner and her colleagues conducted measurements in six different areas, including Bikini Island, and found that the radiation levels exceed safety standards.
“Our findings suggest that there is significant variation in the levels of external gamma radiation on the islands affected by the US nuclear testing program in the Marshall Islands,” they wrote in their paper. “Notably, Bikini Island is found to have radiation levels exceeding the agreement promulgated by the US and [Republic of the Marshall Islands] governments for safe habitation… [which] suggests that Bikini Island… may not be safe for habitation.”
Radiation ‘not terribly dangerous,’ but exceeds mandated levels
Bikini Atoll had the highest gamma radiation levels of the sites tested, with a mean reading of 184 millirems per year, according to Popular Science. Three sites on Enewetak Atoll averaged 7.6 millirems per year, while levels of 19.8 mrem/y were measured at a location on Rongelap, which was seriously affected by hydrogen bomb tests in the 1950s.
For reference, the researchers also collected readings at Majuro Atoll, an island far enough from the blast zone to serve as a control, and in New York’s Central Park. Majuro Atoll was found to have a 9 mrem/y reading, while Central Park measured 13 mrem/y, they said.
Although the numbers were much higher than at the other sites, Phys.org noted that the 184 mrem/y radiation level detected at Bikini Island is “not considered terribly dangerous” but does exceed the government-mandated acceptability levels. Bordner’s team also said that additional studies are needed to determine what other exposure pathways residents would face, such as through food, before re-habitation can commence.
“Without measuring other exposure pathways, we are not able to make a determination as to whether these islands are indeed safe for habitation,” the authors wrote. “There is a population currently living on Enewetak, in some trepidation as to whether or not their environment is safe.”
“In addition, there is currently a large population of displaced Marshallese people who desire to return to Rongelap and Bikini,” they added. “Given these circumstances, it seems imperative that further steps be taken to analyze additional exposure pathways to make a definitive statement as to whether these islands are safe for habitation.”
So why has the radiation apparently lingered so much longer than initial estimates indicated it would? Study co-author and Columbia physicist Emlyn Hughes told Science News that it was likely due to incorrect assumptions about how quickly radioactive materials would wash off of the island.
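That explanation is easy to see with a little decay math. Cesium-137 has a half-life of about 30.17 years (a well-established figure, not from this article), so pure physical decay over the roughly six decades since the tests would still leave about a quarter of the original activity; the much lower predicted levels therefore depended on assumed environmental removal, such as washout. A minimal sketch, with the function name ours:

```python
import math  # not strictly needed here, but handy if extending to exp-form decay

def remaining_fraction(years, half_life=30.17):
    """Fraction of a radionuclide's activity left after `years` of pure
    physical decay (default half-life: cesium-137, ~30.17 years)."""
    return 2 ** (-years / half_life)

# Roughly six decades between the 1950s tests and the new survey:
frac = remaining_fraction(60)
print(round(frac, 3))   # 0.252 -- decay alone leaves about a quarter of the activity
```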
Fish can be trained to recognize faces, study finds
Written By: Chuck Bednar
Brian Galloway
We know that our pet dogs and cats can recognize our faces, but our pet fish? That’s too far-fetched to believe, right? Not according to a team of scientists from the UK and Australia, who have discovered a species of tropical fish capable of distinguishing human faces!
The research, which was carried out by a team from the University of Oxford in England and the University of Queensland in Australia and published Tuesday in the journal Scientific Reports, found that archerfish were able to learn and recognize faces with a high degree of accuracy – a task which the authors noted requires highly-developed visual recognition capabilities.
This marks the first time that a species of fish has demonstrated such an ability, lead author Dr. Cait Newport, a research fellow in the Oxford Department of Zoology, and her colleagues said in a statement. Such abilities have previously been demonstrated in birds, but unlike fish, birds have been shown to possess structures similar to the neocortex (the highly developed part of the brain associated with seeing and hearing in humans), the researchers added.
“Being able to distinguish between a large number of human faces is a surprisingly difficult task,” Dr. Newport said, “mainly due to the fact that all human faces share the same basic features. All faces have two eyes above a nose and mouth, therefore to tell people apart we must be able to identify subtle differences in their features. If you consider the similarities in appearance between some family members, this task can be very difficult indeed.”
Study shows that complex brains aren’t necessary for facial recognition
In fact, she explained, the task is so difficult that some had hypothesized that only primates were capable of doing so, due to their large and complex brains. However, as previously mentioned, a previous study found that birds possessed similar capabilities, and Dr. Newport’s team wanted to see if other creatures with smaller, simpler brains could recognize human faces.
They discovered that fish, despite having no evolutionary need to do so and despite lacking any type of brain structure similar to the primate’s visual cortex, were able to recognize one familiar face out of a group of up to 44 never-before-seen ones. Their findings indicate that fish, despite their lack of a neocortex, nonetheless are capable of impressive feats of visual discrimination.
During their experiments, Dr. Newport and her colleagues presented archerfish with two images of human faces, and trained them to choose one using their ability to spit jets of water in order to knock down airborne prey. Next, the fish were presented with the familiar face and several that were unfamiliar, and were able to correctly pick the one that they had been trained to recognize, even when features such as head shape and color were removed from the selected pictures.
In the first experiment, the archerfish were tasked with picking the previously learned face out of a group of 44 new ones, which they did with 81 percent accuracy. In the second, they chose in a scenario where the images had been standardized for brightness and color, and they proved 86 percent successful at this task.
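Accuracies like these are far above what guessing would produce. As a rough statistical sketch (the trial count here is hypothetical, and the study's actual chance levels and trial numbers may differ), here is the binomial tail probability of scoring 81 percent, i.e. about 24 of 30 two-choice trials, by luck alone:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting k or more
    correct out of n trials if each choice were a coin flip."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 24 or more correct out of 30 two-choice trials.
print(binom_tail(30, 24))   # roughly 0.0007 -- very unlikely to be chance
```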
“Fish have a simpler brain than humans and entirely lack the section of the brain that humans use for recognizing faces. Despite this, many fish demonstrate impressive visual behaviors and therefore make the perfect subjects to test whether simple brains can complete complicated tasks,” Dr. Newport said. “Once the fish had learned to recognize a face, we then showed them the same face, as well as a series of new ones.”
“In all cases, the fish continued to spit at the face they had been trained to recognize, proving that they were capable of telling the two apart. Even when we did this with faces that were potentially more difficult because they were in black and white and the head shape was standardized, the fish were still capable of finding the face they were trained to recognize,” she added. “The fact that archerfish can learn this task suggests that complicated brains are not necessarily needed to recognize human faces.”
LISA Pathfinder satellite reports major gravitational wave results
Written By: Chuck Bednar
Brian Galloway
A European Space Agency mission designed to demonstrate the technology needed to build a space-based gravitational wave detector exceeded expectations on its first day of operation and has delivered record-breaking measurement precision over a two-month span.
According to BBC News and Scientific American, the ESA’s LISA Pathfinder satellite was sent into orbit in order to test components of a laser measurement system that would be utilized on a future observatory, and its early success indicates that said observatory would be effective when it comes to locating evidence of merging supermassive black holes.
At the heart of the spacecraft are two 4.6-centimeter gold-platinum cubes that are falling freely through space, influenced solely by gravity and unperturbed by any other external force, agency officials explained in a statement. Thus far, they have been able to remain almost perfectly still, according to New Scientist, and measurements of them have been five times more precise than initially required.
“During commissioning, the requirements were being met already,” co-principal investigator Karsten Danzmann told BBC News. “We hadn’t tweaked anything; we’d just turned everything on to see if the laser was running and, bang, there it was. And the performance has just got better and better ever since.”
Lisa Pathfinder’s instrument. Credit: AIRBUS DS
Instruments precise enough to detect waves anywhere in the universe
The LISA Pathfinder’s mission has generated a lot of excitement in the scientific community, as the ability to reliably detect gravitational waves has been highly sought after since these ripples in spacetime were first detected last year by the US-based Advanced LIGO detectors.
That discovery has been widely hailed as one of the greatest scientific discoveries in decades, BBC News noted, and the ultimate goal is to take the ability to detect these phenomena into space, thus expanding the range of such searches beyond what is capable using a ground-based observatory. First, however, the technology must be tested and proven effective.
To that end, the ESA launched the LISA Pathfinder last December, hoping the test cubes released by the satellite would be able to maintain a separation of 38 centimeters during a free-fall while being measured with a laser interferometer. So far, so good, as a team of ESA scientists reported Tuesday in a paper published online by the journal Physical Review Letters: the performance of the cubes appears promising for a space-based gravitational wave observatory.
The laser interferometer system inside Lisa Pathfinder. Credit: ESA
“LISA Pathfinder’s test masses are now still with respect to each other to an astonishing degree,” Alvaro Giménez, the ESA’s Director of Science, explained in a statement Tuesday. He added that the results of the test demonstrate “the level of control needed to enable the observation of low-frequency gravitational waves with a future space observatory.”
Paul McNamara, a project scientist on the LISA Pathfinder mission, said that the measurements “have exceeded our most optimistic expectations,” and Danzmann added that “at the precision reached by LISA Pathfinder, a full-scale gravitational wave observatory in space would be able to detect fluctuations caused by the mergers of supermassive black holes in galaxies anywhere in the Universe.”
According to study author and RCHE Professor David Lambert and his colleagues, their findings refute an earlier paper which stated that DNA sequences taken from the oldest known Australian, a set of male remains recovered from Lake Mungo in New South Wales and informally known as Mungo Man, represented a now-extinct lineage of humans that pre-dated Aborigines.
That study, which was published in 2001, claimed that mitochondrial DNA (mtDNA) recovered from the 42,000-year-old Mungo Man indicated that it was not related to Aboriginal Australians, and that it must have been an extinct subspecies that diverged prior to the most recent common ancestor of modern humans, seemingly supporting the multiregional origin hypothesis.
New analysis reveals that the previous samples were contaminated
Those results have long been viewed as controversial in the scientific community, and now, 15 years after their original publication, Lambert and a team of colleagues from the US, Denmark, Austria, the UK and New Zealand argue that the findings had indeed been erroneously drawn.
“The sample from Mungo Man which we retested contained sequences from five different European people suggesting that these all represent contamination,” he said in a statement. “At the same time we re-analyzed more than 20 of the other ancient people from Willandra. We were successful in recovering the genomic sequence of one of the early inhabitants of Lake Mungo, a man buried very close to the location where Mungo Man was originally interred.”
“By going back and reanalyzing the samples with more advanced technology, we have found compelling support for the argument that Aboriginal Australians were the first inhabitants of Australia,” the professor added, noting that thanks to advancements in genomic technology, his team was able to obtain more information from ancient Aboriginal Australian remains.
Using that technology, the new study discovered multiple sources of European contamination on genetic samples taken from the Mungo Man, and the authors report that their work marks the first time scientists have been able to recover an ancient mitochondrial genome sequence from an Aboriginal person who lived prior to the Europeans’ arrival. Their study was supported by the Barkindjii, Ngiyampaa, and Muthi Muthi indigenous people.
Electric eels use ‘leaping attacks’ to defend themselves, study finds
Written By: Chuck Bednar
Brian Galloway
A well-known account of an epic battle between electric eels and horses, long unsubstantiated by scientific evidence, now appears more realistic thanks to new research demonstrating these aquatic creatures are capable of targeting land threats with powerful electrical shocks.
The study, led by Vanderbilt University biologist Kenneth Catania and published online Monday in the Proceedings of the National Academy of Sciences early edition, lends credence to the story of famed explorer and naturalist Alexander von Humboldt, who allegedly witnessed a conflict between horses and eels while on an expedition to the Amazon.
The study gives legitimacy to a story from the travels of Alexander von Humboldt
The lack of evidence led many to suggest that von Humboldt had been exaggerating his tale, but last year, Catania made the accidental discovery that eels can have a dramatic defensive reaction when cornered by a land-based threat: they will attack it by coming up out of the water, placing their chin against the object’s side, and delivering a series of powerful shocks.
In the newly published study, he details for the first time the effectiveness of this mechanism and explains the evolutionary advantage it provides to the eels. In short, he answers a question that he said had long puzzled him: “Why would the eels attack the horses instead of swimming away?”
Leaping attacks increase the voltage, amperage of eels’ shocks
As luck would have it, Catania chose to move his eels from place to place with a metal net. Typically, he explained, the larger eels would try to avoid the net, but at times, they would leap up out of the water, place their chins on the net, and deliver electric pulses.
Fortunately for the Vanderbilt professor, he was wearing gloves and avoided shocking himself, but the behavior seemed strange. Previous research demonstrated that eels view some types of small conductors as prey, and this newly-observed behavior gave Catania the impression that they viewed the larger conductor as a potential predator.
In his previous work, Catania showed that eels use a high-frequency series of extremely short pulses when attacking their prey, causing its muscles to contract much like a TASER does. Following the net incident, he devised a series of experiments to observe their defensive mechanisms, and found that eels tend to ignore most objects that are not conductors. In addition, he found that the voltage and amperage of the shock increased the higher up the eel leaped on its target.
Leaping out of the water increases the attack’s effect because more of the energy is distributed through the target. Credit: K. C. Catania, Vanderbilt University
When an eel is completely submerged, the power of its electrical pulses is distributed throughout the water, but when its body extends out of the water, the current travels directly from its chin to its target. The shock travels through the recipient, back into the water and finally back to the tail of the eel, thus completing the circuit.
In a statement, Catania explained that this allows them to “deliver shocks with a maximum amount of power to partially submerged land animals that invade their territory” and “electrify a much larger portion of the invader’s body.”
The pulses they produce while fully submerged may not be powerful enough to keep a hungry land-based predator at bay if the predator keeps its body out of the water, he explained. By leaping out of the water, however, the eel can defend itself far more effectively, and this behavior was likely what von Humboldt witnessed.
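The circuit logic described above can be sketched with a toy two-resistor model: when the eel is fully submerged, the surrounding water acts as a low-resistance shunt in parallel with the target, so most of the current bypasses it; chin contact during a leap routes the current directly through the target instead. Every component value below is an illustrative assumption for the sketch, not a measurement from Catania’s study.

```python
# Toy circuit model of an electric eel's shock. All numbers here are
# assumed, round figures chosen only to illustrate the circuit logic.
V_EMF = 600.0        # volts, rough order of magnitude for a large eel (assumed)
R_INTERNAL = 1000.0  # ohms, eel's internal resistance (assumed)
R_TARGET = 2000.0    # ohms, resistance of the path through the target (assumed)
R_WATER = 500.0      # ohms, parallel shunt path through the water (assumed)

def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

# Fully submerged: the water shunts current around the target.
r_load_sub = parallel(R_TARGET, R_WATER)
i_total_sub = V_EMF / (R_INTERNAL + r_load_sub)
# Current divider: only the fraction R_WATER/(R_WATER+R_TARGET) goes
# through the target.
i_target_sub = i_total_sub * R_WATER / (R_WATER + R_TARGET)

# Leaping: chin contact forces all the current through the target.
i_target_leap = V_EMF / (R_INTERNAL + R_TARGET)

print(f"current through target, submerged: {i_target_sub*1000:.1f} mA")
print(f"current through target, leaping:   {i_target_leap*1000:.1f} mA")
```

With these assumed values the leaping attack more than doubles the current delivered to the target, which is the qualitative effect the study describes.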
‘Wasteful’ habits of star-forming galaxies highlighted in new study
Written By: Chuck Bednar
Brian Galloway
The tendency of galaxies to eject oxygen, carbon and iron atoms produced by star formation into their surrounding halos and even into deep space leaves them devoid of the raw materials required to build additional stars and planets, according to a recently-published study.
While these heavy elements are essential to such processes, Benjamin Oppenheimer, a research associate working in the Center for Astrophysics & Space Astronomy (CASA) at the University of Colorado-Boulder, and his colleagues found that galaxies tend to “waste” large quantities of them by discharging them up to one million light years away.
As Oppenheimer, whose team published their findings online in a recent edition of the Monthly Notices of the Royal Astronomical Society, said Monday in a statement, “Previously, we thought that these heavier elements would be recycled into future generations of stars and contribute to building planetary systems. As it turns out, galaxies aren’t very good at recycling.”
Using data from the Hubble Space Telescope’s Cosmic Origins Spectrograph (COS) instrument, the researchers analyzed the nearly-invisible reservoir of gases surrounding a galaxy, known as the circumgalactic medium (CGM), around both spiral and elliptical galaxies, each of which is typically home to several billion heavy-element-producing stars.
Star-forming galaxies like NGC-694 are the engines of the universe. Credit: NASA
Findings also reveal why different galaxies have different oxygen levels
While a typical galaxy is between 30,000 and 100,000 light years across, the CGM can extend up to one million light years. Researchers believe that the CGM plays a key role in cycling elements into and out of the galaxy, but the exact processes are still not well understood.
As part of their study, Oppenheimer’s team used the COS instrument’s ultraviolet spectroscopy capabilities to study the galaxies and the CGM surrounding them, as well as several simulations, and determined that the CGMs of both types of galaxies contained more than half of a galaxy’s heavier elements, indicating that galaxies are much less efficient at retaining these raw materials than experts had previously believed.
“The remarkable similarity of the galaxies in our simulations to those targeted by the COS team enables us to interpret the observations with greater confidence,” study co-author Robert Crain, a Royal Society University Research Fellow at Liverpool John Moores University, said. The study authors also noted that their simulations helped explain why COS observations appeared to show that elliptical galaxies had less oxygen surrounding them than spiral ones.
“The CGM of the elliptical galaxies is hotter,” explained Joop Schaye, a professor at Leiden University in the Netherlands and another co-author of the new study. “The high temperatures, topping over one million degrees Kelvin, reduce the fraction of the oxygen that is five times ionized, which is the ion observed by COS.”
In contrast, the temperature of CGM gases in spiral galaxies is 300,000 degrees Kelvin, or about 50 times hotter than the sun’s surface. Oppenheimer explained that ejecting heavy elements into the CGM requires “massive amounts of energy from exploding supernovae and supermassive black holes… This is a violent and long-lasting process that can take over 10 billion years, which means that in a galaxy like the Milky Way, this highly ionized oxygen we’re observing has been there since before the Sun was born.”
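The temperature comparison above is easy to verify with one line of arithmetic; the Sun’s photosphere temperature used below is the standard value of roughly 5,772 Kelvin, which is not stated in the article itself.

```python
# Sanity check of the "about 50 times hotter than the sun's surface" figure.
T_CGM_SPIRAL = 300_000  # Kelvin, CGM of spiral galaxies (from the article)
T_SUN_SURFACE = 5_772   # Kelvin, the Sun's photosphere (standard value)

ratio = T_CGM_SPIRAL / T_SUN_SURFACE
print(f"spiral-galaxy CGM is ~{ratio:.0f}x hotter than the Sun's surface")
```

The result comes out near 52, consistent with the article’s rounded figure of about 50.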
The UK media organization, which was granted access to the work for their program Medicine’s Big Breakthrough: Editing Your Genes, explained that these human-pig embryos known as “chimeras” are being allowed to develop in sows for 28 days before the pregnancies are terminated and the tissues removed and analyzed by the laboratory team.
Pablo Ross, a reproductive biologist at UC Davis, and his colleagues told BBC News that these creatures should look and behave like normal pigs, except that one organ will be made up totally of human cells. The goal is to develop new organs to transplant into human patients.
The researchers are using CRISPR gene editing to remove DNA from a newly fertilized pig embryo, preventing it from developing a pancreas, in the hopes that human cells injected into the embryo will cause the fetus to develop a human version of the organ instead. “Our hope,” Ross said, “is that this pig embryo will develop normally but the pancreas will be made almost exclusively out of human cells and could be compatible with a patient for transplantation.”
Human stem cells being injected into a pig embryo. Credit:
Revelation again leads to debate over ethics of controversial research
Last September, the NIH said that it would not support the funding of such research until it had a better notion of the possible implications, according to The Guardian. Specifically, officials were concerned that human cells could travel to an animal’s brain, making it more human. Ross told BBC News that such an occurrence is unlikely.
“We think there is very low potential for a human brain to grow, but this is something we will be investigating,” he said. Previously, he and his colleagues have injected human stem cells into pig embryos without first removing an organ (like the pancreas) to create what is known as a genetic niche – a biological void that the inserted stem cells could potentially help to fill. Other scientists in the US have conducted similar work, but none have allowed the fetuses to be born.
Walter Low, a professor in the University of Minnesota department of neurosurgery, told BBC News that pigs are essentially an ideal “biological incubator” for growing human organs and could ultimately be used to create hearts, livers, kidneys, lungs or other body parts, but that such research is still in its preliminary stages and a long way from clinical use.
Others, such as Peter Stevenson, chief policy advisor with Compassion in World Farming, have qualms with such research. Stevenson told the BBC that he was “nervous about opening up a new source of animal suffering” and that the first step should be to encourage more people to be organ donors. “If there is still a shortage after that, we can consider using pigs,” he added, “but on the basis that we eat less meat so that there is no overall increase in the number of pigs being used for human purposes.”
Latest NASA images showcase first cloud spotted on Pluto
Written By: Chuck Bednar
Brian Galloway
Researchers analyzing an image captured by NASA’s New Horizons spacecraft shortly after it made its closest approach to Pluto have discovered what could be the first-ever cloud located in the dwarf planet’s atmosphere, the US space agency announced earlier this week.
The image, captured by the probe on July 14, 2015, was obtained at a high-phase angle, meaning that the sun was on the opposite side of Pluto relative to New Horizons’ position. This geometry allowed sunlight to filter through and illuminate the dwarf planet’s atmospheric haze layers over the part of the planet’s surface informally known as the “Twilight Zone.”
In addition to depicting the southern part of the nitrogen ice plains called Sputnik Planum and the mountain range known as the Norgay Montes, this newly-released photo shows what officials at the agency refer to as “an intriguing bright wisp,” tens of miles in length, that they believe could be a “discrete, low-lying cloud,” which would make it the first ever identified in an image captured by New Horizons during its fly-by of the dwarf planet.
New Horizons used its Ralph/Multispectral Visual Imaging Camera (MVIC) to obtain the image from a distance of approximately 13,400 miles (21,550 kilometers) from Pluto, about 19 minutes after making its closest approach to the dwarf planet. Photographs such as this one, which has a resolution of 1,400 feet (430 meters) per pixel, help provide new insight into the planet’s hazes and surface properties, the agency explained.
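The quoted 430-meter-per-pixel figure follows from the flyby distance via the small-angle approximation, if one assumes a per-pixel field of view for MVIC of roughly 20 microradians (that instrument figure is an assumed round number here, not taken from the article).

```python
# Ground resolution from distance and per-pixel angular size
# (small-angle approximation: size = distance * angle).
DISTANCE_KM = 21_550  # spacecraft-to-Pluto distance (from the article)
IFOV_RAD = 20e-6      # assumed angular size of one MVIC pixel, ~20 microradians

pixel_m = DISTANCE_KM * 1000 * IFOV_RAD
print(f"ground resolution: ~{pixel_m:.0f} m per pixel")
```

The result, about 431 meters per pixel, matches the resolution NASA quotes for the image.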
So why was this wisp visible in this particular picture?
If this is indeed a cloud, NASA scientists explained that it is visible for the same reason that the haze layers in the dwarf planet’s atmosphere appear to be so bright: sunlight illuminates them as it grazes the surface at a low angle. Atmospheric models have suggested that methane clouds can form in Pluto’s atmosphere from time to time, and this may well be one of them.
Another image captured by the spacecraft shows the night side of Pluto in greater detail, as this part of the dwarf planet’s terrain is illuminated from behind by various hazes. The picture shows a rugged terrain with vast valleys and sharp mountains with relief totaling three miles (5 km).
Since the image was taken at a closer distance than previous ones depicting the area, it is higher in resolution, enabling scientists to use it as a sort of “anchor point” to get a good look at the lay of the landscape in this typically obscured part of the dwarf planet. The image shows a 460 mile (750 km) wide part of the Pluto landscape typically visible in high resolution only at twilight.
According to Space.com, New Horizons will continue to beam data from its Pluto flyby back to Earth through the fall. It is currently more than three billion miles (5 billion km) away, meaning that transmission times are on the slow side as the spacecraft continues moving towards its next target, a tiny object known as 2014 MU69 that is roughly one billion miles (1.6 billion km) beyond Pluto.
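The slow transmission times are easy to appreciate from the distance alone: even at the speed of light, a signal from 5 billion kilometers away takes hours to arrive. This quick estimate uses only the rounded distance from the article.

```python
# One-way light travel time from New Horizons' approximate distance.
C_KM_S = 299_792.458  # speed of light, km/s
DISTANCE_KM = 5.0e9   # ~5 billion km from Earth (rounded, from the article)

hours = DISTANCE_KM / C_KM_S / 3600
print(f"one-way signal delay: ~{hours:.1f} hours")
```

That works out to roughly 4.6 hours each way, before accounting for the probe’s very low downlink data rate.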
Luxembourg sets aside 220 million euros for asteroid mining
Written By: Chuck Bednar
Brian Galloway
It may be one of the smallest nations in Europe, ranking only 168th in terms of total surface area and 170th in estimated population as of 2015, but the Grand Duchy of Luxembourg has big plans when it comes to mining precious metals and rare minerals from asteroids.
According to AFP and Reuters reports, Luxembourg officials announced Friday that the country would pass laws and set aside $225 million (200 million euros) to fund an ambitious program to send a spacecraft to mine near-Earth objects. The plan is similar to one passed in the US late last year, but is the first to be established by a European country.
“We have a first budget to get started but if we need more money, we will be able to provide it,” Etienne Schneider, Luxembourg’s economy minister, told reporters during a press conference on Friday. “Luxembourg’s aim is to be in the top 10 space-faring nations in the world.”
Schneider said that the plan is to recruit experts in space law to draft “comprehensive legislation” that would go into effect next year and would establish a legal framework which would open the door for private firms to invest in asteroid mining operations. Unlike the US law, Luxembourg’s plan would take a more business-friendly approach to the industry, the AFP noted.
Firms interested in obtaining minerals from near-Earth objects would need to apply for and be issued special licenses, and their activities would be subject to government supervision. Local companies and foreign businesses alike would be eligible to participate, and the proposed plan has already drawn some interest from American enterprises, according to Reuters.
Sounds cool… but is something like this even feasible for Luxembourg?
While it might come as a surprise to those who do not closely follow the international space scene, Luxembourg has already enjoyed considerable success in the industry. Although the country is perhaps best known for its personal banking and fund management endeavors, it is also home to SES, one of the world’s largest operators of communication satellites.
Plus, since it would be open to international companies (provided they establish an operations base within the nation’s borders), the program has already drawn interest from two US-based businesses, according to Reuters: Deep Space Industries, which is said to be developing a new concept spacecraft to be used for mining asteroids, and Planetary Resources, a company co-financed by Google co-founder Larry Page that is working on exploration satellites.
While there are many technological obstacles to overcome, and international treaties that need to be taken into consideration, asteroid mining could be extremely profitable – for Luxembourg and for the rest of the world, for that matter. As the AFP explained, some near-Earth objects are filled with water, carbon, sulfur, nitrogen, phosphorus and ferrous metals – raw materials that could be extracted, processed and either returned to Earth or used to help on deep space missions.
Schneider, at least, seems confident that Luxembourg can become a major player in this field, telling reporters, “We intend to become the European center for asteroid mining.” According to Ars Technica, Dr. Pete Worden, former director of NASA’s Ames Research Center, is serving as an advisor to the program, and said during the press conference that he is hopeful that asteroid mining will enable the use of smaller, less expensive rockets that can resupply in space.
“I believe the future lies in a robust space economy that is driven by commercial interests,” he added. “The interesting thing is that we’re seeing a situation here where space agencies globally are moving from doing these things themselves. Just as NASA is contracting with launch companies, what we hope here is that the resources one needs to explore space can be purchased from these entrepreneurs.”
‘Lost Ancient Greek city’ is actually a natural phenomenon
Written By: Chuck Bednar
Brian Galloway
What was believed to have been the underwater remnants of an ancient, long lost Greek city is actually the result of a natural geological phenomenon that occurred during the Pliocene epoch, up to five million years ago, according to new research published on Friday.
Several years ago, CNN and the New York Times explained, divers discovered what appeared to be paved floors, courtyards and other evidence of a long-forgotten city off the coast of the Greek island Zakynthos that would have been swallowed up when tidal waves washed over it.
However, writing in the latest edition of the journal Marine and Petroleum Geology, researchers from the UK’s University of East Anglia and the University of Athens in Greece revealed that the site was not a man-made metropolis: the structures were built by microbes, not humans, and were produced as a byproduct of their efforts to break down methane gas for energy.
“We investigated the site, which is between two and five meters under water, and found that it is actually a natural geologically occurring phenomenon,” lead author Professor Julian Andrews of the UEA School of Environmental Sciences, who along with his colleagues analyzed the mineral content and texture of the site, said in a statement.
Methane release similar to fracking spurred on the process
When divers first discovered the site near Alikanas Bay, they found structures that looked as if they could have been man-made, such as circular column bases and paved floors. However, they were puzzled by the lack of other signs of human activity, including pottery or other artifacts.
That led scientists to conduct a preliminary set of mineralogical and chemical analyses, then turn to the UoA and UEA researchers for help. Andrews’ team used a combination of microscopy, X-ray and stable isotope techniques to closely examine the remains, and found that the location was something known as a “cold seep” in which methane within the ocean floor moved upward via a series of faults and seabed sediments, where it was then consumed by microbes.
“The disk and doughnut morphology, which looked a bit like circular column bases, is typical of mineralization at hydrocarbon seeps,” Andrews explained in a statement. “We found that the linear distribution of these doughnut shaped concretions is likely the result of a sub-surface fault which has not fully ruptured the surface of the sea bed.”
“The fault allowed gases, particularly methane, to escape from depth,” he added. “Microbes in the sediment use the carbon in methane as fuel. Microbe-driven oxidation of the methane then changes the chemistry of the sediment forming a kind of natural cement, known to geologists as concretion. In this case the cement was an unusual mineral called dolomite which rarely forms in seawater, but can be quite common in microbe-rich sediments.”
The professor noted that this type of event rarely occurs in shallow water; most phenomena like it are discovered several hundred to several thousand meters below sea level. However, he said that the process, though natural, was similar to the effects of fracking – in both cases, natural methane leaks from rocks in hydrocarbon reservoirs, but in the latter instance, humans are simply speeding up the release of the gas.
Their proposal, which was outlined Thursday in the journal Science, involves developing their own lab-made version of the entire human genetic code with the hope that their efforts may one day lead to important medical breakthroughs, according to NPR and the New York Times.
Dubbed Human Genome Project–Write (HGP-write), the project aims to synthesize the complete genome from its chemical components and make it possible for the result to function in actual human cells. Doing so will be no small task, as published reports have indicated that at least $100 million will initially need to be raised to pursue the task of creating the three billion base pairs of DNA required for a human cell to survive and function properly.
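Dividing the two figures quoted above gives a rough sense of the budget per base pair; both numbers come from the article, and real synthesis costs vary widely, so this is only a back-of-the-envelope illustration.

```python
# Back-of-the-envelope: initial budget spread over the genome's base pairs.
budget_usd = 100e6   # at least $100 million to be raised (from the article)
base_pairs = 3e9     # ~3 billion base pairs in a human genome (from the article)

cents_per_bp = budget_usd / base_pairs * 100
print(f"~{cents_per_bp:.1f} cents per base pair")
```

That comes to roughly 3.3 cents per base pair, which underscores why lowering the cost of mass-engineering DNA is one of the project’s stated goals.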
“We just had a revolution in our ability to read genomes, [and] the same thing is happening now with writing genomes,” Church told NPR. “We have the ability to synthesize bacterial genomes and we can synthesize parts of human genomes. We would like to be able to scale that up so we can make larger and larger collections of genomes.”
Research could lead to medical breakthroughs, but there are ethical concerns
The proposal, which initially leaked in mid-May, would enable scientists to produce and use synthetic genomes for a variety of different purposes, including inserting them into stem cells to make them safer when treating diseases, producing cell lines to make new vaccines or drug types, or creating “humanized animals” for organ transplantation, according to the Times.
However, the proposal has sparked an ethical debate, raising concerns that the synthetic genomes could be used to create designer babies, or to produce children with no genetic parents. The study authors insist that they harbor no such ambitions, explaining to Science that their goal is to lower the cost of mass-engineering DNA and testing how effective it is in cells. The HGP-write project “would push current conceptual and technical limits by orders of magnitude,” they wrote.
Those reassurances do little to quell the fears of experts like Marcy Darnovsky, the head of the Berkeley, California-based Center for Genetics and Society (CGS), who, in a statement, said that “these self-selected scientists and entrepreneurs are launching a corporate-dominated moonshot that could open the door to producing synthetic human beings. They are doing this without the involvement or even the knowledge of the public or civil society, without consultation with other scientists, and in the absence of public policy.”
“The focus on synthesizing the human genome seems in part like a public relations stunt to get multi-billion-dollar range funding… and a lot of media attention,” she added. “Some of the speculative goals of this project sound innocuous or benign. Others would be dangerously unacceptable. There would of course be enormous technical challenges to producing synthetic humans, but it’s clear that no self-appointed group has a warrant to make decisions that could literally reshape the human genome.”
But the scientists behind HGP-Write insist that they are only looking to benefit human health by using the synthesized genome to produce cell lines which, for instance, could be resistant or even impervious to various pathogens or types of cancer. Church told NPR that he and his colleagues are well aware of the ethical issues surrounding their work, and said that such concerns would need to be addressed, but they are adamant that they have no intention of using the results of their project to engineer a new-and-improved race of designer humans.
Scientists discover how to convert skin cells into red blood cells, could revolutionize blood transfusions
Written By: John Hopton
Brian Galloway
European scientists have found a way to convert skin cells into red blood cells, potentially clearing a path to make blood transfusions much safer because blood could have a patient’s own genetic code.
All humans have a unique genetic code, and all cells within their body – brain, muscle, fat, bone and skin – share that same code. It makes sense that if unhealthy cells need to be replaced, they should be replaced from within the patient’s own body. “Convincing” skin cells to become red blood cells, however, is just as difficult as it sounds.
Enter researchers at Lund University in Sweden and the Center of Regenerative Medicine in Barcelona, who with the help of a retrovirus have identified the four genetic keys that unlock the genetic code of skin cells and reprogram them to start producing red blood cells.
“This is the first time anyone has ever succeeded in transforming skin cells into red blood cells, which is incredibly exciting,” said Sandra Capellera, a doctoral student and lead author of the study.
Using the retrovirus, the team introduced various combinations of over 60 genes into skin cells’ genome, until they finally found the formula to change skin cells into red blood cells.
Johan Flygare and Sandra Capellera. (Credit: Åsa Hansdotter / Lund University)
Help for an aging population
The study, published in the scientific journal Cell Reports, showed that just four of 20,000 genes are necessary to reprogram skin cells and have them start producing red blood cells – all four are necessary for the method to be successful.
“It’s a bit like a treasure chest where you have to turn four separate keys simultaneously in order for the chest to open,” explained Capellera.
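The scale of that search is worth spelling out: with roughly 60 candidate genes (the figure mentioned earlier) and four keys that must turn at once, the number of possible four-gene combinations is already in the hundreds of thousands. Exhaustively testing every subset is only an illustration of the combinatorics here, not a description of the team’s actual screening strategy.

```python
import math

# Number of distinct four-gene subsets drawn from ~60 candidate genes.
candidates = 60
keys_needed = 4

combos = math.comb(candidates, keys_needed)
print(f"{combos:,} possible four-gene combinations")  # 487,635
```

This is why finding the right four "keys" out of a candidate pool is a genuine search problem rather than a simple screen.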
Johan Flygare, manager of the research group and in charge of the study, said: “We have performed this experiment on mice, and the preliminary results indicate that it is also possible to reprogram skin cells from humans into red blood cells. One possible application for this technique is to make personalized red blood cells for blood transfusions, but this is still far from becoming a clinical reality.”
There would need to be significant investigation into how the newly created blood performs in living organisms. However, the significance of any clinical application could be huge.
Flygare explained that: “An aging population means more blood transfusions in the future. There will also be an increasing amount of people coming from other countries with rare blood types, which means that we will not always have blood to offer them.”
Patients with anaemic diseases could benefit in particular. Anaemia is a condition in which the patient has an insufficient number of red blood cells; millions suffer from it worldwide, and there is a shortage of donors. The amount of blood patients need to receive from different donors means they can eventually have allergic reactions to new blood. Making new blood from their own bodies would remove this problem entirely.
Image credit: Thinkstock
Astronomers take first look beneath Jupiter’s clouds
Written By: Chuck Bednar
Brian Galloway
Using radio waves and the Very Large Array (VLA) telescopes in New Mexico, researchers at the University of California, Berkeley and an international team of colleagues have managed to peek beneath the cloud tops on Jupiter and take a close look at the planet’s atmosphere.
As the researchers reported in Friday’s edition of the journal Science, they were able to conduct observations as deep as 60 miles (100 kilometers) beneath the cloud tops. They found several hot spots, regions that contain no clouds or condensable gases, and complex upwellings of ammonia in the part of Jupiter’s atmosphere beneath what’s visible to the human eye.
“We in essence created a three-dimensional picture of ammonia gas in Jupiter’s atmosphere, which reveals upward and downward motions within the turbulent atmosphere,” lead author and UC Berkeley astronomy professor Imke de Pater explained in a statement. She and her colleagues hope that by studying these regions of Jupiter’s atmosphere, they will be able to determine how the planet’s intense internal heat source powers global circulation and cloud formation.
Since the ammonia gas partially absorbs the planet’s thermal radio emissions, the study authors were able to determine how much ammonia exists in the atmosphere and at what altitude it tends to be present. The resulting 3D map is said to resemble visible-light images captured by amateur astronomers, as well as those produced by the Hubble Space Telescope, de Pater added.
Credit: AAAS
Swirling ammonia upwellings among the Berkeley team’s discoveries
Co-author Michael Wong, also an astronomer at UC Berkeley, told BBC News that he and his colleagues were able to detect “different zones, turbulent features, vortices – even the Great Red Spot” using the VLA, which was recently upgraded with new instrumentation that enabled them to better detect and analyze the radio emissions given off by planets and other objects.
The team used a special technique that counters the smearing effect researchers typically see when studying a quickly rotating body such as Jupiter, a planet whose day is just 10 hours long, the UK media outlet added. The combination of technique and technology made it possible for the Berkeley-led team to collect detailed information about the various weather systems that exist beneath the cloud tops on Jupiter.
According to de Pater and her colleagues, their radio map shows ammonia-rich gases rising and forming the planet’s upper layers of clouds: an ammonium hydrosulfide cloud at minus 100 degrees Fahrenheit (200 Kelvin) and an ammonia-ice cloud at roughly minus 170 degrees Fahrenheit (160 Kelvin). Both can easily be seen from Earth using optical telescopes.
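As a quick arithmetic check (a standard unit conversion, not a figure from the study), the kelvin values convert to the rounded Fahrenheit temperatures quoted above:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature from kelvins to degrees Fahrenheit."""
    return k * 9.0 / 5.0 - 459.67

# The two cloud-layer temperatures quoted above:
print(round(kelvin_to_fahrenheit(200)))  # ammonium hydrosulfide cloud
print(round(kelvin_to_fahrenheit(160)))  # ammonia-ice cloud
```

200 K works out to about minus 100 °F and 160 K to about minus 172 °F, consistent with the rounded figures in the text.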
Furthermore, the radio map also shows air that is low in ammonia content sinking back into the planet, not unlike how drier air descends from above Earth’s cloud layers. It also shows that hotspots (so named because they appear bright in radio and thermal infrared images) are ammonia-poor regions that circle the planet just north of the equator. Located in between these hotspots are the ammonia-rich upwellings that carry the gas from deep within the planet.
“All told, there is a wealth of information about the structure of Jupiter’s atmosphere in these new VLA images,” de Pater told BBC News. “We hope to resolve a number of outstanding questions with these and future studies using similar techniques,” the professor added, noting that she and her colleagues hope to conduct similar observations of the atmospheres around the gas giants Saturn and Uranus in the near future.
‘Jumping’ gene started the rise of peppered moths, study finds
Written By: Chuck Bednar
Brian Galloway
The genetic mutation that caused a never-before-seen black form of moth to emerge during the Industrial Revolution in 18th- and 19th-century Britain has finally been discovered, according to a new study published in the Wednesday, June 1 edition of the journal Nature.
The research, which was published alongside a second paper that detailed how this same genetic mutation enables tropical butterflies to use a variety of different color schemes, explained that an unusual “jumping gene” mutation was responsible for the rise of this dark variant of moth.
Furthermore, lead investigator Ilik Saccheri of the University of Liverpool Institute of Integrative Biology and his colleagues were also able to determine that the mutation first appeared around the year 1819, which is “consistent with the historical record,” according to BBC News reports.
“This discovery fills a fundamental gap in the peppered moth story,” Dr. Saccheri explained in a statement. “The fact that this famous mutant is caused by a transposable element will hopefully attract more interest in the impact of mobile DNA on fitness and the generation of novel phenotypes.”
Despite the discovery, many questions still remain
The rise of black peppered moths, which the BBC called “an iconic evolutionary case study,” came during the early- to mid-19th century, with the first documented sighting taking place in Manchester, northern England, in 1848. However, experts have long suspected that they may have existed, undetected due to low population numbers, several years beforehand.
Now, Dr. Saccheri’s team has found that this evolutionary change, favored as soot darkened the walls and tree trunks of the moths’ habitat, took place nearly three decades before anyone first caught a glimpse of one of these unusual moths. They used a statistical simulation to estimate how many generations would have been needed to produce the observed pattern of DNA sequence variation around the mutation.
“Our best estimate of 1819 shows that the mutation event occurred during the industrial revolution and that it took around 30 years for it to become common enough to be noticed,” said study co-author Dr. Pascal Campagne, also from the University of Liverpool. His colleague, Dr. Arjen van’t Hof, added that the findings “provide an opportunity to further develop peppered moth industrial melanism as a tool for teaching evolutionary biology and the genetic basis of adaptation.”
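The kind of back-calculation described above can be illustrated with a deliberately simplified selection model. The starting frequency, selection strength, and one-generation-per-year timescale below are illustrative assumptions, not the paper’s estimates:

```python
def generations_to_frequency(p0, target, s):
    """Deterministic haploid selection sketch: each generation the
    favoured allele's frequency p grows to p*(1+s) / (1 + p*s)."""
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

# Illustrative inputs: one new copy among ~10,000 alleles, a strong
# 30% fitness advantage, and one generation per year for the moths.
print(generations_to_frequency(1e-4, 0.05, 0.3))
```

With those assumed numbers, the mutation takes a couple of dozen generations to reach a noticeable 5% frequency, the same order as the roughly 30 years quoted by the authors.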
Peppered moths, BBC News explained, were one of the earliest visible examples of evolutionary change, as the dark form was first seen a decade before Darwin and Wallace originally outlined the concept of natural selection. The creatures are nocturnal, and the new dark coloring made it much easier for the moths to rest on soot-darkened trees or walls without being spotted by hungry birds.
To find out exactly what changes occurred in the species’ genes, Dr. Saccheri and his colleagues used traditional genetic mapping to identify 87 DNA differences between black moths and their white counterparts. Eventually, they found that the culprit was a transposon – a “jumping” piece of DNA that inserted itself into a gene called cortex. Strangely, however, cortex genes are not known to play a role in pigmentation, so exactly how this mutation caused black coloring in the moths remains a bit of a mystery.
Seven new peacock spider species found in Australia
Written By: Susanna Pilny
Brian Galloway
If you enjoy dancing like Uma Thurman, or if you just happen to enjoy bright colors, great news: Seven new species of peacock spiders have been found in Australia.
Some people might not be extremely thrilled at this news, but peacock spiders aren’t your standard arachnid fare. First off, they’re not harmful to humans. Second, they really don’t act like normal spiders.
“They behave very differently to how people think a spider does … they behave more like cats and dogs, moving around, perceiving and reacting to their environment,” Jürgen Otto, a biologist from Sydney who has been researching peacock spiders since 2005, told The Guardian.
Third, they’re weirdly, well, cute. Ranging from one to two tenths of an inch (2.5 to 5 mm) long, these rainbow-bright spiders dance to attract mates. But they don’t just shimmy from side to side like most people awkwardly do at the club—they go full-on Gangnam Style.
“Each one of these species has an interesting mating behavior,” Otto told The Huffington Post. “The males usually have flaps on the side of the body that they can expand to reveal their colourful, patterned abdomen. The female sits close to the male and watches him. If a female likes a particular colour or variant, she will mate with that male. That is how these beautiful patterns evolve.”
With the addition of seven new peacock spiders, which are described in Otto’s paper in the international jumping spider journal Peckhamia, the grand total now stands at 48 species within the Maratus genus. They’re found all across Australia, but are especially common in Western Australia—although their diminutive stature makes them a challenge to find. In fact, Otto—whose day job involves studying mites for the Department of Agriculture and Water Resources—only first came across one while walking in Australia’s Ku-ring-gai Chase National Park.
Credit: Jürgen C. Otto and David E. Hill
“I’m always looking on the ground when I walk around, mostly for mites and other small things, and I almost stepped on this little spider. That’s what started my passion,” he told The Guardian.
“If you know what you’re looking for, you can find them,” he added. “But I have to be careful not to lose them – particularly the babies – and not to squash them.”
In short, peacock spiders are nothing to fear.
“They’re colorful, they’re adorable, and even people who hate spiders—extreme arachnophobes—love these spiders,” said Otto. “They can’t help it.”
One of the new species, Maratus splendens, shown with a fingernail for scale.
Comets can break apart and reform later, study finds
Written By: Chuck Bednar
Brian Galloway
Despite the countless observations and massive amounts of scientific data the ESA’s Rosetta orbiter and Philae lander have collected from Comet 67P/Churyumov-Gerasimenko during the last several years, one great mystery remained: the object’s unusual rubber-duck shape.
Now, scientists from Purdue University and the University of Colorado-Boulder have found an explanation as to why the two-lobed comet has such a distinctive look: it, and other comets, can and often do break apart and reform as part of a process essential to the comet’s evolution.
The process takes place in periodic comets, or those that take less than 200 years to orbit the sun, and occurs when destructive forces cause the nucleus (the solid center of the comet) to split apart, at a rate of roughly 0.01 events per comet per year, Masatoshi Hirabayashi, a postdoctoral associate in the Purdue Department of Earth, Atmospheric, and Planetary Sciences, and his colleagues explained in the Wednesday, June 1 edition of the journal Nature.
While, as Space.com noted, previous research suggested that a comet’s nucleus can break apart and that such events occur in approximately one comet per century, little was known about the process. So Hirabayashi’s team analyzed data from Comet 67P and discovered that it (and other comets like it) may regularly experience cycles in which they break apart only to reform later.
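The two ways of stating the rate are consistent. Treating splits as independent random events (a Poisson assumption of ours, not something the study spells out), a rate of 0.01 per year gives an expected one split per comet per century:

```python
import math

rate = 0.01  # splitting events per comet per year, as quoted in the study

# Expected number of splits for a single comet over a century, and the
# Poisson probability that it splits at least once in that time.
expected = rate * 100
p_at_least_one = 1 - math.exp(-expected)
print(expected, round(p_at_least_one, 2))
```

An expectation of one event per comet per century matches the “one comet per century” phrasing, with about a 63% chance that any given comet splits at least once in that span.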
Credit: ESA
Findings could represent a new step in the evolution of comets
While studying Rosetta’s comet, the researchers found a pair of cracks, each roughly the size of a football field, connecting the two lobes of the so-called “rubber duck” comet. They then developed a numerical model in which they increased 67P’s spin rate from one rotation every 12 hours to one every seven to nine hours to see what would change.
They discovered that the faster spin rate would cause more stress, and result in the formation of a pair of cracks similar in size and location to those along the neck of 67P. Based on their simulations, the study authors suggest that the spin rate of the comet’s nucleus would speed up when sunlight converted ice directly into gas plumes. The extra push the comet received from these jets could cause a strain, and that strain would ultimately lead to fractures.
By increasing the comet’s simulated spin to less than seven hours per rotation, the researchers found, the “head” of the rubber duck comet would pop off, co-author Daniel Scheeres, a professor in the CU-Boulder Department of Aerospace Engineering Sciences, explained in a statement. At that point, the head and body would begin to orbit one another, and after a period ranging from a few hours up to several weeks, they would slowly collide and the comet would be reformed in its entirety.
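That seven-hour threshold is close to what a back-of-the-envelope balance of gravity against centrifugal force gives for a body of 67P’s size. The figures below – a total mass of about 10^13 kg and a lobe treated as a point 2 km from the center – are rounded, assumed values for illustration, not the paper’s model:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
mass = 1.0e13   # approximate mass of Comet 67P, kg (rounded estimate)
r = 2000.0      # assumed distance of a lobe from the center, m

# Critical spin period at which centrifugal acceleration at radius r
# equals the gravitational pull there: omega^2 * r = G * mass / r^2.
t_crit = 2 * math.pi * math.sqrt(r**3 / (G * mass))
print(t_crit / 3600)  # period in hours
```

With these toy numbers the balance point comes out around six hours, consistent with the simulations’ finding that spins faster than about seven hours pull the head off.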
“When we were looking at 67P, we sort of knew intuitively that if it spun fast enough, the head and body should just separate from each other,” Scheeres told Space.com. He and his colleagues examined four other comets with similar, two-lobed nuclei and discovered that the structures of these objects were consistent with this destruction-reformation cycle, suggesting that this sort of behavior is fairly common in comets that regularly approach the sun.
The professor added his team plans to analyze more comets to better learn about the shape and spin evolution of these objects in order to “better understand how this new evolutionary process we’ve identified can change comets over time” and to “explain unknown phenomena linked to comets, like how they can brighten very dramatically at random times in their orbits, or disappear from view. We think these sorts of phenomena may be linked to how they can spin fast enough to break apart.”
It’s safe to say that most fibromyalgia sufferers would love to discover a miracle cure for the illness—or at least something that would provide significant symptom relief. While the medical and scientific communities still look for a surefire cure, following a vegan diet appears to offer substantial relief of fibromyalgia symptoms.
What is a Vegan Diet and What Can You Eat?
Many people define a vegan diet by what you’re not allowed to eat: namely, any products of animal origin. That includes meat, eggs and dairy products, but also less obvious sources like gelatin and honey. Some vegans also avoid white sugar because it is often refined using animal bone char.
But a vegan diet is really just one that relies primarily on plant-based foods like fruits, vegetables, nuts and seeds. Although it can be challenging to follow such a diet, particularly if you frequently have to travel or eat in restaurants, the growing availability of products like soy milk and of restaurants (such as Chipotle or Subway) where you can customize your own meal makes a vegan diet more doable than ever.
Health Benefits of Vegan Diets
Plant-based vegan diets have known benefits for many health conditions. Former President Bill Clinton famously adopted a mostly-vegan diet after undergoing emergency heart surgery. Researchers have already seen that vegan diets have positive benefits in managing heart conditions, high blood pressure, type 2 diabetes and the metabolic syndrome that is a precursor to diabetes.
Vegan diets may be beneficial in treating fibromyalgia because plant-based foods are unlikely to cause inflammation, which can lead to pain and all-over soreness. Vegan, plant-based foods are also more likely to give you energy, which can help to counteract the fatigue that is common in fibromyalgia.
Drs. John McDougall and Dean Ornish have both created comprehensive diet programs that are low in fat and high in plant-based foods. As part of their research and the outcomes of those who follow their diets, they have discovered that a long list of health conditions are substantially improved by avoiding high-fat, animal-based foods.
The Right Kind of Vegan Diet
Following a vegan diet is not just about avoiding animal foods like meat and dairy. Many of us have known “junk food vegans” who manage to avoid animal foods but don’t eat many fruits and vegetables. The wide variety of processed foods means it is now possible to follow a completely vegan diet that would still be considered unhealthy: if you don’t consume enough fruits and vegetables, a vegan diet can actually be worse for your health than one that includes meat and dairy, because the nutrient content of vegan junk food is usually just as poor as that of traditional junk food.
The desired outcome of following a vegan diet is that it can provide the vitamins and minerals that your body needs. Proper nutrition can actually heal your body from many health conditions, including fibromyalgia. Look to the purest foods that nature has to offer and you will almost certainly feel better, even if it’s not a complete cure.
While researchers have come to accept that cultural experiences have played a key role in human evolution, they had never discovered evidence to suggest that any other species of living creature underwent biological changes in order to fulfill a specific niche – until now.
Writing in Tuesday’s edition of the journal Nature Communications, Andrew Foote, an ecologist at the University of Bern, Switzerland, and his colleagues revealed that they had found evidence indicating that such cultural experiences might shape killer whale evolution.
As New Scientist explained, humans developed genes for lactose tolerance after they first became dairy farmers, indicating that their behavior had a direct influence on their genomes. Now, Foote and his fellow researchers have found that, despite their wide distribution, individual orca groups appear to remain in one general area and stick to a specific predatory strategy.
For instance, some herd their prey into bait balls, while others intentionally beach themselves in order to catch seals or other mammals. Since these groups tend to remain stable for up to several decades, these behaviors can be passed on from one generation to another. The authors identified five specific niches and set out to see whether the groups were genetically distinct from one another.
They found that the genomes of killer whales could be categorized into five distinct groups, each of which directly corresponded with one of the five cultural niches. Even though each of the orca groups shared a common ancestor as recently as 200,000 years ago, the research revealed that all of them experienced a different genetic evolution due to this social learning.
Cultural experiences change how certain groups of orca whales hunt. Credit: John Durban, NOAA Southwest Fisheries Science Center
Geographic spread is the key to orca evolution, authors say
While each group is a member of the species Orcinus orca, all of them exhibit significantly different sets of behaviors, according to the Guardian. Some live on fish while others prefer to dine on mammals, and still others consume birds and reptiles. Some live in the Arctic and some in the Antarctic, and some tend to travel widely while others stay in one area.
In all cases, Foote explained, these cultural niches have had an impact on the genes of the killer whales, and as he told the British newspaper, “What is remarkable is that it is incredibly close to what we see in humans. Generation time – the time of becoming an adult and having offspring – is also quite similar, roughly 25 years, and they live to roughly the same age.”
His team’s findings help to explain how orcas became genetically diverse from one another, New Scientist said. Each of the five groups were formed by a small offshoot of the overall killer whale population, and as it expanded over time, this so-called “population bottleneck” resulted in these groups adapting and evolving differently in response to their surroundings, thus ensuring that all of the groups ultimately developed a unique and distinctive genetic identity.
“I think it is linked to the geographic spread,” Dr. Foote told the Guardian. “Killer whales are found from the Arctic to the Antarctic and all the waters in between. Humans and also brown rats are the only other mammals that spread across such a wide geographic range. I think it is all the different prey items that make it possible. As a species they feed on almost everything.”
—–
Image credit: Thinkstock