Dinosaur-like nostrils found in ancient wildebeest creature

While analyzing the fossilized skulls of an ancient wildebeest-like creature originally discovered on Rusinga Island in Kenya, researchers made a startling discovery: the extinct animal had an odd trumpet-like nasal passage similar to those of a Cretaceous period dinosaur.

The creature in question, Rusingoryx atopocranion, was a hoofed mammal first found at a site called Bovid Hill in the Lake Victoria region of the island, co-lead author Haley O’Brien of Ohio University and her colleagues explained earlier this week in a press release.

The similarities between its nasal features and those of lambeosaurine hadrosaur dinosaurs are remarkable, the authors said, given that the two lineages are only very distantly related and separated by tens of millions of years. Their findings are detailed in a new study published in the latest edition of the journal Current Biology.

“The nasal dome is a completely new structure for mammals – it doesn’t look like anything you could see in an animal that’s alive today,” said O’Brien, who is a biological scientist specializing in ecology and evolutionary biology at OU. “The closest example would be hadrosaur dinosaurs with half-circle shaped crests that enclose the nasal passages themselves.”

“The thing that’s really remarkable is that lambeosaurine hadrosaurs are the only other animals that we know of that have a similar feature,” added Dr. Daniel Peppe, an associate professor of geosciences in Baylor’s College of Arts and Sciences and co-director of the project.

Similar upbringings may explain the convergent evolution

Dr. Peppe called the discovery “a fantastic example of convergent evolution,” a phenomenon which occurs when two different creatures wind up evolving the same features independently. Rusingoryx atopocranion, a bovid, is clearly not closely related to a dinosaur, he said, but the two different species may be similar in how they lived their lives.

Both types of animals may have developed in the same way as they grew from juveniles into adults, the researchers explained. Both are believed to have been herbivores that lived in large herds in relatively open environments, and both would likely have been exposed to much the same kind of environmental pressures that would have encouraged the development of a nasal crest that allowed for vocalizations.

The findings indicate that Rusingoryx atopocranion probably used its unusual nasal passage for communication. The trumpet-like feature would have caused the bovid to sound something like a vuvuzela, and would have allowed it to deepen its normal vocal calls. The research even suggests that the creatures might have been able to make calls at near-infrasound levels, preventing other kinds of animals from hearing the herd communicating with one another.

“Vocalizations can alert predators, and moving their calls into a new frequency could have made communication safer,” said O’Brien. “On top of this, we know that [both] Rusingoryx and hadrosaurs were consummate herbivores, each having their own highly specialized teeth. Their respective, remarkable dental specializations may have initiated changes in the lower jaw and cheek bones that ultimately led to the type of modification we see.”

—–

Image credit: Todd S. Marshall

Rosetta orbiter shows comet 67P is full of ‘powdery grains of dust and ice’

The ESA’s Rosetta mission has solved yet another longstanding scientific mystery, conducting a series of measurements that clearly demonstrate there are no large caverns inside Comet 67P/Churyumov-Gerasimenko, officials at the space agency announced this week.

So what is inside the comet? Dust, and lots of it, according to a new study published Wednesday in the journal Nature. In that paper, a team of researchers led by Martin Pätzold from Rheinische Institut für Umweltforschung an der Universität zu Köln in Germany reviewed data collected by Rosetta and found that, despite its low density, the comet is not a porous object.

Comets, Gizmodo explained, are made up of a frozen mixture of water ice and rock left over from the formation of the solar system. Their low density had led scientists to ponder whether they had caves and caverns beneath their surface, but the new study indicates that this is not the case. Rather than being hollow, 67P is homogeneous throughout, filled with extremely light, powdery grains of dust and ice.

Pätzold and his colleagues reported that their findings are consistent with earlier results obtained by Rosetta’s CONSERT radar experiment, which had revealed that the “head” of the twin-lobed comet had a relatively homogenous structure on spatial scales of a few tens of meters.

Lack of changes to acceleration rules out presence of big caves

As Gizmodo pointed out, the study authors used a rather ingenious technique to arrive at their conclusion. They analyzed the effect that the comet’s gravity had on radio signals received on the ground. A cavernous interior would have meant that 67P’s gravitational pull on the orbiter would have been stronger at some points in the orbit than others.

This would have resulted in changes in Rosetta’s acceleration, producing a Doppler shift in the frequency of the spacecraft’s radio signals. Since no sizable shifts were detected, however, the team concluded that the comet has a homogeneous structure composed of roughly three-fourths dust and one-fourth water ice.

According to the ESA, Rosetta is the first probe to successfully make this kind of measurement at a comet. By subtracting the effects of solar radiation pressure and the escaping gas, the team was able to determine the comet’s mass: just under 10 billion tons. Images from the OSIRIS camera instrument place its volume at an estimated 18.7 cubic km, which puts the comet’s density at about 533 kg/m³.
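
Those rounded figures hang together arithmetically: density is simply mass divided by volume. The short sketch below is a back-of-the-envelope check using the numbers quoted above, not part of the Rosetta team’s analysis.

```python
# Back-of-the-envelope check of the comet density quoted above.
# Figures are the rounded values reported for 67P, not raw mission data.
mass_kg = 1.0e13                  # ~10 billion metric tons
volume_m3 = 18.7 * 1.0e9          # 18.7 cubic km, converted to cubic meters

density = mass_kg / volume_m3
print(f"Bulk density: {density:.0f} kg/m^3")   # ~535 kg/m^3, close to the ~533 kg/m^3 quoted
```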

There is a possibility that the comet contains small caves that have thus far escaped detection by the orbiter, Gizmodo explained. Should such caverns exist, they will likely be detected later this year, when Rosetta is guided to a controlled impact on the comet surface. The spacecraft will continue to make observations right up until impact, and those more precise readings could lead to the discovery of caves just a few hundred meters in size, the agency noted.

—–

Image credit: ESA

Discovery of female bones at Stonehenge indicative of ancient gender equality

In what some are hailing as evidence of gender equality in ancient cultures, archaeologists have discovered the cremated remains of 14 women buried alongside their male counterparts at Stonehenge, indicating that they likely were viewed as having equal social status.

According to BBC News and the New York Times, the discovery was made by Christie Willis, a PhD student at University College London, and her colleagues during an excavation at “Aubrey Hole 7,” one of 56 pits arranged around the periphery of the prehistoric British monument.

The excavation uncovered the remains of nine men along with 14 women, all of whom were believed to have been buried between 3100 BCE and 2140 BCE. Willis and her colleagues also found long bone pins believed to have been used as hairpins during their dig, which was detailed in the latest issue of the magazine British Archaeology.

The discovery runs contrary to findings at older Neolithic tombs in southern England, where a significantly higher proportion of adult males was found, said BBC News. Willis’s team said that their findings indicate that there was a “surprising degree of gender equality” in the culture responsible for creating the Wiltshire site.

‘Women were as prominent as men’ at Stonehenge

During her investigation, Willis sorted through nearly 100 pounds (45 kg) of bone fragments to determine which part of the skeleton each came from, as well as the age and sex of the person from whom they came. Nine of the individuals were male and 14 female, she said.

According to the Daily Mail, the researchers estimated the sex of each set of remains on the basis of the ear canal, which is found within the petrous bone – a dense, sturdy bone that is typically still identifiable after cremation. They then used CT scans to reveal the lateral angle of the internal acoustic canal, which provided them with enough data to determine the sex of the remains.
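
For readers curious how a single angle measurement can yield a sex estimate, the snippet below shows the general idea of a cutoff-based classification. The 45-degree threshold and the sample measurements are illustrative assumptions for this sketch, not values taken from the Stonehenge analysis.

```python
# Toy cutoff-based sex estimation from lateral-angle measurements (degrees).
# The 45-degree threshold and the sample data are assumptions for illustration,
# not figures from the Stonehenge study.
def estimate_sex(lateral_angle_deg, cutoff_deg=45.0):
    """Wider angles are read as female, narrower ones as male."""
    return "female" if lateral_angle_deg >= cutoff_deg else "male"

sample_angles = [38.0, 52.5, 47.0, 41.5, 55.0]   # hypothetical measurements
for angle in sample_angles:
    print(f"{angle:5.1f} deg -> {estimate_sex(angle)}")
```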

She told BBC News that the remains had previously been dug up and reburied in Aubrey Hole 7 in the hopes that they would eventually undergo in-depth analysis. The archaeologists noted that their work had taken a total of four years, and that the bone fragments were sent to universities in Oxford and Glasgow to undergo radiocarbon dating.

“In almost every depiction of Stonehenge by artists and TV re-enactors we see lots of men, a man in charge, and few or no women,” Mike Pitts, an archaeologist and the editor of British Archaeology, told Discovery News on Wednesday. “The archaeology now shows that as far as the burials go, women were as prominent there as men.”

—–

Image credit: Thinkstock

These 5 extinct Australian megafauna will make you thankful for the deadly species they have today

People love to marvel over how every living thing in Australia can kill you, but as it turns out, modern humans have it easy. When the first of our species arrived on the continent some 50,000 years ago, the creatures they faced put current ones to shame.

Australia was dominated by megafauna—defined as any animal over 100 pounds (44 kilos) at adulthood—and humans likely coexisted with these animals for thousands of years before they (thankfully) went extinct.

1. Procoptodon goliah

Let’s start off with a “gentler” animal. P. goliah is the largest kangaroo that ever existed (that we know of). While it measured in at about 6.6 feet (2 meters) tall—roughly the same height as the modern red kangaroo—it was 2.5 times heavier, coming in at nearly 450 pounds (200 kilograms). It could reach leaves in trees about 10 feet (3 meters) up and had long, recurved claws to grasp branches.

Artist’s rendering of Procoptodon goliah. Credit: Wikipedia

2. Thylacoleo carnifex

Also known as the marsupial lion, this is the largest mammalian carnivore ever found in Australia. It was about 5 feet (1.5 meters) long and 2.5 feet (75 centimeters) tall. Marsupial lions had powerful jaws, with calculations suggesting they had, pound for pound, the strongest bite of any known mammal. They also had forelimbs with semi-opposable “thumbs” and retractable claws, which they used to hunt prey and climb trees. T. carnifex went extinct around 46,000 years ago.

Artist’s rendering of Thylacoleo carnifex. Credit: Wikipedia

3. Diprotodon optatum

This guy resembled a giant wombat, and seems to have been preyed on by marsupial lions. It is the largest known marsupial: it measured 12.5 feet (3.8 meters) in length and weighed up to 6,200 pounds (2,800 kilos). D. optatum seems to have preferred flat, open habitats like savannahs, where it browsed on shrubs and forbs. Fascinatingly, it co-existed with humans for around 25,000 years before going extinct.

Artist’s rendering of Diprotodon optatum. Credit: Dmitry Bogdanov/Wikipedia

4. Quinkana fortirostrum

Modern crocodiles are plenty scary, but in terms of threat to humans, there’s a limitation: We’re usually terrestrial, whereas crocs are fairly restricted to killing things in or around water. Quinkana, a 16- to 19.5-foot (5-6 meter) croc that disappeared around 40,000 years ago, however, wasn’t limited in this way. Quinkana had knife-like, serrated teeth and was fully terrestrial, possibly using its long legs to chase down land-based prey in a way that modern, short-legged, aquatic crocs can’t.

Credit: www.prehistoric-wildlife.com

5. Megalania prisca

This is the crown jewel of Aussie megafauna. It was a carnivorous lizard—specifically a goanna—which ranged from 11 to 16 feet (3.5 to 5 meters) in length and weighed up to 4,200 pounds (1,940 kilos). For those wondering, that little fact makes it the largest terrestrial lizard ever found. Paleontologists believe it would have been a predator at the top of the food chain, taking large prey such as pygmy elephants and deer. It’s suspected that it had toxic, bacteria-laden saliva and roamed open forests, woodlands, and grasslands, waiting to ambush and kill its prey.

Credit: http://www.outdoordesign.com.au/Landscaping-Hard-Product-Supply/garden-public-art/Giant-Australian-Prehistoric-Megalania-Natureworks/291.htm

Around 85 percent of Australia’s megafauna went extinct after humans arrived 50,000 or so years ago—and it’s widely hypothesized (and hotly debated) that humans drove this process forward, perhaps by consuming the limited resources the animals relied on to survive. In fact, a very recent paper found evidence that humans drove another megafauna species—a 500-pound flightless bird known as Genyornis newtoni—to its untimely demise by consuming the birds’ eggs. Regardless, we’re sure many Australians are happy not to have to live alongside deadly spiders and 16-foot carnivorous lizards.

—–

Feature Image: An illustration of a giant flightless bird known as Genyornis newtoni, surprised on her nest by a 1-ton predatory lizard named Megalania prisca in Australia roughly 50,000 years ago. (Credit: Illustration by Peter Trusler, Monash University)

NASA’s Juno spacecraft adjusts flight plan, continues on to Jupiter

NASA’s solar-powered Juno spacecraft executed the first of two scheduled maneuvers to adjust its flight plan earlier this week, bringing it one step closer to its rendezvous with Jupiter in a little under five months, officials from the US space agency announced on Wednesday.

Juno, which originally launched on August 5, 2011, fired its thrusters at 1:38 p.m. EST (10:38 a.m. PST) on February 3, altering its speed by one foot (0.31 meters) per second and consuming 1.3 pounds (0.6 kilograms) of fuel in the process, according to NASA.

At the time of the maneuver, the spacecraft was approximately 51 million miles (82 million km) from the largest planetary inhabitant in the solar system and about 425 million miles (684 million km) from Earth. Its next trajectory correction is scheduled to take place sometime in May.

In a statement, Scott Bolton, Juno principal investigator at the Southwest Research Institute in San Antonio, said that the maneuvers would “fine tune Juno’s orbit around the sun” while also “perfecting our rendezvous with Jupiter on July 4th at 8:18 p.m. PDT [11:18 p.m. EDT].”

Record-setting solar powered craft to orbit Jupiter 33 times

Once it arrives at the Jovian system, Juno will orbit Jupiter 33 times, closing to within 3,100 miles (5,000 km) of the planet’s cloud tops once every two weeks. During these flybys, the probe will peer beneath the cloud cover and study the planet’s aurora, providing scientists with new insight into the origins, structure, atmosphere, and magnetosphere of the world below.

Last month, Juno established a new distance record for solar-powered spacecraft, shattering the previous mark held by the ESA’s Rosetta orbiter by reaching the 493 million mile mark. By the time it reaches Jupiter, it will have become just the eighth spacecraft to have travelled more than 500 million miles from Earth, and the first of them to run on solar power rather than nuclear power.

This graphic shows how NASA’s Juno mission to Jupiter became the most distant solar-powered explorer and influenced the future of space exploration powered by the sun. (Credits: NASA/JPL-Caltech)

As it continues towards Jupiter, Juno, which has three 30-foot-long solar arrays carrying more than 18,000 solar cells, will be collecting just 1/25th as much energy from the sun as it did early on during its voyage. Fortunately for NASA, its massive arrays are extremely efficient at converting light into energy and will be able to provide the probe with enough power to keep it fully operational.
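
That "1/25th" figure is just the inverse-square law at work: Jupiter sits roughly five times farther from the sun than Earth does, so the same sunlight is spread over about 25 times the area. A quick sketch of that arithmetic, using a rounded distance of 5 AU for illustration:

```python
# Inverse-square falloff of sunlight between Earth's orbit and Jupiter's.
# A rounded distance of 5 AU is used for illustration (Jupiter's mean distance
# is closer to 5.2 AU, which gives a factor nearer 27).
earth_distance_au = 1.0
jupiter_distance_au = 5.0

relative_flux = (earth_distance_au / jupiter_distance_au) ** 2
print(f"Sunlight at Jupiter: ~1/{1 / relative_flux:.0f} of that at Earth")   # ~1/25
```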

“Juno is all about pushing the edge of technology to help us learn about our origins,” Bolton said last month in a statement. “We use every known technique to see through Jupiter’s clouds and reveal the secrets Jupiter holds of our solar system’s early history. It just seems right that the sun is helping us learn about the origin of Jupiter and the other planets that orbit it.”

—–

Feature Image: Launching from Earth in 2011, the Juno spacecraft will arrive at Jupiter in 2016 to study the giant planet from an elliptical, polar orbit. (Credits: NASA/JPL-Caltech)

Does this ancient statue reveal the Greeks had laptops with USB ports?

While it’s well known that the ancient Greeks were responsible for inventing one of the first computational devices, the Antikythera mechanism, the Internet is abuzz with claims that one statue dating back to 100 BC depicts an individual holding a modern-style laptop.

According to USA Today, the sculpture in question is known as “Grave Naiskos of an Enthroned Woman with an Attendant” and was used as a funeral marker in 100 BC. The sculpture, which is currently being displayed at the J. Paul Getty Museum in California, depicts a woman sitting in a throne-like chair while a young servant standing in front of her holds open a thin box.

Or is it a thin box? Reports published this week, including one from the Daily Mail, cite claims from people who believe the device is actually a laptop computer, complete with visible USB ports. The object, conspiracy theorists claim, is too wide to be a jewelry box and too thin to be a chest of some kind, and the woman is looking at it the way a person looks at a laptop screen.

So was the Oracle of Delphi a laptop? Does this mean that the ancient Greeks actually were the ones who invented modern-day computer technology, or was time travel somehow involved, as the Daily Mail reported some people in tinfoil hats seem to believe? Highly unlikely.

Time travel? Ancient Greek laptops? Sadly, no.

Such theories date back to at least 2014, when YouTube user StillSpeakingOut posted a video in which he said that he was “not saying that this is depicting an ancient laptop… but when I look at the sculpture I can’t help but think about the Oracle of Delphi, which was supposed to allow the priests to connect with the gods to retrieve advanced information and various aspects.”

However, as Inquisitr and The Epoch Times pointed out, wax tablets that the Greeks are known to have used as early as the 14th century BC, and which have a striking similarity in appearance to the so-called laptop in the sculpture, have appeared in other works of art, including a painting from the Greek vase artist Douris that was created sometime around 500 BC.

Kristina Killgrove, a bioarchaeologist at the University of West Florida, further debunked the ancient laptop/time travel claims in an excellent piece for Forbes. She explained that the object in question may be an unrealistic depiction of a jewelry box or a wax tablet, as suggested above. As for the holes in the object, “ancient marble sculptures often have holes that used to hold wooden or other perishable objects,” Killgrove said. “Perhaps a wooden facade graced the box/tablet.”

The sculpture in question “shows evidence of reworking,” she added. It was originally a grave marker with three sides, but the top portion, the lefthand wall and the inscription on the bottom are now missing. The holes could relate to any of the missing pieces, Killgrove said. Finally, she added that there is a possibility that the sculpture itself could either be a fake, or a replica of the original.

One thing that seems clear, though, is that time travel is (sadly) not involved. Nor does it seem as though the Oracle of Delphi was an HP laptop – which seems obvious given the fact that in order to “connect with the gods” and gather information, it probably would’ve needed Wi-Fi. As tends to be the case, the most boring explanation appears to be the most plausible one.

—–

Feature Image: Courtesy of the J. Paul Getty Museum

Earth scores surprisingly low on an index of potential habitability

The Earth is capable of supporting life. We know this because we’re standing on it and breathing at the same time. But what if we were extraterrestrials living on another planet somewhere far off in the universe? Would we be able to tell that this world could support living organisms?

Possibly, according to astronomer Rory Barnes of the University of Washington-based Virtual Planetary Laboratory, who in a new study used a variety of measurements and calculations as a way to gauge the relative habitability of our home world. His research gave Earth a rating of just 82 percent, making it a good candidate for biological life, but not a great one.

Barnes, a research assistant professor of astronomy, and his colleagues developed a “habitability index for transiting planets,” which they use to rank exoplanets and help identify priority candidate worlds in the search for alien life. Among the factors analyzed in the ranking system are the radius and distance of a planet’s orbital path, the size of the planet, the amount of energy it gets from its host star, and the mass and radius of that star (which are estimated using spectrometry).

Using this data, they create a model of a planet and compare its “measurements” to those known from other worlds in an attempt to sort out the likelihood that an exoplanet would be capable of supporting biological life. In their new study, they turned the microscope on our own homeworld and examined it as if it were a far-off planet – with rather surprising results.
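
The published index rests on detailed modeling, but the basic idea of folding several observables into one score can be sketched quickly. In the toy version below, each quantity is compared with an "ideal" value and penalized as it strays; the ideal values, tolerances, and Gaussian scoring are invented for illustration and are not Barnes' actual formula.

```python
import math

# Toy habitability-style score. Each observable is compared with an "ideal"
# value and penalized the further it strays. The ideals, tolerances, and the
# Gaussian falloff are illustrative assumptions, not the published index.
def factor_score(value, ideal, tolerance):
    """Returns 1.0 at the ideal value, falling off smoothly with deviation."""
    return math.exp(-((value - ideal) / tolerance) ** 2)

def toy_habitability(radius_earths, stellar_flux_earths):
    scores = [
        factor_score(radius_earths, ideal=1.0, tolerance=0.5),        # planet size
        factor_score(stellar_flux_earths, ideal=0.8, tolerance=0.4),  # energy received
    ]
    # Geometric mean, so one very poor factor drags the whole score down.
    product = 1.0
    for s in scores:
        product *= s
    return product ** (1.0 / len(scores))

# Earth viewed "from outside": radius = 1 Earth radius, flux = 1 Earth flux.
print(f"Toy score for Earth: {toy_habitability(1.0, 1.0):.2f}")
```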

Believe it or not, our planet is actually too close to the sun

It seems impossible that a planet known to support complex biological organisms would end up scoring anything less than a perfect 100 percent on a habitability index, but as the study authors pointed out, Earth – objectively speaking – is not exactly an ideal planet for life.

“Basically, where we lose some of the probability, or chance for life, is that we could be too close to the star,” Barnes explained in a statement. “We actually are kind of close to the inner edge of the habitable zone. If we spotted Earth with our current techniques, we would reasonably conclude that it could be too hot for life.”

In addition to sitting close to the inner edge of the habitable zone – the region around a star where an orbiting rocky planet could maintain surface water – Earth’s composition and orbital path, along with the behavior of other nearby worlds, make the planet a less-than-perfect candidate in terms of the habitability index, according to the researchers behind this new report.

“Remember, we have to think about the Earth as if we don’t know anything about it. We don’t know that it’s got oceans, and whales and things like that – imagine it’s just this thing that dims some of the light around a nearby star when it passes,” said Barnes.

Even if we discovered a new planet that was Earth’s twin and orbited around a star that was just like the sun, he and his colleagues said, we might not choose to explore it too closely if we found another world that ranked higher on the index of habitability at the same time. That other world, they explained, would actually be the safer bet when it comes to supporting biological life.

—–

Feature Image: NASA

Is AVACEN The New Wonder Treatment For Fibromyalgia?

Time and time again, companies release new medicines that do little to help the pain and suffering caused by fibromyalgia. Fibromyalgia is a pain disorder marked by constant aches, unexplained fatigue, and debilitating confusion, and its cause is still a mystery to doctors and scientists.

Not until recently did many doctors even recognize fibromyalgia as a valid disease rather than something made up by the patient. And only after that progress was made did the scientific community begin crafting medications to treat the disease.

Unfortunately, there is no known cure for fibromyalgia. That’s why it’s imperative that the world becomes serious about finding answers to the questions patients have long been asking: what is this pain and how can I make it go away?

A few drugs, such as Cymbalta, Lyrica, and Savella, hit the market with a splash. Sadly, those medications have done little to help the widespread pain felt by fibromyalgia sufferers. In fact, the long list of side effects of those drugs can seem more daunting than facing the pain without medication.

That is, until researcher Thomas Muehlbauer invented the AVACEN Treatment Method. His report states that not one patient in his study (in which 500,000 treatments were administered) has reported a single side effect.

So you’re probably wondering how the AVACEN Treatment Method works.

Muehlbauer explains by saying, “The key is the continued infusion of heat into the circulatory system through the palm when the body is at normothermia, approximately 98.6 degrees F. The heat acts as a catalyst to reduce the thickness of the blood. The result is the body must dissipate the unwanted heat by pumping the warmed and thinner blood through the skeletal muscles to reach the heat exchange capillary network where it can be cooled by the ambient air.”

You heard right: the AVACEN Treatment is basically spa therapy for the inside of your body, including pain-sensitive tissues like the muscles. That’s some serious muscular relaxation!

In case you are concerned about the possibility of body temperatures becoming too warm, Muehlbauer continues by stating, “It should be noted that the level of heat is precisely controlled so that it cannot induce heatstroke. Actually, the level of infused heat normally doesn’t even engage the sweat glands.”

It sounds like the study, which was performed by the University of California, San Diego and the Department of Veterans Affairs, is close to a complete breakthrough which will change the fibromyalgia community forever.

Would you consider the AVACEN Treatment Method to help control your fibromyalgia pain? Let us know in the comments what you think!

Did explorers really eat woolly mammoth in 1951? The mystery is solved

In 1951, a now-legendary dinner prepared for the Explorers Club was capped by an extraordinary treat: An actual portion of woolly mammoth, fresh from its icy prison, served hot and ready to eat.

Unfortunately, a Yale-led analysis has shown that this legend is just that—a legend. What the diners actually ate was green sea turtle.

The party took place on January 13, 1951, taking over the ballroom of the Roosevelt Hotel in New York City with a spread of Pacific spider crabs, green turtle soup, bison steaks, and pieces of what has been remembered either as 250,000-year-old woolly mammoth meat or as hunks of extinct giant ground sloth.

“I’m sure people wanted to believe it,” said co-lead author Jessica Glass, a Yale graduate student in ecology and evolutionary biology, in a statement. “They had no idea that many years later, a Ph.D. student would come along and figure this out with DNA sequencing techniques.”

“To me, this was a joke that no one got,” said the other co-lead author, Matt Davis, a Yale graduate student in geology and geophysics. “It’s like a Halloween party where you put your hand in spaghetti, but they tell you it’s brains. In this case, everyone actually believed it.”

Will you be having chicken or giant sloth?

According to the paper in PLOS ONE, news sources at the time were quick to leap on the story, and reported that the meat had been supplied by the Reverend Bernard Hubbard, an Alaskan explorer otherwise known as the Glacier Priest. It was said that he had found the mammoth frozen in ice on Akutan Island in the Aleutians, and had it shipped specially to NYC by a U.S. Navy captain for the event.

The man who was responsible for promoting the Explorers Club’s event, Commander Wendell Phillips Dodge—who formerly had worked as film star Mae West’s agent—quickly sent out notices announcing the dinner would feature “prehistoric meat,” which was then variably understood to be either giant ground sloth (Megatherium) or woolly mammoth by the attendees.

The actual specimen saved from the dinner. (Credit: Yale Peabody Museum of Natural History)

One man who could not make the event requested a sample be sent to him, and Dodge personally saw to the request, filling out the specimen label as sloth rather than mammoth—a fact long considered a potential discovery of huge scientific import, since Megatherium was not then, and is not now, believed to have lived north of South America. Finding one frozen in the Aleutians would have blown that hypothesis to bits.

Despite the potential implications of such a find, the idea of woolly mammoth carved straight from the ice captured more of the public’s attention, and the meal was more often remembered that way. Of course, there was a scientific problem with the mammoth story: preserved mammoths are actually found in the frozen dirt of the permafrost, not in ice.

And the answer is…

Frozen mammoth would, however, have been edible. “The meat wouldn’t taste good, but you could eat it,” Davis said.

Either way, the little specimen container and its meat passed through museum collections over the years, eventually landing at Yale in 2001. In 2014, Davis and Glass—both current members of the Explorers Club—decided to track down the actual source of the meat, using DNA analysis and archival research to determine whether it was sloth or mammoth.

After extracting the mitochondrial genome, they were able to determine that the meat seems to have come from green sea turtle, which matches well with the archival evidence—and preserves the notion that sloths indeed never left South America.
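
For a feel of the identification step, the toy snippet below scores a query fragment against candidate references by shared k-mers. The sequences are fabricated placeholders and the method is a crude stand-in; the actual study aligned the recovered mitochondrial DNA against real reference genomes.

```python
# Toy species identification by k-mer overlap. The sequences below are
# fabricated placeholders; the real analysis aligned recovered mitochondrial
# DNA against proper reference genomes.
def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(query, reference, k=8):
    q, r = kmers(query, k), kmers(reference, k)
    return len(q & r) / len(q) if q else 0.0

references = {                                  # made-up stand-in fragments
    "green sea turtle":   "ATGGCAACACTACGAAAAACACACCCACTACTTAAAATTATCAAC",
    "woolly mammoth":     "ATGACCAACATCCGAAAATCTCACCCACTAATAAAAATCATCAAC",
    "giant ground sloth": "ATGACAAATATTCGAAAAACCCACCCCCTACTAAAAATTATTAAT",
}
query = "ATGGCAACACTACGAAAAACACACCCACTACTTAAAATTATCAAT"   # fabricated fragment

best = max(references, key=lambda name: similarity(query, references[name]))
print("Best match:", best)
```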

The meat served at the 1951 Explorers Club Annual Dinner. (Courtesy of the Peabody Museum of Natural History, Division of Vertebrate Zoology. Illustration by Matt Davis / Yale University)

“If this had not been from the banquet, we would still want to know the identity of the meat, because it would have large scientific implications,” Davis said.

Meanwhile, Glass added that the Explorers Club banquet this year is in March—but she can’t attend. “Maybe someone will save a piece of meat for me,” she said.

—–

Feature Image:

German scientists begin long-awaited nuclear fusion experiment

On Wednesday, German Chancellor Angela Merkel began a nuclear fusion experiment that could lead to cleaner, safer nuclear power.

After nine years of planning and building, the experiment began with a small amount of hydrogen injected into a doughnut-shaped device that blasted the gas with the radiation equivalent of 6,000 microwave ovens.

The plasma created by the radiation blast lasted less than a second before cooling down. However, the brief state change was enough for German scientists to get usable information.

“Everything went well today,” Robert Wolf, a project scientist, told the Associated Press. “With a system as complex as this you have to make sure everything works perfectly and there’s always a risk.”

The Stellarator during construction in 2012 (note the workers for scale)

Part of a worldwide goal

The German project is part of a worldwide attempt to tame nuclear fusion, in which atomic nuclei fuse at very high temperatures and release considerable amounts of energy in the process, similar to the nuclear processes inside the sun.

Advocates admit the technology is most likely many decades off, but argue that the potential to replace power generation through fossil fuels and nuclear fission reactors is too great to ignore.

The German team is working with a design conceived by the American physicist Lyman Spitzer in 1950. Known as a Stellarator, the device uses a complex system of magnetic coils to confine plasma inside a doughnut-shaped vessel long enough for fusion to happen.

In the coming years, the project team will slowly increase the temperature and duration of the plasma with the objective of keeping it stable for 30 minutes, Wolf said. “If we manage (the feat by) 2025, that’s good. Earlier is even better,” he continued.

David Anderson, a professor of physics at the University of Wisconsin who isn’t involved in the project, said the German project looks promising thus far.

“The impressive results obtained in the startup of the machine were remarkable,” he said. “This is usually a difficult and arduous process. The speed with which W7-X became operational is a testament to the care and quality of the fabrication of the device and makes a very positive statement about the stellarator concept itself.”

—–

All images courtesy of the Max Planck Institute

New tarantula named after late country singer Johnny Cash

A team of biologists from Auburn University and Millsaps College has identified 14 brand new species of arachnids living in the southwestern United States, including a never-before-seen species of tarantula named in honor of singer-songwriter Johnny Cash.

The newfound species, which are described in a paper published by the journal ZooKeys, nearly double the number of large-bodied, hairy members of the genus Aphonopelma known to live in the region, lead author Dr. Chris Hamilton and his colleagues explained in a statement.

“We often hear about how new species are being discovered from remote corners of the Earth, but what is remarkable is that these spiders are in our own backyard,” said Dr. Hamilton. “With the Earth in the midst of a sixth mass extinction, it is astonishing how little we know about our planet’s biodiversity, even for charismatic groups such as tarantulas.”

Among the new species is the Aphonopelma johnnycashi, which was named in honor of the legendary recording artist due to the fact that the males typically have a solid black color (Cash was well known for his propensity to wear all-black clothing), and the fact that it was originally found near Folsom Prison (which was immortalized in a Cash song).

Research finds 29 confirmed tarantula species in US

According to the study authors, tarantulas belonging to the genus Aphonopelma are among the most unique kinds of spiders found in the US. One of the things that makes these particular arachnids special is the extreme range in size among their species: some of these spiders are small enough to fit on the face of a quarter, while others can have a leg span of up to six inches (15 cm).

Aphonopelma can be found in 12 different states throughout the southern US, ranging from west of the Mississippi River to California, and are most frequently spotted when the weather is warm and the males leave their burrows in search of mates. Scientists knew little about these arachnids prior to the new study, and as Dr. Hamilton pointed out, at one time there were claims that more than 50 different species of tarantulas were found in the US.

This is a comparison of the largest and the smallest tarantula species in the United States. These are adult females of Aphonopelma anax (L) from Texas and Aphonopelma paloma (R) from Arizona. (Credit: Dr. Brent Hendrixson (A. anax) and Dr. Chris A. Hamilton (A. paloma))

This proved to be incorrect, as many of these supposedly distinct species actually turned out to be members of the same species. The study authors set out to clear up the confusion surrounding tarantulas by spending more than a decade hunting them down and studying the diversity and distribution of more than 3,000 specimens across a variety of different environments.

Calling their work the most comprehensive taxonomic study ever performed on a tarantula group, Dr. Hamilton and his colleagues explained that they took an “integrative” approach to taxonomy, using anatomical, behavioral, distributional, and genetic data to discern between the various groups of arachnids. They found that there are a total of 29 Aphonopelma species living in the US, 14 of which had not been previously described.

“Two of the new species are confined to single mountain ranges in southeastern Arizona, one of the United States’ biodiversity hotspots,” said co-author Brent Hendrixson. “These fragile habitats are threatened by increased urbanization, recreation, and climate change. There is also some concern that these spiders will become popular in the pet trade due to their rarity, so we need to consider the impact that collectors may have on populations as well.”

—–

Feature Image: This is an adult male of Aphonopelma johnnycashi from California. (Credit: Dr. Chris A. Hamilton)

Dark matter isn’t what’s causing gamma rays in our galaxy

Contrary to what previous studies have suggested, new research indicates that the excess of gamma rays coming from the center of the galaxy is unlikely to be from dark matter, and instead originates from a different kind of astrophysical phenomenon, possibly rapidly rotating neutron stars.

Past research suggested that the gamma rays coming from the dense region of space in the inner part of the Milky Way might have been created by collisions involving dark matter particles, but studies conducted by two separate teams have found that the signals given off by the gamma rays are not characteristic of those expected from dark matter.

The two groups, one a joint team from Princeton University and the Massachusetts Institute of Technology (MIT) and the other made up of scientists from the Netherlands, both reported this week in the journal Physical Review Letters that the rays likely came from a different source, and that fast-rotating stars known as millisecond pulsars were the best candidates.

“Our analysis suggests that what we are seeing is evidence for a new astrophysical source of gamma rays at the center of the galaxy,” Mariangela Lisanti, an assistant professor of physics at Princeton and co-author of one of the studies, said in a statement. “This is a very complicated region of the sky and there are other astrophysical signals that could be confused with dark matter signals.”

Distribution of high-energy particles suggests a different source

Although both teams reached the same conclusion, they used different techniques to arrive there. The US group used image-processing techniques to examine how the gamma rays should appear if they were coming from the collision of a hypothesized type of dark matter particles known as weakly interacting massive particles (WIMPs).

Lisanti and her colleagues analyzed images of gamma rays captured using NASA’s Fermi space telescope, using a method of statistical analysis to test a widely-accepted dark matter model that suggests that the collision of two WIMPs causes them to annihilate each other, producing gamma rays – the highest energy form of light in the universe – in the process.

This model predicts that the resulting gamma-ray photons should be smoothly and evenly distributed among the pixels in images captured by the Fermi telescope, according to the study authors. However, their analysis revealed that the photons were concentrated in isolated pixels, meaning that the gamma rays probably originated from discrete point sources rather than diffuse dark matter annihilation.
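
A quick simulation makes the statistical signature concrete: smooth emission scatters photons almost uniformly across pixels, while a handful of unresolved point sources piles the same photons into a few pixels. The pixel and photon counts below are arbitrary, and this is only an illustration of the idea, not the non-Poissonian template analysis the team actually used.

```python
import random

# Illustration only: compare photon counts per pixel for smooth emission
# versus a few bright point sources. Numbers are arbitrary.
random.seed(0)
N_PIXELS, TOTAL_PHOTONS = 1000, 5000

# Smooth (dark-matter-like) emission: each photon lands in a random pixel,
# giving roughly Poisson-distributed counts.
smooth = [0] * N_PIXELS
for _ in range(TOTAL_PHOTONS):
    smooth[random.randrange(N_PIXELS)] += 1

# Point-source (pulsar-like) emission: the same photon budget comes from a
# few bright spots, so a handful of pixels collect most of the photons.
clustered = [0] * N_PIXELS
sources = [random.randrange(N_PIXELS) for _ in range(20)]
for _ in range(TOTAL_PHOTONS):
    clustered[random.choice(sources)] += 1

def variance_to_mean(counts):
    mean = sum(counts) / len(counts)
    return sum((c - mean) ** 2 for c in counts) / len(counts) / mean

# A ratio near 1 matches the Poisson expectation; clustering pushes it well above 1.
print("smooth   :", round(variance_to_mean(smooth), 2))
print("clustered:", round(variance_to_mean(clustered), 2))
```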

The Dutch team reached largely the same conclusion using a method centered around wavelet transformation, and while the researchers are not 100 percent certain what that alternate source might be, they suspect that it could be millisecond pulsars. While the researchers note that their findings are a bit of a setback in the hunt for dark matter, the results are nonetheless intriguing.

Christoph Weniger, a researcher at the University of Amsterdam and the lead author of the Netherlands-based team, called the discovery a win-win, adding, “Either we find hundreds or thousands of millisecond pulsars in the upcoming decade, shedding light on the history of the Milky Way, or we find nothing. In the latter case, a dark matter explanation for the gamma ray excess will become much more obvious.”

—–

Feature Image: Studies by two independent groups from the US and the Netherlands indicate that the observed excess of gamma rays from the inner galaxy likely comes from a new source rather than from dark matter. The best candidates are rapidly rotating neutron stars, which will be prime targets for future searches. The Princeton/MIT group and the Netherlands-based group used two different techniques, non-Poissonian noise and wavelet transformation, respectively, to independently determine that the gamma ray signals were not due to dark matter annihilation. (Credit: Image courtesy of Christoph Weniger)

Astronomers have finally figured out what the Red Rectangle is

HD 44179, which first caught the attention of astronomers in the 1970s and earned the moniker the “Red Rectangle” due to its unique shape and color, has been revealed to be a dying star and not a planetary nebula as previously thought, the ESA announced this week.

While the Red Rectangle’s existence had been known since the early 20th century, it wasn’t until a rocket equipped with an infrared detector was fired in its direction in 1973 that researchers began to realize just how unique HD 44179 truly was. Then, in 2007, an image captured by the Hubble Space Telescope’s Advanced Camera for Surveys showed the star in all its glory.

That image focuses on wavelengths of red light, highlighting hydrogen gas emissions (displayed in red in the photograph below). Also featured is a secondary, broader range of orange-red light which was colored blue in order to increase the contrast. Located some 2,300 light-years from Earth in the constellation Monoceros, the feature is home to a star nearing the end of its lifespan.

According to the ESA, the star in question “has puffed up as the nuclear reactions at its core have faltered, and this has resulted in it shedding its outer layers into space.” They added that this kind of gas cloud was mistakenly labelled as a planetary nebula because astronomer William Herschel believed they looked similar to Uranus (a world which he discovered).

The star HD 44179 is surrounded by an extraordinary structure known as the Red Rectangle. (Credit: ESA/Hubble and NASA)

So what’s causing its unusual shape, anyway?

The Hubble image depicting the Red Rectangle also features an X-shape that suggests that there is something keeping the star’s atmosphere from expanding in a uniform fashion. Rather, there is likely a thick dust disk surrounding the star that causes the gas outflow to be funnelled into two wide cones, which appear in the photograph as diagonal lines.

Hubble scientists explained that the dying star, which is located in the center of the rectangular object, pumps out gas and other material to make the nebula while also giving it its unique and distinctive shape. Furthermore, the star appears to be a close binary, which they said could help explain its unusual appearance. Much of the Red Rectangle remains mysterious, however.

Once the star finishes expelling all of its mass, it will leave behind a very hot white dwarf, and the resulting ultraviolet radiation will be so luminescent that it will cause the surrounding gas to glow, NASA and the ESA explained. The Hubble image was originally published in 2010, they added, and the field of view is roughly 25 by 20 arcseconds.

—–

Feature Image: The HD 44179 nebula, known as the “Red Rectangle”. Credit: NASA/ESA, Hans Van Winckel (Catholic University of Leuven, Belgium) and Martin Cohen (University of California)

Martian ‘cauliflowers’ suggest presence of alien life

Just recently, a study determined that it may be impossible to find life on Mars via its polar ice caps, but hope has sprung forth again in an unusual form: Martian “cauliflowers”.

In 2008, NASA’s Spirit rover stumbled upon something strange in Mars’ Gusev Crater. The crater—which is believed to have once housed hot springs and geysers—was covered in innumerable tiny, cauliflower-shaped nodules of a mineral known as opaline silica.

Opaline silica is a pretty common mineral in and of itself. On Earth, it can become widely distributed throughout soil and water via the weathering of silicate minerals, and some organisms, like diatoms, need it to live.

But finding opaline silica shaped like little trees on Mars left many scratching their heads, wondering how the mineral deposited in that shape.

Alien life

New research by Steven Ruff and Jack Farmer of Arizona State University in Tempe, however, has an exciting proposal for where these “micro-digitate silica protrusions” (as they are also more scientifically known) came from. After examining similar mineral deposits in a Chilean desert, the pair has offered the notion that the “cauliflowers” were shaped by microbes—suggesting that alien life may have existed on Mars at some point.

As outlined at the American Geophysical Union Fall Meeting in December, the pair studied Chile’s Atacama Desert, which has conditions that make it fairly analogous to the surface of Mars: the soil is similar; the extreme desert climate (less than four inches of rain per year, plus temperatures that range from -13 to 113 degrees Fahrenheit) is similar; and a huge amount of ultraviolet radiation penetrates to the ground (thanks to its elevation of 13,000 feet above sea level). Together, these make it probably the closest thing we have to Mars on Earth.

Of course, the Atacama Desert has oxygen and living organisms, but since scientists can’t swing by Mars whenever they wish to pick up samples, they work with what they have.

But besides the similarity in environment, Atacama has another huge similarity to Gusev Crater: It used to house geysers, too, and in fact has incredibly similar looking silica deposits. Further, there is fossil evidence that its geysers used to be home to many different kinds of microbes.

These microbes are believed to have given the “cauliflowers” in Atacama Desert their shape, as similar silica deposits have also been seen in Yellowstone National Park and in New Zealand’s Taupo Volcanic Zone—modern places where the little formations showed fossilized proof of having been created by microbes.

In the Atacama Desert’s case, the fossil evidence is less clear, and it is uncertain whether or not organisms are responsible for the silica cauliflowers. However, the case is compelling, and it points to the possibility that the ones found on Mars may very well have been created by living creatures.

But analogy is by no means proof.

“Having worked on modern hot springs, I have seen all forms of structures that look biological but are not,” Kurt Konhauser of the University of Alberta, who is the editor-in-chief of the journal Geobiology, told Smithsonian Magazine. “Because it looks biological doesn’t mean it is.”

We will have to wait a while to find out whether the silicate formations are life or luck, though. The next rover isn’t due to launch until 2020, and once it does, it will have to reach Mars, spend time collecting samples, and then travel all the way home before we can get our hands on real samples—that is, if NASA decides to spend time investigating the cauliflowers at all. Gusev Crater hasn’t been ruled out as a landing site, but it might not be selected at all.

—–

Feature Image: NASA/JPL-Caltech

Thanks to the Chinese, we now have the most hi-res images of the moon of all time

While you may never find yourself standing on the surface of the Moon, new true-color HD photos released by China’s space agency can essentially take you there right now.

The photos were captured by the China National Space Administration’s Chang’e-3 lunar lander and rover. Just as striking as the photos is the decision by the normally secretive space program to make them openly available for the general public.

To access and download the photos, you simply need to create a user account on China’s Science and Application Center for Moon and Deepspace Exploration website. According to reports, access outside of China can be fraught with outages and bottlenecks. Reports have said the photos are also available on the Planetary Society website.

This is the Yutu rover’s view of what the team came to call Long Yan (Pyramid Rock). Credit: Chinese Academy of Sciences / China National Space Administration / The Science and Application Center for Moon and Deepspace Exploration)

The pictures show the moon’s crust in true color and breathtaking detail. The tracks of the Jade Rabbit rover can be easily seen in some images.

On December 14, 2013, the Chinese lander and lunar rover touched down on the moon’s northern Mare Imbrium. The successful landing made China just the third country ever to soft-land a spacecraft on the moon, and the first to do so since 1976, when the Soviets landed their Luna 24 probe.

Once the 2,600-pound Chang’e lander arrived at the lunar surface, it released the 310-pound Yutu rover, whose name translates to “Jade Rabbit.” The Yutu rover was built with 6 wheels, a radar instrument, and spectrometers to evaluate the intensity of several wavelengths of light. Yutu’s geologic evaluation indicated the lunar exterior is less homogeneous than initially thought.

360-degree view from the lander. Credit: Chinese Academy of Sciences / China National Space Administration / The Science and Application Center for Moon and Deepspace Exploration)

Because Yutu was unable to properly protect itself from the cold lunar night, it encountered significant mobility issues in early 2014 and was left incapable of moving across the lunar surface. However, Yutu could still collect information, send and receive signals, and record pictures and video up until March of last year. Today, the rover is no longer functional.

China’s space agency has said its follow-up mission, Chang’e 4, is slated to launch as soon as 2018 and the mission is designed to land on the far side of the moon. Should this happen, China will become the first nation to land a craft on the so-called “dark side of the Moon.”

Credit: Chinese Academy of Sciences / China National Space Administration / The Science and Application Center for Moon and Deepspace Exploration

—–

Feature Image: Chinese Academy of Sciences / China National Space Administration / The Science and Application Center for Moon and Deepspace Exploration

How sharing technology helped early humans evolve, journey to Europe

The sharing of technology played a key role in helping our Stone Age ancestors living in Africa evolve, and can help explain how and why humans ultimately traveled to Europe, according to a team of researchers from the University of Bergen and the University of the Witwatersrand.

As they explained in a recent edition of the journal PLOS ONE, Christopher S. Henshilwood, an archaeologist and evolutionary studies professor, and his colleagues uncovered new evidence at Blombos Cave in South Africa indicating that the more contact different groups had with one another, the higher-quality the tools and technology they went on to develop.

Discovered near Cape Town, South Africa in the early 1990s, Blombos Cave has been a source of essential new information about the behavioral evolution of our species, the authors explained in a statement. The cave contains deposits from the Middle Stone Age dated at between 100,000 and 70,000 years ago, as well as Later Stone Age deposits between 700 and 2,000 years old.

Henshilwood’s team has been examining technology used by different groups in this and other parts of South Africa to see whether there was any kind of contact between various groups of Middle Stone Age humans and, if so, whether they exchanged ideas with one another and what kind of impact that had on each culture.

Contact enabled our ancestors to adapt, build better tools

“We are looking mainly at the part of South Africa where Blombos Cave is situated. We sought to find out how groups moved across the landscape and how they interacted,” Henshilwood said. He and his colleagues published a total of four scientific papers last year based on their work.

“The pattern we are seeing is that when demographics change, people interact more. For example, we have found similar patterns engraved on ostrich eggshells in different sites,” study co-author and University of Bergen researcher Dr. Karen van Niekerk added. “This shows that people were probably sharing symbolic material culture, at certain times but not at others.”

Researchers from UiB and Witwatersrand have found that contact between cultures has been vital to the survival and development of Homo sapiens. (Credit: Magnus M. Haaland)

The practice of sharing technology and material culture also reveals more about early humans’ journey from Africa, across the Arabian Peninsula and ultimately to Europe, the researchers said. Contact between different groups was essential to the survival and evolution of Homo sapiens as a species, as the more contact they had with one another, the stronger their tools, technology and culture ultimately became.

“Contact across groups, and population dynamics, makes it possible to adopt and adapt new technologies and culture and is what describes Homo sapiens,” said Henshilwood. “What we are seeing is the same pattern that shaped the people in Europe who created cave art many years later.”

—–

Feature Image: Thinkstock

Man stumbles upon ancient Egyptian seal while hiking in Israel

An Israeli man out for a hike with his family found a tiny white object at the Horns of Hattin in the Lower Galilee which turned out to be a 3,500-year-old Egyptian seal in the form of a scarab, the Antiquities Authority told the Jerusalem Post and the Times of Israel Thursday.

Amit Haklai, a farmer from Kfar Hittim, reportedly spotted the engraved, beetle-shaped object while on an excursion with his children at the site of the twin peaks. Upon noticing the shape of the item, and the fact that it bore engraved decorations, he contacted the Antiquities Authority.

After analyzing the artifact, Dr. Dafna Ben-Tor, curator of Ancient Egypt at the Israel Museum, identified the seal as a scarab amulet from the New Kingdom period, the era in Egyptian history from 16th century BCE to 11th century BCE that spanned the 18th through 20th dynasties.

Dr. Ben-Tor explained to the Post that the scarab depicts Thutmose III, the sixth Pharaoh of the 18th Dynasty, seated on his throne alongside his name. Thutmose ruled Egypt from 1479 BCE to 1425 BCE and is said to be one of the great warrior pharaohs, having waged multiple successful military campaigns and expanded his empire into the lands of Canaan and Nubia.

Artifact likely originated from a 13th century BCE fortress

The scarab, which was shaped like a dung beetle, would have had “cosmological significance in ancient Egypt,” the Post explained. They are frequently unearthed by archaeologists during their excavations, and were used as amulets, personal seals, or to commemorate royal achievements.

Dr. Miki Saban, director of national treasures at the Israel Antiquities Authority, emphasized that it was “extremely important” that anyone finding such artifacts contact the agency. Such objects “enrich the archaeological knowledge about the country” and should be put on display so that the general public can see them.

According to Arutz Sheva, the site where Haklai discovered the scarab, the Horns of Hattin, is a well-known historical site. It was the location of the Battle of Hattin, which occurred in 1187 and pitted the Crusader Kingdom of Jerusalem against the forces of Saladin, the first sultan of Egypt.

Some 2,000 years before that battle, it was home to a fortress that was destroyed during the 13th century BCE, and Antiquities Authority archaeologist Yardena Alexander said that, even though the seal was found on the surface and not as part of an archaeological dig, it likely came from the same era as that fallen fortress.

Haklai, who was presented with an award for his discovery, told reporters, “It is important that my children grow up with a strong connection to their land and to the antiquities of our country.”

—–

Feature Image: Israel Antiquities Authority

How Antarctic dinosaurs can solve Earth’s greatest mysteries

Paleontologists from across the world are following a hot lead to the coldest place on Earth—Antarctica—all in hopes of finding fossil clues that resolve some of prehistory’s greatest mysteries.

Millions of years ago, Antarctica was part of the supercontinent Gondwana, and was actually a warm, lush area that supported an incredible amount of life. Thanks to continental drift, Antarctica relocated to a more southerly location—meaning that a potential goldmine of flora and fauna fossils from when dinosaurs roamed the Earth are now buried under ice and snow.

The expedition, which is funded by the National Science Foundation, begins today, and will rely on an icebreaker and helicopters to reach otherwise inaccessible locations—namely, James Ross Island and other islands near it, some of the few locations in Antarctica where fossil-bearing rocks can be readily accessed.

How extinction may have affected polar ecosystems

While the general goal is to learn more about prehistory, the team—scientists from the Carnegie Museum of Natural History, University of Texas at Austin, Ohio University, the American Museum of Natural History, and other collaborators from across the U.S., Australia, and South Africa—has a few more specific interests in mind.

“Ninety-nine percent of Antarctica is covered with permanent ice,” said Matthew Lamanna, paleontologist and assistant curator at the Carnegie Museum, in a statement. “We’re looking for fossils of backboned animals that were living in Antarctica at the very end of the Age of Dinosaurs, so we can learn more about how the devastating extinction that happened right afterward might have affected polar ecosystems.”

Researchers disembark on Antarctica during an expedition in 2011. (Credit: AP3)

Moreover, they hope to uncover what role Antarctica played in the evolution of vertebrate animals (ones with a backbone, including birds and mammals), which is an enormous unsolved mystery. To accomplish this, they’re looking for fossils dating from 100 million to 40 million years ago, or from the Cretaceous to the Paleogene, which contains fossils of the transition from the Age of Dinosaurs to the Age of Mammals.

“What I hope to achieve this time is to discover the first evidence of mammals in the Cretaceous of Antarctica, species that lived at the end of the Age of Dinosaurs,” said Ross MacPhee, a curator and professor at the American Museum of Natural History. “If we can find them, they will have a lot to tell us about whether any evolutionary diversifications took place in Antarctica, and whether this was followed by species spreading from there to other portions of the ancient southern supercontinent Gondwana.”

A research camp buried in snow during the 2011 expedition to Antarctica. (Credit: AP3)

Also looking for evidence of asteroid impact

Besides searching out the origins of mammals and birds, some team members will be examining the rocks to help decipher what the environment was like in the transition from the Cretaceous to the Paleogene. Geologists hope some of the rocks date to roughly 66 million years ago—and therefore capture evidence of the asteroid impact that killed all non-avian dinosaurs.

Either way, the team is wandering into little-explored territory—meaning that discoveries could be ripe for the picking.

“It’s impossible not to be excited to reach remote sites via helicopter and icebreaker to look for dinosaurs and other life forms from over 66 million years ago,” said Julia Clarke, a professor and paleontologist at the UT Austin Jackson School of Geosciences. “The Earth has undergone remarkable changes, but through all of them, life and climate and geologic processes have been linked. A single new discovery from this time period in the high southern latitudes can change what we know in transformative ways.”

Researchers in Antarctica look over the horizon during the 2011 AP3 expedition. (Credit: AP3)

—–

Feature Image: Thinkstock

Why you should always ask the guy in the blue jacket for help

Do you want to understand more about your own thoughts and motivations? Do you wish you had a better understanding of what motivates other people and drives their decisions? Has psychology always fascinated you, but you’ve been missing a way to apply those lessons practically in your day-to-day life?
“The Science of Success” is redOrbit’s newest podcast, featuring entrepreneur and investor Matt Bodnar, who explores the mindset of success, the psychology of performance, and how to get the most out of your daily life.
With gripping examples, concrete explanations of psychological research, interviews with scientists and experts, and practical ways to apply these lessons in your own life, “The Science of Success” is a must-listen for anyone interested in growth, learning, personal development, and psychology.
This week’s episode: Why you should always ask the guy in the blue jacket for help

This week we are continuing our new miniseries within “The Science of Success” called “Weapons of Influence”. This is the third episode in a six-part series based on the best-selling book Influence by Robert Cialdini. If you loved the book, this will be a great refresher on the core concepts. And if you haven’t yet read it, some of this stuff is gonna blow your mind.

So what are the 6 weapons of influence?

  • Reciprocation
  • Consistency and Commitment
  • Social Proof (This week’s episode)
  • Liking
  • Authority
  • Scarcity

Each one of these weapons can be a powerful tool in your belt – and something to watch out for when others try to wield them against you. Alone, each of them can create crazy outcomes in our lives and in social situations, but together they can have huge impacts.
Today’s episode covers the third weapon of influence – Social Proof. In it, we’ll cover:

  • How social proof can override people’s will to live
  • Why news coverage makes mass shootings more likely
  • Why TV shows use canned laughter
  • How someone could be stabbed in front of 38 people without any help
  • How you should ask for help in a dangerous situation
For more episodes, check it out on iTunes: The Science of Success.
Also continue the conversation by following Matt on Twitter (@MattBodnar), visiting his website MattBodnar.com, or visiting ScienceOfSuccess.co.

King Henry VIII likely suffered from same brain injury as some NFL players, study finds

Although chronic traumatic encephalopathy (CTE) is a condition most closely associated with former professional football players, new research from a Yale University behavioral neurologist suggests that one prominent British monarch may have suffered from a similar ailment.

Many of the problems that plagued Henry VIII prior to his death in 1547, including his memory issues, his explosive temper, and his inability to control his impulses, could have been caused by traumatic brain injury, Arash Salardini of the Yale Memory Clinic and his colleagues explain in the upcoming June edition of the Journal of Clinical Neuroscience.

“It is intriguing to think that modern European history may have changed forever because of a blow to the head,” Salardini, senior author of the study, said in a statement Tuesday. Such brain trauma, rather than other ailments often associated with the monarch such as diabetes or syphilis, could also explain Henry’s headaches, insomnia, and impotence, his team noted.

Brain trauma could explain fits of rage, cognitive issues, impotence

Best known for his infamous split with the Catholic Church over his desire to annul his marriage to Catherine of Aragon and marry Anne Boleyn, as well as the fact that he ultimately wed a total of six women (two of whom he had executed), Henry VIII suffered a pair of head injuries during his thirties, Salardini and co-authors Muhammad Qaiser Ikram and Fazle Hakim said.

The research team analyzed many of the king’s letters and other historical documents to compile a picture of his medical history. They discovered that, during a jousting tournament in 1524, he was struck through the visor by a lance and left stunned. The following year, Henry was knocked unconscious after falling head-first into a brook he was trying to vault across using a pole.

However, the study authors believe that the bulk of his unpredictable behavior may be attributed to an accident that occurred during a January 1536 jousting match. During this event, a horse fell on the king, knocking him out for a period of two solid hours. According to Salardini, “historians agree his behavior changed” from this point on, as he became increasingly impulsive, erratic and forgetful – not to mention prone to fits of rage.

In addition, ailments such as metabolic syndrome and impotence could be explained by growth hormone deficiency and hypogonadism, both of which are known side effects of traumatic brain injuries, they said. Although Henry had a reputation as a womanizer, the Yale-led team noted, there are reports that he had difficulty performing sexually as far back as his second marriage, to Anne Boleyn in 1533.

—–

Feature Image: Detail of portrait of Henry VIII by the workshop of Hans Holbein the Younger. (Credit: Google Art Project)

Venus flytraps can count, are first plant species to be observed doing this

Carnivorous plants like the Venus flytrap obtain their nourishment by luring insects with a fruity scent and then using a hair-trigger to catch their prey, but scientists have long wondered just how they can tell when the time is right to spring their trap. As it turns out, they count.

As Sci-news.com and Science World Report reported recently, a team of researchers led by Dr. Rainer Hedrich, a biologist at Universität Würzburg in Germany, tried to fool Venus flytraps into thinking that they had attracted insects by applying mechano-electric stimuli to their traps.

They found that a single touch to the trap’s trigger hair caused the plant to set its trap to a “ready-to-go mode,” but did not cause it to spring. A second touch caused the trap to snap shut and engulf the prey, setting the digestive process in motion. As an insect attempted to escape, it would repeatedly touch the trigger hairs, causing the plant to become increasingly excited, according to the study authors.

At this point, the plant would then begin to produce a special touch hormone, and after receiving five triggers, glands on the trap’s inner surface would start to produce digestive enzymes, as well as special transporters to collect nutrients from its victim. In a statement, Dr. Hedrich called it a “deadly spiral of capture and disintegration.”

Number of stimulations triggers different digestive responses

Researchers already knew that when an insect visited a Venus flytrap’s snare and touched its trigger hair, it would cause the plant to fire what are known as action potentials, or APs. After two APs were triggered by a moving object, the trap would snap shut, capturing the prey.

In their new study, published in the journal Current Biology, Dr. Hedrich and his fellow researchers set out to discover just how many times the trigger hairs needed to be stimulated, and how many APs must be emitted, before the plant recognized that it had indeed trapped food.

“The number of action potentials informs [the plant] about the size and nutrient content of the struggling prey. This allows the Venus flytrap to balance the cost and benefit of hunting,” said Dr. Hedrich. His team found that two stimuli are needed to activate the touch hormone jasmonic acid-signaling pathway, while at least three are required to trigger the expression of genes which encode prey-degrading hydrolases.

If a Venus flytrap receives more than three stimulations, it ramps up the production of digestive enzymes to deal with what is likely larger prey, the researchers said. The process also involved a significant increase in the production of a substance that allows the plant to gather sodium from its victims. It is not clear at this point how the flytrap benefits from that salt, but it may have something to do with maintaining water balance in its cell walls, the authors said.
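
Taken together, the counting behavior described above boils down to a simple set of thresholds. The sketch below is purely illustrative and is not taken from the study itself; it simply restates the article’s numbers as code:

# Illustrative threshold model of the stimulus counting described above.
# The threshold values come from the article; the code is not from the study.
def flytrap_responses(touch_count):
    """Return the responses the article associates with a given number of trigger-hair touches."""
    responses = []
    if touch_count >= 1:
        responses.append("trap primed ('ready-to-go mode')")
    if touch_count >= 2:
        responses.append("trap snaps shut; jasmonic acid signaling pathway activated")
    if touch_count >= 3:
        responses.append("genes for prey-degrading hydrolases expressed")
    if touch_count >= 4:
        responses.append("digestive enzyme production ramped up for larger prey")
    if touch_count >= 5:
        responses.append("glands secrete digestive enzymes and nutrient transporters")
    return responses

# A struggling insect brushing the trigger hairs six times:
print(flytrap_responses(6))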

Dr. Hedrich’s team is now in the process of sequencing the genome of the Venus flytrap, and is hopeful that it will gain new insights into the chemistry and sensory system of the carnivorous plant through this research. Potentially, decoding the species’ DNA could provide clues into how the mechanisms it uses to digest its prey developed and evolved over time.

—–

Feature Image: Thinkstock

Landmark deal will protect majority of Canadian rainforest

After roughly two decades of protests and negotiations, foresters, environmental organizations, aboriginals, and government officials have reached a landmark deal that will ban trophy hunting and logging in more than four-fifths of the rainforests along the Pacific coast of Canada.

According to AFP and Associated Press reports, the deal announced on Monday will protect the Great Bear Rainforest in British Columbia from rampant deforestation and prohibit hunters from pursuing game in the part of the country that is home to the rare white Kermode bear.

The agreement, which was announced by British Columbia Premier Christy Clark on Monday, will protect 85 percent of the 16-million-acre (6.4-million-hectare) rainforest – the world’s largest intact temperate rainforest, according to the AP.

The remaining 15 percent of the land, which runs from the Discovery Islands north to Alaska, will be subject to some of the strictest commercial logging standards in North America, the wire services added. The deal also puts an end to the region’s commercial grizzly bear hunt and calls for the establishment of protected habitats for the marbled murrelet, the northern goshawk, and the mountain goat.

‘A unique solution for a unique area’

When negotiations began approximately 20 years ago, 95 percent of the now-protected forest land was open to logging, environmentalist Richard Brooks told the AP. Now, though, industry leaders, conservationists, the government and the 26 aboriginal tribes that live in the area have taken steps to ensure that the bulk of the forest will be protected from logging activity.

Coast Forest Products Association chief executive officer Rick Jeffery explained that the deal involved a complicated series of discussions between groups with differing points of view, but that in the end, all parties were able to reach a compromise. He called it “unprecedented in the history of our province,” adding that the agreement was “a unique solution for a unique area.”

The AFP said that the compromise “applies a novel approach to conservation that recognizes the full array of interactions within an ecosystem, including humans,” and Marilyn Slett, chief of the Coastal First Nations, said that the tribes envisioned a future in which “ecosystems and potential developments in the Great Bear Rainforest are in balance.”

—–

Feature Image: Thinkstock

5 reasons why you may want to avoid fluoride

My husband and I have a continuing battle that rages every morning and night. What are the grounds for fighting, you ask? It’s over toothpaste. I am firmly on the side of no fluoride, or even traces of fluoride, in my tube, while my husband throws caution to the wind and goes for toxic matter swimming in a pool of minty goodness.
Have you ever read the back of a toothpaste tube? The verbiage is downright terrifying: “Keep out of reach of children under 6 years of age. If more than used for brushing is accidentally swallowed, get medical help or contact a Poison Control Center right away.” This warning is enough to keep me away; however, many keep on brushing. To fully help husbands everywhere understand the issue, here are the top five things you may not know about fluoride.
1. No confirmation of safety
While fluoride may help keep cavities at bay, the buildup of fluoride deposits in the body has effects on many other tissues. In 2006, leading toxicologist Dr. John Doull addressed the National Academy of Sciences (NAS) with a 500-page review of fluoride’s toxic properties. In response, the NAS recommended further investigation into fluoride’s effect on the skeletal, endocrine, and nervous systems. These basic studies have yet to be conducted, even after 60 years of adding fluoride to public water systems and decades of fluoridated dental products on the market.
2. Fluoride may be stealing your IQ
In a 2012 Chinese study, researchers compared the IQ scores of children living in areas that supplemented their water supply with fluoride with the scores of children in areas that did not. The results showed that children who ingested fluoride regularly scored roughly seven IQ points lower than the other children.
3. US not following fluoridation trends
The United States is one of only eight developed, civilized countries that supplement their water supply with fluoride. Five European nations add fluoridated salt to their supply; however, this represents a different chemical compound. China goes so far as to label fluoride a toxic substance.
4. Serving size of fluoride is a huge miss
Back to the whole “read the label on the back of the toothpaste tube” thing. The directions first read “Do not swallow,” then tell you to use a “pea-sized” amount. Wait, a pea-sized amount? Have you seen a toothpaste commercial in the past 20 years? There is clearly more than a pea-sized dab on the brushes of the pretty people modeling the toothpaste. There is also much more on mine at home. To relate serving size to our water supply: a pea-sized amount of toothpaste contains approximately a quarter of a milligram of fluoride. According to the FDA, that quarter of a milligram represents the maximum amount of fluoride that should be ingested at one time. The same quarter of a milligram is present in an eight-ounce glass of fluoridated water. So for those of you who are getting your eight glasses of water per day… SOL.
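To put those numbers together, here is a quick back-of-the-envelope illustration using the figures quoted above (they are the article’s own numbers, not official intake guidance):

# Back-of-the-envelope arithmetic using the figures quoted above.
# These values come from the article itself, not from FDA or dietary guidance.
mg_fluoride_per_glass = 0.25   # quoted fluoride content of one eight-ounce glass of water
glasses_per_day = 8            # the common "eight glasses a day" habit

daily_total_mg = mg_fluoride_per_glass * glasses_per_day
print(f"Fluoride from drinking water alone: {daily_total_mg} mg per day")  # prints 2.0
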
5. Fluoride is not defined as a “nutrient”
This may not seem like a big distinction; however, it brings to light a subtle point. Since fluoride is not an essential nutrient, it must be deemed either a medication or a supplement. This means local governments are choosing to medicate the entire population outside of normal prescription-based practices. Specifically, the amount of fluoride in our water supply is not appropriate for all age brackets. A two-year-old child consuming multiple glasses of water could be overdosing.
Mari-Chris Savage is a licensed and registered dietitian with years of experience in nutritional consultation and corporate wellness. As a nutrition specialist, Mari-Chris has consulted individuals on improving their current health status and focusing on preventative methods for a lifetime. Mari-Chris also holds a certification in personal training to promote physical activity guidance and motivation. Her background spurs an extremely active lifestyle filled with running, hiking, barre, and yoga classes. For more from Mari-Chris, check out her website The Savage Standard
—–
Feature Image: Thinkstock

Doctors have successfully separated conjoined twins at the youngest age ever

Surgeons in Switzerland have announced the successful separation of conjoined twins at the youngest-ever age: 8 days old.

Weighing just 2.4 pounds each, the female twins were born joined together at the liver and chest. The sisters, Maya and Lydia, were born two months premature along with their non-joined triplet sister Kamilla in December.

It required five surgeons, helped by two nurses and six anesthesiologists, to perform the five-hour separation on the small identical twins at a hospital in Bern, Switzerland.

The twins were born in stable condition, and doctors at Inselspital Hospital decided to let them settle after the birth and separate them after a few months. However, after a week their condition deteriorated drastically: one was struggling with high blood pressure (hypertension), while the other was affected by the opposite condition, low blood pressure (hypotension).

Both conditions were life-threatening and the doctors decided their only chance was attempting a surgery never before conducted on babies so young. The chance of success for such an operation is typically pegged at 1 percent.

A surgery for the ages

Barbara Wildhaber, the pediatric surgeon who headed the team that carried out the surgery, told the media that properly separating the liver was the most challenging part of the procedure.

“We were prepared for the death of both babies, it was so extreme,” she said. “It was magnificent. I will remember it my entire career.”

The twins are now reportedly doing well and have begun breastfeeding.

Steffen Berger, head of pediatric surgery at Inselspital Hospital, told BBC News the medical staff’s performance was key to the operation’s success.

“The perfect teamwork of physicians and nursing personnel from various disciplines were the key to success here. We are very happy that the children and parents are faring so well now,” Berger said.

The pair is among the approximately 200 separated conjoined twins currently living in the world.

Also referred to as Siamese twins, conjoined siblings occur in roughly 1 in 200,000 live births. By definition, they are born with their skin and internal organs fused together. Around half are stillborn, and the rate of survival is between 5 and 25 percent.

Conjoined twins develop from a single fertilized egg, which splits completely in the case of standard identical twins but only partially in the case of conjoined siblings.

—–

Feature Image: Thinkstock

Scientists unable to drill to Earth’s mantle beneath Indian Ocean

It might sound like a Jules Verne novel come to life, but a team of scientists is currently working on the very serious endeavor of drilling down toward the center of the Earth.

Rather than searching for prehistoric humans or herds of mastodons, the Joint Oceanographic Institutions for Deep Earth Sampling (JOIDES) expedition 360 set out last month to drill deep below the floor of the Indian Ocean to try to reach the Earth’s mantle – a feat we have been trying to accomplish since the 1960s.

The mission set out to take “core samples and measurements from under the ocean floor, giving scientists a glimpse into Earth’s development and also a scientific means of measuring climate and environmental change throughout a significant part of our planet’s history,” the JOIDES Facebook page said.

More specifically, the crew planned to drill 1,500 meters (about 4,900 feet) into the Atlantis Bank gabbroic massif, where they suspect the mantle rises above the boundary where the crust and the mantle normally meet, also known as the Moho. The team drilled to extract gabbros, or rocks that develop when slow-cooling magma is trapped under the Earth’s surface, and to sample the crust-mantle transition in order to “understand the processes that produce mid-ocean ridge basalt,” among other things. The mission was expected to yield more information on magma, the Earth’s mantle, melt, and crust.

Credit: Thinkstock

But…

Unfortunately, the team wasn’t able to reach the mantle with its drill.

“We may not have made it to our goal of 1300 m, but we did drill the deepest ever single-leg hole into hard rock (789 meters), which is currently the 5th deepest ever drilled into the hard ocean crust,” said a post on the project’s official blog. “We also obtained both the longest (2.85 meters) and widest (18 centimeters) single pieces of hard rock ever recovered by the International Ocean Discovery Program and its predecessors!”

A second mission is currently being planned and experts have projected that humanity will reach the mantle within the next five years.

“Our hopes are high to return to this site in the not too distant future,” the blog post concluded.

—–

Feature Image: Thinkstock

The 10 BEST ways to make a cup of coffee

“Making coffee” for most people involves eyeballing a couple of heaping spoonfuls of pre-ground coffee from a can into a plastic basket lined with a paper or mesh filter while half asleep, dumping some water into a reservoir, and pressing a button. And while that’s a perfectly viable brewing method—especially if you drink a lot of coffee and don’t want to spend much time or money per cup preparing it—it’s certainly not the only one. In fact, there were many different ways to make coffee prior to the rise of the automatic drip maker in the 1970s, and adventurous coffee connoisseurs continue to invent new ways to make a cup of Joe to this day.

We’ll be detailing ten of these brewing methods in just a moment, but first, some ground rules…

Avoid the pre-ground stuff that you find at the supermarket in three-pound cans for under $10. All of these brewing methods require more effort than your Mr. Coffee machine does, so you may as well treat yourself to some whole beans. A good place to start is your favorite coffee shop. You’ll usually pay $10-15 for a pound—decidedly more expensive than $10 for three pounds, so whether or not you make this a habit is up to you. Here’s a handy guide to the flavor profiles of coffees from different growing regions, courtesy of Serious Eats.

To get the freshest coffee possible, you’ll want to grind the coffee you need right before you use it. Burr grinders are best, as they crush the beans into fairly evenly-sized grounds. Electric ones are pretty expensive, but manual, hand-cranked grinders are much more affordable and aren’t too much of a workout (I like Hario’s offering, which can be had for about $30).

And avoid blade grinders—they chop the beans, and you’ll end up with a fine powder, large chunks, and everything in between. If you can imagine coffee that’s simultaneously weak and strong, that’s what you’re most likely to get with one of these. In fact, for better results than a blade grinder without having to buy a burr grinder, just ask the shop to grind the beans for you. Yeah, I know, I already said you should grind them yourself, but if you keep the grounds in a sealed container and use them within a week or so, they’ll be just fine.

Alright, let’s get to it!

1. Cowboy Coffee

Credit: http://gearpatrol.com/2013/02/22/camp-caffeine-lessons-outdoor-brewing/

You can probably guess how this one works. If you can’t, here’s a hint: It involves a pot of boiling water and some coffee.

Of course, the specifics vary depending on personal preference. Sure, you could throw some coffee and water into a pot and bring it to a boil, but you would end up with bitter coffee. Coffee is best brewed at anywhere from 195 degrees to 205 degrees Fahrenheit, depending on who you ask. Water boils at 212 degrees, and while that doesn’t sound like a huge difference, it’s enough to overextract the coffee, bringing out some undesirable flavors.

So instead, bring your pot of water to a boil, remove it from your heat source, and let it sit for 15 to 30 seconds. This will allow the water to cool down enough that it won’t overextract your coffee grounds. Add your grounds (fine grind—here’s a cool visual guide to grind sizes), stir, and let the brew steep for three to four minutes, stirring a couple more times throughout the process. Afterwards, sprinkle some cold water on the grounds that will be floating on the top—according to those in the know, this causes them to settle to the bottom so less of them get into your cup.
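
If your kettle or thermometer reads in Celsius, here’s a quick conversion of the numbers above; it’s a trivial, illustrative snippet rather than part of the recipe:

# Convert the Fahrenheit figures mentioned above to Celsius for reference.
def fahrenheit_to_celsius(temp_f):
    return (temp_f - 32) * 5.0 / 9.0

for label, temp_f in [("brewing range, low end", 195),
                      ("brewing range, high end", 205),
                      ("boiling point", 212)]:
    print(f"{label}: {temp_f} F is about {fahrenheit_to_celsius(temp_f):.1f} C")
# The ideal window works out to roughly 90-96 C, which is why boiled water
# needs 15 to 30 seconds off the heat before it hits the grounds.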

2. Mud Coffee

Credit: http://theruniverse.com/2015/07/09/mud-runs-making-many-people-sick/

This is basically the same concept as cowboy coffee, only instead of making it in a pot to avoid getting grounds in your cup, you just pour not-quite boiling water into a cup full of grounds. Again, you’ll use finely ground beans (two tablespoons per six ounces of water is a good rule of thumb for most brewing methods, by the way), and you’ll let it sit for about three or four minutes, stirring occasionally but ultimately allowing them to settle to the bottom. You won’t want to down this whole cup—once you reach the thick, gritty stuff (the “mud”) you can stop drinking it.

This is a simple, traditional method of brewing coffee, and like cowboy coffee, it’s really not very precise. You’ll get good coffee, but you won’t get good coffee all the time—since you can’t limit the extraction time, some bitterness isn’t uncommon. Still, nostalgia is cool, so give these first two methods a whirl if you’re feeling up for it.

3. Turkish Coffee

Credit: Wikipedia

In Turkey, it’s Turkish coffee. In Armenia, it’s Armenian coffee. In Bosnia, it’s Bosnian coffee. In Greece… you get the idea. It’s all basically prepared the same way in that region of the world—the “Near East”, where the Middle East reaches up into Mediterranean Europe—but in the west, the name “Turkish coffee” is the one that stuck. Actually originating in Yemen (I know, it’s confusing), this one is kind of a cross between the above two methods, but the process is a bit more refined. First of all, it requires really powdery grounds, so you’ll either want to pulverize the beans with a mortar and pestle, or grind them at the store (most pro grinders have a “Turkish” setting). If you don’t want to fool with any of that, you can still get good results with the finest setting possible on your grinder.

If you want to be legit, you’ll need a cezve, a small copper pot with a long, wooden handle, but until you know whether you’re into Turkish coffee or not, you can try this method out with the same equipment you’d use for cowboy coffee.

This method includes sugar, to make the intense flavor of this coffee more palatable. Combining sugar, water, and your powdery grounds, you’ll heat the concoction over medium heat until the coffee starts to foam (keep in mind that you’re not trying to boil it, however). Before it overflows, you remove it from the heat, stir, and repeat the whole process a couple more times. Then you serve it—foam, grounds, and all. You can read about all the specific measurements here.

This method yields a really unique, intense flavor. You also have to avoid the grounds at the bottom as you do with the second method in this list (I learned that the hard way the first time I ordered Turkish coffee in a coffee shop).

One important note—unless you really like pooping a lot, you do not need to drink a lot of Turkish coffee. A few ounces will do.

4. French Press

Credit: Thinkstock

Having nothing to do with France, what’s known in America as the French press was patented by an Italian named Attilio Calimani in the late 1920s, back when the word “French” was attached to everything from fried potatoes to fried toast for reasons that had little to do with anything actually being French.

This is probably the most popular—and therefore the easiest to acquire—of the “not automatic drip machine” brewing methods, likely because it’s easy to use and because French presses come in large sizes, making them ideal for daily use.

For French press coffee, you’ll use a coarse grind, since the metal screen you’ll be pressing would let finer grounds slip through. Drop the coffee into the pitcher, bring water to a boil, let it cool, and pour it in. Again, two tablespoons of grounds per six ounces of water is standard for most brewing methods.

Give it a quick stir—just once is necessary, to make sure that all of the grounds are in contact with water. You’ll let the coffee sit for anywhere from three to six minutes. While you wait, put the plunger in and the lid on, but don’t press yet. Once the time is up, press down slowly, and the coffee is ready to drink! (I mean, pour it into a cup or other standard drinking vessel first, unless you’re just really tired.)

The brewing time varies depending on the kind of coffee beans you’re using and your own personal taste, though a good starting point is 4 minutes.

French press coffee is a bit thicker in texture than automatic drip coffee, with a little bit of grittiness. However, since the coffee doesn’t pass through a paper filter, more of the coffee’s oils will make it into your cup—you’ll actually be able to see them floating on top. This makes for a bolder flavor than filter coffee. Usually, this is good, though bright, acidic coffees can sometimes taste sour when brewed with this method.

5. Pour-Over Coffee

Credit: http://azzurrodue.com/take-a-coffee-break-in-new-york/

This works exactly the same way as an automatic drip machine: Hot water is poured over grounds, and gravity pulls that water through the grounds and a paper filter into a server. The advantage of the manual pour-over brewing method is control—over the temperature of the water, and over the brewing time. Heck, you can even control the coffee to water ratio down to 1/10 of a gram (notice the scale in the photo?). Plus, hot water never makes contact with anything plastic. So if you’re worried about BPA, this method is for you—all you need is a drip cone (usually ceramic, sometimes metal or glass), a kettle with a long, skinny spout, and something for the coffee to go into.

I use this brewing method any time I just want a cup or two for myself (so basically every day), using the Hario V60 drip cone, a Hario kettle, and, unless I’m just setting the drip cone on top of a coffee mug for one cup, a Hario serving vessel that holds 24 ounces of coffee.

For this one, you’ll be using a medium-fine grind. Two tablespoons of coffee per six ounces of water, as usual, and water that’s been boiled and allowed to cool for 30 seconds.

There are all kinds of ways to do this, so I’ll just tell you how I do it. If you want to try something different, Google will give you more information on the subject than you could ever want.

Slowly pour some water over the grounds in a spiral motion from the inside, just to wet the grounds. You’ll see some bubbling—that’s the coffee releasing CO2. Let it do that for 30 seconds before you continue brewing, to make sure that bubbles aren’t getting in the way of the brewing process. I’ve skipped this step before just to see if it makes a difference in flavor, and I can’t tell the difference, but I still allow the coffee to “de-gas” as it makes me feel like I know what I’m doing.

After that’s done with, pour with that same motion until there’s about a half-inch of water over the grounds. As it sinks, add more, maintaining that half-inch. Do that until you’ve poured the correct amount of water over the grounds.

This coffee tastes really clean. Since it’s passing through a really fine paper filter, you don’t get anything resembling sludge.

6. AeroPress

This coffee maker was invented in 2005 by Aerobie, a company that makes Frisbees, of all things. Functionally, it’s a single-serve brewer that works kind of like a French press and kind of like a drip brewer.

Basically, you put hot water and finely ground coffee into the AeroPress, stir, then press the liquid through a paper filter and into a cup. Then you add some more hot water to the cup, and it’s ready to drink. This all takes about one minute.

AeroPress is fast and costs about $30, making it accessible to anyone who wants to try something different but not spend a bunch of money or have to go through a big learning curve.

I’ve never used an AeroPress or tried coffee made with one, but it’s described as being really smooth and rich—makes sense, since it’s prepared almost like espresso (quickly, and with fine grounds).

7. Vacuum Press

Fundamentally, these things work by steeping coffee in nearly boiling water for a few minutes, then allowing it to pass through a filter into a serving vessel. Based on that information, you can probably see the similarities between this brewing method and others listed here. It’s almost like an AeroPress, only not as quick.

Of course, that’s far from all there is to it. The main appeal of the vacuum brewer is the visual presentation of the brewing process itself.

As the water in the bottom reservoir is heated, the pressure of the steam forces the water up through a filter and into the top portion of the brewer. The water cools as it travels upwards, so even though it’s boiling in the bottom, it’s at an ideal brewing temperature by the time it reaches the top. After it’s brewed for a few minutes, you remove the heat, and as the water in the bottom cools, a vacuum is created, sucking the coffee down through the filter.

This isn’t practical, as it has a lot of parts. What’s more, those parts look like they’re pretty tricky to clean.

Aside from that, the way the coffee is actually brewed isn’t necessarily unique, so it’s not like there’s an end result that you can only get with a vacuum press.

But man… it sure is awesome to watch!

8. Cold Press Coffee

Credit: Thinkstock

Cold brewed coffee is brewed at room temperature, relying on time rather than temperature for coffee extraction. This yields coffee with very low acidity.

Fortunately, you don’t need any fancy equipment for the cold press brewing method—you just need a French press brewer, a little more coffee, and a lot more time. While the typical rule of two tablespoons of coffee to six ounces of water is based on a coffee to water ratio of 1:10, cold brewed coffee uses a ratio of 1:7.

Aside from that, it works just like the hot French press method—dump the water (room temperature, of course) into the pitcher with your coarsely ground coffee, and attach the plunger and lid.

Here’s where the process changes: Rather than three to six minutes, you’re going to let the coffee steep for 12 hours. So if you want cold brewed coffee in the morning, you’re going to have to do all of this the night before.

Cold brewed coffee is served diluted with milk or cold water for delicious iced coffee, or hot water if you want a warm cup of bold and low-acidity Joe.
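
For reference, here’s how the 1:10 and 1:7 ratios mentioned above translate into actual doses. This is a small illustrative calculator, assuming the ratios are by weight and that a milliliter of water weighs roughly a gram:

# Illustrative dose calculator for the coffee-to-water ratios discussed above.
# Assumes the ratios are by weight and that 1 ml of water weighs about 1 g.
def coffee_grams(water_ml, parts_water_per_part_coffee):
    return water_ml / parts_water_per_part_coffee

water_ml = 350  # roughly 12 ounces of water
print(f"Hot brew (1:10): {coffee_grams(water_ml, 10):.0f} g of ground coffee")
print(f"Cold brew (1:7): {coffee_grams(water_ml, 7):.0f} g of ground coffee")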

9. Cold Drip Coffee

Credit: Bruer

This one requires a specialized contraption—a device that drips water over coffee grounds, but does so really slowly. Other than that, nothing new here, really. This brewing method uses the same coffee to water ratio as the cold press method, and yields the same flavor, though the resulting brew won’t have any grittiness since it passes through a filter.

10. Percolator

Percolators reigned supreme before automatic drip machines came around in the ’70s, and they’re still popular with some who want their coffee made automatically, but without any plastic parts, due to BPA concerns. As water is boiled at the very bottom of the percolator, it travels up a skinny metal tube, hits the top of the machine, and falls onto the grounds, which sit in what can best be described as a metal can with small holes in the top and bottom. It’ll do this until the coffee inside reaches a particular overall temperature, at which point the percolator will shut off (the electric ones, anyway).

This method brews coffee with boiling water, so it’s not the best for flavor. However, percolators work great for folks who need to make a lot of coffee without spending much time doing it, and who don’t want to drink hot liquid that’s touched plastic.

—–

Feature Image: Thinkstock

NASA releases stunning new images from Ceres, Pluto, and Mars

Last week was a banner week for those of us who love looking at the amazing images captured by the various NASA spacecraft, as they managed to capture three astonishing new images (plus an awesome bonus one we’ve included at the end!) from the likes of Mars, Pluto and Ceres over a two-day span on Thursday and Friday.

Up first is everybody’s favorite dwarf planet, Pluto, as the US space agency released the first pictures of its atmosphere in infrared wavelengths (above). The images, which were produced using data from the Ralph/Linear Etalon Imaging Spectral Array (LEISA) on the New Horizons spacecraft, were captured on July 14 at a distance of approximately 112,000 miles (180,000 km).

According to NASA, the image covers LEISA’s entire spectral range of 1.25 to 2.5 microns, divided into thirds: the shortest wavelengths were placed in the blue channel, the middle third in the green channel, and the longest in the red channel. The blue ring that appears around Pluto is the result of sunlight being scattered by atmospheric haze particles, NASA scientists explained.

This haze, they added, is believed to be a type of “photochemical smog” caused by the action of sunlight on methane and other molecules in the dwarf planet’s atmosphere. As a result, a mixture of hydrocarbons is produced, and those elements accumulate into extremely small particles that scatter the sunlight and produce the blue-hued haze.

Take a virtual fly-over of Ceres in new NASA video

On Friday, NASA released a new animation that allows viewers to embark upon a simulated flight over the dwarf planet Ceres. The video, which shows Ceres in enhanced color, was created using images obtained by the Dawn spacecraft and highlights the different materials found on the surface of the largest object in the solar system’s asteroid belt.

The simulated fly-by, which was compiled from images taken last fall at an altitude of about 900 miles (1,450 km), emphasizes the most prominent craters on Ceres, including Occator, as well as the tall, cone-shaped mountain known as Ahuna Mons. Also visible are blue-shaded areas that NASA believes contain younger material, including cracks, pits, and flows, the agency noted.

“The simulated overflight shows the wide range of crater shapes that we have encountered on Ceres. The viewer can observe the sheer walls of the crater Occator, and also Dantu and Yalode, where the craters are a lot flatter,” said Ralf Jaumann, a Dawn mission scientist at the German Aerospace Center (DLR), whose framing camera team was responsible for producing the movie.

Mars rover stops to snap a selfie while studying sand dunes

Also on Friday, NASA released the latest selfie taken by Curiosity, as the Mars rover took a short break from scooping and studying sand on the Red Planet to give officials at the agency (not to mention space-science enthusiasts everywhere) a look at how it’s holding up.

This Jan. 19, 2016, self-portrait of NASA’s Curiosity Mars rover shows the vehicle at “Namib Dune,” where the rover’s activities included scuffing into the dune with a wheel and scooping samples of sand for laboratory analysis. (Credits: NASA/JPL-Caltech/MSSS)

However, as the Huffington Post pointed out, the new image isn’t like the point-and-click kind of selfie that people usually post to their social media accounts. Instead, it is a composite crafted out of 57 individual photographs the rover took of itself next to Namib Dune back on January 19.

Curiosity used the Mars Hand Lens Imager (MAHLI) camera at the end of its arm to snap each of the pictures used to create the otherworldly selfie, although the nature of the composite means that only part of the arm itself can be seen. It is at least the third selfie beamed back to Earth by the rover since it landed on the Red Planet in August 2012, according to the website.

When it’s not snapping photos of itself, Curiosity is in the process of gathering and analyzing the sand at a group of active dunes located in the Bagnold Dune Field site along northwestern Mount Sharp. It used its scoop to collect three sand samples in January, but during the processing of the third one, an actuator did not perform as expected. NASA officials are attempting to find and fix the problem.

BONUS: Martian sand, up close and personal

The Mars Hand Lens Imager (MAHLI) camera on the robotic arm of NASA’s Curiosity Mars rover used electric lights at night on Jan. 22, 2016, to illuminate this postage-stamp-size view of Martian sand grains dumped on the ground after sorting with a sieve.
(Credits: NASA/JPL-Caltech/MSSS)

—–

Feature Image: NASA/JHUAPL/SwRI

Position of the moon can influence rainfall amounts, study finds

While the moon’s influence on the timing and amplitude of ocean tides is well known, a team of researchers from the University of Washington has discovered that lunar position also creates slight, nearly imperceptible changes in precipitation amounts here on Earth.

As corresponding author Tsubasa Kohyama, a doctoral student in atmospheric sciences, and co-author/UW professor John M. Wallace reported in Saturday’s edition of the journal Geophysical Research Letters, when the moon is high in the sky, it produces forces that create bulges in the atmosphere, affecting the amount of rain received by the planet below.

“As far as I know, this is the first study to convincingly connect the tidal force of the moon with rainfall,” said Kohyama, who was studying atmospheric waves when he first discovered a minor oscillation in air pressure correlating to the phases of the moon. After two years of work, he and Wallace concluded “when the moon is overhead or underfoot, the air pressure is higher.”

High moon increases atmospheric pressure, lowers relative humidity

In instances where the moon is overhead, its gravity causes the Earth’s atmosphere to bulge in its direction, resulting in an increase in atmospheric weight, or pressure, on that side of the planet. Higher pressure increases the temperature of the air parcels below, the authors explained, which enables this now-warmer air to hold more moisture.

“It’s like the container becomes larger at higher pressure,” Kohyama explained. Rainfall amounts are affected by relative humidity because “lower humidity is less favorable for precipitation,” the UW student added. This means that when the moon is high, less precipitation will fall.

However, those changes are relatively minute, with rainfall variation of only about one percent. As Kohyama said, “No one should carry an umbrella just because the moon is rising.” Rather, he and Wallace suggest that their research could be used to help test climate models, making certain that their physics can adequately simulate how the moon’s pull can result in less rain.

Their conclusions came following an extensive review of 15 years’ worth of data collected between 1998 and 2012 by the Tropical Rainfall Measuring Mission satellite, a joint project of NASA and the Japan Aerospace Exploration Agency. Wallace said that he plans to continue looking at the data, searching for possible connections between the moon and other types of precipitation, such as downpours, and to see if lunar forces have any influence on rainstorm frequency.

—–

Feature Image: Thinkstock

UK scientists receive approval to modify human embryos

Less than one year after Chinese researchers revealed that they had genetically modified human embryos, scientists at the Francis Crick Institute in London have gotten the go-ahead to conduct similar experiments, officials at the medical research facility revealed on Monday.

According to Reuters and BBC News, Dr. Kathy Niakan, a stem cell researcher at the Institute, said she and her colleagues had been granted a license to conduct their experiments by the Human Fertilization and Embryology Authority (HFEA), the division of the UK’s Department of Health that regulates fertility clinics and research involving embryos.

Dr. Niakan’s laboratory told reporters that the research would attempt to shed new light on the first moments of human life, and that the researchers would be banned from implanting modified embryos into a woman. Nonetheless, the research is causing concern among some that it could eventually lead to genetically engineered “designer” babies.

David King, director of the UK watchdog Human Genetics Alert, told Reuters that the move was “the first step on a path… toward the legalization of GM babies,” while Dr. Sarah Chan from the University of Edinburgh told BBC News that although such research “touches on some sensitive issues… its ethical implications [had] been carefully considered by the HFEA.”

Experiments will involve recently fertilized eggs

Reports indicate that the experiments will use CRISPR-Cas9, a technology that can identify and correct genetic defects. The work is to be conducted during the first seven days following fertilization, a period in which the fertilized egg develops from a single cell into a blastocyst containing 200 to 300 cells.

Once a fertilized egg reaches the blastocyst stage, some of its cells have been organized to play specific roles – forming the placenta or the yolk sac, for example. Some parts of our DNA tend to be very active at this stage in human development, the researchers said, and it is believed that some of these genes play a key role in guiding our early growth.

How and why these processes take place remains a mystery, and experts are uncertain what is going on, and what might go wrong genetically, prior to a miscarriage. Dr. Niakan, who has been researching human development for more than a decade, will look to modify these genes in her experiments, which will use only donated embryos, according to BBC News.

The HFEA said that the experiments could start within the next several months, and Paul Nurse, director of the Crick Institute, said that he was “delighted” that the application had been approved, and that Dr Niakan’s research would be “important for understanding how a healthy human embryo develops and will enhance our understanding of IVF success rates.”

“This project, by increasing our understanding of how the early human embryo develops and grows, will add to the basic scientific knowledge needed for devising strategies to assist infertile couples and reduce the anguish of miscarriage,” said Bruce Whitelaw, an animal biotechnology professor at Edinburgh University’s Roslin Institute. He added that the approval was granted “after robust assessment” of the proposed experiments.

—–

Feature Image: Thinkstock

Europe’s summer weather is the hottest seen in two millennia

It looks like investing in European sunscreen might be a smart business move, as a new study has found that the continent’s summer temperatures over the last three decades have been the highest ever seen in the past 2,100 years.

The study, which can be found in Environmental Research Letters, drew this conclusion after examining historical evidence along with evidence found in the rings of trees. All of the trees had been alive for at least 700 years, and dated to anywhere between 500 BCE and modern times. They also spanned the continent; researchers collected data from Spain, France, Switzerland, Austria, Romania, Finland, and Sweden.

The international team of 45 scientists from 13 countries discovered that, over the course of the last two millennia or so, temperatures have varied widely. From Roman times until the third century CE, summers were generally warm. For three hundred years after that, things were generally cooler—followed by a warmer medieval period. Then the Little Ice Age followed, bringing temperatures down from the 14th to the 19th centuries.

Modern increases in temperature

The most recent period, though, has been marked by continuous and unusual temperature increases that stand in sharp contrast to past climate shifts. The past 30 years in particular have shown a significant increase in temperature over the decades before, totaling 2.34 degrees Fahrenheit (1.3 degrees Celsius)—a rise that not only lies well outside the natural temperature variation but has also been accompanied by more heat waves than before.

“Our primary findings indicate that the 1st and 10th centuries CE could have experienced European mean summer temperatures slightly but not statistically significantly (5% level) warmer than those of the 20th century,” wrote the authors in the paper. “However, summer temperatures during the last 30 years (1986–2015) have been anomalously high and we find no evidence of any period in the last 2000 years being as warm.”

Moreover, their evidence suggests that temperature fluctuations between warm and cool periods over the past two millennia were bigger than we thought, meaning that the models we currently use to understand climate across the centuries may be underestimating temperature changes throughout history. In particular, the models may predict fewer heat waves than what we’ve seen. But this knowledge may be the key to solving this issue.

“We now have a detailed picture of how summer temperatures have changed over Europe for more than two thousand years,” said the coordinator of the study, Professor Jürg Luterbacher from the University of Giessen in Germany, in a statement, “and we can use that to test the climate models that are used to predict the impacts of future global warming.”

—–

Image credit: Thinkstock

ESA launches the first node of its space ‘data superhighway’

The European Space Agency (ESA)  launched the first part of a new space-based “data superhighway” into orbit Friday night, as the first node of a network designed to improve natural disaster monitoring capabilities lifted off from the Baikonur Cosmodrome in Kazakhstan.

The node, which is part of the European Data Relay Satellite (EDRS) program and is known as EDRS-A, is a telecommunications satellite that lifted off on board a Proton rocket at 2220 GMT, or 4:20 am local time, Reuters and BBC News reported shortly after the successful launch.

Credit: ESA

The EDRS is a $545 million (500 million euro) project designed to use lasers to gather pictures and radar images of the planet taken by other spacecraft and beam them back to Earth. By doing so, it will ensure that experts keeping tabs on earthquake and flood activity will not have to wait for probes to pass over a ground station before they can transmit their data.

Instead, the new network of satellites comprising this data superhighway will be located higher in the sky than other probes, allowing those spacecraft to simply beam their data upwards to the EDRS array, after which the system will fire it back down to get into the hands of emergency responders far more quickly than had previously been possible.

Network can relay information at a rate of 1.8 Gbps

EDRS-A is a telecommunications relay satellite that will collect data from a pair of probes that were previously put in orbit and use optical transmission equipment to monitor the planet below. They will now offload their data to the EDRS-A, which will be positioned 22,400 miles (36,000 km) above the equator, and within 20 minutes, the information should be on the ground.

Reuters and BBC News report that the relay station will send and receive data at a rate of 1.8 gigabits per second, equal to sending the entire content of the books on a three-foot-long shelf wirelessly in one second. In addition to monitoring earthquakes and floods, the EDRS system will also watch for oil spills, piracy or illegal fishing activity.

EDRS, which has been in the works for more than a decade, is a partnership between the ESA and Airbus Defense and Space, a private-sector firm that helped the agency design and develop the satellite network. A second satellite, EDRS-C, is currently scheduled to be added next year, and additional probes could follow shortly thereafter. The ESA said that Friday’s launch will not be officially declared successful until they confirm that the satellite is operational.

—–

Image credit: ESA

Microsoft co-founder’s yacht destroys huge chunk of protected coral reef

In a bizarre turn of events, a yacht owned by a billionaire who has worked to mitigate ocean acidification and to stabilize and restore coral reefs has wound up damaging an estimated 14,000 square feet of the very ecosystems his charitable organizations are trying to protect.

According to Reuters and Sky News reports, officials said that a vessel owned by Microsoft co-founder Paul Allen caused serious damage to a protected coral reef in the Cayman Islands, with the yacht’s anchor chain destroying more than four-fifths of the coral at the affected site.

Allen, who left the tech giant in 2000 and now oversees the Paul G. Allen Ocean Challenge, a competition centered around protecting the ocean, was not aboard the 300-foot yacht known as the Tatoosh at the time, officials told the media. The incident took place on January 14, as the vessel was moored near a pair of diving sites on the western coast of Grand Cayman.

The investigation continues, but Allen could face fines

In a statement, the philanthropic foundation Vulcan Inc. said that the Tatoosh was in “a position explicitly directed by the local Port Authority” and that after the ship’s crew “was alerted by a diver that her anchor chain may have impacted coral in the area,” they “promptly, and on their own accord, relocated their position to ensure the reef was protected.”

Vulcan added that the company and the crew are “actively and cooperatively working with” authorities to “determine the details of what happened.” An early survey conducted by local divers found extensive damage, but the investigation is still ongoing, according to Cayman News Service.

“In addition to assessing the damage and determining the cause of this incident, we are also paying close attention to lessons learned so that we can more effectively prevent these accidents while still hosting visiting yachts,” a spokesperson for the Cayman Department of Environment told the media outlet. Allen could potentially be fined for the damage caused by his vessel.

The ship at the heart of the matter, the Tatoosh, is said to be one of the largest yachts in the world, according to Reuters and Sky News. It is just over 300 feet long, and among its many features are a pair of helicopter landing pads, an observation lounge, a basketball court, and a recording studio, Reuters said.

—–

Image credit: Thinkstock

Facebook confirms biggest change to site in years with ‘Reactions’

Facebook acknowledges that we have emotions outside of “liking” something, and users all over the world will soon be able to express these emotions through a system called “Reactions” instead of a simple thumbs-up.

According to BBC News and the Atlanta Journal-Constitution, Facebook’s Mark Zuckerberg confirmed during a recent conference call that Reactions, which is already being tested in some parts of the world (including Spain and Ireland), will be going global “pretty soon.”

Launched in response to the demand for a “dislike” button on Facebook, Reactions initially included six additional emotions: love, haha, wow, yay, angry, and sad. However, the yay emoticon was dropped because it was not “universally understood,” a Facebook representative told Bloomberg Business on Wednesday.

Credit: Facebook

Bloomberg called the move “the most drastic change to Facebook in years,” adding that the like button was the social media website’s “most recognized symbol.” While Zuckerberg’s comments indicate that this change is coming soon, no exact rollout date has been announced.

Is Reactions a tool for users, or for advertisers?

The idea behind Reactions, Zuckerberg said, was to add “a little bit of complexity” to what had been a simple interaction process. “When you only have a like button, if you share a sad piece of content or something that makes you angry, people may not have the tool to react to it.”

Of course, as BBC News noted, there may be a period of adjustment for longtime social media users, as they attempt to determine if they love or merely like a picture of a friend’s beloved pet, or whether a particularly tragic story makes them sad or angry. Advertisers, however, are already going “wow” over the possibilities presented by the new emoji-based system, they said.

Simon Calvert, head of strategy at the marketing agency Lida, told the UK media outlet that the system would be very interesting, provided it can accurately reflect human emotions. “Emotions travel five times faster than rational thought,” he explained. “The ability to build better emotional connections with consumers is something that advertisers really prize.”

“From the consumer point of view they are now giving up their emotional data for advertisers to use and manipulate,” countered Nick Oliver of People.io, a company designed to help users take control of their social media data and better understand how valuable it is to advertisers. “People open themselves up on social media and the data is used in ways they never expect.”

—–

Image credit: Facebook

Humans drove ancient Australian megafauna to extinction, study finds

People flock to the Paleo diet in order to eat like our ancestors—but as it turns out, the original paleo diet helped drive one species to extinction: researchers from the University of Colorado at Boulder have uncovered the first reliable evidence that humans (Homo sapiens) played an enormous role in the demise of a massive flightless bird in Australia some 50,000 years ago.

The bird, known as Genyornis newtoni, was nearly seven feet tall and weighed a whopping 500 pounds. It appears to have lived across a large portion of Australia—that is, until humans arrived.

The particular evidence uncovered by the CU-Boulder team—burned eggshells—was found to date to this time period.

“Eggshells were found in 200 sites across much of the arid zone of Australia, from the west coast to the central deserts,” said Dr. Gifford Miller, professor of Geological Sciences at CU-Boulder and co-author of the paper which can be found in Nature Communications.

These eggshells were burned, but not just by a wildfire; patterns indicated that the birds’ 3.5-pound eggs were in fact cooked by humans, thereby directly decreasing the numbers of the birds’ progeny and likely playing an important role in their extinction.

“We consider this the first and only secure evidence that humans were directly preying on now-extinct Australian megafauna,” said Miller in a statement.

The proof in the pudding

The team used radiocarbon dating, which indicated that the shells were no younger than 47,000 years old. They verified this date using a fascinating technique known as optically stimulated luminescence (OSL) dating. The OSL technique can determine when certain types of crystalline materials—like the quartz grains in the eggshells—were last exposed to sunlight. In the case of the G. newtoni eggs, sunlight had been missing for some 44,000 to 54,000 years.
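
For readers curious about the arithmetic behind an OSL date (the relationship below is the standard one used in luminescence dating, not a formula spelled out in the article): the age is simply the total radiation dose a grain has absorbed since it was last exposed to sunlight, divided by the rate at which its surroundings deliver that dose.

\text{OSL age} = \frac{D_e}{\dot{D}}

Here D_e is the equivalent dose (in grays) recorded by the quartz grains, and \dot{D} is the environmental dose rate (in grays per year).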

So the burned egg shells were old, and fit right into the time period of when humans settled in Australia. But how do we know they were actually cooked by humans? The answer came from protein.

“When we found ‘blackened’ eggshells we wanted to know for sure if the black colour was due to burning or staining,” Miller said. “Birds include about 3% protein in their eggshells, presumably to make the calcite less brittle.”

Amino acids—which are the building blocks of protein—are sensitive to heat. So by analyzing the amino acid content across the eggshells, they could determine whether the egg had been consumed by a wildfire or cooked by humans.

“We found that fully blackened eggshells had no amino acids left at all (temperatures above 500°C), and most importantly, eggshell fragments blackened at only one end had no amino acids at the blackened end, but concentrations almost the same as unheated eggshell at the opposite ends,” Miller continued.

“This can only be explained if the fragment was touching a very hot source (an ember) for a short period of time. And some fragments in the same cluster were not heated at all. We conclude that the only explanation is that humans harvested the giant eggs, built a fire and cooked them (which would not blacken them) then discarded the fragments in and around their fire as they ate the contents.”

This theory is strengthened by the fact that most burned eggshell fragments were found in tight clusters without other eggshells nearby—and that the amount of protein decomposition in some clusters showed that the heat affecting the eggs varied by as much as 1,000 degrees Fahrenheit.

“We can’t come up with a scenario that a wildfire could produce those tremendous gradients in heat,” Miller added.

Australia had scary fauna even 50,000 years ago

Of course, a 500-pound bird wasn’t the only type of megafauna (huge animals) around Australia at the time. A 1,000-pound kangaroo, a 25-foot-long lizard, and a Volkswagen-sized tortoise were all there to greet the first humans. Shortly after our arrival 50,000 years ago, though, more than 85 percent of Australia’s mammals, birds, and reptiles that weighed more than 100 pounds went extinct.

Which could mean that humans didn’t just change the fate of G. newtoni.

“We only have evidence for predation on the one species of Megafauna, Genyornis,” explained Miller. “But this is the first direct evidence of such predation, and as such it adds more evidence to a likely human role in megafaunal extinction.”

—–

Feature Image: An illustration of a giant flightless bird known as Genyornis newtoni, surprised on her nest by a one-ton predatory lizard named Megalania prisca in Australia roughly 50,000 years ago. (Credit: Illustration by Peter Trusler, Monash University)

Elon Musk expects SpaceX to reach Mars by 2025

In a recent appearance at the StartmeupHK Festival in Hong Kong, SpaceX owner and tech mogul Elon Musk said he expects one of his rockets to take him up to the International Space Station by 2020, and that achieving that feat won’t be “that hard.”

SpaceX’s Dragon capsule and Falcon rocket are already transporting cargo up to the ISS, but the system has yet to be certified for human transportation.

“I don’t know, maybe four or five years from now, maybe going to the Space Station would be nice,” Musk said. “And in terms of the first flights to Mars, we’re hoping to do that around 2025. Nine years from now or thereabouts.”

Despite the risks of space travel and the unfamiliar environment, Musk joked that the trip wouldn’t be too challenging for him.

“I don’t think it’s that hard, honestly,” he said. “You float around. It’s not that hard to float around.”

Musk wouldn’t be the first private citizen to visit the ISS, as several space tourists have already paid big bucks to check the trip off their respective bucket lists.

More on Mars

The tech icon also has his sights set on Mars.

“Going to Mars is definitely going to be hard and dangerous and difficult in every way you can imagine,” he said. “But if you care about being safe and comfortable, going to Mars would be a terrible choice.”

However, “Mars is the next natural step,” Musk said. “In fact it’s the only planet we have a shot at establishing a self-sustaining city on. Once we do establish such a city there will be a strong forcing function for the improvement of space flight technology that will then enable us to establish colonies elsewhere in the Solar System, and ultimately extend beyond our Solar System.”

While a lot is currently riding on the continued development of the current SpaceX travel system, Musk said “we’ll have a next generation rocket and spacecraft beyond the Falcon-Dragon series, and I’m hoping to describe that architecture later this year at the International Astronautical Congress” – referring to a conference in Mexico this September.

—–

Feature Image: Thinkstock

Monstrous gas cloud on collision course with our galaxy will create 2 million suns

A massive cloud of hydrogen gas racing towards the Milky Way at speeds topping 700,000 miles per hour could form up to two million new stars once it finally arrives, according to new research published in a recent edition of the Astrophysical Journal Letters.

While researchers first discovered the object known as the Smith Cloud in the 1960s, little was known about its chemical composition until Dr. Nicolas Lehner of the University of Notre Dame and his colleagues determined that it contained elements extremely similar to those found in our sun.

What that means, The Independent explained, is that it originated in the outer edges of the Milky Way and was not a starless galaxy or a big body of gas that had originated in intergalactic space, as some researchers had speculated. Following its formation, it was somehow ejected into space roughly 70 million years ago, Dr. Lehner and his colleagues reported in their study.

This diagram shows the 100-million-year-long trajectory of the Smith Cloud as it arcs out of the plane of our Milky Way galaxy and then returns like a boomerang. Hubble Space Telescope measurements show that the cloud came out of a region near the edge of the galaxy’s disk of stars 70 million years ago. The cloud is now stretched into the shape of a comet by gravity and gas pressure. Following a ballistic path, the cloud will fall back into the disk and trigger new star formation 30 million years from now. (Credit: NASA/ESA/A. Feild (STScI))

Currently, the Smith Cloud is speeding back towards its galaxy of origin, and it is expected to crash into the Milky Way’s disk in approximately 30 million years, according to NASA. Once it returns, it is expected to be the catalyst for a “spectacular burst of star formation,” since it could provide enough gas to produce up to two million suns, they added.

Determining its origin by figuring out its chemical composition

Dr. Lehner and his fellow researchers used the Hubble Space Telescope to determine the amount of heavier elements the Smith Cloud contained relative to its hydrogen content. Using Hubble’s Cosmic Origins Spectrograph, they observed UV radiation from the cores of three active galaxies located billions of light years behind the cloud to estimate its chemical composition.

Specifically, they searched for absorption from the element sulfur, which can be used to figure out the amount of heavier elements that reside within a gas cloud. By doing so, they determined that the Smith Cloud was as rich in sulfur as the outer disk of the Milky Way, meaning it had been polluted by material from stars, something that would not be the case had it originated outside the galaxy.
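
For reference, the sulfur-based comparison described above is usually expressed as a logarithmic abundance relative to the sun; the notation below is the standard astronomical convention rather than a formula quoted from the study.

[\mathrm{S/H}] = \log_{10}\left(\frac{N_{\mathrm{S}}}{N_{\mathrm{H}}}\right)_{\text{cloud}} - \log_{10}\left(\frac{N_{\mathrm{S}}}{N_{\mathrm{H}}}\right)_{\odot}

Here N_S and N_H are the column densities of sulfur and hydrogen measured along the sightline to each background galaxy, and a value near zero indicates roughly solar enrichment.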

Hubble’s Cosmic Origins Spectrograph can measure how the light from distant background objects is affected as it passes through the cloud, yielding clues to the chemical composition of the cloud. Astronomers trace the cloud’s origin to the disk of our Milky Way. Combined ultraviolet and radio observations correlate to the cloud’s infall velocities, providing solid evidence that the spectral features link to the cloud’s dynamics. (Credit: NASA/ESA/A. Feild (STScI))

“The cloud is an example of how the galaxy is changing with time,” researcher Andrew Fox of the Space Telescope Science Institute in Baltimore, Maryland, said in a statement. “It’s telling us that the Milky Way is a bubbling, very active place where gas can be thrown out of one part of the disk and then return back down into another.”

“We have found several massive gas clouds in the Milky Way halo that may serve as future fuel for star formation in its disk, but, for most of them, their origins remain a mystery,” Dr. Lehner added. “The Smith Cloud is certainly one of the best examples that shows that recycled gas is an important mechanism in the evolution of galaxies.”

Now that they have determined the cloud’s origins, and have a good idea what will happen once it finally returns to the Milky Way, one mystery remains: how did the cloud come to arrive at its current location? What force or phenomenon forced it out of the Milky Way, and how did it stay intact? That, the researchers said, is a question that only additional research can answer.

—–

This composite image shows the size and location of the Smith Cloud on the sky. The cloud appears in false-color, radio wavelengths as observed by the Green Bank Telescope in West Virginia. The visible-light image of the background star field shows the cloud’s location in the direction of the constellation Aquila. (Credit: Saxton/Lockman/NRAO/AUI/NSF/Mellinger)

LHC upgrades require careful removal of 9,000 cables

Procrastination has put experts at the European Organization for Nuclear Research (CERN) in a rather precarious position: they need to upgrade the massive particle accelerator called the Large Hadron Collider, but to do so, they will first have to remove thousands of obsolete cables.

According to Motherboard and The Verge, CERN engineers had previously decided to put off removing obsolete cables while improving the collider, and the number has soared to more than 9,000. Now, they need to upgrade the machine again, but they’re running out of room, meaning that all of those old cables now need to be carefully removed.

Reports suggest that this will be no easy task, even for some of the planet’s most knowledgeable boffins. The cables need to be identified and manually disconnected before replacements can be installed, as they are blocking the way. Of course, with a machine as complex as the LHC, there is always the risk that someone could make a mistake, with potentially disastrous results.

“Telling apart functioning and out-of-use cables in one of the world’s biggest and most expensive experiments is a high-stakes game,” Motherboard explained. “Pull out the wrong cable, and at best you might have lost some data monitoring capabilities. Worst case scenario, you might yank out a crucial safety cable and the accelerator simply won’t work.”

“That’s why it’s so tricky to complete this operation – because any mistake could start major trouble at the restart of the accelerator,” Sébastien Evrard, the engineer in charge of the cord-removal project, told the website. “Of course, in an ideal world we would remove the old and obsolete cables before installing new ones, but this was not the case.”

Project will take four years to complete

The clean-up is in preparation for CERN’s LHC Injectors Upgrade Project, which is scheduled to take place in 2019. Three injectors, each of which plays a role in accelerating particles into beams before they enter the collider, have roughly 3,000 unused cables apiece that need to be removed.

A team of 60 mechanical engineers, led by Evrard, has already started identifying the cables with the help of a database, and will begin disconnecting them at the end of the year when the collider is temporarily shut down. However, the process will take four years to fully complete.

Evrard told Motherboard that the team started disconnecting cables in one of the injectors, the Proton Synchrotron Booster (PS Booster), in December and has removed an estimated 2,700 thus far. Each of the cables is said to be 50 meters long, and the website explained that the process is “painstaking.” Evrard himself joked that the work was “not sexy” and said that it had been difficult to motivate personnel to complete the necessary tasks.

Compounding things is the fact that the database being used by the engineers is “not 100 percent reliable,” Evrard said, meaning that his team needs to “go onsite and check the correct location of all these expected obsolete cables to see if they are really obsolete or still in use. From the experience we’ve got in the past few weeks, we can say that about two percent of the cables that were expected to be obsolete are in fact still in use.”

—–

Feature Image: CERN

Researchers halt the progression of ALS in mice

In a new study published in the journal Neurobiology of Disease, researchers working with mice claim to have halted the progression of amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease, a fatal neurodegenerative disorder.

Named after the famous New York Yankees ballplayer who developed the disease, ALS has baffled researchers for decades.

“We are shocked at how well this treatment can stop the progression of ALS,” study author Joseph Beckman, a professor of biochemistry at Oregon State University, said in a statement.

ALS is known to be a result of the death and degeneration of motor neurons in the spine. The breakdown has been linked to mutations in an enzyme called copper, zinc superoxide dismutase.

Showing great promise

The study team used a mouse model thought to closely mimic the human reaction to treatment with a compound called copper-ATSM. The compound is known to help deliver copper specifically to cells with impaired mitochondria, and it reaches the spinal cord, where it can treat ALS. Copper-ATSM has low toxicity, readily permeates the blood-brain barrier, is already used in human medicine at much lower doses, and is well tolerated in laboratory animals at far greater amounts. Any copper not required after use of copper-ATSM is rapidly eliminated from the body, the researchers said.

Copper, zinc superoxide dismutase. (Credit: Oregon State University)

Using the treatment, scientists stopped the advancement of ALS in one type of transgenic mouse model, which would normally die within two weeks without treatment. Many of these mice have survived for more than 650 days, 500 days longer than in any previous study.

In some trials, the regimen was begun and then withheld. In this scenario, the mice started to show ALS symptoms within two months after treatment was stopped and would die within a month. But if treatment was resumed, the mice gained weight, progression of the disease was once again halted, and the mice lived another 6 to 12 months.

“We have a solid understanding of why the treatment works in the mice, and we predict it should work in both familial and possibly sporadic human patients,” Beckman said. “But we won’t know until we try.”

The researchers noted that the potential treatment is not likely to lead to recovery from neuronal loss already brought on by ALS. However, it could hamper further disease progression when started after a diagnosis, and it could also potentially help carriers of the SOD mutations that induce ALS.

—–

Feature Image: Thinkstock

Head-on collision between Earth, Theia likely created the moon

The collision between Earth and the “planetary embryo” known as Theia that took place roughly 100 million years after our planet formed was most likely responsible for forming the moon, according to new research published Friday in the journal Science.

While the high-velocity impact between Earth and Theia has been well documented by scientists, previous studies had suggested that the two objects simply side-swiped one another. However, in their new paper, Edward Young, a professor of geochemistry and cosmochemistry at UCLA, and his colleagues have found new evidence suggesting that it was a head-on impact.

Young’s team analyzed seven rocks collected from the moon and brought back to Earth by the astronauts of the Apollo 12, 15, and 17 missions, as well as six volcanic rocks from the planet’s mantle, and conducted a chemical analysis and comparison of each set of samples. They found no distinguishable difference in oxygen isotopes between the Earth and moon rocks.

The findings contradict a 2014 German study which reported that lunar rocks have a unique ratio of oxygen isotopes, different from those found on Earth. The discovery suggests that a glancing collision between Earth and Theia was unlikely, as such an impact would have left the moon made primarily of material from Theia, with a non-Earth-like chemical composition.

Oxygen isotopes reveal striking similarities between Earth, moon rocks

During their analysis, Young and his colleagues focused on oxygen atoms, which they noted make up 50 percent of the weight and 90 percent of the volume of a rock. The overwhelming majority of Earth’s oxygen is referred to as O-16 oxygen, because every atom contains eight protons and eight neutrons, but there are trace amounts of heavier oxygen isotopes.

Those isotopes, O-17 and O-18, have one and two additional neutrons, respectively, the study authors said. Every planet in the solar system has a unique ratio of O-17 to O-16, and by using high-tech equipment to measure the isotope signatures of both the Earth and moon rocks, they found that the ratios of oxygen isotopes in the two were nearly identical.
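
Such comparisons are conventionally reported in “delta” notation, which expresses a sample’s O-17 to O-16 ratio as a parts-per-thousand deviation from a reference standard; the definition below is that standard convention, not a formula taken from the study itself.

\delta^{17}\mathrm{O} = \left(\frac{(^{17}\mathrm{O}/^{16}\mathrm{O})_{\text{sample}}}{(^{17}\mathrm{O}/^{16}\mathrm{O})_{\text{standard}}} - 1\right) \times 1000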

This indicates that the collision between Earth and Theia was no mere fender-bender, Young explained. “Theia was thoroughly mixed into both the Earth and the moon, and evenly dispersed between them,” he said, noting that the forming planet did not survive the collision intact. “This explains why we don’t see a different signature of Theia in the moon versus the Earth.”

Had Theia survived the impact, it probably would have become a planet similar in size to either Mars or the Earth, the researchers said. Furthermore, they believe that the impact with Theia may have removed any water that might have been found on the young Earth, only for a plethora of small, water-rich asteroids to restore the planet’s H2O millions of years after the collision.

—–

Feature Image: Thinkstock

More babies are being born with organs outside their bodies, and doctors don’t know why

A rare birth defect that causes infants to be born with their intestines and possibly other organs protruding through an opening in their abdominal wall is on the rise, and doctors are unsure why, according to a Centers for Disease Control and Prevention (CDC) report released Friday.

The condition is known as gastroschisis, and as the Philadelphia Daily News explained, babies born with the defect require immediate surgery. In most cases, the procedure is successful and the babies do well, but the increased prevalence of the ailment has the CDC concerned.

According to the Washington Post, the agency’s study indicated that the number of gastroschisis cases reported between 2006 and 2012 were 30 percent higher than those reported between 1995 and 2005. The rate more than doubled among African American mothers under the age of 20.

However, in their report, CDC scientists noted that the observed increases could not be explained by demographic changes in maternal age or race. Smoking, low pre-birth weight and illegal drug or alcohol use have been linked to gastroschisis, but it remains unclear if any of those factors are to blame for the increased prevalence of the condition.

In a statement, Dr. Coleen Boyle, director of the CDC’s National Center on Birth Defects and Developmental Disabilities, said that she and her colleagues were concerned by the findings and that research was “urgently needed” to determine why more infants were being affected.

Cause uncertain, but experts emphasize the importance of prenatal care

While surgery can allow doctors to place the organs back within the baby’s body and repair the abdominal wall, babies born with gastroschisis often still have difficulties eating or digesting food, and in some instances, the condition can be life threatening due to post-procedure infection.

As part of their research, the CDC collected data from 14 states and compared the prevalence of the condition among babies born to mothers of different ages and races between 1995 and 2005 with that among those born between 2006 and 2012. They found that gastroschisis was most common in teenage mothers and in young African-American women.

Overall, the condition remains rare, with about 2,000 US babies being born with gastroschisis each year, but the number of infants suffering from the defect increased among mothers of all ages and ethnicities. The increase in gastroschisis births among teen mothers came despite an overall decline in the number of live births among women under 20, the CDC added.

“The concerning part of this is the inexorable rise in gastroschisis going back to the 1970s,” Dr. Edward McCabe, senior vice president and chief medical officer at the March of Dimes, told the Daily News. “When you see something rising as fast as this is in all population groups, and in all ages, it tells you something serious is going on. We need to try and figure out what it is.”

Is there anything a mom-to-be can do to reduce the risk to her child?

“The most important thing is prenatal care,” Dr. James Greenberg, co-director of the Perinatal Institute and director of Neonatology at Cincinnati Children’s Hospital Medical Center, told CBS News. “This can be picked up on a routine second trimester ultrasound. An 18-week ultrasound can identify this. For the caretakers, knowing ahead of time is very valuable for these babies.”

For the first time ever, researchers have used a computer to read someone’s mind

By implanting electrodes in the temporal lobes of awake patients, scientists from the University of Washington and colleagues have decoded brain signals close to the speed of perception, a new study published Thursday in the journal PLOS Computational Biology has revealed.

In addition, UW computational neuroscientist Rajesh Rao, neurosurgeon Jeff Ojemann and their fellow researchers were able to analyze the study participants’ neural responses to two different categories of visual stimuli (images of faces and pictures of houses).

This, in turn, made it possible for them to predict which type of image the patients were viewing and when. Their efforts were 95 percent accurate, the study authors explained in a statement, and could help researchers better understand how the temporal lobe perceives different objects.

Rao, who is also a professor of computer science and engineering at the university and the head of the National Science Foundation’s Center for Sensorimotor Engineering, noted that he and his colleagues were also “trying to understand… how one could use a computer to extract and predict what someone is seeing in real time.”

Participants were having difficulty treating epilepsy symptoms

The study, which involved seven epilepsy patients receiving care at Harborview Medical Center in Seattle, could be considered “a proof of concept toward building a communication mechanism for patients who are paralyzed or have had a stroke and are completely locked-in,” he added.

Each of the patients had been experiencing epileptic seizures, and medication was doing little to alleviate their symptoms, explained Ojemann. So they decided to undergo a surgical procedure in which they had electrodes temporarily implanted in the temporal lobes of their brains in the hope that it would help doctors locate the focal points of those seizures.

As the authors explained, the temporal lobes, which are located behind the eyes and ears, process sensory input and are often the source of a patient’s epileptic seizures. They are also often linked to Alzheimer’s disease and dementia, and are more vulnerable to head trauma than other parts of the brain.

Each patient had electrodes from multiple locations on the temporal lobes connected to computer software which extracted two different brain signal properties: event-related potentials, which are the result of hundreds of thousands of neurons being activated after initial exposure to an image, and broadband spectral changes, which involve additional processing of already-presented data.

Predictions of an image’s content were 96 percent accurate

Once the procedure was complete, each patient was shown a random sequence of pictures on a computer monitor. Each image was either a face or a house, lasted just 400 milliseconds, and was interspersed with blank gray screens. The subjects were asked to look for a picture of an upside-down house, while the software sampled and digitized brain signals 1,000 times per second.

This illustrates brain signals representing activity spurred by visual stimuli experienced by the subjects in this study. In this example, images of human faces generated more brain activity than images of houses. (This was not the result in every case.) (Credit: Illustration by Kai Miller and Brian Donohue)

Rao said that the researchers received “different responses from different (electrode) locations; some were sensitive to faces and some were sensitive to houses.” The program also analyzed the information to determine which combination of electrode locations and signal types most closely matched what each of the patients actually saw.

“By training an algorithm on the subjects’ responses to the (known) first two-thirds of the images,” the university said, the researchers were able to “examine the brain signals representing the final third of the images… and predict with 96 percent accuracy whether and when (within 20 milliseconds) the subjects were seeing a house, a face or a gray screen.”
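
In machine-learning terms, that is a simple train-then-test split. The sketch below is purely illustrative and is not the authors’ pipeline; the feature matrix, labels, and choice of classifier are all stand-ins assumed for the example.

```python
# Purely illustrative sketch of the train-on-two-thirds, test-on-the-final-third
# decoding setup described above. X and y are random stand-ins, not real
# brain-signal features or the study's actual data or classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))       # 300 image presentations x 64 signal features (stand-in)
y = rng.integers(0, 3, size=300)     # labels: 0 = face, 1 = house, 2 = gray screen (stand-in)

split = int(len(X) * 2 / 3)          # train on the first two-thirds of the presentations
clf = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
print(f"accuracy on the held-out final third: {clf.score(X[split:], y[split:]):.2f}")
```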

“Traditionally scientists have looked at single neurons. Our study gives a more global picture, at the level of very large networks of neurons, of how a person who is awake and paying attention perceives a complex visual object,” Rao said, adding that their technique was a step forward for brain mapping technology and could determine, in real time, what areas of the brain are sensitive to different kinds of data.

—–

Feature Image: This illustrates brain signals representing activity spurred by visual stimuli experienced by the subjects in this study. In this example, images of human faces generated more brain activity than images of houses. (This was not the result in every case.) (Credit: Illustration by Kai Miller and Brian Donohue)

Babylonians used geometry to track Jupiter 1400 years before Europeans

Once again, archaeology has shown that ancient humans were far more advanced than we like to give them credit for—in this case, showing us that math principles that weren’t developed until the 14th century in Europe were actually in use 1,400 or more years earlier in Babylon.

There exist some 110 known cuneiform tablets created by the Babylonians that explain how they computed things such as planetary positions using math—but for years, researchers have been puzzled by four mostly complete tablets mentioning Jupiter and some unusual geometry.

That is, until a new tablet was recently uncovered that contained information clarifying what was being described in the other four.

Credit: Trustees of the British Museum/Mathieu Ossendrijver

According to the paper, which is published today in Science (DOI 10.1126/science.aad8085), these five tablets all date to between 350 and 50 BCE, and track the motion of Jupiter as it moves across the sky over the course of 60 and 120 days. The tablets used a moderately complex bit of geometry to achieve this—involving creating a trapezoid to determine Jupiter’s total degrees of motion in a time period.

Such an idea didn’t spring up overnight, however.

“They observed and recorded Jupiter’s distance to nearby reference stars along its path,” author Mathieu Ossendrijver, a professor at Humboldt-Universität zu Berlin who focuses especially on Babylonian mathematics and astronomy, told redOrbit by email. “These observational reports are known as astronomical diaries, which were written between 700 and 50 BCE. From all these data they eventually, after 400 BCE, were able to construct mathematical models of Jupiter’s motion.”

It’s all Babylonian to me

Some may recall that the ancient Greeks also actively used geometry (looking at you, Euclid)—and may be wondering what makes the Babylonians’ math so unique that it wasn’t used again for 1,400 years. As it turns out, the difference comes in terms of levels of abstraction.

The Greeks used geometry to describe the configurations of figures in physical space, but the Babylonians added in an entirely new dimension to the mix: time.

“The Babylonian geometry in these tablets is essentially an application of methods that were developed by the Babylonians near 1800 BCE, but they now apply them in a totally new manner, because the trapezoid figures are not defined in real space (so to speak, the space in which we live and in which the planets move) but in a more abstract mathematical space obtained by drawing velocity against time,” Ossendrijver told redOrbit.

It took the rest of the Western world over a thousand years to catch up. In the 14th century CE, scholars from Oxford theorized and philosopher Nicole Oresme of Paris confirmed that one could compute the displacement of an object by creating a trapezoid using velocity. From there, it became canon.

For those curious, an explanation of the math used is below.

Math alert

To make a long story short, both the Greeks and the Babylonians used the trapezoid figure and its area formula in various applications of math and science:

area = (a + b) / 2 × h

But where a Greek trapezoid used measurements of, say, length for a, b, and h to determine the area of a trapezoid, the Babylonians used velocity for a and b, measured in degree shift per day, and used the total number of days for h to determine the total shift of Jupiter.

Or, in other words, “You get the total distance traveled by Jupiter after 60 days by adding up the distances that it travels on each of the 60 days,” wrote Ossendrijver. “Think of these little distances per day as columns with a width of 1 day and a height equal to the distance. Then imagine arranging all these 60 columns vertically next to one another, that gives you the trapezoid figure.”

Which looks like this:

The distance travelled by Jupiter after 60 days, 10°45′, is computed as the area of the trapezoid. The trapezoid is then divided into two smaller ones in order to find the time (tc) in which Jupiter covers half this distance. (Photo and illustration: Trustees of the British Museum/Mathieu Ossendrijver)

And the Babylonians used this to figure out the total distance traversed by Jupiter: “So the total distance travelled after 60 days is the total length of all these columns, which equals the area of the trapezoid,” wrote Ossendrijver.

In terms of the formula, the area of this trapezoid equals the total displacement: a and b become Jupiter’s daily displacement (in degrees per day) at the start and end of the interval, and h becomes the number of days.

For those curious, the displacement over 60 days was 10 degrees and 45 minutes, and for 120 days it was 16 degrees and 15 minutes. The arithmetic for the 60-day period is sketched after the next paragraph.

The math here looks different from what we normally expect, since we work in a base 10 system, whereas the Babylonians worked in a base 60 (sexagesimal) system. Regardless, it turns out to be correct.
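
As a rough modern reconstruction of that arithmetic: the daily displacements used below (12 and 9.5 arcminutes per day) are illustrative values chosen because they reproduce the stated total of 10 degrees 45 minutes; they are not figures quoted in this article.

```python
# Illustrative reconstruction of the trapezoid calculation described above.
# The start and end daily displacements are assumed values (12 and 9.5
# arcminutes per day), chosen to reproduce the quoted 10 degrees 45 minutes;
# they are not taken from the article itself.
v_start = 12.0   # Jupiter's daily displacement on day 0, in arcminutes
v_end = 9.5      # Jupiter's daily displacement on day 60, in arcminutes
days = 60

total_arcmin = (v_start + v_end) / 2 * days    # trapezoid area = average of the parallel sides x width
degrees, minutes = divmod(total_arcmin, 60)    # convert arcminutes into degrees and arcminutes (base 60)
print(f"{int(degrees)} degrees {int(minutes)} minutes")   # prints: 10 degrees 45 minutes
```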

—–

Feature Image: Five spots – one colored white, one blue, and three black – are scattered across the upper half of the planet. Closer inspection by NASA’s Hubble Space Telescope reveals that these spots are actually a rare alignment of three of Jupiter’s largest moons – Io, Ganymede, and Callisto – across the planet’s face. In this image, the telltale signatures of this alignment are the shadows [the three black circles] cast by the moons. Io’s shadow is located just above center and to the left; Ganymede’s on the planet’s left edge; and Callisto’s near the right edge. Only two of the moons, however, are visible in this image. Io is the white circle in the center of the image, and Ganymede is the blue circle at upper right. Callisto is out of the image and to the right. This image was taken March 28, 2004, with Hubble’s Near Infrared Camera and Multi-Object Spectrometer. (Credit: NASA, ESA, and E. Karkoschka (University of Arizona))

 

Genome of nearly extinct Hawaiian crow sequenced in effort to revive species

Hawaii is looking to increase its murder rate… but maybe not in the way you think.

The ‘alalā, or Hawaiian crow (Corvus hawaiiensis), has been extinct in the wild since 2002, but a conservation program out of San Diego Zoo Global managed to preserve the species in their Hawaiian bird centers, and through careful breeding brought the population up from its lowest point of about 20 crows.

Following this bottleneck, scientists have grown concerned about a lack of genetic diversity in the ‘alalā. And so, as announced at the Plant and Animal Genomics XXIV Conference in San Diego, a collaboration between PacBio, San Diego Zoo Global, and the University of Hawaii sequenced the species’ genome in order to track and counteract any issues that arise from the limited gene pool.

Up until now, researchers have been able to keep the genes diverse enough for the species to survive, but now that they have achieved their original goal—to raise the population to 75 or more birds—it’s time for the crows to leave the nest.

“We have been working for many years to build up a large enough — and genetically diverse enough — population to allow us to begin putting the ‘Alalā back in the wild,” said Bryce Masuda, conservation program manager of the San Diego Zoo’s Hawaii Endangered Bird Conservation Program, in a San Diego Zoo Global statement. “We have achieved our goal, and are now preparing to release birds into the wild in 2016.”

Returning to their native habitat

The birds—which were originally driven to the brink of extinction thanks to habitat loss, newly introduced predators, and newly imported diseases—will soon be flying in their native forests on the island of Hawaii. But there are unknown dangers associated with reintroducing the crows; should their now more limited genes prove unable to adapt properly to the environment, they could face extinction once more. Hence, the genome project.

“Learning more about the genome of the species can help us understand more about how that species will interact with and fit back into its native habitat,” said Jolene Sutton, assistant professor at the University of Hawaii, Hilo.

“Through scientific collaboration with PacBio, we now have a map of ‘Alalā DNA that could prove critical to their long term recovery. We are absolutely thrilled with the quality of the sequencing, and we have already identified several gene locations that we think could have a big influence on reintroduction success.”

—–

Feature Image: The Hawaiian crow (Corvus hawaiiensis). (Credit: Zoological Society of San Diego)

New treatment cures Type 1 diabetes for six months without insulin injections

In individuals with Type 1 diabetes, the immune system assaults the pancreas, leaving patients without the ability to naturally manage blood sugar.

According to two new studies, published in Nature Medicine and Nature Biotechnology, researchers have developed a way to transplant pancreatic cells, replacing those lost to Type 1 diabetes, and were able to use the technique to temporarily cure the disease in mice.

Individuals with Type 1 diabetes currently cope with their condition by carefully keeping track of the sugar in their blood, measuring it many times per day and then injecting themselves with insulin to maintain proper blood sugar levels. However, precise control of blood sugar is challenging to achieve, and as a consequence patients face a range of long-term medical complications.

Doctors have been experimenting with ways to transplant healthy pancreatic cells since the 1980s, but the alginate gels used to encapsulate the cells had been causing scarring – rendering the treatment ineffective.

“We decided to take an approach where you cast a very wide net and see what you can catch,” Arturo Vegas, an assistant professor of chemistry at Boston University and an author on both studies, said in a statement. “We made all these derivatives of alginate by attaching different small molecules to the polymer chain, in hopes that these small molecule modifications would somehow give it the ability to prevent recognition by the immune system.”

TMTD is the key

After developing a library of almost 800 alginate derivatives, the scientists conducted a number of tests in mice and primates. One of the most promising was a derivative referred to as triazole-thiomorpholine dioxide (TMTD). The study team settled on a strain of mice with a strong immune system and inserted human islet cells encapsulated in TMTD into an area of the abdominal cavity referred to as the intraperitoneal space.

A stealth material surface, shown here, has been engineered to provide an “invisibility cloak” against the body’s immune system cells. In this electron microscopy image, you can see the material’s surface topography. (Credit: MIT)


After implantation, the cells immediately started generating insulin as dictated by blood sugar levels and were able to keep blood sugar in check for the length of the study, more than 170 days.

“The really exciting part of this was being able to show, in an immune-competent mouse, that when encapsulated these cells do survive for a long period of time, at least six months,” said study author Omid Veiseh, a senior postdoc at the Koch Institute and Boston Children’s Hospital. “The cells can sense glucose and secrete insulin in a controlled manner, alleviating the mice’s need for injected insulin.”

The scientists said they now plan to test their new materials in primates, with the goal of ultimately holding clinical trials in diabetic patients. If successful, this method could mean long-term blood sugar control for individuals with diabetes.

“Being insulin-independent is the goal,” Vegas said. “This would be a state-of-the-art way of doing that, better than any other technology could. Cells are able to detect glucose and release insulin far better than any piece of technology we’ve been able to develop.”

—–

Feature Image: Glucose-stimulated, insulin-producing cells derived from stem cells. (Credit: MIT)

Largest solar system in galaxy, planet with one-million-year orbit discovered

Previously believed to have been a rogue planet wandering through the galaxy without having a star to orbit, the gas giant called 2MASS J2126-8140 is actually 600 billion miles (or one trillion kilometers) away from its sun, Australian National University scientists have discovered.

According to Popular Science and Space.com, the discovery makes this one-planet solar system the largest found to date, as astronomer Simon Murphy and his colleagues reported that 2MASS takes approximately 900,000 years to complete a single orbit around its host star.

To put things into perspective, the planet is approximately 7,000 astronomical units (AU) from its sun. For the sake of comparison, Neptune is about 30 AU from the sun, while Pluto’s average distance is about 40 AU and the newly proposed “Planet Nine” would be at most 1,200 AU away.
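
As a rough sanity check on those figures, Kepler’s third law ties the orbital distance to the period. The host mass used below (0.4 solar masses) is an assumption made purely for illustration, not a value reported in the study.

```python
import math

# Kepler's third law in solar-system units: P[years]^2 = a[AU]^3 / M[solar masses].
# The 0.4 solar-mass system mass is an assumed, illustrative value; the 7,000 AU
# separation is the approximate figure reported above.
a_au = 7000.0
m_solar = 0.4
period_years = math.sqrt(a_au**3 / m_solar)
print(f"{period_years:,.0f} years")   # roughly 930,000 years, in line with the reported ~900,000-year orbit
```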

False colour infrared image of TYC 9486-927-1 and 2MASS J2126. The arrows show the projected movement of the star and planet on the sky over 1000 years. The scale indicates a distance of 4000 Astronomical Units (AU), where 1 AU is the average distance between the Earth and the Sun. (Credit: 2MASS/S. Murphy/ANU)

In a statement, Dr. Niall Deacon of the University of Hertfordshire, lead author of a new study detailing the team’s findings, confirmed that this was “the widest planet system found so far and both the members of it have been known for eight years, but nobody had made the link between the objects before.”

He added that the planet “is not quite as lonely as we first thought” when it was first discovered as part of an infrared sky survey, “but it’s certainly in a very long distance relationship.”

Distance between planet and star is 6,000 times that of the Earth and Sun

Two years ago, Canadian researchers identified the planet as a potential member of the Tucana Horologium Association, a 45-million-year-old group of stars and brown dwarfs, meaning that it was young enough and low enough in mass to be classified as a free-floating planet.

Meanwhile, in the same region of the sky, a brown dwarf known as TYC 9486-927-1 was found and determined not to be a member of any known group of young stars. No one had considered that there might be a link between the two objects until Dr. Deacon’s team reviewed a record of known young stars and free-floating planets to see if any of them could be linked.

What they discovered was that both TYC 9486-927-1 and 2MASS J2126 were travelling through space together and that both of them were about 104 light years away from the sun. Upon further examination, they determined that the objects were between 10 and 45 million years old, and that 2MASS J2126 is a gas giant approximately 12 to 15 times more massive than Jupiter.

Previously, the greatest known distance between a planet and its star was 2,500 AU, according to the study authors, and Huffington Post UK noted that 2MASS J2126 is 6,000 times further away from TYC 9486-927-1 than the Earth is from the sun. Because of that tremendous distance, there is absolutely no way that the planet could ever support biological life, they added.

Murphy said that he and his colleagues are not sure how this type of solar system might have originally formed, but that there was “no way it formed in the same way as our solar system did, from a large disc of dust and gas.” Instead, they suspect that the star and planet formed from “a filament of gas that pushed them together in the same direction.”

—–

Feature Image: An artist’s impression of 2MASS J2126. (Credit: University of Hertfordshire/Neil Cook)

Blue Origin to up flight frequency; SpaceX gets OK to launch military satellites

Last week, when Blue Origin beat rival SpaceX to the punch and became the first company to reuse a booster rocket, it marked what some media outlets called a next-generation “space race” between the two firms – one that both companies are doing their best to win.

On Monday, SpaceX announced that the latest version of its Falcon 9 rocket has been granted clearance from the US Air Force to transport high-grade military satellites into orbit, ensuring that the upgraded launcher can compete for national security contracts.

SpaceX’s Falcon 9 rocket, taking off at Cape Canaveral on Dec. 21, 2015. (Credit: SpaceX)

According to Spaceflight Now, the new Falcon 9 features higher-thrust engines, larger fuel tanks and a super-chilled propellant mixture. It was certified on Monday by the commander of the Air Force’s California-based Space and Missile Systems Center, Lt. Gen. Samuel Greaves, allowing the new 229-foot (70-meter) tall booster to compete for missions involving military satellites.

The upgraded Falcon 9, which first launched back in December, uses nine Merlin 1D first-stage engines that produce a combined 1.5 million pounds of thrust, roughly 200,000 pounds more than its predecessor. That upgrade, along with changes made to the propellant mixture, allows the rocket to carry payloads about 30 percent larger than before.

New Shepard looking to launch with research payloads by year’s end

Not to be outdone, Blue Origin on Monday announced that, in the wake of two successful flights of its New Shepard suborbital vehicle in eight weeks, it would begin increasing the frequency of test flights in the future by reducing the time spent servicing the rocket between launches.

Rob Meyerson, president of the Jeff Bezos-owned aerospace firm, told Space News that the New Shepard was in good condition following its January 22 launch, and that while the firm had more data to review, all indications were that “the vehicle performed perfectly” during its voyage from a test site in West Texas to a peak altitude of 101.7 kilometers and back again.

The New Shepard used in the January 22 flight had previously been used in a similar flight last November, and Meyerson told Space News that Blue Origin planned to shorten the time between test flights in the future. The next flights will use the same vehicle that has already traveled to the edge of space twice, with some tweaks to the hardware and software as needed.

Meyerson also noted that “dozens” of additional test flights are scheduled to be flown before the company begins carrying actual passengers beyond the Karman line, the boundary that officially separates Earth’s atmosphere from space. That timeframe is tentative, and may be moved up or pushed back depending on how well the flight test program goes, he added.

The company also said that it is hopeful that it will begin carrying research payloads before the end of 2016. Meyerson said that they are working alongside scientists from Purdue University, the University of Central Florida and Louisiana State University in an attempt to obtain initial “pathfinder” experiments for use on the New Shepard.

—–

Feature Image: Blue Origin’s New Shepard coming in for a landing. (Credit: Amazon)

Researchers have discovered how and why cancer tumors grow

How tumors form and why cancer spreads are major and long-standing questions experts need to answer in their quest to combat the disease. Two University of Iowa studies, which made real-time recordings of the progress of cancerous breast tissue cells, have now discovered that cancer cells actively seek out and pull in healthy cells.

Cancerous cells send out cables of sorts, grabbing their neighbors both healthy and cancerous, and only five percent of cancer cells are needed to form tumors, according to a statement.

“It’s not like things sticking to each other,” said David Soll, biology professor at the UI and corresponding author on the paper, published in the American Journal of Cancer Research. “It’s that these cells go out and actively recruit. It’s complicated stuff, and it’s not passive. No one had a clue that there were specialized cells in this process, and that it’s a small number that pulls all the rest in.”

Cancerous cells that form tumors are known as tumorigenic cells, and the new knowledge acquired by the studies can help to pinpoint what sorts of antibodies are best suited to eliminating them.

The Monoclonal Antibody Research Institute and the Developmental Studies Hybridoma Bank, created by the National Institutes of Health and directed by Soll, together contain one of the world’s largest collections of antibodies that could be used for anti-cancer testing.

Only cancer cells do it – but why?

A previous, related study showed that only cancer cells engage in this recruiting behavior, probing for other cells, pulling them in, and enlarging tumors.

“There’s nothing but tumorigenic cells in the bridge (between cells),” Soll said, “and that’s the discovery. The tumorigenic cells know what they’re doing. They make tumors.”

University of Iowa researchers have documented how cancerous tumors form by tracking in real time the movement of individual cells in 3-D. They report that just 5 percent of cancer cells are needed to form tumors, a ratio that heretofore had been unknown. (Credit: Soll Laboratory)

As evil as cancer seems to all of us, there must be a reason for the behavior that goes beyond malicious intent.

The researchers posit that deep in our primitive past, the cells were programmed to form embryos. They now seem to be recruiting other cells to make tissue that then forms a layered, self-sustaining architecture.

“You might want one big tumor capable of producing the tissue it needs to form a micro-environment,” Soll explained. “It’s as if it’s building its own defenses against the body’s efforts to defeat them.”

—–

Feature Image: Screenshot from YouTube/University of Iowa

Cameras on ISS may be used as for-hire surveillance of countries, report finds

When you think of cameras on the International Space Station being used to capture images of the Earth below, you typically think of stunning visuals of the Northern Lights or incredible pics from the blizzard that hammered the eastern United States this past weekend.

However, a new report published Monday by the New York Times indicates that the cameras on the ISS could have another, possibly more sinister use: as for-hire surveillance equipment which countries can use to keep tabs on their borders and their neighboring nations from outer space.

According to the story, emails recently released by the European Commission indicated that one Canadian firm suggested that the European Union begin using Theia and Iris, the cameras on the orbiting research facility, to help their border agency keep tabs on political boundaries.

That company, UrtheCast (pronounced “Earth cast”), currently has a deal with the space station’s primary Russian contractor, the Times said, and helps Moscow’s space agency operate Theia and Iris. In an email, the firm told the EU’s border agency, Frontex, that their cameras would provide “an unprecedented capability for an integrated persistent space surveillance.”

The EU declined the offer, but UrtheCast has other clients

While UrtheCast’s pitch was just one of several sent to Frontex over the last few years, including one involving a “floating frontier surveillance platform” and another involving algorithms which could be used to predict border crossings, it is the only one that would use instruments on a space station that was explicitly created for “peaceful purposes” back in 1998, the Times said.

In making its proposal, UrtheCast told Frontex that the cameras would also offer “extraction of situation awareness at certain regions, facilities or events” and could “provide reliable evidences on certain events without intruding” on the airspace of neighboring countries using airplanes or drones. An EU agency spokesman said that the offer, which was made in 2013, was declined.

UrtheCast executive Jeff Rath declined to comment on the matter, telling the newspaper, “We don’t talk about our customers.” However, he did say that the company would “sell to governments” and businesses, as well as nongovernmental organizations. For example, video from the ISS cameras has been used in commercials for both Heineken and Pepsi, and securities filings revealed that UrtheCast has signed a five-year, $65 million deal with an undisclosed customer.

A recent filing, obtained by the Times, also stated that “many government customers” needed satellite imagery “to supervise and manage, among other things, resources, animal migrations and national borders,” and that a plethora of customers were using the company’s services “to track environmental changes, natural disasters and human conflicts.”

NASA spokesman Daniel Huot told the publication via email that the US space agency “did not have any involvement with the UrtheCast payload,” which was “pitched to and flown by” Russia and was under the auspices of Roscosmos. Russian space officials did not respond to the paper’s requests for comment.

—–

Feature Image: Expedition 46 flight engineer Tim Peake of the European Space Agency (ESA) shared this stunning nighttime photograph with his social media followers on Jan. 25, 2016, writing, “Beautiful night pass over Italy, Alps and Mediterranean.” (Credit: ESA/NASA)

How will a recession impact the housing market?

According to Paul Hodges, an established expert in the economic impact of demographics, housing prices could be set for a significant decline in the next few years. A drop in the region of 50% has been mooted, although this would depend on whether the forecast global recession takes hold as expected in 2016.

If the chairman of IeC, who has also predicted a range of ground-breaking economic events over the course of the last 18 months, is to be believed, the impact of a worldwide recession could be devastating and trigger a chain of events that replicates what took place after the subprime mortgage collapse of 2008.

In many ways, the prospect of a global recession has loomed for over a year. The decline in oil prices could be considered a precursor, as it underlined the weak performance of commodities and the impact that geopolitical conflicts continue to have on the global economy. Since then, the Chinese economy has experienced a sustained and significant slump, one that is unlikely to be addressed any time soon. Even economies that have experienced growth in the last year, such as the UK and the US, may struggle in 2016 as interest rates begin to rise in small increments.

This brings us to the UK property market, which remains the key driver of growth in the British economy. The Bank of England (BoE) is expected to confirm two incremental interest rate increases of 0.25% in 2016, following the example set by the Federal Reserve in America. While this is indicative of improving economic sentiment and remains excellent news for savers, it will increase the typical mortgage rate in the UK and force home-owners to pay more for their homes.

While this is not catastrophic in itself, the onset of another recession would change the financial and real estate landscape beyond all recognition. As interest rates and property prices rise, existing home-owners and new buyers alike will be forced to commit larger sums when purchasing real estate. Once the recession hits and prices begin to fall, however, many of these individuals will be left burdened with negative equity and mortgage agreements that they can no longer afford.
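To put rough numbers on that chain of events, the minimal Python sketch below uses entirely hypothetical figures (the purchase price, deposit, mortgage rates and term are illustrative assumptions, not values from this article). It shows how two 0.25% rate rises feed through into a higher monthly repayment, and how a subsequent 50% fall in prices leaves the borrower owing more than the home is worth.

```python
# Hypothetical illustration of rising mortgage costs and negative equity.
# All figures below are assumptions made for the sake of the example.

def monthly_payment(principal: float, annual_rate: float, years: int = 25) -> float:
    """Fixed monthly payment on a standard repayment mortgage."""
    r = annual_rate / 12           # monthly interest rate
    n = years * 12                 # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

price = 250_000                    # hypothetical purchase price (GBP)
deposit = 25_000                   # 10% deposit
loan = price - deposit             # amount borrowed

# Two 0.25% base-rate rises assumed to pass straight through to the mortgage rate.
before = monthly_payment(loan, 0.035)   # assumed 3.5% mortgage rate now
after = monthly_payment(loan, 0.040)    # assumed 4.0% after both rises
print(f"Monthly payment: {before:,.0f} -> {after:,.0f} GBP")

# If prices then fall by 50%, the home is worth less than the outstanding loan
# (ignoring the relatively small amount of capital repaid in the meantime).
crashed_value = price * 0.5
print(f"Negative equity: {loan - crashed_value:,.0f} GBP")
```

Even this simplified calculation makes clear how quickly negative equity accumulates once prices fall below the outstanding loan, which is the scenario described above.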

While the circumstances may be different, this state of affairs would almost certainly bring back harrowing memories of the 2008 recession. The impact would be much the same, with thousands of home-owners facing the prospect of either selling their properties quickly or defaulting on their mortgages and relinquishing their most valuable asset.

This is especially true if house prices are slashed by 50% in some regions, as this would generate huge levels of negative equity that would arguably be more pronounced than during the last recession. Properties in London, the South East and the South West would be most at risk, as these areas have recorded the most disproportionate growth over the last 18 months and thrived on the back of an imbalanced market.

Aztecs sacrificed locals, not just prisoners of war, new study finds

It has been widely thought that the humans sacrificed at the Aztec empire’s Great Temple of Tenochtitlan were prisoners of war or people from conquered regions.

However, new research has found that some of the people sacrificed by the Aztecs were actually locals.

The prevailing theory held that people from conquered lands were brought to the Great Temple and sacrificed immediately. However, an analysis of bone fragments has shown that some of the human sacrifices had been living around the temple for at least six years.

To reach their results, the scientists took specimens from the remains of six individuals found among the Great Temple’s sacrificial victims, removing material from skulls and teeth. The specimens were subjected to strontium isotope analysis to determine the individuals’ places of origin.

Regular aspect of Aztec life

The scientists said they were operating on the premise that in ancient societies it was not very practical for people to travel from one area to another, and that people generally ate regional food. They found that some of the men and women designated for sacrifice were not captured warriors but rather captives who became servants for the elite, or individuals holding some substantial political rank. The people whose remains were analyzed lived between 1469 and 1521.

Young men seized in war were not the only people ritually killed, either: the victims also included women, the elderly, and children.

For many years, historians were skeptical of Spanish accounts of Aztec human-sacrifice rituals, frequently dismissing them as stories designed to portray indigenous people as more savage than they actually were, a rationalization for “civilized” colonial governance. This was a standard justification employed during the roughly five centuries of European colonialism around the world.

However, archaeological research indicates that human sacrifice was indeed a regular aspect of Aztec life, and the fervor with which it was practiced can be traced back to the political reforms of one man: Tlacaelel, who launched a campaign of religious codification in 1428. Tlacaelel also oversaw the military development and territorial expansion of the empire.

—–

Feature Image: Thinkstock